Today Google released the third and final Developer Preview of Android 12. Next month we should see the first beta, which should bring even more changes. Highlights from today's release can be found below.
Improved web linking - In Android 12 we’re making some changes to help users get to their content faster and more seamlessly. First, we’ve changed the default handling of links that aren’t verified through Android App Links or manually approved by the user. The OS now opens them directly in the default browser, rather than showing a chooser dialog. To make it easier for users to approve your app for links, we’ve added a new Intent action that takes them to “Open by default” in Settings. If you want to ensure that only your app can handle links from your domain, you can use App Links. We’ve also added new adb commands to help you configure and test your links. More here.
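To see what the approval flow looks like in practice, here's a minimal Kotlin sketch that sends users to the app's "Open by default" screen via the new Settings action added in Android 12 (the function name is ours; error handling is omitted):

```kotlin
import android.app.Activity
import android.content.Intent
import android.net.Uri
import android.provider.Settings

// Deep-link the user into this app's "Open by default" page in Settings,
// where they can approve the app for handling its links.
// Settings.ACTION_APP_OPEN_BY_DEFAULT_SETTINGS is new in Android 12 (API 31).
fun showOpenByDefaultSettings(activity: Activity) {
    val intent = Intent(
        Settings.ACTION_APP_OPEN_BY_DEFAULT_SETTINGS,
        Uri.parse("package:${activity.packageName}")
    )
    activity.startActivity(intent)
}
```

On the adb side, Android 12 documents commands such as `adb shell pm get-app-links <your-package>` for inspecting a package's link-verification state.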
Rich haptic experiences - We’re expanding the tools we offer for creating informative haptic feedback for UI events, immersive and delightful effects for gaming, and attentional haptics for productivity. We’ve added expressive effects like low tick that take advantage of the broader frequency bandwidth of the latest actuators. Game developers can now access multiple actuators in game controllers independently, delivering the same effect on each synchronously or different effects across actuators. We recommend using the constants and primitives as building blocks for rich haptic effects: constants to enhance UI events, and the haptic composer to sequence primitives for more complex effects. You can try these APIs to the fullest on Pixel 4 devices today, and we’re continuing to work with our device-maker partners to bring the latest in haptics support to users across the ecosystem.
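As an illustration, here's a small Kotlin sketch that composes primitives into one effect using the platform's VibrationEffect composition API (PRIMITIVE_LOW_TICK is one of the Android 12 additions; the particular sequence and scale are just an example):

```kotlin
import android.content.Context
import android.os.VibrationEffect
import android.os.VibratorManager

// Build a short effect from haptic primitives: a subtle low tick followed
// by a slightly softened click. PRIMITIVE_LOW_TICK is new in Android 12.
fun playComposedHaptic(context: Context) {
    val vibratorManager =
        context.getSystemService(Context.VIBRATOR_MANAGER_SERVICE) as VibratorManager
    val vibrator = vibratorManager.defaultVibrator
    val effect = VibrationEffect.startComposition()
        .addPrimitive(VibrationEffect.Composition.PRIMITIVE_LOW_TICK)
        .addPrimitive(VibrationEffect.Composition.PRIMITIVE_CLICK, 0.8f) // scaled intensity
        .compose()
    vibrator.vibrate(effect)
}
```

Devices whose vibrators don't support a given primitive will need a fallback; `Vibrator.areAllPrimitivesSupported()` can be used to check first.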
Video encoding improvements - Android 12 standardizes the set of keys for controlling the range of the video Quantization Parameter (QP), allowing developers to avoid vendor-specific code. The new keys are available in the MediaFormat API and also in the NDK Media library. In addition, video encoders now enforce a minimum quality threshold, ensuring that users don't experience extremely low quality when video scenes are complex.
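As a sketch, applying the standardized QP keys looks like this (the numeric values are arbitrary illustrations, not recommendations):

```kotlin
import android.media.MediaFormat

// Constrain the encoder's quantization parameter (QP) range with the
// standardized Android 12 keys rather than vendor-specific ones.
// Lower QP means finer quantization, i.e. higher quality.
fun applyQpRange(format: MediaFormat) {
    format.setInteger(MediaFormat.KEY_VIDEO_QP_MIN, 10) // cap on quality (finest quantization)
    format.setInteger(MediaFormat.KEY_VIDEO_QP_MAX, 40) // floor on quality (coarsest quantization)
}
```

The format would then be passed to `MediaCodec.configure()` as usual.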
Camera2 vendor extensions - Many of our device manufacturer partners have built custom camera effects—such as bokeh, HDR, night mode, and others—that they want apps to use to create differentiated experiences on their devices. We’ve already supported these custom effects through a set of vendor extensions in our CameraX library, and now in Android 12 we’re exposing the vendor extensions directly in the platform as well. This helps apps that have complex Camera2 implementations to take advantage of the extensions without having to make significant changes to legacy code. The extension APIs expose exactly the same set of effects as in CameraX, and those are already supported on many different devices, so you can use them right out of the box. More here.
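As a rough sketch, checking whether a camera supports one of these extensions through the new platform classes might look like this (the helper name is ours; CameraAccessException handling is omitted):

```kotlin
import android.content.Context
import android.hardware.camera2.CameraExtensionCharacteristics
import android.hardware.camera2.CameraManager

// Query the Android 12 Camera2 extension characteristics for a camera and
// check whether the vendor's bokeh extension is available on it.
fun supportsBokeh(context: Context, cameraId: String): Boolean {
    val manager = context.getSystemService(Context.CAMERA_SERVICE) as CameraManager
    val extensionChars = manager.getCameraExtensionCharacteristics(cameraId)
    return CameraExtensionCharacteristics.EXTENSION_BOKEH in
        extensionChars.supportedExtensions
}
```

A session using an extension is then opened with `CameraDevice.createExtensionSession()` rather than a regular capture session.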
Quad Bayer camera sensor support - Many Android devices today ship with ultra high-resolution camera sensors, typically with Quad / Nona Bayer patterns, and these offer great flexibility in terms of image quality and low-light performance. In Android 12, we’re introducing new platform APIs that let third-party apps take full advantage of these versatile sensors. The new APIs support the unique behavior of these sensors and take into account that they might support different stream configurations and combinations when operating in full-resolution (‘maximum resolution’) mode vs. ‘default’ mode.
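Here's a minimal Kotlin sketch using the new Android 12 keys: first check for the ultra high-resolution capability, then opt a request into maximum-resolution mode (the helper names are ours):

```kotlin
import android.hardware.camera2.CameraCharacteristics
import android.hardware.camera2.CameraMetadata
import android.hardware.camera2.CaptureRequest

// Does this camera advertise the Android 12 ultra high-resolution
// (e.g. Quad / Nona Bayer) sensor capability?
fun isUltraHighResolutionSensor(chars: CameraCharacteristics): Boolean {
    val caps = chars.get(CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES) ?: return false
    return CameraMetadata.REQUEST_AVAILABLE_CAPABILITIES_ULTRA_HIGH_RESOLUTION_SENSOR in caps
}

// Opt a capture request into 'maximum resolution' mode instead of the
// 'default' (typically binned) mode.
fun requestMaximumResolution(builder: CaptureRequest.Builder) {
    builder.set(
        CaptureRequest.SENSOR_PIXEL_MODE,
        CameraMetadata.SENSOR_PIXEL_MODE_MAXIMUM_RESOLUTION
    )
}
```

The stream configurations valid in maximum-resolution mode are published separately, via `CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP_MAXIMUM_RESOLUTION`.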
Faster machine learning - In Android 12, we invested in key areas so that developers can make the most of ML accelerators and always get the best possible performance through the Neural Networks API. On the performance front, we have more than halved inference call overhead through improvements such as padding, sync fences, and reusable execution objects. We’ve also made ML accelerator drivers updatable outside of platform releases, through Google Play services. This will make it easier for developers to take advantage of the latest drivers on any compatible device, and ensure that ML performance improvements and bug fixes reach users faster than ever before.
Standardizing GPU compute - We are deprecating the RenderScript APIs in favor of cross-platform GPU compute solutions such as Vulkan and OpenGL. We want you to have confidence that your high-performance workloads will run on GPU hardware, and many devices today already ship with only CPU support for RenderScript. The existing APIs will continue to work for the time being, and we’ve open-sourced a library that provides RenderScript intrinsics such as blur, built on the same highly optimized platform code. Samples and a migration guide for using Vulkan to implement image processing are also available. More here.
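For example, replacing a RenderScript blur with the open-sourced intrinsics library might look roughly like this; the `com.google.android.renderscript.Toolkit` object and its `blur` overload are taken from the replacement library's README, so treat the exact coordinates and signature as assumptions:

```kotlin
import android.graphics.Bitmap
// From the open-sourced renderscript-intrinsics-replacement-toolkit;
// the package and API shape are assumptions based on its README.
import com.google.android.renderscript.Toolkit

// Blur a bitmap using the library's blur intrinsic instead of
// ScriptIntrinsicBlur from the deprecated RenderScript APIs.
fun blurBitmap(source: Bitmap, radius: Int = 15): Bitmap =
    Toolkit.blur(source, radius)
```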
May 18th can't come soon enough. Google just gave us our last glimpse of what they have been working on, and I'm really hyped for it!