
# Posts by Android Developers Blog

## Android Studio supports Gemma 4: our most capable local model for agentic coding

_Posted by Matthew Warner, Google Product Manager_

Every developer's AI workflow and needs are unique, and it's important to be able to choose how AI helps your development. In January, we introduced the ability to choose any local or remote AI model to power AI functionality in Android Studio, and today, we're announcing the availability of Gemma 4 for AI coding assistance in Android Studio. This new local model, trained on Android development, provides the best of both worlds: the privacy and cost-efficiency of on-device processing alongside state-of-the-art reasoning and tool-calling capabilities.

### AI assistance, locally delivered

By running locally on your machine, Gemma 4 gives you AI code assistance that doesn't require an internet connection or an API key for its core operations. Key benefits include:

* **Privacy and security:** Your code stays on your machine. Gemma 4 processes all Agent Mode requests locally, making it an ideal choice for developers working with data privacy requirements or in secure corporate environments.
* **Cost efficiency:** Run complex agentic workflows without worrying about hitting quotas. Gemma 4 is optimized to run efficiently on modern development hardware, using local GPU and RAM to provide snappy, responsive assistance.
* **Offline availability:** Use the agent to write code even when you don't have an internet connection.
* **State-of-the-art reasoning:** Gemma 4 delivers best-in-class reasoning, capable of complex multi-step coding tasks in Agent Mode.

### Powerful agentic coding

Gemma 4 was trained for Android development with agentic tool-calling capabilities.
When you select Gemma 4 as your local model, you can leverage Agent Mode for a variety of development use cases, such as:

* **Designing new features:** Developers can ask the agent to build a new feature or an entire app with commands like "build a calculator app," and the agent will not only generate the UI code but also follow Android best practices, such as writing in Kotlin and using Jetpack Compose.
* **Refactoring:** You can give high-level commands such as "Extract all hardcoded strings and migrate them to strings.xml." The agent will scan your codebase, identify instances requiring changes, and apply the edits across multiple files simultaneously.
* **Bug fixing and build resolution:** If a project fails to build or has persistent lint errors, you can prompt the agent to "Build my project and fix any errors." The agent will navigate to the offending code and iteratively apply fixes until the build is successful.

### Recommended hardware requirements

The 26B MoE model is recommended for Android app developers whose machines meet the hardware requirements below. Total RAM needed includes both Android Studio and Gemma.

Model | Total RAM needed | Storage needed
---|---|---
Gemma E2B | 8 GB | 2 GB
Gemma E4B | 12 GB | 4 GB
Gemma 26B MoE | 24 GB | 17 GB

### Get started

To get started, ensure you have the latest version of **Android Studio** installed.

1. Install an LLM provider, such as LM Studio or Ollama, on your local computer.
2. In **Settings > Tools > AI > Model Providers**, add your LM Studio or Ollama instance.
3. Download the Gemma 4 model from Ollama or LM Studio. Refer to the hardware requirements for model size selection.
4. In Agent Mode, select **Gemma 4** as your active model.

For a detailed walkthrough on configuration, check out the official documentation on how to use a local model. We are excited to see how Gemma 4 enables more private, secure, and powerful development workflows.
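As a quick way to read the hardware table above, the RAM thresholds amount to a simple selection rule. The sketch below is illustrative only; the function and its return strings are ours, not an Android Studio API:

```kotlin
// Illustrative helper: picks the largest Gemma 4 variant whose total RAM
// requirement (Android Studio + model) fits the machine, per the table above.
fun recommendedGemmaModel(totalRamGb: Int): String? = when {
    totalRamGb >= 24 -> "Gemma 26B MoE" // 24 GB RAM, 17 GB storage
    totalRamGb >= 12 -> "Gemma E4B"     // 12 GB RAM, 4 GB storage
    totalRamGb >= 8  -> "Gemma E2B"     //  8 GB RAM, 2 GB storage
    else -> null                        // below minimum requirements
}

fun main() {
    // A 16 GB development machine fits the E4B tier.
    println(recommendedGemmaModel(16))
}
```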
As always, your feedback is essential as we continue to refine the AI experience in Android Studio. If you find a bug, please file an issue. You can also be part of our vibrant Android developer community on LinkedIn, YouTube, or X. Happy coding!
1 week ago
## Announcing Gemma 4 in the AICore Developer Preview

_Posted by David Chou, Product Manager and Caren Chang, Developer Relations Engineer_

At Google, we're committed to bringing the most capable AI models directly to the Android devices in your pocket. Today, we're thrilled to announce the release of our latest state-of-the-art open model: **Gemma 4**. These models are the foundation for the next generation of Gemini Nano, so code you write today for Gemma 4 will automatically work on Gemini Nano 4-enabled devices that will be available later this year. With Gemini Nano 4, you'll benefit from our additional performance optimizations so you can ship to production across the Android ecosystem with the most efficient on-device inference. You can get early access to this model today through the AICore Developer Preview.

_Select the Gemini Nano 4 Fast model in the Developer Preview UI to see its blazing fast inference speed in action before you write any code._

Because Gemma 4 natively supports over 140 languages, you can expect improved localized, multilingual experiences for your global audience. Furthermore, Gemma 4 offers industry-leading performance with multimodal understanding, allowing your apps to understand and process text, images, and audio.

To give you the best balance of performance and efficiency, Gemma 4 on Android comes in two sizes:

* **E4B:** Designed for higher reasoning power and complex tasks.
* **E2B:** Optimized for maximum speed (3x faster than the E4B model!) and lower latency.

The new model is up to 4x faster than previous versions and uses up to 60% less battery. Starting today, you can experiment with improved capabilities including:

* **Reasoning:** Chain-of-thought commands and conditional statements can now be expected to return higher quality results. For example: _"Determine if the following comment for a discussion thread passes the community guidelines. The comment does not pass the community guidelines if it contains one or more of these reason_for_flag: profanity, derogatory language, hate speech. If the comment passes the community guidelines, return {true}. Otherwise, return {false, reason_for_flag}."_
* **Math:** With better math skills, the model can now more accurately answer questions. For example: _"If I get 26 paychecks per year, how much should I contribute each paycheck to reach my savings goal of $10,000 over the course of a year?"_
* **Time understanding:** The model is now more capable when reasoning about time, making it more accurate for use cases that involve calendars, reminders, and alarms. For example: _"The event is at 6PM on August 18th, and a reminder should be sent out 10 hours before the event. Return the time and date the reminder should be sent."_
* **Image understanding:** Use cases that involve OCR (Optical Character Recognition), such as chart understanding, visual data extraction, and handwriting recognition, will now return more accurate results.

Join the Developer Preview today to download these preview models and start building next-generation features right away.

### Start testing the model

You can try out the model without code by following the Developer Preview guide. If you want to jump straight into integrating these models with your existing workflow, we've made that seamless. Head over to Android Studio to refine your prompt and build with the familiar ML Kit Prompt API. We've introduced a new ability to specify a model, allowing you to target the E2B (fast) or E4B (full) variants for testing.
```kotlin
// Define the configuration with a specific track and preference
val previewFullConfig = generationConfig {
    modelConfig = ModelConfig {
        releaseTrack = ModelReleaseTrack.PREVIEW
        preference = ModelPreference.FULL
    }
}

// Initialize the GenerativeModel with the configuration
val previewModel = GenerativeModel.getClient(previewFullConfig)

// Verify that the specific preview model is available
val previewModelStatus = previewModel.checkStatus()
if (previewModelStatus == FeatureStatus.AVAILABLE) {
    // Proceed with inference
    val response = previewModel.generateContent(
        "If I get 26 paychecks per year, how much should I contribute each " +
        "paycheck to reach my savings goal of \$10k over the course of a year? " +
        "Return only the amount."
    )
} else {
    // Handle the case where the preview model is not available
    // (e.g., print out log statements)
}
```

### What to expect during the Developer Preview

The goal of this Developer Preview is to give you a head start on refining prompt accuracy and exploring new use cases for your specific apps. We will be making several updates throughout the preview period, including support for tool calling, structured output, system prompts, and thinking mode in Prompt API, making it easier to take full advantage of the new capabilities and significant performance optimizations in Gemma 4.

The preview models are available for testing on AICore-enabled devices. These models will run on the latest generation of specialized AI accelerators from Google, MediaTek, and Qualcomm Technologies. On other devices, the models will initially run on a CPU implementation that is not representative of final production performance. If your device is not AICore-enabled, you can also test these models via the AI Edge Gallery app. We'll provide support for more devices in the future.

### How to get started

Ready to see what Gemma 4 can do for your users?

1. **Opt-in:** Sign up for the AICore Developer Preview.
2. **Download:** Once opted in, you can trigger the download of the latest Gemma 4 models directly to your supported test device.
3. **Build:** Update your ML Kit implementation to target the new models and start building in Android Studio.
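The sample math and time prompts above have exact answers you can check against when evaluating model output. A minimal plain-Kotlin sketch (no Android or ML Kit APIs involved; the function names are ours, and the year in the date is arbitrary since the prompt omits one):

```kotlin
import java.time.LocalDateTime
import java.time.Month

// Per-paycheck contribution needed to reach a yearly savings goal.
fun contributionPerPaycheck(goal: Double, paychecksPerYear: Int): Double =
    goal / paychecksPerYear

// Time a reminder should fire, a fixed number of hours before an event.
fun reminderTime(event: LocalDateTime, hoursBefore: Long): LocalDateTime =
    event.minusHours(hoursBefore)

fun main() {
    // $10,000 over 26 paychecks works out to about $384.62 per paycheck.
    println(contributionPerPaycheck(10_000.0, 26))

    // 6 PM on August 18th minus 10 hours is 8 AM the same day.
    val event = LocalDateTime.of(2026, Month.AUGUST, 18, 18, 0)
    println(reminderTime(event, 10))
}
```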
## Gemma 4: The new standard for local agentic intelligence on Android

_Posted by Matthew McCullough, VP of Product Management, Android Development_

Today, we are enhancing Android development with Gemma 4, our latest state-of-the-art open model designed with complex reasoning and autonomous tool-calling capabilities. Our vision is to enable local agentic AI on Android across the entire software lifecycle, from development to production. Android supports a range of Gemma 4 models, from the most efficient ones running directly on-device in your apps to more powerful ones running on your development machine to help you build apps.

We are bringing Gemma 4 to Android developers through two pillars:

* **Local-first agentic coding:** Experience powerful, local AI code assistance with Gemma 4 in Android Studio on your development computer.
* **On-device intelligence:** Build intelligent experiences using the ML Kit GenAI Prompt API to run Gemma 4 directly on Android device hardware.

### Coding with Gemma 4 in Android Studio

When building Android apps, Android Studio can use Gemma 4 to leverage its state-of-the-art reasoning power and native support for tool use, while keeping the model and inference contained entirely on your local machine. Gemma 4 was trained on Android development and designed with Agent Mode in mind. This means that when you select Gemma 4 as your local model, you can leverage the full suite of Agent Mode capabilities for a variety of Android development use cases, including refactoring legacy code, building an entire app or new features, and applying fixes iteratively. Learn more about the possibilities Gemma 4 brings to your app development flow and how to get started.

### Prototyping with Gemma 4 on-device

Since the introduction of Gemini Nano as the foundation model on Android, it has become available on over 140 million devices. Gemma 4 is the base model for the next generation of Gemini Nano (Gemini Nano 4), optimized for performance and quality on Android devices. This model is up to 4x faster than the previous version and uses up to 60% less battery. To make it as easy as possible to preview and prototype with Gemma 4 E2B and E4B models directly on AICore-supported devices, we're launching the AICore Developer Preview. While we continue to expand the ML Kit GenAI Prompt API surface to unlock additional advanced capabilities of the model, you can already start exploring new use cases with Gemma 4 using the Prompt API. Prepare your apps for the launch of Gemini Nano 4 on the new flagship Android devices later this year by prototyping with Gemma 4 today. Read about the upcoming features and deep dive into the AICore Developer Preview and its Gemma 4 support here.

### Local agentic intelligence with Gemma 4

Running Gemma 4 locally, you can leverage its advanced reasoning and tool-calling capabilities across your entire workflow, from developing with the AI coding assistant in Android Studio to shipping intelligent features in your app with the ML Kit GenAI Prompt API. This local-first approach, available under Gemma's open Apache license, provides an alternative for developers to innovate in a privacy-centric and cost-effective manner. In a future release, we will update Android Bench to include Gemma 4 and other open models, providing the quantified data you need to navigate performance trade-offs and select the best model for your use case. We can't wait to see what you build!
## Increase Guidance and Control over Agent Mode with Android Studio Panda 3

_Posted by Matt Dyor, Senior Product Manager_

Android Studio Panda 3 is now stable and ready for you to use in production. This release gives you even more control and customization over your AI-powered workflows, making it easier than ever to build high-quality Android apps. Whether you're bringing new capabilities to an existing app or standing up a brand new app, these updates elevate your development experience by allowing your AI agent in Android Studio to learn your specific practices and giving you granular control over its permissions. Lastly, in addition to AI skills and Agent Mode enhancements, Android Studio Panda 3 also includes updated support for building Android apps for cars. Here's a deep dive into what's new:

### Agent skills

Create a more helpful AI agent by using agent skills in Android Studio. Agent skills are specialized instructions that teach the agent new capabilities and best practices for a specific workflow, which the agent can then leverage as needed. This significantly reduces the level of detail required for your day-to-day prompts. Agent skills work with Gemini in Android Studio or with other remote third-party LLMs you integrate into the agent framework in Android Studio.

You and members of your team can create skills that tell the agent exactly how you want to handle specific tasks in your codebase. For example, you could create a custom "code review" skill tailored to your organization's coding standards, or a custom skill to provide the agent with more information on using an in-house library. Once you have created a skill, the agent will be able to use it automatically, or you can manually trigger it by typing @ followed by the skill name. Check out the documentation to learn more about how to create skills for your codebase, or better yet, ask your agent to help you build a new skill and it will guide you through the details!
_Manually Trigger Agent Skill in Android Studio_

#### Getting Started

To build a skill for your project, do the following:

* Create a .skills directory inside your project's root folder.
* Place a SKILL.md file inside this new directory.
* Add a name and description to the file to define your custom workflow, and your skill is ready.
* Optionally include scripts, assets, and references to provide even more guidance to your agent.

_Agent skills in Android Studio_

### Manage permissions for Agent Mode

You control your codebase, and you can now be more deliberate about which data and capabilities you choose to share with AI agents. The new granular agent permissions in Android Studio let you decide exactly what agents can do for you. When Agent Mode needs to read files, run shell commands, or access the web, it explicitly asks for your permission.

We know that "approval fatigue" is a real risk in AI workflows: when a tool asks for permission too often, it's easy to start clicking "Allow" without fully reviewing the action. By offering granular "Always Allow" rules for trusted operations and an optional sandbox for experimental ones, Android Studio helps you stay focused on the high-stakes decisions that actually require your manual sign-off.

_Agent Permissions_

Agent permissions are intuitive to set up and use. For example, granting high-level permissions automatically authorizes related sub-tools, while commands you have previously approved will run automatically without interrupting your flow. Rest assured, accessing sensitive files like SSH keys will always require your explicit sign-off. For even more security, you can also use an optional sandbox to enforce strict, isolated control over the agent.

_Agent Shell Sandbox_

### Empty Car App Library App template

We're making it easier to build Android apps for cars. Building apps for the car used to mean wrestling with complex configurations just to get the project to build successfully.
Now, you can accelerate your development with the new "Empty Car App Library App" template in Android Studio. This template takes care of the required boilerplate code for a driving-optimized app on both Android Auto and Android Automotive OS, saving you significant time and effort. Instead of getting bogged down in setup, you can focus on creating the best experience for your users on the road.

#### Getting Started

To use the new template:

* Select **New Project** on the Welcome to Android Studio screen (or **File > New > New Project** from within a project).
* Search for or select the **Empty Car App Library App** template.
* Name your app and click **Finish** to generate your driving-optimized app.

_Empty Car App Library App template_

### Android Studio Panda releases

Panda 3 builds on last month's AI-focused Panda 2 release. Check out the Go from prompt to working prototype with Android Studio Panda 2 post to learn more about new Android Studio features, including the AI-powered New Project Flow that takes you from prompt to prototype and the Version Upgrade Assistant that takes the toil out of updating your dependencies.

### Get started

Dive in and accelerate your development. Download Android Studio Panda 3 and start exploring these powerful new agentic features today. As always, your feedback is crucial to us. Check known issues, report bugs, and be part of our vibrant community on LinkedIn, Medium, YouTube, or X. Happy coding!
## Get your Wear OS apps ready for the 64-bit requirement

_Posted by Michael Stillwell, Developer Relations Engineer and Dimitris Kosmidis, Product Manager, Wear OS_

64-bit architectures provide performance improvements and a foundation for future innovation, delivering faster and richer experiences for your users. We've supported 64-bit CPUs since Android 5, and today we are extending the 64-bit requirement to Wear OS. This aligns Wear OS with recent updates for Google TV and other form factors, building on the 64-bit requirement first introduced for mobile in 2019. This blog provides guidance to help you prepare your apps to meet these new requirements.

### The 64-bit requirement: timeline for Wear OS developers

Starting September 15, 2026:

* All new apps and app updates that include native code will be required to provide 64-bit versions in addition to 32-bit versions when publishing to Google Play.
* Google Play will start blocking the upload of non-compliant apps to the Play Console.

We are not making changes to our policy on 32-bit support, and Google Play will continue to deliver apps to existing 32-bit devices. The vast majority of Wear OS developers have already made this shift, with 64-bit compliant apps already available. For the remaining apps, we expect the effort to be small.

### Preparing for the 64-bit requirement

Many apps are written entirely in non-native code (i.e., Kotlin or Java) and do not need any code changes. However, even if you do not write native code yourself, a dependency or SDK could be introducing it into your app, so you still need to check whether your app includes native code.

#### Assess your app

* **Inspect your APK or app bundle** for native code using the APK Analyzer in Android Studio.
* **Look for .so files** within the lib folder. For ARM devices, 32-bit libraries are located in lib/armeabi-v7a, while the 64-bit equivalent is lib/arm64-v8a.
* **Ensure parity:** The goal is to ensure that your app runs correctly in a 64-bit-only environment. While specific configurations may vary, for most apps this means that for each native 32-bit architecture you support, you should include the corresponding 64-bit architecture by providing the relevant .so files for both ABIs.
* **Upgrade SDKs:** If you only have 32-bit versions of a third-party library or SDK, reach out to the provider for a 64-bit compliant version.

### How to test 64-bit compatibility

The 64-bit version of your app should offer the same quality and feature set as the 32-bit version. The Wear OS Android Emulator can be used to verify that your app behaves and performs as expected in a 64-bit environment.

**Note:** Since Wear OS apps are required to target Wear OS 4 or higher to be submitted to Google Play, you are likely already testing on these newer, 64-bit-only images.

When testing, pay attention to native code loaders such as SoLoader or older versions of OpenSSL, which may require updates to function correctly on 64-bit-only hardware.

### Next steps

We are announcing this requirement now to give developers a six-month window to bring their apps into compliance before enforcement begins in September 2026. For more detailed guidance on the transition, please refer to our in-depth documentation on supporting 64-bit architectures. This transition marks an exciting step for the future of Wear OS and the benefits that 64-bit compatibility will bring to the ecosystem.
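For apps that do ship native code, the parity guidance above often comes down to which ABIs the build packages. A hedged Gradle (Kotlin DSL) sketch of an app module that explicitly includes both a 32-bit ABI and its 64-bit counterpart; the snippet is illustrative, not a Wear OS-specific requirement:

```kotlin
// Module-level build.gradle.kts (illustrative)
android {
    defaultConfig {
        ndk {
            // Package both the 32-bit ABI and its 64-bit equivalent,
            // so the app keeps working on 64-bit-only devices.
            abiFilters += listOf("armeabi-v7a", "arm64-v8a")
        }
    }
}
```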
## Media3 1.10 is out

_Posted by Andrew Lewis, Software Engineer_

Media3 1.10 includes new features, bug fixes and feature improvements, including Material3-based playback widgets, expanded format support in ExoPlayer and improved speed adjustment when exporting media with Transformer. Read on to find out more, and check out the full release notes for a comprehensive list of changes.

### Playback UI and Compose

We are continuing to expand the media3-ui-compose-material3 module to help you build Compose UIs for playback. We've added a new Player Composable that combines a ContentFrame with customizable playback controls, giving you an out-of-the-box player widget with a modern UI. This release also adds a ProgressSlider Composable for displaying player progress and performing seeks using dragging and tapping gestures. For playback speed management, a new PlaybackSpeedControl is available in the base media3-ui-compose module, alongside a styled PlaybackSpeedToggleButton in the Material 3 module.

We'll continue working on new additions like track selection utils, subtitle support and more customization options in the upcoming Media3 releases. We're eager to hear your feedback, so please share your thoughts on the project issue tracker.

_Player Composable in the Media3 Compose demo app_

### Playback feature enhancements

Media3 1.10 includes a variety of additions and improvements across the playback modules:

* Format support: ExoPlayer now supports extracting Dolby Vision Profile 10 and Versatile Video Coding (VVC) tracks in MP4 containers, and we've introduced MPEG-H UI manager support in the decoder_mpegh extension. The IAMF extension now seamlessly supports binaural output, either through the decoder via iamf_tools or through the Android OS Spatializer, with new logic to match the output layout of the speakers.
* Ad playback: Improvements to reliability, improved HLS interstitial support for X-PLAYOUT-LIMIT and X-SNAP, and with the latest IMA SDK dependency you can control whether ad click-through URLs open in custom tabs with setEnableCustomTabs.
* HLS: ExoPlayer now allows location fallback upon encountering load errors if redundant streams from different locations are available.
* Session: MediaSessionService now extends LifecycleService, allowing apps to access the lifecycle scoping of the service.

One of our key focus areas this year is playback efficiency and performance. Media3 1.10 includes experimental support for scheduling the core playback loop in a more efficient way. You can try this out by enabling experimentalSetDynamicSchedulingEnabled() via the ExoPlayer.Builder. We plan to make further improvements in future releases, so stay tuned!

### Media editing and Transformer

For developers building media editing experiences, we've made speed adjustments more robust. EditedMediaItem.Builder.setFrameRate() can now set a maximum output frame rate for video. This is particularly helpful for controlling output size and maintaining performance when increasing media speed with setSpeed().

### New modules for frame extraction and applying Lottie effects

In this release we've split some functionality into new modules to reduce the scope of some dependencies:

* FrameExtractor has been removed from the main media3-inspector module, so please migrate your code to use the new media3-inspector-frame module and update your imports to androidx.media3.inspector.frame.FrameExtractor.
* We have also moved the LottieOverlay effect to a separate media3-effect-lottie module. As a reminder, this gives you a straightforward way to apply vector-based Lottie animations directly to video frames.

Please get in touch via the issue tracker if you run into any bugs, or if you have questions or feature requests. We look forward to hearing from you!
## Monzo boosts performance metrics by up to 35% with a simple R8 update

_Posted by Ben Weiss, Senior Developer Relations Engineer_

Monzo is a UK digital bank with 15 million customers and growing. As the app scaled, the engineering team identified app startup time as a critical area for improvement but worried it would require significant changes to their codebase. By fully enabling R8 optimizations, Monzo achieved a massive 35% reduction in their Application Not Responding (ANR) rate. This simple change proved that impactful optimizations don't always require complex engineering efforts.

### Unlocking broad performance wins with R8 full mode

Monzo identified R8 full mode as an easy fix worth trying, and it worked, improving performance across the board:

* **Startup reliability:** Cold starts improved by 30%, warm starts by 24%, and hot starts by 14%.
* **Launch speed:** P50 launch times improved by 11% and P90 launch times by 12%.
* **Efficiency:** Overall app size was reduced by 9%.
* **Stability:** ANR reduction of 35%.

### Enabling optimizations with a single change

Many Android apps use an outdated default configuration file which disables most functionality of the R8 optimizer. The main change Monzo made to unlock these performance improvements was to replace the `proguard-android.txt` default file with `proguard-android-optimize.txt`. This change removes the `-dontoptimize` instruction and allows R8 to properly do its job.

```kotlin
buildTypes {
    release {
        isMinifyEnabled = true
        isShrinkResources = true
        proguardFiles(
            getDefaultProguardFile("proguard-android-optimize.txt"),
        )
    }
}
```

After making this change, it's worth looking at your Keep configuration files. These files tell R8 which parts of your code to leave alone (usually because they're called dynamically or by external libraries). Tidying up unnecessary Keep rules means R8 can do more.
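When auditing those Keep configuration files, it helps to record why each rule exists so it can be retired safely later. A sketch of one documented keep rule; the class and SDK named here are hypothetical:

```
# Kept because the (hypothetical) payments SDK instantiates this class
# reflectively; revisit once the SDK ships a reflection-free release.
-keep class com.example.payments.WebhookReceiver { *; }
```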
### Improving scroll performance with Baseline Profiles

To further enhance the user experience, Monzo implemented Baseline Profiles, specifically targeting scroll and rendering performance on their main feed. This strategy ensured that the most common user journeys, opening the app and scrolling the feed, were fully optimized. The impact on rendering was substantial: P90 scroll performance became 71% faster, and P95 scroll performance improved by 87%. Scrolling the app is now smoother than before.

Monzo built this into their release process to maintain these improvements over time. "We trigger the baseline profile generation every week day (before running our nightly builds) and commit the latest changes once completed," Neumayer explains.

### Keeping up with modern Android development

Monzo's experience shows what's possible when you stay up to date with Android build-tooling recommendations. While legacy apps often struggle with complex reflection usage, Monzo found the transition straightforward by documenting their Keep rules properly. "We always add a comment explaining why Keep Rules are in place, so we know when it's safe to remove the rules," Neumayer notes.

Neumayer's advice for other teams? Regularly check your practices against current standards: "Take a look at the latest recommendations from Google around app performance and check if you're following all the latest advice."

To get started and learn more about R8, visit https://d.android.com/r8
## Android developer verification: Rolling out to all developers on Play Console and Android Developer Console

_Posted by Matthew Forsythe, Director, Product Management, Android App Safety_

Android is for everyone. It's built on a commitment to an open and safe platform. Users should feel confident installing apps, no matter where they get them from. However, our recent analysis found over **90 times** more malware from sideloaded sources than on Google Play. So as an extra layer of security, we are rolling out Android developer verification to help prevent malicious actors from hiding behind anonymity to repeatedly spread harm. Over the past several months, we've worked closely with the community to improve the design so it accounts for the many ways people use Android and balances openness with safety.

### Start your verification today

Today, we're starting to **roll out Android developer verification to all developers** in both the new Android Developer Console and Play Console. This allows you to complete your verification and register your apps before user-facing changes begin later this year.

* If you only distribute apps outside of Google Play, you can create an account in the Android Developer Console today.
* If you're on Google Play, check your Play Console account for updates over the next few weeks. If you've already verified your identity there, then you're likely already set.

### Most of your users' download experience will not change at all

While verification tools are rolling out now, the experience for users downloading your apps will not change until later this year. The user-side protections will first go live in Brazil, Indonesia, Singapore, and Thailand this September, before expanding globally in 2027. We've shared this timeline early to ensure you have ample time to complete your verification. Following this deadline, for the vast majority of users, the experience of installing apps will stay exactly the same.
It’s only when a user tries to install an unregistered app that they’ll require ADB or advanced flow, helping us keep the broader community safe while preserving the flexibility for our power users. _Developers can still choose where to distribute their apps. Most users’ download experience will not change_ ## Tailoring the verification experience to your feedback To balance the need for safety with our commitment to openness, we’ve improved the verification experience based on your feedback. We’ve streamlined the developer experience to be more integrated with existing workflows and maintained choice for power users. * **For Android Studio developers:** In the next two months, you’ll see your app's registration status right in Android Studio when you generate a signed App Bundle or APK. _You’ll see your app's registration status in Android Studio when you generate a signed App Bundle or APK._ * **For Play developers:** If you've completed Play Console's developer verification requirements, your identity is already verified and we'll automatically register eligible Play apps for you. In the rare case that we are unable to register your apps for you, you will need to follow the manual app claim process. Over the next couple of weeks, more details will be provided in the Play Console and through email. Also, you’ll be able to register apps you distribute outside of Play in the Play Console too. _The Android developer verification page in your Play Console will show the registration status for each of your apps._ * **For students and hobbyists:** To keep Android accessible to everyone, we're building a free, no government ID required, limited distribution account so you can share your work with up to 20 devices. You only need an email account to get started. Sign up for early access. We’ll send invites in June. * **For power users:** We are maintaining the choice to install apps from any source. 
You can use the new advanced flow for sideloading unregistered apps or continue using ADB. This maintains choice while protecting vulnerable users.

## What’s next?

We’re rolling this out carefully and working closely with developers, users, and our partners. In April, we’ll introduce Android Developer Verifier, a new Google system service that will be used to check if an app is registered to a verified developer.

* **April 2026:** Users will start to see Android Developer Verifier in their Google Systems services settings.
* **June 2026:** Early access: limited distribution accounts for students and hobbyists.
* **August 2026:**
  * Limited distribution accounts launch globally.
  * Advanced flow for power users launches globally.
* **September 30, 2026:** Apps must be registered by verified developers in order to be installed and updated on certified Android devices in Brazil, Indonesia, Singapore, and Thailand. Unregistered apps can be sideloaded with ADB or the advanced flow.
* **2027 and beyond:** We will roll out this requirement globally.

We’re committed to an Android that is both open and safe. Check out our developer guides to get started today.
1 week ago
Redefining Location Privacy: New Tools and Improvements for Android 17

_Posted by Robert Clifford, Developer Relations Engineer, and Manjeet Rulhania, Software Engineer_

A pillar of the Android ecosystem is our shared commitment to user trust. As the mobile landscape has evolved, so has our approach to protecting sensitive information. In Android 17, we’re introducing a suite of new location privacy features designed to give users more control and provide developers elegant solutions for data minimization and product safety. Our strategy focuses on introducing new tools to balance high-quality experiences with robust privacy protections, and on improving transparency for users to help them manage their data.

#### Introducing the location button: simplified access for one-time use

For many common tasks, like finding a nearby shop or tagging a social post, your app doesn’t need permanent or background access to a user's precise location. With Android 17, we are introducing the location button, a new UI element designed to provide a well-lit path for responsible one-time precise location access. Industry partners have requested this feature as a way to bring a simpler and more private location flow to their users.

#### Users get better privacy protection

Moving the decision about location sharing to the point where a user takes action helps the user make a clearer choice about how much information they want to share and for how long. This empowers users to limit data sharing to only what apps need in that session. Once consent is provided, this session-based access eliminates repeated prompts for location-dependent features. This benefits developers by creating a smoother experience for their users and providing high confidence in user intent, as access is explicitly requested at the moment of action.
#### Full UI customization to match your app’s aesthetic

The location button provides extensive customization options to ensure integration with your app's aesthetic while maintaining system-wide recognizability. You can modify the button's visual style, including:

* Background and icon color scheme
* Outline style
* Size and shape

Additionally, you can select the appropriate text label from a predefined list of options. To ensure security and trust, the location icon itself remains mandatory and non-customizable, while the font size is system-managed to respect user accessibility settings.

#### Simplified integration with Jetpack and automatic backwards compatibility

The location button will be provided as a Jetpack library, ensuring easy integration into your existing app layouts, similar to any other Jetpack view implementation, and simplifying how you request permission to access precise location. Additionally, when you implement the location button with the Jetpack library, it automatically handles backwards compatibility by defaulting to the existing location prompt when a user taps it on a device running Android 16 or below. The Android location button is available for testing as of Android 17 Beta 3.

#### Location access transparency

Users often struggle to understand the tools they can use to monitor and control access to their location data. In Android 17, we are aligning location permission transparency with the high standards already set for the microphone and camera.

* **Updated location indicator:** A persistent indicator will now appear to inform a user whenever a non-system app accesses their location.
* **Attribution and control:** Users can tap the indicator to see exactly which apps have recently accessed their location and manage those permissions immediately through a "Recent app use" dialog.
#### Strengthening user privacy with density-based coarse location

Android 17 also improves the algorithm for approximate (coarse) locations to be aware of population density. Previously, coarse locations used a static 2 km-wide grid, which in low-population areas may not be sufficiently private, since a 2 km square could often contain only a handful of users. The new approach replaces this fixed grid with a dynamically sized area based on local population density. By increasing the grid size in areas with lower population density, Android ensures a more consistent privacy guarantee across different environments, from dense urban centers to remote regions.

#### Improved runtime permission dialog

The runtime permission dialog for location is one of the more complex flows for users to navigate, with users being asked to decide on the granularity and length of permission access they are willing to grant to each app. To help users make the most informed privacy decisions with less friction, we’ve redesigned the dialog to make "**Precise**" and "**Approximate**" choices more visually distinct, encouraging users to select the level of access which best suits their needs.

#### Start building for Android 17

The new location privacy tools are available now in Beta 3. We’re looking for your feedback to help refine these features before the general release.

* Feedback: Report issues on the [Official Tracker] or chat with us in the [Android Dev Slack].

Build a smoother, more private experience today.
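To make the density-based coarse grid idea above concrete, here is a minimal sketch. Everything in it is an assumption for illustration only: the function names (`gridSizeKm`, `snapToGrid`), the target-population heuristic, and the 2–50 km bounds are hypothetical, since the actual Android 17 algorithm is not public.

```kotlin
import kotlin.math.floor
import kotlin.math.sqrt

// Hypothetical illustration: pick a grid cell size that grows as population
// density falls, so each cell covers roughly the same number of people.
fun gridSizeKm(popPerKm2: Double, targetPopPerCell: Double = 2000.0): Double {
    // Side length of a square covering ~targetPopPerCell people
    val side = sqrt(targetPopPerCell / popPerKm2)
    // Never smaller than the legacy 2 km grid; cap for practicality
    return side.coerceIn(2.0, 50.0)
}

// Snap a coordinate (treated here as km offsets on a flat plane, for simplicity)
// to the center of its grid cell, hiding the precise position.
fun snapToGrid(xKm: Double, yKm: Double, cellKm: Double): Pair<Double, Double> {
    val cx = (floor(xKm / cellKm) + 0.5) * cellKm
    val cy = (floor(yKm / cellKm) + 0.5) * cellKm
    return cx to cy
}
```

Under this sketch, a dense city (~10,000 people/km²) keeps the legacy 2 km cell, while a rural area (~2 people/km²) gets a cell roughly 32 km wide, so both cells contain a comparable number of people.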
The Third Beta of Android 17

_Posted by Matthew McCullough, VP of Product Management, Android Developer_

Android 17 has officially reached platform stability today with Beta 3. That means that the API surface is locked; you can perform final compatibility testing and push your Android 17-targeted apps to the Play Store. In addition, Beta 3 brings a host of new capabilities to help you build better, more secure, and highly integrated applications.

### Get your apps, libraries, tools, and game engines ready!

If you develop an SDK, library, tool, or game engine, it's even more important to prepare any necessary updates now to prevent your downstream app and game developers from being blocked by compatibility issues and to allow them to target the latest SDK features. Please let your downstream developers know if updates are needed to fully support Android 17.

To test, install your production app, or a test app that uses your library or engine, onto a device or emulator running Android 17 Beta 3 using Google Play or other means. Work through all your app's flows and look for functional or UI issues. Review the behavior changes to focus your testing. Each release of Android contains platform changes that improve privacy, security, and overall user experience, and these changes can affect your apps. Here are some changes to focus on:

* **Resizability on large screens:** Once you target Android 17, you can no longer opt out of maintaining orientation, resizability, and aspect ratio constraints on large screens.
* **Dynamic code loading:** If your app targets Android 17 or higher, the Safer Dynamic Code Loading (DCL) protection introduced in Android 14 for DEX and JAR files now extends to native libraries. All native files loaded using System.load() must be marked as read-only. Otherwise, the system throws UnsatisfiedLinkError.
* **CT enabled by default:** Certificate transparency (CT) is enabled by default. (On Android 16, CT was available, but apps had to opt in.)
* **Local network protections:** Apps targeting Android 17 or higher have local network access blocked by default. Switch to using privacy-preserving pickers if possible, and use the new ACCESS_LOCAL_NETWORK permission for broad, persistent access.

### Media and camera enhancements

#### Photo Picker customization options

Android now allows you to tailor the visual presentation of the photo picker to better complement your app’s user interface. By leveraging the new PhotoPickerUiCustomizationParams API, you can modify the grid view aspect ratio from the standard 1:1 square to a 9:16 portrait display. This flexibility extends to both the ACTION_PICK_IMAGES intent and the embedded photo picker, enabling you to maintain a cohesive aesthetic when users interact with media. This is all part of our effort to help make the privacy-preserving Android photo picker fit seamlessly with your app experience. Learn more about how you can embed the photo picker directly into your app for the most native experience.

```kotlin
val params = PhotoPickerUiCustomizationParams.Builder()
    .setAspectRatio(PhotoPickerUiCustomizationParams.ASPECT_RATIO_PORTRAIT_9_16)
    .build()

val intent = Intent(MediaStore.ACTION_PICK_IMAGES).apply {
    putExtra(MediaStore.EXTRA_PICK_IMAGES_UI_CUSTOMIZATION_PARAMS, params)
}
startActivityForResult(intent, REQUEST_CODE)
```

**Support for the RAW14 image format:** Android 17 introduces support for the RAW14 image format, the de facto industry standard for high-end digital photography, via the new ImageFormat.RAW14 constant. RAW14 is a single-channel, 14-bit-per-pixel format that uses a densely packed layout where every four consecutive pixels are packed into seven bytes.

**Vendor-defined camera extensions:** Android 17 adds vendor-defined extensions to enable hardware partners to define and implement custom camera extension modes, giving you access to the best and latest camera features, such as 'Super Resolution' or cutting-edge AI-driven enhancements.
You can query for these modes using the isExtensionSupported(int) API.

**Camera device type APIs:** New Android 17 APIs allow you to query the underlying device type to identify if a camera is built-in hardware, an external USB webcam, or a virtual camera.

#### Bluetooth LE Audio hearing aid support

Android now includes a specific device category for Bluetooth Low Energy (BLE) Audio hearing aids. With the addition of the AudioDeviceInfo.TYPE_BLE_HEARING_AID constant, your app can now distinguish hearing aids from regular headsets.

```kotlin
val audioManager = getSystemService(Context.AUDIO_SERVICE) as AudioManager
val devices = audioManager.getDevices(AudioManager.GET_DEVICES_OUTPUTS)
val isHearingAidConnected = devices.any { it.type == AudioDeviceInfo.TYPE_BLE_HEARING_AID }
```

#### Granular audio routing for hearing aids

Android 17 allows users to independently manage where specific system sounds are played. They can choose to route notifications, ringtones, and alarms to connected hearing aids or the device's built-in speaker.

#### Extended HE-AAC software encoder

Android 17 introduces a system-provided Extended HE-AAC software encoder. This encoder supports both low and high bitrates using unified speech and audio coding. You can access this encoder via the MediaCodec API using the name `c2.android.xheaac.encoder` or by querying for the `audio/mp4a-latm` MIME type.
```kotlin
val encoder = MediaCodec.createByCodecName("c2.android.xheaac.encoder")
val format = MediaFormat.createAudioFormat(MediaFormat.MIMETYPE_AUDIO_AAC, 48000, 1)
format.setInteger(MediaFormat.KEY_BIT_RATE, 24000)
format.setInteger(MediaFormat.KEY_AAC_PROFILE, MediaCodecInfo.CodecProfileLevel.AACObjectXHE)
encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE)
```

### Performance and battery enhancements

#### Reduce wakelocks with listener support for allow-while-idle alarms

Android 17 introduces a new variant of AlarmManager.setExactAndAllowWhileIdle that accepts an OnAlarmListener instead of a PendingIntent. This new callback-based mechanism is ideal for apps that currently rely on continuous wakelocks to perform periodic tasks, such as messaging apps maintaining socket connections.

```kotlin
val alarmManager = getSystemService(AlarmManager::class.java)
val listener = AlarmManager.OnAlarmListener {
    // Do work here
}
alarmManager.setExactAndAllowWhileIdle(
    AlarmManager.ELAPSED_REALTIME_WAKEUP,
    SystemClock.elapsedRealtime() + 60000,
    listener,
    null
)
```

### Privacy updates

#### System-provided location button

Android is introducing a system-rendered location button that you will be able to embed directly into your app's layout using an Android Jetpack library. When a user taps this system button, your app is granted precise location access for the current session only. To implement this, you need to declare the USE_LOCATION_BUTTON permission.

#### Discrete password visibility settings for touch and physical keyboards

This feature splits the existing "Show passwords" system setting into two distinct user preferences: one for touch-based inputs and another for physical (hardware) keyboard inputs. Characters entered via physical keyboards are now hidden immediately by default.
```kotlin
val isPhysical = (event.source and InputDevice.SOURCE_KEYBOARD) == InputDevice.SOURCE_KEYBOARD
val shouldShow = android.text.ShowSecretsSetting.shouldShowPassword(context, isPhysical)
```

### Security

#### Enforced read-only dynamic code loading

To improve security against code injection attacks, Android now enforces that dynamically loaded native libraries must be read-only. If your app targets Android 17 or higher, all native files loaded using System.load() must be marked as read-only beforehand.

```kotlin
val libraryFile = File(context.filesDir, "my_native_lib.so")
// Mark the file as read-only before loading to comply with Android 17+ security requirements
libraryFile.setReadOnly()
System.load(libraryFile.absolutePath)
```

#### Post-Quantum Cryptography (PQC) hybrid APK signing

To prepare for future advancements in quantum computing, Android is introducing support for Post-Quantum Cryptography (PQC) through the new v3.2 APK Signature Scheme. This scheme utilizes a hybrid approach, combining a classical signature with an ML-DSA signature.

### User experience and system UI

#### Better support for widgets on external displays

This feature improves the visual consistency of app widgets when they are shown on connected or external displays with different pixel densities using DP or SP units.

```kotlin
val options = appWidgetManager.getAppWidgetOptions(appWidgetId)
val displayId = options.getInt(AppWidgetManager.OPTION_APPWIDGET_DISPLAY_ID)
val remoteViews = RemoteViews(context.packageName, R.layout.widget_layout)
remoteViews.setViewPadding(
    R.id.container,
    16f, 8f, 16f, 8f,
    TypedValue.COMPLEX_UNIT_DIP
)
```

#### Hidden app labels on the home screen

Android now provides a user setting to hide app names (labels) on the home screen workspace. Ensure your app icon is distinct and recognizable.

#### Desktop interactive picture-in-picture

Unlike traditional Picture-in-Picture, these pinned windows remain interactive while staying always-on-top of other application windows in desktop mode.
```kotlin
val appTask: ActivityManager.AppTask =
    activity.getSystemService(ActivityManager::class.java).appTasks[0]
appTask.requestWindowingLayer(
    ActivityManager.AppTask.WINDOWING_LAYER_PINNED,
    context.mainExecutor,
    object : OutcomeReceiver<Int, Exception> {
        override fun onResult(result: Int) {
            if (result == ActivityManager.AppTask.WINDOWING_LAYER_REQUEST_GRANTED) {
                // Task successfully moved to pinned layer
            }
        }
        override fun onError(error: Exception) {}
    }
)
```

#### Redesigned screen recording toolbar

### Core functionality

#### VPN app exclusion settings

By using the new ACTION_VPN_APP_EXCLUSION_SETTINGS Intent, your app can launch a system-managed Settings screen where users can select applications to bypass the VPN tunnel.

```kotlin
val intent = Intent(Settings.ACTION_VPN_APP_EXCLUSION_SETTINGS)
if (intent.resolveActivity(packageManager) != null) {
    startActivity(intent)
}
```

#### OpenJDK 25 and 21 API updates

This update brings extensive features and refinements from OpenJDK 21 and OpenJDK 25, including the latest Unicode support and enhanced SSL support for named groups in TLS.

### Get started with Android 17

You can enroll any supported Pixel device or use the 64-bit system images with the Android Emulator.

* Compile against the new SDK and report issues on the feedback page.
* Test your current app for compatibility and learn whether your app is affected by changes in Android 17.

For complete information, visit the Android 17 developer site.
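Returning to the RAW14 format described in the camera section: since every four 14-bit pixels occupy exactly seven bytes (4 × 14 = 56 bits), unpacking a group is simple bit slicing. The sketch below is illustrative only and is not the platform's implementation; in particular, the little-endian bit-packing order is an assumption, as the actual layout is defined by the camera stack.

```kotlin
// Illustrative sketch only: unpack one RAW14 group of 7 bytes into 4 pixel values.
// Assumes little-endian bit packing; the real sensor layout may differ.
fun unpackRaw14Group(bytes: ByteArray): IntArray {
    require(bytes.size == 7) { "RAW14 packs 4 pixels into exactly 7 bytes" }
    // Assemble the 56 bits into a single Long, least significant byte first
    var bits = 0L
    for (i in 6 downTo 0) bits = (bits shl 8) or (bytes[i].toLong() and 0xFF)
    // Slice out four consecutive 14-bit values (mask 0x3FFF = 14 set bits)
    return IntArray(4) { i -> ((bits shr (14 * i)) and 0x3FFF).toInt() }
}
```

The same arithmetic explains why a RAW14 row's stride is 7/4 of its pixel count in bytes.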
Meet the class of 2026 for the Google Play Apps Accelerator

_Posted by Robbie McLachlan, Developer Marketing_

The wait is over! We are incredibly excited to share the Google Play Apps Accelerator class of 2026. We’ve handpicked a group of high-potential studios from across the globe to embark on a 12-week journey designed to supercharge their success. Here’s what’s in store for the program’s first ever class:

* Curated learning: virtual masterclasses and workshops led by industry trailblazers.
* Guidance & mentorship: 1-to-1 sessions covering everything from technical scaling to leadership.
* Direct access: exclusive sessions with experts from Google and the world's top studios.

Without further ado, join us in congratulating them!

### Google Play Apps Accelerator | Class of 2026

**_Americas:_** Anytune, AstroVeda, BetterYou, Changed, Focus Forge, Human Program, Know Your Lemons, kweliTV, Language Innovation, Matraquinha, MR ROCCO, MUU nutrition, NKENNE, Skarvo, Starcrossed, Wishfinity

**_Asia Pacific:_** Human Health, Kitakuji, Lazy Surfers, Mellers Tech, Reehee Company

**_Europe, Middle East & Africa:_** cabuu, Class54 Education, Digital Garden, EverPixel, Geolives, HelloMind, ifal, Idea Accelerator, Maposcope, Ochy, Picastro, Pixelbite, Record Scanner, Talkao, unorderly, Xeropan International

Congratulations again to all the founders selected; we can’t wait to see your apps grow on our platform. The Google Play Apps Accelerator is part of our mission to help businesses of all sizes grow on Google Play and reach their full potential. Discover more about Google Play’s programs, resources and tools.
Contact Picker: Privacy-First Contact Sharing

_Posted by Roxanna Aliabadi Walker, Senior Product Manager_

Privacy and user control remain at the heart of the Android experience. Just as the photo picker made media sharing secure and easy to implement, we are now bringing that same level of privacy, simplicity, and great user experience to contact selection.

### A New Standard for Contact Privacy

Historically, applications requiring access to a user's contacts relied on the broad READ_CONTACTS permission. While functional, this approach often granted apps more data than necessary. The new Android Contact Picker, introduced in Android 17, changes this dynamic by providing a standardized, secure, and searchable interface for contact selection. This feature allows users to grant apps access only to the specific contacts they choose, aligning with Android's commitment to data transparency and minimized permission footprints.

### How It Works

Developers can integrate the Contact Picker using the Intent.ACTION_PICK_CONTACTS intent. This updated API offers several powerful capabilities:

* **Granular Data Requests:** Apps can specify exactly which fields they need, such as phone numbers or email addresses, rather than receiving the entire contact record.
* **Multi-Selection Support:** The picker supports both single and multiple contact selections, giving developers more flexibility for features like group invitations.
* **Selection Limits:** Developers can set custom limits on the number of contacts a user can select at one time.
* **Temporary Access:** Upon selection, the system returns a Session URI that provides temporary read access to the requested data, ensuring that access does not persist longer than necessary.
* **Access to Other Profiles:** When using this new intent, the interface allows users to select contacts from other user profiles, such as a work profile, cloned profile, or a private space.
* **Optimized Performance:** The Contact Picker returns a single Uri that allows for collective result querying, eliminating the need to query individual contact Uris separately as required by ACTION_PICK. This efficiency further reduces system overhead by utilizing a single Binder transaction.

### Backward Compatibility and Implementation

For devices running Android 17 or higher, the system automatically upgrades legacy ACTION_PICK intents that specify contact data types to the new, more secure interface. However, to take full advantage of advanced features like multi-selection, developers are encouraged to update their implementation code and utilize the ContentResolver to query the returned Session URI.

### Integrate the Contact Picker

To integrate the Contact Picker, developers use the ACTION_PICK_CONTACTS intent. Below is a code example demonstrating how to launch the picker and request specific data fields, such as email and phone numbers.

```kotlin
// State to hold the list of selected contacts
var contacts by remember { mutableStateOf<List<Contact>>(emptyList()) }
// A coroutine scope and context, declared here so the snippet is self-contained
val coroutine = rememberCoroutineScope()
val context = LocalContext.current

// Launcher for the Contact Picker intent
val pickContact = rememberLauncherForActivityResult(StartActivityForResult()) {
    if (it.resultCode == Activity.RESULT_OK) {
        val resultUri = it.data?.data ?: return@rememberLauncherForActivityResult
        // Process the result URI in a background thread
        coroutine.launch {
            contacts = processContactPickerResultUri(resultUri, context)
        }
    }
}

// Define the specific contact data fields you need
val requestedFields = arrayListOf(
    Email.CONTENT_ITEM_TYPE,
    Phone.CONTENT_ITEM_TYPE,
)

// Set up the intent for the Contact Picker
val pickContactIntent = Intent(ACTION_PICK_CONTACTS).apply {
    putExtra(EXTRA_PICK_CONTACTS_SELECTION_LIMIT, 5)
    putStringArrayListExtra(
        EXTRA_PICK_CONTACTS_REQUESTED_DATA_FIELDS,
        requestedFields
    )
    putExtra(EXTRA_PICK_CONTACTS_MATCH_ALL_DATA_FIELDS, false)
}

// Launch the picker
pickContact.launch(pickContactIntent)
```

After the user makes a selection, the app processes the result by querying the returned Session URI to extract the requested contact information.

```kotlin
// Data class representing a parsed Contact with selected details
data class Contact(val id: String, val name: String, val email: String?, val phone: String?)

// Helper function to query the content resolver with the URI returned by the Contact Picker.
// Parses the cursor to extract contact details such as name, email, and phone number
private suspend fun processContactPickerResultUri(
    sessionUri: Uri,
    context: Context
): List<Contact> = withContext(Dispatchers.IO) {
    // Define the columns we want to retrieve from the ContactPicker ContentProvider
    val projection = arrayOf(
        ContactsContract.Contacts._ID,
        ContactsContract.Contacts.DISPLAY_NAME_PRIMARY,
        ContactsContract.Data.MIMETYPE, // Type of data (e.g., email or phone)
        ContactsContract.Data.DATA1,    // The actual data (phone number / email string)
    )
    val results = mutableListOf<Contact>()
    // Note: The Contact Picker Session Uri doesn't support custom selection & selectionArgs.
    context.contentResolver.query(sessionUri, projection, null, null, null)?.use { cursor ->
        // Get the column indices for our requested projection
        val contactIdIdx = cursor.getColumnIndex(ContactsContract.Contacts._ID)
        val mimeTypeIdx = cursor.getColumnIndex(ContactsContract.Data.MIMETYPE)
        val nameIdx = cursor.getColumnIndex(ContactsContract.Contacts.DISPLAY_NAME_PRIMARY)
        val data1Idx = cursor.getColumnIndex(ContactsContract.Data.DATA1)
        while (cursor.moveToNext()) {
            val contactId = cursor.getString(contactIdIdx)
            val mimeType = cursor.getString(mimeTypeIdx)
            val name = cursor.getString(nameIdx) ?: ""
            val data1 = cursor.getString(data1Idx) ?: ""
            // Determine if the current row represents an email or a phone number
            val email = if (mimeType == Email.CONTENT_ITEM_TYPE) data1 else null
            val phone = if (mimeType == Phone.CONTENT_ITEM_TYPE) data1 else null
            // Add the parsed contact to our results list
            results.add(Contact(contactId, name, email, phone))
        }
    }
    return@withContext results
}
```

Check out the full documentation here.

### Best Practices for Developers

To provide the best user experience and maintain high security standards, we recommend the following:

* **Data Minimization:** Only request the specific data fields (e.g., email) your app needs.
* **Immediate Persistence:** Persist selected data immediately, as the Session URI access is temporary.
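Because Session URI access is temporary, one way to follow the "immediate persistence" guidance is to copy just the fields you need into app-owned storage as soon as the result is processed. The sketch below is illustrative only: the `ContactStore` class is a hypothetical in-memory stand-in for a real persistence layer such as Room or DataStore, and the `Contact` shape mirrors the example above.

```kotlin
// Hypothetical contact shape mirroring the picker example in this post
data class Contact(val id: String, val name: String, val email: String?, val phone: String?)

// Hypothetical app-owned store; a real app would persist to Room, DataStore, or a file.
class ContactStore {
    private val saved = mutableMapOf<String, Contact>()

    // Persist immediately after processing the picker result, keeping only
    // the fields the app actually requested (data minimization).
    fun saveAll(contacts: List<Contact>) {
        for (c in contacts) saved[c.id] = c
    }

    fun find(id: String): Contact? = saved[id]
    val size: Int get() = saved.size
}
```

Calling `saveAll(...)` right where the picker result is parsed means the app never needs to re-read the Session URI after it expires.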
Beyond Infotainment: Extending Android Automotive OS for Software-defined Vehicles

_Posted by Eser Erdem, Senior Engineering Manager, Android Automotive_

At Google we’re deeply committed to the automotive industry, not just as a technology provider, but as a partner in the industry's transformation. We believe that car makers and users should have choice and flexibility, and that open platforms are the best enablers. For over a decade, we have provided Android Automotive OS (AAOS) as an open platform for infotainment, enabling rich innovation and differentiation in the in-vehicle digital experience.

However, as vehicles modernize, car makers face new hurdles: fragmented software across compute components, poor portability between architectures, and a lack of granular update capabilities. To address these problems, we are expanding AAOS beyond infotainment with Android Automotive OS for Software-Defined Vehicles (AAOS SDV): an open platform featuring a modular structure, a topology-agnostic communication layer, and support for granular updates.

The transition toward SDVs is an incredible industry transformation, and we are eager to contribute to the broader ecosystem making it happen. Later this year, AAOS SDV will be available in the Android Open Source Project (AOSP) for uses beyond infotainment. By bringing our SDV platform into the open-source domain, we empower the industry to develop or enhance features that lower costs, accelerate time to market, and provide significant advantages across the automotive landscape.

#### A Foundation for the Software-Defined Vehicle

AAOS SDV is engineered to address the core challenges of modern vehicle development.
This new AAOS expansion provides a compact, performant, and scalable software foundation based on a headless Android native stack, extending much deeper into the vehicle architecture to power software components throughout the vehicle, such as the seat actuator, instrument cluster, climate control, lighting, cameras, mirrors, telemetry, and more.

AAOS SDV’s core is a lightweight Android-based operating system incorporating low-level, automotive-specific frameworks for communications, diagnostics, software updates, and more. This enables AAOS SDV to power many different vehicle controllers, tackling the Core Compute, Body Controls, and Cluster domains.

In addition, the AAOS SDV platform includes a new framework, Display Safety, for implementing instrument cluster applications including audible chimes, regulatory camera, and sophisticated graphics that blend seamlessly with AAOS IVI content. Display Safety includes a safety design toolchain and a reference safety monitor, allowing OEMs to meet functional safety requirements by leveraging the diverse platform safety mechanisms of automotive SoCs.

### Flexible Deployment for AAOS SDV

_Engineered for flexibility, the AAOS SDV framework can utilize hypervisor-backed virtualization with virtio support to separate software domains, or it can be deployed on bare metal for optimal low-latency performance._

#### Transforming the Developer Experience

AAOS SDV is designed to power modern vehicles, but it was also designed to change how modern vehicle software is developed, tested, and delivered, with the goal of reducing development time and cost while increasing innovation and agility. With its optimized development workflows, our open-source SDV platform provides a wide range of benefits across the automotive industry:

* **Accelerated Time-to-Market:** AAOS SDV can accelerate development with production-ready software for various components that can be further modified.
* **Standard Signal Catalog:** A new standard signal catalog brings OEMs and automotive suppliers onto the same page, eliminating redundant engineering efforts and significantly reducing platform development costs.
* **Optimized for virtual cloud development:** AAOS SDV was designed from the ground up to support virtual cloud development, enabling partners to design, test, and validate components for the car well ahead of hardware availability. AAOS SDV already runs on the Android Virtual Device (Cuttlefish) and works well with existing Google Cloud integrations such as Google Cloud Horizon, enabling a digital twin solution at scale.
* **A Service-Oriented Architecture:** Vehicle functions are developed as topology-agnostic services which are reusable across different architectures. The platform treats the vehicle as a dynamic, connected system. This allows for granular, service-level updates with built-in dependency handling, enabling you to deploy new features over-the-air and create continuous improvement loops.
* **Future-Ready for new services:** The platform is designed to simplify the development of telemetry and AI training feedback loops, accelerating the deployment of advanced features for both enterprise fleets and consumer vehicles.

#### Production Ready: Partnering with Renault

We are proud to highlight our deep partnership with Renault to underscore the production readiness of the AAOS SDV platform. Renault is currently leveraging the Android Automotive OS SDV platform for its upcoming Renault Trafic e-Tech, with “[...] production set to begin in late 2026”. The Renault Trafic e-Tech validates the platform's ability to accelerate development and enable a new generation of software-defined commercial vehicles.

#### Scaling Ready: Partnering with Qualcomm

Qualcomm is scaling the Android Automotive OS SDV platform through our strategic partnership.
At CES 2026, Qualcomm introduced Snapdragon vSoC on Google Cloud and announced a scaling collaboration to deliver a turnkey, pre-integrated AAOS SDV stack on Snapdragon Digital Chassis platforms.

#### Building an Open AAOS Ecosystem

The power of AAOS comes from its vibrant ecosystem. To prepare for the open-source release later this year, we are proactively working with leading carmakers, suppliers, silicon platforms, and software vendors to ensure that the AAOS SDV platform is well supported and robustly integrated within the automotive ecosystem. We look forward to sharing more updates with our partners in the months ahead.
Android developer verification: Balancing openness and choice with safety

_Posted by Matthew Forsythe, Director Product Management, Android App Safety_

Android proves you don't have to choose between an open ecosystem and a secure one. Since announcing updated verification requirements, we've worked with the community to ensure these protections are robust yet respectful of platform freedom. We've heard from power users that they want to take educated risks to install software from unverified developers. Today, we're sharing details on a new advanced flow that provides this option.

#### Advanced flow safeguards against coercion

Android is built on choice. That is why we’ve developed the advanced flow, an approach that allows power users to maintain the ability to sideload apps from unverified developers. This flow is **a one-time** process for power users, but it was designed carefully to prevent those in the midst of a scam attempt from being coerced by high-pressure tactics into installing malicious software. In these scenarios, scammers exploit fear, using threats of financial ruin, legal trouble, or harm to a loved one, to create a sense of extreme urgency. They stay on the phone with victims, coaching them to bypass security warnings and disable security settings before the victim has a chance to think or seek help. According to a 2025 report from the Global Anti-Scam Alliance (GASA), 57% of surveyed adults experienced a scam in the past year, resulting in a global consumer loss of $442 billion. Because the consequences of these scams, which use sophisticated social engineering tactics, are so severe, we have carefully engineered the advanced flow to provide the critical time and space needed to break the cycle of coercion.

### How the advanced flow works for users

* **Enable developer mode in system settings:** Activating this is simple for an informed user, but it is a deliberate step that prevents accidental triggers or "one-tap" bypasses often used in high-pressure scams.
* **Confirm you aren't being coached:** There is a quick check to make sure that no one is talking you into turning off your security. While power users know how to vet apps, scammers often pressure victims into disabling protections.
* **Restart your phone and reauthenticate:** This cuts off any remote access or active phone calls a scammer might be using to watch what you're doing.
* **Come back after the protective waiting period and verify:** There is a one-time, one-day wait, and then you can confirm that it's really you making this change with biometric authentication (fingerprint or face unlock) or your device PIN. Scammers rely on manufactured urgency, so this breaks their spell and gives you time to think.
* **Install apps:** Once you confirm you understand the risks, you're all set to install apps from unverified developers, with the option of enabling this for 7 days or indefinitely. For safety, you'll still see a warning that the app is from an unverified developer, but you can just tap "Install Anyway."

#### A secure Android for every developer

We know a "one size fits all" approach doesn't work for our diverse ecosystem. We want to ensure that identity verification isn't a barrier to entry, so we're providing different paths to fit your specific needs. In addition to the advanced flow, we're building free, limited distribution accounts for students and hobbyists. This allows you to share apps with a small group (up to 20 devices) without needing to provide a government-issued ID or pay a registration fee. This ensures Android remains an open platform for learning and experimentation while maintaining robust protections for the broader community. Limited distribution accounts and the advanced flow for users will be available in August, before the new developer verification requirements take effect. Visit our website for more details. We look forward to sharing more in the coming days and weeks.
_3 weeks ago_
Get inspired and take your apps to desktop

_Posted by Ivy Knight, Senior Design Advocate, Android_

We're thrilled to announce major updates to our design resources, giving you the comprehensive guidance you need to create polished, adaptive Android apps across all form factors! We now have Desktop Experience guidance and a refreshed Android Design Gallery.

### New Desktop Experience Design Guidance

Your users are engaging with Android apps on more diverse devices than ever before – from phones and foldables to laptops and external monitors. A "desktop experience" occurs anytime your app is in a desktop-like mode, typically involving a non-touch input device like a keyboard or mouse, or another display such as a monitor (read more in the connected display announcement). This means designing for larger screens and accommodating additional input states. These new design experiences are meant to maximize productivity for your users with higher information density and multi-tasking capabilities.

Dive into the desktop experience guidance to help optimize your app with desktop design principles, input interaction guidance, and system UI considerations. The new guidance includes foundational guides where you can learn the design principles that make desktop experiences unique, such as how multitasking is at the core of desktop experiences. When your app is in a desktop experience, keep crucial interaction details in mind, such as how to design around unique input interactions – for example, choosing from the system-provided cursors. For specialized actions not covered by system icons, consider creating a custom cursor icon, while ensuring it remains easy for users to find on the page. A desktop experience brings more multitasking features, like windowing, so expect your app to take on a variety of dimensions with a header bar. Desktops have much larger screens than mobile devices, and users typically interact using a mouse, which has finer precision than a finger on a touch screen.
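As a small illustration of the cursor guidance above, here is a minimal Compose sketch (the composable name is invented for illustration) that shows a system-provided cursor when a mouse hovers over a control, using the `Modifier.pointerHoverIcon` API:

```kotlin
import androidx.compose.material3.Button
import androidx.compose.material3.Text
import androidx.compose.runtime.Composable
import androidx.compose.ui.Modifier
import androidx.compose.ui.input.pointer.PointerIcon
import androidx.compose.ui.input.pointer.pointerHoverIcon

// Hypothetical example: show a hand cursor over a clickable control
// when the app runs in a desktop-like mode with a mouse attached.
@Composable
fun DownloadButton(onClick: () -> Unit) {
    Button(
        onClick = onClick,
        // PointerIcon.Hand is one of the system-provided cursors;
        // Default, Text, and Crosshair are also available.
        modifier = Modifier.pointerHoverIcon(PointerIcon.Hand)
    ) {
        Text("Download")
    }
}
```

This is a sketch, not the official guidance's sample; for custom cursor icons beyond the system set, see the desktop experience documentation.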
This means you can present a UI with higher information density so your users can be more productive! Want to get started quickly? Check out the walkthrough to go from mobile to desktop, and design along with the updated Adaptive Design lab. For more on the criteria that make a differentiated, quality app, read the newly updated adaptive app quality guidelines and adaptive developer guidance.

### Introducing the Android Design Gallery

Looking for inspiration? We've launched the Android Design Gallery! This new resource is a living catalog of inspirational examples across multiple verticals, form factors, and UX patterns. We'll be continually adding new inspirational examples, so check back often to see the latest and greatest in Android design.
_3 weeks ago_
Room 3.0 - Modernizing the Room

_Posted by Daniel Santiago Rivera, Software Engineer_

The first alpha of Room 3.0 has been released! Room 3.0 is a major breaking version of the library that focuses on Kotlin Multiplatform (KMP) and adds support for JavaScript and WebAssembly (WASM) on top of the existing Android, iOS, and JVM desktop support. In this blog we outline the breaking changes, the reasoning behind Room 3.0, and the various things you can do to migrate from Room 2.x.

### Breaking changes

Room 3.0 includes the following breaking API changes:

* **Dropping SupportSQLite APIs:** Room 3.0 is fully backed by the androidx.sqlite driver APIs. The SQLiteDriver APIs are KMP-compatible, and removing Room's dependency on Android's API simplifies the API surface on Android, since it avoids having two possible backends.
* **No more Java code generation:** Room 3.0 exclusively generates Kotlin code. This aligns with the evolving Kotlin-first paradigm, but also simplifies the codebase and development process, enabling faster iterations.
* **Focus on KSP:** We are also dropping support for Java Annotation Processing (AP) and KAPT. Room 3.0 is solely a KSP (Kotlin Symbol Processing) processor, allowing for better processing of Kotlin codebases without being limited by the Java language.
* **Coroutines first:** Room 3.0 embraces Kotlin coroutines, making its APIs coroutine-first. Coroutines are the KMP-compatible asynchronous framework, and making Room asynchronous by nature is a critical requirement for supporting web platforms.

### A new package

To prevent compatibility issues with existing Room 2.x implementations and with libraries that have transitive dependencies on Room (for example, WorkManager), Room 3.0 resides in a new package, which means it also has new Maven group and artifact IDs. For example, androidx.room:room-runtime has become androidx.room3:room3-runtime, and classes such as androidx.room.RoomDatabase will now be located at androidx.room3.RoomDatabase.
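As a sketch of what the new package and coroutine-first API surface look like, here is a hypothetical DAO written against androidx.room3 (the entity, table, and function names are invented for illustration):

```kotlin
// Note the androidx.room3 package: the annotations keep their old
// names; only the Maven coordinates and package prefix change.
import androidx.room3.Dao
import androidx.room3.Entity
import androidx.room3.Insert
import androidx.room3.PrimaryKey
import androidx.room3.Query
import kotlinx.coroutines.flow.Flow

@Entity
data class User(@PrimaryKey val id: Long, val name: String)

@Dao
interface UserDao {
    // One-shot operations must be suspend functions in Room 3.0 –
    // blocking DAO functions are disallowed.
    @Insert
    suspend fun insert(user: User)

    @Query("SELECT * FROM User WHERE id = :id")
    suspend fun getById(id: Long): User?

    // Reactive return types such as Flow are the exception:
    // they do not need to be suspending.
    @Query("SELECT * FROM User")
    fun observeAll(): Flow<List<User>>
}
```

This mirrors the rules described in this post; consult the Room 3.0 release notes for the authoritative API surface.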
### Kotlin and Coroutines first

With no more Java code generation, Room 3.0 also requires KSP and the Kotlin compiler, even if the codebase interacting with Room is in Java. It is recommended to have a multi-module project where Room usage is concentrated and the Kotlin Gradle Plugin and KSP can be applied without affecting the rest of the codebase. Room 3.0 also requires coroutines; more specifically, DAO functions have to be suspending unless they return a reactive type, such as a Flow. Room 3.0 disallows blocking DAO functions. See the Coroutines on Android documentation to get started integrating coroutines into your application.

### Migration to SQLiteDriver APIs

With the shift away from SupportSQLite, apps will need to migrate to the SQLiteDriver APIs. This migration is essential to leveraging the full benefits of Room 3.0, including allowing the use of the bundled SQLite library via the BundledSQLiteDriver. You can start migrating to the driver APIs today with Room 2.7.0+. We strongly encourage you to avoid any further usage of SupportSQLite. If you migrate your Room integrations to the SQLiteDriver APIs, then the transition to Room 3.0 is easier, since the package change mostly involves updating symbol references (imports) and might require minimal changes to call sites. For a brief overview of the SQLiteDriver APIs, check out the SQLiteDriver APIs documentation. For more details on how to migrate Room to use the SQLiteDriver APIs, check out the official documentation to migrate from SupportSQLite.

### Room SupportSQLite wrapper

We understand completely removing SupportSQLite might not be immediately feasible for all projects. To ease this transition, Room 2.8.0, the latest version of the Room 2.x series, introduced a new artifact called androidx.room:room-sqlite-wrapper.
This artifact offers a compatibility API that allows you to convert a RoomDatabase into a SupportSQLiteDatabase, even if the SupportSQLite APIs in the database have been disabled due to a SQLiteDriver being installed. This provides a temporary bridge for developers who need more time to fully migrate their codebase. This artifact continues to exist in Room 3.0 as androidx.room3:room3-sqlite-wrapper to enable the migration to Room 3.0 while still supporting critical SupportSQLite usage. For example, invocations of database.openHelper.writableDatabase can be replaced by roomDatabase.getSupportWrapper(), and a wrapper will be provided even if setDriver() is called on Room's builder. For more details, check out the room-sqlite-wrapper documentation.

### Room and SQLite Web support

Support for the Kotlin Multiplatform targets JS and WasmJS brings some of the most significant API changes. Specifically, many APIs in Room 3.0 are suspend functions, since proper support for web storage is asynchronous. The SQLiteDriver APIs have also been updated to support the Web, and a new asynchronous web driver is available in androidx.sqlite:sqlite-web. It is a Web Worker-based driver that enables persisting the database in the Origin private file system (OPFS). For more details on how to set up Room for the Web, check out the Room 3.0 release notes.

### Custom DAO return types

Room 3.0 introduces the ability to add custom integrations to Room, similar to RxJava and Paging. Through a new annotation API called @DaoReturnTypeConverter, you can create your own integration such that Room's generated code becomes accessible at runtime. This enables @Dao functions to have custom return types without having to wait for the Room team to add support. Existing integrations have been migrated to use this functionality, so those who rely on them will now need to add the converters to the @Database or @Dao definitions.
For example, the Paging converter is located in the androidx.room3:room3-paging artifact and is called PagingSourceDaoReturnTypeConverter, while for LiveData the converter is in androidx.room3:room3-livedata and is called LiveDataReturnTypeConverter. For more details, check out the DAO Return Type Converters section in the Room 3.0 release notes.

### Maintenance mode for Room 2.x

Since the development of Room will be focused on Room 3, the current Room 2.x version enters maintenance mode. This means that no major features will be developed, but patch releases (2.8.1, 2.8.2, etc.) will still occur with bug fixes and dependency updates. The team is committed to this work until Room 3 becomes stable.

### Final thoughts

We are incredibly excited about the potential of Room 3.0 and the opportunities it unlocks for the Kotlin ecosystem. Stay tuned for more updates as we continue this journey!
_4 weeks ago_
TikTok reduces code size by 58% and improves app performance for new features with Jetpack Compose

_Posted by Ajesh R Pai, Developer Relations Engineer, and Ben Trengrove, Developer Relations Engineer_

TikTok is a global short-video platform known for its massive user base and innovative features. The team is constantly releasing updates, experiments, and new features for their users. Faced with the challenge of maintaining velocity while managing technical debt, the TikTok Android team turned to Jetpack Compose. The team wanted to enable faster, higher-quality iteration on product requirements. By leveraging Compose, the team sought to improve engineering efficiency by writing less code and reducing cognitive load, while also achieving better performance and stability.

### Streamlining complex UI to accelerate developer productivity

TikTok pages are often more complex than they appear, containing numerous layered conditional requirements. This complexity often resulted in difficult-to-maintain, sub-optimally structured View hierarchies and excessive View nesting, which caused performance degradation due to an increased number of measure passes. Compose offered a direct solution to this structural problem. Furthermore, Compose's measurement strategy helps reduce double taxation, making measure performance easier to optimize. To improve developer productivity, TikTok's central Design System team provides a component library for teams working on different app features. The team observed that development in Compose is simple: leveraging small composables is highly effective, while incorporating large UI blocks with conditional logic is both straightforward and has minimal overhead.

### Building a path forward through strategic migration

By strategically adopting Jetpack Compose, TikTok was able to stay on top of technical debt while also continuing to focus on creating great experiences for their users.
The ability of Compose to handle conditional logic cleanly and streamline composition allowed the team to achieve up to a 78% reduction in page loading time on new or fully rewritten pages. This improvement was 20–30% in smaller cases and 70–80% for full rewrites and new features. They were also able to reduce their code size by 58% compared to the same feature built in Views.

The TikTok team's overall strategy was to incrementally migrate specific user journeys. This gave them an opportunity to migrate, confirm measurable benefits, and then scale to more screens. They started by using Compose to simplify the overall structure of the QR code feature and saw the improvements. The team later expanded the migration to the Login and Sign-up experiences.

The team shared some additional learnings:

* While checking performance during migration, the TikTok team found that using many small ComposeViews to replace elements inside a single ViewHolder caused composition overhead. They achieved better results by expanding the migration to use one single ComposeView for the entire ViewHolder.
* When migrating a Fragment inside a ViewPager – one with custom height logic and conditional logic to hide and show UI based on experiments – performance wasn't impacted. In this case, migrating the ViewPager to a composable performed better than migrating only the Fragment.

Jun Shen really likes that Compose "reduces the amount of code required for feature development, improves testability, and accelerates delivery". The team plans to steadily increase Compose adoption, making it their preferred framework in the long term. Jetpack Compose proved to be a powerful solution for improving both their developer experience and production metrics at scale.

### Get Started with Jetpack Compose

Learn more about how Jetpack Compose can help your team.
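As a rough illustration of the conditional-logic benefit described above (the names are invented for illustration, not TikTok's code), a composable can express show/hide experiment logic declaratively instead of toggling View visibility across a nested hierarchy:

```kotlin
import androidx.compose.foundation.layout.Column
import androidx.compose.material3.Text
import androidx.compose.runtime.Composable

// Hypothetical flags standing in for experiment/configuration state.
data class FeedItemState(
    val showCaption: Boolean,
    val showLiveBadge: Boolean,
    val caption: String,
)

@Composable
fun FeedItemOverlay(state: FeedItemState) {
    Column {
        // In Views this would be nested visibility toggles; in Compose
        // the branch simply decides what is emitted into composition,
        // so unshown elements are never measured at all.
        if (state.showLiveBadge) {
            Text("LIVE")
        }
        if (state.showCaption) {
            Text(state.caption)
        }
    }
}
```

Because branches that evaluate to false contribute nothing to the composition, there are no hidden Views inflating the measure pass.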
_4 weeks ago_
Level Up: Test Sidekick and prepare for upcoming program milestones

_Posted by Maru Ahues Bouza, PM Director, Games on Google Play_

Last September, we shared our vision for the future of Google Play Games, grounded in a core belief: the best way to drive your game's success is to deliver a world-class player experience. We launched the Google Play Games Level Up program to recognize and reward great gaming experiences, while providing you with a powerful toolkit and new promotional opportunities to grow your games. The momentum since our announcement has been incredibly positive, with more than 600 million gamers now using Play Games Services every month. Developers are also finding success, with one-third of all game installs on the Play Store now coming from editorially-driven organic discovery. In fact, in 2025, Level Up features have driven over 2.5 billion incremental acquisitions for featured games, in addition to an average uplift of 25% in installs during the featuring windows. Today, we're inviting you to start testing Play Games Sidekick to keep your players in the action, sharing new Play Console updates to optimize your reach, and helping you prepare for our upcoming program milestones.

**Boost retention and immersion with Play Games Sidekick**

Play Games Sidekick is a helpful in-game overlay that gives players instant access to relevant gaming information – like rewards, offers, achievements, and quest progress – keeping them immersed while driving higher engagement for developers. It serves as a seamless bridge to the highly visible "You" tab, connecting your game to 160 million monthly active users already engaging there, and doubles as an active gaming companion that enhances the player experience with helpful, AI-generated Game Tips.

_Deep Rock Galactic: Survivor keeps players in the action with Play Games Sidekick_

Today, Sidekick officially debuts in over 90 games, with the experience expanding to all Level Up titles later this year.
But you don't need to wait for the broader rollout to get your game ready. You can now enable Sidekick through Play Console to preview and test how your players will interact with features like Achievements, Streaks, Play Points Coupons, and Game Tips. Upon completing your testing, be sure to push Sidekick to production to ensure your game meets the Level Up user experience guidelines.

_Enable Play Games Sidekick in Play Console to begin testing_

**Optimize reach and operations with new Play Console updates**

We are also rolling out two new Play Console updates to help you optimize your reach and streamline operations:

* **Pre-registration device breakdowns:** To aid launch decisions, you can now analyze the device distribution of your pre-registered audience by key device attributes, including Android version, RAM, and SoC. This enables you to optimize game performance, minimum specs, and marketing spend for the players already waiting for your game.

_Identify launch-day risks and optimize performance for your players with new pre-registration device breakdowns_

* **Real-time feedback:** With Level Up+, our tier for high-performing games, qualifying titles can unlock promotional content featuring and tools like deep links and audience targeting. While submissions must meet Play's quality guidelines, you no longer have to wait 24 hours to learn about issues. You can now get immediate feedback on quality whenever possible.

**Your 2026 checklist: Securing your Level Up benefits**

Today, all games on Google Play qualify for Google Play Games Level Up.
However, in order to maintain access to Level Up benefits – like Play Points offers, expanded APK size limits, consideration for Play Store collections and campaigns, or access to high-visibility surfaces like the You tab and Sidekick – you'll need to ensure your game meets the user experience guidelines by their upcoming milestones.

By July 2026:

* **Integrate Play Games Sidekick** to offer a quick and easy entry point to access rewards, offers, and achievements through an in-game overlay.
* **Implement achievements with Play Games Services** to support authentication with the modern Gamer Profile and to keep players engaged across the lifespan of your game.

By November 2026:

* **Implement cloud save** to enable progress sync across devices.

Last week, we announced that we're working on an expanded Level Up program that builds on our successful foundation to further improve gaming experiences. The update will introduce new requirements that will unlock additional benefits like lower service fees. Engaging with the program now ensures your work is strategically aligned with these future updates. We'll share more details in the coming months. In the meantime, the path to your first program milestone begins today. By prioritizing these user experience guidelines now, you're investing in the long-term value of your game and ensuring it's built to thrive for every player. Head over to Play Console to start testing Sidekick and take the next step in your Level Up journey.
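To give a sense of what the achievements milestone involves in code, here is a minimal Play Games Services v2 sketch in Kotlin. The achievement ID is a placeholder you would define in Play Console, and the function names are invented for illustration:

```kotlin
import android.app.Activity
import com.google.android.gms.games.PlayGames

// Note: PlayGamesSdk.initialize(context) should be called once,
// e.g. in Application.onCreate(), before using these clients.

fun unlockFirstWinAchievement(activity: Activity) {
    // "achievement_first_win" is a placeholder ID from Play Console.
    PlayGames.getAchievementsClient(activity)
        .unlock("achievement_first_win")
}

fun checkSignIn(activity: Activity) {
    // With Play Games Services v2, sign-in is automatic; you can
    // still check the result before showing achievements UI.
    PlayGames.getGamesSignInClient(activity)
        .isAuthenticated
        .addOnSuccessListener { result ->
            if (result.isAuthenticated) {
                // Safe to unlock achievements, show the profile, etc.
            }
        }
}
```

This is only a sketch; see the Play Games Services documentation for the full integration flow, including adding the games SDK dependency and configuring your project in Play Console.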
_4 weeks ago_
Expanding our stage for PC and paid titles

_Posted by Aurash Mahbod, VP and GM, Games on Google Play_

Google Play is proud to be the home of over 200,000 games – many of which defined the mobile-first era. But as cross-platform becomes the standard for players, we are evolving our ecosystem to match the scale of your ambitions. In recent years, we focused on elevating Android gaming quality while significantly deepening our support for native PC titles. We know that maximizing your game's reach across different platforms is complex. The Level Up program serves as your strategic roadmap, helping you prioritize optimizations that drive great experiences on Android. Building on this foundation, we're doubling down on our investment to make Play the most accessible home for every category of play. We're adding new tools for paid games and making the PC game discovery-to-purchase journey seamless. Keep reading to learn more about how we're creating a bigger stage for your games.

**Scale your discovery across mobile and PC platforms**

Building a bigger stage starts with making your games easier to find – and easier to buy – no matter which device your players prefer. We're expanding your reach by bringing cross-platform discovery directly to the mobile storefront.

* With the new PC section in the Games tab, your PC titles gain high-visibility placement among our most active mobile players.
* The PC badge ensures your cross-platform investment is recognized. This creates more opportunities to acquire players on mobile and transition them seamlessly to your high-fidelity PC experience.

_PC in the Games tab and PC badging expands your game's reach_

* With 'buy once, play anywhere' pricing, we're making it easier to sell your games across different devices. If you choose to opt in your mobile game for Google Play Games on PC, you can now offer a single price that covers both mobile and PC versions. We're rolling out this feature in EAP with select games, including Brotato: Premium.
* For PC-only games, players can now complete the full purchase journey on Google Play Games on PC with the same trusted security and privacy standards they expect from Google Play.

_'Buy once, play anywhere' pricing to sell your games across devices_

**Lower the purchase barrier with Game Trials**

To help you convert high-intent buyers with less friction, we're introducing Game Trials, a feature that enables players to experience your game for a limited time before making a purchase on mobile. Accessible directly from your game's store listing, Game Trials provides a fast track for players to start exploring your world with a single tap. Game Trials are now in testing with select titles, and we'll roll them out to more titles soon.

* To ensure this is low maintenance for you, the Game Trial is added directly into your Android App Bundle. This enables you to offer a high-quality trial without the burden of a separate codebase or a demo version of your app.
* Play ensures trials are secure and seamless. Game Trials are limited to once per user, and Play protects your game while the trial is active. When it ends, players can purchase your game and keep their progress.
* We're also working on tools that will give you more control – such as specifying a custom time limit or an in-game event to conclude the trial.

_Game Trial for DREDGE to help convert high-intent buyers_

**Diversify your revenue with a dedicated player community on Play Pass**

Play Pass is another way to diversify revenue and grow your player audience. It has been a strong launchpad for indie hits such as Isle of Arrows, Slay the Spire, and Dead Cells. With Play Pass, you can reach highly dedicated players seeking a more curated gaming experience, free of ads and in-app purchases. To help you deepen engagement, paid titles on Play Pass can now opt in to Google Play Games on PC – making it easy for players to find and play your games on a larger screen.
Later this year, you can nominate your game through a streamlined opt-in process directly in Play Console.

**Drive long-term sales with Wishlists and Discounts**

Wishlists and Discounts are among the most effective ways to capture player intent and drive long-term sales. To support players at every stage of their purchase journey, we're integrating them directly into Play. Players can save titles to their wishlist and manage them from library settings. To keep your game top-of-mind, players will receive automated notifications for your latest discounts – starting with mobile and expanding soon to PC games.

_Wishlist and discount notifications drive long-term sales, rolling out today_

**How leading studios are finding a new path to success on Play**

We're thrilled to welcome Sledding Game, 9 Kings, Potion Craft, Moonlight Peaks, and Low Budget Repairs to Play [1]. It marks an exciting expansion of our catalog and a step forward in our mission to build a bigger gaming ecosystem for all developers. This growth is fueled by our developer community, whose feedback continues to shape our roadmap and help us better support your success.

_Sledding Game, 9 Kings, Potion Craft, Moonlight Peaks, and Low Budget Repairs are coming to Play._

That mission brings us to GDC and the Independent Games Festival (IGF) Awards [2], where the next generation of games awaits! This year, we're inviting you to come along for the ride as we go backstage to chat with the finalists and winners, sharing the moments of triumph and the creative stories behind their development. Not joining us at GDC? You can take the next step in your journey to launch your game on Google Play today.

1. Sledding Game, 9 Kings, Potion Craft, and Moonlight Peaks are coming to Google Play in 2026. Low Budget Repairs is scheduled for release in 2027. [Back]
2. The Independent Games Festival (IGF) Awards is hosted by the Game Developers Conference (GDC) and requires a valid GDC pass for entry. [Back]
_4 weeks ago_
Instagram and Facebook deliver instant playback and boost user engagement with Media3 PreloadManager

_Posted by Mayuri Khinvasara Khabya, Developer Relations Engineer (LinkedIn and X)_

In the dynamic world of social media, user attention is won or lost quickly. Meta apps (Facebook and Instagram) are among the world's largest social platforms and serve billions of users globally. For Meta, delivering videos seamlessly isn't just a feature, it's the core of their user experience. Short-form videos, particularly Facebook Newsfeed and Instagram Reels, have become a primary driver of engagement. They enable creative expression and rapid content consumption, connecting and entertaining people around the world. This blog post takes you through the journey of how Meta transformed video playback for billions by delivering true instant playback.

### The latency gap in short-form videos

Short-form videos lead to highly fast-paced interactions as users quickly scroll through their feeds. Delivering a seamless transition between videos in an ever-changing feed introduces unique hurdles for instantaneous playback, so solutions need to go beyond traditional disk caching and standard reactive playback strategies.

### The path forward with Media3 PreloadManager

To address the shift in consumption habits driven by the rise of short-form content, and the limitations of traditional long-form playback architecture, Jetpack Media3 introduced PreloadManager. This component allows developers to move beyond disk caching, offering granular control and customization to keep media ready in memory before the user hits play. Read this blog series to understand the technical details of media playback with PreloadManager.

### How Meta achieved true instant playback

**Existing complexities**

Previously, Meta used a combination of warmup (to get players ready) and prefetch (to cache content on disk) for video delivery.
While these methods helped improve network efficiency, they introduced significant challenges. Warmup required instantiating multiple player instances sequentially, which consumed significant memory and limited preloading to only a few videos. This high resource demand meant that a more scalable, robust solution was needed to deliver the instant playback expected on modern, fast-scrolling social feeds.

**Integrating Media3 PreloadManager**

To achieve truly instant playback, Meta's Media Foundation Client team integrated the Jetpack Media3 PreloadManager into Facebook and Instagram. They chose the DefaultPreloadManager to unify their preloading and playback systems. This integration required refactoring Meta's existing architecture to enable efficient resource sharing between the PreloadManager and ExoPlayer instances. This strategic shift provided a key architectural advantage: the ability to parallelize preloading tasks and manage many videos using a single player instance. This dramatically increased preloading capacity while eliminating the high memory complexities of their previous approach.

### Optimization and performance tuning

The team then performed extensive testing and iteration to optimize performance across Meta's diverse global device ecosystem. Initial aggressive preloading sometimes caused issues, including increased memory usage and scroll performance slowdowns. To solve this, they fine-tuned the implementation by using careful memory measurements, considering device fragmentation, and tailoring the system to specific UI patterns.

**Fine-tuning the implementation for specific UI patterns**

Meta applied different preloading strategies and tailored the behavior to match the specific UI patterns of each app:

* **Facebook Newsfeed:** The UI prioritizes the video currently coming into view. The manager preloads only the current video to ensure it starts the moment the user pauses their scroll.
This "current-only" focus minimizes data and memory footprints in an environment where users may see many static posts between videos. While the system is presently designed to preload just the video in view, it can be adjusted to also preload upcoming (future) videos.

* **Instagram Reels:** This is a pure video environment where users swipe vertically. For this UI, the team implemented an "adjacent preload" strategy. The PreloadManager keeps the videos immediately adjacent to the current Reel ready in memory. This bi-directional approach ensures that whether a user swipes up or down, the transition remains instant and smooth.

The result was a dramatic improvement in Quality of Experience (QoE), including improvements in Playback Start and Time to First Frame for the user.

### Scaling for a diverse global device ecosystem

Scaling a high-performance video stack across billions of devices requires more than just aggressive preloading; it requires intelligence. Meta faced initial challenges with memory pressure and scroll lag, particularly on mid-to-low-end hardware. To solve this, they built a Device Stress Detection system around the Media3 implementation. The apps now monitor I/O and CPU signals in real time. If a device is under heavy load, preloading is paused to prioritize UI responsiveness. This device-aware optimization ensures that the benefit of instant playback doesn't come at the cost of system stability, allowing even users on older hardware to experience a smoother, uninterrupted feed.

### Architectural wins and code health

Beyond the user-facing metrics, the migration to Media3 PreloadManager offered long-term architectural benefits. While the integration and tuning process needed multiple iterations to balance performance, the resulting codebase is more maintainable. The team found that the PreloadManager API integrated cleanly with the existing Media3 ecosystem, allowing for better resource sharing.
For Meta, the adoption of Media3 PreloadManager was a strategic investment in the future of video consumption. By adopting preloading and adding device-intelligent gates, they successfully increased total watch time in their apps and improved the overall engagement of their global community.

**Resulting impact on Instagram and Facebook**

The proactive architecture delivered immediate and measurable improvements across both platforms.

* Facebook experienced faster playback starts, decreased playback stall rates, and a reduction in bad sessions (such as rebuffering, delayed start times, or lower quality), which overall resulted in higher watch time.
* Instagram saw faster playback starts and an increase in total watch time. Eliminating join latency (the interval from the user's action to the first frame display) directly increased engagement metrics: fewer interruptions from reduced buffering meant users watched more content.

**Key engineering learnings at scale**

As media consumption habits evolve, the demand for instant experiences will continue to grow. Implementing proactive memory management and optimizing for scale and device diversity ensures your application can meet these expectations efficiently.

* Prioritize intelligent preloading: Focus on delivering a reliable experience by minimizing stutters and loading times through preloading. Rather than simple disk caching, memory-level preloading ensures that content is ready the moment a user interacts with it.
* Align your implementation with UI patterns: Customize preloading behavior to match your app's UI. For example, use a "current-only" focus for mixed feeds like Facebook to save memory, and an "adjacent preload" strategy for vertical video environments like Instagram Reels.
* Leverage Media3 for long-term code health: Integrating with Media3 APIs rather than a custom caching solution allows for better resource sharing between the player and the PreloadManager, enabling you to manage multiple videos with a single player instance. This results in a future-proof codebase that is easier for engineering teams to maintain and optimize over time, and that benefits from the latest feature updates.
* Implement device-aware optimizations: Broaden your market reach by testing on a variety of devices, including mid-to-low-end models. Use real-time signals like CPU, memory, and I/O to adapt features and resource usage dynamically.

### Learn More

To get started and learn more:

* Explore the Media3 PreloadManager documentation.
* Read the blog series for advanced technical and implementation details:
  * Part 1: Introducing Preloading with Media3
  * Part 2: A deep dive into Media3's PreloadManager
* Check out the sample app to see preloading in action.

Now you know the secrets for instant playback. Go try them out!
1 month ago 0 0 0 0
Preview
Elevating AI-assisted Android development and improving LLMs with Android Bench

_Posted by Matthew McCullough, VP of Product Management, Android Developer_

We want to make it faster and easier for you to build high-quality Android apps, and one way we're helping you be more productive is by putting AI at your fingertips. We know you want AI that truly understands the nuances of the Android platform, which is why we've been measuring how LLMs perform on Android development tasks. Today we released the first version of Android Bench, our official leaderboard of LLMs for Android development.

Our goal is to provide model creators with a benchmark to evaluate LLM capabilities for Android development. By establishing a clear, reliable baseline for what high-quality Android development looks like, we're helping model creators identify gaps and accelerate improvements. This empowers developers to work more efficiently, with a wider range of helpful models to choose from for AI assistance, and will ultimately lead to higher quality apps across the Android ecosystem.

**Designed with real-world Android development tasks**

We created the benchmark by curating a task set against a range of common Android development areas. It is composed of real challenges of varying difficulty, sourced from public GitHub Android repositories. Scenarios include resolving breaking changes across Android releases, domain-specific tasks like networking on wearables, and migrating to the latest version of Jetpack Compose, to name a few. Each evaluation attempts to have an LLM fix the issue reported in the task, which we then verify using unit or instrumentation tests. This model-agnostic approach allows us to measure a model's ability to navigate complex codebases, understand dependencies, and solve the kinds of problems you encounter every day. We validated this methodology with several LLM makers, including JetBrains.
> “Measuring AI’s impact on Android is a massive challenge, so it’s great to see a framework that’s this sound and realistic. While we’re active in benchmarking ourselves, Android Bench is a unique and welcome addition. This methodology is exactly the kind of rigorous evaluation Android developers need right now.”
> - Kirill Smelov, Head of AI Integrations at JetBrains

**The first Android Bench results**

For this initial release, we wanted to measure pure model performance rather than agentic or tool use. The models successfully completed 16-72% of the tasks. This wide range demonstrates that some LLMs already have a strong baseline of Android knowledge, while others have more room for improvement. Regardless of where the models are now, we anticipate continued improvement as we encourage LLM makers to enhance their models for Android development. The LLM with the highest average score for this first release is Gemini 3.1 Pro, followed closely by Claude Opus 4.6. You can try all of the models we evaluated for AI assistance in your Android projects by using API keys in the latest stable version of Android Studio.

**Providing developers and LLM makers with transparency**

We value an open and transparent approach, so we made our methodology, dataset, and test harness publicly available on GitHub. One challenge for any public benchmark is the risk of data contamination, where models may have seen evaluation tasks during their training process. We have taken measures to ensure our results reflect genuine reasoning rather than memorization or guessing, including a thorough manual review of agent trajectories and the integration of a canary string to discourage training on the dataset. Looking ahead, we will continue to evolve our methodology to preserve the integrity of the dataset, while also making improvements for future releases of the benchmark, for example by growing the quantity and complexity of tasks.
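Conceptually, each task is verified by its tests, so a leaderboard score reduces to a pass rate over the task set. A toy Kotlin sketch of that scoring (purely illustrative; this is not the actual Android Bench harness, and the type names are assumptions):

```kotlin
// Toy model of benchmark scoring (not the actual Android Bench harness):
// a task counts as completed only if its verification tests pass after
// the model's proposed fix is applied.
data class TaskResult(val taskId: String, val testsPassed: Boolean)

fun passRatePercent(results: List<TaskResult>): Double {
    if (results.isEmpty()) return 0.0
    return 100.0 * results.count { it.testsPassed } / results.size
}
```

A model completing 3 of 4 tasks would score 75.0 under this scheme; the 16-72% range reported above is this kind of pass rate computed over the full task set.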
We’re looking forward to seeing how Android Bench improves AI assistance over the long term. Our vision is to close the gap between concept and quality code. We're building the foundation for a future where, no matter what you imagine, you can build it on Android.
Battery Technical Quality Enforcement is Here: How to Optimize Common Wake Lock Use Cases

_Posted by Alice Yuan, Senior Developer Relations Engineer_

In recognition that excessive battery drain is top of mind for Android users, Google has been taking significant steps to help developers build more power-efficient apps. On March 1st, 2026, the Google Play Store began rolling out the wake lock technical quality treatments to improve battery drain. This treatment will roll out gradually to impacted apps over the following weeks. Apps that consistently exceed the "Excessive Partial Wake Lock" threshold in Android vitals may see tangible impacts on their store presence, including warnings on their store listing and exclusion from discovery surfaces such as recommendations.

Users may see a warning on your store listing if your app exceeds the bad behavior threshold.

This initiative elevates battery efficiency to a core vitals metric alongside stability metrics like crashes and ANRs. The "bad behavior threshold" is defined as holding a non-exempted partial wake lock for at least two hours on average while the screen is off, in more than 5% of user sessions over the past 28 days. A wake lock is exempted if it is a system-held wake lock that offers clear user benefits that cannot be further optimized, such as audio playback, location access, or user-initiated data transfer. You can view the full definition of excessive wake locks in our Android vitals documentation.

As part of our ongoing initiative to improve battery life across the Android ecosystem, we have analyzed thousands of apps and how they use partial wake locks. While wake locks are sometimes necessary, we often see apps holding them inefficiently or unnecessarily when more efficient solutions exist. This post goes over the most common scenarios where excessive wake locks occur and our recommendations for optimizing wake locks.
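To make the threshold above concrete, here is a small, self-contained Kotlin sketch that mirrors the published definition. This is only an illustration; the real calculation is performed by Android vitals and is defined in its documentation:

```kotlin
// Illustrative model of the "excessive partial wake lock" threshold described
// above (not the Android vitals implementation). A session is "bad" when
// non-exempt partial wake locks were held for 2+ hours while the screen was
// off, and an app exceeds the threshold when more than 5% of its sessions in
// the evaluation window are bad.

const val BAD_SESSION_WAKE_LOCK_MS = 2 * 60 * 60 * 1000L // 2 hours
const val BAD_BEHAVIOR_SESSION_FRACTION = 0.05           // 5% of sessions

fun isBadSession(nonExemptScreenOffWakeLockMs: Long): Boolean =
    nonExemptScreenOffWakeLockMs >= BAD_SESSION_WAKE_LOCK_MS

fun exceedsThreshold(sessionWakeLockMs: List<Long>): Boolean {
    if (sessionWakeLockMs.isEmpty()) return false
    val badFraction =
        sessionWakeLockMs.count { isBadSession(it) }.toDouble() / sessionWakeLockMs.size
    return badFraction > BAD_BEHAVIOR_SESSION_FRACTION
}
```

For example, one two-hour wake lock session out of ten user sessions is a 10% bad-session rate, which exceeds the 5% threshold.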
We have already seen measurable success from partners like WHOOP, who leveraged these recommendations to optimize their background behavior.

**Using a foreground service vs partial wake locks**

We often see developers struggle to distinguish two concepts used for background execution: foreground services and partial wake locks. A foreground service is a lifecycle API that signals to the system that an app is performing user-perceptible work and should not be killed to reclaim memory, but it does not automatically prevent the CPU from sleeping when the screen turns off. In contrast, a partial wake lock is a mechanism specifically designed to keep the CPU running even while the screen is off. While a foreground service is often necessary to continue a user action, manually acquiring a partial wake lock is only necessary, in conjunction with a foreground service, for the duration of the CPU activity. In addition, you don't need to use a wake lock if you're already utilizing an API that keeps the device awake. Refer to the flow chart in Choose the right API to keep the device awake to ensure you have a strong understanding of which tool to use, so you can avoid acquiring a wake lock in scenarios where it's not necessary.

**Third-party libraries acquiring wake locks**

It is common for an app to discover that it is flagged for excessive wake locks held by a third-party SDK or a system API acting on its behalf. To identify and resolve these wake locks, we recommend the following steps:

* Check Android vitals: Find the exact name of the offending wake lock in the excessive partial wake locks dashboard. Cross-reference this name with the Identify wake locks created by other APIs guidance to see if it was created by a known system API or Jetpack library. If it is, you may need to optimize your usage of the API and can refer to the recommended guidance.
* Capture a system trace: If the wake lock cannot be easily identified, reproduce the wake lock issue locally using a system trace and inspect it with the Perfetto UI. You can learn more about how to do this in the Debugging other types of excessive wake locks section of this blog post.
* Evaluate alternatives: If an inefficient third-party library is responsible and cannot be configured to respect battery life, consider raising the issue with the SDK's owners, finding an alternative SDK, or building the functionality in-house.

**Common wake lock scenarios**

Below is a breakdown of some of the specific use cases we have reviewed, along with the recommended path to optimizing your wake lock implementation.

**User-Initiated Upload or Download**

Example use cases:

* Video streaming apps where the user triggers a download of a large file for offline access.
* Media backup apps where the user triggers an upload of their recent photos via a notification prompt.

How to reduce wake locks:

* Do not acquire a manual wake lock. Instead, use the User-Initiated Data Transfer (UIDT) API. This is the designated path for long-running data transfer tasks initiated by the user, and it is exempted from excessive wake lock calculations.

**One-Time or Periodic Background Syncs**

Example use cases:

* An app performs periodic background syncs to fetch data for offline access.
* Pedometer apps that fetch step counts periodically.

How to reduce wake locks:

* Do not acquire a manual wake lock. Use WorkManager configured for one-time or periodic work. WorkManager respects system health by batching tasks and has a minimum periodic interval (15 minutes), which is generally sufficient for background updates.
* If you identify wake locks created by WorkManager or JobScheduler with high wake lock usage, it may be because you've misconfigured your worker so it doesn't complete in certain scenarios.
Consider analyzing the worker stop reasons, particularly if you're seeing high occurrences of STOP_REASON_TIMEOUT.

```kotlin
workManager.getWorkInfoByIdFlow(syncWorker.id)
    .collect { workInfo ->
        if (workInfo != null) {
            val stopReason = workInfo.stopReason
            logStopReason(syncWorker.id, stopReason)
        }
    }
```

* In addition to logging worker stop reasons, refer to our documentation on debugging your workers. Also, consider collecting and analyzing system traces to understand when wake locks are acquired and released.
* Finally, check out our case study with WHOOP, where they discovered an issue with the configuration of their workers and reduced their wake lock impact significantly.

### Bluetooth Communication

Example use cases:

* Companion device apps that prompt the user to pair their external Bluetooth device.
* Companion device apps that listen for hardware events on an external device and surface a user-visible change in a notification.
* Companion device apps where the user initiates a file transfer between the mobile and Bluetooth device.
* Companion device apps that perform occasional firmware updates on an external device via Bluetooth.

How to reduce wake locks:

* Use companion device pairing to pair Bluetooth devices, to avoid acquiring a manual wake lock during Bluetooth pairing.
* Consult the Communicate in the background guidance to understand how to do background Bluetooth communication.
* Using WorkManager is often sufficient if there is no user impact from a delayed communication. If a manual wake lock is deemed necessary, only hold the wake lock for the duration of the Bluetooth activity or the processing of the activity data.

### Location Tracking

Example use cases:

* Fitness apps that cache location data for later upload, such as plotting running routes.
* Food delivery apps that pull location data at a high frequency to update delivery progress in a notification or widget UI.

How to reduce wake locks:

* Consult our guidance to Optimize location usage.
Consider implementing timeouts, leveraging location request batching, or utilizing passive location updates to ensure battery efficiency.

* When requesting location updates using the FusedLocationProvider or LocationManager APIs, the system automatically triggers a device wake-up during the location event callback. This brief, system-managed wake lock is exempted from excessive partial wake lock calculations.
* Avoid acquiring a separate, continuous wake lock for caching location data, as this is redundant. Instead, persist location events in memory or local storage and leverage WorkManager to process them at periodic intervals.

```kotlin
override fun onCreate(savedInstanceState: Bundle?) {
    locationCallback = object : LocationCallback() {
        override fun onLocationResult(locationResult: LocationResult?) {
            locationResult ?: return
            // System wakes up the CPU for a short duration
            for (location in locationResult.locations) {
                // Store data in memory to process at another time
            }
        }
    }
}
```

### High Frequency Sensor Monitoring

Example use cases:

* Pedometer apps that passively collect steps or distance traveled.
* Safety apps that monitor the device sensors for rapid changes in real time, to provide features such as crash detection or fall detection.

How to reduce wake locks:

* If using SensorManager, reduce usage to periodic intervals and only when the user has explicitly granted access through a UI interaction. High-frequency sensor monitoring can drain the battery heavily due to the number of CPU wake-ups and the processing that occurs.
* If you're tracking step counts or distance traveled, rather than using SensorManager, leverage the Recording API or consider utilizing Health Connect to access historical and aggregated device step counts in a battery-efficient manner.
* If you're registering a sensor with SensorManager, specify a maxReportLatencyUs of 30 seconds or more to leverage sensor batching and minimize the frequency of CPU interrupts.
When the device is subsequently woken by another trigger, such as a user interaction, location retrieval, or a scheduled job, the system will immediately dispatch the cached sensor data.

```kotlin
val accelerometer = sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER)
sensorManager.registerListener(
    this,
    accelerometer,
    samplingPeriodUs,   // How often to sample data
    maxReportLatencyUs  // Key for sensor batching
)
```

* If your app requires both location and sensor data, synchronize their event retrieval and processing. By piggybacking sensor readings onto the brief wake lock the system holds for location updates, you avoid needing a wake lock to keep the CPU awake. Use a worker or a short-duration wake lock to handle the upload and processing of this combined data.

### Remote Messaging

Example use cases:

* Video or sound monitoring companion apps that need to monitor events occurring on an external device connected over a local network.
* Messaging apps that maintain a network socket connection with their desktop variant.

How to reduce wake locks:

* If the network events can be processed on the server side, use FCM to receive information on the client. You may choose to schedule an expedited worker if additional processing of the FCM data is required.
* If events must be processed on the client side via a socket connection, a wake lock is not needed to listen for event interrupts. When data packets arrive at the Wi-Fi or cellular radio, the radio hardware triggers a hardware interrupt in the form of a kernel wake lock. You may then choose to schedule a worker or acquire a wake lock to process the data.
* For example, if you're using ktor-network to listen for data packets on a network socket, you should only acquire a wake lock once packets have been delivered to the client and need to be processed.
```kotlin
val readChannel = socket.openReadChannel()
while (!readChannel.isClosedForRead) {
    // The CPU can safely sleep here while waiting for the next packet
    val packet = readChannel.readRemaining(1024)
    if (!packet.isEmpty) {
        // Data arrived: the system woke the CPU. Keep it awake via a manual
        // wake lock (urgent) or by scheduling a worker (non-urgent).
        performWorkWithWakeLock {
            val data = packet.readBytes()
            // Additional logic to process data packets
        }
    }
}
```

## Summary

By adopting these recommended solutions for common use cases like background syncs, location tracking, sensor monitoring, and network communication, developers can work towards reducing unnecessary wake lock usage.

To continue learning, read our other technical blog post or watch our technical video on how to discover and debug wake locks: Optimize your app battery using Android vitals wake lock metric. Also, consult our updated wake lock documentation. To help us continue improving our technical resources, please share any additional feedback on our guidance in our documentation feedback survey.
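Across all of these scenarios, the common pattern when a manual wake lock genuinely is unavoidable is to scope it tightly. A minimal sketch using the platform PowerManager API (the tag, timeout value, and `processUrgentData` helper are illustrative assumptions, not a prescribed implementation):

```kotlin
// Sketch of a tightly scoped partial wake lock. The tag, 60-second timeout,
// and processUrgentData() are illustrative. Passing a timeout to acquire()
// guarantees the lock is released even if release() is somehow skipped.
val powerManager = context.getSystemService(Context.POWER_SERVICE) as PowerManager
val wakeLock = powerManager.newWakeLock(
    PowerManager.PARTIAL_WAKE_LOCK,
    "myapp:packet-processing" // Hypothetical tag; shows up in vitals and traces
)
wakeLock.acquire(60_000L) // Safety timeout: auto-release after 60 seconds
try {
    processUrgentData() // Hypothetical app-specific work
} finally {
    if (wakeLock.isHeld) wakeLock.release()
}
```

Holding the lock only for the duration of the actual CPU work, with a timeout as a backstop, is what keeps sessions under the excessive wake lock threshold.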
How WHOOP decreased excessive partial wake lock sessions by over 90%

_Posted by Breana Tate, Developer Relations Engineer; Mayank Saini, Senior Android Engineer; Sarthak Jagetia, Senior Android Engineer; and Manmeet Tuteja, Android Engineer II_

Building an Android app for a wearable means the real work starts when the screen turns off. WHOOP helps members understand how their body responds to training, recovery, sleep, and stress, and for the many WHOOP members on Android, reliable background syncing and connectivity are what make those insights possible.

Earlier this year, Google Play released a new metric in Android vitals: excessive partial wake locks. This metric measures the percentage of user sessions where cumulative, non-exempt wake lock usage exceeds 2 hours in a 24-hour period. The aim of this metric is to help you identify and address possible sources of battery drain, which is crucial for delivering a great user experience.

Beginning March 1, 2026, apps that continue to miss the quality threshold may be excluded from Google Play discovery surfaces. A warning may also be placed on the Google Play Store listing, indicating the app might use more battery than expected. According to Mayank Saini, Senior Android Engineer at WHOOP, this "presented the team with an opportunity to raise the bar on Android efficiency," after Android vitals flagged the app's excessive partial wake lock percentage at 15%, which exceeded the recommended 5% threshold.

The team viewed the Android vitals metric as a clear signal that their background work was holding the CPU awake longer than necessary. Resolving this would allow them to continue delivering a great user experience while decreasing wasted background time and maintaining reliable, timely Bluetooth connectivity and syncing.

**Identifying the issue**

To figure out where to start, the team first turned to Android vitals for more insight into which wake locks were affecting the metric.
By consulting the Android vitals excessive partial wake locks dashboard, they were able to identify the biggest contributor to excessive partial wake locks as one of their WorkManager workers (identified in the dashboard as androidx.work.impl.background.systemjob.SystemJobService). To support the WHOOP "always-on experience", the app uses WorkManager for background tasks like periodic syncing and delivering recurring updates to the wearable.

While the team was aware that WorkManager acquires a wake lock while executing tasks in the background, they previously did not have visibility into how all of their background work (beyond just WorkManager) was distributed until the introduction of the excessive partial wake locks metric in Android vitals. With the dashboard identifying WorkManager as the main contributor, the team was then able to focus their efforts on identifying which of their workers contributed the most, and work toward resolving the issue.

**Making use of internal metrics and data to better narrow down the cause**

WHOOP already had internal infrastructure set up to monitor WorkManager metrics. They periodically monitor:

1. Average runtime: How long does the worker run?
2. Timeouts: How often does the worker time out instead of completing?
3. Retries: How often does the worker retry if the work timed out or failed?
4. Cancellations: How often was the work cancelled?

Tracking more than just worker successes and failures gives the team visibility into their work's efficiency. These internal metrics flagged high average runtimes for a select few workers, enabling them to narrow the investigation even further.

In addition to their internal metrics, the team also used Android Studio's Background Task Inspector to inspect and debug the workers of interest, with a specific focus on associated wake locks, to align with the metric flagged in Android vitals.
**Investigation: Distinguishing between worker variants**

WHOOP uses both one-time and periodic scheduling for some workers. This allows the app to reuse the same Worker logic for identical tasks with the same success criteria, differing only in timing. Using their internal metrics made it possible to narrow their search to a specific worker, but they couldn't tell whether the bug occurred when the worker was one-time, periodic, or both. So, they rolled out an update using WorkManager's setTraceTag method to distinguish between the one-time and periodic variants of the same Worker.

This extra detail would allow them to definitively identify which Worker variant (periodic or one-time) was contributing the most to sessions with excessive partial wake locks. However, the team was surprised when the data revealed that neither variant appeared to be contributing more than the other. Manmeet Tuteja, Android Engineer II at WHOOP, said "that split also helped us confirm the issue was happening in both variants, which pointed away from scheduling configuration and toward a shared business logic problem inside the worker implementation."

**Diving deeper on worker behavior and fixing the root cause**

With the knowledge that they needed to look at logic within the worker, the team re-examined the behavior of the workers that had been flagged during their investigation. Specifically, they were looking for instances in which work may have been getting stuck and not completing. All of this culminated in finding the root cause of the excessive wake locks: a CoroutineWorker that was designed to wait for a connection to the WHOOP sensor before proceeding.

If the work started with no sensor connected, whoopSensorFlow (which indicates whether the sensor is connected) was null. The SensorWorker didn't treat this as an early-exit condition and kept running, effectively waiting indefinitely for a connection.
As a result, WorkManager held a partial wake lock until the work timed out, leading to high background wake lock usage and frequent, unwanted rescheduling of the SensorWorker.

To address this, the WHOOP team updated the worker logic to check the connection status before attempting to execute the core business logic. If the sensor isn't available, the worker exits, avoiding a timeout scenario and releasing the wake lock. The following code snippet shows the solution:

```kotlin
class SensorWorker(appContext: Context, params: WorkerParameters) :
    CoroutineWorker(appContext, params) {

    override suspend fun doWork(): Result {
        ...
        // Check the sensor state and perform work, or return failure
        return whoopSensorFlow.replayCache
            .firstOrNull()
            ?.let { cachedData ->
                processSensorData(cachedData)
                Result.success()
            }
            ?: run {
                Result.failure()
            }
    }
}
```

**Achieving a 90% decrease in sessions with excessive partial wake locks**

After rolling out the fix, the team continued to monitor the Android vitals dashboard to confirm the impact of the changes. Ultimately, WHOOP saw their excessive partial wake lock percentage drop from 15% to less than 1% just 30 days after implementing the changes to their Worker. As a result of the changes, the team has seen fewer instances of work timing out without completing, resulting in lower average runtimes.

The WHOOP team's advice to other developers who want to improve their background work's efficiency:

**Get started**

If you're interested in reducing your app's excessive partial wake locks or improving worker efficiency, view your app's excessive partial wake locks metric in Android vitals, and review the wake locks documentation for more best practices and debugging strategies.
A new era for choice and openness

_Posted by Sameer Samat, President of Android Ecosystem_

Android has always driven innovation in the industry through its unique flexibility and openness. At this important moment, we want to continue leading the way in how developers distribute their apps and games to people on billions of devices across many form factors. A modern platform must be flexible, providing developers and users with choice and openness as well as a safe experience. Today we are announcing substantial updates that evolve our business model and build on our long history of openness globally. We're doing that in three ways: more billing options, a program for registered app stores, and lower fees and new programs for developers.

**Expanded billing choice on Google Play for users and developers**

Google Play is giving developers even more billing choice and freedom in how they handle transactions. Mobile developers will have the option to use their own billing systems in their app alongside Google Play's billing, or they can guide users outside of their app to their own websites for purchases. Our goal is to offer this flexibility in a way that maximizes choice and safety for users.

**Leading the way in store choice**

We're introducing a program that makes sideloading qualified app stores even easier. Our new Registered App Stores program will provide a more streamlined installation flow for Android app stores that meet certain quality and safety benchmarks. Once this change has rolled out, app stores that choose to participate in this optional program will be registered with us, and users who sideload them will see a simplified installation flow (see graphic below). If a store chooses not to participate, nothing changes for them; they retain the same experience as any other sideloaded app on Android. This gives app stores more ways to reach users and gives users more ways to easily and safely access the apps and games they love.
This Registered App Stores program will begin outside of the US first, and we intend to bring it to the US as well, subject to court approval.

**Lower pricing and new programs to support developers**

Google Play's fees are already the lowest among major app stores, and today we are taking this even further by introducing a new business model that decouples fees for using our billing system and introduces new, lower service fees. Once this rolls out:

1. Billing: Developers who choose to use Google Play's billing system will be charged a market-specific rate, separate from the service fee. In the European Economic Area (EEA), UK, and US that rate will be 5%.
2. Service fees:
   1. For new installs (first-time installs from users after the new fees launch in a region), we are reducing the in-app purchase (IAP) service fee to 20%.
   2. We are launching an Apps Experience Program and revamping our Google Play Games Level Up program to incentivize building great software experiences across Android form factors, tied to clear quality benchmarks and enhanced user benefits. Developers who choose to participate in these programs will have even lower rates. Participating IAP developers will have a 20% service fee for transactions from existing installs and a 15% fee on transactions from new app installs.
   3. Our service fee for recurring subscriptions will be 10%.

**Rollout timelines**

This is a significant evolution, and we plan to share additional details in the coming months. To make sure we have enough time to build the necessary technical infrastructure, enable a seamless transition for developers, and ensure alignment with local regulations, these updated fees will roll out on the following staggered schedule:

* By June 30: EEA, the United Kingdom, and the US
* By September 30: Australia
* By December 31: Korea and Japan
* By September 30, 2027: The updates will reach the rest of the world.
We will also launch the updated Google Play Games Level Up program and the new Apps Experience Program by September 30 for the EEA, UK, US, and Australia, and they will then roll out in line with the rest of the schedule above. We plan to launch Registered App Stores with a version of a major Android release by the end of the year.

**Resolving disputes with Epic Games**

With these updates, we have also resolved our disputes worldwide with Epic Games. We believe these changes will make for a stronger Android ecosystem with even more successful developers and higher-quality apps and games available across more form factors for everyone. We look forward to our continued work with the developer community to build the next generation of digital experiences.
Android devices extend seamlessly to connected displays

_Posted by Francesco Romano, Senior Developer Relations Engineer on Android_

We are excited to announce a major milestone in bringing mobile and desktop computing closer together on Android: connected display support has reached general availability with the Android 16 QPR3 release! As shown at Google I/O 2025, connected displays allow users to connect their Android devices to an external monitor and instantly access a desktop windowing environment. Apps can be used in free-form or maximized windows, and users can multitask just like they would on a desktop OS.

Google and Samsung have collaborated to bring a seamless and powerful desktop windowing experience to devices across the Android ecosystem running Android 16 while connected to an external display. This is now generally available on supported devices*: users can connect their supported Pixel and Samsung phones to external monitors, enabling new opportunities for building more engaging and more productive app experiences that adapt across form factors.

**How does it work?**

When a supported Android phone or foldable is connected to an external display, a new desktop session starts on the connected display. The experience on the connected display is similar to the experience on a desktop, including a taskbar that shows active apps and lets users pin apps for quick access. Users can run multiple apps side by side in freely resizable windows on the connected display.

Phone connected to an external display, with a desktop session on the display while the phone maintains its own state.

When a device that supports desktop windowing (such as a tablet like the Samsung Galaxy Tab S11) is connected to an external display, the desktop session is extended across both displays, unlocking an even more expansive workspace.
The two displays then function as one continuous system, allowing app windows, content, and the cursor to move freely between the displays. Tablet connected to an external display, extending the desktop session across both displays. ## Why does it matter? In the Android 16 QPR3 release, we finalized the windowing behaviors, taskbar interactions, and input compatibility (mouse and keyboard) that define the connected display experience. We also included compatibility treatments to scale windows and avoid app restarts when switching displays. If your app is built with adaptive design principles, it will automatically have the desktop look and feel, and users will feel right at home. If your app is locked to portrait or assumes a touch-only interface, now is the time to modernize. In particular, pay attention to these key best practices for optimal app experiences on connected displays: * Don't assume a constant Display object: The Display object associated with your app's context can change when an app window is moved to an external display or when the display configuration changes. Your app should gracefully handle configuration change events and query display metrics dynamically rather than caching them. * Account for density configuration changes: External displays can have vastly different pixel densities than the primary device screen. Ensure your layouts and resources adapt correctly to these changes to maintain UI clarity and usability. Use density-independent pixels (dp) for layouts, provide density-specific resources, and ensure your UI scales appropriately. * Correctly support external peripherals: When users connect to an external monitor, they often create a more desktop-like environment, frequently using external keyboards, mice, trackpads, webcams, microphones, and speakers. Make sure your app fully supports keyboard and mouse interactions. 
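The density advice above boils down to one habit: derive pixel sizes from the density you are handed at the moment of layout, never from a value cached at startup. A minimal pure-Kotlin sketch (the function and density values are illustrative, not framework APIs):

```kotlin
// Illustrative sketch: the same 48dp touch target maps to different pixel
// sizes depending on which display the window currently occupies, so
// recompute from the current density instead of caching a conversion.
fun dpToPx(dp: Float, density: Float): Int = (dp * density).toInt()

fun main() {
    println(dpToPx(48f, 2.625f)) // phone-class screen density
    println(dpToPx(48f, 1.0f))   // external 1080p monitor density
}
```

When the window moves between displays, a new configuration arrives with a new density; re-running the conversion is all that is needed to stay crisp.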
**Building for the desktop future with modern tools** We provide several tools to help you build the desktop experience. Let's recap the latest updates to our core adaptive libraries! **New window size classes: Large and Extra-large** The biggest update in Jetpack WindowManager 1.5.0 is the addition of two new width window size classes: Large and Extra-large. Window size classes are our official, opinionated set of viewport breakpoints that help you design and develop adaptive layouts. With 1.5.0, we're extending this guidance for screens that go beyond the size of typical tablets. Here are the new width breakpoints:

* Large: for widths between 1200dp and 1600dp
* Extra-large: for widths ≥1600dp

The different window size classes based on display width. On very large surfaces, simply scaling up a tablet's Expanded layout isn't always the best user experience. An email client, for example, might comfortably show two panes (a mailbox and a message) in the Expanded window size class. But on an Extra-large desktop monitor, the email client could elegantly display three or even four panes, perhaps a mailbox, a message list, the full message content, and a calendar/tasks panel, all at once. To include the new window size classes in your project, simply compute the size class from the WindowSizeClass.BREAKPOINTS_V2 set instead of WindowSizeClass.BREAKPOINTS_V1:

```kotlin
val currentWindowMetrics = WindowMetricsCalculator.getOrCreate()
    .computeCurrentWindowMetrics(LocalContext.current)
val sizeClass = WindowSizeClass.BREAKPOINTS_V2
    .computeWindowSizeClass(currentWindowMetrics)
```

Then apply the correct layout when you're sure your app has at least that much space:

```kotlin
if (sizeClass.isWidthAtLeastBreakpoint(
        WindowSizeClass.WIDTH_DP_LARGE_LOWER_BOUND)) {
    // Window is at least 1200 dp wide.
}
```

**Build adaptive layouts with Jetpack Navigation 3** Navigation 3 is the latest addition to the Jetpack collection. 
Navigation 3, which just reached its first stable release, is a powerful navigation library designed to work with Compose. Navigation 3 is also a great tool for building adaptive layouts, allowing multiple destinations to be displayed at the same time and enabling seamless switching between those layouts. This system for managing your app's UI flow is based on Scenes. A Scene is a layout that displays one or more destinations at the same time. A SceneStrategy determines whether it can create a Scene. Chaining SceneStrategy instances together allows you to create and display different scenes for different screen sizes and device configurations. For out-of-the-box canonical layouts, like list-detail and supporting pane, you can use the Scenes from the Compose Material 3 Adaptive library (available in version 1.3 and above). It's also easy to build your own custom Scenes by modifying the Scene recipes or starting from scratch. For example, let's consider a Scene that displays three panes side by side:

```kotlin
class ThreePaneScene<T : Any>(
    override val key: Any,
    override val previousEntries: List<NavEntry<T>>,
    val firstEntry: NavEntry<T>,
    val secondEntry: NavEntry<T>,
    val thirdEntry: NavEntry<T>
) : Scene<T> {
    override val entries: List<NavEntry<T>> =
        listOf(firstEntry, secondEntry, thirdEntry)
    override val content: @Composable (() -> Unit) = {
        Row(modifier = Modifier.fillMaxSize()) {
            Column(modifier = Modifier.weight(1f)) { firstEntry.Content() }
            Column(modifier = Modifier.weight(1f)) { secondEntry.Content() }
            Column(modifier = Modifier.weight(1f)) { thirdEntry.Content() }
        }
    }
}
```

In this scenario, you could define a SceneStrategy to show three panes if the window width is wide enough and the entries from your back stack have declared that they support being displayed in a three-pane scene.

```kotlin
class ThreePaneSceneStrategy<T : Any>(val windowSizeClass: WindowSizeClass) : SceneStrategy<T> {
    override fun SceneStrategyScope<T>.calculateScene(entries: List<NavEntry<T>>): Scene<T>? {
        if (windowSizeClass.isWidthAtLeastBreakpoint(WIDTH_DP_LARGE_LOWER_BOUND)) {
            val lastThree = entries.takeLast(3)
            if (lastThree.size == 3 && lastThree.all { it.metadata.containsKey(MULTI_PANE_KEY) }) {
                val firstEntry = lastThree[0]
                val secondEntry = lastThree[1]
                val thirdEntry = lastThree[2]
                return ThreePaneScene(
                    key = Triple(firstEntry.contentKey, secondEntry.contentKey, thirdEntry.contentKey),
                    previousEntries = entries.dropLast(3),
                    firstEntry = firstEntry,
                    secondEntry = secondEntry,
                    thirdEntry = thirdEntry
                )
            }
        }
        return null
    }
}
```

You can use your ThreePaneSceneStrategy with other strategies when creating your NavDisplay. For example, we could also add a TwoPaneSceneStrategy to display two panes side by side when there isn't enough space to show three.

```kotlin
val strategy = ThreePaneSceneStrategy(windowSizeClass) then TwoPaneSceneStrategy(windowSizeClass)

NavDisplay(
    ...,
    sceneStrategy = strategy,
    entryProvider = entryProvider {
        entry<MyScreen>(metadata = mapOf(MULTI_PANE_KEY to true)) { ... }
        // ... other entries ...
    }
)
```

If there isn't enough space to display three or two panes, both our custom scene strategies return null. In this case, NavDisplay falls back to displaying the last entry in the back stack in a single pane using SinglePaneScene. By using scenes and strategies, you can add one-, two-, and three-pane layouts to your app! An adaptive app showing three-pane navigation on wide screens. Check out the documentation to learn more about how to create custom layouts using Scenes in Navigation 3. ### Standalone adaptive layouts If you need a standalone layout, the Compose Material 3 Adaptive library helps you create adaptive UIs like list-detail and supporting pane layouts that adapt automatically to window configurations based on window size classes or device postures. The good news is that the library is already up to date with the new breakpoints! Starting from version 1.2, the default pane scaffold directive functions support the Large and Extra-large width window size classes. 
You only need to opt in when retrieving the window adaptive info, declaring that you want to use the new breakpoints:

```kotlin
currentWindowAdaptiveInfo(supportLargeAndXLargeWidth = true)
```

### Getting started Explore the connected display feature in the latest Android release. Get Android 16 QPR3 on a supported device, then connect it to an external monitor to start testing your app today! Dive into the updated documentation on multi-display support and window management to learn more about implementing these best practices. ### Feedback Your feedback is crucial as we continue to refine the connected display desktop experience. Share your thoughts and report any issues through our official feedback channels. We're committed to making Android a versatile platform that adapts to the many ways users want to interact with their apps and devices. The improvements to connected display support are another step in that direction, and we think your users will love the desktop experiences you'll build! *Note: At the time this article was written, connected displays are supported on the Pixel 8, 9, and 10 series and on a wide array of Samsung devices, including the S26, Fold7, Flip7, and Tab S11.
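As a recap of the breakpoints discussed in this post, the full set of width window size classes can be sketched as a pure function. The bucket names follow the library; the function itself is illustrative, and the Compact/Medium/Expanded bounds (600dp, 840dp) come from the existing window size class guidance, while Large and Extra-large are the new 1.5.0 additions:

```kotlin
// Illustrative sketch of the width window size class breakpoints.
fun widthSizeClass(widthDp: Int): String = when {
    widthDp < 600 -> "Compact"      // typical phone in portrait
    widthDp < 840 -> "Medium"       // small tablet, unfolded foldable
    widthDp < 1200 -> "Expanded"    // large tablet
    widthDp < 1600 -> "Large"       // new in WindowManager 1.5.0
    else -> "Extra-large"           // new in WindowManager 1.5.0
}

fun main() {
    println(widthSizeClass(411))  // typical phone width
    println(widthSizeClass(1280)) // laptop-class window
    println(widthSizeClass(1920)) // desktop monitor
}
```

In production code, use WindowSizeClass.BREAKPOINTS_V2 rather than hand-rolling the comparison; the sketch only makes the bucket boundaries explicit.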
Go from prompt to working prototype with Android Studio Panda 2 _Posted by Matt Dyor, Senior Product Manager_ Android Studio Panda 2 is now stable and ready for you to use in production. This release brings new agentic capabilities to Android Studio, enabling the agent to create an entire working application from scratch with the AI-powered New Project flow, and allowing the agent to automate the manual work of dependency updates. Whether you're building your first prototype or maintaining a large, established codebase, these updates bring new efficiency to your workflow by enabling Gemini in Android Studio to help more than ever. Here's a deep dive into what's new: ## Create New Projects with AI Say goodbye to boilerplate starter templates that just get you to the start line. With the AI-powered New Project flow, you can now build a working app prototype from just a single prompt. The agent reduces the time you spend setting up dependencies, writing boilerplate code, and creating basic navigation, allowing you to focus on the creative aspects of app development. The AI-powered New Project flow allows you to describe exactly what you want to build; you can even upload images for style inspiration. The agent then creates a detailed project plan for your review. When you're ready, the agent turns your plan into a first draft of your app using Android best practices, including Kotlin, Compose, and the latest stable libraries. Under your direction, it runs an autonomous generation loop: it generates the necessary code, builds the project, analyzes any build errors, and attempts to self-correct the code, looping until your project builds successfully. It then deploys your app to an Android Emulator and walks through each screen, verifying that the implementation works correctly and is true to your original request. 
Whether you need a simple single-screen layout, a multi-page app with navigation, or even an application integrated with Gemini APIs, the AI-powered New Project flow can handle it. ### Getting Started To use the agent to set up a project, do the following:

1. Start Android Studio.
2. Select New Project on the Welcome to Android Studio screen (or File > New > New Project from within a project).
3. Select Create with AI.
4. Type your prompt into the text entry field and click Next. For best results we recommend using a paid Gemini API key or a third-party remote model.
5. Name your app and click Finish to start the generation process.
6. Validate the finished app using the project plan and by running your app in the Android Emulator or on an Android device.

Create a New Project with AI in Android Studio. AI-powered New Project flow. For more details on the New Project flow, check out the official documentation. ### Share What You Build We want to hear from you and see the apps you're able to build using the New Project flow. Share your apps with us by using #AndroidStudio in your social posts. We'll be amplifying some of your submissions on our social channels. ### Unlock more with your Gemini API key While the agent works out of the box using Android Studio's default no-cost model, providing your own Google AI Studio API key unlocks the full potential of the assistant. By connecting a paid Gemini API key, you get access to the fastest and latest models from Google. It also allows the New Project flow to access Nano Banana, our best model for image generation, to ideate on UI design, allowing the agent to create richer, higher-fidelity application designs. In the AI-powered New Project flow, this increased capability means larger context windows for more tailored generation, as well as superior code quality. 
Furthermore, because the Agent uses Nano Banana behind the scenes for enhanced design generation, your prototype doesn't just work well: it features visually appealing, modern UI layouts and looks professional from the get-go. ## Version Upgrade Assistant Keeping your project dependencies up to date is time-consuming and often causes cascading build errors. You fix one issue by updating a dependency, only to introduce a new issue somewhere else. The Version Upgrade Assistant in Android Studio makes that a problem of the past. You can now let AI do the heavy lifting of managing dependencies and boilerplate so you can focus on creating unique experiences for your users. To use this feature, simply right-click in your version catalog, select AI, and then Update Dependencies. Version Upgrade Assistant accessed from Version Catalog You can also access the Version Upgrade Assistant from the Refactor menu: just choose Update all libraries with AI. Version Upgrade Assistant accessed from the Refactor menu The agent runs multiple automated rounds, attempting builds, reading error messages, and adjusting versions, until the build succeeds. Instead of manually fighting through dependency conflicts, you can let the agent handle the iterative process of finding a stable configuration for you. Read the documentation for more information on the Version Upgrade Assistant. ## Gemini 3.1 Pro is available in Android Studio We released the Gemini 3.1 Pro preview, and it is even better than Gemini 3 Pro at reasoning and intelligence. You can access it in Android Studio by plugging in your Gemini API key. Put the new model to work on your toughest bugs, code completion, and UI logic. Let us know what you think of the new model. Gemini 3.1 Pro Now Available in Android Studio ## Get started Dive in and accelerate your development. Download Android Studio Panda 2 Feature Drop and start exploring these powerful new agentic features today. As always, your feedback is crucial to us. 
Check known issues, report bugs, and be part of our vibrant community on LinkedIn, Medium, YouTube, or X. Happy coding!
Supercharge your Android development with 6 expert tips for Gemini in Android Studio _Posted by Trevor Johns, Developer Relations Engineer_ In January we announced Android Studio Otter 3 Feature Drop in stable, including Agent Mode enhancements and many other updates to provide more control and flexibility over using AI to help you build high quality Android apps. To help you get the most out of Gemini in Android Studio and all the new capabilities, we sat down with Google engineers and Google Developer Experts to gather their best practices for working with the latest features, including Agent Mode and the New Project Assistant. Here are some useful insights to help you get the best out of your development: 1. Build apps from scratch with the New Project Assistant The New Project Assistant, now available in the latest Canary builds, integrates Gemini with Studio's New Project wizard. By simply providing prompts and (optionally) design mockups, you can generate entire applications from scratch, including scaffolding, architecture, and Jetpack Compose layouts. Integrated with the Android Emulator, it can deploy your build and "walk through" the app, making sure it's functioning correctly and that the rendered screens actually match your vision. Additionally, you can use Agent Mode to continue to work on the app and iterate, leveraging Gemini to refine your app to fit your vision. Also, while this feature works with the default (no-cost) model, we highly recommend using it with an AI Studio API key to access the latest models, like Gemini 3.1 Pro or 3.0 Flash, which excel at agentic workflows. Additionally, adding your API key allows the New Project Assistant to use Nano Banana behind the scenes to help with ideating on UI design, improving the visual fidelity of the generated application! - Trevor Johns, Developer Relations Engineer. Dialog for setting up a new project. 2. 
Ask the Agent to refine your code by providing it with 'intentional' context When using Gemini Agents, the quality of the output is directly tied to the boundaries you set. Don't just ask it to "fix this code": be very intentional with the context that you provide and be specific about what you want (and what you don't). Improve the output by providing recent blogs or docs so the model can make accurate suggestions based on them. Ask the Agent to simplify complex logic, ask whether it sees any fundamental problems with it, or even ask it to scan for security risks in areas where you feel uncertain. Being firm with your instructions, even telling the model "please do not invent things" in instances where you are using very new or experimental APIs, helps keep the AI focused on the outputs you are trying to achieve. - Alejandra Stamato, Android Google Developer Expert and Android Engineer at HubSpot. 3. Use documentation with Agent Mode to provide context for new libraries To prevent the model from hallucinating code for niche or brand-new libraries, leverage Android Studio's Agent tools for accessing documentation: Search Android Docs and Fetch Android Docs. You can direct Gemini to search the Android Knowledge Base or specific documentation articles. The model can choose to use these if it thinks it's missing some information, which is especially useful when you use niche APIs or ones that aren't as common. If you want to be certain the model consults the documentation and those tools are triggered, a good trick is to add something like 'search the official documentation' or 'check the docs' to your prompts. And for documentation on libraries that aren't Android-specific, install an MCP server that provides access to documentation, like Context7 (or something similar). - Jose Alcérreca, Android Developer Relations Engineer, Google. 4. 
Use AI to help build AGENTS.md files for using custom frameworks, libraries, and design systems To make sure the Agent uses your custom frameworks, libraries, and design systems, you have two options: 1) In settings, Android Studio allows you to specify rules to be followed when Gemini is performing these actions for you. Or 2) create AGENTS.md files in your project that describe how things should be done: specific frameworks, design systems, or ways of working (such as the exact architecture, things to do, and things to avoid), written as clear bullet points the AI can follow. Manage AGENTS.md files as context. You can place an AGENTS.md file at the root of the project, and you can also have them in different modules (or even subdirectories) of your project! The more context or guidance you make available while you're working, the more the AI has to draw on. If you get stuck creating these AGENTS.md files, you can use AI to help build them, or to give you a foundation based on your existing projects that you then edit, so you don't have to start from scratch. - Joe Birch, Android Google Developer Expert and Staff Engineer at Buffer. 5. Offload the tedious tasks to the Agent and save yourself time You can get the Gemini in Android Studio agent to help you speed up tasks such as writing and reviewing. For example, it can help write commit messages, giving you a good summary that you can then review, saving yourself time. Additionally, get it to write tests; under your direction, the Agent can look at the other tests in your project and write a good test for you that follows best practices. Another good example of a tedious task is writing a new parser for a certain JSON format. Just give Gemini a few examples and it will get you started very quickly. - Diego Perez, Android Software Engineer, Google 6. 
Control what you are sharing with AI using simple opt-outs or commands, alongside paid models If you want to control what is shared with AI while on the no-cost plans, you can opt some or all of your code out of model training by adding an AI exclusions file ('.aiexclude') to your project. This file uses glob pattern matching, similar to a .gitignore file, to specify sensitive directories or files that should be hidden from the AI. You can place .aiexclude files anywhere within the project and its VCS roots to control which files AI features are allowed to access. An example of an `.aiexclude` file in Android Studio. Alternatively, in Android Studio settings, you can opt out of context sharing on a per-project or per-user basis (although this method limits the functionality of a number of features because the AI won't see your code). Remember, paid plans never use your code for model training. This includes both users with an AI Studio API key and businesses subscribed to Gemini Code Assist. - Trevor Johns, Developer Relations Engineer. Hear more from the Android team and Google Developer Experts about Gemini in Android Studio in our recent fireside chat and download Android Studio to get started.
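As an illustration of the `.gitignore`-style matching described in tip 6, a hypothetical `.aiexclude` file might look like this (the file names and directory are examples, not part of any real project):

```
# Hide API keys and signing configuration from AI features
apikeys.properties
*.keystore

# Hide an entire directory of sensitive business logic
secrets/
```

Files matched by these patterns are excluded from the context that Gemini in Android Studio can read.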
Preventing accessibility permission abuse in the Android ecosystem _Posted by Bethel Otuteye - Senior Director, Product Management, Android App Safety_ Security is a foundational pillar of Android, and we continually work on ways to make the platform safer for everyone. This release builds on our ongoing efforts, which include a range of APIs and features designed to help developers protect user data and fight against malware. From Credential Manager for a streamlined and more secure sign-in experience to resources on Preventing Fraudulent Activity, we're consistently working to identify and close potential vulnerabilities. We know that a strong security posture requires collaboration across the entire ecosystem. It's a joint effort between us and the developer community to create a more secure experience for everyone. ### Protecting your apps from snooping with a single line of code To further enhance user security, Android is continually evolving its defenses against malicious apps that attempt to abuse the Accessibility API's powerful features. This abuse includes reading sensitive information, such as passwords and financial details, directly from the screen and even manipulating a user's device by injecting touches. To combat this, Android 16 includes a feature that gives you a powerful tool to prevent this type of abuse with a single line of code. The accessibilityDataSensitive flag allows you to explicitly mark a view or composable as containing sensitive data. When this flag is set to true, apps with the accessibility permission that have not explicitly set isAccessibilityTool to true are blocked from accessing the view's data or performing interactions on it. This simple but effective change helps to prevent malware from stealing information and performing unauthorized actions, without impacting the functionality of legitimate accessibility tools. 
Note: If an app is not an accessibility tool but requests accessibility permissions and sets isAccessibilityTool=true, it will be rejected on Play and will be blocked by Play Protect on user devices. As an added benefit for developers, we've integrated this new functionality with the existing setFilterTouchesWhenObscured method. If you're already using setFilterTouchesWhenObscured(true) to protect against touchjacking, your views will automatically be treated as sensitive data for accessibility. This ensures that a large portion of the developer community will immediately benefit from this security enhancement. ### Getting started We encourage you to use setFilterTouchesWhenObscured (recommended) or the accessibilityDataSensitive flag on any screen that contains sensitive information, including login pages, payment flows, and any view displaying personal or financial data. #### For Jetpack Compose For setFilterTouchesWhenObscured:

```kotlin
val composeView = LocalView.current
DisposableEffect(Unit) {
    composeView.filterTouchesWhenObscured = true
    onDispose { composeView.filterTouchesWhenObscured = false }
}
```

For accessibilityDataSensitive, use the semantics modifier to apply the sensitiveData property to a composable:

```kotlin
BasicText(
    text = "Your password",
    modifier = Modifier.semantics { sensitiveData = true }
)
```

#### For View-based apps In your XML layout, add the relevant attribute to the sensitive view. 
For setFilterTouchesWhenObscured:

```xml
<TextView android:filterTouchesWhenObscured="true" />
```

For accessibilityDataSensitive:

```xml
<TextView android:accessibilityDataSensitive="true" />
```

Alternatively, you can set the property programmatically in Kotlin or Java:

```kotlin
// Kotlin
myView.filterTouchesWhenObscured = true
myView.isAccessibilityDataSensitive = true
```

```java
// Java
myView.setFilterTouchesWhenObscured(true);
myView.setAccessibilityDataSensitive(true);
```

You can read more about the accessibilityDataSensitive and setFilterTouchesWhenObscured flags in the Tapjacking guide. ### Partnering with developers to keep users safe We've been working with developers from the start to ensure this feature meets their needs, and we're already hearing great feedback. "We've always prioritized protecting our customers' sensitive financial data, which required us to build our own protection layer against accessibility-based malware. Revolut strongly supports the introduction of this new, official Android API, as it allows us to gradually move away from our custom code in favor of a robust, single-line platform defense." - Vladimir Kozhevnikov, Android Engineer at Revolut We believe these new tools represent a significant step forward in our mission to make Android a safer platform for everyone. By leveraging setFilterTouchesWhenObscured or adopting accessibilityDataSensitive, you can play a crucial role in protecting your users from malicious accessibility-based attacks. We encourage all developers to integrate these features into their apps to strengthen the security of the Android ecosystem as a whole. Together, we can build a more secure and trustworthy experience for all Android users.
Spotlight Week: Android Safety and Security _Posted by Todd Burner - Android Developer Relations_ ## Announcing our Safety and Security Spotlight Week! This week, we're launching our first-ever Spotlight Week dedicated to Android Safety and Security. To kick things off, we're excited to announce the launch of early access for the new Android developer verification experience. The first few invites are rolling out soon and you can still sign up today. This program is a foundational step in our commitment to elevate Android security and make the ecosystem safer for everyone. The Safety and Security Spotlight Week will provide resources: blog posts, videos, and more, all designed to help you build more secure apps and prepare for upcoming changes. Here's what's coming: * Android developer verification (Tuesday, November 11th): We're kicking off the week with a deep dive into the new Android developer verification requirements. We'll explain what they are, how to get ready, and preview the new Android Developer Console. * Learn more in the deep-dive blog post. * Watch the video. * Play Integrity API - stronger threat detection, simpler integration (Wednesday, November 12th): Learn how to better protect your apps and games from abuse and attack with the Play Integrity API. We'll cover use cases and recommended practices, and introduce the new in-app remediation prompts that help users resolve integrity issues and API error codes automatically. * Accessibility APIs (Thursday, November 13th): We'll cover how adding just one line to your code with the accessibilityDataSensitive flag can protect your users' sensitive information and prevent abuse of the Android accessibility APIs. * Cyber Security Awareness with Advanced Protection Mode (Thursday, November 13th): Advanced Protection offers greater security functionality for Android's most sensitive users by allowing them to make changes to device protection settings across Android through one simple toggle. 
That’s a look at what we’ll be covering during our Safety and Security Spotlight Week. Be sure to check back here throughout the week, as we’ll be updating this post with all the latest links. Follow the Android Developers channels on X and LinkedIn to get the latest updates as they happen.
The Second Beta of Android 17 _Posted by Matthew McCullough, VP Product Management, Android Developer_ Today we're releasing the second beta of Android 17, continuing our work to build a platform that prioritizes privacy, security, and refined performance. This update delivers a range of new capabilities, including the EyeDropper API and a privacy-preserving Contacts Picker. We're also adding advanced ranging, cross-device handoff APIs, and more. This release continues the shift in our release cadence, following this annual major SDK release in Q2 with a minor SDK update. ## User Experience & System UI ### Bubbles Bubbles is a windowing mode feature that offers a new floating UI experience separate from the messaging bubbles API. Users can create an app bubble on their phone, foldable, or tablet by long-pressing an app icon on the launcher. On large screens, there is a bubble bar as part of the taskbar where users can organize, move between, and move bubbles to and from anchored points on the screen. You should follow the guidelines for supporting multi-window mode to ensure your apps work correctly as bubbles. ### EyeDropper API A new system-level EyeDropper API allows your app to request a color from any pixel on the display without requiring sensitive screen capture permissions.

```kotlin
val eyeDropperLauncher = registerForActivityResult(
    ActivityResultContracts.StartActivityForResult()
) { result ->
    if (result.resultCode == Activity.RESULT_OK) {
        val color = result.data?.getIntExtra(Intent.EXTRA_COLOR, Color.BLACK)
        // Use the picked color in your app
    }
}

fun launchColorPicker() {
    val intent = Intent(Intent.ACTION_OPEN_EYE_DROPPER)
    eyeDropperLauncher.launch(intent)
}
```

### Contacts Picker A new system-level contacts picker via ACTION_PICK_CONTACTS grants temporary, session-based read access to only the specific data fields requested by the user, reducing the need for the broad READ_CONTACTS permission. It also allows for selections from the device's personal or work profiles. 
```kotlin
val contactPicker = rememberLauncherForActivityResult(StartActivityForResult()) {
    if (it.resultCode == RESULT_OK) {
        val uri = it.data?.data ?: return@rememberLauncherForActivityResult
        // Handle result logic
        processContactPickerResults(uri)
    }
}

val dataFields = arrayListOf(Email.CONTENT_ITEM_TYPE, Phone.CONTENT_ITEM_TYPE)
val intent = Intent(ACTION_PICK_CONTACTS).apply {
    putStringArrayListExtra(EXTRA_PICK_CONTACTS_REQUESTED_DATA_FIELDS, dataFields)
    putExtra(EXTRA_ALLOW_MULTIPLE, true)
    putExtra(EXTRA_PICK_CONTACTS_SELECTION_LIMIT, 5)
}
contactPicker.launch(intent)
```

### Easier pointer capture compatibility with touchpads Previously, touchpads reported events in a very different way from mice when an app had captured the pointer, reporting the locations of fingers on the pad rather than the relative movements that a mouse would report. This made it quite difficult to support touchpads properly in first-person games. Now, by default, the system recognizes pointer movement and scrolling gestures when the touchpad is captured, and reports them just like mouse events. You can still request the old, detailed finger location data by explicitly requesting capture in the new "absolute" mode.

```kotlin
// To request the new default relative mode (mouse-like events)
// This is the same as requesting with View.POINTER_CAPTURE_MODE_RELATIVE
view.requestPointerCapture()

// To request the legacy absolute mode (raw touch coordinates)
view.requestPointerCapture(View.POINTER_CAPTURE_MODE_ABSOLUTE)
```

### Interactive Chooser resting bounds By calling getInitialRestingBounds on Android's ChooserSession, your app can identify the target position the Chooser occupies after animations and data loading are complete, enabling better UI adjustments. ## Connectivity & Cross-Device ### Cross-device app handoff A new Handoff API allows you to specify application state to be resumed on another device, such as an Android tablet. 
When opted in, the system synchronizes state via CompanionDeviceManager and displays a handoff suggestion in the launcher of the user's nearby devices. This feature is designed to offer seamless task continuity, enabling users to pick up exactly where they left off in their workflow across their Android ecosystem. Critically, Handoff supports both native app-to-app transitions and app-to-web fallback, providing maximum flexibility and ensuring a complete experience even if the native app is not installed on the receiving device.

### Advanced ranging APIs

We are adding support for two new ranging technologies:

1. **UWB DL-TDoA**, which enables apps to use UWB for indoor navigation. This API surface is compliant with the FiRa (Fine Ranging) Consortium 4.0 DL-TDoA specification and enables privacy-preserving indoor navigation, avoiding tracking of the device by the anchor.
2. **Proximity Detection**, which enables apps to use the new ranging specification being adopted by the Wi-Fi Alliance (WFA). This technology provides improved reliability and accuracy compared to the existing Wi-Fi Aware based ranging specification.

### Data plan enhancements

To optimize media quality, your app can now retrieve carrier-allocated maximum data rates for streaming applications using getStreamingAppMaxDownlinkKbps and getStreamingAppMaxUplinkKbps.

## Core Functionality, Privacy & Performance

### Local Network Access

Android 17 introduces the ACCESS_LOCAL_NETWORK runtime permission to protect users from unauthorized local network access. Because this falls under the existing NEARBY_DEVICES permission group, users who have already granted other NEARBY_DEVICES permissions will not be prompted again. By declaring and requesting this permission, your app can discover and connect to devices on the local area network (LAN), such as smart home devices or casting receivers. This prevents malicious apps from exploiting unrestricted local network access for covert user tracking and fingerprinting.
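As a sketch of what the runtime flow could look like, the snippet below requests the permission before starting LAN discovery. Note that the permission constant string, and the helper functions `discoverLanDevices` and `showLocalNetworkRationale`, are assumptions for illustration; check the final Android 17 SDK for the actual constant.

```kotlin
// Sketch only: the permission string below is inferred from the permission
// name ACCESS_LOCAL_NETWORK and may differ in the final SDK.
private val localNetworkPermission = "android.permission.ACCESS_LOCAL_NETWORK"

private val requestLocalNetwork =
    registerForActivityResult(ActivityResultContracts.RequestPermission()) { granted ->
        if (granted) {
            discoverLanDevices() // e.g., start NSD/mDNS discovery on the LAN
        } else {
            showLocalNetworkRationale() // explain degraded casting/smart home features
        }
    }

fun ensureLocalNetworkAccess(context: Context) {
    val alreadyGranted = ContextCompat.checkSelfPermission(
        context, localNetworkPermission
    ) == PackageManager.PERMISSION_GRANTED

    if (alreadyGranted) {
        discoverLanDevices()
    } else {
        requestLocalNetwork.launch(localNetworkPermission)
    }
}
```

As with any runtime permission, the permission would also need to be declared with a `uses-permission` element in the app manifest.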
Apps targeting Android 17 or higher will have two paths to maintain communication with LAN devices: adopt system-mediated, privacy-preserving device pickers to skip the permission prompt, or explicitly request this new permission at runtime.

### Time zone offset change broadcast

Android now provides a reliable broadcast intent, ACTION_TIMEZONE_OFFSET_CHANGED, triggered when the system's time zone offset changes, such as during Daylight Saving Time transitions. This complements the existing broadcast intents ACTION_TIME_CHANGED and ACTION_TIMEZONE_CHANGED, which are triggered when the Unix timestamp changes and when the time zone ID changes, respectively.

### NPU Management and Prioritization

Apps targeting Android 17 that need to directly access the NPU must declare FEATURE_NEURAL_PROCESSING_UNIT in their manifest to avoid being blocked from accessing the NPU. This includes apps that use the LiteRT NPU delegate, vendor-specific SDKs, or the deprecated NNAPI.

**ICU 78 and Unicode 17 support**

Core internationalization libraries have been updated to ICU 78, expanding support for new scripts, characters, and emoji blocks, and enabling direct formatting of time objects.

### SMS OTP protection

Android is expanding its SMS OTP protection by automatically delaying access to SMS messages that contain an OTP. Previously, the protection focused primarily on the SMS Retriever format: delivery of messages containing an SMS Retriever hash is delayed for three hours for most apps, while certain apps, such as the default SMS app and the app corresponding to the hash, are exempt from this delay. This update extends the protection to all SMS messages containing an OTP. For most apps, SMS messages containing an OTP will only be accessible after a delay of three hours, to help prevent OTP hijacking. The SMS_RECEIVED_ACTION broadcast will be withheld and SMS provider database queries will be filtered.
The SMS message will be available to these apps after the delay.

**Delayed access to WebOTP format SMS messages**

If an app has permission to read SMS messages but is not the intended recipient of the OTP (as determined by domain verification), the WebOTP format SMS message will only be accessible after three hours have elapsed. This change is designed to improve user security by ensuring that only apps associated with the domain mentioned in the message can programmatically read the verification code. It applies to all apps regardless of their target API level.

**Delayed access to standard SMS messages with OTP**

For SMS messages containing an OTP that use neither the WebOTP nor the SMS Retriever format, the OTP SMS will only be accessible after three hours for most apps. This change only applies to apps that target Android 17 (API level 37) or higher. Certain apps, such as the default SMS app, the assistant app, and connected device companion apps, will be exempt from this delay. All apps that rely on reading SMS messages for OTP extraction should transition to the SMS Retriever or SMS User Consent APIs to ensure continued functionality.

## The Android 17 schedule

We're moving quickly from this Beta to our Platform Stability milestone, targeted for March. At that milestone, we'll deliver the final SDK/NDK APIs. From then on, your app can target SDK 37 and publish to Google Play, helping you complete your testing and collect user feedback in the months before the general availability of Android 17.

### A year of releases

We plan for Android 17 to continue to receive updates in a series of quarterly releases. The upcoming release in Q2 is the only one in which we introduce planned app-breaking behavior changes. We plan a minor SDK release in Q4 with additional APIs and features.

## Get started with Android 17

You can enroll any supported Pixel device to get this and future Android Beta updates over-the-air.
If you don't have a Pixel device, you can use the 64-bit system images with the Android Emulator in Android Studio. If you are currently in the Android Beta program, you will be offered an over-the-air update to Beta 2. If you are on the Android 26Q1 Beta and would like to move to the final stable release of 26Q1 and exit Beta, ignore the over-the-air update to 26Q2 Beta 2 and wait for the 26Q1 release.

We're looking for your feedback, so please report issues and submit feature requests on the feedback page. The earlier we get your feedback, the more of it we can include in our work on the final release.

For the best development experience with Android 17, we recommend that you use the latest preview of Android Studio (Panda). Once you're set up, here are some of the things you should do:

* Compile against the new SDK, test in CI environments, and report any issues in our tracker on the feedback page.
* Test your current app for compatibility: learn whether your app is affected by changes in Android 17, then install your app onto a device or emulator running Android 17 and test it extensively.

We'll update the preview/beta system images and SDK regularly throughout the Android 17 release cycle. Once you've installed a beta build, you'll automatically get future updates over-the-air for all later previews and Betas. For complete information, visit the Android 17 developer site.

### Join the conversation

As we move toward Platform Stability and the general availability of Android 17 later this year, your feedback remains our most valuable asset. Whether you're an early adopter on the Canary channel or an app developer testing on Beta 2, consider joining our communities and filing feedback. We're listening.