Tag: Android

  • Android Developers Blog: Announcing Jetpack Navigation 3



    Posted by Don Turner – Developer Relations Engineer

    Navigating between screens in your app should be simple, shouldn’t it? However, building a robust, scalable, and delightful navigation experience can be a challenge. For years, the Jetpack Navigation library has been a key tool for developers, but as the Android UI landscape has evolved, particularly with the rise of Jetpack Compose, we recognized the need for a new approach.

    Today, we’re excited to introduce Jetpack Navigation 3, a new navigation library built from the ground up specifically for Compose. For brevity, we’ll just call it Nav3 from now on. This library embraces the declarative programming model and Compose state as fundamental building blocks.

    Why a new navigation library?

    The original Jetpack Navigation library (sometimes referred to as Nav2 as it’s on major version 2) was initially announced back in 2018, before AndroidX and before Compose. While it served its original goals well, we heard from you that it had several limitations when working with modern Compose patterns.

    One key limitation was that the back stack state could only be observed indirectly. This meant there could be two sources of truth, potentially leading to an inconsistent application state. Also, Nav2’s NavHost was designed to display only a single destination – the topmost one on the back stack – filling the available space. This made it difficult to implement adaptive layouts that display multiple panes of content simultaneously, such as a list-detail layout on large screens.


    Figure 1. Changing from single pane to multi-pane layouts can create navigational challenges

    Founding principles

    Nav3 is built upon principles designed to provide greater flexibility and developer control:

      • You own the back stack: You, the developer, not the library, own and control the back stack. It’s a simple list backed by Compose state. Specifically, Nav3 expects your back stack to be SnapshotStateList<T> where T can be any type you choose. You can navigate by adding or removing items (Ts), and state changes are observed and reflected by Nav3’s UI. The short sketch after Figure 2 shows what this looks like in code.
      • Get out of your way: We heard that you don’t like a navigation library to be a black box with inaccessible internal components and state. Nav3 is designed to be open and extensible, providing you with building blocks and helpful defaults. If you want custom navigation behavior you can drop down to lower layers and create your own components and customizations.
      • Pick your building blocks: Instead of embedding all behavior within the library, Nav3 offers smaller components that you can combine to create more complex functionality. We’ve also provided a “recipes book” that shows how to combine components to solve common navigation challenges.


    Figure 2. The Nav3 display observes changes to the developer-owned back stack.
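
    To make this concrete, here is a minimal sketch of what owning the back stack looks like, using only the Compose state APIs mentioned above. The Home and Product route types are the ones defined in the basic code example later in this post; the helper function names are illustrative.

        import androidx.compose.runtime.mutableStateListOf

        // The developer-owned back stack: a plain SnapshotStateList of routes.
        // Inside a composable you would wrap this in remember { }, as in the
        // sample later in this post.
        val backStack = mutableStateListOf<Any>(Home)

        fun openProduct(id: String) { backStack.add(Product(id)) }    // push a route
        fun goBack() { backStack.removeLastOrNull() }                 // pop the top route
        fun resetToHome() { backStack.clear(); backStack.add(Home) }  // replace the whole stack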

    Key features

      • Adaptive layouts: A flexible layout API (named Scenes) allows you to render multiple destinations in the same layout (for example, a list-detail layout on large screen devices). This makes it easy to switch between single and multi-pane layouts.
      • Modularity: The API design allows navigation code to be split across multiple modules. This improves build times and allows clear separation of responsibilities between feature modules.


        Figure 3. Custom animations and predictive back are easy to implement, and easy to override for individual destinations.

        Basic code example

        To give you an idea of how Nav3 works, here’s a short code sample.

        // Define the routes in your app and any arguments.
        data object Home
        data class Product(val id: String)
        
        // Create a back stack, specifying the route the app should start with.
        val backStack = remember { mutableStateListOf<Any>(Home) }
        
        // A NavDisplay displays your back stack. Whenever the back stack changes, the display updates.
        NavDisplay(
            backStack = backStack,
        
            // Specify what should happen when the user goes back
            onBack = { backStack.removeLastOrNull() },
        
            // An entry provider converts a route into a NavEntry which contains the content for that route.
            entryProvider = { route ->
                when (route) {
                    is Home -> NavEntry(route) {
                        Column {
                            Text("Welcome to Nav3")
                            Button(onClick = {
                                // To navigate to a new route, just add that route to the back stack
                                backStack.add(Product("123"))
                            }) {
                                Text("Click to navigate")
                            }
                        }
                    }
                    is Product -> NavEntry(route) {
                        Text("Product ${route.id} ")
                    }
                    else -> NavEntry(Unit) { Text("Unknown route: $route") }
                }
            }
        )
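
        Because the entry provider is just a function from a route to a NavEntry, it also supports the modularity principle described earlier: each feature module can expose its own partial provider, and the app module chains them. The sketch below illustrates the idea in plain Kotlin; it deliberately avoids any specific Nav3 helper API, and the function names are illustrative.

        // Each feature module exposes a partial provider that returns null
        // for routes it doesn't own.
        typealias EntryProviderFn = (Any) -> NavEntry<Any>?

        fun homeEntryProvider(): EntryProviderFn = { route ->
            if (route is Home) NavEntry(route) { Text("Welcome to Nav3") } else null
        }

        fun productEntryProvider(): EntryProviderFn = { route ->
            if (route is Product) NavEntry(route) { Text("Product ${route.id}") } else null
        }

        // The app module chains the feature providers and falls back to an
        // error entry for unknown routes.
        fun combinedEntryProvider(vararg providers: EntryProviderFn): (Any) -> NavEntry<Any> = { route ->
            providers.firstNotNullOfOrNull { it(route) }
                ?: NavEntry(route) { Text("Unknown route: $route") }
        }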
        

        Get started and provide feedback

        To get started, check out the developer documentation, plus the recipes repository which provides examples for:

          • common navigation UI, such as a navigation rail or bar
          • conditional navigation, such as a login flow
          • custom layouts using Scenes

        We plan to provide code recipes, documentation, and blogs for more complex use cases in the future.

        Nav3 is currently in alpha, which means that the API is liable to change based on feedback. If you have any issues, or would like to provide feedback, please file an issue.

        Nav3 offers a flexible and powerful foundation for building modern navigation in your Compose applications. We’re really excited to see what you build with it.

        Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.




    Source link

  • What’s new in Android development tools



    Posted by Mayank Jain – Product Manager, Android Studio

    Android Studio continues to advance Android development by empowering developers to build better app experiences, faster. Our focus has been on improving AI-driven functionality with Gemini, streamlining UI creation and testing, and helping you future-proof apps for the evolving Android ecosystem. These innovations accelerate development cycles, improve app quality, and help you stay ahead in the fast-paced world of mobile development.

    You can check out the What’s new in Android Developer Tools session at Google I/O 2025 to see some of the new features in action or, better yet, try them out yourself by downloading Android Studio Narwhal Feature Drop (2025.1.2) in the preview release channel. Here’s a look at our latest developments:

    Get the latest Gemini 2.5 Pro model in Android Studio

    The power of artificial intelligence through Gemini is now deeply integrated into Android Studio, helping you at all stages of Android app development. Now with access to Gemini 2.5 Pro, we’re continuing to look for new ways to use AI to supercharge Android development — and help you build better app experiences, faster.

    Journeys for Android Studio

    We’re also introducing agentic AI with Gemini in Android Studio. Testing your app is now much easier when you create journeys – just describe the actions and assertions in natural language for the user journeys you want to test, and Gemini performs the tests for you. Creating journeys lets you test your app’s critical user journeys across various devices without writing extensive code. You can then run these tests on local physical or virtual Android devices to validate that the test worked as intended by reviewing detailed results directly within the IDE. Although the feature is experimental, the goal is to increase the speed at which you can ship high-quality code, while significantly reducing the amount of time you spend manually testing, validating, or reproducing issues.


    Journeys for Android Studio uses Gemini to test your app.

    https://www.youtube.com/watch?v=mP1tlIKK0R4

    Suggested fixes for crashes with Gemini

    The App Quality Insights panel has a great new feature. Crash insights now analyzes your app’s source code referenced from the crash and not only offers a comprehensive analysis and explanation of the crash, but in some cases even offers a source fix! With just a few clicks, you are able to review the changes, accept the code suggestions, and push the changes to your source control. Now you can determine the root cause of a crash and fix it much faster!


    Crash analysis with Gemini

    AI features in Studio Labs (stable releases only)

    We’ve heard feedback that developers want to access AI features in stable channels as soon as possible. You can now discover and try out the latest AI experimental features through the Studio Labs menu in the Settings menu, starting with the Narwhal stable release. You can get a first look at AI experiments, share your feedback, and help us bring them into the IDE you use every day. Go to the Studio Labs tab in Settings and enable the features you would like to start using. These AI features are automatically enabled in canary releases, so no action is required there.


    AI features in Studio Labs

      • Compose preview generation with Gemini

        Gemini can automatically generate Jetpack Compose preview code, saving you time and effort. You can access this feature by right-clicking within a composable and navigating to Gemini > Generate Compose Preview or Generate Compose Preview for this file, or by clicking the link in an empty preview panel. The generated preview code is presented in a diff view that enables you to quickly accept, edit, or reject the suggestions, providing a faster way to visualize your composables. (For the general shape of a Compose preview, see the sketch after this list.)


        Compose Preview generation with Gemini

      • Transform UI with Gemini

        You can now transform UI code within the Compose Preview environment using natural language, directly in the preview. To use it, right-click in the Compose Preview and select “Transform UI With Gemini”. Then enter your natural language requests, such as “Center align these buttons,” to guide Gemini in adjusting your layout or styling, or select specific UI elements in the preview for better context. Gemini will then edit your Compose UI code in place, which you can review and approve, speeding up the UI development workflow.


        Transform UI with Gemini

      • Image attachment in Gemini

        You can now attach image files and provide additional information along with your prompt. For example, you can attach UI mock-ups or screenshots to give Gemini context about your app’s layout. Consequently, Gemini can generate Compose code based on a provided image, or explain the composables and data flow of a UI screenshot.


        Image attachment and preview generation via Gemini in Android Studio

      • @File context in Gemini

        You can now attach your project files as context in chat interactions with Gemini in Android Studio. This lets you quickly reference files in your prompts for Gemini. In the Gemini chat input, type @ to bring up a file completion menu and select files to attach. You can also click the Context drop-down to see which files were automatically attached by Gemini. This gives you more control over the context sent to Gemini.


        @File context in Gemini
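
    As a point of reference for the preview generation feature above, here is the general shape of a Compose preview. This is an illustrative, hand-written sketch; the composable and its parameter are hypothetical stand-ins, not actual Gemini output.

        import androidx.compose.material3.Text
        import androidx.compose.runtime.Composable
        import androidx.compose.ui.tooling.preview.Preview

        // A hypothetical composable under development.
        @Composable
        fun GreetingCard(name: String) {
            Text(text = "Hello, $name!")
        }

        // A preview function supplies sample arguments so the composable renders
        // in Android Studio's preview panel without running the app.
        @Preview(showBackground = true)
        @Composable
        fun GreetingCardPreview() {
            GreetingCard(name = "Android")
        }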

    Rules in Prompt Library

    Rules in Gemini let you define preferred coding styles or output formats within the Prompt Library. You can also mention your preferred tech stack and languages. When you set these preferences once, they are automatically applied to all subsequent prompts sent to Gemini. Rules help the AI understand project standards and preferences for more accurate and tailored code assistance. For example, you can create a rule such as “Always give me concise responses in Kotlin.”


    Prompt Library Improvements

    Gemini in Android Studio for businesses

    Gemini in Android Studio for businesses is now available. It provides all the benefits of Gemini in Android Studio, plus enterprise-grade privacy and security features backed by Google Cloud — giving your team the confidence they need to deploy AI at scale while keeping their data protected.

    Developers and admins can unlock these features and benefits by subscribing to Gemini Code Assist Standard or Enterprise editions. Discover the full list of Gemini in Android Studio for business features available for your organization.

    Improved tools for creating great user experiences

    Elevate your Compose UI development with the latest Android Studio enhancements.

    Compose preview improvements

    Compose preview interaction is now more efficient with the latest navigation improvements. Click on the preview name to jump to the preview definition or click the individual component to jump to the function where it’s defined. Hover states provide immediate visual feedback as you mouse over a preview frame. Improved keyboard arrow navigation eases movement through multiple previews, enabling faster UI iteration and refinement. Additionally, the Compose preview picker is now also available in the stable release.


    Compose preview navigation improvements


    Compose preview picker

    Resizable Previews

    While in Compose Preview’s focus mode in Android Studio, you can now resize the preview window by dragging its edges. This gives you instant visual feedback on how your UI adapts to different screen sizes, ensuring responsiveness and visual consistency. This rapid iteration helps create UIs that look great on any Android device.


    Resizable Preview

    Embedded Android XR Emulator

    The Android XR Emulator now launches by default in the embedded state. You can now deploy your application, navigate the 3D space and use the Layout Inspector directly inside Android Studio, streamlining your development flow.


    Embedded XR Emulator

    Improved tools for future-proofing and testing your Android apps

    We’ve enhanced some of your favorite features so that you can test more confidently, future-proof your apps, and ensure app compatibility across a wide range of devices and Android versions.

    Streamlined testing with Backup and Restore support

    Android Studio offers built-in Backup and Restore support by letting you trigger app backups on connected devices directly from the Running Devices window. You can also configure your Run/Debug settings to automatically restore from a previous backup when launching your app. This simplifies the process of validating your app’s Backup and Restore implementation and speeds up development by reducing manual setup for testing.


    Streamlined testing with Backup and Restore support

    Android’s transition to 16 KB Page Size

    The underlying architecture of Android is evolving, and a key step forward is the transition to 16 KB page sizes. This fundamental change requires all Android apps with native code or dependencies to be recompiled for compatibility. To help you navigate this transition smoothly, Android Studio now offers proactive warnings when building APKs or Android App Bundles that are incompatible with 16 KB devices. Using the APK Analyzer, you can also find out which libraries are incompatible with 16 KB devices. To test your apps in this new environment, a dedicated 16 KB emulator target is also available in Android Studio alongside existing 4 KB images.


    Android’s transition to 16 KB page size

    Backup and Sync your Studio settings

    When you sign in with your Google account or a JetBrains account in Android Studio, you can now sync your customizations and preferences across all installs and restore preferences automatically on remote Android Studio instances. Simply select “Enable Backup and Sync” while you’re logging in to Android Studio, or from the Settings > Backup and Sync page, and follow the prompts.


    Backup and Sync your Studio settings

    Increasing developer productivity with Android’s Kotlin Multiplatform improvements

    Kotlin Multiplatform (KMP) enables teams to reach new audiences across Android and iOS with less development time. Usage has been growing in the developer community, with apps such as Google Docs now using it in production. We’ve released new Android Studio KMP project templates, updated Jetpack libraries and new codelabs (Get Started with KMP and Migrate Existing Apps to Room KMP) to help developers who are looking to get started with KMP.
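
    If you are new to KMP, the core mechanism the templates are built around is Kotlin’s expect/actual declarations. Here is a minimal, hypothetical sketch of sharing logic between platforms; each snippet lives in the source set named in its comment.

        // commonMain: shared business logic with a platform-specific hook.
        expect fun platformName(): String

        fun greeting(): String = "Hello from ${platformName()}!"

        // androidMain: the Android implementation of the hook.
        actual fun platformName(): String = "Android"

        // iosMain: the iOS implementation of the hook.
        actual fun platformName(): String = "iOS"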

    Experimental features and what’s coming soon to Android Studio

    Android Studio Cloud (experimental)

    Android Studio Cloud is now available as an experimental public preview, accessible through Firebase Studio. This service streams a Linux virtual machine running Android Studio directly to your web browser, enabling Android application development from anywhere with an internet connection. Get started quickly with dedicated workspaces featuring pre-downloaded Android SDK components. Explore sample projects or seamlessly access your existing Android app projects from GitHub without a local installation. Please note that Android Studio Cloud is currently in an experimental phase. Features and capabilities are subject to significant change, and users may encounter known limitations.

    Android Studio Cloud

    Version Upgrade Agent (coming soon)

    The Version Upgrade Agent, as part of Gemini in Android Studio, is designed to save you time and effort by automating your dependency upgrades. It intelligently analyzes your Android project, parses the release notes for included libraries, and proposes updates directly from your libs.versions.toml file or the refactoring menu (right-click > Refactor > Update dependencies). The agent automatically updates dependencies to the latest compatible version, builds the project, fixes any errors, and repeats until all errors are fixed. Once the dependencies are upgraded, the agent generates a report showing the changes it made, as well as a high level summary highlighting the changes included in the updated libraries.


    Version Upgrade Agent

    https://www.youtube.com/watch?v=ubyPjBesW-8

    Agent Mode (coming soon)

    Agent Mode is a new autonomous AI feature using Gemini, designed to handle complex, multi-stage development tasks that go beyond typical AI assistant capabilities, invoking multiple tools to accomplish tasks on your behalf.

    You can describe a complex goal, like integrating a new API, and the agent will formulate an execution plan that spans across files in your project — adding necessary dependencies, editing files, and iteratively fixing bugs. This feature aims to empower all developers to tackle intricate challenges and accelerate the building and prototyping process. You can access it via the Gemini chat window in Android Studio.


    Agent Mode

    Play Policy Insights beta in Android Studio (coming soon)

    Android Studio now includes richer insights and guidance on Google Play policies that might impact your app. This information, available as lint checks, helps you build safer apps from the start, preventing issues that could disrupt your launch process and cost more time and resources to fix later on. These lint checks will present an overview of the policy, dos and don’ts, and links to Play policy pages where you can find more information about the policy.


    Play Policy Insights beta in Android Studio

    IntelliJ Platform Update (2025.1)

    Here are some important IDE improvements in the IntelliJ IDEA 2025.1 platform release:

      • Kotlin K2 mode: Android Studio now supports Kotlin K2 mode in Android-specific features requiring language support such as Live Edit, Compose Preview and many more

      • Improved dependency resolution in Kotlin build scripts: Makes your Kotlin build scripts for Android projects more stable and predictable

      • Hints about code alterations by Kotlin compiler plugins: Gives you clearer insights into how plugins used in Android development modify your Kotlin code

      • Automatic download of library sources for Gradle projects: Simplifies debugging and understanding your Android project dependencies by providing immediate access to their source code

      • Support for Gradle Daemon toolchains: Helps prevent potential JVM errors during your Android project builds and ensures smoother synchronization

      • Automatic plugin updates: Keeps your Android development tools within IntelliJ IDEA up-to-date effortlessly

    To summarize

    Android Studio Narwhal Feature Drop (2025.1.2) is now available in the Android Studio canary channel with some amazing features to help your Android development.

    AI-powered development tools for Android

      • Journeys for Android Studio: Validate app flows easily using tests and assertions in natural language
      • Suggested fixes for crashes with Gemini: Determine the root cause of a crash and fix it much faster with Gemini
      • AI features in Studio Labs
          • Compose preview generation with Gemini: Generate Compose previews with Gemini’s code suggestions
          • Transform UI with Gemini: Transform UI in Compose Preview with natural language, speeding development
          • Image attachment in Gemini: Attach images to Gemini for context-aware code generation
          • @File context in Gemini: Reference project files in Gemini chats for quick AI prompts
      • Rules in Prompt Library: Define preferred coding styles or output formats within the Prompt Library

    Improved tools for creating great user experiences

      • Compose preview improvements: Navigate the Compose Preview using clickable names and components
      • Resizable preview: Instantly see how your Compose UI adapts to different screen sizes
      • Embedded XR Emulator: XR Emulator now launches by default in the embedded state

    Improved tools for future-proofing and testing your Android apps

      • Streamlined testing with Backup and Restore support: Effortless app testing, trigger backups, auto-restore for faster validation
      • Android’s transition to 16 KB Page Size: Prepare for Android’s 16 KB page size with Studio’s early warnings and testing
      • Backup and Sync your Studio settings: Sync Android Studio settings across devices and restore automatically for convenience
      • Increasing developer productivity with Android’s Kotlin Multiplatform improvements: Simplified cross-platform Android and iOS development with new tools

    Experimental features and what’s coming soon to Android Studio

      • Android Studio Cloud (experimental): Develop Android apps from any browser with just an internet connection
      • Version Upgrade Agent (coming soon): Automated dependency updates save time and effort, ensuring projects stay current
      • Agent Mode (coming soon): Empowering developers to tackle multistage complex tasks that go beyond typical AI assistant capabilities
      • Play Policy Insights beta in Android Studio (coming soon): Insights and guidance on Google Play policies that might impact your app

    How to get started

    Ready to try the exciting new features in Android Studio?

    You can download the canary version of Android Studio Narwhal Feature Drop (2025.1.2) today to incorporate these new features into your workflow or try the latest AI features using Studio Labs in the stable version of Android Studio Meerkat. You can also install them side by side by following these instructions.

    As always, your feedback is important to us – check known issues, report bugs, suggest improvements, and be part of our vibrant community on LinkedIn, Medium, YouTube, or X. Let’s build the future of Android apps together!

    Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.





    Source link

  • Android Developers Blog: New in-car app experiences



    Posted by Ben Sagmoe – Developer Relations Engineer

    The in-car experience continues to evolve rapidly, and Google remains committed to pushing the boundaries of what’s possible. At Google I/O 2025, we’re excited to unveil the latest advancements for drivers, car manufacturers, and developers, furthering our goal of a safe, seamless, and helpful connected driving experience.

    Today’s car cabins are increasingly digital, offering developers exciting new opportunities with larger displays and more powerful computing. Android Auto is now supported in nearly all new cars sold, with almost 250 million compatible vehicles on the road.

    We’re also seeing significant growth in cars powered by Android Automotive OS with Google built-in. Over 50 models are currently available, with more launching this year. This growth is fueled by a thriving app ecosystem, including over 300 apps already available on the Play Store. These include apps optimized for a safe and seamless experience while driving as well as entertainment apps for while you’re parked and waiting in your car—many of which are adaptive mobile apps that have been seamlessly brought to cars through the Car Ready Mobile Apps Program.

    A vibrant developer community is essential to delivering these innovative in-car experiences utilizing the different screens within the car cabin. This past year, we’ve focused on key areas to help empower developers to build more differentiated experiences in cars across both platforms, as we embark on the Gemini era in cars!

    Gemini for Cars

    Exciting news for in-car experiences: Gemini, Google’s advanced AI, is coming to vehicles! This unlocks a new era of safe and helpful interactions on the go.

    Gemini enables natural voice conversations and seamless multitasking, empowering drivers to get more done simply by speaking naturally. Imagine effortlessly finding charging stations or navigating to a location pulled directly from an email, all with just your voice.

    You can learn how to leverage Gemini’s potential to create engaging in-car experiences in your app.

    Navigation apps can integrate with Gemini using three core intent formats, allowing you to start navigation, display relevant search results, and execute custom actions, such as enabling users to report incidents like traffic congestion using their voice.

    Gemini for cars will be rolling out in the coming months. Get ready to build the next generation of in-car AI experiences!

    New developer programs and tools

    Table of app categories showing availability on Android Auto and cars with Google built-in, including media, navigation, point of interest, internet of things, weather, video, browsers, games, and communication apps such as messaging and VoIP

    Last year, we introduced car app quality tiers to inspire developers to create high quality in-car experiences. By developing your app in compliance with the Car ready tier, you can bring video, gaming, or browser apps to run while parked in cars with Google built-in with almost no additional effort. Learn more about Car Ready Mobile Apps.

    Your app can further shine in cars within the Car optimized and Car differentiated tiers to unlock experiences while the car is in motion, and also when transitioning between parked and driving modes, while utilizing the different screens within the modern car cabin. Check the car app quality guidelines for details.

    To start, we’ve made some exciting improvements to the Car App Library across both Android Auto and cars with Google built-in:

      • The Weather app category has graduated from beta: any developer can now publish weather apps to production tracks on both Android Auto and cars with Google Built-in. Before you publish your app, check that it meets the quality guidelines for weather apps.
      • Two new templates, the SectionedItemTemplate and MediaPlaybackTemplate, are now available in the Car App Library 1.8 alpha release for use on Android Auto. These templates are a great fit for building templated media apps, allowing for increased customization in layout and browsing structure.

        Example of SectionedItemTemplate (left) and MediaPlaybackTemplate (right)

    On Android Auto, many new app categories and capabilities are now in beta:

      • We are adding support for building media apps with the Car App Library, enabling media app developers to build the richer and more complete experiences that users are used to on their phones. During beta, developers can build and publish media apps built using the Car App Library to internal testing and closed testing tracks. You can also express interest in being an early access partner to publish to production while the category is in beta.

      • The communications category is in beta. We’ve simplified calling integration for calling apps by utilizing the CallsManager Jetpack API. Together with the templates provided by the Car App Library, this enables communications apps to build features like full message history, upcoming meetings list, rich in-call views, and more. During beta, developers can build and publish communications apps to internal testing and closed testing tracks. You can also express interest in being an early access partner to publish to production while the category is in beta.

      • Games are now supported in Android Auto, while parked, on phones running Android 15 and above. You can already find some popular titles like Angry Birds 2, Farm Heroes Saga, Candy Crush Soda Saga and Beach Buggy Racing 2. The Games category is in Beta and developers can publish games to internal testing and closed testing tracks. You can also express interest in being an early access partner to publish to production while the category is in beta.

    Finally, we have further simplified the building, testing, and distribution experience for developers building apps for Android Automotive OS cars with Google built-in:

      • Distribution through Google Play is more flexible than ever. It’s now possible for apps in the parked categories to distribute in the same APK or App Bundle to cars with Google built-in as to phones, including through the mobile release track. Learn more on how to Distribute to cars.

      • Android Automotive OS on Pixel Tablet is now generally available, giving you a physical device option for testing Android Automotive OS apps without buying or renting a car. Additionally, the most recent system images include support for acting as an Android Auto receiver, meaning you can use the same device to test both your app’s experience on Android Auto and Android Automotive OS. Apply for access to these images.

    The road ahead

    You can look forward to more updates later this year, including:

      • Video apps will be supported on Android Auto, starting with phones running Android 16 on select compatible cars. If your app is already adaptive, enabling your app experience while parked only requires minimal steps to distribute to cars.

      • For Android Automotive OS cars running Android 14+ with Google built-in, we are working with car manufacturers to add additional app compatibility, to enable thousands of adaptive mobile apps in the next phase of the Car Ready Mobile Apps Program.

      • Updated design documentation that visualizes car app quality guidelines and integration paths to simplify designing your app for cars.

      • Google Play Services for cars with Google built-in are expanding to bring them on par with mobile, including:
        • Passkeys and Credential Manager APIs for a more seamless user sign-in experience.
        • Quick Share, which will enable easy cross-device sharing from phone to car.

      • Pre-launch reports for Android Automotive OS are coming soon to the Play Console, helping you ensure app quality before distributing your app to cars.

    Be sure to keep up to date through goo.gle/cars-whats-new on these features and more as we continuously invest in the future of Android in the car. Stay tuned for more resources to help you build innovative and engaging experiences for drivers and passengers.

    Ready to publish your car app? Check our guidance for distributing to cars.

    Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.



    Source link

  • Peacock built adaptively on Android to deliver great experiences across screens



    Posted by Sa-ryong Kang and Miguel Montemayor – Developer Relations Engineers

    Peacock is NBCUniversal’s streaming service app available in the US, offering culture-defining entertainment including live sports, exclusive original content, TV shows, and blockbuster movies. The app continues to evolve, becoming more than just a platform to watch content, but a hub of entertainment.

    Today’s users are consuming entertainment on an increasingly wider array of device sizes and types, and in particular are moving towards mobile devices. Peacock has adopted Jetpack Compose to help with its journey in adapting to more screens and meeting users where they are.

    https://www.youtube.com/watch?v=ooRcQFMYzmA

    Disclaimer: Peacock is available in the US only. This video will only be viewable to US viewers.

    Adapting to more flexible form factors

    The Peacock development team is focused on bringing the best experience to users, no matter what device they’re using or when they want to consume content. With users increasingly watching on mobile devices and on large screens like foldables, the Peacock app needs to adapt to a range of screen sizes. As more devices are introduced, the team needed to explore new solutions that make the most of each unique display permutation.

    The goal was to have the Peacock app adapt to these new displays while continually offering high-quality entertainment without interruptions, like the stream reloading or visual errors. While thinking ahead, they also wanted to prepare and build a solution that was ready for Android XR, as the entertainment landscape is shifting towards more immersive experiences.

    quote card featuring a headshot of Diego Valente, Head of Mobile, Peacock & Global Streaming, reads 'Thinking adaptively isn't just about supporting tablets or large screens - it's about future-proofing your app. Investing in adaptability helps you meet users' expectations of having seamless experiences across all their devices and sets you up for what's next.'

    Building a future-proof experience with Jetpack Compose

    In order to build a scalable solution that would help the Peacock app continue to evolve, the app was migrated to Jetpack Compose, Android’s toolkit for building scalable UI. One of the essential tools they used was the WindowSizeClass API, which helps developers create and test UI layouts for different size ranges. This API then allows the app to seamlessly switch between pre-set layouts as it reaches established viewport breakpoints for different window sizes.
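
    As an illustration of the approach (not Peacock’s actual code), a minimal sketch of switching layouts with the WindowSizeClass API might look like the following; the composable names are hypothetical.

        import android.app.Activity
        import androidx.compose.material3.Text
        import androidx.compose.material3.windowsizeclass.ExperimentalMaterial3WindowSizeClassApi
        import androidx.compose.material3.windowsizeclass.WindowWidthSizeClass
        import androidx.compose.material3.windowsizeclass.calculateWindowSizeClass
        import androidx.compose.runtime.Composable

        @Composable fun SinglePaneBrowse() { Text("Single-pane layout") }
        @Composable fun TwoPaneBrowse() { Text("List-detail layout") }

        @OptIn(ExperimentalMaterial3WindowSizeClassApi::class)
        @Composable
        fun BrowseRoot(activity: Activity) {
            // Recomputed on resize, fold/unfold, or rotation, so the layout
            // switches at the established breakpoints without reloading state.
            val sizeClass = calculateWindowSizeClass(activity)
            when (sizeClass.widthSizeClass) {
                WindowWidthSizeClass.Compact -> SinglePaneBrowse() // phones
                else -> TwoPaneBrowse() // unfolded foldables, tablets
            }
        }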

    The API was used in conjunction with Kotlin Coroutines and Flows to keep the UI state responsive as the window size changed. To test their work and fine-tune edge cases, Peacock used the Android Studio emulator to simulate a wide range of Android-based devices.

    Jetpack Compose allowed the team to build adaptively, so now the Peacock app responds to a wide variety of screens while offering a seamless experience to Android users. “The app feels more native, more fluid, and more intuitive across all form factors,” said Diego Valente, Head of Mobile, Peacock and Global Streaming. “That means users can start watching on a smaller screen and continue instantly on a larger one when they unfold the device—no reloads, no friction. It just works.”

    Preparing for immersive entertainment experiences

    In building adaptive apps on Android, John Jelley, Senior Vice President, Product & UX, Peacock and Global Streaming, says Peacock has also laid the groundwork to quickly adapt to the Android XR platform: “Android XR builds on the same large screen principles, our investment here naturally extends to those emerging experiences with less developmental work.”

    The team is excited about the prospect of features unlocked by Android XR, like Multiview for sports and TV, which enables users to watch multiple games or camera angles at once. By tailoring spatial windows to the user’s environment, the app could offer new ways for users to interact with contextual metadata like sports stats or actor information—all without ever interrupting their experience.

    Build adaptive apps

    Learn how to unlock your app’s full potential on phones, tablets, foldables, and beyond.

    Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.



    Source link

  • Android Developers Blog: Updates to the Android XR SDK: Introducing Developer Preview 2



    Posted by Matthew McCullough – VP of Product Management, Android Developer

    Since launching the Android XR SDK Developer Preview alongside Samsung, Qualcomm, and Unity last year, we’ve been blown away by all of the excitement we’ve been hearing from the broader Android community. Whether it’s through coding live-streams or local Google Developer Group talks, it’s been an outstanding experience participating in the community to build the future of XR together, and we’re just getting started.

    Today we’re excited to share an update to the Android XR SDK: Developer Preview 2, packed with new features and improvements to help you develop helpful and delightful immersive experiences with familiar Android APIs, tools and open standards created for XR.

    At Google I/O, we have two technical sessions related to Android XR. The first, Building differentiated apps for Android XR with 3D content, covers many features present in Jetpack SceneCore and ARCore for Jetpack XR. The second, The future is now, with Compose and AI on Android XR, covers creating XR-differentiated UI and our vision of the intersection of XR with cutting-edge AI capabilities.

    Android XR sessions at Google I/O 2025

    Building differentiated apps for Android XR with 3D content and The future is now, with Compose and AI on Android XR

    What’s new in Developer Preview 2

    Since the release of Developer Preview 1, we’ve been focused on making the APIs easier to use and adding new immersive Android XR features. Your feedback has helped us shape the development of the tools, SDKs, and the platform itself.

    With the Jetpack XR SDK, you can now play back 180° and 360° videos, which can be stereoscopic by encoding with the MV-HEVC specification or by encoding view-frames adjacently. The MV-HEVC standard is optimized and designed for stereoscopic video, allowing your app to efficiently play back immersive videos at great quality. Apps built with Jetpack Compose for XR can use the SpatialExternalSurface composable to render media, including stereoscopic videos.

    Using Jetpack Compose for XR, you can now also define layouts that adapt to different XR display configurations. For example, use a SubspaceModifier to specify the size of a Subspace as a percentage of the device’s recommended viewing size, so a panel effortlessly fills the space it’s positioned in.

    Material Design for XR now supports more component overrides for TopAppBar, AlertDialog, and ListDetailPaneScaffold, helping your large-screen enabled apps that use Material Design effortlessly adapt to the new world of XR.


    An app adapts to XR using Material Design for XR with the new component overrides

    In ARCore for Jetpack XR, you can now track hands after requesting the appropriate permissions. Hands are a collection of 26 posed hand joints that can be used to detect hand gestures and bring a whole new level of interaction to your Android XR apps:


    Hands bring a natural input method to your Android XR experience.

    For more guidance on developing apps for Android XR, check out our Android XR Fundamentals codelab, the updates to our Hello Android XR sample project, and a new version of JetStream with Android XR support.

    The Android XR Emulator has also received stability updates and support for AMD GPUs, and is now fully integrated within the Android Studio UI.


    The Android XR Emulator is now integrated in Android Studio

    Developers using Unity have already successfully created and ported existing games and apps to Android XR. Today, you can upgrade to the Pre-Release version 2 of the Unity OpenXR: Android XR package! This update adds many performance improvements such as support for Dynamic Refresh Rate, which optimizes your app’s performance and power consumption. Shaders made with Shader Graph now support SpaceWarp, making it easier to use SpaceWarp to reduce compute load on the device. Hand meshes are now exposed with occlusion, which enables realistic hand visualization.

    Check out Unity’s improved Mixed Reality template for Android XR, which now includes support for occlusion and persistent anchors.

    We recently launched Android XR Samples for Unity, which demonstrate capabilities on the Android XR platform such as hand tracking, plane tracking, face tracking, and passthrough.


    Google’s open-source Unity samples demonstrate platform features and show how they’re implemented

    Firebase AI Logic for Unity is now in public preview! This makes it easy for you to integrate gen AI into your apps, enabling the creation of AI-powered experiences with Gemini and Android XR. Firebase AI Logic fully supports Gemini’s capabilities, including multimodal input and output, and bi-directional streaming for immersive conversational interfaces. Built with production readiness in mind, Firebase AI Logic is integrated with core Firebase services like App Check, Remote Config, and Cloud Storage for enhanced security, configurability, and data management. Learn more about this on the Firebase blog or go straight to the Gemini API using Vertex AI in Firebase SDK documentation to get started.

    Continuing to build the future together

    Our commitment to open standards continues with the glTF Interactivity specification, developed in collaboration with the Khronos Group, which will be supported in glTF models rendered by Jetpack XR later this year. Models using the glTF Interactivity specification are self-contained interactive assets that can have many pre-programmed behaviors, like rotating objects on a button press or changing the color of a material over time.

    Android XR will be available first on Samsung’s Project Moohan, launching later this year. Soon after, our partners at XREAL will release the next Android XR device. Codenamed Project Aura, it’s a portable and tethered device that gives users access to their favorite Android apps, including those that have been built for XR. It will launch as a developer edition, specifically for you to begin creating and experimenting. The best news? With the familiar tools you use to build Android apps today, you can build for these devices too.


    XREAL’s Project Aura

    The Google Play Store is also getting ready for Android XR. It will list supported 2D Android apps on the Android XR Play Store when it launches later this year. If you are working on an Android XR differentiated app, you can get it ready for the big launch and be one of the first differentiated apps on the Android XR Play Store.

    And we know many of you are excited for the future of Android XR on glasses. We are shaping the developer experience now and will share more details on how you can participate later this year.

    To get started creating and developing for Android XR, check out developer.android.com/develop/xr where you will find all of the tools, libraries, and resources you need to work with the Android XR SDK. In particular, try out our samples and codelabs.

    We welcome your feedback, suggestions, and ideas as you’re helping shape Android XR. Your passion, expertise, and bold ideas are vital as we continue to develop Android XR together. We look forward to seeing your XR-differentiated apps when Android XR devices launch later this year!

    Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.



    Source link

  • 16 things to know for Android developers at Google I/O 2025



    Posted by Matthew McCullough – VP of Product Management, Android Developer

    Today at Google I/O, we announced the many ways we’re helping you build excellent, adaptive experiences, and helping you stay more productive through updates to our tooling that put AI at your fingertips and throughout your development lifecycle. Here’s a recap of 16 of our favorite announcements for Android developers; you can also see what was announced last week in The Android Show: I/O Edition. And stay tuned over the next two days as we dive into all of the topics in more detail!

    Building AI into your Apps

    1: Building intelligent apps with Generative AI

    Generative AI can make your app’s experience more intelligent, personalized, and agentic. This year, we announced new ML Kit GenAI APIs using Gemini Nano for common on-device tasks like summarization, proofreading, rewrite, and image description. We also provided capabilities for developers to harness more powerful models such as Gemini Pro, Gemini Flash, and Imagen via Firebase AI Logic for more complex use cases like image generation and processing extensive data across modalities, including bringing AI to life in Android XR, and a new AI sample app, Androidify, that showcases how these APIs can transform your selfies into unique Android robots! To start building intelligent experiences by leveraging these new capabilities, explore the developer documentation, sample apps, and watch the overview session to choose the right solution for your app.

    New experiences across devices

    2: One app, every screen: think adaptive and unlock 500 million screens

    Mobile Android apps form the foundation across phones, foldables, tablets, and ChromeOS, and this year we’re helping you bring them to cars and XR and expanding usage with desktop windowing and connected displays. This expansion means tapping into an ecosystem of 500 million devices – a significant opportunity to engage more users when you think adaptive, building a single mobile app that works across form factors. Resources, including the Compose Layouts library and Jetpack Navigation updates, help make building these dynamic experiences easier than before. You can see how Peacock, NBCUniversal’s streaming service (available in the US), is building adaptively to meet users where they are.

    https://www.youtube.com/watch?v=ooRcQFMYzmA

    Disclaimer: Peacock is available in the US only. This video will only be viewable to US viewers.

    3: Material 3 Expressive: design for intuition and emotion

    The new Material 3 Expressive update provides tools to enhance your product’s appeal by harnessing emotional UX, making it more engaging, intuitive, and desirable for users. Check out the I/O talk to learn more about expressive design and how it inspires emotion, clearly guides users toward their goals, and offers a flexible and personalized experience.

    moving image of Material 3 Expressive demo

    4: Smarter widgets, engaging live updates

    Measure the return on investment of your widgets (available soon) and easily create personalized widget previews with Glance 1.2. Promoted Live Updates notify users of important ongoing notifications and come with a new Progress Style standardized template.


    5: Enhanced Camera & Media: low light boost and battery savings

    This year’s I/O introduces several camera and media enhancements. These include a software low light boost for improved photography in dim lighting and native PCM offload, allowing the DSP to handle more audio playback processing, thus conserving user battery. Explore our detailed sessions on built-in effects within CameraX and Media3 for further information.

    6: Build next-gen app experiences for Cars

    We’re launching expanded opportunities for developers to build in-car experiences, including new Gemini integrations, support for more app categories like Games and Video, and enhanced capabilities for media and communication apps via the Car App Library and new APIs. Alongside updated car app quality tiers and simplified distribution, we’ll soon be providing improved testing tools like Android Automotive OS on Pixel Tablet and Firebase Test Lab access to help you bring your innovative apps to cars. Learn more from our technical session and blog post on new in-car app experiences.

    7: Build for Android XR’s expanding ecosystem with Developer Preview 2 of the SDK

    We announced Android XR in December, and today at Google I/O we shared a bunch of updates coming to the platform including Developer Preview 2 of the Android XR SDK plus an expanding ecosystem of devices: in addition to the first Android XR headset, Samsung’s Project Moohan, you’ll also see more devices including a new portable Android XR device from our partners at XREAL. There’s lots more to cover for Android XR: Watch the Compose and AI on Android XR session, and the Building differentiated apps for Android XR with 3D content session, and learn more about building for Android XR.


    XREAL’s Project Aura

    8: Express yourself on Wear OS: meet Material Expressive on Wear OS 6

    This year we are launching Wear OS 6: the most powerful and expressive version of Wear OS. Wear OS 6 features Material 3 Expressive, a new UI design with personalized visuals and motion for user creativity, coming to Wear, Android, and Google apps later this year. Developers gain access to Material 3 Expressive on Wear OS by utilizing new Jetpack libraries: Wear Compose Material 3, which provides components for apps and Wear ProtoLayout Material 3 which provides components and layouts for tiles. Get started with Material 3 libraries and other updates on Wear.


    Some examples of Material 3 Expressive on Wear OS experiences

    9: Engage users on Google TV with excellent TV apps

    You can leverage more resources within Compose’s core and Material libraries with the stable release of Compose for TV, empowering you to build excellent adaptive UIs across your apps. We’re also thrilled to share exciting platform updates and developer tools designed to boost app engagement, including bringing Gemini capabilities to TV in the fall, opening enrollment for our Video Discovery API, and more.

    Developer productivity

    10: Build beautiful apps faster with Jetpack Compose

    Compose is our big bet for UI development. The latest stable BOM release provides the features, performance, stability, and libraries that you need to build beautiful adaptive apps faster, so you can focus on what makes your app valuable to users.


    Compose Adaptive Layouts Updates in the Google Play app

    11: Kotlin Multiplatform: new Shared Template lets you build across platforms, easily

    Kotlin Multiplatform (KMP) enables teams to reach new audiences across Android and iOS with less development time. We’ve released a new Android Studio KMP shared module template, updated Jetpack libraries and new codelabs (Getting started with Kotlin Multiplatform and Migrating your Room database to KMP) to help developers who are looking to get started with KMP. Shared module templates make it easier for developers to craft, maintain, and own the business logic. Read more on what’s new in Android’s Kotlin Multiplatform.

    12: Gemini in Android Studio: AI Agents to help you work

    Gemini in Android Studio is the AI-powered coding companion that makes Android developers more productive at every stage of the dev lifecycle. In March, we introduced Image to Code to bridge the gap between UX teams and software engineers by intelligently converting design mockups into working Compose UI code. And today, we previewed new agentic AI experiences, Journeys for Android Studio and Version Upgrade Agent. These innovations make it easier to build and test code. You can read more about these updates in What’s new in Android development tools.

    https://www.youtube.com/watch?v=ubyPjBesW-8

    13: Android Studio: smarter with Gemini

    In this latest release, we’re empowering devs with AI-driven tools like Gemini in Android Studio, streamlining UI creation, making testing easier, and ensuring apps are future-proofed in our ever-evolving Android ecosystem. These innovations accelerate development cycles, improve app quality, and help you stay ahead in a dynamic mobile landscape. To take advantage, upgrade to the latest Studio release. You can read more about these innovations in What’s new in Android development tools.

    moving image of Gemini in Android Studio Agentic Experiences including Journeys and Version Upgrade

    And the latest on driving business growth

    14: What’s new in Google Play

    Get ready for exciting updates from Play designed to boost your discovery, engagement and revenue! Learn how we’re continuing to become a content-rich destination with enhanced personalization and fresh ways to showcase your apps and content. Plus, explore powerful new subscription features designed to streamline checkout and reduce churn. Read I/O 2025: What’s new in Google Play to learn more.

    a moving image of three mobile devices displaying how content is displayed on the Play Store

    15: Start migrating to Play Games Services v2 today

    Play Games Services (PGS) connects over 2 billion gamer profiles on Play, powering cross-device gameplay, personalized gaming content and rewards for your players throughout the gaming journey. We are moving PGS v1 features to v2 with more advanced features and an easier integration path. Learn more about the migration timeline and new features.

    16: And of course, Android 16

    We unpacked some of the latest features coming to users in Android 16, which we’ve been previewing with you for the last few months. If you haven’t already, make sure to test your apps with the latest Beta of Android 16. Android 16 includes Live Updates, professional media and camera features, desktop windowing and connected displays, major accessibility enhancements and much more.

    Check out all of the Android and Play content at Google I/O

    This was just a preview of some of the cool updates for Android developers at Google I/O, but stay tuned to Google I/O over the next two days as we dive into a range of Android developer topics in more detail. You can check out the What’s New in Android and the full Android track of sessions, and whether you’re joining in person or around the world, we can’t wait to engage with you!

    Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.



    Source link

  • Outfit Your Team with Android Tablets for Just $75 Each



    Disclosure: Our goal is to feature products and services that we think you’ll find interesting and useful. If you purchase them, Entrepreneur may get a small share of the revenue from the sale from our commerce partners.

Equipping a team with modern mobile tech can be a balancing act—functionality and performance matter, but so does staying within budget. That’s where this deal on the onn. 11″ Tablet Pro really shines. A Walmart store brand, these onn. tablets are just $74.99 (regularly $159), making them an easy choice for business leaders looking to scale their tech resources without scaling costs.

    Despite its budget-friendly price tag, this tablet is built for everyday productivity. It runs on Android 13, offering a familiar interface that syncs smoothly with cloud-based apps, email platforms, messaging tools, and more. It’s great for teams already using Android phones—onboarding is minimal, and the user experience is intuitive.

    The large 11-inch LCD is crisp and vibrant with a 2000 x 1200 resolution, making it ideal for streaming presentations, reviewing reports, or even hosting virtual meetings. Whether you’re using it for point-of-sale systems, training materials, front-desk kiosks, or remote communications, this tablet delivers a sharp, responsive experience.

    Under the hood, the 2.2GHz octa-core processor and 4GB of RAM provide reliable speed for multitasking. Combined with 128GB of internal storage (expandable via microSD), there’s plenty of room for documents, media, and business apps. Plus, dual cameras allow for both video conferencing and on-the-go image capture, which is useful for field teams, social media managers, and sales staff.

    Battery life is often a pain point with mobile devices, but this one lasts up to 16 hours, giving your team an all-day companion that won’t die mid-task. Whether it’s used in the office or on the road, charging anxiety becomes a thing of the past.

    And since this is an open-box unit, you’re getting a like-new device at nearly half the price. Each tablet is thoroughly tested and verified. Although the box may exhibit minor signs of handling, the hardware inside remains in new condition.

    Get this onn. 11″ Tablet Pro for just $74.99 (regularly $159) while it’s still available.

    StackSocial prices subject to change.




    Source link

  • Android Developers Blog: The Android Show: I/O Edition



    Posted by Matthew McCullough – Vice President, Product Management, Android Developer

We just dropped an I/O Edition of The Android Show, where we unpacked exciting new experiences coming to the Android ecosystem: a fresh and dynamic look and feel, smarts across your devices, and enhanced safety and security features. Join Sameer Samat, President of Android Ecosystem, and the Android team to learn about these exciting new developments in the episode below, and read about all of the updates for users.

    Tune into Google I/O next week – including the Developer Keynote as well as the full Android track of sessions – where we’re covering these topics in more detail and how you can get started.

    https://www.youtube.com/watch?v=l3yDd3CmA_Y

    Start building with Material 3 Expressive

The world of UX design is constantly evolving, and you deserve the tools to create truly engaging and impactful experiences. That’s why Material Design’s latest evolution, Material 3 Expressive, provides new ways to make your product more engaging, easier to use, and more desirable. Learn more and try out Material 3 Expressive: an expansion pack that harnesses emotional UX to enhance your app’s appeal. It comes with new components, a motion-physics system, type styles, colors, shapes, and more.

    Material 3 Expressive will be coming to Android 16 later this year; check out the Google I/O talk next week where we’ll dive into this in more detail.

    A fluid design built for your watch’s round display

Wear OS 6, arriving later this year, brings Material 3 Expressive design to Google’s smartwatch platform. The new design language puts the round watch display at the heart of the experience and is embraced in every component and motion of the system, from buttons to notifications. You’ll be able to try the new visual design and bring existing app experiences to a new level. Next week, tune in to the What’s New in Android session to learn more.

    Plus some goodies in Android 16…

    We also unpacked some of the latest features coming to users in Android 16, which we’ve been previewing with you for the last few months. If you haven’t already, you can try out the latest Beta of Android 16.

Android 16 adds a few new features that developers should pay attention to, including Live Updates, professional media and camera features, desktop windowing for tablets, and major accessibility enhancements:

  • Live Updates allow your app to show time-sensitive progress updates. Use the new ProgressStyle template for an improved experience around navigation, deliveries, and rideshares, as sketched below.
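
    To make that concrete, here is a minimal sketch of a Live Update notification using Android 16’s Notification.ProgressStyle. The segment and builder calls are paraphrased from the Android 16 documentation, and the channel ID, icon resource, and trip values are hypothetical, so verify the exact API shape against the current docs:

    // Hedged sketch: Notification.ProgressStyle usage per the Android 16 docs;
    // RIDE_CHANNEL_ID and R.drawable.ic_ride are hypothetical placeholders.
    val style = Notification.ProgressStyle()
      .setProgressSegments(
        listOf(
          Notification.ProgressStyle.Segment(75).setColor(Color.BLUE),  // e.g. driving leg
          Notification.ProgressStyle.Segment(25).setColor(Color.GREEN), // e.g. walking leg
        )
      )
      .setProgress(40) // Current position along the combined segments

    val notification = Notification.Builder(context, RIDE_CHANNEL_ID)
      .setSmallIcon(R.drawable.ic_ride)
      .setContentTitle("Driver is 5 min away")
      .setStyle(style)
      .setOngoing(true) // Live Updates are ongoing, time-sensitive notifications
      .build()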

    Watch the What’s New in Android session and the Live updates talk to learn more.

    Tune in next week to Google I/O

    This was just a preview of some Android-related news, so remember to tune in next week to Google I/O, where we’ll be diving into a range of Android developer topics in a lot more detail. You can check out What’s New in Android and the full Android track of sessions to start planning your time.

    We can’t wait to see you next week, whether you’re joining in person or virtually from anywhere around the world!



    Source link

  • Foundational Tools in Android | Kodeco






    Source link

  • Building delightful Android camera and media experiences



    Posted by Donovan McMurray, Mayuri Khinvasara Khabya, Mozart Louis, and Nevin Mital – Developer Relations Engineers

    Hello Android Developers!

    We are the Android Developer Relations Camera & Media team, and we’re excited to bring you something a little different today. Over the past several months, we’ve been hard at work writing sample code and building demos that showcase how to take advantage of all the great potential Android offers for building delightful user experiences.

    Some of these efforts are available for you to explore now, and some you’ll see later throughout the year, but for this blog post we thought we’d share some of the learnings we gathered while going through this exercise.

    Grab your favorite Android plush or rubber duck, and read on to see what we’ve been up to!

    Future-proof your app with Jetpack

    Nevin Mital

    One of our focuses for the past several years has been improving the developer tools available for video editing on Android. This led to the creation of the Jetpack Media3 Transformer APIs, which offer solutions for both single-asset and multi-asset video editing preview and export. Today, I’d like to focus on the Composition demo app, a sample app that showcases some of the multi-asset editing experiences that Transformer enables.

    I started by adding a custom video compositor to demonstrate how you can arrange input video sequences into different layouts for your final composition, such as a 2×2 grid or a picture-in-picture overlay. You can customize this by implementing a VideoCompositorSettings and overriding the getOverlaySettings method. This object can then be set when building your Composition with setVideoCompositorSettings.

    Here is an example for the 2×2 grid layout:

    object : VideoCompositorSettings {
      ...
    
      override fun getOverlaySettings(inputId: Int, presentationTimeUs: Long): OverlaySettings {
        return when (inputId) {
          0 -> { // First sequence is placed in the top left
            StaticOverlaySettings.Builder()
              .setScale(0.5f, 0.5f)
              .setOverlayFrameAnchor(0f, 0f) // Middle of overlay
              .setBackgroundFrameAnchor(-0.5f, 0.5f) // Top-left section of background
              .build()
          }
    
          1 -> { // Second sequence is placed in the top right
            StaticOverlaySettings.Builder()
              .setScale(0.5f, 0.5f)
              .setOverlayFrameAnchor(0f, 0f) // Middle of overlay
              .setBackgroundFrameAnchor(0.5f, 0.5f) // Top-right section of background
              .build()
          }
    
          2 -> { // Third sequence is placed in the bottom left
            StaticOverlaySettings.Builder()
              .setScale(0.5f, 0.5f)
              .setOverlayFrameAnchor(0f, 0f) // Middle of overlay
              .setBackgroundFrameAnchor(-0.5f, -0.5f) // Bottom-left section of background
              .build()
          }
    
          3 -> { // Fourth sequence is placed in the bottom right
            StaticOverlaySettings.Builder()
              .setScale(0.5f, 0.5f)
              .setOverlayFrameAnchor(0f, 0f) // Middle of overlay
              .setBackgroundFrameAnchor(0.5f, -0.5f) // Bottom-right section of background
              .build()
          }
    
          else -> {
            StaticOverlaySettings.Builder().build()
          }
        }
      }
    }
    

    Since getOverlaySettings also provides a presentation time, we can even animate the layout, such as in this picture-in-picture example:

    moving image of picture in picture on a mobile device
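
    To illustrate the idea, here is a hedged sketch (not taken from the demo app) of how returning different settings per frame animates the overlay; here the picture-in-picture pane slides from the top-right corner toward the bottom-right over the first two seconds:

    override fun getOverlaySettings(inputId: Int, presentationTimeUs: Long): OverlaySettings {
      return if (inputId == 1) { // The overlaid (picture-in-picture) sequence
        // 0f..1f over the first two seconds of presentation time
        val progress = (presentationTimeUs / 2_000_000f).coerceIn(0f, 1f)
        StaticOverlaySettings.Builder()
          .setScale(0.3f, 0.3f)
          .setBackgroundFrameAnchor(0.7f, 0.7f - 1.4f * progress) // Slide from top-right to bottom-right
          .build()
      } else { // The full-screen background sequence
        StaticOverlaySettings.Builder().build()
      }
    }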

Next, I spent some time migrating the Composition demo app to use Jetpack Compose. With complicated editing flows, it can help to take advantage of as much screen space as is available, so I decided to use the supporting pane adaptive layout. This way, the user can fine-tune their video creation on the preview screen, while export options are shown alongside it only when the display is large enough. Below, you can see how the UI dynamically adapts to the screen size on a foldable device when switching from the outer screen to the inner screen and vice versa.
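
    In skeleton form, the layout looks something like the following. This is a hedged sketch using the Material 3 adaptive libraries rather than code from the demo app, and PreviewPane and ExportPane are hypothetical composables standing in for the app’s real panes:

    // Minimal sketch, assuming the androidx.compose.material3.adaptive artifacts.
    @OptIn(ExperimentalMaterial3AdaptiveApi::class)
    @Composable
    fun EditorScreen() {
      val navigator = rememberSupportingPaneScaffoldNavigator()
      SupportingPaneScaffold(
        directive = navigator.scaffoldDirective,
        value = navigator.scaffoldValue,
        mainPane = { AnimatedPane { PreviewPane() } },      // Video editing preview
        supportingPane = { AnimatedPane { ExportPane() } }, // Export options, shown on large displays
      )
    }

    On a phone only the main pane is shown, while on a larger or unfolded display the scaffold places both panes side by side, with no branching logic needed in the app code.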


moving image of supporting pane adaptive layout

What’s great is that by using Jetpack Media3 and Jetpack Compose, these features also carry over seamlessly to other devices and form factors, such as the new Android XR platform. Right out of the box, I was able to run the demo app in Home Space with the 2D UI I already had. And with some small updates, I was even able to adapt the UI specifically for XR, with features such as multiple panels and, to take further advantage of the extra space, an Orbiter with playback controls for the editing preview.

    moving image of sequential composition preview in Android XR

    Orbiter(
      position = OrbiterEdge.Bottom,
      offset = EdgeOffset.inner(offset = MaterialTheme.spacing.standard),
      alignment = Alignment.CenterHorizontally,
      shape = SpatialRoundedCornerShape(CornerSize(28.dp))
    ) {
      Row (horizontalArrangement = Arrangement.spacedBy(MaterialTheme.spacing.mini)) {
        // Playback control for rewinding by 10 seconds
        FilledTonalIconButton({ viewModel.seekBack(10_000L) }) {
          Icon(
            painter = painterResource(id = R.drawable.rewind_10),
            contentDescription = "Rewind by 10 seconds"
          )
        }
        // Playback control for play/pause
        FilledTonalIconButton({ viewModel.togglePlay() }) {
          Icon(
            painter = painterResource(id = R.drawable.rounded_play_pause_24),
            contentDescription = 
if (viewModel.compositionPlayer.isPlaying) {
                    "Pause preview playback"
                } else {
                    "Resume preview playback"
                }
          )
        }
        // Playback control for forwarding by 10 seconds
        FilledTonalIconButton({ viewModel.seekForward(10_000L) }) {
          Icon(
            painter = painterResource(id = R.drawable.forward_10),
            contentDescription = "Forward by 10 seconds"
          )
        }
      }
    }
    

    Jetpack libraries unlock premium functionality incrementally

    Donovan McMurray

Not only do our Jetpack libraries have you covered by working consistently across existing and future devices, but they also open the doors to advanced functionality and custom behaviors to support all types of app experiences. In a nutshell, our Jetpack libraries aim to make the common case very accessible and easy, with hooks for adding more custom features later.

We’ve worked with many app teams that have switched to a Jetpack library, built the basics, added their critical custom features, and actually saved developer time compared with their estimates. Let’s take a look at CameraX and how this incremental development can supercharge your process.

    // Set up CameraX app with preview and image capture.
    // Note: setting the resolution selector is optional, and if not set,
    // then a default 4:3 ratio will be used.
    val aspectRatioStrategy = AspectRatioStrategy(
      AspectRatio.RATIO_16_9, AspectRatioStrategy.FALLBACK_RULE_NONE)
val resolutionSelector = ResolutionSelector.Builder()
      .setAspectRatioStrategy(aspectRatioStrategy)
      .build()
    
    private val previewUseCase = Preview.Builder()
      .setResolutionSelector(resolutionSelector)
      .build()
    private val imageCaptureUseCase = ImageCapture.Builder()
      .setResolutionSelector(resolutionSelector)
      .setCaptureMode(ImageCapture.CAPTURE_MODE_MINIMIZE_LATENCY)
      .build()
    
    val useCaseGroupBuilder = UseCaseGroup.Builder()
      .addUseCase(previewUseCase)
      .addUseCase(imageCaptureUseCase)
    
    cameraProvider.unbindAll()
    
    camera = cameraProvider.bindToLifecycle(
      this,  // lifecycleOwner
      CameraSelector.DEFAULT_BACK_CAMERA,
      useCaseGroupBuilder.build(),
    )
    

After setting up the basic structure for CameraX, you can set up a simple UI with a camera preview and a shutter button. You can use the CameraX Viewfinder composable, which displays a Preview stream from a CameraX SurfaceRequest.

    // Create preview
    Box(
      Modifier
        .background(Color.Black)
        .fillMaxSize(),
      contentAlignment = Alignment.Center,
    ) {
  surfaceRequest?.let { request ->
    CameraXViewfinder(
      modifier = Modifier.fillMaxSize(),
      implementationMode = ImplementationMode.EXTERNAL,
      surfaceRequest = request, // Use the lambda parameter to avoid smart-cast issues with mutable state
    )
  }
  Button(
    onClick = onPhotoCapture,
    shape = CircleShape,
    colors = ButtonDefaults.buttonColors(containerColor = Color.White),
    modifier = Modifier
      .height(75.dp)
      .width(75.dp),
  ) {} // Empty content: the white circle itself acts as the shutter button
    }
    
    fun onPhotoCapture() {
      // Not shown: defining the ImageCapture.OutputFileOptions for
      // your saved images
      imageCaptureUseCase.takePicture(
        outputOptions,
        ContextCompat.getMainExecutor(context),
        object : ImageCapture.OnImageSavedCallback {
          override fun onError(exc: ImageCaptureException) {
            val msg = "Photo capture failed."
            Toast.makeText(context, msg, Toast.LENGTH_SHORT).show()
          }
    
          override fun onImageSaved(output: ImageCapture.OutputFileResults) {
            val savedUri = output.savedUri
            if (savedUri != null) {
              // Do something with the savedUri if needed
            } else {
              val msg = "Photo capture failed."
              Toast.makeText(context, msg, Toast.LENGTH_SHORT).show()
            }
          }
        },
      )
    }
    

You’re already on track for a solid camera experience, but what if you wanted to add some extra features for your users? Adding filters and effects is easy with CameraX’s Media3 effect integration, which is one of the new features introduced in CameraX 1.4.0.

    Here’s how simple it is to add a black and white filter from Media3’s built-in effects.

    val media3Effect = Media3Effect(
      application,
      PREVIEW or IMAGE_CAPTURE,
      ContextCompat.getMainExecutor(application),
      {},
    )
    media3Effect.setEffects(listOf(RgbFilter.createGrayscaleFilter()))
    useCaseGroupBuilder.addEffect(media3Effect)
    

    The Media3Effect object takes a Context, a bitwise representation of the use case constants for targeted UseCases, an Executor, and an error listener. Then you set the list of effects you want to apply. Finally, you add the effect to the useCaseGroupBuilder we defined earlier.

side-by-side image of the camera app with and without the grayscale filter

    (Left) Our camera app with no filter applied. 
     (Right) Our camera app after the createGrayscaleFilter was added.

    There are many other built-in effects you can add, too! See the Media3 Effect documentation for more options, like brightness, color lookup tables (LUTs), contrast, blur, and many other effects.
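
    For instance, combining a brightness tweak with the grayscale filter is just a matter of passing more effects to the same call. This is a hedged sketch; Brightness is one of the built-in effect classes in the androidx.media3.effect package:

    media3Effect.setEffects(
      listOf(
        Brightness(0.1f), // Slightly brighten each frame
        RgbFilter.createGrayscaleFilter(),
      )
    )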

    To take your effects to yet another level, it’s also possible to define your own effects by implementing the GlEffect interface, which acts as a factory of GlShaderPrograms. You can implement a BaseGlShaderProgram’s drawFrame() method to implement a custom effect of your own. A minimal implementation should tell your graphics library to use its shader program, bind the shader program’s vertex attributes and uniforms, and issue a drawing command.
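
    In skeleton form, a custom effect might look like the sketch below. The GlEffect and BaseGlShaderProgram shapes follow the contract described above, but the signatures are paraphrased from the Media3 docs, and VERTEX_SHADER_GLSL / FRAGMENT_SHADER_GLSL stand in for your own GLSL sources, so treat this as an outline rather than a drop-in implementation:

    class MyCustomEffect : GlEffect {
      override fun toGlShaderProgram(context: Context, useHdr: Boolean): GlShaderProgram =
        MyShaderProgram()
    }

    private class MyShaderProgram : BaseGlShaderProgram(
      /* useHighPrecisionColorComponents = */ false,
      /* texturePoolCapacity = */ 1,
    ) {
      private val glProgram = GlProgram(VERTEX_SHADER_GLSL, FRAGMENT_SHADER_GLSL)

      override fun configure(inputWidth: Int, inputHeight: Int): Size =
        Size(inputWidth, inputHeight) // Output at the same resolution as the input

      override fun drawFrame(inputTexId: Int, presentationTimeUs: Long) {
        glProgram.use()                                     // 1. Use the shader program
        glProgram.setSamplerTexIdUniform("uTexSampler", inputTexId, /* texUnitIndex = */ 0)
        glProgram.bindAttributesAndUniforms()               // 2. Bind vertex attributes and uniforms
        GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4) // 3. Issue the drawing command
      }
    }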

    Jetpack libraries meet you where you are and your app’s needs. Whether that be a simple, fast-to-implement, and reliable implementation, or custom functionality that helps the critical user journeys in your app stand out from the rest, Jetpack has you covered!

    Jetpack offers a foundation for innovative AI Features

    Mayuri Khinvasara Khabya

    Just as Donovan demonstrated with CameraX for capture, Jetpack Media3 provides a reliable, customizable, and feature-rich solution for playback with ExoPlayer. The AI Samples app builds on this foundation to delight users with helpful and enriching AI-driven additions.

    In today’s rapidly evolving digital landscape, users expect more from their media applications. Simply playing videos is no longer enough. Developers are constantly seeking ways to enhance user experiences and provide deeper engagement. Leveraging the power of Artificial Intelligence (AI), particularly when built upon robust media frameworks like Media3, offers exciting opportunities. Let’s take a look at some of the ways we can transform the way users interact with video content:

      • Empowering Video Understanding: The core idea is to use AI, specifically multimodal models like the Gemini Flash and Pro models, to analyze video content and extract meaningful information. This goes beyond simply playing a video; it’s about understanding what’s in the video and making that information readily accessible to the user.
      • Actionable Insights: The goal is to transform raw video into summaries, insights, and interactive experiences. This allows users to quickly grasp the content of a video and find specific information they need or learn something new!
      • Accessibility and Engagement: AI helps make videos more accessible by providing features like summaries, translations, and descriptions. It also aims to increase user engagement through interactive features.

    A Glimpse into AI-Powered Video Journeys

The following example demonstrates potential video journeys enhanced by artificial intelligence. This sample integrates several components, such as ExoPlayer and Transformer from Media3; the Firebase SDK (leveraging Vertex AI on Android); and Jetpack Compose, ViewModel, and StateFlow. The code will be available soon on GitHub.

    moving images of examples of AI-powered video journeys

    (Left) Video summarization  
 (Right) Thumbnail timestamps and HDR frame extraction

    There are two experiences in particular that I’d like to highlight:

  • HDR Thumbnails: AI can help identify key moments in the video that could make for good thumbnails. With those timestamps, you can use the new ExperimentalFrameExtractor API from Media3 to extract HDR thumbnails from videos, providing richer visual previews (see the sketch after this list).
      • Text-to-Speech: AI can be used to convert textual information derived from the video into spoken audio, enhancing accessibility. On Android you can also choose to play audio in different languages and dialects thus enhancing personalization for a wider audience.
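
    For the HDR thumbnail case, a hedged sketch of the extraction step might look like this. ExperimentalFrameExtractor is, as the name says, experimental, so the names here are paraphrased from the Media3 docs and may shift between versions; aiSuggestedTimestampsMs is a hypothetical list of key moments returned by the model:

    val extractor = ExperimentalFrameExtractor(
      context,
      ExperimentalFrameExtractor.Configuration.Builder()
        .setExtractHdrFrames(true) // Keep HDR frames instead of tone-mapping to SDR
        .build(),
    )
    extractor.setMediaItem(MediaItem.fromUri(videoUri), /* effects = */ listOf())

    // One future per AI-suggested key moment; each Frame carries the extracted bitmap.
    val thumbnailFutures = aiSuggestedTimestampsMs.map { positionMs ->
      extractor.getFrame(positionMs)
    }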

    Using the right AI solution

Currently, only cloud models support video inputs, so we went ahead with a cloud-based solution. Integrating Firebase in our sample empowers the app to:

      • Generate real-time, concise video summaries automatically.
      • Produce comprehensive content metadata, including chapter markers and relevant hashtags.
      • Facilitate seamless multilingual content translation.

    So how do you actually interact with a video and work with Gemini to process it? First, send your video as an input parameter to your prompt:

val promptData =
    "Summarize this video in the form of top 3-4 takeaways only. Write in the form of bullet points. Don't assume if you don't know"

val generativeModel = Firebase.vertexAI.generativeModel("gemini-2.0-flash")
_outputText.value = OutputTextState.Loading

viewModelScope.launch(Dispatchers.IO) {
    try {
        val requestContent = content {
            fileData(videoSource.toString(), "video/mp4")
            text(promptData) // Use the prompt defined above
        }
        val outputStringBuilder = StringBuilder()

        // Stream the response, updating the UI state as each chunk arrives
        generativeModel.generateContentStream(requestContent).collect { response ->
            outputStringBuilder.append(response.text)
            _outputText.value = OutputTextState.Success(outputStringBuilder.toString())
        }
    } catch (error: Exception) {
        _outputText.value = error.localizedMessage?.let { OutputTextState.Error(it) }
    }
}
    

    Notice there are two key components here:

      • FileData: This component integrates a video into the query.
  • Prompt: This tells the model what specific assistance you need in relation to the provided video.

Of course, you can fine-tune your prompt to your requirements and the responses will follow accordingly.

    In conclusion, by harnessing the capabilities of Jetpack Media3 and integrating AI solutions like Gemini through Firebase, you can significantly elevate video experiences on Android. This combination enables advanced features like video summaries, enriched metadata, and seamless multilingual translations, ultimately enhancing accessibility and engagement for users. As these technologies continue to evolve, the potential for creating even more dynamic and intelligent video applications is vast.

    Go above-and-beyond with specialized APIs

    Mozart Louis

Android 16 introduces the new audio PCM offload mode, which can reduce the power consumption of audio playback in your app, leading to longer playback time and increased user engagement. Eliminating power anxiety greatly enhances the user experience.

Oboe is Android’s premier audio API, which developers can use to create high-performance, low-latency audio apps. Android 16 and the accompanying Android NDK release add a new feature called Native PCM Offload playback.

    Offload playback helps save battery life when playing audio. It works by sending a large chunk of audio to a special part of the device’s hardware (a DSP). This allows the CPU of the device to go into a low-power state while the DSP handles playing the sound. This works with uncompressed audio (like PCM) and compressed audio (like MP3 or AAC), where the DSP also takes care of decoding.

This can result in significant power savings while playing back audio and is perfect for applications that play audio in the background or while the screen is off (think audiobooks, podcasts, music, etc.).

    We created the sample app PowerPlay to demonstrate how to implement these features using the latest NDK version, C++ and Jetpack Compose.

    Here are the most important parts!

The first order of business is to ensure the device supports audio offload for the stream attributes you need. In the example below, we check whether the device supports audio offload of a stereo, float PCM stream with a sample rate of 48,000 Hz.

    val format = AudioFormat.Builder()
        .setEncoding(AudioFormat.ENCODING_PCM_FLOAT)
        .setSampleRate(48000)
        .setChannelMask(AudioFormat.CHANNEL_OUT_STEREO)
        .build()

    val attributes = AudioAttributes.Builder()
        .setContentType(AudioAttributes.CONTENT_TYPE_MUSIC)
        .setUsage(AudioAttributes.USAGE_MEDIA)
        .build()

    val isOffloadSupported =
        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.Q) {
            AudioManager.isOffloadedPlaybackSupported(format, attributes)
        } else {
            false
        }

    if (isOffloadSupported) {
        player.initializeAudio(PerformanceMode::POWER_SAVING_OFFLOADED)
    }
    

    Once we know the device supports audio offload, we can confidently set the Oboe audio streams’ performance mode to the new performance mode option, PerformanceMode::POWER_SAVING_OFFLOADED.

    // Create an audio stream
    AudioStreamBuilder builder;
    builder.setChannelCount(mChannelCount);
    builder.setDataCallback(mDataCallback);
    builder.setFormat(AudioFormat::Float);
    builder.setSampleRate(48000);

    builder.setErrorCallback(mErrorCallback);
    builder.setPresentationCallback(mPresentationCallback);
    builder.setPerformanceMode(PerformanceMode::POWER_SAVING_OFFLOADED);
    builder.setFramesPerDataCallback(128);
    builder.setSharingMode(SharingMode::Exclusive);
    builder.setSampleRateConversionQuality(SampleRateConversionQuality::Medium);
    Result result = builder.openStream(mAudioStream);
    

Now when audio is played back, it will be offloaded to the DSP, saving power during playback.

There is more to this feature that will be covered in a future blog post, fully detailing all of the newly available APIs that will help you optimize your audio playback experience!

    What’s next

    Of course, we were only able to share the tip of the iceberg with you here, so to dive deeper into the samples, check out the following links:

    Hopefully these examples have inspired you to explore what new and fascinating experiences you can build on Android. Tune in to our session at Google I/O in a couple weeks to learn even more about use-cases supported by solutions like Jetpack CameraX and Jetpack Media3!



    Source link