Category: Android News

  • Enhanced Android desktop experiences with connected displays



    Posted by Francesco Romano – Developer Relations Engineer on Android, and Fahd Imtiaz – Product Manager, Android Developer

    Today, Android is launching a few updates across the platform! This includes the start of Android 16’s rollout, with details for both developers and users, a Developer Preview for enhanced Android desktop experiences with connected displays, and updates for Android users across Google apps and more, plus the June Pixel Drop. We’re also recapping all the Google I/O updates for Android developers focused on building excellent, adaptive Android apps.

    Android has continued to evolve to enable users to be more productive on large screens.

    Today, we’re excited to share that connected displays support on compatible Android devices is now in developer preview with the Android 16 QPR1 Beta 2 release. As shown at Google I/O 2025, connected displays enable users to attach an external display to their Android device and transform a small screen device into a powerful tool with a large screen. This evolution gives users the ability to move apps beyond a single screen to unlock Android’s full productivity potential on external displays.

    The connected display update builds on our desktop windowing experience, a capability we previewed last year. Desktop windowing is set to launch later this year for users on compatible tablets running Android 16. Desktop windowing enables users to run multiple apps simultaneously and resize windows for optimal multitasking. This new windowing capability works seamlessly with split screen and other multitasking features users already love on Android and doesn’t require switching to a special mode.

    Google and Samsung have collaborated to bring a more seamless and powerful desktop windowing experience to large screen devices and phones with connected displays in Android 16 across the Android ecosystem. These advancements will enhance Samsung DeX, and also extend to other Android devices.

    For developers, connected displays and desktop windowing present new opportunities for building more engaging and more productive app experiences that seamlessly adapt across form factors. You can try out these features today on your connected display with the Android 16 QPR1 Beta 2 on select Pixel devices.

    What’s new in connected displays support?

    When a supported Android phone or foldable is connected to an external display through a DisplayPort connection, a new desktop session starts on the connected display. The phone and the external display operate independently, and apps are specific to the display on which they’re running.

    The experience on the connected display is similar to the experience on a desktop, including a task bar that shows running apps and lets users pin apps for quick access. Users are able to run multiple apps side by side simultaneously in freely resizable windows on the connected display.

    Phone connected to an external display, with a desktop session on the display while the phone maintains its own state.

    When a desktop windowing enabled device (like a tablet) is connected to an external display, the desktop session is extended across both displays, unlocking an even more expansive workspace. The two displays then function as one continuous system, allowing app windows, content, and the cursor to move freely between the displays.

    Tablet connected to an external display, extending the desktop session across both displays.

    A cornerstone of this effort is the evolution of desktop windowing, which is stable in Android 16 and is packed with improvements and new capabilities.

    Desktop windowing stable release

    We’ve made substantial improvements in the stability and performance of desktop windowing in Android 16. This means users will encounter a smoother, more reliable experience when managing app windows on connected displays. Beyond general stability improvements, we’re introducing several new features:

      • Flexible window tiling: Multitasking gets a boost with more intuitive window tiling options. Users can more easily arrange multiple app windows side by side or in various configurations, making it simpler to work across different applications simultaneously on a large screen.
      • Multiple desktops: Users can set up multiple desktop sessions to match their distinct productivity requirements and switch between the desktops using keyboard shortcuts, trackpad gestures, and Overview.
      • Enhanced app compatibility treatments: New compatibility treatments ensure that even legacy apps behave more predictably and look better on external displays by default. This reduces the burden on developers while providing a better out-of-the-box experience for users.
      • Multi-instance management: Users can manage multiple instances of supporting applications (for example, Chrome or Keep) through the app header button or taskbar context menu.
        This allows for quick switching between different instances of the same app.
      • Desktop persistence: Android can now better maintain window sizes, positions, and states across different desktops. This means users can set up their preferred workspace and have it restored across sessions, offering a more consistent and efficient workflow.

    Best practices for optimal app experiences on connected displays

    With the introduction of connected display support in Android, it’s important to ensure your apps take full advantage of the new display capabilities. To help you build apps that shine in this enhanced environment, here are some key development practices to follow:

    Build apps optimized for desktop

      • Design for any window size: With phones now connecting to external displays, your mobile app can run in a window of almost any size and aspect ratio. This means the app window can be as big as the screen of the connected display but also flex to fit a smaller window. In desktop windowing, the minimum window size is 386 x 352 dp, which is smaller than most phones. This fundamentally changes how you need to think about UI. With orientation and resizability changes in Android 16, it becomes even more critical for you to update your apps to support resizability and portrait and landscape orientations for an optimal experience with desktop windowing and connected displays. Make sure your app supports any window size by following the best practices on adaptive development.
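
    To make this concrete, here is a minimal, hedged sketch (not taken from any official sample) of branching on the window size rather than the device type, using Compose's BoxWithConstraints and the 840dp expanded-width breakpoint:

    import androidx.compose.foundation.layout.BoxWithConstraints
    import androidx.compose.foundation.layout.Row
    import androidx.compose.material3.Text
    import androidx.compose.runtime.Composable
    import androidx.compose.ui.Modifier
    import androidx.compose.ui.unit.dp

    // Branch on the *window* width, not the device type, so the same screen
    // works in a 386 x 352 dp window and on a large connected display.
    @Composable
    fun AdaptiveEditor(modifier: Modifier = Modifier) {
        BoxWithConstraints(modifier) {
            if (maxWidth >= 840.dp) {
                // Wide window (for example, a connected display): two panes side by side.
                Row {
                    Text("List pane")
                    Text("Detail pane")
                }
            } else {
                // Narrow window (down to the 386 x 352 dp minimum): a single pane.
                Text("List pane")
            }
        }
    }

    In production code, the window size class APIs covered in the adaptive development guidance are the more idiomatic way to pick these breakpoints.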

    Handle dynamic display changes

      • Don’t assume a constant Display object: The Display object associated with your app’s context can change when an app window is moved to an external display or if the display configuration changes. Your app should gracefully handle configuration change events and query display metrics dynamically rather than caching them.
      • Account for density configuration changes: External displays can have vastly different pixel densities than the primary device screen. Ensure your layouts and resources adapt correctly to these changes to maintain UI clarity and usability. Use density-independent pixels (dp) for layouts, provide density-specific resources, and ensure your UI scales appropriately.
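
    As a minimal illustration of the two points above (a sketch, not code from the Android documentation), read display properties where they are used rather than caching them, so the values stay correct after the window moves to a display with a different size or density:

    import androidx.compose.material3.Text
    import androidx.compose.runtime.Composable
    import androidx.compose.ui.platform.LocalConfiguration
    import androidx.compose.ui.platform.LocalDensity

    // Recomposes automatically when the configuration changes, for example when the
    // window moves to an external display with a different size or density.
    @Composable
    fun DisplayInfoLabel() {
        val configuration = LocalConfiguration.current
        val density = LocalDensity.current
        Text(
            "Window: ${configuration.screenWidthDp} x ${configuration.screenHeightDp} dp, " +
                "density scale: ${density.density}",
        )
    }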

    Go beyond just the screen

      • Correctly support external peripherals: When users connect to an external monitor, they often create a more desktop-like environment. This frequently involves using external keyboards, mice, trackpads, webcams, microphones, and speakers. If your app uses camera or microphone input, the app should be able to detect and utilize peripherals connected through the external display or a docking station.
      • Handle keyboard actions: Desktop users rely heavily on keyboard shortcuts for efficiency. Implement standard shortcuts (for example, Ctrl+C, Ctrl+V, Ctrl+Z) and consider app-specific shortcuts that make sense in a windowed environment. Make sure your app supports keyboard navigation (see the sketch after this list).
      • Support mouse interactions: Beyond simple clicks, ensure your app responds correctly to mouse hover events (for example, for tooltips or visual feedback), right-clicks (for contextual menus), and precise scrolling. Consider implementing custom pointers to indicate different actions.
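
    Here is a minimal, hedged sketch of intercepting a keyboard shortcut in Compose; the undo() callback is a hypothetical app-level function, not an Android API:

    import androidx.compose.foundation.text.BasicTextField
    import androidx.compose.runtime.Composable
    import androidx.compose.runtime.getValue
    import androidx.compose.runtime.mutableStateOf
    import androidx.compose.runtime.remember
    import androidx.compose.runtime.setValue
    import androidx.compose.ui.Modifier
    import androidx.compose.ui.input.key.Key
    import androidx.compose.ui.input.key.KeyEventType
    import androidx.compose.ui.input.key.isCtrlPressed
    import androidx.compose.ui.input.key.key
    import androidx.compose.ui.input.key.onPreviewKeyEvent
    import androidx.compose.ui.input.key.type

    @Composable
    fun NotesField(undo: () -> Unit) {
        var text by remember { mutableStateOf("") }
        BasicTextField(
            value = text,
            onValueChange = { text = it },
            modifier = Modifier.onPreviewKeyEvent { event ->
                // Handle Ctrl+Z from a hardware keyboard; returning true consumes the event.
                if (event.type == KeyEventType.KeyDown && event.isCtrlPressed && event.key == Key.Z) {
                    undo()
                    true
                } else {
                    false
                }
            },
        )
    }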

    Getting started

    Explore the connected displays and enhanced desktop windowing features in the latest Android Beta. Get Android 16 QPR1 Beta 2 on a supported Pixel device (Pixel 8 and Pixel 9 series) to start testing your app today. Then enable desktop experience features in the developer settings.

    Support for connected displays in the Android Emulator is coming soon, so stay tuned for updates!

    Dive into the updated documentation on multi-display support and window management to learn more about implementing these best practices.

    Feedback

    Your feedback is crucial as we continue to refine these experiences. Please share your thoughts and report any issues through our official feedback channels.

    We’re committed to making Android a versatile platform that adapts to the many ways users want to interact with their apps and devices. The improvements to connected display support are another step in that direction, and we can’t wait to see the amazing experiences you’ll build!




  • Build adaptive Android apps that shine across form factors



    Posted by Fahd Imtiaz – Product Manager, Android Developer

    https://www.youtube.com/watch?v=15oPNK1W0Tw

    If your app isn’t built to adapt, you’re missing out on the opportunity to reach a giant swath of users across 500 million devices! At Google I/O this year, we are exploring how adaptive development isn’t just a good idea, but essential to building apps that shine across the expanding Android device ecosystem. This is your guide to meeting users wherever they are, with experiences that are perfectly tailored to their needs.

    The advantage of building adaptive

    In today’s multi-device world, users expect their favorite applications to work flawlessly and intuitively, whether they’re on a smartphone, tablet, or Chromebook. This expectation for seamless experiences isn’t just about convenience; it’s an important factor for user engagement and retention.

    For example, users of entertainment apps (including Prime Video, Netflix, and Hulu) who use both phone and tablet spend almost 200% more time in-app (nearly 3x engagement) than phone-only users in the US*.

    Peacock, NBCUniversal’s streaming service, has seen a trend of users moving between mobile and large screens, and building adaptively enables a single build to work across the different form factors.

    “This allows Peacock to have more time to innovate faster and deliver more value to its customers.”

    – Diego Valente, Head of Mobile, Peacock and Global Streaming

    Adaptive Android development offers the strategic solution, enabling apps to perform effectively across an expanding array of devices and contexts through intelligent design choices that emphasize code reuse and scalability. With Android’s continuous growth into new form factors and upcoming enhancements such as desktop windowing and connected displays in Android 16, an app’s ability to seamlessly adapt to different screen sizes is becoming increasingly crucial for retaining users and staying competitive.

    Beyond direct user benefits, designing adaptively also translates to increased visibility. The Google Play Store actively helps promote developers whose apps excel on different form factors. If your application delivers a great experience on tablets or is excellent on ChromeOS, users on those devices will have an easier time discovering your app. This creates a win-win situation: better quality apps for users and a broader audience for you.

    examples of form factors across small phones, tablets, laptops, and auto

    Latest in adaptive Android development from Google I/O

    To help you more effectively build compelling adaptive experiences, we shared several key updates at I/O this year.

    Build for the expanding Android device ecosystem

    Your mobile apps can now reach users beyond phones on over 500 million active devices, including foldables, tablets, Chromebooks, and even compatible cars, with minimal changes. Android 16 introduces significant advancements in desktop windowing for a true desktop-like experience on large screens and when devices are connected to external displays. And, Android XR is opening a new dimension, allowing your existing mobile apps to be available in immersive virtual environments.

    The mindset shift to Adaptive

    With the expanding Android device ecosystem, adaptive app development is a fundamental strategy. It’s about how the same mobile app runs well across phones, foldables, tablets, Chromebooks, connected displays, XR, and cars, laying a strong foundation for future devices and differentiating for specific form factors. You don’t need to rebuild your app for each form factor; instead, make small, iterative changes as needed. Embracing this adaptive mindset today isn’t just about keeping pace; it’s about leading the charge in delivering exceptional user experiences across the entire Android ecosystem.

    examples of form factors including vr headset

    Leverage powerful tools and libraries to build adaptive apps:

      • Compose Adaptive Layouts library: This library makes adaptive development easier by allowing your app code to fit into canonical layout patterns, like list-detail and supporting pane, that automatically reflow as your app is resized, flipped, or folded. In the 1.1 release, we introduced pane expansion, allowing users to resize panes. The Socialite demo app showcased how one codebase using this library can adapt across six form factors. New adaptation strategies like “Levitate” (elevating a pane, e.g., into a dialog or bottom sheet) and “Reflow” (reorganizing panes on the same level) were also announced in 1.2 (alpha). For XR, component overrides can automatically spatialize UI elements.

      • Jetpack Navigation 3 (Alpha): This new navigation library simplifies defining user journeys across screens with less boilerplate code, especially for multi-pane layouts in Compose. It helps handle scenarios where list and detail panes might be separate destinations on smaller screens but shown together on larger ones. Check out the new Jetpack Navigation library in alpha.

      • Jetpack Compose input enhancements: Compose’s layered architecture, strong input support, and single location for layout logic simplify creating adaptive UIs. Upcoming in Compose 1.9 are right-click context menus and enhanced trackpad/mouse functionality.

      • Window Size Classes: Use window size classes for top-level layout decisions. androidx.window 1.5 introduces two new width size classes – “large” (1200dp to 1600dp) and “extra-large” (1600dp and larger) – providing more granular breakpoints for large screens. This helps in deciding when to expand navigation rails or show three panes of content (see the sketch after this list). Support for these new breakpoints was also announced in the Compose adaptive layouts library 1.2 alpha, along with design guidance.

      • Compose previews: Get quick feedback by visualizing your layouts across a wide variety of screen sizes and aspect ratios. You can also specify different devices by name to preview your UI on their respective sizes and with their inset values.

      • Testing adaptive layouts: Validating your adaptive layouts is crucial, and Android Studio offers various tools for testing – including previews for different sizes and aspect ratios, a resizable emulator to test across different screen sizes with a single AVD, screenshot tests, and instrumented behavior tests. And with Journeys with Gemini in Android Studio, you can define tests using natural language for even more robust testing across different window sizes.
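
    As a rough illustration of how the breakpoints from the Window Size Classes item above might drive layout decisions (the function and its thresholds are a sketch, not the androidx.window API):

    // Map the window width in dp onto the breakpoints described above. 840dp is the
    // existing expanded lower bound; 1200dp and 1600dp are the new "large" and
    // "extra-large" lower bounds.
    fun preferredPaneCount(widthDp: Int): Int = when {
        widthDp >= 1600 -> 3 // extra-large: room for three panes plus expanded navigation
        widthDp >= 1200 -> 3 // large: consider three panes or an expanded navigation rail
        widthDp >= 840 -> 2  // expanded: two panes, for example list-detail
        else -> 1            // compact or medium: a single pane
    }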

    Ensuring app availability across devices

    Avoid unnecessarily declaring required features (like specific cameras or GPS) in your manifest, as this can prevent your app from appearing in the Play Store on devices that lack those specific hardware components but could otherwise run your app perfectly.

    Handling different input methods

    Remember to handle various input methods like touch, keyboard, and mouse, especially with Chromebook detachables and connected displays.

    Prepare for orientation and resizability API changes in Android 16

    Beginning in Android 16, for apps targeting SDK 36, manifest and runtime restrictions on orientation, resizability, and aspect ratio will be ignored on displays that are at least 600dp in both dimensions. To meet user expectations, your apps will need layouts that work for both portrait and landscape windows, and support resizing at runtime. There’s a temporary opt-out manifest flag at both the application and activity level to delay these changes until targetSdk 37, and these changes currently do not apply to apps categorized as “Games”. Learn more about these API changes.

    Adaptive considerations for games

    Games need to be adaptive too, and Unity 6 will add enhanced support for configuration handling, including APIs for screenshots, aspect ratio, and density. Success stories like Asphalt Legends Unite show significant user retention increases on foldables after implementing adaptive features.

    examples of form factors including vr headset

    Start building adaptive today

    Now is the time to elevate your Android apps, making them intuitively responsive across form factors. With the latest tools and updates we’re introducing, you have the power to build experiences that seamlessly flow across all devices, from foldables to cars and beyond. Implementing these strategies will allow you to expand your reach and delight users across the Android ecosystem.

    Get inspired by the “Adaptive Android development makes your app shine across devices” talk, and explore all the resources you’ll need to start your journey at developer.android.com/adaptive-apps!

    Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.

    *Source: internal Google data




  • Samsung Galaxy S25 Edge battery life is as bad as you think



    Samsung Galaxy S25 Edge Thickness Shown on top of Books

    C. Scott Brown / Android Authority

    When we first picked up the Samsung Galaxy S25 Edge, we marveled at its delicately thin frame. However, our wonder quickly turned to concern once we saw the phone’s 3,900mAh battery capacity. That’s not tiny, but it’s a pretty conservative cell for a 2025 flagship smartphone, especially one as tall and wide as the Galaxy S25 Plus.

    For comparison, the S25 Plus houses a sizable 4,900mAh cell, while even the compact Galaxy S25 nudges ahead of the Edge with 4,000mAh. But it’s also worth noting that the Edge features a larger display with a higher QHD resolution and a more demanding 200MP camera — factors that could all draw more power than those in the regular S25.

    Samsung itself claims the Edge should have battery life somewhere between the Galaxy S24 and S25, but our testing has found this to be rather optimistic. Let’s take a look at the results from our automated battery longevity tests, all conducted at a consistent display brightness of 300 nits. For comparison, we also tested both models of the S24, as well as the S25 and S25 Plus.

    Galaxy S25 Edge Battery Life Benchmarks

    Robert Triggs / Android Authority

    Unfortunately, the Galaxy S25 Edge underperforms across the board. It clocks fewer minutes than the Galaxy S25 in every test and fares worse than the Galaxy S24 series in most categories — except for our Zoom call test, where it beats the Snapdragon version of the Galaxy S24. It also matches the Exynos model in the 4K recording and Zoom tests. The only consistent result across the board is in camera capture time, where all phones performed similarly. Otherwise, the performance gap is significant and well outside the margin of error. The Edge’s battery life is clearly inferior to its siblings.

    In fact, the Edge’s real-world battery life is far worse than the seemingly minor 100mAh difference with the compact Galaxy S25 would suggest. On average, I calculate a roughly 20% reduction in video recording and playback longevity compared to the regular S25, and 27% worse performance in Zoom call duration. Web browsing fared a bit better, with only about an 8% decline, but that’s still worse than the battery size would lead you to expect.

    The Edge’s beefier specs drain the 3,900mAh battery even faster than the S25.

    Software optimization may play a role, but the Edge’s larger, sharper display undoubtedly draws more power than the regular S25. Combined with the smaller battery, this is a recipe for disappointing screen-on time.

    Samsung Galaxy S25 Edge camera hero

    Ryan Haines / Android Authority

    Setting aside the comparisons for a moment, let’s focus on screen-on time itself. Based on our tests, the Galaxy S25 Edge delivers a little over four hours of constant content capture, seven to eight hours of moderate use (like web browsing and video calls), and up to 17 hours of offline 4K video playback. These figures aren’t terrible in isolation, but they fall an hour or two behind the phone’s siblings. And keep in mind that this is under ideal, out-of-the-box conditions. Add background tasks, heavy data use, or gaming, and things quickly deteriorate.

    Cutting it fine is an understatement; there’s no headroom here for aging battery health.

    Samsung has built the Edge with a battery capacity that clearly cuts it very fine for a full day of use. While the Galaxy S25 Edge might manage modest usage today, consider how it will perform after two or three years, especially at a price point of $1,100. A modest decline to 90% of its original battery capacity after two years could already spell trouble; a drop to 80% will have you reaching for a charger before the day is over.

    Additionally, we noticed the thin metal frame heating up frequently during use, which not only accelerates battery degradation but can also cause the battery to discharge inefficiently and increase self-discharge. This might explain why the phone seems to perform particularly poorly in demanding tests, like our Zoom call, and why Samsung has stuck to sluggish 25W charging again.

    Either way, it didn’t take a crystal ball to predict battery concerns with the Galaxy S25 Edge, but now we have the data to prove it. If you plan to keep your next phone for a few years, you might want to steer clear of Samsung’s ultra-thin flagship.




  • Android Design at Google I/O 2025



    Posted by Ivy Knight – Senior Design Advocate

    Here’s your guide to the essential Android Design sessions, resources, and announcements for I/O ‘25:

    Check out the latest Android updates

    The Android Show: I/O Edition

    The Android Show had a special I/O edition this year with some exciting announcements like Material Expressive!

    Learn more about the new Live Update Notification templates in the Android Notifications & Live Updates session for an in-depth look at what they are, when to use them, and why. You can also get the Live Update design template in the Android UI Kit, read more in the updated Notification guidance, and get hands-on with the Jetsnack Live Updates and Widget case study.

    Make your apps more expressive

    Get a jump on the future of Google’s UX design: Material 3 Expressive. Learn how to use new emotional design patterns to boost engagement, usability, and desire for your product in the Build Next-Level UX with Material 3 Expressive session and check out the expressive update on Material.io.

    Stay up to date with Android Accessibility Updates, highlighting accessibility features launching with Android 16: enhanced dark themes, options for those with motion sickness, a new way to increase text contrast, and more.

    Catch the Mastering text input in Compose session to learn more about how engaging robust text experiences are built with Jetpack Compose. It covers Autofill integration, dynamic text resizing, and custom input transformations. This is a great session to watch to see what’s possible when designing text inputs.

    Thinking across form factors

    These design resources and sessions can help you design across more Android form factors or update your existing experiences.

    Preview Gemini in-car, imagining seamless navigation and personalized entertainment in the New In-Car App Experiences session. Then explore the new Car UI Design Kit to bring your app to Android Car platforms and speed up your process with the latest Android form factor kit.

    The Engaging with users on Google TV with excellent TV apps session discusses new ways the Google TV experience is making it easier for users to find and engage with content, including improvements to out-of-box solutions and updates to Android TV OS.

    Want a peek at how to bring immersive content, like 3D models, to Android XR? Check out the Building differentiated apps for Android XR with 3D Content session.

    Plus, Wear OS is releasing an updated design kit on the @AndroidDesign Figma and a learning Pathway.

    Tip top apps

    We’ve also released the following new Android design guidance to help you design the best Android experiences:

    In-app Settings

    Read up on the latest suggested patterns to build out your app’s settings.

    Help and Feedback

    Along with settings, learn about adding help and feedback to your app.

    Widget Configuration

    Does your app need setup? New guidance helps you add configuration to your app’s widgets.

    Edge-to-edge design

    Allow your apps to take full advantage of the entire screen with the latest guidance on designing for edge-to-edge.

    Check out figma.com/@androiddesign for even more new and updated resources.

    Visit the I/O 2025 website, build your schedule, and engage with the community. If you are at the Shoreline, come say hello to us in the Android tent at our booths.

    We can’t wait to see what you create with these new tools and insights. Happy I/O!

    Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.




  • How Androidify leverages Gemini, Firebase and ML Kit



    Posted by Thomas Ezan – Developer Relations Engineer, Rebecca Franks – Developer Relations Engineer, and Avneet Singh – Product Manager

    We’re bringing back Androidify later this year, this time powered by Google AI, so you can customize your very own Android bot and share your creativity with the world. Today, we’re releasing a new open source demo app for Androidify as a great example of how Google is using its Gemini AI models to enhance app experiences.

    In this post, we’ll dive into how the Androidify app uses Gemini models and Imagen via the Firebase AI Logic SDK, and we’ll provide some insights learned along the way to help you incorporate Gemini and AI into your own projects. Read more about the Androidify demo app.

    App flow

    The overall app functions as follows, with various parts of it using Gemini and Firebase along the way:

    flow chart demonstrating Androidify app flow

    Gemini and image validation

    To get started with Androidify, take a photo or choose an image on your device. The app needs to make sure that the image you upload is suitable for creating an avatar.

    Gemini 2.5 Flash via Firebase helps with this by verifying that the image contains a person, that the person is in focus, and assessing image safety, including whether the image contains abusive content.

    // Define the response schema: a required "success" flag and an optional "error" message.
    val jsonSchema = Schema.obj(
        properties = mapOf("success" to Schema.boolean(), "error" to Schema.string()),
        optionalProperties = listOf("error"),
    )

    val generativeModel = Firebase.ai(backend = GenerativeBackend.googleAI())
        .generativeModel(
            modelName = "gemini-2.5-flash-preview-04-17",
            generationConfig = generationConfig {
                responseMimeType = "application/json"
                responseSchema = jsonSchema
            },
            safetySettings = listOf(
                SafetySetting(HarmCategory.HARASSMENT, HarmBlockThreshold.LOW_AND_ABOVE),
                SafetySetting(HarmCategory.HATE_SPEECH, HarmBlockThreshold.LOW_AND_ABOVE),
                SafetySetting(HarmCategory.SEXUALLY_EXPLICIT, HarmBlockThreshold.LOW_AND_ABOVE),
                SafetySetting(HarmCategory.DANGEROUS_CONTENT, HarmBlockThreshold.LOW_AND_ABOVE),
                SafetySetting(HarmCategory.CIVIC_INTEGRITY, HarmBlockThreshold.LOW_AND_ABOVE),
            ),
        )

    val response = generativeModel.generateContent(
        content {
            text("You are to analyze the provided image and determine if it is acceptable and appropriate based on specific criteria.... (more details see the full sample)")
            image(image)
        },
    )

    // response.text is nullable, so require it before parsing the JSON payload.
    val jsonResponse = Json.parseToJsonElement(response.text!!)
    val isSuccess = jsonResponse.jsonObject["success"]?.jsonPrimitive?.booleanOrNull == true
    val error = jsonResponse.jsonObject["error"]?.jsonPrimitive?.content
    

    In the snippet above, we’re leveraging structured output capabilities of the model by defining the schema of the response. We’re passing a Schema object via the responseSchema param in the generationConfig.

    We want to validate that the image has enough information to generate a nice Android avatar. So we ask the model to return a JSON object with success = true/false and an optional error message explaining why the image doesn’t have enough information.

    Structured output is a powerful feature enabling a smoother integration of LLMs into your app by controlling the format of their output, similar to an API response.
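
    If you prefer a typed result over manual JsonElement access, a small kotlinx.serialization wrapper works as well. This is a sketch, and the class name is illustrative rather than part of the Androidify codebase:

    import kotlinx.serialization.Serializable
    import kotlinx.serialization.json.Json

    // Mirrors the response schema above: a required success flag plus an optional error message.
    @Serializable
    data class ImageValidationResult(
        val success: Boolean,
        val error: String? = null,
    )

    private val json = Json { ignoreUnknownKeys = true }

    fun parseValidation(raw: String): ImageValidationResult =
        json.decodeFromString(ImageValidationResult.serializer(), raw)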

    Image captioning with Gemini Flash

    Once it’s established that the image contains sufficient information to generate an Android avatar, it is captioned using Gemini 2.5 Flash with structured output.

    val jsonSchema = Schema.obj(
        properties = mapOf(
            "success" to Schema.boolean(),
            "user_description" to Schema.string(),
        ),
        optionalProperties = listOf("user_description"),
    )
    val generativeModel = createGenerativeTextModel(jsonSchema)

    val prompt = "You are to create a VERY detailed description of the main person in the given image. This description will be translated into a prompt for a generative image model..."

    val response = generativeModel.generateContent(
        content {
            text(prompt)
            image(image)
        },
    )

    val jsonResponse = Json.parseToJsonElement(response.text!!)
    val isSuccess = jsonResponse.jsonObject["success"]?.jsonPrimitive?.booleanOrNull == true
    val userDescription = jsonResponse.jsonObject["user_description"]?.jsonPrimitive?.content
    

    The other option in the app is to start with a text prompt. You can enter details about your accessories, hairstyle, and clothing, and let Imagen be a bit more creative.

    Android generation via Imagen

    We’ll use this detailed description of your image to enrich the prompt used for image generation. We’ll add extra details around what we would like to generate and include the bot color selection as part of this too, including the skin tone selected by the user.

    val imagenPrompt = "A 3D rendered cartoonish Android mascot in a photorealistic style, the pose is relaxed and straightforward, facing directly forward [...] The bot looks as follows $userDescription [...]"
    

    We then call the Imagen model to create the bot. Using this new prompt, we create a model and call generateImages:

    // This sample passes the standard "imagen-3.0-generate-002" model; the production app
    // supplies its own fine-tuned model here instead.
    val generativeModel = Firebase.ai(backend = GenerativeBackend.googleAI()).imagenModel(
        "imagen-3.0-generate-002",
        safetySettings = ImagenSafetySettings(
            ImagenSafetyFilterLevel.BLOCK_LOW_AND_ABOVE,
            personFilterLevel = ImagenPersonFilterLevel.ALLOW_ALL,
        ),
    )
    
    val response = generativeModel.generateImages(imagenPrompt)
    
    val image = response.images.first().asBitmap()
    

    And that’s it! The Imagen model generates a bitmap that we can display on the user’s screen.

    Finetuning the Imagen model

    The Imagen 3 model was finetuned using Low-Rank Adaptation (LoRA). LoRA is a fine-tuning technique designed to reduce the computational burden of training large models. Instead of updating the entire model, LoRA adds smaller, trainable “adapters” that make targeted adjustments to the model’s behavior. We ran a fine-tuning pipeline on the generally available Imagen 3 model using Android bot assets in different color combinations, plus additional assets for enhanced cuteness and fun. We generated text captions for the training images, and the image-text pairs were used to finetune the model effectively.
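
    For reference, the standard LoRA formulation (general background, not something specific to this Imagen pipeline) freezes the pretrained weight matrix and learns a low-rank update:

    W' = W_0 + \Delta W = W_0 + BA, \qquad B \in \mathbb{R}^{d \times r},\ A \in \mathbb{R}^{r \times k},\ r \ll \min(d, k)

    Because only B and A are trained, the number of trainable parameters per adapted layer drops from d·k to r·(d + k).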

    The current sample app uses a standard Imagen model, so the results may look a bit different from the visuals in this post. However, the app using the fine-tuned model and a custom version of Firebase AI Logic SDK was demoed at Google I/O. This app will be released later this year and we are also planning on adding support for fine-tuned models to Firebase AI Logic SDK later in the year.

    moving image of Androidify app demo turning a selfie image of a bearded man wearing a black tshirt and sunglasses, with a blue back pack into a green 3D bearded droid wearing a black tshirt and sunglasses with a blue backpack

    The original image… and Androidifi-ed image

    ML Kit

    The app also uses the ML Kit Pose Detection SDK to detect a person in the camera view, which triggers the capture button and adds visual indicators.

    To do this, we add the SDK to the app, and use PoseDetection.getClient(). Then, using the poseDetector, we look at the detectedLandmarks that are in the streaming image coming from the Camera, and we set the _uiState.detectedPose to true if a nose and shoulders are visible:

    private suspend fun runPoseDetection() {
        PoseDetection.getClient(
            PoseDetectorOptions.Builder()
                .setDetectorMode(PoseDetectorOptions.STREAM_MODE)
                .build(),
        ).use { poseDetector ->
            // Since image analysis is processed by ML Kit asynchronously in its own thread pool,
            // we can run this directly from the calling coroutine scope instead of pushing this
            // work to a background dispatcher.
            cameraImageAnalysisUseCase.analyze { imageProxy ->
                imageProxy.image?.let { image ->
                    val poseDetected = poseDetector.detectPersonInFrame(image, imageProxy.imageInfo)
                    _uiState.update { it.copy(detectedPose = poseDetected) }
                }
            }
        }
    }
    
    private suspend fun PoseDetector.detectPersonInFrame(
        image: Image,
        imageInfo: ImageInfo,
    ): Boolean {
        val results = process(InputImage.fromMediaImage(image, imageInfo.rotationDegrees)).await()
        val landmarkResults = results.allPoseLandmarks
        val detectedLandmarks = mutableListOf<Int>()
        for (landmark in landmarkResults) {
            if (landmark.inFrameLikelihood > 0.7) {
                detectedLandmarks.add(landmark.landmarkType)
            }
        }
    
        return detectedLandmarks.containsAll(
            listOf(PoseLandmark.NOSE, PoseLandmark.LEFT_SHOULDER, PoseLandmark.RIGHT_SHOULDER),
        )
    }
    

    moving image showing the camera shutter button activating when an orange droid figurine is held in the camera frame

    The camera shutter button is activated when a person (or a bot!) enters the frame.

    Get started with AI on Android

    The Androidify app makes extensive use of Gemini 2.5 Flash to validate the image and to generate the detailed description used for image generation. It also leverages the specifically fine-tuned Imagen 3 model to generate images of Android bots. Gemini and Imagen models are easily integrated into the app via the Firebase AI Logic SDK. In addition, the ML Kit Pose Detection SDK controls the capture button, enabling it only when a person is present in front of the camera.

    To get started with AI on Android, go to the Gemini and Imagen documentation for Android.

    Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.




  • Building delightful UIs with Compose



    Posted by Rebecca Franks – Developer Relations Engineer

    Androidify is a new sample app we built using the latest best practices for mobile apps. Previously, we covered all the different features of the app, from Gemini integration and CameraX functionality to adaptive layouts. In this post, we dive into the Jetpack Compose usage throughout the app, building upon our base knowledge of Compose to add delightful and expressive touches along the way!

    Material 3 Expressive

    Material 3 Expressive is an expansion of the Material 3 design system. It’s a set of new features, updated components, and design tactics for creating emotionally impactful UX.

    https://www.youtube.com/watch?v=n17dnMChX14

    It’s been released as part of the alpha version of the Material 3 artifact (androidx.compose.material3:material3:1.4.0-alpha10) and contains a wide range of new components you can use within your apps to build more personalized and delightful experiences. Learn more about Material 3 Expressive’s component and theme updates for more engaging and user-friendly products.

    Material Expressive Component updates

    In addition to the new component updates, Material 3 Expressive introduces a new motion physics system that’s encompassed in the Material theme.

    In Androidify, we’ve utilized Material 3 Expressive in a few different ways across the app. For example, we’ve explicitly opted in to the new MaterialExpressiveTheme and chosen MotionScheme.expressive() (this is the default when using expressive) to add a bit of playfulness to the app:

    @Composable
    fun AndroidifyTheme(
       content: @Composable () -> Unit,
    ) {
       val colorScheme = LightColorScheme
    
    
       MaterialExpressiveTheme(
           colorScheme = colorScheme,
           typography = Typography,
           shapes = shapes,
           motionScheme = MotionScheme.expressive(),
           content = {
               SharedTransitionLayout {
                   CompositionLocalProvider(LocalSharedTransitionScope provides this) {
                       content()
                   }
               }
           },
       )
    }
    

    Some of the new componentry is used throughout the app, including the HorizontalFloatingToolbar for the Prompt type selection:

    moving example of expressive button shapes in slow motion

    The app also uses MaterialShapes in various locations, which are a preset list of shapes that allow for easy morphing between each other. For example, check out the cute cookie shape for the camera capture button:


    Camera button with a MaterialShapes.Cookie9Sided shape

    Animations

    Wherever possible, the app leverages the Material 3 Expressive MotionScheme to obtain a themed motion token, creating a consistent motion feeling throughout the app. For example, the scale animation on the camera button press is powered by defaultSpatialSpec(), a specification used for animations that move something on screen (such as x/y position, rotation, or scale animations):

    val interactionSource = remember { MutableInteractionSource() }
    val animationSpec = MaterialTheme.motionScheme.defaultSpatialSpec<Float>()
    Spacer(
       modifier
           .indication(interactionSource, ScaleIndicationNodeFactory(animationSpec))
           .clip(MaterialShapes.Cookie9Sided.toShape())
           .size(size)
           .drawWithCache {
               //.. etc
           },
    )
    

    Camera button scale interaction

    Shared element animations

    The app uses shared element transitions between different screen states. Last year, we showcased how you can create shared elements in Jetpack Compose, and we’ve extended this in the Androidify sample to create a fun example. It combines the new Material 3 Expressive MaterialShapes, and performs a transition with a morphing shape animation:

    moving example of expressive button shapes in slow motion

    To do this, we created a custom Modifier that takes in the target and resting shapes for the sharedBounds transition:

    @Composable
    fun Modifier.sharedBoundsRevealWithShapeMorph(
        sharedContentState: SharedTransitionScope.SharedContentState,
        sharedTransitionScope: SharedTransitionScope = LocalSharedTransitionScope.current,
        animatedVisibilityScope: AnimatedVisibilityScope = LocalNavAnimatedContentScope.current,
        boundsTransform: BoundsTransform = MaterialTheme.motionScheme.sharedElementTransitionSpec,
        resizeMode: SharedTransitionScope.ResizeMode = SharedTransitionScope.ResizeMode.RemeasureToBounds,
        restingShape: RoundedPolygon = RoundedPolygon.rectangle().normalized(),
        targetShape: RoundedPolygon = RoundedPolygon.circle().normalized(),
    )
    

    Then, we apply a custom OverlayClip to provide the morphing shape, by tying into the AnimatedVisibilityScope provided by the LocalNavAnimatedContentScope:

    val animatedProgress =
       animatedVisibilityScope.transition.animateFloat(targetValueByState = targetValueByState)
    
    
    val morph = remember {
       Morph(restingShape, targetShape)
    }
    val morphClip = MorphOverlayClip(morph, { animatedProgress.value })
    
    
    return this@sharedBoundsRevealWithShapeMorph
       .sharedBounds(
           sharedContentState = sharedContentState,
           animatedVisibilityScope = animatedVisibilityScope,
           boundsTransform = boundsTransform,
           resizeMode = resizeMode,
           clipInOverlayDuringTransition = morphClip,
           renderInOverlayDuringTransition = renderInOverlayDuringTransition,
       )
    

    View the full code snippet for this Modifier on GitHub.

    Autosize text

    With the latest release of Jetpack Compose 1.8, we added the ability to create text composables that automatically adjust the font size to fit the container’s available size with the new autoSize parameter:

    BasicText(
        text,
        style = MaterialTheme.typography.titleLarge,
        autoSize = TextAutoSize.StepBased(maxFontSize = 220.sp),
    )
    

    This is used front and center for the “Customize your own Android Bot” text:

    “Customize your own Android Bot” text with inline GIF

    This text composable is interesting because it needed to have the fun dancing Android bot in the middle of the text. To do this, we use InlineContent, which allows us to append a composable in the middle of the text composable itself:

    @Composable
    private fun DancingBotHeadlineText(modifier: Modifier = Modifier) {
       Box(modifier = modifier) {
           val animatedBot = "animatedBot"
           val text = buildAnnotatedString {
               append(stringResource(R.string.customize))
               // Attach "animatedBot" annotation on the placeholder
               appendInlineContent(animatedBot)
               append(stringResource(R.string.android_bot))
           }
           var placeHolderSize by remember {
               mutableStateOf(220.sp)
           }
           val inlineContent = mapOf(
               Pair(
                   animatedBot,
                   InlineTextContent(
                       Placeholder(
                           width = placeHolderSize,
                           height = placeHolderSize,
                           placeholderVerticalAlign = PlaceholderVerticalAlign.TextCenter,
                       ),
                   ) {
                       DancingBot(
                           modifier = Modifier
                               .padding(top = 32.dp)
                               .fillMaxSize(),
                       )
                   },
               ),
           )
           BasicText(
               text,
               modifier = Modifier
                   .align(Alignment.Center)
                   .padding(bottom = 64.dp, start = 16.dp, end = 16.dp),
               style = MaterialTheme.typography.titleLarge,
               autoSize = TextAutoSize.StepBased(maxFontSize = 220.sp),
               maxLines = 6,
               onTextLayout = { result ->
                   placeHolderSize = result.layoutInput.style.fontSize * 3.5f
               },
               inlineContent = inlineContent,
           )
       }
    }
    

    Composable visibility with onLayoutRectChanged

    With Compose 1.8, a new modifier, Modifier.onLayoutRectChanged, was added. This modifier is a more performant version of onGloballyPositioned, and includes features such as debouncing and throttling to make it performant inside lazy layouts.

    In Androidify, we’ve used this modifier for the color splash animation. It determines the position where the transition should start from, as we attach it to the “Let’s Go” button:

    var buttonBounds by remember {
       mutableStateOf<RelativeLayoutBounds?>(null)
    }
    var showColorSplash by remember {
       mutableStateOf(false)
    }
    Box(modifier = Modifier.fillMaxSize()) {
       PrimaryButton(
           buttonText = "Let's Go",
           modifier = Modifier
               .align(Alignment.BottomCenter)
               .onLayoutRectChanged(
                   callback = { bounds ->
                       buttonBounds = bounds
                   },
               ),
           onClick = {
               showColorSplash = true
           },
       )
    }
    

    We use these bounds as an indication of where to start the color splash animation from.

    moving image of a blue color splash transition between Androidify demo screens

    Learn more delightful details

    From fun marquee animations on the results screen, to animated gradient buttons for the AI-powered actions, to the path drawing animation for the loading screen, this app has many delightful touches for you to experience and learn from.

    animated marquee example

    animated gradient button for AI powered actions example

    animated loading screen example

    Check out the full codebase at github.com/android/androidify and learn more about the latest in Compose from using Material 3 Expressive, the new modifiers, auto-sizing text and of course a couple of delightful interactions!

    Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.




  • Building powerful AI-driven experiences with Jetpack Compose, Gemini and CameraX


    The Android bot is a beloved mascot for Android users and developers, with previous versions of the bot builder being very popular – we decided that this year we’d rebuild the bot maker from the ground up, using the latest technology backed by Gemini. Today we are releasing a new open source app, Androidify, for learning how to build powerful AI-driven experiences on Android using the latest technologies such as Jetpack Compose, Gemini through Firebase, CameraX, and Navigation 3.

    Here’s an example of the app running on the device, showcasing converting a photo to an Android bot that represents my likeness:

    moving image showing the conversion of an image of a woman in a pink dress holding an umbrella into a 3D image of a droid bot wearing a pink dress holding an umbrella

    Under the hood

    The app combines a variety of different Google technologies, such as:

      • Gemini API – through Firebase AI Logic SDK, for accessing the underlying Imagen and Gemini models.
      • Jetpack Compose – for building the UI with delightful animations and making the app adapt to different screen sizes.
      • Navigation 3 – the latest navigation library for building up Navigation graphs with Compose.
      • CameraX Compose and Media3 Compose – for building up a custom camera with custom UI controls (rear camera support, zoom support, tap-to-focus) and playing the promotional video.

    This sample app is currently using a standard Imagen model, but we’ve been working on a fine-tuned model that’s trained specifically on all of the pieces that make the Android bot cute and fun; we’ll share that version later this year. In the meantime, don’t be surprised if the sample app puts out some interesting looking examples!

    How does the Androidify app work?

    The app leverages our best practices for Architecture, Testing, and UI to showcase a real world, modern AI application on device.

    Androidify app flow chart detailing how the app works with AI

    AI in Androidify with Gemini and ML Kit

    The Androidify app uses the Gemini models in a multitude of ways to enrich the app experience, all powered by the Firebase AI Logic SDK. The app uses Gemini 2.5 Flash and Imagen 3 under the hood:

      • Image validation: We ensure that the captured image contains sufficient information, such as a clearly focused person, and we assess it for safety. This feature uses the multi-modal capabilities of the Gemini API, by giving it a prompt and an image at the same time:

    val response = generativeModel.generateContent(
       content {
           text(prompt)
           image(image)
       },
    )
    

      • Text prompt validation: If the user opts for text input instead of image, we use Gemini 2.5 Flash to ensure the text contains a sufficiently descriptive prompt to generate a bot.

      • Image captioning: Once we’re sure the image has enough information, we use Gemini 2.5 Flash to perform image captioning. We ask Gemini to be as descriptive as possible, focusing on the clothing and its colors.

      • “Help me write” feature: Similar to an “I’m feeling lucky” type feature, “Help me write” uses Gemini 2.5 Flash to create a random description of the clothing and hairstyle of a bot (see the sketch after this list).

      • Image generation from the generated prompt: As the final step, Imagen generates the image, providing the prompt and the selected skin tone of the bot.
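
    As a hedged sketch of the “Help me write” idea (mirroring the Firebase AI Logic snippets shown earlier; imports are omitted to match those snippets, and the prompt wording and function name are illustrative):

    suspend fun suggestBotDescription(): String? {
        // Same Firebase AI Logic setup as the validation snippet, but with a text-only prompt.
        val generativeModel = Firebase.ai(backend = GenerativeBackend.googleAI())
            .generativeModel(modelName = "gemini-2.5-flash-preview-04-17")
        val response = generativeModel.generateContent(
            content {
                text("Write a short, playful description of an Android bot's clothing and hairstyle.")
            },
        )
        return response.text
    }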

    The app also uses ML Kit pose detection to detect a person in the viewfinder, enabling the capture button when a person is detected and adding fun indicators around the content to show detection.

    Explore more detailed information about AI usage in Androidify.

    Jetpack Compose

    The user interface of Androidify is built using Jetpack Compose, the modern UI toolkit that simplifies and accelerates UI development on Android.

    Delightful details with the UI

    The app uses Material 3 Expressive, the latest alpha release that makes your apps more premium, desirable, and engaging. It provides delightful bits of UI out of the box, like new shapes and componentry, and it uses the MotionScheme variables wherever a motion spec is needed.

    MaterialShapes are used in various locations. These are a preset list of shapes that allow for easy morphing between each other—for example, the cute cookie shape for the camera capture button:

    Androidify app UI showing camera button

    Camera button with a MaterialShapes.Cookie9Sided shape

    Beyond using the standard Material components, Androidify also features custom composables and delightful transitions tailored to the specific needs of the app:

      • There are plenty of shared element transitions across the app—for example, a morphing shape shared element transition is performed between the “take a photo” button and the camera surface.

        moving example of expressive button shapes in slow motion

      • Custom enter transitions for the ResultsScreen with the usage of marquee modifiers.

        animated marquee example

      • Fun color splash animation as a transition between screens.

        moving image of a blue color splash transition between Androidify demo screens

      • Animating gradient buttons for the AI-powered actions.

        animated gradient button for AI powered actions example

    To learn more about the unique details of the UI, read Androidify: Building delightful UIs with Compose

    Adapting to different devices

    Androidify is designed to look great and function seamlessly across candy bar phones, foldables, and tablets. The general goal of developing adaptive apps is to avoid reimplementing the same app multiple times on each form factor by extracting out reusable composables, and leveraging APIs like WindowSizeClass to determine what kind of layout to display.

    a collage of different adaptive layouts for the Androidify app across small and large screens

    Various adaptive layouts in the app

    For Androidify, we only needed to leverage the width window size class. Combining this with different layout mechanisms, we were able to reuse or extend the composables to cater to the multitude of different device sizes and capabilities.

      • Responsive layouts: The CreationScreen demonstrates adaptive design. It uses helper functions like isAtLeastMedium() to detect window size categories and adjust its layout accordingly (see the sketch after this list). On larger windows, the image/prompt area and color picker might sit side-by-side in a Row, while on smaller windows, the color picker is accessed via a ModalBottomSheet. This pattern, called “supporting pane”, highlights the supporting dependencies between the main content and the color picker.

      • Foldable support: The app actively checks for foldable device features. The camera screen uses WindowInfoTracker to get FoldingFeature information to adapt to different features by optimizing the layout for tabletop posture.

      • Rear display: Support for devices with multiple displays is included via the RearCameraUseCase, allowing for the device camera preview to be shown on the external screen when the device is unfolded (so the main content is usually displayed on the internal screen).
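
    A minimal sketch of a helper in the spirit of isAtLeastMedium(), assuming the material3-adaptive window size class APIs (the exact helper in the Androidify codebase may differ):

    import androidx.compose.material3.adaptive.currentWindowAdaptiveInfo
    import androidx.compose.runtime.Composable
    import androidx.window.core.layout.WindowWidthSizeClass

    // True when the current window is at least medium width, which is when the color
    // picker can sit beside the main content instead of in a bottom sheet.
    @Composable
    fun isAtLeastMedium(): Boolean =
        currentWindowAdaptiveInfo().windowSizeClass.windowWidthSizeClass != WindowWidthSizeClass.COMPACT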

    Using window size classes, coupled with creating a custom @LargeScreensPreview annotation, helps achieve unique and useful UIs across the spectrum of device sizes and window sizes.
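
    A custom multipreview annotation along those lines might look like this sketch; the device specs here are illustrative, not the ones Androidify uses:

    import androidx.compose.ui.tooling.preview.Preview

    // Stack several previews behind one annotation to check large-screen layouts at a glance.
    @Preview(name = "Foldable", device = "spec:width=673dp,height=841dp")
    @Preview(name = "Tablet", device = "spec:width=1280dp,height=800dp,dpi=240")
    @Preview(name = "Desktop", device = "spec:width=1920dp,height=1080dp,dpi=160")
    annotation class LargeScreensPreview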

    CameraX and Media3 Compose

    To allow users to base their bots on photos, Androidify integrates CameraX, the Jetpack library that makes camera app development easier.

    The app uses a custom CameraLayout composable that supports the layout of the typical composables that a camera preview screen would include— for example, zoom buttons, a capture button, and a flip camera button. This layout adapts to different device sizes and more advanced use cases, like the tabletop mode and rear-camera display. For the actual rendering of the camera preview, it uses the new CameraXViewfinder that is part of the camerax-compose artifact.
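
    To illustrate the slot-based idea, here is a sketch only; the real CameraLayout in the Androidify codebase is more involved and adapts these slots per device configuration:

    import androidx.compose.foundation.layout.Box
    import androidx.compose.foundation.layout.Column
    import androidx.compose.foundation.layout.fillMaxSize
    import androidx.compose.runtime.Composable
    import androidx.compose.ui.Alignment
    import androidx.compose.ui.Modifier

    // A slot API keeps the camera chrome (capture, zoom, flip) independent of the
    // viewfinder, so the arrangement can change with device posture or window size.
    @Composable
    fun SimpleCameraLayout(
        viewfinder: @Composable () -> Unit,
        captureButton: @Composable () -> Unit,
        zoomControls: @Composable () -> Unit,
        modifier: Modifier = Modifier,
    ) {
        Box(modifier.fillMaxSize()) {
            viewfinder()
            Column(Modifier.align(Alignment.BottomCenter)) {
                zoomControls()
                captureButton()
            }
        }
    }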

    CameraLayout in Compose

    CameraLayout composable that takes care of different device configurations, such as table top mode

    The app also integrates with Media3 APIs to load an instructional video for showing how to get the best bot from a prompt or image. Using the new media3-ui-compose artifact, we can easily add a VideoPlayer into the app:

    @Composable
    private fun VideoPlayer(modifier: Modifier = Modifier) {
        val context = LocalContext.current
        var player by remember { mutableStateOf<Player?>(null) }
        LifecycleStartEffect(Unit) {
            player = ExoPlayer.Builder(context).build().apply {
                setMediaItem(MediaItem.fromUri(Constants.PROMO_VIDEO))
                repeatMode = Player.REPEAT_MODE_ONE
                prepare()
            }
            onStopOrDispose {
                player?.release()
                player = null
            }
        }
        Box(
            modifier
                .background(MaterialTheme.colorScheme.surfaceContainerLowest),
        ) {
            player?.let { currentPlayer ->
                PlayerSurface(currentPlayer, surfaceType = SURFACE_TYPE_TEXTURE_VIEW)
            }
        }
    }
    

    Using the new onLayoutRectChanged modifier, we also detect whether the composable is fully visible on screen and play or pause the video accordingly:

    var videoFullyOnScreen by remember { mutableStateOf(false) }     
    
    LaunchedEffect(videoFullyOnScreen) {
         if (videoFullyOnScreen) currentPlayer.play() else currentPlayer.pause()
    } 
    
    // This modifier is added to the player composable to determine whether the video is fully
    // visible; it updates videoFullyOnScreen, which in turn toggles the player state above.
    Modifier.onVisibilityChanged(
                    containerWidth = LocalView.current.width,
                    containerHeight = LocalView.current.height,
    ) { fullyVisible -> videoFullyOnScreen = fullyVisible }
    
    // A simple version of visibility changed detection
    fun Modifier.onVisibilityChanged(
        containerWidth: Int,
        containerHeight: Int,
        onChanged: (visible: Boolean) -> Unit,
    ) = this then Modifier.onLayoutRectChanged(100, 0) { layoutBounds ->
        onChanged(
            layoutBounds.boundsInRoot.top > 0 &&
                layoutBounds.boundsInRoot.bottom < containerHeight &&
                layoutBounds.boundsInRoot.left > 0 &&
                layoutBounds.boundsInRoot.right < containerWidth,
        )
    }
    

    Additionally, using rememberPlayPauseButtonState, we add a layer on top of the player to offer a play/pause button on the video itself:

    val playPauseButtonState = rememberPlayPauseButtonState(currentPlayer)

    OutlinedIconButton(
        onClick = playPauseButtonState::onClick,
        enabled = playPauseButtonState.isEnabled,
    ) {
        val icon =
            if (playPauseButtonState.showPlay) R.drawable.play else R.drawable.pause
        val contentDescription =
            if (playPauseButtonState.showPlay) R.string.play else R.string.pause
        Icon(
            painterResource(icon),
            stringResource(contentDescription),
        )
    }
    

    Check out the code for more details on how CameraX and Media3 were used in Androidify.

    Navigation 3

    Screen transitions are handled using the new Jetpack Navigation 3 library androidx.navigation3. The MainNavigation composable defines the different destinations (Home, Camera, Creation, About) and displays the content associated with each destination using NavDisplay. You get full control over your back stack, and navigating to and from destinations is as simple as adding and removing items from a list.

    @Composable
    fun MainNavigation() {
       val backStack = rememberMutableStateListOf<NavigationRoute>(Home)
       NavDisplay(
           backStack = backStack,
           onBack = { backStack.removeLastOrNull() },
           entryProvider = entryProvider {
               entry<Home> { entry ->
                   HomeScreen(
                       onAboutClicked = {
                           backStack.add(About)
                       },
                   )
               }
               entry<Camera> {
                   CameraPreviewScreen(
                       onImageCaptured = { uri ->
                           backStack.add(Create(uri.toString()))
                       },
                   )
               }
               // etc
           },
       )
    }
    

    Notably, Navigation 3 exposes a new composition local, LocalNavAnimatedContentScope, to easily integrate your shared element transitions without needing to keep track of the scope yourself. By default, Navigation 3 also integrates with predictive back, providing delightful back experiences when navigating between screens, as seen in the shared element transition below:

    Shared element transition between screens, combined with predictive back navigation
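
    As a rough sketch of how a destination's content can hook into those transitions, the animated scope comes straight from the composition local. The composable, resource, key, and import paths below are illustrative assumptions, and the shared transition APIs are still experimental.

    import androidx.compose.animation.ExperimentalSharedTransitionApi
    import androidx.compose.animation.SharedTransitionScope
    import androidx.compose.foundation.Image
    import androidx.compose.runtime.Composable
    import androidx.compose.ui.Modifier
    import androidx.compose.ui.res.painterResource
    import androidx.navigation3.ui.LocalNavAnimatedContentScope

    // Hypothetical destination content that shares its image with the previous screen.
    @OptIn(ExperimentalSharedTransitionApi::class)
    @Composable
    fun SharedTransitionScope.BotResultImage(modifier: Modifier = Modifier) {
        Image(
            painter = painterResource(R.drawable.placeholder_bot), // hypothetical resource
            contentDescription = null,
            modifier = modifier.sharedElement(
                rememberSharedContentState(key = "bot-image"), // hypothetical key
                LocalNavAnimatedContentScope.current,
            ),
        )
    }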

    Learn more about Jetpack Navigation 3, currently in alpha.

    Learn more

    Androidify combines the declarative power of Jetpack Compose, the camera capabilities of CameraX, the intelligent features of Gemini, and thoughtful adaptive design into a personalized avatar creation experience that feels right at home on any Android device. You can find the full code sample at github.com/android/androidify, where you can see the app in action and be inspired to build your own AI-powered app experiences.

    Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.



    Source link

  • Android’s Kotlin Multiplatform announcements at Google I/O and KotlinConf 25



    Posted by Ben Trengrove – Developer Relations Engineer, Matt Dyor – Product Manager

    Google I/O and KotlinConf 2025 bring a series of announcements on Android’s Kotlin and Kotlin Multiplatform efforts. Here’s what to watch out for:

    Announcements from Google I/O 2025

    Jetpack libraries

    Our focus for Jetpack libraries and KMP is on sharing business logic across Android and iOS, but we have begun experimenting with web/WASM support.

    We are adding KMP support to Jetpack libraries. Last year we started with Room, DataStore, and Collection, which are now available in stable releases; more recently we added ViewModel, SavedState, and Paging. The levels of support that our Jetpack libraries guarantee for each platform have been categorized into three tiers, with the top tier covering Android, iOS, and JVM.
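
    As a small illustration of what that sharing looks like in practice, here is a hypothetical commonMain class (not from the announcement), assuming the KMP-enabled lifecycle-viewmodel artifact; the same presentation logic compiles for Android and iOS.

    import androidx.lifecycle.ViewModel
    import kotlinx.coroutines.flow.MutableStateFlow
    import kotlinx.coroutines.flow.StateFlow

    // commonMain: hypothetical shared ViewModel compiled for both Android and iOS.
    class CounterViewModel : ViewModel() {
        private val _count = MutableStateFlow(0)
        val count: StateFlow<Int> = _count

        // Shared business logic callable from the Android and iOS UI layers.
        fun increment() {
            _count.value += 1
        }
    }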

    Tool improvements

    We’re developing new tools to help you start using KMP in your app. With the KMP new module template in Android Studio Meerkat, you can add a new module to an existing app and share code with iOS and other supported KMP platforms.

    In addition to KMP enhancements, Android Studio now supports Kotlin K2 mode for Android-specific features requiring language support, such as Live Edit, Compose Preview, and more.

    How Google is using KMP

    Last year, Google Workspace began experimenting with KMP, and it is now running in production in the Google Docs app on iOS. The app’s runtime performance is on par with or better than before1.

    It’s been helpful to have an app of this scale exercising KMP, because we’re able to identify and fix issues that benefit the KMP developer community.

    For example, we’ve upgraded the Kotlin Native compiler to LLVM 16 and contributed a more efficient garbage collector and string implementation. We’re also bringing the static analysis power of Android Lint to Kotlin targets and ensuring a unified Gradle DSL for both AGP and KGP to improve the plugin management experience.

    New guidance

    We’re providing comprehensive guidance in the form of two new codelabs: Getting started with Kotlin Multiplatform and Migrating your Room database to KMP, to help you get from standalone Android and iOS apps to shared business logic.

    Kotlin Improvements

    Kotlin Symbol Processing (KSP2) is now stable, with better support for new Kotlin language features and improved performance. It is easier to integrate with build systems, is thread-safe, and has better support for debugging annotation processors. In contrast to KSP1, KSP2 has much better compatibility across different Kotlin versions. The rewritten command-line interface is also significantly easier to use, as it is now a standalone program instead of a compiler plugin.

    KotlinConf 2025

    Google team members are presenting a number of talks at KotlinConf spanning multiple topics:

    Talks

      • Deploying KMP at Google Workspace by Jason Parachoniak, Troels Lund, and Johan Bay from the Workspace team discusses the challenges and solutions, including bugs and performance optimizations, encountered when launching Kotlin Multiplatform at Google Workspace, with comparisons to Objective-C and a Q&A. (Technical Session)

      • The Life and Death of a Kotlin/Native Object by Troels Lund offers a high-level explanation of the Kotlin/Native runtime’s inner workings concerning object instantiation, memory management, and disposal. (Technical Session)

      • APIs: How Hard Can They Be? presented by Aurimas Liutikas and Alan Viverette from the Jetpack team delves into the lifecycle of API design, review processes, and evolution within AndroidX libraries, particularly considering KMP and related tools. (Technical Session)

      • Project Sparkles: How Compose for Desktop is changing Android Studio and IntelliJ with Chris Sinco and Sebastiano Poggi from the Android Studio team introduces the initiative (‘Project Sparkles’) aiming to modernize Android Studio and IntelliJ UIs using Compose for Desktop, covering goals, examples, and collaborations. (Technical Session)

      • JSpecify: Java Nullness Annotations and Kotlin presented by David Baker explains the significance and workings of JSpecify’s standard Java nullness annotations for enhancing Kotlin’s interoperability with Java libraries. (Lightning Session)

      • Lessons learned decoupling Architecture Components from platform specific code features Jeremy Woods and Marcello Galhardo from the Jetpack team sharing insights from the Android team on decoupling core components like SavedState and System Back from platform specifics to create common APIs. (Technical Session)

      • KotlinConf’s Closing Panel, a regular staple of the conference, returns, featuring Jeffrey van Gogh as Google’s representative on the panel. (Panel)

    Live Workshops

    If you are at KotlinConf in person, we will be running guided live workshops based on our new codelabs above.

      • The codelab Migrating Room to Room KMP, led by Matt Dyor, Dustin Lam, and Tomáš Mlynarič, demonstrates the process of migrating an existing Room database implementation to Room KMP within a shared module.

    We love engaging with the Kotlin community. If you are attending KotlinConf, we hope you get a chance to check out our booth, with opportunities to chat with our engineers, get your questions answered, and learn more about how you can leverage Kotlin and KMP.

    Learn more about Kotlin Multiplatform

    To learn more about KMP and start sharing your business logic across platforms, check out our documentation and the sample.

    Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.

    1 Google Internal Data, March 2025



    Source link

  • Android Developers Blog: Announcing Jetpack Navigation 3



    Posted by Don Turner – Developer Relations Engineer

    Navigating between screens in your app should be simple, shouldn’t it? However, building a robust, scalable, and delightful navigation experience can be a challenge. For years, the Jetpack Navigation library has been a key tool for developers, but as the Android UI landscape has evolved, particularly with the rise of Jetpack Compose, we recognized the need for a new approach.

    Today, we’re excited to introduce Jetpack Navigation 3, a new navigation library built from the ground up specifically for Compose. For brevity, we’ll just call it Nav3 from now on. This library embraces the declarative programming model and Compose state as fundamental building blocks.

    Why a new navigation library?

    The original Jetpack Navigation library (sometimes referred to as Nav2 as it’s on major version 2) was initially announced back in 2018, before AndroidX and before Compose. While it served its original goals well, we heard from you that it had several limitations when working with modern Compose patterns.

    One key limitation was that the back stack state could only be observed indirectly. This meant there could be two sources of truth, potentially leading to an inconsistent application state. Also, Nav2’s NavHost was designed to display only a single destination – the topmost one on the back stack – filling the available space. This made it difficult to implement adaptive layouts that display multiple panes of content simultaneously, such as a list-detail layout on large screens.

    illustration of single pane and two-pane layouts showing list and detail features

    Figure 1. Changing from single pane to multi-pane layouts can create navigational challenges

    Founding principles

    Nav3 is built upon principles designed to provide greater flexibility and developer control:

      • You own the back stack: You, the developer, not the library, own and control the back stack. It’s a simple list which is backed by Compose state. Specifically, Nav3 expects your back stack to be SnapshotStateList<T> where T can be any type you choose. You can navigate by adding or removing items (Ts), and state changes are observed and reflected by Nav3’s UI.
      • Get out of your way: We heard that you don’t like a navigation library to be a black box with inaccessible internal components and state. Nav3 is designed to be open and extensible, providing you with building blocks and helpful defaults. If you want custom navigation behavior you can drop down to lower layers and create your own components and customizations.
      • Pick your building blocks: Instead of embedding all behavior within the library, Nav3 offers smaller components that you can combine to create more complex functionality. We’ve also provided a “recipes book” that shows how to combine components to solve common navigation challenges.

    illustration of the Nav3 display observing changes to the developer-owned back stack

    Figure 2. The Nav3 display observes changes to the developer-owned back stack.

    Key features

      • Adaptive layouts: A flexible layout API (named Scenes) allows you to render multiple destinations in the same layout (for example, a list-detail layout on large screen devices). This makes it easy to switch between single and multi-pane layouts.
      • Modularity: The API design allows navigation code to be split across multiple modules. This improves build times and allows clear separation of responsibilities between feature modules.

    moving image demonstrating custom animations and predictive back features on a mobile device

    Figure 3. Custom animations and predictive back are easy to implement, and easy to override for individual destinations.

    Basic code example

    To give you an idea of how Nav3 works, here’s a short code sample.

    // Define the routes in your app and any arguments.
    data object Home
    data class Product(val id: String)

    // Create a back stack, specifying the route the app should start with.
    val backStack = remember { mutableStateListOf<Any>(Home) }

    // A NavDisplay displays your back stack. Whenever the back stack changes, the display updates.
    NavDisplay(
        backStack = backStack,

        // Specify what should happen when the user goes back
        onBack = { backStack.removeLastOrNull() },

        // An entry provider converts a route into a NavEntry which contains the content for that route.
        entryProvider = { route ->
            when (route) {
                is Home -> NavEntry(route) {
                    Column {
                        Text("Welcome to Nav3")
                        Button(onClick = {
                            // To navigate to a new route, just add that route to the back stack
                            backStack.add(Product("123"))
                        }) {
                            Text("Click to navigate")
                        }
                    }
                }
                is Product -> NavEntry(route) {
                    Text("Product ${route.id} ")
                }
                else -> NavEntry(Unit) { Text("Unknown route: $route") }
            }
        }
    )
    

    Get started and provide feedback

    To get started, check out the developer documentation, plus the recipes repository, which provides examples for:

      • common navigation UI, such as a navigation rail or bar
      • conditional navigation, such as a login flow
      • custom layouts using Scenes

    We plan to provide code recipes, documentation, and blog posts for more complex use cases in the future.

    Nav3 is currently in alpha, which means the API is subject to change based on feedback. If you have any issues, or would like to provide feedback, please file an issue.

    Nav3 offers a flexible and powerful foundation for building modern navigation in your Compose applications. We’re really excited to see what you build with it.

    Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.




    Source link

  • Engage users on Google TV with excellent TV apps



    Posted by Shobana Radhakrishnan – Senior Director of Engineering, Google TV, and Paul Lammertsma – Developer Relations Engineer, Android

    Over the past year, Google TV and Android TV achieved over 270 million monthly active devices, establishing one of the largest smart TV OS footprints. Building on this momentum, we are excited to share new platform features and developer tools designed to help you increase app engagement with our expanding user base.

    https://www.youtube.com/watch?v=OosLbRBM9dA

    Google TV with Gemini capabilities

    Earlier this year, we announced that we’ll bring Gemini capabilities to Google TV, so users can speak more naturally and conversationally to find what to watch and get answers to complex questions.

    A user pulls up Gemini on a TV asking for kid-friendly movie recommendations similar to Jurassic Park. Gemini responds with several movie recommendations

    After each movie or show search, our new voice assistant will suggest relevant content from your apps, significantly increasing the discoverability of your content.

    A user pulls up Gemini on a TV asking for help explaining the solar system to a first grader. Gemini responds with YouTube videos to help explain the solar system

    Plus, users can easily ask questions about topics they’re curious about and receive insightful answers with supporting videos.

    We’re so excited to bring this helpful and delightful experience to users this fall.

    Video Discovery API

    Today, we’ve also opened partner enrollment for our Video Discovery API.

    Video Discovery optimizes Resumption, Entitlements, and Recommendations across all Google TV form factors to enhance the end-user experience and boost app engagement.

      • Resumption: Partners can now easily display a user’s paused video within the ‘Continue Watching’ row from the home screen. This row is a prime location that drives 60% of all user interactions on Google TV.
      • Entitlements: Video Discovery streamlines entitlement management, which matches app content to user eligibility. Users appreciate this because they can enjoy personalized recommendations without needing to manually update all their subscription details. This allows partners to connect with users across multiple discovery points on Google TV.
      • Recommendations: Video Discovery even highlights personalized content recommendations based on content that users watched inside apps.

    Partners can begin incorporating the Video Discovery API today, starting with resumption and entitlement integrations. Check out g.co/tv/vda to learn more.

    Jetpack Compose for TV

    Compose for TV 1.0 expands on the core and Material Compose libraries

    Last year, we launched Compose for TV 1.0 beta, which lets you build beautiful, adaptive UIs across Android, including Android TV OS.

    Now, Compose for TV 1.0 is stable and expands on the core and Material Compose libraries. In our internal benchmarking mobile sample, the latest release of Compose improved app startup by roughly 20% compared with the March 2024 release. Because Compose for TV builds upon these libraries, apps built with Compose for TV should also see better app startup times.

    New to building with Compose, and not sure where to start? Our updated Jetcaster audio streaming app sample demonstrates how to use Compose across form factors. It includes a dedicated module for playing podcasts on TV by combining separate view models with shared business logic.
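
    As a minimal sketch of what that looks like in code (using the androidx.tv.material3 components, not taken from Jetcaster), a TV screen is written like any other Compose UI:

    import androidx.compose.runtime.Composable
    import androidx.tv.material3.Button
    import androidx.tv.material3.Text

    // Hypothetical play button built with the TV Material components.
    @Composable
    fun PlayEpisodeButton(onPlay: () -> Unit) {
        Button(onClick = onPlay) {
            Text("Play episode")
        }
    }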

    Focus Management Codelab

    We understand that focus management can be challenging at times. That’s why we’ve published a codelab that reviews how to set initial focus, prepare for unexpected focus traversal, and efficiently restore focus.
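
    For example, setting initial focus typically comes down to a FocusRequester; here is a minimal sketch (not taken from the codelab):

    import androidx.compose.runtime.Composable
    import androidx.compose.runtime.LaunchedEffect
    import androidx.compose.runtime.remember
    import androidx.compose.ui.Modifier
    import androidx.compose.ui.focus.FocusRequester
    import androidx.compose.ui.focus.focusRequester
    import androidx.tv.material3.Button
    import androidx.tv.material3.Text

    // Hypothetical element that should receive focus when the screen opens.
    @Composable
    fun InitiallyFocusedPlayButton(onPlay: () -> Unit) {
        val focusRequester = remember { FocusRequester() }
        Button(
            onClick = onPlay,
            modifier = Modifier.focusRequester(focusRequester),
        ) {
            Text("Play")
        }
        // Request focus once the button is part of the composition.
        LaunchedEffect(Unit) {
            focusRequester.requestFocus()
        }
    }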

    Memory Optimization Guide

    We’ve released a comprehensive guide on memory optimization, including memory targets for low-RAM devices. Combined with Android Studio’s powerful memory profiler, it helps you understand when your app exceeds those limits and why.

    In-App Ratings and Reviews

    Ratings and reviews entry point for the JetStream sample app on TV

    Moreover, app ratings and reviews are essential for developers, offering quantitative and qualitative feedback on user experiences. Now, we’re extending the In-App Ratings and Reviews API to TV to allow developers to prompt users for ratings and reviews directly from Google TV. Check out our recent blog post detailing how to easily integrate the In-App Ratings and Reviews API.
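
    Here is a minimal sketch of the standard Play In-App Review entry point, assuming the com.google.android.play:review artifact; error handling is omitted:

    import android.app.Activity
    import com.google.android.play.core.review.ReviewManagerFactory

    // Hypothetical helper: asks Play to show the rating and review dialog when appropriate.
    fun requestInAppReview(activity: Activity) {
        val reviewManager = ReviewManagerFactory.create(activity)
        reviewManager.requestReviewFlow().addOnCompleteListener { request ->
            if (request.isSuccessful) {
                // Launching the flow may or may not show UI; Google Play decides.
                reviewManager.launchReviewFlow(activity, request.result)
            }
        }
    }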

    Android 16 for TV


    We’re excited to announce the upcoming release of Android 16 for TV. Developers can begin using the latest beta today. With Android 16, TV developers can access several great features:

      • Platform support for the Eclipsa Audio codec enables creators to use the IAMF spatial audio format. For ExoPlayer support that includes previous platform versions, see ExoPlayer’s IAMF decoder module.
      • There are various improvements to media playback speed, consistency and efficiency, as well as HDMI-CEC reliability and performance optimizations for 64-bit kernels.
      • Additional APIs and user experiences from Android 16 are also available. We invite you to explore the complete list from the Android 16 for TV release notes.

    What’s next

    We’re incredibly excited to see how these announcements will optimize your development journey, and look forward to seeing the fantastic apps you’ll launch on the platform!

    Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.



    Source link