Tag: Jetpack

  • Building powerful AI-driven experiences with Jetpack Compose, Gemini and CameraX


    The Android bot is a beloved mascot for Android users and developers, with previous versions of the bot builder being very popular – we decided that this year we’d rebuild the bot maker from the ground up, using the latest technology backed by Gemini. Today we are releasing a new open source app, Androidify, for learning how to build powerful AI-driven experiences on Android using the latest technologies such as Jetpack Compose, Gemini through Firebase, CameraX, and Navigation 3.

    Here’s an example of the app running on the device, showcasing converting a photo to an Android bot that represents my likeness:

    moving image showing the conversion of an image of a woman in a pink dress holding an umbrella into a 3D image of a droid bot wearing a pink dress holding an umbrella

    Under the hood

    The app combines a variety of different Google technologies, such as:

      • Gemini API – through Firebase AI Logic SDK, for accessing the underlying Imagen and Gemini models.
      • Jetpack Compose – for building the UI with delightful animations and making the app adapt to different screen sizes.
      • Navigation 3 – the latest navigation library for building up Navigation graphs with Compose.
      • CameraX Compose and Media3 Compose – for building up a custom camera with custom UI controls (rear camera support, zoom support, tap-to-focus) and playing the promotional video.

    This sample app is currently using a standard Imagen model, but we’ve been working on a fine-tuned model that’s trained specifically on all of the pieces that make the Android bot cute and fun; we’ll share that version later this year. In the meantime, don’t be surprised if the sample app puts out some interesting looking examples!

    How does the Androidify app work?

    The app leverages our best practices for Architecture, Testing, and UI to showcase a real world, modern AI application on device.

    Flow chart describing Androidify app flow

    Androidify app flow chart detailing how the app works with AI

    AI in Androidify with Gemini and ML Kit

    The Androidify app uses the Gemini models in a multitude of ways to enrich the app experience, all powered by the Firebase AI Logic SDK. The app uses Gemini 2.5 Flash and Imagen 3 under the hood:

      • Image validation: We ensure that the captured image contains sufficient information, such as a clearly focused person, and assess it for safety. This feature uses the multi-modal capabilities of the Gemini API by giving it a prompt and an image at the same time:

    val response = generativeModel.generateContent(
       content {
           text(prompt)
           image(image)
       },
    )
    

      • Text prompt validation: If the user opts for text input instead of image, we use Gemini 2.5 Flash to ensure the text contains a sufficiently descriptive prompt to generate a bot.

      • Image captioning: Once we’re sure the image has enough information, we use Gemini 2.5 Flash to perform image captioning. We ask Gemini to be as descriptive as possible, focusing on the clothing and its colors.

      • “Help me write” feature: Similar to an “I’m feeling lucky” type feature, “Help me write” uses Gemini 2.5 Flash to create a random description of the clothing and hairstyle of a bot.

      • Image generation from the generated prompt: As the final step, Imagen generates the image, providing the prompt and the selected skin tone of the bot.

    The app also uses ML Kit pose detection to detect a person in the viewfinder, enabling the capture button only when a person is detected and adding fun indicators around the content to signal detection.
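
    For illustration, here is a minimal sketch of that detection step using the ML Kit Pose Detection API (not Androidify’s exact code; the callback shape is ours):

    val poseDetector = PoseDetection.getClient(
        PoseDetectorOptions.Builder()
            .setDetectorMode(PoseDetectorOptions.STREAM_MODE) // analyze a live camera stream
            .build(),
    )

    // Called for each camera frame (e.g. from an ImageAnalysis analyzer).
    fun detectPerson(image: InputImage, onResult: (personDetected: Boolean) -> Unit) {
        poseDetector.process(image)
            .addOnSuccessListener { pose -> onResult(pose.allPoseLandmarks.isNotEmpty()) }
            .addOnFailureListener { onResult(false) }
    }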

    Explore more detailed information about AI usage in Androidify.

    Jetpack Compose

    The user interface of Androidify is built using Jetpack Compose, the modern UI toolkit that simplifies and accelerates UI development on Android.

    Delightful details with the UI

    The app uses Material 3 Expressive, the latest alpha release that makes your apps more premium, desirable, and engaging. It provides delightful bits of UI out of the box, like new shapes and componentry, and uses MotionScheme variables wherever a motion spec is needed.

    MaterialShapes are used in various locations. These are a preset list of shapes that allow for easy morphing between each other—for example, the cute cookie shape for the camera capture button:

    Androidify app UI showing camera button

    Camera button with a MaterialShapes.Cookie9Sided shape
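
    As a rough sketch (the drawable, callback, and toShape() conversion are assumptions, not Androidify’s exact code), the capture button can simply be clipped to that shape:

    IconButton(
        onClick = onCapture, // assumed capture callback
        modifier = Modifier
            .size(96.dp)
            // MaterialShapes.Cookie9Sided is a rounded polygon; toShape() is assumed here
            // to convert it into a Compose Shape for clipping.
            .clip(MaterialShapes.Cookie9Sided.toShape())
            .background(MaterialTheme.colorScheme.primary),
    ) {
        Icon(painterResource(R.drawable.ic_capture), contentDescription = "Take photo") // hypothetical icon
    }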

    Beyond using the standard Material components, Androidify also features custom composables and delightful transitions tailored to the specific needs of the app:

      • There are plenty of shared element transitions across the app—for example, a morphing shape shared element transition is performed between the “take a photo” button and the camera surface.

        moving example of expressive button shapes in slow motion

      • Custom enter transitions for the ResultsScreen with the usage of marquee modifiers.

        animated marquee example

      • Fun color splash animation as a transition between screens.

        moving image of a blue color splash transition between Androidify demo screens

      • Animating gradient buttons for the AI-powered actions.

        animated gradient button for AI powered actions example

    To learn more about the unique details of the UI, read Androidify: Building delightful UIs with Compose

    Adapting to different devices

    Androidify is designed to look great and function seamlessly across candy bar phones, foldables, and tablets. The general goal of developing adaptive apps is to avoid reimplementing the same app multiple times on each form factor by extracting out reusable composables, and leveraging APIs like WindowSizeClass to determine what kind of layout to display.

    a collage of different adaptive layouts for the Androidify app across small and large screens

    Various adaptive layouts in the app

    For Androidify, we only needed to leverage the width window size class. Combining this with different layout mechanisms, we were able to reuse or extend the composables to cater to the multitude of different device sizes and capabilities.

      • Responsive layouts: The CreationScreen demonstrates adaptive design. It uses helper functions like isAtLeastMedium() to detect window size categories and adjust its layout accordingly. On larger windows, the image/prompt area and color picker might sit side-by-side in a Row, while on smaller windows, the color picker is accessed via a ModalBottomSheet. This pattern, called “supporting pane”, highlights the supporting dependencies between the main content and the color picker.

      • Foldable support: The app actively checks for foldable device features. The camera screen uses WindowInfoTracker to get FoldingFeature information to adapt to different features by optimizing the layout for tabletop posture.

      • Rear display: Support for devices with multiple displays is included via the RearCameraUseCase, allowing for the device camera preview to be shown on the external screen when the device is unfolded (so the main content is usually displayed on the internal screen).

    Using window size classes, coupled with creating a custom @LargeScreensPreview annotation, helps achieve unique and useful UIs across the spectrum of device sizes and window sizes.
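
    A minimal sketch of what such a helper can look like, assuming the material3-adaptive currentWindowAdaptiveInfo() API and the window size class breakpoints shown later on this page (not Androidify’s exact implementation):

    @Composable
    fun isAtLeastMedium(): Boolean {
        // WindowSizeClass here is androidx.window.core.layout.WindowSizeClass.
        val sizeClass = currentWindowAdaptiveInfo().windowSizeClass
        return sizeClass.isWidthAtLeastBreakpoint(WindowSizeClass.WIDTH_DP_MEDIUM_LOWER_BOUND)
    }

    // Usage: choose between the side-by-side layout and the bottom sheet.
    // if (isAtLeastMedium()) { /* Row with color picker pane */ } else { /* ModalBottomSheet */ }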

    CameraX and Media3 Compose

    To allow users to base their bots on photos, Androidify integrates CameraX, the Jetpack library that makes camera app development easier.

    The app uses a custom CameraLayout composable that supports the layout of the typical composables that a camera preview screen would include— for example, zoom buttons, a capture button, and a flip camera button. This layout adapts to different device sizes and more advanced use cases, like the tabletop mode and rear-camera display. For the actual rendering of the camera preview, it uses the new CameraXViewfinder that is part of the camerax-compose artifact.
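
    A hedged sketch of that rendering step (parameter names are assumptions): the CameraX Preview use case hands out a SurfaceRequest, which the CameraXViewfinder composable then renders:

    @Composable
    fun ViewfinderSketch(surfaceRequest: SurfaceRequest?, modifier: Modifier = Modifier) {
        // The SurfaceRequest typically comes from Preview.setSurfaceProvider { request -> ... }
        // and is held in state until CameraX provides a new one.
        surfaceRequest?.let { request ->
            CameraXViewfinder(surfaceRequest = request, modifier = modifier)
        }
    }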

    CameraLayout in Compose

    CameraLayout composable that takes care of different device configurations, such as table top mode


    The app also integrates with Media3 APIs to load an instructional video for showing how to get the best bot from a prompt or image. Using the new media3-ui-compose artifact, we can easily add a VideoPlayer into the app:

    @Composable
    private fun VideoPlayer(modifier: Modifier = Modifier) {
        val context = LocalContext.current
        var player by remember { mutableStateOf<Player?>(null) }
        LifecycleStartEffect(Unit) {
            player = ExoPlayer.Builder(context).build().apply {
                setMediaItem(MediaItem.fromUri(Constants.PROMO_VIDEO))
                repeatMode = Player.REPEAT_MODE_ONE
                prepare()
            }
            onStopOrDispose {
                player?.release()
                player = null
            }
        }
        Box(
            modifier
                .background(MaterialTheme.colorScheme.surfaceContainerLowest),
        ) {
            player?.let { currentPlayer ->
                PlayerSurface(currentPlayer, surfaceType = SURFACE_TYPE_TEXTURE_VIEW)
            }
        }
    }
    

    Using the new onLayoutRectChanged modifier, we also listen for whether the composable is completely visible or not, and play or pause the video based on this information:

    var videoFullyOnScreen by remember { mutableStateOf(false) }     
    
    LaunchedEffect(videoFullyOnScreen) {
         if (videoFullyOnScreen) currentPlayer.play() else currentPlayer.pause()
    } 
    
    // Added to the player composable: reports whether the video is fully visible and updates
    // videoFullyOnScreen, which the LaunchedEffect above uses to toggle playback.
    Modifier.onVisibilityChanged(
                    containerWidth = LocalView.current.width,
                    containerHeight = LocalView.current.height,
    ) { fullyVisible -> videoFullyOnScreen = fullyVisible }
    
    // A simple version of visibility changed detection
    fun Modifier.onVisibilityChanged(
        containerWidth: Int,
        containerHeight: Int,
        onChanged: (visible: Boolean) -> Unit,
    ) = this then Modifier.onLayoutRectChanged(100, 0) { layoutBounds ->
        onChanged(
            layoutBounds.boundsInRoot.top > 0 &&
                layoutBounds.boundsInRoot.bottom < containerHeight &&
                layoutBounds.boundsInRoot.left > 0 &&
                layoutBounds.boundsInRoot.right < containerWidth,
        )
    }
    

    Additionally, using rememberPlayPauseButtonState, we add on a layer on top of the player to offer a play/pause button on the video itself:

    val playPauseButtonState = rememberPlayPauseButtonState(currentPlayer)
                OutlinedIconButton(
                    onClick = playPauseButtonState::onClick,
                    enabled = playPauseButtonState.isEnabled,
                ) {
                    val icon =
                        if (playPauseButtonState.showPlay) R.drawable.play else R.drawable.pause
                    val contentDescription =
                        if (playPauseButtonState.showPlay) R.string.play else R.string.pause
                    Icon(
                        painterResource(icon),
                        stringResource(contentDescription),
                    )
                }
    

    Check out the code for more details on how CameraX and Media3 were used in Androidify.

    Navigation 3

    Screen transitions are handled using the new Jetpack Navigation 3 library androidx.navigation3. The MainNavigation composable defines the different destinations (Home, Camera, Creation, About) and displays the content associated with each destination using NavDisplay. You get full control over your back stack, and navigating to and from destinations is as simple as adding and removing items from a list.

    @Composable
    fun MainNavigation() {
       val backStack = rememberMutableStateListOf<NavigationRoute>(Home)
       NavDisplay(
           backStack = backStack,
           onBack = { backStack.removeLastOrNull() },
           entryProvider = entryProvider {
               entry<Home> { entry ->
                   HomeScreen(
                       onAboutClicked = {
                           backStack.add(About)
                       },
                   )
               }
               entry<Camera> {
                   CameraPreviewScreen(
                       onImageCaptured = { uri ->
                           backStack.add(Create(uri.toString()))
                       },
                   )
               }
               // etc
           },
       )
    }
    

    Notably, Navigation 3 exposes a new composition local, LocalNavAnimatedContentScope, to easily integrate your shared element transitions without needing to keep track of the scope yourself. By default, Navigation 3 also integrates with predictive back, providing delightful back experiences when navigating between screens, as seen in this prior shared element transition:

    moving image of a shared element transition with predictive back
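
    A hedged sketch of how that wiring can look (the key, drawable, and enclosing SharedTransitionLayout are illustrative, not Androidify’s exact code):

    SharedTransitionLayout {
        // Inside a NavEntry's content, reuse the scope Navigation 3 provides instead of
        // threading an AnimatedVisibilityScope through parameters yourself.
        Image(
            painter = painterResource(R.drawable.bot), // hypothetical drawable
            contentDescription = null,
            modifier = Modifier.sharedElement(
                rememberSharedContentState(key = "bot-image"), // illustrative key
                LocalNavAnimatedContentScope.current,
            ),
        )
    }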

    Learn more about Jetpack Navigation 3, currently in alpha.

    Learn more

    By combining the declarative power of Jetpack Compose, the camera capabilities of CameraX, the intelligent features of Gemini, and thoughtful adaptive design, Androidify is a personalized avatar creation experience that feels right at home on any Android device. You can find the full code sample at github.com/android/androidify where you can see the app in action and be inspired to build your own AI-powered app experiences.

    Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.




  • Android Developers Blog: Announcing Jetpack Navigation 3



    Posted by Don Turner – Developer Relations Engineer

    Navigating between screens in your app should be simple, shouldn’t it? However, building a robust, scalable, and delightful navigation experience can be a challenge. For years, the Jetpack Navigation library has been a key tool for developers, but as the Android UI landscape has evolved, particularly with the rise of Jetpack Compose, we recognized the need for a new approach.

    Today, we’re excited to introduce Jetpack Navigation 3, a new navigation library built from the ground up specifically for Compose. For brevity, we’ll just call it Nav3 from now on. This library embraces the declarative programming model and Compose state as fundamental building blocks.

    Why a new navigation library?

    The original Jetpack Navigation library (sometimes referred to as Nav2 as it’s on major version 2) was initially announced back in 2018, before AndroidX and before Compose. While it served its original goals well, we heard from you that it had several limitations when working with modern Compose patterns.

    One key limitation was that the back stack state could only be observed indirectly. This meant there could be two sources of truth, potentially leading to an inconsistent application state. Also, Nav2’s NavHost was designed to display only a single destination – the topmost one on the back stack – filling the available space. This made it difficult to implement adaptive layouts that display multiple panes of content simultaneously, such as a list-detail layout on large screens.

    illustration of single pane and two-pane layouts showing list and detail features

    Figure 1. Changing from single pane to multi-pane layouts can create navigational challenges

    Founding principles

    Nav3 is built upon principles designed to provide greater flexibility and developer control:

      • You own the back stack: You, the developer, not the library, own and control the back stack. It’s a simple list which is backed by Compose state. Specifically, Nav3 expects your back stack to be SnapshotStateList<T> where T can be any type you choose. You can navigate by adding or removing items (Ts), and state changes are observed and reflected by Nav3’s UI.
      • Get out of your way: We heard that you don’t like a navigation library to be a black box with inaccessible internal components and state. Nav3 is designed to be open and extensible, providing you with building blocks and helpful defaults. If you want custom navigation behavior you can drop down to lower layers and create your own components and customizations.
      • Pick your building blocks: Instead of embedding all behavior within the library, Nav3 offers smaller components that you can combine to create more complex functionality. We’ve also provided a “recipes book” that shows how to combine components to solve common navigation challenges.

    illustration of the Nav3 display observing changes to the developer-owned back stack

    Figure 2. The Nav3 display observes changes to the developer-owned back stack.

    Key features

      • Adaptive layouts: A flexible layout API (named Scenes) allows you to render multiple destinations in the same layout (for example, a list-detail layout on large screen devices). This makes it easy to switch between single and multi-pane layouts.
      • Modularity: The API design allows navigation code to be split across multiple modules. This improves build times and allows clear separation of responsibilities between feature modules.

        moving image demonstrating custom animations and predictive back features on a mobile device

        Figure 3. Custom animations and predictive back are easy to implement, and easy to override for individual destinations.

        Basic code example

        To give you an idea of how Nav3 works, here’s a short code sample.

        // Define the routes in your app and any arguments.
        data object Home
        data class Product(val id: String)
        
        // Create a back stack, specifying the route the app should start with.
        val backStack = remember { mutableStateListOf<Any>(Home) }
        
        // A NavDisplay displays your back stack. Whenever the back stack changes, the display updates.
        NavDisplay(
            backStack = backStack,
        
            // Specify what should happen when the user goes back
            onBack = { backStack.removeLastOrNull() },
        
            // An entry provider converts a route into a NavEntry which contains the content for that route.
            entryProvider = { route ->
                when (route) {
                    is Home -> NavEntry(route) {
                        Column {
                            Text("Welcome to Nav3")
                            Button(onClick = {
                                // To navigate to a new route, just add that route to the back stack
                                backStack.add(Product("123"))
                            }) {
                                Text("Click to navigate")
                            }
                        }
                    }
                    is Product -> NavEntry(route) {
                        Text("Product ${route.id} ")
                    }
                    else -> NavEntry(Unit) { Text("Unknown route: $route") }
                }
            }
        )
        

        Get started and provide feedback

        To get started, check out the developer documentation, plus the recipes repository which provides examples for:

          • common navigation UI, such as a navigation rail or bar
          • conditional navigation, such as a login flow
          • custom layouts using Scenes

        We plan to provide code recipes, documentation and blogs for more complex use cases in future.

        Nav3 is currently in alpha, which means that the API is liable to change based on feedback. If you have any issues, or would like to provide feedback, please file an issue.

        Nav3 offers a flexible and powerful foundation for building modern navigation in your Compose applications. We’re really excited to see what you build with it.

        Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.





  • What’s New in Jetpack Compose



    Posted by Nick Butcher – Product Manager

    At Google I/O 2025, we announced a host of feature, performance, stability, library, and tooling updates for Jetpack Compose, our recommended Android UI toolkit. With Compose you can build excellent apps that work across devices. Compose has matured a lot since it was first announced (at Google I/O 2019!), and we’re now seeing 60% of the top 1,000 apps in the Play Store, such as MAX and Google Drive, using and loving it.

    New Features

    Since I/O last year, Compose Bill of Materials (BOM) version 2025.05.01 adds new features such as:

      • Autofill support that lets users automatically insert previously entered personal information into text fields.
      • Auto-sizing text to smoothly adapt text size to a parent container size.
      • Visibility tracking for when you need high-performance information on a composable’s position in its root container, screen, or window.
      • Animate bounds modifier for beautiful automatic animations of a Composable’s position and size within a LookaheadScope.
      • Accessibility checks in tests that let you build a more accessible app UI through automated a11y testing.

    LookaheadScope {
        Box(
            Modifier
                .animateBounds(this@LookaheadScope)
                .width(if(inRow) 100.dp else 150.dp)
                .background(..)
                .border(..)
        )
    }
    

    moving image of animate bounds modifier in action

    For more details on these features, read What’s new in the Jetpack Compose April ’25 release and check out these talks from Google I/O.

    If you’re looking to try out new Compose functionality, the alpha BOM offers new features that we’re working on including:

      • Pausable Composition (see below)
      • Updates to LazyLayout prefetch
      • Context Menus
      • New modifiers: onFirstVisible, onVisibilityChanged, contentType
      • New Lint checks for frequently changing values and elements that should be remembered in composition

    Please try out the alpha features and provide feedback to help shape the future of Compose.

    Material Expressive

    At Google I/O, we unveiled Material Expressive, Material Design’s latest evolution that helps you make your products even more engaging and easier to use. It’s a comprehensive addition of new components, styles, motion and customization options that help you to build beautiful rich UIs. The Material3 library in the latest alpha BOM contains many of the new expressive components for you to try out.

    moving image of material expressive design example

    Learn more to start building with Material Expressive.

    Adaptive layouts library

    Developing adaptive apps across form factors including phones, foldables, tablets, desktop, cars and Android XR is now easier with the latest enhancements to the Compose adaptive layouts library. The stable 1.1 release adds support for predictive back gestures for smoother transitions and pane expansion for more flexible two pane layouts on larger screens. Furthermore, the 1.2 (alpha) release adds more flexibility for how panes are displayed, adding strategies for reflowing and levitating.

    moving image of compose adaptive layouts updates in the Google Play app

    Compose Adaptive Layouts Updates in the Google Play app

    Learn more about building adaptive android apps with Compose.

    Performance

    With each release of Jetpack Compose, we continue to prioritize performance improvements. The latest stable release includes significant rewrites and improvements to multiple sub-systems including semantics, focus and text optimizations. Best of all these are available to you simply by upgrading your Compose dependency; no code changes required.

    bar chart of internal benchmarks for performance run on a Pixel 3a device from January to May 2023 measured by jank rate

    Internal benchmark, run on a Pixel 3a

    We continue to work on further performance improvements, notable changes in the latest alpha BOM include:

      • Pausable Composition allows compositions to be paused, and their work split up over several frames.
      • Background text prefetch enables text layout caches to be pre-warmed on a background thread, enabling faster text layout.
      • LazyLayout prefetch improvements enabling lazy layouts to be smarter about how much content to prefetch, taking advantage of pausable composition.

    Together these improvements eliminate nearly all jank in an internal benchmark.

    Stability

    We’ve heard from you that upgrading your Compose dependency can be challenging, encountering bugs or behaviour changes that prevent you from staying on the latest version. We’ve invested significantly in improving the stability of Compose, working closely with the many Google app teams building with Compose to detect and prevent issues before they even make it to a release.

    Google apps develop against and release with snapshot builds of Compose; as such, Compose is tested against the hundreds of thousands of Google app tests and any Compose issues are immediately actioned by our team. We have recently invested in increasing the cadence of updating these snapshots and now update them daily from Compose tip-of-tree, which means we’re receiving feedback faster, and are able to resolve issues long before they reach a public release of the library.

    Jetpack Compose also relies on @Experimental annotations to mark APIs that are subject to change. We heard your feedback that some APIs have remained experimental for a long time, reducing your confidence in the stability of Compose. We have invested in stabilizing experimental APIs to provide you a more solid API surface, and reduced the number of experimental APIs by 32% in the last year.

    We have also heard that it can be hard to debug Compose crashes when your own code does not appear in the stack trace. In the latest alpha BOM, we have added a new opt-in feature to provide more diagnostic information. Note that this does not currently work with minified builds and comes at a performance cost, so we recommend only using this feature in debug builds.

    class App : Application() {
       override fun onCreate() {
            // Enable only for debug flavor to avoid perf impact in release
            Composer.setDiagnosticStackTraceEnabled(BuildConfig.DEBUG)
       }
    }
    

    Libraries

    We know that to build great apps, you need Compose integration in the libraries that interact with your app’s UI.

    A core library that powers any Compose app is Navigation. You told us that you often encountered limitations when managing state hoisting and directly manipulating the back stack with the current Compose Navigation solution. We went back to the drawing-board and completely reimagined how a navigation library should integrate with the Compose mental model. We’re excited to introduce Navigation 3, a new artifact designed to empower you with greater control and simplify complex navigation flows.

    We’re also investing in Compose support for CameraX and Media3, making it easier to integrate camera capture and video playback into your UI with Compose idiomatic components.

    @Composable
    private fun VideoPlayer(
        player: Player?, // from media3
        modifier: Modifier = Modifier
    ) {
        Box(modifier) {
            PlayerSurface(player) // from media3-ui-compose
            player?.let {
                // custom play-pause button UI
                val playPauseButtonState = rememberPlayPauseButtonState(it) // from media3-ui-compose
                MyPlayPauseButton(playPauseButtonState, Modifier.align(BottomEnd).padding(16.dp))
            }
        }
    }
    

    To learn more, see the media3 Compose documentation and the CameraX samples.

    Tools

    We continue to improve the Android Studio tools for creating Compose UIs. The latest Narwhal canary includes:

      • Resizable Previews instantly show you how your Compose UI adapts to different window sizes
      • Preview navigation improvements using clickable names and components
      • Studio Labs 🧪: Compose preview generation with Gemini, to quickly generate a preview
      • Studio Labs 🧪: Transform UI with Gemini, to change your UI with natural language, directly from the preview
      • Studio Labs 🧪: Image attachment in Gemini, to generate Compose code from images

    For more information read What’s new in Android development tools.

    moving image of resizable preview in Jetpack Compose

    Resizable Preview

    New Compose Lint checks

    The Compose alpha BOM introduces two new annotations and associated lint checks to help you write correct and performant Compose code. The @FrequentlyChangingValue annotation and FrequentlyChangedStateReadInComposition lint check warn in situations where function calls or property reads in composition might cause frequent recompositions, for example when reading scroll position values or animating values. The @RememberInComposition annotation and RememberInCompositionDetector lint check warn in situations where constructors, functions, and property getters are called directly inside composition (e.g. the TextFieldState constructor) without being remembered.
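
    For example, the TextFieldState case the check is aimed at looks roughly like this:

    @Composable
    fun SearchField() {
        // val state = TextFieldState()       // flagged: creates a new instance on every recomposition
        val state = rememberTextFieldState()  // remembered across recompositions
        BasicTextField(state = state)
    }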

    Happy Composing

    We continue to invest in providing the features, performance, stability, libraries and tools that you need to build excellent apps. We value your input so please share feedback on our latest updates or what you’d like to see next.

    Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.




  • Common media processing operations with Jetpack Media3 Transformer



    Posted by Nevin Mital – Developer Relations Engineer, and Kristina Simakova – Engineering Manager

    Android users have demonstrated an increasing desire to create, personalize, and share video content online, whether to preserve their memories or to make people laugh. As such, media editing is a cornerstone of many engaging Android apps, and historically developers have often relied on external libraries to handle operations such as Trimming and Resizing. While these solutions are powerful, integrating and managing external library dependencies can introduce complexity and lead to challenges with managing performance and quality.

    The Jetpack Media3 Transformer APIs offer a native Android solution that streamlines media editing with fast performance, extensive customizability, and broad device compatibility. In this blog post, we’ll walk through some of the most common editing operations with Transformer and discuss its performance.

    Getting set up with Transformer

    To get started with Transformer, check out our Getting Started documentation for details on how to add the dependency to your project and a basic understanding of the workflow when using Transformer. In a nutshell, you’ll:

      • Create one or many MediaItem instances from your video file(s), then
      • Apply item-specific edits to them by building an EditedMediaItem for each MediaItem,
      • Create a Transformer instance configured with settings applicable to the whole exported video,
      • and finally start the export to save your applied edits to a file.

    Aside: You can also use a CompositionPlayer to preview your edits before exporting, but this is out of scope for this blog post, as this API is still a work in progress. Please stay tuned for a future post!

    Here’s what this looks like in code:

    val mediaItem = MediaItem.Builder().setUri(mediaItemUri).build()
    val editedMediaItem = EditedMediaItem.Builder(mediaItem).build()
    val transformer = 
      Transformer.Builder(context)
        .addListener(/* Add a Transformer.Listener instance here for completion events */)
        .build()
    transformer.start(editedMediaItem, outputFilePath)
    

    Transcoding, Trimming, Muting, and Resizing with the Transformer API

    Let’s now take a look at four of the most common single-asset media editing operations, starting with Transcoding.

    Transcoding is the process of re-encoding an input file into a specified output format. For this example, we’ll request the output to have video in HEVC (H265) and audio in AAC. Starting with the code above, here are the lines that change:

    val transformer = 
      Transformer.Builder(context)
        .addListener(...)
        .setVideoMimeType(MimeTypes.VIDEO_H265)
        .setAudioMimeType(MimeTypes.AUDIO_AAC)
        .build()
    

    Many of you may already be familiar with FFmpeg, a popular open-source library for processing media files, so we’ll also include FFmpeg commands for each example to serve as a helpful reference. Here’s how you can perform the same transcoding with FFmpeg:

    $ ffmpeg -i $inputVideoPath -c:v libx265 -c:a aac $outputFilePath
    

    The next operation we’ll try is Trimming.

    Specifically, we’ll set Transformer up to trim the input video from the 3 second mark to the 8 second mark, resulting in a 5 second output video. Starting again from the code in the “Getting set up” section above, here are the lines that change:

    // Configure the trim operation by adding a ClippingConfiguration to
    // the media item
    val clippingConfiguration =
       MediaItem.ClippingConfiguration.Builder()
         .setStartPositionMs(3000)
         .setEndPositionMs(8000)
         .build()
    val mediaItem =
       MediaItem.Builder()
         .setUri(mediaItemUri)
         .setClippingConfiguration(clippingConfiguration)
         .build()
    
    // Transformer also has a trim optimization feature we can enable.
    // This will prioritize Transmuxing over Transcoding where possible.
    // See more about Transmuxing further down in this post.
    val transformer = 
      Transformer.Builder(context)
        .addListener(...)
        .experimentalSetTrimOptimizationEnabled(true)
        .build()
    

    With FFmpeg:

    $ ffmpeg -ss 00:00:03 -i $inputVideoPath -t 00:00:05 $outputFilePath
    

    Next, we can mute the audio in the exported video file.

    val editedMediaItem = 
      EditedMediaItem.Builder(mediaItem)
        .setRemoveAudio(true)
        .build()
    

    The corresponding FFmpeg command:

    $ ffmpeg -i $inputVideoPath -c copy -an $outputFilePath
    

    And for our final example, we’ll try resizing the input video by scaling it down to half its original height and width.

    val scaleEffect = 
      ScaleAndRotateTransformation.Builder()
        .setScale(0.5f, 0.5f)
        .build()
    val editedMediaItem =
      EditedMediaItem.Builder(mediaItem)
        .setEffects(
          /* audio */ Effects(emptyList(), 
          /* video */ listOf(scaleEffect))
        )
        .build()
    

    An FFmpeg command could look like this:

    $ ffmpeg -i $inputVideoPath -filter:v "scale=w=trunc(iw/4)*2:h=trunc(ih/4)*2" $outputFilePath
    

    Of course, you can also combine these operations to apply multiple edits on the same video, but hopefully these examples serve to demonstrate that the Transformer APIs make configuring these edits simple.
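
    For instance, the trim, mute, and resize edits above can all be applied to the same item before starting the export:

    val clippingConfiguration =
      MediaItem.ClippingConfiguration.Builder()
        .setStartPositionMs(3000)
        .setEndPositionMs(8000)
        .build()
    val mediaItem =
      MediaItem.Builder()
        .setUri(mediaItemUri)
        .setClippingConfiguration(clippingConfiguration) // trim
        .build()
    val scaleEffect =
      ScaleAndRotateTransformation.Builder()
        .setScale(0.5f, 0.5f)
        .build()
    val editedMediaItem =
      EditedMediaItem.Builder(mediaItem)
        .setRemoveAudio(true) // mute
        .setEffects(Effects(/* audio */ emptyList(), /* video */ listOf(scaleEffect))) // resize
        .build()
    transformer.start(editedMediaItem, outputFilePath)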

    Transformer API Performance results

    Here are some benchmarking measurements for each of the 4 operations taken with the Stopwatch API, running on a Pixel 9 Pro XL device:

    (Note that performance for operations like these can depend on a variety of factors, such as the current load the device is under, so the numbers below should be taken as rough estimates.)

    Input video format: 10s 720p H264 video with AAC audio

    • Transcoding to H265 video and AAC audio: ~1300ms
    • Trimming video to 00:03-00:08: ~2300ms
    • Muting audio: ~200ms
    • Resizing video to half height and width: ~1200ms

    Input video format: 25s 360p VP8 video with Vorbis audio

    • Transcoding to H265 video and AAC audio: ~3400ms
    • Trimming video to 00:03-00:08: ~1700ms
    • Muting audio: ~1600ms
    • Resizing video to half height and width: ~4800ms

    Input video format: 4s 8k H265 video with AAC audio

    • Transcoding to H265 video and AAC audio: ~2300ms
    • Trimming video to 00:03-00:08: ~1800ms
    • Muting audio: ~2000ms
    • Resizing video to half height and width: ~3700ms

    One technique Transformer uses to speed up editing operations is to prioritize transmuxing for basic video edits where possible. Transmuxing refers to the process of repackaging video streams without re-encoding, which ensures high-quality output and significantly faster processing times.

    When not possible, Transformer falls back to transcoding, a process that involves first decoding video samples into raw data, then re-encoding them for storage in a new container. Here are some of these differences:

    Transmuxing

      • Transformer’s preferred approach when possible – a quick transformation that preserves elementary streams.
      • Only applicable to basic operations, such as rotating, trimming, or container conversion.
      • No quality loss or bitrate change.

    Transmux

    Transcoding

      • Transformer’s fallback approach in cases when Transmuxing isn’t possible – Involves decoding and re-encoding elementary streams.
      • More extensive modifications to the input video are possible.
      • Loss in quality due to re-encoding, but can achieve a desired bitrate target.

    Transcode

    We are continuously implementing further optimizations, such as the recently introduced experimentalSetTrimOptimizationEnabled setting that we used in the Trimming example above.

    A trim is usually performed by re-encoding all the samples in the file, but since encoded media samples are stored chronologically in their container, we can improve efficiency by only re-encoding the group of pictures (GOP) between the start point of the trim and the first keyframes at/after the start point, then stream-copying the rest.

    Since we only decode and encode a fixed portion of any file, the encoding latency is roughly constant, regardless of what the input video duration is. For long videos, this improved latency is dramatic. The optimization relies on being able to stitch part of the input file with newly-encoded output, which means that the encoder’s output format and the input format must be compatible.

    If the optimization fails, Transformer automatically falls back to normal export.

    What’s next?

    As part of Media3, Transformer is a native solution with low integration complexity, is tested on and ensures compatibility with a wide variety of devices, and is customizable to fit your specific needs.

    To dive deeper, you can explore Media3 Transformer documentation, run our sample apps, or learn how to complement your media editing pipeline with Jetpack Media3. We’ve already seen app developers benefit greatly from adopting Transformer, so we encourage you to try them out yourself to streamline your media editing workflows and enhance your app’s performance!




  • Health Connect Jetpack SDK is now in beta and new feature updates



    Posted by Brenda Shaw – Health & Home Partner Engineering Technical Writer

    At Google, we are committed to empowering developers as they build exceptional health and fitness experiences. Core to that commitment is Health Connect, an Android platform that allows health and fitness apps to store and share the same on-device data. Devices running Android 14, or those that have the pre-installed APK, automatically include Health Connect in Settings. For pre-Android 14 devices, Health Connect is available for download from the Play Store.

    We’re excited to announce significant Health Connect updates like the Jetpack SDK Beta, new datatypes and new permissions that will enable richer, more insightful app functionalities.

    Jetpack SDK is now in Beta

    We are excited to announce the beta release of our Jetpack SDK! Since its initial release, we’ve dedicated significant effort to improving data completeness, with a particular focus on enriching the metadata associated with each data point.

    In the latest SDK, we’re introducing two key changes designed to ensure richer metadata and unlock new possibilities for you and your users:

    Make Recording Method Mandatory

    To deliver more accurate and insightful data, the Beta introduces a requirement to specify one of four recording methods when writing data to Health Connect. This ensures increased data clarity, enhanced data analysis and improved user experience:

    If your app currently does not set metadata when creating a record:

    Before

    StepsRecord(
        count = 888,
        startTime = START_TIME,
        endTime = END_TIME,
    ) // error: metadata is not provided
    

    After

    StepsRecord(
        count = 888,
        startTime = START_TIME,
        endTime = END_TIME,
        metadata = Metadata.manualEntry()
    )
    

    If your app currently calls Metadata constructor when creating a record:

    Before

    StepsRecord(
        count = 888,
        startTime = START_TIME,
        endTime = END_TIME,
        metadata =
            Metadata(
                clientRecordId = "client id",
                recordingMethod = RECORDING_METHOD_MANUAL_ENTRY,
            ), // error: Metadata constructor not found
    )
    

    After

    StepsRecord(
        count = 888,
        startTime = START_TIME,
        endTime = END_TIME,
        metadata = Metadata.manualEntry(clientRecordId = "client id"),
    )
    

    Make Device Type Mandatory

    You will be required to specify the device type when creating a Device object. A Device object is required for automatically (RECORDING_METHOD_AUTOMATICALLY_RECORDED) or actively (RECORDING_METHOD_ACTIVELY_RECORDED) recorded data.

    Before

    Device() // error: type not provided
    

    After

    Device(type = Device.Companion.TYPE_PHONE)
    

    We believe these updates will significantly improve the quality of data within your applications and empower you to create more insightful user experiences. We encourage you to explore the Jetpack SDK Beta, review the updated Metadata page, and familiarize yourself with these changes.

    New background reads permission

    To enable richer, background-driven health and fitness experiences while maintaining user trust, Health Connect now features a dedicated background reads permission.

    This permission allows your app to access Health Connect data while running in the background, provided the user grants explicit consent. Users retain full control, with the ability to manage or revoke this permission at any time via Health Connect settings.

    Let your app read health data even in the background with the new Background Reads permission. Declare the following permission in your manifest file:

    <application>
      <uses-permission android:name="android.permission.health.READ_HEALTH_DATA_IN_BACKGROUND" />
    ...
    </application>
    

    Use the Feature Availability API to check if the user has the background read feature available, according to the version of Health Connect they have on their devices.
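
    A hedged sketch of that check (the features accessor and feature constants are from the beta SDK; treat the exact names as assumptions):

    val healthConnectClient = HealthConnectClient.getOrCreate(context)
    val status = healthConnectClient.features
        .getFeatureStatus(HealthConnectFeatures.FEATURE_READ_HEALTH_DATA_IN_BACKGROUND)
    if (status == HealthConnectFeatures.FEATURE_STATUS_AVAILABLE) {
        // Background reads can be requested and used on this device.
    }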

    Allow your app to read historic data

    By default, when granted read permission, your app can access historical data from other apps for the preceding 30 days from the initial permission grant. To enable access to data beyond this 30-day window, Health Connect introduces the PERMISSION_READ_HEALTH_DATA_HISTORY permission. This allows your app to provide new users with a comprehensive overview of their health and wellness history.
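
    A minimal sketch of requesting history access alongside a regular read permission, using the standard Health Connect permission contract (the record type is just an example):

    val permissions = setOf(
        HealthPermission.getReadPermission(StepsRecord::class), // example read permission
        HealthPermission.PERMISSION_READ_HEALTH_DATA_HISTORY,
    )
    // Inside an Activity or Fragment:
    val requestPermissions = registerForActivityResult(
        PermissionController.createRequestPermissionResultContract()
    ) { granted ->
        if (HealthPermission.PERMISSION_READ_HEALTH_DATA_HISTORY in granted) {
            // The app can now read data older than 30 days.
        }
    }
    requestPermissions.launch(permissions)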

    Users are in control of their data with both background reads and history reads. Both capabilities require developers to declare the respective permissions, and users must grant the permission before developers can access their data. Even after granting permission, users have the option of revoking access at any time from Health Connect settings.

    Additional data access and types

    Health Connect now offers expanded data types, enabling developers to build richer user experiences and provide deeper insights. Check out the following new data types:

      • Exercise Routes allows users to share exercise routes with other apps for a seamless synchronized workout. By allowing users to share all routes or one route, their associated exercise activities and maps for their workouts will be synced with the fitness apps of their choice.

    Fitness app asking permission to access exercise route in Health Connect

      • The skin temperature data type measures peripheral body temperature, unlocking insights around sleep quality, reproductive health, and the potential onset of illness.
      • Health Connect also provides a planned exercise data type to enable training apps to write training plans and workout apps to read training plans. Recorded exercises (workouts) can be read back for personalized performance analysis to help users achieve their training goals. Access granular workout data, including sessions, blocks, and steps, for comprehensive performance analysis and personalized feedback.

    These new data types empower developers to create more connected and insightful health and fitness applications, providing users with a holistic view of their well-being.

    To learn more about all new APIs and bug fixes, check out the full release notes.

    Get started with the Health Connect Jetpack SDK

    Whether you are just getting started with Health Connect or are looking to implement the latest features, there are many ways to learn more and have your voice heard.

      • Subscribe to our newsletter: Stay up-to-date with the latest news, announcements, and resources from Google Health and Fitness. Subscribe to our Health and Fitness Google Developer Newsletter and get the latest updates delivered straight to your inbox.
      • Check out our Health Connect developer guide: The Health and Fitness Developer Center is your one-stop-shop for building health and fitness apps on Android – including a robust guide for getting started with Health Connect.
      • Report an issue: Encountered a bug or technical issue? Report it directly to our team through the Issue Tracker so we can investigate and resolve it. You can also request a feature or provide feedback with Issue Tracker.

    We can’t wait to see what you create!




  • Jetpack WindowManager 1.4 is stable



    Posted by Xiaodao Wu – Developer Relations Engineer

    Jetpack WindowManager keeps getting better. WindowManager gives you tools to build adaptive apps that work seamlessly across all kinds of large screen devices. Version 1.4, which is stable now, introduces new features that make multi-window experiences even more powerful and flexible. While Jetpack Compose is still the best way to create app layouts for different screen sizes, 1.4 makes some big improvements to activity embedding, including activity stack pinning, pane expansion, and dialog full-screen dim. Multi-activity apps can easily take advantage of all these great features.

    WindowManager 1.4 introduces a range of enhancements. Here are some of the highlights.

    WindowSizeClass

    We’ve updated the WindowSizeClass API to support custom values. We changed the API shape to make it easy and extensible to support custom values and add new values in the future. The high level changes are as follows:

      • Opened the constructor to take in minWidthDp and minHeightDp parameters so you can create your own window size classes
      • Added convenience methods for checking breakpoint validity
      • Deprecated WindowWidthSizeClass and WindowHeightSizeClass in favor of WindowSizeClass#isWidthAtLeastBreakpoint() and WindowSizeClass#isHeightAtLeastBreakpoint() respectively

    Here’s a migration example:

    // old 
    
    val sizeClass = WindowSizeClass.compute(widthDp, heightDp)
    when (sizeClass.widthSizeClass) {
      COMPACT -> doCompact()
      MEDIUM -> doMedium()
      EXPANDED -> doExpanded()
      else -> doDefault()
    }
    
    // new
    val sizeClass = WindowSizeClass.BREAKPOINTS_V1
                                   .computeWindowSizeClass(widthDp, heightDp)
    
    when {
      sizeClass.isWidthAtLeastBreakpoint(WIDTH_DP_EXPANDED_LOWER_BOUND) -> {
        doExpanded()
      }
      sizeClass.isWidthAtLeastBreakpoint(WIDTH_DP_MEDIUM_LOWER_BOUND) -> {
        doMedium()
      }
      else -> {
        doCompact()
      }
    }
    

    Some things to note in the new API:

      • The order of the when branches should go from largest to smallest to support custom values from developers or new values in the future
      • The default branch should be treated as the smallest window size class

    Activity embedding

    Activity stack pinning

    Activity stack pinning provides a way to keep an activity stack always on screen, no matter what else is happening in your app. This new feature lets you pin an activity stack to a specific window, so the top activity stays visible even when the user navigates to other parts of the app in a different window. This is perfect for things like live chats or video players that you want to keep on screen while users explore other content.

    private fun pinActivityStackExample(taskId: Int) {
     val splitAttributes: SplitAttributes = SplitAttributes.Builder()
       .setSplitType(SplitAttributes.SplitType.ratio(0.66f))
       .setLayoutDirection(SplitAttributes.LayoutDirection.LEFT_TO_RIGHT)
       .build()
    
     val pinSplitRule = SplitPinRule.Builder()
       .setDefaultSplitAttributes(splitAttributes)
       .build()
    
     SplitController.getInstance(applicationContext).pinTopActivityStack(taskId, pinSplitRule)
    }
    

    Pane expansion

    The new pane expansion feature, also known as interactive divider, lets you create a visual separation between two activities in split-screen mode. You can make the pane divider draggable so users can resize the panes – and the activities in the panes – on the fly. This gives users control over how they want to view the app’s content.

    val splitAttributesBuilder: SplitAttributes.Builder = SplitAttributes.Builder()
       .setSplitType(SplitAttributes.SplitType.ratio(0.33f))
       .setLayoutDirection(SplitAttributes.LayoutDirection.LEFT_TO_RIGHT)
    
    if (WindowSdkExtensions.getInstance().extensionVersion >= 6) {
       splitAttributesBuilder.setDividerAttributes(
           DividerAttributes.DraggableDividerAttributes.Builder()
               .setColor(getColor(context, R.color.divider_color))
               .setWidthDp(4)
               .setDragRange(
                   DividerAttributes.DragRange.DRAG_RANGE_SYSTEM_DEFAULT)
               .build()
       )
    }
    val splitAttributes: SplitAttributes = splitAttributesBuilder.build()
    

    Dialog full-screen dim

    WindowManager 1.4 gives you more control over how dialogs dim the background. With dialog full-screen dim, you can choose to dim just the container where the dialog appears or the entire task window for a unified UI experience. The entire app window dims by default when a dialog opens (see EmbeddingConfiguration.DimAreaBehavior.ON_TASK). To dim only the container of the activity that opened the dialog, use EmbeddingConfiguration.DimAreaBehavior.ON_ACTIVITY_STACK. This gives you more flexibility in designing dialogs and makes for a smoother, more coherent user experience. Temu is among the first developers to integrate this feature; the full-screen dialog dim has reduced invalid screen touches by about 5%.

    Customised shopping cart reminder with dialog full-screen dim in the Temu app

    Customised shopping cart reminder with dialog full-screen dim in Temu.
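
    A hedged sketch of opting into the activity-stack dim behavior (the builder and setter names beyond DimAreaBehavior are assumptions):

    val embeddingConfiguration = EmbeddingConfiguration.Builder()
        .setDimAreaBehavior(EmbeddingConfiguration.DimAreaBehavior.ON_ACTIVITY_STACK)
        .build()
    ActivityEmbeddingController.getInstance(context)
        .setEmbeddingConfiguration(embeddingConfiguration)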

    Enhanced posture support

    WindowManager 1.4 makes building apps that work flawlessly on foldables straightforward by providing more information about the physical capabilities of the device. The new WindowInfoTracker#supportedPostures API lets you know if a device supports tabletop mode, so you can optimize your app’s layout and features accordingly.

    val currentSdkVersion = WindowSdkExtensions.getInstance().extensionVersion
    val message =
        if (currentSdkVersion >= 6) {
            val supportedPostures =
                WindowInfoTracker.getOrCreate(LocalContext.current).supportedPostures
            buildString {
                append(supportedPostures.isNotEmpty())
                if (supportedPostures.isNotEmpty()) {
                    append(" ")
                    append(
                        supportedPostures.joinToString(separator = ",", prefix = "(", postfix = ")")
                    )
                }
            }
        } else {
            "N/A (WindowSDK version 6 is needed, current version is $currentSdkVersion)"
        }
    

    Other API changes

    WindowManager 1.4 includes several API changes and additions to support the new features. Notable changes include:

      • Stable and no longer experimental APIs:
        • ActivityEmbeddingController#invalidateVisibleActivityStacks
        • ActivityEmbeddingController#getActivityStack
        • SplitController#updateSplitAttributes
      • API added to set activity embedding animation background:
        • SplitAttributes.Builder#setAnimationParams
      • API to get updated WindowMetrics information:
        • ActivityEmbeddingController#embeddedActivityWindowInfo
      • API to finish all activities in an activity stack:
        • ActivityEmbeddingController#finishActivityStack

    How to get started

    To start using Jetpack WindowManager 1.4 in your Android projects, update your app dependencies in build.gradle.kts to the latest stable version:

    dependencies {
        implementation("androidx.window:window:1.4.0-rc01")
        ...  
        // or, if you're using the WindowManager testing library:
        testImplementation("androidx.window:window-testing:1.4.0-rc01")
    }
    

    Happy coding!




  • What’s new in the Jetpack Compose April ’25 release



    Posted by Jolanda Verhoef – Developer Relations Engineer

    Today, as part of the Compose April ‘25 Bill of Materials, we’re releasing version 1.8 of Jetpack Compose, Android’s modern, native UI toolkit, used by many developers. This release contains new features like autofill, various text improvements, visibility tracking, and new ways to animate a composable’s size and location. It also stabilizes many experimental APIs and fixes a number of bugs.

    To use today’s release, upgrade your Compose BOM version to 2025.04.01:

    implementation(platform("androidx.compose:compose-bom:2025.04.01"))
    

    Note: If you are not using the Bill of Materials, make sure to upgrade Compose Foundation and Compose UI at the same time. Otherwise, autofill will not work correctly.

    Autofill

    Autofill is a service that simplifies data entry. It enables users to fill out forms, login screens, and checkout processes without manually typing in every detail. Now, you can integrate this functionality into your Compose applications.

    Setting up Autofill in your Compose text fields is straightforward:

    TextField(
      state = rememberTextFieldState(),
      modifier = Modifier.semantics {
        contentType = ContentType.Username 
      }
    )
    

    For full details on how to implement autofill in your application, see the Autofill in Compose documentation.

    Text

    When placing text inside a container, you can now use the autoSize parameter in BasicText to let the text size automatically adapt to the container size:

    Box {
        BasicText(
            text = "Hello World",
            maxLines = 1,
            autoSize = TextAutoSize.StepBased()
        )
    }
    

    moving image of Hello World text inside a container

    You can customize sizing by setting a minimum and/or maximum font size and defining a step size. Compose Foundation 1.8 contains this new BasicText overload, with Material 1.4 to follow soon with an updated Text overload.

    Furthermore, Compose 1.8 enhances text overflow handling with new TextOverflow.StartEllipsis or TextOverflow.MiddleEllipsis options, which allow you to display ellipses at the beginning or middle of a text line.

    val text = "This is a long text that will overflow"
    Column(Modifier.width(200.dp)) {
      Text(text, maxLines = 1, overflow = TextOverflow.Ellipsis)
      Text(text, maxLines = 1, overflow = TextOverflow.StartEllipsis)
      Text(text, maxLines = 1, overflow = TextOverflow.MiddleEllipsis)
    }
    

    text overflow handling displaying ellipses at the beginning and middle of a text line

    And finally, we’re expanding support for HTML formatting in AnnotatedString, with the addition of bulleted lists:

    Text(
      AnnotatedString.fromHtml(
        """
        <h1>HTML content</h1>
        <ul>
          <li>Hello,</li>
          <li>World</li>
        </ul>
        """.trimIndent()
      )
    )
    

    a bulleted list of two items

    Visibility tracking

    Compose UI 1.8 introduces a new modifier: onLayoutRectChanged. This API covers many of the same use cases as the existing onGloballyPositioned modifier, but with much less overhead. The onLayoutRectChanged modifier can debounce and throttle the callback as the use case demands, which helps with performance when it’s added onto an item in a LazyColumn or LazyRow.
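
    A minimal sketch, passing the throttle (100 ms) and debounce (0 ms) arguments positionally as in the Androidify snippet earlier on this page:

    // Report an item impression once it has non-empty bounds in the root.
    var reported by remember { mutableStateOf(false) }
    Box(
        Modifier.onLayoutRectChanged(100, 0) { bounds ->
            if (!reported && bounds.boundsInRoot.height > 0) {
                reported = true
                // e.g. log the impression for this list item
            }
        },
    )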

    This new API unlocks features that depend on a composable’s visibility on screen. Compose 1.9 will add higher-level abstractions to this low-level API to simplify common use cases.

    Animate composable bounds

    Last year we introduced shared element transitions, which smoothly animate content in your apps. The 1.8 Animation module graduates LookaheadScope to stable, brings numerous performance and stability improvements, and adds a new modifier, animateBounds. When used inside a LookaheadScope, this modifier automatically animates its composable’s size and position on screen when those change:

    Box(
      Modifier
        .width(if(expanded) 180.dp else 110.dp)
        .offset(x = if (expanded) 0.dp else 100.dp)
        .animateBounds(lookaheadScope = this@LookaheadScope)
        .background(Color.LightGray, shape = RoundedCornerShape(12.dp))
        .height(50.dp)
    ) {
      Text("Layout Content", Modifier.align(Alignment.Center))
    }
    

    a moving image depicting animate composable bounds

    Increased API stability

    Jetpack Compose has utilized @Experimental annotations to mark APIs that are liable to change across releases, for features that require more than a library’s alpha period to stabilize. We have heard your feedback that a number of features have been marked as experimental for some time with no changes, contributing to a sense of instability. We are actively looking at stabilizing existing experimental APIs—in the UI and Foundation modules, we have reduced the experimental APIs from 172 in the 1.7 release to 70 in the 1.8 release. We plan to continue this stabilization trend across modules in future releases.

    Deprecation of contextual flow rows and columns

    As part of the work to reduce experimental annotations, we identified APIs added in recent releases that are less than optimal solutions for their use cases. This has led to the decision to deprecate the experimental ContextualFlowRow and ContextualFlowColumn APIs, added in Foundation 1.7. If you need the deprecated functionality, our recommendation for now is to copy over the implementation and adapt it as needed, while we work on a plan for future components that can cover these functionalities better.

    The related APIs FlowRow and FlowColumn are now stable; however, the new overflow parameter that was added in the last release is now deprecated.

    Improvements and fixes for core features

    In response to developer feedback, we have shipped some particularly in-demand features and bug fixes in our core libraries:

      • Make dialogs go edge to edge: When displayed full screen, dialogs now take into account the full size of the screen and will draw behind system bars.
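
    As a sketch, a dialog that takes advantage of this draws edge to edge when the platform sizing and window-fitting flags are disabled (the DialogProperties flags shown are existing API; the callback and content are illustrative):

    Dialog(
        onDismissRequest = onDismiss, // assumed callback
        properties = DialogProperties(
            usePlatformDefaultWidth = false,  // let the dialog fill the window
            decorFitsSystemWindows = false,   // draw behind system bars
        ),
    ) {
        Box(Modifier.fillMaxSize()) { /* dialog content */ }
    }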

    Get started!

    We’re grateful for all of the bug reports and feature requests submitted to our issue tracker – they help us to improve Compose and build the APIs you need. Continue providing your feedback, and help us make Compose better.

    Happy composing!


