Tag: Building

  • Top 3 updates for building excellent, adaptive apps at Google I/O ‘25



    Posted by Mozart Louis – Developer Relations Engineer

    Today, Android is launching a few updates across the platform! This includes the start of Android 16’s rollout, with details for both developers and users, a Developer Preview for enhanced Android desktop experiences with connected displays, and updates for Android users across Google apps and more, plus the June Pixel Drop. We’re also recapping all the Google I/O updates for Android developers focused on building excellent, adaptive Android apps.

    Google I/O 2025 brought exciting advancements to Android, equipping you with essential knowledge and powerful tools you need to build outstanding, user-friendly applications that stand out.

    If you missed any of the key #GoogleIO25 updates and just saw the release of Android 16 or you’re ready to dive into building excellent adaptive apps, our playlist is for you. Learn how to craft engaging experiences with Live Updates in Android 16, capture video effortlessly with CameraX, process it efficiently using Media3’s editing tools, and engage users across diverse platforms like XR, Android for Cars, Android TV, and Desktop.

    https://www.youtube.com/watch?v=KiYHuY3hiZc

    Check out the Google I/O playlist for all the session details.

    Here are three key announcements directly influencing how you can craft deeply engaging experiences and truly connect with your users:

    #1: Build adaptively to unlock 500 million devices

    https://www.youtube.com/watch?v=15oPNK1W0Tw

    In today’s diverse device ecosystem, users expect their favorite applications to function seamlessly across various form factors, including phones, tablets, Chromebooks, automobiles, and emerging XR glasses and headsets. Our recommended approach for developing applications that excel on each of these surfaces is to create a single, adaptive application. This strategy avoids the need to rebuild the application for every screen size, shape, or input method, ensuring a consistent and high-quality user experience across all devices.

    The talk emphasizes that you don’t need to rebuild apps for each form factor. Instead, small, iterative changes can unlock an app’s potential.

    Here are some resources we encourage you to use in your apps:

    New feature support in Jetpack Compose Adaptive Libraries

      • We’re continuing to make it as easy as possible to build adaptively with the Jetpack Compose Adaptive Libraries, with new features in 1.1 like pane expansion and predictive back. By utilizing canonical layout patterns such as List Detail or Supporting Pane layouts and integrating them into your app code, your application will automatically adjust and reflow when resized. A minimal sketch of the List Detail pattern follows.
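
    For illustration, here is a minimal sketch of the List Detail canonical layout built with the Material 3 adaptive components (the composable names and item type are hypothetical, and these adaptive APIs are still evolving, so treat this as a sketch rather than definitive usage):

     import androidx.compose.material3.adaptive.ExperimentalMaterial3AdaptiveApi
     import androidx.compose.material3.adaptive.layout.AnimatedPane
     import androidx.compose.material3.adaptive.layout.ListDetailPaneScaffold
     import androidx.compose.material3.adaptive.navigation.rememberListDetailPaneScaffoldNavigator
     import androidx.compose.runtime.Composable

     @OptIn(ExperimentalMaterial3AdaptiveApi::class)
     @Composable
     fun AdaptiveContactsScreen() {
         // The navigator decides how many panes are visible for the current window size.
         val navigator = rememberListDetailPaneScaffoldNavigator<String>()
         ListDetailPaneScaffold(
             directive = navigator.scaffoldDirective,
             value = navigator.scaffoldValue,
             listPane = {
                 AnimatedPane {
                     // Hypothetical list composable; on narrow windows this is the only pane shown.
                     ContactList(onContactClick = { /* navigate to the detail pane */ })
                 }
             },
             detailPane = {
                 AnimatedPane {
                     // Hypothetical detail composable; shown side by side when space allows.
                     ContactDetail()
                 }
             },
         )
     }

     @Composable
     fun ContactList(onContactClick: (String) -> Unit) { /* hypothetical list UI */ }

     @Composable
     fun ContactDetail() { /* hypothetical detail UI */ }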

    Navigation 3

      • The alpha release of the Navigation 3 library now supports displaying multiple panes. This eliminates the need to alter your navigation destination setup for separate list and detail views. Instead, you can adjust the setup to concurrently render multiple destinations when sufficient screen space is available.

    Updates to Window Manager Library

      • androidx.window 1.5 introduces two new window size classes for expanded widths, facilitating better layout adaptation for large tablets and desktops. A width of 1600dp or more is now categorized as “extra large,” while widths between 1200dp and 1600dp are classified as “large.” These subdivisions offer more granularity for developers to optimize their applications for a wider range of window sizes. A rough sketch of using these breakpoints is shown below.
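
    As a rough sketch of how these new breakpoints could drive a layout decision (the constant and function names below are illustrative placeholders, not the androidx.window API itself):

     // Thresholds taken from the breakpoints described above; androidx.window 1.5
     // exposes these through its own size-class API, so treat these names as placeholders.
     private const val LARGE_MIN_WIDTH_DP = 1200
     private const val EXTRA_LARGE_MIN_WIDTH_DP = 1600

     fun describeWidthBucket(windowWidthDp: Int): String = when {
         windowWidthDp >= EXTRA_LARGE_MIN_WIDTH_DP -> "extra-large" // 1600dp and up
         windowWidthDp >= LARGE_MIN_WIDTH_DP -> "large"             // 1200dp up to 1600dp
         else -> "expanded or smaller"                              // existing size classes apply
     }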

    Support all orientations and be resizable

    Extend to Android XR

    Upgrade your Wear OS apps to Material 3 Design

    You should build a single, adaptive mobile app that brings the best experiences to all Android surfaces. By building adaptive apps, you meet users where they are today and in the future, enhancing user engagement and app discoverability. This approach represents a strategic business decision that optimizes an app’s long-term success.

    #2: Enhance your app’s performance optimization

    https://www.youtube.com/watch?v=IaNpcrCSDiI

    Get ready to take your app’s performance to the next level! Google I/O 2025 brought an inside look at cutting-edge tools and techniques to boost user satisfaction, enhance technical performance metrics, and drive those all-important key performance indicators. Imagine an end-to-end workflow that streamlines performance optimization.

    Redesigned UiAutomator API

      • To make benchmarking reliable and reproducible, there’s the brand new UiAutomator API. Write robust test code and run it on your local devices or in Firebase Test Lab, ensuring consistent results every time.

    Macrobenchmarks

      • Once your tests are in place, it’s time to measure and understand. Macrobenchmarks give you the hard data, while App Startup Insights provide actionable recommendations for improvement. Plus, you can get a quick snapshot of your app’s health with the App Performance Score via DAC. These tools combined give you a comprehensive view of your app’s performance and where to focus your efforts.
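
    To make that concrete, here is a minimal startup macrobenchmark sketch using the standard Macrobenchmark APIs (the package name is a placeholder, and this is a generic example rather than anything specific to the talk):

     import androidx.benchmark.macro.StartupMode
     import androidx.benchmark.macro.StartupTimingMetric
     import androidx.benchmark.macro.junit4.MacrobenchmarkRule
     import androidx.test.ext.junit.runners.AndroidJUnit4
     import org.junit.Rule
     import org.junit.Test
     import org.junit.runner.RunWith

     @RunWith(AndroidJUnit4::class)
     class StartupBenchmark {
         @get:Rule
         val benchmarkRule = MacrobenchmarkRule()

         @Test
         fun coldStartup() = benchmarkRule.measureRepeated(
             packageName = "com.example.yourapp", // placeholder: the app under test
             metrics = listOf(StartupTimingMetric()),
             iterations = 5,
             startupMode = StartupMode.COLD,
         ) {
             // Runs in a UiAutomator-backed scope: launch the app from the launcher each iteration.
             pressHome()
             startActivityAndWait()
         }
     }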

    R8: More than code shrinking and obfuscation

      • You might know R8 as a code shrinking tool, but it’s capable of so much more! The talk dives into R8’s capabilities using the “Androidify” sample app. You’ll see how to apply R8, troubleshoot any issues (like crashes!), and configure it for optimal performance. The talk also shows how library developers can include “consumer keep rules” so that their important code is not removed when their library is used in an application. A minimal Gradle sketch for enabling R8 in a release build follows.
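
    While the talk walks through the full configuration, a typical starting point for enabling R8 looks like the following sketch of a release build type in the Gradle Kotlin DSL (a generic example, not Androidify’s actual build file):

     // build.gradle.kts (app module): enable R8 shrinking and obfuscation for release builds.
     android {
         buildTypes {
             release {
                 isMinifyEnabled = true   // turn on R8 code shrinking and obfuscation
                 isShrinkResources = true // also remove unused resources
                 proguardFiles(
                     getDefaultProguardFile("proguard-android-optimize.txt"),
                     "proguard-rules.pro", // app-specific keep rules live here
                 )
             }
         }
     }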

    #3: Build Richer Image and Video Experiences

    https://www.youtube.com/watch?v=3zXVPU2vKXs

    In today’s digital landscape, users increasingly expect seamless content creation capabilities within their apps. To meet this demand, developers require robust tools for building excellent camera and media experiences.

    Media3Effects in CameraX Preview

      • At Google I/O, the session delves into practical strategies for capturing high-quality video using CameraX while simultaneously applying Media3 effects to the preview.

    Google Low-Light Boost

      • Google Low Light Boost in Google Play services enables real-time dynamic camera brightness adjustment in low light, even without device support for Low Light Boost AE Mode.

    New Camera & Media Samples!

    Learn more about how CameraX & Media3 can accelerate your development of camera and media related features.

    Learn how to build adaptive apps

    Want to learn more about building excellent, adaptive apps? Watch this playlist to learn more about all the session details.

    https://www.youtube.com/watch?v=videoseries



    Source link

  • What Living in a 5-Minute City Taught Me About Building Better Businesses



    Opinions expressed by Entrepreneur contributors are their own.

    There’s a difference in each city’s marketing. Seoul wants to be culturally forward and technologically advanced; Copenhagen wants to be environmentally leading and design-centric. That’s fine, but big cities are hard to encapsulate, mostly because a well-developed city has many strengths.

    What both cities share, however, is something entrepreneurs should pay serious attention to: the 5-minute principle that’s revolutionizing how I run my business.

    Related: 5 Simple Productivity Hacks That Will Make You More Successful

    The accidental business experiment

    I live part of the year in the Hapjeong neighborhood in Seoul, South Korea. My older daughter’s school is one stop away on her bus, about a ten-minute ride. That’s as far as anyone in my family needs to go. My younger daughter’s preschool is an eight-minute walk away. And my office is one elevator ride and 50 paces away, in the same complex as my residence.

    On the B2 level of the complex is a hypermarket, and the mall that sits between our perch and that store is a veritable feast of retail: convenience stores, home goods stores, pharmacies and wireless shops; sporting goods stores like Nike; some wine shops; a smattering of eateries (including a McDonald’s and a Subway); and coffee bars, including two Starbucks (one a Reserve). Did I forget to mention the movie theatre, the family practitioner, the dentist, the hair salons and the Pilates studios?

    As an American who likes driving, living here took adjustment. I’ve lived in metropolises like Boston before, where the CVS and the JP Licks are a short walk away. But before here, I’d never experienced a place where everything is just down the elevator shaft. Moving here felt magical, like I was on holiday at an urban resort.

    And after a few months of living this way, I decided to double down: I put my office in the mall too.

    Related: 21 Productive Things to Do on Your Commute

    The productivity revolution no one’s talking about

    It’s hard to describe how convenient my Korean life is. How removing the transit time required for any quotidian task has given me back hours every week.

    The business impact was immediate and profound. With my time budget suddenly expanded, I started wondering: what if I could recreate this 5-minute efficiency for my entire operation?

    I appreciated it so much that I decided to offer others the opportunity to live the 5-minute life too. My recruiter put up a post seeking English-speaking people who live nearby; now we have a team where eight people commute from a ten-minute walk away. In the words of one, “This is a dream.”

    The ROI of proximity: Time is actually money

    Let’s do the math. The average American worker spends 52 minutes commuting each day, with some doing way more. That’s 225 hours annually — or six full work weeks — getting to and from work. For entrepreneurs and business owners who bill hourly or measure team productivity meticulously, this represents an extraordinary hidden cost.

    When I implemented my proximity-based hiring model, our team recovered approximately:

    • 960 hours of collective productive time annually (across team members)
    • 15% reduction in our sick days (people who cycle or walk to work get sick less often)
    • 32% decrease in tardiness and schedule disruptions
    • Zero weather-related absences (a factor during Seoul’s monsoon season)

    More importantly, we’ve seen enhanced team collaboration and increased employee retention due to our shared neighborhood experience. Happy hours are easy. We can help each other move. We dog sit for one another. It’s all easy as team members who live and work in the same neighborhood develop stronger connections to the company and each other.

    The 5-minute principle: Beyond real estate

    When I explain this life to my friends and family, they look at me like I’ve become a devotee of a guru they don’t quite trust. “But isn’t it weird? You don’t ever really leave the neighborhood.” It’s true that I rarely leave. Although the other night, I did take a 45-minute cab ride to the other side of the city to catch Park Jin-young’s (JYP) 30th anniversary concert (he’s amazing live).

    But to all American entrepreneurs who commute to offices, fight traffic to meetings and waste precious hours in transit, do we really need to see the scenery during our transit to some daily destination? Wouldn’t business be easier if there were no chance of traffic or weather or accidents, and everything we needed were a block away? So, instead of maximizing your long commute or making it more productive, why not eliminate it?

    While not every business can relocate to a self-contained complex, every entrepreneur can apply the 5-minute principle:

    1. Strategic co-location: Position your office near where your key team members already live, not where it seems prestigious on a business card
    2. Proximity-based recruiting: Target talent pools within specific geographic zones rather than casting wide nets
    3. Creating micro-hubs: Establish small satellite offices in neighborhoods where clusters of employees live
    4. Virtual proximity: Design digital workflows that minimize “travel time” between apps and functions — the digital equivalent of the elevator ride
    5. Proximity partnerships: Form alliances with nearby businesses to create your own service ecosystems

    Related: Super Commuting Is on the Rise, Here’s Why and How It Works

    What you gain when you stop commuting

    I can think of just one thing from my daily commute that I miss: talking on the phone to old friends. My long drives to and from work were good for check-in calls; now that I don’t drive, I don’t have much idle time for calls. But would I give back my 5-minute life for those calls? Nope.

    The business applications of the 5-minute principle extend beyond real estate. It’s about reimagining productivity as friction reduction rather than time extension. While your competitors ask employees to work longer hours, you can offer them the gift of more time without sacrificing output.

    For entrepreneurs, especially those building teams in competitive talent markets, the 5-minute model creates a distinctive advantage. When candidates consider similar roles with similar compensation, the quality-of-life improvement of a 5-minute commute becomes the deciding factor.

    In a business landscape obsessed with digital transformation, perhaps the most revolutionary change we can make is analog: bringing things closer together, not doing more, but traveling less.



    Source link

  • Building delightful UIs with Compose



    Posted by Rebecca Franks – Developer Relations Engineer

    Androidify is a new sample app we built using the latest best practices for mobile apps. Previously, we covered all the different features of the app, from Gemini integration and CameraX functionality to adaptive layouts. In this post, we dive into the Jetpack Compose usage throughout the app, building upon our base knowledge of Compose to add delightful and expressive touches along the way!

    Material 3 Expressive

    Material 3 Expressive is an expansion of the Material 3 design system. It’s a set of new features, updated components, and design tactics for creating emotionally impactful UX.

    https://www.youtube.com/watch?v=n17dnMChX14

    It’s been released as part of the alpha version of the Material 3 artifact (androidx.compose.material3:material3:1.4.0-alpha10) and contains a wide range of new components you can use within your apps to build more personalized and delightful experiences. Learn more about Material 3 Expressive’s component and theme updates for more engaging and user-friendly products.

    Material Expressive Component updates


    In addition to the new component updates, Material 3 Expressive introduces a new motion physics system that’s encompassed in the Material theme.

    In Androidify, we’ve utilized Material 3 Expressive in a few different ways across the app. For example, we’ve explicitly opted-in to the new MaterialExpressiveTheme and chosen MotionScheme.expressive() (this is the default when using expressive) to add a bit of playfulness to the app:

    @Composable
    fun AndroidifyTheme(
       content: @Composable () -> Unit,
    ) {
       val colorScheme = LightColorScheme
    
    
       MaterialExpressiveTheme(
           colorScheme = colorScheme,
           typography = Typography,
           shapes = shapes,
           motionScheme = MotionScheme.expressive(),
           content = {
               SharedTransitionLayout {
                   CompositionLocalProvider(LocalSharedTransitionScope provides this) {
                       content()
                   }
               }
           },
       )
    }
    

    Some of the new componentry is used throughout the app, including the HorizontalFloatingToolbar for the Prompt type selection:

    moving example of expressive button shapes in slow motion

    The app also uses MaterialShapes in various locations, which are a preset list of shapes that allow for easy morphing between each other. For example, check out the cute cookie shape for the camera capture button:


    Camera button with a MaterialShapes.Cookie9Sided shape

    Animations

    Wherever possible, the app leverages the Material 3 Expressive MotionScheme to obtain a themed motion token, creating a consistent motion feeling throughout the app. For example, the scale animation on the camera button press is powered by defaultSpatialSpec(), a specification used for animations that move something across a screen (such as x/y position, rotation, or scale animations):

    val interactionSource = remember { MutableInteractionSource() }
    val animationSpec = MaterialTheme.motionScheme.defaultSpatialSpec<Float>()
    Spacer(
       modifier
           .indication(interactionSource, ScaleIndicationNodeFactory(animationSpec))
           .clip(MaterialShapes.Cookie9Sided.toShape())
           .size(size)
           .drawWithCache {
               //.. etc
           },
    )
    

    Camera button scale interaction


    Shared element animations

    The app uses shared element transitions between different screen states. Last year, we showcased how you can create shared elements in Jetpack Compose, and we’ve extended this in the Androidify sample to create a fun example. It combines the new Material 3 Expressive MaterialShapes, and performs a transition with a morphing shape animation:

    moving example of expressive button shapes in slow motion

    To do this, we created a custom Modifier that takes in the target and resting shapes for the sharedBounds transition:

    @Composable
     fun Modifier.sharedBoundsRevealWithShapeMorph(
        sharedContentState: SharedTransitionScope.SharedContentState,
        sharedTransitionScope: SharedTransitionScope = LocalSharedTransitionScope.current,
        animatedVisibilityScope: AnimatedVisibilityScope = LocalNavAnimatedContentScope.current,
        boundsTransform: BoundsTransform = MaterialTheme.motionScheme.sharedElementTransitionSpec,
        resizeMode: SharedTransitionScope.ResizeMode = SharedTransitionScope.ResizeMode.RemeasureToBounds,
        restingShape: RoundedPolygon = RoundedPolygon.rectangle().normalized(),
        targetShape: RoundedPolygon = RoundedPolygon.circle().normalized(),
     )
    

    Then, we apply a custom OverlayClip to provide the morphing shape, by tying into the AnimatedVisibilityScope provided by the LocalNavAnimatedContentScope:

    val animatedProgress =
       animatedVisibilityScope.transition.animateFloat(targetValueByState = targetValueByState)
    
    
    val morph = remember {
       Morph(restingShape, targetShape)
    }
    val morphClip = MorphOverlayClip(morph, { animatedProgress.value })
    
    
    return this@sharedBoundsRevealWithShapeMorph
       .sharedBounds(
           sharedContentState = sharedContentState,
           animatedVisibilityScope = animatedVisibilityScope,
           boundsTransform = boundsTransform,
           resizeMode = resizeMode,
           clipInOverlayDuringTransition = morphClip,
           renderInOverlayDuringTransition = renderInOverlayDuringTransition,
       )
    

    View the full code snippet for this Modifier on GitHub.

    Autosize text

    With the latest release of Jetpack Compose 1.8, we added the ability to create text composables that automatically adjust the font size to fit the container’s available size with the new autoSize parameter:

     BasicText(
        text,
        style = MaterialTheme.typography.titleLarge,
        autoSize = TextAutoSize.StepBased(maxFontSize = 220.sp),
     )
    

    This is used front and center for the “Customize your own Android Bot” text:

    Text reads Customize your own Android Bot with an inline moving image

    “Customize your own Android Bot” text with inline GIF

    This text composable is interesting because it needed to have the fun dancing Android bot in the middle of the text. To do this, we use InlineContent, which allows us to append a composable in the middle of the text composable itself:

    @Composable
    private fun DancingBotHeadlineText(modifier: Modifier = Modifier) {
       Box(modifier = modifier) {
           val animatedBot = "animatedBot"
           val text = buildAnnotatedString {
               append(stringResource(R.string.customize))
               // Attach "animatedBot" annotation on the placeholder
               appendInlineContent(animatedBot)
               append(stringResource(R.string.android_bot))
           }
           var placeHolderSize by remember {
               mutableStateOf(220.sp)
           }
           val inlineContent = mapOf(
               Pair(
                   animatedBot,
                   InlineTextContent(
                       Placeholder(
                           width = placeHolderSize,
                           height = placeHolderSize,
                           placeholderVerticalAlign = PlaceholderVerticalAlign.TextCenter,
                       ),
                   ) {
                       DancingBot(
                           modifier = Modifier
                               .padding(top = 32.dp)
                               .fillMaxSize(),
                       )
                   },
               ),
           )
           BasicText(
               text,
               modifier = Modifier
                   .align(Alignment.Center)
                   .padding(bottom = 64.dp, start = 16.dp, end = 16.dp),
               style = MaterialTheme.typography.titleLarge,
               autoSize = TextAutoSize.StepBased(maxFontSize = 220.sp),
               maxLines = 6,
               onTextLayout = { result ->
                   placeHolderSize = result.layoutInput.style.fontSize * 3.5f
               },
               inlineContent = inlineContent,
           )
       }
    }
    

    Composable visibility with onLayoutRectChanged

    With Compose 1.8, a new modifier, Modifier.onLayoutRectChanged, was added. This modifier is a more performant version of onGloballyPositioned, and includes features such as debouncing and throttling to make it performant inside lazy layouts.

    In Androidify, we’ve used this modifier for the color splash animation. It determines the position where the transition should start from, as we attach it to the “Let’s Go” button:

    var buttonBounds by remember {
       mutableStateOf<RelativeLayoutBounds?>(null)
    }
    var showColorSplash by remember {
       mutableStateOf(false)
    }
    Box(modifier = Modifier.fillMaxSize()) {
       PrimaryButton(
           buttonText = "Let's Go",
           modifier = Modifier
               .align(Alignment.BottomCenter)
               .onLayoutRectChanged(
                   callback = { bounds ->
                       buttonBounds = bounds
                   },
               ),
           onClick = {
               showColorSplash = true
           },
       )
    }
    

    We use these bounds as an indication of where to start the color splash animation from.

    moving image of a blue color splash transition between Androidify demo screens

    Learn more delightful details

    From fun marquee animations on the results screen, to animated gradient buttons for the AI-powered actions, to the path drawing animation for the loading screen, this app has many delightful touches for you to experience and learn from.

    animated marquee example

    animated gradient button for AI powered actions example

    animated loading screen example

    Check out the full codebase at github.com/android/androidify and learn more about the latest in Compose from using Material 3 Expressive, the new modifiers, auto-sizing text and of course a couple of delightful interactions!

    Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.



    Source link

  • Building powerful AI-driven experiences with Jetpack Compose, Gemini and CameraX


    The Android bot is a beloved mascot for Android users and developers, and previous versions of the bot builder were very popular, so this year we decided to rebuild the bot maker from the ground up using the latest technology backed by Gemini. Today we are releasing a new open source app, Androidify, for learning how to build powerful AI-driven experiences on Android using the latest technologies such as Jetpack Compose, Gemini through Firebase, CameraX, and Navigation 3.

    Here’s an example of the app running on the device, showcasing converting a photo to an Android bot that represents my likeness:

    moving image showing the conversion of an image of a woman in a pink dress holding an umbrella into a 3D image of a droid bot wearing a pink dress holding an umbrella

    Under the hood

    The app combines a variety of different Google technologies, such as:

      • Gemini API – through Firebase AI Logic SDK, for accessing the underlying Imagen and Gemini models.
      • Jetpack Compose – for building the UI with delightful animations and making the app adapt to different screen sizes.
      • Navigation 3 – the latest navigation library for building up Navigation graphs with Compose.
      • CameraX Compose and Media3 Compose – for building up a custom camera with custom UI controls (rear camera support, zoom support, tap-to-focus) and playing the promotional video.

    This sample app is currently using a standard Imagen model, but we’ve been working on a fine-tuned model that’s trained specifically on all of the pieces that make the Android bot cute and fun; we’ll share that version later this year. In the meantime, don’t be surprised if the sample app puts out some interesting looking examples!

    How does the Androidify app work?

    The app leverages our best practices for Architecture, Testing, and UI to showcase a real world, modern AI application on device.

    Flow chart describing Androidify app flow

    Androidify app flow chart detailing how the app works with AI

    AI in Androidify with Gemini and ML Kit

    The Androidify app uses the Gemini models in a multitude of ways to enrich the app experience, all powered by the Firebase AI Logic SDK. The app uses Gemini 2.5 Flash and Imagen 3 under the hood:

      • Image validation: We ensure that the captured image contains sufficient information, such as a clearly focused person, and we assess it for safety. This feature uses the multi-modal capabilities of the Gemini API by giving it a prompt and an image at the same time:

    val response = generativeModel.generateContent(
       content {
           text(prompt)
           image(image)
       },
    )
    

      • Text prompt validation: If the user opts for text input instead of an image, we use Gemini 2.5 Flash to ensure the text contains a sufficiently descriptive prompt to generate a bot (a minimal sketch of this check appears after this list).

      • Image captioning: Once we’re sure the image has enough information, we use Gemini 2.5 Flash to perform image captioning. We ask Gemini to be as descriptive as possible, focusing on the clothing and its colors.

      • “Help me write” feature: Similar to an “I’m feeling lucky” type feature, “Help me write” uses Gemini 2.5 Flash to create a random description of the clothing and hairstyle of a bot.

      • Image generation from the generated prompt: As the final step, Imagen generates the image, given the generated prompt and the selected skin tone of the bot.
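
    Tying back to the text prompt validation step above, here is a minimal sketch of what such a check could look like with the Firebase AI Logic SDK (the helper, prompt wording, and model name are illustrative, not the app’s actual code):

     import com.google.firebase.Firebase
     import com.google.firebase.vertexai.type.content
     import com.google.firebase.vertexai.vertexAI

     // Hypothetical helper: asks Gemini whether the user's text is descriptive enough to generate a bot.
     suspend fun isDescriptiveEnough(userText: String): Boolean {
         val model = Firebase.vertexAI.generativeModel("gemini-2.5-flash") // illustrative model name
         val response = model.generateContent(
             content {
                 text("Answer only YES or NO: does the following text describe a person's clothing and hairstyle in enough detail to draw them?")
                 text(userText)
             },
         )
         return response.text?.trim()?.startsWith("YES", ignoreCase = true) == true
     }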

    The app also uses ML Kit pose detection to detect a person in the viewfinder, enabling the capture button when a person is detected and adding fun indicators around the content to signal detection.
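
    As a rough illustration of that flow (not the Androidify implementation), a minimal ML Kit pose detection setup that flips a “person detected” flag for each analyzed camera frame might look like this:

     import com.google.mlkit.vision.common.InputImage
     import com.google.mlkit.vision.pose.PoseDetection
     import com.google.mlkit.vision.pose.defaults.PoseDetectorOptions

     // Stream mode is intended for live camera frames rather than single images.
     val poseDetector = PoseDetection.getClient(
         PoseDetectorOptions.Builder()
             .setDetectorMode(PoseDetectorOptions.STREAM_MODE)
             .build(),
     )

     fun analyzeFrame(image: InputImage, onPersonDetected: (Boolean) -> Unit) {
         poseDetector.process(image)
             .addOnSuccessListener { pose ->
                 // Treat any detected landmarks as "a person is in the viewfinder".
                 onPersonDetected(pose.allPoseLandmarks.isNotEmpty())
             }
             .addOnFailureListener { onPersonDetected(false) }
     }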

    Explore more detailed information about AI usage in Androidify.

    Jetpack Compose

    The user interface of Androidify is built using Jetpack Compose, the modern UI toolkit that simplifies and accelerates UI development on Android.

    Delightful details with the UI

    The app uses Material 3 Expressive, the latest alpha release that makes your apps more premium, desirable, and engaging. It provides delightful bits of UI out-of-the-box, like new shapes, componentry, and using the MotionScheme variables wherever a motion spec is needed.

    MaterialShapes are used in various locations. These are a preset list of shapes that allow for easy morphing between each other—for example, the cute cookie shape for the camera capture button:

    Androidify app UI showing camera button

    Camera button with a MaterialShapes.Cookie9Sided shape

    Beyond using the standard Material components, Androidify also features custom composables and delightful transitions tailored to the specific needs of the app:

      • There are plenty of shared element transitions across the app—for example, a morphing shape shared element transition is performed between the “take a photo” button and the camera surface.

        moving example of expressive button shapes in slow motion

      • Custom enter transitions for the ResultsScreen with the usage of marquee modifiers.

        animated marquee example

      • Fun color splash animation as a transition between screens.

        moving image of a blue color splash transition between Androidify demo screens

      • Animating gradient buttons for the AI-powered actions.

        animated gradient button for AI powered actions example

    To learn more about the unique details of the UI, read Androidify: Building delightful UIs with Compose

    Adapting to different devices

    Androidify is designed to look great and function seamlessly across candy bar phones, foldables, and tablets. The general goal of developing adaptive apps is to avoid reimplementing the same app multiple times on each form factor by extracting out reusable composables, and leveraging APIs like WindowSizeClass to determine what kind of layout to display.

    a collage of different adaptive layouts for the Androidify app across small and large screens

    Various adaptive layouts in the app

    For Androidify, we only needed to leverage the width window size class. Combining this with different layout mechanisms, we were able to reuse or extend the composables to cater to the multitude of different device sizes and capabilities.

      • Responsive layouts: The CreationScreen demonstrates adaptive design. It uses helper functions like isAtLeastMedium() to detect window size categories and adjust its layout accordingly. On larger windows, the image/prompt area and color picker might sit side-by-side in a Row, while on smaller windows, the color picker is accessed via a ModalBottomSheet. This pattern, called “supporting pane”, highlights the supporting dependencies between the main content and the color picker.

      • Foldable support: The app actively checks for foldable device features. The camera screen uses WindowInfoTracker to get FoldingFeature information to adapt to different features by optimizing the layout for tabletop posture.

      • Rear display: Support for devices with multiple displays is included via the RearCameraUseCase, allowing for the device camera preview to be shown on the external screen when the device is unfolded (so the main content is usually displayed on the internal screen).

    Using window size classes, coupled with creating a custom @LargeScreensPreview annotation, helps achieve unique and useful UIs across the spectrum of device sizes and window sizes.
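
    A minimal sketch of such a width check, assuming it is built on the window size classes exposed through currentWindowAdaptiveInfo() (the real Androidify helper may differ):

     import androidx.compose.material3.adaptive.currentWindowAdaptiveInfo
     import androidx.compose.runtime.Composable
     import androidx.window.core.layout.WindowWidthSizeClass

     // Returns true when the current window width is at least "medium" (i.e. not compact).
     @Composable
     fun isAtLeastMedium(): Boolean {
         val widthClass = currentWindowAdaptiveInfo().windowSizeClass.windowWidthSizeClass
         return widthClass != WindowWidthSizeClass.COMPACT
     }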

    CameraX and Media3 Compose

    To allow users to base their bots on photos, Androidify integrates CameraX, the Jetpack library that makes camera app development easier.

    The app uses a custom CameraLayout composable that supports the layout of the typical composables that a camera preview screen would include: for example, zoom buttons, a capture button, and a flip camera button. This layout adapts to different device sizes and more advanced use cases, like the tabletop mode and rear-camera display. For the actual rendering of the camera preview, it uses the new CameraXViewfinder that is part of the camerax-compose artifact.

    CameraLayout in Compose

    CameraLayout composable that takes care of different device configurations, such as table top mode


    The app also integrates with Media3 APIs to load an instructional video for showing how to get the best bot from a prompt or image. Using the new media3-ui-compose artifact, we can easily add a VideoPlayer into the app:

    @Composable
    private fun VideoPlayer(modifier: Modifier = Modifier) {
        val context = LocalContext.current
        var player by remember { mutableStateOf<Player?>(null) }
        LifecycleStartEffect(Unit) {
            player = ExoPlayer.Builder(context).build().apply {
                setMediaItem(MediaItem.fromUri(Constants.PROMO_VIDEO))
                repeatMode = Player.REPEAT_MODE_ONE
                prepare()
            }
            onStopOrDispose {
                player?.release()
                player = null
            }
        }
        Box(
            modifier
                .background(MaterialTheme.colorScheme.surfaceContainerLowest),
        ) {
            player?.let { currentPlayer ->
                PlayerSurface(currentPlayer, surfaceType = SURFACE_TYPE_TEXTURE_VIEW)
            }
        }
    }
    

    Using the new onLayoutRectChanged modifier, we also listen for whether the composable is completely visible or not, and play or pause the video based on this information:

    var videoFullyOnScreen by remember { mutableStateOf(false) }     
    
    LaunchedEffect(videoFullyOnScreen) {
         if (videoFullyOnScreen) currentPlayer.play() else currentPlayer.pause()
    } 
    
    // We add this onto the player composable to determine if the video composable is visible, and mutate the videoFullyOnScreen variable, that then toggles the player state. 
    Modifier.onVisibilityChanged(
                    containerWidth = LocalView.current.width,
                    containerHeight = LocalView.current.height,
    ) { fullyVisible -> videoFullyOnScreen = fullyVisible }
    
    // A simple version of visibility changed detection
    fun Modifier.onVisibilityChanged(
        containerWidth: Int,
        containerHeight: Int,
        onChanged: (visible: Boolean) -> Unit,
    ) = this then Modifier.onLayoutRectChanged(100, 0) { layoutBounds ->
        onChanged(
            layoutBounds.boundsInRoot.top > 0 &&
                layoutBounds.boundsInRoot.bottom < containerHeight &&
                layoutBounds.boundsInRoot.left > 0 &&
                layoutBounds.boundsInRoot.right < containerWidth,
        )
    }
    

    Additionally, using rememberPlayPauseButtonState, we add on a layer on top of the player to offer a play/pause button on the video itself:

     val playPauseButtonState = rememberPlayPauseButtonState(currentPlayer)
     OutlinedIconButton(
         onClick = playPauseButtonState::onClick,
         enabled = playPauseButtonState.isEnabled,
     ) {
         val icon =
             if (playPauseButtonState.showPlay) R.drawable.play else R.drawable.pause
         val contentDescription =
             if (playPauseButtonState.showPlay) R.string.play else R.string.pause
         Icon(
             painterResource(icon),
             stringResource(contentDescription),
         )
     }
    

    Check out the code for more details on how CameraX and Media3 were used in Androidify.

    Navigation 3

    Screen transitions are handled using the new Jetpack Navigation 3 library androidx.navigation3. The MainNavigation composable defines the different destinations (Home, Camera, Creation, About) and displays the content associated with each destination using NavDisplay. You get full control over your back stack, and navigating to and from destinations is as simple as adding and removing items from a list.

    @Composable
    fun MainNavigation() {
       val backStack = rememberMutableStateListOf<NavigationRoute>(Home)
       NavDisplay(
           backStack = backStack,
           onBack = { backStack.removeLastOrNull() },
           entryProvider = entryProvider {
               entry<Home> { entry ->
                   HomeScreen(
                       onAboutClicked = {
                           backStack.add(About)
                       },
                   )
               }
               entry<Camera> {
                   CameraPreviewScreen(
                       onImageCaptured = { uri ->
                           backStack.add(Create(uri.toString()))
                       },
                   )
               }
               // etc
           },
       )
    }
    

    Notably, Navigation 3 exposes a new composition local, LocalNavAnimatedContentScope, to easily integrate your shared element transitions without needing to keep track of the scope yourself. By default, Navigation 3 also integrates with predictive back, providing delightful back experiences when navigating between screens, as seen in this prior shared element transition:

    moving example of a predictive back shared element transition

    Learn more about Jetpack Navigation 3, currently in alpha.

    Learn more

    By combining the declarative power of Jetpack Compose, the camera capabilities of CameraX, the intelligent features of Gemini, and thoughtful adaptive design, Androidify is a personalized avatar creation experience that feels right at home on any Android device. You can find the full code sample at github.com/android/androidify where you can see the app in action and be inspired to build your own AI-powered app experiences.

    Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.



    Source link

  • Building delightful Android camera and media experiences



    Posted by Donovan McMurray, Mayuri Khinvasara Khabya, Mozart Louis, and Nevin Mital – Developer Relations Engineers

    Hello Android Developers!

    We are the Android Developer Relations Camera & Media team, and we’re excited to bring you something a little different today. Over the past several months, we’ve been hard at work writing sample code and building demos that showcase how to take advantage of all the great potential Android offers for building delightful user experiences.

    Some of these efforts are available for you to explore now, and some you’ll see later throughout the year, but for this blog post we thought we’d share some of the learnings we gathered while going through this exercise.

    Grab your favorite Android plush or rubber duck, and read on to see what we’ve been up to!

    Future-proof your app with Jetpack

    Nevin Mital

    One of our focuses for the past several years has been improving the developer tools available for video editing on Android. This led to the creation of the Jetpack Media3 Transformer APIs, which offer solutions for both single-asset and multi-asset video editing preview and export. Today, I’d like to focus on the Composition demo app, a sample app that showcases some of the multi-asset editing experiences that Transformer enables.

    I started by adding a custom video compositor to demonstrate how you can arrange input video sequences into different layouts for your final composition, such as a 2×2 grid or a picture-in-picture overlay. You can customize this by implementing a VideoCompositorSettings and overriding the getOverlaySettings method. This object can then be set when building your Composition with setVideoCompositorSettings.

    Here is an example for the 2×2 grid layout:

    object : VideoCompositorSettings {
      ...
    
      override fun getOverlaySettings(inputId: Int, presentationTimeUs: Long): OverlaySettings {
        return when (inputId) {
          0 -> { // First sequence is placed in the top left
            StaticOverlaySettings.Builder()
              .setScale(0.5f, 0.5f)
              .setOverlayFrameAnchor(0f, 0f) // Middle of overlay
              .setBackgroundFrameAnchor(-0.5f, 0.5f) // Top-left section of background
              .build()
          }
    
          1 -> { // Second sequence is placed in the top right
            StaticOverlaySettings.Builder()
              .setScale(0.5f, 0.5f)
              .setOverlayFrameAnchor(0f, 0f) // Middle of overlay
              .setBackgroundFrameAnchor(0.5f, 0.5f) // Top-right section of background
              .build()
          }
    
          2 -> { // Third sequence is placed in the bottom left
            StaticOverlaySettings.Builder()
              .setScale(0.5f, 0.5f)
              .setOverlayFrameAnchor(0f, 0f) // Middle of overlay
              .setBackgroundFrameAnchor(-0.5f, -0.5f) // Bottom-left section of background
              .build()
          }
    
          3 -> { // Fourth sequence is placed in the bottom right
            StaticOverlaySettings.Builder()
              .setScale(0.5f, 0.5f)
              .setOverlayFrameAnchor(0f, 0f) // Middle of overlay
              .setBackgroundFrameAnchor(0.5f, -0.5f) // Bottom-right section of background
              .build()
          }
    
          else -> {
            StaticOverlaySettings.Builder().build()
          }
        }
      }
    }
    

    Since getOverlaySettings also provides a presentation time, we can even animate the layout, such as in this picture-in-picture example:

    moving image of picture in picture on a mobile device

    Next, I spent some time migrating the Composition demo app to use Jetpack Compose. With complicated editing flows, it can help to take advantage of as much screen space as is available, so I decided to use the supporting pane adaptive layout. This way, the user can fine-tune their video creation on the preview screen, and export options are only shown at the same time on a larger display. Below, you can see how the UI dynamically adapts to the screen size on a foldable device, when switching from the outer screen to the inner screen and vice versa.


    moving image of supporting pane adaptive layout

    What’s great is that by using Jetpack Media3 and Jetpack Compose, these features also carry over seamlessly to other devices and form factors, such as the new Android XR platform. Right out-of-the-box, I was able to run the demo app in Home Space with the 2D UI I already had. And with some small updates, I was even able to adapt the UI specifically for XR with features such as multiple panels, and to take further advantage of the extra space, an Orbiter with playback controls for the editing preview.

    moving image of sequential composition preview in Android XR

    Orbiter(
      position = OrbiterEdge.Bottom,
      offset = EdgeOffset.inner(offset = MaterialTheme.spacing.standard),
      alignment = Alignment.CenterHorizontally,
      shape = SpatialRoundedCornerShape(CornerSize(28.dp))
    ) {
      Row (horizontalArrangement = Arrangement.spacedBy(MaterialTheme.spacing.mini)) {
        // Playback control for rewinding by 10 seconds
        FilledTonalIconButton({ viewModel.seekBack(10_000L) }) {
          Icon(
            painter = painterResource(id = R.drawable.rewind_10),
            contentDescription = "Rewind by 10 seconds"
          )
        }
        // Playback control for play/pause
        FilledTonalIconButton({ viewModel.togglePlay() }) {
          Icon(
            painter = painterResource(id = R.drawable.rounded_play_pause_24),
            contentDescription = 
                if(viewModel.compositionPlayer.isPlaying) {
                    "Pause preview playback"
                } else {
                    "Resume preview playback"
                }
          )
        }
        // Playback control for forwarding by 10 seconds
        FilledTonalIconButton({ viewModel.seekForward(10_000L) }) {
          Icon(
            painter = painterResource(id = R.drawable.forward_10),
            contentDescription = "Forward by 10 seconds"
          )
        }
      }
    }
    

    Jetpack libraries unlock premium functionality incrementally

    Donovan McMurray

    Not only do our Jetpack libraries have you covered by working consistently across existing and future devices, but they also open the doors to advanced functionality and custom behaviors to support all types of app experiences. In a nutshell, our Jetpack libraries aim to make the common case very accessible and easy, while providing hooks for adding more custom features later.

    We’ve worked with many app teams who have switched to a Jetpack library, built the basics, added their critical custom features, and actually saved developer time compared to their estimates. Let’s take a look at CameraX and how this incremental development can supercharge your process.

    // Set up CameraX app with preview and image capture.
    // Note: setting the resolution selector is optional, and if not set,
    // then a default 4:3 ratio will be used.
    val aspectRatioStrategy = AspectRatioStrategy(
      AspectRatio.RATIO_16_9, AspectRatioStrategy.FALLBACK_RULE_NONE)
    var resolutionSelector = ResolutionSelector.Builder()
      .setAspectRatioStrategy(aspectRatioStrategy)
      .build()
    
    private val previewUseCase = Preview.Builder()
      .setResolutionSelector(resolutionSelector)
      .build()
    private val imageCaptureUseCase = ImageCapture.Builder()
      .setResolutionSelector(resolutionSelector)
      .setCaptureMode(ImageCapture.CAPTURE_MODE_MINIMIZE_LATENCY)
      .build()
    
    val useCaseGroupBuilder = UseCaseGroup.Builder()
      .addUseCase(previewUseCase)
      .addUseCase(imageCaptureUseCase)
    
    cameraProvider.unbindAll()
    
    camera = cameraProvider.bindToLifecycle(
      this,  // lifecycleOwner
      CameraSelector.DEFAULT_BACK_CAMERA,
      useCaseGroupBuilder.build(),
    )
    

    After setting up the basic structure for CameraX, you can set up a simple UI with a camera preview and a shutter button. You can use the CameraX Viewfinder composable which displays a Preview stream from a CameraX SurfaceRequest.

    // Create preview
    Box(
      Modifier
        .background(Color.Black)
        .fillMaxSize(),
      contentAlignment = Alignment.Center,
    ) {
      surfaceRequest?.let {
        CameraXViewfinder(
          modifier = Modifier.fillMaxSize(),
          implementationMode = ImplementationMode.EXTERNAL,
           surfaceRequest = it, // the non-null SurfaceRequest from the let block
         )
      }
      Button(
        onClick = onPhotoCapture,
        shape = CircleShape,
        colors = ButtonDefaults.buttonColors(containerColor = Color.White),
        modifier = Modifier
          .height(75.dp)
          .width(75.dp),
      )
    }
    
    fun onPhotoCapture() {
      // Not shown: defining the ImageCapture.OutputFileOptions for
      // your saved images
      imageCaptureUseCase.takePicture(
        outputOptions,
        ContextCompat.getMainExecutor(context),
        object : ImageCapture.OnImageSavedCallback {
          override fun onError(exc: ImageCaptureException) {
            val msg = "Photo capture failed."
            Toast.makeText(context, msg, Toast.LENGTH_SHORT).show()
          }
    
          override fun onImageSaved(output: ImageCapture.OutputFileResults) {
            val savedUri = output.savedUri
            if (savedUri != null) {
              // Do something with the savedUri if needed
            } else {
              val msg = "Photo capture failed."
              Toast.makeText(context, msg, Toast.LENGTH_SHORT).show()
            }
          }
        },
      )
    }
    

    You’re already on track for a solid camera experience, but what if you wanted to add some extra features for your users? Adding filters and effects is easy with CameraX’s Media3 effect integration, which is one of the new features introduced in CameraX 1.4.0.

    Here’s how simple it is to add a black and white filter from Media3’s built-in effects.

    val media3Effect = Media3Effect(
      application,
      PREVIEW or IMAGE_CAPTURE,
      ContextCompat.getMainExecutor(application),
      {},
    )
    media3Effect.setEffects(listOf(RgbFilter.createGrayscaleFilter()))
    useCaseGroupBuilder.addEffect(media3Effect)
    

    The Media3Effect object takes a Context, a bitwise representation of the use case constants for targeted UseCases, an Executor, and an error listener. Then you set the list of effects you want to apply. Finally, you add the effect to the useCaseGroupBuilder we defined earlier.


    (Left) Our camera app with no filter applied. 
     (Right) Our camera app after the createGrayscaleFilter was added.

    There are many other built-in effects you can add, too! See the Media3 Effect documentation for more options, like brightness, color lookup tables (LUTs), contrast, blur, and many other effects.
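
    For instance, a minimal sketch (reusing the media3Effect object from the earlier snippet, with illustrative values) that chains a couple of the built-in effects:

     // Illustrative chain of built-in Media3 effects (Brightness, Contrast, and RgbFilter
     // come from androidx.media3.effect), applied to the same media3Effect object as above.
     media3Effect.setEffects(
         listOf(
             Brightness(0.1f),                  // lift overall brightness slightly
             Contrast(0.3f),                    // add a bit of contrast
             RgbFilter.createGrayscaleFilter(), // finish with a black-and-white look
         ),
     )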

    To take your effects to yet another level, it’s also possible to define your own effects by implementing the GlEffect interface, which acts as a factory of GlShaderPrograms. You can implement a BaseGlShaderProgram’s drawFrame() method to implement a custom effect of your own. A minimal implementation should tell your graphics library to use its shader program, bind the shader program’s vertex attributes and uniforms, and issue a drawing command.

    Jetpack libraries meet you where you are and your app’s needs. Whether that be a simple, fast-to-implement, and reliable implementation, or custom functionality that helps the critical user journeys in your app stand out from the rest, Jetpack has you covered!

    Jetpack offers a foundation for innovative AI Features

    Mayuri Khinvasara Khabya

    Just as Donovan demonstrated with CameraX for capture, Jetpack Media3 provides a reliable, customizable, and feature-rich solution for playback with ExoPlayer. The AI Samples app builds on this foundation to delight users with helpful and enriching AI-driven additions.

    In today’s rapidly evolving digital landscape, users expect more from their media applications. Simply playing videos is no longer enough. Developers are constantly seeking ways to enhance user experiences and provide deeper engagement. Leveraging the power of Artificial Intelligence (AI), particularly when built upon robust media frameworks like Media3, offers exciting opportunities. Let’s take a look at some of the ways we can transform the way users interact with video content:

      • Empowering Video Understanding: The core idea is to use AI, specifically multimodal models like the Gemini Flash and Pro models, to analyze video content and extract meaningful information. This goes beyond simply playing a video; it’s about understanding what’s in the video and making that information readily accessible to the user.
      • Actionable Insights: The goal is to transform raw video into summaries, insights, and interactive experiences. This allows users to quickly grasp the content of a video and find specific information they need or learn something new!
      • Accessibility and Engagement: AI helps make videos more accessible by providing features like summaries, translations, and descriptions. It also aims to increase user engagement through interactive features.

    A Glimpse into AI-Powered Video Journeys

    The following example demonstrates potential video journeys enhanced by artificial intelligence. This sample integrates several components, such as ExoPlayer and Transformer from Media3; the Firebase SDK (leveraging Vertex AI on Android); and Jetpack Compose, ViewModel, and StateFlow. The code will be available soon on GitHub.

    moving images of examples of AI-powered video journeys

    (Left) Video summarization  
     (Right) Thumbnail timestamps and HDR frame extraction

    There are two experiences in particular that I’d like to highlight:

      • HDR Thumbnails: AI can help identify key moments in the video that could make for good thumbnails. With those timestamps, you can use the new ExperimentalFrameExtractor API from Media3 to extract HDR thumbnails from videos, providing richer visual previews.
      • Text-to-Speech: AI can be used to convert textual information derived from the video into spoken audio, enhancing accessibility. On Android, you can also choose to play audio in different languages and dialects, thus enhancing personalization for a wider audience. A minimal sketch using the platform TextToSpeech API follows this list.
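
    For example, a minimal sketch driving the platform TextToSpeech API with a generated summary (a generic example, not the sample app’s code):

     import android.content.Context
     import android.speech.tts.TextToSpeech
     import java.util.Locale

     // Hypothetical wrapper that speaks an AI-generated summary once the engine is ready.
     class SummarySpeaker(context: Context) {
         private var ready = false
         private val tts = TextToSpeech(context) { status ->
             ready = status == TextToSpeech.SUCCESS
         }

         fun speakSummary(summary: String, locale: Locale = Locale.US) {
             if (!ready) return
             tts.setLanguage(locale) // pick the language or dialect for the user
             tts.speak(summary, TextToSpeech.QUEUE_FLUSH, null, "summary-utterance")
         }

         fun release() {
             tts.shutdown()
         }
     }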

    Using the right AI solution

    Currently, only cloud models support video inputs, so we went ahead with a cloud-based solution. Integrating Firebase in our sample empowers the app to:

      • Generate real-time, concise video summaries automatically.
      • Produce comprehensive content metadata, including chapter markers and relevant hashtags.
      • Facilitate seamless multilingual content translation.

    So how do you actually interact with a video and work with Gemini to process it? First, send your video as an input parameter to your prompt:

     val prompt =
         "Summarize this video in the form of top 3-4 takeaways only. Write in the form of bullet points. Don't assume if you don't know"
    
    val generativeModel = Firebase.vertexAI.generativeModel("gemini-2.0-flash")
    _outputText.value = OutputTextState.Loading
    
    viewModelScope.launch(Dispatchers.IO) {
        try {
            val requestContent = content {
                fileData(videoSource.toString(), "video/mp4")
                text(prompt)
            }
            val outputStringBuilder = StringBuilder()
    
            generativeModel.generateContentStream(requestContent).collect { response ->
                outputStringBuilder.append(response.text)
                _outputText.value = OutputTextState.Success(outputStringBuilder.toString())
            }
    
            _outputText.value = OutputTextState.Success(outputStringBuilder.toString())
    
        } catch (error: Exception) {
            _outputText.value = error.localizedMessage?.let { OutputTextState.Error(it) }
        }
    }
    

    Notice there are two key components here:

      • FileData: This component integrates a video into the query.
      • Prompt: This asks the user what specific assistance they need from AI in relation to the provided video.

    Of course, you can fine-tune your prompt to your requirements and get responses accordingly.

    In conclusion, by harnessing the capabilities of Jetpack Media3 and integrating AI solutions like Gemini through Firebase, you can significantly elevate video experiences on Android. This combination enables advanced features like video summaries, enriched metadata, and seamless multilingual translations, ultimately enhancing accessibility and engagement for users. As these technologies continue to evolve, the potential for creating even more dynamic and intelligent video applications is vast.

    Go above-and-beyond with specialized APIs

    Mozart Louis

    Android 16 introduces the new audio PCM Offload mode which can reduce the power consumption of audio playback in your app, leading to longer playback time and increased user engagement. Eliminating the power anxiety greatly enhances the user experience.

    Oboe is Android’s premier audio API, which developers can use to create high-performance, low-latency audio apps. A new feature, native PCM offload playback, is being added to the Android NDK and Android 16.

    Offload playback helps save battery life when playing audio. It works by sending a large chunk of audio to a special part of the device’s hardware (a DSP). This allows the CPU of the device to go into a low-power state while the DSP handles playing the sound. This works with uncompressed audio (like PCM) and compressed audio (like MP3 or AAC), where the DSP also takes care of decoding.

    This can result in significant power saving while playing back audio and is perfect for applications that play audio in the background or while the screen is off (think audiobooks, podcasts, music etc).

    We created the sample app PowerPlay to demonstrate how to implement these features using the latest NDK version, C++ and Jetpack Compose.

    Here are the most important parts!

    The first order of business is to ensure the device supports audio offload for the stream attributes you need. In the example below, we check whether the device supports audio offload for a stereo, float PCM stream with a sample rate of 48,000 Hz.

    val format = AudioFormat.Builder()
        .setEncoding(AudioFormat.ENCODING_PCM_FLOAT)
        .setSampleRate(48000)
        .setChannelMask(AudioFormat.CHANNEL_OUT_STEREO)
        .build()

    val attributes = AudioAttributes.Builder()
        .setContentType(AudioAttributes.CONTENT_TYPE_MUSIC)
        .setUsage(AudioAttributes.USAGE_MEDIA)
        .build()

    // Offload support can only be queried on Android 10 (API 29) and above.
    val isOffloadSupported =
        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.Q) {
            AudioManager.isOffloadedPlaybackSupported(format, attributes)
        } else {
            false
        }

    if (isOffloadSupported) {
        // Ask the sample's native player to open its Oboe stream in offload mode.
        player.initializeAudio(PerformanceMode.POWER_SAVING_OFFLOADED)
    }
    

    Once we know the device supports audio offload, we can confidently set the Oboe audio streams’ performance mode to the new performance mode option, PerformanceMode::POWER_SAVING_OFFLOADED.

    // Create an audio stream
    AudioStreamBuilder builder;
    builder.setChannelCount(mChannelCount);
    builder.setDataCallback(mDataCallback);
    builder.setFormat(AudioFormat::Float);
    builder.setSampleRate(48000);

    builder.setErrorCallback(mErrorCallback);
    builder.setPresentationCallback(mPresentationCallback);
    // Request the new offload performance mode introduced in Android 16.
    builder.setPerformanceMode(PerformanceMode::POWER_SAVING_OFFLOADED);
    builder.setFramesPerDataCallback(128);
    builder.setSharingMode(SharingMode::Exclusive);
    builder.setSampleRateConversionQuality(SampleRateConversionQuality::Medium);
    Result result = builder.openStream(mAudioStream);
    

    Now when audio is played back, it will be offloaded to the DSP, saving power during playback.
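    For context, the player object used in the earlier Kotlin snippet is the sample’s wrapper around this native Oboe code. The following is a purely illustrative sketch of what such a Kotlin-to-native bridge might look like; the library name, class name, and signature are assumptions, not the PowerPlay sample’s actual API:

    // Hypothetical bridge between the Kotlin capability check and the C++ stream setup.
    class NativePlayer {
        init {
            System.loadLibrary("powerplay") // assumed native library name
        }

        // The native implementation maps the mode constant onto the Oboe
        // AudioStreamBuilder configuration shown above before opening the stream.
        external fun initializeAudio(performanceMode: Int): Boolean
    }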

    There is more to this feature that will be covered in a future blog post, fully detailing all of the new APIs that will help you optimize your audio playback experience!

    What’s next

    Of course, we were only able to share the tip of the iceberg with you here, so to dive deeper into the samples, check out the following links:

    Hopefully these examples have inspired you to explore what new and fascinating experiences you can build on Android. Tune in to our session at Google I/O in a couple weeks to learn even more about use-cases supported by solutions like Jetpack CameraX and Jetpack Media3!



    Source link

  • Building Intelligent Apps with Apple AI Models

    Building Intelligent Apps with Apple AI Models



    This course explores on-device machine learning using Apple’s powerful tools. See how simple the Vision framework makes complex computer vision tasks, enabling your app to understand the real world through tasks like object detection and face recognition. Learn to leverage the Translation framework for on-device, real-time language translation, breaking down language barriers for your users. Finally, look at how to develop your own machine learning models by customizing Apple’s pre-built models for specific use cases within your apps.



    Source link

  • Building Integrated AI Services with LangChain & LangGraph

    Building Integrated AI Services with LangChain & LangGraph


    While the course is designed to accommodate developers with varying levels of experience, the following prerequisites are recommended: 

    • Basic programming knowledge in any language 
    • Familiarity with web development concepts and RESTful APIs 
    • Understanding of basic AI and machine learning concepts (beneficial but not required) 



    Source link

  • Building excellent games with better graphics and performance



    Posted by Matthew McCullough – VP of Product Management, Android

    We’re stepping up our multiplatform gaming offering with exciting news dropping at this year’s Game Developers Conference (GDC). We’re bringing users more games, more ways to play your games across devices, and improved gameplay. You can read all about the updates for users from The Keyword. At GDC, we’ll be diving into all of the latest games coming to Play, plus new developer tools that’ll help improve gameplay across the Android ecosystem.

    Today, we’re sharing a closer look at what’s new from Android. We’re making Vulkan the official graphics API on Android, enabling you to build immersive visuals, and we’re enhancing the Android Dynamic Performance Framework (ADPF) to help you deliver longer, more stable gameplay sessions. Check out the video or keep reading below.

    https://www.youtube.com/watch?v=9MN0-qwYAFU

    More immersive visuals built on Vulkan, now the official graphics API

    These days, games require more processing power for realistic graphics and cutting-edge visuals. Vulkan is a low-level graphics API that helps developers maximize the performance of modern GPUs, and today we’re making it the official graphics API for Android. This unlocks advanced features like ray tracing and multithreading for realistic and immersive gaming visuals. For example, Diablo Immortal used Vulkan to implement ray tracing, bringing the world of Sanctuary to life with spectacular special effects, from fiery explosions to icy blasts.


    Diablo Immortal running on Vulkan

    For casual games like Pokémon TCG Pocket, which draws players into the vibrant world of each Pokémon, Vulkan helps optimize graphics across a broad range of devices to ensure a smooth and engaging experience for every player.


    Pokémon TCG Pocket running on Vulkan

    We’re excited to announce that Android is transitioning to a modern, unified rendering stack with Vulkan at its core. Starting with our next Android release, more devices will use Vulkan to process all graphics commands. If your game is running on OpenGL, it will use ANGLE as a system driver that translates OpenGL to Vulkan. We recommend testing your game on ANGLE today to ensure it’s ready for the Vulkan transition.

    We’re also partnering with major game engines to make Vulkan integration easier. With Unity 6, you can configure Vulkan per device while older versions can access this setting through plugins. Over 45% of sessions from new games on Unity* use Vulkan, and we expect this number to grow rapidly.

    To simplify workflows further, we’re teaming up with the Samsung Austin Research Center to create an integrated GPU profiler toolchain for Vulkan and AI/ML optimization. Coming later this year, this tool will enable developers to make graphics, memory and compute workloads more efficient.

    Longer and smoother gameplay sessions with ADPF

    Android Dynamic Performance Framework (ADPF) enables developers to adjust a game’s performance in real time based on the thermal state of the device, and it’s getting a big update today to provide longer and smoother gameplay sessions. ADPF is designed to work across a wide range of devices, including models like the Pixel 9 family and the Samsung S25 Series. We’re excited to see MMORPGs like Lineage W integrating ADPF to optimize performance on their core target devices.


    Lineage W running on ADPF
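
    To make the thermal side of this concrete, here is a minimal, illustrative Kotlin sketch that polls the platform’s thermal headroom (one of the signals ADPF builds on) and steps graphics quality down as the device approaches throttling. The quality tiers and threshold values are assumptions for illustration, not settings from any shipping game:

    import android.content.Context
    import android.os.PowerManager

    // Illustrative quality tiers; a real game would map these to its own settings.
    enum class GraphicsTier { HIGH, MEDIUM, LOW }

    fun chooseGraphicsTier(context: Context): GraphicsTier {
        val powerManager = context.getSystemService(Context.POWER_SERVICE) as PowerManager

        // getThermalHeadroom(forecastSeconds) estimates how close the device is to
        // throttling over the next N seconds: 1.0f means throttling is imminent (API 30+).
        val headroom = powerManager.getThermalHeadroom(10)

        return when {
            headroom.isNaN() -> GraphicsTier.HIGH   // headroom not supported; keep defaults
            headroom < 0.75f -> GraphicsTier.HIGH   // plenty of headroom
            headroom < 0.95f -> GraphicsTier.MEDIUM // getting warm
            else -> GraphicsTier.LOW                // close to throttling
        }
    }

    A production integration would typically register a PowerManager.OnThermalStatusChangedListener or use ADPF performance hint sessions instead of polling, but the headroom call above is enough to show the idea.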

    Here’s how we’re enhancing ADPF with better performance and simplified integration:

    Performance optimization with more features in Play Console

    Once you’ve launched your game, Play Console offers the tools to monitor and improve your game’s performance. We’re now including Low Memory Killers (LMK) in Android vitals, giving you insight into memory constraints that can cause your game to crash. Android vitals is your one-stop destination for monitoring metrics that impact your visibility on the Play Store, like slow sessions. You can find this information next to reach and devices, which provides updates on your game’s user distribution and notifies you of device-specific issues.


    Check your Android vitals regularly to ensure high technical quality

    Bringing PC games to mobile, and pushing the boundaries of gaming

    We’re launching a pilot program to simplify the process of bringing PC games to mobile. It provides support starting from Android game development all the way through publishing your game on Play. Starting this month, games like DREDGE and TABS Mobile are growing their mobile audience using this program. Many more are following in their footsteps this year, including Disco Elysium. You can express your interest to join the PC to mobile program.


    New PC games are coming to mobile

    You can learn more about Android game development from our developer site. We can’t wait to see your title join the ranks of these amazing games built for Android. And if you’ll be at GDC next week, we’d love to say hello – stop by at the Moscone Center West Hall!

    * Source: Google internal data measuring games on Android 14 or later launched between August 2024 – February 2025.



    Source link

  • Building Robust ViewModels | Kodeco

    Building Robust ViewModels | Kodeco





    Source link

  • Building Engaging User Interfaces with SwiftUI [SUBSCRIBER]

    Building Engaging User Interfaces with SwiftUI [SUBSCRIBER]



    This module explores advanced SwiftUI features and techniques to build complex and visually appealing user interfaces. Students will learn about animation and transitions, building complex layouts, and how to integrate SwiftUI with UIKit to leverage existing code and UI components.



    Source link