Category: Android News

  • Common media processing operations with Jetpack Media3 Transformer



    Posted by Nevin Mital – Developer Relations Engineer, and Kristina Simakova – Engineering Manager

    Android users have demonstrated an increasing desire to create, personalize, and share video content online, whether to preserve their memories or to make people laugh. As such, media editing is a cornerstone of many engaging Android apps, and historically developers have often relied on external libraries to handle operations such as Trimming and Resizing. While these solutions are powerful, integrating and managing external library dependencies can introduce complexity and lead to challenges with managing performance and quality.

    The Jetpack Media3 Transformer APIs offer a native Android solution that streamlines media editing with fast performance, extensive customizability, and broad device compatibility. In this blog post, we’ll walk through some of the most common editing operations with Transformer and discuss its performance.

    Getting set up with Transformer

    To get started with Transformer, check out our Getting Started documentation for details on how to add the dependency to your project and a basic understanding of the workflow when using Transformer. In a nutshell, you’ll:

      • Create one or more MediaItem instances from your video file(s), then
      • Apply item-specific edits to them by building an EditedMediaItem for each MediaItem,
      • Create a Transformer instance configured with settings applicable to the whole exported video,
      • and finally start the export to save your applied edits to a file.

    Aside: You can also use a CompositionPlayer to preview your edits before exporting, but this is out of scope for this blog post, as this API is still a work in progress. Please stay tuned for a future post!

    Here’s what this looks like in code:

    val mediaItem = MediaItem.Builder().setUri(mediaItemUri).build()
    val editedMediaItem = EditedMediaItem.Builder(mediaItem).build()
    val transformer = 
      Transformer.Builder(context)
        .addListener(/* Add a Transformer.Listener instance here for completion events */)
        .build()
    transformer.start(editedMediaItem, outputFilePath)
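
    The addListener call above is where you learn whether the export succeeded. As a minimal sketch (not from the original post – adapt it to your own error handling), a Transformer.Listener might look like this, using the onCompleted and onError callbacks:

    // Classes come from androidx.media3.transformer (Transformer, Composition,
    // ExportResult, ExportException).
    val transformerListener = object : Transformer.Listener {
      override fun onCompleted(composition: Composition, exportResult: ExportResult) {
        // The export finished; the file at outputFilePath is ready to use.
        Log.d("Transformer", "Export completed")
      }

      override fun onError(
        composition: Composition,
        exportResult: ExportResult,
        exportException: ExportException
      ) {
        // The export failed; inspect the exception and surface an error to the user.
        Log.e("Transformer", "Export failed", exportException)
      }
    }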
    

    Transcoding, Trimming, Muting, and Resizing with the Transformer API

    Let’s now take a look at four of the most common single-asset media editing operations, starting with Transcoding.

    Transcoding is the process of re-encoding an input file into a specified output format. For this example, we’ll request the output to have video in HEVC (H265) and audio in AAC. Starting with the code above, here are the lines that change:

    val transformer = 
      Transformer.Builder(context)
        .addListener(...)
        .setVideoMimeType(MimeTypes.VIDEO_H265)
        .setAudioMimeType(MimeTypes.AUDIO_AAC)
        .build()
    

    Many of you may already be familiar with FFmpeg, a popular open-source library for processing media files, so we’ll also include FFmpeg commands for each example to serve as a helpful reference. Here’s how you can perform the same transcoding with FFmpeg:

    $ ffmpeg -i $inputVideoPath -c:v libx265 -c:a aac $outputFilePath
    

    The next operation we’ll try is Trimming.

    Specifically, we’ll set Transformer up to trim the input video from the 3 second mark to the 8 second mark, resulting in a 5 second output video. Starting again from the code in the “Getting set up” section above, here are the lines that change:

    // Configure the trim operation by adding a ClippingConfiguration to
    // the media item
    val clippingConfiguration =
       MediaItem.ClippingConfiguration.Builder()
         .setStartPositionMs(3000)
         .setEndPositionMs(8000)
         .build()
    val mediaItem =
       MediaItem.Builder()
         .setUri(mediaItemUri)
         .setClippingConfiguration(clippingConfiguration)
         .build()
    
    // Transformer also has a trim optimization feature we can enable.
    // This will prioritize Transmuxing over Transcoding where possible.
    // See more about Transmuxing further down in this post.
    val transformer = 
      Transformer.Builder(context)
        .addListener(...)
        .experimentalSetTrimOptimizationEnabled(true)
        .build()
    

    With FFmpeg:

    $ ffmpeg -ss 00:00:03 -i $inputVideoPath -t 00:00:05 $outputFilePath
    

    Next, we can mute the audio in the exported video file.

    val editedMediaItem = 
      EditedMediaItem.Builder(mediaItem)
        .setRemoveAudio(true)
        .build()
    

    The corresponding FFmpeg command:

    $ ffmpeg -i $inputVideoPath -c copy -an $outputFilePath
    

    And for our final example, we’ll try resizing the input video by scaling it down to half its original height and width.

    val scaleEffect = 
      ScaleAndRotateTransformation.Builder()
        .setScale(0.5f, 0.5f)
        .build()
    val editedMediaItem =
      EditedMediaItem.Builder(mediaItem)
        .setEffects(
          /* audio */ Effects(emptyList(), 
          /* video */ listOf(scaleEffect))
        )
        .build()
    

    An FFmpeg command could look like this:

    $ ffmpeg -i $inputVideoPath -filter:v "scale=w=trunc(iw/4)*2:h=trunc(ih/4)*2" $outputFilePath
    

    Of course, you can also combine these operations to apply multiple edits on the same video, but hopefully these examples serve to demonstrate that the Transformer APIs make configuring these edits simple.
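
    For example, a hedged sketch (not from the original post) that combines the trim, mute, resize, and transcode settings shown above into a single export could look like this:

    // Trim to the 3s–8s range, drop the audio track, scale to half size, and
    // output H.265 video – all in one Transformer run.
    val clippingConfiguration =
      MediaItem.ClippingConfiguration.Builder()
        .setStartPositionMs(3000)
        .setEndPositionMs(8000)
        .build()
    val mediaItem =
      MediaItem.Builder()
        .setUri(mediaItemUri)
        .setClippingConfiguration(clippingConfiguration)
        .build()

    val scaleEffect =
      ScaleAndRotateTransformation.Builder()
        .setScale(0.5f, 0.5f)
        .build()
    val editedMediaItem =
      EditedMediaItem.Builder(mediaItem)
        .setRemoveAudio(true)
        .setEffects(Effects(/* audioProcessors= */ emptyList(), /* videoEffects= */ listOf(scaleEffect)))
        .build()

    val transformer =
      Transformer.Builder(context)
        .addListener(transformerListener) // e.g. the listener from the sketch above
        .setVideoMimeType(MimeTypes.VIDEO_H265)
        .build()
    transformer.start(editedMediaItem, outputFilePath)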

    Transformer API Performance results

    Here are some benchmarking measurements for each of the 4 operations taken with the Stopwatch API, running on a Pixel 9 Pro XL device:

    (Note that performance for operations like these can depend on a variety of factors, such as the current load the device is under, so the numbers below should be taken as rough estimates.)

    Input video format: 10s 720p H264 video with AAC audio

    • Transcoding to H265 video and AAC audio: ~1300ms
    • Trimming video to 00:03-00:08: ~2300ms
    • Muting audio: ~200ms
    • Resizing video to half height and width: ~1200ms

    Input video format: 25s 360p VP8 video with Vorbis audio

    • Transcoding to H265 video and AAC audio: ~3400ms
    • Trimming video to 00:03-00:08: ~1700ms
    • Muting audio: ~1600ms
    • Resizing video to half height and width: ~4800ms

    Input video format: 4s 8k H265 video with AAC audio

    • Transcoding to H265 video and AAC audio: ~2300ms
    • Trimming video to 00:03-00:08: ~1800ms
    • Muting audio: ~2000ms
    • Resizing video to half height and width: ~3700ms

    One technique Transformer uses to speed up editing operations is to prioritize transmuxing for basic video edits where possible. Transmuxing refers to the process of repackaging video streams without re-encoding, which ensures high-quality output and significantly faster processing times.

    When not possible, Transformer falls back to transcoding, a process that involves first decoding video samples into raw data, then re-encoding them for storage in a new container. Here are some of these differences:

    Transmuxing

      • Transformer’s preferred approach when possible – a quick transformation that preserves elementary streams.
      • Only applicable to basic operations, such as rotating, trimming, or container conversion.
      • No quality loss or bitrate change.


    Transcoding

      • Transformer’s fallback approach in cases when transmuxing isn’t possible – involves decoding and re-encoding elementary streams.
      • More extensive modifications to the input video are possible.
      • Loss in quality due to re-encoding, but can achieve a desired bitrate target.


    We are continuously implementing further optimizations, such as the recently introduced experimentalSetTrimOptimizationEnabled setting that we used in the Trimming example above.

    A trim is usually performed by re-encoding all the samples in the file, but since encoded media samples are stored chronologically in their container, we can improve efficiency by only re-encoding the group of pictures (GOP) between the start point of the trim and the first keyframe at/after the start point, then stream-copying the rest.

    Since we only decode and encode a fixed portion of any file, the encoding latency is roughly constant, regardless of what the input video duration is. For long videos, this improved latency is dramatic. The optimization relies on being able to stitch part of the input file with newly-encoded output, which means that the encoder’s output format and the input format must be compatible.

    If the optimization fails, Transformer automatically falls back to normal export.

    What’s next?

    As part of Media3, Transformer is a native solution with low integration complexity, is tested on a wide variety of devices to ensure compatibility, and is customizable to fit your specific needs.

    To dive deeper, you can explore Media3 Transformer documentation, run our sample apps, or learn how to complement your media editing pipeline with Jetpack Media3. We’ve already seen app developers benefit greatly from adopting Transformer, so we encourage you to try them out yourself to streamline your media editing workflows and enhance your app’s performance!



    Source link

  • Widgets on lock screen: FAQ



    Posted by Tyler Beneke – Product Manager, and Lucas Silva – Software Engineer

    Widgets are now available on your Pixel Tablet lock screens! Lock screen widgets empower users to create a personalized, always-on experience. Whether you want to easily manage smart home devices like lights and thermostats, or build dashboards for quick access and control of vital information, this blog post will answer your key questions about lock screen widgets on Android. Read on to discover when, where, how, and why they’ll be on a lock screen near you.

    Lock screen widgets

    Lock screen widgets in clock-wise order: Clock, Weather, Stocks, Timers, and Google Home App. In the top right is a customization call-to-action.

    Q: When will lock screen widgets be available?

    A: Lock screen widgets will be available in AOSP for tablets and mobile starting with the release after Android 16 (QPR1). This update is scheduled to be pushed to AOSP in late Summer 2025. Lock screen widgets are already available on Pixel Tablets.

    Q: Are there any specific requirements for widgets to be allowed on the lock screen?

    A: No, widgets allowed on the lock screen have the same requirements as any other widget. Widgets on the lock screen should follow the same quality guidelines as home screen widgets, including sizing and configuration. If a widget launches an activity from the lock screen, users must authenticate to launch the activity, or the activity should declare android:showWhenLocked="true" in its manifest entry.
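
    For reference, here is a minimal sketch of the showWhenLocked route in code (the activity name is hypothetical; on API 27+ the same flag can also be set programmatically with Activity.setShowWhenLocked):

    class WidgetDetailActivity : Activity() {
      override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        // Equivalent to declaring android:showWhenLocked="true" in the manifest entry:
        // this activity may be shown over the lock screen without authentication.
        setShowWhenLocked(true)
        // ... set your content as usual
      }
    }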

    Q: How can I test my widget on the lock screen?

    A: Currently, lock screen widgets can be tested on Pixel Tablet devices. You can enable lock screen widgets and add your widget.

    Q: Which widgets can be displayed in this experience?

    A: All widgets are compatible with the lock screen widget experience. To prioritize user choice and customization, we’ve made all widgets available. For the best experience, please make sure your widget supports dynamic color and dynamic resizing. Lock screen widgets are sized to approximately 4 cells wide by 3 cells tall on the launcher, but exact dimensions vary by device.

    Q: Can my widget opt-out of the experience?

    A: Important: Apps can choose to restrict the use of their widgets on the lock screen using an opt-out API. To opt out, use the widget category "not_keyguard" in your AppWidget info XML file. Place this file in an xml-36 resource folder to ensure backwards compatibility.

    Q: Are there any CDD requirements specifically for lock screen widgets?

    A: No, there are no specific CDD requirements solely for lock screen widgets. However, it’s crucial to ensure that any widgets and screensavers that integrate with the framework adhere to the standard CDD requirements for those features.

    Q: Will lock screen widgets be enabled on existing devices?

    A: Yes, lock screen widgets were launched on the Pixel Tablet in 2024. Other device manufacturers may update their devices as well once the feature is available in AOSP.

    Q: Does the device need to be docked to use lock screen widgets?

    A: The mechanism that triggers the lock screen widget experience is customizable by the OEM. For example, OEMs can choose to use charging or docking status as triggers. Third-party OEMs will need to implement their own posture detection if desired.

    Q: Can OEMs set their own default widgets?

    A: Yes! Hardware providers can pre-set and automatically display default widgets.

    Q: Can OEMs customize the user interface for lock screen widgets?

    A: Customization of the lock screen widget user interface by OEMs is not supported in the initial release. All lock screen widgets will have the same developer experience on all devices.

    Lock screen widgets are poised to give your users new ways to interact with your app on their devices. Today you can leverage your existing widget designs and experiences on the lock screen with Pixel Tablets. To learn more about building widgets, please check out our resources on developer.android.com


    This blog post is part of our series: Spotlight Week on Widgets, where we provide resources—blog posts, videos, sample code, and more—all designed to help you design and create widgets. You can read more in the overview of Spotlight Week: Widgets, which will be updated throughout the week.



    Source link

  • Deal: This Samsung 70-inch Crystal UHD 4K Smart TV is just $399!



    Samsung 70 inch Class DU7200B Crystal UHD 4K Smart TV

    This offer is available from Amazon. The price is hidden until you add the unit to your cart, so make sure to do that and check that the deal is still available first.

    Are you looking to get a large TV? No longer do you have to pay thousands for a good one. This one is pretty nice and currently only goes for $399.

    The Samsung 70-inch Class DU7200B Crystal UHD 4K Smart TV is pretty huge at 70 inches diagonally. It also has a 4K UHD resolution with a 60Hz refresh rate. Not to mention, it gets some nice enhancements like PurColor and Motion Xcelerator, to make colors more vivid and true to life, as well as avoiding lag and blur. You’ll also get HDR support, Object Tracking Sound Lite, and Q-Symphony.

    Of course, this is also a smart TV. It is powered by Tizen. You’ll get access to plenty of streaming apps. This includes Netflix, Amazon Prime Video, Hulu, Disney Plus, Apple TV, and more. You’ll also get access to Samsung TV Plus, which can stream live TV channels for free.

    As if streaming both on-demand and live TV wasn’t enough, the Samsung 70-inch Class DU7200B Crystal UHD 4K Smart TV even gets access to Samsung’s Gaming Hub. This means you can enjoy your free time playing games without the need for a console. You can access cloud gaming services like Xbox Game Pass, NVIDIA GeForce Now, Amazon Luna, and others.

    Quite the deal, right? The Samsung 70-inch Class DU7200B Crystal UHD 4K Smart TV is huge, has a 4K resolution, and a full smart TV experience with all the bells and whistles. Catch this deal while you can!



    Source link

  • Health Connect Jetpack SDK is now in beta and new feature updates



    Posted by Brenda Shaw – Health & Home Partner Engineering Technical Writer

    At Google, we are committed to empowering developers as they build exceptional health and fitness experiences. Core to that commitment is Health Connect, an Android platform that allows health and fitness apps to store and share the same on-device data. Android devices running Android 14 or later, or that have the pre-installed APK, have Health Connect by default in Settings. For pre-Android 14 devices, Health Connect is available for download from the Play Store.

    We’re excited to announce significant Health Connect updates, including the Jetpack SDK Beta, new data types, and new permissions that will enable richer, more insightful app functionality.

    Jetpack SDK is now in Beta

    We are excited to announce the beta release of our Jetpack SDK! Since its initial release, we’ve dedicated significant effort to improving data completeness, with a particular focus on enriching the metadata associated with each data point.

    In the latest SDK, we’re introducing two key changes designed to ensure richer metadata and unlock new possibilities for you and your users:

    Make Recording Method Mandatory

    To deliver more accurate and insightful data, the Beta introduces a requirement to specify one of four recording methods when writing data to Health Connect. This ensures increased data clarity, enhanced data analysis and improved user experience:

    If your app currently does not set metadata when creating a record:

    Before

    StepsRecord(
        count = 888,
        startTime = START_TIME,
        endTime = END_TIME,
    ) // error: metadata is not provided
    

    After

    StepsRecord(
        count = 888,
        startTime = START_TIME,
        endTime = END_TIME,
        metadata = Metadata.manualEntry()
    )
    

    If your app currently calls the Metadata constructor when creating a record:

    Before

    StepsRecord(
        count = 888,
        startTime = START_TIME,
        endTime = END_TIME,
        metadata =
            Metadata(
                clientRecordId = "client id",
                recordingMethod = RECORDING_METHOD_MANUAL_ENTRY,
            ), // error: Metadata constructor not found
    )
    

    After

    StepsRecord(
        count = 888,
        startTime = START_TIME,
        endTime = END_TIME,
        metadata = Metadata.manualEntry(clientRecordId = "client id"),
    )
    

    Make Device Type Mandatory

    You will be required to specify the device type when creating a Device object. A Device object is required for automatically (RECORDING_METHOD_AUTOMATICALLY_RECORDED) or actively (RECORDING_METHOD_ACTIVELY_RECORDED) recorded data.

    Before

    Device() // error: type not provided
    

    After

    Device(type = Device.Companion.TYPE_PHONE)
    

    We believe these updates will significantly improve the quality of data within your applications and empower you to create more insightful user experiences. We encourage you to explore the Jetpack SDK Beta, review the updated Metadata page, and familiarize yourself with these changes.

    New background reads permission

    To enable richer, background-driven health and fitness experiences while maintaining user trust, Health Connect now features a dedicated background reads permission.

    This permission allows your app to access Health Connect data while running in the background, provided the user grants explicit consent. Users retain full control, with the ability to manage or revoke this permission at any time via Health Connect settings.

    Let your app read health data even in the background with the new Background Reads permission. Declare the following permission in your manifest file:

    <manifest>
      <uses-permission android:name="android.permission.health.READ_HEALTH_DATA_IN_BACKGROUND" />
      ...
    </manifest>
    

    Use the Feature Availability API to check if the user has the background read feature available, according to the version of Health Connect they have on their devices.
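
    As a rough sketch of that check (the features accessor and constant names below are based on the Jetpack Health Connect client and should be verified against your SDK version):

    val healthConnectClient = HealthConnectClient.getOrCreate(context)
    // Assumption: the client exposes a `features` accessor with getFeatureStatus().
    val status = healthConnectClient.features.getFeatureStatus(
      HealthConnectFeatures.FEATURE_READ_HEALTH_DATA_IN_BACKGROUND
    )
    if (status == HealthConnectFeatures.FEATURE_STATUS_AVAILABLE) {
      // Safe to schedule background reads, e.g. from WorkManager.
    } else {
      // Fall back to reading data only while the app is in the foreground.
    }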

    Allow your app to read historic data

    By default, when granted read permission, your app can access historical data from other apps for the preceding 30 days from the initial permission grant. To enable access to data beyond this 30-day window, Health Connect introduces the PERMISSION_READ_HEALTH_DATA_HISTORY permission. This allows your app to provide new users with a comprehensive overview of their health and wellness history.
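
    A minimal sketch of requesting history access alongside a regular read permission (the record type here is just an example; run this from an Activity or Fragment that can register for activity results):

    val permissions = setOf(
      HealthPermission.getReadPermission(StepsRecord::class),
      // Grants access to data written more than 30 days before the first permission grant.
      HealthPermission.PERMISSION_READ_HEALTH_DATA_HISTORY,
    )

    val requestPermissions = registerForActivityResult(
      PermissionController.createRequestPermissionResultContract()
    ) { granted ->
      if (HealthPermission.PERMISSION_READ_HEALTH_DATA_HISTORY in granted) {
        // The user allowed reading historical data beyond the 30-day window.
      }
    }
    requestPermissions.launch(permissions)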

    Users are in control of their data with both background reads and history reads. Both capabilities require developers to declare the respective permissions, and users must grant the permission before developers can access their data. Even after granting permission, users have the option of revoking access at any time from Health Connect settings.

    Additional data access and types

    Health Connect now offers expanded data types, enabling developers to build richer user experiences and provide deeper insights. Check out the following new data types:

      • Exercise Routes allows users to share exercise routes with other apps for a seamless synchronized workout. By allowing users to share all routes or one route, their associated exercise activities and maps for their workouts will be synced with the fitness apps of their choice.

    Fitness app asking permission to access exercise route in Health Connect

      • The skin temperature data type measures peripheral body temperature unlocking insights around sleep quality, reproductive health, and the potential onset of illness.
      • Health Connect also provides a planned exercise data type to enable training apps to write training plans and workout apps to read training plans. Recorded exercises (workouts) can be read back for personalized performance analysis to help users achieve their training goals. Access granular workout data, including sessions, blocks, and steps, for comprehensive performance analysis and personalized feedback.

    These new data types empower developers to create more connected and insightful health and fitness applications, providing users with a holistic view of their well-being.

    To learn more about all new APIs and bug fixes, check out the full release notes.

    Get started with the Health Connect Jetpack SDK

    Whether you are just getting started with Health Connect or are looking to implement the latest features, there are many ways to learn more and have your voice heard.

      • Subscribe to our newsletter: Stay up-to-date with the latest news, announcements, and resources from Google Health and Fitness. Subscribe to our Health and Fitness Google Developer Newsletter and get the latest updates delivered straight to your inbox.
      • Check out our Health Connect developer guide: The Health and Fitness Developer Center is your one-stop-shop for building health and fitness apps on Android – including a robust guide for getting started with Health Connect.
      • Report an issue: Encountered a bug or technical issue? Report it directly to our team through the Issue Tracker so we can investigate and resolve it. You can also request a feature or provide feedback with Issue Tracker.

    We can’t wait to see what you create!



    Source link

  • Jetpack WindowManager 1.4 is stable



    Posted by Xiaodao Wu – Developer Relations Engineer

    Jetpack WindowManager keeps getting better. WindowManager gives you tools to build adaptive apps that work seamlessly across all kinds of large screen devices. Version 1.4, which is stable now, introduces new features that make multi-window experiences even more powerful and flexible. While Jetpack Compose is still the best way to create app layouts for different screen sizes, 1.4 makes some big improvements to activity embedding, including activity stack pinning, pane expansion, and dialog full-screen dim. Multi-activity apps can easily take advantage of all these great features.

    WindowManager 1.4 introduces a range of enhancements. Here are some of the highlights.

    WindowSizeClass

    We’ve updated the WindowSizeClass API to support custom values. We changed the API shape to make it easy and extensible to support custom values and add new values in the future. The high level changes are as follows:

      • Opened the constructor to take in minWidthDp and minHeightDp parameters so you can create your own window size classes
      • Added convenience methods for checking breakpoint validity
      • Deprecated WindowWidthSizeClass and WindowHeightSizeClass in favor of WindowSizeClass#isWidthAtLeastBreakpoint() and WindowSizeClass#isHeightAtLeastBreakpoint() respectively

    Here’s a migration example:

    // old 
    
    val sizeClass = WindowSizeClass.compute(widthDp, heightDp)
    when (sizeClass.widthSizeClass) {
      COMPACT -> doCompact()
      MEDIUM -> doMedium()
      EXPANDED -> doExpanded()
      else -> doDefault()
    }
    
    // new
    val sizeClass = WindowSizeClass.BREAKPOINTS_V1
                                   .computeWindowSizeClass(widthDp, heightDp)
    
    when {
      sizeClass.isWidthAtLeastBreakpoint(WIDTH_DP_EXPANDED_LOWER_BOUND) -> {
        doExpanded()
      }
      sizeClass.isWidthAtLeastBreakpoint(WIDTH_DP_MEDIUM_LOWER_BOUND) -> {
        doMedium()
      }
      else -> {
        doCompact()
      }
    }
    

    Some things to note in the new API:

      • The order of the when branches should go from largest to smallest to support custom values from developers or new values in the future
      • The default branch should be treated as the smallest window size class

    Activity embedding

    Activity stack pinning

    Activity stack pinning provides a way to keep an activity stack always on screen, no matter what else is happening in your app. This new feature lets you pin an activity stack to a specific window, so the top activity stays visible even when the user navigates to other parts of the app in a different window. This is perfect for things like live chats or video players that you want to keep on screen while users explore other content.

    private fun pinActivityStackExample(taskId: Int) {
     val splitAttributes: SplitAttributes = SplitAttributes.Builder()
       .setSplitType(SplitAttributes.SplitType.ratio(0.66f))
       .setLayoutDirection(SplitAttributes.LayoutDirection.LEFT_TO_RIGHT)
       .build()
    
     val pinSplitRule = SplitPinRule.Builder()
       .setDefaultSplitAttributes(splitAttributes)
       .build()
    
     SplitController.getInstance(applicationContext).pinTopActivityStack(taskId, pinSplitRule)
    }
    

    Pane expansion

    The new pane expansion feature, also known as interactive divider, lets you create a visual separation between two activities in split-screen mode. You can make the pane divider draggable so users can resize the panes – and the activities in the panes – on the fly. This gives users control over how they want to view the app’s content.

    val splitAttributesBuilder: SplitAttributes.Builder = SplitAttributes.Builder()
       .setSplitType(SplitAttributes.SplitType.ratio(0.33f))
       .setLayoutDirection(SplitAttributes.LayoutDirection.LEFT_TO_RIGHT)
    
    if (WindowSdkExtensions.getInstance().extensionVersion >= 6) {
       splitAttributesBuilder.setDividerAttributes(
           DividerAttributes.DraggableDividerAttributes.Builder()
               .setColor(getColor(context, R.color.divider_color))
               .setWidthDp(4)
               .setDragRange(
                   DividerAttributes.DragRange.DRAG_RANGE_SYSTEM_DEFAULT)
               .build()
       )
    }
    val splitAttributes: SplitAttributes = splitAttributesBuilder.build()
    

    Dialog full-screen dim

    WindowManager 1.4 gives you more control over how dialogs dim the background. With dialog full-screen dim, you can choose to dim just the container where the dialog appears or the entire task window for a unified UI experience. The entire app window dims by default when a dialog opens (see EmbeddingConfiguration.DimAreaBehavior.ON_TASK). To dim only the container of the activity that opened the dialog, use EmbeddingConfiguration.DimAreaBehavior.ON_ACTIVITY_STACK. This gives you more flexibility in designing dialogs and makes for a smoother, more coherent user experience. Temu is among the first developers to integrate this feature; the full-screen dialog dim has reduced invalid screen touches by about 5%.


    Customised shopping cart reminder with dialog full-screen dim in Temu.
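
    A minimal sketch of opting into the per-container behavior (assuming ActivityEmbeddingController.setEmbeddingConfiguration is available on the device’s WindowManager extensions version – check the API reference for the exact requirement):

    // The extension version threshold below is an assumption; verify it before relying on it.
    if (WindowSdkExtensions.getInstance().extensionVersion >= 5) {
      ActivityEmbeddingController.getInstance(context)
        .setEmbeddingConfiguration(
          EmbeddingConfiguration.Builder()
            // Dim only the activity stack that opened the dialog instead of the whole task.
            .setDimAreaBehavior(EmbeddingConfiguration.DimAreaBehavior.ON_ACTIVITY_STACK)
            .build()
        )
    }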

    Enhanced posture support

    WindowManager 1.4 makes building apps that work flawlessly on foldables straightforward by providing more information about the physical capabilities of the device. The new WindowInfoTracker#supportedPostures API lets you know if a device supports tabletop mode, so you can optimize your app’s layout and features accordingly.

    val currentSdkVersion = WindowSdkExtensions.getInstance().extensionVersion
    val message =
      if (currentSdkVersion >= 6) {
        val supportedPostures =
          WindowInfoTracker.getOrCreate(LocalContext.current).supportedPostures
        buildString {
          append(supportedPostures.isNotEmpty())
          if (supportedPostures.isNotEmpty()) {
            append(" ")
            append(
              supportedPostures.joinToString(
                separator = ",", prefix = "(", postfix = ")"
              )
            )
          }
        }
      } else {
        "N/A (WindowSDK version 6 is needed, current version is $currentSdkVersion)"
      }
    

    Other API changes

    WindowManager 1.4 includes several API changes and additions to support the new features. Notable changes include:

      • Stable and no longer experimental APIs:
        • ActivityEmbeddingController#invalidateVisibleActivityStacks
        • ActivityEmbeddingController#getActivityStack
        • SplitController#updateSplitAttributes
      • API added to set activity embedding animation background:
        • SplitAttributes.Builder#setAnimationParams
      • API to get updated WindowMetrics information:
        • ActivityEmbeddingController#embeddedActivityWindowInfo
      • API to finish all activities in an activity stack:
        • ActivityEmbeddingController#finishActivityStack

    How to get started

    To start using Jetpack WindowManager 1.4 in your Android projects, update your app dependencies in build.gradle.kts to the latest stable version:

    dependencies {
        implementation("androidx.window:window:1.4.0-rc01")
        ...  
        // or, if you're using the WindowManager testing library:
        testImplementation("androidx.window:window-testing:1.4.0-rc01")
    }
    

    Happy coding!



    Source link

  • Announcing Android support of digital credentials



    Posted by Rohey Livne – Group Product Manager

    In today’s interconnected world, managing digital identity is essential. Android aims to support open standards that ensure seamless interoperability with various identity providers and services. As part of this goal, we are excited to announce that Android, via Credential Manager’s DigitalCredential API, now natively supports OpenID4VP and OpenID4VCI for digital credential presentation and issuance respectively.

    What are digital credentials?

    Digital credentials are cryptographically verifiable documents. The most common emerging use case for digital credentials is identity documents such as driver’s licenses, passports, or national ID cards. In the coming years, it is anticipated that Android developers will develop innovative applications of this technology for a wider range of personal credentials that users will need to present digitally, including education certifications, insurance policies, memberships, permits, and more.

    Digital credentials can be provided by any installed Android app. These apps are known as “credential holders”; typically digital wallet apps such as Google Wallet or Samsung Wallet.

    Other apps not necessarily thought of as “wallets” may also have a use for exposing a digital credential. For example an airline app might want to offer their users’ air miles reward program membership as a digital credential to be presented to other apps or websites.

    Digital credentials can be presented by the user to any other app or website on the same device, and Android also supports securely presenting Digital Credentials between devices using the same industry standard protocols used by passkeys (CTAP), by establishing encrypted communication tunnels.

    Users can store multiple credentials across multiple apps on their device. By leveraging OpenID4VP requests from websites using the W3C Digital Credential API, or from native apps using Android Credential Manager API, a user can select what credential to present from across all available credentials across all installed digital wallet apps.

    How digital credentials work

    Presentation

    To present the credential, the verifier sends an OpenID4VP request to the Digital Credential API, which then prompts the user to select a credential across all the credentials that can satisfy this request. Note that the user is selecting a credential, not a digital wallet app:


    Digital credentials selection interface

    Once the user chooses a credential to proceed with, the Android platform redirects the original OpenID4VP request to the digital wallet app that holds the chosen credential to complete the presentation back to the verifier. When the digital wallet app receives the OpenID4VP request from Android, it can also perform any additional due-diligence steps it needs prior to releasing the credential to the verifier.
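
    For a native verifier app, this request goes through Credential Manager. A minimal sketch (the OpenID4VP request JSON itself is elided since building it is protocol-specific, and this API may still require an experimental opt-in depending on your androidx.credentials version):

    suspend fun requestDigitalCredential(activity: Activity, openId4VpRequestJson: String): String? {
      val credentialManager = CredentialManager.create(activity)
      val request = GetCredentialRequest(
        listOf(GetDigitalCredentialOption(requestJson = openId4VpRequestJson))
      )
      return try {
        val result = credentialManager.getCredential(activity, request)
        // The selected credential comes back as JSON for the verifier to validate.
        (result.credential as? DigitalCredential)?.credentialJson
      } catch (e: GetCredentialException) {
        null // The user cancelled or no matching credential was available.
      }
    }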

    Issuance

    Android also allows developers to issue their own Digital Credentials to a user’s digital wallet app. This process can be done using an OpenID4VCI request, which prompts the user to choose the digital wallet app that they want to store the credential in. Alternatively, the issuance could be done directly from within the digital wallet app (some apps might not even have an explicit user facing issuance step if they store credentials based on their association to a signed-in user account).


    A wallet app holds a single credential

    Over time, the user can repeat this process to issue multiple credentials across multiple digital wallet apps:


    Multiple wallet apps hold multiple credentials

    Note: To ensure that at presentation time Android can appropriately list all the credentials that digital wallet apps hold, digital wallets must register their credentials’ metadata with Credential Manager. Credential Manager uses this metadata to match credentials across available digital wallet apps to the verifier’s request, so that it can only present a list of valid credentials that can satisfy the request for the user to select from.

    Early adopters

    As Google Wallet announced yesterday, soon users will be able to use digital credentials to recover Amazon accounts, access online health services with CVS and MyChart by Epic, and verify profiles or identity on platforms like Uber and Bumble.

    These use cases will take advantage of users’ digital credentials stored in any digital wallet app users have on their Android device. To that end, we’re also happy to share that both Samsung Wallet and 1Password will hold users’ digital credentials as digital wallets and support OpenID standards via Android’s Credential Manager API.

    Learn more

    Credential Manager API lets every Android app implement credential verification or provide credentials on the Android platform.

    Check out our new digital credential documentation on how to become a credential verifier, taking advantage of users’ existing digital credentials using Jetpack Credential Manager, or to become a digital wallet app holding your own credentials for other apps or websites to verify.



    Source link

  • Building excellent games with better graphics and performance



    Posted by Matthew McCullough – VP of Product Management, Android

    We’re stepping up our multiplatform gaming offering with exciting news dropping at this year’s Game Developers Conference (GDC). We’re bringing users more games, more ways to play your games across devices, and improved gameplay. You can read all about the updates for users from The Keyword. At GDC, we’ll be diving into all of the latest games coming to Play, plus new developer tools that’ll help improve gameplay across the Android ecosystem.

    Today, we’re sharing a closer look at what’s new from Android. We’re making Vulkan the official graphics API on Android, enabling you to build immersive visuals, and we’re enhancing the Android Dynamic Performance Framework (ADPF) to help you deliver longer, more stable gameplays. Check out the video or keep reading below.

    https://www.youtube.com/watch?v=9MN0-qwYAFU

    More immersive visuals built on Vulkan, now the official graphics API

    These days, games require more processing power for realistic graphics and cutting-edge visuals. Vulkan is an API used for low-level graphics that helps developers maximize the performance of modern GPUs, and today we’re making it the official graphics API for Android. This unlocks advanced features like ray tracing and multithreading for realistic and immersive gaming visuals. For example, Diablo Immortal used Vulkan to implement ray tracing, bringing the world of Sanctuary to life with spectacular special effects, from fiery explosions to icy blasts.


    Diablo Immortal running on Vulkan

    For casual games like Pokémon TCG Pocket, which draws players into the vibrant world of each Pokémon, Vulkan helps optimize graphics across a broad range of devices to ensure a smooth and engaging experience for every player.


    Pokémon TCG Pocket running on Vulkan

    We’re excited to announce that Android is transitioning to a modern, unified rendering stack with Vulkan at its core. Starting with our next Android release, more devices will use Vulkan to process all graphics commands. If your game is running on OpenGL, it will use ANGLE as a system driver that translates OpenGL to Vulkan. We recommend testing your game on ANGLE today to ensure it’s ready for the Vulkan transition.

    We’re also partnering with major game engines to make Vulkan integration easier. With Unity 6, you can configure Vulkan per device while older versions can access this setting through plugins. Over 45% of sessions from new games on Unity* use Vulkan, and we expect this number to grow rapidly.

    To simplify workflows further, we’re teaming up with the Samsung Austin Research Center to create an integrated GPU profiler toolchain for Vulkan and AI/ML optimization. Coming later this year, this tool will enable developers to make graphics, memory and compute workloads more efficient.

    Longer and smoother gameplay sessions with ADPF

    Android Dynamic Performance Framework (ADPF) enables developers to adjust a game’s performance in real time based on the device’s thermal state, and it’s getting a big update today to provide longer and smoother gameplay sessions. ADPF is designed to work across a wide range of devices, including models like the Pixel 9 family and the Samsung S25 Series. We’re excited to see MMORPGs like Lineage W integrating ADPF to optimize performance on their core target devices.


    Lineage W running on ADPF
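
    ADPF builds on the platform’s thermal and performance-hint APIs. As a rough illustration (this shows only the plain PowerManager thermal surface, not everything ADPF offers), a game can watch thermal headroom and scale quality down before throttling kicks in:

    val powerManager = context.getSystemService(PowerManager::class.java)

    // How much thermal headroom is forecast over the next 10 seconds?
    // A value approaching 1.0 means throttling is imminent, so reduce the graphics load early.
    val headroom = powerManager.getThermalHeadroom(/* forecastSeconds= */ 10)
    if (headroom > 0.8f) {
      // e.g. lower the target frame rate or resolution scale.
    }

    // React to thermal status changes (NONE, LIGHT, MODERATE, SEVERE, ...) as they happen.
    powerManager.addThermalStatusListener { status ->
      if (status >= PowerManager.THERMAL_STATUS_SEVERE) {
        // Drop expensive effects to keep the session playable.
      }
    }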

    Here’s how we’re enhancing ADPF with better performance and simplified integration:

    Performance optimization with more features in Play Console

    Once you’ve launched your game, Play Console offers the tools to monitor and improve your game’s performance. We’re newly including Low Memory Killers (LMK) in Android vitals, giving you insight into memory constraints that can cause your game to crash. Android vitals is your one-stop destination for monitoring metrics that impact your visibility on the Play Store, like slow sessions. You can find this information next to Reach and devices, which provides updates on your game’s user distribution and notifies you of device-specific issues.

    Android vitals details in Google Play Console

    Check your Android vitals regularly to ensure high technical quality

    Bringing PC games to mobile, and pushing the boundaries of gaming

    We’re launching a pilot program to simplify the process of bringing PC games to mobile. It provides support starting from Android game development all the way through publishing your game on Play. Starting this month, games like DREDGE and TABS Mobile are growing their mobile audience using this program. Many more are following in their footsteps this year, including Disco Elysium. You can express your interest to join the PC to mobile program.


    New PC games are coming to mobile

    You can learn more about Android game development from our developer site. We can’t wait to see your title join the ranks of these amazing games built for Android. And if you’ll be at GDC next week, we’d love to say hello – stop by at the Moscone Center West Hall!

    * Source: Google internal data measuring games on Android 14 or later launched between August 2024 – February 2025.



    Source link

  • Making Google Play the best place to grow PC games



    Posted by Aurash Mahbod – VP and GM of Games on Google Play

    We’re stepping up our multiplatform gaming offering with exciting news dropping at this year’s Game Developers Conference (GDC). We’re bringing users more games, more ways to play your games across devices, and improved gameplay. You can read all about the updates for users from The Keyword. At GDC, we’ll be diving into all of the latest games coming to Play, plus new developer tools that’ll help improve gameplay across the Android ecosystem.

    Today, we’re sharing a closer look at what’s new from Play. We’re expanding our support for native PC games with a new earnback program and making Google Play Games on PC generally available this year with major upgrades. Check out the video or keep reading below.

    https://www.youtube.com/watch?v=iP-Bzzn8q4s

    Google Play connects developers with over 2 billion monthly active players1 worldwide. Our tools and features help you engage these players across a wide range of devices to drive engagement and revenue. But we know the gaming landscape is constantly evolving. More and more players enjoy the immersive experiences on PC and want the flexibility to play their favorite games on any screen.

    That’s why we’re making even bigger investments in our PC gaming platform. Google Play Games on PC was launched to help mobile games reach more players on PC. Today, we’re expanding this support to native PC games, enabling more developers to connect with our massive player base on mobile.

    Expanding support for native PC games

    For games that are designed with a PC-first audience in mind, we’ve added even more helpful tools to our native PC program. Games like Wuthering Waves, Remember of Majesty, Genshin Impact, and Journey of Monarch have seen great success on the platform. Based on feedback from early access partners, we’re taking the program even further, with comprehensive support across game development, distribution, and growth on the platform.

      • Develop with Play Games PC SDK: We’re launching a dedicated SDK for native PC games on Google Play Games, providing powerful tools, such as easier in-app purchase integration and advanced security protection.
      • Distribute through Play Console: We’ve made it easier for developers to manage both mobile and PC game builds in one place, simplifying the process of packaging PC versions, configuring releases, and managing store listings.
      • Grow with our new earnback program: Bring your PC games to Google Play Games on PC to unlock up to 15% additional earnback.2

    We’re opening up the program for all native PC games – including PC-only games – this year. Learn more about the eligibility requirements and how to join the program.


    Native PC games on Google Play Games

    Making PC an easy choice for mobile developers

    Bringing your game to PC unlocks a whole new audience of engaged players. To help maximize your discoverability, we’re making all mobile games available3 on PC by default with the option to opt out anytime.

    Games will display a playability badge indicating their compatibility with PC. “Optimized” means that a game meets all of our quality standards for a great gaming experience while “playable” means that the game meets the minimum requirements to play well on a PC. With the support of our new custom control mappings, many games can be playable right out of the box. Learn more about the playability criteria and how to optimize your games for PC today.


    Thousands of new games are added to Google Play Games

    To enhance our PC experience, we’ve made major upgrades to the platform. Now, gamers can enjoy the full Google Play Games on PC catalog on even more devices, including AMD laptops and desktops. We’re partnering with PC OEMs to make Google Play Games accessible right from the start menu on new devices starting this year.

    We’re also bringing new features for players to customize their gaming experiences. Custom controls is now available to help tailor their setup for optimal comfort and performance. Rolling out this month, we’re adding a handy game sidebar for quick adjustments and enabling multi-account and multi-instance support by popular demand.


    You can customize controls while playing Dye Hard – Color War

    Unlocking exclusive rewards on PC with Play Points

    To help you boost engagement, we’re also rolling out a more seamless Play Points4 experience on PC. Play Points balance is now easier to track and more rewarding, with up to 10x points boosters5 on Google Play Games. This means more opportunities for players to earn and redeem points for in-game items and discounts, enhancing the overall PC experience.


    Google Play Points is integrated seamlessly with Google Play Games

    Bringing new PC UA tools powered by Google Ads

    More developers are launching games on PC than ever, presenting an opportunity to reach a rapidly growing audience on PC. We want to make it easier for developers to reach great players with Google Ads. We’re working on a solution to help developers run user acquisition campaigns for both mobile emulated and native PC titles within Google Play Games on PC. We’re still in the early stages of partner testing, but we look forward to sharing more details later this year.

    Join the celebration!

    We’re celebrating all that’s to come to Google Play Games on PC with players and developers. Take a look at the behind-the-scenes from our social channels and editorial features on Google Play. At GDC, you can dive into the complete gaming experience that is available on the best Android gaming devices. If you’ll be there, please stop by and say hello – we’re at the Moscone Center West Hall!

    1 Source: Google internal data measuring monthly users who opened a game downloaded from the Play store.

    2 Additional terms apply for the earnback program.

    3 Your game’s visibility on Google Play Games on PC is determined by its playability badge. If your game is labeled as “Untested”, this means it will only appear if a user specifically searches for it in the Google Play Games on PC search menu. The playability badge may change once testing is complete. You can express interest in having Play evaluate your game for playability using this form.

    5 Offered for a limited time period. Additional terms apply.



    Source link

  • Multimodal image attachment is now available for Gemini in Android Studio



    Posted by Paris Hsu – Product Manager, Android Studio

    At every stage of the development lifecycle, Gemini in Android Studio has become your AI-powered companion, making it easier to build high quality apps. We are excited to announce a significant expansion: Gemini in Android Studio now supports multimodal inputs, which lets you attach images directly to your prompts! This unlocks a wealth of new possibilities that improve team collaboration and UI development workflows.

    You can try out this new feature by downloading the latest Android Studio canary. We’ve outlined a few use cases to try, but we’d love to hear what you think as we work through bringing this feature into future stable releases. Check it out:

    https://www.youtube.com/watch?v=f_6mtRWJzuc

    Image attachment – a new dimension of interaction

    We first previewed Gemini’s multimodal capabilities at Google I/O 2024. This technology allows Gemini in Android Studio to understand simple wireframes, and transform them into working Jetpack Compose code.

    You’ll now find an image attachment icon in the Gemini chat window. Simply attach JPEG or PNG files to your prompts and watch Gemini understand and respond to visual information. We’ve observed that images with strong color contrasts yield the best results.


    1.1 New “Attach Image File” icon in chat window


    1.2 Example multimodal response in chat

    We encourage you to experiment with various prompts and images. Here are a few compelling use cases to get you started:

      • Rapid UI prototyping and iteration: Convert a simple wireframe or high-fidelity mock of your app’s UI into working code.
      • Diagram explanation and documentation: Gain deeper insights into complex architecture or data flow diagrams by having Gemini explain their components and relationships.
      • UI troubleshooting: Capture screenshots of UI bugs and ask Gemini for solutions.

    Rapid UI prototyping and iteration

    Gemini’s multimodal support lets you convert visual designs into functional UI code. Simply upload your image and use a clear prompt. It works whether you’re working from your own sketches or from a designer mockup.

    Here’s an example prompt: “For this image provided, write Android Jetpack Compose code to make a screen that’s as close to this image as possible. Make sure to include imports, use Material3, and document the code.” And then you can append any specific or additional instructions related to the image.



    2. Example of generating Compose code from high-fidelity mock using Gemini in Android Studio (code output)

    For more complex UIs, refine your prompts to capture specific functionality. For instance, when converting a calculator mockup, adding “make the interactions and calculations work as you’d expect” results in a fully functional calculator:

    Example prompt to convert a calculator mock up


    3. Example of generating Compose code from wireframe via Gemini in Android Studio (code output)

    Note: this feature provides an initial design scaffold. It’s a good “first draft” and your edits and adjustments will be needed. Common refinements include ensuring correct drawable imports and importing icons. Consider the generated code a highly efficient starting point, accelerating your UI development workflow.

    Diagram explanation and documentation

    With Gemini’s multimodal capabilities, you can also try uploading an image of your diagram and ask for explanations or documentation.

    Example prompt: Upload the Now in Android architecture diagram and say “Explain the components and data flow in this diagram” or “Write documentation about this diagram”.


    4. Example of asking Gemini to help document the NowInAndroid architecture diagram

    UI troubleshooting

    Leverage Gemini’s visual analysis to identify and resolve bugs quickly. Upload a screenshot of the problematic UI, and Gemini will analyze the image and suggest potential solutions. You can also include relevant code snippets for more precise assistance.

    In the example below, we used Compose UI check and found that the button is stretched too wide on tablet screens, so we took a screenshot and asked Gemini for solutions – it was able to leverage the window size classes to provide the right fix.


    5. Example of fixing UI bugs using Image Attachment (code output)

    Download Android Studio today

    Download the latest Android Studio canary today to try the new multimodal features!

    As always, Google is committed to the responsible use of AI. Android Studio won’t send any of your source code to servers without your consent. You can read more on Gemini in Android Studio’s commitment to privacy.

    We appreciate any feedback on things you like or features you would like to see. If you find a bug, please report the issue and also check out known issues. Remember to also follow us on X, Medium, or YouTube for more Android development updates!





    Source link

  • Multimodal for Gemini in Android Studio, news for gaming devs, the latest devices at MWC, XR and more!



    Posted by Anirudh Dewani – Director, Android Developer Relations

    We just dropped our Winter episode of #TheAndroidShow, on YouTube and on developer.android.com, and this time we were in Barcelona to give you the latest from Mobile World Congress and across the Android Developer world. We unveiled a big update to Gemini in Android Studio (multimodal support, so you can translate images to code) and we shared some news for games developers ahead of GDC later this month. Plus we unpacked the latest Android hardware devices from our partners coming out of Mobile World Congress and recapped all of the latest in Android XR. Let’s dive in!

    https://www.youtube.com/watch?v=-Drt3YeIMuc

    Multimodality image-to-code, now available for Gemini in Android Studio

    At every stage of the development lifecycle, Gemini in Android Studio has become your AI-powered companion. Today, we took the wraps off a new feature: Gemini in Android Studio now supports multimodal image to code, which lets you attach images directly to your prompts! This unlocks a wealth of new possibilities that improve collaboration and design workflows. You can try out this new feature by downloading the latest canary – Android Studio Narwhal, and read more about multimodal image attachment – now available for Gemini in Android Studio.

    https://www.youtube.com/watch?v=f_6mtRWJzuc

    Building excellent games with better graphics and performance

    Ahead of next week’s Games Developer Conference (GDC), we announced new developer tools that will help improve gameplay across the Android ecosystem. We’re making Vulkan the official graphics API on Android, enabling you to build immersive visuals, and we’re enhancing the Android Dynamic Performance Framework (ADPF) to help you deliver longer, more stable gameplay sessions. Learn more about how we’re building excellent games with better graphics and performance.

    https://www.youtube.com/watch?v=SkkkwCEkO6I

    A deep dive into Android XR

    Since we unveiled Android XR in December, it’s been exciting to see developers preparing their apps for the next generation of Android XR devices. In the latest episode of #TheAndroidShow we dove into this new form factor and spoke with a developer who has already been building. Developing for this new platform leverages your existing Android development skills and familiar tools like Android Studio, Kotlin, and Jetpack libraries. The Android XR SDK Developer Preview is available now, complete with an emulator, so you can start experimenting and building XR experiences immediately! Visit developer.android.com/xr for more.

    https://www.youtube.com/watch?v=AkKjMtBYwDA

    New Android foldables and tablets, at Mobile World Congress

    Mobile World Congress is a big moment for Android, with partners from around the world showing off their latest devices. And if you’re already building adaptive apps, we wanted to share some of the cool new foldables and tablets that our partners released in Barcelona:

      • OPPO: OPPO launched their Find N5, their slim 8.93mm foldable with a 8.12” large screen – making it as compact or expansive as needed.
      • Xiaomi: Xiaomi debuted the Xiaomi Pad 7 series. Xiaomi Pad 7 provides a crystal-clear display and, with the productivity accessories, users get a desktop-like experience with the convenience of a tablet.
      • Lenovo: Lenovo showcased their Yoga Tab Plus, the latest powerful tablet from their lineup designed to empower creativity and productivity.

    These new devices are a great reason to build adaptive apps that scale across screen sizes and device types. Plus, Android 16 removes the ability for apps to restrict orientation and resizability at the platform level, so you’ll want to prepare. To help you get started, the Compose Material 3 adaptive library enables you to quickly and easily create layouts across all screen sizes while reducing the overall development cost.

    https://www.youtube.com/watch?v=KqkUQpsQ2QA

    Watch the Winter episode of #TheAndroidShow

    That’s a wrap on this quarter’s episode of #TheAndroidShow. A special thanks to our co-hosts for the Winter episode, Simona Milanović and Alejandra Stamato! You can watch the full show on YouTube and on developer.android.com/events/show.

    Have an idea for our next episode of #TheAndroidShow? It’s your conversation with the broader community, and we’d love to hear your ideas for our next quarterly episode – you can let us know on X or LinkedIn.





    Source link