Category: Android News

  • Don’t upgrade to T-Mobile Experience; your legacy plan is better



    T Mobile logo on smartphone with colored background stock photo

    Edgar Cervantes / Android Authority

    Earlier this week, T-Mobile announced the retirement of its Go5G lineup, introducing new Experience plans to take their place. At first, these changes didn’t seem too bad. But then it became clear: taxes and fees are no longer included in the advertised price. That changes everything.

    Let’s take a closer look at why you’re probably better off skipping T-Mobile Experience, and where a few rare exceptions might apply.


    T-Mobile Experience is big on marketing, little on substance

    First, here’s a quick recap of how the new Experience plans compare to the Go5G offerings they’re replacing. Also note that there is no direct replacement for the old base Go5G plan; it’s simply no longer offered.

    T-Mobile Experience More offers everything from Go5G Plus, but adds:

    • 10GB of additional hotspot data (60GB total)
    • Free T-Satellite with Starlink through the end of the year

    Meanwhile, T-Mobile Experience Beyond takes Go5G Next and adds the following:

    • 200GB of additional hotspot access (250GB total)
    • 15GB of extra high-speed data (30GB total)
    • 15GB of high-speed data in 210 countries (a new perk)

    There’s also a new 5-year price guarantee — though so far, it looks more like a sidegrade than an upgrade. I’ll break that down further in a separate piece as I’m still digging into it.

    On paper, these changes aren’t nearly as dramatic as the shift from Magenta to Go5G. T-Mobile even tries to sweeten the pot by offering a $5 per line discount on Experience More compared to Go5G Plus. But let’s be real: the 10GB hotspot bump won’t move the needle for most users, who rarely burn through their hotspot allowance to begin with.

    The free Starlink T-Satellite beta access is a bit more compelling, especially since current beta users aren’t being charged, though that’s set to change in July. Even so, this is not a permanent perk: Experience More customers will eventually have to pay extra for satellite access, just like Go5G Plus subscribers.

    Experience Beyond, to its credit, has a bit more appeal for frequent travelers. You also get satellite backup for free as a long-term perk. But if you’re not flying internationally for work or play, or don’t need satellite access? These extras won’t change much for you.

    The bigger issue is what T-Mobile no longer includes in either of these plans: taxes and fees. Previously, T-Mobile baked taxes and most fees into the monthly price for its Go5G plans. Not anymore. With Experience, you’ll see these costs tacked on, meaning that a $90 or $100 plan might balloon closer to $110 or more, depending on where you live.

    For plans that add very little for most mainstream users, this shift feels like a pure marketing sleight of hand. There’s just not enough real value here to justify the change.

    Are there any exceptions where T-Mobile Experience might make sense?

    Google Fi Wireless logo on smartphone with colored background stock photo

    Edgar Cervantes / Android Authority

    While I generally can’t recommend these plans for existing customers, there are a few narrow scenarios where they might make sense:

    • Low-tax states with multiple lines: As one reader pointed out in the comments on my original article, folks in low-tax states with several lines might actually save a few bucks over Go5G Plus. But be careful: this is very case-specific. Do the math and verify your state’s tax rate before jumping ship.
    • Frequent international travelers: If you travel often and currently rely on an add-on or a second carrier for roaming, Experience Beyond’s expanded international data might save you money compared to Go5G Next, but again, this depends on your usage.

    That said, there are better alternatives out there for international users. Google Fi, for instance, offers a postpaid-like experience and more robust roaming features, often at a lower cost. As always, do your homework.

    What about new customers or those with much older legacy plans?

    If you are on an existing Go5G plan, my general advice is to either stay put or look at outside alternatives if you aren’t happy with T-Mobile’s recent changes. Older legacy customers may eventually feel priced out of their old plans due to creeping fees and rate hikes as well. Still, I’d advise against switching to T-Mobile Experience. You’re likely better off sticking to what you have or exploring prepaid carriers, which often offer similar service at much lower rates.

    That same advice applies to new customers considering T-Mobile. Unless you absolutely need in-store customer support, free phone deals, or other perks, T-Mobile’s current lineup just isn’t worth the premium in 2025.

    Even then, you’re often better off buying a phone outright (ideally with a no-interest financing deal from a retailer) and pairing it with a prepaid carrier. You’ll save a lot of money and can still enjoy options like insurance through select carriers. For those who still prefer in-person support, consider Cricket or Metro by T-Mobile; both are more affordable than the big three, and each still has thousands of physical stores nationwide.

    It’s time to rethink what prepaid means in the US

    I get it — switching is hard. I was slow to make the leap myself. In fact, my family still has a few Verizon lines we’re gradually migrating elsewhere as we pay off devices.

    But here’s some perspective: the US is one of the only major mobile markets where postpaid is the absolute default. In many parts of Europe and Asia, most people use prepaid or mobile virtual network operators (MVNOs) instead of signing on directly with a major carrier.

    That old stereotype of prepaid being cheap and limited? It’s outdated. The prepaid market has matured with unlimited plans, solid customer support, and even premium device compatibility. Some carriers even offer special device promotions and more.

    Of course, if you absolutely refuse to consider prepaid, then T-Mobile is still your best bet among the big three. Despite the new pricing structure, it generally remains cheaper and more user-friendly than Verizon or AT&T, though that advantage is shrinking year by year. You could also consider Boost Mobile, though I’ve heard mixed things about its postpaid service. US Cellular is also a regional option, but it’s equally pricey and very likely to be eaten up by T-Mobile and the big three anyhow.




  • Android Developers Blog: Introducing Widget Quality Tiers



    Posted by Ivy Knight – Senior Design Advocate

    Level up your app Widgets with new quality tiers

    Widgets can be a powerful tool for engaging users and increasing the visibility of your app. They can also help you to improve the user experience by providing users with a more convenient way to access your app’s content and features.

    A great Android widget should be helpful, adaptive, and visually cohesive with the overall aesthetic of the device home screen.

    To help you achieve this, we are pleased to introduce Android Widget Quality Tiers!

    The new widget quality tiers are here to guide you toward a best-practice widget implementation that will look great and bring your users value across the ecosystem of Android phones, tablets, and foldables.

    What does this mean for widget makers?

    Whether you are planning a new widget, or investing in an update to an existing widget, the Widget Quality Tiers will help you evaluate and plan for a high quality widget.

    Just as the Large Screen quality tiers help optimize app experiences, these new tiers guide you in creating widgets that are not just functional, but also visually appealing and user-friendly across all Android devices.

    Two screenshots of a phone display different views in the Google Play app. The first shows a list of running apps with the Widget filter applied in a search for 'Running apps'; the second shows the Nike Run Club app page.

    Widgets that meet quality tier guidelines will be discoverable under the new Widget filter in Google Play.

    Consider using our Canonical Widget layouts, which are based on Jetpack Glance components, to make it easier for you to design and build a Tier 1 widget your users will love.

    Let’s take a look at the Widget Quality Tiers

    There are three tiers built with required system defaults and suggested guidance to create an enhanced widget experience:

    Tier 1: Differentiated

    Four mockups show examples of Material Design 3 dynamic color applied to an app called 'Radio Hour'.

    Differentiated widgets go further by implementing theming and adapting to resizing.

    Tier 1 widgets are exemplary widgets offering hero experiences that are personalized, and create unique and productive homescreens. These widgets meet Tier 2 standards plus enhancements for layout, color, discovery, and system coherence criteria.


    For example, use the system-provided corner radius, and don’t set a custom corner radius on widgets.

    Add more personalization with dynamic color and generated previews while ensuring your widgets look good across devices by not overriding system defaults.
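
    As an illustration (not from the original post), here is a minimal Jetpack Glance sketch of a widget that wraps its content in GlanceTheme so it picks up the device’s dynamic color instead of hard-coding a palette; the RadioHourWidget class name is hypothetical (imports omitted for brevity):

    // Hypothetical widget class. GlanceTheme supplies Material dynamic color on
    // Android 12+, so the widget follows the user's wallpaper-based palette.
    class RadioHourWidget : GlanceAppWidget() {
        override suspend fun provideGlance(context: Context, id: GlanceId) {
            provideContent {
                GlanceTheme {
                    Box(
                        modifier = GlanceModifier
                            .fillMaxSize()
                            .background(GlanceTheme.colors.background)
                    ) {
                        Text(
                            text = "Radio Hour",
                            style = TextStyle(color = GlanceTheme.colors.onBackground)
                        )
                    }
                }
            }
        }
    }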

     Four mockups show examples of Material Design 3 components on Android: a contact card, a podcast player, a task list, and a news feed.

    Tier 1 widgets that, from the top left, properly crop content, fill the layout bounds, have appropriately sized headers and touch targets, and make good use of colors and contrast.

    Tier 2: Quality Standard

    These widgets are helpful, usable, and provide a quality experience. They meet all criteria for layout, color, discovery, and content.

    A simple to-do list app widget displays two tasks: 'Water plants' and 'Water more plants.' Both tasks have calendar icons next to them. The app is titled 'Plants' and has search and add buttons in the top right corner.

    Make sure your widget has appropriate touch targets.

    Tier 2 widgets are functional but simple: they meet the basic criteria for a usable app. But if you want to create a truly stellar experience for your users, the Tier 1 criteria introduce ways to make a more personal, interactive, and coherent widget.

    Tier 3: Low Quality

    These widgets don’t meet the minimum quality bar and don’t provide a great user experience because they do not follow, or are missing, criteria from Tier 2.

     Examples of Material Design 3 widgets are displayed on a light pink background with stylized X shapes. Widgets include a podcast player, a contact card, to-do lists, and a music player.

    Clockwise from the top left: not filling the bounds, poorly cropped content, low color contrast, a mis-sized header, and small touch targets.


    For example, ensure content is visible and not cropped.

    Build and elevate your Android widgets with Widget Quality Tiers

    Dive deeper into the widget quality tiers and start building widgets that not only look great but also provide an amazing user experience! Check out the official Android documentation for detailed information and best practices.


    This blog post is part of our series: Spotlight Week on Widgets, where we provide resources—blog posts, videos, sample code, and more—all designed to help you design and create widgets. You can read more in the overview of Spotlight Week: Widgets, which will be updated throughout the week.




  • Design with Widget Canonical Layouts



    Posted by Summers Pitman – Developer Relations Engineer, and Ivy Knight – Senior Design Advocate

    Widgets can bring more productive, delightful and customized experiences to users’ home screens, but they can be tricky to design in a way that ensures a high-quality, focused experience. In this blog post, we’ll cover how Widget Canonical Layouts can make this process easier.

    But what is a Canonical Layout? It is a common layout pattern that works for various screen sizes. You can use these ready-to-use compositions as a starting point to help layouts adapt to common use cases and screen sizes. Widgets also provide Canonical Layouts to help you get started crafting higher-quality widgets.

    Widget Canonical Layouts

    The Widget Canonical Layouts Figma file makes it easy to preview your widget content across multiple breakpoints and layout types. Join me in our Figma design resource to explore how it can simplify designing a widget for one of our sample apps, JetNews.

    Three side-by-side examples of Widget Canonical Layouts in Figma being used to design a widget for JetNews

    1. Content to adapt

    Jetnews is a sample news-reading app, built with Jetpack Compose. With that experience in mind, the primary user journey is reading articles.

      • A widget should be glanceable, so displaying a full article would not be a good use case.
      • Since they are timely news articles, surfacing newer content could be more productive for users.
      • We’ll want to give a condensed version of each article similar to the app home feed.
      • The addition of a bookmark action would allow the user to save and read later in the full app experience.

    Examples of using Widget Canonical Layouts in Figma to design a widget for JetNews

    2. Choosing a Canonical Layout

    With our content and user journey established, we’ll take a glance at which canonical layouts would make sense.

    We want to show at least a few new articles with a headline, truncated description, and possible thumbnail, which brings us to the Image + Text Grid layout and maybe the List layout.

    Examples of using Widget Canonical Layouts in Figma to design a widget for JetNews

    Within our new Figma Widget Canonical Layout preview, we can add in some mock content to check out how these layouts will look in various sizes.

    Examples of using Widget Canonical Layouts in Figma to design a widget for JetNews

    Moving example of using Widget Canonical Layouts in Figma to design a widget for JetNews

    3. Adapting to breakpoint sizes

    Now that we’ve previewed our content in both the grid and list layouts, we don’t have to choose between just one!

    The grid layout better displays our content at larger sizes, where we have more room to take advantage of multiple columns and a larger thumbnail image, while the list works nicely for smaller sizes, giving a one-column layout with a smaller thumbnail.

    Examples of using Widget Canonical Layouts in Figma to design a widget for JetNews

    But we can adapt even further to allow the user to have more resizing flexibility and anticipate different OEM grid sizing. For JetNews, we decided on an additional extra-small layout to accommodate a smaller grid size and vertical height while still using the List layout. For this size I decided to remove the thumbnail altogether to give the title and action more space.

    Consider making these kinds of in-between design tweaks as needed (between any of the breakpoints); they can be applied as general rules in your widget designs.

    Here are a few guidelines to borrow:

      • Establish a content hierarchy on what to hide as the widget shrinks.
      • Use a type scale so the type scales consistently.
      • Create some parameters for image scaling with aspect ratios and cropping techniques.
      • Use component presentation changes. For example, the title bar’s FAB can be reduced to a standard icon.

    Examples of using Widget Canonical Layouts in Figma to design a widget for JetNews

    Last, I’ll swap the app icon, round up all the breakpoint sizes, and provide an option with brand colors.

    Examples of using Widget Canonical Layouts in Figma to design a widget for JetNews

    These are ready to send over to dev! Tune in for the code along to check out how to implement the final widget.

    Go try it out and explore more widgets

    You can find the Widget Canonical Layouts at our new Figma Community Page: figma.com/@androiddesign. Stay tuned for more Android Figma resources.

    Check out the official Android documentation for detailed information and best practices on Widgets on Android and on Widget Quality Tiers, and join us for the rest of Widget Spotlight week!


    This blog post is part of our series: Spotlight Week on Widgets, where we provide resources—blog posts, videos, sample code, and more—all designed to help you design and create widgets. You can read more in the overview of Spotlight Week: Widgets, which will be updated throughout the week.




  • Tune in for our winter episode of #TheAndroidShow on March 13!



    Posted by Anirudh Dewani, Director – Android Developer Relations

    In just a few days, on Thursday, March 13 at 10AM PT, we’ll be dropping our winter episode of #TheAndroidShow, on YouTube and on developer.android.com!

    Mobile World Congress, the annual event in Barcelona where Android device makers show off their latest devices, kicked off yesterday. In our winter episode we’ll take a look at these foldables, tablets, and wearables and tell you what you need to get building.

    Plus, we’ve got some news to share, like a new update for Gemini in Android Studio and some new goodies for game developers ahead of the Game Developers Conference (GDC) in San Francisco later this month. And of course, with the launch of Android XR in December, we’ll also be taking a look at how to get building there. It’s a packed show, and you don’t want to miss it!

    https://www.youtube.com/watch?v=6Nwq0oI41lg

    Some new Android foldables and tablets, at Mobile World Congress

    Mobile World Congress is a big moment for Android, with partners from around the world showing off their latest devices. And if you’re already building adaptive apps, we wanted to share some of the cool new foldables and tablets that our partners released in Barcelona:

      • OPPO: OPPO launched their Find N5, their slim 8.93mm foldable with an 8.12” large screen, making it as compact or expansive as needed.
      • Xiaomi: Xiaomi debuted the Xiaomi Pad 7 series. Xiaomi Pad 7 provides a crystal-clear display and, with the productivity accessories, users get a desktop-like experience with the convenience of a tablet.
      • Lenovo: Lenovo showcased their Yoga Tab Plus, the latest powerful tablet from their lineup designed to empower creativity and productivity.

    These new devices are a great reason to build adaptive apps that scale across screen sizes and device types. Plus, Android 16 removes the ability for apps to restrict orientation and resizability at the platform level, so you’ll want to prepare. To help you get started, the Compose Material 3 adaptive library enables you to quickly and easily create layouts across all screen sizes while reducing the overall development cost.
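
    If you haven’t added it yet, pulling in the adaptive artifacts looks roughly like this (coordinates and versions reflect the stable releases as of this writing, so check for the latest):

    dependencies {
        implementation("androidx.compose.material3.adaptive:adaptive:1.0.0")
        implementation("androidx.compose.material3.adaptive:adaptive-layout:1.0.0")
        implementation("androidx.compose.material3.adaptive:adaptive-navigation:1.0.0")
    }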

    Tune in to #TheAndroidShow: March 13 at 10AM PT

    These new devices are just some of the many things we’ll cover in our winter episode, so you don’t want to miss it! If you watch live on YouTube, we’ll have folks standing by to answer your questions in the comments. See you on March 13 on YouTube or at developer.android.com/events/show!





  • Generate stunning visuals in your Android apps with Imagen 3 via Vertex AI in Firebase



    Posted by Thomas Ezan Sr. – Android Developer Relation Engineer (@lethargicpanda)

    Imagen 3, our most advanced image generation model, is now available through Vertex AI in Firebase, making it even easier to integrate into your Android apps.

    Designed to generate well-composed images with exceptional details, reduced artifacts, and rich lighting, Imagen 3 represents a significant leap forward in image generation capabilities.

    Hot air balloons float over a scenic desert landscape with unique rock formations.

    Image generated by Imagen 3 with prompt: “Shot in the style of DSLR camera with the polarizing filter. A photo of two hot air balloons over the unique rock formations in Cappadocia, Turkey. The colors and patterns on these balloons contrast beautifully against the earthy tones of the landscape below. This shot captures the sense of adventure that comes with enjoying such an experience.”

    A wooden robot stands in a field of yellow flowers, holding a small blue bird on its outstretched hand.

    Image generated by Imagen 3 with prompt: A weathered, wooden mech robot covered in flowering vines stands peacefully in a field of tall wildflowers, with a small blue bird resting on its outstretched hand. Digital cartoon, with warm colors and soft lines. A large cliff with a waterfall looms behind.

    Imagen 3 unlocks exciting new possibilities for Android developers. Generated visuals can adapt to the content of your app, creating a more engaging user experience. For instance, your users can generate custom artwork to enhance their in-app profile. Imagen can also improve your app’s storytelling by bringing its narratives to life with delightful personalized illustrations.

    You can experiment with image prompts in Vertex AI Studio, and learn how to improve your prompts by reviewing the prompt and image attribute guide.

    Get started with Imagen 3

    The integration of Imagen 3 is similar to adding Gemini access via Vertex AI in Firebase. Start by adding the Gradle dependencies to your Android project:

    dependencies {
        implementation(platform("com.google.firebase:firebase-bom:33.10.0"))
    
        implementation("com.google.firebase:firebase-vertexai")
    }
    

    Then, in your Kotlin code, create an ImageModel instance by passing the model name and optionally, a model configuration and safety settings:

    val imageModel = Firebase.vertexAI.imagenModel(
      modelName = "imagen-3.0-generate-001",
      generationConfig = ImagenGenerationConfig(
        imageFormat = ImagenImageFormat.jpeg(compressionQuality = 75),
        addWatermark = true,
        numberOfImages = 1,
        aspectRatio = ImagenAspectRatio.SQUARE_1x1
      ),
      safetySettings = ImagenSafetySettings(
        safetyFilterLevel = ImagenSafetyFilterLevel.BLOCK_LOW_AND_ABOVE,
        personFilterLevel = ImagenPersonFilterLevel.ALLOW_ADULT
      )
    )
    

    Finally generate the image by calling generateImages:

    val imageResponse = imageModel.generateImages(
      prompt = "An astronaut riding a horse"
    )
    

    Retrieve the generated image from the imageResponse and display it as a bitmap as follows:

    val image = imageResponse.images.first()
    val uiImage = image.asBitmap()
    
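    From there you can display the bitmap however your UI layer expects. For example, in Jetpack Compose (an illustrative sketch, not part of the original post):

    // Illustrative composable: `bitmap` is the Bitmap returned by image.asBitmap() above.
    @Composable
    fun GeneratedImage(bitmap: Bitmap) {
        Image(
            bitmap = bitmap.asImageBitmap(),
            contentDescription = "Image generated by Imagen 3"
        )
    }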

    Next steps

    Explore the comprehensive Firebase documentation for detailed API information.

    Access to Imagen 3 using Vertex AI in Firebase is currently in Public Preview, giving you an early opportunity to experiment and innovate. For pricing details, please refer to the Vertex AI in Firebase pricing page.

    Start experimenting with Imagen 3 today! We’re looking forward to seeing how you’ll leverage Imagen 3’s capabilities to create truly unique, immersive and personalized Android experiences.




  • Common media processing operations with Jetpack Media3 Transformer



    Posted by Nevin Mital – Developer Relations Engineer, and Kristina Simakova – Engineering Manager

    Android users have demonstrated an increasing desire to create, personalize, and share video content online, whether to preserve their memories or to make people laugh. As such, media editing is a cornerstone of many engaging Android apps, and historically developers have often relied on external libraries to handle operations such as Trimming and Resizing. While these solutions are powerful, integrating and managing external library dependencies can introduce complexity and lead to challenges with managing performance and quality.

    The Jetpack Media3 Transformer APIs offer a native Android solution that streamlines media editing with fast performance, extensive customizability, and broad device compatibility. In this blog post, we’ll walk through some of the most common editing operations with Transformer and discuss its performance.

    Getting set up with Transformer

    To get started with Transformer, check out our Getting Started documentation for details on how to add the dependency to your project and a basic understanding of the workflow when using Transformer. In a nutshell, you’ll:

      • Create one or many MediaItem instances from your video file(s), then
      • Apply item-specific edits to them by building an EditedMediaItem for each MediaItem,
      • Create a Transformer instance configured with settings applicable to the whole exported video,
      • and finally start the export to save your applied edits to a file.

    Aside: You can also use a CompositionPlayer to preview your edits before exporting, but this is out of scope for this blog post, as this API is still a work in progress. Please stay tuned for a future post!

    Here’s what this looks like in code:

    val mediaItem = MediaItem.Builder().setUri(mediaItemUri).build()
    val editedMediaItem = EditedMediaItem.Builder(mediaItem).build()
    val transformer = 
      Transformer.Builder(context)
        .addListener(/* Add a Transformer.Listener instance here for completion events */)
        .build()
    transformer.start(editedMediaItem, outputFilePath)
    

    Transcoding, Trimming, Muting, and Resizing with the Transformer API

    Let’s now take a look at four of the most common single-asset media editing operations, starting with Transcoding.

    Transcoding is the process of re-encoding an input file into a specified output format. For this example, we’ll request the output to have video in HEVC (H265) and audio in AAC. Starting with the code above, here are the lines that change:

    val transformer = 
      Transformer.Builder(context)
        .addListener(...)
        .setVideoMimeType(MimeTypes.VIDEO_H265)
        .setAudioMimeType(MimeTypes.AUDIO_AAC)
        .build()
    

    Many of you may already be familiar with FFmpeg, a popular open-source library for processing media files, so we’ll also include FFmpeg commands for each example to serve as a helpful reference. Here’s how you can perform the same transcoding with FFmpeg:

    $ ffmpeg -i $inputVideoPath -c:v libx265 -c:a aac $outputFilePath
    

    The next operation we’ll try is Trimming.

    Specifically, we’ll set Transformer up to trim the input video from the 3 second mark to the 8 second mark, resulting in a 5 second output video. Starting again from the code in the “Getting set up” section above, here are the lines that change:

    // Configure the trim operation by adding a ClippingConfiguration to
    // the media item
    val clippingConfiguration =
       MediaItem.ClippingConfiguration.Builder()
         .setStartPositionMs(3000)
         .setEndPositionMs(8000)
         .build()
    val mediaItem =
       MediaItem.Builder()
         .setUri(mediaItemUri)
         .setClippingConfiguration(clippingConfiguration)
         .build()
    
    // Transformer also has a trim optimization feature we can enable.
    // This will prioritize Transmuxing over Transcoding where possible.
    // See more about Transmuxing further down in this post.
    val transformer = 
      Transformer.Builder(context)
        .addListener(...)
        .experimentalSetTrimOptimizationEnabled(true)
        .build()
    

    With FFmpeg:

    $ ffmpeg -ss 00:00:03 -i $inputVideoPath -t 00:00:05 $outputFilePath
    

    Next, we can mute the audio in the exported video file.

    val editedMediaItem = 
      EditedMediaItem.Builder(mediaItem)
        .setRemoveAudio(true)
        .build()
    

    The corresponding FFmpeg command:

    $ ffmpeg -i $inputVideoPath -c copy -an $outputFilePath
    

    And for our final example, we’ll try resizing the input video by scaling it down to half its original height and width.

    val scaleEffect = 
      ScaleAndRotateTransformation.Builder()
        .setScale(0.5f, 0.5f)
        .build()
    val editedMediaItem =
      EditedMediaItem.Builder(mediaItem)
        .setEffects(
          /* audio */ Effects(emptyList(), 
          /* video */ listOf(scaleEffect))
        )
        .build()
    

    An FFmpeg command could look like this:

    $ ffmpeg -i $inputVideoPath -filter:v "scale=w=trunc(iw/4)*2:h=trunc(ih/4)*2" $outputFilePath
    

    Of course, you can also combine these operations to apply multiple edits on the same video, but hopefully these examples serve to demonstrate that the Transformer APIs make configuring these edits simple.
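
    For instance, a single export that trims, mutes, scales, and transcodes the same clip could be assembled from the snippets above like this (an illustrative sketch, not an official sample):

    // Trim from the 3 second mark to the 8 second mark.
    val clippingConfiguration =
        MediaItem.ClippingConfiguration.Builder()
            .setStartPositionMs(3000)
            .setEndPositionMs(8000)
            .build()
    val mediaItem =
        MediaItem.Builder()
            .setUri(mediaItemUri)
            .setClippingConfiguration(clippingConfiguration)
            .build()
    
    // Scale the video to half size and drop the audio track.
    val scaleEffect =
        ScaleAndRotateTransformation.Builder()
            .setScale(0.5f, 0.5f)
            .build()
    val editedMediaItem =
        EditedMediaItem.Builder(mediaItem)
            .setRemoveAudio(true)
            .setEffects(Effects(/* audio */ emptyList(), /* video */ listOf(scaleEffect)))
            .build()
    
    // Transcode the video track to HEVC on export.
    val transformer =
        Transformer.Builder(context)
            .addListener(/* Add a Transformer.Listener instance here for completion events */)
            .setVideoMimeType(MimeTypes.VIDEO_H265)
            .build()
    transformer.start(editedMediaItem, outputFilePath)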

    Transformer API Performance results

    Here are some benchmarking measurements for each of the 4 operations taken with the Stopwatch API, running on a Pixel 9 Pro XL device:

    (Note that performance for operations like these can depend on a variety of factors, such as the current load the device is under, so the numbers below should be taken as rough estimates.)

    Input video format: 10s 720p H264 video with AAC audio

    • Transcoding to H265 video and AAC audio: ~1300ms
    • Trimming video to 00:03-00:08: ~2300ms
    • Muting audio: ~200ms
    • Resizing video to half height and width: ~1200ms

    Input video format: 25s 360p VP8 video with Vorbis audio

    • Transcoding to H265 video and AAC audio: ~3400ms
    • Trimming video to 00:03-00:08: ~1700ms
    • Muting audio: ~1600ms
    • Resizing video to half height and width: ~4800ms

    Input video format: 4s 8k H265 video with AAC audio

    • Transcoding to H265 video and AAC audio: ~2300ms
    • Trimming video to 00:03-00:08: ~1800ms
    • Muting audio: ~2000ms
    • Resizing video to half height and width: ~3700ms

    One technique Transformer uses to speed up editing operations is prioritizing transmuxing for basic video edits where possible. Transmuxing refers to the process of repackaging video streams without re-encoding, which ensures high-quality output and significantly faster processing times.

    When not possible, Transformer falls back to transcoding, a process that involves first decoding video samples into raw data, then re-encoding them for storage in a new container. Here are some of these differences:

    Transmuxing

      • Transformer’s preferred approach when possible – a quick transformation that preserves elementary streams.
      • Only applicable to basic operations, such as rotating, trimming, or container conversion.
      • No quality loss or bitrate change.


    Transcoding

      • Transformer’s fallback approach in cases when Transmuxing isn’t possible – Involves decoding and re-encoding elementary streams.
      • More extensive modifications to the input video are possible.
      • Loss in quality due to re-encoding, but can achieve a desired bitrate target.


    We are continuously implementing further optimizations, such as the recently introduced experimentalSetTrimOptimizationEnabled setting that we used in the Trimming example above.

    A trim is usually performed by re-encoding all the samples in the file, but since encoded media samples are stored chronologically in their container, we can improve efficiency by only re-encoding the group of pictures (GOP) between the start point of the trim and the first keyframes at/after the start point, then stream-copying the rest.

    Since we only decode and encode a fixed portion of any file, the encoding latency is roughly constant, regardless of what the input video duration is. For long videos, this improved latency is dramatic. The optimization relies on being able to stitch part of the input file with newly-encoded output, which means that the encoder’s output format and the input format must be compatible.

    If the optimization fails, Transformer automatically falls back to normal export.

    What’s next?

    As part of Media3, Transformer is a native solution with low integration complexity, is tested on and ensures compatibility with a wide variety of devices, and is customizable to fit your specific needs.

    To dive deeper, you can explore Media3 Transformer documentation, run our sample apps, or learn how to complement your media editing pipeline with Jetpack Media3. We’ve already seen app developers benefit greatly from adopting Transformer, so we encourage you to try it out yourself to streamline your media editing workflows and enhance your app’s performance!




  • Widgets on lock screen: FAQ



    Posted by Tyler Beneke – Product Manager, and Lucas Silva – Software Engineer

    Widgets are now available on your Pixel Tablet lock screens! Lock screen widgets empower users to create a personalized, always-on experience. Whether you want to easily manage smart home devices like lights and thermostats, or build dashboards for quick access and control of vital information, this blog post will answer your key questions about lock screen widgets on Android. Read on to discover when, where, how, and why they’ll be on a lock screen near you.

    Lock screen widgets

    Lock screen widgets in clock-wise order: Clock, Weather, Stocks, Timers, and Google Home App. In the top right is a customization call-to-action.

    Q: When will lock screen widgets be available?

    A: Lock screen widgets will be available in AOSP for tablets and mobile starting with the release after Android 16 (QPR1). This update is scheduled to be pushed to AOSP in late Summer 2025. Lock screen widgets are already available on Pixel Tablets.

    Q: Are there any specific requirements for widgets to be allowed on the lock screen?

    A: No, widgets allowed on the lock screen have the same requirements as any other widgets. Widgets on the lock screen should follow the same quality guidelines as home screen widgets, including sizing and configuration. If a widget launches an activity from the lock screen, users must authenticate to launch the activity, or the activity should declare android:showWhenLocked="true" in its manifest entry.
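
    For reference, a manifest entry for an activity that may be launched directly over the lock screen might look like the following minimal sketch; the activity name is hypothetical:

    <!-- Hypothetical activity: showWhenLocked lets it appear over the lock screen
         without requiring the user to authenticate first. -->
    <activity
        android:name=".TimerDetailActivity"
        android:exported="false"
        android:showWhenLocked="true" />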

    Q: How can I test my widget on the lock screen?

    A: Currently, lock screen widgets can be tested on Pixel Tablet devices. You can enable lock screen widgets and add your widget.

    Q: Which widgets can be displayed in this experience?

    A: All widgets are compatible with the lock screen widget experience. To prioritize user choice and customization, we’ve made all widgets available. For the best experience, please make sure your widget supports dynamic color and dynamic resizing. Lock screen widgets are sized to approximately 4 cells wide by 3 cells tall on the launcher, but exact dimensions vary by device.

    Q: Can my widget opt-out of the experience?

    A: Important: Apps can choose to restrict the use of their widgets on the lock screen using an opt-out API. To opt out, use the widget category “not_keyguard” in your appwidget info XML file. Place this file in an xml-36 resource folder to ensure backwards compatibility.
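
    A minimal appwidget-provider sketch for this opt-out is below; the file name and the other attributes are illustrative, and it assumes the new category is combined with the existing widgetCategory values:

    <!-- res/xml-36/my_widget_info.xml (hypothetical file name) -->
    <appwidget-provider xmlns:android="http://schemas.android.com/apk/res/android"
        android:initialLayout="@layout/my_widget_loading"
        android:resizeMode="horizontal|vertical"
        android:widgetCategory="home_screen|not_keyguard" />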

    Q: Are there any CDD requirements specifically for lock screen widgets?

    A: No, there are no specific CDD requirements solely for lock screen widgets. However, it’s crucial to ensure that any widgets and screensavers that integrate with the framework adhere to the standard CDD requirements for those features.

    Q: Will lock screen widgets be enabled on existing devices?

    A: Yes, lock screen widgets were launched on the Pixel Tablet in 2024. Other device manufacturers may update their devices as well once the feature is available in AOSP.

    Q: Does the device need to be docked to use lock screen widgets?

    A: The mechanism that triggers the lock screen widget experience is customizable by the OEM. For example, OEMs can choose to use charging or docking status as triggers. Third-party OEMs will need to implement their own posture detection if desired.

    Q: Can OEMs set their own default widgets?

    A: Yes! Hardware providers can pre-set and automatically display default widgets.

    Q: Can OEMs customize the user interface for lock screen widgets?

    A: Customization of the lock screen widget user interface by OEMs is not supported in the initial release. All lock screen widgets will have the same developer experience on all devices.

    Lock screen widgets are poised to give your users new ways to interact with your app on their devices. Today you can leverage your existing widget designs and experiences on the lock screen with Pixel Tablets. To learn more about building widgets, please check out our resources on developer.android.com.


    This blog post is part of our series: Spotlight Week on Widgets, where we provide resources—blog posts, videos, sample code, and more—all designed to help you design and create widgets. You can read more in the overview of Spotlight Week: Widgets, which will be updated throughout the week.




  • Deal: This Samsung 70-inch Crystal UHD 4K Smart TV is just $399!



    Samsung 70 inch Class DU7200B Crystal UHD 4K Smart TV

    This offer is available from Amazon. The price is hidden until you add the unit to your cart, so make sure to do that and check that the deal is still available first.

    Are you looking to get a large TV? No longer do you have to pay thousands for a good one. This one is pretty nice and currently only goes for $399.

    The Samsung 70-inch Class DU7200B Crystal UHD 4K Smart TV is pretty huge at 70 inches diagonally. It also has a 4K UHD resolution with a 60Hz refresh rate. Not to mention, it gets some nice enhancements like PurColor and Motion Xcelerator, to make colors more vivid and true to life, as well as avoiding lag and blur. You’ll also get HDR support, Object Tracking Sound Lite, and Q-Symphony.

    Of course, this is also a smart TV. It is powered by Tizen. You’ll get access to plenty of streaming apps. This includes Netflix, Amazon Prime Video, Hulu, Disney Plus, Apple TV, and more. You’ll also get access to Samsung TV Plus, which can stream live TV channels for free.

    As if streaming both on-demand and live TV wasn’t enough, the Samsung 70-inch Class DU7200B Crystal UHD 4K Smart TV even gets access to Samsung’s Gaming Hub. This means you can enjoy your free time playing games without the need for a console. You can access cloud gaming services like Xbox Game Pass, NVIDIA GeForce Now, Amazon Luna, and others.

    Quite the deal, right? The Samsung 70-inch Class DU7200B Crystal UHD 4K Smart TV is huge, has a 4K resolution, and a full smart TV experience with all the bells and whistles. Catch this deal while you can!




  • Health Connect Jetpack SDK is now in beta and new feature updates



    Posted by Brenda Shaw – Health & Home Partner Engineering Technical Writer

    At Google, we are committed to empowering developers as they build exceptional health and fitness experiences. Core to that commitment is Health Connect, an Android platform that allows health and fitness apps to store and share the same on-device data. Android devices running Android 14, or devices that have the pre-installed APK, automatically have Health Connect by default in Settings. For pre-Android 14 devices, Health Connect is available for download from the Play Store.

    We’re excited to announce significant Health Connect updates like the Jetpack SDK Beta, new datatypes and new permissions that will enable richer, more insightful app functionalities.

    Jetpack SDK is now in Beta

    We are excited to announce the beta release of our Jetpack SDK! Since its initial release, we’ve dedicated significant effort to improving data completeness, with a particular focus on enriching the metadata associated with each data point.

    In the latest SDK, we’re introducing two key changes designed to ensure richer metadata and unlock new possibilities for you and your users:

    Make Recording Method Mandatory

    To deliver more accurate and insightful data, the Beta introduces a requirement to specify one of four recording methods when writing data to Health Connect. This ensures increased data clarity, enhanced data analysis, and an improved user experience.

    If your app currently does not set metadata when creating a record:

    Before

    StepsRecord(
        count = 888,
        startTime = START_TIME,
        endTime = END_TIME,
    ) // error: metadata is not provided
    

    After

    StepsRecord(
        count = 888,
        startTime = START_TIME,
        endTime = END_TIME,
        metadata = Metadata.manualEntry()
    )
    

    If your app currently calls Metadata constructor when creating a record:

    Before

    StepsRecord(
        count = 888,
        startTime = START_TIME,
        endTime = END_TIME,
        metadata =
            Metadata(
                clientRecordId = "client id",
                recordingMethod = RECORDING_METHOD_MANUAL_ENTRY,
            ), // error: Metadata constructor not found
    )
    

    After

    StepsRecord(
        count = 888,
        startTime = START_TIME,
        endTime = END_TIME,
        metadata = Metadata.manualEntry(clientRecordId = "client id"),
    )
    

    Make Device Type Mandatory

    You will be required to specify device type when creating a Device object. A device object will be required for Automatically (RECORDING_METHOD_AUTOMATICALLY_RECORDED) or Actively (RECORDING_METHOD_ACTIVELY_RECORDED) recorded data.

    Before

    Device() // error: type not provided
    

    After

    Device(type = Device.Companion.TYPE_PHONE)
    
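    Putting the two requirements together, creating an automatically recorded record might look roughly like this. This is a sketch that assumes an autoRecorded metadata factory analogous to the manualEntry calls above, so verify the exact name against the Beta reference docs:

    StepsRecord(
        count = 888,
        startTime = START_TIME,
        endTime = END_TIME,
        // Assumed factory: automatically recorded data must carry a Device
        // with an explicit type.
        metadata = Metadata.autoRecorded(
            device = Device(type = Device.Companion.TYPE_WATCH)
        ),
    )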

    We believe these updates will significantly improve the quality of data within your applications and empower you to create more insightful user experiences. We encourage you to explore the Jetpack SDK Beta, review the updated Metadata page, and familiarize yourself with these changes.

    New background reads permission

    To enable richer, background-driven health and fitness experiences while maintaining user trust, Health Connect now features a dedicated background reads permission.

    This permission allows your app to access Health Connect data while running in the background, provided the user grants explicit consent. Users retain full control, with the ability to manage or revoke this permission at any time via Health Connect settings.

    Let your app read health data even in the background with the new Background Reads permission. Declare the following permission in your manifest file:

    <manifest>
      <uses-permission android:name="android.permission.health.READ_HEALTH_DATA_IN_BACKGROUND" />
      ...
    </manifest>
    

    Use the Feature Availability API to check if the user has the background read feature available, according to the version of Health Connect they have on their devices.
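
    A sketch of that check is below; the feature and status constant names reflect our reading of the Feature Availability API, so double-check them against the current client documentation:

    val healthConnectClient = HealthConnectClient.getOrCreate(context)
    
    // Only attempt background reads when this device's Health Connect version
    // reports the feature as available.
    val status = healthConnectClient.features
        .getFeatureStatus(HealthConnectFeatures.FEATURE_READ_HEALTH_DATA_IN_BACKGROUND)
    
    if (status == HealthConnectFeatures.FEATURE_STATUS_AVAILABLE) {
        // Safe to schedule background reads of Health Connect data.
    } else {
        // Fall back to reading data only while the app is in the foreground.
    }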

    Allow your app to read historic data

    By default, when granted read permission, your app can access historical data from other apps for the preceding 30 days from the initial permission grant. To enable access to data beyond this 30-day window, Health Connect introduces the PERMISSION_READ_HEALTH_DATA_HISTORY permission. This allows your app to provide new users with a comprehensive overview of their health and wellness history.
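
    As with background reads, the history capability is declared in your manifest. A minimal sketch is below; confirm the exact permission string in the Health Connect documentation:

    <manifest>
      <uses-permission android:name="android.permission.health.READ_HEALTH_DATA_HISTORY" />
      ...
    </manifest>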

    Users are in control of their data with both background reads and history reads. Both capabilities require developers to declare the respective permissions, and users must grant the permission before developers can access their data. Even after granting permission, users have the option of revoking access at any time from Health Connect settings.

    Additional data access and types

    Health Connect now offers expanded data types, enabling developers to build richer user experiences and provide deeper insights. Check out the following new data types:

      • Exercise Routes allows users to share exercise routes with other apps for a seamless synchronized workout. By allowing users to share all routes or one route, their associated exercise activities and maps for their workouts will be synced with the fitness apps of their choice.

    Fitness app asking permission to access exercise route in Health Connect

      • The skin temperature data type measures peripheral body temperature unlocking insights around sleep quality, reproductive health, and the potential onset of illness.
      • Health Connect also provides a planned exercise data type to enable training apps to write training plans and workout apps to read training plans. Recorded exercises (workouts) can be read back for personalized performance analysis to help users achieve their training goals. Access granular workout data, including sessions, blocks, and steps, for comprehensive performance analysis and personalized feedback.

    These new data types empower developers to create more connected and insightful health and fitness applications, providing users with a holistic view of their well-being.

    To learn more about all new APIs and bug fixes, check out the full release notes.

    Get started with the Health Connect Jetpack SDK

    Whether you are just getting started with Health Connect or are looking to implement the latest features, there are many ways to learn more and have your voice heard.

      • Subscribe to our newsletter: Stay up-to-date with the latest news, announcements, and resources from Google Health and Fitness. Subscribe to our Health and Fitness Google Developer Newsletter and get the latest updates delivered straight to your inbox.
      • Check out our Health Connect developer guide: The Health and Fitness Developer Center is your one-stop-shop for building health and fitness apps on Android – including a robust guide for getting started with Health Connect.
      • Report an issue: Encountered a bug or technical issue? Report it directly to our team through the Issue Tracker so we can investigate and resolve it. You can also request a feature or provide feedback with Issue Tracker.

    We can’t wait to see what you create!




  • Jetpack WindowManager 1.4 is stable



    Posted by Xiaodao Wu – Developer Relations Engineer

    Jetpack WindowManager keeps getting better. WindowManager gives you tools to build adaptive apps that work seamlessly across all kinds of large screen devices. Version 1.4, which is stable now, introduces new features that make multi-window experiences even more powerful and flexible. While Jetpack Compose is still the best way to create app layouts for different screen sizes, 1.4 makes some big improvements to activity embedding, including activity stack pinning, pane expansion, and dialog full-screen dim. Multi-activity apps can easily take advantage of all these great features.

    WindowManager 1.4 introduces a range of enhancements. Here are some of the highlights.

    WindowSizeClass

    We’ve updated the WindowSizeClass API to support custom values. We changed the API shape to make it easy and extensible to support custom values and add new values in the future. The high level changes are as follows:

      • Opened the constructor to take in minWidthDp and minHeightDp parameters so you can create your own window size classes
      • Added convenience methods for checking breakpoint validity
      • Deprecated WindowWidthSizeClass and WindowHeightSizeClass in favor of WindowSizeClass#isWidthAtLeastBreakpoint() and WindowSizeClass#isHeightAtLeastBreakpoint() respectively

    Here’s a migration example:

    // old 
    
    val sizeClass = WindowSizeClass.compute(widthDp, heightDp)
    when (sizeClass.widthSizeClass) {
      COMPACT -> doCompact()
      MEDIUM -> doMedium()
      EXPANDED -> doExpanded()
      else -> doDefault()
    }
    
    // new
    val sizeClass = WindowSizeClass.BREAKPOINTS_V1
                                   .computeWindowSizeClass(widthDp, heightDp)
    
    when {
      sizeClass.isWidthAtLeastBreakpoint(WIDTH_DP_EXPANDED_LOWER_BOUND) -> {
        doExpanded()
      }
      sizeClass.isWidthAtLeastBreakpoint(WIDTH_DP_MEDIUM_LOWER_BOUND) -> {
        doMedium()
      }
      else -> {
        doCompact()
      }
    }
    

    Some things to note in the new API:

      • The order of the when branches should go from largest to smallest to support custom values from developers or new values in the future
      • The default branch should be treated as the smallest window size class
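
    Since the constructor now accepts minWidthDp and minHeightDp, you can also fold a custom breakpoint into the same when block. Here is a minimal sketch under that assumption; the 1200dp/900dp values and the doLargeDesktop() handler are hypothetical:

    // Custom breakpoint checked before the built-in ones, keeping the
    // when branches ordered from largest to smallest.
    val largeDesktopSizeClass = WindowSizeClass(minWidthDp = 1200, minHeightDp = 900)
    
    val sizeClass = WindowSizeClass.BREAKPOINTS_V1
                                   .computeWindowSizeClass(widthDp, heightDp)
    
    when {
      sizeClass.isWidthAtLeastBreakpoint(largeDesktopSizeClass.minWidthDp) -> doLargeDesktop()
      sizeClass.isWidthAtLeastBreakpoint(WIDTH_DP_EXPANDED_LOWER_BOUND) -> doExpanded()
      sizeClass.isWidthAtLeastBreakpoint(WIDTH_DP_MEDIUM_LOWER_BOUND) -> doMedium()
      else -> doCompact()
    }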

    Activity embedding

    Activity stack pinning

    Activity stack pinning provides a way to keep an activity stack always on screen, no matter what else is happening in your app. This new feature lets you pin an activity stack to a specific window, so the top activity stays visible even when the user navigates to other parts of the app in a different window. This is perfect for things like live chats or video players that you want to keep on screen while users explore other content.

    private fun pinActivityStackExample(taskId: Int) {
     val splitAttributes: SplitAttributes = SplitAttributes.Builder()
       .setSplitType(SplitAttributes.SplitType.ratio(0.66f))
       .setLayoutDirection(SplitAttributes.LayoutDirection.LEFT_TO_RIGHT)
       .build()
    
     val pinSplitRule = SplitPinRule.Builder()
       .setDefaultSplitAttributes(splitAttributes)
       .build()
    
     SplitController.getInstance(applicationContext).pinTopActivityStack(taskId, pinSplitRule)
    }
    

    Pane expansion

    The new pane expansion feature, also known as interactive divider, lets you create a visual separation between two activities in split-screen mode. You can make the pane divider draggable so users can resize the panes – and the activities in the panes – on the fly. This gives users control over how they want to view the app’s content.

    val splitAttributesBuilder: SplitAttributes.Builder = SplitAttributes.Builder()
       .setSplitType(SplitAttributes.SplitType.ratio(0.33f))
       .setLayoutDirection(SplitAttributes.LayoutDirection.LEFT_TO_RIGHT)
    
    if (WindowSdkExtensions.getInstance().extensionVersion >= 6) {
       splitAttributesBuilder.setDividerAttributes(
           DividerAttributes.DraggableDividerAttributes.Builder()
               .setColor(getColor(context, R.color.divider_color))
               .setWidthDp(4)
               .setDragRange(
                   DividerAttributes.DragRange.DRAG_RANGE_SYSTEM_DEFAULT)
               .build()
       )
    }
    val splitAttributes: SplitAttributes = splitAttributesBuilder.build()
    

    Dialog full-screen dim

    WindowManager 1.4 gives you more control over how dialogs dim the background. With dialog full-screen dim, you can choose to dim just the container where the dialog appears or the entire task window for a unified UI experience. The entire app window dims by default when a dialog opens (see EmbeddingConfiguration.DimAreaBehavior.ON_TASK). To dim only the container of the activity that opened the dialog, use EmbeddingConfiguration.DimAreaBehavior.ON_ACTIVITY_STACK. This gives you more flexibility in designing dialogs and makes for a smoother, more coherent user experience. Temu is among the first developers to integrate this feature; the full-screen dialog dim has reduced invalid screen touches by about 5%.
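
    Applying the activity-stack-only dim might look roughly like the sketch below; the builder and setter names are our best reading of the 1.4 API surface, so confirm them against the reference documentation:

    // Dim only the embedded activity stack that opened the dialog rather
    // than the whole task window.
    ActivityEmbeddingController.getInstance(context)
        .setEmbeddingConfiguration(
            EmbeddingConfiguration.Builder()
                .setDimAreaBehavior(EmbeddingConfiguration.DimAreaBehavior.ON_ACTIVITY_STACK)
                .build()
        )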

    Customised shopping cart reminder with dialog full-screen dim in the Temu app

    Customised shopping cart reminder with dialog full-screen dim in Temu.

    Enhanced posture support

    WindowManager 1.4 makes building apps that work flawlessly on foldables straightforward by providing more information about the physical capabilities of the device. The new WindowInfoTracker#supportedPostures API lets you know if a device supports tabletop mode, so you can optimize your app’s layout and features accordingly.

    val currentSdkVersion = WindowSdkExtensions.getInstance().extensionVersion
    val message =
    if (currentSdkVersion >= 6) {
      val supportedPostures = WindowInfoTracker.getOrCreate(LocalContext.current).supportedPostures
      buildString {
        append(supportedPostures.isNotEmpty())
        if (supportedPostures.isNotEmpty()) {
          append(" ")
          append(
          supportedPostures.joinToString(
          separator = ",", prefix = "(", postfix = ")"))
        }
      }
    } else {
      "N/A (WindowSDK version 6 is needed, current version is $currentSdkVersion)"
    }
    

    Other API changes

    WindowManager 1.4 includes several API changes and additions to support the new features. Notable changes include:

      • Stable and no longer experimental APIs:
        • ActivityEmbeddingController#invalidateVisibleActivityStacks
        • ActivityEmbeddingController#getActivityStack
        • SplitController#updateSplitAttributes
      • API added to set activity embedding animation background:
        • SplitAttributes.Builder#setAnimationParams
      • API to get updated WindowMetrics information:
        • ActivityEmbeddingController#embeddedActivityWindowInfo
      • API to finish all activities in an activity stack:
        • ActivityEmbeddingController#finishActivityStack

    How to get started

    To start using Jetpack WindowManager 1.4 in your Android projects, update your app dependencies in build.gradle.kts to the latest stable version:

    dependencies {
        implementation("androidx.window:window:1.4.0-rc01")
        ...  
        // or, if you're using the WindowManager testing library:
        testImplementation("androidx.window:window-testing:1.4.0-rc01")
    }
    

    Happy coding!


