Category: Android News

  • Building delightful Android camera and media experiences



    Posted by Donovan McMurray, Mayuri Khinvasara Khabya, Mozart Louis, and Nevin Mital – Developer Relations Engineers

    Hello Android Developers!

    We are the Android Developer Relations Camera & Media team, and we’re excited to bring you something a little different today. Over the past several months, we’ve been hard at work writing sample code and building demos that showcase how to take advantage of all the great potential Android offers for building delightful user experiences.

    Some of these efforts are available for you to explore now, and some you’ll see later throughout the year, but for this blog post we thought we’d share some of the learnings we gathered while going through this exercise.

    Grab your favorite Android plush or rubber duck, and read on to see what we’ve been up to!

    Future-proof your app with Jetpack

    Nevin Mital

    One of our focuses for the past several years has been improving the developer tools available for video editing on Android. This led to the creation of the Jetpack Media3 Transformer APIs, which offer solutions for both single-asset and multi-asset video editing preview and export. Today, I’d like to focus on the Composition demo app, a sample app that showcases some of the multi-asset editing experiences that Transformer enables.

    I started by adding a custom video compositor to demonstrate how you can arrange input video sequences into different layouts for your final composition, such as a 2×2 grid or a picture-in-picture overlay. You can customize this by implementing a VideoCompositorSettings and overriding the getOverlaySettings method. This object can then be set when building your Composition with setVideoCompositorSettings.

    Here is an example for the 2×2 grid layout:

    object : VideoCompositorSettings {
      ...
    
      override fun getOverlaySettings(inputId: Int, presentationTimeUs: Long): OverlaySettings {
        return when (inputId) {
          0 -> { // First sequence is placed in the top left
            StaticOverlaySettings.Builder()
              .setScale(0.5f, 0.5f)
              .setOverlayFrameAnchor(0f, 0f) // Middle of overlay
              .setBackgroundFrameAnchor(-0.5f, 0.5f) // Top-left section of background
              .build()
          }
    
          1 -> { // Second sequence is placed in the top right
            StaticOverlaySettings.Builder()
              .setScale(0.5f, 0.5f)
              .setOverlayFrameAnchor(0f, 0f) // Middle of overlay
              .setBackgroundFrameAnchor(0.5f, 0.5f) // Top-right section of background
              .build()
          }
    
          2 -> { // Third sequence is placed in the bottom left
            StaticOverlaySettings.Builder()
              .setScale(0.5f, 0.5f)
              .setOverlayFrameAnchor(0f, 0f) // Middle of overlay
              .setBackgroundFrameAnchor(-0.5f, -0.5f) // Bottom-left section of background
              .build()
          }
    
          3 -> { // Fourth sequence is placed in the bottom right
            StaticOverlaySettings.Builder()
              .setScale(0.5f, 0.5f)
              .setOverlayFrameAnchor(0f, 0f) // Middle of overlay
              .setBackgroundFrameAnchor(0.5f, -0.5f) // Bottom-right section of background
              .build()
          }
    
          else -> {
            StaticOverlaySettings.Builder().build()
          }
        }
      }
    }
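
    To wire this up, the settings object is passed to setVideoCompositorSettings when assembling the Composition. Here's a minimal sketch, assuming the input sequences have already been built; the variable names (videoSequences, gridCompositorSettings, transformer, outputFilePath) are illustrative:

    val composition = Composition.Builder(videoSequences) // List<EditedMediaItemSequence> built elsewhere
      .setVideoCompositorSettings(gridCompositorSettings) // the VideoCompositorSettings shown above
      .build()

    // The same Composition can then be previewed with CompositionPlayer
    // or exported with Transformer, for example:
    transformer.start(composition, outputFilePath)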
    

    Since getOverlaySettings also provides a presentation time, we can even animate the layout, such as in this picture-in-picture example:

    moving image of picture in picture on a mobile device
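
    For illustration, here's a rough sketch of how a time-based layout like that might be expressed; the five-second slide duration and anchor values are assumptions for this example, not the demo app's actual values:

    object : VideoCompositorSettings {
      ...

      override fun getOverlaySettings(inputId: Int, presentationTimeUs: Long): OverlaySettings {
        return when (inputId) {
          0 -> StaticOverlaySettings.Builder().build() // Full-size background video

          1 -> { // Small overlay that slides along the bottom during the first 5 seconds
            val progress = (presentationTimeUs / 5_000_000f).coerceIn(0f, 1f)
            StaticOverlaySettings.Builder()
              .setScale(0.35f, 0.35f)
              .setOverlayFrameAnchor(0f, 0f) // Middle of overlay
              .setBackgroundFrameAnchor(-0.6f + 1.2f * progress, -0.6f) // Bottom edge, moving left to right
              .build()
          }

          else -> StaticOverlaySettings.Builder().build()
        }
      }
    }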

    Next, I spent some time migrating the Composition demo app to use Jetpack Compose. With complicated editing flows, it helps to take advantage of as much screen space as is available, so I decided to use the supporting pane adaptive layout. This way, the user can fine-tune their video creation on the preview screen, and the export options are shown alongside it only when a larger display has room for both. Below, you can see how the UI dynamically adapts to the screen size on a foldable device when switching from the outer screen to the inner screen and vice versa.
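
    As a rough sketch of the pattern (not the demo app's actual code), a supporting pane layout can be wired up with the Material 3 adaptive library roughly like this; exact composable parameters vary between library versions, and the pane contents here are placeholders:

    @OptIn(ExperimentalMaterial3AdaptiveApi::class)
    @Composable
    fun EditorScreen() {
      val navigator = rememberSupportingPaneScaffoldNavigator()
      SupportingPaneScaffold(
        directive = navigator.scaffoldDirective,
        value = navigator.scaffoldValue,
        mainPane = { AnimatedPane { /* Editing preview goes here */ } },
        supportingPane = { AnimatedPane { /* Export options go here */ } },
      )
    }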

    moving image of supporting pane adaptive layout

    What’s great is that by using Jetpack Media3 and Jetpack Compose, these features also carry over seamlessly to other devices and form factors, such as the new Android XR platform. Right out of the box, I was able to run the demo app in Home Space with the 2D UI I already had. And with some small updates, I was even able to adapt the UI specifically for XR with features such as multiple panels and, to take further advantage of the extra space, an Orbiter with playback controls for the editing preview.

    moving image of sequential composition preview in Android XR

    Orbiter(
      position = OrbiterEdge.Bottom,
      offset = EdgeOffset.inner(offset = MaterialTheme.spacing.standard),
      alignment = Alignment.CenterHorizontally,
      shape = SpatialRoundedCornerShape(CornerSize(28.dp))
    ) {
      Row (horizontalArrangement = Arrangement.spacedBy(MaterialTheme.spacing.mini)) {
        // Playback control for rewinding by 10 seconds
        FilledTonalIconButton({ viewModel.seekBack(10_000L) }) {
          Icon(
            painter = painterResource(id = R.drawable.rewind_10),
            contentDescription = "Rewind by 10 seconds"
          )
        }
        // Playback control for play/pause
        FilledTonalIconButton({ viewModel.togglePlay() }) {
          Icon(
            painter = painterResource(id = R.drawable.rounded_play_pause_24),
            contentDescription = 
                if(viewModel.compositionPlayer.isPlaying) {
                    "Pause preview playback"
                } else {
                    "Resume preview playback"
                }
          )
        }
        // Playback control for forwarding by 10 seconds
        FilledTonalIconButton({ viewModel.seekForward(10_000L) }) {
          Icon(
            painter = painterResource(id = R.drawable.forward_10),
            contentDescription = "Forward by 10 seconds"
          )
        }
      }
    }
    

    Jetpack libraries unlock premium functionality incrementally

    Donovan McMurray

    Not only do our Jetpack libraries have you covered by working consistently across existing and future devices, but they also open the doors to advanced functionality and custom behaviors to support all types of app experiences. In a nutshell, our Jetpack libraries aim to make the common case very accessible and easy, while providing hooks for adding more custom features later.

    We’ve worked with many teams who have switched to a Jetpack library, built the basics, added their critical custom features, and actually saved developer time compared to their estimates. Let’s take a look at CameraX and how this incremental development can supercharge your process.

    // Set up CameraX app with preview and image capture.
    // Note: setting the resolution selector is optional, and if not set,
    // then a default 4:3 ratio will be used.
    val aspectRatioStrategy = AspectRatioStrategy(
      AspectRatio.RATIO_16_9, AspectRatioStrategy.FALLBACK_RULE_NONE)
    var resolutionSelector = ResolutionSelector.Builder()
      .setAspectRatioStrategy(aspectRatioStrategy)
      .build()
    
    private val previewUseCase = Preview.Builder()
      .setResolutionSelector(resolutionSelector)
      .build()
    private val imageCaptureUseCase = ImageCapture.Builder()
      .setResolutionSelector(resolutionSelector)
      .setCaptureMode(ImageCapture.CAPTURE_MODE_MINIMIZE_LATENCY)
      .build()
    
    val useCaseGroupBuilder = UseCaseGroup.Builder()
      .addUseCase(previewUseCase)
      .addUseCase(imageCaptureUseCase)
    
    cameraProvider.unbindAll()
    
    camera = cameraProvider.bindToLifecycle(
      this,  // lifecycleOwner
      CameraSelector.DEFAULT_BACK_CAMERA,
      useCaseGroupBuilder.build(),
    )
    

    After setting up the basic structure for CameraX, you can set up a simple UI with a camera preview and a shutter button. You can use the CameraX Viewfinder composable which displays a Preview stream from a CameraX SurfaceRequest.

    // Create preview
    Box(
      Modifier
        .background(Color.Black)
        .fillMaxSize(),
      contentAlignment = Alignment.Center,
    ) {
      surfaceRequest?.let { request ->
        CameraXViewfinder(
          modifier = Modifier.fillMaxSize(),
          implementationMode = ImplementationMode.EXTERNAL,
          surfaceRequest = request,
        )
      }
      Button(
        onClick = onPhotoCapture,
        shape = CircleShape,
        colors = ButtonDefaults.buttonColors(containerColor = Color.White),
        modifier = Modifier
          .height(75.dp)
          .width(75.dp),
      ) {} // Content left empty: the white circle itself serves as the shutter button
    }
    
    fun onPhotoCapture() {
      // Not shown: defining the ImageCapture.OutputFileOptions for
      // your saved images
      imageCaptureUseCase.takePicture(
        outputOptions,
        ContextCompat.getMainExecutor(context),
        object : ImageCapture.OnImageSavedCallback {
          override fun onError(exc: ImageCaptureException) {
            val msg = "Photo capture failed."
            Toast.makeText(context, msg, Toast.LENGTH_SHORT).show()
          }
    
          override fun onImageSaved(output: ImageCapture.OutputFileResults) {
            val savedUri = output.savedUri
            if (savedUri != null) {
              // Do something with the savedUri if needed
            } else {
              val msg = "Photo capture failed."
              Toast.makeText(context, msg, Toast.LENGTH_SHORT).show()
            }
          }
        },
      )
    }
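
    For context, the surfaceRequest observed by the viewfinder above comes from the Preview use case. Here's one possible way to expose it from a state holder; the MutableStateFlow-based ViewModel shown here is an assumption for this sketch, not part of the published sample:

    // In a ViewModel (assumed), publish the latest SurfaceRequest emitted by the Preview use case.
    // Uses kotlinx.coroutines.flow and androidx.camera.core.SurfaceRequest.
    private val _surfaceRequest = MutableStateFlow<SurfaceRequest?>(null)
    val surfaceRequest: StateFlow<SurfaceRequest?> = _surfaceRequest.asStateFlow()

    init {
      previewUseCase.setSurfaceProvider { newSurfaceRequest ->
        _surfaceRequest.value = newSurfaceRequest
      }
    }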
    

    You’re already on track for a solid camera experience, but what if you wanted to add some extra features for your users? Adding filters and effects are easy with CameraX’s Media3 effect integration, which is one of the new features introduced in CameraX 1.4.0.

    Here’s how simple it is to add a black and white filter from Media3’s built-in effects.

    val media3Effect = Media3Effect(
      application,
      PREVIEW or IMAGE_CAPTURE,
      ContextCompat.getMainExecutor(application),
      {},
    )
    media3Effect.setEffects(listOf(RgbFilter.createGrayscaleFilter()))
    useCaseGroupBuilder.addEffect(media3Effect)
    

    The Media3Effect object takes a Context, a bitwise representation of the use case constants for targeted UseCases, an Executor, and an error listener. Then you set the list of effects you want to apply. Finally, you add the effect to the useCaseGroupBuilder we defined earlier.

    side-by-side images of the camera preview without and with the grayscale filter

    (Left) Our camera app with no filter applied. 
     (Right) Our camera app after the createGrayscaleFilter was added.

    There are many other built-in effects you can add, too! See the Media3 Effect documentation for more options, like brightness, color lookup tables (LUTs), contrast, blur, and many other effects.
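
    For instance, several effects can be combined in a single setEffects call; the specific values below are only illustrative:

    media3Effect.setEffects(
      listOf(
        Brightness(0.1f),                   // Slightly brighten each frame
        Contrast(0.3f),                     // Then boost contrast
        RgbFilter.createGrayscaleFilter(),  // Finally convert to black and white
      )
    )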

    To take your effects to yet another level, it’s also possible to define your own effects by implementing the GlEffect interface, which acts as a factory of GlShaderPrograms. You can override a BaseGlShaderProgram’s drawFrame() method to create a custom effect of your own. A minimal implementation should tell your graphics library to use its shader program, bind the shader program’s vertex attributes and uniforms, and issue a drawing command.
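
    To make that concrete, here's a rough skeleton of what such a custom effect could look like. It follows the Media3 GlEffect/BaseGlShaderProgram APIs, but the shader sources, constructor arguments, and class names are assumptions for illustration rather than a drop-in implementation:

    // Hypothetical custom effect; GLSL sources are omitted for brevity.
    class TintEffect : GlEffect {
      override fun toGlShaderProgram(context: Context, useHdr: Boolean): GlShaderProgram =
        TintShaderProgram()
    }

    private class TintShaderProgram : BaseGlShaderProgram(
      /* useHighPrecisionColorProcessing= */ false,
      /* texturePoolCapacity= */ 1,
    ) {
      // Compiles the (omitted) vertex and fragment shader sources.
      private val glProgram = GlProgram(VERTEX_SHADER_GLSL, FRAGMENT_SHADER_GLSL)

      override fun configure(inputWidth: Int, inputHeight: Int): Size =
        Size(inputWidth, inputHeight) // Output frames keep the input size

      override fun drawFrame(inputTexId: Int, presentationTimeUs: Long) {
        glProgram.use()                                       // Use our shader program
        glProgram.setSamplerTexIdUniform("uTexSampler", inputTexId, /* texUnitIndex= */ 0)
        glProgram.bindAttributesAndUniforms()                 // Bind vertex attributes and uniforms
        GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, /* first= */ 0, /* count= */ 4) // Issue the draw command
      }
    }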

    Jetpack libraries meet you where you are and your app’s needs. Whether that be a simple, fast-to-implement, and reliable implementation, or custom functionality that helps the critical user journeys in your app stand out from the rest, Jetpack has you covered!

    Jetpack offers a foundation for innovative AI Features

    Mayuri Khinvasara Khabya

    Just as Donovan demonstrated with CameraX for capture, Jetpack Media3 provides a reliable, customizable, and feature-rich solution for playback with ExoPlayer. The AI Samples app builds on this foundation to delight users with helpful and enriching AI-driven additions.

    In today’s rapidly evolving digital landscape, users expect more from their media applications. Simply playing videos is no longer enough. Developers are constantly seeking ways to enhance user experiences and provide deeper engagement. Leveraging the power of Artificial Intelligence (AI), particularly when built upon robust media frameworks like Media3, offers exciting opportunities. Let’s take a look at some of the ways we can transform the way users interact with video content:

      • Empowering Video Understanding: The core idea is to use AI, specifically multimodal models like the Gemini Flash and Pro models, to analyze video content and extract meaningful information. This goes beyond simply playing a video; it’s about understanding what’s in the video and making that information readily accessible to the user.
      • Actionable Insights: The goal is to transform raw video into summaries, insights, and interactive experiences. This allows users to quickly grasp the content of a video and find specific information they need or learn something new!
      • Accessibility and Engagement: AI helps make videos more accessible by providing features like summaries, translations, and descriptions. It also aims to increase user engagement through interactive features.

    A Glimpse into AI-Powered Video Journeys

    The following example demonstrates potential video journeys enhanced by artificial intelligence. This sample integrates several components, such as ExoPlayer and Transformer from Media3; the Firebase SDK (leveraging Vertex AI on Android); and Jetpack Compose, ViewModel, and StateFlow. The code will be available soon on GitHub.

    moving images of examples of AI-powered video journeys

    (Left) Video summarization  
     (Right) Thumbnail timestamps and HDR frame extraction

    There are two experiences in particular that I’d like to highlight:

      • HDR Thumbnails: AI can help identify key moments in the video that could make for good thumbnails. With those timestamps, you can use the new ExperimentalFrameExtractor API from Media3 to extract HDR thumbnails from videos, providing richer visual previews.
      • Text-to-Speech: AI can be used to convert textual information derived from the video into spoken audio, enhancing accessibility. On Android, you can also choose to play audio in different languages and dialects, enhancing personalization for a wider audience (see the sketch after this list).
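
    As a concrete illustration of the second point, here's a minimal sketch that uses the platform TextToSpeech API to read a generated summary aloud in a chosen locale; the class name and summary string are placeholders for this example:

    // Uses android.content.Context, android.speech.tts.TextToSpeech, and java.util.Locale.
    class SummarySpeaker(context: Context) {
      private var ready = false
      private val tts = TextToSpeech(context) { status ->
        ready = (status == TextToSpeech.SUCCESS)
      }

      fun speakSummary(summary: String, locale: Locale = Locale.US) {
        if (!ready) return
        tts.setLanguage(locale) // Choose the language/dialect for your audience
        tts.speak(summary, TextToSpeech.QUEUE_FLUSH, /* params= */ null, /* utteranceId= */ "summary")
      }

      fun release() = tts.shutdown()
    }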

    Using the right AI solution

    Currently, only cloud models support video inputs, so we went ahead with a cloud-based solution. Integrating Firebase in our sample empowers the app to:

      • Generate real-time, concise video summaries automatically.
      • Produce comprehensive content metadata, including chapter markers and relevant hashtags.
      • Facilitate seamless multilingual content translation.

    So how do you actually interact with a video and work with Gemini to process it? First, send your video as an input parameter to your prompt:

    val promptData =
    "Summarize this video in the form of top 3-4 takeaways only. Write in the form of bullet points. Don't assume if you don't know"
    
    val generativeModel = Firebase.vertexAI.generativeModel("gemini-2.0-flash")
    _outputText.value = OutputTextState.Loading
    
    viewModelScope.launch(Dispatchers.IO) {
        try {
            val requestContent = content {
                fileData(videoSource.toString(), "video/mp4")
            text(promptData)
            }
            val outputStringBuilder = StringBuilder()
    
            generativeModel.generateContentStream(requestContent).collect { response ->
                outputStringBuilder.append(response.text)
                _outputText.value = OutputTextState.Success(outputStringBuilder.toString())
            }
    
            _outputText.value = OutputTextState.Success(outputStringBuilder.toString())
    
        } catch (error: Exception) {
            _outputText.value = error.localizedMessage?.let { OutputTextState.Error(it) }
        }
    }
    

    Notice there are two key components here:

      • FileData: This component integrates a video into the query.
      • Prompt: This asks the user what specific assistance they need from AI in relation to the provided video.

    Of course, you can fine-tune your prompt to your requirements and get responses accordingly.

    In conclusion, by harnessing the capabilities of Jetpack Media3 and integrating AI solutions like Gemini through Firebase, you can significantly elevate video experiences on Android. This combination enables advanced features like video summaries, enriched metadata, and seamless multilingual translations, ultimately enhancing accessibility and engagement for users. As these technologies continue to evolve, the potential for creating even more dynamic and intelligent video applications is vast.

    Go above-and-beyond with specialized APIs

    Mozart Louis

    Android 16 introduces a new audio PCM offload mode that can reduce the power consumption of audio playback in your app, leading to longer playback time and increased user engagement. Eliminating power anxiety greatly enhances the user experience.

    Oboe is Android’s premier audio API, which developers can use to create high-performance, low-latency audio apps. With Android 16, a new feature called native PCM offload playback is being added to the Android NDK.

    Offload playback helps save battery life when playing audio. It works by sending a large chunk of audio to a special part of the device’s hardware (a DSP). This allows the CPU of the device to go into a low-power state while the DSP handles playing the sound. This works with uncompressed audio (like PCM) and compressed audio (like MP3 or AAC), where the DSP also takes care of decoding.

    This can result in significant power saving while playing back audio and is perfect for applications that play audio in the background or while the screen is off (think audiobooks, podcasts, music etc).

    We created the sample app PowerPlay to demonstrate how to implement these features using the latest NDK version, C++ and Jetpack Compose.

    Here are the most important parts!

    The first order of business is to ensure the device supports audio offload for the audio attributes you need. In the example below, we check whether the device supports offloading a stereo, float PCM stream with a sample rate of 48,000 Hz.

    val format = AudioFormat.Builder()
        .setEncoding(AudioFormat.ENCODING_PCM_FLOAT)
        .setSampleRate(48000)
        .setChannelMask(AudioFormat.CHANNEL_OUT_STEREO)
        .build()

    val attributes = AudioAttributes.Builder()
        .setContentType(AudioAttributes.CONTENT_TYPE_MUSIC)
        .setUsage(AudioAttributes.USAGE_MEDIA)
        .build()

    val isOffloadSupported =
        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.Q) {
            AudioManager.isOffloadedPlaybackSupported(format, attributes)
        } else {
            false
        }

    if (isOffloadSupported) {
        player.initializeAudio(PerformanceMode::POWER_SAVING_OFFLOADED)
    }

    

    Once we know the device supports audio offload, we can confidently set the Oboe audio streams’ performance mode to the new performance mode option, PerformanceMode::POWER_SAVING_OFFLOADED.

    // Create an audio stream
    AudioStreamBuilder builder;
    builder.setChannelCount(mChannelCount);
    builder.setDataCallback(mDataCallback);
    builder.setFormat(AudioFormat::Float);
    builder.setSampleRate(48000);

    builder.setErrorCallback(mErrorCallback);
    builder.setPresentationCallback(mPresentationCallback);
    builder.setPerformanceMode(PerformanceMode::POWER_SAVING_OFFLOADED);
    builder.setFramesPerDataCallback(128);
    builder.setSharingMode(SharingMode::Exclusive);
    builder.setSampleRateConversionQuality(SampleRateConversionQuality::Medium);
    Result result = builder.openStream(mAudioStream);
    

    Now when audio is played back, it will be offloaded to the DSP, helping save power during playback.

    There is more to this feature, which will be covered in a future blog post fully detailing all of the newly available APIs that will help you optimize your audio playback experience!

    What’s next

    Of course, we were only able to share the tip of the iceberg with you here, so to dive deeper into the samples, check out the following links:

    Hopefully these examples have inspired you to explore what new and fascinating experiences you can build on Android. Tune in to our session at Google I/O in a couple weeks to learn even more about use-cases supported by solutions like Jetpack CameraX and Jetpack Media3!



    Source link

  • Spotify announces meaningful new features for all users

    Spotify announces meaningful new features for all users


    Spotify stock photo 1

    Edgar Cervantes / Android Authority

    TL;DR

    • Spotify has rolled out new features for both Premium and free users.
    • Premium users get a revamped Queue, a more powerful Hide button, and a new 30-day Snooze feature.
    • Meanwhile, the Spotify app now surfaces new “Add,” “Sort,” and “Edit” tools at the top of playlists.
    • There’s also a new Create button for quick access to several features.

    Spotify has just rolled out a series of meaningful updates aimed at giving users, both Premium and free, greater control over their listening experience. These updates, some of which are still experimental, enhance playlist management, track selection, and social collaboration.

    What’s new for Spotify Premium users?

    Spotify Premium subscribers are getting several upgraded tools, starting with a revamped Queue. Accessed via the three-line icon at the bottom of the Now Playing screen, the updated Queue now includes new controls like Shuffle, Smart Shuffle (which suggests personalized tracks), Repeat, and Sleep Timer. Spotify will also show you suggested songs after your queued tracks, helping you decide what to listen to next. If you’d rather not see these suggestions, you can disable them by turning off Autoplay and Smart Shuffle.

    Another enhancement for Premium users is a more powerful Hide button. Tapping it now removes a song from that playlist across all your devices. If you’d prefer a temporary break from a track, Spotify is also testing a new “30-day Snooze” feature. This experimental option removes the song from your recommendations for a month and may roll out to all users in the future.

    New features for all Spotify users

    In addition to Premium-specific updates, Spotify is introducing broader improvements across its app. All users will now see new “Add,” “Sort,” and “Edit” tools at the top of their playlists. These tools make it easier to customize tracklists, change playlist titles, design custom cover art, and reorder songs to your liking.

    In selected countries, including the US, you can now turn your Liked Songs into a playlist. Simply filter them by genre and tap “Turn into playlist.”

    The mobile app is also getting a new Create button (+) in the bottom-right corner. This gives all users quick access to playlist creation, collaboration features, and Spotify’s social listening tool, Blend. Premium subscribers get bonus features here, including direct access to Jam for real-time group listening and AI Playlist, which builds playlists with the help of AI.

    Lastly, Spotify has slightly reorganized its navigation. Your Library now appears as the third tab at the bottom of the screen.



    Source link

  • Garmin users should start bracing for more subscription-only features

    Garmin users should start bracing for more subscription-only features


    Garmin Connect Plus dashboard

    Ryan Haines / Android Authority

    TL;DR

    • Garmin recently conducted an earnings call for the first quarter of 2025.
    • In the earnings call, CEO Cliff Pemble explains why the company decided to launch Connect Plus.
    • Pemble also mentions reserving features for the subscription service.

    Since the launch of Connect Plus in March, Garmin users have been worried about the future of their devices. These users aren’t just upset about the mere introduction of a subscription service; there’s a palpable concern about what this paywall could mean for new features going forward. A recent earnings call appears to show that there may be some substance behind these fears.

    Garmin recently conducted an earnings call for the first quarter of 2025. During this call, the company announced an 11% year-on-year improvement, bringing in $1.54 billion. On top of that, Garmin reached a record $330 million in operating income. Around the 16:30 mark, the call was opened up for questions.

    When asked about the launch of Connect Plus and why the decision was made, CEO Cliff Pemble stated:

    I think we’ve been saying for a while that we are evaluating opportunities to have a premium offering on Garmin Connect. I think the developments of AI and particularly around AI-based insights for our users was one of those things that we felt was important to recognize the value for the investment that it takes to do.

    Pemble went on to mention that the company “felt like it was the right time” and added that they have not taken away any previously free features. Although the smartwatch maker may not have any plans to take away previously free features, Pemble seemed to confirm what users have been worrying about over the last few weeks.

    While discussing the various features Garmin offers, Pemble said “certain ones, we will likely reserve for premium offerings,” meaning the company may focus on making Connect Plus a more robust offering by working on features that will be hidden behind the subscription.

    Considering that one of the biggest complaints about Connect Plus is how underwhelming it is, Garmin wanting to build out its service doesn’t come as a big surprise. Unfortunately, if Connect Plus is to become a service worthy of a subscription, such a move is necessary. However, knowing this doesn’t exactly ease the sting that comes with realizing more and more features may become exclusive to Connect Plus.

    Got a tip? Talk to us! Email our staff at news@androidauthority.com. You can stay anonymous or get credit for the info, it’s your choice.



    Source link

  • Android Studio Meerkat Feature Drop is stable



    Posted by Adarsh Fernando, Group Product Manager

    Today, we’re excited to announce the stable release of Android Studio Meerkat Feature Drop (2024.3.2)!

    This release brings a host of new features and improvements designed to boost your productivity and enhance your development workflow. With numerous enhancements, this latest release helps you build high-quality Android apps faster and more efficiently: streamlined Jetpack Compose previews, new Gemini capabilities, better Kotlin Multiplatform (KMP) integration, improved device management, and more.

    Read on to learn about the key updates in Android Studio Meerkat Feature Drop, and download the latest stable version today to explore them yourself!

    Developer Productivity Enhancements

    Analyze Crash Reports with Gemini in Android Studio

    Debugging production crashes can require you to spend significant time switching contexts between your crash reporting tool, such as Firebase Crashlytics and Android Vitals, and investigating root causes in the IDE. Now, when viewing reports in App Quality Insights (AQI), click the Insights tab. Gemini provides a summary of the crash, generates insights, and links to useful documentation. If you also provide Gemini with access to local code context, it can provide more accurate results, relevant next steps, and code suggestions. This helps you reduce the time spent diagnosing and resolving issues.

    moving image of Gemini in the App Quality Insights tool window in Android Studio

    Gemini helps you investigate, understand, and resolve crashes in your app much more quickly in the App Quality Insights tool window.

    Generate Unit Test Scenarios with Gemini

    Writing effective unit tests is crucial but can be time-consuming. Gemini now helps kickstart this process by generating relevant test scenarios. Right-click on a class in your editor and select Gemini > Generate Unit Test Scenarios. Gemini analyzes the code and suggests test cases with descriptive names, outlining what to test. While you still implement the specific test logic, this significantly speeds up the initial setup and ensures better test coverage by suggesting scenarios you might have missed.

    moving image of generating unit test scenarios in Android Studio

    Gemini helps you generate unit test scenarios for your app.

    Gemini Prompt Library

    No more retyping your most frequently used prompts for Gemini! The new Prompt Library lets you save prompts directly within Android Studio (Settings > Gemini > Prompt Library). Whether it’s a specific code generation pattern, a refactoring instruction, or a debugging query you use often, save it once from the chat (right-click > Save prompt) and re-apply it instantly from the editor (right-click > Gemini > Prompt Library). Prompts that you save can also be shared and standardized across your team.

    moving image of prompt library in Android Studio

    The prompt library saves your frequently used Gemini prompts to make them easier to use.

    You have the option to store prompts at the IDE level or the Project level:

      • IDE level prompts are private and can be used across multiple projects.
      • Project level prompts can be shared across teams working on the same project (if .idea folder is added to VCS).

    Compose and UI Development

    Themed Icon Support Preview

    Ensure your app’s branding looks great with Android’s themed icons. Android Studio now lets you preview how your existing launcher icon adapts to the monochromatic theming algorithm directly within the IDE. This quick visual check helps you identify potential contrast issues or undesirable shapes early in the workflow, even before you provide a dedicated monochromatic drawable. This allows for faster iteration on your app’s visual identity.

    moving image of themed icon support in preview in Android Studio

    Themed icon support in Preview helps you visually check how your existing launcher icon adapts to monochromatic theming.

    Compose Preview Enhancements

    Iterating on your Compose UI is now faster and better organized:

      • Enhanced Zoom: Navigate complex layouts more easily with smoother, more responsive zooming in your Compose previews.
      • Collapsible Groups: Tidy up your preview surface by collapsing groups of related composables under their @Preview annotation names, letting you focus on specific parts of the UI without clutter.
      • Grid Mode by Default: Grid mode is now the default for a clear overview. Gallery mode (for flipping through individual previews) is available via right-click, while List view has been removed to streamline the experience.

    moving image of Compose previews in Android Studio

    Compose previews render more smoothly and make it easier to hide previews you’re not focused on.

    Build and Deploy

    KMP Shared Module Integration

    Android Studio now streamlines adding shared logic to your Android app with the new Kotlin Multiplatform Shared Module template. This provides a dedicated starting point within your Android project, making it easier to structure and build shared business logic for both Android and iOS directly from Android Studio.

    Kotlin Multiplatform template in Android Studio

    The new Kotlin Multiplatform module template makes it easier to add shared business logic to your existing app.

    Updated UX for Adding Devices

    Spend less time configuring test devices. The new Device Manager UX for adding virtual and remote devices makes it much easier to configure the devices you want from the Device Manager. To get started, click the ‘+’ action at the top of the window and select one of these options:

      • Create Virtual Device: New filters, recommendations, and creation flow guide you towards creating AVDs that are best suited for your intended purpose and your machine’s performance.
      • Add Remote Devices: With Android Device Streaming, powered by Firebase, you can connect and debug your app with a variety of real physical devices. With a new catalog view and filters, it’s now easier to locate and start using the device you need in just a few clicks.

    moving image of configuring virtual devices in Android Studio

    It’s now easier to configure virtual devices that are optimized for your workstation.

    Google Play Deprecated SDK Warnings

    Stay more informed about SDKs you publish with your app. Android Studio now displays warnings from the Google Play SDK Index when an SDK used in your app has been deprecated by its author. These warnings include information about suggested alternative SDKs, helping you proactively manage dependencies and avoid potential issues related to outdated or insecure libraries.

    Google Play Deprecated SDK warnings in Android Studio

    Play deprecated SDK warnings help you avoid potential issues related to outdated or insecure libraries.

    Updated Build Menu and Actions

    We’ve refined the Build menu for a more intuitive experience:

      • New ‘Build run-configuration-name’ Action: Builds the currently selected run configuration (e.g., :app or a specific test). This is now the default action for the toolbar button and Control/Command+F9.
      • Reordered Actions: The new build action is prioritized at the top, followed by Compile and Assemble actions.
      • Clearer Naming: “Rebuild Project” is now “Clean and Assemble Project with Tests”. “Make Project” is renamed to “Assemble Project”, and a new “Assemble Project with Tests” action is available.

    Build menu in Android Studio

    The Build menu includes behavior and naming changes to simplify and streamline the experience.

    Standardized Config Directories

    Switching between Stable, Beta, and Canary versions of Android Studio is now smoother. Configuration directories are standardized, removing the “Preview” suffix for non-stable builds. We’ve also added the micro version (e.g., AndroidStudio2024.3.2) to the path, allowing different feature drops to run side-by-side without conflicts. This simplifies managing your IDE settings, especially if you work with multiple Android Studio installations.

    IntelliJ platform update

    Android Studio Meerkat Feature Drop (2024.3.2) includes the IntelliJ 2024.3 platform release, which has many new features such as a feature-complete K2 mode, more reliable Java** and Kotlin code inspections, grammar checks during indexing, debugger improvements, speed and quality-of-life improvements to Terminal, and more.

    For more information, read the full IntelliJ 2024.3 release notes.

    Summary

    Android Studio Meerkat Feature Drop (2024.3.2) delivers these key features and enhancements:

      • Developer Productivity:
          • Analyze Crash Reports with Gemini
          • Generate Unit Test Scenarios with Gemini
          • Gemini Prompt Library
      • Compose and UI:
          • Themed Icon Preview
          • Compose Preview Enhancements (Zoom, Collapsible Groups, View Modes)
      • Build and Deploy:
          • KMP Shared Module Template
          • Updated UX for Adding Devices
          • Google Play SDK Insights: Deprecated SDK Warnings
          • Updated Build Menu & Actions
          • Standardized Config Directories
      • IntelliJ Platform Update
          • Feature complete K2 mode
          • Improved Kotlin and Java** inspection reliability
          • Debugger improvements
          • Speed and quality of life improvements in Terminal

    Getting Started

    Ready to elevate your Android development? Download Android Studio Meerkat Feature Drop and start using these powerful new features today!

    As always, your feedback is crucial. Check known issues, report bugs, suggest improvements, and connect with the community on LinkedIn, Medium, YouTube, or X. Let’s continue building amazing Android apps together!

    **Java is a trademark or registered trademark of Oracle and/or its affiliates.





    Source link

  • There’s good and bad news about the Z Fold and Flip 7 batteries- Android Authority

    There’s good and bad news about the Z Fold and Flip 7 batteries- Android Authority


    The Samsung Galaxy Z Flip 6 and Z Fold 6 on a table.

    Hadlee Simons / Android Authority

    TL;DR

    • The batteries for the Galaxy Z Fold 7 and Galaxy Z Flip 7 have received UL Demko certification.
    • The Z Fold 7 would have a total battery capacity of 4,272mAh, while the Z Flip 7 gets 4,174mAh.
    • Both devices may have 25W wireless charging speeds, up from the 15W of previous generations.

    As we get closer to summer, Samsung’s next generation of foldables is looming just over the horizon. We’re anticipating Samsung’s next Galaxy Unpacked event in the first half of July, which may be held in New York for the first time in three years. Here, we should see the Galaxy Z Fold 7 and Z Flip 7 devices, and leaks continue to give us a good idea of what to expect.

    What appear to be the batteries for both the Galaxy Z Fold 7 and Z Flip 7 have received UL Demko certification, which follows their earlier BIS certification, according to TheTechOutlook. Because of this, we now have some solid expectations for the capacities of both batteries.

    For the Galaxy Z Fold 7, we’re looking at possible battery model numbers EB-BF966ABE and EB-BF967ABE, which received certificate numbers DK-163799-UL and DK-163657-UL. These are Li-ion batteries with capacities of 2,126mAh and 2,146mAh, which means 4,272mAh total for the rated capacity. As a comparison, the Galaxy Z Fold 6 packs in 2,355mAh and 1,918mAh batteries, which brings its rated total to 4,273mAh. In terms of marketing, since the Z Fold 6 has a typical 4,400mAh capacity, we should expect something similar for the Z Fold 7 as well.

    Regarding the Z Flip 7, we’ve got model numbers EB-BF766ABE and EB-BF767ABE for the potential batteries here, with certification numbers DK-163399-UL and DK-163928-UL. On this one, the capacities of the batteries are 1,189mAh and 2,985mAh, which would be a total of 4,174mAh. For reference, the Galaxy Z Flip 6’s components were rated at 2,790mAh and 1,097mAh, for a total of 3,887mAh. The typical capacity for the Z Flip 6 is 4,000mAh, so Samsung may be positioning this as 4,300mAh for the Z Flip 7.

    From these new certification listings, those who prefer the larger Galaxy Z Fold series could see a negligible drop in battery capacity, while Z Flip fans are likely due a more substantial increase. Of course, actual battery life depends on what you do with your device all day, so these numbers may or may not have a big impact. We’ll find out when the phones launch and we try them out ourselves.

    But there is some good news for both, thankfully. It appears that the next generation of foldables should support 25W wireless charging, according to their listings in China’s 3C certification database, as spotted by TheTechOutlook. However, we also saw that both the Z Fold 7 and Flip 7 might only have 25W wired charging speeds, which isn’t as impressive as some of Samsung’s other flagships, and even mid-range devices with 45W.

    We also expect the Galaxy Z Fold 7 and Flip 7 to have the Snapdragon 8 Elite SoC and at least 12GB of RAM. With just a couple more months before the release of Samsung’s next-generation foldables, we shouldn’t have a much longer wait and will likely see plenty more leaks in the coming weeks.

    Got a tip? Talk to us! Email our staff at news@androidauthority.com. You can stay anonymous or get credit for the info, it’s your choice.



    Source link

  • Don’t upgrade to T-Mobile Experience; your legacy plan is better

    Don’t upgrade to T-Mobile Experience; your legacy plan is better


    T Mobile logo on smartphone with colored background stock photo

    Edgar Cervantes / Android Authority

    Earlier this week, T-Mobile announced the retirement of its Go5G lineup, introducing new Experience plans to take their place. At first, these changes didn’t seem too bad. But then it became clear: taxes and fees are no longer included in the advertised price. That changes everything.

    Let’s take a closer look at why you’re probably better off skipping T-Mobile Experience, and where a few rare exceptions might apply.

    Is T-Mobile Experience worth the switch?

    6 votes

    T-Mobile Experience is big on marketing, little on substance

    First, here’s a quick recap of how the new Experience plans compare to the Go5G offerings they’re replacing. Also note that there is no direct replacement for the old base Go5G plan; it’s simply no longer offered.

    T-Mobile Experience More offers everything from Go5G Plus, but adds:

    • 10GB of additional hotspot data (60GB total)
    • Free T-Satellite with Starlink through the end of the year

    Meanwhile, T-Mobile Experience Beyond takes Go5G Next and adds the following:

    • 200GB of additional hotspot access (250GB total)
    • 15GB of extra high-speed data (30GB total)
    • 15GB of high-speed data in 210 countries (a new perk)

    There’s also a new 5-year price guarantee — though so far, it looks more like a sidegrade than an upgrade. I’ll break that down further in a separate piece as I’m still digging into it.

    On paper, these changes aren’t nearly as dramatic as the shift from Magenta to Go5G. T-Mobile even tries to sweeten the pot by offering a $5 per line discount on Experience More compared to Go5G Plus. But let’s be real: the 10GB hotspot bump won’t move the needle for most users, who rarely burn through their hotspot allowance to begin with.

    The free Starlink T-Satellite beta access is a bit more compelling, especially since current beta users aren’t being charged, but that’s set to change in July. Even so, this is not a permanent perk: Experience More customers will eventually have to pay extra for satellite access, just like Go5G Plus subscribers.

    Experience Beyond, to its credit, has a bit more appeal for frequent travelers. You also get satellite backup for free as a long-term perk. But if you’re not flying internationally for work or play, or don’t need satellite access? These extras won’t change much for you.

    The bigger issue is what T-Mobile no longer includes in either of these plans: taxes and fees. Previously, T-Mobile baked taxes and most fees into the monthly price for its Go5G plans. Not anymore. With Experience, you’ll see these costs tacked on, meaning that a $90 or $100 plan might balloon closer to $110 or more, depending on where you live.

    For plans that add very little for most mainstream users, this shift feels like a pure marketing sleight of hand. There’s just not enough real value here to justify the change.

    Are there any exceptions where T-Mobile Experience might make sense?

    Google Fi Wireless logo on smartphone with colored background stock photo

    Edgar Cervantes / Android Authority

    While I generally can’t recommend these plans for existing customers, there are a few narrow scenarios where they might make sense:

    • Low-tax states with multiple lines: As one reader pointed out in the comments on my original article, folks in low-tax states with several lines might actually save a few bucks over Go5G Plus. But be careful as this is very case-specific. Do the math and verify your state’s tax rate before jumping ship.
    • Frequent international travelers: If you travel often and currently rely on an add-on or a second carrier for roaming, Experience Beyond’s expanded international data might save you money compared to Go5G Next, but again, this depends on your usage.

    That said, there are better alternatives out there for international users. Google Fi, for instance, offers a postpaid-like experience and more robust roaming features, often at a lower cost. As always, do your homework.

    What about new customers or those with much older legacy plans?

    If you are on an existing Go5G plan, my general advice is to either stay put or look at outside alternatives if you aren’t happy with T-Mobile’s recent changes. Older legacy customers may eventually feel priced out of their old plans due to creeping fees and rate hikes as well. Still, I’d advise against switching to T-Mobile Experience. You’re likely better off sticking to what you have or exploring prepaid carriers, which often offer similar service at much lower rates.

    That same advice applies to new customers considering T-Mobile. Unless you absolutely need in-store customer support, free phone deals, or other perks, T-Mobile’s current lineup just isn’t worth the premium in 2025.

    Even then, you’re often better off buying a phone outright (ideally with a no-interest financing deal from a retailer) and pairing it with a prepaid carrier. You’ll save a lot of money and can still enjoy options like insurance through select carriers. For those who still prefer in-person support, consider Cricket or Metro by T-Mobile — both are more affordable than the big three, and each still has thousands of physical stores nationwide.

    It’s time to rethink what prepaid means in the US

    I get it — switching is hard. I was slow to make the leap myself. In fact, my family still has a few Verizon lines we’re gradually migrating elsewhere as we pay off devices.

    But here’s some perspective: the US is one of the only major mobile markets where postpaid is the absolute default. In many parts of Europe and Asia, most people use prepaid or mobile virtual network operators (MVNOs) instead of signing on directly with a major carrier.

    That old stereotype of prepaid being cheap and limited? It’s outdated. The prepaid market has matured with unlimited plans, solid customer support, and even premium device compatibility. Some carriers even offer special device promotions and more.

    Of course, if you absolutely refuse to consider prepaid, then T-Mobile is still your best bet among the big three. Despite the new pricing structure, it generally remains cheaper and more user-friendly than Verizon or AT&T, though that advantage is shrinking year by year. You could also consider Boost Mobile, though I’ve heard mixed things about its postpaid service. US Cellular is also a regional option, but it’s equally pricey and very likely to be eaten up by T-Mobile and the big three anyhow.



    Source link

  • Android Developers Blog: Introducing Widget Quality Tiers



    Posted by Ivy Knight – Senior Design Advocate

    Level up your app Widgets with new quality tiers

    Widgets can be a powerful tool for engaging users and increasing the visibility of your app. They can also help you to improve the user experience by providing users with a more convenient way to access your app’s content and features.

    A great Android widget should be helpful, adaptive, and visually cohesive with the overall aesthetic of the device home screen.

    In order to help you achieve a great widget, we are pleased to introduce Android Widget Quality Tiers!

    The new Widget quality tiers are here to help guide you toward a best-practice widget implementation that will look great and bring your users value across the ecosystem of Android phones, tablets, and foldables.

    What does this mean for widget makers?

    Whether you are planning a new widget, or investing in an update to an existing widget, the Widget Quality Tiers will help you evaluate and plan for a high quality widget.

    Just like the Large Screen quality tiers help optimize app experiences, these Widget tiers guide you in creating widgets that are not just functional, but also visually appealing and user-friendly across all Android devices.

    Two screenshots of a phone display different views in the Google Play app. The first shows a list of running apps with the Widget filter applied in a search for 'Running apps'; the second shows the Nike Run Club app page.

    Widgets that meet quality tier guidelines will be discoverable under the new Widget filter in Google Play.

    Consider using our Canonical Widget layouts, which are based on Jetpack Glance components, to make it easier for you to design and build a Tier 1 widget your users will love.

    Let’s take a look at the Widget Quality Tiers

    There are three tiers built with required system defaults and suggested guidance to create an enhanced widget experience:

    Tier 1: Differentiated

    Four mockups show examples of Material Design 3 dynamic color applied to an app called 'Radio Hour'.

    Differentiated widgets go further by implementing theming and adapting to resizing.

    Tier 1 widgets are exemplary widgets that offer personalized hero experiences and create unique, productive home screens. These widgets meet Tier 2 standards plus enhancements for layout, color, discovery, and system coherence criteria.

    A stylized cartoon figure holds their chin thoughtfully while a chat bubble icon is highlighted

    For example, use the system provided corner radius, and don’t set a custom corner radius on Widgets.

    Add more personalization with dynamic color and generated previews while ensuring your widgets look good across devices by not overriding system defaults.

     Four mockups show examples of Material Design 3 components on Android: a contact card, a podcast player, a task list, and a news feed.

    Tier 1 widgets that, from the top left, properly crop content, fill the layout bounds, have appropriately sized headers and touch targets, and make good use of colors and contrast.

    Tier 2: Quality Standard

    These widgets are helpful, usable, and provide a quality experience. They meet all criteria for layout, color, discovery, and content.

    A simple to-do list app widget displays two tasks: 'Water plants' and 'Water more plants.' Both tasks have calendar icons next to them. The app is titled 'Plants' and has search and add buttons in the top right corner.

    Make sure your widget has appropriate touch targets.

    Tier 2 widgets are functional but simple; they meet the basic criteria for a usable app. But if you want to create a truly stellar experience for your users, Tier 1 criteria introduce ways to make a more personal, interactive, and coherent widget.

    Tier 3: Low Quality

    These widgets don’t meet the minimum quality bar and don’t provide a great user experience, meaning they are not following, or are missing, criteria from Tier 2.

     Examples of Material Design 3 widgets are displayed on a light pink background with stylized X shapes. Widgets include a podcast player, a contact card, to-do lists, and a music player.

    Clockwise from the top left: not filling the bounds, poorly cropped content, low color contrast, a mis-sized header, and small touch targets.

    A stylized cartoon person with orange hair and a blue shirt holds a pencil to their cheek. 'Kacie' is written above them, with a cut-off chat bubble icon.

    For example, ensure content is visible and not cropped.

    Build and elevate your Android widgets with Widget Quality Tiers

    Dive deeper into the widget quality tiers and start building widgets that not only look great but also provide an amazing user experience! Check out the official Android documentation for detailed information and best practices.


    This blog post is part of our series: Spotlight Week on Widgets, where we provide resources—blog posts, videos, sample code, and more—all designed to help you design and create widgets. You can read more in the overview of Spotlight Week: Widgets, which will be updated throughout the week.



    Source link

  • Design with Widget Canonical Layouts



    Posted by Summers Pitman – Developer Relations Engineer, and Ivy Knight – Senior Design Advocate

    Widgets can bring more productive, delightful, and customized experiences to users’ home screens, but they can be tricky to design in a way that ensures a high-quality, focused experience. In this blog post, we’ll cover how much easier Widget Canonical Layouts can make this process.

    But what is a Canonical Layout? It is a common layout pattern that works for various screen sizes. You can use Canonical Layouts as starting points: ready-to-use compositions that help layouts adapt for common use cases and screen sizes. Widgets also provide Canonical Layouts to get you started crafting higher-quality widgets.

    Widget Canonical Layouts

    The Widget Canonical Layouts Figma file makes it easy to preview your widget content across multiple breakpoints and layout types. Join me in our Figma design resource to explore how they can simplify designing a widget for one of our sample apps, JetNews.

    Three side-by-side examples of Widget Canonical Layouts in Figma being used to design a widget for JetNews

    1. Content to adapt

    Jetnews is a sample news reading app, built with Jetpack Compose. With the experience in mind, the primary user journey is reading articles.

      • A widget should be glanceable, so displaying a full article would not be a good use case.
      • Since they are timely news articles, surfacing newer content could be more productive for users.
      • We’ll want to give a condensed version of each article similar to the app home feed.
      • The addition of a bookmark action would allow the user to save and read later in the full app experience.

    Examples of using Widget Canonical Layouts in Figma to design a widget for JetNews

    2. Choosing a Canonical Layout

    With our content and user journey established, we’ll take a glance at which canonical layouts would make sense.

    We want to show at least a few new articles with a headline, truncated description, and possible thumbnail, which brings us to the Image + Text Grid layout and maybe the List layout.

    Examples of using Widget Canonical Layouts in Figma to design a widget for JetNews

    Within our new Figma Widget Canonical Layout preview, we can add in some mock content to check out how these layouts will look in various sizes.

    Examples of using Widget Canonical Layouts in Figma to design a widget for JetNews

    Moving example of using Widget Canonical Layouts in Figma to design a widget for JetNews

    3. Adapting to breakpoint sizes

    Now that we’ve previewed our content in both the grid and list layouts, we don’t have to choose between just one!

    The grid layout better displays our content for larger sizes, where we have some more room to take advantage of multiple columns and a larger thumbnail image, while the list works nicely for smaller sizes, giving a one-column layout with a smaller thumbnail.

    Examples of using Widget Canonical Layouts in Figma to design a widget for JetNews

    But we can adapt even further to allow the user more resizing flexibility and to anticipate different OEM grid sizing. For JetNews, we decided on an additional extra-small layout to accommodate a smaller grid size and vertical height while still using the List layout. For this size, I decided to remove the thumbnail altogether to give the title and action more space.

    Consider these in-between design tweaks (between any of the breakpoints) as needed; they can be applied as general rules in your widget designs.

    Here are a few guidelines to borrow:

      • Establish a content hierarchy on what to hide as the widget shrinks.
      • Use a type scale so the type scales consistently.
      • Create some parameters for image scaling with aspect ratios and cropping techniques.
      • Use component presentation changes. For example, the title bar’s FAB can be reduced to a standard icon.

    Examples of using Widget Canonical Layouts in Figma to design a widget for JetNews

    Last, I’ll swap the app icon, round up all the breakpoint sizes, and provide an option with brand colors.

    Examples of using Widget Canonical Layouts in Figma to design a widget for JetNews

    These are ready to send over to dev! Tune in for the code-along to check out how to implement the final widget.

    Go try it out and explore more widgets

    You can find the Widget Canonical Layouts at our new Figma Community Page: figma.com/@androiddesign. Stay tuned for more Android Figma resources.

    Check out the official Android documentation for detailed information and best practices on Widgets on Android, learn more about Widget Quality Tiers, and join us for the rest of Widget Spotlight week!


    This blog post is part of our series: Spotlight Week on Widgets, where we provide resources—blog posts, videos, sample code, and more—all designed to help you design and create widgets. You can read more in the overview of Spotlight Week: Widgets, which will be updated throughout the week.



    Source link

  • Tune in for our winter episode of #TheAndroidShow on March 13!



    Posted by Anirudh Dewani, Director – Android Developer Relations

    In just a few days, on Thursday, March 13 at 10AM PT, we’ll be dropping our winter episode of #TheAndroidShow, on YouTube and on developer.android.com!

    Mobile World Congress, the annual event in Barcelona where Android device makers show off their latest devices, kicked off yesterday. In our winter episode we’ll take a look at these foldables, tablets, and wearables and tell you what you need to get building.

    Plus we’ve got some news to share, like a new update for Gemini in Android Studio and some new goodies for game developers ahead of the Game Developers Conference (GDC) in San Francisco later this month. And of course, with the launch of Android XR in December, we’ll also be taking a look at how to get building there. It’s a packed show, and you don’t want to miss it!

    https://www.youtube.com/watch?v=6Nwq0oI41lg

    Some new Android foldables and tablets at Mobile World Congress

    Mobile World Congress is a big moment for Android, with partners from around the world showing off their latest devices. And if you’re already building adaptive apps, we wanted to share some of the cool new foldables and tablets that our partners released in Barcelona:

      • OPPO: OPPO launched the Find N5, their slim 8.93mm foldable with an 8.12” large screen, making it as compact or as expansive as needed.
      • Xiaomi: Xiaomi debuted the Xiaomi Pad 7 series. Xiaomi Pad 7 provides a crystal-clear display and, with the productivity accessories, users get a desktop-like experience with the convenience of a tablet.
      • Lenovo: Lenovo showcased their Yoga Tab Plus, the latest powerful tablet from their lineup designed to empower creativity and productivity.

    These new devices are a great reason to build adaptive apps that scale across screen sizes and device types. Plus, Android 16 removes the ability for apps to restrict orientation and resizability at the platform level, so you’ll want to prepare. To help you get started, the Compose Material 3 adaptive library enables you to quickly and easily create layouts across all screen sizes while reducing the overall development cost.
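
    As a quick illustration of what “adaptive” means in code, here is a minimal sketch of our own (not from the show) that branches a screen’s layout on the current window size class using the Material 3 adaptive APIs; NewsScreen, NewsFeedTwoPane, and NewsFeedSinglePane are hypothetical names.

    import androidx.compose.material3.adaptive.currentWindowAdaptiveInfo
    import androidx.compose.runtime.Composable
    import androidx.window.core.layout.WindowWidthSizeClass

    @Composable
    fun NewsScreen() {
        // Window size class for the current window, provided by the adaptive library
        val windowSizeClass = currentWindowAdaptiveInfo().windowSizeClass

        when (windowSizeClass.windowWidthSizeClass) {
            // Expanded widths (tablets, unfolded foldables): list and detail side by side
            WindowWidthSizeClass.EXPANDED -> NewsFeedTwoPane()
            // Compact and medium widths (phones, folded foldables): a single pane
            else -> NewsFeedSinglePane()
        }
    }

    // Placeholder composables for the two layouts
    @Composable
    private fun NewsFeedTwoPane() { /* hypothetical two-pane layout */ }

    @Composable
    private fun NewsFeedSinglePane() { /* hypothetical single-pane layout */ }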

    Tune in to #TheAndroidShow: March 13 at 10AM PT

    These new devices are just one of the many things we’ll cover in our winter episode, so you don’t want to miss it! If you watch live on YouTube, we’ll have folks standing by to answer your questions in the comments. See you on March 13 on YouTube or at developer.android.com/events/show!




    Source link

  • Generate stunning visuals in your Android apps with Imagen 3 via Vertex AI in Firebase



    Posted by Thomas Ezan – Sr. Android Developer Relations Engineer (@lethargicpanda)

    Imagen 3, our most advanced image generation model, is now available through Vertex AI in Firebase, making it even easier to integrate it into your Android apps.

    Designed to generate well-composed images with exceptional details, reduced artifacts, and rich lighting, Imagen 3 represents a significant leap forward in image generation capabilities.

    Hot air balloons float over a scenic desert landscape with unique rock formations.

    Image generated by Imagen 3 with prompt: “Shot in the style of DSLR camera with the polarizing filter. A photo of two hot air balloons over the unique rock formations in Cappadocia, Turkey. The colors and patterns on these balloons contrast beautifully against the earthy tones of the landscape below. This shot captures the sense of adventure that comes with enjoying such an experience.”

    A wooden robot stands in a field of yellow flowers, holding a small blue bird on its outstretched hand.

    Image generated by Imagen 3 with prompt: A weathered, wooden mech robot covered in flowering vines stands peacefully in a field of tall wildflowers, with a small blue bird resting on its outstretched hand. Digital cartoon, with warm colors and soft lines. A large cliff with a waterfall looms behind.

    Imagen 3 unlocks exciting new possibilities for Android developers. Generated visuals can adapt to the content of your app, creating a more engaging user experience. For instance, your users can generate custom artwork to enhance their in-app profile. Imagen can also improve your app’s storytelling by bringing its narratives to life with delightful personalized illustrations.

    You can experiment with image prompts in Vertex AI Studio, and learn how to improve your prompts by reviewing the prompt and image attribute guide.

    Get started with Imagen 3

    The integration of Imagen 3 is similar to adding Gemini access via Vertex AI in Firebase. Start by adding the Gradle dependencies to your Android project:

    dependencies {
        implementation(platform("com.google.firebase:firebase-bom:33.10.0"))
    
        implementation("com.google.firebase:firebase-vertexai")
    }
    

    Then, in your Kotlin code, create an ImagenModel instance by passing the model name and, optionally, a model configuration and safety settings:

    val imageModel = Firebase.vertexAI.imagenModel(
      modelName = "imagen-3.0-generate-001",
      generationConfig = ImagenGenerationConfig(
        imageFormat = ImagenImageFormat.jpeg(compressionQuality = 75),
        addWatermark = true,
        numberOfImages = 1,
        aspectRatio = ImagenAspectRatio.SQUARE_1x1
      ),
      safetySettings = ImagenSafetySettings(
        safetyFilterLevel = ImagenSafetyFilterLevel.BLOCK_LOW_AND_ABOVE,
        personFilterLevel = ImagenPersonFilterLevel.ALLOW_ADULT
      )
    )
    

    Finally, generate the image by calling generateImages:

    val imageResponse = imageModel.generateImages(
      prompt = "An astronaut riding a horse"
    )
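
    Note that generateImages is a suspend function, so it has to be called from a coroutine. Here is a minimal sketch assuming the call happens in a ViewModel; ImagenViewModel and generateAstronaut are hypothetical names, and the model configuration shown above is abbreviated:

    import androidx.lifecycle.ViewModel
    import androidx.lifecycle.viewModelScope
    import com.google.firebase.Firebase
    import com.google.firebase.vertexai.vertexAI
    import kotlinx.coroutines.launch

    class ImagenViewModel : ViewModel() {

        // Same model as above, with the configuration omitted for brevity
        private val imageModel = Firebase.vertexAI.imagenModel(
            modelName = "imagen-3.0-generate-001"
        )

        fun generateAstronaut() {
            // generateImages suspends, so launch it in the ViewModel's coroutine scope
            viewModelScope.launch {
                val imageResponse = imageModel.generateImages(
                    prompt = "An astronaut riding a horse"
                )
                // Hand imageResponse.images to the UI layer (e.g. via a StateFlow)
            }
        }
    }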
    

    Retrieve the generated image from the imageResponse and display it as a bitmap as follows:

    val image = imageResponse.images.first()
    val uiImage = image.asBitmap()
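
    If your UI is built with Jetpack Compose, one way to render the resulting Bitmap is with the standard Image composable. This is a small sketch of our own; GeneratedImage is a hypothetical composable:

    import android.graphics.Bitmap
    import androidx.compose.foundation.Image
    import androidx.compose.runtime.Composable
    import androidx.compose.ui.graphics.asImageBitmap

    @Composable
    fun GeneratedImage(bitmap: Bitmap) {
        // Convert the Android Bitmap to a Compose ImageBitmap and display it
        Image(
            bitmap = bitmap.asImageBitmap(),
            contentDescription = "Image generated by Imagen 3"
        )
    }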
    

    Next steps

    Explore the comprehensive Firebase documentation for detailed API information.

    Access to Imagen 3 using Vertex AI in Firebase is currently in Public Preview, giving you an early opportunity to experiment and innovate. For pricing details, please refer to the Vertex AI in Firebase pricing page.

    Start experimenting with Imagen 3 today! We’re looking forward to seeing how you’ll leverage Imagen 3’s capabilities to create truly unique, immersive and personalized Android experiences.



    Source link