Tag: Media

  • Building delightful Android camera and media experiences



    Posted by Donovan McMurray, Mayuri Khinvasara Khabya, Mozart Louis, and Nevin Mital – Developer Relations Engineers

    Hello Android Developers!

    We are the Android Developer Relations Camera & Media team, and we’re excited to bring you something a little different today. Over the past several months, we’ve been hard at work writing sample code and building demos that showcase how to take advantage of all the great potential Android offers for building delightful user experiences.

    Some of these efforts are available for you to explore now, and some you’ll see later throughout the year, but for this blog post we thought we’d share some of the learnings we gathered while going through this exercise.

    Grab your favorite Android plush or rubber duck, and read on to see what we’ve been up to!

    Future-proof your app with Jetpack

    Nevin Mital

    One of our focuses for the past several years has been improving the developer tools available for video editing on Android. This led to the creation of the Jetpack Media3 Transformer APIs, which offer solutions for both single-asset and multi-asset video editing preview and export. Today, I’d like to focus on the Composition demo app, a sample app that showcases some of the multi-asset editing experiences that Transformer enables.

    I started by adding a custom video compositor to demonstrate how you can arrange input video sequences into different layouts for your final composition, such as a 2×2 grid or a picture-in-picture overlay. You can customize this by implementing a VideoCompositorSettings and overriding the getOverlaySettings method. This object can then be set when building your Composition with setVideoCompositorSettings.

    Here is an example for the 2×2 grid layout:

    object : VideoCompositorSettings {
      ...
    
      override fun getOverlaySettings(inputId: Int, presentationTimeUs: Long): OverlaySettings {
        return when (inputId) {
          0 -> { // First sequence is placed in the top left
            StaticOverlaySettings.Builder()
              .setScale(0.5f, 0.5f)
              .setOverlayFrameAnchor(0f, 0f) // Middle of overlay
              .setBackgroundFrameAnchor(-0.5f, 0.5f) // Top-left section of background
              .build()
          }
    
          1 -> { // Second sequence is placed in the top right
            StaticOverlaySettings.Builder()
              .setScale(0.5f, 0.5f)
              .setOverlayFrameAnchor(0f, 0f) // Middle of overlay
              .setBackgroundFrameAnchor(0.5f, 0.5f) // Top-right section of background
              .build()
          }
    
          2 -> { // Third sequence is placed in the bottom left
            StaticOverlaySettings.Builder()
              .setScale(0.5f, 0.5f)
              .setOverlayFrameAnchor(0f, 0f) // Middle of overlay
              .setBackgroundFrameAnchor(-0.5f, -0.5f) // Bottom-left section of background
              .build()
          }
    
          3 -> { // Fourth sequence is placed in the bottom right
            StaticOverlaySettings.Builder()
              .setScale(0.5f, 0.5f)
              .setOverlayFrameAnchor(0f, 0f) // Middle of overlay
              .setBackgroundFrameAnchor(0.5f, -0.5f) // Bottom-right section of background
              .build()
          }
    
          else -> {
            StaticOverlaySettings.Builder().build()
          }
        }
      }
    }
    

    Since getOverlaySettings also provides a presentation time, we can even animate the layout, such as in this picture-in-picture example:

    moving image of picture in picture on a mobile device

    Next, I spent some time migrating the Composition demo app to use Jetpack Compose. Complicated editing flows benefit from as much screen space as is available, so I decided to use the supporting pane adaptive layout. This way, the user can fine-tune their video creation on the preview pane, and export options are shown alongside it only when the display is large enough. Below, you can see how the UI dynamically adapts to the screen size on a foldable device when switching from the outer screen to the inner screen and vice versa.
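    If you're curious what that looks like in code, here is a rough sketch of wiring up a supporting-pane layout with the Material 3 adaptive library. The pane composables (VideoPreviewPane, ExportOptionsPane) are hypothetical stand-ins, and the exact scaffold parameters may vary by library version:

    // Uses androidx.compose.material3.adaptive.layout.SupportingPaneScaffold and
    // androidx.compose.material3.adaptive.navigation.rememberSupportingPaneScaffoldNavigator.
    @Composable
    fun CompositionEditorScreen() {
      val navigator = rememberSupportingPaneScaffoldNavigator()
      SupportingPaneScaffold(
        directive = navigator.scaffoldDirective,
        value = navigator.scaffoldValue,
        mainPane = { VideoPreviewPane() },        // Hypothetical: editing preview and fine-tuning controls
        supportingPane = { ExportOptionsPane() }, // Hypothetical: export options, shown when space allows
      )
    }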


    moving image of supporting pane adaptive layout

    What’s great is that by using Jetpack Media3 and Jetpack Compose, these features also carry over seamlessly to other devices and form factors, such as the new Android XR platform. Right out-of-the-box, I was able to run the demo app in Home Space with the 2D UI I already had. And with some small updates, I was even able to adapt the UI specifically for XR with features such as multiple panels, and to take further advantage of the extra space, an Orbiter with playback controls for the editing preview.

    moving image of sequential composition preview in Android XR

    Orbiter(
      position = OrbiterEdge.Bottom,
      offset = EdgeOffset.inner(offset = MaterialTheme.spacing.standard),
      alignment = Alignment.CenterHorizontally,
      shape = SpatialRoundedCornerShape(CornerSize(28.dp))
    ) {
      Row(horizontalArrangement = Arrangement.spacedBy(MaterialTheme.spacing.mini)) {
        // Playback control for rewinding by 10 seconds
        FilledTonalIconButton({ viewModel.seekBack(10_000L) }) {
          Icon(
            painter = painterResource(id = R.drawable.rewind_10),
            contentDescription = "Rewind by 10 seconds"
          )
        }
        // Playback control for play/pause
        FilledTonalIconButton({ viewModel.togglePlay() }) {
          Icon(
            painter = painterResource(id = R.drawable.rounded_play_pause_24),
        contentDescription =
            if (viewModel.compositionPlayer.isPlaying) {
                "Pause preview playback"
            } else {
                "Resume preview playback"
            }
          )
        }
        // Playback control for forwarding by 10 seconds
        FilledTonalIconButton({ viewModel.seekForward(10_000L) }) {
          Icon(
            painter = painterResource(id = R.drawable.forward_10),
            contentDescription = "Forward by 10 seconds"
          )
        }
      }
    }
    

    Jetpack libraries unlock premium functionality incrementally

    Donovan McMurray

    Not only do our Jetpack libraries have you covered by working consistently across existing and future devices, but they also open the door to advanced functionality and custom behaviors to support all types of app experiences. In a nutshell, our Jetpack libraries aim to make the common case very accessible and easy, with hooks for adding more custom features later.

    We’ve worked with many app teams that have switched to a Jetpack library, built the basics, added their critical custom features, and actually saved developer time over their estimates. Let’s take a look at CameraX and how this incremental development can supercharge your process.

    // Set up CameraX app with preview and image capture.
    // Note: setting the resolution selector is optional; if not set,
    // a default 4:3 aspect ratio will be used.
    val aspectRatioStrategy = AspectRatioStrategy(
      AspectRatio.RATIO_16_9, AspectRatioStrategy.FALLBACK_RULE_NONE)
    val resolutionSelector = ResolutionSelector.Builder()
      .setAspectRatioStrategy(aspectRatioStrategy)
      .build()
    
    private val previewUseCase = Preview.Builder()
      .setResolutionSelector(resolutionSelector)
      .build()
    private val imageCaptureUseCase = ImageCapture.Builder()
      .setResolutionSelector(resolutionSelector)
      .setCaptureMode(ImageCapture.CAPTURE_MODE_MINIMIZE_LATENCY)
      .build()
    
    val useCaseGroupBuilder = UseCaseGroup.Builder()
      .addUseCase(previewUseCase)
      .addUseCase(imageCaptureUseCase)
    
    cameraProvider.unbindAll()
    
    camera = cameraProvider.bindToLifecycle(
      this,  // lifecycleOwner
      CameraSelector.DEFAULT_BACK_CAMERA,
      useCaseGroupBuilder.build(),
    )
    

    After setting up the basic structure for CameraX, you can set up a simple UI with a camera preview and a shutter button. You can use the CameraX Viewfinder composable which displays a Preview stream from a CameraX SurfaceRequest.

    // Create preview
    Box(
      Modifier
        .background(Color.Black)
        .fillMaxSize(),
      contentAlignment = Alignment.Center,
    ) {
      surfaceRequest?.let { request ->
        CameraXViewfinder(
          modifier = Modifier.fillMaxSize(),
          implementationMode = ImplementationMode.EXTERNAL,
          surfaceRequest = request,
        )
      }
      Button(
        onClick = onPhotoCapture,
        shape = CircleShape,
        colors = ButtonDefaults.buttonColors(containerColor = Color.White),
        modifier = Modifier
          .height(75.dp)
          .width(75.dp),
      ) {} // Empty content: the white circle itself serves as the shutter button
    }
    
    fun onPhotoCapture() {
      // Not shown: defining the ImageCapture.OutputFileOptions for
      // your saved images
      imageCaptureUseCase.takePicture(
        outputOptions,
        ContextCompat.getMainExecutor(context),
        object : ImageCapture.OnImageSavedCallback {
          override fun onError(exc: ImageCaptureException) {
            val msg = "Photo capture failed."
            Toast.makeText(context, msg, Toast.LENGTH_SHORT).show()
          }
    
          override fun onImageSaved(output: ImageCapture.OutputFileResults) {
            val savedUri = output.savedUri
            if (savedUri != null) {
              // Do something with the savedUri if needed
            } else {
              val msg = "Photo capture failed."
              Toast.makeText(context, msg, Toast.LENGTH_SHORT).show()
            }
          }
        },
      )
    }
    

    You’re already on track for a solid camera experience, but what if you wanted to add some extra features for your users? Adding filters and effects is easy with CameraX’s Media3 effect integration, which is one of the new features introduced in CameraX 1.4.0.

    Here’s how simple it is to add a black and white filter from Media3’s built-in effects.

    val media3Effect = Media3Effect(
      application,
      PREVIEW or IMAGE_CAPTURE,
      ContextCompat.getMainExecutor(application),
      {},
    )
    media3Effect.setEffects(listOf(RgbFilter.createGrayscaleFilter()))
    useCaseGroupBuilder.addEffect(media3Effect)
    

    The Media3Effect object takes a Context, a bitwise combination of the use case constants for the targeted UseCases, an Executor, and an error listener. Then you set the list of effects you want to apply. Finally, you add the effect to the useCaseGroupBuilder we defined earlier.

    side-by-side image of the camera app before and after the grayscale filter is applied

    (Left) Our camera app with no filter applied. 
     (Right) Our camera app after the createGrayscaleFilter was added.

    There are many other built-in effects you can add, too! See the Media3 Effect documentation for more options, like brightness, color lookup tables (LUTs), contrast, blur, and many other effects.
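    As a quick illustration, here is how a couple of those built-in effects might be chained together with the Media3Effect we created above; the parameter values are arbitrary:

    // Brightness and Contrast are built-in effects from androidx.media3.effect;
    // both take a value in the range [-1, 1].
    media3Effect.setEffects(
      listOf(
        Brightness(0.2f), // Raise brightness slightly
        Contrast(0.3f),   // Boost contrast
      )
    )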

    To take your effects to yet another level, it’s also possible to define your own effects by implementing the GlEffect interface, which acts as a factory of GlShaderPrograms. You can extend BaseGlShaderProgram and implement its drawFrame() method to create a custom effect of your own. A minimal implementation should tell your graphics library to use its shader program, bind the shader program’s vertex attributes and uniforms, and issue a drawing command.
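    To make that concrete, here is a minimal sketch of a custom effect, assuming the GlEffect and BaseGlShaderProgram APIs described above. The TintShaderProgram name and the shader asset paths are hypothetical, and a real implementation would also supply matching GLSL sources:

    // Uses androidx.media3.effect.{GlEffect, GlShaderProgram, BaseGlShaderProgram},
    // androidx.media3.common.util.{GlProgram, GlUtil, Size}, and android.opengl.GLES20.
    class TintEffect : GlEffect {
      override fun toGlShaderProgram(context: Context, useHdr: Boolean): GlShaderProgram =
        TintShaderProgram(context)
    }

    private class TintShaderProgram(context: Context) :
      BaseGlShaderProgram(/* useHighPrecisionColorProcessing= */ false, /* texturePoolCapacity= */ 1) {

      // "shaders/vertex.glsl" and "shaders/tint_fragment.glsl" are hypothetical asset paths.
      private val glProgram = GlProgram(context, "shaders/vertex.glsl", "shaders/tint_fragment.glsl")

      init {
        // Provide the full-frame quad vertices to the vertex shader.
        glProgram.setBufferAttribute(
          "aFramePosition",
          GlUtil.getNormalizedCoordinateBounds(),
          GlUtil.HOMOGENEOUS_COORDINATE_VECTOR_SIZE,
        )
      }

      // Keep the output frame the same size as the input frame.
      override fun configure(inputWidth: Int, inputHeight: Int): Size = Size(inputWidth, inputHeight)

      override fun drawFrame(inputTexId: Int, presentationTimeUs: Long) {
        glProgram.use()                       // Tell the graphics library to use our shader program
        glProgram.setSamplerTexIdUniform("uTexSampler", inputTexId, /* texUnitIndex= */ 0)
        glProgram.bindAttributesAndUniforms() // Bind the program's vertex attributes and uniforms
        // Issue the drawing command.
        GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, /* first= */ 0, /* count= */ 4)
      }
    }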

    Jetpack libraries meet you where you are and your app’s needs. Whether that be a simple, fast-to-implement, and reliable implementation, or custom functionality that helps the critical user journeys in your app stand out from the rest, Jetpack has you covered!

    Jetpack offers a foundation for innovative AI Features

    Mayuri Khinvasara Khabya

    Just as Donovan demonstrated with CameraX for capture, Jetpack Media3 provides a reliable, customizable, and feature-rich solution for playback with ExoPlayer. The AI Samples app builds on this foundation to delight users with helpful and enriching AI-driven additions.

    In today’s rapidly evolving digital landscape, users expect more from their media applications. Simply playing videos is no longer enough. Developers are constantly seeking ways to enhance user experiences and provide deeper engagement. Leveraging the power of Artificial Intelligence (AI), particularly when built upon robust media frameworks like Media3, offers exciting opportunities. Let’s take a look at some of the ways we can transform the way users interact with video content:

      • Empowering Video Understanding: The core idea is to use AI, specifically multimodal models like the Gemini Flash and Pro models, to analyze video content and extract meaningful information. This goes beyond simply playing a video; it’s about understanding what’s in the video and making that information readily accessible to the user.
      • Actionable Insights: The goal is to transform raw video into summaries, insights, and interactive experiences. This allows users to quickly grasp the content of a video and find specific information they need or learn something new!
      • Accessibility and Engagement: AI helps make videos more accessible by providing features like summaries, translations, and descriptions. It also aims to increase user engagement through interactive features.

    A Glimpse into AI-Powered Video Journeys

    The following example demonstrates potential video journeys enhanced by artificial intelligence. This sample integrates several components, such as ExoPlayer and Transformer from Media3; the Firebase SDK (leveraging Vertex AI on Android); and Jetpack Compose, ViewModel, and StateFlow. The code will be available soon on GitHub.

    moving images of examples of AI-powered video journeys

    (Left) Video summarization  
     (Right) Thumbnail timestamps and HDR frame extraction

    There are two experiences in particular that I’d like to highlight:

      • HDR Thumbnails: AI can help identify key moments in the video that could make for good thumbnails. With those timestamps, you can use the new ExperimentalFrameExtractor API from Media3 to extract HDR thumbnails from videos, providing richer visual previews (see the sketch after this list).
      • Text-to-Speech: AI can be used to convert textual information derived from the video into spoken audio, enhancing accessibility. On Android you can also choose to play audio in different languages and dialects thus enhancing personalization for a wider audience.
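    To illustrate the first of these, here is a rough sketch of extracting a frame at an AI-suggested timestamp. It assumes Media3’s ExperimentalFrameExtractor (added in Media3 1.5); treat the exact signatures as approximations, and note that videoUri and timestampMs are hypothetical inputs:

    // Configure the extractor to return HDR bitmaps where the device supports them.
    val frameExtractor = ExperimentalFrameExtractor(
      context,
      ExperimentalFrameExtractor.Configuration.Builder()
        .setExtractHdrFrames(true)
        .build(),
    )
    frameExtractor.setMediaItem(MediaItem.fromUri(videoUri), /* effects= */ listOf())

    // timestampMs: a hypothetical key moment suggested by the model.
    val frameFuture = frameExtractor.getFrame(/* positionMs= */ timestampMs)
    frameFuture.addListener(
      {
        val thumbnail = frameFuture.get().bitmap // The extracted video frame as a Bitmap
        // Not shown: publish the bitmap to your UI state.
      },
      ContextCompat.getMainExecutor(context),
    )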

    Using the right AI solution

    Currently, only cloud models support video inputs, so we went ahead with a cloud-based solution. Integrating Firebase in our sample empowers the app to:

      • Generate real-time, concise video summaries automatically.
      • Produce comprehensive content metadata, including chapter markers and relevant hashtags.
      • Facilitate seamless multilingual content translation.

    So how do you actually interact with a video and work with Gemini to process it? First, send your video as an input parameter to your prompt:

    val prompt =
    "Summarize this video in the form of top 3-4 takeaways only. Write in the form of bullet points. Don't assume if you don't know"
    
    val generativeModel = Firebase.vertexAI.generativeModel("gemini-2.0-flash")
    _outputText.value = OutputTextState.Loading
    
    viewModelScope.launch(Dispatchers.IO) {
        try {
            val requestContent = content {
                fileData(videoSource.toString(), "video/mp4")
                text(prompt)
            }
            val outputStringBuilder = StringBuilder()
    
            generativeModel.generateContentStream(requestContent).collect { response ->
                outputStringBuilder.append(response.text)
                // Surface partial results in the UI as they stream in
                _outputText.value = OutputTextState.Success(outputStringBuilder.toString())
            }
    
            // Emit the final, complete response
            _outputText.value = OutputTextState.Success(outputStringBuilder.toString())
    
        } catch (error: Exception) {
            _outputText.value = OutputTextState.Error(error.localizedMessage ?: "Unknown error")
        }
    }
    

    Notice there are two key components here:

      • FileData: This component integrates a video into the query.
      • Prompt: This tells the model what specific assistance the user needs in relation to the provided video.

    Of course, you can fine-tune your prompt to fit your requirements and get responses accordingly.

    In conclusion, by harnessing the capabilities of Jetpack Media3 and integrating AI solutions like Gemini through Firebase, you can significantly elevate video experiences on Android. This combination enables advanced features like video summaries, enriched metadata, and seamless multilingual translations, ultimately enhancing accessibility and engagement for users. As these technologies continue to evolve, the potential for creating even more dynamic and intelligent video applications is vast.

    Go above-and-beyond with specialized APIs

    Mozart Louis

    Android 16 introduces a new audio PCM Offload mode that can reduce the power consumption of audio playback in your app, leading to longer playback time and increased user engagement. Eliminating power anxiety greatly enhances the user experience.

    Oboe is Android’s premier audio API, which developers can use to create high-performance, low-latency audio apps. A new feature called Native PCM Offload playback is being added to the Android NDK in Android 16.

    Offload playback helps save battery life when playing audio. It works by sending a large chunk of audio to a special part of the device’s hardware (a DSP). This allows the CPU of the device to go into a low-power state while the DSP handles playing the sound. This works with uncompressed audio (like PCM) and compressed audio (like MP3 or AAC), where the DSP also takes care of decoding.

    This can result in significant power saving while playing back audio and is perfect for applications that play audio in the background or while the screen is off (think audiobooks, podcasts, music etc).

    We created the sample app PowerPlay to demonstrate how to implement these features using the latest NDK version, C++ and Jetpack Compose.

    Here are the most important parts!

    The first order of business is to ensure the device supports audio offload for the file attributes you need. In the example below, we check whether the device supports audio offload of a stereo, float PCM file with a sample rate of 48,000 Hz.

    val format = AudioFormat.Builder()
        .setEncoding(AudioFormat.ENCODING_PCM_FLOAT)
        .setSampleRate(48000)
        .setChannelMask(AudioFormat.CHANNEL_OUT_STEREO)
        .build()
    
    val attributes = AudioAttributes.Builder()
        .setContentType(AudioAttributes.CONTENT_TYPE_MUSIC)
        .setUsage(AudioAttributes.USAGE_MEDIA)
        .build()
    
    val isOffloadSupported =
        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.Q) {
            AudioManager.isOffloadedPlaybackSupported(format, attributes)
        } else {
            false
        }
    
    if (isOffloadSupported) {
        player.initializeAudio(PerformanceMode.POWER_SAVING_OFFLOADED)
    }
    

    Once we know the device supports audio offload, we can confidently set the Oboe audio streams’ performance mode to the new performance mode option, PerformanceMode::POWER_SAVING_OFFLOADED.

    // Create an audio stream
    AudioStreamBuilder builder;
    builder.setChannelCount(mChannelCount);
    builder.setDataCallback(mDataCallback);
    builder.setFormat(AudioFormat::Float);
    builder.setSampleRate(48000);
    
    builder.setErrorCallback(mErrorCallback);
    builder.setPresentationCallback(mPresentationCallback);
    builder.setPerformanceMode(PerformanceMode::POWER_SAVING_OFFLOADED);
    builder.setFramesPerDataCallback(128);
    builder.setSharingMode(SharingMode::Exclusive);
    builder.setSampleRateConversionQuality(SampleRateConversionQuality::Medium);
    Result result = builder.openStream(mAudioStream);
    

    Now when audio is played back, it will be offloaded to the DSP, helping save power during playback.

    There is more to this feature, which will be covered in a future blog post fully detailing all of the newly available APIs that will help you optimize your audio playback experience!

    What’s next

    Of course, we were only able to share the tip of the iceberg with you here, so to dive deeper into the samples, check out the following links:

    Hopefully these examples have inspired you to explore what new and fascinating experiences you can build on Android. Tune in to our session at Google I/O in a couple weeks to learn even more about use-cases supported by solutions like Jetpack CameraX and Jetpack Media3!




  • Common media processing operations with Jetpack Media3 Transformer



    Posted by Nevin Mital – Developer Relations Engineer, and Kristina Simakova – Engineering Manager

    Android users have demonstrated an increasing desire to create, personalize, and share video content online, whether to preserve their memories or to make people laugh. As such, media editing is a cornerstone of many engaging Android apps, and historically developers have often relied on external libraries to handle operations such as Trimming and Resizing. While these solutions are powerful, integrating and managing external library dependencies can introduce complexity and lead to challenges with managing performance and quality.

    The Jetpack Media3 Transformer APIs offer a native Android solution that streamlines media editing with fast performance, extensive customizability, and broad device compatibility. In this blog post, we’ll walk through some of the most common editing operations with Transformer and discuss its performance.

    Getting set up with Transformer

    To get started with Transformer, check out our Getting Started documentation for details on how to add the dependency to your project and a basic understanding of the workflow when using Transformer. In a nutshell, you’ll:

      • Create one or many MediaItem instances from your video file(s), then
      • Apply item-specific edits to them by building an EditedMediaItem for each MediaItem,
      • Create a Transformer instance configured with settings applicable to the whole exported video,
      • and finally start the export to save your applied edits to a file.

    Aside: You can also use a CompositionPlayer to preview your edits before exporting, but this is out of scope for this blog post, as this API is still a work in progress. Please stay tuned for a future post!

    Here’s what this looks like in code:

    val mediaItem = MediaItem.Builder().setUri(mediaItemUri).build()
    val editedMediaItem = EditedMediaItem.Builder(mediaItem).build()
    val transformer = 
      Transformer.Builder(context)
        .addListener(/* Add a Transformer.Listener instance here for completion events */)
        .build()
    transformer.start(editedMediaItem, outputFilePath)
    

    Transcoding, Trimming, Muting, and Resizing with the Transformer API

    Let’s now take a look at four of the most common single-asset media editing operations, starting with Transcoding.

    Transcoding is the process of re-encoding an input file into a specified output format. For this example, we’ll request the output to have video in HEVC (H265) and audio in AAC. Starting with the code above, here are the lines that change:

    val transformer = 
      Transformer.Builder(context)
        .addListener(...)
        .setVideoMimeType(MimeTypes.VIDEO_H265)
        .setAudioMimeType(MimeTypes.AUDIO_AAC)
        .build()
    

    Many of you may already be familiar with FFmpeg, a popular open-source library for processing media files, so we’ll also include FFmpeg commands for each example to serve as a helpful reference. Here’s how you can perform the same transcoding with FFmpeg:

    $ ffmpeg -i $inputVideoPath -c:v libx265 -c:a aac $outputFilePath
    

    The next operation we’ll try is Trimming.

    Specifically, we’ll set Transformer up to trim the input video from the 3 second mark to the 8 second mark, resulting in a 5 second output video. Starting again from the code in the “Getting set up” section above, here are the lines that change:

    // Configure the trim operation by adding a ClippingConfiguration to
    // the media item
    val clippingConfiguration =
       MediaItem.ClippingConfiguration.Builder()
         .setStartPositionMs(3000)
         .setEndPositionMs(8000)
         .build()
    val mediaItem =
       MediaItem.Builder()
         .setUri(mediaItemUri)
         .setClippingConfiguration(clippingConfiguration)
         .build()
    
    // Transformer also has a trim optimization feature we can enable.
    // This will prioritize Transmuxing over Transcoding where possible.
    // See more about Transmuxing further down in this post.
    val transformer = 
      Transformer.Builder(context)
        .addListener(...)
        .experimentalSetTrimOptimizationEnabled(true)
        .build()
    

    With FFmpeg:

    $ ffmpeg -ss 00:00:03 -i $inputVideoPath -t 00:00:05 $outputFilePath
    

    Next, we can mute the audio in the exported video file.

    val editedMediaItem = 
      EditedMediaItem.Builder(mediaItem)
        .setRemoveAudio(true)
        .build()
    

    The corresponding FFmpeg command:

    $ ffmpeg -i $inputVideoPath -c copy -an $outputFilePath
    

    And for our final example, we’ll try resizing the input video by scaling it down to half its original height and width.

    val scaleEffect = 
      ScaleAndRotateTransformation.Builder()
        .setScale(0.5f, 0.5f)
        .build()
    val editedMediaItem =
      EditedMediaItem.Builder(mediaItem)
        .setEffects(
          /* audio */ Effects(emptyList(), 
          /* video */ listOf(scaleEffect))
        )
        .build()
    

    An FFmpeg command could look like this:

    $ ffmpeg -i $inputVideoPath -filter:v "scale=w=trunc(iw/4)*2:h=trunc(ih/4)*2" $outputFilePath
    

    Of course, you can also combine these operations to apply multiple edits on the same video, as sketched below, but hopefully these examples serve to demonstrate that the Transformer APIs make configuring these edits simple.
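    Putting it together, a single EditedMediaItem can carry several of these edits at once. This sketch simply reuses the clippingConfiguration and scaleEffect values defined in the examples above:

    // Trim (via the clipping configuration), mute, and resize in one export.
    val mediaItem =
      MediaItem.Builder()
        .setUri(mediaItemUri)
        .setClippingConfiguration(clippingConfiguration) // Trim to 00:03-00:08
        .build()
    val editedMediaItem =
      EditedMediaItem.Builder(mediaItem)
        .setRemoveAudio(true) // Mute
        .setEffects(Effects(/* audio */ emptyList(), /* video */ listOf(scaleEffect))) // Resize
        .build()
    transformer.start(editedMediaItem, outputFilePath)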

    Transformer API Performance results

    Here are some benchmarking measurements for each of the 4 operations taken with the Stopwatch API, running on a Pixel 9 Pro XL device:

    (Note that performance for operations like these can depend on a variety of reasons, such as the current load the device is under, so the numbers below should be taken as rough estimates.)

    Input video format: 10s 720p H264 video with AAC audio

    • Transcoding to H265 video and AAC audio: ~1300ms
    • Trimming video to 00:03-00:08: ~2300ms
    • Muting audio: ~200ms
    • Resizing video to half height and width: ~1200ms

    Input video format: 25s 360p VP8 video with Vorbis audio

    • Transcoding to H265 video and AAC audio: ~3400ms
    • Trimming video to 00:03-00:08: ~1700ms
    • Muting audio: ~1600ms
    • Resizing video to half height and width: ~4800ms

    Input video format: 4s 8k H265 video with AAC audio

    • Transcoding to H265 video and AAC audio: ~2300ms
    • Trimming video to 00:03-00:08: ~1800ms
    • Muting audio: ~2000ms
    • Resizing video to half height and width: ~3700ms

    One technique Transformer uses to speed up editing operations is to prioritize transmuxing for basic video edits where possible. Transmuxing refers to the process of repackaging video streams without re-encoding, which ensures high-quality output and significantly faster processing times.

    When not possible, Transformer falls back to transcoding, a process that involves first decoding video samples into raw data, then re-encoding them for storage in a new container. Here are some of these differences:

    Transmuxing

      • Transformer’s preferred approach when possible – a quick transformation that preserves elementary streams.
      • Only applicable to basic operations, such as rotating, trimming, or container conversion.
      • No quality loss or bitrate change.


    Transcoding

      • Transformer’s fallback approach when transmuxing isn’t possible – involves decoding and re-encoding elementary streams.
      • More extensive modifications to the input video are possible.
      • Loss in quality due to re-encoding, but can achieve a desired bitrate target.


    We are continuously implementing further optimizations, such as the recently introduced experimentalSetTrimOptimizationEnabled setting that we used in the Trimming example above.

    A trim is usually performed by re-encoding all the samples in the file, but since encoded media samples are stored chronologically in their container, we can improve efficiency by only re-encoding the group of pictures (GOP) between the start point of the trim and the first keyframe at or after the start point, then stream-copying the rest.

    Since we only decode and encode a fixed portion of any file, the encoding latency is roughly constant, regardless of what the input video duration is. For long videos, this improved latency is dramatic. The optimization relies on being able to stitch part of the input file with newly-encoded output, which means that the encoder’s output format and the input format must be compatible.

    If the optimization fails, Transformer automatically falls back to normal export.

    What’s next?

    As part of Media3, Transformer is a native solution with low integration complexity, is tested on and ensures compatibility with a wide variety of devices, and is customizable to fit your specific needs.

    To dive deeper, you can explore the Media3 Transformer documentation, run our sample apps, or learn how to complement your media editing pipeline with Jetpack Media3. We’ve already seen app developers benefit greatly from adopting Transformer, so we encourage you to try it out yourself to streamline your media editing workflows and enhance your app’s performance!




  • Prioritize media privacy with Android Photo Picker and build user trust



    Posted by Tatiana van Maaren – Global T&S Partnerships Lead, Privacy & Security, and Roxanna Aliabadi Walker – Product Manager

    At Google Play, we’re dedicated to building user trust, especially when it comes to sensitive permissions and your data. We understand that managing files and media permissions can be confusing, and users often worry about which files apps can access. Since these files often contain sensitive information like family photos or financial documents, it’s crucial that users feel in control. That’s why we’re working to provide clearer choices, so users can confidently grant permissions without sacrificing app functionality or their privacy.

    Below is a set of best practices for improving user trust around broad file access, ultimately leading to a more successful and sustainable app ecosystem.

    Prioritize user privacy with data minimization

    Building user trust starts with requesting only the permissions essential for your app’s core functions. We understand that photos and videos are sensitive data, and broad access increases security risks. That’s why Google Play now restricts READ_MEDIA_IMAGES and READ_MEDIA_VIDEO permissions, allowing developers to request them only when absolutely necessary, typically for apps like photo/video managers and galleries.

    Leverage privacy-friendly solutions

    Instead of requesting broad storage access, we encourage developers to use the Android Photo Picker, introduced in Android 13. This tool offers a privacy-centric way for users to select specific media files without granting access to their entire library. The Android Photo Picker provides an intuitive interface, including access to cloud-backed photos and videos, and allows for customization to fit your app’s needs. In addition, this system picker is backported to Android 4.4, ensuring a consistent experience for all users. By eliminating runtime permissions, the photo picker simplifies the user experience and builds trust through transparency.
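    For example, launching the photo picker takes only a few lines with the Activity Result API’s PickVisualMedia contract; here is a minimal sketch from an Activity or Fragment:

    // Register the photo picker contract.
    val pickMedia =
      registerForActivityResult(ActivityResultContracts.PickVisualMedia()) { uri ->
        if (uri != null) {
          // The app receives temporary, scoped read access to just this item.
          Log.d("PhotoPicker", "Selected URI: $uri")
        } else {
          Log.d("PhotoPicker", "No media selected")
        }
      }

    // Launch the picker, limited to images only.
    pickMedia.launch(PickVisualMediaRequest(ActivityResultContracts.PickVisualMedia.ImageOnly))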

    Build trust through transparent data practices

    We understand that some developers have historically used custom photo pickers for tailored user experiences. However, regardless of whether you use a custom or system picker, transparency with users is crucial. Users want to know why your app needs access to their photos and videos.

    Developers should strive to provide clear and concise explanations within their apps, ideally at the point where the permission is requested. Consider the following best-practice guidelines while crafting your permission request mechanisms:

      • When requesting media access, provide clear explanations within your app. Specifically, tell users which media your app needs (e.g., all photos, profile pictures, sharing videos) and explain the functionality that relies on it (e.g., ‘To choose a profile picture,’ ‘To share videos with friends’).
      • Clearly outline how user data will be used and protected in your privacy policies. Explain whether data is stored locally, transmitted to a server, or shared with third parties. Reassure users that their data will be handled responsibly and securely.

    Learn how Snap has embraced the Android System Picker to prioritize user privacy and streamline their media selection experience. Here’s what they have to say about their implementation:

    A grid of photos in the photo library is shown on a smartphone screen, including a waterfall and two people smiling and posing for the camera. The Google Photos interface is at the top, with the Photos tab selected, and one photo from the grid is selected for use

    “One of our goals is to provide a seamless and intuitive communication experience while ensuring Snapchatters have control over their content. The new flow of the Android Photo Picker is the perfect balance of providing user control of the content they want to share while ensuring fast communication with friends on Snapchat.”

    Marc Brown, Product Manager

    Get started

    Start building a more trustworthy app experience. Explore the Android Photo Picker and implement privacy-first data practices today.

    Acknowledgement

    Special thanks to: May Smith – Product Manager, and Anita Issagholyan – Senior Policy Specialist




  • 7 Trending Social Media Content Types That Work



    When it comes to trending social media content, the types of content you see online are pretty much the same everywhere. But the question is: do you know what they are called and how important they are for supporting your business? No? Do not worry.

    In this blog, we will discuss the types of social media content that are recommended and used by almost every digital marketing agency. So, let us dig into it. Here you go.

    Trending Social Media Content

    Before heading toward the trending content, you should consider conducting a social media audit. It is an essential step for building a clear road map by analyzing your current status. Here are the types of social media content that are currently popular:

    Short Videos

    If you surf social media, you must have noticed that in the past few years, short videos have taken over social platforms. Any video under 10 minutes in duration falls into the short-video category. From Reels to YouTube Shorts and TikTok, all are types of short videos.

    Generally, these videos run 0-3 minutes, and they have proven to be one of the strongest mediums for connecting with potential audiences. Many platforms are gaining more users than ever because of this content, among them Instagram, TikTok, and YouTube.

    If you hire a digital marketing company in Jaipur, they too will suggest including short videos for a strong social media presence. A good example of using short videos to build an amazing brand identity is The Rang, an Indian food restaurant in Vietnam.

    Long Videos

    While short videos are great for daily engagement, long videos are unbeatable for building brand image. If you want to convey something or tell a story that runs 10 minutes or longer, long videos are the best medium for doing so.

    Also, the best medium for conveying long video content is YouTube. It is the platform made for long videos; Shorts launched only recently, and before that the platform was known for longer content.

    The best example of this content type and its success is BuzzFeed’s Tasty. This YouTube channel has a range of long videos, from recipes to quick cooking and cooking games.

    User-Generated Content (UGC)

    UGC is content created for your brand by individuals unaffiliated with it. The motive is to build your brand’s relevance and authenticity among different users. UGC is also a perfect example of the power of social media content.

    If you want to understand what UGC is, let us tell you. Have you ever read the reviews and ratings of a product on Google before buying it? If you have, then it is the same process: a peer review you rely on before trusting a brand, 100% genuine and credible. This is why almost every Social Media Marketing Company in India focuses on UGC.

    The best platform to post your UGC has got to be Instagram. You can post short videos, long videos, carousel posts, Insta Stories, and more. Before Instagram, X was the hub for UGC, but in the past 3-4 years the dynamics have changed, and Instagram has taken over the marketplace.

    The best example of UGC is Stonewood Homestay, which has been reviewed by many users, including some famous influencers.

    Live Streams

    The next type of content in line is live streams. Imagine being able to make your digital family part of every event or product launch you host. Through this type of content, you do not just inform your audience; you make them part of the launch. Now you know the reason for the hype.

    Live streaming is becoming common on many platforms, but the best places to go live are Instagram, TikTok, and YouTube. These platforms are hugely popular with users and even celebrities. Live sessions are a strong medium for showcasing your authenticity, for example by showing behind-the-scenes footage and much more.

    Poll and Questions

    The next type of content on our list is polls and questions. The best way to get engagement on your social media account is to post something interactive, which is the main focus of social media advertising, and that is exactly what polls and questions do. Most influencers, celebrities, and social media users post polls and questions in their statuses or stories to interact with their audiences and followers.

    You do not need a weighty question to ask your audience; it can be about anything. You can ask what you should wear for the day by offering two outfit options. As a business, you can ask about product or color preferences. You can also hire a Digital Marketing Company in Jaipur to do this job for you.

    The best platforms for polls and questions are Instagram, LinkedIn, Reddit, X, and Meta. A good example of poll-and-question engagement can be seen with Fortune.

    Influencer Collaboration

    We have mentioned influencers before in this blog, so we must acknowledge how important a part of the social media industry influencers have become. You can target an audience that is loyal to a particular influencer and will trust your brand because of them. By now, if anybody mentions social media, you think of influencers.

    That is how thoroughly influencers have taken over the social media game. Collaborating with influencers gives your brand good exposure and helps people discover your products or services. Instagram, TikTok, Twitch, and YouTube are the best platforms for influencer marketing.

    The best example of influencer collaboration is Charli D’Amelio with Dunkin’. People know Charli as a huge Dunkin’ fan, too.

    Memes

    What is social media without memes? Right? So, promoting your brand through memes is a genius idea. You can create humor from an ongoing trend by presenting it in your own way.

    You can also use your creativity to craft a joke, even at your own expense, that becomes a trend; the meme game never fails. The best platforms for this content are Instagram, Meta, X, YouTube, and TikTok. And if you are willing to up your SEO game, you can hire an SEO company in Jaipur or any other place.

    The best example of meme-based social media content is what Zomato does with its creative meme ideas; they go viral every time.

    Last Words

    Finally, we can say that different types of social media content can help you build a strong social media presence. As a leading digital marketing company, whether in Jaipur, elsewhere in India, or any other place, you must take care of all these content types while posting for clients.

    Verve Online Marketing is a leading Social Media Marketing Company in India that can help you build a strong social media presence through unique content creation and strategies. To learn more, please check out the website today.





