Blog

  • Peacock built adaptively on Android to deliver great experiences across screens



    Posted by Sa-ryong Kang and Miguel Montemayor – Developer Relations Engineers

    Peacock is NBCUniversal’s streaming service app available in the US, offering culture-defining entertainment including live sports, exclusive original content, TV shows, and blockbuster movies. The app continues to evolve, becoming more than just a platform to watch content, but a hub of entertainment.

    Today’s users are consuming entertainment on an increasingly wider array of device sizes and types, and in particular are moving towards mobile devices. Peacock has adopted Jetpack Compose to help with its journey in adapting to more screens and meeting users where they are.

    https://www.youtube.com/watch?v=ooRcQFMYzmA

    Disclaimer: Peacock is available in the US only. This video will only be viewable to US viewers.

    Adapting to more flexible form factors

    The Peacock development team is focused on bringing the best experience to users, no matter what device they’re using or when they want to consume content. With an emerging trend from app users to watch more on mobile devices and large screens like foldables, the Peacock app needs to be able to adapt to different screen sizes. As more devices are introduced, the team needed to explore new solutions that make the most out of each unique display permutation.

    The goal was to have the Peacock app adapt to these new displays while continually offering high-quality entertainment without interruptions, like the stream reloading or visual errors. Thinking ahead, they also wanted to prepare and build a solution that was ready for Android XR, as the entertainment landscape is shifting towards more immersive experiences.

    quote card featuring a headshot of Diego Valente, Head of Mobile, Peacock & Global Streaming, reads 'Thinking adaptively isn't just about supporting tablets or large screens - it's about future proofing your app. Investing in adaptability helps you meet users' expectations of having seamless experiences across all their devices and sets you up for what's next.'

    Building a future-proof experience with Jetpack Compose

    In order to build a scalable solution that would help the Peacock app continue to evolve, the app was migrated to Jetpack Compose, Android’s toolkit for building scalable UI. One of the essential tools they used was the WindowSizeClass API, which helps developers create and test UI layouts for different size ranges. This API then allows the app to seamlessly switch between pre-set layouts as it reaches established viewport breakpoints for different window sizes.
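
    The post doesn’t include code, but a minimal sketch of this pattern might look like the following (using the material3-adaptive currentWindowAdaptiveInfo API; the three player layouts are hypothetical stand-ins for Peacock’s actual screens):

    import androidx.compose.material3.adaptive.currentWindowAdaptiveInfo
    import androidx.compose.runtime.Composable
    import androidx.window.core.layout.WindowWidthSizeClass

    @Composable
    fun PlayerScreen() {
        // Recomputed whenever the window size changes, e.g. on unfold or resize.
        val windowSizeClass = currentWindowAdaptiveInfo().windowSizeClass
        when (windowSizeClass.windowWidthSizeClass) {
            // CompactPlayerLayout etc. are hypothetical layout composables.
            WindowWidthSizeClass.COMPACT -> CompactPlayerLayout() // phones
            WindowWidthSizeClass.MEDIUM -> MediumPlayerLayout()   // folded foldables
            else -> ExpandedPlayerLayout()                        // tablets, unfolded foldables
        }
    }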

    The API was used in conjunction with Kotlin Coroutines and Flows to keep the UI state responsive as the window size changed. To test their work and fine-tune for edge-case devices, Peacock used the Android Studio emulator to simulate a wide range of Android-based devices.

    Jetpack Compose allowed the team to build adaptively, so now the Peacock app responds to a wide variety of screens while offering a seamless experience to Android users. “The app feels more native, more fluid, and more intuitive across all form factors,” said Diego Valente, Head of Mobile, Peacock and Global Streaming. “That means users can start watching on a smaller screen and continue instantly on a larger one when they unfold the device—no reloads, no friction. It just works.”

    Preparing for immersive entertainment experiences

    In building adaptive apps on Android, John Jelley, Senior Vice President, Product & UX, Peacock and Global Streaming, says Peacock has also laid the groundwork to quickly adapt to the Android XR platform: “Android XR builds on the same large screen principles, our investment here naturally extends to those emerging experiences with less developmental work.”

    The team is excited about the prospect of features unlocked by Android XR, like Multiview for sports and TV, which enables users to watch multiple games or camera angles at once. By tailoring spatial windows to the user’s environment, the app could offer new ways for users to interact with contextual metadata like sports stats or actor information—all without ever interrupting their experience.

    Build adaptive apps

    Learn how to unlock your app’s full potential on phones, tablets, foldables, and beyond.

    Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.



    Source link

  • Android Developers Blog: Updates to the Android XR SDK: Introducing Developer Preview 2



    Posted by Matthew McCullough – VP of Product Management, Android Developer

    Since launching the Android XR SDK Developer Preview alongside Samsung, Qualcomm, and Unity last year, we’ve been blown away by all of the excitement we’ve been hearing from the broader Android community. Whether it’s through coding live-streams or local Google Developer Group talks, it’s been an outstanding experience participating in the community to build the future of XR together, and we’re just getting started.

    Today we’re excited to share an update to the Android XR SDK: Developer Preview 2, packed with new features and improvements to help you develop helpful and delightful immersive experiences with familiar Android APIs, tools and open standards created for XR.

    At Google I/O, we have two technical sessions related to Android XR. The first, Building differentiated apps for Android XR with 3D content, covers many features present in Jetpack SceneCore and ARCore for Jetpack XR. The second, The future is now, with Compose and AI on Android XR, covers creating XR-differentiated UI and our vision of the intersection of XR with cutting-edge AI capabilities.

    Android XR sessions at Google I/O 2025

    Building differentiated apps for Android XR with 3D content and The future is now, with Compose and AI on Android XR

    What’s new in Developer Preview 2

    Since the release of Developer Preview 1, we’ve been focused on making the APIs easier to use and adding new immersive Android XR features. Your feedback has helped us shape the development of the tools, SDKs, and the platform itself.

    With the Jetpack XR SDK, you can now play back 180° and 360° videos, which can be stereoscopic by encoding with the MV-HEVC specification or by encoding view-frames adjacently. The MV-HEVC standard is optimized and designed for stereoscopic video, allowing your app to efficiently play back immersive videos at great quality. Apps built with Jetpack Compose for XR can use the SpatialExternalSurface composable to render media, including stereoscopic videos.
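
    As a rough sketch of what this might look like (SpatialExternalSurface and StereoMode are part of the Jetpack Compose for XR developer preview; treat the exact parameter names as assumptions, and exoPlayer as a Media3 player created elsewhere):

    // Sketch only: renders a side-by-side stereoscopic video in a subspace.
    // Names follow the Developer Preview docs; exact signatures may differ.
    Subspace {
        SpatialExternalSurface(
            modifier = SubspaceModifier.width(1200.dp).height(676.dp),
            stereoMode = StereoMode.SideBySide, // view-frames encoded adjacently
        ) {
            // Attach the surface to the Media3 player once it exists.
            onSurfaceCreated { surface -> exoPlayer.setVideoSurface(surface) }
            onSurfaceDestroyed { exoPlayer.clearVideoSurface() }
        }
    }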

    Using Jetpack Compose for XR, you can now also define layouts that adapt to different XR display configurations. For example, use a SubspaceModifier to specify the size of a Subspace as a percentage of the device’s recommended viewing size, so a panel effortlessly fills the space it’s positioned in.

    Material Design for XR now supports more component overrides for TopAppBar, AlertDialog, and ListDetailPaneScaffold, helping your large-screen enabled apps that use Material Design effortlessly adapt to the new world of XR.

    An app adapts to XR using Material Design for XR with the new component overrides

    An app adapts to XR using Material Design for XR with the new component overrides

    In ARCore for Jetpack XR, you can now track hands after requesting the appropriate permissions. Hands are a collection of 26 posed hand joints that can be used to detect hand gestures and bring a whole new level of interaction to your Android XR apps:

    moving image demonstrates how hands bring a natural input method to your Android XR experience.

    Hands bring a natural input method to your Android XR experience.
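
    As a rough sketch of consuming that data (Hand, HandJointType, and the state flow shape follow the ARCore for Jetpack XR preview documentation, but treat the exact names as assumptions; session is an ARCore for Jetpack XR session created elsewhere):

    // Sketch only: observe the left hand and read the index fingertip pose.
    // Run inside a coroutine, since collect is a suspending call.
    val leftHand = Hand.left(session)
    leftHand?.state?.collect { handState ->
        val indexTip = handState.handJoints[HandJointType.INDEX_TIP]
        indexTip?.let { pose ->
            // Use pose.translation / pose.rotation to drive gesture detection.
        }
    }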

    For more guidance on developing apps for Android XR, check out our Android XR Fundamentals codelab, the updates to our Hello Android XR sample project, and a new version of JetStream with Android XR support.

    The Android XR Emulator has also received updates to stability, support for AMD GPUs, and is now fully integrated within the Android Studio UI.

    the Android XR Emulator in Android Studio

    The Android XR Emulator is now integrated in Android Studio

    Developers using Unity have already successfully created and ported existing games and apps to Android XR. Today, you can upgrade to the Pre-Release version 2 of the Unity OpenXR: Android XR package! This update adds many performance improvements such as support for Dynamic Refresh Rate, which optimizes your app’s performance and power consumption. Shaders made with Shader Graph now support SpaceWarp, making it easier to use SpaceWarp to reduce compute load on the device. Hand meshes are now exposed with occlusion, which enables realistic hand visualization.

    Check out Unity’s improved Mixed Reality template for Android XR, which now includes support for occlusion and persistent anchors.

    We recently launched Android XR Samples for Unity, which demonstrate capabilities on the Android XR platform such as hand tracking, plane tracking, face tracking, and passthrough.

    moving image of Google’s open-source Unity samples demonstrating platform features and showing how they’re implemented

    Google’s open-source Unity samples demonstrate platform features and show how they’re implemented

    Firebase AI Logic for Unity is now in public preview! This makes it easy for you to integrate gen AI into your apps, enabling the creation of AI-powered experiences with Gemini and Android XR. Firebase AI Logic fully supports Gemini’s capabilities, including multimodal input and output, and bi-directional streaming for immersive conversational interfaces. Built with production readiness in mind, Firebase AI Logic is integrated with core Firebase services like App Check, Remote Config, and Cloud Storage for enhanced security, configurability, and data management. Learn more about this on the Firebase blog or go straight to the Gemini API using Vertex AI in Firebase SDK documentation to get started.

    Continuing to build the future together

    Our commitment to open standards continues with the glTF Interactivity specification, in collaboration with the Khronos Group, which will be supported in glTF models rendered by Jetpack XR later this year. Models using the glTF Interactivity specification are self-contained interactive assets that can have many pre-programmed behaviors, like rotating objects on a button press or changing the color of a material over time.

    Android XR will be available first on Samsung’s Project Moohan, launching later this year. Soon after, our partners at XREAL will release the next Android XR device. Codenamed Project Aura, it’s a portable and tethered device that gives users access to their favorite Android apps, including those that have been built for XR. It will launch as a developer edition, specifically for you to begin creating and experimenting. The best news? With the familiar tools you use to build Android apps today, you can build for these devices too.

    product image of XREAL’s Project Aura against a nebulous black background

    XREAL’s Project Aura

    The Google Play Store is also getting ready for Android XR. It will list supported 2D Android apps on the Android XR Play Store when it launches later this year. If you are working on an Android XR differentiated app, you can get it ready for the big launch and be one of the first differentiated apps on the Android XR Play Store.

    And we know many of you are excited for the future of Android XR on glasses. We are shaping the developer experience now and will share more details on how you can participate later this year.

    To get started creating and developing for Android XR, check out developer.android.com/develop/xr where you will find all of the tools, libraries, and resources you need to work with the Android XR SDK. In particular, try out our samples and codelabs.

    We welcome your feedback, suggestions, and ideas as you’re helping shape Android XR. Your passion, expertise, and bold ideas are vital as we continue to develop Android XR together. We look forward to seeing your XR-differentiated apps when Android XR devices launch later this year!

    Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.



    Source link

  • What’s New in Jetpack Compose



    Posted by Nick Butcher – Product Manager

    At Google I/O 2025, we announced a host of features, performance, stability, libraries, and tools updates for Jetpack Compose, our recommended Android UI toolkit. With Compose you can build excellent apps that work across devices. Compose has matured a lot since it was first announced (at Google I/O 2019!) and we’re now seeing 60% of the top 1,000 apps in the Play Store, such as MAX and Google Drive, use and love it.

    New Features

    Since I/O last year, Compose Bill of Materials (BOM) version 2025.05.01 adds new features such as:

      • Autofill support that lets users automatically insert previously entered personal information into text fields.
      • Auto-sizing text to smoothly adapt text size to a parent container size (see the sketch below).
      • Visibility tracking for when you need high-performance information on a composable’s position in its root container, screen, or window.
      • Animate bounds modifier for beautiful automatic animations of a Composable’s position and size within a LookaheadScope.
      • Accessibility checks in tests that let you build a more accessible app UI through automated a11y testing.

    LookaheadScope {
        Box(
            Modifier
                .animateBounds(this@LookaheadScope)
                .width(if (inRow) 100.dp else 150.dp)
                .background(..)
                .border(..)
        )
    }
    

    moving image of animate bounds modifier in action
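
    Auto-sizing text, from the feature list above, is enabled through the new autoSize parameter on BasicText; here’s a minimal sketch, assuming the Compose 1.8 foundation API:

    import androidx.compose.foundation.text.BasicText
    import androidx.compose.foundation.text.TextAutoSize
    import androidx.compose.runtime.Composable
    import androidx.compose.ui.unit.sp

    @Composable
    fun StepCount(steps: Int) {
        BasicText(
            text = "$steps",
            // Scales the font between the bounds to fit the available width.
            autoSize = TextAutoSize.StepBased(minFontSize = 12.sp, maxFontSize = 48.sp),
        )
    }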

    For more details on these features, read What’s new in the Jetpack Compose April ’25 release and check out these talks from Google I/O:

    If you’re looking to try out new Compose functionality, the alpha BOM offers new features that we’re working on including:

      • Pausable Composition (see below)
      • Updates to LazyLayout prefetch
      • Context Menus
      • New modifiers: onFirstVisible, onVisibilityChanged, contentType
      • New Lint checks for frequently changing values and elements that should be remembered in composition

    Please try out the alpha features and provide feedback to help shape the future of Compose.

    Material Expressive

    At Google I/O, we unveiled Material Expressive, Material Design’s latest evolution that helps you make your products even more engaging and easier to use. It’s a comprehensive addition of new components, styles, motion and customization options that help you to build beautiful rich UIs. The Material3 library in the latest alpha BOM contains many of the new expressive components for you to try out.

    moving image of material expressive design example

    Learn more to start building with Material Expressive.

    Adaptive layouts library

    Developing adaptive apps across form factors including phones, foldables, tablets, desktop, cars and Android XR is now easier with the latest enhancements to the Compose adaptive layouts library. The stable 1.1 release adds support for predictive back gestures for smoother transitions and pane expansion for more flexible two pane layouts on larger screens. Furthermore, the 1.2 (alpha) release adds more flexibility for how panes are displayed, adding strategies for reflowing and levitating.

    moving image of compose adaptive layouts updates in the Google Play app

    Compose Adaptive Layouts Updates in the Google Play app
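
    The post doesn’t include code here, but a sketch of the two-pane pattern might look like the following (names are from the material3-adaptive-navigation artifact; ItemList and ItemDetail are hypothetical composables, and signatures should be checked against the 1.1 release):

    import androidx.compose.material3.adaptive.layout.AnimatedPane
    import androidx.compose.material3.adaptive.layout.ListDetailPaneScaffoldRole
    import androidx.compose.material3.adaptive.navigation.NavigableListDetailPaneScaffold
    import androidx.compose.material3.adaptive.navigation.rememberListDetailPaneScaffoldNavigator
    import androidx.compose.runtime.Composable
    import androidx.compose.runtime.rememberCoroutineScope
    import kotlinx.coroutines.launch

    // Sketch: a list-detail layout that adapts the pane count to the window
    // size and participates in predictive back.
    @Composable
    fun CatalogScreen() {
        val navigator = rememberListDetailPaneScaffoldNavigator<String>()
        val scope = rememberCoroutineScope()
        NavigableListDetailPaneScaffold(
            navigator = navigator,
            listPane = {
                AnimatedPane {
                    ItemList(onItemClick = { id ->
                        scope.launch {
                            navigator.navigateTo(ListDetailPaneScaffoldRole.Detail, id)
                        }
                    })
                }
            },
            detailPane = {
                AnimatedPane {
                    ItemDetail(id = navigator.currentDestination?.contentKey)
                }
            },
        )
    }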

    Learn more about building adaptive Android apps with Compose.

    Performance

    With each release of Jetpack Compose, we continue to prioritize performance improvements. The latest stable release includes significant rewrites and improvements to multiple sub-systems, including semantics, focus, and text optimizations. Best of all, these are available to you simply by upgrading your Compose dependency; no code changes are required.

    bar chart of internal benchmarks for performance run on a Pixel 3a device from January to May 2023 measured by jank rate

    Internal benchmark, run on a Pixel 3a

    We continue to work on further performance improvements, notable changes in the latest alpha BOM include:

      • Pausable Composition allows compositions to be paused, and their work split up over several frames.
      • Background text prefetch enables text layout caches to be pre-warmed on a background thread, enabling faster text layout.
      • LazyLayout prefetch improvements enabling lazy layouts to be smarter about how much content to prefetch, taking advantage of pausable composition.

    Together these improvements eliminate nearly all jank in an internal benchmark.

    Stability

    We’ve heard from you that upgrading your Compose dependency can be challenging, encountering bugs or behaviour changes that prevent you from staying on the latest version. We’ve invested significantly in improving the stability of Compose, working closely with the many Google app teams building with Compose to detect and prevent issues before they even make it to a release.

    Google apps develop against and release with snapshot builds of Compose; as such, Compose is tested against the hundreds of thousands of Google app tests and any Compose issues are immediately actioned by our team. We have recently invested in increasing the cadence of updating these snapshots and now update them daily from Compose tip-of-tree, which means we’re receiving feedback faster, and are able to resolve issues long before they reach a public release of the library.

    Jetpack Compose also relies on @Experimental annotations to mark APIs that are subject to change. We heard your feedback that some APIs have remained experimental for a long time, reducing your confidence in the stability of Compose. We have invested in stabilizing experimental APIs to provide you a more solid API surface, and reduced the number of experimental APIs by 32% in the last year.

    We have also heard that it can be hard to debug Compose crashes when your own code does not appear in the stack trace. In the latest alpha BOM, we have added a new opt-in feature to provide more diagnostic information. Note that this does not currently work with minified builds and comes at a performance cost, so we recommend only using this feature in debug builds.

    class App : Application() {
        override fun onCreate() {
            super.onCreate()
            // Enable only for debug flavor to avoid perf impact in release
            Composer.setDiagnosticStackTraceEnabled(BuildConfig.DEBUG)
        }
    }
    

    Libraries

    We know that to build great apps, you need Compose integration in the libraries that interact with your app’s UI.

    A core library that powers any Compose app is Navigation. You told us that you often encountered limitations when managing state hoisting and directly manipulating the back stack with the current Compose Navigation solution. We went back to the drawing board and completely reimagined how a navigation library should integrate with the Compose mental model. We’re excited to introduce Navigation 3, a new artifact designed to empower you with greater control and simplify complex navigation flows.
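
    The full announcement covers the API in depth; as a rough sketch of the model, the back stack is plain snapshot state that you own (NavDisplay and NavEntry follow the navigation3 alpha announcement; HomeScreen and DetailsScreen are hypothetical):

    // Sketch only: names follow the navigation3 alpha announcement.
    data object Home
    data class Details(val id: String)

    @Composable
    fun AppNav() {
        // The back stack is just observable state that you own and mutate.
        val backStack = remember { mutableStateListOf<Any>(Home) }
        NavDisplay(
            backStack = backStack,
            onBack = { backStack.removeLastOrNull() },
            entryProvider = { key ->
                when (key) {
                    is Home -> NavEntry(key) {
                        HomeScreen(onOpen = { id -> backStack.add(Details(id)) })
                    }
                    is Details -> NavEntry(key) { DetailsScreen(key.id) }
                    else -> error("Unknown key: $key")
                }
            },
        )
    }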

    We’re also investing in Compose support for CameraX and Media3, making it easier to integrate camera capture and video playback into your UI with Compose idiomatic components.

    @Composable
    private fun VideoPlayer(
        player: Player?, // from media3
        modifier: Modifier = Modifier
    ) {
        Box(modifier) {
            PlayerSurface(player) // from media3-ui-compose
            player?.let {
                // custom play-pause button UI
                val playPauseButtonState = rememberPlayPauseButtonState(it) // from media3-ui-compose
                MyPlayPauseButton(playPauseButtonState, Modifier.align(BottomEnd).padding(16.dp))
            }
        }
    }
    

    To learn more, see the media3 Compose documentation and the CameraX samples.

    Tools

    We continue to improve the Android Studio tools for creating Compose UIs. The latest Narwhal canary includes:

      • Resizable Previews instantly show you how your Compose UI adapts to different window sizes
      • Preview navigation improvements using clickable names and components
      • Studio Labs 🧪: Compose preview generation with Gemini – quickly generate a preview
      • Studio Labs 🧪: Transform UI with Gemini – change your UI with natural language, directly from preview
      • Studio Labs 🧪: Image attachment in Gemini – generate Compose code from images

    For more information read What’s new in Android development tools.

    moving image of resizable preview in Jetpack Compose

    Resizable Preview

    New Compose Lint checks

    The Compose alpha BOM introduces two new annotations and associated lint checks to help you write correct and performant Compose code. The @FrequentlyChangingValue annotation and FrequentlyChangedStateReadInComposition lint check warn in situations where function calls or property reads in composition might cause frequent recompositions, for example, when reading scroll position values or animating values. The @RememberInComposition annotation and RememberInCompositionDetector lint check warn in situations where constructors, functions, and property getters are called directly inside composition (e.g. the TextFieldState constructor) without being remembered.
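
    For example, the second check would flag the first declaration in this sketch (rememberTextFieldState is the corresponding remembering helper in Compose foundation):

    import androidx.compose.foundation.text.BasicTextField
    import androidx.compose.foundation.text.input.TextFieldState
    import androidx.compose.foundation.text.input.rememberTextFieldState
    import androidx.compose.runtime.Composable

    @Composable
    fun SearchField() {
        // Flagged by the lint check: constructing TextFieldState directly in
        // composition recreates it on every recomposition, losing user input.
        val badState = TextFieldState()

        // Correct: remembered, so the state survives recomposition.
        val goodState = rememberTextFieldState()

        BasicTextField(state = goodState)
    }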

    Happy Composing

    We continue to invest in providing the features, performance, stability, libraries and tools that you need to build excellent apps. We value your input so please share feedback on our latest updates or what you’d like to see next.

    Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.



    Source link

  • Why Your Audience Isn’t Listening Anymore (And What You Can Do About It)



    Opinions expressed by Entrepreneur contributors are their own.

    Every day, we’re bombarded with noise — emails, ads, pop-ups, sponsored posts and DMs from strangers who want to “hop on a quick call.” It’s relentless. And people are tired.

    Marketers often call this “audience fatigue,” blaming content overload. But after working with hundreds of leaders to build authentic authority, I’ve come to see it differently: it’s not just content overload — it’s trust fatigue.

    Trust fatigue is what happens when people stop believing. When every message feels like a sales pitch in disguise, people disengage — not just from brands, but from leaders who once earned their respect.

    So, in a world where trust is slipping and skepticism is rising, how do you become someone worth listening to?

    Trust moves from institutions to individuals

    One study found that 79% of people trust their employer more than the media, the government, or nonprofits. That’s huge.

    It means trust is no longer institutional — it’s personal. People don’t want another faceless brand talking at them. They want a real person who shows up with clarity, consistency and value.

    That’s your opportunity. If you want to lead, you need to earn trust. And the good news? It starts with three moves.

    Related: Trust Is a Business Metric Now. Here’s How Leaders Can Earn It.

    1. Be discoverable

    Let’s get practical. Google yourself — what comes up?

    If it’s outdated bios, scattered links, or worse — nothing — you’ve got work to do. Your digital presence is your first impression. When someone wants to vet you, they’re not asking for your resume. They’re looking you up.

    A strong LinkedIn profile is the first step. Make it sound like a leader, not a job seeker. Then, create a personal website that reflects who you are, what you stand for, and the people you serve. This is your platform.

    Next, give people a reason to trust you: thought leadership content — articles, interviews, podcasts — that showcase your ideas. If I can’t find you, I can’t follow you.

    2. Be credible

    The internet is full of opinions. What cuts through is proof.

    Credibility comes from evidence: media features, speaking gigs, client testimonials, books and bylines. These aren’t vanity metrics — they’re trust signals. They tell your audience: this person has earned a platform.

    You don’t need to headline a TEDx talk tomorrow. Start small. Write a piece for your industry publication. Share a client win. Build momentum with real, earned signals of authority.

    And the data backs this up. A Gallup/Knight Foundation study found that nearly 90% of Americans follow at least one public figure for news or insight, more than brands, and sometimes more than the media itself.

    3. Be human

    Here’s where many leaders go wrong: they forget that trust isn’t just about what you say — it’s how you make people feel.

    You can have the slickest website and the most polished profile, but if your tone feels robotic or your content sounds like corporate filler, people will scroll right past.

    You don’t need to spill your life story, but you do need to sound like a real person. Share lessons you’ve learned, not just what you’re selling. Tell stories. Speak plainly. Be generous with your insights.

    I once shared a story about a career setback on stage, unsure of how it would land. It ended up being the thing people remembered — and the reason they reached out. Vulnerability built more trust than any polished pitch ever could.

    Related: How Talking Less and Listening More Builds Your Business

    Trust is the strategy — authority is the reward

    Many leaders think, “If I’m good at what I do, people will notice.”

    They won’t.

    In a world overflowing with content and short on attention, visibility matters. Credibility matters. And most of all, connection matters. You build trust gradually — through how you show up, what you say and how well it resonates with what your audience actually needs.

    So here’s where to start:

    • Audit your online presence as if you’re a stranger seeing yourself for the first time.
    • Share stories in your writing and speaking that make people feel something real.
    • Post something this week that reflects what you believe, not what you’re trying to sell.

    Lead with service. Speak with clarity. Build trust by showing up as yourself.

    Authority doesn’t come from shouting the loudest. It comes from being the one people believe.




    Source link

  • What’s new in Watch Faces



    Posted by Garan Jenkin – Developer Relations Engineer

    Wear OS has a thriving watch face ecosystem featuring a variety of designs that also aims to minimize battery impact. Developers have embraced the simplicity of creating watch faces using Watch Face Format – in the last year, the number of published watch faces using Watch Face Format has grown by over 180%*.

    Today, we’re continuing our investment and announcing version 4 of the Watch Face Format, available as part of Wear OS 6. These updates allow developers to express even greater levels of creativity through the new features we’ve added. And we’re supporting marketplaces, which gives flexibility and control to developers and more choice for users.

    In this blog post we’ll cover key new features; check out the documentation for more details of changes introduced in recent versions.

    Supporting marketplaces with Watch Face Push

    We’re also announcing a completely new API, the Watch Face Push API, aimed at developers who want to create their own watch face marketplaces.

    Watch Face Push, available on devices running Wear OS 6 and above, works exclusively with watch faces built using the Watch Face Format.

    We’ve partnered with well-known watch face developers – including Facer, TIMEFLIK, WatchMaker, Pujie, and Recreative – in designing this new API. We’re excited that all of these developers will be bringing their unique watch face experiences to Wear OS 6 using Watch Face Push.

    Three mobile devices representing watch face marketplace apps for watches running Wear OS 6

    From left to right, Facer, Recreative and TIMEFLIK watch faces have been developing marketplace apps to work with watches running Wear OS 6.

    Watch faces managed and deployed using Watch Face Push are all written using Watch Face Format. Developers publish these watch faces in the same way as publishing through Google Play, though there are some additional checks the developer must make which are described in the Watch Face Push guidance.

    A flow diagram demonstrating the flow of information from Cloud-based storage to the user's phone where the app is installed, then transferred to be installed on a wearable device using the Wear OS App via the Watch Face Push API

    The Watch Face Push API covers only the watch part of this typical marketplace system diagram – as the app developer, you have control and responsibility for the phone app and cloud components, as well as for building the Wear OS app using Watch Face Push. You’re also in control of the phone-watch communications, for which we recommend using the Data Layer APIs.

    Adding Watch Face Push to your project

    To start using Watch Face Push on Wear OS 6, include the following dependency in your Wear OS app:

    // Ensure latest version is used by checking the repository
    implementation("androidx.wear.watchface:watchface-push:1.3.0-alpha07")
    

    Declare the necessary permission in your AndroidManifest.xml:

    <uses-permission android:name="com.google.wear.permission.PUSH_WATCH_FACES" />
    

    Obtain a Watch Face Push client:

    val manager = WatchFacePushManagerFactory.createWatchFacePushManager(context)
    

    You’re now ready to start using the Watch Face Push API, for example to list the watch faces you have already installed, or add a new watch face:

    // List existing watch faces, installed by this app
    val listResponse = manager.listWatchFaces()
    
    // Add a watch face
    manager.addWatchFace(watchFaceFileDescriptor, validationToken)
    

    Understanding Watch Face Push

    While the basics of the Watch Face Push API are easy to understand and access through the WatchFacePushManager interface, it’s important to consider several other factors when working with the API in practice to build an effective marketplace app, including:

      • Setting active watch faces – Through an additional permission, the app can set the active watch face. Learn about how to integrate this feature, as well as how to handle the different permission scenarios.

    To learn more about using Watch Face Push, see the guidance and reference documentation.

    Updates to Watch Face Format

    Photos

    Available from Watch Face Format v4

    The new Photos element allows the watch face to contain user-selectable photos. The element supports both individual photos and a gallery of photos. For a gallery of photos, developers can choose whether the photos advance automatically or when the user taps the watch face.

    a wearable device and small screen mobile device side by side demonstrating how a user may configure photos for the watch face through the Companion app on the mobile device

    Configuring photos through the watch Companion app

    The user is able to select the photos of their choice through the companion app, making this a great way to include true personalization in your watch face. To use this feature, first add the necessary configuration:

    <UserConfigurations>
      <PhotosConfiguration id="myPhoto" configType="SINGLE"/>
    </UserConfigurations>
    

    Then use the Photos element within any PartImage, in the same way as you would for an Image element:

    <PartImage ...>
      <Photos source="[CONFIGURATION.myPhoto]"
              defaultImageResource="placeholder_photo"/>
    </PartImage>
    

    For details on how to support multiple photos, and how to configure the different change behaviors, refer to the Photos section of the guidance and reference, as well as the GitHub samples.

    Transitions

    Available from Watch Face Format v4

    Watch Face Format now supports transitions when exiting and entering ambient mode.

    moving image demonstrating an overshoot effect adjusting the time on a watch face to reveal the seconds digit

    State transition animation: Example using an overshoot effect in revealing the seconds digits

    This is achieved through the existing Variant tag. For example, the hours and minutes in the above watch face are animated as follows:

    <DigitalClock ...>
      <Variant mode="AMBIENT" target="x" value="100" interpolation="OVERSHOOT" />
    
       <!-- Rest of "hh:mm" clock definition here -->
    </DigitalClock>
    

    By default, the animation takes the full extent of allowed time for the transition. The new interpolation attribute controls the animation effect – in this case the use of OVERSHOOT adds a playful experience.

    The seconds are implemented in a separate DigitalClock element, which shows the use of the new duration attribute:

    <DigitalClock ...>
      <Variant mode="AMBIENT" target="alpha" value="0" duration="0.5"/>
       <!-- Rest of "ss" clock definition here -->
    </DigitalClock>
    

    The duration attribute takes a value between 0.0 and 1.0, with 1.0 representing the full extent of the allowed time. In this example, by using a value of 0.5, the seconds animation is quicker – taking half the allowed time, in comparison to the hours and minutes, which take the entire transition period.

    For more details on using transitions, see the guidance documentation, as well as the reference documentation for Variant.

    Color Transforms

    Available from Watch Face Format v4

    We’ve extended the usefulness of the Transform element by allowing color to be transformed on the majority of elements where it is an attribute, and also allowing tintColor to be transformed on Group and Part* elements such as PartDraw and PartText.

    The main exceptions to this addition are the clock elements, DigitalClock and AnalogClock, and also ComplicationSlot, which do not currently support Transform.

    In addition to extending the list of transformable attributes to include colors, we’ve also added a handful of useful functions for manipulating color:

    To see these in action, let’s consider an example.

    The Weather data source provides the current UV index through [WEATHER.UV_INDEX]. When representing the UV index, these values are typically also assigned a color:

    chart of UV index values and the colors typically used to represent them

    We want to represent this information as an Arc, not only showing the value, but also using the appropriate color. We can achieve this as follows:

    <Arc centerX="0" centerY="0" height="420" width="420"
      startAngle="165" endAngle="165" direction="COUNTER_CLOCKWISE">
      <Transform target="endAngle"
        value="165 - 40 * (clamp(11, 0.0, 11.0) / 11.0)" />
      <Stroke thickness="20" color="#ffffff" cap="ROUND">
        <Transform target="color"
          value="extractColorFromWeightedColors(#97d700 #FCE300 #ff8200 #f65058 #9461c9, 3 3 2 3 1, false, clamp([WEATHER.UV_INDEX] + 0.5, 0.0, 12.0) / 12.0)" />
      </Stroke>
    </Arc>
    

    Let’s break this down:

      • The first Transform restricts the UV index to the range 0.0 to 11.0 and adjusts the sweep of the Arc according to that value.
      • The second Transform uses the new extractColorFromWeightedColors function.
          • The first argument is our list of colors
          • The second argument is a list of weights – you can see from the chart above that green covers 3 values, whereas orange only covers 2, so we use weights to represent this.
          • The third argument is whether or not to interpolate the color values. In this case we want to stick strictly to the color convention for UV index, so this is false.
          • Finally in the fourth argument we coerce the UV value into the range 0.0 to 1.0, which is used as an index into our weighted colors.

    The result looks like this:

    side by side quadrants of watch face examples showing using the new color functions in applying color transforms to a Stroke in an Arc

    Using the new color functions in applying color transforms to a Stroke in an Arc.

    As well as being able to provide raw colors and weights to these functions, they can also be used with values from complications, such as HR, temperature or steps goal. For example, to use the color range specified in a goal complication:

    <Transform target="color"
        value="extractColorFromColors(
            [COMPLICATION.GOAL_PROGRESS_COLORS],
            [COMPLICATION.GOAL_PROGRESS_COLOR_INTERPOLATE],
            [COMPLICATION.GOAL_PROGRESS_VALUE] /    
                [COMPLICATION.GOAL_PROGRESS_TARGET_VALUE]
    )"/>
    

    Introducing the Reference element

    Available from Watch Face Format v4

    The new Reference element allows you to refer to any transformable attribute from one part of your watch face scene in other parts of the scene tree.

    In our UV index example above, we’d also like the text labels to use the same color scheme.

    We could perform the same color transform calculation as on our Arc, using [WEATHER.UV_INDEX], but this is duplicative work which could lead to inconsistencies, for example if we change the exact color hues in one place but not the other.

    Returning to the Arc definition, let’s create a Reference to the color:

    <Arc centerX="0" centerY="0" height="420" width="420"
      startAngle="165" endAngle="165" direction="COUNTER_CLOCKWISE">
      <Transform target="endAngle"
        value="165 - 40 * (clamp(11, 0.0, 11.0) / 11.0)" />
      <Stroke thickness="20" color="#ffffff" cap="ROUND">
        <Reference source="color" name="uv_color" defaultValue="#ffffff" />
        <Transform target="color"
          value="extractColorFromWeightedColors(#97d700 #FCE300 #ff8200 #f65058 #9461c9, 3 3 2 3 1, false, clamp([WEATHER.UV_INDEX] + 0.5, 0.0, 12.0) / 12.0)" />
      </Stroke>
    </Arc>
    

    The color of the Arc is calculated from the relatively complex extractColorFromWeightedColors function. To avoid repeating this elsewhere in our watch face, we have added a Reference element, which takes as its source the Stroke color.

    Let’s now look at how we can consume this value in a PartText elsewhere in the watch face. We gave the Reference the name uv_color, so we can simply refer to this in any expression:

    <PartText x="0" y="225" width="450" height="225">
      <TextCircular centerX="225" centerY="0" width="420" height="420"
        startAngle="120" endAngle="90"
        align="START" direction="COUNTER_CLOCKWISE">
        <Font family="SYNC_TO_DEVICE" size="24">
          <Transform target="color" value="[REFERENCE.uv_color]" />
          <Template>%d<Parameter expression="[WEATHER.UV_INDEX]" /></Template>
        </Font>
      </TextCircular>
    </PartText>
    <!-- Similar PartText here for the "UV:" label -->
    

    As a result, the color of the Arc and the UV numeric value are now coordinated:

    side by side quadrants of watch face examples showing Coordinating colors across elements using the Reference element

    Coordinating colors across elements using the Reference element

    For more details on how to use the Reference element, refer to the Reference guidance.

    Text autosizing

    Available from Watch Face Format v3

    Sometimes the exact length of the text to be shown on the watch face can vary, and as a developer you want the text you display to be both legible and complete.

    Auto-sizing text can help solve this problem, and can be enabled through the isAutoSize attribute introduced to the Text element:

    <Text align="CENTER" isAutoSize="true">
    

    Having set this attribute, text will then automatically fit the available space, starting at the maximum size specified in your Font element, and with a minimum size of 12.

    As an example, step count could range from tens or hundreds through to many thousands, and the new isAutoSize attribute enables best use of the available space for every possible value:

    side by side examples of text sizing adjustments on watch face using isAutosize

    Making the best use of the available text space through isAutoSize

    For more details on isAutoSize, see the Text reference.

    Android Studio support

    For developers working in Android Studio, we’ve added support to make working with Watch Face Format easier, including:

      • Run configuration support
      • Auto-complete and resource reference
      • Lint checking

    This is available from Android Studio version 2025.1.1 Canary 10.

    Learn More

    To learn more about building watch faces, please take a look at the following resources:

    We’ve also recently launched a codelab for Watch Face Format and have updated samples on GitHub to showcase new features. The issue tracker is available for providing feedback.

    We’re excited to see the watch face experiences that you create and share!

    Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.

    * Google Play data for the period 2024-03-24 to 2025-03-23



    Source link

  • What’s new in Wear OS 6



    Posted by Chiara Chiappini – Developer Relations Engineer

    This year, we’re excited to introduce Wear OS 6: the most power-efficient and expressive version of Wear OS yet.

    Wear OS 6 introduces the new design system we call Material 3 Expressive. It features a major refresh with visual and motion components designed to give users an experience with more personalization. The new design offers a great level of expression to meet user demand for experiences that are modern, relevant, and distinct. Material 3 Expressive is coming to Wear OS, Android, and all your favorite Google apps on these devices later this year.

    The good news is that you don’t need to compromise battery for beauty: thanks to Wear OS platform optimizations, watches updating from Wear OS 5 to Wear OS 6 can see up to 10% improvement in battery life.1

    Wear OS 6 developer preview

    Today we’re releasing the Developer Preview of Wear OS 6, the next version of Google’s smartwatch platform, based on Android 16.

    Wear OS 6 brings a number of developer-facing changes, such as refining the always-on display experience. Check out what’s changed and try the new Wear OS 6 emulator to test your app for compatibility with the new platform version.

    Material 3 Expressive on Wear OS

    moving image displays examples of Material 3 Expressive on Wear OS experiences

    Some examples of Material 3 Expressive on Wear OS experiences

    Material 3 Expressive for the watch is fully optimized for the round display. We recommend developers embrace the new design system in their apps and tiles. To help you adopt Material 3 Expressive in your app, we have begun releasing new design guidance for Wear OS, along with corresponding Figma design kits.

    As a developer, you can access Material 3 Expressive on Wear OS using two new Jetpack libraries: Wear Compose Material 3 and Wear Protolayout Material 3.

    These two libraries provide implementations of the component catalog that adhere to the Material 3 Expressive design language.
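
    In Gradle terms, adopting them might look like the following; the artifact coordinates are the published ones, but the version numbers below are placeholders, so check the latest release notes:

    // build.gradle.kts of the watch app module
    dependencies {
        // Material 3 Expressive components for Wear OS apps
        implementation("androidx.wear.compose:compose-material3:1.5.0-beta01")
        // Material 3 Expressive components for tiles
        implementation("androidx.wear.protolayout:protolayout-material3:1.3.0-beta01")
    }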

    Make it personal with richer color schemes using themes

    moving image showing how dynamic color theme updates colors of apps and Tiles

    Dynamic color theme updates colors of apps and Tiles

    The Wear Compose Material 3 and Wear Protolayout Material 3 libraries provide updated and extended color schemes, typography, and shapes to bring both depth and variety to your designs. Additionally, your tiles now align with the system font by default (on Wear OS 6+ devices), offering a more cohesive experience on the watch.

    Both libraries introduce dynamic color theming, which automatically generates a color theme for your app or tile to match the colors of the watch face of Pixel watches.

    Make it more glanceable with new tile components

    Tiles now support a new framework and a set of components that embrace the watch’s circular form factor. These components make tiles more consistent and glanceable, so users can more easily take swift action on the information included in them.

    We’ve introduced a 3-slot tile layout to improve visual consistency in the Tiles carousel. This layout includes a title slot, a main content slot, and a bottom slot, designed to work across a range of different screen sizes:

    moving image showing some examples of Tiles with the 3-slot tile layout

    Some examples of Tiles with the 3-slot tile layout.

    Highlight user actions and key information with components optimized for round screen

    The new Wear OS Material 3 components automatically adapt to larger screen sizes, building on the Large Display support added as part of Wear OS 5. Additionally, components such as Buttons and Lists support shape morphing in apps.

    The following sections highlight some of the most exciting changes to these components.

    Embrace the round screen with the Edge Hugging Button

    We introduced a new EdgeButton for apps and tiles with an iconic design pattern that maximizes the space within the circular form factor, hugs the edge of the screen, and comes in 4 standard sizes.

    moving image of a screenshot representing an EdgeButton in a scrollable screen.

    Screenshot representing an EdgeButton in a scrollable screen.
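
    A minimal sketch of using it in an app (EdgeButton and EdgeButtonSize are from the Wear Compose Material 3 library; the button is typically passed to ScreenScaffold’s edge-button slot so scrolling content makes room for it):

    import androidx.compose.runtime.Composable
    import androidx.wear.compose.material3.EdgeButton
    import androidx.wear.compose.material3.EdgeButtonSize
    import androidx.wear.compose.material3.Text

    @Composable
    fun ConfirmButton(onConfirm: () -> Unit) {
        // Hugs the bottom edge of the round display; one of the 4 standard sizes.
        EdgeButton(
            onClick = onConfirm,
            buttonSize = EdgeButtonSize.Medium,
        ) {
            Text("Confirm")
        }
    }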

    Fluid navigation through lists using new indicators

    The new TransformingLazyColumn from the Foundation library makes expressive motion easy, with content that fluidly traces the edges of the display. Developers can customize the collapsing behavior of the list when scrolling to the top, the bottom, and both sides of the screen. For example, components like Cards can scale down as they get closer to the top of the screen.

    moving image showing a TransformingLazyColumn with content that collapses and changes in size when approaching the edge of the screens.

    TransformingLazyColumn allows content to collapse and change in size when approaching the edge of the screens
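
    A minimal sketch of a transforming list (TransformingLazyColumn and its state come from Wear Compose Foundation, ScreenScaffold and Text from Wear Compose Material 3; check the beta release notes for exact signatures):

    import androidx.compose.runtime.Composable
    import androidx.wear.compose.foundation.lazy.TransformingLazyColumn
    import androidx.wear.compose.foundation.lazy.rememberTransformingLazyColumnState
    import androidx.wear.compose.material3.ScreenScaffold
    import androidx.wear.compose.material3.Text

    @Composable
    fun ItemsScreen() {
        val listState = rememberTransformingLazyColumnState()
        // ScreenScaffold hooks the list up to the scroll indicator.
        ScreenScaffold(scrollState = listState) { contentPadding ->
            TransformingLazyColumn(
                state = listState,
                contentPadding = contentPadding,
            ) {
                items(20) { index ->
                    // Items can collapse and scale near the screen edges.
                    Text("Item $index")
                }
            }
        }
    }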

    Material 3 Expressive also includes a ScrollIndicator that features a new visual and motion design to make it easier for users to visualize their progress through a list. The ScrollIndicator is displayed by default when you use a TransformingLazyColumn and ScreenScaffold.

    moving image showing side by side examples of ScrollIndicator in action

    ScrollIndicator

    Lastly, you can now use segments with the new ProgressIndicator, which is now available as a full-screen component for apps and as a small-size component for both apps and tiles.

    moving image  showing a full-screen ProgressIndicator

    Example of a full-screen ProgressIndicator

    To learn more about the new features and see the full list of updates, see the release notes of the latest beta release of the Wear Compose and Wear Protolayout libraries. Check out the migration guidance for apps and tiles on how to upgrade your existing apps, or try one of our codelabs if you want to start developing using Material 3 Expressive design.

    Watch Faces

    With Wear OS 6 we are launching updates for watch face developers:

      • New options for customizing the appearance of your watch face using version 4 of Watch Face Format, such as animated state transitions from ambient to interactive and photo watch faces.
      • A new API for building watch face marketplaces.

    Learn more about what’s new in Watch Face updates.

    Look for more information about the general availability of Wear OS 6 later this year.

    Library updates

    ProtoLayout

    Since our last major release, we’ve improved the capabilities and the developer experience of the Tiles and ProtoLayout libraries to address feedback we received from developers.

    The example below shows how to display a layout with text on a Tile using these enhancements:

    // returns a LayoutElement for use in onTileRequest()
    materialScope(context, requestParams.deviceConfiguration) {
        primaryLayout(
            mainSlot = {
                text(
                    text = "Hello, World!".layoutString,
                    typography = BODY_LARGE,
                )
            }
        )
    }
    

    For more information, see the migration instructions.

    Credential Manager for Wear OS

    The CredentialManager API is now available on Wear OS, starting with Google Pixel Watch devices running Wear OS 5.1. It introduces passkeys to Wear OS with a platform-standard authentication UI that is consistent with the experience on mobile.

    The Credential Manager Jetpack library provides developers with a unified API that simplifies and centralizes their authentication implementation. Developers with an existing implementation on another form factor can use the same CredentialManager code, and most of the same supporting code to fulfill their Wear OS authentication workflow.

    Credential Manager provides integration points for passkeys, passwords, and Sign in With Google, while also allowing you to keep your other authentication solutions as backups.
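
    A condensed sketch of that shared flow (these are the same androidx.credentials calls used on phones; requestJson here stands in for your server’s WebAuthn challenge):

    import android.content.Context
    import androidx.credentials.CredentialManager
    import androidx.credentials.GetCredentialRequest
    import androidx.credentials.GetPasswordOption
    import androidx.credentials.GetPublicKeyCredentialOption
    import androidx.credentials.exceptions.GetCredentialException

    suspend fun signIn(context: Context, requestJson: String) {
        val credentialManager = CredentialManager.create(context)
        val request = GetCredentialRequest(
            credentialOptions = listOf(
                GetPublicKeyCredentialOption(requestJson = requestJson), // passkeys
                GetPasswordOption(),                                     // saved passwords
            )
        )
        try {
            val response = credentialManager.getCredential(context, request)
            // Hand response.credential to your existing sign-in logic.
        } catch (e: GetCredentialException) {
            // Fall back to one of your backup authentication methods.
        }
    }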

    Users will benefit from a consistent, platform-standard authentication UI, the introduction of passkeys and other passwordless authentication methods, and the ability to authenticate without their phone nearby.

    Check out the Authentication on Wear OS guidance to learn more.

    Richer Wear Media Controls

    New media controls for a Podcast

    New media controls for a Podcast

    Devices that run Wear OS 5.1 or later support enhanced media controls. Users who listen to media content on phones and watches can now benefit from the following new media control features on their watch:

      • They can fast-forward and rewind while listening to podcasts.
      • They can access the playlist and controls such as shuffle, like, and repeat through a new menu.

    Developers with an existing implementation of action buttons and playlist can benefit from this feature without additional effort. Check out how users will get more controls from your media app on a Google Pixel Watch device.

    Start building for Wear OS 6 now

    With these updates, there’s never been a better time to develop an app on Wear OS. These technical resources are a great place to learn how to get started.

    Earlier this year, we expanded our smartwatch offerings with Galaxy Watch for Kids, a unique, phone-free experience designed specifically for children. This launch gives families a new way to stay connected, allowing children to explore Wear OS independently with a dedicated smartwatch. Consult our developer guidance to create a Wear OS app for kids.

    We’re looking forward to seeing the experiences that you build on Wear OS!

    Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.

    1 Actual battery performance varies.



    Source link

  • KaruQ Actually Makes Math Fun As You Defeat Ghosts With Calculations



    Ghosts with numbers will appear on the screen; you can then defeat a ghost by hitting it with a flame that has the same number. But there’s more than meets the eye.

    Merging numbers multiplies them while splitting divides, allowing you to create a variety of numbers to take home victory against the scary ghosts. Adding a matchstick will give you 1.

    You can enjoy more than 190 different puzzles. There’s also a challenging daily puzzle with one new problem to tackle each day. As another fun plus, you can even create your own puzzles and share them with others using a QR code.

    KaruQ is a completely free download now on the App Store for the iPhone and all iPads.

    You could probably guess that I’m not a huge math fan, but playing KaruQ is an enjoyable way to test your skills while defeating ghosts. While the premise sounds easy and perfect for any age, you’ll definitely be challenged to find the right number to make short work of the enemy.



    Source link

  • Announcing Kotlin Multiplatform Shared Module Template



    Posted by Ben Trengrove – Developer Relations Engineer, Matt Dyor – Product Manager

    To empower Android developers, we’re excited to announce Android Studio’s new Kotlin Multiplatform (KMP) Shared Module Template. This template was specifically designed to allow developers to use a single codebase and apply business logic across platforms. More specifically, developers will be able to add shared modules to existing Android apps and share the business logic across their Android and iOS applications.

    This makes it easier for Android developers to craft, maintain, and most importantly, own the business logic. The KMP Shared Module Template is available within Android Studio when you create a new module within a project.

    a screen shot of the new module tab in Android Studio

    Shared Module Templates are found under the New Module tab

    A single code base for business logic

    Most developers have grown accustomed to maintaining different code bases, platform to platform. In the past, whenever there’s an update to the business logic, it must be carefully updated in each codebase. But with the KMP Shared Module Template:

      • Developers can write once and publish the business logic to wherever they need it.
      • Engineering teams can do more faster.
      • User experiences are more consistent across the entire audience, regardless of platform or form factor.
      • Releases are better coordinated and launched with fewer errors.

    Customers and developer teams who adopt KMP Shared Module Templates should expect to achieve greater ROI from mobile teams who can turn their attention towards delighting their users more and worrying about inconsistent code less.

    KMP enthusiasm

    The Android developer community remains very excited about KMP, especially after Google I/O 2024 where Google announced official support for shared logic across Android and iOS. We have seen continued momentum and enthusiasm from the community. For example, there are now over 1,500 KMP libraries listed on JetBrains’ klibs.io.

    Our customers are excited because KMP has made Android developers more productive. Consistently, Android developers have said that they want solutions that allow them to share code more easily and they want tools which boost productivity. This is why we recommend KMP; KMP simultaneously delivers a great experience for Android users while boosting ROI for the app makers. The KMP Shared Module Template is the latest step towards a developer ecosystem where user experience is consistent and applications are updated seamlessly.

    Large scale KMP adoptions

    This KMP Shared Module Template is new, but KMP more broadly is a maturing technology with several large-scale migrations underway. In fact, KMP has matured enough to support mission critical applications at Google. Google Docs, for example, is now running KMP in production on iOS with runtime performance on par or better than before. Beyond Google, Stone’s 130 mobile developers are sharing over 50% of their code, allowing existing mobile teams to ship features approximately 40% faster to both Android and iOS.

    KMP was designed for Android development

    As always, we’ve designed the Shared Module Template with the needs of Android developer teams in mind. Making the KMP Shared Module Template part of the native Android Studio experience allows developers to efficiently add a shared module to an existing Android application and immediately start building shared business logic that leverages several KMP-ready Jetpack libraries including Room, SQLite, and DataStore to name just a few.
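
    As a small sketch of what shared logic in such a module can look like (the expect/actual split is the standard KMP pattern; the formatter class itself is hypothetical):

    // shared/src/commonMain/kotlin/PriceFormatter.kt
    // Business logic written once, consumed by the Android and iOS apps.
    class PriceFormatter {
        fun format(cents: Long, currencySymbol: String): String {
            val units = cents / 100
            val remainder = (cents % 100).toString().padStart(2, '0')
            return "$currencySymbol$units.$remainder"
        }
    }

    // shared/src/commonMain/kotlin/Platform.kt
    expect fun platformName(): String

    // shared/src/androidMain/kotlin/Platform.android.kt
    actual fun platformName(): String = "Android"

    // shared/src/iosMain/kotlin/Platform.ios.kt
    actual fun platformName(): String = "iOS"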

    Come check it out at KotlinConf

    Releasing Android Studio’s KMP Shared Module Template marks a significant step toward empowering Android development teams to innovate faster, to efficiently manage business logic, and to build high-quality applications with greater confidence. It means that Android developers can be responsible for the code that drives the business logic for every app across Android and iOS. We’re excited to bring the Shared Module Template to KotlinConf in Copenhagen, May 21-23.

    KotlinConf 2025, Copenhagen, Denmark: Workshops May 21, Conference May 22-23

    Get started with KMP Shared Module Template

    To get started, you’ll need the latest version of Android Studio. In your Android project, click “File”, then “New”, then “New Module”, and select “Kotlin Multiplatform Shared Module”; you are then ready to add a KMP shared module to your Android app.
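    For orientation, the build file of a generated shared module looks roughly like the sketch below. This is an approximation rather than the template’s exact output; the version catalog aliases and the namespace are assumptions.

    ```kotlin
    // shared/build.gradle.kts: the general shape of a KMP shared module's
    // build file. The exact content generated by the template may differ.
    plugins {
        alias(libs.plugins.kotlinMultiplatform)  // assumed catalog alias
        alias(libs.plugins.androidLibrary)       // assumed catalog alias
    }

    kotlin {
        androidTarget()

        // iOS targets produce a framework that the Xcode project consumes.
        listOf(iosX64(), iosArm64(), iosSimulatorArm64()).forEach { target ->
            target.binaries.framework {
                baseName = "shared"
            }
        }

        sourceSets {
            commonMain.dependencies {
                // Dependencies for shared business logic go here.
            }
        }
    }

    android {
        namespace = "com.example.shared"  // hypothetical namespace
        compileSdk = 35
    }
    ```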

    We appreciate any feedback on things you like or features you would like to see. If you find a bug, please report the issue. Remember to also follow us on X, LinkedIn, Blog, or YouTube for more Android development updates!



    Source link

  • Skims Boss Emma Grede: Here Are My Tips for Business Success



    Emma Grede, 42, is a founding partner and chief product officer at Skims, a shapewear brand worth $4 billion. She also serves as the co-founder and CEO of apparel brand Good American, which recorded $200 million in sales in 2022 (and $1 million on its first day live on October 18, 2016, marking the biggest denim launch in history). She’s worth a reported $390 million.

    She’s also a high school dropout raised by a single mother in East London who began working a paper route at 12 years old to earn extra money. By 16, she had left school and started working at a fashion production company. While there, Grede came up with the idea for her first business, a marketing and entertainment agency called Independent Talent Brand (ITB) that matched fashion designers with funding. She founded the company in 2008 at age 25 and grew the agency before selling it 10 years later to marketing firm Rogers & Cowan for an undisclosed sum.

    Related: Good American CEO Emma Grede Talks Management, Navigating Outside Noise, and Why You Should Always Stick to Your Mission

    Now, Grede is based in Los Angeles with her husband, Skims CEO Jens Grede, and their four children. She also co-founded the sports apparel brand Off Season and the chemical-free cleaning company, Safely. She appeared as a guest investor on Shark Tank in seasons 13 and 14.

    And now she can add podcast host to her resume. The serial entrepreneur just launched a new podcast called Aspire, which aims to educate and inspire business leaders through in-depth conversations with leading executives and celebrities.

    Emma Grede. Photo Credit: Jamie Girdler

    Grede sat down with Entrepreneur to talk about her new podcast, how she manages several businesses, and what it takes to be a successful entrepreneur.

    Why did you start your podcast, and how is it different from other business podcasts?
    I left school when I was 16 years old. So, I don’t have a traditional trajectory. I’m trying to unpack as much of the success I’ve had as the mistakes I’ve made. I wanted to give something that I thought would have been useful to me when I started my businesses.

    What kind of advice would have been useful?
    To start, you have to love what you are doing. I say that because it’s tough to start something from scratch, and it’ll test every fiber of your being. So you have to really want to do it. It has to be more than just a single goal, like I need to make money, or I just want to leave the place where I work. It has to be something that fuels you.

    What kind of mindset does it take to be successful in entrepreneurship? Is there a trait or skill that stands out?
    I think you have to have unwavering self-belief. There’s a part of this that is really about a mindset that won’t take no for an answer and can see around and through problems and adversity. That works every time.

    How did you decide on entrepreneurship?
    It’s something I fell into. Like so many of us, I worked a corporate job for many years. I left that job because I didn’t think I was being remunerated well enough for what I did. So I fell into entrepreneurship. And that’s why I started my own thing.

    If you could start a side hustle today, what would it be?
    I would want to be a florist. That’s the only thing I’ve ever wanted to do that I’ve never touched. I would love to have a job that is just about the beauty, and is artistically fulfilling. That would be my little dream side hustle. A flower shop somewhere in a lovely place.

    What’s your leadership style?
    At [Good American], there are over 150 people. I’m the chief product officer in another company [Skims] where there are probably 400 people. So, it’s a lot of people, but I tend to hire the best people and get out of their way. One of the things that I do well is hire. I’m particularly good at putting teams together.

    What do you look for in new hires?
    I hire for attitude over experience often. That’s not in all positions, but I think especially when you’re starting a company, having people who have the energy, who have the passion, you can’t put a price on that.

    What keeps you motivated?
    I honestly feel that I’ve created the life of my dreams. I’m grateful every day that I get to do what I do. I think that keeps me motivated, that I have made this life for myself, and it’s of my choosing.

    What is it like working with your husband on the same C-suite leadership team? Do you keep a separation between the family and work dynamics?
    I’ve worked with Jens for a very long time, and we had a solid professional relationship before we were a couple. He handles the marketing and day-to-day running of Skims while I focus on the product. So our roles are very defined, and we do different things. We have different skills, which makes us very compatible as business partners. We also have a lot of separation in our actual roles. But if I’m honest, we love what we do so much. So does business spill into home time, and do we talk about what we do all the time? Absolutely. Yes. There’s a part of that that’s inevitable.

    Do you have a lot of help at home?
    I have twin three-year-olds, and then I have an 11-year-old and an 8-year-old. At home, I don’t have four kids that I get to school myself in the morning. I have a lot of help around me, and I rely on all of that help to get through the day. I think it’s very important to be honest about that because I don’t want anyone to look at me and think, Oh, wow. She’s some kind of superwoman. It’s like, No, I’m not superwoman. I’m just a woman. I’m making choices every day and making lots of sacrifices every day.

    This interview has been lightly edited and cut for clarity.

    Related: Kristin Cavallari and Emma Grede Reveal How They Built Brands That Stand Out in a Saturated Market — and the Secret Isn’t Star Power




    Source link

  • 16 things to know for Android developers at Google I/O 2025



    Posted by Matthew McCullough – VP of Product Management, Android Developer

    Today at Google I/O, we announced the many ways we’re helping you build excellent, adaptive experiences, and helping you stay more productive through updates to our tooling that put AI at your fingertips and throughout your development lifecycle. Here’s a recap of 16 of our favorite announcements for Android developers; you can also see what was announced last week in The Android Show: I/O Edition. And stay tuned over the next two days as we dive into all of the topics in more detail!

    Building AI into your Apps

    1: Building intelligent apps with Generative AI

    Generative AI makes apps intelligent, personalized, and agentic. This year, we announced new ML Kit GenAI APIs using Gemini Nano for common on-device tasks like summarization, proofreading, rewriting, and image description. For more complex use cases, such as image generation and processing extensive data across modalities, developers can harness more powerful models like Gemini Pro, Gemini Flash, and Imagen via Firebase AI Logic, including bringing AI to life in Android XR. We also released a new AI sample app, Androidify, that showcases how these APIs can transform your selfies into unique Android robots! To start building intelligent experiences with these new capabilities, explore the developer documentation and sample apps, and watch the overview session to choose the right solution for your app.
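    To make that concrete, here is a minimal, hedged sketch of calling a Gemini model through Firebase AI Logic from Kotlin. It assumes the SDK’s documented Kotlin surface at the time of writing, and the model name may change, so check the Firebase documentation for current identifiers.

    ```kotlin
    import com.google.firebase.Firebase
    import com.google.firebase.ai.ai
    import com.google.firebase.ai.type.GenerativeBackend

    // A minimal sketch: summarize text with a Gemini model via Firebase AI
    // Logic. Assumes the app is already configured with Firebase; names
    // reflect the documented SDK surface at the time of writing.
    suspend fun summarize(text: String): String? {
        val model = Firebase.ai(backend = GenerativeBackend.googleAI())
            .generativeModel("gemini-2.0-flash")  // model name is an assumption

        val response = model.generateContent("Summarize in one sentence: $text")
        return response.text
    }
    ```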

    New experiences across devices

    2: One app, every screen: think adaptive and unlock 500 million screens

    Mobile Android apps form the foundation across phones, foldables, tablets, and ChromeOS, and this year we’re helping you bring them to cars and XR, and expanding usage with desktop windowing and connected displays. This expansion means tapping into an ecosystem of 500 million devices: a significant opportunity to engage more users when you think adaptive, building a single mobile app that works across form factors. Resources, including the Compose Layouts library and Jetpack Navigation updates, make building these dynamic experiences easier than before. You can see how Peacock, NBCUniversal’s streaming service (available in the US), is building adaptively to meet users where they are.

    https://www.youtube.com/watch?v=ooRcQFMYzmA

    Disclaimer: Peacock is available in the US only. This video will only be viewable to US viewers.
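    As a minimal sketch of the adaptive approach, the snippet below switches layouts on the Material 3 window size class; SinglePaneBrowse and TwoPaneBrowse are hypothetical composables standing in for real screens.

    ```kotlin
    import android.app.Activity
    import androidx.compose.material3.windowsizeclass.ExperimentalMaterial3WindowSizeClassApi
    import androidx.compose.material3.windowsizeclass.WindowWidthSizeClass
    import androidx.compose.material3.windowsizeclass.calculateWindowSizeClass
    import androidx.compose.runtime.Composable

    // Pick a layout from the current window size class rather than the device
    // type, so one app adapts to phones, foldables, tablets, and desktop windows.
    @OptIn(ExperimentalMaterial3WindowSizeClassApi::class)
    @Composable
    fun AdaptiveRoot(activity: Activity) {
        val sizeClass = calculateWindowSizeClass(activity)
        when (sizeClass.widthSizeClass) {
            WindowWidthSizeClass.Compact -> SinglePaneBrowse() // phones, folded devices
            else -> TwoPaneBrowse()                            // tablets, unfolded, desktop
        }
    }

    @Composable fun SinglePaneBrowse() { /* list-only UI */ }
    @Composable fun TwoPaneBrowse() { /* list + detail, side by side */ }
    ```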

    3: Material 3 Expressive: design for intuition and emotion

    The new Material 3 Expressive update provides tools to enhance your product’s appeal by harnessing emotional UX, making it more engaging, intuitive, and desirable for users. Check out the I/O talk to learn more about expressive design and how it inspires emotion, clearly guides users toward their goals, and offers a flexible and personalized experience.

    moving image of Material 3 Expressive demo

    4: Smarter widgets, engaging live updates

    Measure the return on investment of your widgets (available soon) and easily create personalized widget previews with Glance 1.2. Promoted Live Updates keep users informed of important ongoing activities and come with a new standardized Progress Style template.
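    For a sense of the Glance programming model, here is a minimal, hedged widget sketch. HelloWidget and HelloWidgetReceiver are illustrative names, and the receiver still needs the usual AndroidManifest.xml declaration.

    ```kotlin
    import android.content.Context
    import androidx.glance.GlanceId
    import androidx.glance.appwidget.GlanceAppWidget
    import androidx.glance.appwidget.GlanceAppWidgetReceiver
    import androidx.glance.appwidget.provideContent
    import androidx.glance.text.Text

    // A hypothetical widget: Glance turns the composable below into the
    // RemoteViews that the launcher renders.
    class HelloWidget : GlanceAppWidget() {
        override suspend fun provideGlance(context: Context, id: GlanceId) {
            provideContent {
                Text("Hello from Glance")
            }
        }
    }

    // Registered in AndroidManifest.xml as the widget's AppWidget receiver.
    class HelloWidgetReceiver : GlanceAppWidgetReceiver() {
        override val glanceAppWidget: GlanceAppWidget = HelloWidget()
    }
    ```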


    5: Enhanced Camera & Media: low light boost and battery savings

    This year’s I/O introduces several camera and media enhancements. These include a software low light boost for improved photography in dim lighting, and native PCM offload, which lets the DSP handle more of the audio playback processing to conserve battery. Explore our detailed sessions on built-in effects within CameraX and Media3 for more information.

    6: Build next-gen app experiences for Cars

    We’re launching expanded opportunities for developers to build in-car experiences, including new Gemini integrations, support for more app categories like Games and Video, and enhanced capabilities for media and communication apps via the Car App Library and new APIs. Alongside updated car app quality tiers and simplified distribution, we’ll soon be providing improved testing tools like Android Automotive OS on Pixel Tablet and Firebase Test Lab access to help you bring your innovative apps to cars. Learn more from our technical session and blog post on new in-car app experiences.

    7: Build for Android XR’s expanding ecosystem with Developer Preview 2 of the SDK

    We announced Android XR in December, and today at Google I/O we shared a number of updates coming to the platform, including Developer Preview 2 of the Android XR SDK and an expanding ecosystem of devices: in addition to the first Android XR headset, Samsung’s Project Moohan, you’ll also see a new portable Android XR device from our partners at XREAL. There’s lots more to cover for Android XR: watch the Compose and AI on Android XR session and the Building differentiated apps for Android XR with 3D content session, and learn more about building for Android XR.

    product image of XREAL’s Project Aura against a nebulous black background

    XREAL’s Project Aura

    8: Express yourself on Wear OS: meet Material Expressive on Wear OS 6

    This year we are launching Wear OS 6, the most powerful and expressive version of Wear OS. Wear OS 6 features Material 3 Expressive, a new UI design with personalized visuals and motion for user creativity, coming to Wear, Android, and Google apps later this year. Developers gain access to Material 3 Expressive on Wear OS through new Jetpack libraries: Wear Compose Material 3, which provides components for apps, and Wear ProtoLayout Material 3, which provides components and layouts for tiles. Get started with the Material 3 libraries and other updates on Wear.

    moving image displays examples of Material 3 Expressive on Wear OS experiences

    Some examples of Material 3 Expressive on Wear OS experiences

    9: Engage users on Google TV with excellent TV apps

    With the stable release of Compose for TV, you can leverage more resources within Compose’s core and Material libraries, empowering you to build excellent adaptive UIs across your apps. We’re also thrilled to share exciting platform updates and developer tools designed to boost app engagement, including bringing Gemini capabilities to TV in the fall, opening enrollment for our Video Discovery API, and more.

    Developer productivity

    10: Build beautiful apps faster with Jetpack Compose

    Compose is our big bet for UI development. The latest stable BOM release provides the features, performance, stability, and libraries that you need to build beautiful adaptive apps faster, so you can focus on what makes your app valuable to users.

    moving image of compose adaptive layouts updates in the Google Play app

    Compose Adaptive Layouts Updates in the Google Play app

    11: Kotlin Multiplatform: new Shared Template lets you build across platforms, easily

    Kotlin Multiplatform (KMP) enables teams to reach new audiences across Android and iOS with less development time. We’ve released a new Android Studio KMP shared module template, updated Jetpack libraries, and new codelabs (Getting started with Kotlin Multiplatform and Migrating your Room database to KMP) to help developers get started with KMP. Shared module templates make it easier for developers to craft, maintain, and own the business logic. Read more on what’s new in Android’s Kotlin Multiplatform.

    12: Gemini in Android Studio: AI Agents to help you work

    Gemini in Android Studio is the AI-powered coding companion that makes Android developers more productive at every stage of the dev lifecycle. In March, we introduced Image to Code to bridge the gap between UX teams and software engineers by intelligently converting design mockups into working Compose UI code. And today, we previewed new agentic AI experiences, Journeys for Android Studio and Version Upgrade Agent. These innovations make it easier to build and test code. You can read more about these updates in What’s new in Android development tools.

    https://www.youtube.com/watch?v=ubyPjBesW-8

    13: Android Studio: smarter with Gemini

    In this latest release, we’re empowering devs with AI-driven tools like Gemini in Android Studio, streamlining UI creation, making testing easier, and ensuring apps are future-proofed in our ever-evolving Android ecosystem. These innovations accelerate development cycles, improve app quality, and help you stay ahead in a dynamic mobile landscape. To take advantage, upgrade to the latest Studio release. You can read more about these innovations in What’s new in Android development tools.

    moving image of Gemini in Android Studio Agentic Experiences including Journeys and Version Upgrade

    And the latest on driving business growth

    14: What’s new in Google Play

    Get ready for exciting updates from Play designed to boost your discovery, engagement and revenue! Learn how we’re continuing to become a content-rich destination with enhanced personalization and fresh ways to showcase your apps and content. Plus, explore powerful new subscription features designed to streamline checkout and reduce churn. Read I/O 2025: What’s new in Google Play to learn more.

    a moving image of three mobile devices displaying how content is displayed on the Play Store

    15: Start migrating to Play Games Services v2 today

    Play Games Services (PGS) connects over 2 billion gamer profiles on Play, powering cross-device gameplay, personalized gaming content and rewards for your players throughout the gaming journey. We are moving PGS v1 features to v2 with more advanced features and an easier integration path. Learn more about the migration timeline and new features.

    16: And of course, Android 16

    We unpacked some of the latest features coming to users in Android 16, which we’ve been previewing with you for the last few months. If you haven’t already, make sure to test your apps with the latest Beta of Android 16. Android 16 includes Live Updates, professional media and camera features, desktop windowing and connected displays, major accessibility enhancements and much more.

    Check out all of the Android and Play content at Google I/O

    This was just a preview of some of the cool updates for Android developers at Google I/O, but stay tuned to Google I/O over the next two days as we dive into a range of Android developer topics in more detail. You can check out the What’s New in Android and the full Android track of sessions, and whether you’re joining in person or around the world, we can’t wait to engage with you!

    Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.



    Source link