Tag: developers

  • Android Developers Blog: Android 16 is here



    Posted by Matthew McCullough – VP of Product Management, Android Developer

    Today, Android is launching a few updates across the platform! This includes the start of Android 16’s rollout with details for both developers and users, a Developer Preview for enhanced Android desktop experiences with connected displays, updates for Android users across Google apps and more, plus the June Pixel Drop. We’re also recapping all the Google I/O updates for Android developers focused on building excellent, adaptive Android apps.

    Today we’re releasing Android 16 and making it available on most supported Pixel devices. Look for new devices running Android 16 in the coming months.

    This also marks the availability of the source code at the Android Open Source Project (AOSP). You can examine the source code for a deeper understanding of how Android works, and our focus on compatibility means that you can leverage your app development skills in Android Studio with Jetpack Compose to create applications that thrive across the entire ecosystem.

    Major and minor SDK releases

    With Android 16, we’ve added the concept of a minor SDK release to allow us to iterate our APIs more quickly, reflecting the rapid pace of the innovation Android is bringing to apps and devices.

    Android 16 2025 SDK release timeline

    We plan to have another release in Q4 of 2025, which will also include new developer APIs. Today’s major release will be the only release in 2025 to include planned app-impacting behavior changes. In addition to new developer APIs, the Q4 minor release will pick up feature updates, optimizations, and bug fixes.

    We’ll continue to have quarterly Android releases. The Q3 update, in between the API releases, will provide much of the new visual polish associated with Material 3 Expressive, and you can get the Q3 beta today on your supported Pixel device.

    Camera and media APIs to empower creators

    Android 16 enhances support for professional camera users, allowing for night mode scene detection, hybrid auto exposure, and precise color temperature adjustments. It’s easier than ever to capture motion photos with new Intent actions, and we’re continuing to improve UltraHDR images, with support for HEIC encoding and new parameters from the ISO 21496-1 draft standard. Support for the Advanced Professional Video (APV) codec improves Android’s place in professional recording and post-production workflows, with perceptually lossless video quality that survives multiple decodings/re-encodings without severe visual quality degradation.

    Also, Android’s photo picker can now be embedded in your view hierarchy, and users will appreciate the ability to search cloud media.

    More consistent, beautiful apps

    Android 16 introduces changes to improve the consistency and visual appearance of apps, laying the foundation for the upcoming Material 3 Expressive changes. Apps targeting Android 16 can no longer opt out of going edge-to-edge, and the system ignores the elegantTextHeight attribute to ensure proper spacing in Arabic, Lao, Myanmar, Tamil, Gujarati, Kannada, Malayalam, Odia, Telugu, and Thai.
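
    If you want to get ahead of the edge-to-edge change before raising your target SDK, here is a minimal sketch using the androidx.activity enableEdgeToEdge() helper (assuming androidx.activity 1.8 or later):

        import android.os.Bundle
        import androidx.activity.ComponentActivity
        import androidx.activity.enableEdgeToEdge

        class MainActivity : ComponentActivity() {
            override fun onCreate(savedInstanceState: Bundle?) {
                // Draw behind the system bars now, so inset handling is already
                // correct when targeting Android 16 makes edge-to-edge mandatory.
                enableEdgeToEdge()
                super.onCreate(savedInstanceState)
                // setContent { ... } or setContentView(...), handling insets in your UI.
            }
        }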

    Adaptive Android apps

    With Android apps now running on a variety of devices and more windowing modes on large screens, developers should build Android apps that adapt to any screen and window size, regardless of device orientation. For apps targeting Android 16 (API level 36), Android 16 includes changes to how the system manages orientation, resizability, and aspect ratio restrictions. On displays with smallest width >= 600dp, the restrictions no longer apply and apps will fill the entire display window. You should check your apps to ensure your existing UIs scale seamlessly, working well across portrait and landscape aspect ratios. We’re providing frameworks, tools, and libraries to help.

    Side-by-side displays of a non-adaptive app UI on the left, with text reading Goodbye 'mobile-only' apps, and an adaptive app UI on the right, with text reading Hello adaptive apps

    You can test these overrides without targeting Android 16 by using the app compatibility framework and enabling the UNIVERSAL_RESIZABLE_BY_DEFAULT flag. Read more about changes to orientation and resizability APIs in Android 16.

    Predictive back by default and more

    Apps targeting Android 16 will have system animations for back-to-home, cross-task, and cross-activity by default. In addition, Android 16 extends predictive back navigation to three-button navigation, meaning that users long-pressing the back button will see a glimpse of the previous screen before navigating back.

    To make it easier to get the back-to-home animation, Android 16 adds support for OnBackInvokedCallback with the new PRIORITY_SYSTEM_NAVIGATION_OBSERVER priority. Android 16 additionally adds the finishAndRemoveTaskCallback and moveTaskToBackCallback for custom back stack behavior with predictive back.
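
    As a rough sketch of how an observer-priority callback might look (assuming API level 36, and that predictive back is enabled via android:enableOnBackInvokedCallback="true" in the manifest; the callback only observes and never consumes the back event):

        import android.app.Activity
        import android.os.Build
        import android.util.Log
        import android.window.OnBackInvokedDispatcher

        fun Activity.observeSystemBack() {
            if (Build.VERSION.SDK_INT >= 36) {
                onBackInvokedDispatcher.registerOnBackInvokedCallback(
                    // Observer priority: the system still handles (and animates) back;
                    // this callback is only notified that it happened.
                    OnBackInvokedDispatcher.PRIORITY_SYSTEM_NAVIGATION_OBSERVER
                ) {
                    Log.d("BackObserver", "System navigation handled a back event")
                }
            }
        }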

    Consistent progress notifications

    Android 16 introduces Notification.ProgressStyle, which lets you create progress-centric notifications that can denote states and milestones in a user journey using points and segments. Key use cases include rideshare, delivery, and navigation. It’s the basis for Live Updates, which will be fully realized in an upcoming Android 16 update.
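
    For illustration, here is a hedged sketch of a rideshare-style notification (assuming API level 36 and an existing notification channel; the channel id and segment values are hypothetical):

        import android.app.Notification
        import android.content.Context

        fun buildTripNotification(context: Context): Notification {
            // Two 50-unit segments model the pickup and drop-off legs of the trip;
            // a point marks the pickup milestone. setProgress positions the tracker.
            val style = Notification.ProgressStyle()
                .setProgressSegments(
                    listOf(
                        Notification.ProgressStyle.Segment(50),
                        Notification.ProgressStyle.Segment(50)
                    )
                )
                .setProgressPoints(listOf(Notification.ProgressStyle.Point(50)))
                .setProgress(25)

            return Notification.Builder(context, "trip_updates") // hypothetical channel
                .setSmallIcon(android.R.drawable.ic_dialog_map)
                .setContentTitle("Driver is on the way")
                .setStyle(style)
                .build()
        }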

    Side-by-side screenshots of a Pixel device showing progress notifications on the home screen on the left and the updated progress notification in the notification menu on the right

    Custom AGSL graphical effects

    Android 16 adds RuntimeColorFilter and RuntimeXfermode, allowing you to author complex effects like Threshold, Sepia, and Hue Saturation in AGSL and apply them to draw calls.
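
    As a sketch of the idea (assuming, by analogy with RuntimeShader, that RuntimeColorFilter is constructed from an AGSL string whose main function maps an input color to an output color):

        import android.graphics.Paint
        import android.graphics.RuntimeColorFilter

        // AGSL: a simple threshold effect. Pixels brighter than 0.5 become white,
        // everything else black.
        private const val THRESHOLD_AGSL = """
            half4 main(half4 color) {
                half luma = dot(color.rgb, half3(0.2126, 0.7152, 0.0722));
                return luma > 0.5 ? half4(1.0) : half4(0.0, 0.0, 0.0, 1.0);
            }
        """

        fun thresholdPaint(): Paint = Paint().apply {
            // Any draw call using this Paint gets the AGSL effect applied.
            colorFilter = RuntimeColorFilter(THRESHOLD_AGSL)
        }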

    Help to create better performing, more efficient apps and games

    From APIs to help you understand app performance, to platform changes designed to increase efficiency, Android 16 is focused on making sure your apps perform well. Android 16:

      • introduces system-triggered profiling to ProfilingManager,
      • ensures that at most one missed execution of scheduleAtFixedRate is immediately executed when the app returns to a valid lifecycle, for better efficiency,
      • introduces hasArrSupport and getSuggestedFrameRate(int) to make it easier for your apps to take advantage of adaptive display refresh rates, and
      • introduces the getCpuHeadroom and getGpuHeadroom APIs, along with CpuHeadroomParams and GpuHeadroomParams in SystemHealthManager, to give games and resource-intensive apps estimates of available CPU and GPU resources on supported devices.
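
    For example, here is a hedged sketch of using CPU headroom to pick an effects quality level (assuming API level 36, and assuming getCpuHeadroom accepts null for default parameters and returns a percentage, or NaN when unsupported):

        import android.content.Context
        import android.os.health.SystemHealthManager

        fun pickEffectsQuality(context: Context): String {
            val health = context.getSystemService(SystemHealthManager::class.java)
            // Estimated additional CPU capacity available to the app, 0..100.
            val headroom = health.getCpuHeadroom(null)
            return if (!headroom.isNaN() && headroom > 30f) "high" else "low"
        }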

    JobScheduler updates

    JobScheduler.getPendingJobReasons in Android 16 returns multiple reasons why a job is pending, due to both explicit constraints you set and implicit constraints set by the system. The new JobScheduler.getPendingJobReasonsHistory returns the list of the most recent pending job reason changes, allowing you to better tune the way your app works in the background.
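
    A minimal sketch (assuming API level 36; SYNC_JOB_ID is a hypothetical id for a job you scheduled earlier):

        import android.app.job.JobScheduler
        import android.content.Context
        import android.util.Log

        const val SYNC_JOB_ID = 42 // hypothetical

        fun logPendingJobReasons(context: Context) {
            val scheduler = context.getSystemService(JobScheduler::class.java)
            // Each value is a JobScheduler.PENDING_JOB_REASON_* constant covering
            // both your explicit constraints and implicit system constraints.
            scheduler.getPendingJobReasons(SYNC_JOB_ID).forEach { reason ->
                Log.d("Jobs", "Job $SYNC_JOB_ID pending because: $reason")
            }
        }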

    Android 16 adjusts the runtime quota for regular and expedited jobs based on which app standby bucket the app is in, whether the job starts while the app is in the top state, and whether the job executes while the app is running a foreground service.

    To detect (and then reduce) abandoned jobs, apps should use the new STOP_REASON_TIMEOUT_ABANDONED job stop reason that the system assigns for abandoned jobs, instead of STOP_REASON_TIMEOUT.

    16KB page sizes

    Android 15 introduced support for 16KB page sizes to improve the performance of app launches, system boot-ups, and camera starts, while reducing battery usage. Android 16 adds a 16 KB page size compatibility mode, which, combined with new Google Play technical requirements, brings Android closer to having devices shipping with this important change. You can validate if your app needs updating using the 16KB page size checks & APK Analyzer in the latest version of Android Studio.
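
    Alongside the Android Studio checks, you can log the page size your app actually sees at runtime with the long-standing Os.sysconf API, which is handy when verifying behavior on a 16 KB device or emulator image:

        import android.system.Os
        import android.system.OsConstants
        import android.util.Log

        fun logRuntimePageSize() {
            // 4096 on most devices today; 16384 on a 16 KB page size device or emulator.
            val pageSize = Os.sysconf(OsConstants._SC_PAGESIZE)
            Log.d("PageSize", "Runtime page size: $pageSize bytes")
        }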

    ART internal changes

    Android 16 includes the latest updates to the Android Runtime (ART), improving performance and adding support for additional language features. These improvements are also available to over a billion devices running Android 12 (API level 31) and higher through Google Play System updates. Apps and libraries that rely on internal, non-SDK ART structures may not continue to work correctly with these changes.

    Privacy and security

    Android 16 continues our mission to improve security and ensure user privacy. It includes improved security against Intent redirection attacks, makes MediaStore.getVersion unique to each app, adds an API that allows apps to share Android Keystore keys, incorporates the latest version of the Privacy Sandbox on Android, introduces a new behavior during the companion device pairing flow to protect the user’s location privacy, and allows users to easily select from, and limit access to, app-owned shared media in the photo picker.

    Local network permission testing

    Android 16 allows your app to test the upcoming local network permission feature, which will require your app to be granted NEARBY_WIFI_DEVICES permission. This change will be enforced in a future Android major release.

    An Android built for everyone

    Android 16 adds features that make Android more accessible, including:

      • Auracast broadcast audio with compatible LE Audio hearing aids,
      • extending TtsSpan with TYPE_DURATION,
      • a new list-based API within AccessibilityNodeInfo,
      • improved support for expandable elements using setExpandedState,
      • RANGE_TYPE_INDETERMINATE for indeterminate ProgressBar widgets,
      • AccessibilityNodeInfo getChecked and setChecked(int) methods that support a “partially checked” state,
      • setSupplementalDescription, so you can provide text for a ViewGroup without overriding information from its children, and
      • setFieldRequired, so apps can tell an accessibility service that input to a form field is required.

    Outline text for maximum text contrast

    Android 16 introduces outline text, which replaces high contrast text and draws a larger contrasting area around text to greatly improve legibility. New AccessibilityManager APIs let your app check whether this mode is enabled, or register a listener for when it changes.


    Text with enhanced contrast before and after Android 16’s new outline text accessibility feature

    Get your apps, libraries, tools, and game engines ready!

    If you develop an SDK, library, tool, or game engine, it’s even more important to prepare any necessary updates now to prevent your downstream app and game developers from being blocked by compatibility issues and allow them to target the latest SDK features. Please let your developers know if updates to your SDK are needed to fully support Android 16.

    Testing involves installing your production app, or a test app that uses your library or engine, onto a device or emulator running Android 16, via Google Play or other means. Work through all your app’s flows and look for functional or UI issues. Review the behavior changes to focus your testing. Each release of Android contains platform changes that improve privacy, security, and overall user experience, and these changes can affect your apps. Here are several changes to focus on that apply even if you aren’t yet targeting Android 16:

      • Broadcasts: Ordered broadcasts using priorities only work within the same process. Use another IPC if you need cross-process ordering.
      • ART: If you use reflection, JNI, or any other means to access Android internals, your app might break. This is never a best practice. Test thoroughly.
      • 16KB Page Size: If your app isn’t 16KB-page-size ready, you can use the new compatibility mode flag, but we recommend migrating to 16KB for best performance.

    Other changes that will be impactful once your app targets Android 16:

    Get your app ready for the future:

      • Local network protection: Consider testing your app with the upcoming Local Network Protection feature. It will give users more control over which apps can access devices on their local network in a future Android major release.

    Remember to thoroughly exercise libraries and SDKs that your app is using during your compatibility testing. You may need to update to current SDK versions or reach out to the developer for help if you encounter any issues.

    Once you’ve published the Android 16-compatible version of your app, you can start the process to update your app’s targetSdkVersion. Review the behavior changes that apply when your app targets Android 16 and use the compatibility framework to help quickly detect issues.

    Get started with Android 16

    Your Pixel device should get Android 16 shortly if you haven’t already been on the Android Beta. If you don’t have a Pixel device, you can use the 64-bit system images with the Android Emulator in Android Studio. If you are currently on Android 16 Beta 4.1 and have not yet taken an Android 16 QPR1 beta, you can opt out of the program and you will then be offered the release version of Android 16 over the air.

    For the best development experience with Android 16, we recommend that you use the latest Canary build of Android Studio Narwhal. Once you’re set up, here are some of the things you should do:

    Thank you again to everyone who participated in our Android developer preview and beta program. We’re looking forward to seeing how your apps take advantage of the updates in Android 16, and have plans to bring you updates in a fast-paced release cadence going forward.

    For complete information on Android 16, please visit the Android 16 developer site.



    Source link

  • Apple Worldwide Developers Conference Day 1: WWDC Highlights


    Apple announced sweeping changes to its products on Monday, the first day of its annual Worldwide Developers Conference (WWDC).

    Notably, the iPhone maker confirmed rumors that it is changing how it names its software updates. Instead of a sequential version number, Apple will number each release for the year after it ships. Apple’s next update will be iOS 26, arriving in the fall for most Apple users.

    The naming change applies to all Apple products. For example, the next iPad update will be named iPadOS 26, and the next MacBook update will be called macOS 26.

    Related: This College Student Wanted to Help People During the LA Wildfires. She Built a Practical App in Just 1 Month — and Won Apple’s Annual Competition.

    Apple announced other changes, like a new design and AI features. The company has made all of the new features available for testing starting on Monday through the Apple Developer program, with a public beta version to be released next month and a broader update rolling out this fall.

    “We continue to advance each of our platforms with more ways to harness the power of Apple Intelligence, and with a beautiful new design, our product experiences become even more seamless and enjoyable,” Apple CEO Tim Cook said during the event keynote.

    Here are some interesting new capabilities Apple debuted on Monday.

    1. A new design with Liquid Glass

    Apple made its biggest design update ever, with a new design element called Liquid Glass. The translucent element, which Apple used to craft new buttons, switches, tab bars, and notifications across its products, looks like glass on the screen and takes on the color of its environment.

    “This is our broadest software design update ever,” Alan Dye, Apple’s vice president of Human Interface Design, stated in a press release. “It combines the optical qualities of glass with a fluidity only Apple can achieve, as it transforms depending on your content or context.”

    Apple has integrated Liquid Glass components into its redesign, including translucent menus that respond to a user’s perspective of the screen and a new lock screen with a Liquid Glass time.

    Liquid Glass time on the lock screen. Credit: Apple

    Apple’s new Liquid Glass design applies to iOS, iPadOS, macOS, watchOS, and tvOS for a unified look.

    2. Live Translation will be integrated into the iPhone

    Apple is bringing Live Translation to Messages, FaceTime, and Phone to translate text and audio on the spot. The AI-powered feature is backed by Apple-built AI models that run entirely locally, so personal conversations are never uploaded to the cloud.

    Live Translation means that users can talk to each other in different languages, and have the translation appear on the screen in real-time. For example, when a user receives an iMessage in another language, they can opt for that message to be translated into their preferred language. On FaceTime, a user can hear someone talking in another language while following along with translated live captions.

    Live Translation on FaceTime. Credit: Apple

    Live Translation could help overcome language barriers for business calls and personal calls alike.

    Related: Own a Pair of AirPods? Listen to This — New Apple Tech Will Translate Languages During Conversations in Real Time

    3. Visual intelligence gets sharper

    Apple is bringing more AI capabilities to the forefront with visual intelligence. Users can tap into their iPhone cameras and use what they see to ask ChatGPT questions. They can also search Google or Etsy to find products similar to what they are looking at.

    Visual intelligence also recognizes when a user is looking at a poster for an event, and can extract the details to add the event to their calendar.

    Related: Apple Is Reportedly Developing AI Smart Glasses to Compete with Meta and Google

    4. Workout Buddy on Apple Watch

    Apple is adding an AI workout companion to the Apple Watch called Workout Buddy.

    Workout Buddy considers metrics tracked by the Apple Watch, like heart rate, pace, distance, and previous workouts, and leverages this knowledge to motivate users during their workouts with verbal encouragement.

    For example, when a user starts their run, Workout Buddy could say, “Great job starting your run. This is your second run this week.”

    Workout Buddy will be available for workout types like indoor run and outdoor cycle.




    Source link

  • Android Developers Blog: Announcing Jetpack Navigation 3



    Posted by Don Turner – Developer Relations Engineer

    Navigating between screens in your app should be simple, shouldn’t it? However, building a robust, scalable, and delightful navigation experience can be a challenge. For years, the Jetpack Navigation library has been a key tool for developers, but as the Android UI landscape has evolved, particularly with the rise of Jetpack Compose, we recognized the need for a new approach.

    Today, we’re excited to introduce Jetpack Navigation 3, a new navigation library built from the ground up specifically for Compose. For brevity, we’ll just call it Nav3 from now on. This library embraces the declarative programming model and Compose state as fundamental building blocks.

    Why a new navigation library?

    The original Jetpack Navigation library (sometimes referred to as Nav2 as it’s on major version 2) was initially announced back in 2018, before AndroidX and before Compose. While it served its original goals well, we heard from you that it had several limitations when working with modern Compose patterns.

    One key limitation was that the back stack state could only be observed indirectly. This meant there could be two sources of truth, potentially leading to an inconsistent application state. Also, Nav2’s NavHost was designed to display only a single destination – the topmost one on the back stack – filling the available space. This made it difficult to implement adaptive layouts that display multiple panes of content simultaneously, such as a list-detail layout on large screens.


    Figure 1. Changing from single pane to multi-pane layouts can create navigational challenges

    Founding principles

    Nav3 is built upon principles designed to provide greater flexibility and developer control:

      • You own the back stack: You, the developer, not the library, own and control the back stack. It’s a simple list which is backed by Compose state. Specifically, Nav3 expects your back stack to be SnapshotStateList<T> where T can be any type you choose. You can navigate by adding or removing items (Ts), and state changes are observed and reflected by Nav3’s UI.
      • Get out of your way: We heard that you don’t like a navigation library to be a black box with inaccessible internal components and state. Nav3 is designed to be open and extensible, providing you with building blocks and helpful defaults. If you want custom navigation behavior you can drop down to lower layers and create your own components and customizations.
      • Pick your building blocks: Instead of embedding all behavior within the library, Nav3 offers smaller components that you can combine to create more complex functionality. We’ve also provided a “recipes book” that shows how to combine components to solve common navigation challenges.


    Figure 2. The Nav3 display observes changes to the developer-owned back stack.

    Key features

      • Adaptive layouts: A flexible layout API (named Scenes) allows you to render multiple destinations in the same layout (for example, a list-detail layout on large screen devices). This makes it easy to switch between single and multi-pane layouts.
      • Modularity: The API design allows navigation code to be split across multiple modules. This improves build times and allows clear separation of responsibilities between feature modules.

    Figure 3. Custom animations and predictive back are easy to implement, and easy to override for individual destinations.

    Basic code example

    To give you an idea of how Nav3 works, here’s a short code sample.

        // Define the routes in your app and any arguments.
        data object Home
        data class Product(val id: String)

        // Create a back stack, specifying the route the app should start with.
        val backStack = remember { mutableStateListOf<Any>(Home) }

        // A NavDisplay displays your back stack. Whenever the back stack changes, the display updates.
        NavDisplay(
            backStack = backStack,

            // Specify what should happen when the user goes back
            onBack = { backStack.removeLastOrNull() },

            // An entry provider converts a route into a NavEntry which contains the content for that route.
            entryProvider = { route ->
                when (route) {
                    is Home -> NavEntry(route) {
                        Column {
                            Text("Welcome to Nav3")
                            Button(onClick = {
                                // To navigate to a new route, just add that route to the back stack
                                backStack.add(Product("123"))
                            }) {
                                Text("Click to navigate")
                            }
                        }
                    }
                    is Product -> NavEntry(route) {
                        Text("Product ${route.id}")
                    }
                    else -> NavEntry(Unit) { Text("Unknown route: $route") }
                }
            }
        )

    Get started and provide feedback

    To get started, check out the developer documentation, plus the recipes repository which provides examples for:

      • common navigation UI, such as a navigation rail or bar
      • conditional navigation, such as a login flow
      • custom layouts using Scenes

    We plan to provide code recipes, documentation, and blogs for more complex use cases in the future.

    Nav3 is currently in alpha, which means that the API is liable to change based on feedback. If you have any issues, or would like to provide feedback, please file an issue.

    Nav3 offers a flexible and powerful foundation for building modern navigation in your Compose applications. We’re really excited to see what you build with it.

    Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.




    Source link

  • Android Developers Blog: New in-car app experiences



    Posted by Ben Sagmoe – Developer Relations Engineer

    The in-car experience continues to evolve rapidly, and Google remains committed to pushing the boundaries of what’s possible. At Google I/O 2025, we’re excited to unveil the latest advancements for drivers, car manufacturers, and developers, furthering our goal of a safe, seamless, and helpful connected driving experience.

    Today’s car cabins are increasingly digital, offering developers exciting new opportunities with larger displays and more powerful computing. Android Auto is now supported in nearly all new cars sold, with almost 250 million compatible vehicles on the road.

    We’re also seeing significant growth in cars powered by Android Automotive OS with Google built-in. Over 50 models are currently available, with more launching this year. This growth is fueled by a thriving app ecosystem, including over 300 apps already available on the Play Store. These include apps optimized for a safe and seamless experience while driving as well as entertainment apps for while you’re parked and waiting in your car—many of which are adaptive mobile apps that have been seamlessly brought to cars through the Car Ready Mobile Apps Program.

    A vibrant developer community is essential to delivering these innovative in-car experiences utilizing the different screens within the car cabin. This past year, we’ve focused on key areas to help empower developers to build more differentiated experiences in cars across both platforms, as we embark on the Gemini era in cars!

    Gemini for Cars

    Exciting news for in-car experiences: Gemini, Google’s advanced AI, is coming to vehicles! This unlocks a new era of safe and helpful interactions on the go.

    Gemini enables natural voice conversations and seamless multitasking, empowering drivers to get more done simply by speaking naturally. Imagine effortlessly finding charging stations or navigating to a location pulled directly from an email, all with just your voice.

    You can learn how to leverage Gemini’s potential to create engaging in-car experiences in your app.

    Navigation apps can integrate with Gemini using three core intent formats, allowing you to start navigation, display relevant search results, and execute custom actions, such as enabling users to report incidents like traffic congestion using their voice.

    Gemini for cars will be rolling out in the coming months. Get ready to build the next generation of in-car AI experiences!

    New developer programs and tools

    Table of app categories showing availability in Android Auto and cars with Google built-in, including media, navigation, point-of-interest, internet of things, weather, video, browsers, games, and communication apps such as messaging and VoIP

    Last year, we introduced car app quality tiers to inspire developers to create high quality in-car experiences. By developing your app in compliance with the Car ready tier, you can bring video, gaming, or browser apps to run while parked in cars with Google built-in with almost no additional effort. Learn more about Car Ready Mobile Apps.

    Your app can further shine in cars within the Car optimized and Car differentiated tiers to unlock experiences while the car is in motion, and also when transitioning between parked and driving modes, while utilizing the different screens within the modern car cabin. Check the car app quality guidelines for details.

    To start, we’ve made some exciting improvements to the Car App Library that apply across both Android Auto and cars with Google built-in:

      • The Weather app category has graduated from beta: any developer can now publish weather apps to production tracks on both Android Auto and cars with Google Built-in. Before you publish your app, check that it meets the quality guidelines for weather apps.
      • Two new templates, the SectionedItemTemplate and MediaPlaybackTemplate, are now available in the Car App Library 1.8 alpha release for use on Android Auto. These templates are a great fit for building templated media apps, allowing for increased customization in layout and browsing structure.

        Example of the SectionedItemTemplate on the left and the MediaPlaybackTemplate on the right

    On Android Auto, many new app categories and capabilities are now in beta:

      • We are adding support for Building media apps with the Car App Library, enabling media app developers to build both richer and more complete experiences that users are used to on their phones. During beta, developers can build and publish media apps built using the Car App Library to internal testing and closed testing tracks. You can also express interest in being an early access partner to publish to production while the category is in beta. 

      • The communications category is in beta. We’ve simplified calling integration for calling apps by utilizing the CallsManager Jetpack API. Together with the templates provided by the Car App Library, this enables communications apps to build features like full message history, upcoming meetings list, rich in-call views, and more. During beta, developers can build and publish communications apps to internal testing and closed testing tracks. You can also express interest in being an early access partner to publish to production while the category is in beta.

      • Games are now supported in Android Auto, while parked, on phones running Android 15 and above. You can already find some popular titles like Angry Birds 2, Farm Heroes Saga, Candy Crush Soda Saga and Beach Buggy Racing 2. The Games category is in Beta and developers can publish games to internal testing and closed testing tracks. You can also express interest in being an early access partner to publish to production while the category is in beta.

    Finally, we have further simplified building, testing and distribution experience for developers building apps for Android Automotive OS cars with Google built-in:

      • Distribution through Google Play is more flexible than ever. Apps in the parked categories can now ship the same APK or App Bundle to cars with Google built-in as to phones, including through the mobile release track. Learn more about how to Distribute to cars.

      • Android Automotive OS on Pixel Tablet is now generally available, giving you a physical device option for testing Android Automotive OS apps without buying or renting a car. Additionally, the most recent system images include support for acting as an Android Auto receiver, meaning you can use the same device to test both your app’s experience on Android Auto and Android Automotive OS. Apply for access to these images.

    The road ahead

    You can look forward to more updates later this year, including:

      • Video apps will be supported on Android Auto, starting with phones running Android 16 on select compatible cars. If your app is already adaptive, enabling your app experience while parked only requires minimal steps to distribute to cars.

      • For Android Automotive OS cars running Android 14+ with Google built-in, we are working with car manufacturers to add additional app compatibility, to enable thousands of adaptive mobile apps in the next phase of the Car Ready Mobile Apps Program.

      • Updated design documentation that visualizes car app quality guidelines and integration paths to simplify designing your app for cars.

      • Google Play Services for cars with Google built-in are expanding to bring them on-par with mobile, including:
        • Passkeys and Credential Manager APIs for a more seamless user sign-in experience.
        • Quick Share, which will enable easy cross-device sharing from phone to car.

      • Pre-launch reports for Android Automotive OS are coming soon to the Play Console, helping you ensure app quality before distributing your app to cars.

    Keep up to date on these features and more through goo.gle/cars-whats-new as we continuously invest in the future of Android in the car. Stay tuned for more resources to help you build innovative and engaging experiences for drivers and passengers.

    Ready to publish your car app? Check our guidance for distributing to cars.

    Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.



    Source link

  • Android Developers Blog: Updates to the Android XR SDK: Introducing Developer Preview 2



    Posted by Matthew McCullough – VP of Product Management, Android Developer

    Since launching the Android XR SDK Developer Preview alongside Samsung, Qualcomm, and Unity last year, we’ve been blown away by all of the excitement we’ve been hearing from the broader Android community. Whether it’s through coding live-streams or local Google Developer Group talks, it’s been an outstanding experience participating in the community to build the future of XR together, and we’re just getting started.

    Today we’re excited to share an update to the Android XR SDK: Developer Preview 2, packed with new features and improvements to help you develop helpful and delightful immersive experiences with familiar Android APIs, tools and open standards created for XR.

    At Google I/O, we have two technical sessions related to Android XR. The first, Building differentiated apps for Android XR with 3D content, covers many features present in Jetpack SceneCore and ARCore for Jetpack XR. The second, The future is now, with Compose and AI on Android XR, covers creating XR-differentiated UI and our vision of the intersection of XR with cutting-edge AI capabilities.

    Android XR sessions at Google I/O 2025

    Building differentiated apps for Android XR with 3D content and The future is now, with Compose and AI on Android XR

    What’s new in Developer Preview 2

    Since the release of Developer Preview 1, we’ve been focused on making the APIs easier to use and adding new immersive Android XR features. Your feedback has helped us shape the development of the tools, SDKs, and the platform itself.

    With the Jetpack XR SDK, you can now play back 180° and 360° videos, which can be stereoscopic by encoding with the MV-HEVC specification or by encoding view-frames adjacently. The MV-HEVC standard is optimized and designed for stereoscopic video, allowing your app to efficiently play back immersive videos at great quality. Apps built with Jetpack Compose for XR can use the SpatialExternalSurface composable to render media, including stereoscopic videos.

    Using Jetpack Compose for XR, you can now also define layouts that adapt to different XR display configurations. For example, use a SubspaceModifier to specify the size of a Subspace as a percentage of the device’s recommended viewing size, so a panel effortlessly fills the space it’s positioned in.

    Material Design for XR now supports more component overrides for TopAppBar, AlertDialog, and ListDetailPaneScaffold, helping your large-screen enabled apps that use Material Design effortlessly adapt to the new world of XR.


    An app adapts to XR using Material Design for XR with the new component overrides

    In ARCore for Jetpack XR, you can now track hands after requesting the appropriate permissions. Hands are a collection of 26 posed hand joints that can be used to detect hand gestures and bring a whole new level of interaction to your Android XR apps:


    Hands bring a natural input method to your Android XR experience.

    For more guidance on developing apps for Android XR, check out our Android XR Fundamentals codelab, the updates to our Hello Android XR sample project, and a new version of JetStream with Android XR support.

    The Android XR Emulator has also received stability updates and support for AMD GPUs, and is now fully integrated within the Android Studio UI.


    The Android XR Emulator is now integrated in Android Studio

    Developers using Unity have already successfully created and ported existing games and apps to Android XR. Today, you can upgrade to the Pre-Release version 2 of the Unity OpenXR: Android XR package! This update adds many performance improvements such as support for Dynamic Refresh Rate, which optimizes your app’s performance and power consumption. Shaders made with Shader Graph now support SpaceWarp, making it easier to use SpaceWarp to reduce compute load on the device. Hand meshes are now exposed with occlusion, which enables realistic hand visualization.

    Check out Unity’s improved Mixed Reality template for Android XR, which now includes support for occlusion and persistent anchors.

    We recently launched Android XR Samples for Unity, which demonstrate capabilities on the Android XR platform such as hand tracking, plane tracking, face tracking, and passthrough.


    Google’s open-source Unity samples demonstrate platform features and show how they’re implemented

    Firebase AI Logic for Unity is now in public preview! This makes it easy for you to integrate gen AI into your apps, enabling the creation of AI-powered experiences with Gemini and Android XR. Firebase AI Logic fully supports Gemini’s capabilities, including multimodal input and output, and bi-directional streaming for immersive conversational interfaces. Built with production readiness in mind, Firebase AI Logic is integrated with core Firebase services like App Check, Remote Config, and Cloud Storage for enhanced security, configurability, and data management. Learn more about this on the Firebase blog or go straight to the Gemini API using Vertex AI in Firebase SDK documentation to get started.

    Continuing to build the future together

    Our commitment to open standards continues with the glTF Interactivity specification, developed in collaboration with the Khronos Group, which will be supported in glTF models rendered by Jetpack XR later this year. Models using the glTF Interactivity specification are self-contained interactive assets that can have many pre-programmed behaviors, like rotating objects on a button press or changing the color of a material over time.

    Android XR will be available first on Samsung’s Project Moohan, launching later this year. Soon after, our partners at XREAL will release the next Android XR device. Codenamed Project Aura, it’s a portable and tethered device that gives users access to their favorite Android apps, including those that have been built for XR. It will launch as a developer edition, specifically for you to begin creating and experimenting. The best news? With the familiar tools you use to build Android apps today, you can build for these devices too.


    XREAL’s Project Aura

    The Google Play Store is also getting ready for Android XR. It will list supported 2D Android apps on the Android XR Play Store when it launches later this year. If you are working on an Android XR differentiated app, you can get it ready for the big launch and be one of the first differentiated apps on the Android XR Play Store.

    And we know many of you are excited for the future of Android XR on glasses. We are shaping the developer experience now and will share more details on how you can participate later this year.

    To get started creating and developing for Android XR, check out developer.android.com/develop/xr where you will find all of the tools, libraries, and resources you need to work with the Android XR SDK. In particular, try out our samples and codelabs.

    We welcome your feedback, suggestions, and ideas as you’re helping shape Android XR. Your passion, expertise, and bold ideas are vital as we continue to develop Android XR together. We look forward to seeing your XR-differentiated apps when Android XR devices launch later this year!

    Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.



    Source link

  • 16 things to know for Android developers at Google I/O 2025



    Posted by Matthew McCullough – VP of Product Management, Android Developer

    Today at Google I/O, we announced the many ways we’re helping you build excellent, adaptive experiences, and helping you stay more productive through updates to our tooling that put AI at your fingertips and throughout your development lifecycle. Here’s a recap of 16 of our favorite announcements for Android developers; you can also see what was announced last week in The Android Show: I/O Edition. And stay tuned over the next two days as we dive into all of the topics in more detail!

    Building AI into your Apps

    1: Building intelligent apps with Generative AI

    Generative AI enhances apps’ experience by making them intelligent, personalized and agentic. This year, we announced new ML Kit GenAI APIs using Gemini Nano for common on-device tasks like summarization, proofreading, rewrite, and image description. We also provided capabilities for developers to harness more powerful models such as Gemini Pro, Gemini Flash, and Imagen via Firebase AI Logic for more complex use cases like image generation and processing extensive data across modalities, including bringing AI to life in Android XR, and a new AI sample app, Androidify, that showcases how these APIs can transform your selfies into unique Android robots! To start building intelligent experiences by leveraging these new capabilities, explore the developer documentation, sample apps, and watch the overview session to choose the right solution for your app.

    New experiences across devices

    2: One app, every screen: think adaptive and unlock 500 million screens

    Mobile Android apps form the foundation across phones, foldables, tablets, and ChromeOS, and this year we’re helping you bring them to cars and XR and expanding usage with desktop windowing and connected displays. This expansion means tapping into an ecosystem of 500 million devices – a significant opportunity to engage more users when you think adaptive, building a single mobile app that works across form factors. Resources, including the Compose Layouts library and Jetpack Navigation updates, help make building these dynamic experiences easier than before. You can see how Peacock, NBCUniversal’s streaming service (available in the US), is building adaptively to meet users where they are.

    https://www.youtube.com/watch?v=ooRcQFMYzmA

    Disclaimer: Peacock is available in the US only. This video will only be viewable to US viewers.

    3: Material 3 Expressive: design for intuition and emotion

    The new Material 3 Expressive update provides tools to enhance your product’s appeal by harnessing emotional UX, making it more engaging, intuitive, and desirable for users. Check out the I/O talk to learn more about expressive design and how it inspires emotion, clearly guides users toward their goals, and offers a flexible and personalized experience.

    moving image of Material 3 Expressive demo

    4: Smarter widgets, engaging live updates

    Measure the return on investment of your widgets (available soon) and easily create personalized widget previews with Glance 1.2. Promoted Live Updates notify users of important ongoing notifications and come with a new Progress Style standardized template.


    5: Enhanced Camera & Media: low light boost and battery savings

    This year’s I/O introduces several camera and media enhancements. These include a software low light boost for improved photography in dim lighting and native PCM offload, allowing the DSP to handle more audio playback processing, thus conserving user battery. Explore our detailed sessions on built-in effects within CameraX and Media3 for further information.

    6: Build next-gen app experiences for Cars

    We’re launching expanded opportunities for developers to build in-car experiences, including new Gemini integrations, support for more app categories like Games and Video, and enhanced capabilities for media and communication apps via the Car App Library and new APIs. Alongside updated car app quality tiers and simplified distribution, we’ll soon be providing improved testing tools like Android Automotive OS on Pixel Tablet and Firebase Test Lab access to help you bring your innovative apps to cars. Learn more from our technical session and blog post on new in-car app experiences.

    7: Build for Android XR’s expanding ecosystem with Developer Preview 2 of the SDK

    We announced Android XR in December, and today at Google I/O we shared a bunch of updates coming to the platform including Developer Preview 2 of the Android XR SDK plus an expanding ecosystem of devices: in addition to the first Android XR headset, Samsung’s Project Moohan, you’ll also see more devices including a new portable Android XR device from our partners at XREAL. There’s lots more to cover for Android XR: Watch the Compose and AI on Android XR session, and the Building differentiated apps for Android XR with 3D content session, and learn more about building for Android XR.


    XREAL’s Project Aura

    8: Express yourself on Wear OS: meet Material Expressive on Wear OS 6

    This year we are launching Wear OS 6: the most powerful and expressive version of Wear OS. Wear OS 6 features Material 3 Expressive, a new UI design with personalized visuals and motion for user creativity, coming to Wear, Android, and Google apps later this year. Developers gain access to Material 3 Expressive on Wear OS by utilizing new Jetpack libraries: Wear Compose Material 3, which provides components for apps, and Wear ProtoLayout Material 3, which provides components and layouts for tiles. Get started with the Material 3 libraries and other updates on Wear.


    Some examples of Material 3 Expressive on Wear OS experiences

    9: Engage users on Google TV with excellent TV apps

    You can leverage more resources within Compose’s core and Material libraries with the stable release of Compose for TV, empowering you to build excellent adaptive UIs across your apps. We’re also thrilled to share exciting platform updates and developer tools designed to boost app engagement, including bringing Gemini capabilities to TV in the fall, opening enrollment for our Video Discovery API, and more.

    Developer productivity

    10: Build beautiful apps faster with Jetpack Compose

    Compose is our big bet for UI development. The latest stable BOM release provides the features, performance, stability, and libraries that you need to build beautiful adaptive apps faster, so you can focus on what makes your app valuable to users.


    Compose Adaptive Layouts Updates in the Google Play app

    11: Kotlin Multiplatform: new Shared Template lets you build across platforms, easily

    Kotlin Multiplatform (KMP) enables teams to reach new audiences across Android and iOS with less development time. We’ve released a new Android Studio KMP shared module template, updated Jetpack libraries and new codelabs (Getting started with Kotlin Multiplatform and Migrating your Room database to KMP) to help developers who are looking to get started with KMP. Shared module templates make it easier for developers to craft, maintain, and own the business logic. Read more on what’s new in Android’s Kotlin Multiplatform.
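
    To illustrate the shared-module idea, here is a minimal expect/actual sketch (source-set file paths shown as comments; names are hypothetical):

        // commonMain/kotlin/Greeting.kt
        expect fun platformName(): String

        class Greeting {
            // Shared business logic, written once for all platforms.
            fun greet(): String = "Hello from ${platformName()}!"
        }

        // androidMain/kotlin/Platform.android.kt
        actual fun platformName(): String = "Android ${android.os.Build.VERSION.SDK_INT}"

        // iosMain/kotlin/Platform.ios.kt (compiled with Kotlin/Native)
        actual fun platformName(): String = "iOS"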

    12: Gemini in Android Studio: AI Agents to help you work

    Gemini in Android Studio is the AI-powered coding companion that makes Android developers more productive at every stage of the dev lifecycle. In March, we introduced Image to Code to bridge the gap between UX teams and software engineers by intelligently converting design mockups into working Compose UI code. And today, we previewed new agentic AI experiences, Journeys for Android Studio and Version Upgrade Agent. These innovations make it easier to build and test code. You can read more about these updates in What’s new in Android development tools.

    https://www.youtube.com/watch?v=ubyPjBesW-8

    13: Android Studio: smarter with Gemini

    In this latest release, we’re empowering devs with AI-driven tools like Gemini in Android Studio, streamlining UI creation, making testing easier, and ensuring apps are future-proofed in our ever-evolving Android ecosystem. These innovations accelerate development cycles, improve app quality, and help you stay ahead in a dynamic mobile landscape. To take advantage, upgrade to the latest Studio release. You can read more about these innovations in What’s new in Android development tools.

    moving image of Gemini in Android Studio Agentic Experiences including Journeys and Version Upgrade

    And the latest on driving business growth

    14: What’s new in Google Play

    Get ready for exciting updates from Play designed to boost your discovery, engagement and revenue! Learn how we’re continuing to become a content-rich destination with enhanced personalization and fresh ways to showcase your apps and content. Plus, explore powerful new subscription features designed to streamline checkout and reduce churn. Read I/O 2025: What’s new in Google Play to learn more.

    a moving image of three mobile devices displaying how content is displayed on the Play Store

    15: Start migrating to Play Games Services v2 today

    Play Games Services (PGS) connects over 2 billion gamer profiles on Play, powering cross-device gameplay, personalized gaming content and rewards for your players throughout the gaming journey. We are moving PGS v1 features to v2 with more advanced features and an easier integration path. Learn more about the migration timeline and new features.

    16: And of course, Android 16

    We unpacked some of the latest features coming to users in Android 16, which we’ve been previewing with you for the last few months. If you haven’t already, make sure to test your apps with the latest Beta of Android 16. Android 16 includes Live Updates, professional media and camera features, desktop windowing and connected displays, major accessibility enhancements and much more.

    Check out all of the Android and Play content at Google I/O

    This was just a preview of some of the cool updates for Android developers at Google I/O, but stay tuned to Google I/O over the next two days as we dive into a range of Android developer topics in more detail. You can check out the What’s New in Android and the full Android track of sessions, and whether you’re joining in person or around the world, we can’t wait to engage with you!

    Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.



    Source link

  • Android Developers Blog: The Android Show: I/O Edition



    Posted by Matthew McCullough – Vice President, Product Management, Android Developer

    We just dropped an I/O Edition of The Android Show, where we unpacked exciting new experiences coming to the Android ecosystem: a fresh and dynamic look and feel, smarts across your devices, and enhanced safety and security features. Join Sameer Samat, President of Android Ecosystem, and the Android team to learn about exciting new development in the episode below, and read about all of the updates for users.

    Tune into Google I/O next week – including the Developer Keynote as well as the full Android track of sessions – where we’re covering these topics in more detail and how you can get started.

    https://www.youtube.com/watch?v=l3yDd3CmA_Y

    Start building with Material 3 Expressive

    The world of UX design is constantly evolving, and you deserve the tools to create truly engaging and impactful experiences. That’s why Material Design’s latest evolution, Material 3 Expressive, provides new ways to make your product more engaging, easy to use, and desirable. Learn more, and try out the new Material 3 Expressive: an expansion pack designed to enhance your app’s appeal by harnessing emotional UX, making it more engaging, intuitive, and desirable for users. It comes with new components, a motion-physics system, type styles, colors, shapes, and more.

    Material 3 Expressive will be coming to Android 16 later this year; check out the Google I/O talk next week where we’ll dive into this in more detail.

    A fluid design built for your watch’s round display

    Wear OS 6, arriving later this year, brings Material 3 Expressive design to Google’s smartwatch platform. The new design language puts the round watch display at the heart of the experience, and is embraced in every component and motion of the system, from buttons to notifications. You’ll be able to try the new visual design and upgrade existing app experiences to a new level. Next week, tune in to the What’s New in Android session to learn more.

    Plus some goodies in Android 16…

    We also unpacked some of the latest features coming to users in Android 16, which we’ve been previewing with you for the last few months. If you haven’t already, you can try out the latest Beta of Android 16.

    A few new features in Android 16 that developers should pay attention to are Live Updates, professional media and camera features, desktop windowing for tablets, major accessibility enhancements, and much more:

      • Live Updates allow your app to show time-sensitive progress updates. Use the new ProgressStyle template for an improved experience around navigation, deliveries, and rideshares.

    Watch the What’s New in Android session and the Live updates talk to learn more.

    Tune in next week to Google I/O

    This was just a preview of some Android-related news, so remember to tune in next week to Google I/O, where we’ll be diving into a range of Android developer topics in a lot more detail. You can check out What’s New in Android and the full Android track of sessions to start planning your time.

    We can’t wait to see you next week, whether you’re joining in person or virtually from anywhere around the world!



    Source link

  • Android Developers Blog: Introducing Widget Quality Tiers



    Posted by Ivy Knight – Senior Design Advocate

    Level up your app Widgets with new quality tiers

    Widgets can be a powerful tool for engaging users and increasing the visibility of your app. They can also help you to improve the user experience by providing users with a more convenient way to access your app’s content and features.

    To build a great Android widget, it should be helpful, adaptive, and visually cohesive with the overall aesthetic of the device home screen.

    To help you achieve a great widget, we are pleased to introduce Android Widget Quality Tiers!

    The new widget quality tiers are here to guide you toward a best-practice widget implementation that will look great and bring your users value across the ecosystem of Android phones, tablets, and foldables.

    What does this mean for widget makers?

    Whether you are planning a new widget or investing in an update to an existing one, the Widget Quality Tiers will help you evaluate and plan for a high-quality widget.

    Just as the large screen quality tiers help optimize app experiences, these widget tiers guide you in creating widgets that are not just functional, but also visually appealing and user-friendly across all Android devices.

    Two screenshots of a phone display different views in the Google Play app. The first shows a list of running apps with the Widget filter applied in a search for 'Running apps'; the second shows the Nike Run Club app page.

    Widgets that meet quality tier guidelines will be discoverable under the new Widget filter in Google Play.

    Consider using our Canonical Widget layouts, which are based on Jetpack Glance components, to make it easier for you to design and build a Tier 1 widget your users will love.
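
    As a starting point, here is a minimal Jetpack Glance widget sketch; the class names and content are hypothetical placeholders, and GlanceTheme opts the widget into Material 3 dynamic color on supported devices:

    ```kotlin
    import android.content.Context
    import androidx.glance.GlanceId
    import androidx.glance.GlanceTheme
    import androidx.glance.appwidget.GlanceAppWidget
    import androidx.glance.appwidget.GlanceAppWidgetReceiver
    import androidx.glance.appwidget.provideContent
    import androidx.glance.text.Text

    // A minimal Glance widget; "PlantsWidget" and its content are hypothetical.
    // Wrapping content in GlanceTheme keeps the widget visually coherent with
    // the device home screen via dynamic color.
    class PlantsWidget : GlanceAppWidget() {
        override suspend fun provideGlance(context: Context, id: GlanceId) {
            provideContent {
                GlanceTheme {
                    Text("Water plants")
                }
            }
        }
    }

    // The receiver that exposes the widget to the launcher; it must also be
    // declared in AndroidManifest.xml with appwidget-provider metadata.
    class PlantsWidgetReceiver : GlanceAppWidgetReceiver() {
        override val glanceAppWidget: GlanceAppWidget = PlantsWidget()
    }
    ```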

    Let’s take a look at the Widget Quality Tiers

    There are three tiers, built from required system defaults and suggested guidance, to help you create an enhanced widget experience:

    Tier 1: Differentiated

    Four mockups show examples of Material Design 3 dynamic color applied to an app called 'Radio Hour'.

    Differentiated widgets go further by implementing theming and adapting to resizing.

    Tier 1 widgets are exemplary widgets that offer personalized hero experiences and create unique, productive home screens. They meet the Tier 2 standards plus additional criteria for layout, color, discovery, and system coherence.

    A stylized cartoon figure holds their chin thoughtfully while a chat bubble icon is highlighted.

    For example, use the system-provided corner radius, and don’t set a custom corner radius on widgets.

    Add more personalization with dynamic color and generated previews while ensuring your widgets look good across devices by not overriding system defaults.
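
    In Glance, respecting those defaults might look like the following sketch; the WidgetSurface helper is hypothetical, and android.R.dimen.system_app_widget_background_radius is the platform-provided radius on API 31+:

    ```kotlin
    import androidx.compose.runtime.Composable
    import androidx.glance.GlanceModifier
    import androidx.glance.GlanceTheme
    import androidx.glance.appwidget.appWidgetBackground
    import androidx.glance.appwidget.cornerRadius
    import androidx.glance.background
    import androidx.glance.layout.Box
    import androidx.glance.layout.fillMaxSize

    // Hypothetical helper, intended to be called inside provideContent { GlanceTheme { ... } }.
    // It leans on system defaults instead of overriding them: the background color
    // comes from the dynamic theme, and the corner radius from the platform dimen.
    @Composable
    fun WidgetSurface(content: @Composable () -> Unit) {
        Box(
            modifier = GlanceModifier
                .fillMaxSize()
                .appWidgetBackground()
                .background(GlanceTheme.colors.widgetBackground)
                .cornerRadius(android.R.dimen.system_app_widget_background_radius)
        ) {
            content()
        }
    }
    ```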

     Four mockups show examples of Material Design 3 components on Android: a contact card, a podcast player, a task list, and a news feed.

    Tier 1 widgets, from the top left: properly cropped content, filled layout bounds, appropriately sized headers and touch targets, and good use of color and contrast.

    Tier 2: Quality Standard

    These widgets are helpful, usable, and provide a quality experience. They meet all criteria for layout, color, discovery, and content.

    A simple to-do list app widget displays two tasks: 'Water plants' and 'Water more plants.' Both tasks have calendar icons next to them. The app is titled 'Plants' and has search and add buttons in the top right corner.

    Make sure your widget has appropriate touch targets.

    Tier 2 widgets are functional but simple; they meet the basic criteria for a usable widget. But if you want to create a truly stellar experience for your users, the Tier 1 criteria introduce ways to make a more personal, interactive, and coherent widget.

    Tier 3: Low Quality

    These widgets don’t meet the minimum quality bar and don’t provide a great user experience, because they fail to follow, or are missing, criteria from Tier 2.

     Examples of Material Design 3 widgets are displayed on a light pink background with stylized X shapes. Widgets include a podcast player, a contact card, to-do lists, and a music player.

    Clockwise from the top left: not filling the bounds, poorly cropped content, low color contrast, a mis-sized header, and small touch targets.

    A stylized cartoon person with orange hair and a blue shirt holds a pencil to their cheek. 'Kacie' is written above them, with a cut-off chat bubble icon.

    For example, ensure content is visible and not cropped.

    Build and elevate your Android widgets with Widget Quality Tiers

    Dive deeper into the widget quality tiers and start building widgets that not only look great but also provide an amazing user experience! Check out the official Android documentation for detailed information and best practices.


    This blog post is part of our series: Spotlight Week on Widgets, where we provide resources—blog posts, videos, sample code, and more—all designed to help you design and create widgets. You can read more in the overview of Spotlight Week: Widgets, which will be updated throughout the week.




  • Android Developers Blog: #WeArePlay | How Memory Lane Games helps people with dementia



    Posted by Robbie McLachlan – Developer Marketing

    In our latest #WeArePlay film, which celebrates the people behind apps and games, we meet Bruce – a co-founder of Memory Lane Games. His company turns cherished memories into simple, engaging quizzes for people with different types of dementia. Discover how Memory Lane Games blends nostalgia and technology to spark conversations and emotional connections.

    https://www.youtube.com/watch?v=oBDJH8h7FYs

    What inspired the idea behind Memory Lane Games?

    The idea for Memory Lane Games came about one day at the pub when Peter was telling me how his mum, even with vascular dementia, lights up when she looks at old family photos. It got me thinking about my own mum, who treasures old photos just as much. The idea hit us – why not turn those memories into games? We wanted to help people reconnect with their past and create moments where conversations could flow naturally.

    Memory Lane Games co-founders Peter and Bruce, from the Isle of Man

    Can you tell us about a memorable moment in the journey when you realized how powerful the game was?

    We knew we were onto something meaningful when a caregiver in a memory cafe told us about a man who was pretty much non-verbal but would enjoy playing. He started humming along to one of our music trivia games, then suddenly said, “Roy Orbison is a way better singer than Elvis, but Elvis had a better manager.” The caregiver was in tears—it was the first complete sentence he’d spoken in months. Moments like these remind us why we’re doing this—it’s not just about games; it’s about unlocking moments of connection and joy that dementia often takes away.

    A user plays Memory Lane Games from their phone

    One of the key features is having errorless fun with the games. Why was that so important?

    We strive for frustration-free design. With our games, there are no wrong answers—just gentle prompts to trigger memories and spark conversations about topics they are interested in. It’s not about winning or losing; it’s about rekindling connections and creating moments of happiness without any pressure or frustration. Dementia can make day-to-day tasks challenging, and the last thing anyone needs is a game that highlights what they might not remember or get right. Caregivers also like being able to redirect attention back to something familiar and fun when behaviour gets more challenging.

    How has Google Play helped your journey?

    What’s been amazing is how Google Play has connected us with an incredibly active and engaged global community without any major marketing efforts on our part.

    For instance, we got our first big traction in places like the Philippines and India—places we hadn’t specifically targeted. Yet here we are, with thousands of downloads in more than 100 countries. That reach wouldn’t have been possible without Google Play.

    A group of senior citizens gathers around a table to play a round of Memory Lane Games from a shared mobile device

    What is next for Memory Lane Games?

    We’re really excited about how we can use AI to take Memory Lane Games to the next level. Our goal is to use generative AI, like Google’s Gemini, to create more personalized and localized game content. For example, instead of just focusing on general memories, we want to tailor the game to a specific village the player came from, or a TV show they used to watch, or even local landmarks from their family’s hometown. AI will help us offer games that are deeply personal. Plus, with the power of AI, we can create games in multiple languages, tapping into new regions like Japan, Nigeria or Mexico.

    Discover other inspiring app and game founders featured in #WeArePlay.


  • Android Developers Blog: Get ready for Google I/O: Program lineup revealed



    Posted by the Google I/O team

    The Google I/O agenda is live. We’re excited to share Google’s biggest announcements across AI, Android, Web, and Cloud on May 20-21. Tune in to learn how we’re making development easier so you can build faster.

    We’ll kick things off with the Google Keynote at 10:00 AM PT on May 20th, followed by the Developer Keynote at 1:30 PM PT. This year, we’re livestreaming two days of sessions directly from Mountain View, bringing more of the I/O experience to you, wherever you are.

    Here’s a sneak peek of what we’ll cover:

      • AI advancements: Learn how Gemini models enable you to build new applications and unlock new levels of productivity. Explore the flexibility offered by options like our Gemma open models and on-device capabilities.
      • Build excellent apps across devices with Android: Crafting exceptional app experiences across devices is now even easier with Android. Dive into sessions focused on building intelligent apps with Google AI and boosting your productivity, alongside creating adaptive user experiences and leveraging the power of Google Play.
      • Powerful web, made easier: Exciting new features continue to accelerate web development, helping you to build richer, more reliable web experiences. We’ll share the latest innovations in web UI, Baseline progress, new multimodal built-in AI APIs using Gemini Nano, and how AI in DevTools streamlines building innovative web experiences.

    Plan your I/O

    Join us online for livestreams May 20-21, followed by on-demand sessions and codelabs on May 22. Register today and explore the full program.

    We’re excited to share what’s next and see what you build!



