If the e-book app on your phone or tablet is overflowing and full of outdated files, use these tools to tidy it up.
-
How to Organize Your E-Books on Kindle, Apple, Google, and Nook
-
Google Play’s Indie Games Fund in Latin America returns for its 4th year
Posted by Daniel Trócoli – Google Play Partnerships
We’re thrilled to announce the return of Google Play’s Indie Games Fund (IGF) in Latin America for its fourth consecutive year! This year, we’re once again committing $2 million to empower another 10 indie game studios across the region. With this latest round of funding, our total investment in Latin American indie games will reach an impressive $8 million.
Since its inception, the IGF has been a cornerstone of our commitment to fostering growth for developers of all sizes on Google Play. We’ve seen firsthand the transformative impact this support has had, enabling studios to expand their teams, refine their creations, and reach new audiences globally.
What’s in store for the Indie Games Fund in 2025?
Just like in previous years, selected small game studios based in Latin America will receive a share of the $2 million fund, along with support from the Google Play team.
As Vish Game Studio, a previously selected studio, shared: “The IGF was a pivotal moment for our studio, boosting us to the next level and helping us form lasting connections.” We believe in fostering these kinds of pivotal moments for all our selected studios.
The program is open to indie game developers who have already launched a game, whether it’s on Google Play, another mobile platform, PC, or console. Each selected recipient will receive between $150,000 and $200,000 to help them elevate their game and realize their full potential.
Check out all eligibility criteria and apply now! Applications will close at 12:00 PM BRT on July 31, 2025. To give your application the best chance, remember that priority will be given to applications received by 12:00 PM BRT on July 15, 2025.
-
Top announcements to know from Google Play at I/O ‘25
Posted by Raghavendra Hareesh Pottamsetty – Google Play Developer and Monetization Lead
At Google Play, we’re dedicated to helping people discover experiences they’ll love, while empowering developers like you to bring your ideas to life and build successful businesses. This year, Google I/O was packed with exciting announcements designed to do just that. For a comprehensive overview of everything we shared, be sure to check out our blog post recapping What’s new in Google Play.
Today, we’ll dive specifically into the latest updates designed to help you streamline your subscription offerings and maximize your revenue on Play. Get a quick overview of these updates in our video below, or read on for more details.
https://www.youtube.com/watch?v=Cny82VuONU4
#1: Subscriptions with add-ons: Streamlining subscriptions for you and your users
We’re excited to announce multi-product checkout for subscriptions, a new feature designed to streamline your purchase flow and offer a more unified experience for both you and your users. This enhancement allows you to sell subscription add-ons right alongside your base subscriptions, all while maintaining a single, aligned payment schedule.
The result? A simplified user experience with just one price and one transaction, giving you more control over how your subscribers upgrade, downgrade, or manage their add-ons. Learn more about how to create add-ons.
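To make the “one price, one transaction” idea concrete, here is a minimal illustrative model in plain Python. This is not the Play Billing API; the class, field, and product names are hypothetical, and it only sketches how a base subscription and its add-ons share a single combined price on one aligned renewal schedule.

```python
from dataclasses import dataclass, field

@dataclass
class SubscriptionOrder:
    """Hypothetical model (not the Play Billing API): a base subscription
    plus add-ons billed as one price on one aligned renewal schedule."""
    base_name: str
    base_price: float          # price per billing period, in USD
    billing_period_days: int   # single schedule shared by base and add-ons
    addons: dict = field(default_factory=dict)  # add-on name -> price per period

    def add_addon(self, name: str, price: float) -> None:
        # Add-ons attach to the base plan and inherit its billing period.
        self.addons[name] = price

    def total_per_period(self) -> float:
        # One transaction: the user sees a single combined price.
        return round(self.base_price + sum(self.addons.values()), 2)

order = SubscriptionOrder("premium", 9.99, billing_period_days=30)
order.add_addon("extra-profiles", 1.99)
order.add_addon("offline-mode", 0.99)
print(order.total_per_period())  # → 12.97
```

The key property the real feature provides is the shared `billing_period_days`: upgrades, downgrades, and add-on changes never fork the subscriber onto a second payment schedule.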
You can now sell base subscriptions and add-ons together in a single, streamlined transaction
#2: Showcasing benefits in more places across Play: Increasing visibility and value
We’re also making it easier for you to retain more of your subscribers by showcasing subscription benefits in more key areas across Play. This includes the Subscriptions Center, within reminder emails, and even during the purchase and cancellation processes. This increased visibility has already proved effective, reducing voluntary churn by 2%. To take advantage of this powerful new capability, be sure to enter your subscription benefits details in Play Console.
To help reduce voluntary churn, we’re showcasing your subscription benefits across Play
#3: New grace period and account hold duration: Decreasing involuntary churn
Another way we’re helping you maximize your revenue is by extending grace periods and account hold durations to tackle unintended subscription losses, which often occur when payment methods unexpectedly decline.
Now, you can customize both the grace period (when users retain access while renewal is attempted) and the account hold period (when access is suspended). You can set a grace period of up to 30 days and an account hold period of up to 60 days. However, the total combined recovery period (grace period + account hold) cannot exceed 60 days.
This means instead of an immediate cancellation, your users have a longer window to update their payment information. Developers who’ve already extended their decline recovery period—from 30 to 60 days—have seen impressive results, with an average 10% reduction in involuntary churn for renewals. Ready to see these results for yourself? Adjust your grace period and account hold durations in Play Console today.
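The three limits above compose into a simple rule. As an illustrative check (not a Play Billing API), the function name and structure below are our own:

```python
# Recovery-window rules described above: grace period up to 30 days,
# account hold up to 60 days, combined total capped at 60 days.
MAX_GRACE_DAYS = 30
MAX_HOLD_DAYS = 60
MAX_COMBINED_DAYS = 60

def validate_recovery_window(grace_days: int, hold_days: int) -> bool:
    """Return True if the configuration satisfies all three limits."""
    return (
        0 <= grace_days <= MAX_GRACE_DAYS
        and 0 <= hold_days <= MAX_HOLD_DAYS
        and grace_days + hold_days <= MAX_COMBINED_DAYS
    )

print(validate_recovery_window(30, 30))  # → True  (30 + 30 = 60, at the cap)
print(validate_recovery_window(30, 60))  # → False (90 days exceeds the cap)
```

In other words, choosing the maximum 30-day grace period leaves at most 30 days of account hold before the combined 60-day cap is reached.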
Developers who extend their decline recovery period see an average 10% reduction in involuntary churn
But that’s not all. We’re constantly investing in ways to help you optimize conversion throughout the entire buyer lifecycle. This includes boosting purchase-readiness by prompting users to set up payment methods and verification right from device setup, and we’ve integrated these prompts into highly visible areas like the Play and Google account menus. Beyond that, we’re continuously enabling payments in more markets and expanding payment options. Our AI models are even working to optimize in-app transactions by suggesting the right payment method at the right time, and we’re bringing buyers back with effective cart abandonment reminders.
That’s it for our top announcements from Google I/O 2025, but there are many more updates to discover from this year’s event. Check out What’s new in Google Play to learn more, and dive deeper into the session details with the Google Play I/O playlist for all the announcements.
https://www.youtube.com/watch?v=T41OD37tI54
-
How Mecha BREAK is driving PC-only growth on Google Play Games
Posted by Kosuke Suzuki – Director, Games on Google Play
On July 1, Amazing Seasun Games is set to unveil its highly anticipated action shooter, Mecha BREAK, with a multiplatform launch across PC and console. A key part of its PC growth strategy is Google Play Games on PC, which lets the team build excitement with a pre-registration campaign, maximize revenue with PC earnback, and ensure a secure, top-tier experience on PC.
Building momentum with pre-registration
With a legacy of creating high-quality games since 1995, Amazing Seasun Games has already seen Mecha BREAK attract over 3.5 million players during the last beta test. To build on this momentum, the studio is bringing their game to Google Play Games on PC to open pre-registration and connect with its massive player audience.
“We were excited to launch on Google Play Games on PC. We want to make sure all players can enjoy the Mecha BREAK experience worldwide.”
– Kris Kwok, Executive Producer of Mecha BREAK and CEO of Amazing Seasun Games
Mecha BREAK pre-registration on the Google Play Games on PC homepage
Accelerating growth with the native PC program
Mecha BREAK’s launch strategy includes leveraging native PC earnback, a program that gives native PC developers the opportunity to unlock up to 15% in additional earnback.
Beyond earnback, the program offers comprehensive support for PC game development, distribution, and growth. Developers can manage PC builds in Play Console, simplifying the process of packaging PC versions, configuring releases, and managing store listings. Now, you can also view PC-specific sales reports, providing a more precise analysis of your game’s financial performance.
Delivering a secure and high quality PC experience
Mecha BREAK is designed to deliver an intense and high-fidelity experience on PC. Built on a cutting-edge, proprietary 3D engine, the game offers players three unique modes of fast-paced combat on land and in the air.
- Diverse combat styles: Engage in six-on-six hero battles, three-on-three matches, or the unique PvPvE extraction mode “Mashmak”.
- Free customization options: Create personalized characters with a vast array of colors, patterns and gameplay styles, from close-quarters brawlers to long-range tactical units.
Mecha BREAK offers a high-fidelity experience on PC
The decision to integrate with Google Play Games on PC was driven by the platform’s robust security infrastructure, including tools such as the Play Integrity API, which supports large-scale global games like Mecha BREAK.
“Mecha BREAK’s multiplayer setting made Google Play Games a strong choice, as we expect exceptional operational stability and performance. The platform also offers advanced malware protection and anti-cheat capabilities.”
– Kris Kwok, Executive Producer of Mecha BREAK and CEO of Amazing Seasun Games
Bring your game to Google Play Games on PC
This year, the native PC program is open to all PC games, including PC-only titles. If you’re ready to expand your game’s reach and accelerate its growth, learn more about the eligibility requirements and how to join the program today.
-
Top 3 Updates for Android Developer Productivity @ Google I/O ‘25
Posted by Meghan Mehta – Android Developer Relations Engineer
https://www.youtube.com/watch?v=-GikklXjkgM
#1 Agentic AI is available for Gemini in Android Studio
Gemini in Android Studio is the AI-powered coding companion that makes you more productive at every stage of the dev lifecycle. At Google I/O 2025 we previewed new agentic AI experiences: Journeys for Android Studio and Version Upgrade Agent. These innovations make it easier for you to build and test code. We also announced Agent Mode, which was designed to handle complex, multi-stage development tasks that go beyond typical AI assistant capabilities, invoking multiple tools to accomplish tasks on your behalf. We’re excited to see how you leverage these agentic AI experiences, which are now available in the latest preview version of Android Studio on the canary release channel.
You can also use Gemini to automatically generate Jetpack Compose previews, as well as transform UI code using natural language, saving you time and effort. Give Gemini more context by attaching images and project files to your prompts, so you can get more relevant responses. And if you’re looking for enterprise-grade privacy and security features backed by Google Cloud, Gemini in Android Studio for businesses is now available. Developers and admins can unlock these features and benefits by subscribing to Gemini Code Assist Standard or Enterprise editions.
https://www.youtube.com/watch?v=KXKP2tDPW4Y
#2 Build better apps faster with the latest stable release of Jetpack Compose
Compose is our recommended UI toolkit for Android development, used by over 60% of the top 1K apps on Google Play. We released a new version of our Jetpack Navigation library: Navigation 3, which has been rebuilt from the ground up to give you more flexibility and control over your implementation. We unveiled the new Material 3 Expressive update which provides tools to enhance your product’s appeal by harnessing emotional UX, making it more engaging, intuitive, and desirable for your users. The latest stable Bill of Materials (BOM) release for Compose adds new features such as autofill support, auto-sizing text, visibility tracking, animate bounds modifier, accessibility checks in tests, and more! This release also includes significant rewrites and improvements to multiple sub-systems including semantics, focus and text optimizations.
These optimizations are available to you with no code changes other than upgrading your Compose dependency. If you’re looking to try out new Compose functionality, the alpha BOM offers new features that we’re working on including pausable composition, updates to LazyLayout prefetch, context menus, and others. Finally, we’ve added Compose support to CameraX and Media3, making it easier to integrate camera capture and video playback into your UI with Compose idiomatic components.
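Since the release notes say the optimizations arrive just by upgrading the Compose dependency, the change can be as small as bumping the BOM. A minimal sketch, assuming the Gradle Kotlin DSL; the version string below is illustrative, so substitute the current stable BOM:

```kotlin
// build.gradle.kts (module) — the BOM pins compatible versions
// for every Compose artifact listed without an explicit version.
dependencies {
    implementation(platform("androidx.compose:compose-bom:2025.05.00"))
    implementation("androidx.compose.ui:ui")
    implementation("androidx.compose.material3:material3")
    // To try in-progress features such as pausable composition,
    // swap in the alpha BOM: androidx.compose:compose-bom-alpha
}
```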
https://www.youtube.com/watch?v=89UusPuz8q4
#3 The new Kotlin Multiplatform (KMP) shared module template helps you share business logic
KMP enables teams to deliver quality Android and iOS apps with less development time. The KMP ecosystem continues to grow: last year alone, over 900 new KMP libraries were published. At Google I/O we released a new Android Studio KMP shared module template to help you craft and manage business logic, updated Jetpack libraries and new codelabs (Getting started with Kotlin Multiplatform and Migrating your Room database to KMP) to help you get started with KMP. We also shared additional announcements at KotlinConf.
https://www.youtube.com/watch?v=gP5Y-ct6QXI
Learn more about what we announced at Google I/O 2025 to help you build better apps, faster.
-
Top 3 things to know for AI on Android at Google I/O ‘25
Posted by Kateryna Semenova – Sr. Developer Relations Engineer
AI is reshaping how users interact with their favorite apps, opening new avenues for developers to create intelligent experiences. At Google I/O, we showcased how Android is making it easier than ever for you to build smart, personalized and creative apps. And we’re committed to providing you with the tools needed to innovate across the full development stack in this evolving landscape.
This year, we focused on making AI accessible across the spectrum, from on-device processing to cloud-powered capabilities. Here are the top 3 announcements you need to know for building with AI on Android from Google I/O ‘25:
#1 Leverage the efficiency of Gemini Nano for on-device AI experiences
https://www.youtube.com/watch?v=mP9QESmEDls
For on-device AI, we announced a new set of ML Kit GenAI APIs powered by Gemini Nano, our most efficient and compact model designed and optimized for running directly on mobile devices. These APIs provide high-level, easy integration for common tasks including text summarization, proofreading, rewriting content in different styles, and generating image descriptions. Building on-device offers significant benefits such as local data processing and offline availability at no additional cost for inference. To start integrating these solutions, explore the ML Kit GenAI documentation and the sample on GitHub, and watch the “Gemini Nano on Android: Building with on-device GenAI” talk.
#2 Seamlessly integrate on-device ML/AI with your own custom models
https://www.youtube.com/watch?v=xLmJJk1gbuE
The Google AI Edge platform enables building and deploying a wide range of pretrained and custom models on edge devices and supports various frameworks like TensorFlow, PyTorch, Keras, and JAX, allowing for more customization in apps. The platform now also offers improved support for on-device hardware accelerators and a new AI Edge Portal service for broad coverage of on-device benchmarking and evals. If you are looking for GenAI language models on devices where Gemini Nano is not available, you can use other open models via the MediaPipe LLM Inference API.
Serving your own custom models on-device can pose challenges related to handling large model downloads and updates, impacting the user experience. To improve this, we’ve launched Play for On-Device AI in beta. This service is designed to help developers manage custom model downloads efficiently, ensuring the right model size and speed are delivered to each Android device precisely when needed.
For more information, watch the “Small language models with Google AI Edge” talk.
#3 Power your Android apps with Gemini Flash, Pro and Imagen using Firebase AI Logic
https://www.youtube.com/watch?v=U8Nb68XsVY4
For more advanced generative AI use cases, such as complex reasoning tasks, analyzing large amounts of data, processing audio or video, or generating images, you can use larger models from the Gemini Flash and Gemini Pro families, and Imagen, running in the cloud. These models are well suited for scenarios requiring advanced capabilities or multimodal inputs and outputs. And since the AI inference runs in the cloud, any Android device with an internet connection is supported. They are easy to integrate into your Android app by using Firebase AI Logic, which provides a simplified, secure way to access these capabilities without managing your own backend. Its SDK also includes support for conversational AI experiences using the Gemini Live API or generating custom contextual visual assets with Imagen. To learn more, check out our sample on GitHub and watch the “Enhance your Android app with Gemini Pro and Flash, and Imagen” session.
These powerful AI capabilities can also be brought to life in immersive Android XR experiences. You can find corresponding documentation, samples and the technical session: “The future is now, with Compose and AI on Android XR“.
Figure 1: Firebase AI Logic integration architecture
Get inspired and start building with AI on Android today
We released a new open source app, Androidify, to help developers build AI-driven Android experiences using Gemini APIs, ML Kit, Jetpack Compose, CameraX, Navigation 3, and adaptive design. Users can create a personalized Android bot with Gemini and Imagen via the Firebase AI Logic SDK. Additionally, it incorporates ML Kit pose detection to detect a person in the camera viewfinder. The full code sample is available on GitHub for exploration and inspiration. Discover additional AI examples in our Android AI Sample Catalog.
The original image and the Androidify-ed image
Choosing the right Gemini model depends on understanding your specific needs and the model’s capabilities, including modality, complexity, context window, offline capability, cost, and device reach. To explore these considerations further and see all our announcements in action, check out the AI on Android at I/O ‘25 playlist on YouTube and our documentation.
We are excited to see what you will build with the power of Gemini!
-
Top 3 updates for building excellent, adaptive apps at Google I/O ‘25
Posted by Mozart Louis – Developer Relations Engineer
Today, Android is launching a few updates across the platform! This includes the start of Android 16’s rollout, with details for both developers and users, a Developer Preview for enhanced Android desktop experiences with connected displays, and updates for Android users across Google apps and more, plus the June Pixel Drop. We’re also recapping all the Google I/O updates for Android developers focused on building excellent, adaptive Android apps.
Google I/O 2025 brought exciting advancements to Android, equipping you with essential knowledge and powerful tools you need to build outstanding, user-friendly applications that stand out.
If you missed any of the key #GoogleIO25 updates, just saw the release of Android 16, or you’re ready to dive into building excellent adaptive apps, our playlist is for you. Learn how to craft engaging experiences with Live Updates in Android 16, capture video effortlessly with CameraX, process it efficiently using Media3’s editing tools, and engage users across diverse platforms like XR, Android for Cars, Android TV, and Desktop.
https://www.youtube.com/watch?v=KiYHuY3hiZc
Check out the Google I/O playlist for all the session details.
Here are three key announcements directly influencing how you can craft deeply engaging experiences and truly connect with your users:
#1: Build adaptively to unlock 500 million devices
https://www.youtube.com/watch?v=15oPNK1W0Tw
In today’s diverse device ecosystem, users expect their favorite applications to function seamlessly across various form factors, including phones, tablets, Chromebooks, automobiles, and emerging XR glasses and headsets. Our recommended approach for developing applications that excel on each of these surfaces is to create a single, adaptive application. This strategy avoids the need to rebuild the application for every screen size, shape, or input method, ensuring a consistent and high-quality user experience across all devices.
The talk emphasizes that you don’t need to rebuild apps for each form factor. Instead, small, iterative changes can unlock an app’s potential.
Here are some resources we encourage you to use in your apps:
New feature support in Jetpack Compose Adaptive Libraries
- We’re continuing to make it as easy as possible to build adaptively with the Jetpack Compose Adaptive Libraries, with new features in 1.1 like pane expansion and predictive back. By using canonical layout patterns such as List-Detail or Supporting Pane and integrating them into your app code, your application will automatically adjust and reflow when resized.
Navigation 3
- The alpha release of the Navigation 3 library now supports displaying multiple panes. This eliminates the need to alter your navigation destination setup for separate list and detail views. Instead, you can adjust the setup to concurrently render multiple destinations when sufficient screen space is available.
Updates to Window Manager Library
- androidx.window 1.5 introduces two new window size classes for expanded widths, facilitating better layout adaptation for large tablets and desktops. A width of 1600dp or more is now categorized as “extra large,” while widths between 1200dp and 1600dp are classified as “large.” These subdivisions offer more granularity for developers to optimize their applications for a wider range of window sizes.
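The width buckets above can be sketched as a simple classifier. This is an illustrative, language-agnostic sketch (not the androidx.window API); the compact/medium/expanded breakpoints at 600dp and 840dp follow the existing WindowSizeClass conventions, while “large” and “extra large” are the new 1.5 additions described above:

```python
def width_size_class(width_dp: float) -> str:
    """Map a window width in dp to its size-class bucket."""
    if width_dp >= 1600:
        return "extra large"   # new in androidx.window 1.5
    if width_dp >= 1200:
        return "large"         # new in androidx.window 1.5
    if width_dp >= 840:
        return "expanded"
    if width_dp >= 600:
        return "medium"
    return "compact"

print(width_size_class(412))   # typical phone → compact
print(width_size_class(1280))  # large tablet / small desktop window → large
print(width_size_class(1920))  # desktop → extra large
```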
Support all orientations and be resizable
Extend to Android XR
Upgrade your Wear OS apps to Material 3 Design
You should build a single, adaptive mobile app that brings the best experiences to all Android surfaces. By building adaptive apps, you meet users where they are today and in the future, enhancing user engagement and app discoverability. This approach represents a strategic business decision that optimizes an app’s long-term success.
#2: Enhance your app’s performance optimization
https://www.youtube.com/watch?v=IaNpcrCSDiI
Get ready to take your app’s performance to the next level! Google I/O 2025 brought an inside look at cutting-edge tools and techniques to boost user satisfaction, enhance technical performance metrics, and drive those all-important key performance indicators. Imagine an end-to-end workflow that streamlines performance optimization.
Redesigned UiAutomator API
- To make benchmarking reliable and reproducible, there’s the brand new UiAutomator API. Write robust test code and run it on your local devices or in Firebase Test Lab, ensuring consistent results every time.
Macrobenchmarks
- Once your tests are in place, it’s time to measure and understand. Macrobenchmarks give you the hard data, while App Startup Insights provide actionable recommendations for improvement. Plus, you can get a quick snapshot of your app’s health with the App Performance Score via DAC. These tools combined give you a comprehensive view of your app’s performance and where to focus your efforts.
R8: More than code shrinking and obfuscation
- You might know R8 as a code shrinking tool, but it’s capable of so much more! The talk dives into R8’s capabilities using the “Androidify” sample app. You’ll see how to apply R8, troubleshoot issues (like crashes!), and configure it for optimal performance. It also shows how library developers can include consumer keep rules so that their important code is not stripped when used in an application.
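As a sketch of the consumer keep rules mentioned above (the package and class names are hypothetical), a library can bundle a rules file in its AAR so that R8 preserves the library's reflection-accessed entry points in any app that shrinks with it:

```
# consumer-rules.pro — shipped inside the library's AAR and merged
# into the consuming app's R8 configuration automatically.

# Keep a class the library instantiates reflectively (hypothetical name):
-keep class com.example.mylib.PluginEntryPoint { *; }

# Keep only annotated members, leaving the rest free to be shrunk:
-keepclassmembers class * {
    @com.example.mylib.KeepForApi <methods>;
}
```

The advantage of consumer rules over asking app developers to copy rules into their own configuration is that the protection travels with the library and cannot be forgotten.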
#3: Build Richer Image and Video Experiences
https://www.youtube.com/watch?v=3zXVPU2vKXs
In today’s digital landscape, users increasingly expect seamless content creation capabilities within their apps. To meet this demand, developers require robust tools for building excellent camera and media experiences.
Media3Effects in CameraX Preview
- The Google I/O session delves into practical strategies for capturing high-quality video using CameraX while simultaneously applying Media3 effects to the preview.
Google Low-Light Boost
- Google Low Light Boost in Google Play services enables real-time dynamic camera brightness adjustment in low light, even without device support for Low Light Boost AE Mode.
New Camera & Media Samples!
Learn more about how CameraX & Media3 can accelerate your development of camera and media related features.
Learn how to build adaptive apps
Want to learn more about building excellent, adaptive apps? Watch this playlist to learn more about all the session details.
-
Google CEO Sundar Pichai Is ‘Vibe Coding’ a Website for Fun
Google and Alphabet CEO Sundar Pichai disclosed that he has been “vibe coding,” or using AI to code for him through prompts, to build a webpage.
Pichai said on Wednesday at Bloomberg Tech in San Francisco that he had been experimenting with AI coding assistants Cursor and Replit, both of which are advertised as able to create code from text prompts, to build a new webpage.
Related: Here’s How Much a Typical Google Employee Makes in a Year
“I’ve just been messing around — either with Cursor or I vibe coded with Replit — trying to build a custom webpage with all the sources of information I wanted in one place,” Pichai said, per Business Insider.
Pichai said that he had “partially” completed the webpage, and that coding had “come a long way” from its early days.
Vibe coding is a term coined by OpenAI co-founder Andrej Karpathy. In a post on X in February, Karpathy described how AI tools are getting good enough that software developers can “forget that the code even exists.” Instead, they can ask for AI to code on their behalf and create a project or web app without writing a line of code themselves.
There’s a new kind of coding I call “vibe coding”, where you fully give in to the vibes, embrace exponentials, and forget that the code even exists. It’s possible because the LLMs (e.g. Cursor Composer w Sonnet) are getting too good. Also I just talk to Composer with SuperWhisper…
— Andrej Karpathy (@karpathy) February 2, 2025
The rise of vibe coding has led AI coding assistants to explode in popularity. One AI coding tool, Cursor, became the fastest-growing software app to reach $100 million in annual revenue in January. Almost all of Cursor’s revenue comes from 360,000 individual subscribers, not big enterprises. However, that balance could change: As of earlier this week, Amazon is reportedly in talks to adopt Cursor for its employees.
Another coding tool, Replit, says it has enabled users to make more than two million apps in six months. The company has 34 million global users as of November.
Related: This AI Startup Spent $0 on Marketing. Its Revenue Just Hit $200 Million.
Noncoders are using vibe coding to bring their ideas to life. Lenard Flören, a 28-year-old art director with no prior coding experience, told NBC News last month that he used AI tools to vibe code a personalized workout tracking app. Harvard University neuroscience student Rishab Jain, 20, told the outlet that he used Replit to vibe code an app that translates ancient texts into English. Instead of downloading someone else’s app and paying a subscription fee, “now you can just make it,” Jain said.
Popular vibe coding tools offer a free entry point, as well as paid subscription plans. Replit has a free tier, a $20-per-month Core plan with expanded capabilities such as unlimited private and public apps, and a $35-per-user, per-month Teams subscription. Cursor also has a free tier, a $20-per-month Pro plan, and a $40-per-user, per-month business subscription.
Despite the existence of vibe coding, Pichai still thinks that human software engineers are necessary. At Bloomberg Tech on Wednesday, Pichai said that Google will keep hiring human engineers and growing its engineering workforce “even into next year” because a bigger workforce “allows us to do more.”
“I just view this [AI] as making engineers dramatically more productive,” he said.
Alphabet is the fifth most valuable company in the world with a market cap of $2 trillion.
-
Texas Requires Apple and Google to Verify Ages for App Downloads
The state’s governor signed a new law that will give parents more control over the apps that minors download, part of a raft of new legislation.
-
Android Design at Google I/O 2025
Posted by Ivy Knight – Senior Design Advocate
Here’s your guide to the essential Android Design sessions, resources, and announcements for I/O ‘25:
Check out the latest Android updates
The Android Show: I/O Edition
The Android Show had a special I/O edition this year with some exciting announcements like Material 3 Expressive!
Learn more about the new Live Update notification templates in the Android Notifications & Live Updates session for an in-depth look at what they are, when to use them, and why. You can also get the Live Update design template in the Android UI Kit, read more in the updated Notification guidance, and get hands-on with the Jetsnack Live Updates and Widget case study.
Make your apps more expressive
Get a jump on the future of Google’s UX design: Material 3 Expressive. Learn how to use new emotional design patterns to boost engagement, usability, and desire for your product in the Build Next-Level UX with Material 3 Expressive session and check out the expressive update on Material.io.
Stay up to date with Android Accessibility Updates, highlighting accessibility features launching with Android 16: enhanced dark themes, options for those with motion sickness, a new way to increase text contrast, and more.
Catch the Mastering text input in Compose session to learn more about how engaging robust text experiences are built with Jetpack Compose. It covers Autofill integration, dynamic text resizing, and custom input transformations. This is a great session to watch to see what’s possible when designing text inputs.
Thinking across form factors
These design resources and sessions can help you design across more Android form factors or update your existing experiences.
Preview Gemini in the car and imagine seamless navigation and personalized entertainment in the New In-Car App Experiences session. Then explore the new Car UI Design Kit to bring your app to Android car platforms and speed up your process with the latest Android form factor kit.
The Engaging with users on Google TV with excellent TV apps session discusses new ways the Google TV experience is making it easier for users to find and engage with content, including improvements to out-of-box solutions and updates to Android TV OS.
Want a peek at how to bring immersive content, like 3D models, to Android XR? Check out the Building differentiated apps for Android XR with 3D Content session.
Plus, Wear OS is releasing an updated design kit in the @AndroidDesign Figma, along with a learning pathway.
Tip top apps
We’ve also released the following new Android design guidance to help you design the best Android experiences:
Read up on the latest suggested patterns to build out your app’s settings.
Along with settings, learn about adding help and feedback to your app.
Does your app need setup? New guidance can help you add configuration to your app’s widgets.
Allow your apps to take full advantage of the entire screen with the latest guidance on designing for edge-to-edge.
Check out figma.com/@androiddesign for even more new and updated resources.
Visit the I/O 2025 website, build your schedule, and engage with the community. If you are at the Shoreline, come say hello to us in the Android tent at our booths.
We can’t wait to see what you create with these new tools and insights. Happy I/O!
Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.