Tag: Things

  • Top 3 things to know for AI on Android at Google I/O ‘25



    Posted by Kateryna Semenova – Sr. Developer Relations Engineer

    AI is reshaping how users interact with their favorite apps, opening new avenues for developers to create intelligent experiences. At Google I/O, we showcased how Android is making it easier than ever for you to build smart, personalized and creative apps. And we’re committed to providing you with the tools needed to innovate across the full development stack in this evolving landscape.

    This year, we focused on making AI accessible across the spectrum, from on-device processing to cloud-powered capabilities. Here are the top 3 announcements you need to know for building with AI on Android from Google I/O ‘25:

    #1 Leverage the efficiency of Gemini Nano for on-device AI experiences

    https://www.youtube.com/watch?v=mP9QESmEDls

For on-device AI, we announced a new set of ML Kit GenAI APIs powered by Gemini Nano, our most efficient and compact model, designed and optimized for running directly on mobile devices. These APIs provide high-level, easy integration for common tasks including text summarization, proofreading, rewriting content in different styles, and generating image descriptions. Building on-device offers significant benefits such as local data processing and offline availability, at no additional cost for inference. To start integrating these solutions, explore the ML Kit GenAI documentation and the sample on GitHub, and watch the “Gemini Nano on Android: Building with on-device GenAI” talk.
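As a concrete starting point, here is a minimal Kotlin sketch of the summarization flow. It assumes the API shape described in the ML Kit GenAI documentation; the option values and the `context`, `articleText`, and `onNewText` references are illustrative, so verify exact class names against the current release.

```kotlin
import com.google.mlkit.genai.summarization.*

// Configure an on-device summarizer backed by Gemini Nano.
// Option values are examples; see the ML Kit GenAI docs for the full set.
val summarizer = Summarization.getClient(
    SummarizerOptions.builder(context)
        .setInputType(InputType.ARTICLE)
        .setOutputType(OutputType.ONE_BULLET)
        .setLanguage(Language.ENGLISH)
        .build()
)

// Run inference; partial results stream into the callback as the model
// generates them, entirely on-device.
val request = SummarizationRequest.builder(articleText).build()
summarizer.runInference(request) { newText ->
    onNewText(newText) // e.g., append to a TextView
}
```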

    #2 Seamlessly integrate on-device ML/AI with your own custom models

    https://www.youtube.com/watch?v=xLmJJk1gbuE

The Google AI Edge platform enables building and deploying a wide range of pretrained and custom models on edge devices and supports various frameworks like TensorFlow, PyTorch, Keras, and JAX, allowing for more customization in apps. The platform now also offers improved support for on-device hardware accelerators and a new AI Edge Portal service for broad coverage of on-device benchmarking and evals. If you are looking for GenAI language models on devices where Gemini Nano is not available, you can use other open models via the MediaPipe LLM Inference API.
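To illustrate, a minimal Kotlin sketch of the MediaPipe LLM Inference API is below; the model path and file name are placeholders for whatever compatible model bundle you push to the device.

```kotlin
import com.google.mediapipe.tasks.genai.llminference.LlmInference

// Point the task at an open model bundle on the device (the path and
// file name are illustrative placeholders).
val options = LlmInference.LlmInferenceOptions.builder()
    .setModelPath("/data/local/tmp/llm/model.task")
    .setMaxTokens(512)
    .build()
val llmInference = LlmInference.createFromOptions(context, options)

// Single-shot generation; generateResponseAsync() streams partial
// results instead, which is usually better for UI.
val answer = llmInference.generateResponse("Explain edge AI in one sentence.")
```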

    Serving your own custom models on-device can pose challenges related to handling large model downloads and updates, impacting the user experience. To improve this, we’ve launched Play for On-Device AI in beta. This service is designed to help developers manage custom model downloads efficiently, ensuring the right model size and speed are delivered to each Android device precisely when needed.

For more information, watch the “Small language models with Google AI Edge” talk.

    #3 Power your Android apps with Gemini Flash, Pro and Imagen using Firebase AI Logic

    https://www.youtube.com/watch?v=U8Nb68XsVY4

For more advanced generative AI use cases, such as complex reasoning tasks, analyzing large amounts of data, processing audio or video, or generating images, you can use larger models from the Gemini Flash and Gemini Pro families, as well as Imagen, running in the cloud. These models are well suited for scenarios requiring advanced capabilities or multimodal inputs and outputs. And since the AI inference runs in the cloud, any Android device with an internet connection is supported. These models are easy to integrate into your Android app using Firebase AI Logic, which provides a simplified, secure way to access these capabilities without managing your own backend. Its SDK also includes support for conversational AI experiences using the Gemini Live API, and for generating custom contextual visual assets with Imagen. To learn more, check out our sample on GitHub and watch the “Enhance your Android app with Gemini Pro and Flash, and Imagen” session.
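As a sketch, the Kotlin integration via Firebase AI Logic can be as small as the following, assuming the Gemini Developer API backend and an illustrative model name; check the Firebase AI Logic documentation for current model identifiers.

```kotlin
import com.google.firebase.Firebase
import com.google.firebase.ai.ai
import com.google.firebase.ai.type.GenerativeBackend

// Obtain a cloud-hosted model through Firebase AI Logic; no backend of
// your own is required. The model name is an example.
val model = Firebase.ai(backend = GenerativeBackend.googleAI())
    .generativeModel("gemini-2.5-flash")

// generateContent is a suspend function; call it from a coroutine.
suspend fun askGemini(prompt: String): String? {
    val response = model.generateContent(prompt)
    return response.text
}
```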

These powerful AI capabilities can also be brought to life in immersive Android XR experiences. You can find corresponding documentation, samples and the technical session: “The future is now, with Compose and AI on Android XR”.

Flow chart demonstrating Firebase AI Logic integration architecture

    Figure 1: Firebase AI Logic integration architecture

    Get inspired and start building with AI on Android today

We released a new open source app, Androidify, to help developers build AI-driven Android experiences using Gemini APIs, ML Kit, Jetpack Compose, CameraX, Navigation 3, and adaptive design. Users can create a personalized Android bot with Gemini and Imagen via the Firebase AI Logic SDK. Additionally, the app incorporates ML Kit pose detection to detect a person in the camera viewfinder. The full code sample is available on GitHub for exploration and inspiration. Discover additional AI examples in our Android AI Sample Catalog.
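For the pose-detection piece specifically, the general ML Kit pattern looks like the Kotlin sketch below; the `analyzeFrame` helper and its person check are an illustrative simplification, not Androidify's actual code.

```kotlin
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.pose.PoseDetection
import com.google.mlkit.vision.pose.defaults.PoseDetectorOptions

// STREAM_MODE is the low-latency setting intended for live camera frames.
val poseDetector = PoseDetection.getClient(
    PoseDetectorOptions.Builder()
        .setDetectorMode(PoseDetectorOptions.STREAM_MODE)
        .build()
)

// Process one viewfinder frame; landmarks being present is a simple
// proxy for "a person is in the shot".
fun analyzeFrame(image: InputImage, onResult: (Boolean) -> Unit) {
    poseDetector.process(image)
        .addOnSuccessListener { pose ->
            onResult(pose.allPoseLandmarks.isNotEmpty())
        }
        .addOnFailureListener { onResult(false) }
}
```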

moving image of the Androidify app converting a photo of a woman in a red jacket, black shirt and pants, and sunglasses into a 3D droid with matching skin tone, hair, and outfit

The original image and the Androidified image

Choosing the right Gemini model depends on understanding your specific needs and each model’s capabilities, including modality, complexity, context window, offline capability, cost, and device reach. To explore these considerations further and see all our announcements in action, check out the AI on Android at I/O ‘25 playlist on YouTube and explore our documentation.

    We are excited to see what you will build with the power of Gemini!



    Source link

  • 16 things to know for Android developers at Google I/O 2025



    Posted by Matthew McCullough – VP of Product Management, Android Developer

    Today at Google I/O, we announced the many ways we’re helping you build excellent, adaptive experiences, and helping you stay more productive through updates to our tooling that put AI at your fingertips and throughout your development lifecycle. Here’s a recap of 16 of our favorite announcements for Android developers; you can also see what was announced last week in The Android Show: I/O Edition. And stay tuned over the next two days as we dive into all of the topics in more detail!

    Building AI into your Apps

    1: Building intelligent apps with Generative AI

Generative AI enhances your app’s experience by making it intelligent, personalized, and agentic. This year, we announced new ML Kit GenAI APIs using Gemini Nano for common on-device tasks like summarization, proofreading, rewriting, and image description. We also provided capabilities for developers to harness more powerful models such as Gemini Pro, Gemini Flash, and Imagen via Firebase AI Logic for more complex use cases like image generation and processing extensive data across modalities, including bringing AI to life in Android XR. And we released a new AI sample app, Androidify, that showcases how these APIs can transform your selfies into unique Android robots! To start building intelligent experiences with these new capabilities, explore the developer documentation and sample apps, and watch the overview session to choose the right solution for your app.

    New experiences across devices

    2: One app, every screen: think adaptive and unlock 500 million screens

Mobile Android apps form the foundation across phones, foldables, tablets, and ChromeOS, and this year we’re helping you bring them to cars and XR and expanding usage with desktop windowing and connected displays. This expansion means tapping into an ecosystem of 500 million devices – a significant opportunity to engage more users when you think adaptive, building a single mobile app that works across form factors. Resources, including the Compose Layouts library and Jetpack Navigation updates, help make building these dynamic experiences easier than before. You can see how Peacock, NBCUniversal’s streaming service (available in the US), is building adaptively to meet users where they are.

    https://www.youtube.com/watch?v=ooRcQFMYzmA

    Disclaimer: Peacock is available in the US only. This video will only be viewable to US viewers.

    3: Material 3 Expressive: design for intuition and emotion

    The new Material 3 Expressive update provides tools to enhance your product’s appeal by harnessing emotional UX, making it more engaging, intuitive, and desirable for users. Check out the I/O talk to learn more about expressive design and how it inspires emotion, clearly guides users toward their goals, and offers a flexible and personalized experience.

    moving image of Material 3 Expressive demo

    4: Smarter widgets, engaging live updates

Measure the return on investment of your widgets (available soon) and easily create personalized widget previews with Glance 1.2. Promoted Live Updates notify users of important ongoing notifications and come with a new standardized Progress Style template.
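For orientation, a Glance widget declares its UI as composable content, which is also what the new preview support builds on; here is a minimal sketch (the class name is illustrative).

```kotlin
import android.content.Context
import androidx.glance.GlanceId
import androidx.glance.appwidget.GlanceAppWidget
import androidx.glance.appwidget.provideContent
import androidx.glance.text.Text

// A minimal Glance widget; a real app also registers a
// GlanceAppWidgetReceiver in the manifest to host it.
class HelloWidget : GlanceAppWidget() {
    override suspend fun provideGlance(context: Context, id: GlanceId) {
        provideContent {
            Text("Hello from Glance")
        }
    }
}
```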

    5: Enhanced Camera & Media: low light boost and battery savings

This year’s I/O introduces several camera and media enhancements. These include a software low light boost for improved photography in dim lighting, and native PCM offload, which allows the DSP to handle more audio playback processing and conserves battery. Explore our detailed sessions on built-in effects within CameraX and Media3 for further information.

    6: Build next-gen app experiences for Cars

    We’re launching expanded opportunities for developers to build in-car experiences, including new Gemini integrations, support for more app categories like Games and Video, and enhanced capabilities for media and communication apps via the Car App Library and new APIs. Alongside updated car app quality tiers and simplified distribution, we’ll soon be providing improved testing tools like Android Automotive OS on Pixel Tablet and Firebase Test Lab access to help you bring your innovative apps to cars. Learn more from our technical session and blog post on new in-car app experiences.

    7: Build for Android XR’s expanding ecosystem with Developer Preview 2 of the SDK

We announced Android XR in December, and today at Google I/O we shared a number of updates coming to the platform, including Developer Preview 2 of the Android XR SDK plus an expanding ecosystem of devices: in addition to the first Android XR headset, Samsung’s Project Moohan, you’ll also see more devices, including a new portable Android XR device from our partners at XREAL. There’s lots more to cover for Android XR: watch the Compose and AI on Android XR session and the Building differentiated apps for Android XR with 3D content session, and learn more about building for Android XR.

    product image of XREAL’s Project Aura against a nebulous black background

    XREAL’s Project Aura

    8: Express yourself on Wear OS: meet Material Expressive on Wear OS 6

This year we are launching Wear OS 6: the most powerful and expressive version of Wear OS. Wear OS 6 features Material 3 Expressive, a new UI design with personalized visuals and motion for user creativity, coming to Wear, Android, and Google apps later this year. Developers gain access to Material 3 Expressive on Wear OS through new Jetpack libraries: Wear Compose Material 3, which provides components for apps, and Wear ProtoLayout Material 3, which provides components and layouts for tiles. Get started with the Material 3 libraries and other updates on Wear.
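To give a flavor of the app-side library, here is a minimal composable sketch assuming the `Button` and `Text` components from Wear Compose Material 3; the screen and callback names are illustrative.

```kotlin
import androidx.compose.runtime.Composable
import androidx.wear.compose.material3.Button
import androidx.wear.compose.material3.Text

// A tiny Wear OS screen built from Material 3 Expressive components.
@Composable
fun StartWorkoutScreen(onStart: () -> Unit) {
    Button(onClick = onStart) {
        Text("Start workout")
    }
}
```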

    moving image displays examples of Material 3 Expressive on Wear OS experiences

    Some examples of Material 3 Expressive on Wear OS experiences

    9: Engage users on Google TV with excellent TV apps

    You can leverage more resources within Compose’s core and Material libraries with the stable release of Compose for TV, empowering you to build excellent adaptive UIs across your apps. We’re also thrilled to share exciting platform updates and developer tools designed to boost app engagement, including bringing Gemini capabilities to TV in the fall, opening enrollment for our Video Discovery API, and more.

    Developer productivity

    10: Build beautiful apps faster with Jetpack Compose

    Compose is our big bet for UI development. The latest stable BOM release provides the features, performance, stability, and libraries that you need to build beautiful adaptive apps faster, so you can focus on what makes your app valuable to users.
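In practice, adopting the stable BOM means pinning a single platform artifact so individual Compose dependencies can omit versions; the BOM version below is illustrative, so use the latest stable release.

```kotlin
// app/build.gradle.kts
dependencies {
    // The BOM pins mutually compatible versions for all Compose artifacts.
    implementation(platform("androidx.compose:compose-bom:2025.05.00"))
    // No explicit versions needed on Compose libraries after the BOM.
    implementation("androidx.compose.material3:material3")
    implementation("androidx.compose.ui:ui-tooling-preview")
}
```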

    moving image of compose adaptive layouts updates in the Google Play app

    Compose Adaptive Layouts Updates in the Google Play app

    11: Kotlin Multiplatform: new Shared Template lets you build across platforms, easily

    Kotlin Multiplatform (KMP) enables teams to reach new audiences across Android and iOS with less development time. We’ve released a new Android Studio KMP shared module template, updated Jetpack libraries and new codelabs (Getting started with Kotlin Multiplatform and Migrating your Room database to KMP) to help developers who are looking to get started with KMP. Shared module templates make it easier for developers to craft, maintain, and own the business logic. Read more on what’s new in Android’s Kotlin Multiplatform.
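At the core of a shared module is Kotlin’s expect/actual mechanism; here is a minimal sketch (the file layout and names follow common KMP conventions rather than the template’s exact output).

```kotlin
// shared/src/commonMain/kotlin/Platform.kt
expect fun platformName(): String

fun greeting(): String = "Hello from ${platformName()}!"

// shared/src/androidMain/kotlin/Platform.android.kt
actual fun platformName(): String =
    "Android ${android.os.Build.VERSION.SDK_INT}"

// shared/src/iosMain/kotlin/Platform.ios.kt
actual fun platformName(): String =
    platform.UIKit.UIDevice.currentDevice.systemName()
```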

    12: Gemini in Android Studio: AI Agents to help you work

    Gemini in Android Studio is the AI-powered coding companion that makes Android developers more productive at every stage of the dev lifecycle. In March, we introduced Image to Code to bridge the gap between UX teams and software engineers by intelligently converting design mockups into working Compose UI code. And today, we previewed new agentic AI experiences, Journeys for Android Studio and Version Upgrade Agent. These innovations make it easier to build and test code. You can read more about these updates in What’s new in Android development tools.

    https://www.youtube.com/watch?v=ubyPjBesW-8

    13: Android Studio: smarter with Gemini

    In this latest release, we’re empowering devs with AI-driven tools like Gemini in Android Studio, streamlining UI creation, making testing easier, and ensuring apps are future-proofed in our ever-evolving Android ecosystem. These innovations accelerate development cycles, improve app quality, and help you stay ahead in a dynamic mobile landscape. To take advantage, upgrade to the latest Studio release. You can read more about these innovations in What’s new in Android development tools.

    moving image of Gemini in Android Studio Agentic Experiences including Journeys and Version Upgrade

    And the latest on driving business growth

    14: What’s new in Google Play

    Get ready for exciting updates from Play designed to boost your discovery, engagement and revenue! Learn how we’re continuing to become a content-rich destination with enhanced personalization and fresh ways to showcase your apps and content. Plus, explore powerful new subscription features designed to streamline checkout and reduce churn. Read I/O 2025: What’s new in Google Play to learn more.

    a moving image of three mobile devices displaying how content is displayed on the Play Store

    15: Start migrating to Play Games Services v2 today

    Play Games Services (PGS) connects over 2 billion gamer profiles on Play, powering cross-device gameplay, personalized gaming content and rewards for your players throughout the gaming journey. We are moving PGS v1 features to v2 with more advanced features and an easier integration path. Learn more about the migration timeline and new features.
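For orientation, the v2 integration path is slim: initialize the SDK once and check the automatic sign-in result, as in this hedged Kotlin sketch.

```kotlin
import android.app.Activity
import android.app.Application
import com.google.android.gms.games.PlayGames
import com.google.android.gms.games.PlayGamesSdk

// v2 signs returning players in automatically; initialize once at startup.
class MyApplication : Application() {
    override fun onCreate() {
        super.onCreate()
        PlayGamesSdk.initialize(this)
    }
}

// Then gate PGS features (achievements, leaderboards, saves) on the result.
fun checkSignIn(activity: Activity) {
    PlayGames.getGamesSignInClient(activity)
        .isAuthenticated()
        .addOnCompleteListener { task ->
            val signedIn = task.isSuccessful && task.result.isAuthenticated
            // If signedIn is false, offer a manual sign-in button that
            // calls the client's signIn() method.
        }
}
```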

    16: And of course, Android 16

    We unpacked some of the latest features coming to users in Android 16, which we’ve been previewing with you for the last few months. If you haven’t already, make sure to test your apps with the latest Beta of Android 16. Android 16 includes Live Updates, professional media and camera features, desktop windowing and connected displays, major accessibility enhancements and much more.

    Check out all of the Android and Play content at Google I/O

    This was just a preview of some of the cool updates for Android developers at Google I/O, but stay tuned to Google I/O over the next two days as we dive into a range of Android developer topics in more detail. You can check out the What’s New in Android and the full Android track of sessions, and whether you’re joining in person or around the world, we can’t wait to engage with you!

    Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.



    Source link

  • Your Clients Are Using AI to Replace You — Do These 3 Things Before They Do


    Opinions expressed by Entrepreneur contributors are their own.

    If you think using AI to save time is enough — you’re already at risk.

    Your clients aren’t just admiring your efficiency. They’re studying it to replace you. AI now delivers 80% of what most service providers offer — at a fraction of the cost. Freelancers, consultants and agencies are getting blindsided as their clients quietly build AI workflows that eliminate the need to rehire. In this video, I’ll show you how to flip the script and become irreplaceable.

    While most professionals are still stuck using AI for content drafts or task automation, the smartest entrepreneurs are repositioning themselves as designers of outcomes, not just doers of work.

    Inside, you’ll learn the three steps to audit, evolve, and future-proof your offer — before your clients replace it.

    • How to spot the hidden weakness in your offer before your clients do
      If you don’t audit your service, your clients will — and when they realize AI can do it faster and cheaper, it’s game over. I’ll show you the first move to make now.

    • Why “doing the work” is making you replaceable — and what to do instead
      Execution used to be enough. Not anymore. Discover how to shift into the only role AI can’t automate (and clients will actually pay a premium for).

    • The one thing AI can’t replicate — and why it’s now your greatest asset
      It’s not your skills. It’s not your speed. Learn how to turn your story and perspective into a positioning moat that makes you untouchable — even if AI clones your voice.

    Whether you’re a solo consultant or leading a lean team, this is your blueprint for staying one step ahead of AI — and 10 steps ahead of your competition.

Download the free “AI Success Kit” (limited time only), and you’ll also get a free chapter from my brand-new book, “The Wolf is at The Door – How to Survive and Thrive in an AI-Driven World.”



    Source link