Blog

  • Instagram Head Adam Mosseri Experiences Google Phishing Scam



    CEOs of Big Tech, they’re just like us.

    The Head of Instagram, Adam Mosseri, 42, says he was very close to being a victim of a well-played phishing scheme that involved some very real-looking “secure Google domains.”

    Writing on Threads, which, like Instagram, is owned by Mark Zuckerberg’s Meta, Mosseri said on Tuesday that he “experienced a sophisticated phishing attack yesterday.”

    Related: Mark Cuban’s Google Account Was Hacked By ‘Sophisticated’ Bad Actors

    Mosseri said he got a call from an 818 number (and he answered). The caller said that his “Google account was compromised, and they sent an email to confirm identity.”

    “On the phone, they asked me to change my password using my Gmail app and to *not* say my new password out loud. What was impressive was their email came from forms-receipts-noreply@google.com and linked to sites.google.com/view…, which of course asked me to sign in…,” he continued.

    “The email and the form both coming from secure Google domains (via Google products) might have got me if I hadn’t heard from a friend who experienced a similar attack a year ago,” he added. “Anybody know someone at Google that might want this context?”

    Related: If Your Bank Is Calling, Don’t Answer. It’s Probably a Scam.

    Threads users, of course, had a field day in the comments. To start, many wondered how the top boss at Instagram doesn’t know someone at Google. There were also a lot of jokes.

    “>sophisticated attack, >Google called me,” one user replied.

    “Adam, I can help you out here. Just need your mom’s maiden name and the street you grew up on,” another responded.

    “Not the Head of Instagram believing Google calls you on the phone about resetting your password?” the comments continued.

    Google Workspace’s official Threads account thanked Mosseri for “flagging” and reminded him that the company will “never” call you.

    Related: Andy Cohen Lost ‘A Lot of Money’ to a Highly Sophisticated Scam

    “We suspended that form and site yesterday, and we constantly roll out defenses against these types of attacks. As a reminder: Google will never call you about your account,” they wrote, adding a link to their “how to spot scams” blog.

    Other users said it reminded them of a similar Google phishing scheme from 2022.

    Still, with all the competition in Silicon Valley, we couldn’t help but wonder: Do all executives at Meta use Gmail and Google’s suite of products?






    Source link

  • What’s new in Android development tools



    Posted by Mayank Jain – Product Manager, Android Studio

    Android Studio continues to advance Android development by empowering developers to build better app experiences, faster. Our focus has been on improving AI-driven functionality with Gemini, streamlining UI creation and testing, and helping you future-proof apps for the evolving Android ecosystem. These innovations accelerate development cycles, improve app quality, and help you stay ahead in the fast-paced world of mobile development.

    You can check out the What’s new in Android Developer Tools session at Google I/O 2025 to see some of the new features in action or, better yet, try them out yourself by downloading Android Studio Narwhal Feature Drop (2025.1.2) in the preview release channel. Here’s a look at our latest developments:

    Get the latest Gemini 2.5 Pro model in Android Studio

    The power of artificial intelligence through Gemini is now deeply integrated into Android Studio, helping you at all stages of Android app development. Now with access to Gemini 2.5 Pro, we’re continuing to look for new ways to use AI to supercharge Android development — and help you build better app experiences, faster.

    Journeys for Android Studio

    We’re also introducing agentic AI with Gemini in Android Studio. Testing your app is now much easier when you create journeys – just describe the actions and assertions in natural language for the user journeys you want to test, and Gemini performs the tests for you. Creating journeys lets you test your app’s critical user journeys across various devices without writing extensive code. You can then run these tests on local physical or virtual Android devices and validate that the tests worked as intended by reviewing detailed results directly within the IDE. Although the feature is experimental, the goal is to increase the speed at which you can ship high-quality code, while significantly reducing the amount of time you spend manually testing, validating, or reproducing issues.

    moving image of Gemini testing an app in Android Studio

    Journeys for Android Studio uses Gemini to test your app.

    https://www.youtube.com/watch?v=mP1tlIKK0R4

    Suggested fixes for crashes with Gemini

    The App Quality Insights panel has a great new feature. Crash insights now analyzes your app’s source code referenced from the crash and not only offers a comprehensive analysis and explanation of the crash; in some cases it even offers a source fix! With just a few clicks, you can review the changes, accept the code suggestions, and push the changes to your source control. Now you can determine the root cause of a crash and fix it much faster!

    screenshot of crash analysis with Gemini in Android Studio

    Crash analysis with Gemini

    AI features in Studio Labs (stable releases only)

    We’ve heard feedback that developers want access to AI features in stable channels as soon as possible. Starting with the Narwhal stable release, you can discover and try out the latest experimental AI features through the Studio Labs menu in Settings. You can get a first look at AI experiments, share your feedback, and help us bring them into the IDE you use every day. Go to the Studio Labs tab in Settings and enable the features you would like to start using. These AI features are automatically enabled in canary releases, where no action is required.

    screenshot of AI features in Studio Labs

    AI features in Studio Labs

      • Compose preview generation with Gemini

        Gemini can automatically generate Jetpack Compose preview code, saving you time and effort. You can access this feature by right-clicking within a composable and navigating to Gemini > Generate Compose Preview or Generate Compose Preview for this file, or by clicking the link in an empty preview panel. The generated preview code is presented in a diff view that enables you to quickly accept, edit, or reject the suggestions, providing a faster way to visualize your composables (a sketch of typical generated preview code appears after this list).

        moving image of compose preview generation with gemini in Android Studio

        Compose Preview generation with Gemini

      • Transform UI with Gemini

        You can now transform UI code within the Compose Preview environment using natural language directly in the preview. To use it, right-click in the Compose Preview and select “Transform UI With Gemini”. Then enter your natural language request, such as “Center align these buttons,” to guide Gemini in adjusting your layout or styling, or select specific UI elements in the preview for better context. Gemini will then edit your Compose UI code in place, which you can review and approve, speeding up the UI development workflow.

        side by side screenshots showing transforming UI with Gemini in Android Studio

        Transform UI with Gemini

      • Image attachment in Gemini

        You can now attach image files and provide additional information along with your prompt. For example, you can attach UI mock-ups or screenshots to give Gemini context about your app’s layout. Gemini can then generate Compose code based on a provided image, or explain the composables and data flow of a UI screenshot.

        screenshot of image attachment and preview generation via Gemini in Android Studio

        Image attachment and preview generation via Gemini in Android Studio

      • @File context in Gemini

        You can now attach your project files as context in chat interactions with Gemini in Android Studio. This lets you quickly reference files in your prompts for Gemini. In the Gemini chat input, type @ to bring up a file completion menu and select files to attach. You can also click the Context drop-down to see which files were automatically attached by Gemini. This gives you more control over the context sent to Gemini.

        screenshot of @File context in Gemini in Android Studio

        @File context in Gemini
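
    For reference, here is a minimal sketch of the kind of Jetpack Compose preview boilerplate the generation feature produces; Greeting and MyAppTheme are hypothetical stand-ins for your own composable and theme:

    // A typical generated preview: a @Preview-annotated composable that wraps
    // your composable in your app's theme with sample arguments.
    @Preview(showBackground = true)
    @Composable
    fun GreetingPreview() {
        MyAppTheme {                    // hypothetical app theme
            Greeting(name = "Android")  // hypothetical composable under preview
        }
    }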

    Rules in Prompt Library

    Rules in Gemini let you define preferred coding styles or output formats within the Prompt Library. You can also mention your preferred tech stack and languages. When you set these preferences once, they are automatically applied to all subsequent prompts sent to Gemini. Rules help the AI understand project standards and preferences for more accurate and tailored code assistance. For example, you can create a rule such as “Always give me concise responses in Kotlin.”

    prompt library in Android Studio

    Prompt Library Improvements

    Gemini in Android Studio for businesses

    Gemini in Android Studio for businesses is now available. It provides all the benefits of Gemini in Android Studio, plus enterprise-grade privacy and security features backed by Google Cloud — giving your team the confidence they need to deploy AI at scale while keeping their data protected.

    Developers and admins can unlock these features and benefits by subscribing to the Gemini Code Assist Standard or Enterprise editions. Discover the full list of Gemini in Android Studio for business features available for your organization.

    Improved tools for creating great user experiences

    Elevate your Compose UI development with the latest Android Studio enhancements.

    Compose preview improvements

    Compose preview interaction is now more efficient with the latest navigation improvements. Click on the preview name to jump to the preview definition or click the individual component to jump to the function where it’s defined. Hover states provide immediate visual feedback as you mouse over a preview frame. Improved keyboard arrow navigation eases movement through multiple previews, enabling faster UI iteration and refinement. Additionally, the Compose preview picker is now also available in the stable release.

    moving image of compose preview navigation improvements in Android Studio

    Compose preview navigation improvements

    Compose preview picker in Android Studio

    Compose preview picker

    Resizable Previews

    While in Compose Preview’s focus mode in Android Studio, you can now resize the preview window by dragging its edges. This gives you instant visual feedback on how your UI adapts to different screen sizes, ensuring responsiveness and visual consistency. This rapid iteration helps create UIs that look great on any Android device.


    Resizable Preview

    Embedded Android XR Emulator

    The Android XR Emulator now launches by default in the embedded state. You can now deploy your application, navigate the 3D space and use the Layout Inspector directly inside Android Studio, streamlining your development flow.

    Embedded XR emulator in Android Studio

    Embedded XR Emulator

    Improved tools for future-proofing and testing your Android apps

    We’ve enhanced some of your favorite features so that you can test more confidently, future-proof your apps, and ensure app compatibility across a wide range of devices and Android versions.

    Streamlined testing with Backup and Restore support

    Android Studio offers built-in Backup and Restore support by letting you trigger app backups on connected devices directly from the Running Devices window. You can also configure your Run/Debug settings to automatically restore from a previous backup when launching your app. This simplifies the process of validating your app’s Backup and Restore implementation and speeds up development by reducing manual setup for testing.

    Streamlined testing with backup and restore support in Android Studio

    Streamlined testing with Backup and Restore support

    Android’s transition to 16 KB Page Size

    The underlying architecture of Android is evolving, and a key step forward is the transition to 16 KB page sizes. This fundamental change requires all Android apps with native code or dependencies to be recompiled for compatibility. To help you navigate this transition smoothly, Android Studio now offers proactive warnings when building APKs or Android App Bundles that are incompatible with 16 KB devices. Using the APK Analyzer, you can also find out which libraries are incompatible with 16 KB devices. To test your apps in this new environment, a dedicated 16 KB emulator target is also available in Android Studio alongside existing 4 KB images.

    Android’s transition to 16 KB page size in Android Studio

    Android’s transition to 16 KB page size
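
    As a concrete preparation step, Android’s 16 KB guidance centers on packaging native libraries uncompressed so that recent Android Gradle Plugin versions can align them for 16 KB page sizes. A minimal build.gradle.kts sketch, assuming a standard app module:

    // build.gradle.kts (app module): a minimal sketch based on Android's 16 KB guidance.
    android {
        packaging {
            jniLibs {
                // Store native libraries uncompressed; recent AGP versions then
                // align them so they can load on 16 KB page-size devices.
                useLegacyPackaging = false
            }
        }
    }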

    Backup and Sync your Studio settings

    When you sign in with your Google account or a JetBrains account in Android Studio, you can now sync your customizations and preferences across all installs and restore preferences automatically on remote Android Studio instances. Simply select “Enable Backup and Sync” while you’re logging in to Android Studio, or from the Settings > Backup and Sync page, and follow the prompts.

    Backup and sync settings in Android Studio

    Backup and Sync your Studio settings

    Increasing developer productivity with Android’s Kotlin Multiplatform improvements

    Kotlin Multiplatform (KMP) enables teams to reach new audiences across Android and iOS with less development time. Usage has been growing in the developer community, with apps such as Google Docs now using it in production. We’ve released new Android Studio KMP project templates, updated Jetpack libraries and new codelabs (Get Started with KMP and Migrate Existing Apps to Room KMP) to help developers who are looking to get started with KMP.
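
    If you’re new to KMP, the core pattern the templates scaffold is expect/actual: shared code declares what it needs, and each platform supplies an implementation. A minimal sketch, with platformName as a hypothetical example:

    // In commonMain: shared code declares an expected platform API.
    expect fun platformName(): String

    fun greeting(): String = "Hello from ${platformName()}"

    // In androidMain: the Android-specific actual implementation.
    actual fun platformName(): String = "Android ${android.os.Build.VERSION.SDK_INT}"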

    Experimental and features that are coming soon to Android Studio

    Android Studio Cloud (experimental)

    Android Studio Cloud is now available as an experimental public preview, accessible through Firebase Studio. This service streams a Linux virtual machine running Android Studio directly to your web browser, enabling Android application development from anywhere with an internet connection. Get started quickly with dedicated workspaces featuring pre-downloaded Android SDK components. Explore sample projects or seamlessly access your existing Android app projects from GitHub without a local installation. Please note that Android Studio Cloud is currently in an experimental phase. Features and capabilities are subject to significant change, and users may encounter known limitations.

    Android Studio Cloud

    Version Upgrade Agent (coming soon)

    The Version Upgrade Agent, as part of Gemini in Android Studio, is designed to save you time and effort by automating your dependency upgrades. It intelligently analyzes your Android project, parses the release notes for included libraries, and proposes updates directly from your libs.versions.toml file or the refactoring menu (right-click > Refactor > Update dependencies). The agent automatically updates dependencies to the latest compatible version, builds the project, fixes any errors, and repeats until all errors are fixed. Once the dependencies are upgraded, the agent generates a report showing the changes it made, as well as a high level summary highlighting the changes included in the updated libraries.

    Version Upgrade Agent in Android Studio

    Version Upgrade Agent

    https://www.youtube.com/watch?v=ubyPjBesW-8

    Agent Mode (coming soon)

    Agent Mode is a new autonomous AI feature using Gemini, designed to handle complex, multi-stage development tasks that go beyond typical AI assistant capabilities, invoking multiple tools to accomplish tasks on your behalf.

    You can describe a complex goal, like integrating a new API, and the agent will formulate an execution plan that spans across files in your project — adding necessary dependencies, editing files, and iteratively fixing bugs. This feature aims to empower all developers to tackle intricate challenges and accelerate the building and prototyping process. You can access it via the Gemini chat window in Android Studio.

    Agent Mode in Android Studio

    Agent Mode

    Play Policy Insights beta in Android Studio (coming soon)

    Android Studio now includes richer insights and guidance on Google Play policies that might impact your app. This information, available as lint checks, helps you build safer apps from the start, preventing issues that could disrupt your launch process and cost more time and resources to fix later on. These lint checks will present an overview of the policy, do and don’ts, and links to Play policy pages where you can find more information about the policy.

    Play Policy Insights beta in Android Studio

    Play Policy Insights beta in Android Studio

    IntelliJ Platform Update (2025.1)

    Here are some important IDE improvements in the IntelliJ IDEA 2025.1 platform release:

      • Kotlin K2 mode: Android Studio now supports Kotlin K2 mode in Android-specific features requiring language support such as Live Edit, Compose Preview and many more

      • Improved dependency resolution in Kotlin build scripts: Makes your Kotlin build scripts for Android projects more stable and predictable

      • Hints about code alterations by Kotlin compiler plugins: Gives you clearer insights into how plugins used in Android development modify your Kotlin code

      • Automatic download of library sources for Gradle projects: Simplifies debugging and understanding your Android project dependencies by providing immediate access to their source code

      • Support for Gradle Daemon toolchains: Helps prevent potential JVM errors during your Android project builds and ensures smoother synchronization

      • Automatic plugin updates: Keeps your Android development tools within IntelliJ IDEA up-to-date effortlessly

    To Summarize

    Android Studio Narwhal Feature Drop (2025.1.2) is now available in the Android Studio canary channel with some amazing features to help your Android development.

    AI-powered development tools for Android

      • Journeys for Android Studio: Validate app flows easily using tests and assertions in natural language
      • Suggested fixes for crashes with Gemini: Determine the root cause of a crash and fix it much faster with Gemini
      • AI features in Studio Labs
          • Compose preview generation with Gemini: Generate Compose previews with Gemini’s code suggestions
          • Transform UI with Gemini: Transform UI in Compose Preview with natural language, speeding development
          • Image attachment in Gemini: Attach images to Gemini for context-aware code generation
          • @File context in Gemini: Reference project files in Gemini chats for quick AI prompts
      • Rules in Prompt Library: Define preferred coding styles or output formats within the Prompt Library

    Improved tools for creating great user experiences

      • Compose preview improvements: Navigate the Compose Preview using clickable names and components
      • Resizable preview: Instantly see how your Compose UI adapts to different screen sizes
      • Embedded XR Emulator: XR Emulator now launches by default in the embedded state

    Improved tools for future-proofing and testing your Android apps

      • Streamlined testing with Backup and Restore support: Effortless app testing, trigger backups, auto-restore for faster validation
      • Android’s transition to 16 KB Page Size: Prepare for Android’s 16 KB page size with Studio’s early warnings and testing
      • Backup and Sync your Studio settings: Sync Android Studio settings across devices and restore automatically for convenience
      • Increasing developer productivity with Android’s Kotlin Multiplatform improvements: Simplified cross-platform Android and iOS development with new tools

    Experimental and features that are coming soon to Android Studio

      • Android Studio Cloud (experimental): Develop Android apps from any browser with just an internet connection
      • Version Upgrade Agent (coming soon): Automated dependency updates save time and effort, ensuring projects stay current
      • Agent Mode (coming soon): Empowering developers to tackle multistage complex tasks that go beyond typical AI assistant capabilities
      • Play Policy Insights beta in Android Studio (coming soon): Insights and guidance on Google Play policies that might impact your app

    How to get started

    Ready to try the exciting new features in Android Studio?

    You can download the canary version of Android Studio Narwhal Feature Drop (2025.1.2) today to incorporate these new features into your workflow or try the latest AI features using Studio Labs in the stable version of Android Studio Meerkat. You can also install them side by side by following these instructions.

    As always, your feedback is important to us – check known issues, report bugs, suggest improvements, and be part of our vibrant community on LinkedIn, Medium, YouTube, or X. Let’s build the future of Android apps together!

    Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.





    Source link

  • In-App Ratings and Reviews for TV



    Posted by Paul Lammertsma – Developer Relations Engineer

    Ratings and reviews are essential for developers, offering quantitative and qualitative feedback on user experiences. In 2022, we enhanced the granularity of this feedback by segmenting these insights by countries and form factors.

    Now, we’re extending the In-App Ratings and Reviews API to TV to allow developers to prompt users for ratings and reviews directly from Google TV.

    Ratings and reviews on Google TV

    Ratings and reviews entry point for JetStream sample app on TV

    Users can now see rating averages, browse reviews, and leave their own review directly from an app’s store listing on Google TV.

    Ratings and written reviews input screen on TV

    Users can interact with in-app ratings and reviews on their TVs by doing the following:

      • Select ratings using the remote control D-pad.
      • Provide optional written reviews using Gboard’s on-screen voice input, or by easily typing from their phone.
      • Send mobile notifications to themselves to complete their TV app review directly on their phone.

    User instructions for submitting TV app ratings and reviews on mobile

    Additionally, users can leave reviews for other form factors directly from their phone by simply selecting the device chip when submitting an app rating or writing a review.

    We’ve already seen a considerable lift in app ratings on TV since bringing these changes to Google TV, and now, we’re making it possible for developers to trigger a ratings prompt as well.

    Before we look at the integration, let’s consider the best time to request a review prompt: identify optimal moments within your app to request user feedback, ensuring prompts appear only when the UI is idle to prevent interrupting ongoing content.

    In-App Review API

    Integrating the Google Play In-App Review API is the same as on mobile, and it takes only a couple of method calls:

    val manager = ReviewManagerFactory.create(context)
    manager.requestReviewFlow().addOnCompleteListener { task ->
        if (task.isSuccessful) {
            // We got the ReviewInfo object
            val reviewInfo = task.result
            manager.launchReviewFlow(activity, reviewInfo)
        } else {
            // There was some problem, log or handle the error code
            @ReviewErrorCode val reviewErrorCode =
                (task.getException() as ReviewException).errorCode
        }
    }
    

    First, invoke requestReviewFlow() to obtain a ReviewInfo object which is used to launch the review flow. You must include an addOnCompleteListener() not just to obtain the ReviewInfo object, but also to monitor for any problems triggering this flow, such as the unavailability of Google Play on the device. Note that ReviewInfo does not offer any insights on whether or not a prompt appeared or which action the user took if a prompt did appear.

    The challenge is to identify when to trigger launchReviewFlow(). Track user actions—identifying successful journeys and points where users encounter issues—so you can be confident they had a delightful experience in your app.

    For launchReviewFlow(), you may optionally also include an addOnCompleteListener() to resume your app’s flow when the returned task is completed.

    Note that due to throttling of how often users are presented with this prompt, there are no guarantees that the ratings dialog will appear when requesting to start this flow. For best practices, check this guide on when to request an in-app review.
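
    As a minimal sketch of this pattern, suppose your playback code calls a hypothetical onSeasonFinished() at a natural “successful journey” moment:

    // Hypothetical hook: called when the user finishes a season and the UI is idle.
    fun onSeasonFinished(activity: Activity) {
        val manager = ReviewManagerFactory.create(activity)
        manager.requestReviewFlow().addOnCompleteListener { task ->
            if (task.isSuccessful) {
                manager.launchReviewFlow(activity, task.result).addOnCompleteListener {
                    // The flow is complete; due to throttling a dialog may or may not
                    // have been shown. Resume the user's journey either way.
                }
            }
        }
    }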

    Get started with In-App Reviews on Google TV

    You can get a head start today by following these steps:

    1. Identify successful journeys for users, like finishing a movie or TV show season.
    2. Identify poor experiences that should be avoided, like buffering or playback errors.
    3. Integrate the Google Play In-App Review API to trigger review requests at optimal moments within the user journey.
    4. Test your integration by following the testing guide.
    5. Publish your app and continuously monitor your ratings by device type in the Play Console.

    We’re confident this integration enables you to elevate your Google TV app ratings and empowers your users to share valuable feedback.

    Play Console Ratings graphic

    Resources

    Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.



    Source link

  • What’s new in Google Play



    Posted by Paul Feng, VP of Product Management, Google Play

    At Google Play, we’re dedicated to helping people discover experiences they’ll love, while empowering developers like you to bring your ideas to life and build successful businesses.

    At this year’s Google I/O, we unveiled the latest ways we’re empowering your success with new tools that provide robust testing and actionable insights. We also showcased how we’re continuing to build a content-rich Play Store that fosters repeat engagement alongside new subscription capabilities that streamline checkout and reduce churn.

    Check out all the exciting developments from I/O below and learn how they’ll help you grow your business on Google Play.

    Helping you succeed every step of the way

    Last month, we introduced our latest Play Console updates focused on improving quality and performance. A redesigned app dashboard centered around four developer objectives (Test and release, Monitor and improve, Grow users, Monetize) and new Android vitals metrics offer quick insights and actionable suggestions to proactively improve the user experience.

    Get more actionable insights with new Play Console overview pages

    Building on these updates, we’ve launched dedicated overview pages for two developer objectives: Test and release and Monitor and improve. These new pages bring together more objective-related metrics, relevant features, and a “Take action” section with contextual, dynamic advice. Overview pages for Grow and Monetize will be coming soon.

    Halt fully-rolled out releases when needed

    Historically, a release at 100% live meant there was no turning back, leaving users stuck with a flawed version until a new update rolled out. Soon, you’ll be able to halt fully-live releases through Play Console and the Publishing API, stopping the distribution of problematic versions to new users.

    a moving screen grab of release manager in Play Console

    You’ll soon be able to halt fully live releases directly from Play Console and the Publishing API, stopping the distribution of problematic versions to new users.

    Optimize your store listings with better management tools and metrics

    We launched two tools to enhance your store listings. The asset library makes it easy to upload, edit, and view your visual assets. Upload them from Google Drive, organize with tags, and crop for repurposing. And with new open metrics, you gain deeper insights into listing performance so you can better understand how they attract, engage, and re-engage users.

    Stay ahead of threats with the Play Integrity API

    We’re committed to robust security and preventing abuse so you can thrive on Play’s trusted platform. The Play Integrity API continuously evolves to combat emerging threats, with these recent enhancements:

      • Stronger abuse detection for all developers that leverages the latest Android hardware security with no developer effort required.
      • Device security update checks to safeguard your app’s sensitive actions like transfers or data access.
      • Public beta for device recall, which enables you to detect if a device is being reused for abuse or repeated actions, even after a device reset. You can express interest in this beta.

    Unlocking more discovery and engagement for your apps and their content

    Last year, we shared our vision for a content-rich Google Play that has already delivered strong results. Year-over-year, Apps Home has seen over a 25% increase in average monthly visitors with apps seeing a 10% growth in acquisitions and double-digit growth in app spend for those monetizing on Google Play. Building on that vision, we’re introducing even more updates to elevate your discovery and engagement, both on and off the store.

    For example, curated spaces, launched last year, celebrate seasonal interests like football (soccer) in Brazil and cricket in India, and evergreen interests like comics in Japan. By adding daily content—match highlights, promotions, and editorial articles directly on the Apps Home—these spaces foster discovery and engagement. Curated spaces are a hit with over 920,000 highly engaged users in Japan returning to the comics space monthly. Building on this momentum, we are expanding to more locations and categories this year.

    a moving image of three mobile devices displaying curated spaces on the Play Store

    Our curated spaces add daily content to foster repeat discovery and engagement.

    We’re launching new topic browse pages that feature timely, relevant, and visually engaging content. Users can find them throughout the Store, including Apps Home, store listing pages, and search. These pages debut this month in the US with Media & Entertainment, showcasing over 100,000 shows, movies, and select sports. More localized topic pages will roll out globally later this year.

    a moving image of two mobile devices displaying new browse pages for media and entertainment in the Play Store

    New topic browse pages for media and entertainment are rolling out this month in the US.

    We’re expanding Where to Watch to more markets, including the UK, Korea, Indonesia, and Mexico, to help users find and deep-link directly into their subscribed apps for movies and TV. Since launching in the US in November 2024, we’ve seen promising results: People who view app content through Where to Watch return to Play more frequently and increase their content search activity by 30%.

    We’re also enhancing how your content is displayed on the Play Store. Starting this July, all app developers can add a hero content carousel and a YouTube playlist carousel to their store listings. These formats will help showcase your best content and drive greater user engagement and discovery.

    For apps best experienced through sound, we’re launching audio samples on the Apps Home. A simple tap offers users a brief escape into your audio content. In early testing, audio samples made users 3x more likely to install or open an app! This feature is now available for all Health & Wellness app developers with users in the US, with more categories and markets coming soon. You can express your interest in promoting audio content.

    a moving image of three mobile devices displaying how content is displayed on the Play Store

    We’re enhancing how your content is displayed on the Play Store, 
    offering new ways to showcase your app and drive user engagement.

    Helping you take advantage of deeper engagement on Play, on and off the Store

    Last year, we introduced Engage SDK, a unified solution to deliver personalized content and guide users to relevant in-app experiences. Integrating it unlocks surfaces like Collections, our immersive full-screen experience bringing content directly to the user’s home screen.

    We’re rolling out updates to expand your content’s reach even further:

      • Engage SDK content is coming to the Play Store this summer, in addition to existing spaces like Collections and Entertainment Space on select Android tablets.
      • New content categories are now supported, starting today with Travel.
      • Collections are rolling out globally to Google Play markets starting today, including Brazil, India, Indonesia, Japan, and Mexico.

    Integrate with Engage SDK today to take advantage of this new expansion and boost re-engagement. Try our codelab to test the ease of publishing content with Engage SDK and express interest in the developer preview.

    a mobile device displaying Collections on the Play Store

    Engage SDK now supports Collections for Travel. 
    Users can find timely itineraries and recent searches, all in one convenient place.

    Maximizing your revenue with subscriptions enhancements

    With over a quarter-billion subscriptions, Google Play is one of the world’s largest subscriptions platforms. We’re committed to helping you turn engaged users into revenue growth by continually enhancing our tools to meet evolving customer needs.

    To streamline your purchase flow, we’re introducing multi-product checkout for subscriptions. This lets you sell subscription add-ons alongside base subscriptions, all under a single, aligned payment schedule. Users get a simplified experience with one price and one transaction, while you gain more control over how subscribers upgrade, downgrade, or manage their add-ons.

    a mobile device displaying multi-product checkout, where a base subscription plus add-ons is shown in a single transaction on the Play Store

    You can now sell base subscriptions and add-ons together 
    in a single, streamlined transaction.

    To help you retain more of your subscribers, we’re now showcasing subscription benefits in more places across Play – including the Subscriptions Center, in reminder emails, and during purchase and cancellation flows. This increased visibility has already reduced voluntary churn by 2%. Be sure to enter your subscription benefits in Play Console so you can leverage this powerful new capability.

    five mobile devices showing subscriptions in Play

    To help reduce voluntary churn, we’re showcasing your subscriptions benefits across Play.

    Reducing involuntary churn is a key factor in optimizing your revenue. When payment methods unexpectedly decline, users might unintentionally cancel. Now, instead of immediate cancellation, you can choose a grace period (up to 30 days) or an account hold (up to 60 days). Developers who increased the decline recovery period – from 30 to 60 days – saw an average 10% reduction in involuntary churn for renewals.

    On top of this, we’re expanding our commitment to get more buyers ready for purchases throughout their entire journey. This includes prompting users to set up payment methods and verification right at device setup. After setup, we’ve integrated prompts into highly visible areas like the Play and Google account menus. And as always, we’re continuously enabling payments in more markets and expanding payment options. Plus, our AI models now help optimize in-app transactions by suggesting the right payment method at the right time, and we’re bringing buyers back with effective cart abandonment reminders.

    Grow your business on Google Play

    Our latest updates reinforce our commitment to fostering a thriving Google Play ecosystem. From enhanced discovery and robust tools to new monetization avenues, we’re empowering you to innovate and grow. We’re excited for the future we’re building together and encourage you to use these new capabilities to create even more impactful experiences. Thank you for being an essential part of the Google Play community.

    Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.



    Source link

  • TikTok Layoffs Hit E-Commerce Division in US, TikTok Shop



    TikTok’s e-commerce unit, TikTok Shop, is facing layoffs in the U.S.

    TikTok Shop head Mu Qing circulated an internal email to U.S. staff late Tuesday, telling them to work from home on Wednesday because some would receive emails indicating their roles had been cut. In the memo, which was viewed by Bloomberg, Mu advised staff to expect “operational and personnel changes” to TikTok’s U.S. operations and global key accounts divisions “beginning early on Wednesday.”

    Related: Microsoft’s Mass Layoffs Affected at Least 800 in Software Engineering, According to New Documents

    The global key accounts team works closely with large brands, while operations supports merchants, partners, and creators on TikTok.

    Mu wrote that TikTok was laying off workers to “create more efficient operating models for the team’s long-term growth,” and framed the job cuts as “difficult discussions,” per the memo.

    According to Business Insider, employees started receiving emails informing them that they were impacted by layoffs beginning Wednesday morning.

    It is unclear how many employees were affected by the layoffs. Mu stated in the memo that TikTok’s goal was to quickly tell impacted employees they were let go.

    Related: Google Layoffs Affect Hundreds in Division Working on Chrome Browser, Pixel Phones

    Last month, TikTok Shop let go of some U.S. employees as it restructured its governance and experience team.

    TikTok officially introduced Shop to the U.S. in September 2023. The shopping marketplace is a tab on the TikTok video app and features items for sale from third-party sellers. It keeps shoppers within TikTok to complete purchases and uses its engaging algorithm to suggest products customers might be interested in.

    In 2024, TikTok Shop attracted over 47 million U.S. shoppers, with Americans spending $32 million per day shopping on the social media app, according to Capital One research.

    This year, TikTok Shop sales have fallen due to tariffs, four TikTok Shop staffers told Business Insider. Tariffs rose as high as 145% on Chinese goods in mid-April. Earlier this month, the U.S. temporarily lowered tariffs on Chinese goods to 30% while China reduced its levies on U.S. imports from 125% to 10%. BI‘s sources disclosed that in early May, TikTok Shop’s daily U.S. sales from foreign sellers were down by close to 25% month-over-month due to tariffs.

    Related: ‘More Than Marketing Tools’: Some Business Owners Are Worried About the Possible TikTok Ban

    TikTok has 7,000 U.S. employees, with over 1,000 employees located near Seattle. The company has other offices in New York, California, and Texas, per Bloomberg.

    TikTok has until June 19 to find a new owner in the U.S. and separate from its parent company, ByteDance, or face a ban. The deadline is in response to a law passed by Congress in April 2024 and has been extended twice by President Donald Trump as TikTok attempts to find a buyer. Trump, who has previously stated that he has “a little warm spot” for TikTok, said earlier this month that he may extend the deadline further if no deal is reached by June 19.

    So far, TikTok has received bids from Oracle co-founder Larry Ellison, AI startup Perplexity, AppLovin, Amazon, and former LA Dodgers owner Frank McCourt Jr., who teamed up with Shark Tank investor Kevin O’Leary and Reddit co-founder Alexis Ohanian on The People’s Bid for TikTok, among others.




    Source link

  • Are NPS surveys still worth using?



    At Alchemer, we’ve spent decades immersed in customer feedback, helping brands collect it, make sense of it, and take positive action. And one metric comes up in just about every conversation: NPS.

    So, let’s talk about it. What does NPS do well? Where does it fall short? And most importantly: does it still deserve a place in your customer experience strategy? 

    What are NPS surveys? 

    Before diving in, let’s first give a quick refresher on NPS surveys.  

    Fred Reichheld, in collaboration with Bain & Company and Satmetrix, developed the Net Promoter Score (NPS) around a single, powerful question: “How likely are you to recommend this product or service to a friend?”

    Customers respond on a scale from 0 to 10. Respondents who answer 9 or 10 are classified as Promoters—loyal enthusiasts likely to spread the word. Respondents who choose 7 or 8 are Passives—satisfied but unenthusiastic customers unlikely to promote or criticize. Anyone rating 6 or below is a Detractor—an unhappy customer who could damage your reputation through negative word-of-mouth. 

    To calculate your NPS, simply subtract the percentage of Detractors from the percentage of Promoters. A score above 75 indicates excellence, and the result offers a quick snapshot of how your customers truly feel about your brand.
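
    For example, if 100 customers respond and 60 rate you a 9 or 10, 25 rate you a 7 or 8, and 15 rate you 6 or below, your NPS is 60% minus 15%, for a score of 45.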

    What NPS surveys are good for 

    Despite its simplicity, NPS has stood the test of time for a reason. When used correctly, it can offer valuable insights and help teams stay connected to the customer experience. Here are a few reasons why NPS still earns a spot in many feedback programs:  

    1. Simplicity 

    NPS is easy to roll out and easy to interpret. It doesn’t take a CX team weeks to analyze; you get an instant read on customer sentiment. That’s why companies across every industry, from airlines to leading SaaS platforms, still include it in their post-purchase or support surveys.

    2. Real-time signals 

    Because the question is quick to answer, many companies deploy it right after key touchpoints—like after onboarding, a support interaction, or delivery. When responses start trending downward, that can be an early warning sign of a bigger issue. 

    3. Predictive power 

    NPS has been shown to correlate with future loyalty and revenue. Promoters often spend more, churn less, and refer others. For growth-focused teams, knowing where you stand today can help predict performance down the road. 

    4. Benchmarking 

    Because so many companies use NPS, it provides a common measurement tool. You can compare your score to industry averages, track progress over time, or even set internal targets by department or region. 

    Where NPS falls short 

    As helpful as NPS can be, it’s not without its flaws. On its own, a single score doesn’t always tell the full story—or provide the depth needed to drive real improvements. Here are some of the most common limitations teams run into when relying too heavily on NPS. 

    1. Insights often lack context 

    Here’s the catch: NPS tells you how customers feel, but not why. A score of 4 doesn’t explain whether the issue was product performance, pricing, customer service, or all of the above. 

    Unless you pair it with an open-ended follow-up or additional questions, NPS alone leaves teams guessing. 

    Advancements in AI and open text analysis tools have made open-text questions easier to analyze at scale, helping teams quickly surface nuanced themes and uncover meaningful insights into customer sentiment. 

    2. Results are not always actionable 

    You can’t improve what you don’t understand. A low score without additional detail isn’t helpful to your support team. Likewise, a high score might feel great—but without insights, it’s hard to know what you’re doing right or how to replicate it. 

    3. Overused and misunderstood 

    Some companies lean too hard on NPS, treating it as a cure-all for customer insight. But customer experience is complex and nuance matters. Relying solely on a single number risks oversimplifying what should be a rich, ongoing dialogue with your customers. 

    The Verdict: NPS is a start, but brands need to go beyond 

    So, do NPS surveys still matter? Absolutely. But only when they’re part of a bigger picture and a broader feedback program.

    NPS is great at giving you a quick signal. But to truly understand your customers—and build lasting loyalty—you need to go deeper. That means asking smarter follow-up questions, analyzing trends over time, segmenting by persona or journey stage, and taking meaningful action based on what you learn. 

    The best organizations use NPS as the entry point, not the end point, for customer feedback.  

    At Alchemer, we help you connect that score to richer insights and real outcomes. Because knowing your number is just the beginning. Acting on it? That’s where the magic happens. 

    Still curious about how to elevate your NPS program? 
    Watch our latest webinar with Alchemer CMO Bo Bandy and SVP of Product & Services Ryan Tamminga.  



    Source link

  • 5 Great Local Multiplayer Games



    Heads Up! is the perfect party game! If you’re a game-nighter, you need this game installed and ready to go. It’s a ton of fun, and you won’t waste a bunch of time trying to explain the rules to everyone since it’s so simple.

    Created by Ellen DeGeneres, this riotous game-night challenge reinvents charades for the app generation. Pick a category, then hold your device up to your head, screen facing outward, and guess the words using your friends’ clues. It’s a riot to play, and you’re sure to have a great time with your friends enjoying this party game.

    The game can be played with only one other person, but where’s the fun in that? If you don’t have a large group, you can most likely skip this one.

    For the iPhone and iPad, the game is $1.99.



    Source link

  • Android Developers Blog: New in-car app experiences



    Posted by Ben Sagmoe – Developer Relations Engineer

    The in-car experience continues to evolve rapidly, and Google remains committed to pushing the boundaries of what’s possible. At Google I/O 2025, we’re excited to unveil the latest advancements for drivers, car manufacturers, and developers, furthering our goal of a safe, seamless, and helpful connected driving experience.

    Today’s car cabins are increasingly digital, offering developers exciting new opportunities with larger displays and more powerful computing. Android Auto is now supported in nearly all new cars sold, with almost 250 million compatible vehicles on the road.

    We’re also seeing significant growth in cars powered by Android Automotive OS with Google built-in. Over 50 models are currently available, with more launching this year. This growth is fueled by a thriving app ecosystem, including over 300 apps already available on the Play Store. These include apps optimized for a safe and seamless experience while driving as well as entertainment apps for while you’re parked and waiting in your car—many of which are adaptive mobile apps that have been seamlessly brought to cars through the Car Ready Mobile Apps Program.

    A vibrant developer community is essential to delivering these innovative in-car experiences utilizing the different screens within the car cabin. This past year, we’ve focused on key areas to help empower developers to build more differentiated experiences in cars across both platforms, as we embark on the Gemini era in cars!

    Gemini for Cars

    Exciting news for in-car experiences: Gemini, Google’s advanced AI, is coming to vehicles! This unlocks a new era of safe and helpful interactions on the go.

    Gemini enables natural voice conversations and seamless multitasking, empowering drivers to get more done simply by speaking naturally. Imagine effortlessly finding charging stations or navigating to a location pulled directly from an email, all with just your voice.

    You can learn how to leverage Gemini’s potential to create engaging in-car experiences in your app.

    Navigation apps can integrate with Gemini using three core intent formats, allowing you to start navigation, display relevant search results, and execute custom actions, such as enabling users to report incidents like traffic congestion using their voice.
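
    As a rough illustration of what intent-based integration looks like, a Car App Library navigation app already receives destinations via CarContext.ACTION_NAVIGATE; the Gemini-specific intent formats are detailed in the documentation. A minimal sketch, where startNavigationTo is a hypothetical helper in your app:

    // In your navigation app's Session: handle an incoming navigation intent.
    override fun onNewIntent(intent: Intent) {
        if (intent.action == CarContext.ACTION_NAVIGATE) {
            // The destination arrives as a geo: URI, e.g. "geo:0,0?q=ev+charging+station".
            intent.data?.let { destination -> startNavigationTo(destination) }
        }
    }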

    Gemini for cars will be rolling out in the coming months. Get ready to build the next generation of in-car AI experiences!

    New developer programs and tools

    Table of app categories showing availability in Android Auto and cars with Google built-in, including media, navigation, point-of-interest, internet of things, weather, video, browsers, games, and communication such as messaging and VoIP

    Last year, we introduced car app quality tiers to inspire developers to create high quality in-car experiences. By developing your app in compliance with the Car ready tier, you can bring video, gaming, or browser apps to run while parked in cars with Google built-in with almost no additional effort. Learn more about Car Ready Mobile Apps.

    Your app can further shine in cars within the Car optimized and Car differentiated tiers to unlock experiences while the car is in motion, and also when transitioning between parked and driving modes, while utilizing the different screens within the modern car cabin. Check the car app quality guidelines for details.

    To start, across both Android Auto and cars with Google built-in, we’ve made some exciting improvements to the Car App Library:

      • The Weather app category has graduated from beta: any developer can now publish weather apps to production tracks on both Android Auto and cars with Google Built-in. Before you publish your app, check that it meets the quality guidelines for weather apps.
      • Two new templates, the SectionedItemTemplate and MediaPlaybackTemplate, are now available in the Car App Library 1.8 alpha release for use on Android Auto. These templates are a great fit for building templated media apps, allowing for increased customization in layout and browsing structure.

        Example of SectionedItemTemplate (left) and MediaPlaybackTemplate (right)

    On Android Auto, many new app categories and capabilities are now in beta:

      • We are adding support for Building media apps with the Car App Library, enabling media app developers to build both richer and more complete experiences that users are used to on their phones. During beta, developers can build and publish media apps built using the Car App Library to internal testing and closed testing tracks. You can also express interest in being an early access partner to publish to production while the category is in beta. 

      • The communications category is in beta. We’ve simplified calling integration for calling apps by utilizing the CallsManager Jetpack API. Together with the templates provided by the Car App Library, this enables communications apps to build features like full message history, upcoming meetings list, rich in-call views, and more. During beta, developers can build and publish communications apps to internal testing and closed testing tracks. You can also express interest in being an early access partner to publish to production while the category is in beta.

      • Games are now supported in Android Auto, while parked, on phones running Android 15 and above. You can already find some popular titles like Angry Birds 2, Farm Heroes Saga, Candy Crush Soda Saga and Beach Buggy Racing 2. The Games category is in Beta and developers can publish games to internal testing and closed testing tracks. You can also express interest in being an early access partner to publish to production while the category is in beta.

    Finally, we have further simplified the building, testing, and distribution experience for developers building apps for Android Automotive OS cars with Google built-in:

      • Distribution through Google Play is more flexible than ever. It’s now possible for apps in the parked categories to distribute in the same APK or App Bundle to cars with Google built-in as to phones, including through the mobile release track. Learn more on how to Distribute to cars.

      • Android Automotive OS on Pixel Tablet is now generally available, giving you a physical device option for testing Android Automotive OS apps without buying or renting a car. Additionally, the most recent system images include support for acting as an Android Auto receiver, meaning you can use the same device to test both your app’s experience on Android Auto and Android Automotive OS. Apply for access to these images.

    The road ahead

    You can look forward to more updates later this year, including:

      • Video apps will be supported on Android Auto, starting with phones running Android 16 on select compatible cars. If your app is already adaptive, enabling your app experience while parked only requires minimal steps to distribute to cars.

      • For Android Automotive OS cars running Android 14+ with Google built-in, we are working with car manufacturers to add additional app compatibility, to enable thousands of adaptive mobile apps in the next phase of the Car Ready Mobile Apps Program.

      • Updated design documentation that visualizes car app quality guidelines and integration paths to simplify designing your app for cars.

      • Google Play Services for cars with Google built-in are expanding to bring them on par with mobile, including:
          • Passkeys and Credential Manager APIs for a more seamless user sign-in experience.
          • Quick Share, which will enable easy cross-device sharing from phone to car.

      • Pre-launch reports for Android Automotive OS are coming soon to the Play Console, helping you ensure app quality before distributing your app to cars.

    Be sure to keep up to date through goo.gle/cars-whats-new on these features and more as we continuously invest in the future of Android in the car. Stay tuned for more resources to help you build innovative and engaging experiences for drivers and passengers.

    Ready to publish your car app? Check our guidance for distributing to cars.

    Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.



    Source link

  • On-device GenAI APIs as part of ML Kit help you easily build with Gemini Nano



    Posted by Caren Chang – Developer Relations Engineer, Chengji Yan – Software Engineer, Taj Darra – Product Manager

    We are excited to announce a set of on-device GenAI APIs, as part of ML Kit, to help you integrate Gemini Nano in your Android apps.

    To start, we are releasing 4 new APIs:

      • Summarization: to summarize articles and conversations
      • Proofreading: to polish short text
      • Rewriting: to reword text in different styles
      • Image Description: to provide short descriptions for images

    Key benefits of GenAI APIs

    GenAI APIs are high-level APIs that allow for easy integration, similar to existing ML Kit APIs. This means you can expect quality results out of the box, without extra effort for prompt engineering or fine-tuning for specific use cases.

    GenAI APIs run on-device and thus provide the following benefits:

      • Input, inference, and output data is processed locally
      • Functionality remains the same without a reliable internet connection
      • No additional cost incurred for each API call

    To prevent misuse, we have also added safety protections at multiple layers, including base model training, safety-aware LoRA fine-tuning, input and output classifiers, and safety evaluations.

    How GenAI APIs are built

    There are 4 main components that make up each of the GenAI APIs.

    1. Gemini Nano is the base model, serving as the foundation shared by all APIs.
    2. Small API-specific LoRA adapter models are trained and deployed on top of the base model to further improve the quality of each API.
    3. Optimized inference parameters (e.g. prompt, temperature, topK, batch size) are tuned for each API to guide the model toward the best results.
    4. An evaluation pipeline ensures quality across various datasets and attributes. This pipeline consists of LLM raters, statistical metrics, and human raters.

    Together, these components make up the high-level GenAI APIs that simplify the effort needed to integrate Gemini Nano in your Android app.

    Evaluating quality of GenAI APIs

    For each API, we formulate a benchmark score based on the evaluation pipeline mentioned above. This score is based on attributes specific to each task. For example, when evaluating the summarization task, one of the attributes we look at is “grounding” (i.e., the factual consistency of the generated summary with the source content).

    To provide out-of-the-box quality for GenAI APIs, we applied feature-specific fine-tuning on top of the Gemini Nano base model. This resulted in an increase in the benchmark score of each API, as shown below:

    Use case (English)     Gemini Nano base model    ML Kit GenAI API
    Summarization          77.2                      92.1
    Proofreading           84.3                      90.2
    Rewriting              79.5                      84.1
    Image Description      86.9                      92.3

    In addition, this is a quick reference of how the APIs perform on a Pixel 9 Pro:

    Mode               Prefix speed (input processing rate)                   Decode speed (output generation rate)
    Text-to-text       510 tokens/second                                      11 tokens/second
    Image-to-text      510 tokens/second + 0.8 seconds for image encoding     11 tokens/second

    Sample usage

    This is an example of implementing the GenAI Summarization API to get a one-bullet summary of an article:

    // Imports assume the ML Kit GenAI summarization artifact
    // (com.google.mlkit.genai:summarization); verify package paths in the official docs.
    import android.content.Context
    import com.google.mlkit.genai.common.DownloadCallback
    import com.google.mlkit.genai.common.FeatureStatus
    import com.google.mlkit.genai.common.GenAiException
    import com.google.mlkit.genai.summarization.Summarization
    import com.google.mlkit.genai.summarization.SummarizationRequest
    import com.google.mlkit.genai.summarization.Summarizer
    import com.google.mlkit.genai.summarization.SummarizerOptions
    import kotlinx.coroutines.guava.await
    
    val articleToSummarize = "We are excited to announce a set of on-device generative AI APIs..."
    
    // Define task with desired input and output format
    val summarizerOptions = SummarizerOptions.builder(context)
        .setInputType(InputType.ARTICLE)
        .setOutputType(OutputType.ONE_BULLET)
        .setLanguage(Language.ENGLISH)
        .build()
    val summarizer = Summarization.getClient(summarizerOptions)
    
    suspend fun prepareAndStartSummarization(context: Context) {
        // Check feature availability. Status will be one of the following: 
        // UNAVAILABLE, DOWNLOADABLE, DOWNLOADING, AVAILABLE
        val featureStatus = summarizer.checkFeatureStatus().await()
    
        if (featureStatus == FeatureStatus.DOWNLOADABLE) {
            // Download feature if necessary.
            // If downloadFeature is not called, the first inference request will 
            // also trigger the feature to be downloaded if it's not already
            // downloaded.
            summarizer.downloadFeature(object : DownloadCallback {
                override fun onDownloadStarted(bytesToDownload: Long) { }
    
                override fun onDownloadFailed(e: GenAiException) { }
    
                override fun onDownloadProgress(totalBytesDownloaded: Long) {}
    
                override fun onDownloadCompleted() {
                    startSummarizationRequest(articleToSummarize, summarizer)
                }
            })    
        } else if (featureStatus == FeatureStatus.DOWNLOADING) {
            // Inference request will automatically run once feature is      
            // downloaded.
            // If Gemini Nano is already downloaded on the device, the   
            // feature-specific LoRA adapter model will be downloaded very  
            // quickly. However, if Gemini Nano is not already downloaded, 
            // the download process may take longer.
            startSummarizationRequest(articleToSummarize, summarizer)
        } else if (featureStatus == FeatureStatus.AVAILABLE) {
            startSummarizationRequest(articleToSummarize, summarizer)
        } 
    }
    
    fun startSummarizationRequest(text: String, summarizer: Summarizer) {
        // Create task request  
        val summarizationRequest = SummarizationRequest.builder(text).build()
    
        // Start summarization request with streaming response
        summarizer.runInference(summarizationRequest) { newText -> 
            // Show new text in UI
        }
    
        // You can also get a non-streaming response from the request
        // val summarizationResult = summarizer.runInference(summarizationRequest)
        // val summary = summarizationResult.get().summary
    }
    
    // Be sure to release the resource when no longer needed
    // For example, on viewModel.onCleared() or activity.onDestroy()
    summarizer.close()
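
    Since prepareAndStartSummarization is a suspend function, it needs to be launched from a coroutine scope. A minimal sketch, assuming an AndroidViewModel so that viewModelScope and an application Context are available:

    // Launch the availability check and summarization from a ViewModel.
    // viewModelScope is the standard AndroidX lifecycle-aware coroutine scope.
    viewModelScope.launch {
        prepareAndStartSummarization(getApplication())
    }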
    

    For more examples of implementing the GenAI APIs, check out the official documentation and the samples on GitHub.

    Use cases

    Here is some guidance on how to best use the current GenAI APIs:

    For Summarization, consider:

      • Conversation messages or transcripts that involve two or more users
      • Articles or documents under 4,000 tokens (about 3,000 English words). Summarizing just the first few paragraphs is usually enough to capture the most important information; see the trimming sketch below.
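
    For long documents, a minimal sketch of trimming the input to its opening paragraphs before summarizing; the blank-line paragraph heuristic and the paragraph count are assumptions for illustration, not part of the API:

    // Keep only the first few paragraphs to stay within the input limit.
    // Assumes paragraphs are separated by blank lines (a heuristic, not API behavior).
    fun firstParagraphs(text: String, maxParagraphs: Int = 5): String =
        text.split("\n\n").take(maxParagraphs).joinToString("\n\n")
    
    // Usage: val request = SummarizationRequest.builder(firstParagraphs(longArticle)).build()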

    For the Proofreading and Rewriting APIs, consider using them during the content-creation process for short content below 256 tokens, to help with tasks such as the following (a sketch follows this list):

      • Refining messages in a particular tone, such as more formal or more casual
      • Polishing personal notes for easier consumption later
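
    A minimal sketch of a Rewriting call, assuming the API mirrors the Summarization client pattern shown above; the names Rewriting, RewriterOptions, RewritingRequest, and the tone value OutputType.PROFESSIONAL are pattern-based assumptions, not confirmed API surface:

    // Hypothetical, pattern-mirroring sketch; verify names against the official docs.
    val rewriterOptions = RewriterOptions.builder(context)
        .setOutputType(OutputType.PROFESSIONAL) // assumed tone option
        .setLanguage(Language.ENGLISH)
        .build()
    val rewriter = Rewriting.getClient(rewriterOptions)
    
    suspend fun rewriteMessage(message: String) {
        val request = RewritingRequest.builder(message).build()
        // Stream the rewritten text into the UI as it is generated.
        rewriter.runInference(request) { newText ->
            // Show newText in the UI
        }
    }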

    For the Image Description API, consider it for the following (a sketch follows this list):

      • Generating titles of images
      • Generating metadata for image search
      • Utilizing descriptions of images in use cases where the images themselves cannot be displayed, such as within a list of chat messages
      • Generating alternative text to help visually impaired users better understand content as a whole
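
    Similarly, a minimal sketch of an Image Description call, again assuming the Summarization client pattern; ImageDescription, ImageDescriberOptions, and ImageDescriptionRequest are pattern-based assumptions to verify against the official documentation:

    // Hypothetical, pattern-mirroring sketch; verify names against the official docs.
    val describerOptions = ImageDescriberOptions.builder(context).build()
    val imageDescriber = ImageDescription.getClient(describerOptions)
    
    suspend fun describeImage(bitmap: Bitmap) {
        val request = ImageDescriptionRequest.builder(bitmap).build()
        imageDescriber.runInference(request) { newText ->
            // Use the description, e.g. as alt text for accessibility
        }
    }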

    GenAI API in production

    Envision is an app that verbalizes the visual world to help people who are blind or have low vision lead more independent lives. A common use case in the app is for users to take a picture of a document and have it read out loud. Using the GenAI Summarization API, Envision can now provide a concise summary of a captured document. This significantly enhances the experience for users, letting them quickly grasp the main points of a document and decide whether a more detailed reading is needed, saving time and effort.

    [Image: side-by-side views of a phone capturing a document on a table and the resulting scan, with details of the what, when, and where from the document]

    Supported devices

    GenAI APIs are available on Android devices using optimized MediaTek Dimensity, Qualcomm Snapdragon, and Google Tensor platforms through AICore. For a comprehensive list of devices that support GenAI APIs, refer to our official documentation.
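
    Because support varies by device, it is worth gating the GenAI path at runtime. A minimal sketch reusing checkFeatureStatus from the Summarization sample above; showFullText is a hypothetical placeholder for your non-GenAI fallback:

    suspend fun summarizeIfSupported(context: Context, article: String) {
        // UNAVAILABLE means this device cannot run the GenAI feature at all.
        val status = summarizer.checkFeatureStatus().await()
        if (status == FeatureStatus.UNAVAILABLE) {
            showFullText(article) // hypothetical fallback for unsupported devices
        } else {
            prepareAndStartSummarization(context)
        }
    }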

    Learn more

    Start implementing GenAI APIs in your Android apps today with guidance from our official documentation and samples on GitHub:

      • AI Catalog GenAI API Samples with Compose
      • ML Kit GenAI APIs Quickstart



    Source link

  • Fortnite Returns to Apple’s App Store After Scoring a Legal Victory



    Apple kicked the popular game out of the App Store nearly five years ago, prompting a court battle that was partially resolved on Tuesday.



    Source link