
  • Grandma’s Recipe Started Business With $2B+ Annual Revenue



    Mildred Reser started selling potato salad to pay the bills back in 1950. The recipe she perfected in a rural Cornelius, Oregon, farmhouse helped her launch a seasonal business, Mrs. Reser’s Salads, which supplied local meat markets before it moved to its first small factory and landed distribution in Safeway.

    Image Credit: Courtesy of Reser’s Fine Foods. Grandma Mildred with her family.

    Mildred’s son, Al, stepped in as president in 1960, and the company became Reser’s Fine Foods. Eager to transition operations to a larger facility but lacking the cash to do so, he took the company public and raised a little over $200,000. Those funds went toward opening Reser’s 55,000-square-foot Beaverton facility in 1978.

    Because potato salad was primarily considered a summer staple in the Pacific Northwest, Al also expanded the product line to include sausages, tortillas and more to offset seasonal sales slowdowns.

    Shortly thereafter, in 1986, Al took the company private again to prevent an outside investor from assuming control.

    “[We] actually received some loans from customers, vendors, employees [and] a lot of family members to make that move,” Mark Reser, Al’s son and the current CEO of Reser’s Fine Foods, says. “We were much smaller at the time, but it was a very strategic move to take it back private.”

    Related: The Business He Started in Response to a Frustrating Grocery Store Experience Surpassed $1 Billion in Sales and Counts Ray Dalio Among Its Investors

    Image Credit: Courtesy of Reser’s Fine Foods. Mark Reser with his father, Al.

    “I had my own little route, and [it was a] great way to learn the whole product line.”

    Mark began working in the Reser’s factory in eighth grade; he continued helping with the family business through high school and into college during the summer months. His degree in accounting proved useful in understanding the business’s numbers. After graduation, Mark spent a couple of years driving a truck route for the company’s direct store delivery.

    “I had my own little route,” Mark recalls, “and [it was a] great way to learn the whole product line, to have that experience, the interaction with the customers.”

    Related: A Cambodian Refugee Paralyzed By Polio Says ‘Not Much’ Was Expected of Him. He and His Wife Built a Multimillion-Dollar Business That Beat All Odds.

    Reser’s needed help managing its peak salad season, so Al acquired a company with about 40 employees in Corona, California, and Mark relocated to run it in 1990. Mark learned a lot before moving on to lead an even larger operation in Topeka, Kansas, where he spent eight years growing the company’s first facility built from the ground up, he says.

    He moved back to Oregon in 1998 and became COO. He then stepped in as president in 2006.

    Image Credit: Courtesy of Reser’s Fine Foods. CEO Mark Reser.

    The Kansas facility remains Reser’s largest base today, with four manufacturing plants and a distribution center. Reser’s currently boasts over 5,000 employees across North America and more than $2 billion in annual revenue; the business has also seen double-digit sales growth each of the past five years, per the company.

    “We always stress that the 4th of July always comes on the 4th of July.”

    These days, as Reser’s celebrates its 75th year in business, it must navigate some of the same challenges it has faced over decades past, like potential commodity issues and labor shortages. Putting in the work to prepare, especially for the company’s busiest stretch, Memorial Day through the Fourth of July, remains an indispensable strategy, Mark says.

    Image Credit: Courtesy of Reser’s Fine Foods

    “We always stress that the 4th of July always comes on the 4th of July,” Mark explains. “It’s all about the planning up front. We did planning in the earlier years, but not as much as we’re doing today.”

    Related: This Couple Used Their Savings to Start a Small Business. A Smart Strategy Helped Make It a Multimillion-Dollar Success.

    The company continues to innovate to help fuel year-round sales, and its hot side dishes, big sellers in the fall and winter months, have become an integral part of that, Mark notes. Now, alongside Reser’s Fine Foods, the company’s line includes Main St Bistro, Stonemill Kitchens, Reser’s Foodservice, Fresh Creative Foods, St Clair Foods, Baja Café and Don Pancho. Its Mexican food category in particular enjoys sales stability year-round, Mark adds.

    “Our family’s aligned, and that’s so critical.”

    According to the CEO, Reser’s strength as a family business stems from its shared goals when it comes to leadership and growth.

    “Our family’s aligned, and that’s so critical,” Reser explains. “They’re aligned on reinvestment, they’re aligned on the next generation taking the business even further, and they’re aligned on the drive to continue to grow the business.”

    Related: Entrepreneurship Means Generational Independence. These Leaders of a 115-Year-Old Family Business Are Honoring the Past and Building for the Future.

    Mark’s nephew and his oldest son are currently part of that next generation working in the business, and he hopes to see several other family members join the company down the line.

    “There’s a lot of learning that they have to do, but we do feel we’ve got some great, strong leaders coming up within the ranks, taking the business further,” Reser says. “We want [Reser’s Fine Foods] to become a bigger part of the meal.”

    Image Credit: Courtesy of Reser’s Fine Foods

    The company sees growth opportunities in meal kit bundling, convenience stores and more snack-sized options, and it continues to research potential categories for expansion. Reser’s launches close to 300 items per year, Mark says, noting that many are custom-made for restaurant chains or private label.

    Related: 10 Growth Strategies Every Business Owner Should Know

    The key to growth is to always consider what’s next and resist the urge to get too comfortable, the CEO says.

    “Don’t forget who pays the bills — it’s the customers,” Reser says. “And don’t forget who does the heavy lifting. That’s your employees. Make sure you’re having fun and enjoying yourself. If you’re not, you’re in the wrong spot.”




  • See Westeros in a New Light With Game of Thrones: Kingsroad



    In the open-world game, you can explore the Seven Kingdoms and some of the iconic locations like King’s Landing and Castle Black.

    You’ll play as an illegitimate child from a minor noble house in the North who is looking to forge a legacy. You’ll try to restore your house to its former glory. That road is full of twists and turns, as you need to navigate the struggles between the noble houses of Westeros while finding allies and more.

    As you might expect, the game is full of combat. The controls are fully manual, putting you in the middle of tense sword battles. You’ll need to use strategy and fine control to dodge and counter opponents’ attacks.

    There are three distinct classes to choose from, inspired by the original series: Knight, Sellsword, and Assassin. Each option comes with its own strengths and weaknesses, along with specific combat skills and mechanics.

    The game also features real-time, co-op content where you can encounter dangerous beasts and defeat them along with others to earn rewards and create high-end gear.

    Game of Thrones: Kingsroad is a free download now on the App Store. It’s for the iPhone, iPad, and all Macs with an M1 chip or later. There are in-app purchases available.




  • Stop Losing $500+ a Month — The Mistake Starts With a Missed Call



    Opinions expressed by Entrepreneur contributors are their own.

    For many small business owners, the ringing phone is a lifeline. But what happens when it goes unanswered? According to a new survey by my company, Vida, 42% of SMBs estimate they lose at least $500 every month to missed calls.

    That’s over $6,000 a year — vanishing without a trace. Yet despite growing awareness of the issue, only 22% have adopted AI-powered voice agents to help solve the problem.

    What businesses are doing

    When teams are stretched thin and customer demands keep growing, staying on top of inbound calls is tough, and usually means hiring more staff, which drives up costs. That’s where AI voice agents come in. These tools step in to fill customer service and sales gaps, ensuring every call is answered, common questions are addressed, and new opportunities aren’t missed.

    Many SMBs are already putting AI voice agents to work, handling inbound sales, responding to support inquiries and even serving customers in their preferred language, extending accessibility without the need for additional hires.

    Related: Is It Always PR’s Job to Make the Phone Ring?

    Take Larry, for example, who runs an independent cleaning business. Before implementing an AI voice agent, Larry estimates he was missing 8-10 calls a week, often during jobs or after hours. Now, his AI agent books appointments, answers after-hours inquiries and provides updates to clients while his team is en route. He’s not only retaining more leads but also improving customer satisfaction simply by being available, even when he can’t pick up the phone.

    AI voice agents also offer a major advantage when it comes to scaling a business. Whether it’s a seasonal surge or a promotional push, automation helps absorb spikes in call volume so staff can stay focused on more complex tasks.

    And it pays off — according to a global study by Qualtrics, customers who have a 5-star experience are 3 times more likely to recommend a business.

    Overcoming misconceptions

    Adoption still lags in part because many business owners associate AI voice agents with the clunky, robotic systems of the past, or feel overwhelmed by the idea of implementing them. There’s also a lingering concern that customers will reject automation.

    But the reality? Most customers don’t care how they get help, as long as they get it quickly and accurately. Today’s AI tools sound natural, respond dynamically and work seamlessly alongside your team.

    In fact, Zendesk reports that 59% of consumers expect generative AI to change how they interact with companies within the next two years, highlighting just how quickly customer expectations are shifting.

    And the results speak for themselves. With the right setup, AI voice agents quickly go from a “nice to have” to a critical part of the team.

    How to get started

    Bringing AI voice agents into your business doesn’t require a massive overhaul. In fact, the most effective implementations start small and scale up:

    • Start small. Focus on high-volume, low-complexity tasks like scheduling appointments, qualifying leads or answering FAQs.
    • Train your team. Help employees understand how to work with the AI agent, not against it.
    • Scale gradually. As confidence builds, expand the agent’s responsibilities to include other repetitive or time-consuming tasks.
    • Track and optimize. Monitor performance, gather insights and adjust workflows to improve outcomes over time.

    Getting started is easier than many business owners expect. Today’s AI voice agents are built to plug into existing systems, whether a CRM, calendar or phone platform, making the transition minimally disruptive and requiring no technical expertise. Some solutions even allow business owners to simply forward calls to the AI agent.

    For business owners like Larry, setup took just minutes. He provided a bit of background, shared a few of his existing marketing materials and FAQ documents to help train the system, and the AI agent was ready to go. Now, it effortlessly handles appointment bookings, inquiries and client updates. And because these agents are adaptive, they learn and improve over time, creating more value the longer they’re in use.

    According to Vida’s SMB AI Voice Agent Adoption & Impact Survey, 97% of businesses already using AI voice agents reported increased revenue. Another 82% saw stronger customer engagement, and 80% saved five or more hours each week, time that can be reinvested into higher-value work.

    Related: How to Turn Your Key Employees Into Your Business Successors (and Avoid the Headache of Outside Buyers)

    Why it matters

    AI voice agents are becoming a strategic necessity for SMBs aiming to stay responsive and competitive. As more companies embrace digital tools, those who stay complacent risk falling behind. Small slips like a missed call might seem minor, but over time, they lead to lost revenue, missed connections and stalled growth. Forward-thinking businesses go beyond streamlining operations; they embrace intelligent systems that evolve alongside customer needs and technological change.

    In a world where speed, personalization and 24/7 availability are becoming the norm, AI voice agents help SMBs make every call count. Every missed call is a missed opportunity, one that your competitors may be ready to catch. Fortunately, staying competitive doesn’t require a full operational overhaul. It starts with taking one smart step forward.

    And with the right AI voice agent in place, businesses can become more responsive, more reliable and more profitable, without burning out their teams or breaking the bank. The difference between a missed call and a booked customer is often just a few seconds. AI voice agents help you win those moments — and in business, moments matter.





  • Join Entrepreneur’s Live Webinar With Ollyball Inventor



    Getting a “no thanks” on Shark Tank — twice! — might lead some people to think they need a new idea. But Joe Burke is not one of those people. He persisted with a kids’ toy idea that he developed at his kitchen table, and today, Ollyball has sold over 3 million units and has won multiple awards, including Toy of the Year.

    Ollyball is an inflatable ball designed for “full-speed, full-force indoor play.” It weighs less than an ounce, so kids toss it around without fear of knocking over or breaking anything in the house.

    On May 28 at 2 PM ET, Joe will join Entrepreneur for an online workshop to give a behind-the-scenes look at how he turned Ollyball into the all-time best-selling indoor play ball — and the strategies he used to do it without big investors or flashy ads.

    This event is free for Entrepreneur+ subscribers. Sign up here to reserve your spot and have the opportunity to ask Joe your own questions live.

    Not an Entrepreneur+ subscriber? Subscribe today for just $5.

    In advance of the conversation, we spoke with Joe to get some insights about his product and his entrepreneurial journey.

    What inspired you to create Ollyball?
    My 9-year-old daughter was crying in the kitchen after breaking stuff in the house with a volleyball. I knew if we could invent a ball for parents to let their kids play with in the house, we would sell millions of them. It became an obsession through 100 prototypes, two U.S. utility patents and a Toy of the Year award before we ever went to mass market.

    How did you test it out?
    I made a ton of prototypes with different materials and sizes. I would take the kids to the Kids Club at 24-Hour Fitness and have them bring the prototypes with them. I’d watch to see if the kids were into it or not. When I brought an early version of the now-patented KrunchCor Construction Ball on a Saturday, the kids went bonkers and the manager had to take the ball away because they all fought over it. I knew then we were close to the answer.

    Any other big moments that stand out to you about the early days?
    When the hosts of the CBS Morning Show started drilling each other with Ollyballs live on national TV. For that moment, they were kids in a playground. Another key moment was an Instagram post from 2019 of a kid named Martin Vodicka and his father in Austria playing Ollyball together in their home. I realized a ball can change the world.

    What has been your biggest challenge and how did you pivot to overcome it?
    COVID and tariffs. Treated them the same: Go head-first and full-force into crisis and find an unfair competitive advantage. Our pivot was to take a reckless leap of faith. The other big challenge was financial — investing $150k of savings, growing on profits, and fending off predatory investors. The key pivot there was trusting two fellas I met doing non-profit/charitable service, a CPA and a patent attorney.

    What advice would you give entrepreneurs looking for funding?
    When you want capital, you won’t get it. When you don’t need capital, everyone will offer it — don’t take it. My best advice is to avoid surrendering equity and rejoice in learning every dimension within your business and brand. Get busy building your empire brick by bloody brick.

    How do you suggest preparing for a pitch?
    Anticipate a room full of a-holes asking a-hole questions. Out-research and have a true answer for every a-hole question. This forces reality.

    What does the word “entrepreneur” mean to you?
    “Survivor.” The films The Shawshank Redemption and There Will Be Blood illustrate both the apex and the dungeon of entrepreneurism.

    What is something many aspiring business owners think they need that they really don’t?
    Money. I started my first company on a card table and a metal chair at the end of a hallway that I traded out for work.

    What is a book you always recommend?
    Here are five:

    1. The Velveteen Rabbit by Margery Williams. Greatest book on authenticity in business.
    2. Henry IV by William Shakespeare. Specifically, Act I sc. II., which relates to understanding your guest/customer.
    3. Lean Startup by Eric Ries. I’m living it.
    4. Blink by Malcolm Gladwell. Best book on authentic marketing.
    5. Anatomy of Yes by Joseph G. Burke. That’s the book I wrote in 46 hours on a train. It was published by Matthew Kelly’s company and the foreword was written by the founder of Domino’s Pizza.

    Is there a particular quote or saying that you use as personal motivation?
    Yes, I have three of them:

    1. “Champions are made in the lonely hours.” A good friend, Kevin Carroll, author of Rules of the Red Rubber Ball, handed me this pearl of wisdom.
    2. “A team is a group of people who trust each other.” It’s an unavoidable truth.
    3. “Put God at the center of everything you do.”





  • Simplify Investing With Stock Recommendations App



    Disclosure: Our goal is to feature products and services that we think you’ll find interesting and useful. If you purchase them, Entrepreneur may get a small share of the revenue from the sale from our commerce partners.

    Need to raise capital to grow your business? The stock market is a great way for entrepreneurs to do just that. According to Gallup.com, 42.5% of entrepreneurs are buying and trading stocks. If you’d like to take part and start investing smarter, this lifetime subscription to Sterling Stock Picker is currently on sale for $55.19 with code SAVE20 through June 1.

    This app makes the stock market accessible for everyone

    If you’ve always wanted to use the stock market to your advantage, but haven’t been sure where to start, Sterling Stock Picker is here to help. This award-winning platform was created to make the stock market more accessible to everyone, with no expertise needed.

    Sterling Stock Picker uses different methods to select winning stocks for your portfolio, making sure they line up with your personal values, investment preferences, and risk tolerance so that you can make solid decisions. You just take a five-minute questionnaire to get started, and watch as the done-for-you portfolio builder makes investing straightforward.

    Their patent-pending North Star technology also gives clear guidance on when to sell, buy, hold, or avoid certain stocks. You’ll also get access to Finley, your very own personal AI financial coach, to help you reach your financial goals. Ask Finley for strategic investment advice, risk assessment, educational support, or questions about your portfolio or the stock market in general.

    Real-life user Chris raved about Sterling Stock Picker, sharing, “I have been using the Sterling Stock Picker for almost a year and it has played an integral part in me achieving over a 200% return on my investments.”

    Start your own stock market journey with this lifetime subscription to Sterling Stock Picker, now $55.19 with code SAVE20 through June 1.

    StackSocial prices subject to change.





  • Building delightful UIs with Compose



    Posted by Rebecca Franks – Developer Relations Engineer

    Androidify is a new sample app we built using the latest best practices for mobile apps. Previously, we covered all the different features of the app, from Gemini integration and CameraX functionality to adaptive layouts. In this post, we dive into the Jetpack Compose usage throughout the app, building upon our base knowledge of Compose to add delightful and expressive touches along the way!

    Material 3 Expressive

    Material 3 Expressive is an expansion of the Material 3 design system. It’s a set of new features, updated components, and design tactics for creating emotionally impactful UX.

    https://www.youtube.com/watch?v=n17dnMChX14

    It’s been released as part of the alpha version of the Material 3 artifact (androidx.compose.material3:material3:1.4.0-alpha10) and contains a wide range of new components you can use within your apps to build more personalized and delightful experiences. Learn more about Material 3 Expressive’s component and theme updates for more engaging and user-friendly products.
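
    To try these components, add the alpha artifact named above to your build. A minimal Gradle (Kotlin DSL) declaration, with the version taken verbatim from this post:

    dependencies {
        implementation("androidx.compose.material3:material3:1.4.0-alpha10")
    }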

    Material Expressive Component updates

    In addition to the new component updates, Material 3 Expressive introduces a new motion physics system that’s encompassed in the Material theme.

    In Androidify, we’ve utilized Material 3 Expressive in a few different ways across the app. For example, we’ve explicitly opted in to the new MaterialExpressiveTheme and chosen MotionScheme.expressive() (this is the default when using expressive) to add a bit of playfulness to the app:

    @Composable
    fun AndroidifyTheme(
       content: @Composable () -> Unit,
    ) {
       val colorScheme = LightColorScheme
    
    
       MaterialExpressiveTheme(
           colorScheme = colorScheme,
           typography = Typography,
           shapes = shapes,
           motionScheme = MotionScheme.expressive(),
           content = {
               SharedTransitionLayout {
                   CompositionLocalProvider(LocalSharedTransitionScope provides this) {
                       content()
                   }
               }
           },
       )
    }
    

    Some of the new componentry is used throughout the app, including the HorizontalFloatingToolbar for prompt type selection:

    moving example of expressive button shapes in slow motion
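
    As a rough illustration of that usage, a floating toolbar of prompt type options might look like the sketch below. This is not the app’s actual code: PromptType, onPromptTypeSelected and displayName are hypothetical stand-ins, and the exact shape of the alpha HorizontalFloatingToolbar API may differ:

    @Composable
    fun PromptTypeToolbar(
        promptTypes: List<PromptType>, // hypothetical model type
        onPromptTypeSelected: (PromptType) -> Unit,
        modifier: Modifier = Modifier,
    ) {
        // HorizontalFloatingToolbar ships in the Material 3 Expressive alpha.
        HorizontalFloatingToolbar(expanded = true, modifier = modifier) {
            promptTypes.forEach { type ->
                TextButton(onClick = { onPromptTypeSelected(type) }) {
                    Text(type.displayName) // hypothetical property
                }
            }
        }
    }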

    The app also uses MaterialShapes, a preset list of shapes that allow for easy morphing between each other, in various locations. For example, check out the cute cookie shape for the camera capture button:


    Camera button with a MaterialShapes.Cookie9Sided shape

    Animations

    Wherever possible, the app leverages the Material 3 Expressive MotionScheme to obtain a themed motion token, creating a consistent motion feeling throughout the app. For example, the scale animation on the camera button press is powered by defaultSpatialSpec(), a specification used for animations that move something across a screen (such as x/y position, rotation, or scale animations):

    val interactionSource = remember { MutableInteractionSource() }
    val animationSpec = MaterialTheme.motionScheme.defaultSpatialSpec<Float>()
    Spacer(
       modifier
           .indication(interactionSource, ScaleIndicationNodeFactory(animationSpec))
           .clip(MaterialShapes.Cookie9Sided.toShape())
           .size(size)
           .drawWithCache {
               //.. etc
           },
    )
    

    Camera button scale interaction

    Shared element animations

    The app uses shared element transitions between different screen states. Last year, we showcased how you can create shared elements in Jetpack Compose, and we’ve extended this in the Androidify sample to create a fun example. It combines the new Material 3 Expressive MaterialShapes, and performs a transition with a morphing shape animation:

    moving example of expressive button shapes in slow motion

    To do this, we created a custom Modifier that takes in the target and resting shapes for the sharedBounds transition:

    @Composable
    fun Modifier.sharedBoundsRevealWithShapeMorph(
       sharedContentState: SharedTransitionScope.SharedContentState,
       sharedTransitionScope: SharedTransitionScope = LocalSharedTransitionScope.current,
       animatedVisibilityScope: AnimatedVisibilityScope = LocalNavAnimatedContentScope.current,
       boundsTransform: BoundsTransform = MaterialTheme.motionScheme.sharedElementTransitionSpec,
       resizeMode: SharedTransitionScope.ResizeMode = SharedTransitionScope.ResizeMode.RemeasureToBounds,
       restingShape: RoundedPolygon = RoundedPolygon.rectangle().normalized(),
       targetShape: RoundedPolygon = RoundedPolygon.circle().normalized(),
    ): Modifier
    

    Then, we apply a custom OverlayClip to provide the morphing shape, by tying into the AnimatedVisibilityScope provided by the LocalNavAnimatedContentScope:

    val animatedProgress =
       animatedVisibilityScope.transition.animateFloat(targetValueByState = targetValueByState)
    
    
    val morph = remember {
       Morph(restingShape, targetShape)
    }
    val morphClip = MorphOverlayClip(morph, { animatedProgress.value })
    
    
    return this@sharedBoundsRevealWithShapeMorph
       .sharedBounds(
           sharedContentState = sharedContentState,
           animatedVisibilityScope = animatedVisibilityScope,
           boundsTransform = boundsTransform,
           resizeMode = resizeMode,
           clipInOverlayDuringTransition = morphClip,
           renderInOverlayDuringTransition = renderInOverlayDuringTransition,
       )
    

    View the full code snippet for this Modifier on GitHub.

    Autosize text

    With the latest release of Jetpack Compose 1.8, we added the ability to create text composables that automatically adjust the font size to fit the container’s available size with the new autoSize parameter:

    BasicText(
        text,
        style = MaterialTheme.typography.titleLarge,
        autoSize = TextAutoSize.StepBased(maxFontSize = 220.sp),
    )
    

    This is used front and center for the “Customize your own Android Bot” text:

    Text reads Customize your own Android Bot with an inline moving image

    “Customize your own Android Bot” text with inline GIF

    This text composable is interesting because it needed to have the fun dancing Android bot in the middle of the text. To do this, we use InlineContent, which allows us to append a composable in the middle of the text composable itself:

    @Composable
    private fun DancingBotHeadlineText(modifier: Modifier = Modifier) {
       Box(modifier = modifier) {
           val animatedBot = "animatedBot"
           val text = buildAnnotatedString {
               append(stringResource(R.string.customize))
               // Attach "animatedBot" annotation on the placeholder
               appendInlineContent(animatedBot)
               append(stringResource(R.string.android_bot))
           }
           var placeHolderSize by remember {
               mutableStateOf(220.sp)
           }
           val inlineContent = mapOf(
               Pair(
                   animatedBot,
                   InlineTextContent(
                       Placeholder(
                           width = placeHolderSize,
                           height = placeHolderSize,
                           placeholderVerticalAlign = PlaceholderVerticalAlign.TextCenter,
                       ),
                   ) {
                       DancingBot(
                           modifier = Modifier
                               .padding(top = 32.dp)
                               .fillMaxSize(),
                       )
                   },
               ),
           )
           BasicText(
               text,
               modifier = Modifier
                   .align(Alignment.Center)
                   .padding(bottom = 64.dp, start = 16.dp, end = 16.dp),
               style = MaterialTheme.typography.titleLarge,
               autoSize = TextAutoSize.StepBased(maxFontSize = 220.sp),
               maxLines = 6,
               onTextLayout = { result ->
                   placeHolderSize = result.layoutInput.style.fontSize * 3.5f
               },
               inlineContent = inlineContent,
           )
       }
    }
    

    Composable visibility with onLayoutRectChanged

    With Compose 1.8, a new modifier, Modifier.onLayoutRectChanged, was added. This modifier is a more performant version of onGloballyPositioned, and includes features such as debouncing and throttling to make it performant inside lazy layouts.

    In Androidify, we’ve used this modifier for the color splash animation. It determines the position from which the transition should start, as we attach it to the “Let’s Go” button:

    var buttonBounds by remember {
       mutableStateOf<RelativeLayoutBounds?>(null)
    }
    var showColorSplash by remember {
       mutableStateOf(false)
    }
    Box(modifier = Modifier.fillMaxSize()) {
       PrimaryButton(
           buttonText = "Let's Go",
           modifier = Modifier
               .align(Alignment.BottomCenter)
               .onLayoutRectChanged(
                   callback = { bounds ->
                       buttonBounds = bounds
                   },
               ),
           onClick = {
               showColorSplash = true
           },
       )
    }
    

    We use these bounds as an indication of where to start the color splash animation from.
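
    For illustration, those bounds might be turned into a splash origin along these lines; ColorSplash here is a hypothetical stand-in, not the app’s real composable:

    if (showColorSplash) {
        buttonBounds?.let { bounds ->
            // Use the center of the button, in root coordinates, as the origin.
            val origin = Offset(
                x = (bounds.boundsInRoot.left + bounds.boundsInRoot.right) / 2f,
                y = (bounds.boundsInRoot.top + bounds.boundsInRoot.bottom) / 2f,
            )
            ColorSplash(origin = origin) // hypothetical splash composable
        }
    }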

    moving image of a blue color splash transition between Androidify demo screens

    Learn more delightful details

    From fun marquee animations on the results screen, to animated gradient buttons for the AI-powered actions, to the path drawing animation for the loading screen, this app has many delightful touches for you to experience and learn from.

    animated marquee example

    animated gradient button for AI powered actions example

    animated loading screen example

    Check out the full codebase at github.com/android/androidify and learn more about the latest in Compose, from Material 3 Expressive and the new modifiers to auto-sizing text and, of course, a couple of delightful interactions!

    Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.




  • Building powerful AI-driven experiences with Jetpack Compose, Gemini and CameraX


    The Android bot is a beloved mascot for Android users and developers, with previous versions of the bot builder being very popular – we decided that this year we’d rebuild the bot maker from the ground up, using the latest technology backed by Gemini. Today we are releasing a new open source app, Androidify, for learning how to build powerful AI-driven experiences on Android using the latest technologies such as Jetpack Compose, Gemini through Firebase, CameraX, and Navigation 3.

    Here’s an example of the app running on the device, showcasing converting a photo to an Android bot that represents my likeness:

    moving image showing the conversion of an image of a woman in a pink dress holding an umbrella into a 3D image of a droid bot wearing a pink dress holding an umbrella

    Under the hood

    The app combines a variety of different Google technologies, such as:

      • Gemini API – through Firebase AI Logic SDK, for accessing the underlying Imagen and Gemini models.
      • Jetpack Compose – for building the UI with delightful animations and making the app adapt to different screen sizes.
      • Navigation 3 – the latest navigation library for building up Navigation graphs with Compose.
      • CameraX Compose and Media3 Compose – for building up a custom camera with custom UI controls (rear camera support, zoom support, tap-to-focus) and playing the promotional video.

    This sample app is currently using a standard Imagen model, but we’ve been working on a fine-tuned model that’s trained specifically on all of the pieces that make the Android bot cute and fun; we’ll share that version later this year. In the meantime, don’t be surprised if the sample app puts out some interesting-looking examples!

    How does the Androidify app work?

    The app leverages our best practices for Architecture, Testing, and UI to showcase a real-world, modern AI application on device.

    Flow chart describing Androidify app flow

    Androidify app flow chart detailing how the app works with AI

    AI in Androidify with Gemini and ML Kit

    The Androidify app uses the Gemini models in a multitude of ways to enrich the app experience, all powered by the Firebase AI Logic SDK. The app uses Gemini 2.5 Flash and Imagen 3 under the hood:

      • Image validation: We ensure that the captured image contains sufficient information, such as a clearly focused person, and assess it for safety. This feature uses the multimodal capabilities of the Gemini API, by giving it a prompt and an image at the same time:

    val response = generativeModel.generateContent(
       content {
           text(prompt)
           image(image)
       },
    )
    

      • Text prompt validation: If the user opts for text input instead of an image, we use Gemini 2.5 Flash to ensure the text contains a sufficiently descriptive prompt to generate a bot (see the sketch after this list).

      • Image captioning: Once we’re sure the image has enough information, we use Gemini 2.5 Flash to perform image captioning. We ask Gemini to be as descriptive as possible, focusing on the clothing and its colors.

      • “Help me write” feature: Similar to an “I’m feeling lucky” type feature, “Help me write” uses Gemini 2.5 Flash to create a random description of the clothing and hairstyle of a bot.

      • Image generation from the generated prompt: As the final step, Imagen generates the image, providing the prompt and the selected skin tone of the bot.
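
    For example, the text prompt validation step above could reuse the same generateContent call with a text-only payload. This is only a sketch; the validation prompt wording below is an assumption, not Androidify’s actual prompt:

    // Ask Gemini whether the user's text is descriptive enough to generate a bot.
    val response = generativeModel.generateContent(
        content {
            text(
                "Does the following text describe a person's clothing and " +
                    "hairstyle in enough detail to draw them? Answer YES or NO.\n\n" +
                    userInput,
            )
        },
    )
    val isDescriptiveEnough = response.text?.trim().equals("YES", ignoreCase = true)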

    The app also uses ML Kit pose detection to detect a person in the viewfinder, enabling the capture button when a person is detected and adding fun indicators around the content to indicate detection.

    Explore more detailed information about AI usage in Androidify.

    Jetpack Compose

    The user interface of Androidify is built using Jetpack Compose, the modern UI toolkit that simplifies and accelerates UI development on Android.

    Delightful details with the UI

    The app uses Material 3 Expressive, the latest alpha release that makes your apps more premium, desirable, and engaging. It provides delightful bits of UI out-of-the-box, like new shapes, componentry, and using the MotionScheme variables wherever a motion spec is needed.

    MaterialShapes are used in various locations. These are a preset list of shapes that allow for easy morphing between each other—for example, the cute cookie shape for the camera capture button:

    Androidify app UI showing camera button

    Camera button with a MaterialShapes.Cookie9Sided shape

    Beyond using the standard Material components, Androidify also features custom composables and delightful transitions tailored to the specific needs of the app:

      • There are plenty of shared element transitions across the app—for example, a morphing shape shared element transition is performed between the “take a photo” button and the camera surface.

        moving example of expressive button shapes in slow motion

      • Custom enter transitions for the ResultsScreen with the usage of marquee modifiers.

        animated marquee example

      • Fun color splash animation as a transition between screens.

        moving image of a blue color splash transition between Androidify demo screens

      • Animating gradient buttons for the AI-powered actions.

        animated gradient button for AI powered actions example

    To learn more about the unique details of the UI, read Androidify: Building delightful UIs with Compose.

    Adapting to different devices

    Androidify is designed to look great and function seamlessly across candy bar phones, foldables, and tablets. The general goal of developing adaptive apps is to avoid reimplementing the same app multiple times on each form factor by extracting out reusable composables, and leveraging APIs like WindowSizeClass to determine what kind of layout to display.

    a collage of different adaptive layouts for the Androidify app across small and large screens

    Various adaptive layouts in the app

    For Androidify, we only needed to leverage the width window size class. Combining this with different layout mechanisms, we were able to reuse or extend the composables to cater to the multitude of different device sizes and capabilities.

      • Responsive layouts: The CreationScreen demonstrates adaptive design. It uses helper functions like isAtLeastMedium() to detect window size categories and adjust its layout accordingly. On larger windows, the image/prompt area and color picker might sit side-by-side in a Row, while on smaller windows, the color picker is accessed via a ModalBottomSheet. This pattern, called “supporting pane”, highlights the supporting dependencies between the main content and the color picker.

      • Foldable support: The app actively checks for foldable device features. The camera screen uses WindowInfoTracker to get FoldingFeature information to adapt to different features by optimizing the layout for tabletop posture.

      • Rear display: Support for devices with multiple displays is included via the RearCameraUseCase, allowing for the device camera preview to be shown on the external screen when the device is unfolded (so the main content is usually displayed on the internal screen).

    Using window size classes, coupled with creating a custom @LargeScreensPreview annotation, helps achieve unique and useful UIs across the spectrum of device sizes and window sizes.
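
    As a sketch of what such helpers can look like, the width check below is built on the material3-adaptive currentWindowAdaptiveInfo() API, and the preview annotation uses an illustrative device spec; neither is necessarily Androidify’s exact implementation:

    // Returns true when the window width is at least "medium" (i.e. not compact),
    // using androidx.compose.material3.adaptive.currentWindowAdaptiveInfo().
    @Composable
    fun isAtLeastMedium(): Boolean {
        val windowSizeClass = currentWindowAdaptiveInfo().windowSizeClass
        return windowSizeClass.windowWidthSizeClass != WindowWidthSizeClass.COMPACT
    }

    // A custom multipreview annotation for large-screen previews.
    @Preview(device = "spec:width=1280dp,height=800dp,dpi=240")
    annotation class LargeScreensPreview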

    CameraX and Media3 Compose

    To allow users to base their bots on photos, Androidify integrates CameraX, the Jetpack library that makes camera app development easier.

    The app uses a custom CameraLayout composable that supports the layout of the typical composables that a camera preview screen would include — for example, zoom buttons, a capture button, and a flip camera button. This layout adapts to different device sizes and more advanced use cases, like the tabletop mode and rear-camera display. For the actual rendering of the camera preview, it uses the new CameraXViewfinder that is part of the camerax-compose artifact.

    CameraLayout in Compose

    CameraLayout composable that takes care of different device configurations, such as table top mode


    The app also integrates with Media3 APIs to load an instructional video for showing how to get the best bot from a prompt or image. Using the new media3-ui-compose artifact, we can easily add a VideoPlayer into the app:

    @Composable
    private fun VideoPlayer(modifier: Modifier = Modifier) {
        val context = LocalContext.current
        var player by remember { mutableStateOf<Player?>(null) }
        LifecycleStartEffect(Unit) {
            player = ExoPlayer.Builder(context).build().apply {
                setMediaItem(MediaItem.fromUri(Constants.PROMO_VIDEO))
                repeatMode = Player.REPEAT_MODE_ONE
                prepare()
            }
            onStopOrDispose {
                player?.release()
                player = null
            }
        }
        Box(
            modifier
                .background(MaterialTheme.colorScheme.surfaceContainerLowest),
        ) {
            player?.let { currentPlayer ->
                PlayerSurface(currentPlayer, surfaceType = SURFACE_TYPE_TEXTURE_VIEW)
            }
        }
    }
    

    Using the new onLayoutRectChanged modifier, we also listen for whether the composable is completely visible or not, and play or pause the video based on this information:

    var videoFullyOnScreen by remember { mutableStateOf(false) }     
    
    LaunchedEffect(videoFullyOnScreen) {
         if (videoFullyOnScreen) currentPlayer.play() else currentPlayer.pause()
    } 
    
    // We add this onto the player composable to determine if the video composable is visible, and mutate the videoFullyOnScreen variable, that then toggles the player state. 
    Modifier.onVisibilityChanged(
                    containerWidth = LocalView.current.width,
                    containerHeight = LocalView.current.height,
    ) { fullyVisible -> videoFullyOnScreen = fullyVisible }
    
    // A simple version of visibility changed detection
    fun Modifier.onVisibilityChanged(
        containerWidth: Int,
        containerHeight: Int,
        onChanged: (visible: Boolean) -> Unit,
    ) = this then Modifier.onLayoutRectChanged(100, 0) { layoutBounds ->
        onChanged(
            layoutBounds.boundsInRoot.top > 0 &&
                layoutBounds.boundsInRoot.bottom < containerHeight &&
                layoutBounds.boundsInRoot.left > 0 &&
                layoutBounds.boundsInRoot.right < containerWidth,
        )
    }
    

    Additionally, using rememberPlayPauseButtonState, we add on a layer on top of the player to offer a play/pause button on the video itself:

    val playPauseButtonState = rememberPlayPauseButtonState(currentPlayer)

    OutlinedIconButton(
        onClick = playPauseButtonState::onClick,
        enabled = playPauseButtonState.isEnabled,
    ) {
        val icon =
            if (playPauseButtonState.showPlay) R.drawable.play else R.drawable.pause
        val contentDescription =
            if (playPauseButtonState.showPlay) R.string.play else R.string.pause
        Icon(
            painterResource(icon),
            stringResource(contentDescription),
        )
    }
    

    Check out the code for more details on how CameraX and Media3 were used in Androidify.

    Navigation 3

    Screen transitions are handled using the new Jetpack Navigation 3 library androidx.navigation3. The MainNavigation composable defines the different destinations (Home, Camera, Creation, About) and displays the content associated with each destination using NavDisplay. You get full control over your back stack, and navigating to and from destinations is as simple as adding and removing items from a list.

    @Composable
    fun MainNavigation() {
       val backStack = rememberMutableStateListOf<NavigationRoute>(Home)
       NavDisplay(
           backStack = backStack,
           onBack = { backStack.removeLastOrNull() },
           entryProvider = entryProvider {
               entry<Home> { entry ->
                   HomeScreen(
                       onAboutClicked = {
                           backStack.add(About)
                       },
                   )
               }
               entry<Camera> {
                   CameraPreviewScreen(
                       onImageCaptured = { uri ->
                           backStack.add(Create(uri.toString()))
                       },
                   )
               }
               // etc
           },
       )
    }
    

    Notably, Navigation 3 exposes a new composition local, LocalNavAnimatedContentScope, to easily integrate your shared element transitions without needing to keep track of the scope yourself. By default, Navigation 3 also integrates with predictive back, providing delightful back experiences when navigating between screens, as seen in this prior shared element transition:

    CameraLayout in Compose

    Learn more about Jetpack Navigation 3, currently in alpha.

    Learn more

    By combining the declarative power of Jetpack Compose, the camera capabilities of CameraX, the intelligent features of Gemini, and thoughtful adaptive design, Androidify is a personalized avatar creation experience that feels right at home on any Android device. You can find the full code sample at github.com/android/androidify where you can see the app in action and be inspired to build your own AI-powered app experiences.

    Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.




  • Engage users on Google TV with excellent TV apps



    Posted by Shobana Radhakrishnan – Senior Director of Engineering, Google TV, and Paul Lammertsma – Developer Relations Engineer, Android

    Over the past year, Google TV and Android TV achieved over 270 million monthly active devices, establishing one of the largest smart TV OS footprints. Building on this momentum, we are excited to share new platform features and developer tools designed to help you increase app engagement with our expanding user base.

    https://www.youtube.com/watch?v=OosLbRBM9dA

    Google TV with Gemini capabilities

    Earlier this year, we announced that we’ll bring Gemini capabilities to Google TV, so users can speak more naturally and conversationally to find what to watch and get answers to complex questions.

    A user pulls up Gemini on a TV asking for kid-friendly movie recommendations similar to Jurassic Park. Gemini responds with several movie recommendations

    After each movie or show search, our new voice assistant will suggest relevant content from your apps, significantly increasing the discoverability of your content.

    A user pulls up Gemini on a TV asking for help explaining the solar system to a first grader. Gemini responds with YouTube videos to help explain the solar system

    Plus, users can easily ask questions about topics they’re curious about and receive insightful answers with supporting videos.

    We’re so excited to bring this helpful and delightful experience to users this fall.

    Video Discovery API

    Today, we’ve also opened partner enrollment for our Video Discovery API.

    Video Discovery optimizes Resumption, Entitlements, and Recommendations across all Google TV form factors to enhance the end-user experience and boost app engagement.

      • Resumption: Partners can now easily display a user’s paused video within the ‘Continue Watching’ row from the home screen. This row is a prime location that drives 60% of all user interactions on Google TV.
      • Entitlements: Video Discovery streamlines entitlement management, which matches app content to user eligibility. Users appreciate this because they can enjoy personalized recommendations without needing to manually update all their subscription details. This allows partners to connect with users across multiple discovery points on Google TV.
      • Recommendations: Video Discovery even highlights personalized content recommendations based on content that users watched inside apps.

    Partners can begin incorporating the Video Discovery API today, starting with resumption and entitlement integrations. Check out g.co/tv/vda to learn more.

    Jetpack Compose for TV

    Compose for TV 1.0 expands on the core and Material Compose libraries

    Last year, we launched Compose for TV 1.0 beta, which lets you build beautiful, adaptive UIs across Android, including Android TV OS.

    Now, Compose for TV 1.0 is stable, and expands on the core and Material Compose libraries. We’ve even seen how the latest release of Compose significantly improves app startup within our internal benchmarking mobile sample, with roughly a 20% improvement compared with the March 2024 release. Because Compose for TV builds upon these libraries, apps built with Compose for TV should also see better app startup times.
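
    To adopt the stable release, add the Compose for TV Material dependency to your build; the androidx.tv coordinate below is our assumption of the stable 1.0 artifact:

    dependencies {
        implementation("androidx.tv:tv-material:1.0.0")
    }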

    New to building with Compose, and not sure where to start? Our updated Jetcaster audio streaming app sample demonstrates how to use Compose across form factors. It includes a dedicated module for playing podcasts on TV by combining separate view models with shared business logic.

    Focus Management Codelab

    We understand that focus management can be challenging at times. That’s why we’ve published a codelab that reviews how to set initial focus, prepare for unexpected focus traversal, and efficiently restore focus.
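
    As a small taste of the topic, setting initial focus generally uses the standard Compose FocusRequester APIs, roughly as below (this is not code taken from the codelab):

    @Composable
    fun HeroButtons() {
        val focusRequester = remember { FocusRequester() }
        Row {
            Button(
                onClick = { /* play */ },
                // Mark the button that should be focused when the screen opens.
                modifier = Modifier.focusRequester(focusRequester),
            ) { Text("Play") }
            Button(onClick = { /* details */ }) { Text("More info") }
        }
        // Request initial focus once on first composition.
        LaunchedEffect(Unit) { focusRequester.requestFocus() }
    }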

    Memory Optimization Guide

    We’ve released a comprehensive guide on memory optimization, including memory targets for low-RAM devices. Combined with Android Studio’s powerful memory profiler, this helps you understand when your app exceeds those limits and why.

    In-App Ratings and Reviews

    Ratings and reviews entry point for the JetStream sample app on TV

    Moreover, app ratings and reviews are essential for developers, offering quantitative and qualitative feedback on user experiences. Now, we’re extending the In-App Ratings and Reviews API to TV to allow developers to prompt users for ratings and reviews directly from Google TV. Check out our recent blog post detailing how to easily integrate the In-App Ratings and Reviews API.
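
    Requesting the flow looks the same as it does on mobile with the standard Play In-App Review API (com.google.android.play:review-ktx); a minimal sketch:

    val reviewManager = ReviewManagerFactory.create(context)
    reviewManager.requestReviewFlow().addOnCompleteListener { task ->
        if (task.isSuccessful) {
            // Launches the rating/review dialog; silently no-ops if quota is hit.
            reviewManager.launchReviewFlow(activity, task.result)
        }
    }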

    Android 16 for TV

    Android 16 for TV

    We’re excited to announce the upcoming release of Android 16 for TV. Developers can begin using the latest beta today. With Android 16, TV developers can access several great features:

      • Platform support for the Eclipsa Audio codec enables creators to use the IAMF spatial audio format. For ExoPlayer support that includes previous platform versions, see ExoPlayer’s IAMF decoder module.
      • There are various improvements to media playback speed, consistency and efficiency, as well as HDMI-CEC reliability and performance optimizations for 64-bit kernels.
      • Additional APIs and user experiences from Android 16 are also available. We invite you to explore the complete list from the Android 16 for TV release notes.

    What’s next

    We’re incredibly excited to see how these announcements will optimize your development journey, and look forward to seeing the fantastic apps you’ll launch on the platform!

    Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.




  • On-device GenAI APIs as part of ML Kit help you easily build with Gemini Nano



    Posted by Caren Chang – Developer Relations Engineer, Chengji Yan – Software Engineer, Taj Darra – Product Manager

    We are excited to announce a set of on-device GenAI APIs, as part of ML Kit, to help you integrate Gemini Nano in your Android apps.

    To start, we are releasing 4 new APIs:

      • Summarization: to summarize articles and conversations
      • Proofreading: to polish short text
      • Rewriting: to reword text in different styles
      • Image Description: to provide short descriptions for images

    Key benefits of GenAI APIs

    GenAI APIs are high-level APIs that allow for easy integration, similar to existing ML Kit APIs. This means you can expect quality results out of the box without extra effort for prompt engineering or fine-tuning for specific use cases.

    GenAI APIs run on-device and thus provide the following benefits:

      • Input, inference, and output data is processed locally
      • Functionality remains the same without reliable internet connection
      • No additional cost incurred for each API call

    To prevent misuse, we also added safety protections at various layers, including base model training, safety-aware LoRA fine-tuning, input and output classifiers, and safety evaluations.

    How GenAI APIs are built

    There are 4 main components that make up each of the GenAI APIs.

    1. Gemini Nano is the base model, as the foundation shared by all APIs.
    2. Small API-specific LoRA adapter models are trained and deployed on top of the base model to further improve the quality for each API.
    3. Optimized inference parameters (e.g. prompt, temperature, topK, batch size) are tuned for each API to guide the model in returning the best results.
    4. An evaluation pipeline ensures quality across various datasets and attributes. This pipeline consists of LLM raters, statistical metrics, and human raters.

    Together, these components make up the high-level GenAI APIs that simplify the effort needed to integrate Gemini Nano in your Android app.

    Evaluating quality of GenAI APIs

    For each API, we formulate a benchmark score based on the evaluation pipeline mentioned above. This score is based on attributes specific to a task. For example, when evaluating the summarization task, one of the attributes we look at is “grounding” (i.e., factual consistency of the generated summary with the source content).

    To provide out-of-the-box quality for GenAI APIs, we applied feature-specific fine-tuning on top of the Gemini Nano base model. This resulted in an increase in the benchmark score of each API, as shown below:

    Use case in English    Gemini Nano Base Model    ML Kit GenAI API
    Summarization          77.2                      92.1
    Proofreading           84.3                      90.2
    Rewriting              79.5                      84.1
    Image Description      86.9                      92.3

    In addition, this is a quick reference of how the APIs perform on a Pixel 9 Pro:

                     Prefix Speed (input processing rate)                  Decode Speed (output generation rate)
    Text-to-text     510 tokens/second                                     11 tokens/second
    Image-to-text    510 tokens/second + 0.8 seconds for image encoding    11 tokens/second
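
    At those rates, for example, a 4,000-token article (the suggested summarization limit noted below) would take roughly 4000 / 510 ≈ 8 seconds of input processing before the first output tokens appear.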

    Sample usage

    This is an example of implementing the GenAI Summarization API to get a one-bullet summary of an article:

    val articleToSummarize = "We are excited to announce a set of on-device generative AI APIs..."
    
    // Define task with desired input and output format
    val summarizerOptions = SummarizerOptions.builder(context)
        .setInputType(InputType.ARTICLE)
        .setOutputType(OutputType.ONE_BULLET)
        .setLanguage(Language.ENGLISH)
        .build()
    val summarizer = Summarization.getClient(summarizerOptions)
    
    suspend fun prepareAndStartSummarization(context: Context) {
        // Check feature availability. Status will be one of the following: 
        // UNAVAILABLE, DOWNLOADABLE, DOWNLOADING, AVAILABLE
        val featureStatus = summarizer.checkFeatureStatus().await()
    
        if (featureStatus == FeatureStatus.DOWNLOADABLE) {
            // Download feature if necessary.
            // If downloadFeature is not called, the first inference request will 
            // also trigger the feature to be downloaded if it's not already
            // downloaded.
            summarizer.downloadFeature(object : DownloadCallback {
                override fun onDownloadStarted(bytesToDownload: Long) { }
    
                override fun onDownloadFailed(e: GenAiException) { }
    
                override fun onDownloadProgress(totalBytesDownloaded: Long) {}
    
                override fun onDownloadCompleted() {
                    startSummarizationRequest(articleToSummarize, summarizer)
                }
            })    
        } else if (featureStatus == FeatureStatus.DOWNLOADING) {
            // Inference request will automatically run once feature is      
            // downloaded.
            // If Gemini Nano is already downloaded on the device, the   
            // feature-specific LoRA adapter model will be downloaded very  
            // quickly. However, if Gemini Nano is not already downloaded, 
            // the download process may take longer.
            startSummarizationRequest(articleToSummarize, summarizer)
        } else if (featureStatus == FeatureStatus.AVAILABLE) {
            startSummarizationRequest(articleToSummarize, summarizer)
        } 
    }
    
    fun startSummarizationRequest(text: String, summarizer: Summarizer) {
        // Create task request  
        val summarizationRequest = SummarizationRequest.builder(text).build()
    
        // Start summarization request with streaming response
        summarizer.runInference(summarizationRequest) { newText -> 
            // Show new text in UI
        }
    
        // You can also get a non-streaming response from the request
        // val summarizationResult = summarizer.runInference(summarizationRequest)
        // val summary = summarizationResult.get().summary
    }
    
    // Be sure to release the resource when no longer needed
    // For example, on viewModel.onCleared() or activity.onDestroy()
    summarizer.close()
    

    For more examples of implementing the GenAI APIs, check out the official documentation and samples on GitHub.

    Use cases

    Here is some guidance on how to best use the current GenAI APIs:

    For Summarization, consider:

      • Conversation messages or transcripts that involve 2 or more users
      • Articles or documents less than 4000 tokens (or about 3000 English words). Using the first few paragraphs for summarization is usually good enough to capture the most important information.

    For Proofreading and Rewriting APIs, consider utilizing them during the content creation process for short content below 256 tokens to help with tasks such as:

      • Refining messages in a particular tone, such as more formal or more casual
      • Polishing personal notes for easier consumption later

    For the Image Description API, consider it for:

      • Generating titles of images
      • Generating metadata for image search
      • Utilizing descriptions of images in use cases where the images themselves cannot be displayed, such as within a list of chat messages
      • Generating alternative text to help visually impaired users better understand content as a whole

    GenAI API in production

    Envision is an app that verbalizes the visual world to help people who are blind or have low vision lead more independent lives. A common use case in the app is for users to take a picture to have a document read out loud. Utilizing the GenAI Summarization API, Envision is now able to get a concise summary of a captured document. This significantly enhances the user experience by allowing them to quickly grasp the main points of documents and determine if a more detailed reading is desired, saving them time and effort.

    side by side images of a mobile device showing a document on a table on the left, and the results of the scanned document on the right showing details providing the what, when, and where as written in the document

    Supported devices

    GenAI APIs are available on Android devices using optimized MediaTek Dimensity, Qualcomm Snapdragon, and Google Tensor platforms through AICore. For a comprehensive list of devices that support GenAI APIs, refer to our official documentation.

    Learn more

    Start implementing GenAI APIs in your Android apps today with guidance from our official documentation and samples on GitHub: AI Catalog GenAI API Samples with Compose, ML Kit GenAI APIs Quickstart.




  • KaruQ Actually Makes Math Fun As You Defeat Ghosts With Calculations



    Ghosts with numbers will appear on the screen, and you can defeat a ghost by hitting it with a flame that has the same number. But there’s more than meets the eye.

    Merging numbers multiplies them, while splitting divides them; merging a 2 and a 3, for example, produces a 6 that can take down a ghost numbered 6. This allows you to create a variety of numbers to take home victory against the scary ghosts. Adding a matchstick will give you a 1.

    You can enjoy more than 190 different puzzles. There is also a challenging daily puzzle with one new problem to tackle each day. As another fun plus, you can even create your own puzzles and share them with others using a QR code.

    KaruQ is a completely free download now on the App Store for the iPhone and all iPads.

    You could probably guess that I’m not a huge math fan, but playing KaruQ is an enjoyable way to test your skills while defeating ghosts. While the premise sounds easy and perfect for any age, you’ll definitely be challenged to find the right number to make short work of the enemy.


