
  • Design with Widget Canonical Layouts



    Posted by Summers Pitman – Developer Relations Engineer, and Ivy Knight – Senior Design Advocate

    Widgets can bring more productive, delightful, and customized experiences to users’ home screens, but they can be tricky to design well while keeping the experience focused and high quality. In this blog post, we’ll cover how Widget Canonical Layouts can make this process much easier.

    But what is a Canonical Layout? It’s a common layout pattern that works across various screen sizes: a ready-to-use composition you can use as a starting point to help layouts adapt to common use cases and screen sizes. Widgets now have their own Canonical Layouts to help you get started crafting higher quality widgets.

    Widget Canonical Layouts

    The Widget Canonical Layouts Figma file makes it easy to preview your widget content across multiple breakpoints and layout types. Join me in our Figma design resource to explore how it can simplify designing a widget for one of our sample apps, JetNews.

    Three side-by-side examples of Widget Canonical Layouts in Figma being used to design a widget for JetNews

    1. Content to adapt

    Jetnews is a sample news-reading app built with Jetpack Compose. With that experience in mind, the primary user journey is reading articles.

      • A widget should be glanceable, so displaying a full article would not be a good use case.
      • Since they are timely news articles, surfacing newer content could be more productive for users.
      • We’ll want to give a condensed version of each article similar to the app home feed.
      • The addition of a bookmark action would allow the user to save and read later in the full app experience.

    Examples of using Widget Canonical Layouts in Figma to design a widget for JetNews

    2. Choosing a Canonical Layout

    With our content and user journey established, we’ll take a glance at which canonical layouts would make sense.

    We want to show at least a few new articles with a headline, a truncated description, and possibly a thumbnail, which brings us to the Image + Text Grid layout and perhaps the List layout.

    Examples of using Widget Canonical Layouts in Figma to design a widget for JetNews

    Within our new Figma Widget Canonical Layout preview, we can add in some mock content to check out how these layouts will look in various sizes.

    Examples of using Widget Canonical Layouts in Figma to design a widget for JetNews

    Moving example of using Widget Canonical Layouts in Figma to design a widget for JetNews

    3. Adapting to breakpoint sizes

    Now that we’ve previewed our content in both the grid and list layouts, we don’t have to choose just one!

    The grid layout displays our content better at larger sizes, where we have more room to take advantage of multiple columns and a larger thumbnail image, while the list works nicely at smaller sizes, giving a one-column layout with a smaller thumbnail.

    Examples of using Widget Canonical Layouts in Figma to design a widget for JetNews

    But we can adapt even further to give the user more resizing flexibility and to anticipate different OEM grid sizes. For JetNews, we decided on an additional extra-small layout to accommodate a smaller grid size and vertical height while still using the List layout. For this size I decided to remove the thumbnail altogether to give the title and action more space.

    Consider making these in-between design tweaks as needed (between any of the breakpoints); they can be applied as general rules in your widget designs. A rough code sketch follows the guidelines below.

    Here are a few guidelines to borrow:

      • Establish a content hierarchy on what to hide as the widget shrinks.
      • Use a type scale so the type scales consistently.
      • Create some parameters for image scaling with aspect ratios and cropping techniques.
      • Use component presentation changes. For example, the title bar’s FAB can be reduced to a standard icon.
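
    As a very rough sketch of how these breakpoint decisions might map to code, here is what a Glance-based widget could look like with SizeMode.Responsive. The breakpoint values are illustrative only, and ArticleGrid and ArticleList are hypothetical composables standing in for the grid and list designs above; the actual implementation is covered in the code-along.

    // Rough sketch only. Uses androidx.glance.appwidget (GlanceAppWidget, SizeMode),
    // androidx.glance (GlanceId, LocalSize), and androidx.compose.ui.unit (DpSize, dp).
    class JetNewsWidget : GlanceAppWidget() {

      // Illustrative breakpoints for the extra-small, list, and grid layouts.
      private val extraSmall = DpSize(120.dp, 80.dp)
      private val small = DpSize(180.dp, 110.dp)
      private val large = DpSize(300.dp, 200.dp)

      override val sizeMode = SizeMode.Responsive(setOf(extraSmall, small, large))

      override suspend fun provideGlance(context: Context, id: GlanceId) {
        provideContent {
          val size = LocalSize.current
          when {
            size.width >= large.width -> ArticleGrid()                     // hypothetical composable
            size.width >= small.width -> ArticleList(showThumbnail = true) // hypothetical composable
            else -> ArticleList(showThumbnail = false)
          }
        }
      }
    }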

    Examples of using Widget Canonical Layouts in Figma to design a widget for JetNews

    Last, I’ll swap the app icon, round up all the breakpoint sizes, and provide an option with brand colors.

    Examples of using Widget Canonical Layouts in Figma to design a widget for JetNews

    These are ready to send over to dev! Tune in for the code along to check out how to implement the final widget.

    Go try it out and explore more widgets

    You can find the Widget Canonical Layouts at our new Figma Community Page: figma.com/@androiddesign. Stay tuned for more Android Figma resources.

    Check out the official Android documentation for detailed information and best practices in Widgets on Android, learn more about Widget Quality Tiers, and join us for the rest of Widget Spotlight Week!


    This blog post is part of our series: Spotlight Week on Widgets, where we provide resources—blog posts, videos, sample code, and more—all designed to help you design and create widgets. You can read more in the overview of Spotlight Week: Widgets, which will be updated throughout the week.



    Source link

  • Improving User Experience with Apple Intelligence

    Improving User Experience with Apple Intelligence



    This course equips you with the skills to leverage Apple’s latest user experience (UX) advancements within your iOS apps. You’ll explore Writing Tools, a powerful suite for enhancing text input and editing. Dive into Genmoji, a brand new tool for creating custom emoji characters, adding a layer of personalization and expression to your apps. And unlock the power of Siri and App Intents with Apple Intelligence, enabling seamless voice interaction and context-aware functionality within your creations.



    Source link

  • Never Stop Learning with More Than 1,000 Courses for $20

    Never Stop Learning with More Than 1,000 Courses for $20


    Disclosure: Our goal is to feature products and services that we think you’ll find interesting and useful. If you purchase them, Entrepreneur may get a small share of the revenue from the sale from our commerce partners.

    Remember when learning new skills meant signing up for expensive classes, sitting in freezing (or sweltering) classrooms under fluorescent lights, and wondering if the vending machine would ever accept your crumpled dollar bill? Yeah, StackSkills EDU Unlimited is here to wipe that memory clean.

    For just $19.97—yes, less than your last food delivery—you can grab lifetime access to 1,000+ online courses. IT, coding, graphic design, business strategy, marketing—you name it, it’s probably already waiting for you. New courses are added monthly, so your library actually grows with you over time, not against you.

    This is real-world learning made for real-world schedules. Whether you’re a business leader trying to sharpen your digital strategy, a parent plotting a return to the workforce, a freelancer adding a new service, or a student supplementing a less-than-exciting course catalog—StackSkills gives you the flexibility to learn on your own time, from any device, without having to sacrifice your sanity (or your weekend plans).

    And StackSkills isn’t about fluff. Their 350+ elite instructors are people who’ve been there, done that, and are ready to show you how they actually succeeded (and yes, sometimes how they failed—because that’s where the real lessons live). Each course includes progress tracking, certificates, and even quarterly live Q&As to keep you engaged and growing.

    Compared to one college course that costs, what, $600, $1,000, more?—$19.97 for lifetime access is almost criminally affordable. Plus, you’ll be able to pivot your learning as new trends pop up, industries shift, and opportunities arise. No need to re-enroll, re-pay, or re-think every time you want to pick up a new skill.

    It’s lifetime learning—built for people who actually have lives.

    Take the leap. Own your growth. And seriously, stop paying $300 just to sit through a PowerPoint for beginners class. StackSkills has you covered for life.

    Get lifetime access to StackSkills by EDU for just $19.97 (reg. $600) through June 1.

    EDU Unlimited by StackSkills: Lifetime Access

    See Deal

    StackSocial prices subject to change.



    Source link

  • Generate stunning visuals in your Android apps with Imagen 3 via Vertex AI in Firebase



    Posted by Thomas Ezan Sr. – Android Developer Relation Engineer (@lethargicpanda)

    Imagen 3, our most advanced image generation model, is now available through Vertex AI in Firebase, making it even easier to integrate into your Android apps.

    Designed to generate well-composed images with exceptional details, reduced artifacts, and rich lighting, Imagen 3 represents a significant leap forward in image generation capabilities.

    Hot air balloons float over a scenic desert landscape with unique rock formations.

    Image generated by Imagen 3 with prompt: “Shot in the style of DSLR camera with the polarizing filter. A photo of two hot air balloons over the unique rock formations in Cappadocia, Turkey. The colors and patterns on these balloons contrast beautifully against the earthy tones of the landscape below. This shot captures the sense of adventure that comes with enjoying such an experience.”

    A wooden robot stands in a field of yellow flowers, holding a small blue bird on its outstretched hand.

    Image generated by Imagen 3 with prompt: “A weathered, wooden mech robot covered in flowering vines stands peacefully in a field of tall wildflowers, with a small blue bird resting on its outstretched hand. Digital cartoon, with warm colors and soft lines. A large cliff with a waterfall looms behind.”

    Imagen 3 unlocks exciting new possibilities for Android developers. Generated visuals can adapt to the content of your app, creating a more engaging user experience. For instance, your users can generate custom artwork to enhance their in-app profile. Imagen can also improve your app’s storytelling by bringing its narratives to life with delightful personalized illustrations.

    You can experiment with image prompts in Vertex AI Studio, and learn how to improve your prompts by reviewing the prompt and image attribute guide.

    Get started with Imagen 3

    The integration of Imagen 3 is similar to adding Gemini access via Vertex AI in Firebase. Start by adding the gradle dependencies to your Android project:

    dependencies {
        implementation(platform("com.google.firebase:firebase-bom:33.10.0"))
    
        implementation("com.google.firebase:firebase-vertexai")
    }
    

    Then, in your Kotlin code, create an ImagenModel instance by passing the model name and, optionally, a model configuration and safety settings:

    val imageModel = Firebase.vertexAI.imagenModel(
      modelName = "imagen-3.0-generate-001",
      // Optional: configure the output format, watermarking, and aspect ratio.
      generationConfig = ImagenGenerationConfig(
        imageFormat = ImagenImageFormat.jpeg(compressionQuality = 75),
        addWatermark = true,
        numberOfImages = 1,
        aspectRatio = ImagenAspectRatio.SQUARE_1x1
      ),
      // Optional: adjust content filtering for the generated images.
      safetySettings = ImagenSafetySettings(
        safetyFilterLevel = ImagenSafetyFilterLevel.BLOCK_LOW_AND_ABOVE,
        personFilterLevel = ImagenPersonFilterLevel.ALLOW_ADULT
      )
    )
    

    Finally generate the image by calling generateImages:

    val imageResponse = imageModel.generateImages(
      prompt = "An astronaut riding a horse"
    )
    

    Retrieve the generated image from the imageResponse and display it as a bitmap as follows:

    val image = imageResponse.images.first()
    val uiImage = image.asBitmap()
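
    If your app uses Jetpack Compose, one way to show the result is to convert the bitmap and pass it to an Image composable. This is just a minimal sketch assuming a Compose UI; it is not part of the Vertex AI in Firebase API:

    import android.graphics.Bitmap
    import androidx.compose.foundation.Image
    import androidx.compose.runtime.Composable
    import androidx.compose.ui.graphics.asImageBitmap

    // Minimal sketch: render the Bitmap returned by image.asBitmap() above.
    @Composable
    fun GeneratedImage(uiImage: Bitmap) {
      Image(
        bitmap = uiImage.asImageBitmap(),
        contentDescription = "Image generated by Imagen 3"
      )
    }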
    

    Next steps

    Explore the comprehensive Firebase documentation for detailed API information.

    Access to Imagen 3 using Vertex AI in Firebase is currently in Public Preview, giving you an early opportunity to experiment and innovate. For pricing details, please refer to the Vertex AI in Firebase pricing page.

    Start experimenting with Imagen 3 today! We’re looking forward to seeing how you’ll leverage Imagen 3’s capabilities to create truly unique, immersive and personalized Android experiences.



    Source link

  • Common media processing operations with Jetpack Media3 Transformer



    Posted by Nevin Mital – Developer Relations Engineer, and Kristina Simakova – Engineering Manager

    Android users have demonstrated an increasing desire to create, personalize, and share video content online, whether to preserve their memories or to make people laugh. As such, media editing is a cornerstone of many engaging Android apps, and historically developers have often relied on external libraries to handle operations such as Trimming and Resizing. While these solutions are powerful, integrating and managing external library dependencies can introduce complexity and lead to challenges with managing performance and quality.

    The Jetpack Media3 Transformer APIs offer a native Android solution that streamlines media editing with fast performance, extensive customizability, and broad device compatibility. In this blog post, we’ll walk through some of the most common editing operations with Transformer and discuss its performance.

    Getting set up with Transformer

    To get started with Transformer, check out our Getting Started documentation for details on how to add the dependency to your project and a basic understanding of the workflow when using Transformer. In a nutshell, you’ll:

      • Create one or many MediaItem instances from your video file(s), then
      • Apply item-specific edits to them by building an EditedMediaItem for each MediaItem,
      • Create a Transformer instance configured with settings applicable to the whole exported video,
      • and finally start the export to save your applied edits to a file.

    Aside: You can also use a CompositionPlayer to preview your edits before exporting, but this is out of scope for this blog post, as this API is still a work in progress. Please stay tuned for a future post!

    Here’s what this looks like in code:

    val mediaItem = MediaItem.Builder().setUri(mediaItemUri).build()
    val editedMediaItem = EditedMediaItem.Builder(mediaItem).build()
    val transformer = 
      Transformer.Builder(context)
        .addListener(/* Add a Transformer.Listener instance here for completion events */)
        .build()
    transformer.start(editedMediaItem, outputFilePath)
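
    The listener you pass to addListener is how you find out when the export finishes or fails. Here is a minimal sketch of what it could look like, assuming a recent Media3 release where Transformer.Listener exposes onCompleted and onError (older versions use different callback names):

    // Rough sketch; Composition, ExportResult, and ExportException come from
    // the androidx.media3.transformer package.
    val transformerListener = object : Transformer.Listener {
      override fun onCompleted(composition: Composition, exportResult: ExportResult) {
        // The export succeeded and the file at outputFilePath is ready to use.
        Log.d("TransformerDemo", "Export completed")
      }

      override fun onError(
        composition: Composition,
        exportResult: ExportResult,
        exportException: ExportException
      ) {
        // The export failed; inspect the exception to decide how to recover.
        Log.e("TransformerDemo", "Export failed", exportException)
      }
    }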
    

    Transcoding, Trimming, Muting, and Resizing with the Transformer API

    Let’s now take a look at four of the most common single-asset media editing operations, starting with Transcoding.

    Transcoding is the process of re-encoding an input file into a specified output format. For this example, we’ll request the output to have video in HEVC (H265) and audio in AAC. Starting with the code above, here are the lines that change:

    val transformer = 
      Transformer.Builder(context)
        .addListener(...)
        .setVideoMimeType(MimeTypes.VIDEO_H265)
        .setAudioMimeType(MimeTypes.AUDIO_AAC)
        .build()
    

    Many of you may already be familiar with FFmpeg, a popular open-source library for processing media files, so we’ll also include FFmpeg commands for each example to serve as a helpful reference. Here’s how you can perform the same transcoding with FFmpeg:

    $ ffmpeg -i $inputVideoPath -c:v libx265 -c:a aac $outputFilePath
    

    The next operation we’ll try is Trimming.

    Specifically, we’ll set Transformer up to trim the input video from the 3 second mark to the 8 second mark, resulting in a 5 second output video. Starting again from the code in the “Getting set up” section above, here are the lines that change:

    // Configure the trim operation by adding a ClippingConfiguration to
    // the media item
    val clippingConfiguration =
       MediaItem.ClippingConfiguration.Builder()
         .setStartPositionMs(3000)
         .setEndPositionMs(8000)
         .build()
    val mediaItem =
       MediaItem.Builder()
         .setUri(mediaItemUri)
         .setClippingConfiguration(clippingConfiguration)
         .build()
    
    // Transformer also has a trim optimization feature we can enable.
    // This will prioritize Transmuxing over Transcoding where possible.
    // See more about Transmuxing further down in this post.
    val transformer = 
      Transformer.Builder(context)
        .addListener(...)
        .experimentalSetTrimOptimizationEnabled(true)
        .build()
    

    With FFmpeg:

    $ ffmpeg -ss 00:00:03 -i $inputVideoPath -t 00:00:05 $outputFilePath
    

    Next, we can mute the audio in the exported video file.

    val editedMediaItem = 
      EditedMediaItem.Builder(mediaItem)
        .setRemoveAudio(true)
        .build()
    

    The corresponding FFmpeg command:

    $ ffmpeg -i $inputVideoPath -c copy -an $outputFilePath
    

    And for our final example, we’ll try resizing the input video by scaling it down to half its original height and width.

    val scaleEffect = 
      ScaleAndRotateTransformation.Builder()
        .setScale(0.5f, 0.5f)
        .build()
    val editedMediaItem =
      EditedMediaItem.Builder(mediaItem)
        .setEffects(
          Effects(
            /* audioProcessors= */ emptyList(),
            /* videoEffects= */ listOf(scaleEffect)
          )
        )
        .build()
    

    An FFmpeg command could look like this:

    $ ffmpeg -i $inputVideoPath -filter:v "scale=w=trunc(iw/4)*2:h=trunc(ih/4)*2" $outputFilePath
    

    Of course, you can also combine these operations to apply multiple edits on the same video, but hopefully these examples serve to demonstrate that the Transformer APIs make configuring these edits simple.
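
    For instance, here is a rough sketch, reusing the same mediaItemUri, outputFilePath, and Transformer setup from the earlier snippets, that trims, mutes, and resizes in a single export:

    // Trim to the 3 to 8 second range, drop the audio track, and scale the video
    // to half size, all in one EditedMediaItem.
    val clippingConfiguration =
      MediaItem.ClippingConfiguration.Builder()
        .setStartPositionMs(3000)
        .setEndPositionMs(8000)
        .build()
    val mediaItem =
      MediaItem.Builder()
        .setUri(mediaItemUri)
        .setClippingConfiguration(clippingConfiguration)
        .build()
    val scaleEffect =
      ScaleAndRotateTransformation.Builder()
        .setScale(0.5f, 0.5f)
        .build()
    val editedMediaItem =
      EditedMediaItem.Builder(mediaItem)
        .setRemoveAudio(true)
        .setEffects(
          Effects(
            /* audioProcessors= */ emptyList(),
            /* videoEffects= */ listOf(scaleEffect)
          )
        )
        .build()
    transformer.start(editedMediaItem, outputFilePath)
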

    Transformer API Performance results

    Here are some benchmarking measurements for each of the 4 operations taken with the Stopwatch API, running on a Pixel 9 Pro XL device:

    (Note that performance for operations like these can depend on a variety of reasons, such as the current load the device is under, so the numbers below should be taken as rough estimates.)

    Input video format: 10s 720p H264 video with AAC audio

    • Transcoding to H265 video and AAC audio: ~1300ms
    • Trimming video to 00:03-00:08: ~2300ms
    • Muting audio: ~200ms
    • Resizing video to half height and width: ~1200ms

    Input video format: 25s 360p VP8 video with Vorbis audio

    • Transcoding to H265 video and AAC audio: ~3400ms
    • Trimming video to 00:03-00:08: ~1700ms
    • Muting audio: ~1600ms
    • Resizing video to half height and width: ~4800ms

    Input video format: 4s 8k H265 video with AAC audio

    • Transcoding to H265 video and AAC audio: ~2300ms
    • Trimming video to 00:03-00:08: ~1800ms
    • Muting audio: ~2000ms
    • Resizing video to half height and width: ~3700ms

    One technique Transformer uses to speed up editing operations is to prioritize transmuxing for basic video edits where possible. Transmuxing refers to the process of repackaging video streams without re-encoding, which ensures high-quality output and significantly faster processing times.

    When not possible, Transformer falls back to transcoding, a process that involves first decoding video samples into raw data, then re-encoding them for storage in a new container. Here are some of these differences:

    Transmuxing

      • Transformer’s preferred approach when possible – a quick transformation that preserves elementary streams.
      • Only applicable to basic operations, such as rotating, trimming, or container conversion.
      • No quality loss or bitrate change.


    Transcoding

      • Transformer’s fallback approach in cases when Transmuxing isn’t possible – involves decoding and re-encoding elementary streams.
      • More extensive modifications to the input video are possible.
      • Loss in quality due to re-encoding, but can achieve a desired bitrate target.


    We are continuously implementing further optimizations, such as the recently introduced experimentalSetTrimOptimizationEnabled setting that we used in the Trimming example above.

    A trim is usually performed by re-encoding all the samples in the file, but since encoded media samples are stored chronologically in their container, we can improve efficiency by re-encoding only the group of pictures (GOP) between the trim start point and the first keyframe at or after that point, then stream-copying the rest.

    Since we only decode and encode a fixed portion of any file, the encoding latency is roughly constant, regardless of what the input video duration is. For long videos, this improved latency is dramatic. The optimization relies on being able to stitch part of the input file with newly-encoded output, which means that the encoder’s output format and the input format must be compatible.

    If the optimization fails, Transformer automatically falls back to normal export.

    What’s next?

    As part of Media3, Transformer is a native solution with low integration complexity, is tested on a wide variety of devices to ensure compatibility, and is customizable to fit your specific needs.

    To dive deeper, you can explore Media3 Transformer documentation, run our sample apps, or learn how to complement your media editing pipeline with Jetpack Media3. We’ve already seen app developers benefit greatly from adopting Transformer, so we encourage you to try it out yourself to streamline your media editing workflows and enhance your app’s performance!



    Source link

  • Building Integrated AI Services with LangChain & LangGraph

    Building Integrated AI Services with LangChain & LangGraph


    While the course is designed to accommodate developers with varying levels of experience, the following prerequisites are recommended: 

    • Basic programming knowledge in any language 
    • Familiarity with web development concepts and RESTful APIs 
    • Understanding of basic AI and machine learning concepts (beneficial but not required) 



    Source link

  • High-Level AI with Azure AI Services

    High-Level AI with Azure AI Services


    While the program is designed to accommodate developers with varying levels of experience, the following prerequisites are recommended: 

    • Basic programming knowledge in any language 
    • Familiarity with web development concepts and RESTful APIs 
    • Understanding of basic AI and machine learning concepts (beneficial but not required)



    Source link

  • Building excellent games with better graphics and performance



    Posted by Matthew McCullough – VP of Product Management, Android

    We’re stepping up our multiplatform gaming offering with exciting news dropping at this year’s Game Developers Conference (GDC). We’re bringing users more games, more ways to play your games across devices, and improved gameplay. You can read all about the updates for users from The Keyword. At GDC, we’ll be diving into all of the latest games coming to Play, plus new developer tools that’ll help improve gameplay across the Android ecosystem.

    Today, we’re sharing a closer look at what’s new from Android. We’re making Vulkan the official graphics API on Android, enabling you to build immersive visuals, and we’re enhancing the Android Dynamic Performance Framework (ADPF) to help you deliver longer, more stable gameplays. Check out the video or keep reading below.

    https://www.youtube.com/watch?v=9MN0-qwYAFU

    More immersive visuals built on Vulkan, now the official graphics API

    These days, games require more processing power for realistic graphics and cutting-edge visuals. Vulkan is a low-level graphics API that helps developers maximize the performance of modern GPUs, and today we’re making it the official graphics API for Android. This unlocks advanced features like ray tracing and multithreading for realistic and immersive gaming visuals. For example, Diablo Immortal used Vulkan to implement ray tracing, bringing the world of Sanctuary to life with spectacular special effects, from fiery explosions to icy blasts.

    Moving image showing ray tracing in Diablo Immortal on Google Play

    Diablo Immortal running on Vulkan

    For casual games like Pokémon TCG Pocket, which draws players into the vibrant world of each Pokémon, Vulkan helps optimize graphics across a broad range of devices to ensure a smooth and engaging experience for every player.

    Moving image showing gameplay of Pokemon TCG Pocket on Google Play

    Pokémon TCG Pocket running on Vulkan

    We’re excited to announce that Android is transitioning to a modern, unified rendering stack with Vulkan at its core. Starting with our next Android release, more devices will use Vulkan to process all graphics commands. If your game is running on OpenGL, it will use ANGLE as a system driver that translates OpenGL to Vulkan. We recommend testing your game on ANGLE today to ensure it’s ready for the Vulkan transition.

    We’re also partnering with major game engines to make Vulkan integration easier. With Unity 6, you can configure Vulkan per device while older versions can access this setting through plugins. Over 45% of sessions from new games on Unity* use Vulkan, and we expect this number to grow rapidly.

    To simplify workflows further, we’re teaming up with the Samsung Austin Research Center to create an integrated GPU profiler toolchain for Vulkan and AI/ML optimization. Coming later this year, this tool will enable developers to make graphics, memory and compute workloads more efficient.

    Longer and smoother gameplay sessions with ADPF

    Android Dynamic Performance Framework (ADPF) enables developers to adjust a game’s performance in real time based on the device’s thermal state, and it’s getting a big update today to provide longer and smoother gameplay sessions. ADPF is designed to work across a wide range of devices, including models like the Pixel 9 family and the Samsung S25 Series. We’re excited to see MMORPGs like Lineage W integrating ADPF to optimize performance on their core target devices.

    Moving image showing gameplay from Lineage w on Google Play

    Lineage W running on ADPF
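
    As a rough illustration of the kind of thermal signal ADPF builds on, the platform exposes thermal status and headroom through PowerManager. The sketch below (assuming you have a Context available) listens for thermal changes and adjusts rendering quality; it is a hedged example rather than a full ADPF integration, and reduceGraphicsQuality/restoreGraphicsQuality are hypothetical hooks into your own engine code:

    // Minimal sketch of reacting to device thermal state from Kotlin (PowerManager APIs).
    val powerManager = context.getSystemService(Context.POWER_SERVICE) as PowerManager

    // Listen for thermal status changes and adjust rendering workload accordingly.
    powerManager.addThermalStatusListener { status ->
      when (status) {
        PowerManager.THERMAL_STATUS_SEVERE,
        PowerManager.THERMAL_STATUS_CRITICAL -> reduceGraphicsQuality() // hypothetical hook
        else -> restoreGraphicsQuality()                                // hypothetical hook
      }
    }

    // Forecast how much thermal headroom is expected over the next 10 seconds (API 30+).
    val headroom = powerManager.getThermalHeadroom(/* forecastSeconds= */ 10)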

    Here’s how we’re enhancing ADPF with better performance and simplified integration:

    Performance optimization with more features in Play Console

    Once you’ve launched your game, Play Console offers the tools to monitor and improve your game’s performance. We’re newly including Low Memory Killers (LMK) in Android vitals, giving you insight into memory constraints that can cause your game to crash. Android vitals is your one-stop destination for monitoring metrics that impact your visibility on the Play Store, such as slow sessions. You can find this information next to reach and devices, which provides updates on your game’s user distribution and notifies you of device-specific issues.

    Android vitals details in Google Play Console

    Check your Android vitals regularly to ensure high technical quality

    Bringing PC games to mobile, and pushing the boundaries of gaming

    We’re launching a pilot program to simplify the process of bringing PC games to mobile. It provides support starting from Android game development all the way through publishing your game on Play. Starting this month, games like DREDGE and TABS Mobile are growing their mobile audience using this program. Many more are following in their footsteps this year, including Disco Elysium. You can express your interest in joining the PC to mobile program.

    Moving image displaying thumbnails of titles of new PC games coming to mobile - Disco Elysium, TABS Mobile, and DREDGE

    New PC games are coming to mobile

    You can learn more about Android game development from our developer site. We can’t wait to see your title join the ranks of these amazing games built for Android. And if you’ll be at GDC next week, we’d love to say hello – stop by at the Moscone Center West Hall!

    * Source: Google internal data measuring games on Android 14 or later launched between August 2024 – February 2025.



    Source link

  • Artificial Intelligence APIs with Python

    Artificial Intelligence APIs with Python


    We understand that circumstances can change, and if you need to withdraw from the bootcamp, your options will vary depending on your billing cycle:

    – If you enrolled with a monthly plan, you can cancel future billing on your membership and you will not be renewed on your next billing date, or you can pause your membership for up to three months and pick up your studies again at that time.

    – If you enrolled with a one-time payment, you will be eligible for a full refund within the first 14 days of your enrollment into the bootcamp.

    *Please note: if you’ve accessed a significant portion of program materials, this might affect your eligibility for a full refund.

    Please email support@kodeco.com for further assistance on the withdrawal process.



    Source link

  • Android Developers Blog: #WeArePlay | How Memory Lane Games helps people with dementia



    Posted by Robbie McLachlan – Developer Marketing

    In our latest #WeArePlay film, which celebrates the people behind apps and games, we meet Bruce – a co-founder of Memory Lane Games. His company turns cherished memories into simple, engaging quizzes for people with different types of dementia. Discover how Memory Lane Games blends nostalgia and technology to spark conversations and emotional connections.

    https://www.youtube.com/watch?v=oBDJH8h7FYs

    What inspired the idea behind Memory Lane Games?

    The idea for Memory Lane Games came about one day at the pub when Peter was telling me how his mum, even with vascular dementia, lights up when she looks at old family photos. It got me thinking about my own mum, who treasures old photos just as much. The idea hit us – why not turn those memories into games? We wanted to help people reconnect with their past and create moments where conversations could flow naturally.

    Memory Lane Games co-founders, Peter and Bruce from Isle of Man

    Can you tell us of a memorable moment in the journey when you realized how powerful the game was?

    We knew we were onto something meaningful when a caregiver in a memory cafe told us about a man who was pretty much non-verbal but would enjoy playing. He started humming along to one of our music trivia games, then suddenly said, “Roy Orbison is a way better singer than Elvis, but Elvis had a better manager.” The caregiver was in tears—it was the first complete sentence he’d spoken in months. Moments like these remind us why we’re doing this—it’s not just about games; it’s about unlocking moments of connection and joy that dementia often takes away.

    A user plays Memory Lane Games from their phone

    One of the key features is having errorless fun with the games, why was that so important?

    We strive for frustration-free design. With our games, there are no wrong answers—just gentle prompts to trigger memories and spark conversations about topics they are interested in. It’s not about winning or losing; it’s about rekindling connections and creating moments of happiness without any pressure or frustration. Dementia can make day-to-day tasks challenging, and the last thing anyone needs is a game that highlights what they might not remember or get right. Caregivers also like being able to redirect attention back to something familiar and fun when behaviour gets more challenging.

    How has Google Play helped your journey?

    What’s been amazing is how Google Play has connected us with an incredibly active and engaged global community without any major marketing efforts on our part.

    For instance, we got our first big traction in places like the Philippines and India—places we hadn’t specifically targeted. Yet here we are, with thousands of downloads in more than 100 countries. That reach wouldn’t have been possible without Google Play.

    A group of senior citizen gather around a table to play a round of Memory Lane Games from a shared mobile device

    What is next for Memory Lane Games?

    We’re really excited about how we can use AI to take Memory Lane Games to the next level. Our goal is to use generative AI, like Google’s Gemini, to create more personalized and localized game content. For example, instead of just focusing on general memories, we want to tailor the game to a specific village the player came from, or a TV show they used to watch, or even local landmarks from their family’s hometown. AI will help us offer games that are deeply personal. Plus, with the power of AI, we can create games in multiple languages, tapping into new regions like Japan, Nigeria or Mexico.

    Discover other inspiring app and game founders featured in #WeArePlay.



    Source link