
  • Cub8 Is Hypnotic and High Stakes Fun


    Gameplay is simple to pick up but very difficult to master. You’ll tap on the beat to activate the press when the cube is perfectly aligned with the hole. If you miss even once, the game ends.

    After 10 successful presses, you’ll start on a new stage with fresh music, new mechanics, and even more intensity. The further you go, the faster the game becomes.

    There are eight stages to master. Each one brings interesting mechanics like hazard cubes that redirect movement, fake cubes that try to trick you, and much more.

    Along with a neon aesthetic, the game features a fun techno soundtrack to keep you entertained and synced with the gameplay. You can also unlock skins to customize the hydraulic press and power-ups to help you survive longer.

    There is also a global leaderboard so you can see how you stack up against others.

    Cub8 is designed for the iPhone and all iPad models. It’s a free download now on the App Store, with in-app purchases available.



    Source link

  • Texas Requires Apple and Google to Verify Ages for App Downloads



    The state’s governor signed a new law that will give parents more control over the apps that minors download, part of a raft of new legislation.



    Source link

  • Best and Worst States for Retirement? Here’s the Ranking


    One in five Americans aged 50 and over has no retirement savings, and more than half worry that they won’t have enough money to last once they leave the workforce, according to an AARP survey.

    However, where U.S. workers live can have a significant impact on their retirement readiness.

    Getting familiar with some of the key averages in your state, from 401(k) balances to median incomes, life expectancies, cost of living and more, can help you understand just how prepared you are — or aren’t — for your golden years.

    Related: How Much Money Do You Need to Retire Comfortably in Your State? Here’s the Breakdown.

    Western & Southern Financial Group examined those metrics and others to rank all 50 states based on where retirees have the best and worst readiness for retirement.

    New Jersey, Connecticut, Maryland, Virginia and Vermont came out on top for states where people are most prepared for retirement, per the study.

    What’s more, residents in Connecticut and New Jersey reported the highest average 401(k) balances: $546,000 and $514,000, respectively. Residents over the age of 65 in those states also have high median incomes — over $96,000.

    Related: Here Are the Best and Worst States for Retirement in 2025, According to a New Report

    Americans living in West Virginia, Mississippi, Arkansas, Tennessee and Arizona may fare the worst in retirement, according to the research.

    Mississippi and Arkansas residents reported some of the lowest average 401(k) balances, at $348,000 and $364,000, respectively. In West Virginia and Arkansas, residents over the age of 65 have median incomes under $58,000.

    Related: These Are the States Where $1 Million in Retirement Savings Lasts the Longest (and Where You’ll Be Broke in No Time)

    Check out Western & Southern Financial Group’s full ranking of Americans’ retirement readiness by state below:

    Image Credit: Courtesy of Western & Southern Financial Group




    Source link

  • How Androidify leverages Gemini, Firebase and ML Kit



    Posted by Thomas Ezan – Developer Relations Engineer, Rebecca Franks – Developer Relations Engineer, and Avneet Singh – Product Manager

    We’re bringing back Androidify later this year, this time powered by Google AI, so you can customize your very own Android bot and share your creativity with the world. Today, we’re releasing a new open source demo app for Androidify as a great example of how Google is using its Gemini AI models to enhance app experiences.

    In this post, we’ll dive into how the Androidify app uses Gemini models and Imagen via the Firebase AI Logic SDK, and we’ll provide some insights learned along the way to help you incorporate Gemini and AI into your own projects. Read more about the Androidify demo app.

    App flow

    The overall app functions as follows, with various parts of it using Gemini and Firebase along the way:

    flow chart demonstrating Androidify app flow

    Gemini and image validation

    To get started with Androidify, take a photo or choose an image on your device. The app needs to make sure that the image you upload is suitable for creating an avatar.

    Gemini 2.5 Flash via Firebase helps with this by verifying that the image contains a person, that the person is in focus, and that the image is safe, including checking whether it contains abusive content.

    val jsonSchema = Schema.obj(
        properties = mapOf("success" to Schema.boolean(), "error" to Schema.string()),
        optionalProperties = listOf("error"),
    )

    val generativeModel = Firebase.ai(backend = GenerativeBackend.googleAI())
        .generativeModel(
            modelName = "gemini-2.5-flash-preview-04-17",
            generationConfig = generationConfig {
                responseMimeType = "application/json"
                responseSchema = jsonSchema
            },
            safetySettings = listOf(
                SafetySetting(HarmCategory.HARASSMENT, HarmBlockThreshold.LOW_AND_ABOVE),
                SafetySetting(HarmCategory.HATE_SPEECH, HarmBlockThreshold.LOW_AND_ABOVE),
                SafetySetting(HarmCategory.SEXUALLY_EXPLICIT, HarmBlockThreshold.LOW_AND_ABOVE),
                SafetySetting(HarmCategory.DANGEROUS_CONTENT, HarmBlockThreshold.LOW_AND_ABOVE),
                SafetySetting(HarmCategory.CIVIC_INTEGRITY, HarmBlockThreshold.LOW_AND_ABOVE),
            ),
        )

    val response = generativeModel.generateContent(
        content {
            text("You are to analyze the provided image and determine if it is acceptable and appropriate based on specific criteria.... (more details see the full sample)")
            image(image)
        },
    )

    // response.text is nullable, so assert non-null before parsing the structured output
    val jsonResponse = Json.parseToJsonElement(response.text!!)
    val isSuccess = jsonResponse.jsonObject["success"]?.jsonPrimitive?.booleanOrNull == true
    val error = jsonResponse.jsonObject["error"]?.jsonPrimitive?.content
    

    In the snippet above, we’re leveraging the structured output capabilities of the model by defining the schema of the response. We’re passing a Schema object via the responseSchema param in the generationConfig.

    We want to validate that the image has enough information to generate a nice Android avatar, so we ask the model to return a JSON object with success = true/false and an optional error message explaining why the image doesn’t have enough information.

    Structured output is a powerful feature enabling a smoother integration of LLMs into your app by controlling the format of their output, similar to an API response.
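
    To make the “similar to an API response” point concrete, here is a minimal sketch of decoding that structured response into a small data class with kotlinx.serialization (assuming the serialization plugin is applied); the ValidationResult type and parseValidation helper are illustrative assumptions, not part of the Androidify sample.

    import kotlinx.serialization.Serializable
    import kotlinx.serialization.json.Json

    // Hypothetical holder matching the jsonSchema above: a required flag plus an optional error message.
    @Serializable
    data class ValidationResult(
        val success: Boolean,
        val error: String? = null,
    )

    // Because responseSchema constrains the model to this shape, the text can be decoded directly.
    fun parseValidation(responseText: String): ValidationResult {
        val json = Json { ignoreUnknownKeys = true }
        return json.decodeFromString(ValidationResult.serializer(), responseText)
    }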

    Image captioning with Gemini Flash

    Once it’s established that the image contains sufficient information to generate an Android avatar, it is captioned using Gemini 2.5 Flash with structured output.

    val jsonSchema = Schema.obj(
        properties = mapOf(
            "success" to Schema.boolean(),
            "user_description" to Schema.string(),
        ),
        optionalProperties = listOf("user_description"),
    )
    val generativeModel = createGenerativeTextModel(jsonSchema)

    val prompt = "You are to create a VERY detailed description of the main person in the given image. This description will be translated into a prompt for a generative image model..."

    val response = generativeModel.generateContent(
        content {
            text(prompt)
            image(image)
        },
    )

    val jsonResponse = Json.parseToJsonElement(response.text!!)
    val isSuccess = jsonResponse.jsonObject["success"]?.jsonPrimitive?.booleanOrNull == true

    val userDescription = jsonResponse.jsonObject["user_description"]?.jsonPrimitive?.content
    

    The other option in the app is to start with a text prompt. You can enter details about your accessories, hairstyle, and clothing, and let Imagen be a bit more creative.

    Android generation via Imagen

    We’ll use this detailed description of your image to enrich the prompt used for image generation. We’ll add extra details around what we would like to generate and include the bot color selection as part of this too, including the skin tone selected by the user.

    val imagenPrompt = "A 3D rendered cartoonish Android mascot in a photorealistic style, the pose is relaxed and straightforward, facing directly forward [...] The bot looks as follows $userDescription [...]"
    

    We then call the Imagen model to create the bot. Using this new prompt, we create a model and call generateImages:

    // We supply our own fine-tuned model here, but you can use "imagen-3.0-generate-002"
    val generativeModel = Firebase.ai(backend = GenerativeBackend.googleAI()).imagenModel(
        "imagen-3.0-generate-002",
        safetySettings =
            ImagenSafetySettings(
                ImagenSafetyFilterLevel.BLOCK_LOW_AND_ABOVE,
                personFilterLevel = ImagenPersonFilterLevel.ALLOW_ALL,
            ),
    )

    val response = generativeModel.generateImages(imagenPrompt)

    val image = response.images.first().asBitmap()
    

    And that’s it! The Imagen model generates a bitmap that we can display on the user’s screen.

    Finetuning the Imagen model

    The Imagen 3 model was fine-tuned using Low-Rank Adaptation (LoRA). LoRA is a fine-tuning technique designed to reduce the computational burden of training large models. Instead of updating the entire model, LoRA adds smaller, trainable “adapters” that make small changes to the model’s behavior. We ran a fine-tuning pipeline on the generally available Imagen 3 model using Android bot assets in different color combinations, along with additional assets for enhanced cuteness and fun. We generated text captions for the training images, and the image-text pairs were used to fine-tune the model effectively.

    The current sample app uses a standard Imagen model, so the results may look a bit different from the visuals in this post. However, the version of the app that uses the fine-tuned model and a custom version of the Firebase AI Logic SDK was demoed at Google I/O. That version will be released later this year, and we are also planning to add support for fine-tuned models to the Firebase AI Logic SDK later in the year.

    moving image of Androidify app demo turning a selfie image of a bearded man wearing a black tshirt and sunglasses, with a blue back pack into a green 3D bearded droid wearing a black tshirt and sunglasses with a blue backpack

    The original image… and Androidifi-ed image

    ML Kit

    The app also uses the ML Kit Pose Detection SDK to detect a person in the camera view, which triggers the capture button and adds visual indicators.

    To do this, we add the SDK to the app, and use PoseDetection.getClient(). Then, using the poseDetector, we look at the detectedLandmarks that are in the streaming image coming from the Camera, and we set the _uiState.detectedPose to true if a nose and shoulders are visible:

    private suspend fun runPoseDetection() {
        PoseDetection.getClient(
            PoseDetectorOptions.Builder()
                .setDetectorMode(PoseDetectorOptions.STREAM_MODE)
                .build(),
        ).use { poseDetector ->
            // Since image analysis is processed by ML Kit asynchronously in its own thread pool,
            // we can run this directly from the calling coroutine scope instead of pushing this
            // work to a background dispatcher.
            cameraImageAnalysisUseCase.analyze { imageProxy ->
                imageProxy.image?.let { image ->
                    val poseDetected = poseDetector.detectPersonInFrame(image, imageProxy.imageInfo)
                    _uiState.update { it.copy(detectedPose = poseDetected) }
                }
            }
        }
    }
    
    private suspend fun PoseDetector.detectPersonInFrame(
        image: Image,
        imageInfo: ImageInfo,
    ): Boolean {
        val results = process(InputImage.fromMediaImage(image, imageInfo.rotationDegrees)).await()
        val landmarkResults = results.allPoseLandmarks
        val detectedLandmarks = mutableListOf<Int>()
        for (landmark in landmarkResults) {
            if (landmark.inFrameLikelihood > 0.7) {
                detectedLandmarks.add(landmark.landmarkType)
            }
        }
    
        return detectedLandmarks.containsAll(
            listOf(PoseLandmark.NOSE, PoseLandmark.LEFT_SHOULDER, PoseLandmark.RIGHT_SHOULDER),
        )
    }
    

    moving image showing the camera shutter button activating when an orange droid figurine is held in the camera frame

    The camera shutter button is activated when a person (or a bot!) enters the frame.

    Get started with AI on Android

    The Androidify app makes extensive use of Gemini 2.5 Flash to validate the photo and to produce the detailed description used for image generation. It also leverages the specifically fine-tuned Imagen 3 model to generate images of Android bots. Gemini and Imagen models are easily integrated into the app via the Firebase AI Logic SDK. In addition, the ML Kit Pose Detection SDK controls the capture button, enabling it only when a person is present in front of the camera.

    To get started with AI on Android, go to the Gemini and Imagen documentation for Android.

    Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.



    Source link

  • Building powerful AI-driven experiences with Jetpack Compose, Gemini and CameraX


    The Android bot is a beloved mascot for Android users and developers, and previous versions of the bot builder have been very popular, so we decided that this year we’d rebuild the bot maker from the ground up using the latest technology backed by Gemini. Today we are releasing a new open source app, Androidify, for learning how to build powerful AI-driven experiences on Android using the latest technologies such as Jetpack Compose, Gemini through Firebase, CameraX, and Navigation 3.

    Here’s an example of the app running on the device, showcasing converting a photo to an Android bot that represents my likeness:

    moving image showing the conversion of an image of a woman in a pink dress holding an umbrella into a 3D image of a droid bot wearing a pink dress holding an umbrella

    Under the hood

    The app combines a variety of different Google technologies, such as:

      • Gemini API – through Firebase AI Logic SDK, for accessing the underlying Imagen and Gemini models.
      • Jetpack Compose – for building the UI with delightful animations and making the app adapt to different screen sizes.
      • Navigation 3 – the latest navigation library for building up Navigation graphs with Compose.
      • CameraX Compose and Media3 Compose – for building up a custom camera with custom UI controls (rear camera support, zoom support, tap-to-focus) and playing the promotional video.

    This sample app is currently using a standard Imagen model, but we’ve been working on a fine-tuned model that’s trained specifically on all of the pieces that make the Android bot cute and fun; we’ll share that version later this year. In the meantime, don’t be surprised if the sample app puts out some interesting looking examples!

    How does the Androidify app work?

    The app leverages our best practices for Architecture, Testing, and UI to showcase a real world, modern AI application on device.

    Flow chart describing Androidify app flow

    Androidify app flow chart detailing how the app works with AI

    AI in Androidify with Gemini and ML Kit

    The Androidify app uses the Gemini models in a multitude of ways to enrich the app experience, all powered by the Firebase AI Logic SDK. The app uses Gemini 2.5 Flash and Imagen 3 under the hood:

      • Image validation: We ensure that the captured image contains sufficient information, such as a clearly focused person, and assess it for safety. This feature uses the multi-modal capabilities of the Gemini API by giving it a prompt and an image at the same time:

    val response = generativeModel.generateContent(
       content {
           text(prompt)
           image(image)
       },
    )
    

      • Text prompt validation: If the user opts for text input instead of image, we use Gemini 2.5 Flash to ensure the text contains a sufficiently descriptive prompt to generate a bot.

      • Image captioning: Once we’re sure the image has enough information, we use Gemini 2.5 Flash to perform image captioning. We ask Gemini to be as descriptive as possible, focusing on the clothing and its colors.

      • “Help me write” feature: Similar to an “I’m feeling lucky” type feature, “Help me write” uses Gemini 2.5 Flash to create a random description of the clothing and hairstyle of a bot (see the sketch after this list).

      • Image generation from the generated prompt: As the final step, Imagen generates the image, providing the prompt and the selected skin tone of the bot.
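
    As a rough illustration of the “Help me write” call, the sketch below sends a text-only prompt through the Firebase AI Logic SDK and reads back free-form text. The prompt wording and the suggestBotDescription wrapper are assumptions for this sketch, not code from the Androidify app.

    // Hypothetical sketch of a "Help me write" style request: text-only input, free-form text output.
    suspend fun suggestBotDescription(): String? {
        // No image() part and no response schema, since free-form text is fine here.
        val generativeModel = Firebase.ai(backend = GenerativeBackend.googleAI())
            .generativeModel(modelName = "gemini-2.5-flash-preview-04-17")

        val response = generativeModel.generateContent(
            content {
                text(
                    "Invent a short, playful description of an Android bot's clothing and " +
                        "hairstyle. Keep it to two sentences.",
                )
            },
        )
        return response.text
    }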

    The app also uses ML Kit pose detection to detect a person in the viewfinder, enabling the capture button when a person is detected and adding fun indicators around the content to signal detection.

    Explore more detailed information about AI usage in Androidify.

    Jetpack Compose

    The user interface of Androidify is built using Jetpack Compose, the modern UI toolkit that simplifies and accelerates UI development on Android.

    Delightful details with the UI

    The app uses Material 3 Expressive, the latest alpha release that makes your apps more premium, desirable, and engaging. It provides delightful bits of UI out of the box, like new shapes, componentry, and MotionScheme variables to use wherever a motion spec is needed.

    MaterialShapes are used in various locations. These are a preset list of shapes that allow for easy morphing between each other—for example, the cute cookie shape for the camera capture button:

    Androidify app UI showing camera button

    Camera button with a MaterialShapes.Cookie9Sided shape

    Beyond using the standard Material components, Androidify also features custom composables and delightful transitions tailored to the specific needs of the app:

      • There are plenty of shared element transitions across the app—for example, a morphing shape shared element transition is performed between the “take a photo” button and the camera surface.

        moving example of expressive button shapes in slow motion

      • Custom enter transitions for the ResultsScreen with the usage of marquee modifiers.

        animated marquee example

      • Fun color splash animation as a transition between screens.

        moving image of a blue color splash transition between Androidify demo screens

      • Animating gradient buttons for the AI-powered actions.

        animated gradient button for AI powered actions example

    To learn more about the unique details of the UI, read Androidify: Building delightful UIs with Compose

    Adapting to different devices

    Androidify is designed to look great and function seamlessly across candy bar phones, foldables, and tablets. The general goal of developing adaptive apps is to avoid reimplementing the same app multiple times for each form factor by extracting reusable composables and leveraging APIs like WindowSizeClass to determine what kind of layout to display.

    a collage of different adaptive layouts for the Androidify app across small and large screens

    Various adaptive layouts in the app

    For Androidify, we only needed to leverage the width window size class. Combining this with different layout mechanisms, we were able to reuse or extend the composables to cater to the multitude of different device sizes and capabilities.

      • Responsive layouts: The CreationScreen demonstrates adaptive design. It uses helper functions like isAtLeastMedium() to detect window size categories and adjust its layout accordingly. On larger windows, the image/prompt area and color picker might sit side-by-side in a Row, while on smaller windows, the color picker is accessed via a ModalBottomSheet. This pattern, called “supporting pane”, highlights the supporting dependencies between the main content and the color picker.

      • Foldable support: The app actively checks for foldable device features. The camera screen uses WindowInfoTracker to get FoldingFeature information to adapt to different features by optimizing the layout for tabletop posture.

      • Rear display: Support for devices with multiple displays is included via the RearCameraUseCase, allowing for the device camera preview to be shown on the external screen when the device is unfolded (so the main content is usually displayed on the internal screen).

    Using window size classes, coupled with creating a custom @LargeScreensPreview annotation, helps achieve unique and useful UIs across the spectrum of device sizes and window sizes.
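
    To make the width-only check concrete, here is a minimal sketch of an isAtLeastMedium()-style helper built on the Material 3 adaptive APIs; Androidify’s actual helper may differ, so treat this as an assumption-laden illustration.

    import androidx.compose.material3.adaptive.currentWindowAdaptiveInfo
    import androidx.compose.runtime.Composable
    import androidx.window.core.layout.WindowWidthSizeClass

    // Sketch: report whether the current window width is at least "medium".
    @Composable
    fun isAtLeastMedium(): Boolean {
        val widthClass = currentWindowAdaptiveInfo().windowSizeClass.windowWidthSizeClass
        return widthClass == WindowWidthSizeClass.MEDIUM || widthClass == WindowWidthSizeClass.EXPANDED
    }

    A screen like CreationScreen could branch on this flag to choose between the side-by-side Row layout and the ModalBottomSheet color picker described above.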

    CameraX and Media3 Compose

    To allow users to base their bots on photos, Androidify integrates CameraX, the Jetpack library that makes camera app development easier.

    The app uses a custom CameraLayout composable that supports the layout of the typical composables that a camera preview screen would include — for example, zoom buttons, a capture button, and a flip camera button. This layout adapts to different device sizes and more advanced use cases, like tabletop mode and the rear-camera display. For the actual rendering of the camera preview, it uses the new CameraXViewfinder that is part of the camerax-compose artifact.

    CameraLayout in Compose

    CameraLayout composable that takes care of different device configurations, such as table top mode

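    To give a flavor of that wiring, here is a hedged sketch of rendering a CameraX SurfaceRequest with CameraXViewfinder from the camerax-compose artifact; how the SurfaceRequest is produced (typically from a Preview use case wired up in a ViewModel) is left out, and the ViewfinderSketch name is illustrative, not from the Androidify repository.

    import androidx.camera.compose.CameraXViewfinder
    import androidx.camera.core.SurfaceRequest
    import androidx.compose.foundation.layout.fillMaxSize
    import androidx.compose.runtime.Composable
    import androidx.compose.ui.Modifier

    // Sketch: render the camera preview once a SurfaceRequest is available.
    @Composable
    fun ViewfinderSketch(surfaceRequest: SurfaceRequest?, modifier: Modifier = Modifier) {
        surfaceRequest?.let { request ->
            CameraXViewfinder(
                surfaceRequest = request,
                modifier = modifier.fillMaxSize(),
            )
        }
    }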

    The app also integrates with Media3 APIs to load an instructional video for showing how to get the best bot from a prompt or image. Using the new media3-ui-compose artifact, we can easily add a VideoPlayer into the app:

    @Composable
    private fun VideoPlayer(modifier: Modifier = Modifier) {
        val context = LocalContext.current
        var player by remember { mutableStateOf<Player?>(null) }
        LifecycleStartEffect(Unit) {
            player = ExoPlayer.Builder(context).build().apply {
                setMediaItem(MediaItem.fromUri(Constants.PROMO_VIDEO))
                repeatMode = Player.REPEAT_MODE_ONE
                prepare()
            }
            onStopOrDispose {
                player?.release()
                player = null
            }
        }
        Box(
            modifier
                .background(MaterialTheme.colorScheme.surfaceContainerLowest),
        ) {
            player?.let { currentPlayer ->
                PlayerSurface(currentPlayer, surfaceType = SURFACE_TYPE_TEXTURE_VIEW)
            }
        }
    }
    

    Using the new onLayoutRectChanged modifier, we also listen for whether the composable is completely visible or not, and play or pause the video based on this information:

    var videoFullyOnScreen by remember { mutableStateOf(false) }     
    
    LaunchedEffect(videoFullyOnScreen) {
         if (videoFullyOnScreen) currentPlayer.play() else currentPlayer.pause()
    } 
    
    // We add this onto the player composable to determine if the video composable is visible
    // and mutate the videoFullyOnScreen variable, which then toggles the player state.
    Modifier.onVisibilityChanged(
                    containerWidth = LocalView.current.width,
                    containerHeight = LocalView.current.height,
    ) { fullyVisible -> videoFullyOnScreen = fullyVisible }
    
    // A simple version of visibility changed detection
    fun Modifier.onVisibilityChanged(
        containerWidth: Int,
        containerHeight: Int,
        onChanged: (visible: Boolean) -> Unit,
    ) = this then Modifier.onLayoutRectChanged(100, 0) { layoutBounds ->
        onChanged(
            layoutBounds.boundsInRoot.top > 0 &&
                layoutBounds.boundsInRoot.bottom < containerHeight &&
                layoutBounds.boundsInRoot.left > 0 &&
                layoutBounds.boundsInRoot.right < containerWidth,
        )
    }
    

    Additionally, using rememberPlayPauseButtonState, we add on a layer on top of the player to offer a play/pause button on the video itself:

    val playPauseButtonState = rememberPlayPauseButtonState(currentPlayer)

    OutlinedIconButton(
        onClick = playPauseButtonState::onClick,
        enabled = playPauseButtonState.isEnabled,
    ) {
        val icon =
            if (playPauseButtonState.showPlay) R.drawable.play else R.drawable.pause
        val contentDescription =
            if (playPauseButtonState.showPlay) R.string.play else R.string.pause
        Icon(
            painterResource(icon),
            stringResource(contentDescription),
        )
    }
    

    Check out the code for more details on how CameraX and Media3 were used in Androidify.

    Navigation 3

    Screen transitions are handled using the new Jetpack Navigation 3 library androidx.navigation3. The MainNavigation composable defines the different destinations (Home, Camera, Creation, About) and displays the content associated with each destination using NavDisplay. You get full control over your back stack, and navigating to and from destinations is as simple as adding and removing items from a list.

    @Composable
    fun MainNavigation() {
       val backStack = rememberMutableStateListOf<NavigationRoute>(Home)
       NavDisplay(
           backStack = backStack,
           onBack = { backStack.removeLastOrNull() },
           entryProvider = entryProvider {
               entry<Home> { entry ->
                   HomeScreen(
                       onAboutClicked = {
                           backStack.add(About)
                       },
                   )
               }
               entry<Camera> {
                   CameraPreviewScreen(
                       onImageCaptured = { uri ->
                           backStack.add(Create(uri.toString()))
                       },
                   )
               }
               // etc
           },
       )
    }
    

    Notably, Navigation 3 exposes a new composition local, LocalNavAnimatedContentScope, to easily integrate your shared element transitions without needing to keep track of the scope yourself. By default, Navigation 3 also integrates with predictive back, providing delightful back experiences when navigating between screens, as seen in this prior shared element transition:

    CameraLayout in Compose

    Learn more about Jetpack Navigation 3, currently in alpha.

    Learn more

    By combining the declarative power of Jetpack Compose, the camera capabilities of CameraX, the intelligent features of Gemini, and thoughtful adaptive design, Androidify is a personalized avatar creation experience that feels right at home on any Android device. You can find the full code sample at github.com/android/androidify where you can see the app in action and be inspired to build your own AI-powered app experiences.

    Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.



    Source link

  • The Old Man and the iPhone



    Our modern conveniences are exhaustingly inconvenient.



    Source link

  • Android’s Kotlin Multiplatform announcements at Google I/O and KotlinConf 25



    Posted by Ben Trengrove – Developer Relations Engineer, Matt Dyor – Product Manager

    Google I/O and KotlinConf 2025 bring a series of announcements on Android’s Kotlin and Kotlin Multiplatform efforts. Here’s what to watch out for:

    Announcements from Google I/O 2025

    Jetpack libraries

    Our focus for Jetpack libraries and KMP is on sharing business logic across Android and iOS, but we have begun experimenting with web/WASM support.

    We are adding KMP support to Jetpack libraries. Last year we started with Room, DataStore and Collection, which are now available in a stable release, and recently we have added ViewModel, SavedState and Paging. The levels of support that our Jetpack libraries guarantee for each platform have been categorised into three tiers, with the top tier being for Android, iOS and JVM.
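
    As a small sketch of what that sharing looks like in practice, the Room declarations below can live in a KMP module’s commonMain source set; the BotEntity, BotDao and BotDatabase names are illustrative, and the platform-specific database builder setup (plus the RoomDatabaseConstructor declaration needed for non-Android targets) is omitted.

    import androidx.room.Dao
    import androidx.room.Database
    import androidx.room.Entity
    import androidx.room.Insert
    import androidx.room.PrimaryKey
    import androidx.room.Query
    import androidx.room.RoomDatabase

    // Illustrative shared persistence layer, declared once in commonMain.
    @Entity
    data class BotEntity(
        @PrimaryKey(autoGenerate = true) val id: Long = 0,
        val description: String,
    )

    @Dao
    interface BotDao {
        @Insert
        suspend fun insert(bot: BotEntity)

        @Query("SELECT * FROM BotEntity")
        suspend fun getAll(): List<BotEntity>
    }

    // Non-Android targets also need a RoomDatabaseConstructor and a platform-specific builder (omitted here).
    @Database(entities = [BotEntity::class], version = 1)
    abstract class BotDatabase : RoomDatabase() {
        abstract fun botDao(): BotDao
    }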

    Tool improvements

    We’re developing new tools to help easily start using KMP in your app. With the KMP new module template in Android Studio Meerkat, you can add a new module to an existing app and share code to iOS and other supported KMP platforms.

    In addition to KMP enhancements, Android Studio now supports Kotlin K2 mode for Android-specific features requiring language support, such as Live Edit, Compose Preview, and more.

    How Google is using KMP

    Last year, Google Workspace began experimenting with KMP, and this is now running in production in the Google Docs app on iOS. The app’s runtime performance is on par with or better than before.¹

    It’s been helpful to have an app at this scale test KMP out, because we’re able to identify and fix issues in ways that benefit the KMP developer community.

    For example, we’ve upgraded the Kotlin Native compiler to LLVM 16 and contributed a more efficient garbage collector and string implementation. We’re also bringing the static analysis power of Android Lint to Kotlin targets and ensuring a unified Gradle DSL for both AGP and KGP to improve the plugin management experience.

    New guidance

    We’re providing comprehensive guidance in the form of two new codelabs: Getting started with Kotlin Multiplatform and Migrating your Room database to KMP, to help you get from standalone Android and iOS apps to shared business logic.

    Kotlin Improvements

    Kotlin Symbol Processing (KSP2) is stable to better support new Kotlin language features and deliver better performance. It is easier to integrate with build systems, is thread-safe, and has better support for debugging annotation processors. In contrast to KSP1, KSP2 has much better compatibility across different Kotlin versions. The rewritten command line interface also becomes significantly easier to use as it is now a standalone program instead of a compiler plugin.

    KotlinConf 2025

    Google team members are presenting a number of talks at KotlinConf spanning multiple topics:

    Talks

      • Deploying KMP at Google Workspace by Jason Parachoniak, Troels Lund, and Johan Bay from the Workspace team discusses the challenges and solutions, including bugs and performance optimizations, encountered when launching Kotlin Multiplatform at Google Workspace, offering comparisons to Objective-C and a Q&A. (Technical Session)

      • The Life and Death of a Kotlin/Native Object by Troels Lund offers a high-level explanation of the Kotlin/Native runtime’s inner workings concerning object instantiation, memory management, and disposal. (Technical Session)

      • APIs: How Hard Can They Be? presented by Aurimas Liutikas and Alan Viverette from the Jetpack team delves into the lifecycle of API design, review processes, and evolution within AndroidX libraries, particularly considering KMP and related tools. (Technical Session)

      • Project Sparkles: How Compose for Desktop is changing Android Studio and IntelliJ with Chris Sinco and Sebastiano Poggi from the Android Studio team introduces the initiative (‘Project Sparkles’) aiming to modernize Android Studio and IntelliJ UIs using Compose for Desktop, covering goals, examples, and collaborations. (Technical Session)

      • JSpecify: Java Nullness Annotations and Kotlin presented by David Baker explains the significance and workings of JSpecify’s standard Java nullness annotations for enhancing Kotlin’s interoperability with Java libraries. (Lightning Session)

      • Lessons learned decoupling Architecture Components from platform specific code features Jeremy Woods and Marcello Galhardo from the Jetpack team sharing insights from the Android team on decoupling core components like SavedState and System Back from platform specifics to create common APIs. (Technical Session)

      • KotlinConf’s Closing Panel, a regular staple of the conference, returns, featuring Jeffrey van Gogh as Google’s representative on the panel. (Panel)

    Live Workshops

    If you are at KotlinConf in person, we will have guided live workshops with our new codelabs from above.

      • The codelab Migrating Room to Room KMP, also led by Matt Dyor, Dustin Lam, and Tomáš Mlynarič, demonstrates the process of migrating an existing Room database implementation to Room KMP within a shared module.

    We love engaging with the Kotlin community. If you are attending KotlinConf, we hope you get a chance to check out our booth, with opportunities to chat with our engineers, get your questions answered, and learn more about how you can leverage Kotlin and KMP.

    Learn more about Kotlin Multiplatform

    To learn more about KMP and start sharing your business logic across platforms, check out our documentation and the sample.

    Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.

    ¹ Google Internal Data, March 2025



    Source link

  • In-App Ratings and Reviews for TV



    Posted by Paul Lammertsma – Developer Relations Engineer

    Ratings and reviews are essential for developers, offering quantitative and qualitative feedback on user experiences. In 2022, we enhanced the granularity of this feedback by segmenting these insights by countries and form factors.

    Now, we’re extending the In-App Ratings and Reviews API to TV to allow developers to prompt users for ratings and reviews directly from Google TV.

    Ratings and reviews on Google TV

    Ratings and reviews entry point for the JetStream sample app on TV

    Users can now see rating averages, browse reviews, and leave their own review directly from an app’s store listing on Google TV.

    Ratings and written reviews input screen on TV

    Users can interact with in-app ratings and reviews on their TVs by doing the following:

      • Select ratings using the remote control D-pad.
      • Provide optional written reviews using Gboard’s on-screen voice input, or by easily typing from their phone.
      • Send mobile notifications to themselves to complete their TV app review directly on their phone.

    User instructions for submitting TV app ratings and reviews on mobile

    Additionally, users can leave reviews for other form factors directly from their phone by simply selecting the device chip when submitting an app rating or writing a review.

    We’ve already seen a considerable lift in app ratings on TV since bringing these changes to Google TV, and now, we’re making it possible for developers to trigger a ratings prompt as well.

    Before we look at the integration, let’s first carefully consider the best time to request a review prompt: identify optimal moments within your app to request user feedback, ensuring prompts appear only when the UI is idle to prevent interruption of ongoing content.

    In-App Review API

    Integrating the Google Play In-App Review API is the same as on mobile and it’s only a couple of method calls:

    val manager = ReviewManagerFactory.create(context)
    manager.requestReviewFlow().addOnCompleteListener { task ->
        if (task.isSuccessful) {
            // We got the ReviewInfo object
            val reviewInfo = task.result
            manager.launchReviewFlow(activity, reviewInfo)
        } else {
            // There was some problem, log or handle the error code
            @ReviewErrorCode val reviewErrorCode =
                (task.getException() as ReviewException).errorCode
        }
    }
    

    First, invoke requestReviewFlow() to obtain a ReviewInfo object which is used to launch the review flow. You must include an addOnCompleteListener() not just to obtain the ReviewInfo object, but also to monitor for any problems triggering this flow, such as the unavailability of Google Play on the device. Note that ReviewInfo does not offer any insights on whether or not a prompt appeared or which action the user took if a prompt did appear.

    The challenge is to identify when to trigger launchReviewFlow(). Track user actions—identifying successful journeys and points where users encounter issues—so you can be confident they had a delightful experience in your app.

    For this method, you may optionally also include an addOnCompleteListener() to ensure it resumes when the returned task is completed.
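
    For example, a minimal sketch of that optional listener could look like the following, where resumeContent() is a placeholder for whatever your app does once the flow finishes:

    manager.launchReviewFlow(activity, reviewInfo).addOnCompleteListener {
        // The flow has finished; the API does not report whether a dialog was actually shown.
        resumeContent() // placeholder: resume playback, navigation, etc.
    }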

    Note that due to throttling of how often users are presented with this prompt, there are no guarantees that the ratings dialog will appear when requesting to start this flow. For best practices, check this guide on when to request an in-app review.

    Get started with In-App Reviews on Google TV

    You can get a head start today by following these steps:

    1. Identify successful journeys for users, like finishing a movie or TV show season.
    2. Identify poor experiences that should be avoided, like buffering or playback errors.
    3. Integrate the Google Play In-App Review API to trigger review requests at optimal moments within the user journey.
    4. Test your integration by following the testing guide.
    5. Publish your app and continuously monitor your ratings by device type in the Play Console.

    We’re confident this integration enables you to elevate your Google TV app ratings and empowers your users to share valuable feedback.

    Play Console Ratings graphic

    Resources

    Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.



    Source link

  • Why Your Audience Isn’t Listening Anymore (And What You Can Do About It)


    Opinions expressed by Entrepreneur contributors are their own.

    Every day, we’re bombarded with noise — emails, ads, pop-ups, sponsored posts and DMs from strangers who want to “hop on a quick call.” It’s relentless. And people are tired.

    Marketers often call this “audience fatigue,” blaming content overload. But after working with hundreds of leaders to build authentic authority, I’ve come to see it differently: it’s not just content overload — it’s trust fatigue.

    Trust fatigue is what happens when people stop believing. When every message feels like a sales pitch in disguise, people disengage — not just from brands, but from leaders who once earned their respect.

    So, in a world where trust is slipping and skepticism is rising, how do you become someone worth listening to?

    Trust moves from institutions to individuals

    One study found that 79% of people trust their employer more than the media, the government, or nonprofits. That’s huge.

    It means trust is no longer institutional — it’s personal. People don’t want another faceless brand talking at them. They want a real person who shows up with clarity, consistency and value.

    That’s your opportunity. If you want to lead, you need to earn trust. And the good news? It starts with three moves.

    Related: Trust Is a Business Metric Now. Here’s How Leaders Can Earn It.

    1. Be discoverable

    Let’s get practical. Google yourself — what comes up?

    If it’s outdated bios, scattered links, or worse — nothing — you’ve got work to do. Your digital presence is your first impression. When someone wants to vet you, they’re not asking for your resume. They’re looking you up.

    A strong LinkedIn profile is the first step. Make it sound like a leader, not a job seeker. Then, create a personal website that reflects who you are, what you stand for, and the people you serve. This is your platform.

    Next, give people a reason to trust you: thought leadership content — articles, interviews, podcasts — that showcase your ideas. If I can’t find you, I can’t follow you.

    2. Be credible

    The internet is full of opinions. What cuts through is proof.

    Credibility comes from evidence: media features, speaking gigs, client testimonials, books and bylines. These aren’t vanity metrics — they’re trust signals. They tell your audience: this person has earned a platform.

    You don’t need to headline a TEDx talk tomorrow. Start small. Write a piece for your industry publication. Share a client win. Build momentum with real, earned signals of authority.

    And the data backs this up. A Gallup/Knight Foundation study found that nearly 90% of Americans follow at least one public figure for news or insight, more than brands, and sometimes more than the media itself.

    3. Be human

    Here’s where many leaders go wrong: they forget that trust isn’t just about what you say — it’s how you make people feel.

    You can have the slickest website and the most polished profile, but if your tone feels robotic or your content sounds like corporate filler, people will scroll right past.

    You don’t need to spill your life story, but you do need to sound like a real person. Share lessons you’ve learned, not just what you’re selling. Tell stories. Speak plainly. Be generous with your insights.

    I once shared a story about a career setback on stage, unsure of how it would land. It ended up being the thing people remembered — and the reason they reached out. Vulnerability built more trust than any polished pitch ever could.

    Related: How Talking Less and Listening More Builds Your Business

    Trust is the strategy — authority is the reward

    Many leaders think, “If I’m good at what I do, people will notice.”

    They won’t.

    In a world overflowing with content and short on attention, visibility matters. Credibility matters. And most of all, connection matters. You build trust gradually — through how you show up, what you say and how well it resonates with what your audience actually needs.

    So here’s where to start:

    • Audit your online presence as if you’re a stranger seeing yourself for the first time.
    • Share stories in your writing and speaking that make people feel something real.
    • Post something this week that reflects what you believe, not what you’re trying to sell.

    Lead with service. Speak with clarity. Build trust by showing up as yourself.

    Authority doesn’t come from shouting the loudest. It comes from being the one people believe.




    Source link

  • What 8 Years in Corporate Life Did — and Didn’t — Prepare Me For as a Founder


    Opinions expressed by Entrepreneur contributors are their own.

    As a consultant, chaos was a problem I had to solve. As a founder, it’s the air I breathe.

    I entered the startup world armed with what I thought was the ultimate toolkit: a consulting background. Years of strategy decks, stakeholder management and cross-functional collaboration taught me how to turn chaos into structure and solve problems fast. I thought I had seen it all.

    But I quickly realized that the transition from consultant to founder wasn’t so much a pivot — it was a free fall. See, consultants and founders couldn’t be more different. Consultants are trained to be perfect, founders need to be scrappy. Consultants are trained to eliminate chaos, founders need to thrive in it. Consultants have a safety net, founders don’t.

    Related: Are You Ready to Be a CEO, a Founder or Both? Here’s How to Know

    Let’s dive right in.

    This is what consulting did prepare me for:

    1. Finding structure in chaos: I am stating the obvious here, but it is essential for founders to be able to execute on their vision; and to do that effectively, founders need structure. Something as simple as creating an organized folder structure — which coincidentally was my first task as an associate — can go so far as securing your term sheet with investors when they ask for the data room during the due diligence process. Being due diligence-ready isn’t just about having your documents in order; it’s about demonstrating transparency and building confidence with potential investors.
    2. Thinking on the spot: As a founder, it feels like you’re in the middle of the ocean and you need to swim your way back to shore. Consulting prepared me for that. I remember being chucked into remote environments to explain technical workflows to non-technical people — in my third language, no less. Thinking fast and adapting your message to whoever’s in front of you isn’t just useful — it’s how you create openings. It’s how you pitch before your product is ready. It’s how you get a meeting before there’s anything to show.
    3. Burning the midnight oil: Let’s be real, consultants — at least, the good ones — are machines and can be extremely productive. Founders are part of a world where being busy includes attending a lot of conferences, exhibitions and the post-event functions that come with them. Consultants can rarely afford such luxuries. Crunchtime is real and forces them to converge their efforts on work. Knowing when to lock in and say no is crucial as a founder.

    This is what consulting did not prepare me for:

    1. Building and failing fast: Most founders and visionaries fall into the fallacy of building an end-to-end super solution that promises to be the holy grail of their customers — myself included. Enter the pivots. Your startup does not succeed when it builds out your vision — that is often just a very expensive dream. It succeeds when you find out what your customers are willing to pay for as quickly as possible. As Eric Ries puts it in The Lean Startup, the key is learning what customers actually want – not what you think they should want.
    2. Storytelling as an art: In my first days as a founder, I walked into a potential client’s office long before I had a product or even a live website. I took the consulting route and brought a strategy deck with me. I got destroyed in that meeting. Off the bat, it sounds like a mistake — but it was the best decision I could have made. I took note of the feedback and acted on it immediately. Get out there, pitch your idea and ask for feedback! Feedback helps you figure out what sticks, what doesn’t and how to sharpen your message until it cuts through.
    3. Learning how to network: I did more networking in my first year as a founder than I did during my eight years as a consultant. Let that sink in. I thought I was networking as a consultant, but I was really just moving within the same orbit. As a founder, the galaxy is yours to explore. From day one, you find yourself networking with fellow founders from all walks of life, angel investors, venture capitalists, tech builders, community leads — you name it. And the best part is, they don’t care about your CV. They care about your energy, passion and conviction. A study by Queen Mary University of London found that the quality of a startup’s network significantly impacts its chances of success, often more so than initial funding or team size.

    Related: Are You Thinking Like a Founder? 4 Principles Every Successful Team Should Follow

    In the end, the transition from consultant to founder was less about applying what I knew and more about unlearning what I thought I knew. And if you’re willing to unlearn, embrace different perspectives, take constructive criticism, be honest with yourself and move fast without all the answers — you will find yourself growing in ways no corporate job could ever offer.




    Source link