Discover the latest enhancements in Android Studio.
What's new in Android development tools
Estimated read time: 1:20
Summary
Join Jamal, from the Android development experience team, and Tor, from the Android developer tools engineering team, as they provide an in-depth look at the latest advancements in Android Studio aimed at enhancing high-quality app development across Android devices. They unveil the new product roadmap, demonstrating improvements like AI-powered developer assistance, upgrades in Android Studio Narwhal, and other surprises. Key updates include Gemini integrations facilitating code suggestions and error-solving, usability enhancements in code completion, the introduction of user journey tests, remote testing via Firebase device streaming, and more. The session wraps up by highlighting extensive features that make the Android development process more efficient and streamlined.
Highlights
AI-powered suggestions and error-solving in Android Studio with Gemini.
User journey testing to simplify complex test writing.
Remote device testing through Firebase device streaming, including partner labs from Samsung, OPPO, and others.
Improvements in code completion usability with new animations and streaming.
Streamlined backup and restore solutions for app data, ensuring smoother user transitions between devices.
Key Takeaways
Android Studio has doubled its release cadence for improved quality and features.
Introduction of Gemini in Android Studio for smarter coding assistance.
New tools for bug fixes, testing, and device streaming elevate the development experience.
Android Studio Cloud allows developing from anywhere with just an internet connection.
Focus on user-friendly integrations and optimization in Android development tools.
Overview
This exciting session kicks off with Jamal from the Android development experience team, alongside Tor from the engineering team, as they showcase a plethora of updates in the Android development space. Tapping into the advanced AI capabilities of the Gemini model, they're revolutionizing how developers approach coding, testing, and building apps. Whether it's through smarter code completion or streamlined testing processes, the new tools in Android Studio are bound to enhance developer productivity.
The presentations delved into significant AI developments allowing for intelligent code suggestions and error management. The highlight of these demos was the user journey testing feature which simplifies creating tests for complex user actions, making sure apps run smoothly. Additionally, with the introduction of cloud-based development environments, the reach and flexibility of crafting Android applications have been significantly amplified.
Emphasizing the blend between technological advancements and developer needs, the talks underscored how these updates are making the development process more efficient and intuitive. Be it through major usability upgrades like improved mobile device connectivity or smarter app shrinking with R8 optimizations, Android Studio's new features are truly setting a new benchmark for app development.
Chapters
00:00 - 00:30: Introduction In the 'Introduction' chapter, Jamal, the director of product management for the Android development experience team, along with Tor, the engineering director for Android developer tools and libraries, welcome viewers to a session focused on updates to Android Studio. The chapter sets the stage for discussing new developments in Android development tools.
00:30 - 01:00: Android Studio Roadmap The chapter provides an overview of the Android Studio roadmap. It highlights the mission of making it easier to develop high-quality Android apps for various devices. The chapter promises a brief update on the product roadmap, a demonstration of recent features, and concludes with additional feature updates. It also mentions that the release frequency of Android Studio has doubled.
01:00 - 01:30: Feature Drop Releases The initial platform release focuses on aligning Android Studio with the latest version of the IntelliJ platform, along with fixing various bugs and improving product quality. Following this, feature drop releases add Android-specific features. Since the last I/O event, complete release cycles of Android Studio Ladybug and Meerkat have been completed. Updates from the Android Studio Ladybug feature drop release are highlighted, including several noteworthy changes that users may have missed.
01:30 - 02:00: AI Enhancements in Android Studio This chapter discusses various AI enhancements in Android Studio aimed at improving user experience. The updates include an improved app links assistant for easier testing of app links and the integration of Google Play SDK insights directly into the IDE, eliminating the need to visit the Play Dev console for information on potential publishing issues. Additionally, a significant focus was placed on addressing over 700 bugs reported by users and identified internally to enhance the overall quality of the platform.
02:00 - 02:30: AI Features Demonstration The Android Studio Meerkat release introduces several updates and features aimed at enhancing developer productivity and efficiency. Key highlights include enhanced tools for Jetpack Compose previews, a new template for integrating Kotlin Multiplatform into existing projects, and improvements to the build menu for a more efficient and streamlined process. Additionally, this release continues to build on AI-powered developer assistance capabilities with the integration of Gemini in Android Studio.
02:30 - 03:00: Library Update Assistant The chapter titled "Library Update Assistant" discusses the use of Gemini in Android Studio across core developer workflows. It acknowledges the rapidly evolving nature of AI, emphasizing that each update to the Gemini model enhances these workflows. The philosophy towards AI is described as not a 'one-size-fits-all' solution, with the understanding that different cases, like going from a prototype or a new idea to a fully realized app, require flexible and quick processes.
03:00 - 03:30: Journey Testing This chapter discusses the different mindsets involved in taking a new idea to production, particularly in a stable and high-quality production environment. It highlights the features delivered by the Android Studio team aimed at addressing these challenges. The emphasis is on the latest AI-enabled features in Android Studio Narwhal, which are presented as part of their ongoing innovations.
03:30 - 04:00: New Agent Features The chapter titled 'New Agent Features' focuses on the unveiling of new enhancements to various workflows, particularly in Android Studio. Initially, there is a mention of surprises in AI integrations followed by a preview of other significant improvements in workflows. The presenter is set to first demonstrate new AI features in Android Studio, followed by additional product updates. Emphasis is given to the usability of these new features, hinting at a comprehensive overview that blends AI innovations with practical enhancements for developers.
04:00 - 04:30: Crash Analysis and Compose Integration This chapter discusses recent improvements in code completion, including syntax highlighting on ghost text for easier visual scanning.
04:30 - 05:00: Firebase Device Streaming and ADB Over Wi-Fi In this chapter, the focus is on Firebase Device Streaming and utilizing ADB over Wi-Fi. It covers several improvements to the streaming of code snippets, which now appear more immediately and have an enhanced scrolling behavior, eliminating the need to scroll back to see the beginning of the response. Moreover, a context drawer is introduced, allowing users to view implicitly attached files to their queries and attach their own, such as inquiring about an image and converting it to code. Additionally, there's a mention of using Gemini 2.5 Pro in the process.
05:00 - 05:30: Compose Layout Resizing and Backup The chapter "Compose Layout Resizing and Backup" begins by introducing a new model indicator in the bottom right corner, demonstrated through a 'hello' test that shows the model selector's response with a nice shimmer effect, one of several small but notable enhancements. The Gemini 2.5 Pro model supports the development of more advanced features, including an update assistant designed to simplify library updates, which are typically cumbersome.
05:30 - 06:00: XR Support and Lint Checks This chapter discusses the challenges posed by source-breaking changes and how an upgrade assistant helps automate the update process. A TOML file is presented, showing a list of dependency warnings. The latest improvement in lint checks is introduced: lint now verifies dependencies not only from maven.google.com but also from other sources such as Maven Central. The chapter concludes with a note on the ability to apply quick fixes for updates, using Kotlin as an example.
06:00 - 06:30: Conclusion and Additional Updates The chapter concludes with discussions on updating libraries using Gemini. The process involves analyzing build files and proposing updates to the compiler version, AGP, and Gradle. Furthermore, it examines a bill of materials to identify which versions require updates. Libraries listed in blue serve as hyperlinks, indicating available release notes for these updates; the notes are used to augment the agent. The section wraps up with executing the planned updates.
What's new in Android development tools Transcription
00:00 - 00:30 Hi, welcome to what's new in the Android development tools. I'm Jamal, director of product management on the Android development experience team. And I'm Tor. I'm the engineering director for Android developer tools and libraries. [Applause] So together we're going to walk through some updates to Android Studio where our
mission is to make it easy for you to make high-quality apps for Android across the whole portfolio of Android devices. First up, I'll cover a quick update to the product roadmap, then Tor will give a demo of some of the latest features in action, and lastly I'll wrap up and give some additional feature updates. Okay, first up, let's take a quick peek at the roadmap. Now, as a reminder, we have doubled our releases of Android Studio, where the initial platform release is really focused on getting Android Studio aligned to the latest version of the IntelliJ platform. Plus, it includes a handful of bug and product quality fixes. Then, we follow up with a feature drop release, which includes Android-specific features. So since I/O last year, we went through a whole release cycle of Android Studio Ladybug and Meerkat. So let me walk through a few updates that you might have missed. For the Android Studio Ladybug feature drop release, we have a handful of features around Wear previews and health services. Plus, for those of you who use ads in your apps and games, we improved the App Links Assistant to make it even easier to test your app links. And we integrated Google Play SDK Insights right into the IDE, so you don't have to go to the Play Console to see potential publishing hurdles. Lastly, we spent a notable amount of time on addressing over 700 bugs, both reported by you and our internal quality team. For the Android Studio Meerkat release, we are continuing to invest in more refined tools, such as Jetpack Compose preview enhancements and a new template for adding Kotlin Multiplatform to existing projects. Plus, we also updated the build menu to be more efficient and more streamlined in this release. And across all these releases, we've been building on our capabilities of AI-powered developer assistance with Gemini in Android Studio. From code suggestions, generating documentation, to solving crashes, we've applied Gemini in Android Studio to a wide range of your core developer workflows. Now, with AI, we know this is an evolving space, with a new improvement to the Gemini model almost every couple of weeks. Our philosophy of AI is that it's not one-size-fits-all. Let me explain. So, in some cases, we know you want to go from a prototype, a new idea, or an API; it needs to go from zero to one, rapidly trying out a new idea. While on the other hand, you have an existing codebase that you need help with taking your idea from one to production, a production environment where you have your core app business, where you probably have a higher bar for quality and stability. So for the Android Studio team, we are delivering on a wide range of features that address all these mindsets. Today, we'll show you some of the latest AI-enabled features we've been working on in Android Studio Narwhal, plus some other surprises. And in addition to AI work, we'll show you some new improvements in other critical workflows. Okay, now I'll have Tor walk us through some of the new features in Android Studio. And after we finish the demo, I'll walk through some additional product updates. Okay, Tor, take it away. All right, good morning everybody. So, I have a lot to show. Thanks. And I'm actually going to start with AI features this time. But before I get into some of the bigger features, I just want to highlight some of the usability
improvements we made, because I just think it makes it nicer to use. So yes, you can see the screen already. So the first one I'll talk about is, uh, code completion. So we're now doing, uh, syntax highlighting on the ghost text. So you can see it's subtle, but it's sort of translucent. And I think it just makes it a lot easier to visually scan. Um, and in chat, we've rewritten the entire thing in Compose. So, we now have nice animations and motion. Um, when I run a query here, you can see I can interrupt it. Another nice animation. We are now streaming, uh, code snippets. So they appear more, uh, immediately. And the scrolling behavior is better. So, you don't have to scroll back up to see the beginning of the reply. Uh, we also have a context drawer, so you can see which files are implicitly attached to your queries, and you can also go and attach your own files, if you want to ask about an image, for example, to convert it to code. Um, and notice how we're now able to actually use Gemini 2.5 Pro. So we're going to start showing the model you're using in the bottom right corner. So let me just, uh, ask a silly question: hello. Look at the model selector when I send 'hello'. See the nice shimmer? I like small stuff like that. All right. Uh, anyway, Gemini 2.5 Pro is enabling us to write some much more advanced features than before. And so the first feature I want to show you like that is our update assistant. So we know that library updates are a pain.
We've heard that loud and clear. Uh, especially when there are source-breaking changes, which there often are. So the upgrade assistant is basically, uh, trying to automate that for you. So here we have a TOML file, a version catalog. We can see lots of dependency warnings. And a new improvement in the latest version of lint is that it doesn't just check the maven.google.com dependencies. It checks, for example, Maven Central too. So we can see we have a new version of Kotlin. So now I can apply a quick fix: update all libraries with Gemini.
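For readers following along, the version catalog Tor describes is Gradle's libs.versions.toml format. A hypothetical sketch of the kind of file lint flags (all version numbers are illustrative, not from the demo):

    # gradle/libs.versions.toml (hypothetical example; versions illustrative)
    [versions]
    kotlin = "1.9.24"          # lint warning: a newer Kotlin release is available
    agp = "8.4.0"              # lint warning: a newer AGP release is available
    composeBom = "2024.05.00"  # lint warning: a newer BOM is available

    [libraries]
    compose-bom = { group = "androidx.compose", name = "compose-bom", version.ref = "composeBom" }

    [plugins]
    android-application = { id = "com.android.application", version.ref = "agp" }
    kotlin-android = { id = "org.jetbrains.kotlin.android", version.ref = "kotlin" }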
When I do that, we are analyzing the build files. You can see it's now proposing to update the compiler version, AGP, and Gradle itself. Here we have a bill of materials, and it's showing us which versions within that bill of materials are going to be updated. All the blue libraries here are hyperlinks. That means we found release notes for these updates that we're going to augment the agent with. And at the end, I can let it do its thing. So, uh, I'm going to press start update. And so now the agent is going to build the project. We've already changed the build files. You can see in the commit window here that some files have already been changed. Now it's running a build. It's found an error. It's now going to reason about that error. Uh, and there it is. You can see the explanation for what it's doing along the way. It's reading the file. Um, and if you went and saw the developer keynote yesterday, you'll notice this looks a bit different. So our original update bot (update agent, sorry), uh, was written for Gemini 2.0, and we had it months ago. Uh, but then 2.5 has enabled us to write a much more advanced agent. So this isn't a facelift; this is a body lift. We're now putting the update agent into the generic agent instead. And it's kind of fun. Uh, so you can see it has now succeeded. It's running a Gradle sync, and it should be done momentarily.
07:30 - 08:00 Um I would think there it is. And then we get a summary at the end where I can review the changes. So, this was a deliberately very simple update so that we could sit and watch it live. What I want to do now is start a longer and frankly harder update job uh and run it in the background during the rest of the demo. So, for that I'm going to use Android Studio on the cloud. So, here is Firebase Studio and I can open up Android Studio workspaces here and I've already started one. So,
what you're seeing here is a version of Studio running, uh, in the cloud. I've preloaded the Sunflower project, but I've gone back in time. I've checked out an older version of this project, and now I'm going to just see if the AI can update it for us. Uh, by the way, we have implemented backup and sync now. So, you can see that all my settings from my local computer and all my remote instances can be synced with my Google account as well as my JetBrains account. All right, let's kick off the
08:30 - 09:00 update here. And I've run this update about 10 times in the last week and it's succeeded in about half of them. So it's truly a difficult project. So we'll see what it does. Start update. All right. So it's off to the races. We'll check back in on this in a little bit. So updating libraries um is toil. It's not something anyone really enjoys, I think. And I'll wager many of you feel the same way about writing integration
tests. Um, certainly I feel that way. Unit tests, great; integration tests, not so much. So the next feature I want to show you is meant to help with that. So historically you test these things using Espresso. Um, but we're now launching journeys, where you get to describe a user journey, which is sort of a critical task the user is trying to do, in natural language. So here we have a journey. This is a journey file, and this is actually just XML behind the scenes. So you can do code reviews on this. You can check it into version control. You can even comment out parts, like I've done here temporarily. But we also give you a, uh, nice graphical editor. So I can, you know, drag and drop to reorder these things. I can add new things. Uh, and I can delete items if I want as well. So let's try to run, uh, a journey. I'm not going to run this one since I just trashed it.
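The journey file format wasn't shown in detail on screen; a hypothetical sketch of what such an XML file might look like (element names are illustrative, not the shipped schema):

    <!-- add_custom_filter.journey (hypothetical schema) -->
    <journey name="Add custom filter">
        <description>A user creates a custom podcast filter.</description>
        <step>Tap on the filters panel</step>
        <step>Verify there is no existing custom list called "starred"</step>
        <step>Tap the plus icon in the top right corner</step>
        <step>Enter "starred" as the filter name and confirm</step>
        <!-- Steps can be commented out temporarily, like any XML -->
        <!-- <step>Verify "starred" appears as the last filter in the list</step> -->
    </journey>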
So, here is a journey for, uh, this is a podcast app, and this journey is testing that we can add custom filters. So, what it's doing now is building the project. It's got lots of modules, so it takes a few seconds to do that. All right. And so now it's starting to run the journey. So, we should see it in a second. So, here's the app. So you can see it's now supposed to tap on the filters panel. Then it's going to make sure that there's no existing custom list called 'starred'. Then it's going to tap on the plus icon in the top right corner. And so you can see that we're showing you progress along the way here on the left: which step is executing. And you might notice that it's highlighting two. That's a temporary situation, because we don't know in Studio exactly when one task is done and when the next one begins. So we're just highlighting that the truth is somewhere in the middle. And in a few weeks we'll actually have that right. ...And it failed. That's exciting. Um, so now I'll show the next feature to help with that. So we have these results shown live while the test is running. So I can look at the reasoning here, and I can see the before picture and the during and the after, and why it failed. I'm going to run it one more time, because this usually succeeds. And the step that failed is actually a pretty complex one. So the other ones are just like: click on this, check that. This one is taking multiple actions: it's entering text into a text field and so on. So we'll give it one more chance. I can also open up the test results window here, uh, where I can also see these actions as they're executing. All right, second time's the charm, right? Okay. So now it's entering text. It's naming the filter. It's going to name it 'starred'. Hopefully. Yes. All right. Looks like it's good. And so when it's done creating the filter, it goes and makes sure that it's now showing up, which we can see as the last filter on the list. And so it passed. So what you've seen now is how to edit journeys and how to run them. The last thing I want to show you is, uh, the new test recorder we're working on, to make it easier to create them as well. So, let me first of all just run the app again and open up our recorder. So, the
recorder lets me append to any existing journey, or a blank journey if I want. So, you can use it to augment an existing journey you're working on. So, here for example, I've already started by saying 'tap the profile icon'. So, I'll do that. And now I can press record. So now it's going to capture inputs. So what I'm going to do here is I'm going to scroll to the bottom, and then I'm going to press 'Refresh now'. So you can see in the update bar here that it's now processing this input. Each gesture I took takes about 20 seconds, because we're uploading a bunch of frames to Gemini and doing some processing. You don't have to wait 20 seconds for each step. Uh, we're going to queue them up and process them in order. Uh, unfortunately, it didn't do exactly what I wanted. So, I wanted to swipe up until 'Refresh now' is visible, and it said 'help and feedback' instead. So, I'm going to help it out and say 'until Refresh now is visible'. And then I pressed the 'Refresh now' button. That should show up momentarily. There it is. And I'll add a manual verification. So, I'll say verify that 'last refresh', uh, says less than 5 seconds, because the AI can do math, right? And I can stop capturing. So, now I'm going to run this journey. And while that's running, I want to show you something else. We're working with other teams at Google to bring journeys to more surfaces. So, we're working with, for example, the Firebase, uh, App Distribution team. Uh, wrong tab. With the Firebase App Distribution team, uh, on their app testing agent. So they have an app testing agent that we're integrating journeys with. So I can see my journeys now here in App Distribution. I've already run this journey. So now I can go and inspect the results. And this is a lot like what you saw in Studio. Here I can look at the reasoning, the images before and after, and so on. All right. So, let's see if the journey is done running. Okay, it's almost done running. It looks like it's already scrolled. Yeah, I'm confident this is going to pass. All right. So, the next thing I want to show you is our new agent. So, for that, let me switch to... I'll start with a very simple
example. So, we have the new agent tab here. Uh, let me make the window a little bigger. So, here I'm going to ask it: um, this method doesn't return the right value. Can you fix it? So, the bug in this method isn't actually in that file. The bug is somewhere else. And the AI should understand that. And so, you can see it's reasoning. It's saying it's finding the declaration of computeSquare. So it is using an IDE facility, which is go-to-declaration, which uses the IDE compiler integration and index. It looked up the code, uh, and it looks like it overzealously deleted some code. Great. But it also found the bug. Um, and it fixed it, because I had turned on auto-verify. I'm going to turn that back off, so we can see the approval process as well. Uh, the next thing I want to show you is this linked list class that the AI wrote for me. Um, so I'm going to ask it: write a unit test for this class. So what it's going to do now is figure out where to put the test. And that is not as simple as it sounds in real-world projects, because in Gradle you could be using a custom variant. Maybe you've set up a different, uh, source set name. And so you can see we are teaching this agent about Gradle. So it is not making assumptions. It's not just saying it's probably the default debug source set's test directory, right? You can see here these tool calls. So it's looking up the right places. By the way, look at the animation. Sorry, I like this stuff. Uh, so it's figuring out where exactly to put the test. It's taking longer than usual. I guess the servers are loaded. Okay. And
it's now proposing to create this, uh, test. And I can accept that change. I'm just going to auto-approve from now on. You get this choice: you can have it just do it, or you can approve each change. So it's now made this test for me.
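The generated test wasn't shown line by line; a sketch of the kind of JUnit test the agent might produce for a simple linked list (the LinkedList API here is assumed, not taken from the demo):

    import org.junit.Assert.assertEquals
    import org.junit.Test

    class LinkedListTest {

        @Test
        fun appendIncreasesSize() {
            val list = LinkedList<Int>()   // hypothetical class the AI wrote
            list.append(1)
            list.append(2)
            assertEquals(2, list.size())
        }

        @Test
        fun getReturnsValuesInInsertionOrder() {
            val list = LinkedList<String>()
            list.append("a")
            list.append("b")
            assertEquals("a", list.get(0))
            assertEquals("b", list.get(1))
        }
    }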
And let's go in and break the code. So I'm going to delete a line in the code. So now I've introduced a bug. I'll say: can you run the tests, check if the tests pass, and if not, fix the code? So again, it should be figuring out, you know, which Gradle target to run corresponding to this test file. Uh, and then it should run the tests, and hopefully it'll be able to fix the bug as well. We will see. All right. So, here it goes. Running the tests. Hopefully, they will fail. Yeah. So, the tests were good. They were catching this behavior. Um, let me scroll to the bottom. So, you can see it is now reasoning about the code. It figured out what was wrong. It auto-fixed it, and it ran the tests again. They now passed. So this is, yeah, pretty useful, right? Looks like I have time for one more. Uh, I guess I could keep going on the agent demos all day long. Uh, where did I put this? Here. So here's an XML file, and this one has a lint warning. Uh, but no quick fix. But we can actually ask the AI if it can fix it. Can you fix the warnings in this file? So again, the agent should have access to look up the warnings that are
coming in this file. You can see it's analyzed the current file, and it has now basically implemented this using a single text view with a compound drawable, which is what lint was suggesting it should do verbally. All right, that's the agent.
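The lint check in question appears to be UseCompoundDrawables; a before-and-after sketch of the kind of transformation described (attributes abbreviated):

    <!-- Before: a LinearLayout wrapping a single ImageView and TextView,
         which the UseCompoundDrawables lint check flags as unnecessary -->
    <LinearLayout
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:orientation="horizontal">
        <ImageView
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:src="@drawable/ic_star" />
        <TextView
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:text="@string/starred" />
    </LinearLayout>

    <!-- After: one TextView with a compound drawable -->
    <TextView
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="@string/starred"
        android:drawableStart="@drawable/ic_star"
        android:drawablePadding="8dp" />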
Um, I want to hit a couple more quick AI features. So we have integration with, uh, Crashlytics. So, um, here we now have a panel which explains what this crash is doing. And the reason this is great is because in Studio we have access to your source code. So this isn't trying to hallucinate and guess, based on the crash thread dump and the method names, what might be the problem. This is actually looking at the code, and in some cases we're able to then suggest a fix. Um, so here we have a pretty reasonable fix for this crash. Um, here's a second example. So this crash, if I jump to this crash, it can't go to the right line, because this, uh, source file has changed since the crash happened. But in Crashlytics and Play Vitals, we actually have the commit ID corresponding to each crash. So we can account for that line number drift when we are feeding the AI the source file. So you can see the explanation here is actually correct, even though the line numbers didn't match. Uh, one other quick AI feature I want to show you is, um, Compose integration. So here is a Compose, uh, composable, and I don't have any previews for it, but I can actually press this auto-generate Compose previews button. This will now analyze the file and create a preview for me with some reasonable sample data. So when I accept the changes, it should rebuild, uh, momentarily. Rebuilding. Oh, it looks like it did more than I expected. Here we go. All right. Yeah. Now, adding a preview is something you can do with the agent, obviously, but sometimes it's nice to just have a button. All right, so, um, I think that's it for AI features. Uh, but we have lots of other things to show as well. And so the next thing I want to show you is Firebase device streaming. So Firebase device streaming
21:30 - 22:00 is a feature which lets me connect to remote devices uh in Google's uh device labs. And so the way I do this is I can go and select the remote device, but we are now launching uh device partner labs. So notice how there's a new lab column here. And I can uncheck Google. And you can see we now have devices from Samsung, OPPO, OnePlus, uh
22:00 - 22:30 Vivo, and Xiaomi. And so when you connect to these devices, you're actually connecting to devices in their device labs, not Google's device labs. And so doing this is easy. You just pick a device. That'll add them to this list. And I've already ahead of time connected to this uh Samsung uh Galaxy S25. We can switch to that now. And here it is. And what I like to do whenever I connect to a remote device is to pull up Google Maps and see where in the world this
device is located. And so let's see. And we can see it's in Korea; it's in Imudong. And you can also notice that the, um, interactivity is really good. Even though this is halfway across the world, uh, it's pretty fast, and I can run my app there as well. So, um, Firebase device streaming and the new device partner labs are a great way to, uh, test and interactively debug your apps in a variety of, uh, scenarios. Last but not least, let's return the device so someone else can use it. Right. We've been working really hard on device connectivity for several years. Um, so last year we were working on the USB stack, and so we had speed improvements, reliability improvements, and we also shipped USB cable speed detection. So hopefully you all have USB 3 cables now, right? Yeah. Um, but if you're an engineer who doesn't like to connect your phone with a cable, I have really good news. We've been working on ADB Wi-Fi again. So, I know there were many problems. Uh, but fixing it was really hard. We've been working on the entire stack, swapping out whole pieces. We did work on Studio, lots of work on ADB, and even on the Android operating system. So, for example, a common problem in the past with the OS was that if your phone went into standby mode, it was no longer listening for connections from Studio. So, they wouldn't reconnect. But now that's changed. So, with the new version of the OS, once you're paired, it'll always reconnect. We also reworked the way the pairing flow works, because a very common problem when you tried to pair and it wasn't working was that the device and Studio were not actually on the same network. So, uh, what I've done here is, I'd like to tempt the demo gods with a Wi-Fi demo. So, I brought my own Wi-Fi router.
So, my laptop is now on my own little network, 'studio demo', and so is my test device. And the phone and laptop are not paired yet. But when I turn on, uh, wireless debugging on my phone, within about a second, we should see the device showing up in the device manager. Yeah, here it is. You can see, right? So, they're not paired yet.
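For reference, the same pairing can also be done from the command line with adb's Wi-Fi commands (the IP, ports, and pairing code come from the phone's Wireless debugging screen; the values here are examples):

    # Pair once, using the code shown under Wireless debugging > Pair device
    adb pair 192.168.1.42:37099
    # Enter pairing code: 123456

    # Then connect on the debugging port listed on the same screen
    adb connect 192.168.1.42:40127

    # The phone should now appear in the device list
    adb devices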
And another nice feature is that this is actually listing the device name I've set for myself on my phone under, you know, About phone. Uh, that requires an operating system update, which we're working on. So, on older phones it'll just use the model name; for this one, it would say Google Pixel Fold. And so now I can initiate pairing by pressing this button. Let's do it just to make sure that it actually works. It's confused by the light's reflection. Here we
go. Okay. And they're paired. Cool. All right. So now, uh, we can see the phone here connected. All right. Um, thank you. All right. So now I'm going to show you, uh, Compose. So let me switch to this layout. So when you are building adaptive apps, or any app really, it's very important to make sure that your Compose layouts are resizing correctly, uh, under a variety of, um, available sizes. And so we have a new feature for that. So in the Compose preview window you can switch to focus mode. And so that lets you focus on, uh, one particular preview, and you can quickly switch between them. So notice how it's pretty subtle in the dark theme, but in the bottom right corner there's a resize knob. And so when I press this, I can now
interactively resize the composable and see how it behaves. Yeah. And you can see we have these markers on the surface too. So I can see roughly, you know, what a foldable should be, and so on. And then I can go ahead and save this preview if I want. That'll just write the size back onto the preview annotation, or I can restore.
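Presumably, saving a resized preview just writes the chosen dimensions back as widthDp/heightDp parameters on the @Preview annotation, roughly like this (the composable and the values are illustrative):

    import androidx.compose.runtime.Composable
    import androidx.compose.ui.tooling.preview.Preview

    // After interactively resizing to roughly an unfolded foldable and
    // pressing save, the size lands on the annotation as widthDp/heightDp:
    @Preview(widthDp = 841, heightDp = 701)
    @Composable
    fun PodcastListPreview() {
        PodcastList(samplePodcasts)   // hypothetical composable and sample data
    }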
And speaking of restore, uh, let's talk about backup and restore. So let me run my app again. So when, uh, users get a new phone, they can transfer their settings to the new phone. Um, but it doesn't always work if app developers aren't doing it quite right, or worse, some app developers opt out of it. And this is obviously a bad experience for users. So, we're working on features to make it easier to implement backup correctly. So, here you see my podcasting app again, and we have these
two buttons: take backup and restore backup now. And so if I click on take backup, I now have the option to choose the kind of backup type I want and pick a file name. And when I do that, it's a lot like taking an emulator snapshot. It's going to grab a local file. And this is a file you would then put in the project, typically. You can check it in. You can see I've already taken several backups in the past here. Um, and so I can now restore these backups. So notice again how I have no podcast subscriptions in my app state right now. But I'm going to restore a backup I took when I did. So I just right-click and say restore app data. This takes about 10 seconds, because it's not a very big backup. I had one where I had also downloaded lots and lots of episodes. That was a 500-megabyte, uh, backup. Took longer. Um, all right. So it's now pushed the settings for my backup. And if I open this, it's as if I had transferred my phone, right? And you can see I now have all my subscriptions, uh, along with some custom filters and even some listening statistics. So with this feature I can now go and make sure that my app is correctly persisting and restoring data. And if not, I can debug it, and the way I would do that is using the run configuration. Under run configurations, you can click to restore app data and pick a backup file. So this is not just good for users; this is good for you. Imagine how much easier it is to clone your app settings across all your AVDs or testing devices than logging in each time. Right? So, um, hopefully that'll make it a lot easier. And this is for manual verification. We're also working on making it easier to actually do this automatically through tests. And of course we're going to support this for journeys as well. But here I'm showing you, uh, the instrumentation test we're working on. So this is a special restore test, right? So basically I can point to one of these backup files. It'll restore
from that backup before it runs the test. So this is a good way to make sure that you're handling state from, let's say, two versions of your app ago. And then we have a more complex backup and restore test. With this one I can iterate through all the backup types, and I get the right hooks for, you know, before it takes a backup and before it restores the backup. So, um, the test support here is not yet available. We're working on it. Uh, but once it's available, it's going to be great. And please, everyone, implement backup for your apps.
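Implementing backup starts with the manifest and, on Android 12 and higher, a data extraction rules file; a minimal sketch (file names and the included paths are illustrative):

    <!-- AndroidManifest.xml: opt in to backup (other attributes elided) -->
    <application
        android:allowBackup="true"
        android:fullBackupContent="@xml/backup_rules"
        android:dataExtractionRules="@xml/data_extraction_rules">
        ...
    </application>

    <!-- res/xml/data_extraction_rules.xml (Android 12+) -->
    <data-extraction-rules>
        <cloud-backup>
            <include domain="sharedpref" path="." />
            <exclude domain="sharedpref" path="device_token.xml" />
        </cloud-backup>
        <device-transfer>
            <include domain="database" path="podcasts.db" />
        </device-transfer>
    </data-extraction-rules>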
All right. Um, now I want to show you XR. So let me boot the, uh, XR emulator. Unfortunately, we don't support snapshots for XR yet, but hopefully soon. So, while we wait for that, I'm going to show you some lint checks. So, um, if we open up this file, uh, we can
see we get a warning on this MediaStore access. And so, it's telling us that there's more information. So, I'm going to press Command-F1. And now I get a really long and comprehensive explanation of this problem. So, it's telling me that this API may have use restrictions or implications in Play. And I don't have to hunt around in my project for these things. We have a way to run these checks, uh, and to audit the whole app. So, I'll say inspect, uh, Play Policy Insights, analyze. And now I can walk through and read the pretty long explanations about each API to understand how this might affect, uh, publishing. All right. Uh, the emulator is still starting. Let me also run the app, and I'll show one other lint check in the meantime. So, KTX libraries: uh, we have many extension libraries in AndroidX that make the APIs much nicer. So, for
example, instead of these static utility methods, lint will now suggest you switch to KTX. So you get nice, uh, extension methods instead. Um, and you don't have to supply defaults in some cases. Uh, we can combine both comparisons and function calls into a single nice utility method. And look at this Canvas call here. We're making repeated calls on the same object, and we have to remember to call restore at the end. And KTX has this nice withTranslation method instead. This code isn't just cleaner; it's exactly equivalent, because if you look at KTX, this is an inline function. So you're calling exactly the same bytecode, but you're not going to forget to call restore, because it's making sure you always do that.
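The KTX helper in question is androidx.core.graphics.withTranslation; a small sketch of the before and after (the drawing helper is hypothetical):

    import android.graphics.Canvas
    import android.graphics.Paint
    import androidx.core.graphics.withTranslation

    val paint = Paint()

    // Hypothetical drawing code used by both variants
    fun drawBadgeContent(canvas: Canvas) {
        canvas.drawCircle(0f, 0f, 12f, paint)
    }

    // Without KTX: manual save/translate/restore bookkeeping
    fun drawBadge(canvas: Canvas, x: Float, y: Float) {
        canvas.save()
        canvas.translate(x, y)
        drawBadgeContent(canvas)
        canvas.restore()   // easy to forget
    }

    // With KTX: withTranslation is an inline function, so this compiles to
    // the same bytecode but can never skip the restore call
    fun drawBadgeKtx(canvas: Canvas, x: Float, y: Float) {
        canvas.withTranslation(x, y) {
            drawBadgeContent(this)
        }
    }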
All right, XR is up. So let's talk about XR. So, you can run XR on the emulator now, and it also runs inside of Studio.
And so, I can, for example, go in and click on this expand button to put the app into multi-window mode, um, and interact with it, right? So, my mouse clicks are now going straight to the app, but I want to zoom out. How do I do that? So, one important thing to be aware of are these input modes, right? So, we have this little toolbar in the bottom right, and there are keyboard shortcuts for these that are useful to learn, because you'll be switching back and forth between toggling the view and the app all the time. So, with view direction, you know, you can look around, and I can just switch between zoom, uh, direction, reset, and then Command-I to go back to typing into my app again. And we also have the layout inspector working for XR. Uh, and it works as you would expect. Well, hopefully it works as you would expect. Well, I realized I forgot to actually check back in on our cloud upgrade. So, let's see how it did. Uh, uh-oh. It looks like it gave up. Well, I told you this was a hard problem. Uh, so one of the things I mentioned earlier is that we had switched from a custom-coded agent to a generic agent. And in our custom-coded agent we were really feeding it all the release notes. We haven't hooked up the release notes lookup in the generic agent yet. So this is an even harder problem. It has to, um, guess and try to fix the project. We'll see if it can do it on its own. Um, all right. It's trying to build again. It's back at it. And the layout inspector isn't connecting. That's unfortunate. Um, but I think we can probably move back to the rest of the presentation. Um, most of what you've seen today is already available. There are a few things that are not: the generic agent and the update bot. But they are very close to done. We really were hoping to ship them this week and decided at the last minute to wait a little bit longer, but they should be available soon. So with that, uh, Jamal, you want to take us? Thank you. Okay. Um, amazing demo. Thanks. Thanks, Tor. Um, all right. So, let's talk about some more additional updates
before we wrap up. All right. So, alongside all the cool Gemini features we showed you today, we're excited: we just launched Gemini in Android Studio for businesses. After you or your IT administrator purchases a Gemini Code Assist license, you now have the option to choose a business plan, sign in, and tap into an enterprise-ready version of Gemini in Android Studio. You get enterprise management controls and code security that meets a whole host of security and enterprise standards. And of course, you have a highly tailored AI-powered development experience built exclusively for Android development. Now, as you recall, Tor showed you a preview of Android Studio Cloud, accessed through Firebase Studio. Again, this is a convenient way to open up your projects with only an internet connection, using a remotely provisioned Linux virtual machine specifically designed for Android app development. So, for those of you who can't keep source code on your laptop, or perhaps you have an underpowered machine, try out Android Studio
Cloud. And also, as mentioned, to aid in keeping your settings in sync between local and remote versions of Android Studio, we now have IDE settings sync. So again, you can log in with either your Google or JetBrains account to sync your settings across instances. Now, on the build front, we've been talking about using R8 for some time as a way to optimize your app, using the minify and resource shrinking flags to remove excess methods and resources.
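The flags in question are the standard shrinking switches on the release build type; a minimal build.gradle.kts sketch:

    // build.gradle.kts (module): enable R8 code and resource shrinking
    android {
        buildTypes {
            release {
                isMinifyEnabled = true       // shrink and optimize code with R8
                isShrinkResources = true     // drop unused resources
                proguardFiles(
                    getDefaultProguardFile("proguard-android-optimize.txt"),
                    "proguard-rules.pro"     // your keep rules
                )
            }
        }
    }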
However, if you use a dependency that does not have the correct keep rules, this feature can be challenging to use. So, with AGP 9, we're introducing gradual R8, which can automatically shrink libraries that include shrinking rules, while leaving other parts unaffected. Pretty nice. [Applause] So in addition to R8, we have a few more build updates. For instance, phased sync: those of you who have a large monolithic project will enjoy this change, because we now segment the sync process into several smaller phases, which allows the IDE to load a functional project much faster and lets the build perform the longer, time-consuming processes in subsequent phases. Next, we're working on fused libraries: for those of you who publish AARs, you can now access a new file type that merges several
AAR libraries into one single library. So, the hope is that's more efficient. And lastly, for build, we addressed a longstanding feature request: at times it's hard to align the Gradle and build JDK versions. So now, with this update, you can specify the exact JDK version to run your Gradle daemon on, which is nice. So thank you to those who upvoted this wish.
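If this is built on Gradle's daemon JVM criteria mechanism (an assumption; the talk doesn't name it), pinning the daemon JDK looks roughly like this:

    # Writes gradle/gradle-daemon-jvm.properties (Gradle 8.8+ incubating feature)
    ./gradlew updateDaemonJvm --jvm-version=17

    # gradle/gradle-daemon-jvm.properties
    toolchainVersion=17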
And lastly, for those developing with C++: there's an upcoming Android platform and Google Play Store policy that requires your app to support a 16-kilobyte page size instead of the classic 4-kilobyte page size. To aid with this, Android Studio will now warn you if you build without 16-kilobyte page support. Additionally, we've updated the APK Analyzer to tell you exactly which .so files need to be updated, so you're ready to go for the new Android update.
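For native code built with older NDKs, the documented approach to 16 KB support is aligning ELF segments via a linker flag (recent NDKs do this by default; the target name here is illustrative):

    # CMakeLists.txt: align shared libraries to 16 KB page boundaries
    target_link_options(mynativelib PRIVATE "-Wl,-z,max-page-size=16384")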
Okay, today we talked about a whole host of core features, from backup and restore testing and Android XR support to a whole host of new AI features enabled by Gemini in Android Studio. Now note, not everything we showed you today will land in a final stable release, but we welcome you to download the preview version of Android Studio to try out all the features we showed you today and give us your feedback. [Applause] Okay, so again, uh, on behalf of myself,
Tor, and the Android tools team, please download it, and thank you for listening and watching with us. Thanks. Excellent. [Applause]