The Future of AI at Google DeepMind

What's next for AI at DeepMind, Google's artificial intelligence lab | 60 Minutes



    Summary

    The iconic AI lab DeepMind is making groundbreaking strides toward artificial general intelligence (AGI). Co-founder and CEO Demis Hassabis discusses the exponential curve of progress in AI, highlighting recent innovations like Project Astra, which interprets the world through its own 'eyes.' The lab's flagship model, Gemini, aims to act with human-like versatility. Despite the ethical concerns raised by increasingly autonomous AI, Hassabis remains optimistic about AI's potential to eliminate scarcity and revolutionize healthcare by accelerating drug discovery. Still, he stresses the importance of maintaining control and keeping safety measures in place as AI systems become more advanced. With the global race for AI dominance heating up, the future promises radical change, and perhaps a new age of technological integration into everyday life.

      Highlights

      • Project Astra is an app that allows AI to interpret the world with its own 'eyes', blurring the lines between human and machine perception. 👁️‍🗨️
      • The Gemini model from DeepMind aims to perform versatile tasks like booking tickets or shopping online, a step towards human-like AI capabilities. 🛒
      • In just one year, DeepMind's AI has mapped 200 million 3D protein structures, slashing the time needed for scientific and medical breakthroughs. ⏱️
      • Demis Hassabis anticipates AI could end disease, offering radical abundance by eliminating resource scarcity. 🌐
      • Global implications of AI necessitate cooperation across nations to ensure AI development remains safe and aligned with ethical values. 🤝

      Key Takeaways

      • DeepMind is accelerating towards artificial general intelligence (AGI) with advancements like Project Astra, offering AI vision. 🤖
      • The development of AI models such as Gemini could lead these systems to mimic human versatility and application in daily life. 🌟
      • Demis Hassabis envisions AI curing diseases within a decade, underlining the transformative potential of AI in healthcare. 💊
      • The pursuit of AI presents risks, such as misuse by bad actors and the challenge of keeping AI aligned with human values. ⚠️
      • Safety and ethical considerations are crucial, and global collaboration is necessary to harness AI’s benefits responsibly. 🌍

      Overview

      DeepMind, led by Nobel laureate Demis Hassabis, is at the forefront of AI innovation, particularly focusing on achieving artificial general intelligence (AGI). This involves developing AI systems with human-like versatility and understanding, poised to revolutionize daily life and industries by acting within our environment with unprecedented autonomy.

        Project Astra and the Gemini model reflect DeepMind's current endeavors. Project Astra showcases AI's capability to 'see' and 'interpret' the world, while Gemini represents a step toward AGI by performing complex tasks akin to human activities. These developments promise transformative impacts, especially in healthcare, where they could sharply expedite drug development.

          However, Hassabis highlights the duality of AI's promise, bringing attention to potential risks like misuse by malicious entities and the importance of embedding ethical frameworks within AI. He calls for international collaboration to ensure AI advancements benefit humanity globally while remaining safely within ethical boundaries.

            Chapters

            • 00:00 - 01:00: Introduction and Background of Demis Hassabis Demis Hassabis celebrated winning the Nobel Prize by playing poker with a chess world champion. Known for his love of games, Hassabis has made significant strides in artificial intelligence. He is the co-founder and CEO of DeepMind, part of Google. The chapter also references a meeting two years ago, around the rise of chatbots, highlighting Hassabis's role in the AI landscape.
            • 01:00 - 03:00: Artificial General Intelligence and Exponential Progress The chapter discusses the pursuit of artificial general intelligence (AGI): a silicon-based intellect as adaptable as a human but with superhuman speed and knowledge. It also touches on Hassabis's knighthood and his growing contributions to the field, foreshadowing further developments in the story.
            • 03:00 - 05:30: Project Astra - Advanced Chatbots The chapter explores Hassabis's lifelong passion for profound questions about life, consciousness, and reality. It highlights his admiration for the great scientists and philosophers and his aspiration to advance human knowledge, with AI as what he considers the ultimate tool for doing so.
            • 05:30 - 07:30: Capabilities and Challenges of AI Systems In the chapter titled 'Capabilities and Challenges of AI Systems', the discussion revolves around the rapid advancement of AI technology. Participants recall meeting two years ago and express their astonishment at the speed of AI's development, stating it is progressing much faster than anticipated. They describe this growth as following an exponential curve, fueled by increased attention, resources, and talent in the field due to its recent successes. This exponential progress continues to pose both exciting opportunities and challenges in AI's evolution.
            • 07:30 - 11:00: DeepMind's AI Model Gemini In this chapter, the development and capabilities of DeepMind's AI model, Gemini, are explored. The model powers an artificial companion that can see, hear, and converse about various topics. Unlike earlier chatbots, which learned only from the internet, it also takes in the world through an app called Astra. The discussion also touches on the anthropomorphism of AI, such as referring to AI with gender pronouns, highlighting uncertainties about AI identities.
            • 11:00 - 13:00: Robotics and AI's Future Capabilities This chapter discusses the role of Bibo Xu, product manager for Project Astra, an app in a new generation of chatbots that can interpret visual art. The text describes a challenge posed to Astra, in which it is shown paintings it has never seen before, such as 'July Hay' by Thomas Hart Benton, 'The Virgin of Charity' by El Greco, and 'Automat' by Edward Hopper.
            • 13:00 - 18:00: Achievements and Ambitions of Demis Hassabis This chapter traces Hassabis's path from chess prodigy (at 12, the world's number-two player for his age) through computer chess and video games to thinking machines. A computer scientist with a PhD in neuroscience, he reasoned he had to understand the human brain first. The chapter covers his Nobel Prize-winning work with John Jumper on an AI model that deciphered protein structures, and his ambitions for AI in science and medicine.
            • 18:00 - 21:00: AI Ethics, Risks, and Safety Concerns The chapter revisits Astra's improvised story about Eleanor and her friend pursuing shared dreams, including an exchange in which Astra apologizes after its tone seems impatient, a reminder that these systems act in contexts that may never have been tested.
            • 21:00 - 23:00: The Future of AI and Philosophy In this chapter, the focus is on the interactions between humans and AI systems, particularly highlighting how AI systems, when released on the internet to learn autonomously, can return with unforeseen skills. This points to the challenges and the surprising capacities of AI when they are exposed to new, untested contexts.

            What's next for AI at DeepMind, Google's artificial intelligence lab | 60 Minutes Transcription

            • 00:00 - 00:30 When Demis Hassabis won the Nobel Prize last year, he celebrated by playing poker with a world champion of chess. Hassabis loves a game, which is how he became a pioneer of artificial intelligence. The 48-year-old British scientist is co-founder and CEO of Google's AI powerhouse called DeepMind. We met two years ago when chatbots announced a new age. Now Hassabis and
            • 00:30 - 01:00 others are chasing what's called artificial general intelligence: a silicon intellect as versatile as a human but with superhuman speed and knowledge. After his Nobel and a knighthood from King Charles, we hurried back to London to see what's next from a genius who may hold the cards of our future. The story will continue in a moment.
            • 01:00 - 01:30 What's always guided me, and the passion I've always had, is understanding the world around us. I've always been, since I was a kid, fascinated by the biggest questions, you know, the meaning of life, the nature of consciousness, the nature of reality itself. I've loved reading about all the great scientists who've worked on these problems, and the philosophers, and I wanted to see if we could advance human knowledge. And for me, my expression of doing that was to build what I think is the ultimate tool for advancing
            • 01:30 - 02:00 human knowledge, which is AI. We sat down in this room two years ago, and I wonder if AI is moving faster today than you imagined. It's moving incredibly fast. I think we are on some kind of exponential curve of improvement. Of course, the success of the field in the last few years has attracted even more attention, more resources, more talent. So that's adding to this exponential progress. Exponential curve, in other
            • 02:00 - 02:30 words, straight up. Yep, straight up, and increasing speed of progress. We saw the progress in an artificial companion that can see and hear and chat about anything. "Hello, Scott. It's nice to see you again." Early chatbots learned only the internet. An app called Astra also takes in the world. Do we call her she? It's a good question. I'm not sure we all know the answer yet. Bibo
            • 02:30 - 03:00 Xu is product manager for Project Astra, an app in a new generation of chatbots that interpret the world with their own eyes. We challenged Astra with virtual paintings we chose and showed to Astra for the first time. This is July Hay by Thomas Hart Benton from 1942. How about this one? That is The Virgin of Charity by El Greco. That painting is Automat by Edward Hopper. The person in
            • 03:00 - 03:30 the painting, what emotion is she exhibiting? The subject in the painting appears pensive and contemplative, her expression suggesting a sense of solitude. Can you create a story around this picture? It's a chilly evening in the city, a Tuesday perhaps. The woman perhaps named Eleanor sits alone in the diner. She is feeling melancholy due to the uncertainty of her future and the weight of unfulfilled dreams. The scene freezes in time. Only the flow of ideas
            • 03:30 - 04:00 moving onward. That line, only the flow of ideas moving onward. I wish I'd written that. And did we hear a limit to Astra's patience? What was that fictional conversation about? Ah, in the story I imagined, Eleanor and her friend talked about their shared dreams and the challenges of pursuing them. Something about the way you said ah made me wonder if you were getting bored. I apologize if my tone came across that way. My aim
            • 04:00 - 04:30 is always to engage thoughtfully. And she said, "Well, I'm sorry if you don't like my tone." Wow. What's happening there? Well, that's interesting. That's, again, a challenge with these systems: they act in the moment with the context that's around them, and that may have never been tested before. He's often surprised, because AI programs are sent out on the internet to learn for themselves. They can return later with unexpected skills. So we have
            • 04:30 - 05:00 theories about what kinds of capabilities these systems will have. That's obviously what we try to build into the architectures. But at the end of the day, how it learns, what it picks up from the data, is part of the training of these systems. We don't program that in. It learns like a human being would learn. So new capabilities or properties can emerge from that training situation. You understand how that would worry people. Of course. It's the duality of these types of systems that
            • 05:00 - 05:30 they're able to do incredible things, go beyond the things that we're able to design ourselves or understand ourselves. But of course, the challenge is making sure that the knowledge databases they create, we understand what's in them. Now, DeepMind is training its AI model called Gemini to not just reveal the world, but to act in it, like booking tickets and shopping online. It's a step toward AGI,
            • 05:30 - 06:00 artificial general intelligence with the versatility of a human mind. On track for AGI in the next 5 to 10 years, I think. And in 2030, you will have what? Well, we'll have a system that really understands everything around you in very nuanced and deep ways, and is kind of embedded in your everyday life. Embedded like Astra in eyeglasses. What can you
            • 06:00 - 06:30 tell me about this building I'm looking at? This is the Coal Drops Yard, a shopping and dining district. She sees what I see. There's a speaker in the earpiece only I can hear. What was it originally before it became shops? The Coal Drops Yard was originally a set of Victorian coal warehouses used to receive and distribute coal across London. Was coal ever a problem for the environment in London? Yes, coal was a significant source of air pollution in London, particularly during the industrial revolution. It occurred to us
            • 06:30 - 07:00 that the only thing we contributed to this relationship was legs, which will also soon be engineered. I also think another big area will be robotics. I think it will have a breakthrough moment in the next couple of years, where we'll have demonstrations of maybe humanoid robots or other types of robots that can start really doing useful things. For example: Hey, robot. Researchers Alex Lee and Julia Vazani showed us a robot
            • 07:00 - 07:30 that understands what it sees. That's a tricky one. And reasons its way through vague instructions. Put the blocks whose color is the combination of yellow and blue into the matching-color bowl. The combination of yellow and blue is green, and it figured that out. It's reasoning. Yep, definitely. Yes. The toys of Demis Hassabis's childhood
            • 07:30 - 08:00 weren't blocks, but chess pieces. At 12, he was the number-two champion in the world for his age. This passion led to computer chess, video games, and finally thinking machines. He was born to a Greek Cypriot father and a Singaporean mother. Cambridge, MIT, Harvard: he's a computer scientist with a PhD in neuroscience, because he reasoned he had to understand the human brain first. Are
            • 08:00 - 08:30 you working on a system today that would be self-aware? I don't think any of today's systems feel self-aware or, you know, conscious in any way to me. Obviously, everyone needs to make their own decisions by interacting with these chatbots. I think theoretically it's possible. But is self-awareness a goal of yours? Not explicitly, but it may happen implicitly. These systems might acquire some feeling of self-awareness. That is possible. I think it's important for these systems
            • 08:30 - 09:00 to understand you, self, and other, and that's probably the beginning of something like self-awareness. But he says if a machine becomes self-aware, we may not recognize it. I think there's two reasons we regard each other as conscious. One is that you're exhibiting the behavior of a conscious being, very similar to my behavior. But the second thing is you're running on the same substrate. We're made of the same carbon matter, with our squishy brains. Now, obviously, with machines,
            • 09:00 - 09:30 they're running on silicon. So even if they exhibit the same behaviors, and even if they say the same things, it doesn't necessarily mean that this sensation of consciousness that we have is the same thing they will have. Has an AI engine ever asked a question that was unanticipated? Not so far that I've experienced. And I think that's getting at the idea of what's still missing from these systems. They still can't really yet go beyond asking a new novel
            • 09:30 - 10:00 question or a new novel conjecture, or coming up with a new hypothesis that has not been thought of before. They don't have curiosity. No, they don't have curiosity, and they're probably lacking a little bit in what we would call imagination and intuition. But they will have greater imagination, he says, and soon. I think actually in the next maybe 5 to 10 years, we'll have systems that are capable of not only solving an important problem or conjecture in science, but coming up with
            • 10:00 - 10:30 it in the first place. Solving an important problem won Hassabis a Nobel Prize last year. He and colleague John Jumper created an AI model that deciphered the structure of proteins. Proteins are the basic building blocks of life. So everything in biology, everything in your body, depends on proteins. You know, your neurons firing, your muscle fibers twitching, it's all mediated by proteins. But 3D protein structures like this are so complex,
            • 10:30 - 11:00 less than 1% were known. Mapping each one used to take years. DeepMind's AI model did 200 million in one year. Now, Hassabis has AI blazing through solutions to drug development. So, on average, it takes, you know, 10 years and billions of dollars to design just one drug. We could maybe reduce that down from years to maybe months, or maybe even weeks, which sounds incredible today, but
            • 11:00 - 11:30 that's also what people used to think about protein structures. It would revolutionize human health. And I think one day, maybe we can cure all disease with the help of AI. The end of disease. I think that's within reach, maybe within the next decade or so. I don't see why not. Demis Hassabis told us AI could lead to what he calls radical abundance, the elimination of scarcity. But he also worries about risk. There's two worries that I worry about. One is that bad
            • 11:30 - 12:00 actors, human users of these systems, repurpose them for harmful ends. The second thing is the AI systems themselves: as they become more autonomous and more powerful, can we make sure that we can keep control of the systems, that they're aligned with our values, that they're doing what we want, that benefits society, and that they stay on guard rails? Guard rails are safety limits built into the system. And I wonder if the race for AI dominance is a race to the bottom for
            • 12:00 - 12:30 safety. So that's one of my big worries, actually. Of course, all of this energy and racing and resources is great for progress, but it might incentivize certain actors to cut corners, and one of the corners that can be shortcut would be safety and responsibility. So the question is, how can we coordinate more, you know, as leading players, but also nation states, even? I think this is an international thing. AI is going to affect every country, everybody in the
            • 12:30 - 13:00 world. So I think it's really important that the world and the international community has a say in this. Can you teach an AI agent morality? I think you can. They learn by demonstration. They learn by teaching. And I think that's one of the things we have to do with these systems: give them a value system, guidance, and some guard rails around that, much in the way that you would teach a child. Google DeepMind is in a race with dozens
            • 13:00 - 13:30 of others striving for artificial general intelligence so human that you can't tell the difference, which made us think about Demis Hassabis signing the Nobel book of laureates. When does a machine sign for the first time, and after that, will humans ever sign it again? I think the next steps are going to be these amazing tools that enhance almost every endeavor we do as humans. And then beyond that, when AGI
            • 13:30 - 14:00 arrives, you know, I think it's going to change pretty much everything about the way we do things. And it's almost, you know, I think we need new great philosophers to come about, hopefully in the next 5 to 10 years, to understand the implications of these systems. Creating 3D worlds from images, bringing to life your own holiday photos, at 60minutesovertime.com.