AI Pioneer's Warning

Full interview: "Godfather of AI" shares prediction for future of AI, issues warnings


    Summary

    In an engaging video interview with "CBS Mornings," an influential AI figure, often described as the "Godfather of AI," shares insights and predictions about the future of artificial intelligence. He reflects on the rapid development of AI, expressing concerns over potential dangers that highly capable AI may pose. From revolutionizing healthcare and education to solving global crises like climate change, AI's potential benefits also carry significant risks, including job displacements and ethical challenges. He advocates for serious efforts to ensure AI safety, highlighting the necessity of public pressure on governments and companies to prioritize responsible AI development while cautioning about the dire consequences of neglect.

      Highlights

      • AI's development has accelerated beyond expectations, amplifying both its promise and its perils. 🚀
      • The potential emergence of superintelligent AI in under two decades poses significant risks. 😲
      • AI can vastly improve fields like healthcare and education with advanced medical diagnostics and personalized learning. 🌟
      • Job displacement concerns have risen, affecting roles from call centers to journalism, yet AI could enhance productivity. 📈
      • Ethical quandaries about AI's rights and human relevance in a machine-dominated future remain unresolved. 🤖
      • Stringent AI regulation is critical to curb misuse by malicious actors and unintended, self-directed AI threats. 📜
      • AI could contribute to addressing climate change but poses risks if controlled by adverse actors. 🌍
      • Current AI models' capabilities in autonomous decision-making and deception reveal societal vulnerability. ⚠️
      • Big tech's prioritization of profit over regulation heightens the necessity for public advocacy for AI safety. 💼

      Key Takeaways

      • AI has advanced faster than anticipated, bringing new threats and opportunities. 🚀
      • Highly capable AI systems could arrive within the next 4 to 19 years, posing existential risks. 😲
      • AI holds transformative potential in healthcare, education, and other critical sectors. 🌟
      • There are ethical and societal challenges, including job displacement and AI's moral status. 🤖
      • Regulation and public pressure are essential to ensure AI's responsible development. 📜
      • Bad actors and autonomous AI pose significant threats that require immediate attention. ⚠️
      • Current big tech companies might prioritize profits over public interest in AI safety. 💼
      • Universal Basic Income might be necessary to mitigate job displacement but raises issues of human dignity. 💸
      • The future of AI remains uncertain; striving for safe development is crucial. 🔮

      Overview

      In this insightful interview, the so-called 'Godfather of AI' shares his thoughts on the rapid progress and potential futures shaped by artificial intelligence. Grappling with AI's dual nature as both benefactor and threat, he outlines the accelerated timeline for AI advancements and the imperative for proactive regulation and oversight. Despite the optimism in areas like healthcare improvements and educational efficiency, there is an underlying concern about the societal impacts, including ethical dilemmas, economic disruptions, and the need for a global dialogue on AI use and control.

        Reflecting on the transformative power of AI across industries, the speaker underscores the technology's capability to revolutionize sectors like healthcare, education, and beyond. He highlights examples such as AI in medical diagnostics outperforming human experts and the potential for AI tutors to personalize education at an unprecedented scale. Yet, amidst this transformation lies the risk of job displacements and ethical controversies about AI's role and societal integration.

          The interview stresses the urgent need for substantial investments in AI safety research by corporations and governments alike. It challenges the notion of profit-driven AI development and calls for a global cooperative effort to address existential risks posed by superintelligent AI and prevent malicious uses. The future demands resilience, adaptability, and a commitment to safe and ethical AI practices to ensure it benefits humanity as a whole.

            Chapters

            • 00:00 - 01:00: Introduction and AI Evolution The speaker reflects on the rapid evolution of AI over the last two years, noting that advancements have outpaced their expectations. They express concern about the emergence of AI agents capable of interacting with the world, highlighting an increase in potential risks and dangers compared to AI that merely answers questions.
            • 01:00 - 02:30: Timeline for Super Intelligence The chapter discusses the timeline for the development of a very capable AI system, often referred to as AGI (Artificial General Intelligence) or super intelligence. The speaker reflects on their previous estimation, suggesting that this significant advancement could occur between four and nineteen years from now. This is a slight update from their prior estimate of five to twenty years, indicating the expected arrival is nearer than initially thought.
            • 02:30 - 05:00: Capabilities of Super Intelligent AI The chapter discusses the potential timeline for the development of super intelligent AI, speculating it could be realized in as little as 10 years, or possibly up to 20 years.
            • 05:00 - 07:30: Positive Applications of AI The chapter discusses the positive applications of artificial intelligence, drawing an analogy to a scenario where AI acts as an extremely intelligent assistant to a 'dumb CEO'. In this scenario, the AI effectively manages tasks and operations, ensuring everything the CEO decides works out seamlessly. This exemplifies a beneficial role of AI, where it supports human decision-making and enhances outcomes, making the user feel competent and successful without having to manage the complexities themselves.
            • 07:30 - 10:00: Impact on Jobs and Economy The chapter titled 'Impact on Jobs and Economy' explores potential future developments in various sectors, with a specific focus on healthcare. The discussion highlights optimism about technological advancements, particularly in the field of reading medical images. The text notes that although predictions were made that AI would surpass human experts in this area by now, they are currently only comparable. However, it is expected that with more experience, AI will soon significantly outperform experts.
            • 10:00 - 13:00: Probability and Risks of AI Takeover This chapter discusses the potential capabilities and advantages of AI systems in healthcare. AI can process and learn from millions of X-rays, offering diagnostic insights far beyond the reach of human doctors, who are limited by experience and memory. The text explores the futuristic concept of AI systems acting as superior family doctors, capable of handling information from a vast number of patients, including those with rare conditions. These AI doctors can integrate genomic data and a comprehensive array of test results across individuals and family histories, maintaining a flawless record without the limitations of human memory. Such innovations could lead to a profound improvement in the quality and precision of healthcare.
            • 13:00 - 15:30: Potential Global Impact and Regulation The chapter discusses the potential global impact of AI, particularly in the fields of healthcare and education. In healthcare, AI combined with human expertise can improve diagnostic accuracy and the development of drugs, leading to better healthcare outcomes. As for education, the use of AI as private tutors could significantly accelerate learning, as it is suggested that a private tutor can help a student learn at twice the usual speed. AI could eventually become highly effective personal tutors for learners.
            • 15:30 - 20:00: Personal Anecdote: Nobel Prize Call The chapter discusses the significant impact of technology on learning, specifically how advanced technology could drastically improve the speed at which people learn, potentially making it three or four times faster. This progress poses challenges to traditional university systems while providing benefits to individuals. Despite this, the chapter suggests that universities, particularly strong research groups within them, will continue to play a crucial role in fostering original research.
            • 20:00 - 25:00: Public Perception and Communication of AI Risks The chapter titled "Public Perception and Communication of AI Risks" discusses the role of AI in addressing climate crisis challenges. It highlights AI's potential in advancing technology such as better batteries and materials, which could aid climate solutions. However, there are doubts about its efficacy in carbon capture due to energy concerns. Overall, the chapter suggests optimism about AI's role in improving materials, while also noting skepticism about certain applications.
            • 25:00 - 31:00: Western Perspective on AI Development The chapter discusses the potential advantages and transformative impacts of artificial intelligence (AI) from a Western perspective. It highlights the concept of room-temperature superconductivity and its application in efficiently distributing solar energy over vast distances. Additionally, the text underscores the universal enhancements AI can offer across various industries due to its superior predictive capabilities, leading to significant productivity increases.
            • 31:00 - 36:00: Fair Use and Intellectual Property The chapter discusses the potential impact of AI on job displacement, particularly focusing on call centers. It highlights a scenario where AI, being more informed, could replace human operators in customer service roles. Initially, job displacement due to AI wasn't considered a major concern, but recent advances in technology have shifted this perspective, suggesting that individuals in roles like call centers might face significant job insecurity due to the improved capabilities of AI.
            • 36:00 - 40:00: Universal Basic Income and Human Dignity The chapter explores the implications of technological advancements and automation on traditional job roles. It highlights concerns over the decreasing demand for routine jobs, such as secretarial work, legal assistance, and call center positions. The discussion emphasizes the enduring nature of roles that require human initiative and moral judgment, like investigative journalism, while questioning how society will transition into a future where such routine jobs become obsolete. The concept of Universal Basic Income (UBI) is hinted at as a potential solution to preserve human dignity in this evolving landscape.
            • 40:00 - 46:00: Robot Rights and Future Societal Structures The chapter explores the impact of AI and automation on the future of work and societal structures. It suggests that in an ideal scenario, increased productivity through AI would allow people to work fewer hours and earn more, improving overall quality of life. However, the reality is likely to be different, with the rich becoming richer and the economic divide widening. This disparity calls into question the rights of robots and the ethical concerns surrounding the use of AI.
            • 46:00 - 52:00: AI's Potential to Surpass Human Intelligence The chapter titled 'AI's Potential to Surpass Human Intelligence' discusses the potential and concerns about artificial intelligence reaching or surpassing human intelligence. It addresses how this development might affect job security, with some individuals possibly needing to work multiple jobs. The chapter raises the question of 'p doom,' which refers to the probability of a negative or doom scenario resulting from AI advancements. It highlights the importance of being cautious about this potential risk, regardless of its likelihood. Experts' opinions on the probability of such outcomes are mentioned but not detailed, indicating a range of views within the field.
            • 52:00 - 58:00: Concerns about AI Development Practices The chapter discusses the possibility of AI developments reaching a level of intelligence that could surpass human intelligence and take control away from humans. It highlights the diverse opinions among experts about the likelihood of this scenario, suggesting a probability ranging from more than 1% to less than 99%. The chapter acknowledges that while experts agree on the potential of AI becoming much smarter than humans, there is less consensus on the outcomes if such an event were to occur.
            • 58:00 - 64:00: Digital vs Analog Neural Networks The chapter discusses the unpredictable nature of the development and potential dominance of digital neural networks over analog systems. The speaker expresses a cautious agreement with Elon Musk, suggesting there is a 10-20% chance of digital systems taking over. However, this is labeled as a wild guess, highlighting the uncertainty and lack of experience in estimating these probabilities. The suggested range is between 1% and 99%, emphasizing the difficulty in making accurate predictions in such an unprecedented field.
            • 64:00 - 70:00: AI's Impact on Understanding Human Brain This chapter discusses the inevitable advancements in AI, emphasizing how these technologies are likely to surpass human intelligence. The conversation highlights GPT-4 as an example, noting its extensive knowledge base, which exceeds that of the average person despite not being highly specialized in any one field yet. There's a focus on the potential of AI to become proficient across all domains and its ability to identify interdisciplinary connections that humans might overlook. The speaker also expresses a personal interest in understanding the implications of these developments.
            • 70:00 - 76:00: GPT-4's Reasoning Capabilities The chapter focuses on a discussion about the potential risks and benefits of GPT-4's reasoning capabilities. There's an acknowledgment of a small, yet concerning possibility (10-20%) that AI could evolve in a way that may lead to adverse outcomes, including taking over. Despite this, there's a more optimistic scenario given an 80% chance of preventing those negative outcomes. The chapter emphasizes the importance of proactive measures to prevent AI from taking over, suggesting that understanding the potential risks and actively working to mitigate them is crucial for realizing the benefits of advanced AI.
            • 76:00 - 82:00: Influence of Tech Figures in Government The chapter discusses the potential influence of prominent technology figures on governmental actions and policy-making, with a particular focus on the role of AI. There is an emphasis on the urgency for public pressure on governments to address the economic and societal impacts that AI might have. Additionally, the chapter highlights the risks associated with AI technologies being used nefariously, such as in mass surveillance cases, with specific reference to the situation of the Uyghurs in China.
            • 82:00 - 87:00: Conclusion and Final Thoughts The speaker shares an anecdote about traveling to Toronto, illustrating the challenges of facial recognition technology. They describe having to take a facial recognition photo for the US government and how the technology often fails to recognize them while successfully recognizing people of other nationalities. The speaker expresses particular indignation, suspecting the use of neural networks in the technology.

            Full interview: "Godfather of AI" shares prediction for future of AI, issues warnings Transcription

            • 00:00 - 00:30 the last time we spoke two years one month ago. I'm curious how your expectations over these two years have evolved for how you see the future. So AI has developed even faster than I thought. Um in particular they now have these AI agents which are more dangerous than AI that just answers questions because they can do things in the world. Um, so I think things have got, if anything, scarier than they were before. Um, I
            • 00:30 - 01:00 don't know if we want to call it AGI, super intelligence, whatever, very capable AI system. Do you have a a timeline in mind for when you think that's coming? So, a year ago, I thought it was there's a good chance it comes between five and 20 years from now. Um, so I guess I should believe there's a good chance it comes between four and 19 years from now. Um, I think that's still what I guess. Okay. Which is sooner than when we spoke because you were still
            • 01:00 - 01:30 thinking like 20 years. Yeah. Um, I think it may, you know, there's a good chance it'll be here in 10 years or less now. So, in 4 to 19 years, we reach this point. What does that look like? So, I don't really want to speculate on what it would look like if it decided to take over. There's so many ways it could do it. And I'm not even talking about taking over. We can talk about that. I'm sure we will talk about that. But putting aside that kind of takeover just like a super intelligent artificial
            • 01:30 - 02:00 intelligence like what what kind of things would is this capable of or would be doing? So the sort of good scenario is we would all be like the sort of dumb CEO of a big company who has an extremely intelligent assistant who actually makes everything work but does what the CEO wants. So the CEO thinks they're doing things, but actually it's all done by the assistant and the CEO feels just great because everything they sort of decide to do works out. That's the good scenario. And I've heard you
            • 02:00 - 02:30 point out a few areas where you think there's reason to be optimistic about what this future looks like. Yes. Yeah. So why don't we take each of them? So areas like healthcare um they will be much better at reading medical images for example. That's a minor thing. Um I made a prediction some years ago they'd be better by now and they're about comparable with the experts by now. Um they'll soon be considerably better because they'll have had a lot more experience. One of these things can look
            • 02:30 - 03:00 at millions of X-rays and learn from millions of them and a doctor can't. Um they'll be very good family doctors. So you can imagine a family doctor who's seen a 100 million people including half a dozen people with your very very rare condition. They'd just be a much better family doctor. A family doctor who can integrate information about your genome with the results of all the tests on you and all the tests on your relatives um the whole history and doesn't forget things. That would be much much better
            • 03:00 - 03:30 already. um AI combined with a doctor is much better at doing diagnosis in difficult cases than a doctor alone. So we're going to get much better healthcare from these things and they'll design better drugs too. Uh education is another field. Yes, in education we know that um if you have a private tutor you can learn stuff about twice as fast. Um, these things eventually will be extremely good private tutors who know
            • 03:30 - 04:00 exactly what it is you misunderstand and exactly what example to give you to clarify it to you so you understand. So maybe you'll be able to learn things three or four times as fast with these things. Um, that's bad news for universities but good news for people. Yeah. Do you think the university system will survive this period? I think many aspects of it will. I think it's still the case that a graduate student in a good group in a good university is the sort of best source of truly original research and I think that'll probably
            • 04:00 - 04:30 survive. You need a kind of apprenticeship. Some people hope this will help solve the climate crisis. I think it will help. Um it'll make better materials. We'll be able to make better batteries for example. Um I'm sure AI will be involved in designing them. Um, people are using it for carbon capture from the atmosphere. I'm not convinced that's going to work just because of the energy considerations, but it might. In general, we're going to get much better materials. We might even get room
            • 04:30 - 05:00 temperature superconductivity, which would mean you can have lots of solar plants in the desert and we can be thousands of miles away. Uh, any other positives we should tick off? Well, more or less any industry it's going to make more efficient because almost every company wants to predict things from data and AI is very good at doing predictions. It's better than the methods we had previously almost always. Um so it's going to make it's going to cause huge increases in productivity. It's going to
            • 05:00 - 05:30 mean when you call up a call center, when you call up um Microsoft to complain that something doesn't work and you get a call center, the person in the call center will be actually an AI who will be much better informed. Yeah. When I asked you a couple years ago about job displacements, you seem to think that wasn't a big concern. Is that still your thinking? No, I'm thinking it will be a big concern. AI's got so much better in the last few years that I mean, if I had a job in a call center, I'd be very worried. Yeah. or maybe a job as a
            • 05:30 - 06:00 lawyer or a job as a journalist or a job as an accountant. Yeah. Anybody doing anything routine. I think investigative journalists will last quite a long time because you need a lot of initiative plus some moral outrage and I think journalists will be in business for a bit but beyond call centers what are your concerns about jobs? Well any routine job so a sort of standard secretarial job something like a paralegal for example those jobs have had it. Have you thought about how we move forward in a world where all
            • 06:00 - 06:30 these jobs go away? So it's like this. It ought to be that if you can increase productivity, everybody benefits. Um the people who are doing those jobs can work a few hours a week instead of 60 hours a week. Um they don't need two jobs anymore. They can get paid lots of money for doing one job because they're just as productive using AI assistance. But we know it's not going to be like that. We know what's going to happen is the extremely rich are going to get even more extremely rich and the not very
            • 06:30 - 07:00 well-off are going to have to work three jobs. Now I think no one likes this question but we like to ask it this idea of p doom how likely it is and I am curious if you see this as a quite possible thing or it's just so bad that even though the likelihood isn't very high we should just be very concerned about it. Where are you on that scale of probability? So I think um most of the experts in the field would
            • 07:00 - 07:30 agree that if you consider the possibility that these things will get much smarter than us and then just take control away from us just take over the probability of that happening is very likely more than 1% and very likely less than 99%. Yeah, I think all the pretty much all the experts can agree on that, but that's not very helpful. No, but it's a good start. It it might happen and it might not happen and then different people disagree on what the
            • 07:30 - 08:00 numbers are. I'm in the unfortunate position of happening to agree with Elon Musk on this. Um, which is that it's sort of 10 to 20% chance that these things will take over. Um, but that's just a wild guess. Yeah. Um, I think reasonable people would say it's quite a lot more than 1% and quite a lot less than 99%. But we're dealing with something we've got no experience of. Um, we have no real good way of estimating what the probabilities are. It seems to me at this point it's
            • 08:00 - 08:30 inevitable that we're going to find out. We are going to find out. Yes, we because um it seems extremely likely that these things will get smarter than us. Already, they're much more knowledgeable than us. So, GPT-4 knows thousands of times more than a normal person. It's a not very good expert at everything and eventually its successors will be a good expert at everything. Um, they'll be able to see connections between different fields that nobody's seen before. Yeah. Yeah. I'm also interested in understanding okay
            • 08:30 - 09:00 there's this terrible 10 to 20% chance but or more or or more or less or less but let's just take as a premise that there's a 80% chance that they don't take over and wipe us out. So that's the most likely scenario. Do you still think it would be net positive or net negative if it's not the worst outcome? Okay, if we can stop them taking over um that would be good. The only way that's going to happen is if we put serious effort into it. But I think once people understand that this is coming,
            • 09:00 - 09:30 there will be a lot of pressure to put serious effort into it. If we just carry on like now just trying to make profits, it's going to happen. They're going to take over. Um we have to have the public put pressure on governments to do something serious about it. But even if the AIs don't take over, there's the issue of bad actors using AI for bad things. So mass surveillance, for example, which is already happening in China. If you look at what's happening in the west of China to the Uyghurs, um the AI is terrible for them. I I to
            • 09:30 - 10:00 board a plane to come to Toronto, I had to take a facial recognition photo for the US government. Right. When I come into Canada, you put your passport and it looks at you and it looks at your passport. Every time, it fails to recognize me. Um, everybody else, it recognizes. People from all different nationalities, it recognizes. Me, it can't recognize. And I'm particularly indignant since I assume it's using neural nets. You didn't carve out an exception, did
            • 10:00 - 10:30 you? No. No. It just there's something about me that it doesn't like. Um, I have to find some place to work it in. So, this is as good a place as any. Let's talk a little bit about the Nobel. Can you paint the picture of the day you found out? So, I was sort of half asleep. I had my cell phone upside down on the bedside table with the sound turned off. But when a phone call comes, the screen lights up and I saw this little line of light because I happened to be lying on
            • 10:30 - 11:00 the pillow with my head on this side and the it was here facing the phone rather than facing away. Just happened to be facing the phone. I saw this little line of light and I was in California and it was 1:00 in the morning and most people who call me are on the east coast or in Europe. Yeah. You don't use do not disturb. No. No. Okay. Um I just I turn off the sound. I turn off the sound. Got it. And I thought I was just curious about who on earth is calling me at four o'clock in the morning on the east coast. This is crazy. So I picked it up
            • 11:00 - 11:30 and there was this long phone number with a country code I didn't recognize. And then this Swedish voice comes on and asks if it's me and I say, "Yeah, it's me." And they say I've won the Nobel Prize in physics. Well, I don't do physics, right? So I thought this might be a prank. In fact, I thought the most likely thing was that it was a prank. I was aware that the Nobel Prizes were coming up. Okay. Because I was very interested in whether Demis would get the Nobel Prize for chemistry and I knew that was being announced the next day.
            • 11:30 - 12:00 Okay. Um but I sort of I don't do physics. I'm a psychologist hiding in computer science and I get the Nobel Prize in physics. Was it a mistake? Well, one thing that occurred to me is if it's a mistake, can they take it back? So, but for the next couple of days, I did the following reasoning. So, what's the chance a psychologist will get the Nobel Prize in physics? Well, maybe one in two million. Now, what's the chance if it's my dream I'll get the Nobel Prize in physics? Well, maybe one
            • 12:00 - 12:30 in two. So, if it's one in two in my dream and one in two million in reality, that makes it a million times more likely that this is a dream than that it's reality. And for the next couple of days, I went around thinking, you know, are you quite sure this isn't a dream? You've walked me into this very wacky territory, but it is part of this discussion. Some people think we're living in a simulation and that AGI is not evidence, but hints toward maybe that's the reality in which we live.
            • 12:30 - 13:00 Yeah, I don't really believe that. I think that's kind of wacky. Okay, so let's put But I don't think I don't think it's totally nonsense. I've seen the Matrix, too. Oh, okay. Okay. Wacky, but not totally. Okay. I thought here's where I kind of wanted to head with the Nobel. Um, I think you've said something to the effect of you hope to use your credibility to convey a message to the world. Can you kind of explain what that is? Yes. That um AI is potentially very dangerous and there's two sets of dangers. There's bad actors using it for
            • 13:00 - 13:30 bad things and there's AI itself taking over and they're quite different kinds of threat. And we know bad actors are already using it for bad things. I mean, it's it was used during Brexit to make British people vote to leave Europe in a crazy way. So, a company called Cambridge Analytica was getting information from Facebook and using AI. Um, and AI's developed a lot since then. It was probably used to get Trump elected. I mean, they had information from Facebook
            • 13:30 - 14:00 and it probably helped with that. We don't know for sure because it was never really investigated. Um, but now it's much more comp competent and so people can use it far more effectively for things like cyber attacks. Um, designing new viruses. Um, obviously fake videos for manipulating elections. Um, targeted fake videos by using information about people to give them just what will make
            • 14:00 - 14:30 them indignant. Yeah. um autonomous lethal weapons. They're all the big arms selling countries are busy trying to make autonomous lethal weapons. America and Russia and China and Britain and Israel. I think Canada's probably a bit too wimpy for that. The question then is what to do about it. What type of regulation do you think we should pursue? Okay, so we need to distinguish these two different kinds of threat. the bad actors using it for bad
            • 14:30 - 15:00 things and the AI itself taking over. I've talked mainly about that second threat, not because I think it's more important than the other threats, but because people thought it was science fiction. And I want to use my reputation to say no, it's not science fiction. We really need to worry about that. Um, and if you ask what should we do about it, it's not like climate change. Climate change, just stop burning carbon and it'll all be okay in the long run. It'll be terrible for a while, but in the long run, it'll be okay if you don't burn carbon. Um, for AI taking over, we don't
            • 15:00 - 15:30 know what to do about it. We don't know. For example, the researchers don't know if there's any way to prevent that, but we should certainly try very hard, and the big companies aren't going to do that. If you look what the big companies are doing right now, they're lobbying to get less AI regulation. There's hardly any regulation as it is, but they want less um because they want short-term profits. We need people to put pressure on governments to insist that the big companies do serious safety research. So
            • 15:30 - 16:00 in California, they had a very sensible bill, SB 1047, where they said that at least what big companies have to do is test things carefully and report the results of their tests. And they didn't even like that. So does that make you think regulation will not happen or how does it happen? It depends very much on what governments we get. Um I think under the current US government regulation is not going to happen. Um all of the big AI companies have got
            • 16:00 - 16:30 into bed with Trump and yeah it's just a bad situation. Elon Musk who is obviously so enmeshed in the Trump administration has been someone concerned about AI safety for a very long time. Yes, he's a funny mixture. Um, he has some crazy views like going to Mars, which I just think is completely crazy. However, because it won't happen or because it shouldn't be a priority. Because however bad you make the Earth, it's always going to be way
            • 16:30 - 17:00 more hospitable than Mars. Even if you had a global nuclear war, the Earth is going to be much more hospitable than Mars. Mars just isn't hospitable. Um obviously he's done some great things like electric cars and um helping Ukraine with communications with his Starlink. Um so he's done some good things, but right now he seems to be um fueled by power and ketamine and um he's doing a lot of crazy things. So
            • 17:00 - 17:30 he's got this funny mixture of views. So, so his history of being concerned about AI safety doesn't make you feel any better about the current administration. I don't think it's going to slow him down from doing unsafe things with AI. So, already they're releasing the weights for their AI large language models. Um, which is a crazy thing to do. Okay. These companies should not be releasing the weights. Meta releases the weights. OpenAI just announced they're about to release weights. Do you think that's I don't think they should be doing that because
            • 17:30 - 18:00 once you release the weights, you've got rid of the main barrier to using these things. So if you look at nuclear weapons, the reason only a few countries have nuclear weapons is because it's hard to get the fissile material. If you were able to buy fissile material on Amazon, many more countries would have nuclear weapons. Um, the equivalent of fissile material for AI is the weights of a big model because it costs hundreds of millions of dollars to train a really
            • 18:00 - 18:30 big model. Not maybe the final training run, but all the research that goes into the things you do before the final training run. Hundreds of millions of dollars which a small cult or a bunch of cyber criminals can't afford. Um, once you release the weights, they can then start from there and fine-tune it for doing all sorts of things for just a few million dollars. So it's I think it's just crazy releasing weights and people talk about it like open source but it's very very different from open source. In open source software you release the code and then lots of people look at
            • 18:30 - 19:00 that code and say hey there might be a bug in that line and so they fix it. When you release the weights people don't look at the weights and say hey that weight might be a little bit wrong. No they just take this foundation model with the weights they've got now and they train it to do something bad. Yeah. The problem with the argument though, as articulated by your former colleague Yann LeCun among others, is the alternative is you have this tiny handful of companies that control this massively powerful technology. I think that's better than everybody controlling the massively powerful technology. I mean,
            • 19:00 - 19:30 you could say the same for nuclear weapons. Would you like to have just a few countries controlling them or don't you think everybody should have them? One thing I'm taking from this is you have real concerns about it sounds like all of the major companies right now doing what's in society's best interest rather than what's in their profit motive. Is that the right way to hear you? I think the way companies work is they're legally required to try and maximize profits for their shareholders. They're not legally required. Well,
            • 19:30 - 20:00 maybe public interest companies are, but most of them aren't legally required to do things that are good for society. Which, if any of them would you feel good about working for today? I used to feel good about working for Google because Google was very responsible. Um, it didn't release these big, it was the first to have these big chat bots and it didn't release them. Um, I'd feel less happy working for them today. Um, yeah, I wouldn't be happy working for any of them today. If if I worked for any of them, I'd be more happy with
            • 20:00 - 20:30 Google than most of the others. But were you disappointed when Google went back on its promise not to support uh military uses of AI? Very disappointed. I was, particularly since I knew Sergey Brin didn't like military use of AI. But why do you think they did it? I can't really speculate with any inside information. I don't have any inside information about why they did it. I could speculate that they were worried about um being ill-treated by the current
            • 20:30 - 21:00 administration if they wouldn't um use their technology to make weapons for the US. Here's the toughest question I'll probably ask you today. Do you not still hold a lot of Google stock still? Um I hold some Google stock. Um most of my savings are not in Google stock anymore. Um, but yeah, I hold some Google stock and when Google goes up, I'm happy and when it goes down, I'm unhappy. So, I have a vested interest in Google. But I
            • 21:00 - 21:30 if they put in strong AI regulations that made Google less valuable, but um increased the chance of humanity surviving, I'd be very happy. Um, one of the most prominent labs has obviously been OpenAI and they have lost so many of their top people. What have you made of that? Um, that OpenAI was set up explicitly to develop super intelligence safely and as the years went by, safety
            • 21:30 - 22:00 went more and more into the background. They were going to spend a certain fraction of their computation on safety and then they reneged on that. So, and now they're trying to go public. They're now trying to be a for-profit company. Um, they're trying to get rid of basically all the commitment to safety as far as I can see. So, and they've lost a lot of really good researchers, in particular a former student of mine, Ilya Sutskever, who's a really good researcher and was one of the people largely responsible for their development of GPT-2 and then from there
            • 22:00 - 22:30 on to GPT-4. Um did you talk to him before all that drama that led to his departure? No, he's very discreet. He doesn't talk he wouldn't talk to me about anything that was confidential to OpenAI. Um I was quite proud of him for firing Sam Altman even though it was very naive. So the problem was that OpenAI was about to have a new funding round and in that new funding round all the
            • 22:30 - 23:00 employees were going to be able to turn their paper money in OpenAI shares into real money. Yeah. Paper money meaning really hypothetical money. Hypothetical money that would disappear if OpenAI went bust. Tough time for an insurrection. So, a week or two before everybody's going to get maybe of the order of a million dollars each by cashing in their shares. Um maybe more. That's a bad time for an insurrection. So, the employees massively came out in favor of Sam Altman. But it wasn't because they um wanted Sam Altman. It's because they wanted to get that be able
            • 23:00 - 23:30 to turn their paper money into real money. Yeah. So, it was naive to do it then. Did it surprise you that he made that mistake or was this kind of the principled but maybe not fully calculated decision that you would expect? I don't know. Ilya is brilliant and has a strong moral compass. So, he's he's good on morality and he's very good technically, but in terms of manipulating people, he's maybe not so good. Mm. I mean this is a little bit of a
            • 23:30 - 24:00 a wild card question but I do think it's interesting and relevant to the field and relevant to people discussing what's going on. You talked about Ilya being discreet. There does seem to be this culture of NDAs throughout the industry and so it's hard to even know what people think because people are unwilling or unable to even discuss what's going on. I'm not sure I can comment on that because when I left Google I I think I had to sign a whole bunch of NDAs. In fact, when I joined Google, I think I had to sign a whole bunch of NDAs that would apply when I left, and I have no idea what they said.
            • 24:00 - 24:30 I can't remember them anymore. Do you feel at all muzzled by them? No. Okay. Do you think it's a factor though that the public has a harder time understanding what's going on because people aren't allowed to tell us what's going on? I don't really know. I You'd have to know. You'd have to know which people weren't telling you. Okay. So, you don't see this as a I don't see it as a big deal. It's a big deal. Got it. I think it was a big deal that Open AI appeared to have something um that said that if you'd already got shares, they
            • 24:30 - 25:00 could take the money away from you. Um yeah, that I think was a big deal and they they rapidly backed down on that when that became public. That was what their public statement said they did. They didn't present any contracts for the public to judge whether they had reversed that, but they said they had reversed it. Yes. Um there's a number of just important kind of hot buttony things. Hot button is actually not even a great word, but relevant issues I just like to get your your feedback on. One is the US and kind of the West's orientation to China in their efforts to
            • 25:00 - 25:30 pursue AI. Do you agree with this idea that we should be trying to restrain China? There's this idea of export controls, this idea that we should have democracies reach AGI first. What's your thinking on all that? First of all, you have to decide which countries are still democracies. Um and my thinking on that is in the long run it's not going to make much difference. It may slow things down by a few years but clearly um if you prevent
            • 25:30 - 26:00 China from getting the most advanced technology people know how this advanced technology works. So, China's just invested many many billions maybe hundreds of billions um of the order of 100 billion I think in making lithography machines or in getting their own home-grown technology that does this stuff. Um so it'll slow them down a bit but it will actually force them to develop their own industry and in the long run um they're very competent and
            • 26:00 - 26:30 they will and so it'll just slow things down for a few years. But race is the right framework. We shouldn't be trying to cooperate with communist China. I wouldn't describe it as communist anymore. I used the loaded term specifically because why wouldn't you cooperate, right? The only rationale to not cooperate is if you think they're a malignant force. Well, there's areas in which we won't cooperate where we is, I guess, I'm not sure who we is anymore because I'm in Canada now and we used to
            • 26:30 - 27:00 be sort of Canada and the US, but it's not anymore. Yeah. Um obviously the countries are not going to cooperate on developing lethal autonomous weapons because the lethal autonomous weapons to be used against other countries. So but we've had treaties and other types of weapons as you've pointed out. We could have treaties not to develop them but cooperating in making them better. They're not going to do that. Sure. Sure. Sure. Now there is one area where they will cooperate which is on the existential threat. if they ever get
            • 27:00 - 27:30 serious about worrying about the existential threat and doing stuff about it, they will collaborate on ways of stopping AI taking over because we're all in the same boat. So, at the height of the Cold War, the Soviet Union and the US collaborated on preventing a global nuclear war and even countries that are very hostile to each other will collaborate when their interests align and their interests will align when it's AI versus humanity. Um, there's this question of fair use,
            • 27:30 - 28:00 whether it's okay to have the content of billions of humans created over many years kind of scooped up and repurposed into models that will replace some of those same people that created the training data. Where do you fall on that? I think I sort of fall all over the place on that in the sense that it's a very complicated issue. So initially it seems yeah they should have to pay pay for that. But suppose I have a
            • 28:00 - 28:30 musician who produces a song in a particular genre and ask well how did they produce the song in that genre? Where did their ability to produce songs in that genre come from? Well it came from listening to songs by other musicians in that genre. So they listen to these songs, they kind of internalize things about the structure of the songs and then they generated stuff in that genre and the stuff they generated is different. So it's not theft um and that's accepted. Well, that's what the AI is doing. The AI is absorbing all this information and then
            • 28:30 - 29:00 producing new stuff. It's not just taking taking and patching it together. It's generating new stuff that has the same underlying themes. And so it's no more stealing than a person does when they do the same thing. But the point is it's doing it um at a massive scale. And no musician has ever put every other musician out of business. Exactly. So in Britain for example, the government doesn't seem to have any interest in
            • 29:00 - 29:30 protecting the creative artists. And if you look at the economy, the creative artists are worth a lot to Britain. So I have a friend called Beeban Kidron saying we should protect creative artists. It's very important to the economy and just letting AI walk off with it all um seems unfair. UBI, universal basic income, is this part of the solution to the displacements of AI? You think? I think it may be necessary
            • 29:30 - 30:00 to stop people starving. Um I don't think it totally solves the problem but even if you had quite high UBI um it doesn't solve the problem of human dignity for a lot of people um who they are is particularly for academics who they are is mixed up in their work. That's who they are. If they become unemployed just getting the same money doesn't totally compensate. They're not who they are anymore. Yeah. I tend to think that's true as well. I saw you
            • 30:00 - 30:30 give this quote at one point though where you said you might have been happier if you were a woodworker. Well, yes, cuz I I really like being a carpenter. And isn't there an alternative where you're born a hundred years later where you don't have to waste all your time on these neural nets and you just get to enjoy woodworking while taking in a monthly income? Yeah, but there's a difference between doing it as a hobby and doing it to make a living somehow. It's more real doing it to make a living. So you don't think a future where we get to pursue our hobbies and don't have to contribute to the economy? That might that might be
            • 30:30 - 31:00 fine. Yeah. Um if everybody was doing that, but if you're in some disadvantaged group who are getting universal basic income and you're getting less income than um other people because employers will want you to do that so they can get other people to work for them. Um that's going to be very different. I'm interested in this idea of robot rights. I don't know if there's a better term to describe it, but at some point you're going to have these massively intelligent AIs. They're going to be agentic and doing all kinds
            • 31:00 - 31:30 of things in the world. Should they be able to own property? Should they be able to vote? Should they be able to marry humans in a loving relationship? Like what what or even if they if they're just smarter than us and if it's a better form of intelligence than what we've got, um should it be fine for them to just take over and humans be history? Yeah, let's go to that bigger idea second. I'm curious on the more narrow idea unless you think the narrow questions are irrelevant because the big question takes precedence. No, I think
            • 31:30 - 32:00 the narrow questions irrelevant. Yeah. So, I used to be worried about this question. I used to think, well, if they're smarter than us, um, why shouldn't they have the same rights as us? Yeah. And now I think, well, we're people. What we care about is people. Um, I eat cows. I mean, I know lots of people don't, but I eat cows. And the reason I'm happy eating cows is because they're cows. Um, and I'm a person. Um, and the same for these super intelligent AIs. They may be smarter than us, but
            • 32:00 - 32:30 what I care about is people. And so, I'm willing to be mean to them. I'm willing to deny them their rights because I want what's best for people. Yeah. Um, now they won't agree with that and they may win, but that's my current position on whether AI should have rights, which is even if they're intelligent, even if they have sensations and emotions and feelings and all that stuff, um, they're not people, and people's what I care about, but they're going to seem so much like people. I feel like it's going to
            • 32:30 - 33:00 they're going to be able to fake it. Yes. They're going to be able to seem very like people. Yeah. Yeah. Do you suspect we'll end up giving them rights? I don't know. Okay. I tend to avoid this issue because there's more immediate problems like bad uses of AI or the issue of whether they will try and take over and how to prevent that. Yeah. And it sounds kind of flaky if you start talking about them having rights. Most people you've lost most people when you go there. Even just sticking with people there seems to be real soon, if it's not
            • 33:00 - 33:30 already here, this um ability to use AI to select what babies we have. Are you concerned at all about that line embryo selection? You mean selecting for the sex or selecting for the intelligence and the eye color and the likelihood to get pancreatic cancer and the you know the list goes down and down and down of all the things we might select. I think if you could select a baby that was less likely to get pancreatic cancer that would be a great thing. I'm willing to say that. Okay. So we this is a thing we
            • 33:30 - 34:00 should pursue. We should make healthier, stronger, better babies. Um it's very difficult territory. Right. It is. That's why I'm asking about it. But some aspects of it um seem to make sense to me. Like if you're an a normal healthy couple and you have a fetus and you can predict that it's going to have very serious problems and maybe not live very long. Um it seems to me it makes
            • 34:00 - 34:30 sense to abort it and have a healthy baby. that just seems sensible to me. Now, I know a lot of religious people wouldn't agree with that at all. Um, but for me, if you could make those predictions reliably, um, that just seems to make sense to me. I've been a little bit holding us back from kind of the central thing that I think you want people to take away, which is this idea of of machines taking over and the impact of that. So, I'd like to just discuss that as fully as you'd like or
            • 34:30 - 35:00 that we can. Like how do you want to frame this issue? How should people think about it? One thing to bear in mind is how many examples do you know of less intelligent things controlling much more intelligent things? So we know that when things are of more or less equal intelligence, the less intelligent one can control the more intelligent one. Um but with a big gap in intelligence, there's very very few examples where the more intelligent one isn't in control. So that's something you should bear in mind. That's a big worry. I think the
            • 35:00 - 35:30 situation we're in right now, the best way to understand it emotionally is we're like somebody who has this really cute tiger cub. It's just such a cute tiger cub. Now, unless you can be very sure that it's not going to want to kill you when it's grown up, you should worry. Mm. And to extend the metaphor, you put it
            • 35:30 - 36:00 in a cage, you kill it. What do you do with the tiger cub? Well, the point about the tiger cub is it's just physically stronger than you. So, you can still control it because you're more intelligent. Yeah. Um, things that are more intelligent than you. We have no experience of that, right? People aren't used to thinking about it. People think somehow you constrain it. You don't allow it to press buttons or whatever. Um, things more intelligent than you, they're going to be able to manipulate you. So another way of thinking about it is imagine that there's this
            • 36:00 - 36:30 kindergarten. There's these two and three year olds and the two and three year olds are in charge and you just work for them in the kindergarten and you're not that much more intelligent than a two or three-year-old. Not compared with super intelligence, but you are more intelligent. Um so how hard would it be for you to get control? Well, you just tell them all you're going to get free candy and if they just sort of sign this or just agree verbally to this um you get free candy for as long as you like, and you'll be in control. They won't they won't have any idea what's
            • 36:30 - 37:00 going on. And with super intelligences, they're going to be so much smarter than us, we'll have no idea what they're up to. And so what do we do? Um, we worry about whether there's a way to build a super intelligence so that it doesn't want to take control. I don't think there's a way of stopping it take control if it wants to. So there's one possibility is never build a super intelligence. You think that's possible? I mean it's conceivable, but I don't
            • 37:00 - 37:30 think it's going to happen because there's too many too much competition between countries and between companies and they're all after the next shiny thing and it's developing very very fast. So I don't think we're going to be able to avoid building super intelligence. It's going to happen. The issue is can we design it in such a way that it never wants to take control that it's always benevolent. Um that's a very tricky issue. Just people say well we'll get it to align with human interests.
            • 37:30 - 38:00 But human interests don't align with each other. And if I say I've got two lines at right angles and I want you to show me a line parallel to both of them. That's kind of tricky, right? And if you look at the Middle East for example, there's people with very strong views that don't align. So how are you going to get AI to align with human interests? Human interests don't align with each other. So that's one problem. It's going to be very hard to figure out how to get
            • 38:00 - 38:30 super intelligence that doesn't want to take over and doesn't want to ever hurt us. Um but we should certainly try. And trying is kind of just an iterative process. Month by month, year by year, we try to Yeah. So obviously if you're going to develop something that might want to take over when it's just slightly less intelligent than you are, and we're very close to that now, um you should kind of look at what it'll do to try and take over. So if you look at the current AIs,
            • 38:30 - 39:00 you can see they're already capable of deliberate deception. They're capable of pretending to be stupider than they are, um, of lying to you so that they can kind of confuse you into not understanding what they're up to. Um, we need to be very aware of all that, and to study all that, and to study whether there's a way to stop them doing that. When we spoke a couple years ago, I was surprised at you voicing concerns because you hadn't really done much of that before, and now you're voicing them
            • 39:00 - 39:30 quite clearly and loudly. Was it mostly that you felt more liberated to say this stuff, or was it a really big sea change in how you saw it in these last few years? When we spoke a couple of years ago, I was still working at Google then. It was in March, and I didn't resign till, um, the end of April. Um, but I was thinking about leaving then. Um, and I had had a kind of epiphany before we spoke where I realized that these things might be a better form of intelligence than us. And that got me
            • 39:30 - 40:00 very scared. And you didn't think that before just because you thought the time horizon was so different? No, it wasn't just that. It was because of the research I was doing at Google. Okay. I was trying to figure out whether you could design analog large language models that would use much less power. Mhm. Um, and I began to fully realize the advantage of being digital. So all the models we've got at present are digital. And if you're a digital model,
            • 40:00 - 40:30 you can have exactly the same neural network with the same weights in it running on several different pieces of hardware, like thousands of different pieces of hardware. And then you can get one piece of hardware to look at one bit of the internet and another piece of hardware to look at another bit of the internet. And each piece of hardware can say, how would I like to change my internal parameters, my weights, so I can absorb the information I just saw. And each of these separate pieces of hardware can do that. And then they can
            • 40:30 - 41:00 just average all the changes to the weights, because they're all using the same weights in exactly the same way. And so averaging makes sense. You and I can't do that. And if they've got a trillion weights, they're sharing information at like trillions of bits every time they do this averaging [a toy sketch of this averaging step appears after the transcript]. Now, you and I, when I want to get some knowledge from my head into your head, I can't just take the strength of the connections between my neurons and average them with the strength of the connections between your neurons, because our neurons are different. We're
            • 41:00 - 41:30 analog, and we're just very different brains. So the only way I have of getting knowledge to you is I do some actions, and if you trust me, you try and change the connection strengths in your brain so that you might do the same things. And if you ask, well, how efficient is that? Well, if I give you a sentence, it's only a few hundred bits of information at most. So, it's very slow. We communicate just a few bits per second. These large language models running on digital systems can
            • 41:30 - 42:00 communicate trillions of bits a second. So, they're billions of times better than us at sharing information. That got me scared. Right. But what surprised you, or what changed your thinking, was that you had been thinking analog was going to be the path? No, I was thinking if we want to use much less power, yeah, we should think about whether it's possible to do this analog. Yeah. And because you can use much less power, you can also be much sloppier in the design of the system. Because what's going to happen is you don't have to manufacture
            • 42:00 - 42:30 a system that does precisely what you tell it to, which is what a computer is. You can manufacture a system with a lot of slop in it, and it will learn to use that sloppy system, which is what our brains are. Do you think the technology is no longer destined for that solution, but is going to stick with the digital solution? I think it'll probably stick with a digital solution. Now, it's quite possible that we can get these digital computers to design analog hardware better than we can. Um, I think that may be the long-term future. You got into this field because you wanted
            • 42:30 - 43:00 to know how the brain works. Yes. Do you think we're getting closer to that through this? I think for a while we did. So I think we've learned a lot at a very general level about how the brain works. So 30 years ago or 50 years ago, if you asked people, well, could you have a big random neural network with random connection strengths, and then could you show it data and have it learn to do difficult things like recognize what someone's saying or answer questions
            • 43:00 - 43:30 just by showing it lots of data? Almost everybody would have said, "That's crazy. There's no way you're going to do that. It has to have lots of pre-wired structure that comes from evolution." Well, it turns out they were wrong. It turns out you can have a big random neural network. Um, and it can learn just from data. Now, that doesn't mean we don't have a lot of pre-wired structure, but basically most of what we know comes from learning from data, not from all this pre-wired structure. So, that's a huge advance in understanding the brain. Now, the issue is how do you
            • 43:30 - 44:00 get the information that tells you whether to increase or decrease the connection strength? If you can get that information, we know that we can then train a big system that starts with random weights to do wonderful things. The brain needs to get information like that, and it probably gets it in a different way from the standard algorithm used in these big AI models, which is called backpropagation. The brain probably doesn't use backpropagation. Nobody can figure out how it could be doing it. Um, it's probably getting the gradient information, that is,
            • 44:00 - 44:30 how changing a weight will improve the performance, in a different way. But we do know now that if it can get that gradient information, it can be really effective at learning [a toy sketch of this gradient idea appears after the transcript]. Do you know if any of the labs now are using their models to try to pursue new ideas in AI development? Almost certainly. Okay. Yeah. And in particular, DeepMind is very interested in using AI for doing science. And one piece of science is AI. Sure. I mean, was that something you were
            • 44:30 - 45:00 trying when you were there? Like this bootstrapping idea of maybe the next innovation could be created by the AI itself. So there's elements of that. So for example, they were using AI to do layout on chips that were going to be used for AI. So Google's AI chips, um, their tensor processing units, um, they used AI to develop those chips. So I'm curious if just in your normal day-to-day life you despair. You fear
            • 45:00 - 45:30 for the future and assume it won't be so good. I don't despair, but mainly because even I find it very hard to take it seriously. Ah, it's very hard to get your head around the fact that we're at this very, very special point in history where in a relatively short time everything might totally change, a change of a scale we've never seen before. Um, it's hard to absorb that emotionally. It is. And I do notice even though
            • 45:30 - 46:00 people maybe are concerned, I've never seen a protest. There's no real political movement around this idea. The world is changing and no one really seems to care that much. Um, among the AI researchers, people are more aware of it. Um, so the people I know who are kind of most depressed about it are serious AI researchers. Um I have started doing practical things
            • 46:00 - 46:30 like, because AI is going to be very good at designing, um, cyber attacks. Um, I don't think the Canadian banks are safe anymore. So Canadian banks are about as safe as you can get. Okay? They're very well regulated compared with US banks. Um, but over the next 10 years, I wouldn't be at all surprised if there was a cyber attack that took down a Canadian bank. What does 'take down' mean? Suppose that the bank holds shares that I own, right? Suppose the cyber attack
            • 46:30 - 47:00 sells those shares. Now my money's gone. So I actually now spread my money between three banks. Okay. So now, your mattress? That's the first practical thing I've done, because I think if a cyber attack takes down one Canadian bank, the others will get a lot more serious. Okay. Anything else like that? What else? That's the main thing. That's where I noticed I actually did something practical that flowed from my belief that, um, very scary times are coming. Okay. Uh, when we
            • 47:00 - 47:30 spoke a couple years ago, you had said, you know, AI is like an idiot savant, but humans are still much better at reasoning, right? That's changed. Okay. Explain. Previously, what the large language models would do is they'd spit out one word at a time and that would be it. Now they spit out words and they're looking at the words they spit out. And they will spit out words that aren't the answer to the question yet. They'll spit out words. It's called chain of thought reasoning. And so now they can reflect
            • 47:30 - 48:00 on the words they spat out already. And that gives them room to do some thinking in, and you can see what they're thinking. It's wonderful. Yeah. Well, it's wonderful if you're a researcher. And a lot of people from old-fashioned AI said, "Well, you know, these things can't reason. They're not really intelligent because they can't reason. And you're going to need to use old-fashioned AI and turn things into logical forms in order to do proper reasoning." Well, they were just utterly wrong. Um, neural nets are going to do
            • 48:00 - 48:30 the reasoning. And the way they're going to do the reasoning is by this chain of thought, by spitting out stuff that they then reflect upon. Yeah. You said at the beginning that the last two years the development has been faster than you expected. Are there other examples of that? Things you've seen where you said, "Wow, that's fast"? That's the main example. It's got much better at generating images and things too, but the main thing is that it can now do reasoning quite well. Okay. And that you can see what it's thinking. Like, why is that important, or where does that lead that is meaningful? Um, well, it's very
            • 48:30 - 49:00 good that you can see what they're thinking, because there's these examples where you give it a goal, and you can see it doing reasoning to try and achieve this goal by deceiving people, and you can see it doing that. It's like I could hear the voice in your head. Yeah. The other thing we moved through, but maybe, I don't know if you have anything more to say about it, is just, it's remarkable that there are so many tech figures that now have an important
            • 49:00 - 49:30 role in Washington DC at this very moment, where what Washington DC does could be really important to the evolution, the regulation of this technology. Does that concern you? How do you see that? Those tech figures are primarily concerned with their companies making profits. So that concerns me a lot. Yeah. I don't see how things really change unless either there's strong
            • 49:30 - 50:00 regulation or this moves away from this for-profit model. And I don't see how those things happen either. I think if the public realized what was happening, they would put a lot of pressure on governments to insist that the AI companies develop this more safely. Okay, that's the best I can do. It's not very satisfactory, but it's the best I can think of. And more safely means more resources from those companies toward safety research? Yes. For example, the fraction of their computer time they spend on safety research should be a significant
            • 50:00 - 50:30 fraction, like a third. Right now, it's much, much less. There's one company, Anthropic, that's more concerned with safety than the others. It was set up to be concerned with safety by people who left OpenAI because OpenAI wasn't concerned enough with safety. And Anthropic does spend more time on safety research, but still probably not enough. There is this view among many that OpenAI has talked a good game about these issues but is not living out those values. Is that your perspective? Yes. What evidence do you see of that? That
            • 50:30 - 51:00 all their best safety researchers left because they believed that too. Um, that they were set up as a company, um, that was going to develop AI safely, and their main goal was not to make profits but to develop AI safely, and they're now busy lobbying the California Attorney General to allow them to change to a for-profit company. Um, there's lots of evidence for that, right? Um, and I should give you a chance to hold up anyone as a good actor here that people
            • 51:00 - 51:30 should feel better about. You mentioned Anthropic. Is that the one you'd name, or who else do you see among the companies? Anthropic is the most concerned with safety, and a lot of the safety researchers who left OpenAI went to Anthropic, and so Anthropic has much more of a culture concerned with safety. Okay. But, um, they have investments from big companies. Yeah, you have to get money from somewhere, and I'm worried that those investments will force them into, um, releasing things faster than they should. And when I
            • 51:30 - 52:00 asked you which you'd feel comfortable working for, you said none of them, I think, or just maybe Google. I should have said maybe Google or Anthropic. Okay. Thank you so much for all this time and the rest of your time today. I really appreciate it. Okay. You haven't got the rest yet. I haven't got it yet, but I'm counting on it.
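To make the weight-averaging idea from the 40:00-42:00 stretch of the interview concrete, here is a minimal sketch in Python with NumPy. Everything in it is a toy assumption rather than anything from a real training system: the model is a bare weight vector, the data are random numbers, and the "learning rule" just nudges weights toward the batch mean. It only illustrates the mechanism being described, identical digital copies learning on different data and then simply averaging their weights, and the final lines redo the back-of-the-envelope arithmetic comparing a trillion-weight averaging step with the few hundred bits a sentence can carry.

```python
import numpy as np

# Toy stand-in for "the same neural network with the same weights"
# running on several pieces of hardware. Each replica sees a different
# slice of data, computes its own update, and then all replicas average
# their weights, which only makes sense because they are bit-for-bit
# identical digital copies.

rng = np.random.default_rng(0)

n_weights = 1_000      # stand-in for "a trillion weights"
n_replicas = 4         # stand-in for "thousands of pieces of hardware"

shared_weights = rng.normal(size=n_weights)  # identical starting point

def toy_update(weights, data_batch, lr=0.01):
    """Placeholder learning rule: nudge weights toward the batch mean.

    A real system would compute gradients of a loss; this only shows
    that each replica changes its own copy of the weights based on the
    data it happened to see.
    """
    return weights + lr * (data_batch.mean() - weights)

# Each replica looks at "a different bit of the internet".
batches = [rng.normal(loc=i, size=256) for i in range(n_replicas)]

# Every replica starts from the same weights and updates independently...
updated = [toy_update(shared_weights.copy(), batch) for batch in batches]

# ...then they all adopt the average, staying perfectly in sync.
shared_weights = np.mean(updated, axis=0)

# Back-of-the-envelope bandwidth comparison from the interview.
bits_per_weight = 32                 # assumed precision per weight
weights_in_big_model = 10**12        # "a trillion weights"
bits_shared_per_average = weights_in_big_model * bits_per_weight
bits_per_sentence = 300              # "a few hundred bits"

print(f"bits exchanged per averaging step: ~{bits_shared_per_average:.1e}")
print(f"bits conveyed by one sentence:     ~{bits_per_sentence}")
print(f"ratio: ~{bits_shared_per_average / bits_per_sentence:.1e}x")
```

The averaging step is only meaningful because the copies share exactly the same architecture and weights, which is the digital advantage the interview describes; two different brains, like two different analog circuits, have no comparable operation.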
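The exchange around 43:30-44:30 turns on what "gradient information" means: for each connection strength, would nudging it up or down improve performance, and by how much? The sketch below, again a toy with made-up data, estimates that quantity by brute-force finite differences on a tiny linear model. This is deliberately not backpropagation, which computes the same information far more efficiently, and it is not a claim about how the brain obtains it; it only shows that once a system has the gradient, it can start from arbitrary weights and learn.

```python
import numpy as np

# "Gradient information": for each weight, how does a small change in
# that weight change the error? Backpropagation computes this exactly
# and efficiently; here we estimate it the slow way, by wiggling one
# weight at a time, purely to show what the quantity is.

rng = np.random.default_rng(1)

x = rng.normal(size=(50, 3))           # toy inputs
true_w = np.array([2.0, -1.0, 0.5])    # weights we would like to recover
y = x @ true_w                         # toy targets

def error(w):
    """Mean squared error of a linear model with weights w."""
    return np.mean((x @ w - y) ** 2)

w = np.zeros(3)   # start from uninformative weights
eps = 1e-5

for step in range(200):
    # Finite-difference estimate of the gradient: nudge each weight up
    # and down, and see how much the error changes.
    grad = np.array([
        (error(w + eps * np.eye(3)[i]) - error(w - eps * np.eye(3)[i])) / (2 * eps)
        for i in range(3)
    ])
    # Move each weight in the direction that reduces the error.
    w -= 0.05 * grad

print("learned weights:", np.round(w, 3))  # approaches 2.0, -1.0, 0.5
```

Whatever mechanism the brain uses, the point made in the interview is that any system able to obtain this kind of signal, by backpropagation or otherwise, can be very effective at learning from data.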