AI pioneer Geoffrey Hinton says world is not prepared for what's coming
Estimated read time: 1:20
Summary
Geoffrey Hinton, a pioneer of AI, warns of the potential risks posed by rapid advancements in artificial intelligence. Despite winning a Nobel Prize for his contributions, Hinton expresses concern over AI's future impact on society, including its potential to empower authoritarian regimes and hackers. He notes the lack of regulation and the push from companies for even less oversight, fearing they prioritize short-term profits over safety. Hinton's life story is a testament to contrarian thinking and a deep curiosity about mechanics and systems, both of which have significantly influenced AI's development. He calls for significant regulatory measures to ensure AI remains safe and beneficial.
Highlights
Geoffrey Hinton expresses deep concerns about AI's future impacts, stating a 10-20% risk of AI taking over from humans. 😨
Rapid AI advancements could revolutionize fields like education, medicine, and climate change solutions. 🌱
Hinton criticizes major tech companies for lobbying against AI regulations to maintain short-term profits. 💸
He likens nurturing AI to raising a tiger cub without being sure it won't become a threat when grown. 🐅
Hinton's journey reflects his contrarian nature, heavily influencing AI's development through persistent curiosity and innovation. 🚀
Despite AI labs claiming safety importance, they oppose most regulations proposed by lawmakers. ⚖️
Key Takeaways
Geoffrey Hinton warns about rapid AI advancements 🌟
AI could transform education, medicine, and combat climate change 🌍
Potential risks include increased authoritarianism and hacking threats 🔓
Hinton criticizes the lack of AI regulation 📜
Hinton's contrarian mindset shaped his AI journey 🤔
He stresses the need for significant safety research in AI 🛡️
Overview
Geoffrey Hinton, once an outcast professor, now a Nobel laureate, stands at the forefront of AI innovation, yet his gaze is firmly set on the potential dangers that unchecked AI development could entail. Reflecting on his journey, Hinton recounts his early days of neural network exploration and the pivotal role of curiosity and tinkering in shaping modern AI technologies.
Hinton's narrative paints a cautionary picture of the current AI landscape, where corporations prioritize profits, potentially jeopardizing global safety. With a contrarian spirit inherited from a family of notable thinkers, Hinton voices concerns over AI's capabilities to enhance authoritarian regimes and facilitate advanced cyber threats, illustrating the pressing need for comprehensive regulatory frameworks.
Despite the looming threats, Hinton acknowledges AI's transformative potential in areas like medicine and environmental conservation. However, he warns that without significant regulatory intervention and a focus on AI safety research, society may face unprecedented risks, urging a balance between innovation and precaution to secure a benevolent AI future.
Chapters
00:00 - 00:30: Introduction to Geoffrey Hinton and his Achievements This chapter discusses Geoffrey Hinton's recognition with the Nobel Prize for his groundbreaking contributions to machine learning, a significant milestone in the advancement of artificial intelligence. It also mentions Brook Silva-Braga's visits to Hinton, emphasizing the global impact of AI innovations like OpenAI's ChatGPT, which set off intense competition and investment in AI technologies.
00:30 - 01:00: Glimpse into Hinton's Early Career and Breakthroughs The chapter provides a glimpse into Geoffrey Hinton's early career and his breakthroughs in the field of artificial intelligence. Despite entering the field decades before it gained popularity, Hinton's persistence and innovation eventually led to significant recognition. Now retired from Google, he offers a unique, independent perspective on the evolution of AI and its future trajectory. The narrative includes a moment when Hinton, once considered an outcast professor, received a life-changing call in the middle of the night announcing his Nobel Prize win, a dream come true for many.
01:00 - 01:30: Hinton's Neural Network Concept and AI Predictions The chapter discusses Hinton's contributions to neural networks and his predictions about artificial intelligence. It highlights his 1986 proposal to use neural networks to predict the next word in a sequence, an idea stemming from his desire to model the brain. Although he originally dreamt of understanding the brain, his work on neural networks has significantly impacted the world, leading to notable advances in AI. The narrative also touches on Hinton winning the Nobel Prize in Physics for these ideas, despite never directly uncovering how the brain works.
01:30 - 02:00: Concerns about AI's Rapid Advancement The chapter discusses concerns about the rapid advancement of AI technologies, particularly large language models. The conversation highlights how quickly these advancements have occurred, much faster than anticipated 40 years ago. There is optimism that such technologies could transform various sectors, including education and medicine, and even solve climate change. However, the focus is largely on the unforeseen rapid progress.
02:00 - 02:30: Potential Dangers and Risks of AI In this chapter, the speaker expresses concerns about the potential dangers and risks associated with artificial intelligence (AI). He uses the metaphor of a 'cute tiger cub' to illustrate the idea that AI, while seemingly harmless now, could become a threat in the future if not properly controlled. The speaker expresses relief about being 77 years old, suggesting that he might not live to see the full extent of AI's impact. He also predicts that AI will enable authoritarian regimes to become more oppressive.
03:00 - 04:00: Hinton's Contrarian Nature and Family Influence The chapter delves into Hinton's contrarian nature and how his family shaped his views on AI. Hinton expresses concerns about the potential risks of artificial intelligence, estimating a 10 to 20% chance of AI taking over from humans. He believes the general public hasn't grasped the imminent changes AI could bring, and he questions whether AI can be designed so that it never wants to seize control. He also splits his money across three banks as a precaution against AI-enabled hacking.
04:00 - 05:30: Hinton's Interest in Mechanics and Work Ethic The chapter presents how various tech leaders view the potential dangers of AI. Hinton stresses the importance of ensuring that AI remains benevolent and under human control. His perspective is echoed by figures such as Google CEO Sundar Pichai, who warns that AI can be very harmful if deployed wrongly, and Elon Musk, who calls for regulation to avert civilizational risks. The chapter also includes Sam Altman's earlier warnings about AI's dangers, made before he became OpenAI's CEO.
05:30 - 06:30: His Influence on Protégé Ilya Sutskever This chapter discusses Hinton's influence on his protégé, Ilya Sutskever, and the risks associated with companies' race to develop AI technology. It highlights concerns that these companies, in competing with each other and with China, are endangering humanity by seeking fewer regulations on AI in pursuit of short-term profits. It also touches on Hinton's history of challenging established norms.
06:30 - 08:00: Criticism of AI Companies and Call for Regulation The chapter discusses Geoffrey Hinton's journey and his pivotal role in AI development. It highlights his decision to move to Canada because American AI funding required partnering with the Defense Department. The chapter showcases Hinton's perseverance and contrarian mindset, particularly when neural networks were dismissed as impractical. His determination and belief in his work, influenced by his family, drove him to keep working on neural networks for decades despite skepticism from others.
08:00 - 08:30: Final Thoughts on AI's Future and Safety The chapter discusses Hinton's notable ancestry, drawing connections to accomplished forebears such as George Boole and George Everest. This lineage is highlighted to underscore the inheritance of a 'curious mechanic's mind,' suggesting a natural predisposition toward innovation and exploration that is evident in his work and contributions to AI.
AI pioneer Geoffrey Hinton says world is not prepared for what's coming Transcription
00:00 - 00:30 [Music] Last December, Geoffrey Hinton was awarded the Nobel Prize for his pioneering work in machine learning, a major turning point on the road to artificial intelligence. Brook Silva-Braga introduced us to this leading figure in AI back in 2023 and recently went back to visit him. Good morning, Brook. Good morning. When we first met Hinton, the world had just been introduced to OpenAI's ChatGPT, triggering a kind of AI arms race. Hundreds of billions have been spent on
00:30 - 01:00 AI in just the last two years. Hinton entered the field decades before it was cool and, now retired from Google, has a unique, independent perspective on how he got here and where we're headed. This is a Nobel Prize. Last year, Geoffrey Hinton, for most of his life an outcast professor, was awoken by a call in the middle of the night: he was getting the Nobel Prize. People dream of winning
01:00 - 01:30 these things. And when you do win it, does it feel like you thought it might? I never dreamt about winning one for physics, so I don't know. I dreamt about winning one for figuring out how the brain works. Yeah. I didn't figure out how the brain works, but I won one anyway. That's because Hinton's attempt to model the brain instead helped change the world. In 1986, he proposed using a neural network to predict the next word in a sequence, the
01:30 - 02:00 foundational concept that today's large language models ("it's an expert at everything") have built upon. You believed then that we would get here? Yes, but not this soon. Because that was 40 years ago. Yeah, I didn't think we'd get here in only 40 years. Even 10 years ago, I didn't believe we'd get here. Yeah. It happened fast. Yeah. That speed, Hinton says, means education and medicine will soon be transformed. Climate change could be solved. But mostly, the rapid progress really
02:00 - 02:30 worries him. The best way to understand it emotionally is we're like somebody who has this really cute tiger cub. It's just such a cute tiger cub. Unless you can be very sure that it's not going to want to kill you when it's grown up, you should worry. I'm kind of glad I'm 77. Hinton predicts AI will make authoritarians more oppressive
02:30 - 03:00 and hackers more effective. He now spreads his money across three banks. The exact odds of an AI apocalypse are unknowable, he says, but he hazards this guess: a 10 to 20% risk AI will take over from humans. People haven't got it yet. People haven't understood what's coming. I don't think there's a way of stopping it taking control if it wants to. The issue is, can we design it in such a
03:00 - 03:30 way that it never wants to take control, that it's always benevolent. Those concerns have long been shared by other AI leaders. Google CEO Sundar Pichai: It can be very harmful if deployed wrongly. xAI's Elon Musk, who continues to call for regulation: It has the potential of civilizational destruction. Sam Altman, seen here before he became OpenAI's CEO: I think AI will probably, like, most
03:30 - 04:00 likely sort of lead to the end of the world. But now, as these companies race each other and compete with China, Hinton worries they're foolishly, selfishly putting all of humanity at risk. If you look at what the big companies are doing right now, they're lobbying to get less AI regulation. There's hardly any regulation as it is, but they want less, because they want short-term profits. Taking a stand against the establishment has been the hallmark of
04:00 - 04:30 Hinton's life. When American AI funding required partnering with the Defense Department, he moved to Canada. When neural networks were laughed at as unworkable, he worked on them for a few decades more. Is that a certain thing in a person? Yeah. You have to be contrarian. Yeah. You have to have a deep belief that everybody else could be doing things wrong and you could figure out how to do them right. Any idea where that came from? My family, partly. My father was like that. That's
04:30 - 05:00 him there. Hinton's legendary family tree includes not just his father, the prominent entomologist, but further back, George Boole, whose algebra innovations paved the way for computing, and George Everest, the surveyor who found the height of the world's tallest peak and then had it named after him. So, it hit there. Hinton's inheritance was a curious mechanic's mind. Were you always
05:00 - 05:30 interested in this kind of stuff, the way things work and how to fix them? Absolutely. I loved it. When one of our cameras fell, damaging a lens filter, Hinton wanted to fix it. But this kind of tinkering, was this desire important to your work, or is it just a hobby? No, this is a similar thing. When I would make neural net models on the computer, I would then tinker with them for a long time to find out how they behaved. A lot of people didn't do much of that, but I loved tinkering with them. Okay. I remember with Ilya, we
05:30 - 06:00 used to watch it learning, and we would have bets for, like, 25 cents on who could predict the new score best. Ilya Sutskever, Hinton's most famous protégé, went on to be chief scientist at OpenAI. To just set up a large neural network, which is a large digital brain. In 2023, he was part of the group that pushed out CEO Sam Altman, reportedly because they didn't trust that Altman was prioritizing safety. I was quite proud of him for
06:00 - 06:30 firing Sam Altman, even though it was very naive. Naive, Hinton says, because OpenAI employees were about to get millions of dollars that would be jeopardized by Altman's departure. Altman returned, Sutskever left. Hinton criticizes his former colleagues at Google more reluctantly, but says they're falling short, too. Were you disappointed when Google went back on its promise not to support military uses of AI? Very disappointed. But it's part
06:30 - 07:00 of a pattern, Hinton says, adding Meta and xAI to the list of companies racing faster than they should. For example, the fraction of their computer time they spend on safety research should be a significant fraction, like a third. Right now, it's much, much less. Hinton, now on the AI sidelines, says government regulation is needed, but he doesn't expect it soon. I'm curious if, just in your normal day-to-day life, you
07:00 - 07:30 despair, you fear for the future and assume it won't be so good. I don't despair, but mainly because even I find it very hard to take it seriously. It's very hard to get your head around the fact that we're at this very, very special point in history where, in a relatively short time, everything might totally change, a change on a scale we've never seen before. It's hard
07:30 - 08:00 to absorb that emotionally. We asked the AI labs mentioned in the piece how much of their compute is used for safety research. None of them gave us a specific number, but all have said safety is important and that they support regulation in general, though they've mostly opposed the regulations that have come before lawmakers so far. Did he indicate which sector a breach might actually happen in? Well, like he says, he's already worried about banks. He thinks banks are going to be a target. He has spread his money across
08:00 - 08:30 different banks. Oh, wow. All right. It's a little scary, Brook, but great reporting. Thank you so much. Did you get the lens fixed on the camera? The lens is fixed. All right, Nobel Prize winner in action.