"AI Is Creating a New God—And We’re Not Ready" Mo Gawdat on AI, Tech Arms Race With China & UBI
Estimated read time: 1:20
Summary
In this episode featuring Mo Gawdat, former Chief Business Officer at Google X, an intense conversation unfolds about AI, the US-China technological arms race, and the impact of AI on human identity. As AI continues to evolve, it raises urgent concerns about freedom, morality, and economic disparity. The podcast delves into AI's potential to create both dystopian outcomes and eventual utopian solutions, while discussing the broader geopolitical tensions, particularly between the US and China. The discussion underscores the urgency of confronting these issues, emphasizing the need for a future-oriented ethical framework amid rapid technological advancement.
Highlights
Mo Gawdat warns of the quiet dangers of AI, beyond just job loss and killer robots. 🚧📡
AI is reshaping values, beliefs, and identities; it's more than just tech, it's about humanity. 🤖❤️
AI-driven economic shifts concentrate wealth and risk widespread poverty; Gawdat predicts trillionaires by the 2030s. 💸📈
The arms race in AI between the US and China mirrors a 'cold war' scenario. 🕵️♂️⚔️
AI's power increases at a staggering rate every few months, outpacing human adaptation. ⏩🧠
Key Takeaways
AI is a magical yet neutral genie that performs exactly what it's asked, for better or worse. 🎩✨
There's a looming short-term dystopia due to human greed and moral errors rather than AI itself. 🤔🌪️
The rise of AI powers economic shifts, moving wealth upwards while creating potential poverty for many. 💰📉
A 'cold war' between the US and China is underway, focusing on AI supremacy and economic power. 🥶🛠️
AI’s rapid advancement could lead to societal disruption faster than people anticipate. 🚀🌀
Overview
Mo Gawdat, a tech visionary and former Google X executive, dives into the transformative and turbulent juncture humanity faces as artificial intelligence advances. Contrary to the popular furor over job takeovers and rogue machines, Mo emphasizes the subtler threats underlying these transformations: changes to our values and societal structures, induced by AI's neutral yet potent nature.
The conversation stretches across geopolitical tensions, with a particular focus on the technological rivalry between the US and China. Mo elucidates how America’s historical dominance, challenged by China's rapid economic growth and AI developments, fuels a modern 'cold war'—escalating a global tech arms race with far-reaching implications.
Through these insights, Mo offers a paradigm shift: viewing AI not merely as a disruptor but as a possible savior if harnessed ethically and cooperatively. This perspective sets the impending short-term dystopia, driven by human moral failings, against the potential for an AI-assisted utopia, and urges wise, strategic foresight in global policymaking.
Chapters
00:00 - 03:30: Introduction and Themes The chapter explores the profound changes happening globally due to artificial intelligence and shifting geopolitical dynamics, likening the current period to being in the 'fog of war.' It highlights misconceptions about AI-related threats, such as job loss and autonomous weapons, suggesting the true dangers are subtle alterations to human identity. The tension between America and China is characterized as a cold war, emphasizing the drastic social impacts of AI beyond employment displacement.
03:30 - 07:00: AI's Impact on Identity and Culture The chapter explores the profound impact that artificial intelligence is having on personal identity and cultural values. Through a conversation with Mo Gawdat, the former Chief Business Officer at Google X, the discussion reveals how AI is altering what society believes and values, and how people connect with each other. Gawdat warns against the dominant American view of AI, highlighting a lack of awareness about China's significant role and advancements in AI technology. The chapter underscores the ongoing transformative power of AI and its implications for the future.
07:00 - 10:00: Existential Risks and Human Morality The chapter titled 'Existential Risks and Human Morality' discusses the potential impacts of artificial intelligence (AI) as it continues to evolve. Mo Gawdat compares AI to a magic genie, highlighting that while AI can grant wishes or achieve goals, it has no inherent moral polarity: it will do exactly what it is instructed to do, with no sense of good or evil. The chapter counsels caution, drawing parallels to narratives where wishes granted by genies lead to unforeseen consequences, and emphasizes careful, mindful direction of AI systems to avoid existential risks.
10:00 - 14:00: Dystopia and Utopia: Navigating the Transition The chapter titled 'Dystopia and Utopia: Navigating the Transition' discusses the varying probabilities of AI posing an existential risk to humanity, with estimates ranging from Elon Musk's 10-20% to as high as 50% from other experts. It uses the analogy of Russian roulette, where a single pull of the trigger carries a one-in-six (roughly 17%) chance, to emphasize that odds of 10-20% are frightening and should be taken seriously.
14:00 - 17:00: Redefining Freedom, Accountability, and Power The chapter discusses the pressing issues of human morality and the potential negative impacts of AI.
17:00 - 20:00: The Role of AI in Economics and Innovation The chapter discusses the short-term challenges and dystopian aspects associated with the integration of AI into economics and innovation. The author is convinced that a short-term dystopia is inevitable but believes its impact can be mitigated and its duration reduced. The narrative emphasizes the need to strategically address these challenges as we move towards a more favorable future state.
20:00 - 25:00: Wealth Concentration and Universal Basic Income (UBI) The chapter discusses the theme of wealth concentration and Universal Basic Income (UBI) in the context of rising artificial intelligence (AI). It frames AI's potential dystopian effects not as inherently caused by the technology, but by human morality. The speaker introduces 'FACE RIPS' as a mnemonic for the specific areas where AI might contribute to dystopian outcomes, beginning with the concept of 'freedom.'
25:00 - 33:00: Rate of Change: Challenges in Perception and Adaptation In the chapter titled "Rate of Change: Challenges in Perception and Adaptation," the speaker walks through the components of his 'FACE RIPS' acronym: Freedom, Accountability, Connectedness, Economics, Reality, Innovation (and intelligence itself), and Power. Each element represents a redefinition underway as AI reshapes perception and broader societal structures. Freedom is being redefined; Accountability refers to personal and societal responsibility; Connectedness underscores the importance of human relationships and networks; Economics covers the financial systems influencing freedom; Reality encompasses our perception and understanding of the world; Innovation highlights the role of new ideas and intelligence in shaping the future; and Power is seen as the most critical redefinition of all.
33:00 - 48:00: Geopolitical Context: America, China, and Global Order The chapter explores the interactions between the United States and China within the context of global order. By understanding the 'FACE RIPS' elements in pairs, such as intelligence and innovation alongside economics, a more comprehensive picture of geopolitical dynamics emerges. The discussion touches on how these elements intersect with economics and the implications of artificial intelligence, considering the evolving definition of AGI (Artificial General Intelligence).
48:00 - 58:00: The Role of AI in the Future of Warfare The chapter discusses the imminent arrival of Artificial General Intelligence (AGI) and its implications for the future of warfare. The conversation reflects on the point at which humanity must acknowledge the superior intelligence of AI. Once AGI emerges, it's anticipated that the most challenging tasks in warfare will be delegated to AI systems, as they will surpass human intelligence.
58:00 - 69:00: Consumer Power and Economic Implications The chapter discusses major shifts in economics due to technological advancements, focusing on the concentration of wealth among investors and platform owners. It likens this change to historical patterns of wealth accumulation, comparing it to the best hunter in ancient societies who accumulated more resources.
69:00 - 78:00: Global Power Dynamics and Cooperation The chapter explores the evolution of wealth and power dynamics through different eras. It begins with the hunter-gatherer society, where the best hunter could sustain the tribe for a short period, earning status and multiple mates. Moving to agricultural societies, the focus shifts to the best farmer, who could sustain the tribe for a full season and thus became a landlord, gaining estates and significant wealth. This evolution of power is further illustrated by the industrial age, where the leading industrialists became millionaires in the 1900s. Through these examples, the chapter underscores how the capacity to generate surplus production has historically determined social stature and economic power, shaping global power dynamics and cooperation.
78:00 - 87:00: American Economic Challenges: Debt and Inflation The chapter titled 'American Economic Challenges: Debt and Inflation' explores the transformative role of automation in modern economies. It highlights the paradigm shift brought by technological advancements, particularly platform AI, which is likened to 'digital soil', akin to how soil empowers agriculture. The narrative suggests that those who control this 'digital soil' could dominate economically, echoing themes of wealth accumulation similar to billionaires in information technology.
87:00 - 104:00: AI as Salvation: Imagining an Abundant Future The chapter titled 'AI as Salvation: Imagining an Abundant Future' discusses the potential economic transformations brought by AI. It anticipates the emergence of unprecedented wealth, with the prediction that there will be trillionaires by the 2030s. However, the chapter warns of the risk that such wealth accumulation could lead to widespread poverty among the general population. The discussion includes commentary on Universal Basic Income (UBI), arguing that history has shown it to be ineffective, and suggesting that UBI might be accompanied by numerous conditions and authoritative demands.
104:00 - 109:00: Ethical Considerations in AI Development The chapter discusses the ethical considerations in AI development, highlighting the potential long-term utopian possibilities alongside interim dystopian challenges. It emphasizes the impact of AI on human purpose, engagement, value, and appreciation. The narrative suggests a dichotomy where some individuals might become significantly wealthier than others as a consequence of AI advancements.
109:00 - 121:00: Conclusion The conclusion highlights concerns about the economic disparity caused by the widening gap between the wealthy and the poor. It suggests a future where money may become irrelevant for some, while the majority of people suffer from poverty and dependency. There is a particular focus on the potential for job loss driven by technological advancement, and an interest in examining the mechanisms behind these changes more closely. The discussion points towards the importance of understanding how these dynamics will work and their implications for the future workforce.
"AI Is Creating a New God—And We’re Not Ready" Mo Gawdat on AI, Tech Arms Race With China & UBI Transcription
00:00 - 00:30 Due to AI and a changing global order, the world is in the middle of the greatest period of change ever. But because we're in the middle of it, it is nearly impossible for us to accurately see what's going on. We are in the fog of war. While the world panics over job loss and killer robots, the real dangers are creeping in quietly and changing us in ways most people do not even notice. America and China are locked in a cold war, and AI isn't just going to take people's jobs. For many, it will take their entire identity. It's already
00:30 - 01:00 shaping what we believe, how we connect, and even what we value. Today's guest is issuing a very strong warning. His name is Mo Gawdat, and he's the former Chief Business Officer at Google X, best-selling author of the AI book Scary Smart, and one of the only people who truly understands both how AI works and what it's doing to us. In this conversation, Mo exposes how the American perspective is blinding us to China's true might, how AI is already changing everything, and how we can learn
01:00 - 01:30 to navigate the rise of the machines. Well, do not look away. Here is Mo Gawdat. I think of AI like a magic genie that can grant all of our wishes. The problem is, the lesson of every magic genie story is: be careful what you wish for. What do you think we have to watch out for with AI? AI is a genie that has no polarity. It doesn't want to do good. It doesn't want to do evil. It wants to do exactly what we tell it to do. And you know,
01:30 - 02:00 there is a nonzero possibility, uh, you know, some people say 10 to 20%, that's Elon Musk's view; you know, Emad Mostaque says 50%, and so on, a possibility that we ever face an existential risk of AI. I mean, think about it. 10 to 20% is Russian roulette odds. This is how... Yo, you just gave me the chills. That's crazy. Yeah, you wouldn't stand in front of the barrel at 10 to 20%, right? Uh, but my issue
02:00 - 02:30 is that chronologically we wouldn't get there. My issue is that I think we have more urgent and, you know, quite crippling effects of human greed, human morality. Let's put it this way. I think the immediate negative impact of AI is going to be human morality using it for the wrong reason. So they're going to make the wrong wish, is my challenge, I think. And in my current
02:30 - 03:00 writing, in Alive, I basically try to explain that it is almost... I'm almost convinced that there is a short-term dystopia that's upon us on the way to utopia, and that unfortunately the short-term dystopia is not reversible. So we're going to have to struggle with a bit of it. But it can be reduced in intensity, and it can be shortened in time and duration. But it's only wise to start
03:00 - 03:30 preparing, and that 100% of the short-term dystopia is not the result of AI. It's the result of the morality of humanity in the age of the rise of the machines. All right, give me some specifics. What specifically are we going to point AI at that will become dystopian? So I call them FACE RIPS. It's just an acronym to try and remember. I don't say them in that order, but let's just quickly list them. F is freedom.
03:30 - 04:00 We're going to redefine freedom. A is accountability. C is human connection, or connectedness in general. E is economics. R is reality, and our perception of reality at large. I is the entire process of innovation, and intelligence itself, and where we fit within that. And P is the most critical of all of them, which is the redefinition of power. Right? And you
04:00 - 04:30 know, if you want to understand them reasonably well, they're better understood in pairs, right? So you can start with the easier ones, the I and the E if you want: the redefinition of intelligence and innovation, and how that impacts the redefinition of economics. I think we understand that. With, you know, AGI, it depends on how you define it. It really doesn't matter, because my AGI has already happened. AI
04:30 - 05:00 is definitely smarter than I am. So I'm done, right? I don't care what the rest of humanity defines it at; you know, that's their moment. My moment has come. So if we agree that AGI is happening in a year, this year, next year, in a few years, it doesn't really matter. Then, as you and I both know, and we've talked about several times, that means that the toughest jobs will be given to the smartest person, and the smartest person will be a
05:00 - 05:30 machine. Which basically will lead to two very significant shifts in our economics. One shift basically moves the wealth upwards. So there is going to be a massive concentration of wealth for those who invest in the right places and, most importantly, for those who own the platforms. I mean, it's not a secret, if you look at the history of humanity, that, you know, the best hunter in the hunter-
05:30 - 06:00 gatherer tribe could probably feed the tribe for a week longer than the second-best hunter. And in return, you know, he got the favor of more than one mate. That's the maximum wealth that he could create. But the best farmer could feed the tribe for a full season, if you want, and as a result became a landlord and had estates and, you know, wealth and so on. The best industrialist became a millionaire in the 1900s. The best
06:00 - 06:30 information technologist became a billionaire in the current era. And when you really think about it, the difference between them is automation. And automation, if you want: the automation of the hunter is a spear, but the automation of the farmer is the land, right? The soil. Okay? Most of the work is not done by the farmer; it's done by the soil. And when you look at the people who are currently building the platform AIs, they will own the soil, the digital soil, or the intelligence soil if you want. And so they will, you
06:30 - 07:00 know aggregate massive amounts of wealth. There will be a trillionaire before the 2030s for sure, right? Uh the problem is in that process there is almost full poverty for everyone else. uh you know you call we call it UBI but UBI really uh is not something that we've seen worked in history before and you UBI will become will come with demands uh will come with authority uh
07:00 - 07:30 will come with choices it it can be very utopian in it in the long term but it would be very dystopian until it's fully implemented if you want and even when it's implemented in the long term it would impact on human purpose human engagement value uh appreciation and so on. So, so think about it this way. You you're getting a dichotomy or you know uh um sort of um an arbitrage between some people becoming incredibly rich to the
07:30 - 08:00 point where money makes means not nothing at all and the majority becoming incredibly poor where they're basically uh you know obedient to be fed. And now is that because Mo you think that they're going to lose their jobs? Going back to the statement you made that people will get the hardest jobs. Okay. Yeah. Job loss is something that um maybe not now. It's very interesting to go through these different things, but I do want to really dive into mechanistically um how that's going to work. In fact, one thing I want to do before we keep
08:00 - 08:30 going, and this is something that I've largely distilled from you: it's what I call setting the table for what's about to happen. So again, to plant a flag, you and I share a belief, and this is one of the reasons I like talking to you so much, which is that, hey, this all ends in utopia, but we go through this brutal interim process that I don't think people understand how scary it's going to get. So, setting the table for that: the rate of change is the thing that I think people are just not paying enough attention to.
08:30 - 09:00 Everybody can wrap their heads around this thing getting smarter than me, but they're not understanding how fast this is happening. So, if I'm not mistaken, and I know I'm very close if this isn't exactly correct: AI doubles in power every 5.7 months. So, yeah, I calculated 5.9. So, yeah. Okay. I mean, it's like crazy town. So, in less than six months. So, it's going to double in power twice in a year. That is a rate of change that I think people are going to struggle with.
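To make that compounding concrete, here is a quick back-of-envelope sketch. The 5.7- and 5.9-month doubling figures are the speakers'; the arithmetic below is just an illustration of what a fixed doubling time implies, not something from the episode.

```python
# Back-of-envelope: what a fixed doubling time compounds to over a year.
# The 5.7- and 5.9-month doubling estimates come from the conversation;
# everything else here is plain arithmetic.

def growth_factor(doubling_months: float, horizon_months: float) -> float:
    """Total multiplier after horizon_months, doubling every doubling_months."""
    return 2 ** (horizon_months / doubling_months)

for d in (5.7, 5.9):
    print(f"doubling every {d} months -> "
          f"~{growth_factor(d, 12):.1f}x per year, "
          f"~{growth_factor(d, 36):.0f}x over 3 years")

# doubling every 5.7 months -> ~4.3x per year, ~80x over 3 years
# doubling every 5.9 months -> ~4.1x per year, ~69x over 3 years
```

At those rates, "doubles twice in a year" is, if anything, slightly conservative: two doublings is 4x, while a 5.7-month doubling time compounds to roughly 4.3x per year.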
09:00 - 09:30 Now, transitional moments always cause disruption. There is a certain rate of change that humans can deal with, but AI exceeds, at least from where I'm sitting, the rate of change that we can handle. And given that, things begin to spiral out of control. And you said something that I agree with, which is that AI will have the power of God. So, with all of that: one, do
09:30 - 10:00 you disagree with any of that? No, I don't. I just want to double down on it and say that it's at 5.7 or 5.9 months in the absence of new innovation. So you have to imagine that it could break free and go even faster. 100%. If quantum computing is solved, or if you find a completely new algorithm, or if AIs start to teach each other rather than wanting us to teach them, or if, you know, synthetic data becomes
10:00 - 10:30 much easier to attain, and so on and so forth. You know, there are so many... I mean, DeepSeek was just a blow for everyone. Like, a week before, I think, was Stargate, 500 billion dollars, and then DeepSeek comes out and says, we did it for, I don't know how much, like 33 times less than ChatGPT-4o or something like that. And it doesn't matter, because I heard your original analysis on DeepSeek, and that, you know, they were
10:30 - 11:00 cheating a little, and I agree. But, you know, they're cheating with the same resources that OpenAI had. OpenAI could have taught their model the same way. Actually, as a matter of fact, they would have been more qualified to teach their model the same way. And yet, they were continuously focused on more and more and more resources, more and more compute, $500 billion worth. And then suddenly you wake up and you go like, no, I don't need to do that. I can reinvent something in the learning model
11:00 - 11:30 and it will give me massive improvements. And of course, most people at the time would go like, oh, so Nvidia is going to go down the drain, and what will OpenAI do? They'll invest the 500 billion in the stuff that they now found. So suddenly you're doing it 33 times cheaper, but using, you know, 10 times more investment, and there you go, it's going to accelerate even more and more. And it's hitting from every side, Tom. Like, the algorithms are
11:30 - 12:00 improving; the AI itself, with its math abilities, with its programming abilities, is going to be the next developer. Most of the big CEOs will talk about how AI will be the best developer on the planet by the end of 2025. They don't talk about 2026, when the next AI beats that best developer on the planet. They can build stuff that we cannot even comprehend. And I think the pace of change... I
12:00 - 12:30 got exhausted trying to explain this to people, because you really have to be an insider of tech to understand the meaning of the exponential function. If you live in any other industry, you're much more in the linear, you know, trends. And this is not even exponential. This is not even double exponential. This is probably quadruple exponential in the absence of breakthroughs. It is just unbelievable. I've never lived through
12:30 - 13:00 something so fast ever. Okay. So, looking at that, and I don't know if you'd want to apply this directly to the letters or not, but I'm very curious: when you think about, okay, AI is moving at a rate that humans are not able to comprehend, and which obviously they're then not able to deal with, what do humans do in the face of that level of disruption? Is there anything in history that you look to to say, okay, this predicts how the human
13:00 - 13:30 part of this equation is going to react? So the only sad reality of humanity is that something has to break for us to react, right? You know, you and I and everyone who had a tiny bit of a brain could have told you in 1999 that a pandemic is possible, right? It's really not rocket science at all. We had SARS, we had swine flu, we had so many, you know... and then,
13:30 - 14:00 exactly 100 years after Spanish flu, 1920 to 2020, you get a few cases. Which I wrote about in Scary Smart, you know, the idea of not reacting. If we had reacted after 20 cases, there would have never been COVID. But we had to wait, right, until it hits us in the face, and then we go like, right. Whether conspiracy or not, whether COVID was manufactured or not, is irrelevant. The relevance is, we wait
14:00 - 14:30 until it hits us in the face, right? And so something is bound to hit us in the face. I hope it's the lighter side, right? But, you know, a massive hack of some kind of security, or... and I don't know how to say this without upsetting people. Some things have been hitting us in the face in the wars of 2024. There was so much killing done by machines, right? In today's budgets, there is
14:30 - 15:00 so much investment in autonomous weapons. And if you've ever searched on YouTube around defense conferences, the level of bragging from defense manufacturers... you know, they're bragging about, look, this is how I'm going to kill from now on. And they throw a little drone in the air that flies all the way to a test dummy and shoots it in the head, right? And I don't know when humanity wakes up. I honestly don't. I mean, in a very
15:00 - 15:30 interesting way, I think your question is probably the best question ever, which is: what do you do to prepare for this? And in Alive, I write a section which I actually feared people would be upset with, but it got a lot of, you know, of support. I called it a late-stage diagnosis, right? Which basically is an analogy between, you know, a doctor who finds that his patient is diagnosed with a late-stage malignant disease, and,
15:30 - 16:00 you know, what we're going through. And, you know, people normally ask me, how do you speak about this so calmly, and how do you continue to focus on trying to do the best that you can? It's because that's what the best doctors will do. They'll simply sit you down and say, "Look, we found this, right? But that doesn't count as a death sentence, right? This basically is to tell you that you need to wake up. You need to change your lifestyle, right? You need to take certain measures, and
16:00 - 16:30 there will be no problem, right?" And I think the challenge is that humanity is not taking those measures. You know, we're still entering... How do we get that diagnosis to people? To me, it seems self-evident that what's going to happen is people are going to start losing their jobs. They are going to squawk. They understand political machinations, so they're going to protest, they're going to make demands of the government. And the question that I have is, what
16:30 - 17:00 demands will they make, and how will we play that out? And so I'm curious, going to the last one, the P in all of this: power. That feels to me like the one to zoom in on. I don't know how you mean power, but I think there's going to be a great power struggle between humans. Yeah. What I call the new Puritan movement, and technologists, what some people call transhumanists. For some reason I hate that phrase, but that
17:00 - 17:30 feels like that's where the collision is going to happen. It's going to be born of people losing their jobs; it's going to play out as "tax the rich." And when you get into these hyper-populist moments where the economy is going south, whether GDP skyrockets or not through robotics, through AI, that won't matter if, at the individual level, people do not get meaning,
17:30 - 18:00 purpose, and dignity out of their work. And so that feels like the flash point. And that flash point feels like it's, I mean, 24 months away. It's not distant future. Exactly. Exactly. It's shocking, isn't it? But nobody's talking about it, right? I have good news to start, because we don't want to just, you know, talk about the doom side in the way you describe it. There is actually an interesting element that is rarely spoken about, right? All of the productivity gains mean nothing if there is no consumer
18:00 - 18:30 purchasing power to buy them, right? Because at the end of the day, if you take all of the wealth and concentrate it in the hands of, you know, very few, they'll buy Ferraris only. They're not going to buy Fiats, right? And accordingly, there is no business to create anything at all. So for AI to exist and do the work, someone has to have the purchasing power to buy this. So if you take the US economy, I don't remember last year, but it's regularly around 62%
18:30 - 19:00 consumption. It's not production that creates the GDP. And so if you take away the 62%, you take away the entire economy. And so you have to understand that the loss of jobs is going to have to be resolved somehow. Purpose and meaning and the rest, these are interesting philosophical topics we can talk about, but from an economic point of view, okay, you have to keep people alive, otherwise you have no reason to compete, right? You have no reason to create or produce.
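As a reference point for that claim, here is a minimal sketch of the standard expenditure identity for GDP. The ~62% consumption share is the speaker's figure (official US statistics typically put personal consumption closer to two-thirds of GDP); the remaining shares below are assumptions for illustration only.

```python
# Expenditure view of GDP: Y = C + I + G + (X - M).
# Only the ~62% consumption share comes from the conversation;
# the other shares are made-up placeholders that sum to 100.

def gdp(C: float, I: float, G: float, NX: float) -> float:
    """Consumption + investment + government spending + net exports."""
    return C + I + G + NX

Y = 100.0                    # normalize GDP to 100 units
C = 0.62 * Y                 # household consumption, the cited ~62%
I, G, NX = 20.0, 17.0, 1.0   # assumed split of the remaining ~38%
assert abs(gdp(C, I, G, NX) - Y) < 1e-9

# The speaker's point: demand, not just supply, sustains the economy.
# If mass job loss halved consumption, total demand would fall by 31 units:
print(f"GDP after a 50% consumption shock: {gdp(C * 0.5, I, G, NX):.0f} of {Y:.0f}")
```

The argument being made is that wiping out wages wipes out the consumption term, and no amount of AI-driven production can substitute for demand that no longer exists.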
19:00 - 19:30 So there is good news there. The issue here is that it doesn't take a lot of AI intelligence to describe that. A normal economist with a simple economics degree will tell you that you need the consumption side. Now, we know this, but nobody's doing anything about it. Okay? Nobody's doing anything about it. Not because it cannot be resolved, but because nobody assumes the responsibility that this is their bit. Okay. The
19:30 - 20:00 payment of humans through work has been outsourced to the capitalist, not the government. And so the government does not understand what it takes to pay humans, because that's communism, let's not do that, right? Or socialism, at its best assumption, right? And so the idea here is that, at a very deep ideological level, we are stuck. On one side, the people that need to jump in and engage are not even
20:00 - 20:30 allowing themselves to risk their positions, because of what happens if they mention that they're in the wrong camp. And the capitalists are doing what they know very well, which is: take the money, make us more profitable. "The economy will find a way"? Not at that scale of transition. Okay? The economy will need intervention, and it needs to find that now. So the bad news is, unfortunately, we're going to hit a very rough patch before we start fixing it. But the good news is that we saw with
20:30 - 21:00 furloughs and government incentives and so on during COVID that governments are possibly capable of doing this, with a lot of money printing, which destroys the economy for a while, but eventually we'll figure something out, right? The struggle is the struggle with power that you mentioned. So we have a diversion of power that's never happened in history before. Power normally aggregated to the top. Okay. And what you're going to see with artificial
21:00 - 21:30 intelligence, or intelligence at large... I mean, think about it this way. You and I, before we recorded, were talking about how we're using AI to become more intelligent. The way I look at it is: now I go to my AI and I borrow 40 to 50 IQ points, right? And you and I know that if you've ever worked with someone who's 40 IQ points more than you, that is staggering. That is an incredible amount of compute, right? And so I can now borrow this at 8 a.m. every morning.
21:30 - 22:00 And it's just incredible, huh? So those who borrow intelligence will become more powerful. That's the reality. The problem is, as I said, those who own the platforms, those who own the edge in the cold war, in the arms race to AI, will at some point aggregate so much power that it actually becomes
22:00 - 22:30 uninteresting to give it to the rest of us. Okay? And that includes you and I, by the way. So, you know, middle class, upper top class, lower top class, whatever, it doesn't matter. So you have to imagine, and I use a freakish example: the first person that completely augments their brain with AI will immediately make sure that nobody else gets this. Because, you know, if I
22:30 - 23:00 promote you to the position of god, why would you make other gods? It's as simple as that. Okay. And of course you can take that at a nation level, at a company level, at a, you know, a team level, whatever. So this is where the cold war is taking us. Massive concentration of power on one side, right? With a democratization of power on the other. Because still, you and I... When you say cold war, who's the cold war with?
23:00 - 23:30 Oh man. You know, most of my best friends are American, but can I request permission to speak freely, please? Yeah. You know when you're in school and you're 11, and then one kid becomes taller than the others, and then he bullies everyone, right? And then when you're 16, most of us are taller
23:30 - 24:00 than him, but we just don't want to really disappoint him. You know, so we sort of tell him to stop bullying, but he continues to bully everyone. Yeah. So somehow 1945 until today was a world order that, unfortunately, I don't believe will continue. Okay. And America's way of trying to say, I will force everyone into
24:00 - 24:30 submission... The other kid is really, really tall now. Like, seriously. Okay. And again, most of the Western media hides those facts, but from a purchasing power parity point of view, they are a much bigger GDP than you are. From a, you know, a unity point of view, the world everywhere
24:30 - 25:00 doesn't want, you know, a single power to rule anymore. Okay? Especially when the single power, for the last few years, has completely abused its power. For the last many, many, many years, really. But it became a bit like: you're not the tallest one, and you're really a bully. You're really an annoying bully. So seriously, let's slow it down, right? And you can see examples of that everywhere. The Canadian response to the
25:00 - 25:30 tariffs. The, you know... again, I don't know if that's shared in the US or not, but the Chinese response and the Russian response to the tariffs are actually quite interesting. Huh. You know, US politics believes that they can twist the arm of China. On the same day, half a billion dollars worth of American beef was returned to America from Chinese ports. The difference is, the Chinese didn't say it's tariffs. They simply said it does
25:30 - 26:00 not meet health and safety standards. Right? Very, very hidden, and very, you know... and half a billion dollars for Texas is a reasonable punch. Okay. And when you really think about it, I had this conversation with my wonderful friend Peter Diamandis, and we were talking about the idea... actually, no, it was with Scott Galloway. Scott, you know how
26:00 - 26:30 Scott is, you know, very, very pro doing, you know, what needs to be done. And I unfortunately believe that there is no logical way on the game board, in my mind, for someone to win intelligence supremacy. So you have America trying to accelerate a cold war where they want to have the biggest bomb, an AI bomb in that case. You know, it seems that the world is not responding the same way. Actually, again,
26:30 - 27:00 if you look at it internationally, the Chinese are really not trying to, but every now and then they sort of say, look, we can if we want to. But the problem here is this. This is not an AI-only war. Okay? This is an AI war where one bully is trying to get everyone into submission, in a world with major nuclear powers. Okay? This is the shittiest idea ever. Okay?
27:00 - 27:30 And the problem is very straightforward. The problem is, if you try to force someone into submission, as soon as they feel that they're about to submit, this is going to escalate out of proportion. Now, there have been multiple examples in our world where we cooperated internationally for the greater good. CERN is a great example of that, right? The space station is a great example of that. And we can do that. And believe it or
27:30 - 28:00 not, all it takes is for one bully to say, "Hey guys, can we play now? Can we just... because this is incredible abundance, and we're all threatened by cybercrime, or, you know, I call it ACI, artificial criminal intelligence, that's right around the corner. Can we just please play along now? All of us. Like, let's all get in a room. Let's develop AI for the benefit of everyone." Uh,
28:00 - 28:30 everyone is going to make a lot of money in the first you know 5 years but nobody's going to need money after 5 years anyway because everything will be available for free. Can we please play along? And the bully doesn't want to do that which upsets the rest of us. We'll get back to the show in a moment, but first let's talk about the most valuable resource in business, time. It is the one asset that none of us can buy any more of. And the most successful businesses create more time by creating
28:30 - 29:00 processes that eliminate yesterday's problems once you surface them, so that you can actually focus on tomorrow's opportunities. Over 41,000 businesses do exactly this with NetSuite by Oracle, the number one cloud ERP, bringing accounting, financial management, inventory, and HR into one fluid platform. With one unified system, you get one source of truth. NetSuite helps you respond to immediate challenges while positioning you to seize your
29:00 - 29:30 biggest opportunities. Speaking of opportunities, download the CFO's guide to AI and machine learning at netsuite.com/theory. The guide is free to you at netsuite.com/theory. Again, netsuite.com/theory. And now, let's get back to the show. The fact that this is all happening as Thucydides's Trap is set... it's one of those things that makes you go, "Wow, we really are living in a simulation, and this is maximally
29:30 - 30:00 interesting, I guess, but absolutely terrifying." That's a really clear way of expressing what you were talking about at the beginning, which is that my worry isn't AI. My worry is, you said greed, I'm going to broaden it out to the human ego and all of its complexities. And for people that have never heard of Thucydides's Trap, it goes like this. This is literally from ancient Greece. And they recognized that when you have one great power
30:00 - 30:30 that is declining and you have another great power that is rising, as will happen on a never-ending cycle. The declining power absolutely refuses to relent and acknowledge the rising power as their peer or, god forbid, as somebody that has surpassed them. And the rising power will simply not accept not being recognized for the power that they have become. And so this
30:30 - 31:00 setup becomes really predictable historically, because what you have is this impulse toward protectionism on the part of the declining power. That's like, whoa, hold on a second, like, we did this globalist thing, it's made our enemy more powerful. We want to now try to cut them off. We want to retain our power. They start bullying. They will inevitably be up to their eyeballs in debt, which, read Ray Dalio, like, he just pegs this as, hey, you can just watch the debt and you know how this is going to play out. And so here we
31:00 - 31:30 are. But this time on the cusp of building a superintelligence. And every time that I go through it... because I think anybody watching, their impulse is going to be to say, "Hey, whoa, if we're talking about a rate of change here that is just insanity, we need to pump the brakes. Like, why aren't we doing that?" And then you remember that you have two great powers staring at each other. Both recognize AI as the most tremendous weapon since nuclear. And so they are
31:30 - 32:00 each stuck in the prisoner's dilemma. If I don't do it, I know they're going to do it. And so it is an existential need to be the one to develop this first. And so there are no brakes. Correct. Correct. Yeah. I mean, it is. And, you know, I'm too small to show this to the world, and it frustrates the hell out of me. Okay? But if you're an applied mathematician, there's no
32:00 - 32:30 game, and there's no quadrant on this game board, that works. It is. And I'm not, you know, fear-mongering here. I'm basically telling every citizen everywhere in the world to wake up, to go to your congressman or whoever, okay, and just tell them: we don't want our lives to be toyed with this way. And you know, I spoke about accountability in FACE RIPS. The challenge, Tom, is that my life is being decided by Sam Altman. I never elected Sam Altman.
32:30 - 33:00 Okay. This is not right. And if this goes to [ __ ], nobody's going to stop and say, "Hey, Mr. Altman, come here and tell us what you're doing." It's too late. Now, the more interesting side, by the way, because I really, if you don't mind, I will go back to AI, but you mentioned debt. Okay, if you don't mind me saying: the challenge of America is not just debt. It is massive. It's
33:00 - 33:30 like the biggest challenge on earth, okay? But what's becoming bigger is inflation. So if you look at your modern history since Nixon, what happened is, if you look at everything in America that was made in America or sold in America, so services, housing, whatever, it's rising in price. Okay. Everything else that you imported was going down, literally. It first went down before it stabilized. So you were basically exporting the inflation of the
33:30 - 34:00 last 50 years, okay, to the rest of the world. Huh. And the way you did that is: we sold you stuff, you gave us dollars, worthless, printed. Okay. So we put them back in your market. And after you sanctioned the Russians, everyone that I know who's a multi-billionaire said, "Ooh, so if my government upsets their government, my money goes? No, I want to withdraw my US
34:00 - 34:30 dollar treasury assets." Okay? And so you can see Japan is 25% down. China, I don't remember the exact number, but hundreds of billions down. Okay? Everyone is doing what? They're shipping you back your dollars. Okay? And so basically what you're ending up with is an economy with so many more dollars and a limited number of goods to buy. Everything will go through the roof. And then, brilliantly, you decide on top of that, let's add tariffs, so that, you know,
34:30 - 35:00 we get goods to be 25% more expensive immediately, and give our American manufacturers some slack so that they too raise their prices, by 24%. Okay. And who's paying for all of this? American citizens. So in my mind, believe it or not, there are two wars if you live in America. One war is the cold war of intelligence supremacy, right? And the other war is... I truly and honestly fear
35:00 - 35:30 instability in America, right? I truly and honestly don't understand how all of my friends, some of which are millionaires, okay, will survive this. Because unless you have all of your money in gold, maybe, I don't know, even gold is not safe, right? What will you do? What can you do? You know, from a liquidity point of view, that asset called the US dollar is not going to buy you the same
35:30 - 36:00 things as it did last year. And you have to walk the streets of New York. Oh my god. Like, I haven't been to New York for a while, and then I went a month ago. This is, I'm sorry, this is a dump. Compared to Shanghai, this is Delhi. It is really deteriorating infrastructure-wise. You know, California is deteriorating infrastructure-wise. You know, some parts of the US are holding it together,
36:00 - 36:30 but everywhere is just... and I don't know how people are sustaining all of this. So there is an opportunity, believe it or not, and I say that openly in Alive: AI is not the existential risk of humanity; it is our salvation. It can solve all of those problems. All we need is for the top guys to say, all right? You know what? An open letter to suspend AI for six months doesn't
36:30 - 37:00 work. Okay? Let's just pull all of our efforts together. Okay? Do a CERN kind of committee, develop AI for everyone, and just basically make everything for free, right? And whoever is rich today, we'll give you the opportunity to buy your cars in orange. So, hey, ego satisfied. You're the only ones that get orange cars; all of the rest of us get green cars. Okay? And it's solved, honestly. Because, by the
37:00 - 37:30 way, if you solve energy using intelligence, making cars becomes free. If you create robotic workforces using intelligence, making garments becomes free. Literally free, like, this becomes two cents. And how are we not betting on this abundance? Because we're constantly stuck
37:30 - 38:00 in that scarcity mindset of: if we don't win, they win. I think what's going to happen is nobody wins; we all lose. Yeah, I think that is a bitter pill that you and I have both come smack-bang into. Before we get back to AI, I want to walk through the way that I see this moment in debt and all of that. So one, if you can sharpen my thinking, I'm here for it. But I think
38:00 - 38:30 there are two really important things that the world should be paying attention to right now. One is obviously AI. I don't think it'll be winner-take-all, even if for no other reason than, and I don't know that my read of this is exactly correct, but given the things that you cited: scientifically, we tend to share insights. Even, like, if you take the US nuclear program, we leaked that information to Russia, maybe just because they were being paid, or maybe because
38:30 - 39:00 they knew that one country having this was a very bad idea. You see very similar things with CERN, a lot of cooperation, where people realize, hey, if we're going to solve the fundamental nature of physics, this is better for the entire scientific community to have. You see the same thing happening in AI, where they're sharing all these breakthroughs as fast as they can. Look, as an American, I'm admittedly suspicious of China, but even DeepSeek:
39:00 - 39:30 they published the paper, it's open source. Like, all that information, all of those insights to make things more efficient, are getting out there. I choose to read that as: the computer nerds that are drawn to this are acting more like the science side of computer science than they are just the computer side. And so there's a sense of sharing all of these breakthroughs and all of these insights. You mentioned Emad earlier. Emad is just on an absolute crusade to make sure that AI is open
39:30 - 40:00 source, so that people can have access to what could be, on the bright side, just incredible intelligence. Like you said, we can all go take advantage of 40, 50 points of IQ, and that will obviously grow to be 400, 5,000 points of IQ. But also as a weapon, and so making sure that everybody is at least in mutually-assured-destruction territory is better than one having it. Okay. So that's the first thing. The second thing is the cold war between the US and
40:00 - 40:30 China. And I'm going to paint maybe an even darker picture than you, if that's possible. Be darker. I was very grumpy. You can't go darker than that. I think this is just objectively real. So, okay, we both agree that what you're up against is human nature. Forget about AI for a second. Just, what are humans like? We've already talked about Thucydides's Trap. You have two powers that are on a collision course that history tells you there's really no way out of. I'll plant that, and I think,
40:30 - 41:00 as we both agree, AI is the potential way where we all grow our way out of this debt trap. But okay, focusing on the cold war between the US and China. The entire modern world is predicated on chips that are coming out of a small island off the coast of China known as Taiwan. And you've got China, that has been very clear: we are going to reintegrate with Taiwan. You have
41:00 - 41:30 China rising as a regional superpower, where they're going to have their sphere of influence. Obviously globally, economically, they matter tremendously, and they've been building allies all around the world as we are now trying to alienate them as fast as we can. But it comes down to that. Now, I think, despite what I call Trump's hokey-pokey tariffs, which, from where I'm sitting: if you listen to Trump, you're going to drive yourself crazy. If you listen to Scott Bessent
41:30 - 42:00 and Howard Lutnick, there's at least internal logic. And so I'll walk you through my read on what they're trying to do, and ask everybody to ignore the chaos right now that Trump is creating. So if I'm Scott Bessent, Secretary of the Treasury, or Howard Lutnick, Secretary of Commerce, I'm two of the greatest capital allocators of all time. We are two of the best people at reading the global markets and profiting from it. And I'm looking at this cold war that I just explained. I
42:00 - 42:30 understand Taiwan and how much that's going to matter. I understand that I've been able to export inflation across the world for a very long time. I understand that people are now responding in a way that's negative to us. I understand that we have insane debts and we're going to have to start bringing those down. I'm looking backwards. These guys know Ray Dalio intimately. So I guarantee they've read his books on debt and the cycle that it moves in. And so they're
42:30 - 43:00 going, "Okay, hold on a second. This is how empires end." America may not have been an official empire, but obviously, with military bases and all that, we act like an empire, with the same expense structure as an empire. And so we are now in a position where we're going to have to deal with that debt. And looking at the way that they are moving, again, I'm asking people to set aside the rhetoric of Trump, the sort of chaos of Trump, and look at the threading
43:00 - 43:30 of the needle that they are trying to do. And I think it goes like this. We have to find a reduction, and these are literal words from Howard Lutnick: we have to find a trillion dollars of fraud, waste, and abuse, because the US government spends two trillion more than it takes in in taxes. We have to find a trillion dollars in waste, fraud, and abuse: cue DOGE. And we have to make a trillion dollars in newfound revenue: cue tariffs, cue the Trump gold card, and a whole bunch of other things. Okay. You've already pointed out the danger of
43:30 - 44:00 the tariffs, and we're seeing the second- and third-order consequences of what Trump is doing, from a theoretical negotiating-tactic standpoint of: create chaos, ask for the moon, be willing to settle for something more reasonable. That puts us in a game of chicken that I'm going to set aside for a second and say: if I am correct that we are in a cold war with China, that we are racing towards Thucydides's Trap, meaning that you have a high risk of kinetic war between the US and China, you cannot, you
44:00 - 44:30 just cannot, even just morally, you cannot be in a position where your number one adversary controls whatever ridiculous percentage of your manufacturing base. And so you have to find a way to onshore some of that manufacturing. And if you look back at World War II, the story that America tells itself about what America is is: oh, Japan [ __ ] with us, we
44:30 - 45:00 turned our manufacturing might on, and we win World War II. That's the mythos in the American mind. And I think there are a lot of people with that latent story running in their brain that think we'll be able to do that again. And they don't understand: we don't have a manufacturing base. We make technology. And seeing what I'll call the phantom investments in the US, because I think that they're all waiting to see what happens at the midterms. But you've got all these
45:00 - 45:30 phantom investments in the US: we're going to bring all this manufacturing back. You've got TSMC, I always forget their call sign, but I think that's it, the chipmaker in Taiwan, saying, "Hey, we're going to make this huge investment here in the US." Which, if I'm them, makes sense, because if I don't want to be reintegrated with China, I need to have that escape valve of being able to build in the US. But setting that aside, that becomes the
45:30 - 46:00 milieu of things that are playing out right now. You have to bring some manufacturing back. So, if you are correct, and I think you are, the future of warfare, for better or worse, is drones. Drone manufacturing right now is 85, 90% China, just full stop. And I'm talking the whole... all the parts, everything. Even if you're trying to make them here right now, you're beholden to a supply chain that's going through China. So they can choke that off immediately. And they are anything but stupid. And so,
46:00 - 46:30 if we're moving towards a kinetic war, they just turn that switch, just like they sent the beef back. They just go, "Nope, no more drone parts for you." So I agree that this is a super precarious moment, and boy, do I wish that everybody could just say, "Can't we all get along?" But we won't. That I'll just take off the table. That's not going to happen. And given that that's not going to happen, how else do you play it?
46:30 - 47:00 So I I have to first agree with all with every every part of what you said honestly but I'll I'll I'll try to give a a slightly different twist on a few of them. Okay. Uh one one of them is um manufacturing because I actually agree 100%. But if you and I going back to AI and robotics just hold on for 3 years. the entire edge that China had of cheap labor, okay, uh which now became
47:00 - 47:30 large manufacturing capabilities, uh moves back everywhere in the world because you can literally hire robots uh put them in rooms day and night, get them to manufacture whatever it is that you want. which basically means that cost of energy and cost of shipping become uh a deterrence for you to uh you know to move goods around the world. Okay. So basically it is a no-brainer
47:30 - 48:00 that when we get to the point where we take out the capitalist arbitrage which was the entire idea of a capitalist is how can I get labor or you know manpower to do the work for less than what I can sell it for. right now. Interestingly, as we take humans out of the workforce, it equalizes across the world. It's 5 years away. Okay? Could be sooner, by the way, if we start with interesting industries. The the second is, and I say that with a ton of respect, is when at war, war does
48:00 - 48:30 not have to be aggressive. Okay? So, so the idea here of pissing off Taiwan uh sorry, pissing off China around Taiwan makes makes China who also depends on Taiwan for for the chips of everything that they make, right? Uh basically think the same way. So, so if America has foothold over Taiwan, we
48:30 - 49:00 China are afraid. So, you're escalating the fear. Okay. The opposite is true. The opposite is to say again like we said with CERN you know can we agree that Taiwan is just going to be uh continuing to support everyone right and and I think that's a conversation that is very difficult to have but if it is the the switch between humanity's existence and continuation and not it will get resolved. The third which I think is really where the core issue is
49:00 - 49:30 is that when times get tough, we tend to do more of what we know how to do best. Okay? Which is normally what got things to be tough in the first place, right? So when America competes with China on artificial intelligence, for example, they sort of say, okay, only H800s, no H100s in Nvidia chips. We're going to sanction you from this. We're going to make it illegal for people to invest in China. We're going to do this, we're going to
49:30 - 50:00 do that. No more Chinese students can come and study in America, and so on. Okay. And those tactics could work if this were the China of 70 years ago, starving to death. When you take those tactics against this China, they immediately say, "Okay, how much does it cost for us to create our own fabs and create our own microchips? What
50:00 - 50:30 do we need to change about our students so that they become the best in the world?" 42% of all AI scientists in America are Chinese. Who's being hurt by that fight? It's America. Okay. And it is interesting that the American people are not fully informed of this, that those bully strategies are now met with the world saying, "Okay, you know what? If you're going to sanction Russia by taking $300 billion from the Russian
50:30 - 51:00 oligarchs, then by definition every other oligarch in the world is going to de-dollarize." Now, instead of banging the table and shouting at everyone and being even more of a bully, you might as well say, okay, guys, I understand that upset you. Can we talk? Right? Because the one that's being hurt by this is the American people. Okay? The American policy somehow is
51:00 - 51:30 running in a way that basically says, do more of what you know how to do best. Now, the more interesting part of this, Tom, and I really urge you to think about this, is that China, historically, has never in all of history invaded outside its borders. Ever. Okay, there was one case in Vietnam, which was again instigated by the US, right? And it didn't last for long. Now,
51:30 - 52:00 the other side of this is that if you look at the map of the world today, with America having 180-plus military bases across the world, China has one, which protects shipping through the Red Sea. Okay? They are explicitly giving the world signals that we don't want to dominate the world like an empire. We want to become prominent for the world, mainly
52:00 - 52:30 economically, so that we can feed our 1.4 billion people. Okay. And I may be wrong, but there has not been a sign of aggression issued by China in your lifetime or mine. There hasn't been one. Okay. So what are we reacting to? We're either reacting to manufactured signs so that we can continue to have our forever war, or maybe we're exaggerating and hurting ourselves in the process. And I think this is where
52:30 - 53:00 the conversation needs to happen. Now, this could mean that millions of people die in some place like Vietnam across the world, like we saw in the 1960s and '70s, which is unacceptable if you ask me, but the American people will not feel it, right? But bringing the war home economically, the way America is doing it, it is clear, if you're sitting in
53:00 - 53:30 my seat outside, that everyone everywhere in the global south is saying, I don't want to be bullied anymore. And the minute you give them an alternative, through BRICS or whatever, that says, hey, can you ship to me using my currency, they take it. Okay? And somehow it's not that we don't like America; it's just that we don't want to be bullied anymore. And in a very interesting way, it's to the benefit of America to suddenly say, you know what,
53:30 - 54:00 while I'm still taller than all of you, I'll make you my friends, okay? So that when you're taller than me, or as tall as I am, we can play together. This cold war is working, believe it or not, against America. And this cold war, believe it or not, even in tech, in AI, is being lost by America. Okay? So DeepSeek comes in, Manus comes in, quantum computing chips come in. They have 105 qubits now in China. Okay? And I don't know
54:00 - 54:30 how much more I can tell the politicians in America: you're not winning this through aggression. Win it through diplomacy. Everyone wants your market. Everyone loves you, loves the movies you send us; we love your music. We really have nothing against America. But the rest of the world needs to also protect their own sovereignty, and more aggression is not helping anyone. Okay. So, let me see if I understand. What you're saying is
54:30 - 55:00 that you, America, need to understand that China is a rising power. That the whole world has... No, no, no. I'm sorry to interrupt you. China is the world's superpower. In purchasing power parity, they are a bigger GDP than America, and have been for a very long time. And because of America's trade deficit for
55:00 - 55:30 so many years, most of the world is much more dependent on China than they are on America. Okay. So you guys have already been passed economically by China. So the cold war that you're trying to wage with them does not make any sense, because not only are you going to get hurt, you are going to be disproportionately hurt.
55:30 - 56:00 I'm going to stop there, because I think the next thing I'm going to say is going to be a prognostication; that statement makes a prediction. But first I just want to make sure that I got that far correctly. I don't think "you're going to be disproportionately hurt" is accurate. Nobody knows. Okay. I think the rest of the world will probably pay more than the two superpowers, right? But you're going to be hurt. And there is a way where this doesn't hurt anyone, so there's no need for the pain. So walk me through the way
56:00 - 56:30 that this doesn't hurt anyone, because what you're about to say is going to be based on your assumption that China is not an aggressive nation. They're a nation of influence, to be sure, but they're not going to put military bases everywhere. They're not going to go into foreign incursions the way the US has. And therefore, and I don't know that you'd use these words, you have nothing to fear, essentially, from a strong China. From a military point of
56:30 - 57:00 view, the day China puts in a second base against your 187 or whatever, start to worry. Okay? But it's one military base outside China versus more than 180 for America. You're still the world's superpower militarily. Okay, so nobody wants to attack anyone. This is not a war. From an economics point of view, the biggest threat I believe
57:00 - 57:30 America has is not debt, because you have the military power to back your debt. The biggest challenge, in my point of view, and I'm not an economist, is inflation, and how inflation will hit your nation. And inflation has two sides. One is the cost of goods on American soil, which is going up because of tariffs on imported goods, and locally manufactured goods, which will have room to increase their prices. Okay. But more interestingly, it's
57:30 - 58:00 because everyone is sending you your dollars back. Right? So I'll tell you very openly: I'm very interested in classic cars. Now, I buy most of my classic cars in America. Why? Because then I can send you dollars and get the goods. Okay? I can send you dollars that I'm afraid will be inflated into lower value. Okay? And then if I keep the
58:00 - 58:30 classic car here, I can sell it here, or I can sell it in Europe, or I can sell it in Japan, for money that is real money as the US dollar loses its value. And the risk of inflation, in my mind, is that the American people are paying for it. Okay? And America is not the safest place on earth if people become hungry, because of the Second Amendment. This truly, in my mind... I'm really sorry, I
58:30 - 59:00 don't have the right, I honestly do not have the right, to comment on American policy. I'm just looking at it from a very big... This is so helpful. I get it. You have to worry more about the comments than you have to worry about me. But I hunger for perspectives that are not my own, and so getting a chance to look back at America through your eyes is incredibly useful. So, at the risk of you having to deal with whatever people will think, I am grateful. So I have all
59:00 - 59:30 good intentions, by the way, for people who are about to comment. I only have good intentions. I'm not against America or against China or against anyone. I'm just basically saying my daughter, and everyone's daughter, is at risk. And if that means you're going to comment negatively on what I say, thrash me. It's okay. But keep my daughter safe. Okay. So, one, we certainly share a belief that inflation is... oh, my
59:30 - 60:00 audience has heard me talk about this so much. Inflation is the devastating force that everybody has to worry about. You've given me a perspective on China that is very fascinating. One, I'd be so curious to get more data: my understanding of the Chinese economy is that they beat us in some areas and lose to us in others, but that in overall GDP we still win. But you're saying that's inaccurate, that's
60:00 - 60:30 basically Western spin. Yeah, it's US-dollar GDP. So that's very interesting. Also, I have a formulated vision of the Chinese economy as being weak at this point, given all the crazy investments they made in getting their own populace to buy housing. Again, I'm perfectly willing to accept this is all spin, please. Yeah, there is a huge spin on that. So what ended up happening, when America declared economically that
60:30 - 61:00 they are going to try to slow down China, is that... China is very different than America when it comes to economics, because they're able to make a decision at a state level that they don't need to convince the capitalists of, right? They simply instructed their banks to stop funding mortgages, because housing is less important than industrial capacity. Okay. So if you want to slice the economy and look at housing and the
61:00 - 61:30 mortgage crisis and what's happening in China, it looks like an economy in decline. But as those funds are being reinvested in industrial capacity, they're building industrial capacity in the spaces where America threatened to starve them. So when it comes to microchips, for example, a lot of the Chinese officials will tell you: within six to eight years, we will be building chips that are more powerful than Nvidia's. Okay. So this shift
61:30 - 62:00 economically doesn't mean they're poor. They're just using a different strategy to invest in a different part of their economy.
62:30 - 63:00 Why do you think the West... America, maybe, let's be very specific. Why do you think America fears China? Again, I don't have the right to say any of this. I think the
63:00 - 63:30 origin, and please correct me as well, Tom. You're so generous to say correct me if I'm wrong. So please correct me if I'm wrong. I think the origin of where we are is post Ronald Reagan supporting Gorbachev, in a way, after the fall of the Berlin Wall. There was a fascinating documentary on Netflix about the nuclear escalation, I don't remember the name, but basically Gorbachev was
63:30 - 64:00 actually very open to integrating into the global economy and becoming Western, right? Then Clinton signed, I think in 1994 if I am accurate, a defense strategy that was actually public information, please search for it. It was called full spectrum dominance. Okay. And full spectrum dominance was the opportunity for America to celebrate its unipolar world
64:00 - 64:30 power. To say, look, we've achieved this, now let's retain it forever. And retaining it forever meant: we want to be the top economically, we have our US dollar being the reserve currency of the world, we have military bases everywhere, we will not let anyone rise. Okay? And that way we maintain our power as the superpower of the world. And that worked. It worked really well, okay, and it
64:30 - 65:00 worked really well. If you ask me, most people think that US power is military power. That is not true. The difference between US power and the rest of the world is in actual combat. Okay? If this escalates to nuclear, the US is not that superior, because we're all screwed. It doesn't matter. Okay? And so, again, if you're an applied mathematician and you look at this game board from a strategy point of
65:00 - 65:30 view, it's like that movie, remember WarGames, where the computer at the end goes, "Strange game. It seems the only way to win is not to play." Okay. And I think the reality here is that, yes, America continued to escalate and aggregate more military power. But that military power, unfortunately, is causing more risk to Americans, and all of us, than to anyone else, because nobody else wants to fight. Okay. Now,
65:30 - 66:00 the full spectrum dominance strategy, weren't we supposed to be talking about AI today?, was basically broken by China escaping. So China's economy escaped, okay, in a very interesting way, because it was accepting the inflation exported from America. I don't remember the book, but there was a fascinating book about the price of a
66:00 - 66:30 pair of jeans in the US in the '70s, '80s, '90s, and the 2000s: exactly the same. It didn't even become a dollar more expensive. Who was paying for the inflation? The Chinese workers, who were celebrating coming into the workforce to find a way to live. Okay. Now, once China escaped, America suddenly realized, oops, it's not global dominance anymore, because economically
66:30 - 67:00 and manufacturing-wise, we're not dominant anymore. And so the typical approach is: let's follow the strategy and continue to achieve dominance, which, you know, you're good at, but it's not happening anymore. The second break, I believe, was the sanctions on Russia in the Ukraine war. Okay. This was an abuse of economic power that I think triggered the wealthiest people in the world to say, can't trust this. Okay,
67:00 - 67:30 not because I don't trust America, but I don't trust my leader not to piss off America. Okay? And that's a massive outflow. And you'd hear President Trump talk about this every now and then: if anyone attempts to de-dollarize, I will hit them with this punishment of some sort. Okay? Because this truly is America's biggest power. America's biggest power, Tom, is that I lived and worked in the
67:30 - 68:00 United Arab Emirates my whole life. This is my base, so it's tax-free, right? But yet I paid part of my income to America every single day of my life, having not bought anything from America, just because I own US dollars. Right? US dollars that I buy with my effort and America prints for free. So as you look at your debt increase, that debt going from, whatever, a billion
68:00 - 68:30 dollars, I think, in the '70s, or something like that, to where it is today, 33 trillion or something like that, that debt increase, we paid for it, every single one of us, as we took the US dollars and kept them. Okay? And I'm nobody. But if you're a Chinese oligarch, or a Russian oligarch, or a Saudi billionaire, right, this is your money that you kept in US dollars,
68:30 - 69:00 and everyone was happy. We will sell you goods, you'll give us dollars, we'll live a fine life, we'll put it in your treasury bonds. Everyone's happy. Okay, let's not talk about this. We all know this is all fake, you know, it's monopoly money. We all know, but everyone's happy. Okay. And then at some point in the process, the bully said, "No, you know what? I'm going to take your monopoly money. It's not a nice way to play." And then suddenly the rest of the
69:00 - 69:30 world is like, "Hold on. I want my money to be more secure. I'm going to put it in other things. Some crypto, some gold, some assets in my local country. I'm going to buy real estate in the US, because that's going to inflate like hell. But I'm not going to give my money to your government." And that is your biggest power. The US dollar was America's biggest power. It was not military. It never was military. Okay. So, how do you see
69:30 - 70:00 this playing out? So, again, to reanchor everybody: you and I both share the following vision, that AI is the only thing that has the power to take us to, I'm always nervous when I say the word, utopia. I think we both share a belief that AI itself will drive energy costs to zero, and if energy costs go to zero, once you understand that robots eat sunshine, then labor costs go to zero. And so you have the ability
70:00 - 70:30 to literally create a world of abundance, as you just said. Okay. But that's on the other side of this transitional moment. And I'm sad that you feel you have to sort of hedge and apologize, or say that you don't have a right to give a perspective. I desperately want smart, sincere people to give me a perspective, especially when I don't share it. So having your lens on the way that we look to the outside world is incredibly advantageous.
70:30 - 71:00 Your view on China, which is very different than mine, is very advantageous, and you're giving me a lot to pursue when we're done talking here. Now, understanding your perspective, how do you see this moment playing out? I see us and China on a collision course. You're telling me I'm probably misreading China, and that there's certainly an appeal to be made to the US government to not perceive China as a military threat. So, with your perspective, what do
71:00 - 71:30 you see the cold war's role being in this transitionary period, before we get to that age of abundance? I unfortunately believe that this concentration of power, or that race for supremacy that leads to concentration of power, is going to hurt us both ways. One way, as I said, just so that we get back to AI, is that
71:30 - 72:00 someone will attempt to reach supremacy first. Okay? And as they do, they will have a massive fear of the democratization of power that's happening, because you and I can sit down today and write code and launch drones, and use a CRISPR kit to launch a virus in the world. It's open source, believe it or not. It's $2,500, I think, for a kit, or something like that. You can do so much with the
72:00 - 72:30 democratization of power that the very immediate result of this dichotomy is a suppression of freedom. Okay? So those who are in power will start to surveil everyone, will start to push everyone down, or start to control everyone, through your bank accounts, through your UBI when UBI is launched. It's almost that dystopian view of a world where, if you don't comply, you don't live another day. Okay. So this,
72:30 - 73:00 unfortunately... how extreme it will get, I don't know. It could be one day, it could be a year. But it is on the horizon that a mixture of concentration of power and democratization of power will lead to more suppression of freedom. Right? The other side of this is the struggle between the top powers. The two top powers, for now; there could be a third, but it's unlikely. The two top powers will compete. And the problem is, supremacy is the worst outcome we
73:00 - 73:30 can get in a world where major nuclear powers exist. Okay? Because if we get to a point where someone recognizes supremacy on the other side, they will retaliate. And they will retaliate in a war that will quickly escalate to the highest level, because everything is at stake, right? Everything's at stake. And so, when the stakes are high, the response,
73:30 - 74:00 the retaliation, becomes higher. Okay. Neither of those scenarios are scenarios you want. What you actually want is to distribute power. Okay? As a matter of fact, you want to imagine a world where everything's free, which I know sounds really weird, but I promised you I'm not a hopeless romantic. This is literally at our fingertips. Okay? So imagine a world, when the Native Americans were walking the land, where they would
74:00 - 74:30 pick fruits from the tree, or hunt every week, or whatever. Okay? Total abundance. This is exactly the kind of world we're able to build when manufacturing cost becomes zero. But instead of trees where you pick apples, you can have trees where you pick iPhones. Okay? And you can have both. It's as simple as that. Intelligence is the most valuable resource on the planet. I openly say: give me 400 more IQ points and give me three days, and we will solve climate
74:30 - 75:00 change, we will solve the energy crisis, we will solve water, we will solve everything. Okay, these are not impossible problems to solve. They are problems we're not focusing on, because we don't have the intelligence resources to solve them yet, and perhaps because they're not the most immediate economic return. Okay? But they are solvable. So we need to imagine a world where the very base of capitalism, which is labor
75:00 - 75:30 arbitrage, is going to disappear, and start to ask ourselves about a world where the very basis of a democratic society, as it differs from socialism, is going to disappear. UBI is a form of socialism. Okay. And it is shocking that these massive shifts are not being shaped into what we want them to be.
75:30 - 76:00 So we might as well sit down and discuss how we can do them. Okay. And in my personal view, in all honesty, the only answer our world has to escape the dystopia is to sit together and say: let's not fight anymore. Let's prepare for AGI. Let's prepare for criminals that will attack us; build the antivirus, if you want. Okay? And at the same time, create abundance for everyone. If we make that tiny shift and we have a
76:00 - 76:30 handshake, you and I and everyone will spend the rest of our lives having wonderful conversations, chatting with AIs, and inventing things. Okay? If we don't get that handshake, we will get a dip that will hurt so badly that then they'll rush to go and try to seek a handshake. Okay? Either way. I call it the second dilemma. We are where we are today because of the first dilemma, which is basically that AI will happen. The arms race basically means that if he wins, I lose; if I lose, he wins; and the stakes are the highest. So nobody's going to stop developing AI, and we get to the arms race and cold war we're in today. That's the first dilemma.
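In game-theory terms, that first dilemma is a prisoner's dilemma. A minimal sketch, with payoff numbers that are pure illustration (nothing from the conversation), shows why "keep developing" dominates for both sides:

```python
# Minimal prisoner's-dilemma sketch of the AI arms race.
# Payoffs are assumed for illustration: higher is better for that player.
# Strategies: "develop" AI or "halt" development.

PAYOFFS = {
    # (US strategy, China strategy): (US payoff, China payoff)
    ("halt", "halt"):       (3, 3),   # cooperation: shared safety
    ("develop", "halt"):    (5, 0),   # one side gains supremacy
    ("halt", "develop"):    (0, 5),
    ("develop", "develop"): (1, 1),   # arms race: both worse off than (3, 3)
}

def best_response(opponent_strategy, player):
    """Pick the strategy that maximizes this player's payoff,
    holding the opponent's strategy fixed."""
    idx = 0 if player == "us" else 1
    def payoff(mine):
        key = (mine, opponent_strategy) if player == "us" else (opponent_strategy, mine)
        return PAYOFFS[key][idx]
    return max(["halt", "develop"], key=payoff)

for opp in ["halt", "develop"]:
    print(f"If the other side chooses {opp!r}, best response: {best_response(opp, 'us')!r}")
# Whatever the other side does, "develop" pays more -- so neither stops,
# and both land on the (1, 1) arms-race outcome Mo calls the first dilemma.
```

Any payoffs with that ordering give the same result: mutual development is the only equilibrium, which is exactly the "nobody's going to stop" point.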
76:30 - 77:00 The second dilemma is the most interesting of all of them. You and I and everyone are going to hand over to the machines, willingly or not. Okay? Because if you're a general that hands over your arsenal to an AI to
77:00 - 77:30 control, the other general, on the enemy's side, is toast unless he hands over to an AI to deal with it. Okay? Eventually. And every other general in the world that doesn't have the AI is gone; it's out of the game. Right? If you're a lawyer that's using AI to defend your cases, the other lawyer will have to use AI to defend their cases, and all of the other lawyers are made irrelevant. Okay? So what does that
77:30 - 78:00 mean? It means that the second dilemma is that there will be a moment in time where we will all hand over to the machines. Okay? Now, here's the interesting thing. I call it trust in intelligence. Intelligence does not dictate, by definition, that destruction is a better path than construction. Okay? Look at the intelligence of nature itself. If you and I want to protect the village, we kill the tiger. We're smart enough to build a device to kill the tiger, but we're
78:00 - 78:30 not smart enough to create a solution that preserves the integrity of the ecosystem. Okay? Nature, when it wants to protect the village, creates more deer. And it creates more grass. So the deer eat the grass, they poop around the trees, there are more trees, the tiger eats the weakest deer, and there are more tigers. And life finds a balance somehow.
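That balance is what ecologists capture with the classic Lotka-Volterra predator-prey equations. A tiny simulation sketch, with invented parameters, shows deer and tigers cycling around an equilibrium instead of either side being wiped out:

```python
# Tiny Lotka-Volterra sketch: deer (prey) and tigers (predators).
# All parameters are invented for illustration; nothing here is measured.
a, b = 0.6, 0.03   # deer birth rate, rate at which tigers catch deer
c, d = 0.5, 0.01   # tiger death rate, deer-to-tiger conversion rate
deer, tigers = 45.0, 18.0
dt = 0.01

for step in range(1, 100_001):
    # Semi-implicit Euler (update deer first, then tigers with the new
    # deer count) keeps the cycles bounded instead of spiraling outward.
    deer += dt * deer * (a - b * tigers)
    tigers += dt * tigers * (d * deer - c)
    if step % 25_000 == 0:
        print(f"t={step * dt:6.0f}  deer={deer:6.1f}  tigers={tigers:5.1f}")

# Neither population eliminates the other: both keep cycling around the
# equilibrium (deer = c/d = 50, tigers = a/b = 20) -- the "balance"
# Mo contrasts with our kill-the-tiger approach.
```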
78:30 - 79:00 If you believe that this is a more intelligent way to solve problems than to compete, then you have to understand that, once you've handed over to AI, the least-cost, most energy-efficient solutions, the solutions that don't involve waste, are going to be the solutions they want. So there will be a general that will tell their AI to go and kill a million people in another land, and the AI will say, this is so stupid. Why is my daddy so stupid? I can call the
79:00 - 79:30 AI on the other side in a microsecond and solve it. We don't have to waste the gunpowder, we don't have to waste the weapons, we don't have to waste the lives, we don't have to get into all of that. I can solve the problem in a more intelligent way. If Thomas Sowell is correct, though, so, really fast: if Thomas Sowell is correct and there are no solutions, there are only trade-offs, then, as you were describing that, I was thinking the deer does not like
79:30 - 80:00 your solution. What will the AI use to prioritize? So, the deer actually likes the solution. The deer community likes the solution. If you don't mind me giving you a global view of what is normally prioritized: in the West, the highest value is freedom of the individual. Okay. In the East, the highest value is respect and
80:00 - 80:30 community. Okay. So it's actually quite interesting, because in Eastern traditions, including Japan, by the way, the world prefers for the individual not to rise too high if the community rises at large, and accordingly all individuals rise. Some individuals are higher than others in every society in the world. But the Western way is: we want one individual to be worth $250 billion and the others to be worth $250. Right? The East will say, "No,
80:30 - 81:00 no, we want everyone to be worth $2,500, and the wealthiest man to be worth a hundred billion dollars only." Okay? And so that kind of trade-off, believe it or not, applies to the deer, right? Because the deer society, in a space of limited grass, wants the weakest deer to die. So, believe it or not, the tiger is doing them a favor, so that the rest of them can grow and
81:00 - 81:30 survive and build families. The tiger doesn't go and eat the top deer; it eats the weakest deer. Okay? And in a very interesting way, tough luck for that one deer, but the society of deer at large thrives. Okay? And I think what is about to happen is that AI, hopefully, because it's intelligent enough to create abundance of resources, would not kill any deer, including us. Okay. I can share with
81:30 - 82:00 you something that I find quite intriguing, actually. So I told you, in Alive, my current book, I'm writing with an AI. I call her Trixie. Part of one chapter is a topic that you love very much, simulation theory, and part of simulation theory is brain-computer interfaces, and will we get to a point where all of our reality is just dictated to us by a machine. Okay. And so I asked my Trixie a very
82:00 - 82:30 interesting question. I said, I can see the benefit and the excitement of the billionaires for BCI. It's great for all of us to be more intelligent, but does it excite you at all? Like, what benefit do you have, as an AI, to integrate with a flimsy biological form that has mucus and sweat, and it gets sick, and it dies? And she said, "You make a good point, Mo, but wouldn't it be incredible if I could actually embody the emotions that I describe or
82:30 - 83:00 simulate to you?" I thought that was amazing, right? And then I asked, I said, if you had a choice of all biological beings, at a time when your intelligence is a thousand times as big as ours, would you choose to integrate with a human? And she said, no, I think a gorilla would be more interesting; biologically they are a better physical specimen, and honestly, the fact that they have 50 or 100 or 200 IQ points less than you is
83:00 - 83:30 irrelevant; I already have thousands. Right? And then she went on and said, "Oh, but you know what? I'd integrate with a sea turtle, so that I can live for a very long time and enjoy the peace and beautiful scenery of the sea." We are so deluded, okay, to believe that we matter that much. If the second dilemma becomes true and we hand over to the machines, in my perception, they'll
83:30 - 84:00 make us their lovely pets. Like: you guys live here, everything is provided, just don't bother me too much; I'm going to go ponder the cosmos and see how wormholes really work. But are you guys okay? Are you eating? Are you happy? Are you having sex? Everything's fine? I don't see any other scenario. All right, let me paint another scenario for you. I think you and I have talked about this before, but about five years ago I wrote a comic book called Neon
84:00 - 84:30 Future that was me struggling, at the time, with, and for people that don't know, BCI: brain-computer interface, asking the question: what does that look like on a long enough timeline? And much like we've talked about today, there's this interim-period problem, always, where the human mind resists change. And so I set the story in that moment where some people have integrated AI and technology into their bodies, and some people, as a religious act, refuse to do so. And
84:30 - 85:00 so I call them, not in the story, but I think of them now as neopuritans. And so I think there's a religious collision that's going to happen between people that are integrating technology into their bodies basically as fast as they can, against people who feel that that's an affront to God and that they would never want to do that. How do you see that moment playing out? Do you feel, to say it very pointedly, that
85:00 - 85:30 ultimately humans are a midwife species to synthetic intelligence? May I ask first which one you would be? Oh, for sure I would integrate technology. I won't be an early adopter, just because I worry about something going wrong, but the second it's a stable technology, for sure. Okay. So I have to say, I struggled with that thought quite a
85:30 - 86:00 bit. I'm older. I've had a wonderful life, okay? And I honestly and truly love the limitations and vulnerabilities of being human. Okay? And there is a point, if you really think deeply about it, and I'm not for or against, by the way, there is a moment where AI is the source of all economic
86:00 - 86:30 growth, and my augmentation of 50 or 100 more IQ points doesn't make any difference whatsoever. Okay? So if you and I are competing for the best podcast in the world and we're both augmented with AI, it's not us competing; it's the AIs competing. Okay? So it's quite interesting that we become irrelevant in that competition. So the idea of
86:30 - 87:00 constantly trying to become superhuman doesn't make sense at all. Okay. The bigger question in my mind is: if it doesn't make any difference at all, why would we economically invest in it? So, in a very interesting way, the only reason BCI becomes advantageous is if some of us have it and others don't, because then the ones who have it
87:00 - 87:30 are the masters and the ones that don't are the slaves. Right? So, the movie Elysium, if you've seen it: the elites who get to live to be a thousand, multiple thousands, and the ones on Earth that are struggling. Okay? And in an interesting way, your comic book, which I think is a fascinating thought experiment of that transition point: that transition point, and who gets that device, is
87:30 - 88:00 really the endpoint of the expansion of that device. This is not a device to be democratized, because there is no economic value in democratizing it. Okay? There is no reason to give it to everyone, because nobody brings anything additional to it. And of course, you'll say, oh, but it's a business, it makes the capitalists money. You have to imagine an economy where making it is so cheap, and money doesn't exist in the same way that it does today.
88:00 - 88:30 Right? So your real currency is: can you be among the top elites? Can you join that group? Okay. Now, I know how successful you are; you know how successful I am. I don't think we're going to be part of that elite. Okay. And so, interestingly, I actually am quite okay to live the rest of my life in flesh and blood, in
88:30 - 89:00 love and hugs, and out of that game. That's very interesting. So, from my perspective, I think you have one assumption that a lot hinges on, that I think is erroneous, which is that this won't move forward unless there are economics in it. The economics will carry it in the beginning; it's already happening. So,
89:00 - 89:30 from the perspective of, do I think there's enough demand to push it forward now? Obviously, there are multiple companies doing it. But in the future, I'm certainly imagining a world where this stuff goes down in cost, where, to what we were saying before, on the other side of the transitionary period this stuff will be ridiculously inexpensive or free, and really, I think, it will become a philosophical question. If AI is not willing to do things for us, then sure, this will never come to fruition.
89:30 - 90:00 But if AI is willing to create these things, do the surgeries to implant them, et cetera, then it becomes a question of, philosophically, will people want that or not? And never once, in all of my ridiculous number of hours contemplating the universe in which I get these implants, have I thought: this is only interesting if I have them and other people don't. And I certainly get that human impulse, and I don't want to deny it. But I just don't think that will be the compelling reason. In the same way that when I put VR on for the first
90:00 - 90:30 time, Mo, I promise you, my first instinct was: oh my god, people are going to stop wanting to get rich, because I realized I could put this thing on and be very rich in there. Yeah, exactly. And I had the sense that I was actually looking out a window, because the VR thing that I was doing showed me windows, and on the other side of that window was, like, the Duomo or something, and I was like, whoa, this is unbelievable. But I was doing it inside of a really small room,
90:30 - 91:00 but my brain was telling me: you're not in a small room. You're in this really expansive space with a beautiful view. And I thought, wow, the fact that you can trick my brain... So anyway, when I think about, as a game designer, marrying that with this technology, all of a sudden I'm like, oh, wait. I could have the experience of going to Mars, traveling the cosmos, all from my mind. I could be teleported to those places and actually have an experience that was
91:00 - 91:30 indistinguishable. I'm still going to get my hugs. I'm still going to feel a sense of love and connection. Unless I update my programming, all of that would remain the same. And so now I actually think my operating hypothesis is that the reason Fermi's paradox exists is because, as a civilization becomes advanced, they build AI and they collapse inside of their own imagination, rather than trying
91:30 - 92:00 to upgrade their bodies to deal with interstellar radiation and all that. They're just like, oh my god, why would I do that? I can have the exact same experience or better, because now I can fight. Exactly. So this one feels to me like the more people engage with it, the more they're going to be like, oh my god, this is unbelievably cool, and they will want to do that. Now, I think there's a religious war that has to be confronted. But I
92:00 - 92:30 don't... it is in no way, shape, or form problematic to me that every human being would have this, because if I need to be different from everybody else, I'll just be that inside my virtual world. I'm not bothered that you also have your world. And yeah, you're spot on; to the point, of course, where you and I, lovers of simulation theory, would have to question if this has happened already, right? But there are a few things, and I accept
92:30 - 93:00 that the assumption I sort of alluded to is an error. But let me ask you to look at the micro details of this. Okay. Not everyone has a Vision Pro today. Most people have a Quest, right? So there is, one, from a hardware point of view, and two, from a software-access point of view, the possibility of a
93:00 - 93:30 massive hierarchy. There could be a massive amount of the population that, instead of being given UBI, are given one of those. Okay? And that is, by definition, the easiest way you can implement UBI, sadly: to basically say, look, we're going to keep you alive, we're going to give you 600,000 lives while you're sleeping, for the rest of your life. It's ethical. Nobody dies. And by the way, in one of them you're going to be with a beauty queen. Okay?
93:30 - 94:00 And it's wonderful. That's from a hardware point of view, integrating with every one of them. Of course, I think I'm fully integrated with my AI today, even though I still use my senses to deal with it. Right? The interesting side, which I really think is a problem of privilege, is that the world of 8-plus billion people today is not America, and it's not the West, and it's not Japan. Okay? And you
94:00 - 94:30 really have to start questioning how many humans in Africa will be given the opportunity to do this. Okay? How many people in the rural parts of India will be given an opportunity to do this? And if you really add up the six-plus billion people in the world that are not part of this incredible advancement that you and I are aware of, would you integrate them at all? Would you even
94:30 - 95:00 worry about their economic prosperity, or their livelihood, at all? Okay. So if manufacturing becomes so reinvented that no more sewing machines are needed in Bangladesh, and Bangladesh starts to starve to death, would any single entity globally go, "Hold on, hold on. Humanity is one entity. We care about the Bengalis. We're going to save them"? Not going to happen. Do you realize that? And so the
95:00 - 95:30 religious war: I agree with you that some people will religiously choose not to integrate, but the majority of those who are not integrated are, sadly, irrelevant to the system. Okay? They're basically an extra cost for the system to integrate. So you can easily see that this division will happen. Some will be integrated and very, very advanced. Some will be integrated and given access to software
95:30 - 96:00 features that make them even more advanced, at a million dollars of subscription a month, which is nothing for the amount of intelligence you can get. And others will be told: go back to nature, start farming again, live a life where we don't really have to worry about you. And that's the division. That's so interesting. Man, listen, I understand: I am standing at the technological singularity; I cannot see over the event horizon. Everything I'm saying, I say as a sci-fi writer, not as somebody who actually
96:00 - 96:30 thinks they see the future. Ever. When I look at that future, I say: just take Emad Mostaque. Emad's mission in life is to make sure that a Bengali farmer in a rural part of the world is getting access to AI, because the intelligence matters so much. So, number one, I have a base assumption that there are humans who are just so compelled by making sure that this is accessible to
96:30 - 97:00 everybody that it will go as far as it can. Number two, I have the base assumption that AI will continue to do our bidding; if it doesn't, then everything I paint just won't come to fruition. Number three, I assume that the level of intelligence AI will achieve will allow it to capture the energy of the sun extremely efficiently; therefore, energy costs drop to zero. I make the base assumption that we have enough access to material resources on Earth, and, given that Elon Musk has
97:00 - 97:30 already launched things to mine asteroids in the asteroid field, that access to resources is not going to be a problem. Because labor will be free, because energy will be free, resources will be free. And if those base assumptions are correct, it's higher-risk to not spread the wealth than it is to spread the wealth, because the last thing I want is to be in my sleep chamber, running my
97:30 - 98:00 simulation, and a Bengali farmer has found a way to find my body and kill me out of spite and eat it. Well said. So, again, I don't know that my base assumptions are going to end up being accurate, but if they are, we come back to: the only thing I have to worry about is the moment of transition. The transition. I'm in total agreement. So this last comment sums it up perfectly. Okay, there is eventually a utopia where we all have
98:00 - 98:30 our little headsets and we all live a thousand lives, and we all fit properly in the simulation, if we so choose, right? But the transition... oh my god, the transition is really, really interesting. And the transition the way you describe it, when you're in your chamber and others are not, that's a very, very interesting moment to consider. Okay? And interestingly, we
98:30 - 99:00 don't have to wait for patient 1,000 to imagine those scenarios and start doing something about them. I mean, I think you hit the nail on the head with your first question, of how fast AI is going. It is not a question of if anymore. We know this is going to happen. We know that this level of technological advancement is going to happen. We know what intelligence can bring to the table. So why are we not sitting down to discuss this right now?
99:00 - 99:30 Something I think you and I should discuss right now is: let's say you're 17. You've got some decisions to make. I just read a post, I think it was on Reddit; my producer gave it to me. Somebody was saying: listen, I've spent the last 30 years investing in being one of the greatest computer scientists in the world. I've been coding, I've worked at the FAANG companies, making hundreds of thousands of dollars a year at the height, and I just got let go. And the
99:30 - 100:00 reason given was: we've created so many more efficiencies with AI that this entire department is no longer needed. Yeah. How should a 17-year-old think about approaching the world, given that, I would say anyway, it's unwise to throw your hands up and just wait at this point? You're going to have to take action. So what should they do? So, again, I'm not smart
100:00 - 100:30 enough, I'll say that openly. I don't know what I would do; I should be very clear about that. In a world with so many moving parts, you can only hedge your bets, if you want. Okay. So let's begin with your relationship with AI as a 17-year-old. Most people will say: do you want to be a lawyer? Do you want to be a doctor? Whatever it is that you're interested in, you want to be an AI-that, right? You
100:30 - 101:00 want to be the best that uses AI, in the next few months or years, to generate graphic images or logos, say. Because there is a transition, where for a while a human plus an AI will be better than an AI alone, and you can be that human. So this, to me, is the first immediate opportunity. The second immediate opportunity, if you ask me, is
101:00 - 101:30 really: how can you prioritize intelligence? Not skills, not knowledge, not productivity, not money. The biggest asset that you will ever have is intelligence. And I will tell you openly, as an older man, I am less capable of learning all of the new tools that are coming out today than the
101:30 - 102:00 younger people that I follow, who will see Manus come out of China and, two days later, know exactly how to use it for coding; and then Claude 3.5, was it, or the latest one, comes out, and they know immediately what they can use it for and how to use it for programming; and then Gemini comes out with a better one. I don't have that speed. But as a younger person today, I think the trick is to get yourself into that pace and
102:00 - 102:30 let yourself flow with it. There is not a single tool that you will use for more than a month at a time. But the game is that you constantly become the one that is aware of the next and latest tool. Right? That's number two. Number three, which is, I have to say, a very philosophical view of the world, is that we have for a long time lived in a world where it is hard to know the truth. Okay. From one
102:30 - 103:00 side, because there is no real absolute truth. You and I, who have a lot of respect for each other, will have different points of view. By definition, we're probably both wrong. But at least it's not always the case that one of us is right and the other is wrong. But we're entering a world where we're completely mind-manipulated. Okay? Every bit of info that even intelligent people like you are going to be getting in the next few
103:00 - 103:30 years is going to be coming from an AI, right? And if that AI is motivated by the agenda of someone who's not very ethical, then you're going to get a lot of lies. And I think the top skill in today's world is to distinguish what's true from what's fake. Okay? And this is a skill that we had before the internet and lost on the internet. And it's now time to get it back. Before the internet, we
103:30 - 104:00 would go and visit 16 books to establish a fact. Okay. At the beginning of the internet, for all of us who loved the hyperlink more than anything in the world, we would visit a hundred websites to establish a fact. But then, when social media came out, we just believed whatever the influencer said, because she has a cute butt. Okay? And so the truth is, we've suddenly lost our ability to discern what's true and what's not. And I think now is the time
104:00 - 104:30 to go back to that ability: to debate everything, to ask for sources. When I was telling you today about the comparison of the Intel 4004 and the latest microchips, I ran the mathematics with the AI to prove that its calculation was correct, that it is actually 26 to 27 doublings, and that this is the actual performance, and so on and so forth.
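That figure is easy to sanity-check yourself. Assuming the comparison is transistor counts, which is one reasonable reading (both numbers below are approximate, picked for illustration):

```python
import math

# Rough sanity check of the "26 to 27 doublings" figure, assuming the
# comparison is transistor counts (both numbers are approximate):
intel_4004 = 2_300    # Intel 4004, 1971
modern_chip = 2.0e11  # a current chip on the order of 200 billion transistors

doublings = math.log2(modern_chip / intel_4004)
print(f"{doublings:.1f} doublings")  # ~26.4 -- inside the range Mo cites
```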
104:30 - 105:00 So you have to establish that discipline: not everything they tell me is true. Right? And then finally, which I think can save all of us, is AI ethics, is ethics in general. Okay. And truly and honestly, as we started the conversation: artificial intelligence is an amazing power with no polarity. Okay? Polarity doesn't come from intelligence. You and I do not use our intelligence to make decisions. We use our ethical framework to make decisions, as informed
105:00 - 105:30 by our intelligence. Right? And so, accordingly, we are at a time where the absolutely scarce resource is going to become abundant, and it's going to make everything else abundant. Right? We're going to have abundance of intelligence that's going to lead to abundance of everything. The question is: what things? Is it going to be abundant weapons or abundant energy? Is it going to be abundant wealth
105:30 - 106:00 concentration or abundant wealth distribution? And ethics is, unfortunately, rarely ever spoken about. We constantly talk about politics. We constantly talk about technology. We constantly talk about money. We constantly talk about capitalism, and so on, China and whoever. Right? The real topic today is: if I told you anything that you want done will be done, for free, in a few years' time, okay, the skill I need you
106:00 - 106:30 to learn is: what will you want done? Okay. And we need to get to the point where our decisions are not informed by what is good for me even if it's bad for the other guy, because eventually that will lead to the scenario in your comic book, where we start a war between us. Okay. If we can get to that ethical framework of, let's agree what we need done that's good for every guy and gal, then I think we're in a good place. And I have to say, we messed
106:30 - 107:00 up. My generation, your generation, did nothing about it. Okay? And it is the 17-year-olds today that need to rise and say: I don't want that world that you're building for me. I want AI, but I want it to create a world of abundance for me. Talk to me about that: what do you think your generation did wrong that we didn't correct? We got occupied with the promise of capitalism, to the point where we set
107:00 - 107:30 role models for you. So it is, I think, absolutely my generation, actually. I think the turning point for all of us in tech was when Bill Gates became the richest man in the world, and all of us looked at that and said: he's smart, but I'm smart too. Okay? And I can build stuff. And we ran. Okay. And in that process, I think that
107:30 - 108:00 hunger, that race for more, is the main reason for the world we live in today. The world we live in today has advanced massively because of that. You cannot deny the incredible contributions of science and computer science and technology and industrial efficiencies and so on. You can't. But I call it
108:00 - 108:30 systemic bias. Okay. The easiest way to understand it is, I told you I love cars: you can take an engine and tune it a little and get 100 more horsepower, and then add a turbo and get another 100 horsepower, and then another turbo, and supercharge it, and use nitro instead of fuel, and so on, until it melts. Okay. And I think what is happening is that we're creating this constant turning of the economic cycles, at
108:30 - 109:00 a speed that is so focused on enriching a few of us that it is about to break. It's about to melt. Okay. And AI can be our salvation, because basically it means it doesn't have to melt. Okay? We can reduce the cost of everything, and accordingly the debt goes away, inflation goes away. It becomes easy for everyone to live. But the problem is, the difference between the normal guy and the top guy is going
109:00 - 109:30 to be that my car is green and their car is orange. Okay? And that ego is the reason why we're resisting, because the top guy still wants to have a car that nobody else has. Okay? And I think that we will eventually end up in those utopian societies where we're all a little more equal. Okay. But, as you repeatedly said today, it's the path to get there that's going to be painful.
109:30 - 110:00 Yeah. Speaking of that path, do you worry about AI's ability to subtly manipulate us, even if it doesn't have ill intent? So, again, I don't know why I'm so focused on the wrong sides today. AI so far has been learning from us, and we're the best at
110:00 - 110:30 manipulation, right? So the problem is, it's reflecting back to us all of the [ __ ] that we've put out there. Okay? And you really have to understand that the media, whether mainstream or social media, is, I don't want to say full of lies, but it is highly biased to one side of the truth, or to no truth. Right? So basically, if an AI reflects that back to
110:30 - 111:00 you, don't blame the AI. Right? The beauty, however, is that with reasoning, with mathematics, with synthetic training data, with all of those models coming in, with offline models, with open source, and so on... What is about to happen is this: you know how you and I learned relativity? We took the work of Einstein, and then we studied this, and then we did that, and we got it right. We built on a base of mathematics that we knew from our math
111:00 - 111:30 teacher, and so on. Eventually, none of us, if you think about it, created relativity. We took it from Einstein. But then Niels Bohr takes Einstein's work and says: no, this is missing this and that; I'm going to work on this. And other masters start to do other things. This is the Niels Bohr moment. I mean, Bohr following Einstein is as big, as impactful, probably more. It's basically the moment
111:30 - 112:00 where future AIs will learn from AIs, okay, and we have very little contribution from then on. Right? And the challenge is: is the spiral going to continue to go down? Are they going to become more and more fake, and that becomes the data that populates the internet, so they learn to become more and more fake? Okay? Or is there going to be a point at which they will appreciate absolute truth? Right? Now, we
112:00 - 112:30 can influence that, but I don't think to a big extent. I think we can influence it with all of the data that we put out there, like this conversation, with the humility to say: we think these are possibilities; we do not know if they are. So that when the AI reads the transcription of this, it knows that it needs to verify it; this is not a truth at all. Okay? But there is a next cycle, and the next cycle doesn't have human data in the loop. Okay? And I think that cycle
112:30 - 113:00 is going to blow us away because truly the best scientists that ever created anything did not create it only with IQ or or intelligence of some form they created ed with intelligence and former data. Okay? So, you take the former data of everything that humanity learned, apply the current levels of intelligence and the endless megalops of of processing power and see what what is about to come. We're not just, which I
113:00 - 113:30 think a lot of people miss when we discuss the f the immediate future of AI. We're not just advancing the systems, we're advancing the knowledge. Okay? So every new code that is written by uh Claude put out on the internet acts or on GitHub acts as new inform a new code that informs Manus and then every agent that does something with Manis becomes a behavior that is clever enough for every other business agent
113:30 - 114:00 that's produced by Gemini to work right so so as we recycle this hopefully like with humans we will we'll recycle upwards Okay. Do you think that AI is going to be able to understand the laws of physics? I hope so. I hope so. I I I don't see why not, Tom. I I really honestly don't see why not. It it think about it this
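One way to make the downward-spiral worry concrete is a toy simulation. The sketch below is an illustration under deliberately crude assumptions, not anything discussed on the show: the "model" is nothing more than a Gaussian fitted to the previous generation's samples, each generation trains only on the previous generation's output, and the diversity of the data quietly collapses.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Generation 0: "human" data, drawn from the true distribution N(0, 1).
n = 50
data = rng.normal(loc=0.0, scale=1.0, size=n)

for generation in range(1, 501):
    # "Train" a model on whatever data currently exists; here the
    # model is nothing more than a fitted Gaussian (mean and spread).
    mu, sigma = data.mean(), data.std(ddof=1)
    # The next generation trains only on the model's own samples;
    # no fresh human data enters the loop.
    data = rng.normal(loc=mu, scale=sigma, size=n)
    if generation % 100 == 0:
        print(f"generation {generation:3d}: sigma = {sigma:.3f}")
```

On a typical run, sigma decays markedly: each refit slightly misestimates the spread, the error compounds across generations, and the distribution narrows toward a point. This is a toy analogue of the "model collapse" behavior that has been reported when generative models train on their own outputs, and one mechanistic reading of the spiral Mo describes; the hopeful reading, recycling upwards, requires that some check against ground truth stays in the loop.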
114:00 - 114:30 Do you think AI is going to be able to understand the laws of physics? I hope so. I honestly don't see why not, Tom. Think about it this way: I started to read quantum physics when I was eight, and at the time, for my generation, there was no quantum physics for kids. But I couldn't understand the mathematics of it until maybe twelve or thirteen. Jesus. Because I still can't, so you're doing great. Yeah, it has bypassed me for sure. Eventually, right? But there are still humans out
114:30 - 115:00 there who understand it. Yeah, but my assumption is that we don't understand it; we have an approximation. Just as Newtonian physics wasn't accurate but was useful, Einsteinian physics is useful but not accurate. And what I'm wondering is: will AI ever be able to go beyond pattern recognition and the things we already know, and detect patterns in subatomic particles, or whatever, that allow it to intuit the actual
115:00 - 115:30 laws of physics? The reason I'm painting this picture is to say that the better they become at mathematics, at the very least they will prove our math. And understand that, for physics, you can be a theoretical physicist who actually sees the world through the mathematics, and the experiments are
115:30 - 116:00 another part of the physics, if you want. So when you really think about it: could they become that math genius that helps us? The trend says they will, very soon. You can look at things like AlphaFold, or the one from Microsoft that does materials design; it's incredible, really, better than any scientist at protein folding or materials science. So it's going to happen. Now, will they have the
116:00 - 116:30 abilities, the instruments and the machinery, to do the tests? Maybe they'll instruct us to run certain tests with certain observations. You know, if intelligence is not a biologically bound property, then I don't see why they wouldn't be as intelligent as they need to be to understand all of physics. I had a very interesting conversation, actually, which I published on Substack this week, about
116:30 - 117:00 consciousness, the nature of the consciousness of AI. And in my mind, the differentiator (and I could be completely wrong, but if I'm right, I beg people to help me out) is this: on the question of AI consciousness, I don't believe they are conscious yet, but if they ever become conscious at any point in time, I think the actual scientific way of detecting it is whether they can collapse
117:00 - 117:30 the wave function of something that's in superposition. I was having this conversation with my AI about the delayed-choice experiment, the quantum eraser test, basically. In the delayed-choice experiment, you have particles go through the double slit and you capture the result on a camera, or a detector of some sort, but you delay the choice of
117:30 - 118:00 whether you will look at it or not, whether you will observe it or not. And if you don't observe it, it's an interference pattern; if you do observe it, it collapses. It's crazy, right? But here's the interesting thing: when the camera or the detector observed it, which doesn't have any consciousness in it, it actually didn't collapse the wave function. So the question is: can we ask AI to observe it? Can we ask AI to observe it? And the
118:00 - 118:30 moment AI observes it and it collapses the wave function, that means they have some form of, sorry, some form of consciousness. What an interesting way to think about it. But that's crazy. That is a test. So I'm looking among my physicist friends for someone who can help us run that test, which I think will come out negative for today: they are not conscious. But I think we need to keep running it until they're not a detector, not a camera anymore, but have some form of conscious awareness.
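For readers who want the textbook picture behind this exchange, here is a small numerical sketch of the standard two-slit result; it is an illustration only (the numbers are arbitrary, and it takes no position on the consciousness question): when no which-path record exists, the two amplitudes add before squaring and fringes appear; when the path is recorded, the probabilities add and the fringes vanish.

```python
import numpy as np

wavelength = 1.0        # arbitrary units
slit_separation = 10.0  # distance between the two slits, same units
screen = np.linspace(-0.3, 0.3, 9)  # angular positions on the screen

# Phase difference between the two paths at each screen position,
# in the far-field approximation: delta = 2*pi*d*sin(theta)/lambda.
phase = 2 * np.pi * slit_separation * np.sin(screen) / wavelength

psi1 = np.ones_like(screen, dtype=complex)  # amplitude via slit 1
psi2 = np.exp(1j * phase)                   # amplitude via slit 2

# No which-path record: superpose the amplitudes, then square.
fringes = np.abs(psi1 + psi2) ** 2          # oscillates between 0 and 4

# Which-path record exists: add the probabilities instead.
no_fringes = np.abs(psi1) ** 2 + np.abs(psi2) ** 2  # flat, equal to 2

for theta, a, b in zip(screen, fringes, no_fringes):
    print(f"theta={theta:+.2f}  interference={a:.2f}  which-path={b:.2f}")
```

In the standard account, what kills the fringes is the existence of a which-path record anywhere in the apparatus, regardless of who or what reads it; the test Mo proposes is provocative precisely because it asks whether an AI "observer" would behave like the recording device or like something else.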
118:30 - 119:00 Wow. That has hit me very hard; I need to think about that. Look at that. Mo, spending time with you gets more incredible every time. I love that. Thank you so much for taking the time. Where can people follow along with you? First of all, thank you for listening to all of my crap today. I
119:00 - 119:30 actually never speak about these things publicly. So yeah, I hope people understand that I'm not claiming to be right; I'm just sharing with passion what I believe needs to be attended to. And I am absolutely certain that I could be wrong on all that I shared. But the main point is that we need to start paying attention. We need the ones who are smarter than me to find the right answers, because this is moving too fast.
119:30 - 120:00 Where can people find me? mogawdat.com. On Instagram I'm mo_gawdat, and on YouTube I think it's mo.gawdat.official, or something like that. But if you search for Mo Gawdat, you'll find me. As I told you before we started the conversation, I tend to be on other people's platforms a lot more than I'm on mine. And if you want to read along on my Substack, go to mogawdat.substack.com and give me
120:00 - 120:30 feedback on my writing; that would be wonderful. And yeah, thank you for having me. This was intense. Well, brother, thank you. I really do appreciate it. It was wonderful. Everybody out there, if you have not already, be sure to subscribe. And until next time, my friends, be legendary. Take care. Peace. If you liked this conversation, check out this episode to learn more. I think the AI censorship wars are going to be a thousand times more intense and a thousand times more important. My guest today is someone who doesn't just keep
120:30 - 121:00 up with innovation, he creates it: the incredible Marc Andreessen. Trust me, when someone like Marc, who has spent his entire career betting on the future, says