Summary
In this thought-provoking discussion, Johnathan Bi and Nick Bostrom delve into how advancements in AI might render many human activities obsolete, leading to a world where humans seek new ways to find purpose and fulfillment. The conversation expands on how AI changes the payoff functions of human activities, the potential for a deeply technological utopia, and which aspects of life might remain resistant to automation. The discussion also touches on philosophical and theological implications, exploring how future societies might balance pleasure, social engagement, and personal growth in a highly automated world.
Highlights
AI is making many traditional human activities obsolete, prompting the need for new sources of purpose.
Learning and personal human connections are seen as more resistant to AI innovations.
The conversation delves into the future of work, leisure, and relationships in a world deeply influenced by AI.
Philosophical questions currently debated may become obsolete as AI evolves.
The potential of a utopian future with AI raises questions about maintaining human value and meaning.
Philosophical and theological insights align in viewing ideal futures shaped by AI.
Key Takeaways
AI is rapidly changing the landscape of human work and purpose, potentially making many activities obsolete.
Certain areas like learning, personal relationships, and unique human roles remain more resilient against AI takeover.
Nick Bostrom suggests focusing on philosophical questions where getting the answer now matters, since future AI advancements may render other efforts obsolete.
The discussion explores the notion of a 'deeply technological utopia' where human activities need reevaluation.
Bostrom and Bi discuss the importance of maintaining human connections and natural purposes in an automated future.
Exploring philosophical and ethical considerations of AI-driven futures has theological parallels.
Overview
Nick Bostrom and Johnathan Bi engage in a captivating conversation about the implications of AI on human activities, exploring how technologies are changing the payoff functions that define these activities. As AI potentially renders many traditional roles obsolete, they discuss areas of life that could remain resilient to this transformation, such as personal relationships, learning, and philosophy, which may continue to hold value despite advancing automation.
The discussion delves into the concept of a deeply technological utopiaโan advanced AI-driven society where fundamental changes challenge traditional notions of work, leisure, and human interaction. Bostrom outlines pathways for finding meaning and purpose, suggesting we reevaluate what truly constitutes a flourishing life when AI can fulfill many roles.
Drawing parallels between technological and theological narratives, Bostrom and Bi examine how such a future aligns with philosophical and religious considerations. They explore the changing landscape of human values and ethics in the context of AI's progression, ultimately highlighting the importance of preserving intrinsic human values in a rapidly advancing technological world.
Chapters
00:00 - 01:00: Introduction to AGI-Proof Areas In the chapter titled 'Introduction to AGI-Proof Areas,' the discussion focuses on how artificial intelligence (AI) is altering the dynamics of various human activities. There's an exploration of the shifting balance between learning, production, and the relationship between capital and labor. It is suggested that some services, such as those offered by priests, prostitutes, and politicians, may continue to require human intervention due to the inherent demand for human-provided services. The chapter examines the human desire for novelty, deep purpose, and meaningful experiences, contrasting these with the pursuit of pleasure, social engagements, and aesthetic experiences, underscoring a shift toward these aspects in the face of advancing AI.
01:00 - 02:00: The Critical Juncture in Human History The chapter explores a pivotal moment in human history, emphasizing its immense significance for the future. It suggests that the decisions and actions taken at this point could impact the next billion years. The narrative portrays this period as an opportunity for individuals to find real purpose, given the unprecedented stakes involved. The discussion is marked by a theological tone, drawing parallels to concepts of the Christian afterlife. With the advent of superintelligent technology, humanity could rapidly progress towards technological maturity, highlighting the transformative potential of human capital.
02:00 - 03:00: The Depreciation of Human Capital The chapter, titled 'The Depreciation of Human Capital,' delves into the unsettling feelings elicited by the rapid advancements in artificial intelligence. The narrator recounts their initial encounter with GPT-3, an advanced AI, which sparked a dual reaction of amazement and fear. Awe stemmed from witnessing the capabilities of AI, while dread arose from realizing how closely AI could mimic and potentially surpass their skills in areas like research, writing, and philosophyโfields they had dedicated significant effort into mastering. The narrative draws parallels to historical shifts brought about by the Copernican Revolution and Darwin's theories, which repositioned humanity in the cosmic and biological order, respectively. Here, AI is portrayed as the forthcoming disruptor that challenges human superiority in intellectual domains.
03:00 - 04:00: AI-Proof Investment Areas The chapter discusses the advancement of AI and its potential to make human work obsolete, reflecting on the philosophical perspectives of Nick Bostrom on a future where all human activity might become redundant. It explores investment areas in which individuals can deploy their time, effort, and resources to remain valuable in an AI-dominated world, providing insights into living a fulfilling life despite AI's pervasive influence. The interviewer, Johnathan Bi, sets the stage for a conversation on maintaining relevance and finding purpose in such an era.
04:00 - 05:00: Learning Over Production The chapter, titled 'Learning Over Production,' begins with a discussion led by a fellow at the Cosmos Institute focusing on philosophy and artificial intelligence. The speaker, Nick Bostrom, emphasizes the transformative impact of AI on human activities. He discusses how AI alters the 'payoff functions' or the value and incentives of various human endeavors. The chapter highlights the notion that certain activities are likely to become obsolete faster due to AI, while others, particularly learning, may prove more resilient and sustainable.
05:00 - 06:00: Relationship Over Things The chapter 'Relationship Over Things' explores the importance of prioritizing relationships and learning over material possessions and production. The author presents a thought experiment regarding a hypothetical technology designed to facilitate learning. This technology is described as extremely challenging to develop because it involves altering billions, or even trillions, of synapses in the human brain to enable language acquisition or experiential learning. The discussion highlights the complexity and potential impact of such a technology.
06:00 - 07:00: Capital Over Labor The discussion centers around the potential obsolescence of intellectual work due to advancements in AI. It highlights how AI could potentially generate superior content, such as books, podcasts, and interviews, surpassing human effort. There is also an emphasis on the capability of video AI to create more visually appealing content than traditional human conversations. The chapter suggests a contemplation on the future of intellectual pursuits and their value in an era dominated by artificial intelligence.
07:00 - 08:00: The Concept of Deep Utopia The chapter explores the challenges and limitations of integrating AI into the learning process. Despite technological advancements, the effort of reading and skill acquisition remains largely unaffected by AI. The text discusses the complexities involved in developing neurotechnology capable of directly downloading skills, highlighting the necessity of understanding current synaptic configurations.
08:00 - 09:00: Leisure in Deep Utopia The chapter explores the concept of integrating new knowledge into one's existing understanding without disrupting current intellectual frameworks or compromising personal identity. This intricate process should ideally be handled by a mature superintelligence, emphasizing the complexity of the task and its futuristic implications.
09:00 - 10:00: Defense Modes for Life in Deep Utopia The chapter explores the concept of controlling one's mental processes in a deep utopian setting, emphasizing traditional methods such as reading, thinking, discussing, and self-improvement techniques like meditation. It reflects on Aristotle's view that the quest for knowledge and the contemplative life is the most inherently human pursuit.
10:00 - 11:00: The Value of Pleasure The speaker reflects on the notion often proposed by philosophers that the optimal state of being human is to be a philosopher. They find it amusing yet contemplate that there might be truth to this assertion. The speaker shares a personal revelation from their late teens, suggesting that philosophy might have an expiration date. They propose that current philosophical endeavors may eventually become obsolete or reach a conclusion.
11:00 - 12:00: Artificial Purpose and Social Cultural Entanglement The chapter delves into the implications of AI potentially becoming super intelligent and the possibility of human cognitive enhancements. It suggests that these advancements could make existing human intellectual endeavors obsolete. Rather than pondering eternal philosophical questions, the chapter proposes focusing on the subset of questions pertinent to these emerging technologies and their impact.
12:00 - 13:00: Rescuing Necessity and Purpose The chapter discusses the significance of philosophy in addressing timeless questions and the curious nature of humans in seeking these answers. It explores the idea of whether it matters to resolve these inquiries now or in the distant future, especially considering the possibility of extended human lifespans. The text reflects on the notion of preserving mysteries instead of hastily solving them, suggesting that some questions might be better left unanswered to enrich human experience over time.
13:00 - 14:00: Theological Aesthetic of Deep Utopia In this chapter, the discussion revolves around the concept of spreading out knowledge and discovery over time rather than having immediate answers to all questions. This approach emphasizes the value of ignorance as a scarce commodity, suggesting that it could be preserved and cherished, much like a rare bottle of champagne reserved for special occasions. The idea is to maintain a sense of mystery and continued learning that extends into the distant future.
Focus on These AGI-Proof Areas | Nick Bostrom Transcription
00:00 - 00:30 AI changes the payoff functions of different types of human activities learning I think over production relationship over things and then capital over labor priest prostitute politician there might be a demand specifically that the service be provided by a human being as opposed to a machine you think you want this fundamental novelty deep purpose this world-historic sense of meaning what you actually want is a lot of pleasure uh social engagements aesthetic experiences we are seemingly approaching this
00:30 - 01:00 critical juncture in human history upon which like the next billion years might depend like if you want real purpose like knock yourself out now because it's never going to be like more at stake than it is right now and in the next few years what was striking to me was how theological our entire conversation today has been and how similar to the Christian afterlife once you have a superintelligence explosion you sort of zip fast forward to technological maturity human capital is
01:00 - 01:30 a depreciating asset in this world of advancing AI when I used GPT-3 for the first time it filled me first with awe and then with dread because I could see how close AI was to surpassing me in research writing philosophy that I trained so hard for if the Copernican revolution took away our special place in the universe and Darwin robbed us of our special place in nature then AI threatens to undermine the last pride of the human
01:30 - 02:00 race our intelligence today work gives many of us purpose and meaning but AI is making more and more of that work obsolete until one day perhaps all human activity may be redundant Nick Bostrom's Deep Utopia is about that day and what to do about it in this interview you're going to learn where you can invest your time effort and resources that is most AI-proof and what the good life looks like in a world where AI has made you obsolete my name is Johnathan Bi I'm a
02:00 - 02:30 fellow at the Cosmos Institute researching philosophy and artificial intelligence if you want to make sense of this new age of AI we're entering please subscribe without further ado Nick Bostrom AI changes the payoff functions of different types of human activities certain activities are going to be made obsolete a lot sooner and other activities seem to be a lot more robust and so the three I want to talk with you about are learning I think over production
02:30 - 03:00 relationship over things and then Capital over labor so by the first one learning over production what I was thinking about was in your book you gave this thought experiment of a technology that can help us learn and you described this as potentially one of the most difficult Technologies to create because it requires of you potentially reading and then changing uh billions if not trillions of synapses in our brains right to be able to learn a new language or or gain the experience of being
03:00 - 03:30 becoming a good father or a good mother and so my thought is anyone engaged in intellectual work you know your production may be rendered obsolete maybe in just a few years' time we can feed in you know your book Deep Utopia it'll produce a podcast an interview AI will generate a better script than the one that we can have today and then video AI is able to create something a lot more beautiful than the conversation we're having however if I wanted beautiful yes even more beautiful however if I wanted to understand Professor Bostrom's work I
03:30 - 04:00 still have to do the hard work of sitting there and reading it it seems for a much longer time so that seems to be one area of life that's going to be a lot more resistant to AI innovation yeah if you think about what it actually would require through some sort of neurotechnology to sort of directly download skills so first you would have to have the ability to read off your current synapses how they are configured right and and as you said you know millions or billions or trillions
04:00 - 04:30 of them uh then you would need to interpret you know what all of those synapses actually encode currently and and then figure out how they would need to be changed in such a way that they now encode this additional knowledge that you want to download without sort of messing up what's already there or changing your personality too much and and then you would have to physically change all of these so this definitely seems like a task for a mature superintelligence but until that time like if
04:30 - 05:00 you do want to have some sort of relatively fine-grained control over what goes on inside your brain for now you know the best method is the traditional one of you know reading and thinking and talking and working on yourself meditating etc I think that's actually quite optimistic because it means at least according to Aristotle in the beginning of his Metaphysics he says you know all men desire to know and that knowledge and the contemplative life is in some sense the most human life for Aristotle that
05:00 - 05:30 that is the one that will be made obsolete at the end yeah I always find it a little bit funny when philosophers are like thinking what's the best and highest form of being a human and the conclusion is being a philosopher but you know maybe they are right well I've actually for a long time thought of philosophy as something that has a deadline in fact this I think occurred to me in my late teens it seemed to me that at some point our current philosophical efforts
05:30 - 06:00 would be rendered obsolete either by AIs that could you know become superintelligent or maybe by humans developing cognitive enhancements of various forms that would make subsequent generations much better at this and so rather than spending my time thinking about the sort of eternal questions of philosophy it seemed more useful to focus on that subset of the questions in
06:00 - 06:30 philosophy where it might actually matter whether we get the answer now as opposed to in a few hundred years would you prefer to know the answers to those Eternal questions well one is uh certainly curious um but um if we do end up in a trajectory where human lifespan becomes extremely long um then maybe rather than sort of using up all these Mysteries
06:30 - 07:00 right away to immediately know the answers maybe you would want to spread it out a little bit so that you have sort of interesting things to learn and discover even you know hundreds of years thousands of years into the future um you know ignorance might become a scarce commodity maybe put some on the shelf like a bottle of champagne you know that's from a specific year that there are only small numbers of left you might want to save it for a special occasion
07:00 - 07:30 but yeah there might be some other um I guess professions as well that are sort of relatively immune or areas where it's not just the specific knowledge and skills we have but the fact that these things be done by a human is regarded as significant in its own right could you give some examples um well I mean the priest prostitute
07:30 - 08:00 politician um where there might be a demand specifically that the service be provided by a human being as opposed to a machine even if a machine could have all the same functional attributes um I want to move on to the second type of human activity that is somewhat AI-resistant and that seems to be relationships over things and what I mean by that is in your book you gave a thought experiment of let's say we're at technological maturity and we have
08:00 - 08:30 a better version of an AI parent in every domain right so it's better at changing diapers it's better at teaching better at emotional support I still would be willing to bet that most people wouldn't want to completely outsource their role as a parent this could be even more resistant to even technologically mature AI because what's constitutive to forming a relationship is that it requires at least in our current view two humans yeah I think relationships is
08:30 - 09:00 one of the more plausible places where we might find purpose that survives this transition to technological maturity there is some value in honoring and continuing this existing relationship between two people um and that even if you could sort of teleport in a robotic surrogate that would be sort of functionally superior it wouldn't be as good or maybe it would be better in some respects but it would also lose this value of of sort of of continuing the current
09:00 - 09:30 relationship even if the robot sort of appeared indistinguishable from the original parent if you met in a thought experiment where nobody would actually notice any difference you might still think that the reality is that there is now this different person who is playing the parenting role and if you care about sort of truth in relationships that might already be a disvalue and so yeah the existing human relationships to the extent that they sort of partially consist of intrinsically valuing this connection to
09:30 - 10:00 a particular other being would potentially be resistant what are things in a child's education that are potentially made more obsolete given the current wave of innovation in AI well potentially all of it um but we don't know how long that will take right so it makes sense to hedge your bets a bit you don't want to find yourself you know 18 or 20 going on the labor market with no skills and it turns out the whole AI transition has been
10:00 - 10:30 delayed um so yeah I would like want to make sure to hedge your bets get the broad basics some useful stuff but then also like have fun while it lasts I think it would also be a shame to have wasted your childhood um perhaps we are too trigger-happy in declaring that certain things are made obsolete given technology for example I went through the Chinese education system and it had perhaps when compared to the Western education system
10:30 - 11:00 an extreme emphasis on math like rapid arithmetic computation including memorization the memorization of the classics and I think it's tempting to think that you know even with the printing press you know memorization is no longer needed we have access to all these great books but certainly with the internet and with calculators in our pockets that we don't need to know these things even though machines can perform it better there's a lot of alpha I think to be had in perfecting these skills because it
11:00 - 11:30 it's not just that it cultivates your character it enables you to see things that other people might not I would also say general um judgment and especially in these kind of turbulent times with all like social media so many memetic dynamics that people are now exposed to so having a kind of robust um I don't know cognitive immune system where you can sort of reflect yourself on what makes sense and what doesn't and not getting swept up in whatever latest fad or cult or
11:30 - 12:00 or memes that are sort of bombarding you I think that certainly is something I also want to see um I want to explore the third and last potential activity that would be more resistant again this is not in technological maturity but just on the way there and that's capital over labor because one anecdote you brought up is how shocks to the labor market throughout history have changed whether the proportion of
12:00 - 12:30 production going to capital or labor has shifted and so for example post Black Plague was one of the best times to be a peasant in human history because so much of the labor force was culled that they had a lot more bargaining or negotiating power and so I think you hear these stories of them having like holidays for like half the year or something like that because they had so much more bargaining negotiating power it seems like AI again on the way there will do the reverse right it will make labor hyperabundant and so
12:30 - 13:00 practically if I'm someone in law school or someone in medical school training 10 20 years before I can start making capital and I'm building up my labor I'm building up a skill that trade-off suddenly seems a lot less attractive um yes human capital is a depreciating asset in this world of advancing AI um and so investments with very long payback times especially if
13:00 - 13:30 you're doing them only because of the ultimate payoff of a higher salary 20 years into the future or something should be discounted you know accordingly I mean you would have to have scenarios where AI development takes longer or where it gets so regulated that they can't perform these particular jobs let me read you a quote from your book Keynes predicted that by 2030 accumulated savings and technical progress would increase productivity relative to his own time between fourfold and eightfold
13:30 - 14:00 and as a consequence the average working week would decrease to 15 hours as we approach 2030 the first part of Keynes's prediction is on track to vindication the second part of Keynes's prediction on the other hand would appear to be about to miss its mark if trends are extrapolated while it is true that working hours have declined substantially we are nowhere near the 15-hour work week that Keynes expected what do you think this is greed has triumphed over sloth um I
14:00 - 14:30 think uh the reason is twofold first uh status competition so we work hard to afford a whole bunch of things going well beyond our basic material needs so we don't just want a car we want a car that's nicer than the cars everybody else has etc and so that provides potentially unlimited demand right for more because the bar keeps rising and then I think the other factor I think
14:30 - 15:00 is a kind of work ethic that is a fairly ingrained norm that it's sort of virtuous to be not just lolling on the sofa but to sort of exert yourself it's interesting because most people think that we've yet to approach this sort of post-scarcity world where all of our fundamental needs can just be met like that but at least in the first
15:00 - 15:30 world you know North America Western Europe that seems to be far gone like most people are able to sustain themselves just fine and so it seems like maybe it's less so greed that has triumphed over sloth but vanity because now the reason that people work I do think it's mostly this kind of social drive Rousseau I think talks about this in his second discourse where he kind of flips the view of the direction of scarcity so in Rousseau's second discourse in his state of nature before civilization is formed there's actually
15:30 - 16:00 a state of natural abundance not because the productive capacities are actually increased but because people don't have these social desires vain in Rousseau's perspective vain desires that civilization has cultivated but it's actually in civilization that scarcity increases again not because the amount of things has diminished but because of the amount of new desires that have been born yeah I mean I think there is a form of scarcity uh
16:00 - 16:30 uh that has been quite pervasive through human history and prehistory even aside from civilization um in that there were many things that were really needed in some sense but not available to most people or indeed any people say like quality health care if you were a hunter-gatherer and you got sick you know maybe you could rub on some leaf you know but there were a lot of conditions that couldn't be fixed that way and occasionally there would be periods of famine as well you know maybe
16:30 - 17:00 that you get the cold season or something else um and so I think it's kind of to some extent been endemic up until a couple of hundred years ago and in many parts of the world until more recently or still today we've been in a sort of approximately Malthusian condition so yes there have been various advances in productivity and all kinds of technologies you know over thousands of years but whenever the economy grew by
17:00 - 17:30 10% the human population also grew by 10% and average income still hovered around subsistence um you know with some fluctuations and I think it was really only in special circumstances like you mentioned in the aftermath of the plague you know where there had been a great culling um or you know maybe people came to a new island or discovered a new resource where there were no humans yet like for a period of time they could enjoy plenty and then since the Industrial Revolution where economic growth
17:30 - 18:00 has been so rapid that population growth although it has been high hasn't been able to keep up that you have been able to sort of have increasing average incomes so we talked about um AI right now and the current trends we're seeing but now I want to move into the heart of your book which is what you described as deep Utopia full technological maturity and as a brief overview for our listeners this is where AI far exceeds human capacities in almost all tasks where we can simulate right virtual experiences to be indistinguishable from
18:00 - 18:30 the experiences we experience now where the world is plastic that anything materially that can be done within the realms of physics we have the technology to do but also that we ourselves are plastic right so in this deep utopia um you described how for obvious reasons work other than the types of work you describe priest prostitute and politician other than jobs that constitutively require humans are already made redundant but I
18:30 - 19:00 think the most interesting claim here on redundancy is that leisure or a lot of human leisure would also be made redundant as well why is that um if you go through those sort of leisure activities one by one you can for many of them like cross them out or at least put a little question mark on top of them whether they would still have a point in this condition of technological maturity so maybe um some billionaire goes to the gym you know five times a week because they want
19:00 - 19:30 to be uh like fit and healthy and maybe they feel more relaxed afterwards but if you had this drug that would induce exactly the same effects without you know you having to go there and sweat and make that effort then you could still go to the gym but why like if you could just pop the pill you get exactly the same health effects um the same body the same mental clarity after like I think many people would just pop the pill um we talked earlier about child-rearing like maybe people would
19:30 - 20:00 still want to do that but a lot of the specific activities involved in child-rearing like changing the diapers a lot of it is kind of individually things you might be tempted to outsource um and so then you can like go through leisure activity by leisure activity and I think many of them would lose their original purpose at technological maturity and then if you did them it would have to be for some other reason
20:00 - 20:30 generally speaking I think you would have a kind of post instrumental condition where to First approximation that would be nothing you would need to do in order for something else to happen because that would be a shortcut to the other thing uh you could sort of press a button or or like request your AI or robots to to do that thing and you wouldn't have to exert effort yourself to get that outcome and so all the things we do for those instrumental reasons uh would drop out of the picture with maybe
20:30 - 21:00 with some like class of exceptions and so I think this is the most interesting insight um in your book which is even if we solve all of the tremendously difficult problems around alignment um around all the other issues that you described in your various other books and we get to the maximally good state right where politics society and AI is purely working for us it's still in some sense difficult to imagine
21:00 - 21:30 what a good life looks like because it's so different from how we conceive of a good life today and so yeah when so much of work and so much of leisure is made redundant what is the best type of life one can live in such a utopia well it's actually quite challenging to really envisage a great life because yeah it would risk undermining a lot of the things that currently give meaning to life and you would feel almost at a loss it's this kind of do we just become
21:30 - 22:00 these kind of amorphous you know drugged drugged out pleasure blobs or what what like what and then it might seem quite alienating and un unattractive I I think though that if you if you push through that I think there is a a dare on the other side of something that actually would be very worthwhile so in your book you described five let's call it Moes of Defense uh for for a life in deep Utopia
22:00 - 22:30 and I found this way of framing it already very interesting, because you'd think that in utopia you're not defending things, but here you are defending, because of how difficult it is to imagine a good life here. Hedonic valence, experience texture, autotelic activity, artificial purpose, and social-cultural entanglement: these will become, in your view, the pillars of the good life once you push through, as you describe it. So can you give us an overview of what these mean, the first one
22:30 - 23:00 being hedonic valence. So this is basically the observation that we could have a lot of pleasure in utopia, pleasure in the wide sense, not just physical but mental enjoyment. You could actually immensely enjoy every hour and every day of life, to degrees that would at least match the
23:00 - 23:30 peak current human experiences, perhaps go a lot beyond that. And it's easy to dismiss this because it's a philosophically boring point. I mean, it's kind of trivial that when you have this advanced neurotechnology you could do this, and then we immediately jump to thinking, well, that's a kind of degenerate existence; we think of junkies and it seems not really appealing. But actually I think it is super important, maybe the most important, and
23:30 - 24:00 this on its own might make it very worthwhile to swap out the current world for the utopian one. I mean, I should say there is also a sort of minus-one mode, which the book doesn't talk very much about but which is super important, which is just to get rid of the negatives that currently plague the human condition, and those are immense and terrible. But here we are thinking about what you could do beyond just getting rid of them. And so, yeah, just this hedonic wellbeing,
24:00 - 24:30 that every day could actually be an immense delight. There are some people who are philosophical hedonists, who think pleasure is the only thing that matters, along with the absence of pain. So for them the case is closed at this point, right? But there are other value systems that maybe think pleasure is a good thing but that there are other good things too. So let's see what we can add to mere hedonic wellbeing. And so the second one is experience texture. We observe that these utopians don't just have a great level of enjoyment;
24:30 - 25:00 but rather than being dazed-out junkies with a diffuse, confused sense of pleasure, they could attach this pleasure to valuable experiences, like, say, the appreciation of beauty, or the understanding of deep, important truths. So pleasure in learning and understanding the basic laws of physics, learning about human nature,
25:00 - 25:30 learning about and appreciating great art and natural beauty, plays, and so on. That's how they get their fun. It's not like the junkie; it's a kind of connoisseurship that is also exquisitely joyful, maybe appreciating the moral virtue and goodness in various people, historical figures, and so forth. That already seems to make it more attractive. And then you can add some further things. So
25:30 - 26:00 it's not the case that you need to imagine these utopians as mere passive recipients of these experiences of truth, beauty, and understanding that they take enjoyment in. This would be the autotelic stuff: they don't just sit there passively observing great beauty and feeling joy; they could go around and do things.
26:00 - 26:30 Artificial purpose is purpose that we create in order that we are then able to engage in purposeful activities. So you could set yourself some maybe arbitrary goal, and once you have that goal, if you have selected it in a suitable way, you could then have instrumental reasons to engage in various efforts to achieve it. The key
26:30 - 27:00 here is that the goal you set yourself has to be constitutively such that it calls on you to make an effort, rather than for you to press a button and have the robot do it. So you can bake into the goal that it needs to be achieved by your own efforts. If the goal is to achieve a certain thing through your own efforts, then once you have that goal, for whatever reason, you now have purpose, because the only way you can achieve your goal is through your own efforts. So
27:00 - 27:30 we can think of the paradigm cases as various forms of games. Maybe you decide to play a game of golf. There was previously no reason why the ball would have to go into a sequence of holes; it's a completely arbitrary goal. But you adopt that goal, and now you have a reason to try hard to hit the ball in exactly certain ways to achieve it. And you could generalize that: you could have much more complex games, multiplayer, with multiple modalities,
27:30 - 28:00 extending maybe over years, and that could then give you purposeful activity, not just activity. So those are the first four, and there is one remaining mode, which is that some natural purposes could survive into technological maturity, but those would be softer purposes. What natural purposes would survive in this post-instrumental state? Well, take something like this: if you currently have some value or goal,
28:00 - 28:30 say that you value the continuation of a certain tradition. That might be something you just happen to value; many people right now would have that as one of their values. And the tradition might be such that it just isn't continuing unless humans continue to do it. It might be constitutively part of the tradition that humans do this thing every year in a certain way; imagine some ritual or something like that. So those would survive, because you could
28:30 - 29:00 create robots who would be doing this stuff, but it wouldn't count as continuing the tradition or honoring your ancestors. Your value of honoring your own ancestors might not be served by building a whole ensemble of robots who go around paying visits to their graves or thinking about them; it might require you to do it for that value to be achieved. More broadly, I think there are various forms of social entanglement. We
29:00 - 29:30 touched upon this a little earlier when we discussed parenting: if there is an existing relationship between a child and a parent, and part of what is valuable about it is that these particular individuals are relating in a certain way, then that could also give you a natural purpose, even at technological maturity, to continue to do certain things and interact in various ways with your child. And this might at first
29:30 - 30:00 seem weak compared to a lot of the reasons we have for doing things today, where there are very stark, tangible, immediate consequences if we fail to do them. Maybe somebody has to go into work every morning because otherwise they will lose their job, and then they can't pay their rent, and then they will get thrown out on the street, and it will be cold. That is a very hard set of consequences, and a lot of the stuff we do today is motivated by these kinds of hard
30:00 - 30:30 consequences, and in utopia a lot of that would go away. But I think, just as when you walk outside in the daytime you see the sun and you don't see the stars, which are much fainter than the sun but still there, I see these subtle values as already being there. There's a whole host of these almost aesthetic reasons for doing
30:30 - 31:00 things that are blotted out from sight currently, because there are these more screaming moral and practical needs that we need to take care of in our lives. But if you imagine a scenario where all of that went away, as if the sun set, then it would make sense for our evaluative pupils to dilate to take in more of the fainter light that comes from these subtler values. Right, so the way I understand these five rings of defense: for the first four rings, a critic might say that what's lacking from
31:00 - 31:30 them is a kind of necessity, right? The feeling that I need to do XYZ because it comes from some external source that gives my life purpose. And it seems like you're trying to rescue necessity in utopia by highlighting a subset of activities, like honoring one's ancestors, that constitutively require us humans, or maybe even stronger, you as an individual, to do them. And that's how you rescue necessity. And so maybe an analogy
31:30 - 32:00 here is LeBron playing in the NBA, bringing a championship back to his hometown of Akron, Ohio. There's nothing necessary about that, right? It's a set of rules that we've invented for ourselves, but that doesn't make it any less meaningful for the people involved. Is that a good way to understand it? Yeah, well, if he independently wants to do this, or if there is an independent value that this
32:00 - 32:30 is something that should happen. If it were the case that he just set himself this goal, because otherwise what would he do all day long, and then convinced himself to pursue it, then it would be an instance of the fourth mode, artificial purpose. But if there's some independent reason that was not just created in order that somebody has something to do, then it would count as a natural purpose. Right. And in this example of LeBron winning the championship for Cleveland, it
32:30 - 33:00 would be something like the recognition and expectation of his family and friends and the whole city he grew up in. Would that be a good example of something that would make this social-cultural entanglement instead of artificial purpose? Yeah, so that would give him a real reason to do this, if people continue to want it to happen and if he cares about what other people want or what they will think of him. So it seems like social-cultural entanglement relies heavily on the economy of
33:00 - 33:30 recognition, right? What people desire, what people give honor to, and what people give esteem to. Do you think that humans will start esteeming the recognition of non-human agents? Maybe we're already starting to see the genesis of this: there are these dating bots that already exist, and right now they appeal only to people who are struggling to form real human relationships. But I remember there was one dating bot where, after they changed
33:30 - 34:00 the algorithm and the bot behaved nothing like it previously had, the users were in tears, as if a real family member had died. So it seems like if eventually we would care as much about artificial recognition as about human recognition, then a lot of these social-cultural entanglements would be threatened, because presumably we can tell a robot what to recognize or what to give esteem to. Yeah, that's right. So you could
34:00 - 34:30 have these future social entanglements with various forms of digital minds. But you have a kind of legacy: if you care about certain people now, you might not want to change that, even if you had some brain technology that could extirpate that care and implant a different one. It's maybe like somebody who has the fate of falling in love with the wrong person: it might have been better if
34:30 - 35:00 they had fallen in love with somebody else, but once they are there, they are there. And so we might have these legacy purposes that come from current commitments that we care about and that we will carry on into technological maturity. I don't think it's the only thing. There certainly are these social entanglements, but I think there might also be broader aesthetic reasons. You might think it would just be a more
35:00 - 35:30 beautiful way to live your life if you did certain things yourself, and upheld various things, and kept doing certain things, even while availing yourself of a lot of the conveniences. So I think those could survive, and spiritual and religious reasons for doing things could possibly survive as well. Right, I see. And so I want to go back to the beginning, about pleasure, because I want to defend your position a bit. There are entire schools of very serious philosophy, for
35:30 - 36:00 example the Epicureans, who do see pleasure as the supreme good, and so that literally might be enough. But even someone whom we naturally consider to treat pleasure as a secondary or tertiary thing, someone like Aristotle: when you read him closely, the virtuous man is someone who takes pleasure in doing the right thing. So it's about correctly aligning one's pleasure. So I think that adds credence to your view that
36:00 - 36:30 pleasure itself might be worth jumping into that state for, if nothing else. Yeah, I mean, in the second case, particularly if the pleasure was coupled with the right objects, if you took pleasure in the right things, which could be taking pleasure in contemplating the right things, or engaging in the right kinds of activities, and so forth. So yeah, I think in general it's nice if
36:30 - 37:00 we steer towards a future that will score high on a wide range of different values and according to many different people's preferences, if we can do that by compromising only slightly on any one of the values. Because the future could be very big in terms of the resources available, there would be a bunch of different values that could be quite cheap to satisfy, and so we should make sure to
37:00 - 37:30 satisfy all of those values. There are certain values that are resource-hungry: if you're a utilitarian, for example, you could always make more happy people, and so you just want more and more resources. But if there are values that need just a little bit and are then almost maxed out, then it seems like, let's do that; and then, for outer space, maybe the utilitarians could have a bigger say about how we dispose of that. What about values that could
37:30 - 38:00 compete with each other? I think this factors into the pleasure case, because in Christian theology, for example, a lot of the sins aren't about loving the wrong things; they're about liking the right things to the wrong degree. Lust and gluttony are all about liking good things, but to such an immense degree that you ignore some of the more important things in life. So here's a concern, Professor Bostrom: you might be able to design a perfect super-drug that's like Molly
38:00 - 38:30 plus mushrooms plus cocaine, all the pleasures of all the drugs we have right now with zero of the side effects, continuously. And there might be nothing objectionable about that in itself, except for the fact that such an immense pleasure would distract us from pursuing some of the less obviously good and interesting parts of our lives, which are no less important for a flourishing life in the long run. Yeah, so I think that certainly would be the case today.
38:30 - 39:00 With more mature technology, you might imagine having the ability to create more fine-grained experiences. First, that means you could remove the addictive potential and the stupefying effect of some intoxicants, the adverse implications for your liver and blood pressure, all of these things. But also, psychologically, rather
39:00 - 39:30 than having this kind of monotonous, dumb pleasure as the alternative to living a rich and engaged life in deep relationships with other people, you could weave these things together in a more integrated way, so that the pleasure comes from these virtues and appropriate activities and thoughts and experiences. Right, so to give an example that certainly will be outdated: it's like ingesting
39:30 - 40:00 a drug, or undergoing biotechnical enhancements, that gives me immense pleasure when I brush my teeth, when I visit my friends, when I go to bed at the right time. It's exactly what Aristotle described: interweaving pleasure into what would also in itself make for a good life. Yeah, that seems right, though some of the specifics might differ; you might not need to brush your teeth anymore. Exactly, that's why I said it would be outdated. But yeah, so it would be
40:00 - 40:30 focused more, perhaps, on intrinsically valuable activities and experiences, as opposed to these kinds of instrumental necessities. Right now there are all these things we have to do in our lives, and so, yeah, let's take pleasure in them, because then we do them more and it all works. But in a condition where you didn't have to do all of these things, where you don't have to clean your house because you have a robot that cleans it, and your teeth are enhanced so they don't rot even if you don't brush them, and so on, then in that scenario
40:30 - 41:00 it seems like what we should be spending more of our time on are activities that are intrinsically valuable, and so taking pleasure in those would seem to be appropriate. Yeah, and that might be even better for someone concerned with virtue, because we would be able to program the virtuous activities to be more pleasurable. So, for example, to respond to the Christian objection that I just brought up, we might say yes, we are going
41:00 - 41:30 to make sexual activity ten times more pleasurable, but we can also dial up reading the Bible, or the contemplation of God, to be even more pleasurable, so that pleasure naturally directs us to a good life in and of itself. Yeah, or the sexual pleasure being specifically connected to the in-wedlock case, etc. I mean, we already have a little of this; Ozempic, for example, I think reduces people's
41:30 - 42:00 vice of overeating. So there are these limited ways available now. There might be harder trade-offs, though. There might be certain values where, for example, maybe it would be appropriate for these utopians occasionally to remember the earlier times and the horrors and tragedies of history, and to feel sad and mournful when they contemplate that. Maybe they would have, I don't know, imagine an annual ceremony where you try
42:00 - 42:30 to think of all the people who died before they ever got to experience this, with some beautiful ceremony to honor them. And then, I mean, certainly certain forms of pleasure would seem to be inappropriate there, and maybe you would need to actually have sadness, or maybe something bittersweet, I don't know. There's a huge design space here that hopefully could be worked out.
42:30 - 43:00 I see. One thing that seems to be threatened, even with these five modes of defense, is interestingness, right? Not living a boring life. Because the challenge is, you could be immortal; you could potentially live forever, until something catastrophic happens to you. Is there any defense against a boring life, given how long we'll live? Well, first of all, there's an even more basic distinction here, between subjective boredom and objective boringness. The
43:00 - 43:30 subjective boredom is just to feel bored; that certainly could be abolished in a solved world, through the same kind of neurotechnology that we have already discussed. And we have to be careful that that doesn't infect our intuitions about the objective boringness or interestingness; constantly remind yourself that the subjective part is already a big chunk of what we normally associate with things being boring or interesting. All these people in utopia could be totally fascinated all the time, completely immersed, finding that every fiber of their
43:30 - 44:00 being says, wow, this is cool and interesting and I want to dive in. So that's already there. But if we're talking about the objective notion of interestingness, that is a little bit harder to pin down; it's some notion that certain experiences or activities are such that it would be fitting to be interested in them, and for others it would be unfitting. So, just staring at something: maybe you think that even if you could take a drug that would make that
44:00 - 44:30 feel super interesting, there is some sort of normative disvalue in that, because it's not the kind of object it would be appropriate to be super interested in. So then you can think about objective interestingness. Well, to the extent that it involves something like complexity and richness and sophistication of your experiences or activities, the utopians could just crank that up to eleven. There is a slightly different version of objective
44:30 - 45:00 interestingness, which calls for fundamental novelty, and that you might run out of. For example, if you think it's not nearly as interesting to learn about fundamental physics as it is to discover it for the first time, then eventually we will have figured out all the basic laws of nature, the fundamental truths, and
45:00 - 45:30 the big general concepts, and then scientists will have to content themselves with finding smaller and smaller truths, more local truths that are less profound. So that's a form of interestingness that you would eventually run out of, or maybe you would set aside little pockets of ignorance and mysteries, as I alluded to earlier. If you think about
45:30 - 46:00 human lives currently, taking this second notion of fundamental novelty in an individual person's life, I think a lot of it happens really early on, in the first couple of years of life. Think about it: you discover that there is a world; that's a pretty fundamental discovery, right? And then you discover that it contains objects that continue to exist even when you look away; wow, that's
46:00 - 46:30 a jaw-dropping, real cognitive revolution, right? And then you discover that there are other people in the world. Now, when you're grown up, what's the most profound thing you learn in a given year? It registers a lot lower on the Richter scale of fundamental novelty. So we are already suffering hugely diminishing returns within our current lifespan. And if you look at planet Earth as a whole, as if some alien came here and examined the average person's life, how much fundamental novelty is there?
46:30 - 47:00 Maybe a few people are doing some interesting new things, but most people are just doing the same: they have the same old thoughts, the same old fears, and the same hopes. Boy meets girl, get the job, get the paycheck, get old, somebody passes away, you're sad for a while, and so on. Through a certain lens you might think that our current lives would be extremely boring already, just because it's already pretty much been done and only small details are different. In fact, I want
47:00 - 47:30 to generalize this. This might be a caricature of your view, but my takeaway of how you are essentially defending life in utopia is this: you're saying, look, you think you want this fundamental novelty, you think you want deep purpose, this kind of world-historic sense of meaning. But look at your life now. How many of us have those things? And yet so many of us live great lives. What you actually want is a lot of pleasure, social engagement, aesthetic
47:30 - 48:00 experiences. Your argument is kind of like the Sophists' position in the Gorgias: let's stop talking about these highfalutin values with big fancy names, like purpose, meaning, and interestingness, and let's get the basics done. And the example you gave was Nietzsche. You said something like: Nietzsche talks a big game about his higher men and the Napoleons of the world, yet he lived like a bohemian, reading and writing books in the Alps. Is that a fair characterization of your view? Well, I think Nietzsche was one of those
48:00 - 48:30 people who would have a relatively more plausible claim to interestingness in this latter sense, in that he thought big, original thoughts and really dove into them. I think that if you have an axiology, a theory of value, that is pluralistic, then it's quite possible that there's some extra value in also having this third kind
48:30 - 49:00 of interestingness, the kind that registers globally in a significant way. I'd say that even there, though, if you just zoom out a little bit further, it's probably an infinite universe out there, with a lot of other planets and a lot of other civilizations that have already thought the same thoughts, and thought much better thoughts, and created better things. So, depending on your scale, we might already be completely unable to realize any fundamental novelty in the world. But if you do focus on this
49:00 - 49:30 mesoscopic scale, which is either an individual life or the globe as it is now, with eight billion humans or so, and you think that's where you want to make a significant difference, then I'd say that right now is the golden era of purpose. Right now there are immense stakes in the world; there are a lot of immediate, morally urgent causes
49:30 - 50:00 where you could individually make a significant difference. Plus, we are seemingly approaching a critical juncture in human history, upon which the next billion years might depend. If you want real purpose, knock yourself out now, because there's never going to be more at stake than there is right now and in the next few years. And if you can't even be bothered to act right now, then how much value do you really place on this kind of global purpose? And so is my
50:00 - 50:30 reading of your view of human nature right, which is that you're defending life in deep utopia not by rescuing this deep, global sense of purpose, meaning, and interestingness, but by suggesting people actually don't need or desire it? Yeah, I think it's one value of which we might have less in a solved world, but I think we could have a lot more of most of the other values, such that the net balance is an enormous
50:30 - 51:00 positive. I see, I see. I mean, yeah, okay, so maybe if there weren't people starving you would be deprived of the purpose of feeding them, etc., but still, it's a trade I would be happy to take. And there is something lost there; there's something nice and glorious in somebody going out of their way to feed the starving. That's a little plus. But there's also this huge negative, and if we could all just have enough to eat without that, I think that would be better. And you can, I think, generalize that. Right, when you looked back at
51:00 - 51:30 history in the book, you said that the periods and the people worth dramatizing or talking about are rarely the good periods or the good lives that you would want to live. Yeah, there's a big difference, and this is a fundamental thing to bear in mind when forming an opinion about this kind of utopian problem: you could evaluate a hypothesized condition from two different perspectives. What we, I
51:30 - 52:00 think, often default to, if we're not careful, is the external point of evaluation. We look at this future utopian condition as if it were a stage play; we sit in the audience and look at it, and then we give some kind of thumbs up or thumbs down. But from that perspective, I think we will tend to overvalue interestingness and drama. If you go to the theater, you want stuff to happen: there's a king and he gets
52:00 - 52:30 killed, and then the assassin flees, and then they overtake him, and so on, and likewise with the movies we make or the novels we write. So good stories often have a lot of suffering in them. But there is a different perspective, which I think is the right way to evaluate this: not how good utopia is to look at from the outside, but how good it actually is from the inside, to inhabit, to live it. And there, I think, the stories that are most interesting to read about are not necessarily the
52:30 - 53:00 stories that are best to live out in your own life, and we need to correct for that if we're actually trying to build a utopia that we would be moving into. It's not just a fiction or a screenplay, but an actual plan for what we want to spend the rest of our time in. So you mentioned something like: there are no wars in history that were worth it, no matter what kind of great art was produced by them. But I wonder, on the off chance that a war or conflict or bad
53:00 - 53:30 thing creates these civilizational, grounding pieces of art, would you still will it away? So here's the thought experiment: ex ante, I think you would say no, no Trojan War, right? But ex post, given that we know how fundamental the Trojan War was to establishing not only Greek tragedy but Greek philosophy and Greek culture, if you could wave your wand one way or the other, would you say, spare those lives in the Trojan War, I don't want my Iliad and Odyssey, or even the responses to them? I don't feel
53:30 - 54:00 entirely competent to make these judgments. I would think that at some point enough is enough. I mean, we've had a lot of wars by now, and we've had a lot of people dying for various causes and suffering horrible fates. And, you know, maybe there are certain kinds of value that can exist in human-style life,
54:00 - 54:30 and but if you think about this do we want like okay so maybe you want another few decades another few hundred years another few thousand years but like a 100 thousand need a million years more of of these like two-legged creatures running around here and killing one another and getting cancer and like having headaches and stuff like at some point I think we want to maybe unlock the next level um and uh and say
54:30 - 55:00 well you know if there are values in some of these tragic and beautiful things like it might not scale with a number of instances of the tragic and beautiful like having 10 tragedies doesn't create 10 times as much Beauty value as one like and and so it's the kind of value that seems to saturate right um whereas like the value of like a nice cup of tea you know the thousandth cup of tea might be you know just as taste just as good as the first cup of tea So eventually you've kind of
55:00 - 55:30 ticked all this off, you've had your big life, you've been through the youth of humanity, you've had your adventures and stuff, and then you settle down a little bit. I think there might be different kinds of adventures that might be a lot more interesting in many ways; it's just that maybe they involve less suffering in this scenario. So other than this global sense of purpose and meaning, global sense of stakes, global sense of novelty and interestingness, is there any fundamental set of human values that
55:30 - 56:00 will not be fulfilled in this utopia, or be fulfilled worse than currently? I think a lot of them have some connection with this, yeah, the sense of purpose and meaning; they seem to be particularly threatened by the affordances of a solved world. Depending on your more spiritual and religious views, there could be
56:00 - 56:30 I guess additional constraints there in terms of what could be achieved in a solved world. And then this kind of fundamental interestingness, of the form that requires novelty on a global scale, not a cosmic scale, not on a day-to-day scale, but on the scale of planet Earth, that also might run out. There might just be so many times you can discover: you can discover relativity theory once, the theory of evolution once, and then there might be
56:30 - 57:00 you know, 50 more discoveries of that magnitude that you can make, and after that it starts to dwindle. So that would be another example. And I think that's a very interesting insight on human nature, which is that humans are the type of creatures such that one of our core values requires us to be in a fallen world, or an imperfect world, right, meaning this kind of global sense of scale, purpose, or novelty. Our natures have maybe been conditioned on the existence of
57:00 - 57:30 problems, in that throughout human history and prehistory, and indeed all the way back to our great ape ancestors and way earlier than that, there were various forms of scarcity and things that needed to be done all day long: you had to check for predators, you had to get food, and so on. So a lot of our psychology kind of just assumes that there are these
57:30 - 58:00 needs. And we see a little bit of how problems can arise when that is no longer the case today, with obesity, right? We have psychologically evolved in a way that assumes food is scarce and you need to try to find it and grab it when you can and stuff yourself as much as possible, because maybe tomorrow there won't be anything to eat. And we've removed that constraint from our external world, at least in
58:00 - 58:30 wealthier countries; there's plenty, the fridge is full of food, and now there's this mismatch between our environment and our psychology. As we move to technological maturity, that little crack could open up much wider, and there could be a huge mismatch between what our evolved psychological nature is and what the environment actually demands of us. That's what creates this problem in the first place: that we would possibly need to change ourselves
58:30 - 59:00 quite fundamentally to become suited for life in utopia. Right, so here's a proposal for rescuing the sense of at least perceived global purpose, global novelty, global meaning, global stakes: wipe our memories and then go into a VR simulation that is indistinguishable from reality. Now, obviously the philosopher's objection is laid out in the Experience Machine, right, where the conclusion is that you wouldn't want to enter into such a
59:00 - 59:30 machine even if you couldn't tell, because it would be unfitting, that there's something objectively bad about it even if the experiencer doesn't know. But correct me if I'm wrong, I believe there were subsequent variations on this experience machine thought experiment that said: what if I told you that you live in an experience machine now, would you want to pull out? And I believe the intuition is that you wouldn't want to pull out, and so what people are really after isn't
59:30 - 60:00 fittingness to objective reality but a kind of familiarity. So if that's the case, what's wrong with wiping out our memories and pretending to live like Achilles in utopia for one life, and just keep on doing this to get that kind of global sense of meaning back? First, it's not clear how much meaning you would get from that. It would kind of fall, it seems to me,
60:00 - 60:30 into the artificial purpose category. But you wouldn't know, right? That's the key part, you wouldn't know. But that's also the case in utopia. I mean, if you want some partial amnesia, so that you forget about various things, that would be easily arranged. And I presume you might want to edit it, a sort of version 2: if you want to revisit history, there are a
60:30 - 61:00 lot of parts of it that I think you would want to omit from your recapitulation of it, either because they are too horrible or because they are just a bit dull. But certainly you could imagine creating virtual worlds that you could interact with and inhabit and explore, in different configurations
61:00 - 61:30 and with different variations. So I just want to draw a distinction here, because when I heard you say artificial purpose, I thought what you meant was, for example, going rock climbing: I could get helicoptered up to the cliff face, but I knowingly restrain my own set of available means to climb this rock face, hence making it artificial. But if I wiped my memory and entered into the life of Achilles,
61:30 - 62:00 phenomenologically I wouldn't be able to know, right? And so it would lose the artificial side of that, at least from the subject's perspective. Yeah, so I think there is a continuum there. You could imagine the rock climber: once they are halfway up the wall, there might not be a helicopter that could reach them in time. They didn't have to climb the wall, that was kind of artificial purpose, but once they are there, they
62:00 - 62:30 really have no choice other than to do their utmost to keep going, on pain of death, right? And similarly, you could imagine utopians, if they wanted to, creating little holes in utopia, where the world is not solved, where there's real need and constraint and stakes of various kinds, if you thought there was an added value in being subject to these forms of
62:30 - 63:00 risk. Now, you don't want to make too many holes, or you just destroy the utopia, right? If you destroyed the whole utopia, you'd be back to scarcity and real need again, but that would also mean giving up all the good things about it. But you might have little designed pockets, and maybe those would have real stakes, but maybe the stakes wouldn't be quite the "a child dies from brain cancer" type of stake, but more like: well, if you fail at
63:00 - 63:30 this task, you will have a month of being excluded from your normal fun gadgets and friends, and you have to work hard for a month to get back to where you were. Something more like human-scale stakes. Right, right. I think the interesting question about human nature, of whether we think the picture you painted in Deep Utopia is an attractive one or not, has to do with how important necessity is for a good life. And for someone on
63:30 - 64:00 the opposite extreme, someone like the Unabomber: he thought that even in our current technological society there are not enough necessary, primary actions that we have to do, and so even now he wanted to take us away from the technological utopia that we are in. I found that to be a very interesting extreme opposite end of this experiment. Yeah, although I think some of what he was thinking about were the sort of contingent psychological effects of living our
64:00 - 64:30 current lives, where there are various forms of psychological malaise, I mean from overeating, but also various kinds of malaise that can happen when people live in this modern, artificial way: you get addicted to your social media feed; and can you really connect to anybody else, can you have real friends, if you've never been in a life-or-death situation where you saw that they were a true friend even though they risked their life? There might be all sorts of ways
64:30 - 65:00 in which the mismatch between what we evolved to do and the current world creates psychological ailments. Maybe they are outweighed by the comforts and benefits, and I think that's possibly the case, but still there is this kind of psychological cost; probably some kinds of mental illnesses are more widespread because we're not perfectly adjusted to the modern world. But those things could be fixed. You wouldn't have to
65:00 - 65:30 overeat, or feel socially alienated, or suffer in other ways. You might even get closer to nature in some ways: rather than living in concrete square buildings, you could imagine that at technological maturity you would actually be living in some sort of savannah, you know, but minus the bugs perhaps, and the temperature would always be right. And so, first, you could adjust the
65:30 - 66:00 psychological apparatus so that it didn't have these negative symptoms; second, to some extent you could also adjust the environment so that it would in many ways better match what we were naturally designed to interact with. What was striking to me was how theological our entire conversation today has been, and how similar the thought experiment you set up is to the Christian afterlife. Let me just give you a few examples. In your Deep Utopia, plasticity means that the material world is not
66:00 - 66:30 really a concern, but we still have our individual bodies; in the Christian afterlife, individuality is likewise preserved. All kinds of social and political issues are resolved, and just as the saints in heaven supposedly spend their time contemplating God, a lot of the activities you described are contemplation-based. And you delivered this book in a lecture format, where you construct this world over six days, six days of lectures, and then you rest on the seventh. What do you make of the theological
66:30 - 67:00 aesthetic of your work? Yeah, I mean, the number of days, it took about six years to do the six days, and I felt it was time for a wrap at that point. But I think in general there are strong parallels between these ideas and the
67:00 - 67:30 thoughts developed in religious and theological contexts, because in some sense it's the same fundamental question: what's the best possible future for a human being, if you abstract away from various contingent limitations and constraints? More generally, I think that when you think through the full and ultimate implications of the standard physicalist worldview and really
67:30 - 68:00 think, in many ways you get to considerations traditionally developed in a theological context. I mean, we alluded to the simulation argument earlier, and there it's very striking: it starts from a different kind of assumption, but the end result is something at least structurally similar to many religious and theological conceptions. My most recent paper, "AI Creation and the Cosmic Host," is
68:00 - 68:30 another example: you start to think through, in this case, various ethical questions related to how we should relate to the digital minds and AIs that we are building, and again you come up against similar considerations. The philosopher Derek Parfit, who was a colleague at Oxford, had this metaphor in his
68:30 - 69:00 work of a big mountain. He did work on metaethics, and he had this view that different approaches to metaethics, consequentialism and deontology, were climbing the same mountain from different sides, and that when you thought through each one to its purest and clearest form, they would converge at the peak. And I think maybe there is a similar phenomenon here: people have been climbing the mountain from the theological side, and if you climb far enough, high enough, from the
69:00 - 69:30 naturalistic side, maybe you get to a similar conclusion in the end. Thank you so much for a fascinating discussion. Thank you, Jonathan, I enjoyed this.