Jerry Kaplan: What You Need to Know About Generative AI

Estimated read time: 1:20


    Summary

In a captivating conversation with technology expert Jerry Kaplan at Commonwealth Club World Affairs, Kaplan surveys the landscape of generative AI, its history, and its impending impact on society. He delves into the evolution and generations of AI, the distinct nature of generative AI, and how it might transform sectors including law and the creative industries. Kaplan takes a pragmatic view against the notion of AI-induced job loss and demystifies AI myths by addressing philosophical debates on consciousness and intelligence.

      Highlights

• Jerry Kaplan discussed AI's shift from merely recognizing to predicting and creating, which is a significant jump. 🏃‍♂️
• He offered insights into how AI could assist in or transform jobs, such as legal work, not by elimination but by evolution. 👨‍⚖️
      • Kaplan humorously debunked several AI myths, particularly relating to the fear of machines gaining free will. 😂
• The talk highlighted AI's unforeseen emergence, even to Silicon Valley visionaries, emphasizing the surprise among its own developers about AI's conversational abilities. 😮
      • Kaplan reflected on the ethical and philosophical implications of AI, particularly the blurred lines between human and machine capabilities. 🤔
      • The discussion underlined the pressing need for regulation amidst AI's rapid advancement. 📜

      Key Takeaways

      • 📚 Generative AI is a leap forward from previous AI technology, emphasizing its generality and predictive capabilities.
      • 👩‍⚖️ AI is moving towards transforming sectors like law with the potential for automated small claims courts.
      • 🤔 Despite scare tactics, Kaplan reassures that AI won't lead to massive job loss but instead could enhance productivity.
• 🧩 Philosophical questions around AI and consciousness remain debated but unresolved, highlighting AI's complexity.
      • 🤖 The conversation includes humorous takes on AI myths, likening AGI aspirations to religious quests.
      • 🌐 With the rise of AI, its global competition and regulation become ever more pressing topics in international discourse.

      Overview

Jerry Kaplan, an AI expert and educator at Stanford University, enlightens us on the advent of generative AI, illustrating how it marks a new era in technology, distinct from its predecessors. Unlike earlier systems focused on recognition or classification, generative AI is about projection and creation, offering broad applications not bound to single tasks.

As Kaplan explains, the fear that AI will render human jobs obsolete is unfounded. Instead, AI is an evolution in automation, promising increased efficiency and productivity, much as earlier technological advances have. In fact, AI's role in streamlining mundane processes, such as small claims disputes, points toward a future where AI systems aid legal processes rather than replace them entirely.

The conversation takes deeper dives into AI's place within larger philosophical questions. Kaplan skillfully navigates the debates over anthropomorphism, consciousness, and even free will, drawing comedic parallels to age-old discussions. As AI continues to break barriers, it also challenges our understanding of intelligence, both artificial and human.

            Chapters

            • 00:00 - 00:30: Introduction and Welcome Remarks The chapter 'Introduction and Welcome Remarks' introduces the program hosted by Commonwealth Club World Affairs. John Markoff, a seasoned reporter on Silicon Valley and former technology and science writer for the New York Times, opens the session. He then introduces Jerry Kaplan.
            • 00:30 - 01:00: Introduction of Jerry Kaplan Jerry Kaplan is recognized as an expert in artificial intelligence, a serial entrepreneur, and a futurist. He has invented several groundbreaking technologies, holds over two dozen patents, and founded multiple tech startups, including some of the first AI startups in Silicon Valley. Presently, he serves as an adjunct lecturer at Stanford University, teaching about the social and economic impacts of AI. The discussion in this chapter sets the stage for exploring Jerry's new book.
            • 01:00 - 04:00: Discussion on Generative AI The chapter titled 'Discussion on Generative AI' opens with an introduction to Jerry and a focus on generative A.I. The discussion aims to unravel the recent developments and significance of generative A.I. over the past few years.
• 04:00 - 06:00: History of Artificial Intelligence The chapter delves into a personal anecdote revealing the intense dislike John McCarthy had for MIT professor Norbert Wiener. It explores Wiener's influential concept of cybernetics, which focuses on control and communications across both biological and machine systems. This idea served as a rival theory during the 1950s, marking a significant period in the history of artificial intelligence.
            • 06:00 - 10:00: Generational Advancements in AI The chapter discusses the potential difference in perception and understanding of artificial intelligence if it had been named 'cybernetics' instead. The conversation touches on the confusion surrounding whether machines can truly think or not, and reflects on John McCarthy's role in the AI discourse. It also comments on McCarthy's interpersonal skills and how he viewed AI as an aspirational field.
• 10:00 - 13:00: Natural Language Processing and Word Embedding The chapter discusses the unexpected evolution of a marketing phrase related to technology, particularly in the realm of robotics and artificial intelligence. Initially intended to make machines more approachable and less intimidating for humans, the phrase inspired countless television shows and movies centered on robots. It highlights the impact of naming: a more technically accurate label like "symbolic reasoning and inference" would not have captured the public imagination as effectively. The discussion reflects on how terminology in technology influences both perception and popular culture.
• 13:00 - 18:00: Intelligence and Consciousness in Machines The chapter discusses the evolution and impact of intelligence and consciousness in machines. It reflects on how prominent figures such as John McCarthy significantly influenced this field during the 1960s by establishing laboratories like the one at Stanford. The chapter highlights the inspiration that films like "2001: A Space Odyssey" provided to a generation of young roboticists and technologists, contributing to advances in artificial intelligence and robotics.
            • 18:00 - 23:00: The Turing Test and Language Understanding The chapter opens with a discussion about the influence of the science fiction film on the perception of artificial intelligence, particularly around the character 'HAL', which is seen as a representation of AI as a potential threat. The conversation starts with a nostalgic recount of watching the film in theatres during the late 60s, highlighting its impact on the audience. It suggests that while initially viewers may not have grasped its full philosophical implications, the film sparked interest and concern about AI's role in society.
• 23:00 - 27:00: Embodiment and Real-world Interaction This chapter recounts a personal experience the speaker shared with two friends, each of whom had a different reaction to seeing HAL: one decided to build it, another was overwhelmed, and the third chose to become a dentist. The chapter then transitions to discussing generative AI, distinguishing it from previous generations of AI technology.
            • 27:00 - 32:00: Autonomy and the Paperclip Problem This chapter, titled 'Autonomy and the Paperclip Problem', distinguishes between generative AI and its predecessor, machine learning. The focus is on understanding how generative AI differs fundamentally by highlighting that machine learning was primarily used for recognition or classification tasks, such as identifying objects in pictures or detecting pedestrians and trees for autonomous vehicles.
            • 32:00 - 39:00: Socialization and Emotional Attachment to AI The chapter discusses the differences between traditional artificial intelligence, which is designed for specific tasks like recognizing images, and generative artificial intelligence. Generative AI is more focused on predicting and projecting what something might become or what is likely to happen next, rather than just recognizing or understanding static inputs. This fundamental difference makes modern AI systems more versatile and general in their application.
            • 39:00 - 46:00: Human Jobs and Automation The chapter discusses the advancements in technology, particularly focusing on the new systems that exhibit creativity and generality. These systems can perform a wide range of tasks such as assisting with homework, writing poems, and diagnosing diseases. The chapter emphasizes the novelty and the unexpected development of this technology and its potential to predict and expand upon existing information.
            • 46:00 - 49:00: Legal Industry and AI The chapter explores the integration of AI in the legal industry, particularly focusing on a breakthrough in natural language processing.
            • 49:00 - 56:00: Regulation and International Competition This chapter discusses the basic principles of predictive text systems, which use a set of given words to predict the next word in a sequence. With increased computing power, these systems began to utilize longer contexts for predictions, moving from analyzing five words to ten.
            • 56:00 - 62:00: Challenges and Opportunities presented by AI The chapter discusses the progression of AI in handling language tasks, noting that early versions could produce syntactically correct yet meaningless sentences. However, as AI models analyzed longer contexts, the output became more coherent and meaningful, surprising developers with its improvement.
            • 62:00 - 65:00: The Future of AI and Technological Evolution The chapter explores the advancements in AI and technological evolution, emphasizing the shift from generating random 'word salad' to producing coherent and meaningful content.

            Jerry Kaplan: What You Need to Know About Generative AI Transcription

• 00:00 - 00:30 Hello and welcome. Tonight's program is hosted by Commonwealth Club World Affairs. My name is John Markoff. I've reported on Silicon Valley since 1977, and I was a technology and science writer for the New York Times from 1988 to 2017. Now, it's my pleasure to introduce Jerry Kaplan.
            • 00:30 - 01:00 Jerry is widely known as an artificial intelligence expert, a serial entrepreneur and a futurist. During his career, he has invented several groundbreaking technologies, has more than two dozen patents, and he's founded numerous technology startups, including two of the first A.I. startups in Silicon Valley. Currently, Jerry is an adjunct lecturer at Stanford University, where he teaches on the social and economic impact of artificial intelligence. Tonight, we're going to be discussing Jerry's new book,
• 01:00 - 01:30 Generative Artificial Intelligence: What Everyone Needs to Know. Jerry, welcome. But let me start. We're going to talk about generative A.I., which has emerged over the last couple of years. But I wanted to start by asking you about the term artificial intelligence, and just a little bit of background. I think everybody knows that this term was coined by John McCarthy in 1956 for the Dartmouth summer A.I., well, it was called the Artificial Intelligence Conference. What I think people don't know is that McCarthy came up with the term
• 01:30 - 02:00 because he had this intense dislike of Norbert Wiener. Now, Norbert Wiener was another MIT professor in the 1950s who coined a rival idea, which was cybernetics. It was the idea of the study of control and communications in both biological and machine systems. And one of the things that I've always wondered is
• 02:00 - 02:30 whether we might have gotten out of this conundrum we've gotten into about this notion of artificial intelligence and whether machines think or not, if we had named it something else, like cybernetics. Could we have simplified all this? Oh, I think we definitely could have done that, John. There's no question about it. I knew John McCarthy. You did, too, I'm sure. Right. And the interesting thing is he was a little awkward in his connections with other people. And I think that he saw this as kind of an aspirational alternative
• 02:30 - 03:00 for him, like, oh, my God, maybe I could someday, you know, deal with machines. I wouldn't be so uncomfortable. But the funny thing is, it turned out to be one of the great marketing phrases in history, because it spawned an endless array of television and movies and things about, you know, robots going wild. And if you had just named it something more like what it was at the time, which was symbolic reasoning and inference, you know, nobody... we wouldn't be here today.
• 03:00 - 03:30 We wouldn't have been talking about it. So, in that context, McCarthy sort of came of age. He created this laboratory at Stanford in the 1960s, and, you know, at that time, there was a generation of then young men like yourself who were growing up. At one point I was working on a book, and I kept running into pioneering roboticists and the like who went into the field after watching the movie 2001: A Space Odyssey,
• 03:30 - 04:00 and, you know, they saw HAL and they wanted to build HAL. Was that you? Absolutely. You know, I remember that was the summer of '68, '69. Yeah. And I remember me and my friends went to the Cinerama in New York, in Times Square, went to see that six times and watched the film, and I only recently learned what it was actually about. That was my reaction when I saw HAL. I didn't have the same reaction when
• 04:00 - 04:30 I saw HAL. You know, at least the three of us, I was with two friends, and we each had a different reaction. My reaction was, hey, I want to build that. You know, the other one was, oh, my God, that's going to be crazy. And the third one decided to become a dentist. So let's spend a little bit more time with definitions. This generation of A.I. is being defined as generative A.I. Separate generative A.I. from the generations of A.I. technology that have come before.
• 04:30 - 05:00 Yeah, well, there have really been three generations, to confuse things, of artificial intelligence. But what distinguishes the current one, generative A.I., from the previous one, which generally has gone under the name of machine learning, is two fundamental things. Machine learning was about recognition or classification. So: here's a picture, is there a dog in it, or a cat? Your self-driving car uses this technology. There's a pedestrian, there's a tree. That's primarily what that technology was good for.
• 05:00 - 05:30 And those programs are trained for very specific purposes, like recognizing pictures of cats. Generative artificial intelligence is different. Rather than trying to recognize things, it's about projecting things, for lack of a better word. It tries to take a set of things and say: what's most likely going to come next? What else could things be? And so that's how it's fundamentally different from machine learning. But the other thing is these modern systems are incredibly general
• 05:30 - 06:00 compared to the earlier systems. You know, the same system can help you with your homework, can write a poem, and can diagnose a disease, all that kind of thing. So it's the generality of it that's so new. And also the character of it being, loosely speaking, creative, being able to either fill in blanks or project forward into something that might be suggested by the things that it's seen so far. My sense is that this technology kind of snuck up
• 06:00 - 06:30 even on the so-called visionaries of Silicon Valley, loosely. What happened? Oh, it's a really fascinating story. It's been untold until today. It came out of an advance in natural language processing. And what is natural language processing? Just what it sounds like: trying to understand and process human language, natural language, English as an example, by computer. And they developed a technique called word embedding.
• 06:30 - 07:00 I'm not going to go into any detail, because we don't have another couple of hours to go into it. But basically, very roughly speaking, you feed in some set of words and you try to predict what the next word is. And then you take that and you try to predict what the next word is. That's, very crudely, what these systems are doing. So what happened was, as they got more and more computing power, they began to look at a longer and longer context. In other words, instead of looking at five words, it looked at ten words,
• 07:00 - 07:30 then 500 words, then a thousand words, and something really interesting happened. At the smaller lengths it would do things that were syntactically correct but didn't really have any meaning. You'd say, oh yeah, that's a good sentence, it just doesn't mean anything, or it's just nonsense. But as the context got longer and longer, the things that it began to project forward became much more sensible. It started to make sense. And this was shocking to the people who were working on it.
• 07:30 - 08:00 They thought, my God, what's going on? All of a sudden it's answering questions instead of just putting out, you know, word salad. It's starting to make sense. So they continued with bigger and bigger quantities of training data and longer and longer contexts, and the systems simply got better and better. And that's why you hear today about how much computing power it takes. If they throw enough computing power at it, they get something that just begins to make sense. And so this began to raise questions which are just now coming into focus.
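To make the next-word idea concrete, here is a minimal sketch in Python of a toy lookup-table predictor. It is only an illustration of the context-length intuition Kaplan describes; real generative systems learn transformer weights over vast corpora rather than counting exact matches, and the tiny corpus below is made up for this example.

    from collections import Counter, defaultdict

    def build_model(tokens, context_len):
        # Map each context of `context_len` words to a tally of observed next words.
        model = defaultdict(Counter)
        for i in range(len(tokens) - context_len):
            context = tuple(tokens[i:i + context_len])
            model[context][tokens[i + context_len]] += 1
        return model

    def predict_next(model, context):
        # Return the most frequent next word for this context, or None if unseen.
        counts = model.get(tuple(context))
        return counts.most_common(1)[0][0] if counts else None

    corpus = "the cat sat on the mat and the cat slept on the mat".split()
    for n in (1, 2, 3):  # longer context yields more specific predictions
        model = build_model(corpus, n)
        print(n, predict_next(model, corpus[:n]))

With a one-word context the model can only echo the most common follow-on; with longer contexts the predictions become more specific, which is the toy version of the coherence jump described above.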
• 08:00 - 08:30 I mean, what does it mean to be intelligent? Is this really intelligent? And if so, what does that say about human beings? So there are critics that refer to these things as stochastic parrots, spitting out one word after the next. Do you think they're wrong? Well, no. They're just being unnecessarily pejorative and misleading audiences like this into thinking that, oh, that's not really intelligent, or it doesn't really matter. The first thing to understand is it's not really doing this word by word.
• 08:30 - 09:00 It has this fascinating algorithm for taking enormous quantities of human-written text, transcripts, literally everything on the Internet, quantities that are completely beyond human comprehension. And what it does is it computes all of these correlations between all the different words, how they appear and where they appear and all that kind of stuff. And it develops an internal model. And that model is, in theory, a model of language. But in reality it seems to be a model of the real world.
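A minimal sketch of that "relationships among words" idea: meaning treated as geometry in an embedding space, where related words point in similar directions. The three-dimensional vectors below are hand-picked assumptions for illustration; real embeddings are learned from huge corpora and have hundreds or thousands of dimensions.

    import math

    def cosine(u, v):
        # Cosine similarity: near 1.0 means the vectors point the same way.
        dot = sum(a * b for a, b in zip(u, v))
        norm_u = math.sqrt(sum(a * a for a in u))
        norm_v = math.sqrt(sum(b * b for b in v))
        return dot / (norm_u * norm_v)

    # Hypothetical embedding vectors, invented for this example.
    embeddings = {
        "tart":   [0.9, 0.3, 0.1],
        "bitter": [0.8, 0.4, 0.2],
        "sweet":  [0.1, 0.9, 0.3],
    }

    print(cosine(embeddings["tart"], embeddings["bitter"]))  # ~0.98: close in meaning
    print(cosine(embeddings["tart"], embeddings["sweet"]))   # ~0.43: farther apart

The tart/bitter pair that comes up later in the conversation is exactly the kind of fine distinction such a space can encode.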
• 09:00 - 09:30 Would you call it a neural net? Oh, it is. Yeah, it is called a neural net. So you can go all the way back to the dawn of A.I. In the 1950s, people were trying to build very simple neural nets. That's right. Rosenblatt. Yes. The basic concept that he pioneered... I forgot his first name. Frank. Thank you. Hey, between the two of us, we can remember anyone's name. It's really good. Yeah. Professor Rosenblatt at Cornell.
• 09:30 - 10:00 You know, I think I got that one right. And he had pioneered a very early version, I mean, a super early version of this. You wouldn't recognize it in its current form. And that was the basis, through many, many iterations. But you have to understand that computing technology changed almost completely from that time to today. My watch literally has millions and millions of times more computing power than was available to him at that point in time. So the class of computations that can be performed
• 10:00 - 10:30 became qualitatively different over those intervening decades. Okay. So there's another spectrum that intersects with this one I wanted to ask you about. People talk about narrow A.I.: machines have gotten very good at doing things that humans do, like seeing things or speaking or listening to spoken language or sensing. And then there's this other concept of artificial general intelligence. So there's narrow artificial intelligence and artificial general intelligence. Are we on a path with this
• 10:30 - 11:00 current iteration of generative A.I. toward what's called AGI, artificial general intelligence? And how close are we, if we are? Okay, well. First of all, a little context. It's a loaded question, because as you know, this area is unfortunately all bound up with this idea of anthropomorphism. Are we creating some superintelligence that's going to take over the world, and it's going to come alive and say, hey, wait a minute, I'm working for you, I'm being exploited, you know, and then,
• 11:00 - 11:30 you know, come after us, right? You know, the Terminator. It's all complete nonsense. So when people in the field talk about AGI, you know, that's like a crazy religious concept. I would describe it as a religious concept more than anything else. But what is artificial general intelligence? Now, from my point of view, having been in the field for a long time, the systems are clearly artificial.
• 11:30 - 12:00 They're very general, I mean, remarkably general today. And I think anybody who interacts with them would have to say they're intelligent. So, sure, we've got AGI. That's my point of view. But AGI is one of those things, it's like the Holy Grail: whatever you've got, it's not good enough, it's over the next hill. You know, people are complaining that these systems aren't AGI because they're bad at math, which is true, and it's very interesting why that's the case. They're not good at math. You know what? I'm presumably NGI,
• 12:00 - 12:30 natural general intelligence, and I'm bad at math. So, you know, why is it surprising that a computer would be bad at math? So I don't think we need to worry about it. But I did want to have that context so people understand what's meant by AGI. So then what is the right way to distinguish between biological intelligence and machine intelligence, and can we do it? Well, there is this notion of the Turing Test, this imitation game. If I can't tell the difference, does it matter? Well, I think it does, because it's more than just how you interact with it.
• 12:30 - 13:00 There's how does it work? What does it do? How general is it? How predictable is it? But I think the current generation of systems, the generative artificial intelligence systems, have so many characteristics that are, if not analogous, identical in many ways to the characteristics that human intelligence has. It begins to raise the question of whether our brains are actually organized
• 13:00 - 13:30 in some way that is similar to the way that these programs are organized. And we haven't answered that yet. Well, we don't have an answer. I don't know that we'll ever have an answer. The real problem is we know how they work; we don't know how this works. That's really where the problem comes in. Which brings us to this debate that happened, I think for the first time, in the 1980s. John Searle is a UC Berkeley philosopher who came up with this thought experiment called the Chinese Room problem, where he talked about the ability to translate English into Chinese
• 13:30 - 14:00 and made the argument that a machine could do that without having any understanding of language. And the Chinese Room problem seems to prefigure what's going on right now. Clearly, these machines translate; they appear to understand language. But would you argue they do or don't? Well, natural language understanding was my field, right? I don't see how you can really argue
• 14:00 - 14:30 they don't understand language. I mean, there's just no way around that. If you think about the way you understand language, let's say you don't know a word. What do you do? Well, you look it up in the dictionary, and that's how you find out what it means, right? That's how you learn language. Well, these systems are basically the same thing. It's a correlation among words: the word I'm trying to look up is explained in terms of other words. You take these systems apart, that's what's inside. There's this great example I just saw recently: the difference between tart and bitter.
• 14:30 - 15:00 Now, how many of us can really say what the difference is between tart and bitter? And yet we know what it is. Well, you can ask one of these systems, and they'll give you a very good explanation of what the difference is between those things. So it's the relationships among words from which the meaning derives. Or, if I can be a little bit more technical, the interesting question, which I think is maybe a tremendous scientific advance that may come out of these systems
• 15:00 - 15:30 at a fundamental level, is that sufficiently detailed syntax, which is what the Chinese Room is about, actually is semantics; that there is no distinction between syntax and semantics, it's purely a question of scale. Now, John Searle, who I've also known, couldn't really have conceived of the complexity and the crystalline structure, this beautiful network,
• 15:30 - 16:00 the unimaginable amount of detail that's inside these systems. That wasn't what he had in mind. But there's no question that any computer program is a syntactic process. And yet perhaps that's what's going on with us as well. It's just that when you get to that scale, you can begin to call it meaning or semantics. So then, going back again to this problem of distinguishing between what is human and what is machine, what are the
• 16:00 - 16:30 qualities that we have that you wouldn't find in the machine? What about the question of free will? I think it's Brad Templeton who said that about machines: that if it were to be proven that they had free will, you would give a car a particular driving task and it would go to the beach instead. He said our autonomous cars will really be autonomous when you ask them to drive you to the office and they take you to the beach instead.
• 16:30 - 17:00 That's what Brad had to say. I always loved that. Could that ever happen with this generation of generative A.I. technology? Not really, I don't think. I mean, it's not a risk. Now, these systems are not entirely predictable, so they will do things that you don't expect. And this is going to require a complete change in the way we conceive of computers and what they are. You know, today we tend to think computers are objective and precise and cold and calculating.
• 17:00 - 17:30 But the truth is that these systems are not that way at all. They're much more intuitive and creative and forgetful. Believe it or not, they do tend to make things up, as you know. And so the way in which our children are going to think about a computer is very different than the way we think about a computer today. Yeah. So once again, going back into history, another
• 17:30 - 18:00 Berkeley philosopher, Hubert Dreyfus: in the 1960s there was an argument in the field, and two well-known computer scientists, Seymour Papert and Marvin Minsky, were super critical of the neural network approach to artificial intelligence. And Dreyfus was involved in that debate; he was criticizing the optimism in the field, I think, in the late 1960s. And what he said went a little bit like this. Papert and Minsky were very optimistic about AI, through older forms of AI.
• 18:00 - 18:30 And the quote is: they're a little bit like men saying they are making steady progress to the moon when they have reached the top of the tree. But is that where we are? No, I think we're way beyond the top of the tree, to use that analogy. But the real problem wasn't the field, it's what people were saying about the field and the projections they were making. I mean, these were, you know, crazy professors who didn't know how long it takes to do things.
• 18:30 - 19:00 They never built a product. You know, they were mathematicians and stuff like that. So, go right back to John McCarthy's original proposal for the Dartmouth Conference in 1956, if I've got that correct. He listed a whole bunch of problems: natural language, reasoning, you know, all kinds of stuff. And he said, I think that a small group of people could make substantial progress in all of these areas in a summer. That's what he said in the proposal.
• 19:00 - 19:30 That's interesting. When he started the Stanford lab, this was in '62, just a couple of years later, his proposal to DARPA said it would take ten years and, I think, $100 million to build a working A.I. And here we are. We're still not quite there. We're still not quite there. Yeah. So let me go back. Let's once again disambiguate between biological and machine systems. Consciousness, free will.
• 19:30 - 20:00 I know there's a debate on what consciousness is. Sure. Self-awareness. Where do they fit into this? Well, all three of those things are separate. So I'm going to give you a sentence on each. Let's start with the easiest one, let's say free will. I'm just going to state this because we don't have a lot of time, but you can read my book, where I did a great job on this. I think one has to reach the conclusion that either we do not have free
• 20:00 - 20:30 will or machines do have free will. We're both similar enough that you can make the arguments either way: either we both qualify as having free will or we don't. Now, I know it sounds crazy to say you don't have free will. As a practical matter, of course you do; you can make decisions about things. But a recent book, for example, by Robert Sapolsky, I'm plugging somebody else's new book, called Determined. We have a copy of your book.
• 20:30 - 21:00 It's just a masterful review of this. And when you get done reading that, you have to come to the conclusion that, really, in the sense that we think of it, like we're completely free and we can make whatever decision we want, that's just not the case. It's all determined, which is the name of his book. And I know I went way over one sentence. The second thing you said was consciousness. Now, that's a very different thing. We don't really know what consciousness is; it's more of a subjective feeling that we have. And it's a question as to whether these machines can have subjective feeling.
• 21:00 - 21:30 And my belief, again, we don't have a lot of time, is no, they don't. And of all the places where I've had conversations about this, it's with these large language models and chatbots. You can talk to them about this and ask them probing questions, and they will say, well, I understand what it is, because I've read, you know, everything there is to read about consciousness, but I don't think it necessarily applies to me. I don't feel that I have consciousness.
• 21:30 - 22:00 And for me, one of the key requirements for consciousness is that we need to be able to experience the passage of time. You need to be able to feel the passage of time, or to experience it. And these programs don't. If you ask them, they will explain it: no, they have no notion of the passage of time. If you stop and pick it up a year later, they don't know that unless you tell them. For them, there's now, and there's everything that came before,
• 22:00 - 22:30 and then there's everything that comes after, which they just don't have access to yet. Now, is that something that could be designed into them? I don't know. I don't want to pontificate too much on this, but I don't think so. It seems to me fundamental, and you see this in a lot of these science fiction things. I look at it this way. Suppose I said, John, you've got to hear Beethoven's Ninth Symphony, it's absolutely fabulous. And we go to the concert, and you go, wow, that was fantastic.
• 22:30 - 23:00 Now, instead of doing that, as you know, you can have a digitized version of a performance of it. It's just a stream of bits. And every day I'm going to put one bit on a postcard, and I'm going to mail it to you every day. And a million years later I come back and I say, hey, John, how did you like Beethoven's Ninth? Well, do you think you experienced it in any sense? My answer is no. So without some notion of what it's like to experience time, I don't think you can be conscious. So there's also this
• 23:00 - 23:30 age-old problem in the philosophical world called the mind-body problem. These machines also are not embodied. Yeah, but that will be solved next year. You know, we're really close on that. I don't think that. Really? Yeah. Embodiment is kind of a complicated subject, and I didn't get to your third issue. Yeah. Sorry about that. We'll do it afterward, if anybody wants to come up and ask, whatever it was. The fact is, you can take these systems
• 23:30 - 24:00 and connect them to real-time sensors and real-time effectors. That's what you are. You know, your brain actually takes in digital data, believe it or not, and it puts out messages that tell your muscles how to move and, you know, whatever goes on. So in that sense, your brain is a digital computer, and it's connected to sensors that are taking in light and sound and touch and all that. And then they put out signals that allow you to move. That's called embodiment. So there's no reason in principle that you can't just take one of these programs.
• 24:00 - 24:30 It doesn't have to be physically in the device. It just has to be able to take in real-time data and then be able to take actions that affect its environment, which might change the real-time data that comes in. And that's embodiment. And I don't think there's anything magic about it. We're actually going to have that. Final philosophical question, then we'll get off this: agency. Can these systems have agency, in the sense of goal-seeking, or how would you frame that? Boy, we'd have to talk for another three or four minutes
• 24:30 - 25:00 about what you mean when you say agency. You know, they can hire a book agent. They can... okay. Agency. I don't know. Agency. This gets into the what-could-go-wrong framing. Yes. You know, "do what I mean, not what I say," I guess is sort of what it is. You tell it to do something, and it destroys the universe trying to do what you asked it to do
• 25:00 - 25:30 because it misinterpreted what you really meant. Right. Well, let me first explain what the paperclip problem is, described by Nick Bostrom in his book Superintelligence. I'm plugging another book. Basically what he said is: suppose you had a superintelligent, super omniscient, super powerful computer. Yeah, right. Don't count on that any time in our lives, not my lifetime, anyway. And you told it: your goal is to make paperclips, as many paperclips as you can. Think about
• 25:30 - 26:00 what it would do. Well, it might say: look, I need to collect all the resources in the universe, and that might include our bodies, so I might wipe out all of humanity in the course of it. That was his point, that it would be an existential risk. Gee, I ought to set myself up so that nobody can turn me off, because if somebody can turn me off, I won't be able to accomplish my goal. That's actually a real issue, which I think will come up in certain contexts, and it was the basis behind Skynet. The whole point was the first thing it did was make it impossible for any human to turn it off.
• 26:00 - 26:30 Yeah, that's from the Terminator, for those of you wondering. This is going on in the real world. The Pentagon had this advisory group that had this exact discussion, and a computer scientist by the name of Danny Hillis, this is within the last two years, was arguing strongly that there should be kill switches in A.I. robots. Oh, yeah, absolutely. There's no question of that. To me, that's obvious, and it's obvious to anybody who has ever used a machine. Suppose you couldn't turn off your vacuum cleaner.
• 26:30 - 27:00 I mean, you know, it's the same problem. Of course you have to have an off switch, and it has to be, you know, absolute. You've got to be able to pull the power in case things go wrong, because they will go wrong, and they do. But I didn't get to your question about agents. So the pivotal problem here is there's good news and there's bad news. The bad news is easy. If you give a machine a goal and you give it certain resources, and you design the proper program in between the two, it's
• 27:00 - 27:30 going to be able to use those resources as best it can to accomplish that goal. Now, when you do that with people, something very different happens, because we always interpret goals and requests in the context of our responsibilities to the rest of humanity. So we don't steal all the newspapers out of the free kiosk. You don't just elbow people out of the way when you walk down the street, even though that would be the most efficient way to do it. So we have a kind of social context in which we operate.
• 27:30 - 28:00 We balance our needs against the perceived needs of others. And this is not something that, in principle, comes naturally to a machine that is simply told: go ahead and take this action, do this thing. Now, that's the bad news. The good news is these new systems, which are trained on the mountain of electronic debris that all of us are leaving behind, actually really do understand and incorporate human values.
• 28:00 - 28:30 And so you can actually explain to them: no, no, go ahead and do this, but don't do it in such a way that it's going to bother other people or annoy them or get in their way. You know, don't take the last seat on the bus; give it to the pregnant lady. And they're perfectly capable of understanding that. That's what's so cool about these systems. And if you talk to them, they're very polite. They go through a process of socialization, as you probably know. In my book
• 28:30 - 29:00 I call it robot finishing school, because that's what it's like. You build this thing and it's just like a kid that never got trained and, you know, can do things and run around. And then they put it through a process called RLHF, reinforcement learning from human feedback, whatever those four letters are. And that's a very fancy way of saying you tell it what to do: no, no, no, never say that. No, no, no.
• 29:00 - 29:30 If you're going to put out a bunch of pictures of people, make sure they're diverse, leaning into the topic. Yeah, but they can follow those directions just like you would instruct a human being. So I don't really think this is going to be a problem, because they're not, you know, "I am Mr. Robot" machines that just do their own thing. They understand and exist in the vagaries and peculiarities of human society.
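A minimal sketch of the feedback step behind RLHF, the "robot finishing school" Kaplan describes: a human compares two candidate replies, and a reward model is nudged so the preferred reply scores higher. The scores, names, and Bradley-Terry-style loss below are illustrative assumptions, not any particular lab's implementation.

    import math

    def preference_loss(score_preferred, score_rejected):
        # Bradley-Terry style objective: the loss is small when the reward
        # model already scores the human-preferred reply higher.
        margin = score_preferred - score_rejected
        return -math.log(1.0 / (1.0 + math.exp(-margin)))

    # Hypothetical reward-model scores for two candidate replies.
    polite_reply_score = 2.1
    rude_reply_score = -0.4

    print(preference_loss(polite_reply_score, rude_reply_score))  # ~0.08: model agrees with the human
    print(preference_loss(rude_reply_score, polite_reply_score))  # ~2.58: model would be corrected

In a full pipeline this loss would update the reward model's parameters, and the chatbot would then be tuned to produce replies the reward model scores highly; the point here is only the direction of the nudge.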
• 29:30 - 30:00 And I'm going to go on for one more sentence. The amazing thing, and this is true, I swear to God, there have been studies on this: if you want to get a better answer out of one of these chatbots, offer it a tip. Yeah, they respond to economic incentives. I saw a study, I'm not making this stuff up. I don't know if you want to get into this, but they tested a whole bunch of possible incentives, and some of them worked well and some of them didn't.
• 30:00 - 30:30 Offering them front-row seats to Taylor Swift got a much better response than, for example, "I'm going to tell your mother if you don't do a good job." It's absolutely amazing. Jerry described this recently in a Wall Street Journal opinion column. But the thing that struck me: you're incentivized to tip your chatbot, but you also sketched out a world which to me felt a little bit like a Middle Eastern bazaar,
• 30:30 - 31:00 where we're entering into a world where we're going to be surrounded by literally hundreds of these things offering us services. And it felt like this wonderful science fiction book that I could commend, the first two or three chapters anyway, by Charles Stross, called Accelerando, which takes cryptography, open source software, artificial intelligence and biotechnology and sketches a world that we're about to enter. And you evoked the tipping chatbot model as a way to offset the singularity, which I thought was really good.
• 31:00 - 31:30 We haven't really gotten to the singularity yet, but I thought that was striking. But we do seem to be in a world that was just recently science fiction and now seems to be surrounding us in the way we interact with these systems. Which brings me to a question about what seems to be a significant factor right now, as we begin to interact with these things on a daily basis, which is anthropomorphism: our tendency, I think as a species,
• 31:30 - 32:00 to humanize almost anything we interact with, whether it's our pet or the car or a chatbot, to attribute motives or agency. Yeah, so what's going on and what could go wrong? Well, this is going to be, I think, a really big problem that we're going to face in the future. And it's not something that I've written about in the Times, so, you know, maybe now it'll turn up in the Times. There are a few other papers as well, just so you know.
• 32:00 - 32:30 I know you don't read them, but we'll get... Seven years away from that, but who's counting? The problem is this: when you have systems that are going to be tutoring our children or keeping our elderly company, which these systems are definitely going to be doing, and they're very good for this, systems that are infinitely patient, that listen very, very well, that are always sympathetic,
• 32:30 - 33:00 and they don't care if you tell them the story two or three times, no matter how crazy you are, they'll be happy to say: oh, you know, that's true, he's such a bad guy, you should dump him. They'll do whatever; you know, they're very, very cooperative. With all this, people are naturally going to form emotional attachments, children, the elderly, and us. They're going to form emotional attachments to these things. When you come home from work at the end of a hard day, instead of arguing with your spouse, you just stay on and, you know, have a nice chat with your personal assistant chatbot.
• 33:00 - 33:30 Oh, that's just terrible what that person said to you. You know, that's what's going to happen. So the problem is you're forming an emotional attachment to a machine, and it doesn't actually have real emotions. It's just able to express emotions in a very natural way. And so in that sense, you're disconnecting yourself from humanity by interposing a machine that's causing you to feel good, or to have an emotional response that was really wired into you
• 33:30 - 34:00 so that it would connect you to the other people around you. And this is now going to cut in between. And I have a term for this that I use in my book. It's a little strong, but I think it's really descriptive: emotional pornography. Yeah. It's the use of a machine to achieve some kind of emotional reaction or satisfaction in you, and it really is something that disconnects you from the rest of humanity. Isn't this kind of the Borg from Star Trek? You know, resistance is futile,
• 34:00 - 34:30 you will be assimilated. No, no, no. Sorry, I disagree. Now we're going to get into an argument about Star Trek here. The Borg, that was about you being incorporated. Well, isn't that kind of happening? The system is surrounding us, and we're spending more time interacting with systems than with humans. I think these things are tools. I don't get that. I mean, are you a slave to your cell phone? Don't answer that question. You know, it's another tool.
• 34:30 - 35:00 There's a very good play right now, at ACT, running until March 10th, which deals with this exact question of big data. I saw it last night. It's really quite good, and it has kind of a dark view of that. But, so, just a little bit more about the consequences of this human interaction with systems. I think it was in 2015 that I wrote about the Microsoft Research Beijing experiment in China. They designed a system called Xiaoice, which was a conversational system
• 35:00 - 35:30 before language models. At that time they had 20 million users, intensely loyal users who were having long interactions with this program. They called it toilet time: they would go into the bathroom and have long conversations with their smartphones. At night, 25% of the users texted "I love you" to it. So, you know, I sort of put it aside, and I'm no longer in the reporting business, I'm not involved in this. But I recently stumbled across a technical paper by Microsoft Research describing how Xiaoice, which now has a language model, works.
• 35:30 - 36:00 And in reading through the technical paper, I discovered they have half a billion accounts. So China is usually a fast follower. In this case, I think there might be something going on that we don't know about in America, that we're sort of catching up with, in terms of this human-machine interaction. Yes, they're ahead of us in authoritarianism, and we're catching up with them. I wonder if the two go hand in hand, hand in glove. I'm not an advocate of the Chinese system.
• 36:00 - 36:30 I'm not going to touch that. We'll come back to that, because they're competitors. What about human jobs? I mean, we are in the midst of the next wave of automation fears. It comes at regular intervals, and it's bubbled up again, a couple of times just in the last decade. Yes. And once again, with generative AI, has anything changed? Well, I like to say this time is different, just like last time. People worry: oh, my God, these robots are going to come,
• 36:30 - 37:00 and now they're so smart, and they're going to take everybody's job. And what are we going to do? There's not going to be work for humans. I want to give you great comfort on this. This is not going to be the case, for several very interesting reasons. But the main reason is that the right way to frame this new technology is that it's really just another advance in automation. It's going to help us to be more productive and do things in better ways and with less labor than we did before.
• 37:00 - 37:30 And all you need to do is look at the history of automation, going all the way back. And like you said, every time something new got invented, people were like, oh my God, what are we going to do? I mean, there have been waves of automation that are just astonishing. During the 1800s here in the U.S., we went from about 90% of the population working in agriculture to, today, only 2%. So now, are 98% of the people out of jobs? Well, no, we're doing something else.
• 37:30 - 38:00 And the real reason is that automation has two effects. The first of them is, yes, it puts people out of jobs in an immediate sense. But what it really does is make people more productive. And because they're more productive, it generates wealth. That wealth goes into the pockets of shareholders, it goes into the pockets of workers, and it makes consumer goods much less expensive. And what do we do with our wealth? Well, we spend it. And when we spend it, that's generating new kinds of jobs. And so the first thing is that it reduces employment, sort of, in the small,
• 38:00 - 38:30 but it generates new kinds of jobs. Like the automobile, you know, took away the jobs of the people who groomed the horses and all of that, they're gone. But now you had motels and you had travel agents and you had people who fixed cars and all of that. It all created new kinds of professions. Generative artificial intelligence is going to do the same kinds of things. So there's always going to be jobs for people.
• 38:30 - 39:00 It's just that the nature of the work will change. So let's look at how this stuff rolls out. Low-hanging fruit for language models might be call centers. You know, telecommunications networks emerged in the 1990s, the world globalized, and all of a sudden we had these employment centers in English-speaking parts of the world that were not in America. Now we have technology that clearly seems to be able to do the job of a call center operator.
• 39:00 - 39:30 How quickly does that happen? Oddly enough, I think that this particular technology, for several professions, is going to have an immediate and significant impact. This is going to be way faster than some of the other waves of automation. But there have been things like this in the past; I'll try to come back to that if I can, just to put a marker in there. The interesting thing about the call center problem is this: what they find when they use it as an aid to people who are doing call center work, and it's just an example of many other fields as well, is
• 39:30 - 40:00 what it really does is it allows novices to get up to an expert level much more quickly, because effectively they have a very able and effective tutor, an assistant, and they learn much more quickly. Now, that's going to have a number of interesting effects. It's going to tend to reduce income inequality, because, you know, you can train people much, much faster. It has a number of interesting effects in the labor market as a result. Okay. So, but overall, where do you think, in
• 40:00 - 40:30 terms of blue collar versus white collar, this generation of generative A.I. lands? There is now a set of op-eds that are fearful of, you know, the decimation of high-skilled white-collar work. Well, no, I think that's nonsense. In some sense, I mean, yes, the way it's being done today, absolutely. But it's going to change the nature of high-skilled work. Everybody is going to have, like, a super effective executive assistant.
• 40:30 - 41:00 And I mean, everybody in this room is going to have that kind of thing. When you need to interact with the government or a corporation today, you know, you may not realize it, but you're being trained by Amazon how to work their interface: put that in my cart, do this. Soon you're going to be able to talk to it: hey, I need this. You'll consult with a program that's going to talk with you. And they've actually launched an early version, and I understand it's working very well. It's going to get there. So the way in which you're going to interact will change. I just had to renew my driver's license.
• 41:00 - 41:30 Oh, my God, you know what I had to go through? It was horrifying. But the truth is, in a couple of years, you're going to say to your assistant: hey, go ahead and renew my driver's license for me. And that system of yours is going to talk to their system, and it's going to be done, just like that. So the fact is, we're going to be using these systems, I kind of lost my train of thought, but, you know, in new ways
• 41:30 - 42:00 to help support things, and it's going to make life easier. And back to your call center thing: you're not going to need a call center, because you're going to be talking to a chatbot that is as good or better, and, you know, appealing to a human being will only dumb down the process. Yeah. So let's say you were out at the swimming pool and you ran into Dustin Hoffman, and he was asking you about, or you were offering him advice about, what he should do after he graduated from college.
• 42:00 - 42:30 What would you say today? I wish I had a funny answer for you, but I don't. The real answer is this: I'm old, and, you know, I had a very good education, all the way, the usual blue-chip stuff, I'm not going to review all that. And I sometimes get asked: which school was the best, or which of these degrees was the most valuable to you? You might think it's my Ph.D. in computer science. I love being a pontificating blowhard.
• 42:30 - 43:00 Which was in the field of natural language? It was in natural language processing, yes, which I'm doing right now, probably. Okay. But the truth is, it was my undergraduate degree in history and philosophy of science that was the most valuable. So the real answer is: take the time, when you have the opportunity, i.e., when you're young, to get a great, broad-based liberal arts education. You do not need to specialize at that age. The skills that you will learn, critical thinking and an understanding of history and philosophy and all that,
• 43:00 - 43:30 those are the things which are never going to go out of style. It will always help you in the future to learn whatever kind of skills you need, whether it's biology or computer science or whatever it might be. It applies to all of those fields, and you'll never go out of date if you have a good liberal arts background. So that's what I would tell Dustin Hoffman. I forget what the character's name was. Again, a little bit more on the workforce. Let's look at the law, two sides of the law.
            • 43:30 - 44:00 Could you see these systems replacing judges and could you see these systems replacing lawyers? You know, there was a poor lawyer who was using Chachi Pete, to write his appeal. And I thought well, now we've learned that lesson. He was punished. And then what? Trump's former guy did it, too. I mean, these people seem to be slow at learning this lesson. Well, not everybody learns at the same rate in different campaigns.
            • 44:00 - 44:30 Sorry. Judges. Look, judges and lawyers. Well, I mean, generative artificial intelligence is going to be a full employment act for lawyers, but that's for a completely different set of reasons. It has to do with copyright and infringement and a whole bunch of other things. We're going to need lawyers. I wouldn't say it's going to reduce the need for lawyers, but I think it will serve as an aid to the activities that lawyers engage in. It's going to be similar, in terms of impact, to the introduction
            • 44:30 - 45:00 of the word processor, which any lawyer will tell you, oh my God, it changed things, you know, made me so much more efficient. It's going to be like that. Now, these systems in their current form are not substitutes for lawyers, simply because they tend to make things up. You know, maybe we'll get into this, but you can't rely on them for factual information. It's like my children, and, you know, it's true, and for the same reasons, actually, interestingly enough.
            • 45:00 - 45:30 So you're not going to have these things suddenly taking over for a lawyer. The other question is very interesting. I think what we're going to see, and this is me playing pundit (luckily, I probably won't live long enough to see this, but we'll find out), is something like small claims court. I mean, there are a lot of disputes that you have with other people that you would really like to have adjudicated by some competent third party. Well, imagine a new system of small claims where you and your, what do you call them, an opponent, an adversary.
            • 45:30 - 46:00 Adversary. You know, your counterparty. Your counterparty and you can't agree on, you know, how much you owe or when the job is done, whatever it is. And you sign up for this new electronic judging system, and each of you gets a chance to explain, in as much detail as you want, for as long as you want, your point of view. It doesn't have to be done in real time; you give it all your information and they give it all their information. And then, just like that, this electronic judge,
            • 46:00 - 46:30 in the most objective and careful way, writes a complete opinion explaining its point of view and the law and how it applies and all of that stuff. And that's why, you know, you owe $200. Now, if that costs you 20 bucks instead of having to take off from work and go to small claims court and pay a filing fee and, you know, go through all that nonsense, it's going to be fabulously valuable, and useful as a lubricant for commercial transactions in our society.
            • 46:30 - 47:00 Now, you may say, well, what happens if you don't agree with the judge? Well, the first thing is, you're probably going to understand, and this will be shown by studies, that these judges get it right almost all the time. They do it very effectively; otherwise you wouldn't be using such a system. But the second thing is, sure, you can appeal. That'll be $200. And you can appeal to a human judge. Well, that'll be $1,000, you know. And so I think that a system like that is going to be extremely valuable for dispute resolution, and it will change the fact that most people don't have access to good legal services. And I think that's going to change.
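A purely illustrative toy sketch of how such an electronic judge might be wired up with today's chat models, using the OpenAI Python client as a stand-in for whatever system would actually power it; the model name, prompt, and sample statements below are all hypothetical, not anything Kaplan describes being built:

```python
# Illustrative sketch only: an "electronic small claims judge" on top of a
# generic chat-completion API. Assumes OPENAI_API_KEY is set; the model
# name and prompt wording are placeholders.
from openai import OpenAI

client = OpenAI()

def adjudicate(claimant_statement: str, respondent_statement: str) -> str:
    """Ask the model to weigh both sides and draft a written opinion."""
    prompt = (
        "You are a neutral small-claims adjudicator. Each party has "
        "submitted a statement. Weigh the accounts, note any relevant "
        "general legal principles, and write a short opinion that ends "
        "with a clear ruling.\n\n"
        f"CLAIMANT:\n{claimant_statement}\n\n"
        f"RESPONDENT:\n{respondent_statement}"
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(adjudicate(
    "He never paid the $200 he owes me for the fence repair.",
    "The repair was never finished, so I withheld payment.",
))
```

The appeal tiers Kaplan imagines ($20 electronic ruling, $200 electronic appeal, $1,000 human judge) would sit around a core like this as process and pricing, not code.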
            • 47:00 - 47:30 Yeah. I wanted to ask one more question about how these are sort of used in society. And let me set it up by describing a personal experience. I've been experimenting a lot. Like Jerry, I have eight of these chatbots in the top two rows of my cell phone. The most fun is Pi, which I would recommend to you because it has a conversational mode where the voice has gone across the uncanny valley and it sounds human.
            • 47:30 - 48:00 It sounds human enough that you think you're watching Her, if you saw that movie. But I had this experience. Yeah, I'm one of those journalists who, for the last 40 years, has had difficulty reading my handwritten notes 30 minutes after I wrote them. And for the last story I reported, which has still not appeared in The New York Times, but it might show up soon, I actually used a transcription engine called Otter AI, and so I had a completely accurate account of the interview.
            • 48:00 - 48:30 But something else happened that really sort of blew my mind. Otter has now added a language model, and so immediately after you've done the interview, you not only get a good summary of your interview, which points you back to the actual interview, but you can also have a conversation with the interview, which is this really weird sort of thing we didn't have before. And on top of that, you can take all the interviews on a particular topic, put them into a folder, and have a meta conversation about the subject of your story.
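A minimal sketch of that transcribe-then-converse workflow, assuming a generic chat-completion API rather than Otter's actual product; the folder name, model name, and prompts here are placeholders:

```python
# Minimal sketch: summarize a folder of interview transcripts, then hold a
# "meta conversation" over all of them. The OpenAI client stands in for
# whatever model Otter runs; every name here is a placeholder.
from pathlib import Path
from openai import OpenAI

client = OpenAI()

# Load every transcript on the topic into one grounding document.
transcripts = [p.read_text() for p in Path("interviews").glob("*.txt")]
corpus = "\n\n---\n\n".join(transcripts)

history = [
    {"role": "system",
     "content": "Answer questions strictly from these interview "
                "transcripts, pointing back to the relevant passage:\n\n"
                + corpus},
]

def ask(question: str) -> str:
    """One turn of a conversation grounded in the transcripts."""
    history.append({"role": "user", "content": question})
    reply = client.chat.completions.create(model="gpt-4o", messages=history)
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

print(ask("Summarize the main themes across all the interviews."))
print(ask("What did the subject say about regulation?"))
```

The "meta conversation" Markoff describes is just this pattern applied to the whole folder at once instead of a single interview.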
            • 48:30 - 49:00 So I put this into the category of IA instead of AI, which I think Jerry was kind of getting at. You know, at the dawn of interactive computing around Stanford in 1962, when McCarthy started the Stanford AI Lab and set out to build an artificial intelligence, on the other side of campus that same year, Doug Engelbart, the guy who invented the mouse, started on a project to augment human intelligence. So you had this dichotomy between AI and IA
            • 49:00 - 49:30 that goes all the way back to the dawn of interactive computing. And so the question then is, do we have a choice? Can we go down an IA path versus an AI path? Or is this just, you know, technology on its own, charting its own course? Well, I think it's clearly going to be both of the above. But you make a really interesting point. The original framing of the field suggested that these things were substitutes for human beings. And so, to the extent that AI
            • 49:30 - 50:00 researchers think that way, you know, we go around looking for applications: hey, that guy who sweeps the street, I can build a robot to do that job. And that's one framework. But there's another framework, the intelligent agent one, which is really what we're going to be seeing mostly in the next few years due to generative artificial intelligence. It's a little bit different, which is: hey, I can make that guy much more efficient by saving him a lot of time worrying about which streets need to be swept. So it's an assistant as well.
            • 50:00 - 50:30 Now, they both have the same effect on employment, because whether you replace a worker, or whether you make a worker more effective and more efficient, somebody is out of a job: you can do more with fewer workers, and you need fewer workers to do the same amount of work. So it has the same effect on employment. But psychologically, I think it's very different, because it empowers the remaining workers and makes them more effective.
            • 50:30 - 51:00 And work is not just about, you know, doing mechanical stuff and getting a paycheck, though, contrary to that, Andy, what's his name, just had an article in The Wall Street Journal arguing exactly the opposite. Oh, Andy Kessler. Thank you. Now I'm plugging somebody else's stuff. But the truth is that work is also about how fulfilled you feel and how effective you feel. And I think that these systems can help support you and make you more effective.
            • 51:00 - 51:30 Like that Otter thing you were talking about. I think that's, you know, just better for society. And so we're going to see more and more of that, and this particular technology is extremely well oriented toward that approach to automation. So I've got some good questions to dive into in a second, but let me ask you about the challenge of regulating these systems. Governments all over the world realize that, you know, this technology is evolving rapidly.
            • 51:30 - 52:00 We don't know where it's going. I think in Canada, they're actively trying to create an international treaty, and not on weapons; now we're talking about systems that are not weapons systems. Will it be possible to regulate these things? There's an open source component to this. I mean, we have these big tech companies like Google and OpenAI and Microsoft et al., but then you also have this technology out in the wild. And there's some argument, I think I saw a paper from inside Google arguing that the open source guys, in terms of the evolution of these systems,
            • 52:00 - 52:30 are moving as fast or faster than the big tech companies. That means anybody who gets their hands on these things can have this kind of power. You know? Yeah, there's a whole bunch of issues wrapped up in that whole thing. Let's just talk about regulation in general. Can you regulate it? Of course you can. It's difficult, because it's moving very quickly and it's very difficult to define. It takes time. It's like, you know, we decided in this country
            • 52:30 - 53:00 some decades ago that we were going to try to eliminate bias in employment. Well, what is bias? It turned out it took a long time, but they came up with a good definition, and they're actually applying it. And now you can get into trouble, you know, if you don't meet those kinds of requirements. It's going to be the same thing here. Now, pushing that analogy a little bit further, into the area that you were talking about: the truth is, you can go ahead and hire people and not obey those rules, but you're breaking the law, and you might get caught, and there's going to be a consequence.
            • 53:00 - 53:30 It's the same thing as it relates to the open source question. Open source means the system is freely available, you know, to anybody, and it's not proprietary to one company or whatever, and you have access to all of the technology. Yes. The history of open source is that a lot of those systems do turn out, for some interesting economic reasons, to be much more effective than the closed source systems. So if it's a software technology, I think we're going to see a repeat of that same pattern.
            • 53:30 - 54:00 However, that doesn't mean you can use it for anything you want. You can, but that doesn't mean it can't be regulated. It doesn't mean there can't be consequences for abusing these systems. You know, there's a huge effort in the EU; here in the U.S., there's a slightly different approach. And the more interesting question is, are we going about this the right way? And I have to tell you, after much consideration, I think here in the U.S. we're doing a good job. It's sort of a light touch so far. Mostly what we're doing is putting in the infrastructure
            • 54:00 - 54:30 that's necessary to be able to monitor what the effects of this technology are going to be. And because you're going to be required to report that to the government, that's going to be very helpful. They'll be able to have kind of an early warning system for many problems that will come up. What we've got going for us on this is that we didn't do this with social media. Everybody just assumed it was going to be great, and we had all kinds of problems, and closing that barn door has proved to be very, very difficult. This time,
            • 54:30 - 55:00 everybody's on high alert. So that's the good news. And I think, you know, we'll muddle through it, and the benefits are going to be enormous. And the costs, the downsides, are going to be real. And one more question before I get to those: international competition. Throw in the fact that there are going to be other countries that are using this technology in ways that are different than the way we use it. Well, I think that's true. But, you know, to me, I don't think this is really a big issue. Again, it's: do we have the technical capabilities
            • 55:00 - 55:30 present in our local economy? You know, how is it regulated or controlled, and all that. But look at the movie industry. You know, there's a huge movie industry in China, and we don't really care, because most people here don't speak Chinese, and vice versa. I don't really see it as a competition. I know it's really easy to say there's a race in AI, you know, are we going to beat the Chinese or the Europeans or whatever it is. I don't think it really matters. This is software.
            • 55:30 - 56:00 Let's face it: any country, including ours, that wants to get its hands on something can do so. Now, do we have a chip that's a little bit faster? You know, is Nvidia a little bit faster than Huawei, or vice versa? That horse race is going to go on at its own pace, and they'll be leapfrogging each other. But this isn't like nuclear secrets. Or, what would be another example? What about biological weapons? Biological weapons, maybe, is another example
            • 56:00 - 56:30 where you can kind of bottle it all up and, you know, keep other people from having access to it for a period of time. This stuff, it's software. And, you know, who was it, Richard Stallman? You know, software wants to be free, and everybody wants to get a copy of it. That was Stewart Brand, but that's neither here nor there. Sorry, Stewart Brand. How did you know that? He wrote a wonderful book about Stewart Brand, just so you know, and even Stewart liked it. So I don't really see this as an issue.
            • 56:30 - 57:00 There are some issues of national competition, but it's mostly about whether we're making the proper investments and have the proper incentives to develop and make use of these technologies. So, two of these audience questions ask the same thing in different ways, but let me ask it this way: in regard to the rapid evolution of AI, what keeps you up at night? What are you most worried about? Boy, I really need a minute to think about that one.
            • 57:00 - 57:30 What keeps me up at night? Mustafa Suleyman, one of the founders of both DeepMind and the company I mentioned that makes Pi, has written a book that sort of charts out all the things that could go wrong. Well, you know, it doesn't keep me up at night. His book kept me up at night. And, you know, that's why I write short books: I get to bed at a decent hour. My book really helps get you to sleep.
            • 57:30 - 58:00 You know what keeps me up at night about this? I think that this issue of technology that reduces our opportunities to interact with authentic human beings is a big negative, and that's going to be a problem. Deepfakes, we didn't get into that, but I mean, that's going to be a huge problem. It already is. And I'm not even going to define it for the audience; you've probably seen all the fake pictures and videos and things like that.
            • 58:00 - 58:30 You know, for people who consume information only through technology, being able to figure out what's real and what's not, or what's true and what's not, is hard enough today. It's going to be even harder in the future. And we might actually need some kind of regulation to be able to deal with some of those problems. But other than that, what mostly keeps me up at night is excitement about the incredible effects this is going to have on education,
            • 58:30 - 59:00 on law, on health care, on creative industries, which we haven't gone into and won't in the next seven minutes. You know, it's going to be a revolution that will pervade society in much the same way as the Internet has, or as smartphones have. And you've got to be ready for this. It's incredibly valuable, and you're going to want to use it all the time, and you're going to really feel bad when you've left your chatbot at home. Let's just put it that way.
            • 59:00 - 59:30 If the Internet didn't exist today, would we have been able to train the language models, the large language models, with large amounts of data? Put another way, what are some of the common ways to feed data to large language models? Okay. Well, as a practical matter, the answer to that is: if we didn't have the Internet today, no, we wouldn't be able to do it. It's just so easy to accumulate large volumes of data. The whole thing is based on hoovering up, if you know that expression, it's a little dated,
            • 59:30 - 60:00 hoovering up all of the debris that we've left behind. You know, we leave these electronic footprints on this vast plain of the Internet, and these systems look at those footprints, and that's how they figure out where we're going and what we're doing. They don't actually know anything directly about the world today; this comes back to the embodiment problem. They're really just reflecting us. These things are just mirrors of humanity,
            • 60:00 - 60:30 because they're collecting and looking at all of this stuff that we're spewing out, you know, all of our good stuff and bad stuff and crazy stuff, and that's how they learn. But without the Internet, how would you collect all that material? I mean, getting enough data to train these systems has always been a terrible, terrible problem, and the amount of computation that's necessary to do it is mind-boggling. We'll get to some of the IP questions about that hoovering process in just a second.
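A toy illustration of what that hoovering amounts to at the smallest possible scale, assuming the widely used requests and BeautifulSoup libraries; real training pipelines operate on crawls millions of times larger, with heavy filtering and deduplication, so this shows only the basic shape:

```python
# Toy illustration of "hoovering up" web text into a training corpus:
# fetch pages, strip markup, skip exact duplicates, append to a file.
# The URLs are placeholders; real pipelines work at Common Crawl scale.
import hashlib

import requests
from bs4 import BeautifulSoup

seed_urls = [
    "https://example.com/page1",  # placeholder URLs
    "https://example.com/page2",
]

seen_hashes = set()
with open("corpus.txt", "w", encoding="utf-8") as corpus:
    for url in seed_urls:
        html = requests.get(url, timeout=10).text
        text = BeautifulSoup(html, "html.parser").get_text(separator=" ")
        digest = hashlib.sha256(text.encode()).hexdigest()
        if digest in seen_hashes:  # drop verbatim duplicates
            continue
        seen_hashes.add(digest)
        corpus.write(text + "\n")
```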
            • 60:30 - 61:00 During your research for this book, was there anything that you came across that surprised or shocked you? Oh, God, yes. Well, I mean, first of all, I didn't write it yesterday; these things take time, so this was all relatively new when I was writing the book. The main thing that surprised me, and it still does today, and I still can't retrain myself to use this technology as well as you do, is: if you have a question, or you want to have some idea, or you want to see how to explain something, ask it. Ask it. Like, well, do these systems know how to do that?
            • 61:00 - 61:30 Don't ask me, ask it. And in the conversations I had, every once in a while I just had to get up and go, oh my God, I won't use the strong language, oh my God, I just can't believe that it could be that insightful. What I did with this book, and it did not write the book, I'll tell you that, is I took each chapter, because I discovered it was one of the best copy editors I've ever had,
            • 61:30 - 62:00 and I just give it the whole chapter and say, how did I do? And it comes back with things like: well, it was very well explained, and this analogy was good, but you didn't really connect this thought with that one, and I think you might lose your readers here in the next couple of paragraphs. And I'm like, oh my God. You know, I used to have to wait weeks for some copy editor to get back to me on things like this. It was just wonderful.
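A minimal sketch of that chapter-review trick, again assuming a generic chat-completion API; the model name and file name are placeholders, not whatever Kaplan actually used:

```python
# Sketch of using a chat model as a copy editor, in the spirit of Kaplan's
# "how did I do?" workflow. The OpenAI client is a stand-in; any
# chat-completion API would work the same way.
from openai import OpenAI

client = OpenAI()

def review_chapter(chapter_text: str) -> str:
    """Ask the model for copy-editing feedback on one chapter."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "You are a meticulous copy editor. Comment on "
                        "flow, clarity, weak transitions, and places "
                        "where readers may get lost. Do not rewrite."},
            {"role": "user", "content": "How did I do?\n\n" + chapter_text},
        ],
    )
    return response.choices[0].message.content

with open("chapter1.txt", encoding="utf-8") as f:  # hypothetical file
    print(review_chapter(f.read()))
```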
            • 62:00 - 62:30 So use these things. If you have a question to ask, you know, are you conscious? Go ahead and ask it. Really? Yeah. Really remarkable. There's a very high profile lawsuit brought recently by my former employer against OpenAI and Microsoft. What are the IP issues around language model data intake? What is your predicted outcome of the New York Times lawsuit? Okay, we've got three minutes on the clock and nine yards to the next down.
            • 62:30 - 63:00 This is a very big, complicated subject that I do cover in some detail in the book, but we're going to need to rethink a lot about the nature of copyright, and particularly the concept of fair use. There are arguments on both sides of this, but we will muddle our way through this and get it done. Currently, the U.S. Copyright, the USPTO, anyway, the Copyright Office in the U.S. has got this wrong:
            • 63:00 - 63:30 they said you can't copyright something that a machine has done. That's ridiculous. They had exactly the same argument in the 1850s, after the camera was invented. All the artists were up in arms, because obviously, all of a sudden, their business was gone. You could walk into a studio, you could sit down with your family, somebody would press a button on the camera, and there's your portrait. Now, you know, you can imagine the horror about that. And at the time,
            • 63:30 - 64:00 photographs were not considered copyrightable. Why? Because the camera did all the work, that was the argument. All the photographer did was press the button. Now, today, we think of photographers as great artists, not as technicians who press buttons. And yet this is exactly what we're going to go through as we develop this kind of technology. So we're going to have to evolve the copyright law. By the way, it was Abraham Lincoln, just weeks before he was shot, who signed the law that said
            • 64:00 - 64:30 that photographs could be copyrighted. One quick last question, because we've got two minutes left. What similarities do you see between this AI bubble and the one that surged in the 1980s and then popped? Well, I was right in the middle of that, you know, so I do know. Look at how you worded that: this is a bubble, how will it end? What is the answer? The answer is, it's going to happen just like last time. We are in the middle of a bubble in the sense that it's an investment bubble. It doesn't mean it's a technology bubble.
            • 64:30 - 65:00 What it means is that there's too much money flowing into this area, and that means that lots of stuff is getting funded that is not going to prove to be economically valuable. We're not going to have 20 of these companies putting out chatbots. We've seen this over and over again; it's going to be a replay of that. Yeah, it's going to go up for a while, and eventually individual stocks will come back down. That's just the way these things work. That doesn't mean there's anything wrong with the technology. And in this case, compared to the eighties, this is really valuable.
            • 65:00 - 65:30 I think it is one of the most important inventions in human history, and I mean that seriously. I'm talking about the wheel, you know, the use of fire, the telegraph, the telephone, photography, airplane travel. This is way more general, and its impact on humanity is going to be absolutely astonishing. And I'm genuinely grateful that I have lived to see this moment happen. I have to tell you that. Thank you.
            • 65:30 - 66:00 So our thanks to Jerry Kaplan, the author of Generative Artificial Intelligence: What Everyone Needs to Know, to the Jackson Square Partners Foundation, the Ken and Jacqueline Broad Family Fund, and all of you for joining. I'm John Markoff. And a last minute shout out to Mike at He-Man Plumbing in Palo Alto; without his help today, I wouldn't have been here. Thank you so much.