A Cautionary Tale of AI and Ads

We're building a dystopia just to make people click on ads | Zeynep Tufekci

    Summary

    In this TED talk, Zeynep Tufekci discusses the powerful and often hidden impact that artificial intelligence and online advertising have on our freedom and autonomy. She highlights how companies like Facebook and Google use AI to manipulate us through targeted ads and subtly influence our behaviors and decisions. The technology employed stretches far beyond traditional advertisements, forming a persuasion architecture that taps into our personal data to predict and influence human behavior. Tufekci warns of the dystopian potential of these tools if left unchecked, emphasizing the need for transparency and restructuring of digital technologies and their business models.

      Highlights

      • AI isn't just the next step after online ads—it's a whole new ballgame! 🎮
      • Digital persuasion architectures can be personalized on a large scale. 🔍
      • Opaque algorithms can sort us without transparency or understanding. 🤯
      • Platforms like Facebook can deeply influence political participation. 🗳️
      • Our data fuels these engines—it's time for a tech industry overhaul! 🔧

      Key Takeaways

      • AI and online ads aren't just selling shoes—they're manipulating society! 🤖
      • Persuasion architectures in the digital world can target us individually based on our data. 🕵️‍♀️
      • Opaque algorithms can subtly influence political and social behaviors. ⚖️
      • We need to reshape how digital technologies operate and are incentivized. 🔄
      • Our data and attention shouldn't be for sale to the highest bidder. 🚫

      Overview

      Zeynep Tufekci opens with a compelling narrative on the dystopian potential of AI: not rogue robots, but the digital infrastructures of the tech giants. Imagine that every click, like, and online whisper about our preferences is stored, crunched, and repurposed to sell to us or to influence us politically. It's more personalized than the candy at the checkout, and more sinister in its invisibility and reach.

      She paints a picture of how advanced algorithms can influence our daily lives and decision-making without our conscious consent. From being nudged into buying plane tickets to Vegas to subtly shifting political opinions, these systems are precise, relentless, and often completely opaque. They act like a hidden hand, guiding us along predefined paths that suit the interests of the digital mega-corporations.

      To address this, Tufekci calls for a radical restructuring of our digital economies. She argues for systems that better align with human values, demanding transparency and ethical consideration in how AI and data are used. It's not just about protecting freedom but about ensuring that the prodigious potential of digital technology is harnessed in ways that truly benefit humanity.

            Chapters

            • 00:00 - 00:30: Introduction to AI Fears - The chapter delves into common fears associated with artificial intelligence, highlighting the frequent imagery of humanoid robots causing chaos, akin to scenarios depicted in movies like 'Terminator.' Such visions are acknowledged but characterized as distant threats. The chapter also touches upon concerns regarding digital surveillance.
            • 00:30 - 01:00: AI: Not the Threat We Imagine - This chapter challenges the common dystopian view of artificial intelligence, notably rejecting George Orwell's '1984' as a fitting metaphor for AI's threat. Instead of fearing autonomous AI, the chapter argues, the real concern is how those in power might exploit AI to control and manipulate people discreetly.
            • 01:00 - 01:30: The Real Threat of AI and Technology Companies - The chapter discusses the subtle and unexpected threats posed by technology, emphasizing how companies like Facebook, Google, Amazon, Alibaba, and Tencent capture and sell user data and attention. It highlights how artificial intelligence is beginning to bolster their business operations, potentially compromising personal freedom and dignity.
            • 01:30 - 02:00: AI's Potential and Risks - The chapter explores the transformative nature of artificial intelligence, comparing it to a significant leap beyond existing technologies like online ads. AI is portrayed as a revolutionary force with the capacity to greatly enhance understanding in many fields of study and research. However, it also carries substantial risks, echoing the sentiment that great power is accompanied by great responsibility.
            • 02:00 - 02:30: The Reality of Online Ads - This chapter delves into the pervasive nature of online advertising in our digital lives. Such ads are generally perceived as crude and ineffective: everyone has had the experience of being persistently followed by an ad after searching for a product, and even after the purchase is made, the ads keep following the user around. The narrative captures the general skepticism and desensitization toward what is seen as cheap manipulation by digital marketers.
            • 02:30 - 03:00: Digital Persuasion Architectures - The chapter introduces digital persuasion architectures through a real-world comparison: supermarkets place candy and gum at kids' eye level near the checkout to prompt them to pester their parents. Digital technologies, it notes, go well beyond such traditional advertising tricks.
            • 03:00 - 03:30: Comparison with Physical World Persuasion - This chapter discusses the limits of persuasion architectures in the physical world. Impulse-buy items placed near the cashier are not particularly nice, but they do work to an extent. In the physical world, however, there is only so much space by the cashier, and the same set of items is shown to everyone, so the tactic mostly works on people shopping with children who pester them for candy and gum. The chapter emphasizes the constraints of persuasion strategies in physical retail environments.
            • 03:30 - 04:00: Persuasion at a Digital Scale - This chapter discusses how digital platforms have built persuasion architectures capable of influencing individuals at unprecedented scale. These architectures can pinpoint personal weaknesses and deliver targeted messages privately to billions of people through their smartphones, a significant shift in how persuasion operates in the digital age.
            • 04:00 - 04:30: Implications of AI-Analyzed Large Data Sets - The chapter discusses how artificial intelligence can analyze large data sets to enhance targeting. Using the example of selling plane tickets to Las Vegas, it contrasts the traditional approach of choosing demographics from experience and guesswork with big data and machine learning, which enable more precise, data-driven strategies for marketing and sales.
            • 04:30 - 05:00: Black Box Algorithms and Deep Surveillance - This chapter discusses the extensive data collection practices of platforms like Facebook, which track every user action: status updates, messages, location check-ins, and uploaded photographs. Even typed messages that are deleted before posting are stored and analyzed, and Facebook increasingly tries to match this online data with offline information to build comprehensive user profiles.
            • 05:00 - 05:30: Ethical Concerns and Emerging Market Practices - The chapter discusses the ethical concerns surrounding the purchase of data from data brokers, contrasting the US and Europe. In the US, data such as financial records and browsing history are routinely collected, collated, and sold; European countries have stricter rules on data collection and privacy.
            • 05:30 - 06:00: The Danger of Algorithms in Social Media - The chapter explains how the learning algorithms behind such targeting work: they learn the characteristics of people who made a particular purchase in the past, such as buying tickets to Vegas, and then apply what they learned to new people, classifying each one by how likely they are to do the same. Targeted offers, such as a Vegas ticket ad, are generated and shown on the basis of these predictions (see the first sketch after this chapter list).
            • 06:00 - 06:30: Power of Algorithms in Politics - The chapter discusses the powerful yet opaque role that algorithms play in modern political processes. Despite their significant influence, their inner workings are not well understood even by their creators or by analysts: the models are vast matrices with thousands or millions of rows and columns, which makes it hard to comprehend exactly how any categorization or decision is made.
            • 06:30 - 07:00: Consequences of Algorithmic Control - The chapter describes these systems as grown intelligences rather than programmed ones, which their developers cannot fully understand, likening the task to reading a person's thoughts from a cross-section of their brain. Because such systems only work with enormous amounts of data, they also encourage deep surveillance of everyone.
            • 07:00 - 07:30: Public Awareness and Misinformation - This chapter discusses the implications of machine-learning-driven data collection, using Facebook as an example. It raises the concern that such systems could key on sensitive signals without anyone understanding how, for instance detecting that people with bipolar disorder are about to enter a manic phase and serving them ads for Vegas trips, exploiting compulsive spending and gambling.
            • 07:30 - 08:00: Probabilistic Inferences by AI - This chapter delves into AI's capability to make predictions from minimal or subtle cues. It recounts the case of a computer scientist who could detect the onset of mania from social media posts before clinical symptoms appeared, highlighting both the power and the ethical concerns of such inferences.
            • 08:00 - 08:30: Authoritarian Trends and Surveillance - The chapter discusses how prevalent and accessible this kind of technology has become; much of it can be built from off-the-shelf components. It then turns to how platforms like YouTube capture user attention for extended periods, exemplifying the pervasive and often unnoticed influence of algorithm-driven systems.
            • 08:30 - 09:00: Persuasion Architectures in Politics and Ads - This chapter explores how recommendation algorithms shape what content individuals consume, particularly in politics and advertising. The algorithms analyze what a user and similar users have watched, infer what might hold their attention, and autoplay it. Such mechanisms are designed to seem helpful by personalizing the experience, but they can have unintended negative consequences, limiting exposure to diverse perspectives or promoting extreme content (see the second sketch after this chapter list).
            • 09:00 - 09:30: Balancing Digital and Human Values - In 2016, Tufekci attended rallies of then-candidate Donald Trump to study, as a scholar, the movement supporting him. While researching, she watched recordings of the rallies on YouTube, after which YouTube's algorithm began recommending and autoplaying white supremacist videos to her.
            • 09:30 - 10:00: The Need for Structural Change - This chapter shows how platforms like YouTube tend to recommend ever more extreme content: watching videos about one political candidate, or even about vegetarianism, leads to autoplay recommendations that spiral toward more extreme territory. The point is that this is not about politics itself but about how the algorithm models human behavior, which motivates the call for structural change in how these systems work.
            • 10:00 - 11:00: Conclusion: Facing AI Menaces with Open Eyes - This chapter examines YouTube's recommendation algorithm, which is proprietary but appears to entice users into watching ever more 'hardcore' content; watching a video about vegetarianism, for example, leads to a recommendation for a video about being vegan. The algorithm seems designed to keep users engaged by gradually escalating the intensity of what it shows, raising a broader concern about how AI-driven recommendations shape content consumption.
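            First sketch. As a rough illustration of the classification step described in the 05:30 - 06:00 chapter above, here is a minimal, hypothetical Python sketch: a model is fit on the behavioral traces of people who did or did not buy a Vegas ticket, then scores a new person by predicted likelihood. The talk does not describe any concrete implementation, so the feature names, data, model choice, and threshold below are all invented for illustration.

                import numpy as np
                from sklearn.linear_model import LogisticRegression

                rng = np.random.default_rng(0)

                # Invented behavioral features per person: flight searches, late-night sessions,
                # casino-page likes, recent big purchases (all scaled to 0..1).
                past_people = rng.random((1000, 4))
                # Invented labels: who actually bought a ticket in the past.
                bought_ticket = (past_people @ np.array([2.0, 0.5, 1.5, 1.0])
                                 + rng.normal(0, 0.5, 1000)) > 2.5

                # "Learning algorithm": learn the characteristics of past buyers.
                model = LogisticRegression().fit(past_people, bought_ticket)

                # Apply what was learned to a new person the system has never seen before.
                new_person = rng.random((1, 4))
                likelihood = model.predict_proba(new_person)[0, 1]
                show_vegas_ad = likelihood > 0.7  # advertiser-chosen threshold (invented)
                print(f"predicted purchase likelihood: {likelihood:.2f}, show ad: {show_vegas_ad}")

            The simplicity of the sketch does not blunt the talk's point: whatever signals such a model ends up keying on, neither the targeted person nor, at the scale of real systems, the operators can easily tell which features are doing the work.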
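            Second sketch. A hypothetical illustration of the "Up next" dynamic described in the 08:30 - 09:00 and 10:00 - 11:00 chapters: rank candidate videos by what this user and similar users watched, weighted by how long each candidate tends to keep people watching. YouTube's real algorithm is proprietary, as the talk notes, so the data, weights, and scoring rule here are purely illustrative.

                # Minutes each user has spent on each topic (invented data).
                watch_history = {
                    "user_a": {"vegetarian recipes": 30, "vegan documentaries": 5},
                    "user_b": {"vegetarian recipes": 20, "vegan documentaries": 40},
                }

                # Candidate videos with a topic and the average minutes viewers stay engaged.
                candidates = [
                    {"title": "Easy veggie dinners", "topic": "vegetarian recipes", "avg_watch_minutes": 6},
                    {"title": "Why everyone should go vegan", "topic": "vegan documentaries", "avg_watch_minutes": 14},
                ]

                def pick_up_next(user, history, videos):
                    """Pick the video that maximizes pooled interest times expected watch time."""
                    # Pool the current user's history with everyone else's: "what you have
                    # watched and what people like you have watched".
                    pooled = {}
                    for person, topics in history.items():
                        weight = 2.0 if person == user else 1.0  # weight the user's own history more
                        for topic, minutes in topics.items():
                            pooled[topic] = pooled.get(topic, 0.0) + weight * minutes
                    # Longer expected engagement can outweigh the user's own preferences,
                    # which is the escalation the talk describes.
                    return max(videos, key=lambda v: pooled.get(v["topic"], 0.0) * v["avg_watch_minutes"])

                print(pick_up_next("user_a", watch_history, candidates)["title"])

            Run on this toy data, the picker serves user_a the vegan video even though their own history is mostly vegetarian recipes, because that candidate's longer expected watch time dominates the score.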

            We're building a dystopia just to make people click on ads | Zeynep Tufekci Transcription

            • 00:00 - 00:30 So when people voice fears of artificial intelligence, very often, they invoke images of humanoid robots run amok. You know? Terminator? You know, that might be something to consider, but that's a distant threat. Or, we fret about digital surveillance
            • 00:30 - 01:00 with metaphors from the past. "1984," George Orwell's "1984," it's hitting the bestseller lists again. It's a great book, but it's not the correct dystopia for the 21st century. What we need to fear most is not what artificial intelligence will do to us on its own, but how the people in power will use artificial intelligence to control us and to manipulate us in novel, sometimes hidden,
            • 01:00 - 01:30 subtle and unexpected ways. Much of the technology that threatens our freedom and our dignity in the near-term future is being developed by companies in the business of capturing and selling our data and our attention to advertisers and others: Facebook, Google, Amazon, Alibaba, Tencent. Now, artificial intelligence has started bolstering their business as well.
            • 01:30 - 02:00 And it may seem like artificial intelligence is just the next thing after online ads. It's not. It's a jump in category. It's a whole different world, and it has great potential. It could accelerate our understanding of many areas of study and research. But to paraphrase a famous Hollywood philosopher, "With prodigious potential comes prodigious risk."
            • 02:00 - 02:30 Now let's look at a basic fact of our digital lives, online ads. Right? We kind of dismiss them. They seem crude, ineffective. We've all had the experience of being followed on the web by an ad based on something we searched or read. You know, you look up a pair of boots and for a week, those boots are following you around everywhere you go. Even after you succumb and buy them, they're still following you around. We're kind of inured to that kind of basic, cheap manipulation. We roll our eyes and we think, "You know what? These things don't work."
            • 02:30 - 03:00 Except, online, the digital technologies are not just ads. Now, to understand that, let's think of a physical world example. You know how, at the checkout counters at supermarkets, near the cashier, there's candy and gum at the eye level of kids? That's designed to make them whine at their parents just as the parents are about to sort of check out.
            • 03:00 - 03:30 Now, that's a persuasion architecture. It's not nice, but it kind of works. That's why you see it in every supermarket. Now, in the physical world, such persuasion architectures are kind of limited, because you can only put so many things by the cashier. Right? And the candy and gum, it's the same for everyone, even though it mostly works only for people who have whiny little humans beside them. In the physical world, we live with those limitations.
            • 03:30 - 04:00 In the digital world, though, persuasion architectures can be built at the scale of billions and they can target, infer, understand and be deployed at individuals one by one by figuring out your weaknesses, and they can be sent to everyone's phone private screen, so it's not visible to us. And that's different.
            • 04:00 - 04:30 And that's just one of the basic things that artificial intelligence can do. Now, let's take an example. Let's say you want to sell plane tickets to Vegas. Right? So in the old world, you could think of some demographics to target based on experience and what you can guess. You might try to advertise to, oh, men between the ages of 25 and 35, or people who have a high limit on their credit card, or retired couples. Right? That's what you would do in the past. With big data and machine learning,
            • 04:30 - 05:00 that's not how it works anymore. So to imagine that, think of all the data that Facebook has on you: every status update you ever typed, every Messenger conversation, every place you logged in from, all your photographs that you uploaded there. If you start typing something and change your mind and delete it, Facebook keeps those and analyzes them, too. Increasingly, it tries to match you with your offline data.
            • 05:00 - 05:30 It also purchases a lot of data from data brokers. It could be everything from your financial records to a good chunk of your browsing history. Right? In the US, such data is routinely collected, collated and sold. In Europe, they have tougher rules. So what happens then is, by churning through all that data, these machine-learning algorithms --
            • 05:30 - 06:00 that's why they're called learning algorithms -- they learn to understand the characteristics of people who purchased tickets to Vegas before. When they learn this from existing data, they also learn how to apply this to new people. So if they're presented with a new person, they can classify whether that person is likely to buy a ticket to Vegas or not. Fine. You're thinking, an offer to buy tickets to Vegas.
            • 06:00 - 06:30 I can ignore that. But the problem isn't that. The problem is, we no longer really understand how these complex algorithms work. We don't understand how they're doing this categorization. It's giant matrices, thousands of rows and columns, maybe millions of rows and columns, and not the programmers and not anybody who looks at it, even if you have all the data,
            • 06:30 - 07:00 understands anymore how exactly it's operating any more than you'd know what I was thinking right now if you were shown a cross section of my brain. It's like we're not programming anymore, we're growing intelligence that we don't truly understand. And these things only work if there's an enormous amount of data, so they also encourage deep surveillance on all of us
            • 07:00 - 07:30 so that the machine learning algorithms can work. That's why Facebook wants to collect all the data it can about you. The algorithms work better. So let's push that Vegas example a bit. What if the system that we do not understand was picking up that it's easier to sell Vegas tickets to people who are bipolar and about to enter the manic phase. Such people tend to become overspenders, compulsive gamblers.
            • 07:30 - 08:00 They could do this, and you'd have no clue that's what they were picking up on. I gave this example to a bunch of computer scientists once and afterwards, one of them came up to me. He was troubled and he said, "That's why I couldn't publish it." I was like, "Couldn't publish what?" He had tried to see whether you can indeed figure out the onset of mania from social media posts before clinical symptoms, and it had worked, and it had worked very well,
            • 08:00 - 08:30 and he had no idea how it worked or what it was picking up on. Now, the problem isn't solved if he doesn't publish it, because there are already companies that are developing this kind of technology, and a lot of the stuff is just off the shelf. This is not very difficult anymore. Do you ever go on YouTube meaning to watch one video and an hour later you've watched 27? You know how YouTube has this column on the right
            • 08:30 - 09:00 that says, "Up next" and it autoplays something? It's an algorithm picking what it thinks that you might be interested in and maybe not find on your own. It's not a human editor. It's what algorithms do. It picks up on what you have watched and what people like you have watched, and infers that that must be what you're interested in, what you want more of, and just shows you more. It sounds like a benign and useful feature, except when it isn't.
            • 09:00 - 09:30 So in 2016, I attended rallies of then-candidate Donald Trump to study as a scholar the movement supporting him. I study social movements, so I was studying it, too. And then I wanted to write something about one of his rallies, so I watched it a few times on YouTube. YouTube started recommending to me and autoplaying to me white supremacist videos
            • 09:30 - 10:00 in increasing order of extremism. If I watched one, it served up one even more extreme and autoplayed that one, too. If you watch Hillary Clinton or Bernie Sanders content, YouTube recommends and autoplays conspiracy left, and it goes downhill from there. Well, you might be thinking, this is politics, but it's not. This isn't about politics. This is just the algorithm figuring out human behavior. I once watched a video about vegetarianism on YouTube
            • 10:00 - 10:30 and YouTube recommended and autoplayed a video about being vegan. It's like you're never hardcore enough for YouTube. (Laughter) So what's going on? Now, YouTube's algorithm is proprietary, but here's what I think is going on. The algorithm has figured out that if you can entice people into thinking that you can show them something more hardcore,
            • 10:30 - 11:00 they're more likely to stay on the site watching video after video going down that rabbit hole while Google serves them ads. Now, with nobody minding the ethics of the store, these sites can profile people who are Jew haters, who think that Jews are parasites
            • 11:00 - 11:30 and who have such explicit anti-Semitic content, and let you target them with ads. They can also mobilize algorithms to find for you look-alike audiences, people who do not have such explicit anti-Semitic content on their profile but who the algorithm detects may be susceptible to such messages, and lets you target them with ads, too.
            • 11:30 - 12:00 Now, this may sound like an implausible example, but this is real. ProPublica investigated this and found that you can indeed do this on Facebook, and Facebook helpfully offered up suggestions on how to broaden that audience. BuzzFeed tried it for Google, and very quickly they found, yep, you can do it on Google, too. And it wasn't even expensive. The ProPublica reporter spent about 30 dollars to target this category.
            • 12:00 - 12:30 So last year, Donald Trump's social media manager disclosed that they were using Facebook dark posts to demobilize people, not to persuade them, but to convince them not to vote at all. And to do that, they targeted specifically, for example, African-American men in key cities like Philadelphia, and I'm going to read exactly what he said. I'm quoting. They were using "nonpublic posts
            • 12:30 - 13:00 whose viewership the campaign controls so that only the people we want to see it see it. We modeled this. It will dramatically affect her ability to turn these people out." What's in those dark posts? We have no idea. Facebook won't tell us. So Facebook also algorithmically arranges the posts that your friends put on Facebook, or the pages you follow.
            • 13:00 - 13:30 It doesn't show you everything chronologically. It puts the order in the way that the algorithm thinks will entice you to stay on the site longer. Now, so this has a lot of consequences. You may be thinking somebody is snubbing you on Facebook. The algorithm may never be showing your post to them. The algorithm is prioritizing some of them and burying the others. Experiments show
            • 13:30 - 14:00 that what the algorithm picks to show you can affect your emotions. But that's not all. It also affects political behavior. So in 2010, in the midterm elections, Facebook did an experiment on 61 million people in the US that was disclosed after the fact. So some people were shown, "Today is election day," the simpler one, and some people were shown the one with that tiny tweak
            • 14:00 - 14:30 with those little thumbnails of your friends who clicked on "I voted." This simple tweak. OK? So the pictures were the only change, and that post shown just once turned out an additional 340,000 voters in that election, according to this research as confirmed by the voter rolls.
            • 14:30 - 15:00 A fluke? No. Because in 2012, they repeated the same experiment. And that time, that civic message shown just once turned out an additional 270,000 voters. For reference, the 2016 US presidential election was decided by about 100,000 votes.
            • 15:00 - 15:30 Now, Facebook can also very easily infer what your politics are, even if you've never disclosed them on the site. Right? These algorithms can do that quite easily. What if a platform with that kind of power decides to turn out supporters of one candidate over the other? How would we even know about it? Now, we started from someplace seemingly innocuous -- online ads following us around --
            • 15:30 - 16:00 and we've landed someplace else. As a public and as citizens, we no longer know if we're seeing the same information or what anybody else is seeing, and without a common basis of information, little by little, public debate is becoming impossible, and we're just at the beginning stages of this. These algorithms can quite easily infer things like people's ethnicity,
            • 16:00 - 16:30 religious and political views, personality traits, intelligence, happiness, use of addictive substances, parental separation, age and gender, just from Facebook likes. These algorithms can identify protesters even if their faces are partially concealed. These algorithms may be able to detect people's sexual orientation just from their dating profile pictures.
            • 16:30 - 17:00 Now, these are probabilistic guesses, so they're not going to be 100 percent right, but I don't see the powerful resisting the temptation to use these technologies just because there are some false positives, which will of course create a whole other layer of problems. Imagine what a state can do with the immense amount of data it has on its citizens. China is already using face detection technology
            • 17:00 - 17:30 to identify and arrest people. And here's the tragedy: we're building this infrastructure of surveillance authoritarianism merely to get people to click on ads. And this won't be Orwell's authoritarianism. This isn't "1984." Now, if authoritarianism is using overt fear to terrorize us, we'll all be scared, but we'll know it, we'll hate it and we'll resist it.
            • 17:30 - 18:00 But if the people in power are using these algorithms to quietly watch us, to judge us and to nudge us, to predict and identify the troublemakers and the rebels, to deploy persuasion architectures at scale and to manipulate individuals one by one using their personal, individual weaknesses and vulnerabilities,
            • 18:00 - 18:30 and if they're doing it at scale through our private screens so that we don't even know what our fellow citizens and neighbors are seeing, that authoritarianism will envelop us like a spider's web and we may not even know we're in it. So Facebook's market capitalization is approaching half a trillion dollars. It's because it works great as a persuasion architecture.
            • 18:30 - 19:00 But the structure of that architecture is the same whether you're selling shoes or whether you're selling politics. The algorithms do not know the difference. The same algorithms set loose upon us to make us more pliable for ads are also organizing our political, personal and social information flows, and that's what's got to change.
            • 19:00 - 19:30 Now, don't get me wrong, we use digital platforms because they provide us with great value. I use Facebook to keep in touch with friends and family around the world. I've written about how crucial social media is for social movements. I have studied how these technologies can be used to circumvent censorship around the world. But it's not that the people who run, you know, Facebook or Google
            • 19:30 - 20:00 are maliciously and deliberately trying to make the country or the world more polarized and encourage extremism. I read the many well-intentioned statements that these people put out. But it's not the intent or the statements people in technology make that matter, it's the structures and business models they're building.
            • 20:00 - 20:30 And that's the core of the problem. Either Facebook is a giant con of half a trillion dollars and ads don't work on the site, it doesn't work as a persuasion architecture, or its power of influence is of great concern. It's either one or the other. It's similar for Google, too. So what can we do? This needs to change. Now, I can't offer a simple recipe,
            • 20:30 - 21:00 because we need to restructure the whole way our digital technology operates. Everything from the way technology is developed to the way the incentives, economic and otherwise, are built into the system. We have to face and try to deal with the lack of transparency created by the proprietary algorithms, the structural challenge of machine learning's opacity,
            • 21:00 - 21:30 all this indiscriminate data that's being collected about us. We have a big task in front of us. We have to mobilize our technology, our creativity and yes, our politics so that we can build artificial intelligence that supports us in our human goals but that is also constrained by our human values. And I understand this won't be easy.
            • 21:30 - 22:00 We might not even easily agree on what those terms mean. But if we take seriously how these systems that we depend on for so much operate, I don't see how we can postpone this conversation anymore. These structures are organizing how we function and they're controlling what we can and we cannot do.
            • 22:00 - 22:30 And many of these ad-financed platforms, they boast that they're free. In this context, it means that we are the product that's being sold. We need a digital economy where our data and our attention is not for sale to the highest-bidding authoritarian or demagogue. (Applause)
            • 22:30 - 23:00 So to go back to that Hollywood paraphrase, we do want the prodigious potential of artificial intelligence and digital technology to blossom, but for that, we must face this prodigious menace, open-eyed and now. Thank you. (Applause)