Ruha Benjamin and Kate Crawford on AI and the Future


    Summary

    In this Pioneer Works talk, Ruha Benjamin and Kate Crawford tackle the complexities of artificial intelligence (AI) and its multifaceted impact on society. The discussion unpacks AI's dualistic nature—its capacity for both harm and innovation—emphasizing that technology itself is not neutral but deeply intertwined with human values and systemic biases. The speakers call for an inclusive dialogue that goes beyond technical prowess, advocating for a profound societal introspection about who is controlling our digital futures and how to embed ethical considerations into technological advancements.

      Highlights

      • Ruha Benjamin emphasizes the non-neutrality of AI, highlighting that it's imbued with human values and biases. 🤔
      • Kate Crawford discusses the material impact of AI, noting how data centers and infrastructure consume massive resources. 💡
      • Both speakers stress the need for inclusive dialogue in shaping the future of technology. 🗣️
      • A call to action for grassroots movements to drive change in AI practices and policy. ✊
      • The conversation encourages rethinking the roles and responsibilities of technology beyond traditional narratives. 🔄

      Key Takeaways

      • Artificial intelligence isn't just about technology; it's deeply rooted in human values and biases. 🤖
      • AI's impact is global, influencing everything from politics to climate change. 🌍
      • Understanding the material and human cost of AI is crucial—for example, the resources needed for data centers. 💾
      • Policy changes are required, but real change will come from grassroots movements. 🌿
      • Technology can perpetuate existing inequities unless addressed intentionally. ⚖️

      Overview

      The discussion at Pioneer Works presents an insightful examination of artificial intelligence from societal, ethical, and personal perspectives. Ruha Benjamin argues that AI is not a monolithic entity but a collection of tools influenced by human ideologies and biases. This segment sets the stage for a deeper understanding of technology as an extension of our societal fabric and demands a more nuanced conversation about its role in our lives.

        Kate Crawford delves into the material realities of AI, shedding light on the massive environmental footprints resulting from the technological infrastructures needed to support AI advancements. Her exploration calls attention to the often-overlooked consequences of AI deployment, advocating for a more sustainable approach that considers both ecological and human costs.

          The speakers stress the importance of grassroots activism and collective imagination in shaping future technological landscapes. They argue that meaningful progress in AI should be driven not just by policy or technical innovation but by inclusive and ethical frameworks that prioritize human and environmental well-being.

            Chapters

            • 00:00 - 03:00: Introduction and Framing the Conversation The chapter begins with a discussion titled 'AI and Us,' aiming to provide a wide-ranging conversation about current developments in AI while acknowledging that not everything can be covered. The speakers set the stage for a focused discussion on selected aspects of AI and its implications.
            • 03:00 - 10:00: Historical Context of Technology and Imagination The chapter explores the pervasive and often mythical perception of artificial intelligence (AI) in contemporary discourse. AI is frequently depicted as a powerful and almost magical entity that could either save or destroy humanity. The conversation surrounding AI has permeated discussions on various topics, particularly the future of politics, making it a dominant narrative in modern thought. The chapter aims to deconstruct this monolithic view of AI, emphasizing its multifaceted nature.
            • 10:00 - 20:00: Materiality and Impact of AI The chapter explores the complex and multifaceted nature of artificial intelligence (AI). It challenges the conventional perception of AI by emphasizing that it comprises various tools, algorithms, and technologies rather than being a singular entity. The discussion delves into the notion that AI is neither truly artificial nor particularly intelligent, prompting a deeper examination of its materiality and impact.
            • 20:00 - 30:00: Recognition, Bias, and Representation in AI The chapter titled 'Recognition, Bias, and Representation in AI' discusses the rapidly evolving world of AI technologies, which are significantly impacting global events. The speaker emphasizes the importance of taking a moment to breathe and reflect amidst the chaotic news landscape. They stress the need to contemplate AI's role in these unfolding narratives, hinting at the broader discussion on recognition, bias, and representation in AI—topics that are crucial in understanding AI's societal implications.
            • 30:00 - 40:00: Policy and Future Visions This chapter discusses the idea that societal policies and future visions are not abstract or magical thoughts but tangible, human-created ideas that must be engaged with thoughtfully and understood comprehensively. Ruha and Kate serve as guides for this discussion; Ruha, in particular, invokes the powerful character Uhura from Star Trek as an inspiration for envisioning and shaping the future.
            • 40:00 - 50:00: The Role of Imagination and Collective Action The chapter explores the influence of Octavia Butler on a scholar's work, focusing on technology, race, the future, and imagination. The scholar reflects on how Butler and other notable figures have shaped their thinking and work in these areas.

            Ruha Benjamin and Kate Crawford on AI and the Future Transcription

            • 00:00 - 00:30 [Music] I want to begin by saying this title, AI and Us. We went with this sort of expansive title to sort of give us as much space as we need to talk about all that's going on. We're not going to talk about all that's going on, don't worry. We'll talk about some of it. Um, and one place to begin here, I think, is just to
            • 00:30 - 01:00 say that AI is this thing that's invoked as this sort of magic thing, this thing that's out there, this monster on a mountain that's going to save us or kill us or eat us or do any number of different things. And that conversation, it's become the thing around which we talk about so many things. Um, the future of politics, it's become this sort of master narrative. And one thing I'm hoping we can do here is to kind of unpack what we mean by this thing AI, which is in no way as kind of unitary or
            • 01:00 - 01:30 single as it's often evoked. It's many different tools, many different algorithms, many different um ways of sort of uh using and engaging technology. Um, so we're going to do that. As Kate sort of brilliantly puts it, right? Artificial intelligence is in many ways not artificial and not terribly intelligent. So, we're gonna talk about why and how that is. Um, and as we do so, I want to just begin by
            • 01:30 - 02:00 saying, let's take a deep breath. I sort of think of this night, actually, this is how we've sort of conceived of this night. If you haven't noticed, you've all noticed, there's a lot going on right now out in the world. It's a bit crazy-making to sort of look at the news. Um, and a lot of that news involves these technologies we're going to be talking about, but it feels so important, as I say, to just take a deep breath, take a pause, and think about the ways in which AI, or these things, the AIs I should
            • 02:00 - 02:30 say, right, is not magic. It's material. It's human. It's something we made that we need to figure out how to engage with and understand and think about. Um, and Ruha and Kate are here to help us think about how to do that. We're going to start by asking you, Ruha, who uh graced us with this image of the mighty Uhura from Star Trek. I want to ask you because I know that you've described yourself actually
            • 02:30 - 03:00 as a student of Octavia Butler, as someone who's been really important to your thinking and your work as a scholar around technology, around race, around any number of things that you've done brilliant work around. And I wonder if you could just begin by speaking to how you came to your work as a scholar thinking about technology in particular, thinking about difference, thinking about the future, thinking about imagination, and what Octavia Butler and/or other figures of eminence have done to help
            • 03:00 - 03:30 you on that path. How much time you got? Yeah, let's get into it. So, there are so many origin stories to choose from, so I'll just um be disciplined and choose one, one related to this image. Um, when I was uh 15, my family moved from South Carolina to the South Pacific. And this was the mid-'90s. And my main form of entertainment was boxes and boxes and boxes of my dad's
            • 03:30 - 04:00 recorded Star Trek episodes. So I had no choice. And so this was my sort of download into this realm of speculation and world building. And at the same time I was, again, living in the Marshall Islands, the South Pacific. For those who may know, that was the testing ground for um the bombs that our country um has created. And it was also a place
            • 04:00 - 04:30 where I was befriending Marshallese who were taking me around to different islands beyond the capital. And there were two sets of islands that we visited early on. One was the military installation, uh, Kwajalein, and we took a ferry from Kwajalein. And Kwajalein, if you just landed there, you wouldn't think you were in the Pacific. It was like uh any suburbia environment. It was like Stepford Wives meets like
            • 04:30 - 05:00 the Truman Show. Uh, people walking around with strollers, lots of golf courses, Baskin-Robbins, no sense that you're in the Pacific. A short ferry ride next to that was Ebeye, which is where Marshallese were pushed off of Kwajalein to live. It's nicknamed the ghetto of the Pacific. Um, you know, a shantytown, uh, 26 times the TB rate of the US, and many people still affected by radiation fallout. So it was this juxtaposition of this
            • 05:00 - 05:30 seeming utopia that had been created, this military uh utopia, next to this dystopia, side by side. It was kind of my first glimpse that utopias need dystopias. You can't have one without the other, right? And so it was really this formation that then got me later, you know, in my 20s and then 30s, really shaping my research to always
            • 05:30 - 06:00 look for the hidden dystopias beneath the utopias that were being sold. And so that looked like biotechnologies for my dissertation, regenerative medicine, looking at who and what is harmed in the process. To fast-forward to today, thinking about these AI futures that are being sold and the dystopias that are often out of sight. Um, and so coming back to this image, you know, part of what it evokes for me is a question of uh who is
            • 06:00 - 06:30 driving this ship? Who is shaping our collective future? Because as it stands, it's really a small sliver of humanity that's currently imposing its vision, its imagination, on the rest of us. One way to think about it is that we're living inside someone else's imagination. And so I think for me it's not just about changing the captain, changing the demographics. A lot of people will be presented with these
            • 06:30 - 07:00 problems, these tech mediated harms and inequalities, and they'll say, "Oh, maybe if we just get more Black folks in tech, more Latinx folks in tech, change the captain, change who's behind the screens." And in my view, it's not even that it's not enough. It's in fact a really dangerous cosmetic fix for what requires much more substantive
            • 07:00 - 07:30 um redress. It's not that we shouldn't aim for that, but we shouldn't get fooled into thinking that that is going to magically change what we're getting downstream by changing who's upstream. And so I want us to think about what the values are, the ideologies, the assumptions, the desires that are getting encoded into the digital worlds that all of us have to inhabit. And the last thing I'll just say for now is
            • 07:30 - 08:00 that I think these issues are too important to leave to those who simply have the technical expertise, the technical know-how. Right? It's an invitation for us to think about the many kinds of knowledges and experiences that we need around the table to move us forward, to drive this ship that we're in. Indeed. Thank you. Um, and what you're saying there makes me think about, yeah, again, the key insight that these technologies are not
            • 08:00 - 08:30 neutral, they're not quote objective, right? There's a sort of rhetorical move that's often made about, oh no, it's just math. It's just algorithms. And one thing that your work has helped us to see, right, is the ways in which the values, the biases of the data sets that are used to train these models are embedded within them, which is to say that they're predicated on the world as it is and all its inequities, right? Um, and so we'll get into this much more, but love that as a way in and a segue to your work, Kate. Um,
            • 08:30 - 09:00 and one of the things that you've done for us as a researcher turned artist, and I'm going to ask you to talk about this remarkable work that we're going to kind of scan over here, Calculating Empires, is to remind us that AI, as you say, as I said earlier, not only is it not necessarily artificial or terribly intelligent, but in vital ways it's material, right? And it's embodied. It's a
            • 09:00 - 09:30 real thing that has impacts and infrastructure, infrastructure that's growing at alarming and extraordinary rates, which you're going to tell us about. But this project, I wonder if you could talk us through this, about thinking about, you know, the last couple years, basically since ChatGPT showed up, there's AI, AI, that's the new new thing out of Silicon Valley. But of course there's been lots of new new things. Um, and this is a kind of map of technology. Talk to us about this project and why it's important to think
            • 09:30 - 10:00 about technology's empire. Yeah, I'm happy to do that, Josh. But I want to start um by just acknowledging where we are. You know, we are in an extraordinary crisis. In the last week, we've seen hundreds of millions of dollars cut away from researchers across universities in the country. We've seen Mahmoud Khalil arrested by ICE officers for attending a protest. And just today we saw a Yale scholar suspended from her job for speaking at a
            • 10:00 - 10:30 Gaza protest. At the same time, AI is being used to scan all of our social media profiles to see if we talk about unacceptable terms. It's being used to scan through federal grants to see if you are working on diversity, equity, inclusion, climate, the female body, structuralism. You name it, there is a long list. We've all seen it. This is an appalling restructuring of power and it is being done by a very small group of men who are also the richest men on the
            • 10:30 - 11:00 planet. And this is really what a tech oligarchy looks like when it stages a soft takeover of a major world government. And we are living through this right now. So part of what I'm really interested in doing as somebody who's been researching AI for, you know, 15 or so years is seeing this increasing centralization of power and feeling incredibly worried that this was
            • 11:00 - 11:30 something that nobody was stopping. Every single political party has been pushing this barrow further. So, one of the things that really struck me was how do we give this moment more historical depth? How do we look at how these types of centralizations of power have happened not just now, not just in the 20th century, but over the last five centuries? And so working with my amazing collaborator Vladan Joler over four years, we basically hand
            • 11:30 - 12:00 illustrated a map of five centuries, starting with the emergence of merchant shipping and the sort of emergence of colonization through the technology of long-range shipping, through to all of these forms of technology that initially start as effectively forms of excitement. You could think about the camera, radio spectrum. And then you gradually see these enclosures, how it's captured by particular interests. You see this again in the enclosure acts in
            • 12:00 - 12:30 the UK in the 1500s and 1600s, the capturing of what was land for peasants, and it becomes privately owned. This is the echo of time over centuries. And by doing this, my hope is that we can see how this is a period of the consolidation of empire, of technological empire, and this is what we have to fight. Yeah, indeed. Thank you for that clarity and sort of bringing us to this moment, this urgent moment, this moment of emergency and of crisis. Um,
            • 12:30 - 13:00 and I think one thing I was thinking about as you were talking about the enclosure of the commons is that we often talk about sort of data, and we're going to get into data quite a bit here. Uh, and privacy. And of course there's a key aspect of sort of thinking about, oh, we like to keep our data private. There's a way in which intuitively this makes sense. But one of the things that's going on, and as the videos that you all saw as you were sort of walking in, which is drone shots by our uh brilliant friend John Fitzgerald and
            • 13:00 - 13:30 Nick Fitzu put together, this work is essentially turning the commons into data, and turning the world into data, and sort of getting rid of this assumption that perhaps there should be stuff that is not owned or privatized or turned into data. Um, and that seems to be a key aspect of where we are. Fair to say? I mean, it is extraordinary. We have seen the capture of everything that has been digitized into privatized structures.
            • 13:30 - 14:00 That is one of the biggest enclosures that will happen in our lifetimes. But as we know, it's just the beginning. It's the beginning of a set of further enclosures that we're now seeing, again abetted by the handmaidens of capital, now in the US administration. So I mean, I think this is the issue when we talk about AI. There's such a tendency to think of it as a set of abstract technological processes or algorithms, but it's really about the people behind it, the structures of capital behind it, and then these processes by
            • 14:00 - 14:30 which these systems are impacting our world, impacting humans, impacting our political institutions, and impacting our environment. So how do we make that the conversation around AI rather than the shiny new thing which we're constantly being told about? Right, indeed. And one thing your work has done also, right, is just to think about and to reveal those immense amounts of capital. A couple years ago, you know, among the sort of finance and investment types there was this riff: oh, AI is so expensive, we need, you know, it's crazy,
            • 14:30 - 15:00 data centers, it's going to cost hundreds of millions of dollars. And now they're spending it, and that infrastructure is coming too, right? Half a trillion in the next 5 years. Half a trillion, this is not a small sum of money. No, they're spending that money. Um, and by way of sort of helping us, I think, think about, or sort of pivoting to again the sort of definitional question, what is AI? As we've said, it's much better to put it in sort of plural terms or talk about machine learning. Um,
            • 15:00 - 15:30 but Ruha, I wonder if you could just help us think about the ways in which you unpack this phrase, um, artificial intelligence, which is perhaps a way to ask what we mean by intelligence generally. Yeah. I mean, in some ways we could call AI an empty signifier, which is a fancy way of saying that it becomes whatever we make it, right? It's deployed in so many ways. It's a marketing term. You know, it's
            • 15:30 - 16:00 certainly connected and wrapped up in these webs of corporate power. And so for me, I personally am sick of using the definition and the assumptions, as you say, like the technologies, and assuming that that's what we're talking about. Um, and so I've been playing around with other ways to think about AI. What else can it be? Not this artificial intelligence, but what other forms of know-how and knowledge do we actually need to guide this ship, to move
            • 16:00 - 16:30 forward? And so ancestral intelligence is one, and there I'm thinking about the kind of collective wisdom that we inherit, that we pass down, that often gets discarded as backward and primitive and no longer needed. And then abundant imagination as another kind of AI, which we'll talk more about. But partly what I want us to do is, when those two letters are invoked, to ask which one are you talking about, right?
            • 16:30 - 17:00 Like not assuming that we're talking about the tech mediated one that we're being sold, but what other kinds of AI do we actually need? Do we want to actually foster and grow and invest in and imagine? Um, and so again, if ancestral intelligence is what we inherit, then in my view, abundant imagination is what we need to pass down. We need to radically expand how we think and what we think we need to know. You know, so much of AI, the first kind, um,
            • 17:00 - 17:30 the tech mediated kind is about closing off futures, about predicting things, about control, about scanning our social media and labeling people a terrorist. Um and so if that sort of controlling impulse is so tied into this first kind, then I think we need other imaginaries. We need other things to open up. And for me, these two options are to get us started in our brainstorm, in our discourse. Um, and again, we can unpack
            • 17:30 - 18:00 what this might be. But I think we have to reclaim our power, not just to name reality, but to shape it. We can't work with something if we don't have language for it. Language is a technology, right? And so, being able to put a name to something, we can start to work with it collectively and figure out a way to um reclaim our agency rather than this top-down vision of the world that we're being um handed. Right? And one of the ways in which intelligence is
            • 18:00 - 18:30 defined, certainly in sort of tech circles, is, you know, intelligence is about rational action. It's about decision-making. But as someone, you know, wisely said, a good prediction isn't necessarily a good decision at all. These are very different things. And I think that, yeah, one thing we can sort of talk about here is this shift, right, that you just invoked, from predictive AI to generative AI, which has been massive. And, you know, now I think lay people often say AI, they mean
            • 18:30 - 19:00 generative AI. In a sense they mean, oh, a machine's going to write a song, or I can ask GPT a question and we'll see what comes back. And they do some impressive things. But this sort of shift, it seems to me, and Kate, you can tell me if this is correct, but yeah, essentially over the last couple years, as sort of generative AI has taken off in sort of popularity and sort of, oh, this is the future for better or worse, there's this sort of vision. Um, and we're going to talk about predictive AI
            • 19:00 - 19:30 more here. But as generative AI has kicked off, it's changed the geography and infrastructure of this technology that you've urged us to think about, yeah, not as this mystical thing in the cloud, although it is a so-called cloud, but something that lives in mines and in tech centers and in towns, that has a geography, right? Um, I'm a geographer, as Jana mentioned, so I love your atlas stuff just in principle. But talk to us about this visualization, this Anatomy of
            • 19:30 - 20:00 an AI, which is a kind of another map, and the work you've been doing about this lived geography, this extant thing. Yeah, it's a delight to um to have a geographer with us on stage. Yeah, map nerds. Let's go. Um, so this work, again, this was Vladan and I. This is before Calculating Empires. Um, this is the precursor. So if Calculating Empires is a story about power and technology over centuries of time, this is about power
            • 20:00 - 20:30 and technology over space. So, we've moved from time to space. And this one really came about back in 2016, um, where a lot of the conversation around sort of building what was then much more about predictive AI was looking at the sort of data and technology pipeline, what you kind of see there in the middle. Um, and then we started sort of going, well, this doesn't tell you anything about, you know, you see this beige container of an Amazon Echo, like it looks like it doesn't really do anything. It's, you know, barely
            • 20:30 - 21:00 creating a physical imprint. So instead, we opened it up. We looked at what was in it. We found where the components come from, where they're mined, where they're extracted, how much the miners are paid, you know, what their health conditions are, and then all the way through to the end of life of the devices, where they get thrown out, generally after about three to four years, into e-waste tips in Ghana and Pakistan, where they poison waterways and the soil, um, and cause this kind of horrifying whole-of-life cycle. Um, and this for me, this changed me as
            • 21:00 - 21:30 a scholar. I mean, I think really before this time I was used to going to archives, you know, sitting at my desk reading papers. This put me in the world in a very different place, and it meant that I would now, you know, spend the next eight years in mines, going inside factories, going to data centers, speaking to people who were labeling data sets or, you know, extracting lithium. And for me, those material landscapes are what is the real story of AI right now. Um, it's extraordinary. In
            • 21:30 - 22:00 3 years, we've seen basically the creation of the largest, most expensive planetary infrastructure that we have ever built as a species. It is wild. I mean, basically the new Trump project Stargate is unleashing not just a vast amount of capital, but capturing a huge amount of land for hyperscale data centers for generative AI that use enormous amounts of energy and water. And it's coming from those communities.
            • 22:00 - 22:30 For example, in Virginia right now, data centers use, I think it's over 20% of the state's energy. They're predicted by 2030 to be using 50% of the state's energy. So multiply that across the country. But in this huge rollout that we're seeing, you're looking at AI systems competing directly with humans for land, energy, water, basic resources of life. That is the real story that we're seeing with generative AI. Indeed. And yes, making this material,
            • 22:30 - 23:00 making it a material history, making it a geography, making it a human story, right? This is what this work of mapping has done. And we were talking uh beforehand about one of the things you've been looking at, right, these lithium mines, but you've been hanging out in North Carolina looking at sand, too. Yeah. Yeah, it's taken a turn. So, in the last six months, I've been spending time in basically the only mine in the world that is used to produce ultra pure quartz. It's really fine sand that is essential to making semiconductors. And
            • 23:00 - 23:30 it's extraordinary that there is one place in the world where this is coming from, and it's in North Carolina in the Appalachian Mountains. And so I started going there, um, speaking to people, studying the mines. You know, got into a little four-seater plane and we flew over the mines to sort of see how it cuts into the landscape. And then a few days after we had arrived, this was like, you know, one of our trips, Hurricane Helene hit and wiped out the town. It was under 12 feet of mud. Uh, over 200 people
            • 23:30 - 24:00 died in the region. It was catastrophic. It was climate change in the most vivid way. They never get hurricanes in this, you know, part of the mountains. And so for me it was this kind of extraordinary experience of being in the story of the impacts of AI on our ecologies on our atmosphere on our climate and then living through the horror of it and then trying to record that and trying to just understand how this is going to repeat
            • 24:00 - 24:30 and accelerate if we don't acknowledge what's happening and start to actually address it. And this is the real mission of how do we do that, right? And making these stories real and grounded in the most real and urgent ways, right? Um, and it's such an interesting thought experiment if you think about how do we picture AI. And we had fun this past week talking with our wonderful designers about, you know, we did this little GIF of the mouse, which was sort of fun and cute, I hope. But one thing we were thinking about, you know, is
            • 24:30 - 25:00 sort of, how do you lend it an image, right? And some people might just think, oh, it's an iPhone. It's a weird face. It's some weird futurist image. But it changes things if you say, no, it's a data center a mile across, right? It's a catastrophic storm. It's a mine that's, you know, devastating an ecology, right? And that's amazing. I would say that minimalism, the mouse, and the other minimalist aspects are part of the subterfuge, part of tricking us
            • 25:00 - 25:30 into looking away from the vast materiality of it. Indeed. Yeah. Absolutely. Um, and so, Kate, your artwork has been sort of trying to give us new images for thinking about this. But I wonder if we could pivot a little to thinking about predictive AI, which you both have worked a great deal on, and obviously this technology is changing all the time. And this invokes again the images we were looking at um before the talk here. And this essentially raises these
            • 25:30 - 26:00 questions about recognition. What is it to teach a machine to recognize? Um, what do we mean by that? What's the data that's sort of teaching that algorithm to, if not see, then to recognize? And Ruha, a lot of your work has been uh looking in a very fine-grained and detailed way at the ways in which, again, inequities and biases that are baked into our society have become a part of many of
            • 26:00 - 26:30 these predictive AI systems. I wonder if you could talk a little about that work. Yeah, I mean, I'm glad we started where we did. First of all, I came here to listen to Kate, so I don't know what y'all are doing. Um, I've been following her around, like, okay. So, I'm glad we started there, though, because that's often what gets lost when we jump to this. We can think about this as the kind of impacts of AI on people, on communities, who's seen, who's not seen, who's surveilled, etc.
            • 26:30 - 27:00 But what we really have to do always is start the story and the conversation much earlier, which is not about the impacts but the inputs: the material inputs, the ideological inputs, the social, the historical, all of the things that happen that we take as given when we start with the technology and its impacts. We assume that all of that is not up for debate. Which brings us to, you know, this juxtaposition, which again we often are focused on one side of,
            • 27:00 - 27:30 which is the being-excluded part, when we're not seen by these tools. That often takes up a lot of space um in our own sort of uh conversations and critique of technology. And what I really would love us to do, again in terms of expanding how we think and talk, is not to only focus on the exclusion, because then the demand becomes inclusion, right? Um, without really
            • 27:30 - 28:00 questioning what we're being included into. And so that's why it's so important to think about all of the harmful ways that we can be centered by these technologies and these systems, or rather the people wielding them. And so really, being um watched but not seen is that left side of the screen. And so to think about hypervisibility and invisibility as two sides of the same coin. And it leads
            • 28:00 - 28:30 us down a very different rabbit hole when we start thinking about what it means to be included into harmful systems. Right? Then we have to move beyond the buzzwords and the demands for inclusion and even for diversity, right? Because we have to remember that all kinds of things are diverse that aren't liberatory, right? Um, plantations, colonies, you know, uh, Amazon factories, you know, warehouses. And so again, moving beyond the
            • 28:30 - 29:00 buzzwords leads us to open up the black boxes and think about, you know, what is happening behind the scenes. And so I really, again, am thinking about expanding um where we start and then where it leads us in terms of the conversation. Right, and one of the things I think that leads us to is the distinction between recognition and construction, right? The ways in which an algorithm could sort of, say,
            • 29:00 - 29:30 correlate data and think about, oh, I recognize this or recognize that, and not recognize the ways in which recognition is a part of constructing race, constructing gender, constructing the ways in which we have categorized the world for a long time in very harmful ways, right? And that these predictive AIs play a part not merely in seeing but in constructing. Fair to say? Well, there we go. Um, a similar image,
            • 29:30 - 30:00 Kate. This comes from a project of yours, ImageNet Roulette. We have Obama there as a demagogue. We have the chairman of the Joint Chiefs as a state trooper, which I kind of love. And Biden is an incurable, you'll see. Oh, poor Joe. And Hillary Clinton gets sick person, diseased person. Yeah. Poor Hillary. Um, talk to us about ImageNet Roulette. Yeah, this is part of your art practice. Um, yeah, go on. Well, this is
            • 30:00 - 30:30 a really good example of things that are very much bridging research and art. So this is a project um with artist Trevor Paglen. We started this back in 2017. Where Oh no, no, no. Josh, put it away. Put it away. Um, you can see Yeah. Yeah, it's a good one. Um, there you go. Yeah, we don't need that. I'm trying. I'm trying. Anyhow, so uh what we were doing: um, I had been spending several years looking at the big training data
            • 30:30 - 31:00 sets that we used to make AI systems see the world. How do they see? What is the basis of recognition? And the big daddy of uh visual AI systems is this data set called ImageNet. It was first uh basically released in 2009 and had been openly on the web for, you know, the better part of eight years by the time um we were really looking at it in detail. It's got 14 million images which are categorized into around 22,000 categories. So we started like
            • 31:00 - 31:30 physically looking at the categories and the images, which very few engineers do. It's an infrastructure: you pull it off the shelf, you apply it, you know. And we saw extraordinary things. People being labeled in, you know, the worst types of categories, from things that were kind of obvious, like, okay, this person is a nurse because they're wearing a uniform, through to this person is a kleptomaniac
            • 31:30 - 32:00 or a crime suspect or an Eskimo, and like all of these extraordinary racial and gender categories were in there. It just keeps going and going and going. It's like a horrifying, horrifying categorization of just people who had their images online, you know. Um, and so then we trained a very simple neural net um to basically apply those existing categories um to people. And we released it both as an art piece but also as an app that you could use.
            • 32:00 - 32:30 And um we didn't expect that a whole lot of people would start using it. And so we got up to like a million people using it a day, where we had to say, we're shutting this down because we can't afford the server cycles. People got into it. But you can see immediately the image logic. You can understand these sort of systems of classification. You know, if I'm wearing a suit, I get categorized as a newsreader. If I'm wearing a bikini, I'll get categorized as a... you know. It's the most basic, deeply stereotyped form of looking
            • 32:30 - 33:00 at people. But it is an object lesson in what happens when you classify people as objects. And so this was very much uh the work that we did for this show called Training Humans, um, where we looked at the material of data, how data is understood, how it's applied. And of course this has now really exploded, so that training data sets now have 12 to 14 billion images. Even to look at those images now is a different project than
            • 33:00 - 33:30 when I started doing this a decade ago. Like, you have to train an AI to look at the data set that's being used to train AI. It's that cumbersome now, right? And these neural nets, I mean, I'm by no means a computer scientist or a technologist, but it's fascinating to just read them and sort of look at the ways in which people articulate how they work, or attempt to articulate how they work: that neural nets, and as you have put it, AI, are essentially probabilistic statistics at scale, right? And the mechanics of that, again,
            • 33:30 - 34:00 above my pay grade, but it's remarkable to think about, sort of, again, yeah, what is the data that they're being trained on, how can we understand that data. Uh, and Ruha, you do work in the Just Data Lab at Princeton. I wonder if you could talk about that work and sort of thinking about, as came up earlier, this is not necessarily just a question about
            • 34:00 - 34:30 privacy, but about the ways in which everything is being turned into data, um, and how we should understand that process and perhaps think about shaping it. Yeah, absolutely. Actually, are any of my students in here from the Just Data Lab? Yay. There we go. Welcome. So, it's actually named the Ida B. Wells Just Data Lab. And that also sort of primes you to think about the fact that the conversations, the movements that we're trying to engender
            • 34:30 - 35:00 now are part of a much, much longer freedom struggle. They take a different iteration now, the kinds of harms and oppressions that are being mediated by AI. But again, it's important to trace these continuities. And I think in naming it after Wells, again, it's a reminder that she was wielding statistics and stories in order to combat terrorism in this country, lynching, traveling around the world
            • 35:00 - 35:30 using journalism and writing, but also data, to try to marshal facts about what was behind this racial terrorism. And so I think recognizing that all data by definition is historic, right? So that already brings us into the conversation. Um, and so here again you see two images. One that evokes a prior era of racial domination on your left, in terms of, you know, injustice in education,
            • 35:30 - 36:00 and then on your right you have an image from the first few weeks of the pandemic, when students were sent home. They couldn't take their tests, all those important tests. The US SATs; but in the UK, where this image comes from, A-level exams are the corollary, really important for what university you can apply and be competitive for. And so what the UK government decided to do was use an algorithm to predict their grades.
            • 36:00 - 36:30 Predictably, right, students that went to more working-class schools, diverse schools, had their grades predicted as lower than their wealthier, whiter counterparts, and so they took to the streets, as they should. And one of the signs, you saw the first sign, I think, you know, about the algorithm, which is like it in a nutshell. Um, but this sign is really instructive because she's making a connection between those grades, those
            • 36:30 - 37:00 predicted scores, and the postal codes that students live in. And I think this is important because it's a reminder that this algorithm is not creating the problem from scratch. It's reflecting and reproducing existing forms of inequity, encoded in the geography of the city and encoded in the geography of the nation. So again, we can't just fixate on the technology as causing all these problems when there were all kinds of oppressions and
            • 37:00 - 37:30 injustices and inequities that predated this algorithm. So one, you know, upside in my view of centering AI occasionally is that it has the potential to make us face things that we normally choose to ignore, right? It makes it clear that these inequalities exist, you know, whether we act on it like the UK government and dole out scores or we use it as grounds to contest the underlying
            • 37:30 - 38:00 inequality and oppressions. And that's the direction I see those students in the UK, and these students here, my students in the audience, that are using their collective power as students to push back against these corporations that are developing and deploying these surveillance tools, these predictive tools, whether it's in education, health care, policing, borders, you name it. And so I think, you know, part
            • 38:00 - 38:30 of it, my sort of um mission, my soapbox, is for us to always get back to our power, not to allow the immensity of the problems that we're up against to completely overwhelm us to the extent that we're paralyzed. And so I offer this image again for students who, within the hierarchy, the feudal hierarchy of the university, are lowest on the totem pole, perhaps a little above staff. But really, when we
            • 38:30 - 39:00 organize and collect that power, um, then there's a potential, in this case refusing to even apply for jobs at certain companies, essentially withholding your labor beforehand. Um, and we see it in union drives and other tech companies. Again, thinking about, these are the people that see these tools being developed before any of us are affected by them. And so if there are not protections in place that allow them to blow the whistle, that allow
            • 39:00 - 39:30 them to speak up, then we are going to feel the downstream results. And so that's why the labor issues and protections that are happening behind closed doors within these companies are directly tied to the harms the public then faces. And so again, sort of circling that back and really thinking about um what's happening upstream as grounds for this larger tech justice movement that we've inherited from the likes of Wells, Du Bois, and many others. Right.
            • 39:30 - 40:00 And thinking about, I think, one thing that we can sort of pivot to here, around Twitter. Remember that? I know. Oh, right. Kate said, "I don't want to see my angry old tweets." I'm sorry. You know, we'll go very quickly over your old angry Twitter tweet. We dug this up. Like, I know. You know, we did our research here at Pioneer Works. Um, anyway, this is you riffing on this ludicrous attempt to sort of categorize facial expressions into seven universal emotions. Didn't
            • 40:00 - 40:30 work out too well. But they're still doing it. They're still trying. They're still going for it. Here's you talking about ChatGPT getting your bio achievements all wrong. Well, actually, it was um, they sicced Lex Fridman onto me, because a journalist contacted me and said, "Hey, um, I asked ChatGPT, um, and this is in like April '23, so this is, you know, pretty early on. Um, I asked it who is a big critic of Lex Fridman and it says you. So, can you, um, it's telling me about all these articles you've written. Could you just like send me them, because
            • 40:30 - 41:00 I can't seem to find them online?" And I'm like, yeah, that's because they don't exist. Um, and this is where I basically started to think about um not just hallucinations of LLMs, but hallu-citations. [Music] Um, but yeah, I mean, we've seen so much happen from this period, this kind of enormous sort of cultural learning curve um around these technologies, which
            • 41:00 - 41:30 supposedly can do anything and are close to becoming, you know, um, you know, AGI. And yet, as we saw in a study that came out this week, um, if you ask any LLM a set of questions around, you know, what's going on in the world, it's going to get it wrong 60% of the time, you know. So this is the precarity of the information ecosystem in which we live, and I think a growing epistemic collapse around not just, you know, what is potentially deepfake, but what is, you
            • 41:30 - 42:00 know, shallow true. Like, how do you figure out which is which right now? And we're losing that sense of collective knowledge, which is why I'm loving that, you know, Ruha is talking about this as history. We have to talk about our history. We have to center our history, because it is being taken away from us. We are being told an alternative history right now by the US government, and we have to start telling our own stories loudly, right? And that, I think, is a
            • 42:00 - 42:30 nice sort of segue to the spectre of AGI, uh, which goes along with these lovely robot images that Ruha has shared with us. Um, and we want to talk about policy briefly and then imagination and sort of figure out where we go from here. Well, one thing, I think that large language models just came up sort of for the first time here as the sort of centerpiece in certain ways of generative AI, or at least the programs that we know best. Um, but it's worth
            • 42:30 - 43:00 saying, right, that again, the sort of multiplicity of AIs that are out there. And an algorithm or machine learning model in a lab that's trying to create new proteins or drugs, and we've seen these sort of positive stories of, oh, it speeds up the scientific method. That's a hard thing to be mad at. There's ways in which these so-called AIs are doing remarkable things. There's also all the scary stuff. Um, let's talk about robots and AGI. Um, Ruha, you gave us these
            • 43:00 - 43:30 excellent robot images. Talk to us about the spectre of the robot. Yeah, in history. And now, speaking of history, I mean, let's start with the word robot, you know, thinking about its origin and etymology, drawn from the Slavic robota, which means servitude, hardship. And so, you know, anytime we're talking about robots in pop culture, in my view, it's a way to channel our anxieties about
            • 43:30 - 44:00 human dehumanization. So it becomes this kind of proxy for thinking about all of the ways that human labor and human agency are diminished. And in my view, there are two stories that we're routinely told about robots, and emerging technologies more broadly. The first, you know, is that dystopian story. They're here to slay us, right? They're going to take all the jobs. They're going to rob us of our agency. Everything bad is associated
            • 44:00 - 44:30 with technology. And really, that's the narrative that Hollywood loves to sell us: The Terminator, The Matrix, the list goes on. On the other hand, we have the techno-utopian narrative, that everything good is associated with the progress, the future. It's going to make everything more efficient, less biased, more neutral. And this is the narrative that Silicon Valley, you know, until very recently, focused on selling us. And I think the thing that I really want
            • 44:30 - 45:00 to point out is that although on the surface these seem like opposing narratives, they have different endings to the story, right? One in which we're slain, one in which we're saved. They share an underlying logic, what we would call a techno-deterministic logic, which is just a fancy way of saying that we assume that technology is in the driver's seat, propelled by a will of its own. But the human, going back to the first part of our conversation, the humans behind
            • 45:00 - 45:30 the screen are missing from both of those scripts, right? All of those values, assumptions, desires. Um, and so we have to pull back that screen, re-enter the "us" that's actually monopolizing power and resources and imposing that vision onto those who are treated like automata. And you see the signs here that people are holding up, pushing back against the diminishment of their agency and their labor. This last
            • 45:30 - 46:00 one is from a uh walkout in St. Paul, Minnesota at Amazon warehouses. And again, "We are humans, not robots," this refrain, this mantra, um, recognizing, you know, if you can't stop work to pee, you can't call in sick, you go home with your limbs aching. Um, you know, that's a way of really pushing back, this slogan pushing back against that diminishment. Um, and so again, I use the image of
            • 46:00 - 46:30 robot and I bring these slogans for us to really think about what is behind that term, what's behind the slogan, to think about the layers of um, labors that make AI appear magical. Whether it's content moderators in the Philippines, whether it's click workers in Kenya, whether it's Amazon workers down the street, part of it is pulling back the screen and and and reckoning with what some people call artificial artificial intelligence that do the
            • 46:30 - 47:00 ghost work that makes AI appear magical. Um, and then it brings us back to us. And I think it was actually Jeff Bezos who even called it artificial artificial intelligence when he created Mechanical Turk. Um, and I'm so glad that you brought this image in, Ruha, because it really reminds me of one of the experiences, the first Amazon fulfillment center that I ever went to in New Jersey. The thing that really struck me was just the amount of bandages that people had, like the
            • 47:00 - 47:30 repetitive strain of like bending down to pick up objects and put them in boxes, and just the amount of like physical pain that was on display. And then, you know, these big whiteboards that said, you know, we want to hear your voice, share things here. And, you know, people were asking things like, you know, why can't I get a day off around Thanksgiving? And it wasn't changing anything. It was this kind of broken feedback loop. Um, but you were just seeing, sort of, again, in really
            • 47:30 - 48:00 sharp contrast just how difficult this is as a place of labor. Some warehouses have these phone-booth-size, or we might say coffin-size, booths called AmaZen that you can enter and smell lavender and have a meditation going and some very calming images. And I think about that, how it's not just Amazon, but there's so many of our workplaces that have a corollary to
            • 48:00 - 48:30 that sort of, you know, mental fix, mental health fix, wellness fix that doesn't get at the underlying conditions that people are actually working in, right? And this, I mean, I love that, as you point out and as you just did, the idea of the robot long predated, right, the idea that we could actually sort of have these servile machines, right, not unlike the idea of the zombie or the sort of automaton. Um, so these mythologies run deep. Um, if we could turn briefly to the world of policy and
            • 48:30 - 49:00 this frightening gathering that Kate just returned from in Paris. You said you had to sleep a week to recover. I need to recover. Yeah. But you've revived. Thank you for reviving and being with us. Um, I want to talk about policy and then the sort of where do we go from here, and the imagination moment of this conversation. But one line that I've sort of returned to and sort of thought about, in thinking about this gathering and this image, right? People who think about geopolitics these days have heard people say this, that, you know,
            • 49:00 - 49:30 data is the new oil. Like, this is what nation-states covet: the metadata of people, whether it's at the border and sort of being able to track and know and ascertain and recognize. Um, the data has become vastly important to our politics. But tell us, Kate, if you could, about what happened in Paris a couple weeks ago. People saw JD Vance say what he said there. It was a, yeah, this moment about accelerationism, deregulation, AI is the thing, let's go, you know. And the US didn't sign on to
            • 49:30 - 50:00 whatever resolution, big surprise, about sustainability, etc. But talk to us about what you experienced and saw there. Yeah. The powers that be? Yeah, sure. I mean, for people who don't know this image, this is um from President Macron's AI Action Summit, which is the third international summit on artificial intelligence. Uh, we've had summits at Bletchley Park in the UK and in South Korea. Um, and each one of these summits sort of concluded with a statement looking at the potential risks of AI and how to address
            • 50:00 - 50:30 that internationally, creating a sort of thin but present kind of connective tissue across nation-states to acknowledge that there are downsides of these technologies. Uh, so we all, you know, sort of went along to this summit, and they sort of invite heads of state and some, you know, academics and, you know, some CEOs and whatnot, and sort of pile us into a room. And, you know, the big question is, you know, what are the resolutions, like, what is the state of understanding risk, which of course has
            • 50:30 - 51:00 gone up and up and up just in the last 24 months. And Macron gave a speech on the first day saying, thanks for being here, um, I've just raised 109 billion euros to make France the center of AI for Europe. And thanks for coming to my party. It's going to be a good one. I hope it's a rager. I mean, it was extraordinary. It was kind of like watching, in game theory, like, defection, when people say, "Hey, we could
            • 51:00 - 51:30 collaborate, or I could just do what's good for me." And this began a cascading series of defections, where the US refused to sign, you know, this potential agreement, which did mention climate change for the first time, which was great. Then the UK didn't sign. And then there was this sort of scrambling around. You know, interestingly, China did sign, saying, you know, how do we think about labor and the environment in relation to AI. So the tables really flipped. Um, but as a moment of understanding where we are, this is it: we
            • 51:30 - 52:00 are going into this period of extreme AI acceleration, the removal of what small, tiny regulatory frameworks we had around the tech sector, and then a pouring of petrol on the fire of a vast amount of capital. So this was that moment. It was a big fire. It was really, for those of us who've been looking at AI risks and harms and the bigger picture of what these technologies do to our social systems,
            • 52:00 - 52:30 our political systems, our environmental systems, it was really horrifying, because no one's coming to save us. We can't look to government structures to solve this. In fact, in many ways, they're actually taking us back. So, if change isn't going to come from above, it has to come from below. And we have to think about how that is going to happen, because we're not just pushing back on algorithmic decision-making at the border or in the policing systems. I'm thinking here of, you know,
            • 52:30 - 53:00 American Artist's amazing work around security theater and the use of everything from facial recognition to predictive policing data. But we now have to think about our water supply. We have to think about what's happening with the energy crisis. And these are things that communities can push back on. We have to remember that we have a history we can turn to that reminds us that change is possible, precisely because it's not on a cloud somewhere. It's in actual places where people live and work
            • 53:00 - 53:30 and exist and so on. This slide we're going to skip over very quickly, because it's kind of a postmortem. This document is kind of dead. I'm sorry. It's gone. But the reason I think it bears mentioning is that some very brilliant people worked on this thing in the last administration, not least Alondra Nelson, your colleague at Princeton, Ruha. And she of course talks about the ways in which all these things that we need to think about
            • 53:30 - 54:00 in terms of transparency, in terms of discrimination, in terms of how we can engage and think about AIs that could be emancipatory, that could be tools for good or for equity. But she also said something that seems so important, which is that so much of AI policy is focused on bad actors: oh, what if the Chinese get these chips, or what if the AI itself becomes a bad actor, without focusing on what you've both focused on so much in your work, the ways in which extant systems, AIs
            • 54:00 - 54:30 that are doing exactly what they're supposed to do, reinforce the world as it is. Right. Absolutely. I mean, one of the takeaways, thinking about this AI Bill of Rights, is how hard it is to create something and how easy it is to destroy something, in terms of just time scale and energy investment. And so, you know, I've
            • 54:30 - 55:00 written about abolition as it relates to technology, and that word often evokes the destroying part, but really abolition is also about the growing: what do we want to foster and cultivate? It takes so much more work and wisdom to create something, as my colleagues did, than to destroy it with a stroke of a pen. And so that's partly an invitation for us to think about how we place our limited energies, our little lifetimes that we have here. How do we want to spend it? What do we want to
            • 55:00 - 55:30 invest it in, in terms of those twin processes? And we've arrived at imagination. Yeah. Right on cue. Yes. And these words from another mighty writer. Yeah. Toni Morrison: as you enter positions of trust and power, dream a little before you think. Those words are important to you. Tell us why. Yeah. You know, trained as a scholar, a researcher, in this Enlightenment model of what we need, I started coming to the
            • 55:30 - 56:00 conclusion many years ago that the facts alone aren't going to save us. Creating more and more data about things that we know a lot about already, constantly seeking data to prove this or that, becomes in some ways a placeholder for acting on what we already know, right? So rather than just amassing data, it's thinking about how to actually infuse that knowledge into our
            • 56:00 - 56:30 actions, our work, our world-building. And this sort of invitation is really important for me as an educator, as I think about how to scaffold knowledge production, where to place our energies, as I was mentioning before, and really taking imagination seriously: not as flights of fancy, not as something only for self-proclaimed artists, not as something that necessarily can fit in the context of an exhibit, but all of the forms of imagination that happen
            • 56:30 - 57:00 even in the mundane things, like in a spreadsheet, right, in a budget. Organizers often call budgets moral documents, because they tell us who and what we value. So looking for imagination in unexpected places, but also inviting ourselves to exercise imagination even if we don't think of ourselves as a creative or an artist, because in my view that's part of the world-building that Butler did. When you go to the Huntington and you see her papers, she was
            • 57:00 - 57:30 studying newspapers, she was studying medical anthropology, she was doing so much research, and then deciding to present it in the form of narrative and fiction, on the one hand. But she was also doing a lot in the margins in terms of dreaming before we think, in that she was fashioning and creating her own life. There are so many aspects of the exhibit here that just remind me, you know, the fact that she has this
            • 57:30 - 58:00 phrase that she would write in her marginalia, a note to herself: so be it, see to it. A mantra that's almost willing into being what she wanted to manifest in the world. She was writing herself into existence. She was saying, well before anyone acknowledged her brilliance, I'm going to be a New York Times bestseller. Boom, boom, boom. So be it, see to it. I'm going to be able to buy my dream house. So be it, see to it. I'm going to do X,
            • 58:00 - 58:30 Y, and Z. And it's that kind of conjuring and imagination and creativity that happens, yes, on the individual level, as she did, but I think we also have to channel that sort of willfulness in our collective work and our world-building and our organizing and our movement efforts, to really step back sometimes and think beyond the strategies that we need for those short-term goals, which are important, but really to center that bigger vision
            • 58:30 - 59:00 that we're trying to conjure and manifest in terms of our shared futures. Quite beautifully put. And as you've also written, Ruha, we're a pattern-seeking species, right? And it's such a necessity right now to create new patterns. And, as you've also put it, ideas like race were nothing if not the act of imagination of a sort of mad French scientist in the 1700s. So
            • 59:00 - 59:30 all of these things come from imagination, the good and the bad, not just the good. But speaking of this, because you speak so eloquently to it: you have a little exercise, some words for us, to imagine a future we feel good about. I'll... do I hit play? How do we do it? Okay, you got it. Just go back to the words. All right, I'm going back. So part of it is that imagination is something we have to exercise. It's a muscle we have to
            • 59:30 - 60:00 strengthen; we have to train. And so part of it is, again, not to think of it so mystically, but to ask how we actually practice this form of looking beyond the horizon of what's right in front of us, whether that's four years of a certain administration or whether that's longer time scales. And so this is a little exercise, again, just to open up that space that you'll fill, hopefully, in conversation afterwards. But we're going to split the
            • 60:00 - 60:30 room in half. And we're going to say: this side of the room, your catchphrase is "be bold." All right. So when I point to you, you're going to shout it in unison like you're trying to shoot me like a rocket into the sky. All right, you ready? Excellent. This side of the room, your catchphrase is "get real." And I want you to say it like a very bored teenager. All right. And the more obnoxious
            • 60:30 - 61:00 the better. Ready? Space travel. Free public transportation. Colonize Mars. Decolonize Earth. AI superintelligence. Cancel student debt. Build underground bunkers to survive the AI apocalypse. Build affordable housing for all.
            • 61:00 - 61:30 Spend billions on weapons and war. Invest in peace and well-being for all. You get the picture. And so it's this idea that boldness is rationed while realism is mass-produced, and that we're trapped in many ways inside the lopsided imagination of those who are monopolizing power and resources to benefit the few at the expense of the
            • 61:30 - 62:00 many. And those futurists, that astronaut by the pool: they let their own imaginations run wild. It's not that they don't have imagination. They have an excess of imagination that they're materializing, bending our physical and digital realities. And yet their visions, even our visions, grow limp when it comes to thinking about a world in which everyone has what they need to thrive. And so the invitation is for us to be bold when it
            • 62:00 - 62:30 comes to our social vision, to reclaim that power, and to expel those colonizers of our imagination who make one set of goals seem feasible that are really far-fetched and sci-fi, and to really reclaim the possibility of what we're constantly told is impossible. Going back to Butler, I guess the last thing I'll say is, I have this
            • 62:30 - 63:00 little quote of hers on my office door, on the wall, and it says: we can, each of us, do the impossible if we can convince ourselves it's been done before. And that's why that historical imagination matters: to think about what many of our ancestors were able to conjure that seemed impossible in their lifetimes, and yet they woke up and did their part, whether it was small or
            • 63:00 - 63:30 big. And it's channeling that historical stubbornness that we want to infuse in our own work to keep us going in this moment. A beautiful way to close this conversation. Indeed, we're a species that's been in awful spots before, many, many times. Just reminding ourselves of that, and of what we've done, and, as you've helped us do, remembering that we have done the impossible, is important. I
            • 63:30 - 64:00 promised Kate no more of her tweets. But this one I loved so much, because Laurie Anderson has performed here. I love this because it's like the equivalent of putting it on my office door, except I just had to share it with everyone, because Laurie Anderson, you know, was a huge influence on me as a baby electronic musician in an earlier part of my life. She absolutely changed my thinking about what sound and music could be. And then she says this perfect thing
            • 64:00 - 64:30 that completely encapsulates the problem, which is: if you think technology is going to solve your problems, you don't understand technology and you don't understand your problems. I just think it's absolutely perfect, because exactly as Ruha is saying, we are misdiagnosing what technology can do and we are misdiagnosing what we can do. And the power is with us. Yes. And I think, more than almost anything at
            • 64:30 - 65:00 this time, it's about how we are going to remember what the "us" is in this phrase, and, rather than the AI always being centered, always coming first, that we are going to have to start asking harder questions about who technology is serving and whose interests it is advancing. And I know things feel really bleak. They do in our world. We were chatting before about the collapse that's coming in the American university system, and it feels like the end of the world. We've seen the US depart the Paris
            • 65:00 - 65:30 agreement. We've seen the end of the AI Bill of Rights, things that many people in this room have worked really hard for. But the thing that I always keep coming back to is that the world has ended many times, and we have to remake it. But we have to remake it not by putting technology at the center and saying, how can it drive us forward, but by flipping the question and saying, what kind of world do we want
            • 65:30 - 66:00 to live in, and how does technology serve that vision, not drive it? That's the big one for me in terms of how we go through this moment. And just to end on possibly one of the most dystopian things I was reminded of recently, and then we'll get back to a hearty bit of poetry: thinking about the end of the world, Sam Altman apparently said, you know, AI might bring about the end of the world, but in the meantime
            • 66:00 - 66:30 we'll have some pretty great companies. I kid you not. And what I would like to suggest is that that is not the vision that Ruha and I are proposing we ground ourselves in right now; that we are facing real collective threats, both human-made and planetary, right now, and we are only going to solve them by thinking about how we want to remake
            • 66:30 - 67:00 the world after the end of the world. Yeah. Ashe. And I'll just add a footnote to that: although we're three individuals sitting here on this stage talking about this, pragmatically speaking, this really involves us plugging into the organizations and movements that are flowering in many different locales and regions. And so this isn't, you know, kind of individual brain-power genius that's going to move us forward; it's really plugging into all
            • 67:00 - 67:30 kinds of efforts, whether it's Take Back Tech, which is a wonderful initiative (I went to their conference last summer), or the many local organizations that probably many of you in here are working with. There are ways to plug in, to channel the despair, to channel the rage, to metabolize it through action and through community. And so, again, moving beyond the individual model of leaders and change, to really plug into all of the
            • 67:30 - 68:00 resources that we have around us to be able to do this. Beautiful. It just remains for me to thank the brilliant Ruha Benjamin and Kate Crawford for taking a deep breath with us, for sharing your knowledge, your vision. [Applause]