Why we should use AI to expand what it means to be human | Sari Azout
Estimated read time: 1:20
Summary
In this thought-provoking talk, Sari Azout delves into the complex relationship between humans and artificial intelligence, challenging the narrative that AI will simply replace humans. She points out the limitations of this binary view, emphasizing instead how AI can expand human capabilities and creativity. By redefining AI not as a threat but as a new form of collective intelligence, Azout encourages us to rethink how we use language, engage with technology, and evaluate our work. Her insights highlight the gaps between expectation and reality in AI's impact on workload and expertise, while also urging us to transcend quantifiable metrics and focus on the unmeasurable aspects of human experience. With AI's open-ended potential, she pushes for a more thoughtful integration of AI that enriches rather than diminishes our humanity.
Highlights
The narrative of AI replacing humans is as nonsensical as humans versus pencils or calculators. 📏
AI should be viewed as a form of collective intelligence, emphasizing its potential to aggregate human knowledge. 🌐
Technology creates new standards and doesn't just make old tasks easier, leading to a paradox of increased workload. 💼
AI democratizes tools but requires expertise to direct and assess outputs effectively, highlighting a new form of expertise. 🎨
The importance of reclaiming unquantifiable human qualities in an age dominated by mechanical metrics. 💡
Key Takeaways
The phrase 'AI will replace humans' is oversimplified; AI can actually enhance human creativity and potential. 🤖
Language shapes our perception of technology, and rephrasing AI as 'collective intelligence' can change how we perceive its role. 🗣️
AI's impact on workload is paradoxical; it can increase work by creating new opportunities rather than just saving time. ⏳
Access to AI tools doesn't replace expertise; it shifts the focus to guiding and evaluating work. 🔍
AI reflects and amplifies our values; it calls for a return to valuing unquantifiable human qualities. 🌟
Overview
In her enlightening talk, Sari Azout challenges the oversimplified notion that AI will replace humans. Instead, she invites us to view AI as a tool for enhancing human creativity and potential. Through her exploration, she demonstrates how language shapes our perception of technology, recommending a shift from artificial intelligence to collective intelligence. This, she argues, can profoundly change how we perceive and utilize AI.
Azout discusses the paradoxical nature of AI's impact on our workload. While it's often thought of as a means to save time, AI can actually increase the amount of work we consider worth doing by creating new opportunities. This phenomenon mirrors historical labor-saving technologies that raised standards rather than reducing effort. Additionally, she emphasizes that expertise remains crucial in guiding and evaluating AI-generated outputs.
Overall, the talk encourages a reflective approach to integrating AI into our lives. Azout suggests that rather than fearing AI as a competitor, we should see it as a catalyst for exploring and reclaiming the invaluable aspects of our humanity that cannot be measured. By navigating these reality gaps, we can harness AI's potential for profound personal and societal growth.
Chapters
00:00 - 01:30: Introduction and Framing the AI Question The speaker opens by acknowledging Lauren's introduction, then deliberately undermines it with a very boring question: Will AI replace humans? It is the "how's the weather" of AI discourse, safe enough for dinner parties yet provocative enough to sell out conference tickets. She asks why, faced with one of humanity's most remarkable achievements, we instinctively frame it as a threat, and traces the answer to a funding pitch: in 1955, John McCarthy coined "artificial intelligence" as a catchy term to attract research money, and that choice of words became the defining frame for the technology.
01:30 - 03:30: The Impact of Language on Perception This chapter examines how language itself shapes our thinking. Writing with a human editor reads as collaborative and legitimate; writing with AI reads as lazy, inauthentic, and corner-cutting, even though the activity is similar. Azout recounts how artist Holly Herndon's argument that "artificial intelligence" is a disservice of a term, and that "collective intelligence" is far more accurate, changed her own reaction: had OpenAI been called OpenCI, using it would feel like resourcefully leveraging the best of human knowledge, like keeping Gabriel García Márquez and Virginia Woolf in the room.
03:30 - 06:30: The Reality Gaps in AI Utilization Azout describes spending two years building her company Sublime while obsessively using AI tools, paying attention not just to what they could do but to how they changed her relationship with work. The first reality gap: she expected AI to reduce her workload, but the work made possible and worth doing because of AI exceeds the time it saves. Early twentieth-century labor-saving appliances showed the same pattern; women in 1960 spent more time on housework than in 1920 because technology creates new standards rather than just making old tasks easier. AI makes average attainable by anyone and so raises the bar for the exceptional, yet human beings remain un-LLM-able.
06:30 - 09:30: The Nuanced Expertise Required in AI Use The second reality gap: AI lowers technical barriers, but shaping inputs and evaluating outputs still demands expertise. Using her app Podcast Magic, Azout contrasts a generic prompt with one from someone who leverages positioning and psychology, adds context, requests multiple variations, and pressure-tests the results: same tool, wildly different outcomes. Expertise shifts from doing the work to guiding and evaluating it, knowing what is worth prompting and therefore what is worth doing, a point she illustrates with the definition of modern art as "I could do that" plus "yes, but you didn't."
09:30 - 12:30: AI as a Reflection of Human Values As tools become universally accessible, the bottleneck to great work is no longer knowledge or intelligence but taste, judgment, courage, and agency, flipping Edison's ratio toward 99% inspiration. The third reality gap: it is getting harder to tell humans and machines apart, not because machines are becoming more human but because humans are becoming more machine-like. We try to measure what we value and end up valuing what we measure, and what can be measured tends to be the mechanical stuff. Earlier cultures tied worth to wisdom, devotion, or storytelling; AI holds a mirror to us, awakening a desire to return to what cannot be easily quantified.
12:30 - 16:00: Concluding Thoughts on AI and Human Evolution Rejecting technological determinism, Azout argues that the most exciting breakthroughs will come from how these tools reshape our understanding of ourselves. Framing the debate as humans versus AI is as nonsensical as humans versus pencils or calculators; technology is simply applied human knowledge. Unlike the internet or crypto, AI is an ideological blank canvas, which makes its biggest open problems philosophical rather than technological. She closes with her work at Sublime, designed for creativity rather than productivity, and a final reframe: fearing that AI will replace us is like fearing that our children will replace us.
Why we should use AI to expand what it means to be human | Sari Azout Transcription
00:00 - 00:30 Wow, thank you for that incredible introduction,
Lauren. And now, I'm going to completely undermine it by starting with a very boring question. Will
AI replace humans? It's the "how's the weather" of AI discourse, right? Safe enough for dinner parties
but provocative enough to sell out conference tickets. But I'm fascinated by why this question
dominates the discourse. Why, when faced with one
00:30 - 01:00 of humanity's most remarkable achievements, we
instinctively frame it as a threat to ourselves. Like most things in Tech, it began with a funding
pitch. It was 1955, and a researcher named John McCarthy was looking for a catchy term to attract
funding for his work. Artificial intelligence sounded exciting, and it worked. And just like that,
this choice of words became the defining frame
01:00 - 01:30 through which we experience this technology. We
often think of language as a means to an end but rarely think about how language itself influences
our thinking. Let's look at an example. If you're writing with the help of a human editor, you're
collaborative, responsible; it's a completely legitimate form of assistance. If you're writing with the
help of AI, you're lazy, cheating, inauthentic, you
01:30 - 02:00 know, cutting corners, taking shortcuts, rather than
learning. Similar activity—completely different emotional response. Now, I want to convince you
that changing your vocabulary can change your reality because it happened to me. This was also my
initial reaction, and then I heard an artist by the name of Holly Herndon say that AI is actually
a huge disservice of a term, and that collective
02:00 - 02:30 intelligence is a far more accurate term. Because if
you strip LLMs to their pure essence, they are just a much better way of using statistics to aggregate
human intelligence and connect all of the things we've done together so we can get more use out of
them. So, let's run this exercise again, except this time let's imagine OpenAI had been called OpenCI: open collective intelligence. Again, if you're
02:30 - 03:00 writing with the help of a human editor, you're
collaborative. If you're writing with the help of CI, you're resourceful and leveraging the best of
human knowledge. In fact, I'd even go as far as to say that I'd be judged if I wasn't using CI. That
would be the equivalent of having Gabriel García Márquez and Virginia Woolf in the room and asking
them to leave. So, if one word can distort my entire perception of reality, what other blind spots was I
dealing with? I couldn't shake it. I needed to find
03:00 - 03:30 out. So that's what I did. I spent the past two
years going to the source, building my company Sublime while obsessively using AI tools. I used
them to write, to build software, to design software, and even tried to use them to write this presentation
for me. And then I paid attention not just to what the tools could do but to how they made me feel.
How were they changing me and my relationship with
03:30 - 04:00 work? How were they changing what it feels like to
be human? And what I discovered was a profound gap between what I was expecting and the reality I was
experiencing. So, I want to spend the next couple of minutes just going through these three reality
gaps, and they've changed how I work with and relate to AI, and maybe they can do the same for
you. Here's reality gap number one. I expected AI to
04:00 - 04:30 reduce my workload and free up time—you know the
dream of robots doing our jobs while we chilled in hammocks all day. Instead, I was caught in a
paradox. The amount of work that is now possible and exciting and worth doing because of AI greatly
exceeds the amount of time I save using AI. This isn't a new pattern. In the early 20th century, we
witnessed an explosion of labor-saving technologies.
04:30 - 05:00 Fridges and washing machines and dishwashers
and vacuums that should have theoretically reduced housework. But women in 1960 were spending
more time on housework than they were in 1920. Here's my grand theory for why we have no time.
Technology doesn't just make old tasks easier. It creates entirely new standards. Before washing
machines, families used to wash clothes maybe twice
05:00 - 05:30 a month. After that, weekly laundry became the norm.
Technology does not exist in a vacuum. We respond to technology. I have felt this shift personally.
One of the litmus tests that I now have before releasing work is: is this un-LLM-able? By which
I mean, does this work carry the unmistakable fingerprint of human creativity, perspective, and
lived experience? Of course, the irony is I use
05:30 - 06:00 LLMs to assist me in producing the work. But as a
whole, AI makes average attainable by anyone, and therefore raises the bar for what is considered
exceptional. LLMs will soon do so many things that we once thought only humans could, but we'll keep
moving up the stack. We'll take those outputs as inputs and dream bigger. Some human skills will
be commoditized, but human beings will not. We are
06:00 - 06:30 un-LLM-able. Here's reality gap number two. I assumed
AI would democratize expertise and make anyone a designer, a writer, a lawyer, an engineer, and to
some degree, that's true. The technical barriers are crumbling. But what I discovered is far more
nuanced. To shape the inputs and to evaluate the
06:30 - 07:00 outputs, you still need expertise. LLMs are
really impacted by every word you input. Let me show you with an example, and of course I'd
be remiss not to use this wonderful opportunity in the heart of Stockholm to plug an app that I'm
building called Podcast Magic. So, in this example, we're going to prompt an LLM to help us generate
the marketing headline for Podcast Magic and, just for context, Podcast Magic is an app that lets you
capture insights from podcasts using screenshots.
07:00 - 07:30 As in, you're out for a run listening to a podcast
on Spotify, you hear something you love, you take a screenshot, and the app will automatically generate a
transcript plus audio of that moment so you can refer to it later. So, in this example we have two
prompts. The prompt on the left is from someone who maybe doesn't care, maybe isn't an expert; they're just trying to get the task over with.
07:30 - 08:00 The point is they're just going to enter a generic
input and accept whatever mediocre output the AI will generate. On the right is a prompt by someone
who's really leveraging their understanding of positioning and psychology, who is adding context,
importantly, I cannot emphasize this enough, asking the AI to generate multiple variations and
then of course pressure testing the models until the words hit just right. So, in this scenario,
you'll have two prompts, same tool, wildly different
08:00 - 08:30 results. And the key thing here was not the tool, it
was the person operating the tool. And I think the key thing to take away from this is that the
expertise here is not in doing the work. It's in guiding the work, in evaluating the work, in
knowing what is worth prompting, which is really just another way of saying—knowing what is worth
doing. I was trying to come up with an example to
08:30 - 09:00 illustrate this point better in a way that
was more sticky, and I kept coming back to this definition of modern art that I came across not
long ago, which says modern art equals "I could do that" plus "yes, but you didn't." You know the feeling of
seeing a black square painting go to auction for $80 million and thinking, "Come on, I could have
done that." But you didn't. They did. They knew
09:00 - 09:30 it was worth doing, when, and how to frame it
so it mattered. And I think more and more this will apply to realms outside of art. Things will
look simple and easily replicable on the surface. People will say things like, "Ah, anyone could have
prompted that with ChatGPT." But the more and better access everyone has to tools, the clearer
it becomes that the bottleneck to great work is
09:30 - 10:00 not knowledge, it's not information, it's not even
intelligence. It's that intangible quality, call it taste, creativity, judgment, courage, intuition,
agency. We've all heard the quote from Thomas Edison where he says, "Genius is 1% inspiration, 99%
perspiration." Back then, execution was expensive and learning was slow. Well, I think it's going
to flip, and maybe be like 99% inspiration, 1%
10:00 - 10:30 perspiration. If you have the will, a laptop, and
a point of view, you can become dangerous in days. That's a completely different definition of
expertise. Here's the final reality gap. When I first started using AI, I feared that machines
were becoming more human, and yes, it is getting harder to tell the difference. But that's not just
because machines are becoming more like humans,
10:30 - 11:00 but because humans are becoming more
machine-like. We worry AI will replace writers, but half the
internet is engagement farmers on LinkedIn selling five ways to 10x your creativity by 6 a.m.
And this mindset has infected everything. We join Twitter for the love of learning, come out obsessed
with likes. We do research for discovery then get
11:00 - 11:30 trapped chasing citation counts. We want to hire
and retain the best teachers, so we measure test performance, and then teachers teach to the test.
We want to reward quality media so we measure clicks, and then we get clickbait. Put differently,
we try to measure what we value but inevitably end up valuing what we measure. And what can
be measured tends to be the mechanical stuff.
11:30 - 12:00 Not only do numbers not capture what we care about,
they are extremely open to manipulation. The idea that our value can be measured in KPIs and OKRs,
numbers that can be seen and tracked and made legible to spreadsheets, is a modern invention.
If we go back to ancient Greece, human worth was really tied to wisdom and contemplation. In medieval
England, it was closely tied to religious devotion.
12:00 - 12:30 If we look at many indigenous cultures, status was
deeply tied to spiritual connections, storytelling abilities, relationships. We just decided at some
point that progress meant numbers on a spreadsheet. I think AI is really just holding a mirror
to us, and in that reflection awakening this desire to return to the things that cannot be
easily quantified. You know, so many of us in
12:30 - 13:00 Tech, we tend to see the world through the lens
of technological determinism, where we assume that the course of history will play out according
to what is technologically possible. But the most exciting breakthroughs won't come from technology,
they'll come from how these new tools reshape our understanding of ourselves. From who we can become
when we have all of this new power. Which brings me back to the fundamental problem with how we
framed this entire conversation. We've taken one
13:00 - 13:30 of the most extraordinary technologies and given
it the most banal framing possible—how can this replace a human? To illustrate the absurdity of
this framing, let's swap in other technologies. The debate between humans and AI is as nonsensical
as humans versus pencils, humans versus calculators, humans versus maps. Technology is not our rival. It
is simply applied human knowledge. But here's the
13:30 - 14:00 thing about AI. Compared to other technologies with
clear founding missions, AI is an ideological blank canvas. The Internet was about openness. Blockchain
and crypto were about freedom and decentralization. But there is nothing missionary built into AI
itself. The leading labs have all stated that building AGI is their explicit goal, which focuses
on some vague technical dream of replacing human
14:00 - 14:30 intelligence while offering very little about
what that means for us. Now, that should all be great news for the people in this room because
the biggest problems are not technological, they are philosophical. They are questions of values,
ethics, and worldviews. Just as we are living inside Zuckerberg's belief in instant
connections, and Steve Jobs' dream of the computer
14:30 - 15:00 as a bicycle for the mind, we are already stepping
into the world as imagined by Sam Altman. But what is his vision of a meaningful life? What does he
believe leads to human flourishing? And forget about Sam Altman. What do you believe leads to human
flourishing? In my own work at Sublime, this has meant reimagining what a knowledge management
tool can be if designed not for productivity, but
15:00 - 15:30 for creativity. Not for making something fast but
for making something wonderful. Not for automating words but for alchemizing minds. And I've seen
how the shift in frame can create ripple effects across every decision we make. This isn't woowoo
stuff, it's deeply practical because products are not neutral They are opinions embedded in pixels.
Every button, every default, says something about
15:30 - 16:00 what we believe people are for. So, I want to return
to the question we began with. Will AI replace us? Yes, but fearing that AI will replace us is
like fearing that our children will replace us.
16:00 - 16:30 Yes, they will replace us because that is
why we created them, but also they depend on us, and they are us, and they
can improve us and free us from becoming machines so that we can reclaim the
things that machines can't touch. Thank you.