Summary
Mark Zuckerberg and his team at Meta are working on groundbreaking technologies that could redefine how we connect and interact. From holographic glasses to next-level AI capabilities, Meta envisions a future where physical presence and digital interaction blend seamlessly. In a candid conversation, Zuckerberg discusses the challenges and potential of these innovations, emphasizing the importance of maintaining human connections while embracing technological advancements.
Highlights
Holographic augmented reality glasses are in the making, aiming to offer a full field of view. 🕶️
Meta focuses on creating a deep sense of social presence through AR and AI. 🌍
The company is venturing into affordable mixed-reality headsets, like the Quest 3S. 🎮
AI will play a significant role in enhancing human communication and creativity. 💬
Mark Zuckerberg is betting on the continued scalability of AI systems with the Llama models. 🔄
Key Takeaways
Holographic glasses are the next big computing platform after phones. 📱
Meta aims to revolutionize social presence with AR and AI. 🕶️
Haptics and presence are key challenges for future interactions. 🤝
Personalized AI could transform social media and daily life. 🌐
Open source models could make AI development safer. 🔓
Overview
Mark Zuckerberg, the CEO of Meta, is at the forefront of innovative technology, striving to bridge the gap between the virtual and physical worlds. In an exciting discussion, Zuckerberg unveils the decade-long journey towards creating holographic augmented reality glasses, which he believes will succeed smartphones as the next major computing platform. These glasses aim to keep us engaged with the world around us while offering a seamless integration of digital enhancements.
Zuckerberg also delves into the company's efforts in developing personalized AI, which promises to enrich human interaction and creativity. By leveraging advanced AI systems, Meta hopes to redefine how people connect, both through social media and everyday activities. The introduction of affordable mixed-reality headsets is a strategic move to make advanced technology more accessible, ensuring that such innovations reach a broader audience.
Open source AI models are at the heart of Zuckerberg's vision, as he argues for a decentralized approach to AI development. This method, he believes, will not only foster a creative and competitive ecosystem but also enhance safety and security through collaborative scrutiny. The Meta team stands on the brink of transforming the tech world, with advancements that promise more genuine connections and innovative experiences.
Chapters
00:00 - 00:30: Introduction and Interview Start The chapter titled "Introduction and Interview Start" begins with a discussion on a decade-long project that the interviewee has worked on, described as 'the real-life Tony Stark glasses.' These technologically advanced glasses are seen as a significant innovation, with potential to be the next major platform after smartphones. The conversation touches on the growing intelligence of AI and how it could reshape social media interactions. Furthermore, the interviewee reflects on personal experiences such as missing physical interactions, like hugging their mom, and addresses broader societal changes, such as the decline of average friendship numbers in America over the past 15 years.
00:30 - 01:00: Interviewing Mark Zuckerberg In this chapter, the author is preparing to interview Mark Zuckerberg, the CEO of Meta. The anticipation is high due to Zuckerberg's influence on the future, given that nearly half of the global population uses Meta's products. The author has also recently experienced some of Meta's cutting-edge technology, which feels like it belongs in a science fiction narrative. This sets the stage for a discussion with Zuckerberg and his team about the future they are imagining for billions of people.
01:00 - 01:30: Goal of the Conversation The chapter is an introduction to the series titled 'Huge Conversations.' The host outlines the primary goal of the conversation, which is to envision the future that Mark Zuckerberg is attempting to create. The intention is to provide the audience with a clear vision so they can form their own opinions. The chapter sets the stage for a comprehensive exploration of the subject matter by highlighting the significance of understanding the future of digital and social landscapes.
01:30 - 02:00: Introducing Augmented Reality Glasses The chapter introduces the concept and development of augmented reality glasses, highlighting a forward-looking perspective on technology's potential. The discussion focuses on exploring possible futures with the advancement of science and technology. The goal is to provide insights into the envisioned future shaped by Meta's product development, imagining how people will use these augmented reality glasses.
02:00 - 02:30: Development of Holographic Augmented Reality Glasses The chapter discusses the development of holographic augmented reality glasses. These glasses are claimed to be the first of their kind in the world, akin to Tony Stark's fictional high-tech glasses. The project took ten years to develop and has resulted in the production of a few thousand units.
02:30 - 04:00: Applications and Possibilities of AR Glasses The chapter discusses the potential applications and technological advancements of augmented reality (AR) glasses. This innovation is the result of a decade-long research and development effort to condense complex computing into a wearable glasses format, as opposed to a headset. These AR glasses are capable of projecting full holograms into the real world with a wide field of view, opening up various possibilities in both personal and professional domains.
04:00 - 06:00: Multiple Products and Their Future The chapter explores the future of technology, particularly focusing on the integration of holograms in daily life and interactions. The conversation speculates on a future where one could participate in meetings or social interactions as full-body holograms, rather than just video calls. This new form of presence would allow for interactive experiences such as playing games like poker or chess with holographic elements, indicating a transformative impact on work and communication.
06:00 - 09:00: The Future Vision and Presence with AI The chapter discusses the expanding role of these technologies in fields such as science, education, entertainment, and gaming. It notes that the current glasses are a prototype, with consumer-ready versions planned for the future. The motivation behind these developments is the glasses' potential to become the next major computing platform, in line with the overall progression of computing technologies.
09:00 - 12:30: Exploring Human Connection and Haptics The chapter discusses the evolution of computing technology from large mainframes to personal computers, and finally to mobile phones that are carried everywhere. It emphasizes that while these advancements are significant, they can also disconnect users from their surroundings. The future trend in computing is anticipated to be more integrated into daily life, becoming more natural in its interaction and fostering social connections.
12:30 - 16:00: Technology's Impact on Social Capital The chapter discusses the impact of technology, particularly the new platform emerging after phones, on social interaction. It seems to highlight a specific special edition device equipped with various features such as micro projectors. The focus is on how these advancements might shape the way people engage and interact in their daily lives.
16:00 - 21:00: AI's Role in Education and Creativity The chapter discusses a unique display system that uses waveguides to create holograms. Unlike traditional displays found in phones, TVs, or computers, this system involves shooting light into waveguides with nano etchings that catch and create holographic images. The emphasis is on the synchronization of these holograms with the viewer's gaze.
21:00 - 26:00: Balancing Tradition and AI in Learning The chapter delves into the integration of traditional learning methods with advanced AI technologies. It discusses the various components involved in creating augmented reality systems, such as eye tracking, cameras, computing power, batteries, microphones, and speakers. These components work together to enable the placement of holograms in the real world, enhancing the learning experience by synchronizing virtual and physical environments.
26:00 - 31:00: Integration of AI and Social Media The chapter discusses the integration of AI with social media platforms, focusing on holographic display technology that requires synchronization between multiple displays, unlike traditional single displays in phones or TVs. It explores the challenges of physical synchronization and the necessity of radio communication with other devices for complex computations. Additionally, it mentions a wrist-based neural interface, highlighting miniaturization efforts to incorporate these advanced technologies into conventional wearable forms.
31:00 - 40:30: Open Source and AI Development In this chapter titled 'Open Source and AI Development,' the discussion revolves around the future of technology, particularly focusing on advancements in smart glasses and digital objects in physical spaces. A decade ago, the team was uncertain about achieving these technological advancements. However, there is now optimism about not only achieving these goals but also doing so in a cost-effective, high-quality, and aesthetically appealing manner. The future envisioned here includes smart glasses that are more stylish, smaller, and cheaper with advanced features like heads-up displays.
40:30 - 46:00: Zuckerberg's Big Questions on AI In this chapter, the focus is on the development and categorization of augmented reality (AR) and virtual reality (VR) technologies. The discussion highlights glasses that create digital objects in physical space, such as the recently announced Snapchat Spectacles, as well as headsets such as the Quest and Apple's Vision Pro, which belong to a different category. The chapter raises questions about how these tools are organized and their practical applications in everyday life.
46:00 - 47:00: Wrap-up and Final Thoughts The chapter reflects on the past decade of developing these technologies. The goal, envisioned from the start as the ultimate product, was to create normal-looking glasses capable of projecting full holographic images, a science fiction-inspired future that many aspire to reach. Along the way, the team also explored alternative approaches, such as building glasses without displays, to aid in development and learning.
The Future Mark Zuckerberg Is Trying To Build Transcription
00:00 - 00:30 I'd love to start with these. 10 years of work
right there. Someone on your team called these the real life Tony Stark glasses. Very hard
to make each one of these... That makes me feel incredibly optimistic... In a world where AI
gets smarter and smarter... This is probably going to be the next major platform after
phones... I miss hugging my mom. Yeah haptics is hard... How does generative AI change
how social media feels?... We haven't found the end yet... The average American has fewer
friends now than they did 15 years ago. Why do you think that's happening? I mean
there's a lot going on to unpack there...
00:30 - 01:00 I'm about to interview Meta CEO
Mark Zuckerberg. There are not that many people with more power over what our
future might look like. Nearly half the total human population now uses Meta products and I just
tested some of their new tech that feels like science fiction. This is crazy! Mark Zuckerberg and
the team at Meta are imagining a future that billions
01:00 - 01:30 of other people might actually end up living in. So
my goal for this conversation is to try to figure out what that future really looks like. To paint a
picture of the future Mark Zuckerberg is trying to build so that you can decide for yourself what you
think of it. Welcome to the first episode of our new series, Huge Conversations Hey, good to meet you! Thanks
for doing this. Yeah looking forward to it. Awesome. I'd love to tell you what my goal
is of this conversation. Go for it. We have a called
01:30 - 02:00 huge if true which is this very optimistic about
science and technology and the potential futures that we can build and in every episode we're sort
of exploring what does it look like if you play a certain technological future out and so my goal
in this conversation is to try to help people see the future that you're imagining when you're
building the products that you and the Meta team are building. What are you imagining this looks
like in the future? How are you imagining people use
02:00 - 02:30 this? All of that. Cool. All right awesome. So
I'd love to start with these. Let's do it. 10 years of work right there! I got to demo them a little
bit earlier today. I heard someone on your team call these the real life Tony Stark glasses? We're
getting there. But I'd love to just hear in your voice: what are these? Well, these are the first full
holographic augmented reality glasses that I think exist in the world. We've made, I think,
a few thousand or something, right. Very hard to
02:30 - 03:00 make each one of these, but this is the culmination
of 10 years of research and development that we've done to basically miniaturize all the
computing that you need to have glasses not a headset but glasses that can put full holograms
into the world with a wide field of view. So you
03:00 - 03:30 can imagine sort of in the future we'd be having a
version of this conversation where, you know, maybe I or you are not even here; it's like one of us is
physically here and the other one is here as kind of a full-body hologram, and it's not just
a video call, you can actually interact, you can do things. I mean, in the demo we had, you
know, ping pong and games and things like that, but you can interact, you can work
together, you can, you know, play poker, play chess, whatever, like the holographic cards, holographic
board game. I just think it's going to be wild. It's going to remake, I think, so many different
fields that we think about today from how we work
03:30 - 04:00 and productivity to a lot of things around science
a lot of things around education entertainment fun gaming. But this is just the beginning you
know this is the first version, it's a prototype version that we've made in order
to develop the next version which is hopefully going to be the consumer one that we sell to
a lot of people. Why build these? Well I think it's going to be the next major computing platform.
So if you look at like the grand arc of computing
04:00 - 04:30 over time, you've gone from like
mainframes to computers that basically live on, you know, your desk or on a tower, to phones
that you have in your hand that you can basically take with you everywhere
you want, but it's pretty unnatural, right, it takes you away from the world around you, and I
think that the trend in computing is it gets more ubiquitous it gets more natural and it just
gets more social right so you want to be able
04:30 - 05:00 to interact with people in the world around you
and I think that this is probably going to be the next major platform after phones. I'll give
these to you. These are the clear ones that show all the... The whole thing is a special edition and
this is like a really special edition. There's not a single millimeter of space. You know
everything in here from the micro projectors that
um, basically shoot light into the waveguides,
right, it's a special type of display system. I mean, these aren't normal displays like you have
in a phone or a TV or computer, like the type of displays that people have been building for
decades. It's a waveguide system. The light the projector is shooting basically goes into these
nano etchings across the waveguide that are what catch and create the holograms. In order to
synchronize that with where you're looking
05:30 - 06:00 there's eye tracking and little cameras,
they illuminate your eyes and then of course there's all the basic stuff that you need all the
computing, the batteries to power the whole thing, microphones, the speakers because it needs to be
able to play audio and speak with you and the cameras and sensors to see things around you in
the world so that way when it's placing holograms in the world it can do that in the right place
and understand where you are so that probably is still not covering everything because there's
a lot of things that need to go into syncing up
06:00 - 06:30 the holographic images between the two displays
because you don't just have a single display like you have in a phone or TV you have two and
it moves around and you know physical things are hard and need to be synced up. There's also
the radio that has to communicate with your other computing devices to do heavier computing um and
the wrist-based neural interface that you probably got to try out. We kind of miniaturized all of this
and fit it into, uh, you know, a normal-looking pair of
06:30 - 07:00 glasses, which is... you know, when I told the team
that we were going to do this 10 years ago, you know, people weren't sure if we were going to be
able to, but I think not only are we going to be able to do this, but I think we're going to
be able to get it cheaper and higher quality and even smaller and more stylish over time. So
I think this is going to be a pretty wild future. There are so many versions of trying to get
a similar idea of digital objects in physical space. I'm thinking, for example, of glasses that
have heads up displays where it's headlocked and
07:00 - 07:30 it's moving with my eyes, glasses that are really
creating digital objects in physical space that don't move as I move, I'm thinking of these, I'm
also thinking of the Snapchat Spectacles that they just announced, then on the other hand there are
headsets like the Quest and also like the Apple Vision Pro that seem to fall into a different
category. I'm curious how you would organize this landscape for people and how you think about
people using these tools in their real lives
07:30 - 08:00 in the near future? Yeah so when we were getting
started on this about 10 years ago I thought that something like this was going to be the ultimate
product for everyone. Right, you get a, you know, normal-looking pair of glasses, and we'll continue
improving that that can have full holographic images. I think it's super powerful
and it is sort of the science fiction future that I think we all hope to get to. On the journey we
took a few other approaches as well um to help us develop towards that including building glasses
that don't have displays to try to learn. Just
08:00 - 08:30 take a stylish pair of glasses today and put as
much technology into it as you can but really focus on the form factor, and that's the Ray-Ban
Meta glasses, and it's doing really well. Initially we thought that that was sort of an intro
product for us to learn how to build this but one of the things that's clear now is you're going to
be able to make that product a lot more affordable than this probably permanently. So I actually think
that a bunch of the different
08:30 - 09:00 paths that we've taken are going to be
kind of permanent product lines that people will choose. I think you'll see display-less glasses
like the Ray-Ban Metas continue to get better and better, great for AI, no display but you can talk
to it, it can talk back. I think there's going to be something in between these that's basically a
heads up display, so it's not a 70° field of view, maybe it's a 20° or 30 degree field of view,
so that's not going to be what you want for
09:00 - 09:30 putting kind of a full hologram of a person or
interacting with the world around you but it's going to be great for you know when you're talking
to AI, not just having voice but also being able to see what it's saying or being able to text
someone with your wrist-based neural interface and then have their text show up rather than having it
read to you, which is, we read faster than we can listen or getting directions right or just
being able to search for information get all that. So there's a lot of value for heads up display
that will be somewhat more expensive than the
09:30 - 10:00 display-less but somewhat cheaper than this.
Then I think you're going to get this. It's going to be probably the most premium and expensive
of the glasses products, but hopefully still something that, you know, like a computer, is generally
accessible to most people in the world but I think that there are going to be all of those and I
I think people will like them. I also think that the headsets that people are using around mixed
reality will continue to be a thing too because
10:00 - 10:30 no matter how good we get at miniaturizing
the tech for this you're just going to be able to fit more compute into a full headset.
Fundamentally our mission is not, you know, to build something that is advanced and only a few people
can use; we want to take it the last mile and do all the innovation to get it to everyone. We
you know just shipped or announced Quest 3S, the new mixed reality headset where we basically are
delivering high quality mixed reality for $299.
10:30 - 11:00 I was really proud last year when we delivered
Quest 3, the first kind of really high quality high resolution color mixed reality device for
$500, right, it's like a fraction of the cost of what the competitors are doing
and I think it's actually higher quality in a lot of ways, and now we've just doubled down on
that. So I think that they're all actually going to end up being important long-term product lines:
display-less, heads up display, full holographic AR, full headsets. I think that they're all going
to be important. Yeah. If you play out the future
11:00 - 11:30 of not just the hardware that we've been talking
about, so the Meta Ray-Bans, Quest, Orion, but also the Llama models, if everything goes according to
your and the team's wildest dreams, I'd love for you to just begin to describe what that feels like.
I mean I think that there are two primary values that we're trying to bring. On the AR and kind of
mixed reality side, the main value we're trying to
11:30 - 12:00 bring is this feeling of presence. Right, so there's
something that I think is just really deep about being physically present with another person that
you don't get from any other technology today and I think that's the thing when people have a very
visceral reaction to experiencing virtual or mixed reality what they're really reacting to is that
they actually for the first time with technology feel a sense of presence like they're in a place
with the person and that's super powerful. I
12:00 - 12:30 focused on designing social apps and experiences
for 20 years, and sort of the Holy Grail of that is being able to build a technology
platform that delivers this deep sense of social presence. The other big track is around
personalized AI, and that's sort of where Llama and Meta AI and all those things are
going. There's all this development that's going into making the models smarter and smarter over
time but I think where this is going to get
12:30 - 13:00 really compelling is when it's personalized for
you and in order for it to be personalized for you it has to have context and understand what's going
on in your life both kind of at a global level and like what's physically happening around you right
now and in order to do that I think that glasses are going to be the ideal form factor because
they're positioned on your face in a way that lets them see what you see and hear what
you hear, which are the two most important senses that we use for kind of taking in information
and context about the world. I think that this is
13:00 - 13:30 all going to be kind of really deep and profound
stuff but it's basically those two things: It's this feeling of presence and this capability
of really personalized intelligence that can help you. I'd love to talk about each of those
two things. The first on presence, I owe a lot to being able to connect with people online. Right
this job that I have is by definition that, also with my family. My parents don't live anywhere close
to me. I video call them a lot and when I think
13:30 - 14:00 about the progress of technology like this in a
timeline from the telegram to the telephone to the video call to some feeling of presence with
another person who feels like they're right there in front of me, that makes me feel incredibly
optimistic. I would love a future where, like, I can lose at Scrabble to my mom and feel like she's
really there in front of me. Yeah and it feels like we're not that far away from something - I agree! - that
persuades my brain that that's happening. Yeah
14:00 - 14:30 totally. And also I miss hugging my mom right like
that never goes away. Yeah, haptics is hard. Yeah, and so my question is about that,
it's about this feeling of, like, it's hard for me to imagine, um, a future where real physical
presence is not different and special in some way where I don't miss literally hugging my
mom and I'm curious how you think about the
14:30 - 15:00 parts of human connection that are eye contact and
physical touch and things that our ape brains value for connection with other people. Yeah well eye
contact I think we're going to get to a lot before the touch part. For haptics I do think we'll
make progress on that, but there's obviously a spectrum there too, starting from the hands,
which, if you draw out the kind
15:00 - 15:30 of, like, homunculus version of a person in terms
of what our kind of sensory input is, like what the majority of what we're
sensing is, so I think being able to do that for your hands is probably the
most important place to start, and you have a rough version of that with controllers today. I think
that that'll get even better over time. We have this demo playing ping pong where you have a controller
where, as the digital ball hits the ping pong paddle, you feel it as if it's hitting the
ping pong paddle wherever it is, so you actually
15:30 - 16:00 have a sense of, like, where it's hitting
the paddle, so I think that was just a wild demo, so I think we'll get some of
that the most extreme version of this is wanting force feedback right so I mean like for doing a lot
of sports, right, it's like, okay, we can kind of do a good approximation of, like, boxing today, or
you get like good feedback on your hands but it would be hard to do a virtual reality version of
Jiu-Jitsu where you're like grappling with someone and you need like real kind of force feedback on
that so that's probably like the hardest thing
16:00 - 16:30 right to go do but I think we'll get there.
You know I think like most science fiction it's not this binary thing that you just like wake up
one day and we're like, oh, we've realized all the dreams, but I do think that these platforms
are going to be the first time that I think that there's a realistic sense of presence in all
the ways that that's special to people for most things that people want to do which are not
the most physical ones and even some of the basic
16:30 - 17:00 physical ones I think we'll get. But then there's
a long tail of other stuff. I mean, smell is also really important for people, right,
I think it's disproportionately important for memories and that's not really a thing that
I think in the next few years we're going to have in any of these devices I mean that's a very
difficult and challenging thing on its own. What is the piece of that that you feel most interested in,
that you keep coming back to in your mind? This has
17:00 - 17:30 the frustrating property in development that the
sense of presence is almost like, when you're designing something that's sort of trying
to artificially deliver it, you're delivering an illusion to a person, and more than any one
thing that provides a sense of presence, it's actually more the case that any one thing done
wrong breaks the sense of presence. You kind of know that you're interacting with technology,
but it's so convincing that you just kind
17:30 - 18:00 of go along with it. You're like, okay, yeah, this
person feels like they're there, right. When I did that ping pong demo, at the end
of it I dropped the ping pong paddle on the virtual table and it shattered, so that was not the best
for our internal development, but that's winning in our development,
right, it's like when you feel like something is kind of so realistic that you're just
convinced that it's there now, and there
18:00 - 18:30 are a lot of things that can break that right so
I think a field of view that's too low, right, so something feels real but then you turn your head
and it's not there, um, latency, or physics that don't behave like realistic physics. It also is
interesting in some ways what people can accept as physically real even though it's not right
so, like, we've done a ton of work on avatars, we have this whole workstream on Codec Avatars
to do these photorealistic avatars, and I think it's
18:30 - 19:00 going to be incredibly compelling and people are
going to love it, but one of the things I found interesting is the ability to mix photorealistic
and expressive, kind of cartoony, avatars with photorealistic worlds and kind of more cartoony
computer-game-type worlds, so you can have a Codec kind of photorealistic avatar of a
person in what is clearly like a video game or cartoon world and people are generally
pretty fine with that, it's like, okay, that feels pretty good, and similarly, having
a photorealistic world but increasingly
19:00 - 19:30 good kind of cartoon avatars, as long as the
avatars move in a way that feels authentic to the person you're interacting with, it actually
feels pretty good. You know, when you look at a 2D still frame of it, some of the stuff can
look a little bit silly, and we've certainly, you know, had our share of memes around
that, but when you're in there, and you've played around with a lot of the stuff, it
feels realistic because it's basically mimicking
19:30 - 20:00 the kind of authentic mannerisms of a person
that you're interacting with, even if it's not a Codec photorealistic avatar, if it's kind of
a more cartoony, expressive one. So I think it's very interesting to see kind of
which pieces you need to unlock and where you just need to be, like, very technically excellent
and consistent, but, um, this isn't a space where you deliver one thing and it's
good, this is like there's a wide breadth of things that you need to nail and then have it all come
together and that's why these are you know 10 year
20:00 - 20:30 projects. It seems like an interesting way to learn
about the human brain and what we actually care about with respect to what feels real. I was
wondering about, there was this moment in an interview that you did with Lex Fridman, where you quoted
research that says that the average American has fewer friends now than they did 15 years ago and I was so interested in that because
20:30 - 21:00 it seems like if we want to get to a world where
there's more human connection this is the trend that we're going to have to grapple with and just
to give some data on this in the American Time Use Survey over the last 20 years the amount of time
American adults spend socializing in person has dropped by nearly 30%. For ages 15 to 24 according
to the Surgeon General it's nearly 70%. And I look at that data and I think to myself, well, maybe
if we're all socializing digitally that doesn't
21:00 - 21:30 matter so much maybe there's a future where that's
actually fine, but there's also data that suggests that we're struggling somewhat. The number of
Americans who say that they don't have a single close friend - yeah it's really sad - that share has
jumped from 3% to 12% in the last 30 years. It feels to me like with all the tools that we've built for human
connection, we're struggling to connect and I'm curious
21:30 - 22:00 why do you think that's happening? I mean there's a lot
going on to unpack there. A lot has changed sort of economically and socially during that
period and a lot of those trends go back before a lot of the modern technology. So I mean this
is something that a lot of academics and folks have studied, but it is an interesting lens
to look at this though because I think whenever
22:00 - 22:30 you're talking about building digital types
of connection one of the first questions that you get is is that going to replace the physical
connection and my answer to that especially in the case of something like this is that no because
people already don't have as much connection as they would like to have. It's not like this is
replacing some sort of better physical connection
22:30 - 23:00 that they would have otherwise had. It's that the
average person would like to have 10 friends and they have two right or three and there's just
more demand to socialize than what people are able to do given the current construct and giving
people the ability to be present with people who are in other places physically just seems like
it will unlock more. It's not going to make it so, if I have glasses, it's not going to make
it that I spend less time with my wife, it's going
23:00 - 23:30 to make it so that I spend more time with you
know my sister who lives across the country. And that's, I think that's good. I
think people need that. As for the rest, I think we could probably spend a multi-hour
podcast just going into all of the different kind of socioeconomic and political dynamics that are
going on, but in none of the trends that I've seen does it seem like the primary thing that's going
on is that, because people are interacting online,
23:30 - 24:00 they're now not interacting with people
physically. Now, certainly, I do interact with people online who I also
like to interact with physically, and I think that that's kind of like a
more combined, richer relationship that you have overall, but I think that there's a lot going
on with the loss of kind of social capital and
24:00 - 24:30 connections that really predates a lot of the
modern technology. What I'm trying most to learn about is how we can structure
the technologies that we use to get toward this future I think you're imagining, of
more human connection in more ways. I'm curious, you brought up the other big pillar of AI, and in some
of your conversations, I'm thinking of a conversation with Tim Ferriss in particular, you talked about
a lot of different use cases of AI and they seem
24:30 - 25:00 to me to fall on somewhat of a spectrum. Like
for example you mentioned automatic real-time translation, like basically the Star Trek
Universal translator. We're pretty much there! Yeah and that's one example on one
end of the spectrum where some people might argue that there is a chance that someone is less likely
for example to learn a language because we can all speak to each other in real time in different
languages. I think nobody would really argue
25:00 - 25:30 that therefore we shouldn't have that kind of
universal translator. People still learn Latin and Greek. Right exactly and so I think that end
of the spectrum is something like um technologies that really measurably unlock our humanity because
they remove a struggle between people and then on the other end of the spectrum there are a lot of
educational things for example where the struggle is kind of the point right? Like it's like building
a muscle. I can think of so many times
25:30 - 26:00 in my life where like the reason why I was doing
something was not the output it was the fact that I was trying so hard to do it. There's one example
in the Tim Ferriss interview where you talked about your kids struggling to articulate
themselves emotionally, and adults very much have the same problem, and you talked about AI as a way
to help them articulate those emotions. Yeah and I thought about all of the many times in my life
where I have struggled to articulate my emotions
26:00 - 26:30 and how I really could have used some help in
those moments and I also found myself thinking about the times when that was really building
a muscle where like the act of struggling to communicate with someone and understand what they
wanted from me was important to my development. And so my question is, if you think about that
as a spectrum between things that are really important to our humanity where the struggle
being removed is helpful versus things where the struggle is the point and it unlocks
something about our humanity and is important
26:30 - 27:00 to preserve like building a muscle, how do you
draw the line between those things and how do we ensure that the muscles that we're building for
this future are stronger and not weaker? Yeah it's interesting I mean I think we're always going to
find new things to struggle with and I mean it's you can always get better at communicating with
other people and kind of expressing yourself and understanding other people so having a tool that
can help you do that better isn't going to mean that, like, oh, now we perfectly understand everything.
I mean, I think maybe one
27:00 - 27:30 of the most functional aspects of this: you're
already seeing a lot of these AI models really help people with coding, right. Like, a generation
ago, um, before I was getting started, a lot of coding was really low-level system software,
and, you know, then by the time that I got into it there was a little bit of that, but, um, you
can make websites pretty easily make apps pretty easily and I think in 20 years or a lot sooner
than that you're going to basically be in a
27:30 - 28:00 world where kids will be able to just describe the
things that they want and build incredibly complex pieces of software. So, um, in that world,
are kids going to be not struggling? I don't think so. I think that they're going to be just
expressing their creativity, and it'll be this kind of constant iterative feedback loop
around, like, okay, yeah, I, you know, took a few minutes to describe this thing, and, like, yeah, this
whole, like, amazing virtual world was created that
28:00 - 28:30 I can see on my glasses or whatever, but, like,
these things are not exactly what I want them to be, so now I need to, like, go back and edit them. It's
just, I don't know, I think that there's always more. Another way to put this: it's one of the things
that I think makes people so good. There's always more to do. We'll always
find the struggle? Yeah. Another way to get at this is if you play this out to make the
tools even better in like 10 years let's say
28:30 - 29:00 your kids are in high school are there ways that
you would want them using AI because you think it would accelerate them intellectually and ways that
you would advocate for them not to use it or things that you would have concerns about? I mean
I think that there's some things that you need to be able to do yourself. I think that's a lot of
the basic fear that people have around this is that while we're building these amazing tools we
get away from this self-confidence and ability of
29:00 - 29:30 being able to do like this basic stuff yourself so
it's like all right you have a calculator but it's still good to be able to do kind of basic math in
your head because there are a lot of things that come up throughout the day that you just want to
have a general numeracy around right that often they're not expressed in numerical terms but just
in terms of understanding trends or understanding arguments that people are making, you kind of
need to understand the shape of how numbers come
29:30 - 30:00 together and so I think one of the big debates
is like should we still teach our kids to program computers even though you're going to have
these tools in the future that are just so much more powerful than anything that we have now to
produce incredibly complicated pieces of software. I think the answer to that is probably yes
because I think teaching someone how to code is teaching them a way to think rigorously and
that even if they're not doing most of the code
30:00 - 30:30 production I think it's important that you kind of
have the ability to think in that way and I think it's going to just make you generally a better
thinker and better person so yeah maybe that's like this generation's version of calculators
it's like, so you want to use the calculator, but you'll also want to be able to
generally do without it. Other ones, like language, I don't know, I mean, different people can come
out differently on this. I think this is one of the interesting questions about parenting these days,
just kind of, like, what's important to
30:30 - 31:00 teach your kids in an era where so much
is going to change over the time that they're even in school. Language, I think you can make
similar arguments. I think it's probably going to be less functionally
relevant in the future to learn multiple languages but it sort of helps you think in different ways,
you know, I found from the languages that I've studied that a lot of it is you learn about
the structure of your own language, you
31:00 - 31:30 know, you also learn about the culture, right,
because so much of how things are expressed in different places is tied to the nuance and the
history there, so I think that's all valuable and interesting
stuff to get into but then I don't know at the same time we only have so many hours in the day
so people need to prioritize what they're going to learn and it may be that okay in a world with
perfect translation, which by the way we basically just announced on the Ray-Ban Metas, that now
you're going to be able to just, like, go to countries, yeah, we're starting out with just a few
languages but we'll roll it out to more, and you
31:30 - 32:00 know, you could be traveling anywhere and
you have your glasses and they just translate in real time in your ear. So it's wild. Yeah, so
I think people are going to need to choose what they want to focus on going forward.
How do the developments that we've been talking about in AI intersect with social media and the
platforms that most people use today? There's a future where there's images and generated text
and maybe AI influencers. How does generative
32:00 - 32:30 AI change how social media feels in the future?
Yeah I mean I think that that's a really deep one. You know there's already been one
big shift which is that social media started out as people primarily interacting with their
friends and now it is you know at least half of the content is basically people interacting
with creators or content that's not created by people who they kind of personally know so
we sort of already have that paradigm, and
32:30 - 33:00 I think AI is probably going to accelerate that. It
will give all these people additional tools right so your friends will create kind of funnier memes
and more interesting content um that'll come from a lot of different ways. I think some of it will
be, okay, your friends have glasses and they capture a bunch of stuff, and before, they might not have
been able to edit it to make it interesting, or maybe it was just too much work, or they didn't
even realize that they captured something amazing
33:00 - 33:30 but now the AI is like hey I like made this thing
for you out of your content, um, it's like, okay, that's awesome, people will enjoy that. Creators,
who obviously have kind of much more specialized skills, are going to be able to use even more advanced AI
tools to make more compelling content but then I think that there will be a bunch of kind of green
field type stuff where maybe in the future there will be content that is purely generated by AI
by the system personalized for you maybe it's
33:30 - 34:00 summarizing things that are out there that
are going to be interesting, maybe it's, um, just producing something funny that makes you laugh.
This is going to be like a very kind of deep zone that there's a lot to experiment with.
I think there are going to be AI creators as well, and creators building AI versions of themselves.
I mean, that's a thing that we just showed at Connect. Basically, if you're
a creator, one of the big challenges is
34:00 - 34:30 like all right there are only so many hours
in the day and your community probably has a nearly unlimited demand to interact with you and
you want to interact with them because you're trying to grow your community. I mean that's both
socially and from a business perspective that's sort of you know growing the community is an
important part of what every creator does so okay if we can make it so that each creator
can basically make, like, an AI artifact that their community can interact with, to be
clear, it's not the actual creator themselves, but
34:30 - 35:00 it's almost like a piece of digital art that
you're producing, like an interactive sculpture or something, and it's like you train
it: here's the context that I want it to have, here's the topics I want it to communicate
on, here's stuff that I want it to stay away from. You're giving your community something to
interact with when you can't be there to kind of answer all the questions, and I think that's
going to be super compelling. So there are these interesting things, but I think AI
is kind of like the internet in a way where
35:00 - 35:30 it's probably going to change almost every field
and almost every feature of every application that we use um it seems sort of hyperbolic to say that
but I do think that's true and it's just hard to sort of enumerate all the different things up
front but I think that over the next 5 to 10 years we're just going to explore the impacts
in each of these areas and it's going to be like an amazing amount of innovation and really
exciting. I feel two things simultaneously when
35:30 - 36:00 you say that. I feel both like I really want
to be optimistic about the future of these platforms and I obviously have gained so much from
an enormous pace of change right like everything that we're doing now and what I actually feel is
worried. I feel some specific concerns around the way that you know I might communicate with an
audience and the way that they might respond to that or the way that human communication might
change but also more generalized just sort of
36:00 - 36:30 fear of the pace of change, and worry, and I
don't think I'm alone in that feeling. Yeah and you're supposed to be the optimist! I know! And I'm
curious like how you talk to people who feel that way. What concerns do you feel are most legitimate
and what do you feel is most misunderstood? I think the pace of change is always a concerning
thing right it's there is a lot of uncertainty
36:30 - 37:00 about how things will go in the future, and
we're all going to get really amazing new tools to do both our hobbies and our jobs and
they'll make it so we can do better work and have better lives but at least on the professional
side it's going to be our responsibility to keep up with that or else it's going to be difficult
for us to compete with other people who are doing a good job of kind of keeping up with
the new trends. So I get it. I mean, I think, you know, especially in the line of
work of being a creator, it's a very sort of
37:00 - 37:30 competitive space. I don't think that, like, creators
necessarily think about it as competitive, but it is, right, and, um, so I
get it. I think that this is going to make it so that, like, the quality of work that people produce
and how interesting it is and how much they can communicate, and really efficiently, is
just going to kind of go through the roof, but when you're staring down a set of changes, like,
you know that there's some big change coming and
37:30 - 38:00 you don't know what it is that's always a time of
anxiety so I get it. If I take my creator hat off and I'm just a person who is youngish starting
out my career-ish, starting out building a family, how would you advise someone like me to prepare
well for the future that we're headed toward
38:00 - 38:30 to be able to learn new skills now or just think
about this future in an educated way? Yeah I mean I just think maintaining curiosity about things is
important. I do think we can overstate to what extent the next 10 years is going to be sort of
different from the last 10 or 15. I mean a ton of stuff changed over the last 10 or 15 years too.
It's not like this is the only time in history where there's some technology that's going to make
it so there's new opportunities and things change.
38:30 - 39:00 The internet coming into maturity and everyone
having smartphones has already rewired things dramatically and I mean maybe the next period will
be a somewhat bigger change or maybe it won't I think it'll feel different to different people but
I don't think this is like going from zero to one it's not like okay everything's just kind of
been normal and now it's about to change. It's like, the technology evolves over time,
and the opportunities that we have evolve and
39:00 - 39:30 improve, and I think the people who
do well are people who are generally curious about it and dig in and try
to use it to live better lives, rather than the people who basically, you know, try to fight it
in some way. One thing that I really want to ask you about is open source. Yeah. Imagine that
we're talking to an audience that has maybe heard
39:30 - 40:00 that term but doesn't have any real idea of how
that might impact them in the development of AI. How would you explain the reasonable debate
that people in your field are having about this right now? Well I think there are two pieces. I mean
so what does open source mean? It means that people can build a lot of different things right so at
a high level I look at the vision that a bunch of companies have, right, so OpenAI, Google, they're
building an AI, right, like one AI, and I think in general they're like, okay, this is going to be it,
they think you're
40:00 - 40:30 going to use Gemini or ChatGPT for like all the
different things that you want to interact with and at a high level that's just not how I think
the world is going to go. I think we're going to have a lot of different AI systems just like we're
going to have a lot of different apps. I think in the future every business, just like
they have a website and a phone number and an email address and a social media account is also
going to have an AI that can interact with their customers, to help them sell things, to help
them do support. I think a lot of creators will
40:30 - 41:00 have their own AIs right I think like a lot
of people will interact with a bunch of different things. There's a question of, okay, do you
want a future that's fundamentally kind of very concentrated and where you're interacting with
kind of one system for everything or do you want one where a lot of different people are building
a lot of different AIs and systems just kind of like you probably didn't want there to be you
know just one app or just one website. It's like a
41:00 - 41:30 richer world when there's a diversity of different
things, so that's one piece: just giving people the ability to build it themselves, and
what open source does is make it so that everyone can take and modify the model and build stuff on
top of it which is different from the kind of closed and centralized approach. The safety
debate is a specific part of this which is in a world where AI gets smarter and smarter, what's the
way that we have the highest chance of having
41:30 - 42:00 a kind of positive future and not having
a lot of the safety concerns? And I think some people think that if we keep the model closed
and don't give it to a lot of developers that should make it safer because then you don't get
bad developers doing bad things with the model. Historically I think what we've seen with open
source is actually the opposite which is that
42:00 - 42:30 this is not the first open source project right
I mean, this has obviously been a thing in the industry for decades, and I think what we've
traditionally seen is that open source software is safer and more secure largely because you put
it out there more people can scrutinize it because they can see all parts of the system and then
there are inevitably issues with any software there are bugs there are security issues and
initially with open source people thought hey if
42:30 - 43:00 you're putting the software out there and there
are holes in it isn't everyone just going to go exploit those holes and especially the bad
guys, but it turned out, sort of in this counterintuitive way, that by adding more
scrutiny to the systems the holes became apparent quicker and then were fixed and then people
roll out a new version just like we roll out a new version of our models right Llama 3, Llama
3.1, Llama 3.2 everyone upgrades, so I think the same thing is going to happen here I think it's
sort of this counterintuitive thing where even
43:00 - 43:30 though I think there's some concern around, all
right, are bad guys going to do bad things with these models, I actually think you just get a kind
of smarter and safer model for everyone the more it's rolled out and the more kind of scrutiny
is on it, and then part of that is we get feedback and we make the model safer, so that as
we roll it out to more people it's safer for more people to use. So I think that the history
of open source in the software industry generally
43:30 - 44:00 would suggest that open source is going to lead
to a more prosperous and safer future. Our show is called Huge If True and what I mean by that is
kind of testing the most optimistic non-obvious thing and so my question to you is what is the
biggest open genuine question on your mind right now? In which field? You're in so many! I am
particularly curious about the combination of
44:00 - 44:30 AI and hardware but I realize that we've covered
a lot so I'm curious the direction you'd take this on a question that occupies you right now. Gosh
I mean, I think maybe one that's a little more AI-specific is: there's a current set of methods
that seem to be scaling very well, right, so with past AI architectures you could kind of feed an
AI system a certain amount of data and use
44:30 - 45:00 a certain amount of compute, but eventually it
hit a plateau, and one of the interesting things about these new transformer-based architectures
over the last, you know, 5 to 10 years is that we haven't found the end yet. So that leads to this
dynamic where Llama 3, you know, we could train on 10 to 20,000 GPUs, Llama 4 we could train
on, you know, more than 100,000 GPUs, Llama 5 we plan to scale even further, and there's just
an interesting question of how far that goes. It's
45:00 - 45:30 totally possible that at some point we just like
hit a limit and just like previous systems there's an asymptote and it doesn't keep on growing but
it's also possible that that limit is not going to happen anytime soon and that we're going to be
able to keep on just building more clusters and generating more, you know, synthetic data to train the
systems and that they're just going to keep on
45:30 - 46:00 getting more and more useful for people for quite
a while to come, and it's a really big and high-stakes question, I think, for the company,
because we're basically making these bets on how much infrastructure to build out for the future
and this is like hundreds of billions of dollars of infrastructure so like I'm clearly betting
that this is going to keep scaling for a while but it's one of the big questions I think in the
field because it is possible that it doesn't. You know that obviously would lead to a very different
world where, I mean, I'm sure people will still
46:00 - 46:30 figure it out eventually, we'd just need to make some
new fundamental improvements to the architecture in some way, but that might be a somewhat longer
trajectory, where, okay, maybe, you know, the kind of fundamental AI advances slow down for a bit
and we just take some time to build new products around this or it could be the case and that's
what I'm betting on that the fundamental AI will just continue advancing for quite a while and that
we're going to get both a new set of products that
46:30 - 47:00 are just really compelling in all these ways
and that the technology landscape and what's possible will just continue being dynamic over
like a 20-year period, and that's probably what I'd guess is going to happen, but I think it's
one of the bigger questions in the industry and kind of for technology across the world today.
Is there anything else that you want to say? I don't know! Awesome. We're good. Amazing, yeah, thank
you so much for doing this. Yeah no thank you...