Summary
AI advancement, previously viewed as an unstoppable force, may be slowing down. The video from CNBC discusses the eerie calm after the initial storm of rapid AI model progress, highlighting concerns in Silicon Valley about hitting a plateau. With Claude 3 now released and GPT-5 still on the horizon, there's a growing debate over whether core assumptions about AI's future are faltering. Major companies have poured enormous sums into AI, but the anticipated returns may not be materializing as expected. Insights from industry leaders point toward a period of consolidation and reassessment in the AI landscape, driven by the challenges of scaling, data limitations, and the turn to synthetic data.
Highlights
Google's Gemini and Anthropic's Claude 3 are powerful AI models, but progress isn't keeping its expected pace 🤔
There's growing anxiety in Silicon Valley about AI's rapid progress losing steam 🌪️
Major players like OpenAI, Google, and Anthropic may be facing development roadblocks 🚧
Billions invested in AI with expectations of significant returns, but reality might differ 💰
Reports suggest that new AI models aren't significantly outperforming older ones 📊
Synthetic data is being used to overcome data shortages, but it poses its own challenges ⚠️
A new focus on improving AI models during the post-training phase is emerging 🎯
Upcoming AI agents could revolutionize various sectors by acting on behalf of users 🌐
The upcoming 18 months are critical for the AI race with new models expected from key players ⏳
Key Takeaways
AI advancements may be hitting a plateau, sparking concern in Silicon Valley 🌄
Billion-dollar investments in AI may not be yielding the expected returns 💸
Scaling laws, the empirical belief that more data and compute power improve AI, might be more theory than reality 📉
The search for new AI applications and use cases becomes crucial 🕵️
Expect a rise in AI agents acting on behalf of users in numerous areas 🤖
Overview
Artificial intelligence, once considered the burgeoning powerhouse of technological innovation, might be on the cusp of a slowdown. The video from CNBC explores the waning momentum in AI advancements, with even flagship models like Claude 3 and the next generation of GPT now facing scrutiny. Experts in the field are reassessing strategies as industry giants spend billions, hoping for a breakthrough that may not be forthcoming.
The industry's confidence in scaling laws—the notion that more compute power and data will drive AI advancements—faces new skepticism. As companies like OpenAI and Google try to scale AI further, they confront new challenges: data limitations and the risks associated with synthetic data creation. Meanwhile, there remains a strong push to extract more from existing AI models through improved post-training techniques.
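To make the scaling-law idea concrete, consider the best-known published form, the Chinchilla fit (Hoffmann et al., 2022), used here purely as an illustration since none of the labs in the video disclose their internal formulas. It models pre-training loss as

$$ L(N, D) \approx E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}} $$

where N is the model's parameter count, D is the number of training tokens, and E, A, B, alpha, and beta are constants fit to experimental runs. The form itself explains both sides of the debate: each term shrinks as a power law, so doubling parameters or data buys progressively smaller loss reductions, and because the constants are empirical fits rather than physical guarantees, there is no assurance the trend extends indefinitely.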
In the face of these potential plateaus, the race for AI-powered solutions is still heating up. AI-driven agents are set to become transformative across sectors, challenging how we interact with technology. The next 18 months are pivotal as companies prepare to release new models that could redefine the industry's trajectory, with significant implications for tech investments and strategic direction.
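To give a rough sense of what "AI agents acting on behalf of users" means in practice, here is a minimal Python sketch of the loop most agent designs share: the model picks an action, the harness runs the matching tool, and the result is fed back until the model returns a final answer. Everything in it (TOOLS, call_model, the message format) is a hypothetical illustration rather than any vendor's actual API, and a scripted stand-in replaces the real model call so the example runs on its own.

```python
# A minimal sketch of an "agentic" loop: the model repeatedly chooses an
# action, the harness runs the matching tool, and the result is fed back
# until the model produces a final answer. All names here are hypothetical
# placeholders, not any vendor's real API.

TOOLS = {
    "search_flights": lambda query: f"found 3 options for '{query}'",
    "book_flight": lambda choice: f"confirmation #0042 for {choice}",
}

def call_model(history):
    """Toy stand-in for a hosted LLM call: scripted decisions, no real inference."""
    tool_turns = sum(1 for m in history if m["role"] == "tool")
    if tool_turns == 0:
        return {"type": "tool", "tool": "search_flights", "argument": "SFO to NYC on Friday"}
    if tool_turns == 1:
        return {"type": "tool", "tool": "book_flight", "argument": "option 2"}
    return {"type": "final", "content": "Your Friday SFO-NYC flight is booked (option 2)."}

def run_agent(user_goal, max_steps=5):
    history = [{"role": "user", "content": user_goal}]
    for _ in range(max_steps):
        action = call_model(history)                         # model decides the next step
        if action["type"] == "final":
            return action["content"]
        result = TOOLS[action["tool"]](action["argument"])   # agent executes the chosen tool
        history.append({"role": "tool", "content": result})  # result is fed back to the model
    return "stopped: step limit reached"

print(run_agent("Book me a flight to New York on Friday"))
```

The point of the sketch is that the surrounding loop is deliberately simple; the intelligence lives in the model's choice of the next action, which is why vendors can layer "agentic" behavior on top of the same underlying models discussed above.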
Chapters
00:00 - 00:30: Introduction to AI Breakthroughs. The chapter frames AI breakthroughs as a question of timing rather than possibility. It mentions Google's recent update on its large language model Gemini, notes that Claude 3 is arguably one of the most powerful AI models currently available, and speculates that ChatGPT-5 could be a significant step forward. It then raises the central question: what if the assumption that AI models will keep growing and improving is faltering, and progress is plateauing or slowing down?
00:30 - 01:00: Impact on Major Tech Companies. The chapter examines potential challenges facing major technology companies like Nvidia, Amazon, Google, and Microsoft amidst rapid advancements in artificial intelligence (AI). It highlights skepticism about whether increased GPU production is leading to corresponding advancements in AI capabilities. The discussion raises concerns about the sustainability of current spending levels, as these tech giants seek tangible use cases and transformative applications that justify their massive financial commitments to AI development. The chapter features insights from Deirdre Bosa, questioning whether AI's progress may have reached a plateau.
01:00 - 01:30: Concerns in Silicon Valley. The chapter discusses the growing worries within the tech industry about the potential stagnation of AI technology, highlighting that the rapid progress of recent years now appears to be slowing. There's discussion of hitting a ceiling in terms of improvements and of an anticipated asymptote in scaling AI models, limitations that even major companies like OpenAI seem unable to escape.
01:30 - 02:00: Financial Investment and Progress. This chapter discusses the financial investments made in technology, particularly focusing on AI and its development by major players like Google and OpenAI. It highlights the significant capital investment required to stay competitive and the expectation of substantial returns. However, it also notes the challenges and potential limits of growth as signs of stagnation start to appear, especially in the progression of AI models such as OpenAI's GPT series. While previous advancements between model generations have been notable, there's concern that future progress may not maintain the same pace.
02:00 - 02:30: Challenges Faced by AI Models. The chapter discusses the stagnation in the advancement of AI models, which had previously been improving exponentially in understanding, generation, and reasoning capabilities. Reports suggest that progress has slowed sharply, contradicting earlier expectations of continually bigger and better AI systems through extended training. The focus is specifically on OpenAI and its anticipated next model, 'Orion', highlighting industry concerns about reaching a plateau in AI advancements.
02:30 - 03:00: Generational Leap Expectations. The chapter discusses the initial high expectations for OpenAI's new system Orion, which was anticipated to represent a generational advancement toward AGI (artificial general intelligence). That vision is now being scaled back: employees who used or tested Orion reported that its quality improvement was modest and fell short of the leap seen between GPT-3 and GPT-4, and that Orion is not reliably better than its predecessor at specific tasks such as coding.
03:00 - 03:30: Current Plateau in AI Advancements. The chapter discusses the current state of AI advancements, noting a plateau following the initial rapid development that followed ChatGPT's release in late 2022. It suggests that performance improvements are tapering off for many new models, including those by leading AI developers like Anthropic, which could be facing challenges in enhancing its capabilities.
03:30 - 04:00: Data Limitations and Synthetic Data. The chapter discusses the challenges companies face in developing large language models despite significant financial backing from major corporations like Microsoft and Amazon. A promised new version of Anthropic's Opus model reportedly did not improve as much as expected given its size and cost, raising concerns about diminishing returns and plateauing progress. The discussion notes that even Google may be seeing similar signs of slowing progress, underscoring the data limitations and financial burdens associated with building and running such advanced systems.
04:00 - 04:30: Post-Training Innovations. The chapter discusses the current state of large language models (LLMs), noting that a few companies have converged at the top of the field. Even so, those companies are working on their next iterations, which are becoming increasingly difficult to develop: the 'low hanging fruit' (the easier advancements) has already been picked, and future progress will be harder. The discussion highlights Google's Gemini, which is playing catch-up to leading models from OpenAI and Anthropic, with reports suggesting that an upcoming version is not meeting internal expectations.
04:30 - 05:00: Emergence of AI Agents. The chapter discusses the significant investments being made in AI technology, raising the question of whether these expenditures will lead to substantial growth or will need time to be digested and integrated into existing systems. The concern stems from static or reduced revenue forecasts despite increased spending, a slowdown also acknowledged by pioneering AI researchers such as OpenAI co-founder Ilya Sutskever.
05:00 - 05:30: Agentic Platforms and Transformations. The chapter covers Ilya Sutskever's view that the low hanging fruit from scaling up pre-training has largely been plucked (he recently raised a $1 billion seed round for his new AI startup), alongside the counterargument that foundation-model pre-training and scaling remain intact and continuing. The disagreement turns on the fact that scaling laws are empirical regularities rather than fundamental physical laws.
05:30 - 06:00: Nvidia's Role in AI Growth. The chapter presents the view that AI scaling continues, with no evident slowdown relative to what experts have observed over the past decade. It suggests that although scaling is expected to eventually hit limits, that point has not been reached yet. Notable figures such as Sam Altman express confidence that there is 'no wall', though OpenAI and Anthropic did not respond to requests for comment.
06:00 - 06:30: Future Model Releases and Implications. This chapter discusses the development and potential plateauing of Google's Gemini, which Google says has seen meaningful gains in reasoning and coding capabilities. The chapter also introduces scaling laws, the idea that adding compute power and data leads to better models, while implying that this progress might be reaching a plateau.
Are AI Advancements Already Slowing Down? Transcription
00:00 - 00:30 AI breakthroughs have been a
question of when, not if. Google unveiling long awaited
new details about its large language model Gemini. Claude three is arguably now
one of the most powerful AI models out there, if not the
most powerful. Preview, if you will, for us
ChatGPT five. I expect it to be a
significant leap forward. But what if that core
assumption that models can only keep getting bigger and
better is now fizzling? Is there really a slowing in
progress because that wasn't
00:30 - 01:00 expected? It could spell cracks in the
Nvidia bull story. We're increasing GPUs at the
same like rate, but we're not getting the intelligence
improvements out of it. Calling into question the
gigantic ramp in spending from Amazon, Google, Microsoft, a
rush for tangible use cases, and a killer app. I'm Deirdre Bosa with the
TechCheck take: has AI progress peaked?
01:00 - 01:30 Call it performance anxiety. The growing concern in
Silicon Valley that AI's rapid progression is losing steam. We've really slowed down in
terms of the amount of improvement. Reached a ceiling and is now
slowing down. In the pure model competition,
the question is, when do we start seeing an asymptote to
scale. Hitting walls that even the
biggest players from OpenAI to
01:30 - 02:00 Google can't seem to
overcome? Progress didn't come cheap. Billions of dollars invested
to keep pace, banking on the idea that the returns would
be outsized, too. But no gold rush is
guaranteed to last, and early signs of struggle are now
bubbling up at major AI players. The first indication
that things are turning: the lack of progression between
models. I expect that the delta
between 5 and 4 will be the same as between 4 and 3. Each new generation of
OpenAI's flagship GPT models,
02:00 - 02:30 the ones that power ChatGPT
they have been exponentially more advanced than the last
in terms of their ability to understand, generate and
reason. But according to reports,
that's not happening anymore. There was talk prior to now
that these companies were just going to train on bigger and
bigger and bigger systems. If it's true that it's topped out,
that's not going to happen anymore. OpenAI has led the pack in
terms of advancements. Its highly anticipated next model,
called Orion, was expected
02:30 - 03:00 to be a groundbreaking system
that would represent a generational leap in bringing
us closer to AGI or artificial general intelligence. But that initial vision, it's
now being scaled back. Employees who have used or
tested Orion told The Information that the increase
in quality was far smaller than the jump between GPT
three and four and that they believed Orion isn't reliably
better than its predecessor at handling certain tasks like
coding.
03:00 - 03:30 To put it in perspective,
remember ChatGPT came out at the end of 2022. So now it's been, you know,
close to two years. And so you had, initially a
huge ramp up in terms of what all these new models can do. And what's happening now is
you've really trained all these models, and so the
performance increases are kind of leveling off. The same thing may be
happening at other leading AI developers, like the startup
Anthropic, which could be hitting
roadblocks to improving its most powerful model, the
Opus, quietly removing wording
03:30 - 04:00 from its website that
promised a new version of Opus later this year, and sources
telling Bloomberg that the model didn't perform better
than the previous versions as much as it should, given the
size of the model and how costly it was to build and
run. These are startups focused on
one thing the development of large language models with
billions of dollars in backing from names like Microsoft and
Amazon and venture capital. But even Google, which has
enough cash on hand to buy an entire country, may also be seeing
progress plateau.
04:00 - 04:30 The current generation of LLM
models, roughly a few companies have converged at
the top, but I think we're all working on our next versions
too. I think the progress is going
to get harder. When I look at '25, the low
hanging fruit is gone. You know, the curve, the
hill is steeper. Its principal AI model,
Gemini, is already playing catch up to OpenAI and
Anthropic. Now, Bloomberg reports,
quoting sources that an upcoming version is not
living up to internal
04:30 - 05:00 expectations. It has to make you think,
okay, are we going to go through a period here where
we're going to need to digest all these hundreds of billions
of dollars we've spent on AI over the last couple of
years? Especially if revenue
forecasts are getting cut or not changing, even though
you're increasing the spending you're doing on AI. The trend has even been
confirmed by one of the most widely respected and
pioneering AI researchers, Ilya Sutskever, who
co-founded OpenAI and raised a
05:00 - 05:30 $1 billion seed round for his
new AI startup. As you scale up pre-training,
a lot of the low hanging fruit was plucked. And so it makes sense to me
that you're seeing a deceleration in the rate of
improvement. But not everyone agrees the rate
of progress has peaked. Foundation model pre-training scaling is intact and it's
continuing. You know, as you know, this
is an empirical law, not a fundamental physical law.
05:30 - 06:00 But the evidence is that
it continues to scale. Nothing I've seen in the field
is, you know, out of character with what I've seen
over the last ten years or leads me to expect that
things will slow down. There's no evidence that the
scaling laws, as they're called, have begun to
stop. They will eventually stop. But we're not there yet. And even Sam Altman posting
simply, there is no wall. OpenAI and Anthropic. They didn't respond to
requests for comment.
06:00 - 06:30 Google says it's pleased with
its progress on Gemini and has seen meaningful performance
gains in capabilities like reasoning and coding. Let's get to the why. If progress is in fact
plateauing, it has to do with scaling
laws, the idea that adding more compute power and more
data guarantees better models to an infinite degree. In recent years, Silicon
Valley has treated this as
06:30 - 07:00 religion. One of the properties of
machine learning, of course, is that the larger the brain,
the more data we can teach it, the smarter it becomes. We call it the scaling law. There's every evidence that
as we scale up the size of the models, the amount of
training data, the effectiveness, the quality,
the performance of the intelligence improves. In other words, all you need
to do is buy more Nvidia GPUs,
07:00 - 07:30 find more articles or YouTube
videos or research papers to feed the models, and it's
guaranteed to get smarter. But recent developments
suggest that may be more theory than law. People call them scaling laws. That's a misnomer. Like
Moore's law is a misnomer. Moore's law, scaling laws. They're not laws of the
universe, they're empirical regularities. I am going to
bet in favor of them continuing, but I'm not
certain of that. The hitch may be data. It's a key component of that
scaling equation, but there's only so much of it in the
world.
07:30 - 08:00 And experts have long
speculated that companies would eventually hit what is
called the data wall, that is, run out of it. If we do nothing, and if you
know at scale, we don't continue innovating, we're
likely to face similar bottlenecks in data like the
ones that we see in computational capability and
chip production, or power or data center build outs. So AI companies have been
turning to so-called synthetic data: data created by AI, fed
back into AI. But that could create its own problem. AI is an industry which is
garbage in, garbage out.
08:00 - 08:30 So if you feed into these
models a lot of AI gobbledygook, then the models
are just going to spit out more AI gobbledygook. The Information reports that
Orion was trained in part on AI generated data produced by
other OpenAI models, and that Google has found duplicates
of some data in the sets used to train Gemini. The problem? Low quality data, low quality performance. This is what a lot of the
research that's focused on synthetic data is focused on.
08:30 - 09:00 Right. So if
you don't do this well, you don't get much more than you
started with. But even if the rate of
progress for large language models is plateauing, some
argue that the next phase, post-training or inference,
will require just as much compute power. Databricks CEO Ali Ghodsi
says there's plenty to build on top of the existing
models. I think lots and lots of
innovation is still left on the AI side. Maybe those who
expected all of the ROI to happen in 2023, 2024, maybe
they, you know, they should
09:00 - 09:30 readjust their horizons. The place where the industry
is squeezing to get that progress has shifted from
pre-training, which is, you know, lots of internet data,
maybe trying synthetic data on huge clusters of GPUs towards
post-training and test-time compute, which is more about,
you know, small amounts of data but is very high quality
and very specific. Feeding data, testing
different types of data, adding more compute. That all happens during the
pre-training phase when models
09:30 - 10:00 are still being built before
they're released to the world. So now companies are trying
to improve models in the post-training phase. That means making adjustments
and tweaks to how it generates responses to try and boost
its performance. And it also means a whole new
crop of AI models designed to be smarter in this
post-training phase. OpenAI just announced an
improved AI model. They say it has better
reasoning. This had reportedly been called Strawberry,
so there's been a lot of buzz around it. They're called reasoning
models, able to think before they answer. And the newest
leg in the AI race.
10:00 - 10:30 We know that thinking is
oftentimes more than just one shot, and thinking requires
us to maybe do multiple plans, multiple potential answers
that we choose the best one from. Just like when we're
thinking we might reflect on the answer before we deliver
the answer. Reflection, we might take a
problem and break it down into step by step by step, chain
of thought.
10:30 - 11:00 If AI acceleration is tapped
out, what's next? The search for use cases
becomes urgent. Just in the last multiple
weeks, there's a lot of debate: have we hit the
wall with scaling laws? It's actually good to have
some skepticism, some debate, because that I think will
motivate, quite frankly, more innovation, because we've barely scratched the
surface of what existing models can do. The models are actually so
powerful today and we've
11:00 - 11:30 not really utilized them to
anywhere close to the level of capability that they actually
offer to us and bring true business transformation. OpenAI, Anthropic and Google. They're making some of the
most compelling use cases yet. OpenAI is getting into the
search business. Anthropic unveiling a new AI
tool that can analyze your computer screen and take over
to act on your behalf. One of my favorite
applications is NotebookLM. You know, there's this
Google application that came out. I used the living
daylights out of it just
11:30 - 12:00 because it's fun. But the next phase, the
development and deployment of AI agents, that's expected to
be another game changer for users. I think we're going to live in
a world where there are going to be hundreds of millions or
billions of different AI agents, eventually, probably
more AI agents than there are people in the world. I spoke with
Nvidia after the call. They said, Jim, you better
start thinking about how to use the term agentic when
you're out there, right? Because agentic is the term. Benioff's been using it for a
while.
12:00 - 12:30 He's very agent. You can have health agents and
banking agents and product agents and ops agents and
sales agents and support agents and marketing agents
and customer experience agents and analytics agents and
finance agents and HR agents. And it's all built on this
Salesforce platform, meaning it's all
powered by software. Everybody's talking about when
is AI going to kick in for software? It's happening now. Well, it has to be. It's not
a future thing. It's now. It's something the
stock market is already taking note of. Software stocks
seeing their biggest outperformance versus semis
in years.
12:30 - 13:00 And it's key for Nvidia,
which has become the most valuable company in the world
and has powered broader market gains. It's hard for me to imagine
that Nvidia can grow as fast as people are modeling, and I
see that probably as a problem, as at some point when you get
into next year and Nvidia shipping Blackwell in volume,
which is their latest chip, and then the vendors can say,
okay, we're getting what we need, and now we just need to
digest all this money that we've spent because it's not
scaling as fast as we thought, in terms of the improvements.
13:00 - 13:30 The sustainability of the AI
trade hinges on this debate. OpenAI, xAI, Meta, Anthropic,
and Google they're all set to release new models over the
next 18 months. Their rate of progress, or
lack of it, could redefine the stakes of the race.