Unpacking the Impact of AI on Human Cognition

AI is Making You Dumber. Here's Why

Estimated read time: 1:20


    Summary

    In 2035, AI has permeated every aspect of life, from generating corporate presentations to creating chart-topping songs and films. While it offers incredible advancements in science and medicine, the conversation turns to consumer-grade AI and its overuse. The video highlights how heavy reliance on AI can lead to cognitive offloading, where mental tasks are shifted to AI, potentially weakening critical thinking skills. Experts express concerns over AI's role in learning and creativity, warning of long-term cognitive atrophy if not used judiciously. The episode ends by urging viewers to use AI thoughtfully, understanding its limitations while valuing human critical thinking skills.

      Highlights

      • In 2035, every corporate presentation is AI-generated! 
      • Pop songs topping the charts are composed by AI 
      • AI-generated films hit the cinemas in mere days 
      • Consumer AI can make us rely less on our mental faculties 
      • A professor found students' essays improved suspiciously during the pandemic 
      • Over-dependence on AI might lead to cognitive atrophy 
      • Algorithmic complacency: letting AI decide what we consume online 
      • Model collapse: AI repeatedly re-reading and rewriting AI content degrades it with each cycle 

      Key Takeaways

      • AI is everywhere in 2035, from work presentations to pop songs 
      • Overusing AI may lead to 'cognitive offloading,' making us mentally lazier 
      • Heavy reliance on consumer AI, like GPS, weakens our natural skills 
      • Critical thinking can decline over time with too much AI dependency 
      • Professor warns AI could be a 'mixed bag'; use it to enhance, not replace skills 
      • Choose wisely: balance AI tools with your brain's thinking power 
      • The decade poses a question: Is AI smartening us up or dulling us down? 

      Overview

      In a snapshot of 2035, AI has become an omnipresent force—from crafting workplace presentations to composing top tracks on the radio. It’s a fascinating glimpse of the near future, where the wonders of AI take center stage, offering both opportunities and challenges. While AI continues to conquer the realms of science and medicine, ColdFusion takes a deep dive into consumer-grade AI to evaluate its pervasive influence on convenience culture.

      As AI makes life seemingly easier, experts worry about cognitive offloading—a term used to describe shifting mental tasks onto AI systems. The concern is not just about AI doing your homework or picking songs; it's a profound look at how constant reliance can nudge our natural cognitive abilities towards complacency. From diluted critical thinking to impaired spatial navigation, overuse paints a stark picture of potential mental atrophy.

      Despite the technological advances, a cautionary note is sounded throughout the episode. The message is not to abandon AI but to be mindful consumers and active thinkers. The narrative strongly suggests valuing and nurturing brain power to complement, not be overshadowed by, AI. The critical takeaway for viewers is the choice to foster a harmonious relationship with AI—capitalizing on its benefits while staying intellectually engaged and resilient.

            Chapters

            • 00:00 - 00:30: Introduction and Setting the Scene The chapter sets the stage by welcoming the viewer, with a brief discussion of the year 2035, where AI dominates the corporate world. This includes AI handling all communications and presentations, presented as a slice of daily life in this futuristic setting. The chapter establishes the tone and the expectation that AI is integral to everyday functions in the workplace.
            • 00:30 - 01:00: AI in Everyday Life "AI in Everyday Life" explores the profound integration of artificial intelligence into our daily routines. The chapter illustrates how questions at workplaces are swiftly addressed by AI, with responses seamlessly incorporated into reports. It highlights how the majority of top music hits and cinema productions are now AI-driven, showcasing rapid content creation. Moreover, the educational landscape has transformed, focusing on equipping students with skills to effectively utilize domain-specific AI tools. The overarching theme underlines that in the modern world, AI is pervasive, and acquiring information or solutions is merely a prompt away.
            • 01:00 - 01:30: The Potential Downsides of AI The chapter explores the potential negative impacts of artificial intelligence on human cognitive abilities. It raises concerns about humans becoming overly dependent on AI technology, possibly leading to a decline in problem-solving skills and independent thinking. The discussion reflects on whether AI could contribute to making humans less intelligent or proactive in the future by softening our cognitive abilities.
            • 01:30 - 02:00: The Impact of AI on Critical Thinking The chapter titled 'The Impact of AI on Critical Thinking' introduces the subject by noting that Cold Fusion episodes are now also available on Spotify. It highlights the distinction between advanced AI applications in fields like science, physics, and medicine, and the more typical consumer AI applications (referred to as 'AI slop'). The chapter suggests that the discussion will focus on the latter, setting the stage for an exploration of how everyday AI impacts critical thinking.
            • 02:00 - 02:30: AI and Mental Atrophy This chapter discusses the theme of AI and mental atrophy, focusing on the concept of 'word overuse.' It begins by highlighting the potential negative impact of relying too much on AI technologies, such as how heavy use of apps like Google Maps can affect the human brain's natural adaptability. The episode aims to balance the discussion by not only addressing the problems associated with AI dependence but also providing practical tips on how to mitigate these effects.
            • 02:30 - 03:00: Justice and AI Misuse The chapter discusses the implications of convenience technologies on human abilities, specifically focusing on spatial memory and directional sense. It points out that despite data indicating otherwise, individuals over-relying on GPS systems might not recognize their declining spatial awareness. This phenomenon is set against a broader exploration of AI and its impacts, foreshadowing the deeper dive into AI misuse and justice issues in the discussion. It also mentions a study conducted five years ago and introduces Professor David Rafo, who becomes pivotal to the ensuing narrative.
            • 03:00 - 03:30: Algorithmic Complacency The chapter titled 'Algorithmic Complacency' discusses a professor's growing concern over the quality of his students' written assessments. Initially, the students' writing was structurally weak, and the whole faculty observed that students were disinterested. However, during the pandemic, the professor noticed a significant improvement in the writing quality of some students, which seemed unnatural and extreme to him. Suspecting foul play, he decided to confront his students directly and soon discovered that they were using external aids to enhance their writing.
            • 03:30 - 04:00: AI and Information Integrity In this chapter, the discussion centers around the role of AI in influencing information integrity. A Portland professor reflects on the impact of AI tools, highlighting that they enhance writing mainly through tools rather than improving the writers' skills themselves. While acknowledging the efficiency AI brings by rapidly organizing and gathering information, developing designs, and offering solutions to complex problems, the professor maintains a critical stance. He suggests that reliance on AI to improve work may not necessarily lead to genuine skill enhancement, emphasizing the importance of developing actual writing skills alongside using AI tools.
            • 04:00 - 04:30: AI as a Tool, Not a Replacement The chapter "AI as a Tool, Not a Replacement" explores the concept that mental and cognitive abilities need regular exercise to remain strong. It posits that the overuse of artificial intelligence might lead to mental atrophy due to a lack of cognitive exercise. The chapter highlights the challenge in maintaining mental discipline and resisting the temptation of easy solutions provided by AI technologies.

            AI is Making You Dumber. Here's Why Transcription

            • 00:00 - 00:30 Thank. Thank you. Thank you. Maybe instead you should say, "Dear John, I wanted to say thank you for taking the time to meet for lunch last week. Please feel free to use me as a reference in the future. All the best." Hi, welcome to another episode of Cold Fusion. The year is 2035 and this is a slice of daily life. Every office is run by AI. Every corporate presentation, email, slideshow, it's all
            • 00:30 - 01:00 generated by AI. Every question in the office is met with frantic typing. Every answer is copied into reports. Most of the top 20 songs on the radio are AI-generated. The top films in cinemas are produced in a few days using artificial intelligence. Of course, studying is a thing of the past. Students must only learn the best way to use their domain-specific AI tools. And you get it by now because everything you need in this new world is a prompt away. In the last century, this all
            • 01:00 - 01:30 would have sounded a little cartoonish, but in 2025, it's pretty easy to see how this all could happen with this technology seemingly baked into every single piece of tech that we use today. Are we going to slowly stop relying on our brains? Will we stop solving problems ourselves over the next decade? Will AI gradually soften our brains, rendering us incapable of thinking for ourselves? In other words, is AI making us dumber? You are watching Cold Fusion
            • 01:30 - 02:00 TV. But before we get started, just for your information, Cold Fusion episodes are now available on Spotify, so you can watch them there if you prefer. To be clear, AI is doing some incredible things in science, physics, and medicine. But the story here is about consumer-grade AI, the run-of-the-mill stuff that today is known as AI slop. Something to keep in mind throughout
            • 02:00 - 02:30 this episode is the word overuse. That's the theme here. And don't worry, this video isn't completely negative. At the end, I'll share some tips on how you can avoid the pitfalls of AI use. Now, to kick things off contextwise and to set the tone for the episode, let's start with the fundamental problem. It's about the way the human brain naturally adapts to technology. I'm going to start with an app that most of us use, Google Maps. A 2020 study found that despite economic benefits, heavy GPS use can weaken
            • 02:30 - 03:00 spatial memory. Funnily enough, those impaired didn't believe that they had a poor sense of direction, even though the data proved otherwise. As we all know, convenience often comes at a price. And these simple GPS systems aren't even AI. It's just an app on your phone that helps with directions. But yet, overuse can damage our memory. As we'll later see in this episode, AI is a whole different kettle of fish. Meanwhile, around the same time as the study 5 years ago, a professor by the name of David Rafo had become
            • 03:00 - 03:30 increasingly concerned with the quality of his students' written assessments. The academic structure was weak. The whole faculty could tell that the students were disinterested. Then suddenly during the pandemic, he noticed that the writing quality of a number of his students improved significantly. But the professor smelt a rat. Something felt off. A slight improvement would have made more sense. But the leap was so extreme that it felt unnatural. Rafo decided to ask his students directly. Upon his discovery that they were using
            • 03:30 - 04:00 AI, he remarked, quote, "I realized it was the tools that improved their writing and not their writing skills. Skills being the operative word here." The Portland professor didn't outright shame the use of AI, but he did say it was a mixed bag. Quote, "AI enables us to get work done quickly and efficiently by rapidly gathering and organizing information across written communications, developing designs, and providing suggestions on how to address difficult issues and problems." End quote. But he added a caveat. Quote,
            • 04:00 - 04:30 "Our mental and cognitive abilities are like muscles, so they need to be regularly used to remain strong and vibrant. Truly, it would take an extraordinary person to have the discipline necessary to stay mentally strong and vibrant when engaging with the technologies that are available." End quote. And that's an interesting point to consider. With the chronic overuse of artificial intelligence, would there be a kind of mental atrophy from a lack of cognitive exercise? And furthermore, as he states, it's actually an uphill battle to resist the temptation of having things done easier
            • 04:30 - 05:00 and hence this atrophy. This conversation is an important one to have. As Dr. Ann McKee, Alzheimer's researcher, said in the Diary of a CEO podcast, staying mentally active and in control is an important habit to prevent dementia. Over 50% of people that live to the age of 85, if you'd looked at their brains, half of them would have Alzheimer's disease. Not everyone gets the symptoms of Alzheimer's disease at 85, but they'd have the pathology. Now, things you can do to uh lessen the
            • 05:00 - 05:30 symptoms of Alzheimer's disease are using your brain, challenging your brain because high cognitive reserve, high cognitive ability uh gives you uh strength, brain strength, brain resilience against these diseases. So even if you have pathology, you can circumvent the areas of injury, the areas that aren't working well, and not experience the symptoms. So that's one thing. Knowing that information, this next clip highlights the exact applications we don't want to
            • 05:30 - 06:00 encourage. So you guys may have noticed I snuck a peek back at the shelf a moment ago. I wasn't paying attention, but let's see if Gemini was. Hey, did you happen to catch the title of the white book that was on the shelf behind me? The white book is Atomic Habits by James Clear. That is absolutely right. It is prudent to gain some awareness of how revolutionary tech progressively makes us more dependent, something that has been quite prevalent in the 21st
            • 06:00 - 06:30 century. In fact, there's actually a wide range of studies on the effects of using calculators for basic maths or autocorrect damaging students' ability to punctuate and spell effectively. So, when it comes to systems such as ChatGPT, Llama, Grok, and other language models that do the actual thinking for us, you can see now that we're stepping into uncharted territory. By now, you've probably heard that all things clerical or repetitive are expected to be replaced by AI. And
            • 06:30 - 07:00 this includes jobs like data entry, bookkeeping, and customer service. Well, in some ways, that shift has already begun. And the results are worrying. Aside from generated answers being factually wrong, a new study showed how increased reliance on AI resulted in a phenomenon known as cognitive offloading. Basically, using external tools, resources, or systems to reduce the mental effort expended for tasks. They surveyed over 600 participants across multiple demographics to see how AI affected their critical thinking
            • 07:00 - 07:30 skills. They noted that quote frequent AI users were more likely to offload mental tasks relying on the technology for problem solving and decision-making rather than engaging in independent critical thinking. Over time, participants who relied heavily on AI tools demonstrated reduced ability to critically evaluate information or develop nuanced conclusions. End quote. Believe it or not, this has already started to affect the way society functions, even when it comes to
            • 07:30 - 08:00 administering justice. In 2023, the Detroit Police Department was notified about a liquor store robbery. The surveillance footage was poor, so with the limited information the department turned to a facial recognition vendor called DataWorks Plus, a criminal database manager founded 25 years ago. They now use AI to make law enforcement easier. When the scan of the footage was completed, the AI facial recognition analysis pinged a match from a 2015 mugshot. It was of Porcha Woodruff. She'd been arrested previously for an expired license. When
            • 08:00 - 08:30 the police arrived to arrest Porcha, she was reasonably surprised. She pointed to her stomach as she was 8 months pregnant at the time, definitely in no shape to commit a violent felony. Acting solely based on DataWorks analytics, the police wrongly arrested her, which led to Porcha suffering from dehydration and labor complications. Ultimately, the case was dismissed due to insufficient evidence. But sadly, this isn't even the first time that the Detroit Police Department alone has succumbed to
            • 08:30 - 09:00 cognitive offloading when investigating crimes. They currently face three lawsuits over wrongful arrests based on the use of DataWorks AI, and there are many such cases. From the outside looking in, it might seem obvious. The police were just being negligent, and they were. But the issue is much bigger. This technology is being sold as a reliable alternative. And that's the real problem. People trust AI because it makes life easier. Just like in the GPS study, the effects are hard to notice
            • 09:00 - 09:30 when it becomes part of our daily routine. It's the convenience. It's so tempting. But hey, want to see cognitive offloading in action? Just head over to X/Twitter and look at the replies to tweets over there. You'll see people asking the Grok AI to explain the simplest of posts. Many users simply can't be bothered thinking for themselves anymore. They would rather trust an AI to find the answer. Whether this is perceived as progress or not is up to you. Now, that being said, I could see how asking an AI
            • 09:30 - 10:00 on social media could be useful to save time, but the overuse and asking the AI to explain simple posts is a bit worrying. But even if you think you're not the kind of person to do that, there's something interesting beyond this that the rest of us can learn from. When you think about it, we are all surrendering to algorithms for decision-making daily. Think of how algorithms work on platforms like Instagram, Facebook, Twitter, TikTok, and even YouTube. It's
            • 10:00 - 10:30 even possibly how you found this video. But the fact of the matter is that today, people tend to surrender their agency all the time. They simply don't realize it. The more we rely on these algorithms, the less we ask ourselves what we actually want. Ultimately, the algorithm decides, not us. Alec Watson from the channel Technology Connections calls this algorithmic complacency. I want to talk about how we decide what we want to see, watch, and do on the internet because, well, I'm
            • 10:30 - 11:00 not sure we realize just how infrequently we are actually deciding for ourselves these days. I'm going to be focusing on something which feels new and troubling. I'm starting to see evidence that an increasing number of folks actually prefer to let a computer program decide what they will see when they log on, even when they know they have alternatives. I felt it needed a name. I've chosen to call it algorithmic complacency. Think for a moment about
            • 11:00 - 11:30 what your experience on the internet is like these days, and if you're old enough, how it differs from a couple of decades ago. The internet used to only exist through a web browser on a desktop computer, maybe a laptop if you're fancy. It was a dark time. Back then, Google was just a happy little search engine which helped you find websites. And when you found a cool website which you liked, you'd use your web browser to bookmark that website. That would make
            • 11:30 - 12:00 sure you could get back to it later without having to search for it again, like writing a note to yourself. In other words, the internet was still very manual and you were in charge of navigating it and curating your own experience with it. For the generations entering adulthood in the 2020s, they tend to trust algorithms more than they trust other humans. This shows why students who used AI during and after the pandemic to skip basic learning skills often carried that habit into their jobs. As we'll see near the end of
            • 12:00 - 12:30 this episode, many now rely on extra tools to cover gaps in their abilities. So, here's a question for you. Is this working smarter or is this slowly eroding long-term mental strength? As always, it's quite nuanced. Currently, for simple, repetitive tasks where AI doesn't make mistakes, it can indeed save time. But if people continue to use AI to do all of their thinking for them, they'll barely be thinking at all. And in that way, AI can make you dull. And this is especially true when it begins
            • 12:30 - 13:00 to replace critical thinking altogether. Since the mid-'90s, the internet brought us into the information age, now amplified by search engines, social media, and YouTube. Now, with AI synthesizing that information into knowledge, we've entered the knowledge age. And that sounds great in theory, but if that knowledge is flawed and most people can't tell, our grasp on reality starts to slip. We already saw this with
            • 13:00 - 13:30 the launch of AI Overviews by Google last year. You all know it. It's that little AI-generated block that you see at the top of your Google search results. At launch, it was a disaster. From calling Obama the first Muslim commander-in-chief to calling snakes mammals or saying that eating one rock a day is healthy, it revealed the glaring shortcomings of AI. Now, in a few years this technology could be near perfect, but for now trust is compromised because, at the time of writing, hallucinations and bad sources
            • 13:30 - 14:00 remain a fundamental issue. It's a real problem because people go along and take this information and then post it on other platforms as fact. And that's the crux here. AI is fundamentally different from the other technologies that we mentioned earlier because it still gets a lot wrong. 70% of people say that they trust AI summaries of news and 36% believe that the models give factually accurate answers, but a BBC investigation last year found that over half of the AI-generated summaries from ChatGPT,
            • 14:00 - 14:30 Copilot, Gemini, and Perplexity had quote significant issues. Even just simple tasks like asking ChatGPT to make a passage look nicer can end up distorting the original meaning of the text. And a lot of people wouldn't know this. In early 2023, researchers at Oxford University studied what happens when AI reads and rewrites AI generated content. After just two prompts, the quality dropped noticeably. By the 9th,
            • 14:30 - 15:00 the output was complete nonsense. They call this model collapse, a steady decline in which AI pollutes its own training data, distorting reality with each cycle. In a second, I'll tell you why model collapse is so dangerous for the quality of the knowledge on the internet. But first, just take a listen to what one of the leaders of the study stated in an email exchange. Dr. Ilia Shumailov, who ran the study, states, quote, "It is surprising how fast model collapse kicks in and how elusive it can be. At first, it affects minority data.
            • 15:00 - 15:30 Data that is badly represented. It then affects diversity of the outputs and the variance reduces. Sometimes you observe small improvement for the majority data, which hides away the degradation in performance on minority data. Model collapse can have serious consequences. End quote. But that isn't the most troubling part of the story. According to a separate study conducted by researchers at Amazon Web Services, about 60% of internet content as of this year has been generated or translated by AI. In
            • 15:30 - 16:00 other words, if these numbers are even close to accurate, this technology is causing the internet to slowly eat itself, producing more and more inaccurate information with each cycle. Either AI technology improves so quickly that we avoid the worst case scenario or the internet would just be inaccurate, incomprehensible AI slop. All of this AI content being generated feeds into the dead internet theory. A theory that
            • 16:00 - 16:30 suggests the vast majority of internet content has been replaced by bots and AI. There's an entire episode dedicated to that topic on this channel, so you can check that out if you like. Now, AI might be a great distillery of knowledge in the future, but in its present state, it's such early days. So much so that the technology might be on the precipice of setting us back. But it doesn't have to be this way. As long as people begin to understand the limitations of current AI and do acknowledge that although AI can be useful, it's not ready to be a
            • 16:30 - 17:00 stand-in for your brain. And that's the purpose of this video. I'm not being a doomer here. I'm just trying to warn people to be aware of what could be coming. Geoffrey Hinton, who's considered to be the godfather of AI, has said that while AI is well on its way to being an effective resource, large language models like ChatGPT haven't gained the ability to tell the difference between the truth and a lie. We're at a transition point now where ChatGPT is this kind of idiot savant, and it also doesn't really understand about truth. It's been trained on lots of
            • 17:00 - 17:30 inconsistent data. It's trying to predict what someone will say next on the web. Yeah. And people have different opinions and it has to have a kind of blend of all these opinions so that it can model what anybody might say. It's very different from a person who tries to have a consistent world view. Yeah. Particularly if you want to act in the world. Um it's good to have a consistent world view. We got our own truths. Well, that's the problem, right? Because what
            • 17:30 - 18:00 you and I probably believe, unless you're an extreme relativist, is there actually is a truth to the matter. This might give some of us pause, but younger generations have already become too comfortable using these systems. AI is marketed as flawless, so why not rely on it for information? After all, all it's doing is just making our lives easier. What's the harm? It's what humans have always done. And if we're given the chance to let our minds rest, we'll often take it, no matter the
            • 18:00 - 18:30 cost. As university professors continue to grapple with AI doing students' homework for them, automation and the use of language models is no longer containable. Students coming into university are using AI almost like a rite of passage. And students like those in Rafo's class have graduated, taking their habits with them to the workplace. In fact, this is happening on a massive scale. Surveys show that employees aged 22 to as old as 39 have
            • 18:30 - 19:00 been using AI to lessen their workload. Gen Z has obviously taken the lead with this one, with some businesses finding out that over 90% of their employees use two or more tools weekly. Now, this in itself isn't necessarily a bad thing. Over half of younger employees surveyed about their AI use explained that corporate life can be unbearable, with some spending 30 minutes or more finding the right tone for an email. And a lot of people coming across this video have undoubtedly struggled with productive
            • 19:00 - 19:30 briefings about something discussed just 24 hours earlier, probably thinking, "Oh man, Robert wants to know about the team's synergy again." And this isn't to mention that AI can increase productivity. It can help scale businesses, help with management, and cross team communication. But this episode is focusing specifically on the overuse of LLMs, the AI slop, overusing it to such an extent that it substitutes for thinking with your own gray matter. Later at the end of this episode, we'll explore how to avoid being lulled into not thinking for
            • 19:30 - 20:00 yourself. But all of that being said, if incoming generations of workers aren't careful, they'll become overreliant and their creative muscles will begin to atrophy, much like those surrounding a broken leg. With AI often providing inaccurate information, the last thing you want to be recommended is the wrong SIM for a new country while you're traveling. This is where I'd like to introduce to you Saily, a new eSIM service app from Nord Security. If you've ever been lost abroad or desperately need an internet connection, you'll understand
            • 20:00 - 20:30 what a difference a local SIM card can make. Thanks to you guys watching my videos, I get to travel sometimes to cover some interesting stories in the world of tech and science. And I personally haven't gone without an eSIM. First of all, it saves you a ton of money on roaming fees, and that's because it provides you an internet connection wherever you travel, and you can buy the data as you need. Saily lets you choose from several affordable eSIM plans in over 150 countries and eight regions. You can get tons of data and never have to worry about losing a connection. Plus, with the Saily app, getting an eSIM couldn't be simpler. You just buy a plan
            • 20:30 - 21:00 and activate it, all on the Saily app or website. Once it's activated, you're good to go. No more ducking in and out of cafes for Wi-Fi. In fact, and this is a true story, while traveling in the United States, one of my colleagues had to use my phone for a hotspot to avoid roaming charges. So, with Saily, there's no more waiting in line at the airport for a physical SIM card or fumbling to switch one out. All Saily eSIM plans are compatible with iOS and Android devices. And if you're traveling to multiple countries, the eSIM just needs to be installed once, eliminating the need for
            • 21:00 - 21:30 users to install a new one each time, so you're covered virtually wherever you're going. A Saily eSIM is really a traveler's best friend. It's the best decision you can make before going to another country. Click the link below or use the code cold fusion to get an exclusive 15% off your first purchase. All right, now back to the video. So before swearing off AI altogether or getting paranoid about losing free agency, it's important to remember that while these language models aren't exactly like GPS or
            • 21:30 - 22:00 spellcheck, they are all tools, devices used to carry out a particular function. It might seem like an imminent threat, but the fear of automation isn't new. And when it's done properly, it ultimately makes us more productive. Way back in 1979, Dan Bricklin and Bob Frankston created VisiCalc, the first real spreadsheet program for personal computers. It aimed to speed up the processing of spreadsheets. A lot of people don't know, but if one number changed in a massive spreadsheet, you'd have to recalculate the whole thing and write it
            • 22:00 - 22:30 down. VisiCalc was seen as the first killer app and was one of the key reasons people started buying computers in the first place. Experienced computer buffs didn't understand it: "Hey, I can already do most of this in BASIC," a programming language. But when accountants saw VisiCalc, they cried. Bricklin recalled one accountant, quote: "He started shaking. He said, 'This is what I do all day.'" Of course, this automation didn't eliminate accountants; it just took people who actually understood it to make that
            • 22:30 - 23:00 change happen. And AI language models can do the same, as long as they're handled responsibly. The key is to use AI as more of a companion rather than letting it do the thinking for you. And any answers should be taken with a grain of salt. As Oregon State computer science professor Thomas Dietterich put it, quote, "Although we want to interpret large language models and use them as if they were knowledge bases, they're actually not knowledge bases; they're statistical models of knowledge bases." End quote. Put simply, they're designed to answer questions at length, even if they have
            • 23:00 - 23:30 nothing to offer. It will never say, "I don't know." "There's a lot of work right now on exactly that. As you know, I've been interested in this problem of how a system can have a good model of its own competence: which questions it's competent to answer and which it should refuse to answer. And I think some of those ideas should extend to the LLM case. The neural network technology has some fundamental problems, because it's learning its own representation. It can only represent
            • 23:30 - 24:00 things that, in some sense, it has been exposed to some variation of in the past." Undoubtedly, there's so much sensationalism surrounding this new technology, and some of it isn't unwarranted. As I said, it does have a massive future, but that time is not now. One of the key problems in the findings presented by professors, think tanks, labs, and other institutions is that people trusted AI as their primary source.
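            The idea described above, a system with a model of its own competence that refuses questions it can't answer, can be sketched in a few lines. This is a toy "selective prediction" example, not anything from the video or a real library; every name in it is hypothetical, and the confidence threshold is an arbitrary illustration:

            ```python
            import math

            def softmax(scores):
                """Convert raw model scores into probabilities that sum to 1."""
                exps = [math.exp(s) for s in scores]
                total = sum(exps)
                return [e / total for e in exps]

            def answer_or_abstain(scores, labels, threshold=0.7):
                """Return the most likely label, or refuse when confidence is low."""
                probs = softmax(scores)
                best = max(range(len(probs)), key=lambda i: probs[i])
                if probs[best] < threshold:
                    return "I don't know"
                return labels[best]

            labels = ["cat", "dog", "bird"]
            print(answer_or_abstain([4.0, 0.5, 0.2], labels))  # confident -> "cat"
            print(answer_or_abstain([1.1, 1.0, 0.9], labels))  # uncertain -> "I don't know"
            ```

            A plain language model has no such gate: it always samples a next token, which is one way to picture why it never volunteers "I don't know" on its own.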
            • 24:00 - 24:30 Before we end the video, I just wanted to share this image from a newspaper in 1988. It shows elementary school teachers demonstrating against the use of calculators in grade school. The demand wasn't a blanket ban on calculators in schools altogether, but rather a ban on early use, so that young children could learn the math concepts first. We need to treat AI similarly. It should be a tool that helps us get things done, and done more efficiently, but we shouldn't lose our own ability to understand complex problems. No matter how sophisticated AI becomes,
            • 24:30 - 25:00 humans and their capacity to think critically will be necessary. We have authentic experiences and a nuanced understanding of the complex world around us. Until the AI overlords come, humans should value and treasure their ability to think for themselves. After all, there's a reason why the phrase from the first principle of René Descartes's philosophy is so popular: it truly defines the one thing that makes us human. We think, therefore we are. So that is the story and the
            • 25:00 - 25:30 discussion of whether AI is making us dumber. I'd be interested to hear what you guys think. Thanks so much for watching this longer episode all the way to the end. It means a lot. If you did like this, feel free to subscribe to ColdFusion. There's plenty of other interesting topics on science, technology, and business. Anyway, that's about it from me. My name is Dagogo and you've been watching ColdFusion, and I'll catch you again soon for the next episode. Cheers guys. Have a good one.
            • 25:30 - 26:00 [Outro music]