Here's What GPT-5 Might Look Like & More AI Use Cases
Estimated read time: 1:20
Summary
In this engaging episode, "The AI Advantage" discusses the latest in generative AI, notably centered around the theme of a singular AI assistant akin to GPT-5. Although GPT-5 is not officially previewed, many new releases are seen as steps towards this goal. The video examines developments like GenSpark's AI Sheets and Notion's AI updates, emphasizing the importance of a unified AI system. Furthermore, the video highlights tools like Spotter Studio for YouTubers and emerging models in ChatGPT aimed at developers. These tech advancements showcase an exciting vision for AI's potential to transform workflows and productivity.
Highlights
Generative AI is moving towards a singular, unified assistant model, perhaps represented by GPT-5.
GenSpark is making strides in creating powerful AI tools that could be precursors to GPT-5.
Notion's new AI features aim to create a comprehensive 'everything app.'
Spotter Studio emerges as a vital tool for YouTubers to streamline their workflow.
New models in ChatGPT focus on developers and ease of code integration.
Emerging tech like enterprise search and AI-driven memory systems are game-changers.
Key Takeaways
Generative AI is shifting towards a unified assistant model, potentially realized in GPT-5.
GenSpark and Notion are advancing AI features that hint at the future capabilities of GPT-5.
New tools like Spotter Studio help creators manage content creation efficiently.
ChatGPT continues to evolve with new models and deep research functionalities.
Enterprise search in Notion and memory features exemplify the next steps for smart AI integrations.
Overview
Generative AI is on a trajectory toward creating a unified assistant model, and while GPT-5 hasn't been officially introduced, this video explores the current technological advancements that indicate its inevitable arrival. Key releases from platforms like GenSpark and Notion suggest that the future of AI will involve more seamless, integrated systems that can handle a wide array of tasks autonomously.
This week, the focus is on practical applications of these technologies that can make everyday workflows more efficient. For instance, GenSpark unveiled AI sheets, a tool designed to streamline tasks that involve spreadsheets, while Notion's latest updates position it closer to becoming an all-encompassing app with centralized AI capabilities. The video also discusses how advances like deep research functionality in ChatGPT are reshaping the landscape for developers and users alike.
Spotter Studio is highlighted as an essential tool for YouTube content creators, offering data-driven insights and project management capabilities. The rapid development of enterprise search features and AI memory functionalities further illustrate the ongoing evolution of smart systems that anticipate user needs, making digital assistants not just reactive, but truly proactive tools in our digital lives.
Chapters
00:00 - 01:30: Introduction and Weekly Generative AI Releases The chapter discusses the recent releases in Generative AI, highlighting a common theme of convergence towards the development of a single AI assistant, often referred to as 'GPT-5'. Although OpenAI has not provided a concrete preview of this system, the trend is evident in the advancements and releases observed.
01:30 - 04:00: GenSpark AI Suite Update The chapter dives into the GenSpark suite of agents, which the author sees as representative of a whole product category: a super agent that commands multiple modalities, the newly added AI Sheets, and an AI Drive that can download videos from sites like TikTok (though not YouTube). The argument is that this resembles what GPT-5 may eventually become, one system with all the tooling built in and no manual model picking, and that it currently works better than OpenAI's Operator.
04:00 - 08:00: Spotter Studio Sponsorship and YouTube Content Creation The chapter opens with a quick editing-room note that OpenAI released Codex inside ChatGPT after filming, to be covered in a separate video. It then walks through how much preparation goes into tech videos, from researching topics and testing tools to scripting, packaging, editing, and evaluating data, before introducing the sponsor, Spotter Studio, an AI-powered ideation system for YouTubers built around three tools: Outliers, Brainstorm, and Idea Bank.
08:00 - 11:00: New ChatGPT Models and Features The chapter covers GPT-4.1 and 4.1 mini appearing in the ChatGPT model switcher, coding-focused models previously available only through the API and likely a reaction to the popularity of Gemini 2.5 Pro among developers. It also highlights a new button that exports deep research reports as cleanly formatted PDFs with inline references, making the results easy to share and reuse elsewhere.
11:00 - 13:00: Notion's AI Features and Enterprise Search In this chapter, the discussion revolves around Notion's new AI features: AI meeting notes, its own deep research, and a model switcher, all priced under one business subscription. The standout is enterprise search, which connects not just Notion pages but also Slack, GitHub, Google Drive, and Jira, pointing toward an AI assistant that already has your context instead of one you must prompt with every file.
13:00 - 15:00: Practical Prompt for ChatGPT Image Generation The chapter shares a community-contributed prompt for ChatGPT image generation, which is now also available through o3. The prompt produces a two-by-two visual panel that tells a story or explains a concept approachably, with panels covering what's happening, why it matters, what's unclear, and what you can do. Only the topic needs to be customized, and a live demo generates a panel about AGI.
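The practical prompt covered in this chapter boils down to a reusable template with one customizable slot, the topic. As a rough sketch of that idea (the wording below is a paraphrase for illustration; the exact prompt is linked in the video's description, and only the four panel headings are taken from the video):

```python
# Sketch of the "2x2 visual panel" prompt described in the video.
# The panel headings follow the structure mentioned (what's happening,
# why it matters, what's unclear, what you can do); the exact wording
# of the original shared prompt differs.
TEMPLATE = (
    "Generate an image: a 2x2 educational panel about {topic}. "
    "Panel 1: what's happening. Panel 2: why it matters. "
    "Panel 3: what's unclear. Panel 4: what you can do. "
    "Use simple, approachable language suitable for any age."
)

def build_prompt(topic: str) -> str:
    """Fill the one customizable slot: the topic."""
    return TEMPLATE.format(topic=topic)

print(build_prompt("AGI"))
```

Swapping the topic is the only change needed, which mirrors the "customize this one word" point made in the video.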
15:00 - 18:00: Anthropic's Model Context Protocol (MCP) The chapter gives a refresher on MCP, the Model Context Protocol: a standardized, open way to plug external services and functionalities into LLMs, similar in spirit to the ChatGPT plugins of 2023 but usable with any chatbot. It covers the growing list of roughly 3,000 MCP servers, a freely available intermediate-level course on MCP, and recent Claude Code updates including web search and a subscription that removes usage fees.
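Under the hood, MCP is built on JSON-RPC 2.0, which is what makes it chatbot- and model-agnostic. A minimal sketch of what a tool invocation looks like at the wire level: the `tools/call` method name comes from the MCP specification, while the tool name and its arguments here are hypothetical examples:

```python
import json

# Hypothetical MCP-style "tools/call" request: a client asks a server
# to run a "download_video" tool with a URL argument. "tools/call" is
# a real MCP method name; the tool and URL are made-up examples.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "download_video",
        "arguments": {"url": "https://example.com/some-video"},
    },
}

# The wire format is plain JSON, so any LLM client that speaks the
# protocol can talk to any server that implements it.
wire = json.dumps(request)
decoded = json.loads(wire)
print(decoded["method"])          # tools/call
print(decoded["params"]["name"])  # download_video
```

A real client also performs an initialization handshake and a `tools/list` call first; this sketch only shows the shape of a single request.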
18:00 - 23:00: Quickfire Segment The quickfire segment covers Claude Code's new ability to interrupt and redirect the agent while it works through its to-do list, and Mem0, which open sourced a memory layer packaged as an MCP server: 100% local, persistent, and portable, so memories saved with one agent can be passed to others. The takeaway is that memory is becoming a standard modality across agentic tools, and it is worth crafting accurate memories and custom instructions rather than relying on automatically collected scraps.
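One story in this quickfire segment is Mem0's open-sourced memory layer, whose core idea is memory that persists locally and transfers between agents. A toy sketch of that idea follows; this is not the actual Mem0 API, and the class and file format are invented purely for illustration:

```python
import json
import tempfile
from pathlib import Path

class MemoryStore:
    """Toy persistent memory store. Facts saved by one agent become
    visible to any other agent that opens the same file. This only
    illustrates the portable-memory idea, not the real Mem0 API."""

    def __init__(self, path):
        self.path = Path(path)
        if self.path.exists():
            self.memories = json.loads(self.path.read_text())
        else:
            self.memories = []

    def add(self, agent, fact):
        self.memories.append({"agent": agent, "fact": fact})
        self.path.write_text(json.dumps(self.memories))

    def recall(self):
        # Memories are shared regardless of which agent wrote them.
        return [m["fact"] for m in self.memories]

# One "agent" writes a memory; a second, separately constructed
# agent reads it back from the same file.
path = Path(tempfile.mkdtemp()) / "memories.json"
store = MemoryStore(path)
store.add("research-agent", "User prefers markdown exports")
other = MemoryStore(path)
print(other.recall())  # ['User prefers markdown exports']
```

The point of the file-backed design is the portability the video emphasizes: the memory lives outside any single chatbot, so any agent pointed at the store can pick up where another left off.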
Here's What GPT-5 Might Look Like & More AI Use Cases Transcription
00:00 - 00:30 So, as per usual, we have a bunch of Generative AI releases to talk about this week. But there's a clear theme amongst everything that has been happening this week, and that is not just the fact that we've seen before that these apps are converging. They're converging towards a specific goal. This goal of a single AI assistant helping you. You could also refer to it as the ultimate agent, but truly, I think the correct word that most people would use to refer to this idea is GPT-5. And no, OpenAI did not give us a concrete preview of that system, but most
00:30 - 01:00 releases in this week are pointing towards this common direction of one AI system that has your context and has the computing power to get things done for you. So, if you were wondering what this next-gen system looks like, I think this week's releases have shown it really well. We're going to be looking at it from different angles. For example, GenSpark releasing AI Sheets, an ability for the agents to work with spreadsheets, and also they added capabilities to download literally any video on the internet really simply. And also we have Notion coming out with a big Notion AI update which essentially centralizes a lot of
01:00 - 01:30 these AI functionalities into this everything app that Notion is trying to be. And then there's a bunch more like a new model inside of ChatGPT and a whole lot more short stories in this week's episode of AI News You Can Use, the show that pulls together all the AI releases from this week, filtering for the ones that you can actually put to work today, and then me showing you the ones that matter. And that has been long enough of our intro. Let's get straight into the first news story, which is an update to the GenSpark suite of agents. We talked about this before and they're really trying to do a lot with this single product and honestly speaking I'm
01:30 - 02:00 featuring this here because I feel like it represents a whole category of products. Now in the quickfire segment later I'll tell you about a competitor, Manus AI, now being free to try. Technically there's also OpenAI's Operator that competes with this, but it's really more limited. This is trying to merge all these different modalities, all these different functions into one system. And this is what it looks like in practice. You have the super agent here that can command all these different things. And then you have these various modalities if you want to focus on a specific one. This week they added AI Sheets. And there's also this
02:00 - 02:30 AI Drive where you can go in and download anything. But really, they're trying to build the type of product that I believe is going to be very similar to what we're going to get from GPT-5 eventually, because one thing they clearly said is that they're going to get rid of all of this model picking in ChatGPT. It's just going to be one system, GPT-5, with all of this tooling, including the ability to deep research, including all the tools like creating images. Plus, it will hopefully have a better version of Operator built in, which if you're not familiar is the browser agent where you tell it to do
02:30 - 03:00 specific things. It clicks around the internet. It's a little simplistic now. I wouldn't recommend using it at this point in time to be honest, but they're not even pushing new releases to this, whereas GenSpark is, and that's why we're looking at this now. So, first up, I want to test this ability to download anything cuz this is a pretty controversial one. We've had these websites that kind of download YouTube videos for you since forever. They always get taken down, etc., cuz they're against the terms of service. But then new ones pop up. But let me just find a random video off of TikTok here. Literally just going to take, how about this little one with some AI generated product mockups. I'm just going to take
03:00 - 03:30 this link, post it into this download into AI Drive function. Yeah. Let's see how this works. Download starting. Downloading. And that's it. Successful. What? Okay. So, now it ripped this video straight off of TikTok into my agent, and now I can keep working with it. I mean, how about the launch video from GenSpark themselves? Okay, so it doesn't work with YouTube videos. Interesting. But the fact that it can download files now is not revolutionary by itself. It's the fact that you can now let it do things like, hey, giving it the link to a video and then asking it to create a sheet with similar videos to that along with all the views, likes, and then you can
03:30 - 04:00 really think of this as ChatGPT with all the tools that it has, like data analysis, canvas, etc., all working in unison to fulfill your request while also using a browser to do it. That's different from web browsing giving a few links of context and then ChatGPT working with it. Compared to something like o3, it goes further. It does more work. It has more tools. And that's why I'm saying I believe that is going to be the next evolution of this product here, because that's a more user-friendly form an AI assistant can take for you instead
04:00 - 04:30 of model switching and tool selection. It happens automatically. And this startup is just a bit faster at getting it to market than OpenAI is, because OpenAI can wait and then steamroll competitors like this most likely, and then they'll have to pivot. Point is, this is pretty impressive. So, I'm just sitting here and it's doing all the work that I would be doing manually if I wanted to create a TikTok content strategy around a specific niche. Okay, maybe not all the work, but you get the point. This is a serious tool and from my initial testing for this type of thing, it works better than OpenAI's Operator does right now. Honestly, at this point, I don't even consider it as a
04:30 - 05:00 tool choice. So, this has been working for the past 10 minutes and it's still going. Look, it found six other videos, pulled the likes data where it could, it couldn't get the views from these, and it just keeps going, keeps thinking, keeps adding more videos. And I'm going to end the segment here, but I think you could imagine what happens if you give an agent like this the ability to not just think for 10, 15, 20 minutes, but the ability to work for a day. Maybe you go to bed and let it run for 12 hours straight. And when you wake up, you have all of the research prepared for you. Something that would have taken you hours. And while this product is
05:00 - 05:30 certainly not perfect, I really think that this is a genuine glimpse into the future. And yeah, therefore, I wanted to share it with you. Onto the next one. Okay, quick note from editing Igor, I suppose, out in the real world. I recorded the segment for this video this morning, Friday morning. And Friday afternoon, OpenAI released the new Codex inside of ChatGPT with the Pro subscription. We're not going to cover that in this video. It's too fresh. But I wanted to point out that it aligns with all the themes we talk about in this video. And I'm going to create a separate one just on it cuz that's really worth it. All right, back to the video. Okay, so as you might know, I've
05:30 - 06:00 been running this YouTube channel for years. And if you're familiar with my story a bit, this is actually channel number five. I started the first one when I was 17, which is 14 years ago at this point. And I got to say, I did underestimate how complex content creation is in the tech space at this level. The filming, this part, while not always easy, is pretty straightforward. So much preparation goes into these vids that I just got to sit down and tell you about the points that matter in the current video. And hopefully that inspires you to do more with Generative AI. But honestly, most of my time is
06:00 - 06:30 spent in the preparation stages of the video. Researching topics, testing tools, evaluating those tests, putting together scripts, deciding on the packaging, recording, editing, reviewing, uploading, and then evaluating data. That's a lot of steps. And if you don't have an efficient system in place to manage all of this, especially as more people get involved, your chances of success are slim, because you might feel like you can hold all the context in your head, but it quickly becomes overwhelming, especially when working on multiple videos at a time. So, if any of that resonated and you're a creator like me, you really need to check out Spotter Studio, the sponsor of
06:30 - 07:00 today's video. Spotter Studio is the all-in-one AI-powered ideation system built for pro YouTubers. It pulls real-time trend data to tell you what topics will resonate with your audience, gives you packaging ideas on demand, and then lets you turn those ideas into trackable, organized projects all in one workspace. And Spotter Studio does this through three main tools: Outliers, Brainstorm, and Idea Bank. It's actually quite simple. Outliers pulls together real-time YouTube analytics and highlights topics, titles, and thumbnails that are gaining popularity right now for your specific channel's
07:00 - 07:30 niche. Once you've found an outlier you like, you brainstorm off of that outlier. It shows you a list of potential title and thumbnail concepts, hooks, and supporting talking points that are personalized for your channel. And the third part is all about organizing your upcoming ideas. And because YouTube really is a marketplace for ideas that uses the video medium to communicate them, the third core tool is called Idea Bank. And this allows YouTubers to rapidly add, organize, and prioritize your video ideas. You can see which videos have the strongest titles, prioritize the videos with the higher
07:30 - 08:00 scores, and then you can open up the idea to further develop it, adding details, previewing what it would look like on YouTube, and using the story beats feature to further develop this idea and move it along the workflow. As you can see, all of this is just making it easier for you to actually make the video. And everything lives in one place, and nothing slips through the cracks. So, if you want non-stop inspiration, a data-backed content plan, and a workflow that actually keeps up with you, check out Spotter Studio today. You can click a link at the top of the description to get started with Spotter Studio right away. And if you sign up now, you'll get the limited time
08:00 - 08:30 offer of a full year of Spotter Studio for just $99. A big thank you to Spotter Studio for sponsoring this video. And now, let's get back to the next piece of AI news that you can use. Okay, next up, we have some new models inside of ChatGPT, which are interesting for a specific subset of people, and those are developers. If you go to the model switcher here under more models, you're going to see GPT-4.1 and 4.1 mini. Honestly, this is probably a reaction to the increased popularity of Gemini's 2.5 Pro model, especially amongst developers, but also a lot of common users I see switching to Gemini 2.5 Pro.
08:30 - 09:00 So, as a reaction to that, they now released this coding-focused model that previously was only accessible through the API, meaning you had to pay for usage, and they mostly thought of it as a model that you will only want for development and that you'll be using through the API anyway. Well, now they have it in here, further increasing the model choices. One thing that I guess I forgot to note is that 4.1 is not a thinking model. It's a GPT model that just gives you the answer right away. So, it answers right
09:00 - 09:30 away. It's also rather slow, but the quality of the code you get out of this is really high, as that's what the model has been trained to do well. Beyond that, we have one more ChatGPT upgrade which actually I find quite significant as a regular user of deep research. But if I pull up this deep research that I did on a medical condition that I was dealing with a while back. Well, there's this new subtle but very powerful button at the top of the deep research that allows you to download this as a PDF. And as some of you might know, you can always prompt ChatGPT to save something
09:30 - 10:00 as a PDF. But that PDF usually has a lot of mistakes in it. This one is kind of perfect. It includes inline references to the various articles. The formatting is perfect and yeah, I would say it's an equally good experience to read the entire article as it is in ChatGPT, with the difference that now this is transferable. This is a way better file format if you want to send this via email to somebody, and it's also a super convenient way to transfer this content to somewhere else, because sure, you could copy all of this, but that gets messy quickly. So if I'm doing a new chat I just add this PDF and again, we discussed this before, PDF is not the perfect file
10:00 - 10:30 format to attach. Preferably, you want something formatted as a markdown file or a JSON file or something else that is native to a computer. PDFs are formatted to be readable for humans. Nevertheless, this does work super well. I personally would also like a markdown download function. But this way, you can take the deep research data and then add it to new chats, add it to projects, add it to knowledge bases of chatbots, transfer it into a different application really simply. As I still see deep research as the most powerful feature in ChatGPT, I think this is a very welcome addition
10:30 - 11:00 that everybody using it should know about. Hey, if you're watching and enjoying this video, it would really help the channel if you also leave a like. It only takes a second and every time I see how much likes actually influence the video's ability to be distributed to more people, I'm surprised. So yeah, if you like the video, don't forget to express it. And now let's move on to the next story. Okay, next up we have another story that aligns with this theme of the next generation of AI models kind of happening outside of OpenAI right now. As I said, they'll probably catch up very soon here. I don't know the timeline of that, but I can tell you
11:00 - 11:30 that Notion is also going for this holy grail of the ultimate AI assistant. And they're doing that by extending their everything app with a bunch of new AI features. I think some of them are less exciting like the AI meeting notes that they added now. So, as in every other meeting application, you can now transcribe notes and then work with them. That's great. But really what I'm looking at here is this enterprise search. There's a bunch more here. Like they have their own version of deep research now. They have a model switcher now. And it's all priced under one subscription which looks like this. Right now you need this business
11:30 - 12:00 subscription per member to access all of this. But this enterprise search function, where it doesn't just connect to your Notion pages, but it also includes your Slack, GitHub, Google Drive, Jira files. I think this is really significant because this is in my eyes where the ball is going here: an AI assistant that you don't have to prompt manually for every single little piece of context that you might want to involve there, but one that already sees the files, one that already knows. I mean, heck, would you want a physical assistant that is a blank slate every day and you have to reteach him or her
12:00 - 12:30 how to do things? No. There's a lot of value in a trained employee who already has all the context. That's what this enterprise search is about. It's about giving it access to these apps and it's figuring out what is needed for the task at hand. Now sure, we see versions of that in apps like ChatGPT already. You can go in here and connect your Google Drive to this, but it's different because ChatGPT is not a knowledge base of files that it looks over, at least for now. This is what I think is going to happen with GPT-5. Whereas Notion is by default a knowledge base for all your files. I mean heck, I even use it as an operating
12:30 - 13:00 system for the entire company. Now you can extend that beyond the confines of Notion and use it all in unison with their AI, making them another competitor in this race for the ultimate AI app. And I think this overarching theme is something that everybody following this space should at least be aware of, because it's one thing to say, hey, these tools are getting smarter every day. And another one to at least have a working theory of what that looks like in practice over the next weeks and months, because then you can potentially start building those knowledge documents. Transforming your files into
13:00 - 13:30 markdown files, reorganizing your company departments for a world where those tools seamlessly integrate into your workflows rather than compete with them. So yeah, that's the Notion update, but as you might imagine, there's more. Okay, so for the next one, I want to show you a practical prompt that actually popped up inside of our community. Big shout out to Shay who's actually been with us since day one. And the prompt is one that you can use with ChatGPT image generation, which by the way is now also available through o3. As you can see, a lot of things are converging over the past weeks, especially this week. So in other words, if
13:30 - 14:00 you want to generate images on a free plan you can just use 4o. But if you want to use all the other things, like almost every other tool in ChatGPT, you can actually do this directly inside of o3. So I'm just going to demo it in here. And this prompt produces these very interesting images, because you know how they say an image can say more than a thousand words. Well, this is sort of four images. And what it allows you to do versus just having one image is actually tell a story. You could also use these as infographics as they hold so much information, and they could obviously be used for all kinds of purposes, and you can fully customize this prompt for
14:00 - 14:30 yourself with 4o or o3 and make images like this yourself. I particularly like this use of it explaining a concept in a very, very approachable manner. Like literally everybody ranging from a 5-year-old that can read all the way to a senior would resonate with this type of imagery and text. And this is what the full prompt looks like. I'll put it in the description below. Again, thank you Shay for sharing this and giving me the permission to add this to the YouTube video here. So the basic instructions are: please use this two times two visual panel prompt template for, and then you got to add your topic. The rest of it is
14:30 - 15:00 set up to be an educational panel set with all the specifications and a base structure for the different panels. So the first one is talking about what's happening, then why it matters, what's unclear, and what you can do. Obviously, you could change the theme of this and you could also change it from educational panel, but this really works well. And all you need to do is change the topic here in the beginning. I'm just going to go ahead and say, I don't know, AGI. That's the first thing that comes to mind. And I'm going to do this with o3 just cuz I can. And in this case, o3 actually reasons over this, but doesn't create the image. And you know
15:00 - 15:30 what? We're actually going to keep this in the video. I want to keep it real. So, looks like for o3, we need to add "generate an image" to the prompt, which we'll do for the one in the description. And there you go. It created a little 2x2 educational image about AGI. Yeah, wonderful. Just as expected: opportunities, challenges, and the final one focusing on what you can do. And all you need to do to customize this for yourself is customize this one word. Pretty powerful stuff, if I might say so myself. All right, so for our next segment, I want to talk about Anthropic's MCP that has been becoming increasingly
15:30 - 16:00 popular. We covered it many times on the show, but for anybody who might be new here or who needs a refresher, you can think of MCP, aka the Model Context Protocol, as a standardized way to plug various functionalities into LLMs. It's very similar to the ChatGPT plugins that came out in 2023, if you remember those. Back then, ChatGPT added this functionality where you could essentially add Expedia to your ChatGPT experience and then it could talk to the site. Now that interface turned out to be, well, completely misguided to say the least, because they ended up removing it again.
16:00 - 16:30 Nobody was really using it at all. But MCP does a similar thing on a completely open protocol that works with any chatbot, any LLM, and allows you to plug all of these various services into any, let's say, chatbot. And now this week I want to round out some of the news around this. Not just that we're getting interesting servers added by the day to the list of, I think, 3,000 servers at this point that you can kind of use and easily plug into, well, for example Anthropic's Claude desktop app, or if you're coding with Cursor, MCPs are the way that people are extending their
16:30 - 17:00 experience there. And there's a few pieces of news this week. The first one is there's actually a brand new course that is currently freely available. And as the mission of this channel is to help people adapt to this new technological age, courses like this are essential for that. Now, this is not something you will take if you're completely new. They even say it's an intermediate course. And if you want to learn more about MCP, the first videos make sense, but beyond that, it does get a bit technical and it sort of assumes that you're going to be developing applications and working with GitHub repos, etc. But beyond that, some of you might be already working with some of
17:00 - 17:30 these vibe coding workflows, as you'll see in a video coming up soon on the channel. And along with that, I want to cover some updates that also recently happened to Claude Code. They actually changed so much in it ever since the release. Also, it should be noted there's a competitor from OpenAI called Codex CLI, but I myself have played with both. And maybe it's the power of habit because I started using this on the day of its release. But I still like Claude, and also, actually thinking about it, I think it's more about the fact that Claude 3.7 just kind of does things and figures everything out. And the OpenAI version of it, it's more
17:30 - 18:00 targeted. It gives you these follow-up questions to really nail down what you want. And I kind of like this wildcard nature of Claude Code. It's just a really fun experience to hit a button and then have it think for 3 minutes and come up with features and extensions that you didn't even ask for but that do make sense most of the time. Anyway, point being, there's new features there that are absolutely incredible. They added things like web search in Claude Code so you can access the web with it, and they even launched a subscription where you don't have to pay for usage. Usually a few hours of playing around costs anywhere from $5 to $20. But most of
18:00 - 18:30 all, they added a feature this week, which I honestly did not expect. And that's the feature that you can actually interrupt it in the middle of its coding frenzy and redirect it. Cuz what it does is it comes up with these to-do lists of things that it needs to do to, like, fulfill the request that you prompted it for. And then up until now, you were kind of forced to sit there and watch all the madness unfold in front of you. Now, you can actually interrupt it and readjust the plan as it goes. So, as you can see here from this quick little demo video, look at that. It creates to-dos. It updates them. And as soon as it's performing the to-dos, you can
18:30 - 19:00 actually hop in and say, "Actually make it green." And it will readjust the plan. And instead of creating a blue bar, it will create a green bar. Very simple example here, but often these to-do lists are like six or seven items long. And you're sitting there for three, sometimes even up to 4 minutes as Claude Code does its thing. Now you can interrupt it. I think this is a big development because using this and other agent coding tools, it always feels like you're not fully in control. It's kind of doing its thing. And now we have a bit more control. I like that trend and I think this one really delivers that. So, this is from a company called Mem0
19:00 - 19:30 that I've had my eyes on for a while. They basically open sourced a protocol that allows you to add short-term memory to chatbots and agents. Now, they put this inside of an MCP server, making it simpler than ever before to add memories, the feature that many of you know and love within ChatGPT, to any agent. It's 100% local, gives you persistent and portable memory, meaning that if you're using different apps and different agents, this memory can transfer between them. And all of that is standardized because it's MCP. So, if multiple agents plug
19:30 - 20:00 into MCP, which most of them do, you can save all the short-term memories in one agent and then easily pass them along to others. And let me tell you, this is a massive step when it comes to using some of these agentic systems. Up until now, they really worked in silos, and bringing the context over from one to another was a rather tedious task. In most cases, it wasn't even worth it. Now, with things like this emerging, all of these tools start melting into one ecosystem. It's not yet automated, but as you can see, we're getting there, and the companies are pushing things in this direction. Big
20:00 - 20:30 development here, whose consequences we will feel over the coming weeks and months. But yeah, if you're using or building any agentic system, have a look at this. I don't know if it's going to be exactly this form or something else, but the fact that ChatGPT added memories, it went giga viral, and people were sharing memory-based prompts all across the world is proof that this memory modality doesn't just sound good in theory; it's something people actually want. And that means we're going to see memories all over the place
20:30 - 21:00 soon, whether it's this implementation or another one. And what that essentially means for you, as a consumer of these apps or just someone interested in this, is that you should probably spend some time crafting memories that matter: custom instructions that are really accurate to what you do. Don't just rely on ChatGPT collecting scraps from your chats; actually put some thought into how you would describe yourself to a person you've never met before. If they're supposed to help you with a specific set of tasks, like your work tasks, what would you tell them about you, your history, and your
21:00 - 21:30 preferences? Even if you just draft it out in a few sentences, you're going to be further along than most people. And then over time, you can develop your notes, make them more detailed, and have an answer to the question that many of these programs are going to pose you: "What should I know about you? What are the custom instructions and memories you want to give me so I can help you in the most efficient manner possible?" Okay, enough said. Let's move on. Now, let's do the quickfire segment that we introduced a few weeks ago. Not only do I enjoy making it, but you guys also seem to love this part. So here we go. Here are all the stories that are interesting,
21:30 - 22:00 but maybe don't need multiple minutes each. First of all, do you remember Manus, the AI agent that had its viral moment, but most people didn't get beyond the waitlist? Well, they actually removed the waitlist a while ago, and now they're also giving away 300 credits to anybody who wants to try the application. So, if you ever wanted to get your hands on this, there's no more waitlist and it's free to try. Right now, this is the main competitor to the story we talked about earlier, the GenSpark agent that can manipulate sheets. Next, we have a story coming out of TikTok. They're adding a brand new feature that turns images into videos. So, now
22:00 - 22:30 they're making the creation of TikToks even easier. All you need is an image, and AI does the rest. I don't know how popular this will be, but I wanted to show it off because it's a great example of how the AI tools we look at eventually get implemented into all of these consumer apps like TikTok. And then, in an interesting video, the US Senate was questioning some leaders in the AI space about ChatGPT use cases. Sort of interesting, although I don't think the entire thing is worth listening to for most people. Sam Altman shared that he finds ChatGPT particularly useful in raising his child.
22:30 - 23:00 And he pointed out that he's not sure how people were raising kids without the help of ChatGPT. I mean, I kind of like that point. Not a use case that I'd thought of, as I don't have any kids myself. Yeah, there you go: AI use cases being discussed in the US Senate. And then, for anybody interested in super fast and super cheap transcription, there's a new Hugging Face Space called Whisper Large V3 Turbo, which transcribes at 100 times real-time speed, meaning around 2 minutes of audio is transcribed in around a second, and it only costs 80 cents per hour if
23:00 - 23:30 you use the suggested GPUs. If you need to transcribe a lot of video or audio files, this is probably worth bookmarking. And that's really everything I have for today. I hope you found something that will be useful to you. If you enjoy the show, don't forget to subscribe to the channel. I do this every single Friday.
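As a footnote, the Whisper numbers quoted above are easy to sanity-check. This is a minimal back-of-the-envelope sketch, assuming "100x real-time" means 100 seconds of audio processed per second of wall-clock time, and that the 80-cent rate is billed per hour of audio (both are readings of the claims in the video, not verified pricing details):

```python
SPEEDUP = 100               # claimed real-time factor (assumption)
COST_PER_AUDIO_HOUR = 0.80  # USD per hour of audio on the suggested GPUs (assumption)

def transcribe_seconds(audio_seconds: float) -> float:
    """Wall-clock seconds needed to transcribe the given amount of audio."""
    return audio_seconds / SPEEDUP

def transcribe_cost(audio_hours: float) -> float:
    """Estimated cost in USD for the given hours of audio."""
    return audio_hours * COST_PER_AUDIO_HOUR

# Two minutes of audio finishes in roughly a second, as stated in the video:
print(transcribe_seconds(120))  # 1.2

# A 10-hour backlog of recordings would cost about eight dollars:
print(transcribe_cost(10))
```

At that rate, even a large archive of videos is cheap to transcribe, which is why it's worth bookmarking.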