Build Anything with Llama 4, Here's How

Estimated read time: 1:20


    Summary

    David Ondrej introduces Llama 4, an open-source AI model from Meta AI that is integrated across platforms such as Facebook, WhatsApp, and Instagram. It comes in three sizes: Behemoth, with roughly 2 trillion parameters (not yet available); Maverick, with 400 billion parameters and native multimodal capabilities; and Scout, with a 10-million-token context window. Running Llama 4 locally is impractical for most users because of its size, but Vectal.ai offers free access to the models. Ondrej demonstrates building a productivity-enhancing AI agent with Llama 4 in a beginner-friendly walkthrough, showcasing its potential in real-world applications.

      Highlights

      • Llama 4 by Meta AI is a groundbreaking open-source model with immense potential and integration capabilities across major platforms. 🎉
      • With a 10-million-token context window on the Scout model, Llama 4 smashes existing context window benchmarks. 🌟
      • Maverick, the medium-sized model with 400 billion parameters, is natively multimodal, blending various inputs efficiently. 🎨
      • Scout, although the smallest Llama 4 model, offers the largest context window of any current AI model. 📚
      • Ondrej showcases building a productivity AI agent using Llama 4, blazing through in just 30 minutes! ⏱️

      Key Takeaways

      • Llama 4 is the newest open-source AI model by Meta, and it's super powerful! 🚀
      • It features a massive 10-million-token context window, which is huge for AI models! 😲
      • Llama 4 is available on major platforms like Facebook, WhatsApp, Messenger, and Instagram! 📱
      • Three sizes of Llama 4 exist: Behemoth, Maverick, and Scout, each with unique strengths. 💪
      • Vectal.ai offers free access to Llama 4 to explore its capabilities effortlessly. 💡
      • Llama 4's uniqueness lies in its 'mixture of experts' architecture, enhancing its efficiency. 🤓
      • Despite its excellence, Llama 4's massive size makes local operation challenging. 🖥️
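The 'mixture of experts' takeaway above can be sketched in a few lines of Python. This is a deliberately toy illustration: Llama 4's real router is a learned gating network inside each transformer layer choosing among many experts (128 in Maverick), not keyword rules, and the expert names here are invented for the example.

```python
# Toy mixture-of-experts router (illustrative only -- real MoE models
# use a learned gating network, not hand-written rules like these).
EXPERTS = {
    "numbers": lambda prompt: "arithmetic expert answers",
    "language": lambda prompt: "language expert answers",
}

def route(prompt: str) -> str:
    """Send the prompt to exactly one expert and leave the rest idle,
    which is where the compute and inference-cost savings come from."""
    expert = "numbers" if any(c.isdigit() for c in prompt) else "language"
    return f"[{expert}] " + EXPERTS[expert](prompt)

print(route("what is 1 plus 1"))  # -> "[numbers] arithmetic expert answers"
```

Only the selected expert's function runs; in a real model this means only a small fraction of the total parameters are active per token.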

      Overview

      David Ondrej introduces viewers to Llama 4, the latest open-source AI model launched by Meta AI. Llama 4 isn't just any AI model; its Scout variant boasts a remarkable 10 million token context window, setting a new benchmark in AI capabilities. Ondrej explains how Meta AI has strategically integrated Llama 4 into major platforms like Facebook, WhatsApp, and Instagram, making it accessible to around 4 billion monthly active users, far more than any existing model reaches.

        The video delves into the details of Llama 4's three model sizes: Behemoth, Maverick, and Scout. Behemoth, not yet available, weighs in at roughly 2 trillion parameters. Maverick, with 400 billion parameters, is the largest currently available model of the trio and is natively multimodal. Scout, despite being the smallest, offers the largest context length at 10 million tokens, making it a fit for extensive prompts and tasks.

          Ondrej demonstrates building a productivity-enhancing AI application with Llama 4, guiding even beginners through the process. He uses Vectal.ai to sidestep the impracticality of running the models locally, giving viewers free access to them. With his instructions, creating AI agents becomes straightforward, making a case for Llama 4 as a go-to model for AI development.

            Chapters

            • 00:00 - 01:30: Introduction to Llama 4 The chapter introduces Llama 4, the latest AI model from Meta AI, highlighting its status as the most powerful open-source model. The speaker, David Andre, promises to show how to utilize Llama 4, regardless of the audience's beginner status. Key features of Llama 4 include three different sizes and a substantial 10 million token context window, making it highly versatile and ubiquitous across Meta's platforms such as Facebook, WhatsApp, Messenger, and Instagram.
            • 01:30 - 03:00: Llama 4 Model Sizes and Features In 2025, Meta's platforms average around 4 billion monthly active users, positioning Meta AI to become the most popular AI chatbot, surpassing the likes of ChatGPT. The new Llama 4 models set impressive benchmarks, particularly on LM Arena ELO relative to cost: the higher the score, the better, and further to the left indicates a cheaper model. On that chart, Llama 4 Maverick outperforms all existing models. Llama 4 itself comes in three different sizes.
            • 03:00 - 04:30: Running Llama 4 Locally This chapter introduces three versions of Llama 4: Behemoth, Maverick, and Scout. Behemoth is the largest with two trillion parameters but is not currently available. Maverick, which is available, has 400 billion parameters and is natively multimodal with a context length of 1 million, similar to Gemini models. Interestingly, Scout, despite being the smallest, has the largest context length of 10 million tokens, enough to accommodate approximately 100 short books.
            • 04:30 - 06:00: Accessing Llama 4 via Vectal The chapter discusses accessing Llama 4 via Vectal, focusing on its running models and building applications and AI agents. A key attribute of Llama 4 is its 'mixture of experts' architecture, which allows specialized experts to manage different subtasks. This architecture is visually explained by an example prompt 'what is one plus 1', showing how various experts like one on punctuation collaborate to process the input effectively.
            • 06:00 - 07:30: Choosing a Project Idea using Vectal The chapter discusses the process of choosing a project idea using Vectal, while explaining the efficiency of Llama 4's design. It highlights how the model activates specific 'experts' for different inputs, such as numbers or verbs, to optimize resource use and inference costs. Although Llama 4 is open-source, cost-efficient, and highly capable, the chapter acknowledges that running Llama 4 models locally is impractical due to their resource demands.
            • 07:30 - 10:00: Implementing Screenshot Feature The chapter titled 'Implementing Screenshot Feature' discusses the accessibility and availability of Meta AI services, noting that they are not available in many countries, including where the narrator resides. To overcome this limitation, the narrator has incorporated Llama 4 into Vectal, a platform where users can access the AI for free. The narrator encourages viewers to sign up on vectal.ai to start using the service without any costs and guides them on setting up an account to begin this process.
            • 10:00 - 14:30: Setting up Llama 4 with Open Router The chapter guides users on setting up Llama 4 with OpenRouter. Free users can choose between models like Llama 4 Scout and DeepSeek R1, while Vectal Pro subscribers have unlimited access to Llama 4 Maverick and other premium models. The platform, vectal.ai, offers an AI-powered productivity app which assists users in task completion. Interested users can start using the service for free by signing up. The chapter also invites feedback for custom AI agent or app development suggestions from viewers.
            • 14:30 - 18:00: Debugging and Permissions This chapter focuses on selecting a build idea that highlights the capabilities of the new Llama 4 model, particularly its multimodal features. The speaker asks Llama 4 Maverick, a fast multimodal conversational AI, for suggestions, which aligns with Meta AI's emphasis on Llama 4's capabilities in this area.
            • 18:00 - 23:00: Integrating and Testing Llama 4 Analysis The chapter discusses the integration and testing of Llama 4 models, with a focus on leveraging their native multimodal capabilities. The team considers building a feature within their platform, Vectal, to utilize these capabilities. They brainstorm several project ideas to showcase the multimodal powers of Llama 4, using an 'idea list agent' for this process. This agent is part of Vectal's system, created to facilitate rapid idea generation.
            • 23:00 - 25:00: Conclusion and Features of Vectal The chapter discusses the task management features of Vectal, a software tool. It highlights how Vectal organizes tasks and notes effectively. The notes section is designed for long-term memory needs, while the ideas section supports brainstorming and creation of new concepts. The system uses AI agents to generate multiple ideas from a single prompt, demonstrating its capability to facilitate creative processes. These ideas can then be easily converted into tasks or notes, showing the flexibility and usability of the software in managing and organizing work efficiently.
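The screenshot-capture step described in the chapters above can be sketched roughly as follows. This is a minimal reconstruction, not the video's actual code: it uses the third-party mss library (the library the video switches to after pyautogui hit macOS screen-recording permission issues), and names like SAVE_DIR and shot_path are our own. On macOS you still need to grant Screen Recording permission to the terminal or editor running it.

```python
# Sketch of the periodic screenshot loop: save numbered PNGs into a
# screenshots/ folder every few seconds.
import os
import time

try:
    import mss  # third-party: pip install mss
except ImportError:  # allow importing this sketch without mss installed
    mss = None

SAVE_DIR = "screenshots"
INTERVAL_SECONDS = 5

def shot_path(index: int, save_dir: str = SAVE_DIR) -> str:
    """Numbered file name for the index-th capture."""
    return os.path.join(save_dir, f"screenshot_{index}.png")

def capture_loop(max_shots: int = 5) -> None:
    """Capture the primary monitor every INTERVAL_SECONDS seconds."""
    if mss is None:
        raise RuntimeError("pip install mss first")
    os.makedirs(SAVE_DIR, exist_ok=True)
    with mss.mss() as sct:
        for i in range(1, max_shots + 1):
            path = shot_path(i)
            sct.shot(mon=1, output=path)  # mon=1 = first monitor
            print(f"Captured {path}")
            time.sleep(INTERVAL_SECONDS)
```

Calling `capture_loop(3)` would save screenshot_1.png through screenshot_3.png into screenshots/ at 5-second intervals.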

            Build Anything with Llama 4, Here's How Transcription

            • 00:00 - 00:30 my name is David Andre and here is how to build anything with Llama 4 so the future is here meta AI just dropped Llama 4 the most powerful open-source model in the world it comes in three different sizes and it features a shocking 10 million token context window so in this video I'll show you how to build anything with Llama 4 even if you are a complete beginner now Llama 4 will be absolutely everywhere because Meta is adding Llama to all of their platforms which means Facebook WhatsApp Messenger and Instagram and in total these
            • 00:30 - 01:00 platforms in 2025 average around 4 billion monthly active users which will instantly make it the most popular AI chatbot in the world far surpassing ChatGPT and when it comes to benchmarks these new Llama 4 models certainly do not disappoint on this chart you can see the LM Arena ELO compared to the cost of the model so the higher the better and the more to the left the cheaper and as you can see Llama 4 Maverick absolutely destroys all of the other models available now actually Llama 4 comes in three different model sizes
            • 01:00 - 01:30 Behemoth Maverick and Scout Llama 4 Behemoth is the biggest one it has two trillion parameters however it is currently not available Llama 4 Maverick on the other hand is available and it has 400 billion parameters and it's natively multimodal with a 1 million context length matching the Gemini models now here's where it gets interesting because Llama 4 Scout while being the smallest of these three models has the biggest context length with 10 million tokens you can fit like probably 100 short books into a single
            • 01:30 - 02:00 prompt so later in this video I'm going to show you how to run these two models and how to actually build apps and AI agents with them now a major reason why Llama 4 instantly became the best open source AI model in the world is mixture of experts this is an architecture that Llama is built upon where you have a lot of specialized experts that handle different subtasks and you can understand it by looking at the image on the right so here is an example of a prompt that a user might say what is 1 plus 1 this prompt goes into the model and as you can see here are four different experts one on punctuation one
            • 02:00 - 02:30 on verbs one on conjunctions and one on numbers since this is a question about numbers the number expert gets activated and provides the answer while the three other experts did not have to be activated saving tons of resources and inference costs now Llama 4 Maverick has 128 different experts but with each input only a few of them are activated this makes the model much faster and much more efficient now even though Llama 4 is amazing I mean it's open source one of the best AI models in the world super cost-efficient there's a huge problem running Llama 4 models locally
            • 02:30 - 03:00 is not viable for 99% of people since they are so big on top of that Meta AI the service where you should be able to chat with them is currently not available in most countries i mean even I couldn't use it while filming this video that's why I added Llama 4 into Vectal where you can use it completely for free so if you want to have free access to Llama 4 Scout go to vectal.ai and sign up with that being said let's get to building so first go to vectal.ai and then create an account as I said you can get started completely
            • 03:00 - 03:30 for free once you have an account go into the bottom left and select the model now free users can choose between Llama 4 Scout and DeepSeek R1 while Vectal Pro users get unlimited access to Llama 4 Maverick plus access to other premium models we have so if you want to have the world's most intelligent productivity app with built-in AI agents that help you complete your tasks go to vectal.ai and sign up you can get started completely for free that said let's choose the build idea for this video and by the way if you want me to build some specific AI agents or some specific apps just comment below and who
            • 03:30 - 04:00 knows maybe in the next video I'll choose whatever you suggested okay so I'm going to say help me choose a build idea that will showcase the powers of the new Llama 4 models suggest seven different options let's send that to Llama 4 Maverick as you can see it's very fast multimodal conversational agent AI powered research assistant translation localization tool so I think it really wants to go on the visual stuff on the multimodal stuff which makes sense because if you look at the Llama 4 article from Meta AI they heavily focus on the multimodal capabilities right as you can see right
            • 04:00 - 04:30 here again it's mentioned 10 different times so maybe it's really the move to build something where we utilize the native multimodal capabilities of these Llama 4 models all right so let's go back to vectal and I'm going to ask it let's brainstorm project ideas that really showcase the multimodal powers of the Llama 4 models create seven different and unique ideas and I'm going to send this and as you can see it's delegating to our idea list agent because inside of vectal we have the view for ideas which is basically a place where you can quickly brain dump your thoughts so the main thing is
            • 04:30 - 05:00 obviously your task list right these are your main tasks your main work but inside of vectal you also have notes which is you know something like you want to remember for a long time but ideas this is where you can easily brainstorm new stuff such as right now when we're brainstorming what we're going to build so as you can see just by me sending a prompt the chat agent delegated to the idea agent which created seven different ideas in here and then we can easily choose whether to delete it whether to convert it to note or a task so let's take a look at what ideas it came up with realtime code
            • 05:00 - 05:30 explainer AI travel guide cooking assistant okay so while these ideas aren't necessarily bad I would say they're pretty easy like they're pretty you know none of these are quite impressive they're quite predictable so I did some thinking and I have a better idea so what I'm going to do is I'm going to say archive all of these ideas and then I'm going to give it my idea which is a program that takes a screenshot of your screen every few seconds and then using the multimodal
            • 05:30 - 06:00 capabilities of Llama 4 will give you a productive critique or opinion on what you are doing right this will be obviously a bit harder to build than these but the result will be a lot more impressive a lot more useful i'm just going to clear the chat so I'm going to send this prompt and I'm going to switch to chat mode which by the way inside of vectal the chat and agent mode works the same as in cursor chat mode cannot make any changes right so when you're asking questions when you don't want your tasks to be changed or you know new ideas created stuff like that just use chat
            • 06:00 - 06:30 mode in agent mode obviously vectal can help you complete task create new tasks organize them and do all sorts of things all right let's see what the response is okay so vectal gives us a pretty solid outline how to do this in multiple steps and yeah so I guess what remains is creating a new project so now that we have the build idea chosen I can mark this task as completed and let's create a new project inside of cursor so let me open cursor okay open project i'm going to create a new empty folder boom open okay and then I'm going
            • 06:30 - 07:00 to follow the instructions that vectal gives us or actually I can even ask it for more granular instructions give me more granular and step-by-step instructions how to set this Python program up inside of cursor answer in short boom okay so as you can see vectal decided to once again call perplexity pro to do a quick web search and so first we need to install cursor so obviously if you don't have that install cursor first then prerequisites you need
            • 07:00 - 07:30 python uh I guess we don't need git for a small project like this but for a bigger project definitely and then we need to set up a python so okay actually we can say list out all condas and I'm going to say activate the test env so as you can see I'm using uh Gemini 2.5 Pro here and let me check if cursor has llama 4 available okay okay so if you go to cursor settings cursor settings and click on models you can see all of the models that are available
            • 07:30 - 08:00 inside of cursor seems like they do not have llama 4 so I'm glad that in vectal I've been able to add it faster than you know much bigger AI companies such as cursor anyways the cursor agent has listed out all my conda environments and activated one again I haven't written a single line of code i just spoke to it in plain English and it did what I wanted it to do so okay that's great now I'm going to say create a new main.py file boom and just like that it can
            • 08:00 - 08:30 create a new file like we don't even need to create files in 2025 the AI agents can literally do it all all right so let's go back to vectal and let's follow the steps it gave us so first we need to set up the screenshot capture actually I'm going to describe the idea so build idea okay so I just added a brief description of the build idea at the top of our file alternatively what I can do is create a new file called .cursorrules and I can add the build idea in here as well that way with every single prompt I send into cursor it is aware of
            • 08:30 - 09:00 what we are building so it's definitely a good idea to have a .cursorrules file and to extend it and improve it over time okay so let's go back to vectal and I'm going to copy the first step which is setting up the screenshot capture and I'm going to paste it into the agent and I'm going to say help me execute this first step in main i'm going to tag the file do not do anything else this is a good prompt to ensure that the AI agents do not go off the rails especially if you're using Claude 3.7 right now I'm using Gemini 2.5 Pro which currently is
            • 09:00 - 09:30 the best coding model in the world so I definitely recommend using Gemini 2.5 Pro and of course we do have it available inside of Vectal as well if you are a Vectal Pro user okay so this this is the code super simple let's accept that and as you can see we have an error here because we don't have this package installed so I'm just going to click on fix in chat and Gemini 2.5 Pro should be able to easily handle this pip install pyautogui so this is what will actually let us take the
            • 09:30 - 10:00 screenshots of our computer and also what's important is that in the bottom right corner I actually have to select this test conda environment boom and there it is as you can see it's no longer highlighted so now this should be able to work let me just ask a question does the code work is it ready to be tested or is something else missing just answer the code in main is syntactically correct running it okay so let's see let's run it and let's see if it can
            • 10:00 - 10:30 take screenshots every 5 seconds then okay so there it is it needs more permissions cursor would like to record this computer screen and audio so open system settings i need to give it this permission okay seems like we need to restart cursor it's not an issue so there it is okay so let's rerun the program capture screenshot one okay where are these being saved though okay um let's stop the let's kill terminal i'm going to say where are these screenshots being saved i'd like to create a new folder in this directory
            • 10:30 - 11:00 where main.py is located called screenshots and make sure they are saved in there update main.py accordingly do not change anything else so you can actually see how I'm using cursor how I'm using these AI agents to build software i mean this process is what I follow when building my own AI startup Vectal.ai which currently does over 5 figures a month in monthly recurring revenue so this is a successful AI
            • 11:00 - 11:30 startup obviously it's not like a billion dollar company yet but I built it myself for the first three and a half months with no developers with the help of AI tools such as Cursor such as Claude and now it's never been easier i mean Llama 4 wasn't even out when I started building and Llama 4 Maverick is much better than Claude 3.5 which is you know what I was using to build Vectal anyways I've requested a change so that the screenshots get saved into the folder of this directory not some random folder so I'm just going to accept that
            • 11:30 - 12:00 and let's run it again okay new folder has been created and in here we should start seeing screenshots appear as they're being taken so I see we can see the print messages but I do not see any screenshots being added to the folder i'm going to stop this i'm going to say the folder was successfully created however even though I see the print statements in our terminal and I'm going to attach exactly this the folder still
            • 12:00 - 12:30 remains completely empty with no files inside of it which means something is very wrong okay so I think Gemini 2.5 Pro is probably good enough to debug this let's see this behavior is common in Mac OS when application so seems like it still doesn't have enough permissions okay so actually you know what let me ask Vectal because Vectal has built-in Perplexity Pro web search which should be easily able to resolve this so I'm going to say here's my code so far boom
            • 12:30 - 13:00 let's copy the entire file please browse the web to figure out what I have to do allow pyautogui to take and save screenshots on my MacBook answer in short boom to enable pyautogui screenshots grant screen recording permissions to terminal or Python via system preferences screen recording okay add terminal or Python and restart okay so
            • 13:00 - 13:30 screen and system audio recording and we need to add terminal and Python in here okay so there is terminal okay quit and reopen so the terminal should now have the permissions let's go to cursor and let's see if this works we still might need to run this in the terminal though okay let's see what Gemini 2.5 Pro says i described the issue and this is how you fix problems you know vibe debugging is obviously harder than vibe coding but let's see if permissions are correct okay it's trying to use a different library so MSS let's try that
            • 13:30 - 14:00 let's run it oh there it is screenshot one okay screenshot two so let's uh switch to the browser let's be on open router for a few seconds you know that way the screenshots are not the same that way they're not all from cursor okay let's go to vectal for a few seconds and when we return we should have screenshots from these different apps so let's uh let's pause this i'm going to kill this process okay all right so
            • 14:00 - 14:30 there it is screenshot number five was from vectal open router cursor amazing so now this works this works so we can continue with our plan really good so all it took really is describing the issue and yeah Gemini 2.5 Pro was able to solve it okay so now that we can successfully take the screenshots the next step is to have Llama 4 analyze each of these screenshots right so what we need to do is then we need to go back up and we need to ask how to set this up
            • 14:30 - 15:00 in open router i don't know why it's suggesting OpenAI but I'm just going to copy this part and I'm going to ask vectal how do I set this up in Open Router give me step-by-step instructions okay to set up Llama 4 integration with open router I'll guide you through the steps need to confirm if you have open router account so let's uh go to open router if you don't have an account create one super
            • 15:00 - 15:30 easy we will need to get an API key i already know that okay to set up Llama 4 boom boom boom acquire API key integrate so let's do that and actually you know what i'm going to copy I'm going to copy this into cursor i'm going to say update main.py accordingly goal we want to have Llama 4 analyze each of the screenshots taken by
            • 15:30 - 16:00 our program and give a brief critique of whatever we're doing helping us be more productive okay so I gave it the goal that way it knows what we're trying to accomplish and while this is running we can go back to open router and generate a new API key so again openrouter.ai create an account super simple go to the top right click on keys and click on create key i'm going to name it llama
            • 16:00 - 16:30 4 test create and copy this do not share API keys with anybody treat them as passwords obviously I will delete mine before uploading this video so let's go back to cursor here while the changes are being applied okay so only 32 lines of code so far it's pretty pretty clean i'm just going to accept the changes so far and we need to put our Okay I don't know why this is in this stuck state actually I'm
            • 16:30 - 17:00 going to restore the checkpoint i'm going to do Claude 3.7 Sonnet Max i don't know why Gemini 2.5 Pro does this sometimes it like gets stuck in this half applied state and over complicates things so currently my setup is switching between Claude 3.7 and Gemini 2.5 Pro Max inside of Cursor once they add Llama 4 Maverick obviously I'll integrate that as well so I still think this is too complex anyways I'm going to reject these changes i'm going to create a new chat at the top i'm going to save
            • 17:00 - 17:30 open router API key i'm going to paste this in here obviously ideally you would have a .env file like that uh it doesn't matter for this tutorial you can just save it here but it's a better practice to create a you know .env anyways what I want to do is I want to go back to vectal i'm going to clear the chat i'm going to say look up the official open router documentation for making an API call and how to pass
            • 17:30 - 18:00 images in the input parameters for the AI model i want to use Llama 4 and I want to give it screenshots to analyze give me step-by-step instructions how to do this and actually I'm going to use ultra search which is a feature inside of vectal that is the next level of deep research basically and it's powered by Perplexity's deep research which is already like really really good but on top of that it takes
            • 18:00 - 18:30 into account all of your tasks all of your projects all of your ideas all of your user context everything you have inside of vectal to make the search results relevant to you so that is the main difference and that is why for me this is my favorite deep research feature by far not only because you know I've built it so I know how it works but because it's built on top of perplexity deep research and obviously Perplexity knows what they're doing in terms of web search and it takes into account all of the context relevant to me so always
            • 18:30 - 19:00 gives me the most relevant just the things I'm looking for exactly it's like reading your mind almost now while this is running we can actually complete this task creating a new project because we did that so the next step is connecting open router and as you can see the ultra search is finished and it's very efficient unlike openai deep research you know in ChatGPT which can take 10 to 15 minutes I mean who has that much time this took like 30 seconds and it checks 50 plus sources every time and is super accurate so what I'm going to do then I'm going to pass the ultra search
            • 19:00 - 19:30 results into cursor so this is why like having multiple AI tools is essential you can achieve so much more okay so there it is the whole ultra search results as you can see It's quite extensive and first it does some reasoning which we do not need to copy so I'm just going to copy the output the results and I'm going to feed this into cursor which doesn't have an ultra search feature and I'm going to say update main.py to implement open router in the simplest way possible the fewer lines of code changed the better do not do
            • 19:30 - 20:00 anything else when using Claude 3.7 saying do not do anything else at the end is very good practice because this model is um it's like hyperactive you know it likes to do a lot of changes so now I've given it all this context of the ultra search and it thought for 5 seconds about how to implement this and now it's going to suggest the changes for our code as you can see it suggested adding 49 new lines of code and the
            • 20:00 - 20:30 model looks correct Llama 4 Maverick we can actually double check that by going to open router in here and among models Llama 4 Maverick so they have free versions but these ones usually have very low rate limits and very bad uptimes so I do not recommend using the free versions i recommend using the paid ones you can literally charge up like $5 and that's going to be enough for most of you for weeks i mean unless you have tons of AI agents running you should be good with that but this is the name of the model we just need to make sure this matches perfectly here and it
            • 20:30 - 21:00 does so that's good i mean the ultra search the ultra search is good so that's that i'm going to accept this and let's just try it i'm let's switch back to Gemini 2.5 Pro and say is main.py ready to be tested or is something else missing and just answer no changes just answer uh yes okay so I'm going to delete these screenshots boom empty our screenshots folder and let's run it again we need to
            • 21:00 - 21:30 see the analysis somewhere so we capture the screenshot and then um boom there it is okay let's see let's see the screenshot shows a coding environment specifically cursor with the Python script main.py open the script is designed to take a screenshot of the model okay screenshot two we need to change the screen we need to change the screen okay uh let's see we're we're going to be in vectal right here hopefully it gives us some feedback on vectal let's give it a few seconds because I think right now it runs every 10 seconds if I remember correctly okay
            • 21:30 - 22:00 let's switch back let's see there it is last screenshot is a desktop screen showing a browser with multiple tabs open alongside code editor blah blah blah blah blah advice uh um wait was that screenshot four no this this was screenshot three okay let's see what happened okay screenshot three is inside of vectal as you can see there it is there is screenshot three so let's look at the analysis the browser tab on the left appears to be presentation or llama 4 on the right appears to be application related to vectal the code in the editors implementing context management and error handling for the llama 4 model
            • 22:00 - 22:30 okay really good let's let's test one more site let's maybe go to open router over here let's see what it suggests about open router and actually the prompting we can change so that it's more concise but wow this is good this is good let's look at screenshot eight okay screenshot eight open router screenshot shows a web page for Llama 4 Maverick on the open router platform okay key pricing actionable advice this is good this is good okay so this works so let's kill this terminal
            • 22:30 - 23:00 and yeah we've basically built a productivity assistant that can take a screenshot every 5 seconds you can obviously set this to whatever you want and using Llama 4 Maverick you know which is currently the best available one it's the middle tier but the Behemoth is not available so using Llama 4 Maverick which is one of the best AI models in the world it analyzes what you're doing and gives you actionable productivity advice and we've been able to build this just with the help of cursor and vectal in a matter of like
            • 23:00 - 23:30 what 20 30 minutes i mean the only hiccup was Mac OS you know causing trouble with the screenshot capture but we've been able to solve that relatively easily so we connected to open router and the model config is actually already optimized because we did the ultra search in vectal as you can see it set the temperature top-p all of that stuff so we didn't even have to do that vectal did that for us and that's because it knew our tasks this is what you guys don't understand ultra search knows your tasks so it even like does work that you
            • 23:30 - 24:00 didn't even think was related so it knew this while doing the research that way it could have predicted that and yeah the build is now finished so guys this is the power of Llama 4 Maverick it's very very good model you can build AI agents with it it's super like the cost effectiveness is crazy right look at this $0.20 and $0.60 per million tokens this is amazing cost effectiveness and the multimodal capabilities are among the best in the world so this right now will be one of my main AI models to build AI agents with i've added there's a reason why I
            • 24:00 - 24:30 added these within a few hours into vectal both Llama 4 Maverick and Llama 4 Scout obviously when Llama 4 Behemoth gets released I'm adding it instantly into Vectal and again if you want to have unlimited access to both Llama 4 Scout and Maverick go to vectal.ai and get the Vectal Pro plan with just one subscription you get access to all of these AI models as well as all the other advanced AI agents such as Ultra Search such as Infinite Thinking and every single productivity feature Vectal has to offer such as
            • 24:30 - 25:00 ideas tasks notes and now projects where you can create custom projects and give each project custom system prompt so that the AI agents know what this project is about all of that is available inside of Vectal and the best part you can get started completely for free just go to vectal.ai and give it a shot with that being said thank you guys for watching and have a wonderful productive week
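For reference, the OpenRouter analysis call the video wires up can be sketched like this. It is a reconstruction rather than the video's exact code: the model slug meta-llama/llama-4-maverick and the base64 data-URL image format follow OpenRouter's OpenAI-compatible chat API as documented at the time of writing (check openrouter.ai/models for the current slug), and the prompt wording and helper names here are invented. Supply your own API key, ideally from a .env file rather than hard-coded.

```python
# Send one screenshot to Llama 4 Maverick via OpenRouter and return
# its short productivity critique.
import base64
import json
import os
import urllib.request

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"
MODEL = "meta-llama/llama-4-maverick"  # verify slug on openrouter.ai/models

def build_payload(png_bytes: bytes, prompt: str) -> dict:
    """Package a PNG as a base64 data URL in the multimodal
    message format OpenRouter's OpenAI-compatible API expects."""
    b64 = base64.b64encode(png_bytes).decode("ascii")
    return {
        "model": MODEL,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
    }

def critique_screenshot(path: str, api_key: str) -> str:
    """Read a saved screenshot and ask the model for a brief critique."""
    with open(path, "rb") as f:
        payload = build_payload(
            f.read(),
            "Briefly critique what I'm doing on screen and suggest "
            "one way to be more productive.",
        )
    req = urllib.request.Request(
        OPENROUTER_URL,
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=60) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Calling `critique_screenshot("screenshots/screenshot_1.png", os.environ["OPENROUTER_API_KEY"])` inside the capture loop gives the periodic critique the video demonstrates.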