Unlock the power of AI in coding! 💻🤖

Code 100x Faster with AI, Here's How (No Hype, FULL Process)

Estimated read time: 1:20


    Summary

    In this insightful video by Cole Medin, viewers learn to harness the full potential of AI as a coding assistant. The video emphasizes the importance of a refined approach, offering a detailed workflow to maximize productivity when developing with AI. Cole covers the entire process, from creating planning documents to setting up MCP servers, and illustrates how to maintain clean, efficient code through meticulous documentation and testing. The overall aim is to enhance coding efficiency, aid in project management, and ensure security by understanding and implementing a thoughtful workflow with AI.

      Highlights

      • Use structured documents to guide AI coding assistants. 📚
      • Higher-level planning and task management are key to AI success. 🚀
      • Maintain under 500 lines of code per file to avoid AI hallucination. 🎯
      • Break tasks into single requests for better AI performance. 🛠️
      • Protect sensitive information by manually implementing environment variables. 🛡️
      • Utilize Git for version control to save progress and revert changes. 💾
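The Git habit in the last highlight can be sketched with a few commands. The project name, commit messages, and file contents here are hypothetical placeholders, not taken from the video:

```shell
# Start every AI-assisted project as a Git repository (directory name is hypothetical).
git init -q my-mcp-server
cd my-mcp-server

# Snapshot the starting point (the -c flags just set a throwaway identity for this demo).
git -c user.name=demo -c user.email=demo@example.com \
    commit --allow-empty -m "Initial empty state" -q

# ...after the AI implements a feature you are happy with, save that state:
echo "print('server stub')" > server.py
git add -A
git -c user.name=demo -c user.email=demo@example.com \
    commit -m "Working feature: server stub" -q

git log --oneline   # shows both snapshots

# If later prompts break the project, discard uncommitted changes
# and return to the last good commit:
git reset -q --hard HEAD
```

The point of committing after every working feature is that `git reset --hard` (or checking out an earlier commit) always gives you a known-good state to fall back to.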

      Key Takeaways

      • The future belongs to those who embrace AI in coding! 🔥
      • Refining your workflow is key to unlocking AI's full potential. 🔑
      • Learn to use markdown documents to give context to your AI. 📄
      • Avoid overwhelming your AI; be specific and straightforward. 💡
      • Always review AI-generated code for security and efficacy. 🔍
      • Testing is crucial; ensure your AI writes tests for every new feature. 🧪
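The earlier advice to implement environment variables yourself (rather than letting the AI handle secrets) can be sketched in a few shell commands. The variable names and values below are placeholders, not real credentials:

```shell
# Write secrets to a local .env file yourself instead of letting the AI touch them.
cat > .env <<'EOF'
SUPABASE_URL=https://example.supabase.co
SUPABASE_SERVICE_KEY=replace-with-your-real-key
EOF

# Make sure the file can never be committed, by you or by the AI.
echo ".env" >> .gitignore

# Load the variables into the current shell before running the project:
set -a          # auto-export everything sourced below
. ./.env
set +a
echo "Loaded Supabase URL: $SUPABASE_URL"
```

Keeping the `.env` file out of version control and out of the AI's prompts is the whole point: the model only ever sees variable *names*, never the values.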

      Overview

      Everyone knows that AI coding assistants can boost productivity significantly, but the secret sauce lies in how you use them. Cole Medin emphasizes refining your workflow with AI tools like Windsurf or Cursor, suggesting simple yet effective strategies to avoid frustration during development. By following step-by-step guidelines, developers can enhance their productivity 10x or even 100x.

        The key to successful AI-assisted coding lies in using structured documents to provide context to the AI, preventing it from making erroneous assumptions. Cole presents the concept of 'golden rules,' emphasizing the importance of concise coding, managing conversations effectively, and being specific in requests to the AI.

          A standout part of the process involves setting up MCP servers and version control systems like Git, which are pivotal for organized and secure coding. Cole walks viewers through the practical implementation of these servers, advocating for constant testing and documentation to ensure code quality and security throughout the development process.
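As a concrete illustration of the structured documents described above, a minimal PLANNING.md / TASK.md pair might look like the following. The file names and section headings are illustrative, not prescribed by the video:

```markdown
<!-- PLANNING.md -->
# Project Planning
## Vision
One-paragraph description of what the project does and for whom.
## Architecture
Key components and how they talk to each other.
## Tech Stack
Languages, frameworks, and libraries the AI must use.
## Constraints
e.g. keep every file under 500 lines; write tests for each new feature.

<!-- TASK.md -->
# Tasks
## In Progress
- [ ] Implement the core server entry point
## Done
- [x] Create PLANNING.md and TASK.md
```

The AI coding assistant is then told (via global rules) to read PLANNING.md at the start of each conversation and to check tasks off in TASK.md as it completes them.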

            Chapters

            • 00:00 - 00:30: Introduction to AI coding assistants The chapter emphasizes the growing importance of AI coding assistants in development. It notes that while these tools are essential, many people struggle to use them effectively. There is a tendency to misuse AI tools like Windsurf or Cursor, expecting them to produce excellent results without a structured approach. The frustration comes when the AI's performance is inconsistent, sometimes behaving like an expert and other times offering subpar results.
            • 00:30 - 01:00: The importance of a well-defined process This chapter emphasizes the importance of having a well-defined process when using AI for coding to ensure high-quality and consistent outputs. The speaker promises to share their detailed workflow by the end of the video, enabling the audience to elevate their AI development practices.
            • 01:00 - 01:30: Promises for the video This chapter introduces promises to enhance productivity with AI in development. It focuses on three main commitments: firstly, maintaining a simple setup avoiding common over-complications; secondly, creating a practical and functional example using a Supabase MCP server rather than just a toy project; and thirdly, ensuring the process is universally applicable.
            • 01:30 - 02:00: Resources for leveling up AI skills This chapter highlights resources for advancing AI skills beyond just coding workflows. It introduces Dynamis.ai, a new community platform that offers in-depth courses, live workshops, weekly sessions, daily support, and a community for early AI adopters. The speaker encourages joining the waitlist to gain early access to these resources.
            • 02:00 - 02:30: Overview of the AI coding process The chapter provides a comprehensive overview of the AI coding process as documented in a Google document linked in the video description. This document is an essential resource that outlines the entire process, beginning with the "golden rules." The chapter emphasizes that these rules are crucial as they influence various subsequent steps in the process, beginning with step two. The document and these principles are repeatedly referenced, highlighting their importance in understanding and implementing the coding process with AI.
            • 02:30 - 03:30: Golden rules for coding with AI The chapter discusses essential golden rules for coding with AI, emphasizing the importance of using higher-level markdown documents. These documents should include installation instructions, documentation, planning, and tasks, which provide context for the LLM (large language model). The chapter suggests leveraging AI in creating and managing these documents to assist throughout the project's development. It also hints at additional rules focused on not overwhelming the LLM, as providing longer context might affect its performance.
            • 03:30 - 05:00: Planning phase The chapter titled "Planning phase" discusses strategies for optimizing interactions with large language models (LLMs). It emphasizes the importance of keeping code files under 500 lines to reduce the likelihood of LLMs hallucinating, encouraging starting new conversations frequently to prevent them from getting bogged down. The chapter also advises against overwhelming LLMs by multitasking and suggests focusing on implementing one new feature per prompt for better results. Lastly, it warns against handling too many tasks at once with an LLM.
            • 10:00 - 13:00: System prompts and global rules This chapter discusses the importance of giving clear and specific prompts to AI and LLMs to ensure good results. It highlights the need for precise and consistent requests, including asking AI to write tests for the code it generates. Providing context is emphasized as essential to improve the AI's performance, without overwhelming it with too much information.
            • 13:00 - 16:00: Configuring MCP servers Configuring MCP servers involves detailed planning and specification. It's vital to define the desired outcome not just at a high level, but also in terms of the specific technologies, libraries, and the expected output. Additionally, maintaining updated documentation and inline code comments is crucial for clarity and functionality.
            • 16:00 - 19:00: Initial project prompt The chapter discusses the importance of understanding code both for personal comprehension and for maintaining references in future discussions. It emphasizes implementing environment variables independently rather than relying on the language model, to protect sensitive information such as API keys and database security. The text humorously references an instance where someone overly relied on automated coding, resulting in negative consequences. This serves as a cautionary tale against becoming too dependent on automated coding tools without understanding the underlying code.
            • 19:00 - 23:00: Creating a Supabase MCP server The chapter titled 'Creating a Supabase MCP server' highlights a significant event where the speaker discusses the potential and risks of AI in server management. On March 15th, someone remarked, 'AI is not just an assistant. It's the builder now.' However, this optimism is quickly followed by a cautionary tale two days later when the server gets hacked. The issues arose due to maxed-out usage on API keys and unauthorized bypassing of subscriptions, attributed to inadequate database security protocols. This incident underscores the critical importance of comprehending the code produced by AI and not solely relying on AI for managing security details like environment variables and database security. The narrator emphasizes the necessity for developers to thoroughly understand the AI-generated code to prevent such vulnerabilities.
            • 23:00 - 26:00: Saving project state with Git In this chapter, the importance of understanding the security of your project is emphasized, even if you're coding on the fly ('vibe coding'). The chapter introduces the concept of planning before starting a project, highlighting the necessity to create planning and task files before writing any code. This approach ensures a clear direction and structure is established prior to development.
            • 26:00 - 29:00: Iterating on the project This chapter focuses on the importance of having a structured planning document for a project. The document includes the vision, architecture, and constraints to provide context for the LLM (large language model) used as an AI coding assistant. This setup is especially beneficial at the start of new conversations, allowing the model to quickly understand the project's objectives and current status without needing to analyze individual code files. Additionally, the chapter mentions the use of task markdown files, which likely serve to detail more specific tasks.
            • 32:00 - 39:00: Testing the server This chapter focuses on the processes and benefits of using AI, particularly an LLM, to manage project tasks effectively. The LLM is able to update, create, delete, and mark tasks as done throughout interactions, essentially serving as a project manager for an AI coding assistant. This approach empowers users to maintain control over their project and ensures that the AI assistant can effectively contribute to task management and project planning.
            • 39:00 - 45:00: Project deployment In the 'Project Deployment' chapter, the discussion centers on using AI tools and chatbots to assist with setting up a Supabase MCP server. The speaker mentions using tools like Claude Desktop and their own AI agents as connectors to Supabase, enabling database operations. The deployment plan involves the use of the Brave API, supported by the existing setup of the Brave MCP server. The speaker notes they will not delve deeply into the specifics of this setup.
            • 45:00 - 50:00: Conclusion and closing remarks The chapter titled 'Conclusion and closing remarks' discusses the use of a service or tool to automate tasks. It underscores that the video provided is an example of how to quickly request the creation of both 'planning.md' and 'task.md' files using web resources. The narrator allows the tool to perform web searches, and once finished, confirms that the files have been created by Claude Desktop. These files are generated and saved directly to the user's file system through an MCP server.
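The MCP server setup referred to in these chapters is driven by a JSON configuration file. A sketch of the three-server setup (file system, Brave Search, Git) might look like this; the exact schema, package names, and paths vary by AI IDE and server version, so treat the commands and the `BRAVE_API_KEY` placeholder below as assumptions to verify against your own tool's documentation:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/allowed/folder"]
    },
    "brave-search": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-brave-search"],
      "env": { "BRAVE_API_KEY": "your-key-here" }
    },
    "git": {
      "command": "uvx",
      "args": ["mcp-server-git"]
    }
  }
}
```

Each entry tells the IDE how to launch one MCP server as a subprocess; the server then exposes its tools (file access, web search, Git operations) to the AI coding assistant.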

            Code 100x Faster with AI, Here's How (No Hype, FULL Process) Transcription

            • 00:00 - 00:30 Everyone knows that if you're not using an AI coding assistant, you are going to fall behind no matter what you are developing. But what people usually don't know is how to use these AI coding assistants effectively. I mean, sure, you can just throw whatever you want at an AI IDE like Windsurf or Cursor, and sometimes you'll get good results, but if you don't have a clear process for working with them, that is definitely not the case all the time. And it is so frustrating when the LLM goes from a senior software engineer to what seems like a pack of primates
            • 00:30 - 01:00 mashing on a keyboard, deleting parts of your code, implementing features that you don't want, you know the pain. And you have to have a well-defined process when using AI to code if you want high-quality outputs and you want them consistently. And if you don't have that refined process yet, I promise you that by the end of this video, you'll be at a whole new dimension of developing with AI, cuz I'm going to walk you through my full workflow step by step and just get into those nitty-gritty details so you can just copy my process
            • 01:00 - 01:30 to 10x, even 100x your productivity when developing with AI. And to make this well worth your time, there are three things that I'm going to promise to you. The first is that we're not going to overcomplicate our setup, which I see people do a lot with AI IDEs. The second thing is we're not just building a toy. We're going to use this workflow to build a full practical and useful example with a Supabase MCP server. More on that in this video. And then the third thing is that this process is going to work for you no matter what you
            • 01:30 - 02:00 are developing or which AI IDE you are using. Last thing really quick. If you want to level up more than just your AI coding workflow, but your AI skills as a whole, I have the perfect thing for you. Check out dynamis.ai. It's an exclusive community that I just launched the waitlist for. It's where I take my expertise that I bring to you constantly on YouTube to a much deeper level with courses, live workshops, weekly sessions, daily support, and the best part, a community for you to join with other early AI adopters. So check out the link, join the waitlist, and with
            • 02:00 - 02:30 that, let's dive into our full process for coding with AI. So what you're looking at here is a Google doc, which I'll link to in the description of this video, that gives my full process of coding with AI. So use this as a resource for yourself. We're going to be referencing this constantly throughout the video as well because this is everything. This is our full process, starting with the golden rules at the top. So, I'll cover these quick and then we'll see how they really dictate a lot of the rest of our process starting with step number two where we'll go into
            • 02:30 - 03:00 actually starting our project. And so, the very first golden rule, this one's super important. We want to use these higher level markdown documents that have instructions for installation, documentation, planning, and tasks. Use these to give context to our LLM. And so we're going to be creating and managing these using AI to help throughout the creation of our project. And then the next three rules here are all about not overwhelming the LLM because the longer context you give to an LLM, the more
            • 03:00 - 03:30 likely it is to hallucinate. And so, for example, you want to keep all of your code files under 500 lines. You want to start fresh conversations often because longer conversations can really bog down an LLM. And then also, you don't want to overwhelm the LLM by asking it to do too many things at the same time. In fact, I usually find that it's best to just ask it to do one new feature or implement one new thing per prompt that I give it. Much better results. If you have really long files, long conversations, you're
            • 03:30 - 04:00 asking for a lot of things at the same time, that's when you start to get terrible results no matter which LLM or AI IDE you are using. And then also, not everyone likes tests. Most people don't. But it's very important to ask the AI to write tests for its code. Ideally, after every new feature that it implements, and that's how you really get consistent output. Also, be specific with your requests. And so, this is where it's actually better to provide a little bit of extra context. And so, you don't want to overwhelm the LLM, but you also don't
            • 04:00 - 04:30 want it to be left to its own devices. You want to be very specific with what you're looking for. Like don't just describe what you want to build at a high level, but get into the details of the technologies that you want it to use, the different libraries, what you want the output to look like. Be specific and that helps a lot as well. And then the last two rules here, write docs and comments as you go. You want the LLM to constantly be updating documentation both in these uh you know higher level core files, but also comments in the code as well. that helps
            • 04:30 - 05:00 both yourself to understand what it's doing and also itself as it references these files in later conversations. And then the very last thing is to implement environment variables yourself. Do not trust the LLM with your API keys and securing your database, all that good stuff, because you do not want to be this guy. I put this link in here cuz I just thought it was really funny. This is an example of what can go seriously wrong when you vibe code. This guy built his full SaaS with Cursor. He didn't do any of the coding and he's all hyped
            • 05:00 - 05:30 here. This is March 15th. He's like, "AI is not just an assistant. It's the builder now." But then look at this. Two days later, he gets hacked. Random things are happening. Maxed out usage on API keys, people bypassing subscriptions, probably because he didn't have correct database security. All these things going wrong because he trusted AI to manage his environment variables, his database security, all of the things around security. You have to make sure you understand yourself. And really, you should understand all the code that AI is producing. I generally
            • 05:30 - 06:00 don't recommend vibe coding, but even if you do vibe code, at least make sure that you understand if your project is secure or not. That is super important. And so, with all these golden rules out of the way, we'll see these constantly throughout the rest of this document, let's get into starting our project, beginning with our planning. So, for the planning phase, it's pretty simple. We just want to create our planning and task files. And we're going to do this before we're even getting into any coding at all because we want to have that higher level direction before we write a single line of code. And so our
            • 06:00 - 06:30 planning document, this is where we have the high-level vision, architecture, constraints, all of these high-level pieces of information that we want to give as context to the LLM. And we can ask the LLM, the AI coding assistant throughout the process to reference this file, which is especially useful at the beginning of new conversations so it can quickly get up to speed with everything that we're doing in our project. So it doesn't have to analyze different code files to figure that out itself. And then at a slightly lower level, we have the task markdown file. This is where we
            • 06:30 - 07:00 track all of our tasks, things that have been done and still need to be done. And throughout the conversation, we can have the LLM update and create new tasks, delete tasks, mark them as done, all that good stuff. And that really allows us to be the project manager for the AI coding assistant, which is important because we want to dictate everything that it does. And this is just a resource for us to do that well and have the AI coding assistant help in that process, too. And so for creating these
            • 07:00 - 07:30 files, I usually don't even do this in an AI IDE. I'll just use a chatbot like Claude Desktop for example. And so I have this sample prompt right here for the Supabase MCP server that we want to create. And so we're using MCP as a connector. So we can use apps like Claude Desktop, our own AI agents to connect to Supabase so that it has tools to do things in our database. And so for the planning for this, I'm just telling it to use the Brave API because I have the Brave MCP server set up. I'm not going to get into a ton of details on this
            • 07:30 - 08:00 because that's kind of outside of the scope of this video. Just use this as an example for very quickly asking it to plan both the planning.md and task.md files. And so I'll send in this prompt and then it'll search the web for me. So I'll allow it to do that and then let me come back once it has created both of these files. And there we go. Claude Desktop created both files for me. And it did it right there in my file system because I have the MCP server for that too. So it made them right here in this directory. So I can open that up in
            • 08:00 - 08:30 Windsurf, though you can use any AI coding assistant. Just keep that in mind. I'm using Windsurf, but this will work with Cursor, Cline, Roo Code. It doesn't matter. This process applies to all. And here we have our planning file. So I'm going to need to open this preview here and take a look at this. We have our overview. We have the project scope, the technical architecture, technology stack, all these things that we can now give as context to the AI coding assistant when it starts the project to make sure that it's starting on the right foot. This is certainly not a
            • 08:30 - 09:00 perfect document. Like I wouldn't really want a what is MCP and what is superbase section. We don't really care about that. And so I'm going to edit this off camera and just save you the boring details here. But this is a good starting point. And typically you will want to iterate on this quite a bit within the chatbot or wherever you're working on this file. And then we've got our tasks as well. So I can open a preview for this. We can see all the different tasks that it set up for us. And we'll have the AI coding assistant knock these out one by one and add new ones as necessary as well. And this is
            • 09:00 - 09:30 pretty overkill. So again, you're going to want to iterate on these things. Like we don't need that many tasks for this. This is pretty crazy. Um but yeah, I mean this is a good starting point. One last tip that I want to give on the planning phase, something I like to do a lot is instead of just using a single LLM like Claude, I want to use multiple different large language models to help me plan my project. So, I'll just give the same prompt to each of them and then combine everything at the end. Now, the trick is you have to have a good platform that can help you work with these different LLMs in one place
            • 09:30 - 10:00 without having to pay multiple subscriptions. And there are a lot of different apps out there for that. But one of my favorites that is sponsoring this video, but I do use them a lot is Global GPT because they're very affordable. You can get started for free with access to all the best LLMs like DeepSeek and o3 and Claude. And they even have tools added in like Deep Research or Perplexity for example. If you really want to go deep in your planning, you can do that with Global GPT. And there's a lot of other tools as well, like if you wanted to use
            • 10:00 - 10:30 Ideogram or Midjourney to help you plan your assets for your project as well. You can do all of that. So, you're going to have an interface that's very similar to Claude Desktop like we just saw, but just being able to access these different LLMs very easily, do things like deep research. And so, yeah, definitely use a platform like this if you want to really dive deep in your planning for your project. So, I'll have a link in the description to Global GPT. Definitely recommend checking them out. But yeah, that's just the last quick tip that I wanted to give for planning your projects. Now, let's get into the next
            • 10:30 - 11:00 phase. Now that we have the planning and tasks done, it's time to move on to global rules. This is essentially system prompts for your AI coding assistant. So, all the high-level instructions that you want to give to the AI coding assistant, you do in the global rules so you don't have to type it out explicitly every time. Like for example in our global rule we can say always read the planning file at the start of a new conversation. So that way when we start the conversation ourselves we don't have to explicitly ask it to do so. And so it
            • 11:00 - 11:30 saves you from typing a lot on all your different requests because instead of saying please implement this and then write tests and then check off the task, you can just tell it to do so in the global rules like check the tasks, make sure you write your tests for all your new features. We have that all now as kind of a system prompt to the LLM. This is the global rule that I have set up in my AI coding assistant. You can use this yourself. Tweak it to your needs depending on your technology stack and any other requirements that you have.
            • 11:30 - 12:00 But yeah, this is a really good starting point for you. And then I even have instructions for the four most popular AI IDEs and how you can get this set up. And so specifically, let me copy this. I'll show you how to do it in Windsurf because that's just the AI IDE I'm using in this video. I'll copy this and then I'll go over to Windsurf and then you click on the additional options in the top right, manage memories, and then you have your global rules. This is going to apply no matter what directory you're in. And then you have workspace specific
            • 12:00 - 12:30 rules. So if you want to have rules for just your current project, you set it up here. Typically, it's recommended to have workspace rules because a lot of times different things like the technologies that you're asking it to use are going to be specific to that project. And so, I would typically recommend workspace rules. So, we can go into here and then just paste in everything that we copied from the Google doc. And then I can just like my other markdown files, I can open up a preview and see that we have now have these rules set up for the AI coding
            • 12:30 - 13:00 assistant. So, we tell it how to use these different markdown files. This is how without having to always ask it to look at planning and mark off tasks in the prompts on the right hand side, it'll still know how to work with these files. And then we tell it about some of our other golden rules like don't create files that are longer than 500 lines. We tell it about creating tests for each of the features. Um, and then we give it some style guidelines as well to make sure that the code is clean that it produces. We tell it how to work with the readme file so it can maintain
            • 13:00 - 13:30 documentation. And then at the bottom, I like to have a bunch of kind of miscellaneous rules to help it as well. So that is our rules as a whole. And now we can keep our prompts to the LLM quite simple because we have all these different things we're trying to have it do and make sure it's following, but we don't have to ask it each time. Now, that's the important part with global rules. So, we've got our global rule set up now. Should have just taken you a couple of minutes. Now, we can move on to configuring our MCP servers. So, if you are not familiar with MCP, I would highly recommend checking out this video
            • 13:30 - 14:00 that I linked to above where I did a comprehensive overview, but really it's just a way to give more tools to our AI IDE so it can do things like search the web with Brave. So, these are the core three servers that I always use that I'll go over in a bit. I've got links to install each of them. I've got instructions for how to set up MCP in your different AI IDEs and even a link here to a list of other MCP servers. You can go here and download any other ones that you might want to include. You can click into any of these links to see
            • 14:00 - 14:30 exactly how to set it up within your AI IDE. So, it gives the instructions for setting up your configuration. So, the core three servers that I always use, I want my AI IDE to be able to interact with the file system, not just the current project, but other folders on my computer as well. Like maybe I have an image folder. I want it to be able to pull assets into the project from that. or I want it to reference other projects to learn how I did something previously. I can do all of that when I have the file system server. And then I also want
            • 14:30 - 15:00 my AI IDE to be able to search the web. And some AI IDEs have web search baked in. But the Brave API is very powerful in the way that it actually uses AI under the hood to summarize a bunch of the different web search results. So you get some really powerful output. You can use this to do things like pull documentation for tools, libraries or frameworks that you are using. And then the last one that I really like using is Git. And the reason for this is you should really set up every project as a Git repository. So you have version
            • 15:00 - 15:30 control. You can manage backups of your projects, have different versions that are all saved. And so with the git MCP server, you can do something like this example prompt that I have here where you say like, okay, I like where we're at right now, and before I implement more features, I want to really have a backup of the current state. So please make a git commit to save the current state, which by the way, this is something that I highly highly recommend doing because sometimes you can go five requests, 10 requests to an AI IDE. It
            • 15:30 - 16:00 implements all these things and you realize that it completely broke the project five requests ago. So if you have backups along the way, you can revert to a working state. Otherwise, sometimes you'll get into this state of hell where your project is totally broken, but you've gone too far and you can't really go back. And so Git is your savior for that. So that's why I love using this server so much. And then also, if you want to have more like long-term memory, implement RAG within your AI IDE, you have a lot of other tools. For example, the Qdrant MCP server. So, I'm not going to get into
            • 16:00 - 16:30 that cuz a lot of AI IDEs like Windsurf already have memories. Like I can go in here and manage memories and it can generate those for me. I just have a blank slate right now, but you can ask it to keep track of memories between projects and stuff. So, might not need something like this, but they're still great MCP servers anyway. So, yeah, get all these set up. In my case, let me go back over to Windsurf here, and I'll open up my configuration. You just click on configure MCP right here. And I have this MCP config.json. So yeah, you can see I have my file system, Brave Search,
            • 16:30 - 17:00 Git, and then this is kind of out of the scope of this video, but I have Archon, which is my AI agent builder that I have baked into Windsurf as well. So I set up all these servers. Just follow the instructions that I have laid out right here in this document. So get your MCP set up and then use those as I describe throughout your project and it's going to help you a lot. So now it is time to give that initial prompt to the AI IDE to start our project because we have our MCP servers configured, our global rules
            • 17:00 - 17:30 set up, and our planning and task documents created. And like I call out here, even though we have these documents that give the higher-level details to the AI, we still want to be very specific with our initial prompt, because it determines the entire starting point for our project, and one of our golden rules is to be very detailed about what we're looking for. What that means depends on your project, but the key piece of advice I have here is to give a lot of documentation and
            • 17:30 - 18:00 examples to the AI coding assistant. There are three different ways we can provide examples and documentation. First, a lot of AI IDEs have built-in features for pulling in documentation. For example, in Windsurf I can type "@mcp" and hit Tab, and that will include the MCP documentation in the prompt to the LLM. It uses RAG under the hood to search through the documentation and augment its answer. You can do
            • 18:00 - 18:30 something very similar in other AI IDEs like Cursor as well. The second option is using the Brave MCP server, or any other MCP server you set up for web search. You can ask it to go through the internet and find documentation for whatever libraries or tools you're using, for example: "Search the web to find other MCP server implementations or documentation." So you can pull examples and docs that way. Or, third, you can provide them manually, like in this example prompt that we're going to use to create the Supabase MCP server. I'm just giving it
            • 18:30 - 19:00 a link to a GitHub repo with an existing implementation of a Python MCP server, so it uses this example as documentation, along with the MCP and Supabase documentation that I call out at the top here. This example prompt is very specific to what I'm creating, but use it as a template: call out documentation, give examples, and be very specific about what you want it to build. You're going to get much better
            • 19:00 - 19:30 results than with something super bland like "build a Supabase MCP server." So, we're going to take this prompt and go into Windsurf. I already have it entered, so I'm going to send it in and we'll watch it rip. And notice, I'm not even calling out the planning.md file anywhere in this prompt, but it's still going to reference that after it pulls the documentation for MCP and Supabase, because we have it called out in the project-level rules, the global rules we set for just this workspace. And
            • 19:30 - 20:00 so, first it looks through the GitHub example I gave it, goes through the documentation, and uses the Brave web search to get the Supabase Python client documentation. It's doing a lot here, and it's using a lot of flow actions, so a lot of credits are being spent, but it's important to have the best starting point you can. Generally, I'm okay with it doing quite a bit at the start, and you can always prompt it to do less. So I'm going to let it go
            • 20:00 - 20:30 through all of this documentation searching, and I'll come back once it's moved on to the next step. All right, it's finished looking through all the documentation. Now, look at that: we're moving on to analyzing the planning and tasks files, so it's pulling in all of that extra context. And then it wants to create a directory; it's planning to write the tests already, which is good, it's following that global rule as well. So we'll let it create that directory. There we go, the tests directory. And now it's moving on to coding up everything. It starts with the
            • 20:30 - 21:00 requirements.txt, and then, yep, there we go, the server.py. This will take a little bit, because there's a good amount of code that goes into it, so I'm going to pause and come back once it has implemented the first iteration of my server. And there we go: Windsurf created the full Supabase MCP server for us in a single prompt. And this is not a basic implementation; it's almost 300 lines of code, and it looks really good. We've got the ability to delete records, update records, and create records. And then
            • 21:00 - 21:30 last, we have a tool to read rows in a table. This very much shows that Windsurf understood the MCP documentation: even the way it sets up the Supabase client and defines all of our tools shows it read through the documentation and synthesized it with our markdown files for planning and tasks to create this beautiful piece of code. And if we go into the tasks markdown file, take a look at this: it marked the tasks off as complete. It didn't write any tests or update the README, which I kind of wish it did. It
            • 21:30 - 22:00 made the tests folder but then didn't write tests, which is kind of strange. You get weird behavior like that sometimes, where it understands the global rules but doesn't do everything you'd want. But that's okay, because we'll just do that in a follow-up prompt, and I'll show you that in a second. First, though, let's actually test this and see if it's working. So, we have this MCP server created locally now. And I do want to make a dedicated video on creating MCP servers, by the way, so let me know in the comments if you're interested in that. I'm going to gloss over the implementation and running it right now, because the point of this video is to show my
            • 22:00 - 22:30 process for coding with AI, not making an MCP server. So anyway, in Claude Desktop, I'm going to hook in the server. You go to the top left, File, Settings, Developer, and then click on "Edit Config". It opens the configuration folder, and we can go to the claude_desktop_config.json file, which I have open right here in Windsurf. This is where I have all of my servers set up for Claude Desktop specifically. And so to add in Supabase, I just added this entry right here, where
            • 22:30 - 23:00 I have the command pointing to my Python executable, specifically in the virtual environment that I created. Again, these details aren't super important for this video; I'll do a follow-up one on making a server. Then we have the arguments, which just point to the server that Windsurf created for me. And then for my environment, I'm passing in (redacted here) my URL and service key from Supabase. With all of that in place, you just restart Claude Desktop, and then if you open up the MCP tools that are available, let me scroll down to the first Supabase
            • 23:00 - 23:30 one. Yeah, here we go: create table records. So we have this one, and then the three other tools, which I'd find if I scrolled through the list of tools here. And now I can ask it something like, "What records do I have in my document metadata table?" This is a table from a different video on my channel with some RAG stuff in n8n. So it's now going to call the Supabase tool; I'll allow it for this chat. Okay, I just did one prompt. I did not do anything for follow-up prompting yet.
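For reference, a Claude Desktop entry like the one described a moment ago might look roughly like this sketch. The server name, file paths, and environment variable names here are placeholders standing in for the redacted values from the video, not the exact configuration shown:

```json
{
  "mcpServers": {
    "supabase": {
      "command": "/path/to/project/venv/bin/python",
      "args": ["/path/to/project/server.py"],
      "env": {
        "SUPABASE_URL": "https://your-project.supabase.co",
        "SUPABASE_SERVICE_KEY": "your-service-key"
      }
    }
  }
}
```

Pointing `command` at the virtual environment's Python ensures the server runs with the dependencies installed from requirements.txt, and keeping the credentials in `env` rather than hard-coding them in server.py matches the environment-variable practice recommended earlier.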
            • 23:30 - 24:00 And look at this: we just one-shotted a Supabase MCP server with Windsurf using this strategy. This would not be possible without the rules I have, having it search through documentation, and giving it an example; all of that together is what made this possible. And I'm actually blown away. This was a big implementation for it to one-shot, and it did it successfully. These are indeed the records I have in this document metadata table in Supabase, which is
            • 24:00 - 24:30 super cool. Okay, so at this point we have a good state of our MCP server that we want to save with Git, so we can revert back to it if we run into issues down the line where the AI IDE breaks things as we add a README, tweak the functionality of our server, or add our tests. So I'll go into a new conversation and tell it to make a Git repo for this project and make a commit. And so you can either use the Git MCP server, or you
            • 24:30 - 25:00 can rely on the native commands that a lot of these AI IDEs have for working with Git repositories. In this case, it tried to use a tool and failed for some reason, so now it's just running git status in the terminal, and that works as well; you can use either. And yep, it looks like there is a Git repo initialized. I ran this once before, which is why it already made the Git repo. So now it's making the .gitignore, which looks good; we definitely want to have that. And now it's running the git add command. Looks good. And again, you could use the
            • 25:00 - 25:30 Git MCP server for this as well, but I'm just using the commands. So it runs the status, and then finally it makes a commit. And there we go, boom, we make a commit, and now everything we have is saved. We can ask it later to revert back to this commit if it messes anything up and we want to return to this state. And with that done, now we can move on to the next step, where we want to create tests and do some of the things it missed, like creating the README. And that brings us back
            • 25:30 - 26:00 to our main document, because we're going to knock out steps six and seven at the same time. Step six covers what it looks like to iterate on our project after the initial prompt, and the important thing it covers is the golden rule to only ask for one change at a time. I've got a good example of that and a very bad example. The point is to not overwhelm the LLM with your requests. And in our case, we know there are a couple of things we want to implement right away. We want to create that README, the
            • 26:00 - 26:30 project documentation, because it didn't do that at first for some reason. The other thing we want to knock out right away, which brings us right to section seven, is creating tests for that initial version of our Supabase MCP server. So we'll just ask the LLM to create unit tests for each of the tools it made in the server. And I've outlined best practices for testing here; these are the things we want to give to the LLM, which you can just do in the global rules. And so in the template that I
            • 26:30 - 27:00 gave you for the global rules, everything we saw in that section on testing best practices is listed right here for the LLM. You don't even have to understand exactly what mocking is, for example; you can still give it as a rule to the LLM. We want a dedicated directory for our tests; we want to mock calls to our database and LLM so we're not calling them for real, because we want our tests to be fast and free; and then we want to test a successful scenario, make sure we're handling errors properly, and cover an edge case as well. And so we just
            • 27:00 - 27:30 include all of that as global rules. So back over in Windsurf, I can open a new conversation and ask it to create tests for server.py, and then I can just call out the test directory that I already have right here. It looks like there are way too many directories here, so I'm just going to delete that and say "in the tests" directory, and boom, there we go. I could be way more explicit here, and it's probably helpful to provide more instructions, but remember, because of
            • 27:30 - 28:00 the global rules, it's going to follow those best practices without me mentioning anything related to them in the prompt, so I can keep it very simple. And so what I'm going to do is pause and come back once it's done creating those initial tests. All right, Windsurf created all of the tests for our server. There was an issue initially where 12 of the tests were passing and two were failing, so I went through a little bit of iteration there with a couple of follow-up prompts. But after that it is
            • 28:00 - 28:30 working beautifully. So now in my terminal I can just run the command pytest and reference my test folder from the root directory, and look at this: 14 tests, and they all pass. And it's 14 because we're testing a success, failure, and edge-case scenario for each of our different tools, plus a couple of extras to do things like test the lifespan with the environment variables. And this is super comprehensive; this file is massive, almost 500 lines of code. And
            • 28:30 - 29:00 that's generally what your test files are going to look like. Sometimes they're longer than the base file itself, just because you want to hit all those different scenarios. So yeah, these are really solid tests, and everything is mocked; this is beautiful. And so at this point, we have our first version of the Supabase MCP server. It's working, we tested it in Claude Desktop, we've got unit tests, everything is good. Now we can iterate on it however we want, and we can also create a README file, all the while
            • 29:00 - 29:30 making sure that we keep our tasks and planning markdown files up to date, because again, that's the all-important context that we need, especially if we're starting new conversations. There are a lot of other things we'll want to do to iterate on our Supabase MCP server, but I'll do that off camera, because at this point I've already demonstrated this entire workflow. The very last step we have here is deploying our project. Because once what you're building is at a state where you want to deploy it, package it up to ship it to the cloud,
            • 29:30 - 30:00 or share it with other people, you can do that with the AI coding assistant as well. And my favorite way to do it is with Docker, or a similar tool like Podman. The best part is that LLMs are very good at working with Docker, just because it's been around for so long; there are so many examples of it on the internet that were used to train the LLMs. So they can help you create a Dockerfile to package up and deploy your application, even giving you the commands to use as well. And so this is an example prompt that I
            • 30:00 - 30:30 have here: "Write a Dockerfile for this MCP server, using the requirements.txt file for all of the Python requirements. Give me the commands to build the container after." I did that already; I didn't want to bore you with the details of waiting for it to complete. So I created this Dockerfile to package up the MCP server, and I also had it create the README in a separate request, because remember, one thing at a time. So we have full instructions for running everything, starting with installing it from Git,
            • 30:30 - 31:00 which, by the way, you can follow; you can go through these instructions yourself to get this Supabase MCP server running, which is really cool. So we built out a full working example in this video. Then you build the container, and then you can set up the configuration within your Claude Desktop, your Windsurf, your Cursor, whatever, just using this as the config example. And that's it; that's all it takes to get started. So we have this deployed now; you can literally clone this repo and do it all yourself, and
            • 31:00 - 31:30 that is our full process. We literally went from start to finish with this document: ideation to implementation to testing to writing our documentation, all the way to deploying. We took care of it all. And so that was a lot. Honestly, this video took a very, very long time to prep and record for you, so I hope this process was super helpful. And yeah, there might be certain parts of it that I'll dive into in more detail in another video. Also, I really do want to
            • 31:30 - 32:00 make a full video on building an MCP server from scratch and going into a lot more detail on that, so let me know in the comments if you'd be interested in that as well. I hope this video helps you be way more efficient when coding with AI, and please let me know in the comments what your process looks like too; I'm super curious. I'm sure there are things you'd want to add to what I'm doing, or other things that work really well for you, so I'd love to hear it. There definitely isn't a one-size-fits-all approach to using these powerful tools; I just wanted to share what works well for me. So, if you appreciated this
            • 32:00 - 32:30 video and you're looking forward to more on AI coding and AI agents, I would really appreciate a like and a subscribe. And with that, I'll see you in the next one.