5x Your Cursor AI Coding Quality With These Pro Tips
Estimated read time: 1:20
Summary
In this informative video, Yifan from 'Beyond the Hype' takes viewers on a deep dive into maximizing the potential of the Cursor AI tool through advanced workflow techniques. Most users barely scratch the surface of Cursor's capabilities, typically utilizing only about 20% of its true power. Yifan demonstrates how to leverage the AI tool for software development projects effectively by breaking down larger tasks into manageable steps. This approach not only enhances productivity but also minimizes errors, offering a more reliable and maintainable way to code. Alongside adjusting project context and testing strategies, viewers can learn how to better integrate AI into their development process for enhanced coding quality and seamless project management.
Highlights
Yifan explains the importance of tapping into the full power of Cursor, an AI productivity tool.
The video demonstrates a practical project: a podcast transcript editor built in TypeScript.
Learn how to implement fine-grained permission controls with AI assistance.
Break tasks down into smaller steps for efficient, error-free coding.
Understand the importance of context in AI project implementation.
Effective database handling and UI integration with Composer in Cursor.
Consistent project output with Cursor rules and end-to-end testing.
Key Takeaways
Maximize Cursor's potential by utilizing more than just 20% of its features.
Break down big tasks into smaller, testable steps for better AI guidance.
Treat AI as an intern; you are the architect of your project.
Use the o1-preview model for a detailed project plan, despite its per-invocation cost.
Cursor rules ensure the AI behaves consistently across every prompt in a project.
Testing remains crucial even with AI-generated code, ensuring new features don't break existing ones.
End-to-end tests are essential for larger projects to maintain reliability.
Overview
In this video, Yifan demystifies the advanced features of Cursor, an AI tool that many users struggle to use to its fullest extent. With comprehensive guidance, Yifan showcases a project involving a podcast transcript editor, illustrating the process of dealing with complex coding tasks by breaking them down into simpler, manageable parts. This approach, he argues, helps avoid common pitfalls like vague AI instructions that can lead to errors or unmanageable code.
Yifan emphasizes the value of using Cursor not just as a coding assistant but as a critical part of the development process, akin to having a diligent intern. By leveraging AI for breaking down large tasks and generating code, developers can maintain clarity and control over their projects. Furthermore, he points out the benefits of using the o1-preview model within Composer, despite the associated cost, for an in-depth breakdown of work plans.
Towards the end, the importance of testing is reiterated, despite AI handling much of the heavy lifting. Yifan insists on the necessity of end-to-end testing to ensure that new features do not cause unexpected issues in the existing codebase. Overall, the video is packed with practical insights for improving AI-driven coding workflows, establishing that AI can enhance, but not replace, the nuanced expertise a seasoned developer brings to a project.
Chapters
00:00 - 00:30: Introduction to Cursor AI. This chapter explains that most users only utilize 20% of Cursor AI's capabilities. The narrator promises to guide viewers through advanced workflows using Cursor's most powerful tool, Composer, combining insights from his experience with Cursor and in software engineering. Despite slightly slurred speech from a bitten tongue, the narrator assures quality content in the demonstration.
00:30 - 01:00: Podcast Transcript Editor Demo. This chapter introduces the speaker's weekend project: a podcast transcript editor that lets users edit audio transcripts directly without the complexity of waveform editing. The project involves intricate sound manipulation, transcript editing, and UI work, and already comprises 2,700 lines of TypeScript, well beyond the typical toy projects seen in AI demonstrations. The current effort focuses on adding more fine-grained features.
01:00 - 01:30: Implementing Fine-Grained Permission Control. This chapter covers the planned fine-grained permission control, which restricts users to their own transcripts while preventing access to those of others, plus a feature letting users invite others to edit transcripts collaboratively. It emphasizes the complexity of these features and suggests using Composer to facilitate the process, advising careful planning for any major feature and warning against giving the AI vague instructions.
01:30 - 02:00: Importance of Breaking Down Tasks. This chapter discusses the importance of breaking tasks down, especially when working with AI, to minimize mistakes and produce more manageable, debuggable code. Prompting AI with smaller, testable steps is presented as best practice, and the chapter introduces a way to have AI automate the task breakdown itself.
04:00 - 04:30: Updating Database Schema. This chapter covers updating the database schema, emphasizing the importance of splitting the work into multiple steps. It highlights the need to state clear intent and provide context so the AI understands the existing functionality, for example by hitting Command+Enter or mentioning the codebase context so it knows the current features, interfaces, and relevant database fields.
04:30 - 05:00: Testing Updates and Committing Code. This chapter discusses testing updates and committing code, and how the AI gathers full context so it can leverage existing implementations. It looks at how various parts of the codebase, such as components, database migrations, and page loading, are examined to add the necessary context before implementation.
05:30 - 06:00: Creating New Functions. This chapter discusses updating the database schema and enhancing transcript-related functionality, including the creation and fetching of transcripts.
09:00 - 10:00: Implementing UI Changes. This chapter stresses breaking work into smaller, manageable units when implementing UI changes, and notes the pitfalls of prompting AI to handle large tasks in one go, which increases errors and debugging time. Early on, when the codebase is small, AI may handle big prompts adequately, but as the codebase grows, relying on AI for large tasks leads to more errors, making careful task segmentation essential.
12:00 - 13:00: Adding Automated Tests. This chapter discusses the importance and cost-effectiveness of automated testing in AI-assisted implementation. It suggests using the o1-preview model, which costs 40 cents per invocation, but only for significant feature implementations, a cost considered reasonable given the benefits; most of the time, Sonnet suffices. The chapter closes with a reminder that AI should be viewed as an intern, supporting but never replacing the architect.
16:00 - 17:00: Final Thoughts and Outro. This chapter emphasizes planning and designing before implementation, and how AI can assist in that process: a clearer plan leads to better implementation and easier debugging. It also touches on updating the database schema and bringing up the Composer interface before the video wraps up.
5x Your Cursor AI Coding Quality With These Pro Tips Transcription
00:00 - 00:30 Did you know that most people are only tapping into 20% of Cursor's true power? In this video I'll walk you through all of the pro workflows I've discovered with Cursor's most powerful tool, Composer. These insights combine my months of Cursor usage and a decade of software engineering experience, and they're sure to take your Cursor usage to the next level. If my words sound a bit scrambled today, it's because I bit my tongue earlier, but don't worry, the content is still sharp. Welcome back to the channel, let's dive in. The app I'll be demoing today is a
00:30 - 01:00 weekend project that I've worked on. It's a podcast transcript editor that allows people to directly edit audio transcripts without having to deal with waveform editing. Just implementing those features, with a lot of the sound manipulation, the transcript editing and display, and a lot of the UI finessing, this project is already 2,700 lines of TypeScript. It's already a sizable project to work with, way beyond the toy projects you see in most other AI videos. What we're going to do today is add in more fine-grained
01:00 - 01:30 permission control to allow users to only see their own transcripts without accessing other users', as well as adding the capability to invite others to edit a transcript together with you. Definitely not an easy feature to add, but we'll see how we can leverage Composer to help us through this whole step. First of all, you need to put your product hat on and think, because with any major feature implementation, the last thing you want to do is give the AI a super vague instruction and then just
01:30 - 02:00 let it hash it out, because the bigger the step you ask AI to take, the more likely it is to make mistakes, and the more likely it's going to create annoying code that's very hard to debug. The best practice is to break the task down into multiple smaller, testable steps that you can implement and prompt the AI with separately. Gladly, we don't have to do that manually; we have AI to help us actually do that. Here's a prompt I've given it to break that task down for me: just giving it context that I want to break the prompt
02:00 - 02:30 down, making sure it knows my intent to split the work into multiple steps, and briefly describing the existing functionality. The most important thing here is to make sure you hit Command+Enter or just mention the codebase context directly on top, so that it knows to pull in all of the codebase context and can figure out what features already exist, what the interfaces are, the relevant fields, and the database features that are already there, so that it can leverage them directly without reimplementing anything.
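A breakdown prompt in that spirit might look roughly like the following. This is a paraphrase of the approach described in the video, not the exact wording used, and @codebase refers to Cursor's codebase-context mention:

```
I want to add fine-grained permission control to this app: users should only
see their own transcripts, and they should be able to invite others to edit a
transcript with them. Don't implement anything yet. Break this work down into
small, independently testable steps, and take the existing functionality in
@codebase into account so we reuse what's already there.
```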
02:30 - 03:00 This makes sure it has the full context before implementation. The moment you hit that, you can see it's gone through a ton of different parts of the codebase to pull out the relevant context. In our case, because we're adding permission control, it looked into the components and the database migrations, it looked into how the page is actually loaded, and a whole bunch of other things to add to the context. What it produces at the end is the breakdown; let's see: update the
03:00 - 03:30 database schema, update the transcript creation process, great, update transcript fetching, create the API for sharing transcripts, update the client side, add the sharing UI, implement invitation acceptance, update access control, add error handling and user feedback. Great, it's already broken down our huge task into ten much smaller
03:30 - 04:00 and manageable tasks. This is not to say you can't try to prompt AI to do one large task all at once, but it's just way more error-prone, and you end up spending a ton of time on debugging rather than on implementation work. It works mostly fine in the early stages of implementation, when the codebase is so small that it doesn't have that much context to really deal with, but the moment your codebase grows, it becomes easier and easier for AI to make even more mistakes. If you want the best
04:00 - 04:30 breakdown experience, you'll probably want to select o1-preview here. The only problem with o1-preview is that you have to pay 40 cents per invocation, but given that you only need to do this once every time you implement a large feature, you don't really need to deal with this too often, and I think that's a fair price to pay. Most of the time, Sonnet is already more than good enough. The thing to always keep in mind during AI implementation is to think of it just as your intern; it's never the architect.
04:30 - 05:00 You are the architect driving the design for the product, so it's always best to ask AI to plan out the work it's going to do before asking it to do the implementation directly. The clearer the plan, the better the implementation will be, and debugging will be so much simpler. Okay, let's go into the first step: update the database schema to allow the sharing. Let's bring up the Composer interface, create a new one, and then, given that I'm lazy, I'm just going to
05:00 - 05:30 add the codebase context so that it can figure out where it needs to pull things from. Every time you get Composer to generate code for you, it's always best to check how the diff is actually made. In our case, you can see it generated a database migration file; I'm using Next.js for dealing with this, and it created the up and down migrations, which all look correct. It's also added the shared_with field inside my transcript type, which gets passed to the front end. You can see there's an issue where this column is
05:30 - 06:00 a jsonb, but our custom type definition is only specifying a string array, so let's ask it to fix that. In this case, you can see it still forgot to actually change the jsonb handling into a string array, so let's make it do that explicitly. Okay, it's successfully generated here, and we can see the type field is now correct.
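To make the shape of that change concrete, here is a minimal sketch of the kind of migration and type update being reviewed. The table name, column name, and exact types are assumptions for illustration, not the code Composer generated:

```typescript
// Hypothetical up/down migration, expressed as raw SQL strings: add a
// shared_with column holding the IDs of users a transcript is shared with.
export const up = `
  ALTER TABLE transcripts
  ADD COLUMN shared_with jsonb NOT NULL DEFAULT '[]';
`;

export const down = `
  ALTER TABLE transcripts
  DROP COLUMN shared_with;
`;

// Matching application type: the jsonb column is surfaced to the front end as
// a plain string array, which is the mismatch being fixed in the chat above.
export interface Transcript {
  id: string;
  userId: string;
  title: string;
  sharedWith: string[];
}
```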
06:00 - 06:30 If we go back to the interface, let's close the chat to give us some more space and run our migration here. The migration runs successfully, and now the step is complete. It might look like a small step, but it makes the code super easy to deal with. It might use a bit extra of your premium credits, but trust me, it's worth it. Next part of the workflow: make sure you commit all of the code that AI has edited and that you've tested working directly into git. We want to add the database migration files and the types; one is not showing up just yet. We commit,
06:30 - 07:00 then done. Time for the next tip: given the size of the codebase, it's always good to give AI the context of what this project actually is, so that with the project context it can apply the changes accordingly without having to figure everything out from the codebase itself. You generally want to add at least a brief line describing the project context and what you're trying to implement inside the cursor rules. For those of you who haven't touched cursor rules, it's a file that you can create at the root of your repository,
07:00 - 07:30 and then all of the content within this file gets prepended to every one of your AI chats inside of Cursor, whether it's in Composer, inside chat, or inside the Command+K inline implementation that you bring up. This means that every single time, it's going to act with the instructions set out inside the cursor rules file, ensuring consistency across all of its implementations. After that update, let's also commit that to the codebase, so right now we have a clean working state.
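For reference, a cursor rules file of the kind described here might look something like this. The content is illustrative, not the actual file from the video:

```
This is a Next.js + TypeScript podcast transcript editor. Transcripts are
stored in a SQL database and rendered with server components. We are currently
adding fine-grained sharing: users may only access transcripts they own or
that have been shared with them. Prefer small, incremental changes, reuse
existing helpers instead of reimplementing them, and keep database access in
the existing data-access layer.
```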
07:30 - 08:00 Again, if we go back to the AI, let's do the second step: modify the getTranscript function to check if the current user has permission to access the given transcript; if not, it should return an unauthorized error. Now let's take that second step and implement it with Composer. Again, you want to press Command+N to create a new composer, paste in the instruction, and as always shove in the codebase context so we let Cursor figure out all the files it needs to access. This looks decent; let's
08:00 - 08:30 read through it. Here's the breakdown: it fetches the user ID and shared_with, great. Let's use the larger view; you can press Command+Shift+I to bring it up (if you're on Windows, I believe it's Ctrl+Shift+I). It returns the user ID and shared_with, which is our new data field, and it makes sure that the record either matches the user or matches the shared_with list.
08:30 - 09:00 The shared_with check here should also be using the user ID, so let's update that. Okay, let's go through it: the types change is still the same, great, and it now checks, yes, that the shared_with array includes our current session's user ID. Looks great, and we return the shared user; user ID and shared_with are all added. Okay, this code all looks good. Because we haven't added the UI bits, we won't be able to test on the front end yet, but we can still go and validate that the front end still loads as usual.
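Here is a minimal sketch of the ownership/sharing check described above. The function signature, field names, and data-access shape are assumptions, not the project's actual code:

```typescript
// A transcript may be read by its owner or by anyone it has been shared with.
interface Transcript {
  id: string;
  userId: string;
  sharedWith: string[]; // user IDs the transcript has been shared with
}

type LoadTranscript = (id: string) => Promise<Transcript | null>;

export async function getTranscript(
  id: string,
  sessionUserId: string,
  load: LoadTranscript, // injected so the sketch stays storage-agnostic
): Promise<Transcript> {
  const transcript = await load(id);
  if (transcript === null) {
    throw new Error('Not found');
  }

  const isOwner = transcript.userId === sessionUserId;
  const isSharedWithUser = transcript.sharedWith.includes(sessionUserId);
  if (!isOwner && !isSharedWithUser) {
    // Neither the owner nor an invited collaborator: reject as unauthorized.
    throw new Error('Unauthorized');
  }
  return transcript;
}
```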
09:00 - 09:30 Right, yeah, it's loading, all looks good, so we haven't broken anything. That's the best thing about implementing small steps: you rarely have any big errors to really debug. Now we repeat the last step: if we go into git and refresh the codebase changes, great, there's all of the stuff we just edited; let's add the code, great, committed. So we move on to step three: create a new function that fetches the transcripts owned by and shared with the user, and this
09:30 - 10:00 will be used by the UI. Create a new file, and as always we add in the codebase context and hit go. Okay, so we get a fetchTranscriptsForUser; this now gets all of the transcripts, either the ones they own or the ones that have been shared with them. Yeah, I think this code looks good; check out the code changes made, good, add the file, commit, cool. Okay, great, that worked really well.
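As a rough illustration of what such a function's query could look like, here is a sketch; the SQL, table, and column names are assumptions rather than the exact generated code:

```typescript
// Fetch every transcript the user either owns or has been invited to.
// The "?" operator checks whether the user ID appears as an element of the
// shared_with jsonb array.
export const FETCH_TRANSCRIPTS_FOR_USER_SQL = `
  SELECT *
  FROM transcripts
  WHERE user_id = $1      -- transcripts the user owns
     OR shared_with ? $1  -- transcripts that have been shared with them
  ORDER BY created_at DESC;
`;
```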
10:00 - 10:30 Let's skip forward a few steps just so that we can see some changes in the UI, and work on the server component for the transcript list. Let's copy the instructions, open Composer, create a new one, add in the instruction, mention the codebase, and go. Now we can see that this database query update and this database field are correct; you also want to show the user, so inside the transcript list page this all looks okay. As previously mentioned, make sure you
10:30 - 11:00 always use Save All rather than Accept All until you've made sure that the code works. It keeps the pending changes easily visible in the codebase, and the moment that you close the composer, all of the changes get reverted, which means it never affects your working state. Once you click on Accept, these green and red blocks are all gone and the changes are applied; that's the thing you want to be careful with if you don't want to risk losing your existing work. And given that we've added a new
11:00 - 11:30 page for transcripts, great, now we've got the transcript list showing everything, and it's correctly indicating what's owned by you. So I'm going to directly use my database tool to add in another transcript that's shared with me. Now that's updated, let's refresh, and there we go: you can see the one that's shared with me. That's good; we didn't have to do any amount of excessive debugging, the code just works. Now that we've validated that everything is working okay, we accept it all. If we go to the git page
11:30 - 12:00 and refresh, great, it shows the transcript list is updated and the transcript list component is here; let's add it and commit. Right now you can clearly see that by breaking down our actions into much smaller steps, the implementations are way more reliable, and even when problems do come up, they become very easy to debug. So, the next part of the workflow: it's now finally time for testing. I know many of you will be saying, oh man, AI has already generated all of this
12:00 - 12:30 code for me, do I really need to understand it, do I still need tests? Short answer: absolutely yes. When your codebase is small, when you're just working on a prototype, it's easy to spend maybe 20 or 30 seconds checking out all of the functionality of your app, but the moment it grows, it's essential to have tests to help you validate that your newly added AI features don't break any existing features, and to make sure that whenever you are making changes, you have the confidence that nothing else breaks. My
12:30 - 13:00 personal preference, especially at this stage, is to focus on a few light and fuzzy end-to-end tests that exercise the overarching flow and make sure the whole workflow still works. We can still use Composer to help us deal with this, and now we make use of another major feature inside of Composer: notepads. Notepads is this thing sitting on the top left; it essentially allows you to provide even further context on top of cursor
13:00 - 13:30 rules. Because cursor rules is context that gets attached to every single one of your AI prompts, notepads are the ones you selectively attach to the work you're implementing. In our case, I created a specific notepad for generating Playwright tests. For those of you who don't know, Playwright is an amazing end-to-end testing framework for the browser; if you haven't used it already, do check it out. Just know that it's a super convenient way to bring up browsers, emulate user actions, and check for UI changes, to make sure
13:30 - 14:00 that we're validating changes from a user's perspective rather than just from a system's core functionality perspective. So in my case, I've defined a whole bunch of rules for how I wanted to do my end-to-end test generation. You can also attach files to provide it additional context; in most cases I found this to be unnecessary, and right now Cursor hasn't really added the ability to let you add whole folders or dynamic groupings of files, so every time you have
14:00 - 14:30 to update the content it gets a little bit annoying, and I just do the lazy thing I mentioned: add the codebase on every single one of my implementations and let AI figure out all of the context it needs to pull out. So to make use of this notepad, we simply create a new composer, and then on top we mention the Playwright notepad, enter the instruction to generate the test, and as always add the codebase. We check out the code it's actually added: it goes to the transcript list page, uses the same auth JSON that
14:30 - 15:00 authenticates the user, great. It uses the test ID to wait for the transcript list to appear, okay, searches for the transcript items using test IDs, checks that each of the shared and owned transcripts exists, and then checks the transcript name, and everything exists, okay. This actually looks good, and if we go into the page itself, it's also added all of the relevant test IDs to help the test identify the page items. All looks good, let's hit save.
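A light end-to-end test in this style might look roughly like the following. The route, test IDs, label text, and storage-state path are assumptions for illustration, not the generated test:

```typescript
import { test, expect } from '@playwright/test';

// Reuse a previously saved authenticated session so the test skips the login flow.
test.use({ storageState: 'playwright/.auth/user.json' });

test('transcript list shows owned and shared transcripts', async ({ page }) => {
  await page.goto('/transcripts');

  // Wait for the list container, then look up individual rows by test ID.
  const list = page.getByTestId('transcript-list');
  await expect(list).toBeVisible();

  const items = list.getByTestId('transcript-item');
  await expect(items.first()).toBeVisible();

  // Check that both an owned and a shared transcript are rendered
  // (the label text here is a guess at what the UI shows).
  await expect(items.filter({ hasText: 'Owned by you' }).first()).toBeVisible();
  await expect(items.filter({ hasText: 'Shared with you' }).first()).toBeVisible();
});
```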
15:00 - 15:30 So now you want to make sure you're running the Playwright test UI. You can do this with npx playwright test --ui to start the command, and then you'll see a window showing up just like this. We'll see our end-to-end test showing up here; I just click run, and it opens up the test. You can see it loads the page, locates the different items it needs to check, finds
15:30 - 16:00 that everything relevant is visible, checks the page, and the test passes, just that simple. Wow, other than the apply error we just saw, we had no issues to deal with; still quite magical, right? Now I've already accepted the change and saved the file, so we can close this off, refresh the git page, and the change shows it's now added the new test. Great, and we commit again. So you can see that within that short span of time, we
were able to add in quite a few features, albeit slightly slower than if you were just providing one giant prompt for the AI to implement the whole thing, but we were able to do it in a much more repeatable, testable, and maintainable fashion, making sure every step of the work actually works. If you want more tips on how to maximize your usage of Cursor, check out my videos up there. I know I've also mentioned a couple of tools that I haven't dived into the details of today; if you want to find out more, let me know in the comments below. Until then, happy shipping, and I'll see you in the next one.