Innovative Uses of AI for Creative Content

Building Open Canvas With Memory

Estimated read time: 1:20


    Summary

    In this video from LangChain, viewers are introduced to Open Canvas, a UX that integrates memory into generating creative writing or code with a language model. The system's interactive features let users highlight and alter specific portions of text independently, and employ quick actions for customization, offering both efficiency and personalization in content generation. It uses a language model to generate and store reflections about the user's style and preferences, ensuring continuity and context across sessions. The video explores the technical architecture and operational details, emphasizing the system's adaptability and user-friendly design.

      Highlights

      • Open Canvas makes code and text generation intuitive and interactive. 🚀
      • Allows users to manage and remember style preferences and personal facts across sessions using reflections. 🧠
      • Users can highlight specific sections for targeted changes or opt for complete rewrite commands. ✍️
      • The architecture supports flexibility and adaptability, with path nodes managing different content types and requests. 🔄
      • LangChain’s system offers free access for users to explore and contribute to project enhancements. 🆓

      Key Takeaways

      • Introduction of Open Canvas by LangChain combines UX design with memory features for creative content generation. 📜
      • Highlighting and editing specific text is easy, enhancing user interactivity and control. ✂️
      • Open Canvas smartly generates reflections to remember user preferences and style for future tasks. 🧠
      • Quick Actions feature allows dynamic text modifications like translation and format changes. ⚡
      • Insight into the system architecture shows a robust and adaptable design for varied user needs. 🏗️

      Overview

      LangChain's Open Canvas is a game-changing UX designed to seamlessly integrate memory into content generation processes. Whether you're generating creative writing, code, or more, Open Canvas provides an interactive environment that intuitively remembers your style and adaptive preferences, making it a powerful tool for creatives.

        With its sleek interface, Open Canvas allows you to highlight specific text for focused modifications, or command a complete rewrite for broader changes. Thanks to its memory feature, your style and personal details persist across sessions, offering a personalized user experience that evolves with your input.

          Additionally, the back-end architecture supports a variety of functions—using a series of nodes to route requests and manage different content types. This video elaborates on the operational aspects and shows how users can take advantage of its functionalities to streamline content creation, further enhancing creativity and productivity.
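As a rough illustration of the node-based routing described above (and walked through in detail in the transcript below), the deterministic part of the path decision keys off which per-request state fields are populated. This is a minimal TypeScript sketch; the field and node names are paraphrased from the video, not taken from the actual Open Canvas source:

```typescript
// Hypothetical slice of the graph state; field names paraphrased from the video.
interface OpenCanvasState {
  highlighted?: { startIndex: number; endIndex: number };
  // Text quick actions
  language?: string;
  artifactLength?: "shortest" | "short" | "long" | "longest";
  regenerateWithEmojis?: boolean;
  readingLevel?: string;
  // Code quick actions
  addComments?: boolean;
  addLogs?: boolean;
  portLanguage?: string;
  fixBugs?: boolean;
}

// Deterministic portion of the "generate path" node: if certain state
// fields are populated, the next node is known without calling an LLM.
function generatePath(state: OpenCanvasState): string | null {
  if (state.highlighted) return "updateArtifact";
  if (state.language || state.artifactLength || state.regenerateWithEmojis || state.readingLevel) {
    return "rewriteArtifactTheme";
  }
  if (state.addComments || state.addLogs || state.portLanguage || state.fixBugs) {
    return "rewriteCodeArtifactTheme";
  }
  return null; // no deterministic match: fall through to the LLM-based router
}
```

Returning `null` here stands in for the non-deterministic case, where the video describes handing the decision to a small LLM with structured output.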

            Building Open Canvas With Memory: Transcript

            • 00:00 - 00:30 What's up everyone, it's Brace from LangChain, and in this video we're introducing Open Canvas, which is a new UX we built for using LLMs to generate any sort of creative writing content or code, with memory included. As you can see right here, I send in a message, it moves my chat window over to the left, and then I get this nice markdown editor appearing on the right. Here I can do cool things like highlight specific text and then send requests which will apply only to that specific text. So I can say "change to Amazon," I hit submit, and as you can see it updated
            • 00:30 - 01:00 only that specific text to Amazon. I can also use the chat window to paste in an entirely new request, saying "write me a simple Python script that takes a web page URL, scrapes it, and returns its contents." As you can see, it understood that I asked for a Python script, so it changed the editor on the right to render Python, and also generated that text. I can also do what I did before, where I highlight some text, say "rename to get page contents," hit submit, and as you can see it's going to
            • 01:00 - 01:30 update only this specific highlighted text to what I requested. I can also go back and select my original text here and render that, or once again go back to this Python text. As you can see in the top right, I also have this Reflections button. What this essentially does is run a reflection agent on all of my generated artifacts, which is what we're calling this content on the right, and my chat window. It takes that and uses an LLM to generate these style guidelines, content rules, and
            • 01:30 - 02:00 memories about me to then include in future generations. So if I click on this and search for reflections, we can see it generated a handful of reflections about my style, and then content like my name, where I work, and so on. These will persist across sessions. I can do cool things like open up a new chat and just say, "what's my name, occupation, and one thing I dislike?" As you can see, we're in an entirely new chat, so this is not going to have access to any of the chat history or artifacts we previously
            • 02:00 - 02:30 generated, but since it has access to those reflections in memory, it knows my name is Brace, I am an AI engineer at Amazon (because I updated that text to say Amazon), and I strongly dislike traffic during my morning commute, which is what I asked it to write the blog post about in the initial question. So now that we have a high-level idea of what this app does, let's get into the nitty-gritty and talk about the architecture and how it's actually built. As you can see from our nodes that are automatically generated, we have a few main areas which it can route to, so
            • 02:30 - 03:00 the generate path node is always called as the first node, and this is going to generate the path that our graph will take based on some inputs, which can either be some deterministic inputs (which we'll get to in a little bit), or sometimes we'll use an LLM to decide what node to route to. If it wants to do anything with the artifacts, you see we have five different nodes for artifacts: we have rewrite artifact, update artifact, generate artifact, and these two around the different themes, one for rewriting the artifact text theme and one for rewriting the artifact code theme, and these are triggered if
            • 03:00 - 03:30 you use any of the quick actions, which we didn't actually talk about in the intro, but we can look at that right now. So if we go here and ask it to "write me a poem about dogs, keep it very short," we see it's going to write me a nice little poem right here, and then in the bottom right I get this quick action bar. If I click on this, you can see that we have four different options. This is translate, so I can translate to any of these languages; I can also change the reading level, so I can make it a PhD reading level,
            • 03:30 - 04:00 I can make it a child's reading level, and then we also have this fun one for making it sound like a pirate. I can change the length, so I can make it longest, shortest, short, and so on. And we can do this right now, so I'll say make it longest. What it's going to do is pass in this field in my state called artifact length, and then in my initial generate path node it's going to see that artifact length is populated, and it's going to route to the rewrite artifact theme node. It does this for any of the state fields which correspond with the quick
            • 04:00 - 04:30 actions for text, and those would be language, artifact length, regenerate with emojis, or reading level. If any of these state fields are populated, we know that a user used one of the quick actions and we need to route to the rewrite artifact theme node. The same goes for the rewrite code artifact theme node, but those have the quick actions of adding comments, adding logs, porting to a different programming language, or fixing bugs, and if any of those are passed in, we know we want to route to rewrite code artifact theme. What these two nodes do is pretty
            • 04:30 - 05:00 similar: they have a unique prompt for each quick action you could take, and based on the populated quick action in the state, there's a switch case that uses that specific prompt. So for rewriting like a pirate, the prompt is just saying, you know, take this artifact and rewrite it to sound like a pirate. If you're using the code quick actions and, say, you wanted to port it to PHP, it's going to say, given this code, port it to PHP. The prompts are a little more detailed than that, but that is the overall theme. If you didn't
            • 05:00 - 05:30 populate any, or if you didn't use any of the quick actions, then we also have the ability to highlight, which we did see in the intro. So if I were to highlight this and say "uppercase" and send it, it's going to know that I highlighted this text, because that's going to be populated inside of the highlighted field here. What this does is it basically just takes the start index and the end index of the highlighted text, adds that to the state, and sends it to the LLM, and the LLM says, okay, this highlighted state field is populated, that means I know I need to
            • 05:30 - 06:00 route to the update artifact node. This node is specific to updating the highlighted text, and it's just some other prompting around saying, okay, given this specific highlighted text, plus some context before and after (because the LLM needs to know more context than just what you highlighted), do what the user asks. It's prompted very specifically to only update what is highlighted. Those are the three deterministic routes that generate path can use. If none of those state fields are populated, then we can either
            • 06:00 - 06:30 generate an entirely new artifact, rewrite the artifact, or finally respond to the query. Generating and rewriting the artifact are exactly what they sound like: if I say "write me a Python script for logging hello world" and I submit that, it's going to know that I want to generate a new artifact, because that's clearly a new request. I can also say something like "wrap it in a function," and it's going to pass this current artifact, along with some context like my input and some previous chat messages, to an LLM. The LLM is going
            • 06:30 - 07:00 to say, okay, we probably want to rewrite the artifact, so then it's going to rewrite the artifact. And finally, I can also just ask my chat window some simple requests, like, you know, "how are you?" I send that and it's just going to reply here, and it's not going to actually update the artifact. That's because the LLM knows we don't need to touch any artifacts; we just need to respond to the query. After we either update an artifact or respond, we will then generate a follow-up, and that's what these messages are right here; these are the follow-ups that are generated. After that, we're going to reflect, and this is what generates the
            • 07:00 - 07:30 memories that are kept in our shared store, so that in future generations the LLM has context into, you know, who I am, what my name is, anything else I've told it, and also some style guidelines. For example, after I said "wrap it in a function," it's probably going to add a style guideline saying, for writing code, wrap your code in a function. Then, whether it updated an artifact or responded to a query, it'll always go to the clean state node, and that's because we have all these different state fields which only really matter for individual requests,
            • 07:30 - 08:00 right, so highlighted, or language, or artifact length, those are all quick actions or things which don't need to persist to the next thread. Clean state just removes those from the current state, so that in my next request the LLM doesn't think I'm highlighting stuff. We also have the ability to click on this icon right here, which is going to take us to the LangSmith trace. This is populated for every single request, so you can always inspect exactly what's going on under the hood and get insight into, you know, why the
            • 08:00 - 08:30 LLM generated some text, or how it generated it. If we click here, we see that I clicked on this function for "wrap it in a function," which means that the LLM is going to rewrite the artifact, which is that node we just saw. If we look at the prompt, we can see that it's passed the current artifact, and also some reflections, which are the user facts and style guidelines that it generated after the fact using this reflect node. The reflect node is essentially calling an entirely different agent, this reflection agent,
            • 08:30 - 09:00 super simple: it just has one node, reflect, and it's passed the current artifact and the chat history, and there's some prompting saying, you know, here are all the current reflections and facts you've generated about the user, here's the current artifact, here are some messages; regenerate all of them, combine any duplicates. Then it's going to update our shared store to persist those values for the next generation, so that our LLM always has context about who we are. We can see that's triggered here; it's basically just calling a subgraph
            • 09:00 - 09:30 using the LangGraph SDK to invoke that, and then we're also passing in some fields to say, you know, if you get duplicate requests, cancel the original and only use the most recent, and then also wait (I think I set it to 30 seconds), so that we're not just reflecting on every single request, because that would get pretty pricey if the user is going back and forth with the LLM. So what it does is it delays 30 seconds, and if there have been no requests for 30 seconds, it goes and reflects. We can see all those reflections here, and then the UI will also let us
            • 09:30 - 10:00 clear these reflections so that we can start fresh. Speaking of starting fresh, if I'm here and I don't want to start with a chat window, right, let's say I'm writing some code in my IDE and I want to use Open Canvas to iterate on it, I can select this quick start for code, and I can do the same for text. If I select quick start code, I just need to specify a language like TypeScript, and then it opens up a brand new, totally clean code editor, with a chat window on the left, and then I can paste in some
            • 10:00 - 10:30 code here, right, or I can just modify this. So let's say I modify it to something very simple, you know, import React from react. Our LLM is always going to be passed this, because it's the current artifact we're viewing. So if I then ask about what I just wrote in my code window, since this artifact is passed to the LLM, it knows exactly what I've been writing, so it'll always have context into exactly what you're editing. I can also close this and, you know, reopen it. So if I had multiple ones, like if I said "write me a poem," it's going to generate an entirely new
            • 10:30 - 11:00 markdown editor right here, and then I can close this out, of course, and go back to my code or go back to my poem. We can now see my reflections; I think they have been generated for this. Okay, no, we need some more time, but in a few seconds it'll generate some reflections about, you know, this user writes in React and whatnot. So now that we know how it works at a high level, let's take a look at some code and see exactly how we're doing some of the more complex things. So
            • 11:00 - 11:30 a lot of this code is kind of redundant, where it's just using different prompting techniques to generate different text around your artifacts or code, so I'm going to take a look at just two of these nodes which do some cool things, like using LLMs to route the path, and that's what we're going to look at first. So we're going to open up the first generate path node. As you can see, all the nodes for the main Open Canvas agent are inside this nodes directory, and the file names are the same as the function names. So as you can see here, we have
            • 11:30 - 12:00 these three if statements, and these are the deterministic routes I was talking about in the beginning. The first is highlighted: of course, if you've passed in this highlighted state field, we know the user has highlighted text, and we're going to want to route to the update artifact node, which only handles updating highlighted text. Next, if you pass in any of the state fields around the text quick actions, which are changing the language, changing the artifact length, regenerating with emojis, or changing the reading level, we know we want to route to the rewrite
            • 12:00 - 12:30 artifact theme node. And finally, we have the same for the code quick actions, which are add comments, add logs, port language, and fix bugs, and those will of course go to the rewrite code artifact theme node. You can see we're always populating this next state field, and that's because generate path will always route to this route node conditional edge, and this route node conditional edge just says: if state.next is falsy, throw an error (this should never be the case), and if not, we use the Send class to kick off the next node dynamically,
            • 12:30 - 13:00 passing in the node name, which is the next state field, and then the current state. We also have this clean state node here, which, as we can see, will always run before the end, and that just has these default inputs that clear all the state fields that should not persist to the next iteration in the graph, like next, you know, highlighted, fix bugs, and so on. Okay, so now that we've talked a little bit about our generate path deterministic routes, we can scroll down
            • 13:00 - 13:30 and see how we're using LLMs to dynamically route based on an input query. The first thing we do is extract the selected artifact: we use the selected artifact ID if it's populated, and that's populated when the user, you know, has selected an artifact like this. But let's say they don't have an artifact selected; in that case we just take the last artifact in the list, which is going to be the most recently generated or updated artifact. After that, we get a list containing all of the other artifacts
            • 13:30 - 14:00 that are not the selected one, because we want to include those in the context for the LLM, and then we format our prompt. We have this route query prompt, which is pretty long, and it's just saying, you're an assistant tasked with routing the user's query, and then we give it some context about the different options it can choose. We give it the recent messages in the chat history, and for this we just take the most recent three messages and format them into a nice string. We also give it the current artifact, or sorry, we have all
            • 14:00 - 14:30 of the artifacts the user has generated in the history, and we're using this format artifacts util, which essentially just takes 500 characters, because the LLM typically will not need the entire artifact for this generation. Like, say you're asking to write a blog post and you have, you know, five different blogs in your history; you don't want the LLM to have, you know, 30 paragraphs in the context. It doesn't need that. And then finally we end it with the selected artifact, so the LLM knows exactly what
            • 14:30 - 15:00 artifact you're looking at right now, if you are looking at an artifact, so that it can use that as proper context for routing your query. Once we populate all that, we use GPT-4o mini as a small LLM, and then we bind a tool using withStructuredOutput that has two fields: the route, which is either update artifact, respond to query, or generate artifact (we talked about what those do in the beginning), and then artifact ID. The LLM should populate this if it wants to update the artifact, and this format artifacts util function
            • 15:00 - 15:30 will add the ID in context, so the LLM knows exactly what artifact to populate the ID for. After that, we invoke the model. Once again, we're using a small model so it runs quickly and we can quickly route to the next node. Then we say, if the route is update artifact (the update artifact node name is actually already taken by the node which updates the highlighted text), we just map that to the rewrite artifact node, and that's the name of the node
            • 15:30 - 16:00 which will rewrite the entire artifact, also making sure the selected artifact ID is populated with whatever artifact ID the LLM included in the generation. If not, then it will either be respond to query or generate artifact, and we just pass that via the next field. Then, of course, as we saw before, this goes to the route node, and it will use the Send class to invoke that next node. So now that we've talked about the initial router, let's go into how we're actually generating
            • 16:00 - 16:30 these reflections and storing them in our shared store. To do that, you're going to want to open up the reflection directory and then the index.ts file. This is also relatively simple. What we're doing is extracting the store, which is a new API we added in the LangGraph API, from the config, and we're using this new type we added in LangGraph called LangGraphRunnableConfig, which just adds the store field to the normal RunnableConfig you're probably familiar with if you've used LangGraph or LangChain. This util function just
            • 16:30 - 17:00 verifies the store is present, and if you're using LangGraph Studio or LangGraph Cloud to deploy or develop your application, you will always have a store included. Then it gets the memories, or reflections, from the store. In order to do this, we have a namespace where these reflections are stored, and they're namespaced by "memories" and then the current assistant ID. Once again, if you're using LangGraph Cloud or LangGraph Studio, you will always have an assistant ID populated. After that, we have a key, which is the reflection key; this has to be
            • 17:00 - 17:30 a unique key in our store, in order to store these specific reflections in the database. We then call store.get and we get the memories back. If memories are populated, we format them into a nice string, or we say "no reflections found." We then define the tool schema for generating new reflections, where we have two different fields: style rules and content rules. As you saw in the dialog I opened up in the UI, we had two sections for style rules and content rules, and these break out things like, you know, the
            • 17:30 - 18:00 user likes his code wrapped in functions, and that would be stored in the style rules, while for content rules it would be things like "the user's name is Brace." After that, we use Claude 3.5 Sonnet, because we want a powerful model that is able to perform these somewhat complex reasoning tasks, and it's running in the background so we don't need to worry about speed. We then bind the schema to it and give it a name of generate reflections, and then we have another long prompt, which of course you can look at in the open source repository, but this is just
            • 18:00 - 18:30 giving some prompting around: here are all of the reflections you've already generated, here's the chat history in this conversation, and here's the current artifact the user is looking at; given all this, regenerate all these reflections. Once we get the result of this, we create a new memories, or new reflections, object containing the new style rules and content rules, and then call store.put using the same namespace and same key. That's going to replace the values that were previously in our store with
            • 18:30 - 19:00 these new values, so that they will be included in all future generations. So that is the high-level architecture and implementation of how we built Open Canvas, which is somewhat inspired by OpenAI's canvas, but we do different things like adding memory, plus some other nice UX features which we believe give it a better interaction. Of course, it's open source, and if you want to contribute, we have a few different issues here with requests that we would like to add, like adding some evals
            • 19:00 - 19:30 for memory, so the memories can be better; we want better memory prompting and whatnot, and then some updates to the markdown editor. So I hope to see you all interact with this repo: fork it, improve it yourself, or contribute back to it. And of course it's going to be deployed for free in production, so if you want to interact with it, go ahead. I will see you all in the next video.
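To make the highlighted-text mechanism from the transcript (around 05:00–06:00) concrete: the update-artifact node is described as receiving a start index and end index in state and rewriting only that span, with a little surrounding context for the model. A minimal sketch; the helper names and the context-window size are illustrative, not from the repo:

```typescript
interface Highlight {
  startIndex: number;
  endIndex: number; // exclusive
}

// Replace only the highlighted span of the artifact, leaving the
// surrounding text untouched, as the update-artifact node is described doing.
function applyHighlightedUpdate(artifact: string, highlight: Highlight, replacement: string): string {
  const before = artifact.slice(0, highlight.startIndex);
  const after = artifact.slice(highlight.endIndex);
  return before + replacement + after;
}

// The video notes the LLM is also shown some context before and after the span:
function highlightContext(artifact: string, h: Highlight, window = 100): { before: string; after: string } {
  return {
    before: artifact.slice(Math.max(0, h.startIndex - window), h.startIndex),
    after: artifact.slice(h.endIndex, h.endIndex + window),
  };
}
```

In the real system the replacement text comes from the LLM; here it is just a string argument.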
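The clean-state node described at 07:30–08:00 (clearing per-request fields like highlighted or artifact length so they don't leak into the next request) might look roughly like this; the state shape is a simplified guess, not the actual Open Canvas schema:

```typescript
// Simplified, hypothetical graph state: durable fields plus per-request fields.
interface OpenCanvasGraphState {
  messages: string[];   // durable: chat history (simplified)
  artifact?: string;    // durable: current artifact (simplified)
  // per-request fields cleared by cleanState:
  next?: string;
  highlighted?: { startIndex: number; endIndex: number };
  language?: string;
  artifactLength?: string;
  fixBugs?: boolean;
}

// Reset every per-request field so the next turn starts clean; durable
// state (messages, artifacts, reflections) is left untouched.
function cleanState(state: OpenCanvasGraphState): OpenCanvasGraphState {
  return {
    ...state,
    next: undefined,
    highlighted: undefined,
    language: undefined,
    artifactLength: undefined,
    fixBugs: undefined,
  };
}
```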
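The delayed reflection described at 09:00–09:30 (wait roughly 30 seconds, cancel the pending run if a newer request arrives, reflect only once the user goes quiet) is a classic debounce. A sketch, with the delay configurable since the video only says "I think I set it to 30 seconds":

```typescript
// Debounce the reflection run: each new request cancels the pending one,
// and reflection only fires after `delayMs` of inactivity.
function makeReflectionScheduler(reflect: () => void, delayMs = 30_000): () => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return function schedule(): void {
    if (timer !== undefined) clearTimeout(timer); // cancel the original request
    timer = setTimeout(() => {
      timer = undefined;
      reflect(); // no requests for delayMs: run the reflection agent
    }, delayMs);
  };
}
```

In the video this cancellation behavior is handled by the LangGraph SDK when invoking the reflection subgraph; this sketch only shows the timing logic in isolation.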
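The format-artifacts util mentioned at 14:00–14:30 truncates each artifact to about 500 characters before it goes into the router's context. A hypothetical version (the real util also handles IDs and other metadata):

```typescript
interface Artifact {
  id: string;
  content: string;
}

// Keep only the first `maxChars` characters of each artifact when building
// router context: the router rarely needs the full text, and this keeps
// the prompt small when the history holds many artifacts.
function formatArtifacts(artifacts: Artifact[], maxChars = 500): string {
  return artifacts
    .map((a) => `ID: ${a.id}\n${a.content.slice(0, maxChars)}`)
    .join("\n---\n");
}
```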
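The structured-output router described at 14:30–16:00 returns a route plus an optional artifact ID, and the transcript notes a quirk: "update artifact" from the LLM is remapped to the rewrite-artifact node, because the update-artifact node name is already taken by the highlighted-text node. The post-processing might look like this (type and node names paraphrased):

```typescript
// Shape of the structured output the video describes binding to GPT-4o mini.
interface RouteDecision {
  route: "updateArtifact" | "respondToQuery" | "generateArtifact";
  artifactId?: string; // populated when the LLM wants to update a specific artifact
}

interface RoutedState {
  next: string;
  selectedArtifactId?: string;
}

// "updateArtifact" from the router actually means "rewrite the whole artifact":
// the updateArtifact node is reserved for highlighted-text edits.
function applyRouteDecision(decision: RouteDecision): RoutedState {
  if (decision.route === "updateArtifact") {
    return { next: "rewriteArtifact", selectedArtifactId: decision.artifactId };
  }
  return { next: decision.route };
}
```

The resulting `next` field is what the route-node conditional edge then uses to dispatch (via the Send class, per the transcript).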
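Finally, the store semantics described for reflections at 16:30–18:30 (namespaced by "memories" plus the assistant ID, a fixed reflection key, and store.put replacing whatever was there) behave like a nested map. This toy in-memory version only mirrors the get/put surface the video mentions, not the real LangGraph store API; the assistant ID is a made-up example:

```typescript
// Toy in-memory stand-in for the shared store described in the video.
class MemoryStore {
  private data = new Map<string, Map<string, unknown>>();

  private nsKey(namespace: string[]): string {
    return namespace.join("/");
  }

  get(namespace: string[], key: string): unknown | undefined {
    return this.data.get(this.nsKey(namespace))?.get(key);
  }

  // put replaces whatever was stored under the same namespace and key,
  // which is how regenerated reflections overwrite the old ones.
  put(namespace: string[], key: string, value: unknown): void {
    const ns = this.nsKey(namespace);
    if (!this.data.has(ns)) this.data.set(ns, new Map());
    this.data.get(ns)!.set(key, value);
  }
}

// Usage mirroring the transcript: namespace = ["memories", assistantId], a fixed key.
const store = new MemoryStore();
const namespace = ["memories", "assistant-123"]; // assistant ID is hypothetical
store.put(namespace, "reflection", {
  styleRules: ["wrap code in functions"],
  contentRules: ["the user's name is Brace"],
});
```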