n8n - LangChain 🦜️ Integration - Build great LLM Applications with drag and drop
Summary
In this video, Rohan-Paul-AI demonstrates how to integrate LangChain into n8n, a powerful no-code automation tool that lets users leverage large language models with simple drag-and-drop functionality. He focuses on the LangChain integration capabilities within n8n for simplifying complex workflows involving large-scale text processing, which traditionally required extensive coding. The video provides a walkthrough of using n8n to perform tasks such as connecting to Google Drive, downloading PDFs, converting them into vectors, and interacting with them using Pinecone and OpenAI, demonstrating its key features and ease of use without needing to write a single line of code.
Highlights
Discover how n8n's LangChain integration brings the power of LLMs to your fingertips with no code!
Transform massive PDFs into actionable insights with simple drag-and-drop functionality!
Effortlessly execute complex workflows like vector storage and querying with just a few clicks!
Key Takeaways
n8n integrates LangChain for no-code workflows with large language models.
Leverage n8n to interact with complex data like PDFs and vector stores easily.
The integration supports tasks such as summarizing, Q&A, and more with LLMs.
Overview
In this exciting presentation, creator Rohan-Paul-AI showcases the remarkable integration between n8n and LangChain. This combination means you can harness the power of large language models while bypassing the intricacies typically involved in coding. It's all about simplifying tasks like interacting with PDFs and using AI to its fullest potential, making automation accessible to everyone.
The video meticulously walks through setting up n8n with LangChain tools. It starts with connecting to data sources like Google Drive, proceeding to download and process data, converting it into vectors using Pinecone, and utilizing OpenAI for various NLP tasks. Each step showcases how n8n's drag-and-drop feature streamlines what would ordinarily be a tedious setup, allowing even those with limited technical skills to create sophisticated automated processes.
With the framework provided by n8n and LangChain, possibilities become almost endless. Whether you're summarizing massive documents, crafting question-answering systems on complex data sets, or optimizing workflows with AI, the integration offers a seamless, user-friendly experience. So whether you're a seasoned coder or a newcomer to automation, this video proves you can execute powerful applications with ease!
n8n - LangChain 🦜️ Integration - Build great LLM Applications with drag and drop Transcription
00:00 - 00:30 Hey everyone, how are you? In this video I'm going to talk about a great automation and integration tool for your coding workflow named n8n, and specifically I'm going to cover the LangChain integration with n8n, so basically bringing in LangChain as a language agent. It's an awesome ecosystem around all the large language models: you no longer have to write code, you can pretty much just drag and drop to implement the awesome workflows
00:30 - 01:00 of the LangChain modules. For example, you can chat with your PDF, summarize multiple web pages, get the gist of YouTube videos, do question answering on your vector databases, and many, many more things, just by dragging and dropping with n8n. Right now I'm looking at their official website, and you can see that on GitHub they have a huge 33,000 stars, and there are so many clients, so many big corporate names, who have already
01:00 - 01:30 integrated n8n into their workflows, so you can just go through the site. But remember, in this video I'm going to focus mainly on the LangChain integration, because n8n has been in business for quite some time and many of you probably already know how to integrate Google Drive or Google Sheets or a plethora of other applications for automated execution in your coding workflow. In this video I'm going to talk about their very new integration
01:30 - 02:00 with LangChain, so that you can use the power of all the large language models automatically, without writing any code. So, this is their GitHub; you can actually go through the source code if you are really interested to dive deep. First you need to install it on your local machine. Of course, you can run it directly in the cloud, but in that case you have to sign up and create an account, and I think you may have to
02:00 - 02:30 provide your credit card details for a free trial. In this case I'm just going to install it locally, and their documentation is pretty exhaustive: just go to the installation section of the documentation, and there are quite a few options. You can use npm, you can use Docker, or you can use a server setup. For npm, just for trying it out, you can run npx n8n; that's a simple
02:30 - 03:00 command, that's all you need to run. For installing globally you do npm install n8n -g, and then you can launch it by running the command n8n. So it's pretty simple. In this particular case I have installed it with Docker, because currently I'm running it on my Windows 11 machine and I have Docker Desktop, and
03:00 - 03:30 with just two commands you can launch it and start working on it directly. Remember, when you run these Docker commands on your machine, they will download all the required images, start your container, and expose a port, 5678. Then you just go to localhost:5678 and start running n8n. So in this video the plan is to implement a couple of use
03:30 - 04:00 cases with n8n, and the first use case is what you are currently seeing in this picture. It's a running n8n instance on my local machine, served on localhost as you can see right here. So basically this is the whole diagram of what I'm doing here: I'm integrating LangChain, and this is a classic example, the most popular industrial application
04:00 - 04:30 of LangChain, which is chatting with a PDF. I have a long PDF, so you cannot just pass it to ChatGPT or the GPT-3.5 or GPT-4 API, because it needs a very long context window; it probably has 200,000 or 300,000 words or tokens. So you have to bring in LangChain to chat with this PDF and get
04:30 - 05:00 the answers to your questions from it. Basically, here I have integrated my Google Drive, because the PDF is in Google Drive; then I'm downloading that PDF from Google Drive through the n8n application; then I am converting the PDF into a vector store and storing those vectors in Pinecone, which is one of the leading providers of vector stores; and then I
05:00 - 05:30 am passing that to OpenAI embeddings, together with a binary input loader and a document loader, which will chunk my whole document into many different chunks, because the OpenAI API cannot read the whole document at once, like I was saying, so you have to dissect it into many chunks. Then, once all this is done, I am asking questions on that
05:30 - 06:00 document and getting answers from it. So this is the whole workflow, and that's what we are going to build now. All right, to create a new workflow, just go to the Workflows tab and then Add Workflow, and you have a completely empty workflow. So let's first create it, and I'm choosing LangChain AI nodes; this is the new addition that n8n made. So just click here,
06:00 - 06:30 and the first thing, like I said, since I want to chat with a PDF, I need to get the PDF first. I've already uploaded the PDF to my Google Drive and now I need to download it, so first I need to integrate Google Drive here. Just search for Google Drive, and the operation I need here is Download File; as you can see there are so many things you can do with
06:30 - 07:00 Google Drive, create a shared drive, get a shared drive, upload a file, but here I need Download File. All right, now here are a couple of things you need to remember to integrate your Google Drive. The first thing you need is to set up your Google Cloud account, and here you can see a Parameters tab and also a Docs tab. The Parameters tab is where you actually have to work and put in your Google Drive account details,
07:00 - 07:30 OAuth2 API, etc., but if you are doing it for the first time, I really recommend you go to the docs and read them; they're pretty exhaustive. Basically what they are saying is that for integrating your Google Drive with n8n you need a few things: a Google Cloud account and a Google Cloud Platform project, those are the two main things, and then you have to set up OAuth and create the credentials. All of these also direct you
07:30 - 08:00 to the relevant Google Cloud documentation, so I just followed the links from the n8n documentation, and they take you to the official docs of Google Workspace or Google Cloud. Remember, if you have never worked with Google Cloud, you first have to activate your account, which will need a debit or credit card, and you will get 90 days of free credit worth $300;
08:00 - 08:30 you will not be charged anything at all during those 90 days for your Google Cloud, so nothing to worry about. Then the n8n documentation says they recommend OAuth2 instead of a Google service account; there are two ways to establish the credentials, one is OAuth2 and the other is a Google service account. I initially set up the service account, but it did not work properly, and that was kind of expected, because the n8n documentation
08:30 - 09:00 clearly says they recommend OAuth2, so that's what I finally set up. You just follow the instructions here; it's a long page, but actually it's quite simple. You go to your billing account, then select the OAuth setup, and it automatically takes you through the next steps, so nothing to worry about; you just follow Google Cloud's step-by-step process. Once you set up OAuth2 you will get your client ID and client secret; all those things will
09:00 - 09:30 be needed when you come back to your n8n app and choose Create New Credentials. That's where you're going to need Google Cloud's client ID and client secret, and there you just put them in. As soon as you do, you also put in the relevant user email, because after you enter these new credentials you have to sign in from here, and for that you have to choose a particular Google email ID, that is a Gmail ID, and that Gmail ID should
09:30 - 10:00 match the one from when you set up your OAuth2, because during the OAuth2 setup there's a particular step where you have to give the user email that will use this OAuth2 in the client app, and that is the email you have to give when you create your new credentials here. Anyway, that may take you probably 10 to 15 minutes to
10:00 - 10:30 set up. In my case I have already set it up, so my Google Drive account is already connected here. So I choose the resource as File, then the operation as Download, because I'm going to download a PDF file, and then I choose the file From List; there are other options as well, By URL or By ID, since in Google Drive each and every file and folder has a URL and an ID, but here I'm simply choosing From List. As soon as you click here, it will
10:30 - 11:00 automatically fetch the files that I have in my Google Drive. So here, this is the research paper that I wanted to chat with using LangChain, so I choose this, and that's it; now let's just click outside. Note that I have not yet executed it, because I'm going to execute it after I set up the entire thing, but if you want, you can execute it even now by
11:00 - 11:30 just clicking on this button. Okay, so now, proceeding further, as you can imagine, I am building one chain after another; that is, this will be executed and then some other execution will go on. Okay, so for the next one, I want to add the vector store; that is, the whole PDF that I just
11:30 - 12:00 downloaded from my Google Drive needs to be converted into vectors, and those need to be stored. For that, let's go to Vector Stores, and there are quite a few options here: In Memory Vector Store, Pinecone Insert, Pinecone Load. Remember, they are still building this, so you will very soon probably see many other vector store options, like Weaviate and so on. For now I'm going to choose Pinecone Insert. All right, now here again,
12:00 - 12:30 for the first time you have to set up your credentials, and here the credentials are much, much easier compared to Google Cloud, because all you need is a Pinecone account API key and that is all. In my case I have already set up the API key, so that's what I'm choosing here, but I'll just show you Create New Credentials: this is the screen, and here the API key is the one you will get from Pinecone,
12:30 - 13:00 and the environment, which is actually the GCP region where it's running; both of these you will get from Pinecone. So let's quickly go to my Pinecone screen here. This is my free tier of Pinecone, and as you can see there's an API Keys tab where you will get your API key. The first thing you have to do is create an index, and when you try to create an index you have
13:00 - 13:30 to give it a name and choose a dimension. For the dimension I chose a very standard one, 1536, which is the vector dimension used for creating those vector stores, and for the metric I left it at cosine, the default; you can choose dot product or Euclidean in different situations, but cosine is the most common one for calculating the distances between vectors. This is the metric by which two vectors
13:30 - 14:00 will be compared to decide whether they are similar or not. All right, now when you create a new index, whatever name you give it should match exactly what you enter here, because here as well you have to give your Pinecone index. Also, from these indexes you can see the environment; it automatically chose
14:00 - 14:30 asia-southeast1-gcp-free, and that's what you have to put in here when you are entering your credentials for Pinecone. By default it has chosen us-central, but you have to change it to whatever you see in the Pinecone console. Okay, after you input those two, your credentials will be ready, and your Pinecone will actually be ready.
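For reference, the same index setup can be done programmatically. Here is a minimal sketch using the pinecone-client (v2) style API of that era; the API key, index name, and environment string are placeholders you would replace with your own values from the console.

```python
# Rough Python equivalent of the index setup done in the Pinecone console;
# the API key, index name, and environment here are placeholder assumptions.
import pinecone

pinecone.init(
    api_key="YOUR_PINECONE_API_KEY",
    environment="asia-southeast1-gcp-free",  # must match what your console shows
)
pinecone.create_index(
    name="my-index-n8n",
    dimension=1536,   # matches the OpenAI embedding vector size
    metric="cosine",  # default similarity metric, as chosen in the video
)
```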
14:30 - 15:00 Then the Pinecone namespace is something you can choose however you want, and then you can actually execute the node to see that your Pinecone is working properly. All right, so now you can see that my Pinecone node is already here, but there are warnings. What are those two warnings? "Parameter Pinecone index is required" and "No node connected to required input document". So this Pinecone node needs two sub-node
15:00 - 15:30 kind of things; as you can see there are two plus icons here, so I have to complete them as well. The first one is Document, the second one is Embedding, so let's do them one by one. First you click on Document, and because here I'm dealing with a PDF, it will come under this Binary Input Loader. The GitHub Document Loader and JSON Input Loader are two other options that LangChain gives, but in my case my PDF will come under
15:30 - 16:00 this binary input, so you click on that, and from there you choose PDF Loader. Obviously you can see all the different loaders LangChain gives you the option of: CSV Loader, Docx Loader, EPub Loader, Text Loader, but here I'm going to choose the PDF Loader. Okay, and there is nothing to execute here, it is just a sub-node, so I just click outside to come out of it.
16:00 - 16:30 Now we can see that this document loader is connected here, and next I have to choose the embedding. For embeddings, currently you are only seeing Embeddings OpenAI, and that's where again you have to connect your OpenAI API account. Mine is already connected, but if you're doing it for the first time, it's very simple: Create New Credentials, and you give your API key, as simple as that. Connecting the OpenAI API is
16:30 - 17:00 the simplest one here; you don't even have to give the organization ID, that's optional. So after you give your OpenAI API key it will be connected, and you just click outside to come out. Okay, so both of them are connected now.
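As a point of reference, this embeddings sub-node corresponds to something like the following in Python LangChain (era-appropriate import path; the key is a placeholder):

```python
# Illustrative sketch of what the OpenAI embeddings sub-node does.
from langchain.embeddings import OpenAIEmbeddings

embeddings = OpenAIEmbeddings(openai_api_key="YOUR_OPENAI_API_KEY")
vector = embeddings.embed_query("a chunk of the PDF text")
print(len(vector))  # 1536 -- the dimension chosen for the Pinecone index
```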
17:00 - 17:30 Here again I can see another warning sign: no node connected to required input text splitter. That is expected, because we can see there's a plus sign here for Text Splitter and I have not done anything there yet, so let's just click on that. Again, if you are familiar with LangChain terminology and the LangChain ecosystem, you know that LangChain actually splits your entire text. Here my original PDF may contain something like 5,000 words, or 200,000 words for that matter, and the entire thing cannot be passed in a single window to OpenAI for analysis, right? So that's why we need to split the text,
17:30 - 18:00 and the most common splitter in LangChain is the Recursive Character Text Splitter; there are other options as well, the Character Text Splitter and the Token Splitter. I'd just like to mention a couple of things about this recursive text splitter. Basically, this text splitter operates based on a list of characters that serve as delimiters, or split points, within the text. It attempts to create chunks of text by splitting on these
18:00 - 18:30 characters one by one, in the order they are listed, until the resulting chunks reach a manageable size. So it splits the document recursively, meaning by different characters, starting from newlines, then spaces, then empty characters. The advantage of this approach is that it tries to preserve as much semantic context as possible by keeping paragraphs, sentences, and words
18:30 - 19:00 intact, meaning the words within chunks are often closely related semantically, which is beneficial for any natural language processing task. Like I said, there are a couple of other techniques for text splitting as well, but the recursive character text splitter is the most common one used for general documents, such as text or a mix of text and code. For our purpose it's just perfect, so I'm going to choose that one.
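In code, the splitter configured in the next step would look roughly like this in the Python LangChain of the time; note that chunk_size here counts characters by default, while the video reasons in tokens, and the values mirror what the video settles on:

```python
# Sketch of the recursive character text splitter sub-node.
from langchain.text_splitter import RecursiveCharacterTextSplitter

splitter = RecursiveCharacterTextSplitter(
    chunk_size=850,    # roughly the value chosen in the video
    chunk_overlap=50,  # overlap keeps sentences from being cut between chunks
)
pdf_text = "...full text extracted from the PDF..."  # placeholder input
chunks = splitter.split_text(pdf_text)
print(len(chunks))
```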
19:00 - 19:30 Here you have to mention your chunk size and chunk overlap. I'm going to use GPT-3.5 Turbo, and by default that gives a 2,000-token context window, right? 2,000 tokens means I'll probably have 1,500 or 1,600 words maximum, so for that I keep it at maybe 800, because I
19:30 - 20:00 have to take care of both the input and the output from GPT-3.5, right? And for the chunk overlap, for an 800-token chunk size you can mention 50. Now here's another thing: if you are subscribing to GPT-4, or the paid version of the GPT-3.5 API, then you will have access to a 16k context window, and with a 16k context window you can set the chunk
20:00 - 20:30 size to a much higher value; you probably can set it at 4,000, and the overlap at maybe 400, something like that. But for our purpose I'm just going to do it at 850, because the API I'm going to be using, GPT-3.5 Turbo, has a 2,000-token context window. All right, click outside. So now everything is ready; I can run the entire workflow by just
20:30 - 21:00 clicking on this button. So let's do that, and you can see that as soon as I ran it, it gave me: workflow has issues, Pinecone Insert, parameter Pinecone index is required. So let's see what the problem is. Yeah, I did not put in the Pinecone index name. What was my Pinecone index name? It was this one, my-index-n8n.
21:00 - 21:30 All right, and also give a Pinecone namespace; let me type in my n8n namespace here. Okay, you can also choose Clear Namespace if you want. All right, now run again, that is, execute the entire workflow, and you can see it has started working here. Yep, that executed just fine, Pinecone is getting executed, and that's it, my entire workflow got
21:30 - 22:00 executed. Now, one quick thing before proceeding further: the document that I'm reading here, the one I downloaded from Google Drive and submitted to Pinecone and then to OpenAI embeddings, is this one. It's just two pages from a long research report. It was published very recently, on 25th September 2023, and it deals with disinformation detection, an evolving challenge in the age of LLMs.
22:00 - 22:30 I'm talking about this document in particular because for the next steps of the work I'm going to do with n8n and LangChain, you need the context of what the document is about, so you can judge whether the answers OpenAI gives make sense. So this whole research is about AI disinformation,
22:30 - 23:00 and how the simpler models like RoBERTa and all the previous models were insufficient for checking whether AI is producing misinformation, and what solutions are possible for this challenge, when larger LLMs will produce a lot of misinformation. So yeah, that's what the document is about. Now let's go back
23:00 - 23:30 to our LangChain integration, and coming back to n8n, we can quickly check the various nodes that just got executed perfectly. Let's check out this Pinecone node here; double-click on it, and on the left you will see the input to the node, in this case just a PDF file which came from Google Drive, and you can see all the details; for example, the MIME type is application/pdf, the file size is 285 KB, etc.
23:30 - 24:00 On the output side we see what this particular node did. The job of this node was to create chunks and vectorize them with Pinecone, and that's what you see here. Within the metadata part you can see each chunk; for example, the first chunk shows the line numbers: page number one,
24:00 - 24:30 lines 1 to 21, that's your first chunk, which is this one, and then if you go to the next chunk, that's from line 22 to 36, that's this chunk. So there are many chunks that it has broken the entire document into. All right, just click outside. Okay, so this part is over. Before proceeding to the next part, let's quickly recap what we did till now:
24:30 - 25:00 here we are using LangChain, Pinecone, and the OpenAI embeddings to create embeddings of our PDF document. We then use Pinecone to create a Pinecone index and add our document to it, and those indexes can be seen by double-clicking on the Pinecone node here. And just to say what a Pinecone index is all about: a Pinecone index refers to the structure that Pinecone uses to index and organize the vectors stored within the
25:00 - 25:30 database. An index internally employs advanced techniques such as approximate nearest neighbor algorithms, dimensionality reduction for search efficiency, and more. Using an index in Pinecone facilitates rapid similarity searches by quickly identifying the vectors closest to a given query vector, which enables efficient information retrieval. That's the whole purpose of
25:30 - 26:00 the Pinecone node here. After this is done, in the next step we will use this Pinecone node, or these Pinecone indexes, to query our index and get back the most similar documents, the most similar answers to our questions. All right, so to do that, I go to LangChain AI nodes again, and this time I select Chains, and within Chains I need this one, the Vector Store QA
26:00 - 26:30 Chain. This will perform a question-answering operation on a vector store based on the input query. We can see there are other chains as well, for example the LLM Chain, Retrieval QA Chain, Structured Output Chain, and Summarization Chain; I will use the Summarization Chain in the next part of this video, and the Structured Output Chain, which processes input text and structures the output according to a specified JSON schema. We don't need that for this purpose, because here we already
26:30 - 27:00 have the vectors and we just want to do question answering on that vector store, so let's just select that. And remember, one of the most powerful attributes of LangChain is being able to create these chains; this is a unique offering from LangChain amongst its many exceptional attributes. Chains permit the integration of various components to craft a unified application: for instance, a chain can be developed that accepts user input, processes it with a prompt
27:00 - 27:30 template, and subsequently transfers the formatted output to a large language model for further computation. In this particular case, the chain is going to process the vector store, take our query, and get the answers to that query from the large language model, and in this case the large language model is OpenAI's chat model.
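For orientation, here is a hedged sketch of what this QA-over-a-vector-store setup roughly amounts to in the Python LangChain of that era; the index name mirrors the one used in the video, Pinecone is assumed to be initialized as in the earlier sketch, and everything else is illustrative:

```python
# Sketch of question answering over the Pinecone vector store.
from langchain.chains import RetrievalQA
from langchain.chat_models import ChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Pinecone

store = Pinecone.from_existing_index("my-index-n8n", OpenAIEmbeddings())
qa = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0.1),
    retriever=store.as_retriever(search_kwargs={"k": 4}),  # the Top K of 4
)
print(qa.run("Are smaller models like BERT good enough to detect disinformation?"))
```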
27:30 - 28:00 Now, what you see here, Top K and Query: I don't need to change the Query JSON anyway, because it's just going to take it from the input. But a quick note on the Top K: by default it takes the number four, and I don't intend to change that, let it be there, but let's understand the significance of top-k. Top-k sampling restricts token selection to the k most likely next tokens, because as you know, the underlying architecture of OpenAI's models
28:00 - 28:30 is the Transformer, and this Transformer model, at the very basic level, is about predicting the next token. This top-k is about exactly that: it restricts selection to the k most likely next tokens, and after computing the logits and applying the softmax, etc., the model will zero out the probabilities of all tokens except for the top k tokens. That is its mathematical significance
28:30 - 29:00 internally. You don't need to understand this to operate n8n or LangChain at all; I just told you so that it makes sense what top-k is all about. By default it takes a value of four, which is the most frequently used value, so let's just keep it at that and bring it in here.
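To make the top-k idea concrete, here is a toy illustration of the mechanism just described (an assumption-level sketch of the idea, not OpenAI's actual implementation):

```python
# Toy top-k filtering: keep the k highest logits, zero out the rest, softmax.
import numpy as np

def top_k_probs(logits: np.ndarray, k: int = 4) -> np.ndarray:
    cutoff = np.sort(logits)[-k]                       # k-th largest logit
    masked = np.where(logits >= cutoff, logits, -np.inf)
    exps = np.exp(masked - masked.max())               # numerically stable softmax
    return exps / exps.sum()

print(top_k_probs(np.array([2.0, 1.0, 0.5, 0.2, -1.0])))  # lowest token gets 0
```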
29:00 - 29:30 Now we can see there are two further sub-nodes here, just like we had for our Pinecone node, where we had to include the binary input loader, that is the document input loader, and the OpenAI embeddings. Here as well I can see one for Model and one for Vector Store, so let's do that. For Model I'm going to select Chat OpenAI, and here again my credentials have already been picked up, the OpenAI API one. For the model there are many selections; you can select the 16k one, which is definitely better
29:30 - 30:00 because a 16k context window length is better, if you have access to that, but of course that comes at a slightly higher price. For this purpose, because our document is relatively small and I have also chosen a smaller chunk size, I can just select GPT-3.5 Turbo, which I think has a 4k or 2k context window, but that will be sufficient for this demo. In an actual case where you have a long document, you are definitely better off selecting
30:00 - 30:30 16k. All right, and in Options I again want to choose Sampling Temperature; let's put it at 0.1. Now, once again, what is this temperature sampling? Temperature sampling affects the probability distribution over the vocabulary when generating the next token; specifically, it adjusts the logits, the pre-softmax values computed for each
30:30 - 31:00 token. That's the mathematical explanation, but basically what it means is that the higher the temperature, the more creative or innovative GPT-3.5 will be in generating your answer. In this case I don't want it to be creative, because I have already passed in the document and I want GPT-3.5 to look specifically into that document and give me the exact answer; I want it to be least creative, and that's why I am giving a low value here.
31:00 - 31:30 Basically, as the temperature approaches zero, the model becomes more deterministic and is more likely to pick the most probable token.
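The effect described here can be seen in a few lines; again, a toy illustration of logit scaling, not the model's internals:

```python
# Temperature scaling: divide logits by T before the softmax.
import numpy as np

def softmax_t(logits: np.ndarray, temperature: float) -> np.ndarray:
    scaled = logits / max(temperature, 1e-6)  # small T sharpens the distribution
    exps = np.exp(scaled - scaled.max())
    return exps / exps.sum()

logits = np.array([2.0, 1.0, 0.5])
print(softmax_t(logits, 1.0))  # broader spread, more "creative" sampling
print(softmax_t(logits, 0.1))  # nearly all mass on the top token
```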
31:30 - 32:00 The next thing I have to supply to the Vector Store QA chain is the vector store itself, and we can see there are a couple of options right now, Supabase and the Zep vector store, but we have used Pinecone here, so I'm going to use Pinecone Load. The Pinecone API account has already been picked up; again, if you have not set up your Pinecone API account, just create an account, get the API key, and put it in here; it's a matter of a couple of seconds. For the Pinecone index you have to mention the same index name; our index name was this, my-index-n8n. And what was my Pinecone namespace? I forgot, let me check; yes, that was my n8n one, just copied from here, and go back here. All right, so I'm set up. Okay,
32:00 - 32:30 I have to choose the embedding as well; previously I used the OpenAI embedding, and here as well I'm using the OpenAI embedding, and there is nothing else to select. Okay, so I'm all done here, and then I need to add a chat trigger. Just go to the LangChain nodes, Manual Chat Trigger, all right, and let's move it here and connect it like
32:30 - 33:00 that. All right, now I just click on this Chat button right here and I have my chat window, so here I want to ask a question; you type in your message right here. Now remember, our document, the original PDF, was about researching whether AI-generated disinformation is a problem, and whether the large language models of recent times can check AI disinformation.
33:00 - 33:30 The document talked about whether the earlier models like BERT, which are much, much smaller models, could check AI disinformation. So the question that I'm going to ask is: are smaller language models like BERT good enough to detect disinformation in AI? And send it; let's see. Awesome, it took a second, and let's
33:30 - 34:00 see the answer. On the left side, what you see is the answer, under response and then text: research in AI-generated disinformation detection has predominantly focused on smaller language models like BERT, GPT-2, and T5; these models have been effective in detecting disinformation to some extent; however, with the advent of large language models like GPT-3, which have billion-scale parameters, the complexity of disinformation detection has escalated; there is a gap in the literature addressing the detection of
34:00 - 34:30 disinformation generated by LLMs; so while smaller language models like BERT have been useful, they may not be sufficient to detect disinformation generated by LLMs. Exactly; that is exactly what the research paper proposes, that the smaller language models were not really good enough to detect this disinformation. So yeah, this is really working; you can ask any number of questions here, and the whole process will go on, that is, the
34:30 - 35:00 vector database in Pinecone will be hit, and then the proper answer will be fetched for you. So you can see the whole question answering over a PDF that I just implemented without writing a single line of code; it is entirely drag and drop. All right, now let's do another quick one, which is the conversation agent; I just want to have a conversation with an
35:00 - 35:30 agent. So let's add my agent, Conversational Agent, yep, and here I can see that there are quite a few supplemental nodes, for example Model, Memory, Tools, and Output Parser. For Model, let's add Chat OpenAI, and the rest of the settings are just fine, the OpenAI account and GPT-3.5 Turbo; here again you can choose whatever you like, let's select the 16k one. Okay,
35:30 - 36:00 and under Options, again Sampling Temperature; here I just want to give it a slightly higher temperature, 0.5. Okay, and next I want to add the memory; click on that, Window Buffer Memory. There are other memory options here, Redis Chat Memory, which is a third-party memory, then Motorhead, Xata, and Zep, but Window Buffer Memory is the
36:00 - 36:30 most common one while working with LangChain. You can actually go to the LangChain documentation to read more about it; there they talk about the conversation buffer window memory. Basically, there are different types of memory in LangChain. A best practice when developing chatbots is to save all the interactions the chatbot has with the user; this is because the state of the LLM can change depending on the past conversation. In fact, the LLM will answer
36:30 - 37:00 the same question from two different users differently, because they have a different past conversation with the chatbot and therefore it is in a different state. In the context of a chatbot, customers would expect it to remember the things they talked about earlier in the conversation; otherwise it would be annoying to have to say the same things over and over again. So by providing memory types with different features, LangChain makes it
37:00 - 37:30 easy to implement memory components into your applications, and that's exactly what this conversation buffer memory is all about. What the chatbot memory creates is nothing more than a list of old messages, as simple as that. These old messages are fed back to it before a new question is asked. Of course, LLMs have a limited context window, so you have to be a little creative and
37:30 - 38:00 choose how to feed this history of messages back to the LLM. The most common methods are to return a summary of the old messages, or to return only the latest n messages, which are probably the most informative. The different memories that LangChain provides are Conversation Buffer Memory, Conversation Buffer Window Memory, Conversation Token Buffer Memory, Conversation Summary Memory, and there are even a few more. So let's quickly talk
38:00 - 38:30 about this one, the Conversation Buffer Window Memory. LangChain provides several features to control the memory size; one of them is the conversation buffer window memory, which stores the previous k messages, where the value is determined by the k parameter. The first benefit of the conversation buffer window memory is reduced token usage: by storing only recent interactions, that is, the previous k messages, chatbots can conserve
38:30 - 39:00 memory resources, resulting in faster response times and lower computational cost. This efficiency is particularly useful when dealing with large-scale conversational systems, and remember, OpenAI will charge you by the token, so the fewer tokens you remember, the lower your cost will be. The next benefit is contextual relevance: by retaining recent interactions, the conversation buffer memory enables chatbots to access the relevant
39:00 - 39:30 conversation context only. This context is crucial for generating coherent and personalized responses; the chatbot can leverage this memory to understand user intents, reference previous information, and maintain a smooth conversation flow.
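In the Python LangChain of the same era, the window-buffer idea looks roughly like this (the k value is illustrative):

```python
# Sketch of a window buffer memory: only the last k exchanges are kept.
from langchain.memory import ConversationBufferWindowMemory

memory = ConversationBufferWindowMemory(k=5)  # remember the last 5 turns
memory.save_context({"input": "When did GPT-3 come out?"},
                    {"output": "GPT-3 was released by OpenAI in 2020."})
print(memory.load_memory_variables({}))  # history fed back before the next turn
```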
39:30 - 40:00 Next I am going to add a tool; let's just add Wikipedia. You can see that there are so many options: if you are doing mathematical calculations and want to chat about them you can add the Calculator tool; for questions about code you can use the Code tool; SerpAPI is another great, powerful API you can use for chatting; and then there are Wikipedia, Wolfram Alpha, and the Workflow tool. In this particular case Wikipedia is the best option for me, because I just want to have some general-knowledge chatting with it. And the last one is the output parser; for now I'm just going to select
40:00 - 40:30 the simple one, the Item List Output Parser. Okay, and I also need to add the Manual Chat Trigger, right? Yep, let's just connect it. Now let's start chatting.
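Before the chat demo, as a code-level reference, a conversational agent with Wikipedia and window memory could be sketched like this with the LangChain Python API of the time; the model and temperature mirror the video's choices, the rest is illustrative:

```python
# Sketch of the conversational agent that n8n assembles here.
from langchain.agents import AgentType, initialize_agent, load_tools
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferWindowMemory

llm = ChatOpenAI(model_name="gpt-3.5-turbo-16k", temperature=0.5)
tools = load_tools(["wikipedia"])  # requires the `wikipedia` pip package
agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
    memory=ConversationBufferWindowMemory(memory_key="chat_history", k=5),
)
print(agent.run("When did GPT-3 come out?"))
```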
40:30 - 41:00 The first question I'm going to ask is: when did GPT-3 come out? And send. Awesome: GPT-3 was released by OpenAI in 2020. Can you give me the exact date, the exact month? Send. Okay, in this case as well it just tells me it was released in 2020. Next question: give
41:00 - 41:30 me Barack Obama's tenure as the 44th President of the United States. Awesome, from 2009 to 2017. Who was the next president? The next president after Barack Obama was Donald Trump. Can it answer
41:30 - 42:00 who was next, who was after that? Awesome, it also remembered the whole sequence and perfectly answered Joe Biden. On the right side you can look into some of the metadata of this chatting: you can go to Chat OpenAI and see all the minute details, the various options, option one, option two, all these things. Then you have
42:00 - 42:30 the Window Buffer Memory in action, the chat history; these are just internal parameters, in case you are interested. For example, if you click on the Wikipedia tool right here, you can see how my internal chat tool was querying Wikipedia: it was querying the page "List of Presidents of the United States", along with the response that it got, and it answered from that response. Also, if you click on
42:30 - 43:00 the OpenAI tab right here, you can see the details of the underlying JSON. This is the raw JSON format; we can see, for example, the content here: do your best to answer the question, feel free to use any tool available to look up relevant information only if necessary, however, above all else, all responses must adhere to the format of the response format instructions. These are the internal instructions given by LangChain to OpenAI while sending it
43:00 - 43:30 the queries. And we can see all our turns: Human, that is, I asked when did GPT-3 come out, and the AI responded this. Let's go to the raw JSON format again; it goes on for each of the questions, Human, AI, Human, AI, with all the details, and the response format instructions: output a JSON markdown code snippet containing a valid JSON object in one of two
43:30 - 44:00 formats. All right, so that was chatting with a conversational agent using LangChain AI, and again, as you saw, we did not have to write a single piece of code; it all happened by just dragging and dropping. All right, the next thing I want to do is create a summarization of a PDF document, so let's create the node, a LangChain node, and the first thing I need is Google Drive, because I need to download the
44:00 - 44:30 document first. It's the same steps as for the first node, where we did question answering on the PDF document. So here the Google Drive credentials are already connected, resource File, operation Download, let's see, yep, that's the one, and then just click outside. Next I want to add the LangChain
44:30 - 45:00 chains, and then the Summarization Chain. Okay, now here we can see Run Once for All Items, and for the type we have three options: Map Reduce, Refine, and Stuff. Let's quickly get to know them; you can actually go to LangChain's official documentation to read more. When you're dealing with documents you have stuff, refine, and map-reduce; these are the three main types that we just saw here, map reduce, refine, and
45:00 - 45:30 stuff. So let's quickly see. Stuffing is the simplest method, where you simply stuff all the related data into the prompt as context to pass to the language model; this is implemented in LangChain as the Stuff Documents Chain. The pro is that it only makes a single call to the LLM, and when generating text the LLM has access to all the data at once. But on the con side, most LLMs have a limited context window length, and for large documents, or many documents, this will not work, as it will
45:30 - 46:00 result in a prompt larger than the context length. The main downside of this method is that it only works on smaller pieces of data; once you are working with many pieces of data, this approach is no longer feasible. The next two approaches are designed to help deal with that. Map-reduce is the next method; it involves running an initial prompt on each chunk of data. For a summarization task this could be a summary of that chunk; for a question-
46:00 - 46:30 answering task it could be an answer based solely on that chunk. Then a different prompt is run to combine all the initial outputs; this is implemented in the LangChain Map Reduce Documents Chain. The pro is that it can scale to larger documents, and more documents, than stuff; the calls to the LLM on individual documents are independent and can therefore be parallelized. On the con side, it requires many more calls to the LLM. The last one is refine: this
46:30 - 47:00 method involves running an initial prompt on the first chunk of data, generating some output; for the remaining documents, that output is passed in along with the next document, asking the LLM to refine the output based on the new document. Okay, so you get the point: for smaller documents we can use stuff; for larger or more numerous documents we can use map-reduce, though map-reduce will make
47:00 - 47:30 multiple calls to the LLM. For this one, although our document is relatively small, I want to use map-reduce so that I do not exceed the overall context window length of the LLM, because we are using GPT-3.5 Turbo here, which I think by default has a 2k context window length, and my two pages of PDF probably will not exceed that, but still I don't want to take the risk. That is, I'm fine making as many calls to the LLM as needed
47:30 - 48:00 here, but I don't want to miss out on any part of the document, so I'm not using stuff, and I'm not using refine either; I'm just using map-reduce.
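The equivalent in the Python LangChain of the time is short; the Document list here is a placeholder standing in for the chunked output of the splitter:

```python
# Sketch of the map-reduce summarization chain configured in this node.
from langchain.chains.summarize import load_summarize_chain
from langchain.chat_models import ChatOpenAI
from langchain.schema import Document

llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0.1)
# map_reduce: summarize each chunk separately, then combine the partial
# summaries, so no single call exceeds the context window.
chain = load_summarize_chain(llm, chain_type="map_reduce")
docs = [Document(page_content="...chunk of the PDF text...")]  # placeholder
summary = chain.run(docs)
```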
48:00 - 48:30 All right, the Summarization Chain has two sub-nodes. The first one is the Model; here as well I'm going to use Chat OpenAI, and GPT-3.5 Turbo is fine, my API is already connected. Add option, Sampling Temperature, and I want to give just 0.1, because I don't want the model to be too creative or do random generation; I want exact answers. Okay, that's fine. Then the Document: here it's a PDF document that I want to summarize, so for this one I'm going to use the Binary Input Loader, then the PDF Loader; we already discussed earlier that there are
48:30 - 49:00 so many loaders, which reflect all the loaders that LangChain provides, but this is a PDF document, so I'm just using that one. The Binary Input Loader also needs another sub-node, the Text Splitter, and again I'm going to choose the Recursive Character Text Splitter; for the chunk size let it remain at 1,000, and for the chunk overlap let me give 50. Okay, all right, with that I'm kind of
49:00 - 49:30 done with this part. Next I just want to add a conversational agent, so go to Agents, and Conversational Agent. Okay, Run Once for All Items or Run Once for Each Item; the latter is actually what I'm going to select. The only change I want to make is json.answer, sorry, that should be json.question, and I'm getting this question from the JSON input; we will see
49:30 - 50:00 why I write "question" here: because in the JSON input that this node will get, there will be a question field, so that's why I am typing question here. Okay, and in the prompt, in the system message, I just want to add something, because this is a deep-learning-based paper and I want to ask questions based on it, and I want the conversational agent to be an expert on this subject. So I am adding: act
50:00 - 50:30 as a world-famous deep learning engineer, and then: do your best to answer the question, feel free to... okay, that looks all right. This again needs quite a few sub-nodes, but I do not want to give it any memory here, because I do not need it to remember the old conversation; it's not really chatting that I'm doing here. And the output parser, that I will also skip
50:30 - 51:00 for now. For the Model I'm going to add Chat OpenAI, add option, again a temperature of 0.1, and then Tools; for tools it can again search with the Wikipedia tool. Okay, all right, it looks like I am done, but I actually forgot to include one step here, actually a very important step, so let's include it
51:00 - 51:30 now. This conversation that I'm doing, I want to do it on a previous output, and that output will be from a LangChain AI Structured Output Chain, so let's go to Chains, Structured Output Chain. Okay, we will come back to the input text and prompt later, but first just place it here; okay, let's delete that,
51:30 - 52:00 place it here, connect it here, and the output of this will go to the conversation agent. For the Model I again choose Chat OpenAI, add option, Sampling Temperature, 0.1; yeah, looks all right. Now let's start executing parts of these nodes, so execute the Google Drive node first; yeah, it got executed
52:00 - 52:30 successfully. Then execute the Summarization Chain as well, Execute Node; yes, perfect, it got summarized and it gave me the summary as response.text. So this is the result of the summarization chain from LangChain, and we can see the schema JSON: I have response, then text, and this text is the summary of the entire document. Okay,
52:30 - 53:00 and we can see all these green tick marks; that means this part I actually executed manually. I could execute right from here with Execute Workflow, but in that case all the nodes you see on the entire screen would get executed, and what I actually wanted was to know what my prompt, or the JSON format, would be here. For that I needed to execute all the nodes previous to this node, so that I
53:00 - 53:30 can see the input this node is getting from the previous node. All right, that's the reason I executed all this part, and now let's see what my parameter configuration will be for this node. You just double-click on it, and this is the input coming into the node; we can see the JSON here, that's response.text, so you can directly
53:30 - 54:00 drag this over here; actually, for simplicity, delete the entire thing and then drag this text into here, and you see, beautifully, it has already formatted it as json.response.text, that's what it will give. For the prompt in this case, I want three questions; this node, that is the Structured Output Chain, should give me three
54:00 - 54:30 questions, on three topics, based on the summary. So: please give me a list of three questions based on the previous summary. I actually want to be even more specific: I want the questions to be very
54:30 - 55:00 relevant to the topic, the summary topic generated from the PDF, so let me put that in as well. So what I'm saying is: ensure the questions are very relevant to the topic; think step by step. Okay, now here's the JSON schema. Because this is function calling with the OpenAI API, if you are familiar with the OpenAI API structure, you know that here I have to give a perfectly formatted JSON
55:00 - 55:30 schema. I have it saved somewhere, so I'm just going to copy that and paste it here. Yep, so this is the output schema that this particular node should produce; you can go to OpenAI's documentation and read more about this JSON schema stuff when you are doing function calling with the OpenAI API. Basically,
55:30 - 56:00 this particular schema, this particular structure, will be adhered to in the output part of this node.
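The pasted schema itself is not shown on screen; based on the output field seen later (response.questions), a schema of roughly this shape would fit. This is purely a hypothetical reconstruction:

```python
# Hypothetical JSON schema (as a Python dict) for the structured output:
# an object with a "questions" array of strings.
question_schema = {
    "type": "object",
    "properties": {
        "questions": {
            "type": "array",
            "items": {"type": "string"},
            "description": "Three questions based on the summary",
        }
    },
    "required": ["questions"],
}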
56:00 - 56:30 Okay, now you're getting the picture of what I'm doing: I first get the whole PDF in one node, then summarize the PDF, then ask this particular node to generate three questions based on the summary, and finally I want the conversation agent to answer those three questions. The conversation agent will just read the three questions it got from the previous node and answer them, so my prompt should be changed accordingly: as a world-famous deep learning and LLM engineer, do your best to answer the question, feel free to use any tools available to look up relevant information, only if necessary.
56:30 - 57:00 Okay, so that's the whole workflow and the whole plan, and now I'm ready to run the entire workflow; I just execute this workflow, and what that will do is run all the nodes. Okay, yeah, it started from the beginning with Google Drive, then it goes to the Summarization Chain, and once that is done, it goes to the Structured Output Chain. Yep. And actually, I also wanted the output from this node to be in a properly formatted list, so let's just do
57:00 - 57:30 that: from LangChain, just include the Item List node so that it's properly formatted, Split Out Items. Okay, so let's delete this one, bring it in here, connect it to here, and then further connect it to there. Now, for the parameters, we can see the JSON output; here it is, response.questions,
57:30 - 58:00 all right, so my field will be like this, response.questions, and let's just do Split Out Items here. Yeah, that looks all right; now just run this. Okay, after the Item List has run, now also run
58:00 - 58:30 this one, and you can see how, for Chat OpenAI and Wikipedia, this is happening one by one, because there's a conversation going on between Wikipedia and Chat OpenAI whenever it is needed. Yeah, awesome, so this node also got completed. All right, now the execution of the nodes is all done, and let's see what summary it produced. I'm just double-clicking on the conversational agent, and here are my
58:30 - 59:00 questions, the three questions that it generated based on the summary of the PDF that I originally gave. Then I asked it, remember, our instruction was that for each of the questions it has to explain what they are, and it can search the Wikipedia tool for generating the answer, that is, to explain each of the questions. So let's see what it produced. The first one: how reliable are
59:00 - 59:30 current disinformation detection techniques in detecting LLM-generated disinformation? That was a question, and the output is: the reliability of current disinformation detection techniques in detecting LLM-generated disinformation can vary; disinformation detection techniques have evolved over time, leveraging advancements in machine learning and artificial intelligence; however, LLM-generated disinformation can be challenging to detect due to the sophisticated nature of the techniques used; and it goes on,
59:30 - 60:00 explaining the first question. The second question is: can LLMs be adopted to detect disinformation generated by themselves? For that question I have the answer here. And for the last question, what alternative solutions can be considered if both current detection techniques and LLMs fail to detect disinformation, the answer is here. And again, remember, all these answers were produced by GPT-3.5 using
60:00 - 60:30 the Wikipedia tool. So we saw this entire, quite complex workflow with LangChain, which I built and executed successfully and got results from, without writing a single piece of code; that's quite wonderful. And of course, remember, the LangChain integration was released just a few days back, so it is still in development, and more modules of LangChain will be properly integrated, because LangChain has now become quite a
60:30 - 61:00 large ecosystem, so you can expect a more fine-tuned or more refined version of this within a few more days. Overall, n8n provides features like custom scenario building, integration with any app via custom HTTP requests, painless debugging, and hosting on your own infrastructure, so it's positioned as a core tool to pump data across your tech stack, with use cases ranging from customer integration to CRM
61:00 - 61:30 customization tools, etc. So do give it a try.