Learn LangChain Effortlessly

Learn LangChain In 1 Hour With End To End LLM Project With Deployment In Huggingface Spaces

Estimated read time: 1:20


    Summary

    Join Krish Naik in an in-depth, one-hour session on LangChain, where you'll learn to create a streamlined Q&A chatbot and deploy it using Hugging Face Spaces. This video is the beginning of a rich series focused on LangChain, tailored for both novices and experienced professionals entering the data science field. You'll grasp how to set up your environment, integrate various libraries, and explore the potential of LangChain in building robust applications. The tutorial not only walks you through the step-by-step process of establishing a practical application but also delves into deployment strategies in a comprehensible and engaging manner.
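The environment setup mentioned in the summary comes down to a handful of terminal commands; as used later in the video (environment created inside the project as ./venv, pinned to Python 3.9), the sequence is roughly:

```
conda create -p venv python=3.9 -y    # create the env inside the project folder, skip the y/n prompt
conda activate venv/                  # activate the local environment
pip install -r requirements.txt       # install the project libraries
pip install ipykernel                 # notebook-only; deliberately kept out of requirements.txt
```

The `-p` flag places the environment in the project directory rather than conda's central envs folder, which is why it is activated by path.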

      Highlights

      • Krish discusses the growing importance of LangChain in the data science industry. 🌟
      • Introduction to LangChain and its application for creating end-to-end projects. 🚀
      • Detailed setup instructions for your development environment using VS Code. 💻
      • Demonstration of creating a chatbot using LangChain’s prompt templates. 🤖
      • Insights into deploying applications on Hugging Face Spaces for free! ☁️
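The prompt-template highlight can be illustrated without LangChain at all. A minimal stdlib stand-in (LangChain's `PromptTemplate` adds validation and chaining on top of the same idea, so treat this as a sketch, not the library API):

```python
# Rough stand-in for a LangChain prompt template: a string with named slots
# that gets filled in before being sent to the LLM.
TEMPLATE = "Tell me the capital of {country}."

def format_prompt(country: str) -> str:
    # In LangChain this is roughly PromptTemplate(...).format(country=...)
    return TEMPLATE.format(country=country)

print(format_prompt("India"))  # → Tell me the capital of India.
```

The same template can then be reused for any country, which is exactly the reuse the chatbot in the video relies on.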

      Key Takeaways

      • Explore the LangChain tutorial and build a Q&A chatbot in just one hour! 🚀
      • Learn to deploy your LangChain application in Hugging Face Spaces effortlessly. ☁️
      • Understand the nuances of integrating libraries like Streamlit for a seamless UI. 🖥️
      • Discover how to use prompt templates effectively to interact with LLMs. 🤖
      • Witness a step-by-step guide on setting up an environment and creating a dynamic app! 🌟
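Part of "setting up an environment" in the last takeaway is wiring in the OpenAI key without hard-coding it. A minimal sketch reading it from an environment variable (`OPENAI_API_KEY` is the name the OpenAI tooling conventionally looks for; the placeholder value is obviously not a real key):

```python
import os

# Prefer exporting the key in your shell or loading it from a .env file;
# setdefault keeps a real exported key if one is already present.
os.environ.setdefault("OPENAI_API_KEY", "sk-placeholder")
api_key = os.environ["OPENAI_API_KEY"]
assert api_key, "OPENAI_API_KEY is missing"
```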

      Overview

      Krish Naik kicks off the LangChain series by introducing its pivotal role in the data science industry for projects involving large language models. With emphasis on practical orientation, Krish crafts a hands-on tutorial suitable for newcomers and seasoned professionals alike, particularly those transitioning into data science. The video aims to arm viewers with the necessary skills to start building with LangChain, cultivating a robust understanding for upcoming advanced projects.

      In this comprehensive video, Krish meticulously outlines the process of setting up a development environment tailored for LangChain projects. By integrating Python libraries and tools, he guides viewers through creating a functional Q&A chatbot from scratch. Emphasizing ease of access, Krish introduces Streamlit for the frontend UI, simplifying the complexity often associated with such projects.

      The tutorial culminates in a live deployment session using Hugging Face Spaces, a popular choice for its user-friendly and cost-effective platform. Krish ensures that viewers not only walk away with a completed project but also with the know-how to deploy apps independently. The video encourages viewers to explore LangChain's potential, promising deeper projects as the series progresses.
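Deployment to Hugging Face Spaces works off the same `requirements.txt` the video builds up. Based on the libraries named in the tutorial, it would look roughly like this (`ipykernel` is deliberately left out, since it is only needed locally for the notebook, not on the deployment target):

```
langchain
openai
huggingface_hub
```

Installed locally with `pip install -r requirements.txt`; Spaces installs the same file automatically on deployment.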

            Chapters

             • 00:00 - 01:30: Introduction to LangChain The chapter introduces the LangChain series by Krish Naik on his YouTube channel. It marks the beginning of the series with an emphasis on discussing important components necessary for building projects using LangChain. The chapter highlights the growing interest in LangChain among students and experienced professionals who have recently transitioned to using it.
            • 01:30 - 02:30: Understanding LangChain & Its Importance In this chapter, the importance of LangChain in the data science industry is highlighted. It discusses how LangChain is increasingly being used in projects related to large language models. The chapter introduces the structure of a dedicated playlist designed to explore various aspects of LangChain. It emphasizes understanding the key components and practical applications, preparing the audience for upcoming end-to-end projects related to LangChain.
            • 02:30 - 04:30: Building Simple Applications with LLMs In this chapter, the focus is on building a simple application using Large Language Models (LLMs). Specifically, it discusses a Q&A chatbot project and explores different tools for developing the user interface. Streamlit is highlighted for its good UI capabilities, but alternatives like Flask and Gradio are also mentioned. The chapter will cover the step-by-step coding process, including every line of code, to ensure a thorough understanding.
            • 04:30 - 06:00: Environment Setup for LLM Projects This chapter focuses on setting up the environment for deploying end-to-end applications specifically using cloud services like Hugging Face. It provides an example use case where a chatbot is created and deployed. The chatbot can answer questions such as 'What is the capital of India?' and 'What is the capital of Australia?' demonstrating the practical application of the setup.
             • 06:00 - 08:30: Creating and Managing Virtual Environments The chapter titled 'Creating and Managing Virtual Environments' begins with a playful note asking viewers to identify the capital of Australia and leave their answers in the comments. The instructor hints at an application that will be created by the end of the session but emphasizes the need to first understand several important components of LangChain. Before proceeding, there is a reminder to subscribe to the channel, activate notifications, and share the content with friends for collective benefit.
             • 08:30 - 11:00: Installing Required Libraries The chapter focuses on the practical implementation of LangChain, providing step-by-step guidance on how to use its functionalities to build end-to-end applications. Emphasizing the importance of open-source sharing and community support, the chapter serves as a helpful resource for those looking to contribute to and benefit from open-source projects.
            • 11:00 - 14:30: Working with OpenAI API In this chapter, we focus on setting up the environment necessary to work with the OpenAI API. An OpenAI API key is required, and I'll demonstrate how to obtain one. We'll then build simple applications using Large Language Models (LLMs). Three key components are discussed: LLMs, prompt templates, and output parsing. LLMs include chat models which are essential for understanding context and reference in chatbots. Practical examples of prompt templates will also be covered.
            • 14:30 - 17:00: Introduction to Hugging Face Models The chapter provides an overview of Hugging Face models, discussing the important components like prompt templates, LLMs (Large Language Models), and output parsers. It highlights how these elements work together to generate desired outputs. Additionally, it advises on obtaining an API key from the OpenAI website to leverage the discussed models and tools effectively.
             • 17:00 - 21:00: Using Hugging Face with LangChain The chapter titled 'Using Hugging Face with LangChain' begins by instructing the reader on the importance of keeping their keys secure, and the author demonstrates the steps without sharing their own key. The reader is then guided through setting up a project environment in VS Code, emphasizing that all coding will take place within this integrated development environment. The chapter hints at the importance of creating a new environment as a foundational step in starting the project.
             • 21:00 - 27:00: Creating Prompt Templates The chapter titled 'Creating Prompt Templates' discusses the initial steps necessary for setting up an environment, emphasizing the importance of this step in various projects. The speaker mentions the use of Command Prompt over PowerShell and begins the environment creation using 'conda create'. It is highlighted that this step is crucial for future projects.
            • 27:00 - 31:00: Combining Chains with Sequential Chain The chapter discusses setting up a Python virtual environment using conda, specifically version 3.9, to ensure compatibility and isolate dependencies for different projects. The author explains the importance of defining a specific Python version and automating the environment setup process by providing the '-y' flag to bypass manual permissions. There's an emphasis on the significance of using virtual environments in managing project-specific requirements.
            • 31:00 - 35:00: Exploring Chat Models with OpenAI The chapter discusses the importance of creating a new environment tailored for specific projects, focusing on using only necessary libraries. It suggests using VS Code, though acknowledges PyCharm as an alternative, emphasizing personal experience with both IDEs but showing a preference for VS Code.
             • 35:00 - 41:00: Implementing Output Parsers in LangChain This chapter discusses the process of setting up and activating a virtual environment (venv) for a project. The first step involves activating the venv using specific commands. Following this, the next step mentioned is to write requirements to a requirements.txt file, which is crucial for managing project dependencies. The focus is on ensuring the environment is correctly configured to implement output parsers within the LangChain framework.
            • 41:00 - 50:00: Building a Simple Q&A Chatbot In this chapter, the focus is on building a simple Q&A chatbot. The process begins with setting up the environment by installing necessary libraries. The primary libraries mentioned for this task are LangChain and OpenAI, with a possibility of using Hugging Face later on. The chapter notes the importance of specifying and preparing the libraries required for development.
            • 50:00 - 52:00: Deploying Chatbot on Hugging Face Spaces The chapter titled 'Deploying Chatbot on Hugging Face Spaces' describes the process of installing necessary libraries for deploying a chatbot. The author explains using a command 'pip install -r requirements.txt' to install the required dependencies listed in the 'requirements.txt' file. Notably, the installation includes libraries such as LangChain and OpenAI, which are pertinent for the chatbot development and deployment process in the Hugging Face Spaces environment.
             • 52:00 - 53:00: Conclusion and Next Steps The chapter discusses the installation of necessary libraries to run Jupyter notebooks, emphasizing the need to install ipykernel. It mentions that all libraries listed in the requirements.txt file have been successfully installed, and the next step is to proceed with the ipykernel installation. The chapter concludes with the intention to continue the process post-installation.
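The chapters above describe output parsers as the piece that turns raw LLM text into the shape you actually want. A minimal stdlib sketch of the idea (a hypothetical helper, not one of LangChain's parser classes):

```python
# An output parser in miniature: raw LLM text in, structured data out.
def parse_comma_list(raw: str) -> list[str]:
    # e.g. an LLM asked for "three colors, comma separated" might return
    # "red, green, blue" — split and trim it into a proper Python list.
    return [item.strip() for item in raw.split(",") if item.strip()]

print(parse_comma_list("red, green, blue"))  # → ['red', 'green', 'blue']
```

LangChain's own parsers do the same job with more structure (format instructions injected into the prompt, typed outputs), but the prompt → LLM → parser pipeline is exactly this shape.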

            Learn LangChain In 1 Hour With End To End LLM Project With Deployment In Huggingface Spaces Transcription

             • 00:00 - 00:30 hello all my name is Krish Naik and welcome to my YouTube channel so finally finally finally here is the LangChain series and this is probably the first video where I'm going to discuss a very good one-shot video about LangChain all the important components that you specifically need to know to build projects now why I'm stressing on LangChain is because many people recently some of my students and some experienced professionals who switched
             • 00:30 - 01:00 into the data science industry they are getting work that is related to large language models and they're specifically using LangChain and the community with respect to LangChain is also increasing so that is the reason I've created this dedicated playlist and I'm going to discuss a lot many things in this video we will understand all the important components of LangChain what you specifically need to know with respect to the practical orientation and from the next video onwards a lot of end-to-end projects are probably going to come up
             • 01:00 - 01:30 uh in this video also I'll discuss about one Q&A project uh chatbot in short you know we'll try to use Streamlit it's not like we only have to use Streamlit you can also use Flask you can use anything as you want right but Streamlit provides you a good UI you can also use Gradio if you want right it is up to you so what all things we are specifically going to discuss first of all we'll try to understand the agenda and then we will try to uh do step by step each and every line of code will be done and as I said this is just like a kind of LangChain
             • 01:30 - 02:00 one-shot video just to give you an example of how we are probably going to create our end-to-end application after we deploy that in a specific cloud like Hugging Face right it provides you an entire environment to probably host your application uh this is what we are going to create this chatbot see if I probably ask what is the capital of New oh sorry of India then you'll be able to see that I'll be able to get the answer what is the capital of Australia many people
             • 02:00 - 02:30 get confused with the capital of Australia please do comment down in the comment section what is the capital of Australia and let's see whether it is right or not so here you can see the capital of Australia Canberra so we are going to create this application at the end of the day but before that we really need to understand a lot of important components in LangChain so let's go ahead and let's start this particular session before I go ahead please make sure that you subscribe to the channel press the bell notification icon and share with many friends so that it will also be helpful for them and it will
             • 02:30 - 03:00 also be helpful for me so that you are able to provide this open-source content to everyone out there many people require help and you should be the medium to provide this specific help by just sharing it right so what is the agenda we'll understand as I said this will be a completely practical oriented video it'll be a long video where I'll be talking more about practical things how you can implement LangChain how you can probably implement various important functionalities in LangChain and use it to build an end-to-end application
             • 03:00 - 03:30 so first of all we'll go ahead and do the environment setup this is commonly required you also require an OpenAI API key so this also I will show you how it is done then we'll try to build simple applications uh with the help of LLMs three important things are there LLMs prompt templates and output parsers okay in LLMs you specifically have LLMs and chat models chat models are basically used in chatbots to understand the context reference and all right I will also be discussing this practically prompt templates can play a very
             • 03:30 - 04:00 important role and then coming to the third one that is called the output parser uh in short prompt templates and LLMs and output parsers give a very good combination of output like it gives you a good output the output like you specifically want right so that is where output parsers will also be used before we go ahead what I will do is that uh first of all just go to the OpenAI uh website itself and get your API key how you should get it just go to this account and there you'll be able to see View API key and create the new secret
             • 04:00 - 04:30 key right so once you create it give the key name and then copy it and keep it right I'm not going to share mine so that is the reason why I'm showing you this specific step now I will go to my VS Code all my coding will be done in this VS Code itself here you can see it is completely blank right I've just opened a folder in the form of a project now all you have to do is start your project over here now the first thing as usual what we need to do we need to create a new environment right so
             • 04:30 - 05:00 this is the first step that you specifically need to create with respect to an environment so don't miss this specific step it is important and probably whatever projects we are going to discuss in the future we have to do this specific step so I will write pip install okay sorry conda create I'm creating an environment so let me just not open in PowerShell instead I'll go and open in command prompt okay
             • 05:00 - 05:30 so here I will just write conda create -p venv python=3.9 so 3.9 is the version that I'm going to specifically use and I'm also going to give -y so that it does not ask me for permission and instead starts creating the specific environment the reason why I'm using this environment understand one thing guys that for every project that we
             • 05:30 - 06:00 probably create we have to create a new environment that actually helps us to just understand or just use only those libraries that are actually required for this particular project okay so this is the first step go ahead do it with me you know uh and do it in VS Code because VS Code is a good IDE if you want to do it in PyCharm then also you can do it but since you are following this tutorial I'm actually going to do it in VS Code I feel VS Code is good I've used both okay I've used different different IDEs but I feel VS Code is good okay so here
             • 06:00 - 06:30 it is now here you can probably see my venv environment uh the next thing we will go ahead and activate this venv environment so I will write conda activate venv/ okay so this is my first step done I'm good with it now the next thing what I will do is that I will just go ahead and write my requirements.txt right because we need
             • 06:30 - 07:00 to install the libraries inside this particular venv environment so I will go ahead and write all the list of libraries that I'm specifically going to use now what are the libraries I'm going to use over here uh we will just write it down so uh the first library that I'm going to use is langchain then openai and I think I'll be using Hugging Face also later on it is also called huggingface_hub that I will show you as we go ahead okay so these are the two libraries that I'm going to specifically use okay now the next thing
             • 07:00 - 07:30 what I will do I will go ahead and write pip install -r okay so let me just hide my face so that you'll be able to see pip install -r requirements.txt so once this installation takes place that basically means my requirements.txt is getting installed that basically means all the libraries that I actually require over here are getting installed and for this I require langchain and openai okay so once this installation has
             • 07:30 - 08:00 taken place like it will be done now one more library I'll require is called ipykernel because if I really want to run any Jupyter notebook over here I have to use that okay so let's wait and let's see uh once the installation is probably done and then we will continue the video so guys all the libraries that were present in requirements.txt have been installed now the next step that I'm probably going to do is to also install ipykernel which will be
             • 08:00 - 08:30 required to run my Jupyter notebook so I will go ahead and write pip install ipykernel now understand one thing I'm not writing this library in requirements.txt because when we do the deployment in the cloud ipykernel will not be required okay so that is the reason I'm installing it separately pip install ipykernel in the same venv environment because the venv environment also we are not going to push right so you can probably see over here downloading this
             • 08:30 - 09:00 this will happen and automatically the download will happen itself right now the next thing what I'm going to do I'm just going to write langchain.ipynb okay so this will basically be my Jupyter notebook that I will specifically be using okay so right now it is detecting the kernel once this installation will probably happen then we will also be able to see the kernel okay so these are all the steps all
             • 09:00 - 09:30 the basic steps that you probably require from this you can start creating an end-to-end project but at least you require this uh along with this I'm also going to add two more steps one is about environment files a .env file okay so here what I will do I will write the .env file okay uh inside this .env file the reason why I'm uh writing this .env file is because I need to probably use my OpenAI API key and probably mention the OpenAI API key over here so if I probably write it like
             • 09:30 - 10:00 this and whatever API key that I'm probably getting from the website I can upload it over here and using right the load environment function right we can load this API key uh as a variable so that we can actually call our API key over there so I will update this later on as we go ahead okay so till here everything is done this is my langchain.ipynb file I will go ahead and detect the environment this is what 3.9.0 so here it is so let's see
             • 10:00 - 10:30 whether it is working or not 1+1 it's working right so this is perfectly all right so everything over here is done with respect to this and I'm very much happy this is working uh which is really really good okay so this is done now what I am actually going to do over here is that we need to import some of the important libraries like OpenAI and all right so for this I will go ahead and write from langchain.
             • 10:30 - 11:00 llms import OpenAI see there are a lot many models like open LLMs and all first we'll start with OpenAI understand one thing guys the reason why I say this is completely practical oriented is because you need to have the basic knowledge of machine learning deep learning and all but this specific library is used to build applications uh fine tune your application with respect to models with respect to your own data set most of the
             • 11:00 - 11:30 things just with writing proper lines of code will be implemented in an easier way so you really need to focus on how things are basically done okay now the next thing what I will do I will write import os and I will say os.environ okay and here I will give my OPENAI_API_KEY okay this is how you should basically write it down to
            • 11:30 - 12:00 import the API key now what I will do I will keep this hidden from you so just imagine I cannot show you the API key because I will be using my own personal API key itself so I will go ahead and probably pause my video and update the API key and I'll remove this specific code okay that is what I'm actually going to do so that none of you basically sees that and it is important you have to use your own API key so let me quickly go ahead and do that and let
             • 12:00 - 12:30 me come back so guys this is how you have to probably import your API key I've made some changes so if you also copy it it is not going to work I made some internal changes in between so uh you just need to write os.environ OPENAI_API_KEY and this API key that you have specifically got okay so this is the initial step uh I also imported OpenAI so that I will be able to call this particular uh OpenAI itself now this is done this is good everything is working absolutely
             • 12:30 - 13:00 fine now what I'm actually going to do I'm going to create my LLM model and go ahead and write my OpenAI let's see whether the OpenAI function is called or not let's see okay OpenAI from langchain OpenAI and inside this OpenAI what I'm actually going to do I'm going to basically call a variable which is called temperature and for temperature right now you can keep the value between 0 and 1 the more the value is towards one the more different kinds of creative
             • 13:00 - 13:30 answers you may get right if the value is towards zero then the kind of output you are probably getting from the LLM model is going to be almost the same any time you probably execute so here I'm just going to keep it as 0.6 so this is basically my OpenAI LLM model okay now this is done my LLM model is there so here you can see did not find an OpenAI API key please add an environment variable OPENAI_API_KEY now this is the error that you are specifically getting right so
             • 13:30 - 14:00 why this particular error is probably coming you should definitely understand okay understand that these kinds of errors can come to you the reason why I will not edit out this particular error is because I really want you all to understand it is saying did not find OpenAI API key now what you can probably do with respect to this okay there are two different things that you can probably do either you can take this API key save it in a constant variable and try to use that particular variable over here right so for that
             • 14:00 - 14:30 also you can directly do that uh I have also created this .env file what you can do you can load this environment variable and probably directly read it over there right but let me just go ahead with a simple way you know so you will be able to understand with respect to that also so here what I'm going to do here I will go ahead and probably say OpenAI API key and here I'm going to write os.environ okay and here I'm going to
             • 14:30 - 15:00 define my OPENAI_API_KEY okay now let's see whether this will get executed or not I will show you much better ways when we are probably executing our end-to-end application so let me go ahead uh OpenAI key is not defined because I used double equal to perfect now you can see that it has got executed perfectly now when I am actually creating an end-to-end project I will show you a better way the most efficient way that we should specifically use when you are building an end-to-end project but right now I'll focus
             • 15:00 - 15:30 like this now understand one thing with respect to the temperature variable right the temperature that we have specifically used I will give a comment over here and you can probably see over here right so the temperature value is how creative we want our model to be zero means the model is very safe it is not taking any bets towards one it will take risks it might generate wrong output but it may be creative so the more the value is towards one the more creative
             • 15:30 - 16:00 the model becomes right it is going to take more risk to provide you some better output but again with respect to risk there may be a problem you may get a wrong output perfect this is the simple step that we have done at the end in this video only I'm going to probably create an end-to-end project understand this will be very important for everyone because as we go ahead in the next videos the project difficulty will keep on increasing okay now this is done now what I will do quickly I will
             • 16:00 - 16:30 go ahead and write text let's say the text is what is the capital of India okay so here I will write print llm.predict and here I'm going to basically pass my text so here you can see the capital of India is New Delhi so what I have done is that this is my input and if we use llm.predict I'm going to probably get the text so if you're liking this
             • 16:30 - 17:00 video till here guys as I'm teaching you step by step I'm explaining each and everything please make sure that you practice in a better way right if you like it please make sure that you subscribe to the channel also okay so this is done llm.predict and we are able to probably get the output okay so understand what all things we did we created an OpenAI model right but here in the OpenAI right you should also understand one important thing here there is a parameter which is called
             • 17:00 - 17:30 model now what all models you can probably bring over here what all models you can probably call so here I will go to my documentation page okay and uh if I probably click on the models now here are the set of models that you can probably use by default it is calling this GPT 3.5 turbo it is the most capable GPT 3.5 model and optimized for chat at 1/10th the cost of text-davinci-003 will be updated with our latest model
             • 17:30 - 18:00 iteration 2 weeks after it is released it's not like you only have to use this you can use this you can use this you can use this you can use this whatever models you want you can probably use if you want to probably go with GPT-4 you can use this you can use this you can use this but at the end of the day GPT-4 is the most uh amazing thing it is more capable than GPT 3.5 so this is what models is by default over there it is probably taking this specific model which you can probably use now
             • 18:00 - 18:30 as we go ahead it is not like you can only call this model itself in Hugging Face you have open source models also right from Google from different companies you have those uh open source LLM models you can also call them and I will also show you an example with respect to that okay so till here I hope you have understood it I think you should give a thumbs up if you're able to understand till here now let's go ahead and do one more thing I will also show you with respect to Hugging Face now okay now with respect to Hugging Face what I will do I will quickly go
             • 18:30 - 19:00 ahead and write huggingface_hub right I will probably install this library now let me go ahead and open my terminal okay so I will delete this and quickly we will go ahead and write pip install -r requirements.txt done so this is getting executed and
             • 19:00 - 19:30 then I will probably show you with respect to Hugging Face also so Hugging Face this is done okay the installation is done so you have to use huggingface_hub now in the case of Hugging Face also right you will specifically be getting a token because at the end of the day we'll also try to deploy it over here if you go to settings right and if you go to access tokens here you can probably see I have some kind of token right and with the
             • 19:30 - 20:00 help of this particular token only you will be able to call any models that are available over here so in these models if you probably go ahead and see there are a lot of LLM models that are available right so if you probably go over here natural language token classification question answering like uh let's say text to text generation this is basically one kind of LLM model I will search for one name okay so the name is Flan okay Flan T5 base okay so this specific model T5 large this is a text
             • 20:00 - 20:30 to text generation model right so this is also an LLM model if I want I can use this also why can I use this because this is open source okay if you want to probably see the answer right text to text generate false or not false or false is uh if I want to compute it right it'll give me some specific output okay false or false or false is the verb the verb right something like this it is getting an output right so I can also use I can also create my chat models with the help of these kinds of models
             • 20:30 - 21:00 directly by using this API right but how to do it let's go ahead and see okay so here what I will do quickly I will first of all import one more important thing again in my environment variable os.environ and here I'm going to basically write in the case of Hugging Face I have to use something called hugging oops I have to write it in capital letters HUGGINGFACEHUB_API_TOKEN okay
            • 21:00 - 21:30 so here is my token okay hugging face Hub _ API _ token now with respect to this particular token I have to write my own token the token is basically given over here as I already shown you if I probably click on show you'll also be able to see it so I'm not going to do the show part over here so what I will do I will just pause the video upload it execute it change the token and then come back to you okay so let's go ahead till then you can go ahead and create a hugging face account also so guys now I
            • 21:30 - 22:00 have set up my hugging face Hub API token with this specific token again I've made the changes so if you probably use this again it will not work okay but I have actually imported it now let's go ahead and probably see how we can probably call llm models with the help of hugging face so again and again I'll be using Lang chain langin is just like a wrapper it can call openm models it can call hugging face uh llm models anything you can probably call with it
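The token setup just described amounts to setting a single environment variable; a minimal sketch of it looks like this (the token string below is a placeholder, not a working token):

```python
import os

# LangChain's HuggingFaceHub wrapper looks for exactly this variable
# name in the environment, so setting it once is enough -- the token
# never has to be passed around in code.
# "hf_your_token_here" is a placeholder; paste your own token from
# Settings -> Access Tokens on huggingface.co.
os.environ["HUGGINGFACEHUB_API_TOKEN"] = "hf_your_token_here"

print("token set:", "HUGGINGFACEHUB_API_TOKEN" in os.environ)  # → token set: True
```

Setting it via os.environ keeps the key out of the notebook's visible call sites, which matters once the notebook is pushed to Hugging Face Spaces.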
            • 22:00 - 22:30 That is why I keep saying it is powerful. So I write from langchain import HuggingFaceHub, execute it, and then call it. First I need to give a repo_id; the repo_id is simply what you see when you search for any model, in this case google/flan-t5-large.
            • 22:30 - 23:00 So I define it there, given as a string. The next argument is model_kwargs, where I define my temperature and then
            • 23:00 - 23:30 max_length: 64, which means I'm capping the generated string length at 64. Now let's see whether this executes; I half expect an error, but no, it runs perfectly. It has picked up the key itself, the Inference API is working, everything is fine. So I'll assign it to a variable:
            • 23:30 - 24:00 llm_huggingface, equal to that HuggingFaceHub instance. Done. Then I'll grab an output: output = llm_huggingface.predict("Can you tell me the capital
            • 24:00 - 24:30 of Russia"). Print the output and see: it is a single word, Moscow. Compare the earlier OpenAI output, "The capital of India is New Delhi", a full sentence; here we just get a word, and that is the difference
            • 24:30 - 25:00 between an open-source model and models like GPT-3.5 or GPT-4. Let me run one more example: "Can you write a poem about AI?" Let's see what kind of poem it gives. It is taking some
            • 25:00 - 25:30 time: "I love the way I look at the world, I love the way I feel, the way I think I feel, I love the way I love..." See what kind of output you get. Now if I send the same prompt through llm.predict on the OpenAI model, let's see the output; then you'll understand why we
            • 25:30 - 26:00 are making this comparison. It should give a better output: "I love the way I look at the world, oh so mysterious, oh so curious. Its technology so advanced, evolution so enhanced. It's a tool that can assist in making life much less hectic, a force that can be used for good, or a cause that's misunderstood. It can make decisions so swift and learn from its mistakes so swift, a tool that can be
            • 26:00 - 26:30 better..." See what an amazing poem it has written; by this you understand the differences and why I'm using the OpenAI model here. Yes, Hugging Face also hosts some paid models that give better output, but this is what often happens with open-source models. That said, a lot of strong open-source models are coming up, Mistral 7B and others, which we'll also cover in this playlist as we go ahead.
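Under the hood, the HuggingFaceHub wrapper used above posts the prompt to the hosted Inference API over HTTP. As a rough, dependency-free sketch of what such a request looks like (the payload shape here is an assumption based on the public text2text-generation API, not LangChain's exact internals, and nothing is actually sent):

```python
def build_inference_request(repo_id: str, prompt: str,
                            temperature: float = 0.0, max_length: int = 64):
    """Assemble the URL and JSON body for a Hugging Face Inference API call.

    Simplified illustration only: the real wrapper also sends an
    Authorization header carrying HUGGINGFACEHUB_API_TOKEN and handles
    retries and error responses.
    """
    url = f"https://api-inference.huggingface.co/models/{repo_id}"
    body = {
        "inputs": prompt,
        "parameters": {"temperature": temperature, "max_length": max_length},
    }
    return url, body

url, body = build_inference_request(
    "google/flan-t5-large", "Can you tell me the capital of Russia"
)
print(url)  # → https://api-inference.huggingface.co/models/google/flan-t5-large
```

This also makes clear why repo_id and model_kwargs are the two things the wrapper asks for: one picks the endpoint, the other fills the generation parameters.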
            • 26:30 - 27:00 As I said, this is just the basics to build understanding. Up to here we've covered a lot: how to call OpenAI models through the OpenAI library and API, and Hugging Face API LLMs too, all using LangChain. The next thing to discuss is prompt templates. Prompt templates are super useful and will be very handy
            • 27:00 - 27:30 as we go ahead. So guys, let's discuss prompt templates, again a very handy component in LangChain. Even with OpenAI, with the help of prompt templates you can get more reliable answers from the LLMs. I'm not talking about the kind of prompt engineering that gets you a three-crore package; just simple prompt templates and how they can be used. First of all, let's
            • 27:30 - 28:00 import: from langchain.prompts import PromptTemplate. The idea is that whenever we call an LLM, the model should know what kind of input to expect from the client or end user and what kind of output it should give. If we really want to define how our input and
            • 28:00 - 28:30 output should look, we can use a prompt template, because if we use GPT-3.5 directly it can be used for any purpose, whereas here I want to restrict the interaction to a specific input and output. So let me define my prompt template. The first thing we need to define is the input variables, so I'll write
            • 28:30 - 29:00 input_variables. In input_variables we declare what input we are giving: there will be a fixed template, and into that template I feed my input. Here I'm using a single input, the country whose capital I want, so I'll name it "country". That is the first parameter going into my template, and I'll store the whole thing in
            • 29:00 - 29:30 a variable called prompt_template. Great: country is my input variable. Next I define the template itself: "Tell me the capital of {country}", where {country} is just a placeholder variable;
            • 29:30 - 30:00 whenever I supply a value for it, that value gets substituted there. The effect is that OpenAI understands the question being asked while the value stays dynamic, supplied at run time. If I just want to render this, I can call prompt_
            • 30:00 - 30:30 template.format(): there is a format function, and in it I pass my input by the variable's name, country="India". Now you can see the entire prompt is generated: "Tell me the capital of India".
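Conceptually, PromptTemplate.format is just Python string formatting with declared variables. A dependency-free stand-in (my simplification for illustration, not the real LangChain class) behaves like this:

```python
class MiniPromptTemplate:
    """Tiny stand-in for langchain's PromptTemplate: a template string
    plus the names of the variables it expects."""

    def __init__(self, input_variables, template):
        self.input_variables = input_variables
        self.template = template

    def format(self, **kwargs):
        # Fail loudly if a declared variable is missing, rather than
        # silently producing a half-filled prompt.
        missing = set(self.input_variables) - set(kwargs)
        if missing:
            raise KeyError(f"missing variables: {missing}")
        return self.template.format(**kwargs)

prompt_template = MiniPromptTemplate(
    input_variables=["country"],
    template="Tell me the capital of {country}",
)
print(prompt_template.format(country="India"))  # → Tell me the capital of India
```

Declaring input_variables up front is what lets chains validate their inputs before ever calling the model.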
            • 30:30 - 31:00 So here you can see "Tell me the capital of India". Now if I try to predict with llm.predict(prompt_template=...), I get an error: predict() missing 1 required positional argument: 'text'. I've given the prompt template, but it is expecting a text argument, the actual value
            • 31:00 - 31:30 to fill in. So I define a second argument, text="India", and try again. Still an error: the prompt is expected to be a string, instead found a list, and if you want to run the LLM on multiple prompts, use generate instead. I'll
            • 31:30 - 32:00 show you the proper way; that is exactly why I'm keeping this error on screen, because this is not the right way to call a prompt template along with its inputs. So what I'll do quickly is import one important thing called chains: from langchain.chains import LLMChain. Understand one thing,
            • 32:00 - 32:30 guys: a chain basically means combining multiple things and then executing them. I have my LLM, I have my prompt, I have to give my input and get the output based on it. So instead of calling llm.predict directly, I'll use LLMChain, giving it my LLM model, my prompt template, and my input. Once from langchain.
            • 32:30 - 33:00 chains import LLMChain executes, I create my chain: chain = LLMChain(llm=llm, prompt=prompt_template). The chain now combines the LLM with the prompt template, using both of them, and to run it I
            • 33:00 - 33:30 write chain.run("India"). I know the prompt it will build, "Tell me the capital of India", so chain.run definitely gives the output: "The capital of India is New Delhi". Perfect; I can also print it. And see guys, I'm deliberately not deleting any of the errors; I want you all to see the errors.
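The idea behind LLMChain — hold a model and a prompt together, format on run, then call the model — can be sketched without LangChain at all. The stub "model" below is a placeholder lambda, not a real LLM call:

```python
class MiniLLMChain:
    """Stand-in for langchain's LLMChain: holds an llm callable and a
    template; run() formats the template, then feeds it to the llm."""

    def __init__(self, llm, template, input_variable):
        self.llm = llm
        self.template = template
        self.input_variable = input_variable

    def run(self, value):
        # Format first (what llm.predict refused to do for us),
        # then hand the finished string to the model.
        prompt = self.template.format(**{self.input_variable: value})
        return self.llm(prompt)

# Stub model: echoes the prompt so we can see what the chain built.
fake_llm = lambda prompt: f"[model saw: {prompt}]"

chain = MiniLLMChain(fake_llm, "Tell me the capital of {country}", "country")
print(chain.run("India"))  # → [model saw: Tell me the capital of India]
```

This is also why the earlier llm.predict attempts failed: predict wants a finished string, while the chain owns the format-then-call step.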
            • 33:30 - 34:00 Try to understand them; that is the best way of learning, and there is really no other way. You need to find your own way to learn things, so don't worry about the errors that come up; worry about how to fix them, or what the alternate way of fixing them is. You can use all these techniques as we go ahead. So this is prompt templates, and I've also talked about
            • 34:00 - 34:30 LLMChain; these are important because we'll be using all of them in creating our end-to-end application. Now let's discuss some more examples. So guys, we're now on one more important topic: combining multiple chains using SimpleSequentialChain. So far we've understood LLMChain, which combines an LLM model with a prompt template through which we can
            • 34:30 - 35:00 give our input and get our specific output. Now if I have multiple chains, say I give one input and I want it to flow through two, three, or four chains, how can I do that? For that I'm going to use SimpleSequentialChain. Let's see how this can be done. First I'll create a capital prompt, asking for the capital of a given country. This
            • 35:00 - 35:30 will be my first prompt, so I use PromptTemplate with input_variables=["country"], and the next argument is the template: "Please tell me the capital of the {country}",
            • 35:30 - 36:00 specifically giving away as country so this becomes my first template right and what I will do I will create a chain the chain name will be Capital chain okay and here I'm going to probably use my llm chain and my llm model will be llm okay and then I will also be using my prom template is equal to as capital template Capital template okay so this is done
            • 36:00 - 36:30 let's see Capital prompt where is Capital prompt oh sorry Capital prompt Capital prompt is not defined why uh please tell me the capital of this uh template oh double equal to Let's it no worries uh two validation error for LM chain so first I've used an LM chain where prompt
            • 36:30 - 37:00 template is equal to this uh where it is capital prompt so guys after just checking the documentation this should be prompt itself okay because in llm chain we have used prompt and here is capital template here also I'm going to probably use Capital template now if I execute this this works absolutely fine uh one thing you can probably see over here that I've given my template name and then I've also given the capital chain right so if I want to probably execute it I can just just give my chain. run and that part is parameter okay but now what I want is that I also
            • 37:00 - 37:30 want to create one more prompt template I want to give the same input to that chain also so here uh let's say I will write famore template and I will just say prompt template and here again my input variable what is my input variable so my input variable will be whatever specific things that I'm trying to give right let's say please tell me the capital of uh India if I say right the capital
            • 37:30 - 38:00 whatever capital comes back is the value I pass along, so the input variable will be "capital". That is my second one, and its template asks: "Suggest me some amazing places to visit in {capital}",
            • 38:00 - 38:30 that specific capital. So that's what I'm telling it: please tell me the capital of the country, and that capital information becomes the input variable to this second template in its own chain. I'll get two answers: first the capital of the particular country, and then some amazing places to visit in that capital. That is all the information I've put up, and I hope this also works fine. Now I'm going to create
            • 38:30 - 39:00 another chain for this famous template. I'll write famous_chain = LLMChain(llm=llm, prompt=famous_template). So that is what I'm doing, and I've given that prompt over
            • 39:00 - 39:30 there as well; this will be my second chain. Once I execute it, both chains are ready. Now I need to give one input, send it to one chain, get the output from that chain, and pass that output to the next chain. How? Again, from langchain.chains I import SimpleSequentialChain. I know guys, you may be wondering why we have to use this: you're passing one input to
            • 39:30 - 40:00 one chain, getting its output, and passing that output to the other chain to get the final output. This is quite amazing, and when you see an end-to-end application you'll understand these are some of the important components you should definitely know. So finally I'll create my overall chain; I'll import it, and then chain =
            • 40:00 - 40:30 SimpleSequentialChain, capitalized exactly like that, and inside SimpleSequentialChain I just have to name all my chains, in order, in the form of a list: the first chain is capital_chain, the second is famous_chain. Both chains are ready, so to run it all I have to do is write chain.run and
            • 40:30 - 41:00 give it "India". Done; let's see what output I get. It's running... New Delhi is a bustling metropolis and a great place to visit for historical sites and culture: the Red Fort, the most popular sights, the iconic monument honouring those who fought in World War I, the 16th-century Mughal-era
            • 41:00 - 41:30 tomb that is a UNESCO World Heritage Site, and so on. Notice it did not give us the first chain's answer, because SimpleSequentialChain only returns the final output. If you want to display the entire chain I'll show you a way; there is something called buffer memory for that. But the amazing thing is: I gave one input, got an output, passed that output to the next chain, and got one amazing answer. Definitely try it out yourself with different examples.
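SimpleSequentialChain is essentially function composition: the output of each chain becomes the sole input of the next, and only the last output is returned, which is exactly why the capital itself vanished from the answer above. A dependency-free sketch, with stub lambdas standing in for the real LLM-backed chains:

```python
class MiniSimpleSequentialChain:
    """Pipe a single value through chains in order; only the final
    output is returned, mirroring langchain's SimpleSequentialChain."""

    def __init__(self, chains):
        self.chains = chains

    def run(self, value):
        for chain in self.chains:
            value = chain(value)  # each output becomes the next input
        return value

# Stub "chains" standing in for capital_chain / famous_chain,
# so the plumbing is visible without any API calls.
capital_chain = lambda country: "New Delhi"
famous_chain = lambda capital: f"Places to visit in {capital}: Red Fort, ..."

overall = MiniSimpleSequentialChain([capital_chain, famous_chain])
print(overall.run("India"))  # → Places to visit in New Delhi: Red Fort, ...
```

Since intermediate values are overwritten at each step, any output you want to keep has to survive to the end — the limitation SequentialChain fixes next.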
            • 41:30 - 42:00 Next I'm going to discuss one very important component, chat models with OpenAI; whenever you want to create a chat model you have to use it, so let's have a look. But one additional thing first: suppose I want to see the entire chain's outputs. For that we use SequentialChain, and let me show you one example of it. There isn't much more to it,
            • 42:00 - 42:30 but you should definitely know it. Again, I don't want to take much more time on this, but it is good to know; sometimes when you're developing things you'll need it, and you'll understand once I start the end-to-end project. One end-to-end project will be done in this very video, don't worry, but I definitely want to show this example as we go ahead. Now let's quickly
            • 42:30 - 43:00 do the same thing: I'll copy the earlier code entirely and paste it here. Now, along with the llm and prompt template, I'll also give an output_key, which here will be "capital". So this is my capital_chain with that specific output. Now let's create the next template, the famous template.
            • 43:00 - 43:30 Here you can see the famous template, suggesting places in the capital, with its template name and chain. The first chain's output key, "capital", becomes this chain's input key, and I can also give this chain its own output
            • 43:30 - 44:00 key, "places". Done. So there are two simple templates I've created: suggest me some amazing places to visit in this particular capital, where the capital comes from the first chain. Now the chain will be able to understand where every output goes. Next I'll import from langchain.chains
            • 44:00 - 44:30 import SequentialChain, and then write chain = ... let me execute the import first, since autocomplete isn't suggesting it... chain = SequentialChain, and I give all my chains by name: the first is capital_chain,
            • 44:30 - 45:00 then famous_chain. After this come the input variables: input_variables is whatever my variable name is, in this case nothing but "country", and then the output variables:
            • 45:00 - 45:30 I'll also set output_variables. See guys, these two parameters are just the keys I'm emitting: one is "capital" and one is "places". Done; this is my entire chain. Now to run it, what I
            • 45:30 - 46:00 do is call it with the input given as key-value pairs: {"country": "India"}. If I execute it I can now see my entire chain; it takes some time. What I've done here is give every LLMChain I create an output key, two chains, so two output keys, and
            • 46:00 - 46:30 here you can see the whole trace: country was India, the capital of India is New Delhi, and here are some amazing places to visit in New Delhi, all the information in one place. Now let's discuss chat models, specifically in LangChain, using a class called ChatOpenAI, which is very good if you want to create a conversational chatbot.
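What makes SequentialChain different from the simple variant is that every step reads from and writes to a shared dict of named keys, so all intermediate outputs survive. A sketch of that bookkeeping, with stub lambdas in place of the real LLM calls:

```python
class MiniSequentialChain:
    """Each step is (input_key, output_key, fn); results accumulate in
    a shared dict, so intermediate outputs are all kept."""

    def __init__(self, steps):
        self.steps = steps

    def run(self, inputs):
        state = dict(inputs)
        for input_key, output_key, fn in self.steps:
            # Read the named input, store the result under the named output.
            state[output_key] = fn(state[input_key])
        return state

chain = MiniSequentialChain([
    ("country", "capital", lambda c: f"The capital of {c} is New Delhi"),
    ("capital", "places", lambda cap: f"Amazing places: Red Fort ({cap})"),
])
result = chain.run({"country": "India"})
# result keeps every key: "country", "capital", and "places"
```

The output_key / input_variables wiring in the transcript is exactly this naming: it tells each step which dict entry to read and which to write.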
            • 46:30 - 47:00 So, chat models with ChatOpenAI. First we import from langchain.chat_models import ChatOpenAI, so let's quickly do that. After importing it, with ChatOpenAI there are three schemas you really need to understand. Whenever a conversation happens: if a
            • 47:00 - 47:30 human gives some kind of input and expects a response, that input becomes a HumanMessage. When your chatbot opens, a default message usually appears first, related to the domain, like what that specific chatbot does; that comes from another schema called SystemMessage. And then there is one more, AIMessage, again a schema, which
            • 47:30 - 48:00 carries the output: whatever output the chatbot, the AI model, gives is related to the AIMessage schema. So from langchain.schema we import HumanMessage, SystemMessage
            • 48:00 - 48:30 (as I said, SystemMessage is also required), and AIMessage. You'll get each of these as an example, because in the upcoming videos we'll create a conversational chatbot, and at that point we'll see what we use and how. So let's import these quickly. Now, my LLM model is there; before, while creating plain LLM models, we used something called OpenAI.
            • 48:30 - 49:00 We used the OpenAI class, so in this case I'll copy the same thing, paste it here, and call it chatllm, but instead of OpenAI I'll use ChatOpenAI, with some temperature, and I'll also give one model, so let me write the model name:
            • 49:00 - 49:30 "gpt-3.5-turbo". I showed you earlier which models we can use. So this is my chat LLM model. If I display chatllm, you'll see it is a ChatOpenAI instance with all the configuration, the temperature and so on. It also shows the OpenAI key, which I cannot show you, so I'll clear that output so you don't find the key.
            • 49:30 - 50:00 Now let's use the three schemas and see what the output looks like. First I'll create the messages in the form of a list. The first is the SystemMessage: I initialize it with one content argument and say "You are a comedian AI assistant". This is
            • 50:00 - 50:30 me telling the chatbot how to behave: it should act like a comedian AI assistant. Next comes the HumanMessage, where again I write the content; this is the input I'm giving as the human, so I'll say
            • 50:30 - 51:00 "Please provide some comedy punchlines on AI". So here you can see these are my two pieces of information, the two messages I'm going to give my chat LLM model, and then let's see
            • 51:00 - 51:30 what the output is. To give this input to my chat LLM, I write chatllm(...) with the list. It has two pieces of information: by default it knows the system is a comedian AI assistant, and as the human input we're saying please provide some comedy punchlines on AI. If I execute this, I get an output. This is how we'll design things later on; in the end-
            • 51:30 - 52:00 to-end project we're not going to hardcode this, it'll be quite dynamic. Here you can see the result is an AIMessage: the output I'm getting is itself an AIMessage. So among the schemas we see from this chatbot's output, the SystemMessage instructs the chatbot beforehand to act in a certain way, the HumanMessage is our input, and the AIMessage is the output. "AI may be smart,
            • 52:00 - 52:30 but can it tell me if my outfit makes me look like a potato?" "AI is like a virtual therapist, except it never judges you for eating an entire pizza by yourself." These are the kinds of comedy lines you get, and I think this is quite amazing; you can run it any number of times. And since the AI also gives a message, you can append that AIMessage back into the list to keep building a conversational AI.
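The three schemas are plain data carriers distinguished by role. A minimal dataclass version (an illustration, not LangChain's actual classes) makes the conversation-as-list idea concrete:

```python
from dataclasses import dataclass

@dataclass
class SystemMessage:
    content: str  # instructions: how the assistant should behave

@dataclass
class HumanMessage:
    content: str  # the user's input

@dataclass
class AIMessage:
    content: str  # the model's reply

conversation = [
    SystemMessage(content="You are a comedian AI assistant"),
    HumanMessage(content="Please provide some comedy punchlines on AI"),
]
# Appending the model's reply preserves the history for the next turn,
# which is the whole trick behind a multi-turn chatbot.
conversation.append(AIMessage(content="AI may be smart, but..."))
```

Each call to the chat model takes the whole list, so the roles tell the model which lines were instructions, which were the user, and which were its own earlier replies.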
            • 52:30 - 53:00 Say I take that list and keep appending all this information to it: it then acts as a chat model as we go ahead. Now guys, one more topic, and after this we'll implement our project: prompt template + LLM + output parser. First of all, let's understand what an output parser is. To show you how
            • 53:00 - 53:30 we can implement one, I'll use LangChain again; as I said, LangChain is a powerful library, it has everything as a wrapper. So: from langchain.chat_models import ChatOpenAI. See, there are so many options, ChatVertexAI, ChatOpenAI, very powerful, very
            • 53:30 - 54:00 powerful, and the pace at which it is being developed is quite amazing. Then from langchain.prompts I'm also going to use some prompts: just as we have PromptTemplate when we use OpenAI, with ChatOpenAI we use a prompt class called ChatPromptTemplate. So I'm going to import, not ChatPromptValue, but
            • 54:00 - 54:30 ChatPromptTemplate. Along with this, as I said, there are output parsers: if I want to modify an LLM model's output beforehand, I can use an output parser. So for LangChain I'll also import this: from langchain.schema import BaseOutputParser. These
            • 54:30 - 55:00 are the three things I'm specifically importing and here I'll basically go ahead and write class let's say I am defining one output parser and I'll Define this in the form of class it'll inherit the base output class so let's say uh I will say comma separated output okay that basically means it is basically called as a comma separated output this is the class that that I'm going to Define and uh even even in the uh documentation it is given in an
            • 55:00 - 55:30 amazing way okay so comma separated output and this will basically be inheriting the base output parel okay now inheriting when I probably inherit right that basically means we inheriting this base output parel and we can call this along with an llm models here I will Define a parse method and inside this parse I will take self as one keyword and whatever text the output that we are specifically getting which will be in the form of string format all
            • 55:30 - 56:00 we'll do we'll just write return text. strip dot dot split right and this will be a comma separated split understand one thing now this is what is the class that I've defined and this is just like an output parser by default output parser is what you can probably see whenever I'm specifically using the chat models I'm I'm getting some kind of output right AI may be smart something it is giving in the form
            • 56:00 - 56:30 of a sentence, often with a new line added at the end. What I'm saying is: whatever output I get, I will take it and split the words on commas. For this I will again define my template. I'll start with "You are a helpful assistant". That is my first message, going in as the system template, per the schema we
            • 56:30 - 57:00 discussed earlier. I will also give some more instruction: when the user gives any input, you should generate five words in a comma-separated list. So that is my entire system message. In this template I'm saying
            • 57:00 - 57:30 that whenever the user gives any input, the model has to generate five words, comma separated. That is what I have done. Now, what will the input text be? We define that separately: this will be my human template, the word that I'm going to give. As a test, the instruction is: you should generate five synonyms. Let's say synonyms,
            • 57:30 - 58:00 so I'll just write synonyms, comma separated. Here {text} is the placeholder for whatever text I give. Now I will go ahead and create my chat prompt. I've already imported ChatPromptTemplate, and on it I will call the from_messages class method: ChatPromptTemplate.from_messages. Now,
            • 58:00 - 58:30 inside from_messages I have to give two pieces of information: the first template is the system one, the system information I want to give, which is the template I defined; the second will be my human template, whatever human message I'm actually giving, defined with the human role. So once I
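The system and human templates above can be sketched with plain string formatting. LangChain's `ChatPromptTemplate.from_messages` essentially pairs each template string with a role; the `render` helper below is a hypothetical stand-in (not a LangChain API) so the idea runs without the library installed.

```python
system_template = (
    "You are a helpful assistant. When the user gives any input, "
    "you should generate five words in a comma separated list."
)
human_template = "{text}"  # placeholder filled in when the chain is invoked

# ChatPromptTemplate.from_messages pairs each template with its role:
messages = [("system", system_template), ("human", human_template)]

def render(messages, **kwargs):
    # Hypothetical helper mimicking prompt formatting:
    # substitute the given variables into every template.
    return [(role, tmpl.format(**kwargs)) for role, tmpl in messages]

prompt = render(messages, text="intelligent")
# prompt == [("system", "You are a helpful ..."), ("human", "intelligent")]
```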
            • 58:30 - 59:00 execute it, you'll see this is my chat prompt. Now, to execute this I obviously have to use a chain, because I have a prompt template, a human text, and this system template, so how do I combine all these things? That is what I'm going to show you here. First I will use this chat LLM. Now see, this is quite amazing, and this is the best way of running chains. I will
            • 59:00 - 59:30 say chain = my chat prompt, and to this chat prompt I attach my model: I just write the | (pipe) symbol. This symbol means the components are getting chained up, and remember, the order matters. You can also initialize ChatOpenAI right here. Along with this I will also give my
            • 59:30 - 60:00 output parser; the output parser comes last, so that will be my CommaSeparatedOutput(). Now I've chained everything one by one: I'm giving the chat prompt, the chat LLM model is there, and the output should be comma-separated output, which is derived from this particular class. Once I execute it, the chain is created. Now finally I will
            • 60:00 - 60:30 write chain.invoke(), and whenever I use a chain I have to give the input as a key-value pair, key colon value. In this case the word will be "intelligent", and I have to give it under the text key, so {"text": "intelligent"}. Now let's see what the output is. Okay, there is
            • 60:30 - 61:00 a syntax issue I made: I had forgotten to close my dictionary. Now if I write chain.invoke({"text": "intelligent"}) you can see five words: smart, clever, brilliant, sharp, astute. I don't know that last word, but you can see the output I'm getting. If I remove the output parser, the output looks like this instead: AIMessage(content=...). But just by
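The pipe composition and `chain.invoke` call can be illustrated with a toy chain. This is not LangChain's implementation (the real `|` operator builds a runnable sequence), but it shows the data flow: prompt first, then model, then parser. The `fake_llm` reply is hard-coded to mirror the output seen in the video.

```python
class ToyChain:
    """Toy stand-in for what `prompt | llm | parser` builds in LangChain:
    each stage's output becomes the next stage's input."""

    def __init__(self, *stages):
        self.stages = stages

    def invoke(self, inputs):
        value = inputs
        for stage in self.stages:
            value = stage(value)
        return value


def prompt(inputs):
    # Render the human message from the "text" key, as chain.invoke expects.
    return "Generate five synonyms of " + inputs["text"] + ", comma separated."


def fake_llm(prompt_text):
    # Stand-in for ChatOpenAI; hard-coded reply mirroring the video's output.
    return "smart, clever, brilliant, sharp, astute\n"


def parser(text):
    # Same logic as the CommaSeparatedOutput parser.
    return text.strip().split(", ")


chain = ToyChain(prompt, fake_llm, parser)
result = chain.invoke({"text": "intelligent"})
# result == ["smart", "clever", "brilliant", "sharp", "astute"]
```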
            • 61:00 - 61:30 adding this output parser, you can see what a clean message you get: exactly the thing you want. That is how powerful a prompt template is. Now this is done, and this is more than sufficient for now; new things like how to read PDFs and what schemas are, we will discuss as we go ahead. But now let us go ahead and try to
            • 61:30 - 62:00 create a simple chatbot. I'll create an app.py, and using this simple chatbot we'll try to understand how things actually work and what we can do. Again, I'm going to use Streamlit, and I'll be writing the code line by line, so let's have a look. So guys, finally we are going to develop our Q&A chatbot with the concepts we have learned. I'm not going to use all the concepts in this specific
            • 62:00 - 62:30 video itself, because there are probably 10 to 12 projects coming up in this playlist series, where we'll discuss more projects as we go ahead. In this video I'm going to create a simple Q&A chatbot just with the help of OpenAI and LangChain, obviously using the OpenAI APIs and LLM models, and I'm also going to use Streamlit. So let's go ahead and
            • 62:30 - 63:00 see what setup I need to do. This was langchain.ipynb; I will be giving you this entire file as a reference on GitHub as well. First of all, in requirements.txt I will add one more library that I need to install, an important one: python-dotenv. python-dotenv helps us load all the environment variables we have
            • 63:00 - 63:30 created for our application. So this is the library I have to add; just go ahead and install from requirements.txt. I've already done that, and this will be a specific task for you. Now we are starting here: from langchain.llms I have imported OpenAI, then from dotenv I import load_dotenv and call load_dotenv(). As soon as I call this, it takes all the environment variables from the .env file. I've already created the environment variable, so I'm not going to
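For reference, the requirements.txt for this project would look roughly like this (package list inferred from the libraries named in the video; versions are not pinned here):

```text
langchain
openai
streamlit
python-dotenv
```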
            • 63:30 - 64:00 show you the environment variable again, but in short it looks something like OPENAI_API_KEY=<your key>. That is my OpenAI API key, and I'm going to use it in my application. These are the basic steps we will go ahead with. Along with this, I'm also going to import one more library, which is Streamlit, because we
            • 64:00 - 64:30 are going to use Streamlit itself. Let me open my terminal and quickly write pip install -r requirements.txt; the installation will start, and the Streamlit library will also get installed. We are using Streamlit for the front-end application. It's not that you have to use Streamlit, but it makes it very easy for me to build the app and do the
            • 64:30 - 65:00 deployment, because I'm also going to show you deployment in a Hugging Face Space. What exactly a Hugging Face Space is, I will also discuss. So quickly, let's do this. It will probably take some time, and then I will go to my app.py. Let me quickly import Streamlit as well: import streamlit as st. So that is my Streamlit import. It will take some time to
            • 65:00 - 65:30 download; let's see how long, since it depends on your internet speed and how fast your system is. My system is really fast and it is still taking time, so for you it may take longer. While the installation runs, I will go ahead and start creating our application. First of all I will create a function to load the OpenAI model and get a
            • 65:30 - 66:00 response. I will call this function get_openai_response. Since it is a Q&A chatbot, it will take question as its parameter, which will be of string type; it could also be numerical, but I will keep it as a string. So this is done, and you can see the installation has finished, so I will close the terminal. Now, as soon as I
            • 66:00 - 66:30 call this function, what do I need to do? I need to call my LLM model. For the LLM I will use OpenAI. Have I imported OpenAI? Yes, I have. So I will go ahead and write model_name= and define my model; I'll be using text-davinci-003,
            • 66:30 - 67:00 which is one of the models available; you can refer to the docs. So, text-davinci-003, and I'm also going to define my temperature: temperature=0.5. Along with this I'll copy one more thing: I will also set up my OpenAI API key via os.environ as the first parameter. All this is done; I
            • 67:00 - 67:30 think I also need to import os. So this is done. In short, what I'm doing in this code is initializing my LLM model. The next thing is that I need to get my response: the response will be nothing but the llm called directly with the question. See, I'm just creating a basic one; beyond this, whatever you want to do, you can
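The helper described above can be sketched like this. The `llm` parameter is an assumption of this sketch so it can run without an API key; in the video the model is always constructed inside the function with `OpenAI(model_name="text-davinci-003", temperature=0.5)` from `langchain.llms`.

```python
def get_openai_response(question, llm=None):
    """Load the OpenAI model and return its answer to `question`.

    The `llm` parameter is an addition of this sketch (for offline use);
    the video always builds the LangChain model inside the function.
    """
    if llm is None:
        # Needs langchain installed and OPENAI_API_KEY set.
        from langchain.llms import OpenAI
        llm = OpenAI(model_name="text-davinci-003", temperature=0.5)
    # LangChain LLMs are callable: llm(question) returns the completion text.
    return llm(question)


# With a stubbed model, so the sketch runs without any API key:
answer = get_openai_response("What is the capital of India?",
                             llm=lambda q: "New Delhi")
```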
            • 67:30 - 68:00 probably do: try to create your own prompt template, use a chain if you want, try multiple things. But just to start, I'm going to use a simple application that takes an input and gives some kind of output. It has no AI message set, no human message set, no system message set either; we haven't given any prompt template. This is just to give you an idea of how things start. So now we will initialize our Streamlit
            • 68:00 - 68:30 app. With Streamlit, I will write st.set_page_config. This is a Streamlit function that helps you set the page title, so I'll write page_title="Q&A Demo". That is done, and then I will set my
            • 68:30 - 69:00 header: the header will be something like "Langchain Application". So I've given my header as well. Now I need a way to get user input: if I get a user input, then I should be able to press a submit button and get the text. So first of all I will create my submit button: st.
            • 69:00 - 69:30 button, and I'll label it "Generate" or "Ask the question", something like this. Now, if the ask button is clicked, that is, if I write if submit: and it is clicked, the condition becomes True, so the code
            • 69:30 - 70:00 goes inside this particular block. Here I'll put a subheader saying "The Response is", and then I will write st.write(response). So that is what I'm doing: I'm getting the response here, and this response is
            • 70:00 - 70:30 coming from the function, but we are still not capturing the input. Only if we capture the input can we send it to the function, and for that I have to handle the input part. So guys, first of all we'll capture our input text. Let me write input =
            • 70:30 - 71:00 st.text_input, because I'm going to use a text field, and here I will be waiting for the text from the user. I will write the label "Input:" with a space, and key="input". So this will be my input field. Now once I've done this, I'm going
            • 71:00 - 71:30 to take this particular input and call, as I hope you know, the get_openai_response function, passing this input to it. Whatever input I get goes in as the question, and I will get the response back. In the function I will
            • 71:30 - 72:00 just write return response, and then store the result in my response variable. Done. See how it flows: I got the input, sent it to get_openai_response, my OpenAI model got loaded, and it is called via the llm. You can also call the predict method, use a chain, use multiple things, or assign a prompt template in
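Putting the pieces together, app.py looks roughly like this. It is a sketch of what the video builds, not the exact file: the imports are moved inside `main()` only so this sketch parses without Streamlit or LangChain installed, whereas the real app.py keeps these statements at module top level so that `streamlit run` executes them on every rerun.

```python
# app.py (sketch): run with `streamlit run app.py`;
# plain `python app.py` will not serve the UI, as shown in the video.

def main():
    # Lazy imports: an assumption of this sketch so the file parses
    # without Streamlit/LangChain; the real app imports at module level.
    import streamlit as st
    from langchain.llms import OpenAI

    def get_openai_response(question):
        # text-davinci-003 with temperature 0.5, as in the video
        llm = OpenAI(model_name="text-davinci-003", temperature=0.5)
        return llm(question)

    st.set_page_config(page_title="Q&A Demo")
    st.header("Langchain Application")

    user_input = st.text_input("Input: ", key="input")
    submit = st.button("Ask the question")

    if submit:  # True only on the rerun triggered by the click
        st.subheader("The Response is")
        st.write(get_openai_response(user_input))
```

In the real file, the statements inside `main()` sit at the top level of app.py so Streamlit executes them top to bottom each time the page reruns.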
            • 72:00 - 72:30 this particular function. Then finally you have the st.button "Ask the question" and the if submit block. Now let's quickly run it. If I directly call python app.py it will give an error, because this is a Streamlit app; if it were Flask, that would have worked. Now it says KeyError: OPENAI_API_KEY, so let's look at os.environ, OPENAI_API_KEY, and
            • 72:30 - 73:00 load_dotenv. So guys, one mistake I made: whenever I want to read environment variables that load_dotenv() has loaded from .env, I should use the os.getenv function. I'll remove the square brackets and call os.getenv("OPENAI_API_KEY") instead. Now I hope it will work and we should not get any problem. So:
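The difference between the two lookups is easy to demonstrate. A minimal illustration follows; the key value below is a fake placeholder, not a real key.

```python
import os

# Simulate what load_dotenv() does: copy the .env entry into the process env.
os.environ["OPENAI_API_KEY"] = "sk-demo-placeholder"  # fake value, not a real key

# os.environ["NAME"] raises KeyError when NAME is missing, which is the error
# seen in the video. os.getenv("NAME") returns None instead of raising,
# and is the usual way to read keys that load_dotenv() has populated.
key = os.getenv("OPENAI_API_KEY")
missing = os.getenv("SOME_MISSING_KEY")
```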
            • 73:00 - 73:30 streamlit run app.py, and here we have our entire application. It's running; it's loading, and here we are. Now I'll ask the question "What is the capital of India?" The response is: the capital of India is New Delhi. Let's try "What is generative AI?"
            • 73:30 - 74:00 I'll ask the question, and you can see: generative AI is a type of artificial intelligence that focuses on creating new data from existing data, and so on. I'm still getting somewhat verbose responses, and that is the reason we'll also be using output parsers; we'll make sure to use conversation buffer memory, and we'll implement schemas like human, AI, and
            • 74:00 - 74:30 system messages, all the schemas we discussed earlier. But this is the simple application we are building here; it is going to be quite amazing, and it is just a basic Q&A chatbot where you can ask whatever questions you like. For example: "Please write a romantic poem." I'll
            • 74:30 - 75:00 just ask for a romantic poem on generative AI, since many people are using it for things like this. Ask the question, and here you can see: "Generative AI, new love in my life, your data-driven heart is perfect, your algorithms so precise, your knowledge is so wise..." So everything is here. Now what I'm going to do is show you the
            • 75:00 - 75:30 deployment part, since everything is working fine. I will first log in to Hugging Face, go to Spaces, and create a new Space, because that is where I'm going to do the deployment. I will name it "Langchain Q&A Chatbot". I don't need a license; the SDK will be Streamlit. For Space hardware there are paid options too, but I'm going to use the simple free one, CPU basic with 2 vCPU and 16 GB, and I will
            • 75:30 - 76:00 create this as public so that you can also refer to it. Okay, it says the Space name must match the required pattern and the underscore isn't allowed, so I'll adjust the name and create the Space. After creating the Space, there are a couple of things I'm going to do. A Space is just like a GitHub repository: if you go to Files, you'll be able to see it. I'm going to upload my files here, but
            • 76:00 - 76:30 before that, I'll go to Settings, and if I scroll down there is a section called secrets, because I have to put my OpenAI key there as a secret. Right now no secrets are there, so I will click on New secret, and you know what I have to use: OPENAI_API_KEY. I will put it here, and now,
            • 76:30 - 77:00 oops, just a second: OPENAI_API_KEY. Let me open my key and paste it in as the value. I'm also going to save the value, though I won't show it to you. Let me update this, come back, and quickly show you the next steps after adding the OpenAI API key. You can see it here: in Secrets you'll be able to see this specific key.
            • 77:00 - 77:30 Now, after updating that, I will go to my App tab. My entire application will start getting built, and as soon as the OpenAI API key is added you'll see my application start running. Here I can ask a question, "What is the capital of India?", and you can see I get the response. Clearly, I've been able to do the deployment in Hugging Face Spaces; it was very
            • 77:30 - 78:00 simple. You can see the files here: app.py and requirements.txt. I had commented out all the code related to .env, because as soon as we add the secret variable, when the OpenAI model is called it takes the OpenAI API key from the Space secrets and uses it. So yes, that was the deployment, and we were quickly able to create a simple Q&A chatbot along with its deployment. That was all about
            • 78:00 - 78:30 it in this specific video. From the next video onward I'm going to increase the complexity of the projects. Please understand, you really need to practice this a lot; if you practice it well, then trust me, all these things will be easy to understand. So yes, that was it from my side. I hope you liked this video; please subscribe to the channel, press the bell notification icon, and I will see you all in the next video. Have a great day. Thank you, take care, bye-bye.