Connecting LLMs to MCP Servers Seamlessly

Connect any LLM to any MCP server without any MCP client.

Estimated read time: 1:20

    Summary

    In this tutorial by AI LABS, learn how to connect any large language model (LLM) directly to MCP servers without needing a dedicated client. The video introduces a new library that lets you bypass MCP clients such as Windsurf, Cursor, and Claude Desktop. It guides you through setting up a Python environment, installing the necessary packages, and building a basic example that pairs the Airbnb MCP configuration with OpenAI. It also explains how to handle multiple servers with this framework, making it easy to build modular and autonomous applications.

      Highlights

      • Bypass the need for MCP clients with a new library! 📚
      • Easy installation for Python enthusiasts! 🐍
      • Use OpenAI or Anthropic with MCP to make the magic happen! ✨
      • Multitask with multiple MCP servers effortlessly!
      • Tap into the power of LLMs for guidance and code-writing! 💪

      Key Takeaways

      • Connect any LLM to MCP servers without a client! 🚀
      • Easy Python-based setup; perfect for coders! 🐍
      • Customize your agent with endless possibilities!
      • Learn to handle multiple MCP servers intelligently!
      • Don't worry if confused; use LLMs to assist you! 🤯

      Overview

      In this video by AI LABS, you are walked through the process of connecting large language models (LLMs) directly to MCP servers without relying on a specific MCP client. Previously you would have needed a client such as Windsurf; now a new library makes it possible to communicate straight from your code. It's aimed at those comfortable with Python and comes with a solid set of features.

        The tutorial explains setting up your Python environment and installing the necessary libraries. If your system runs Python 3, use pip3 rather than pip. The example pairs the Airbnb MCP configuration with OpenAI, showing a step-by-step approach. You'll learn to create an agent that interacts with an MCP server, and you can modify it to fit your needs; a minimal sketch follows.
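
A minimal sketch of that basic example, assuming the library shown in the video is mcp-use and following the basic-agent pattern from its README; the model name, prompt, and the @openbnb/mcp-server-airbnb package are illustrative:

```python
import asyncio

from dotenv import load_dotenv          # reads the API key from .env
from langchain_openai import ChatOpenAI
from mcp_use import MCPAgent, MCPClient

async def main():
    load_dotenv()  # the .env file only needs the key for your provider

    # Airbnb MCP server configuration; this could also live in a separate
    # JSON file and be loaded with MCPClient.from_config_file(...)
    config = {
        "mcpServers": {
            "airbnb": {
                "command": "npx",
                "args": ["-y", "@openbnb/mcp-server-airbnb"],
            }
        }
    }
    client = MCPClient.from_dict(config)

    llm = ChatOpenAI(model="gpt-4o")  # the Anthropic setup differs slightly

    # The agent binds the LLM to the MCP client and caps its tool-use steps
    agent = MCPAgent(llm=llm, client=client, max_steps=30)
    result = await agent.run("Find Airbnb listings with a pool and good ratings")
    print(result)

if __name__ == "__main__":
    asyncio.run(main())
```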

          Moreover, the framework supports HTTP connections and multi-server management, meaning you can drive several MCP servers from one project. The library is flexible and versatile, opening the door to a variety of applications, including autonomous systems like those showcased in previous AI LABS videos. If you need extra help, leaning on LLMs or coding assistants like Cursor can make things easier. A multi-server sketch follows.
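
A sketch of that multi-server setup, again assuming the mcp-use API; the use_server_manager flag name and the Playwright server package are taken from the docs the video references and should be treated as assumptions:

```python
from langchain_openai import ChatOpenAI
from mcp_use import MCPAgent, MCPClient

# Several servers defined in a single configuration
config = {
    "mcpServers": {
        "airbnb": {
            "command": "npx",
            "args": ["-y", "@openbnb/mcp-server-airbnb"],
        },
        "playwright": {
            "command": "npx",
            "args": ["@playwright/mcp@latest"],
        },
    }
}
client = MCPClient.from_dict(config)

# With use_server_manager=True the agent picks the right server per task
# instead of you routing each request to a specific server yourself.
agent = MCPAgent(
    llm=ChatOpenAI(model="gpt-4o"),
    client=client,
    use_server_manager=True,
)
```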

            Chapters

            • 00:00 - 00:30: Introduction and Purpose The chapter titled 'Introduction and Purpose' introduces a new library for communicating with MCP servers directly in code, using any LLM of choice. Unlike previous methods that required specific MCP clients such as Windsurf, Cursor, and Claude Desktop, this library operates through an agent and supports easy installation. While it offers advanced features, some coding knowledge is necessary to use it effectively.
            • 00:30 - 01:00: Installation and Setup The chapter titled 'Installation and Setup' provides a step-by-step guide to installing the Python-based library. It starts with ensuring that Python is installed on your system, followed by creating and activating a virtual environment. The chapter includes commands for both Windows and macOS, which can also be found in the video description or obtained via ChatGPT (a command sketch follows this chapter list).
            • 01:00 - 02:00: Library Usage & Code Explanation The chapter discusses the importance of using pip3 when working with Python 3 and emphasizes installing the right library for each model provider, with a focus on the LangChain packages for OpenAI and Anthropic. It advises checking the listed requirements for other providers like Groq and Llama, and guides you to open the terminal once installation is complete.
            • 02:00 - 03:00: Running and Testing the Code The chapter focuses on setting up and running the Python project. It begins by confirming that the environment is Python-based and that the necessary pip packages are installed. The next step is opening the project directory in Cursor with a single command, which launches the project directly in that editor. Once the project is open, the user creates a new file named .env; at this point only the virtual environment folder exists, so the other files and code must be added manually. The .env file needs just one line containing the API key for the provider in use.
            • 03:00 - 04:00: Advanced Usage and Features This chapter covers connecting Model Context Protocol (MCP) servers to language models, focusing on OpenAI. It walks through the top of the script, where the imports come from mcp_use, LangChain, and OpenAI, and where the environment variables are loaded. It then describes creating an MCP client from a configuration kept in a separate file, citing the Airbnb MCP configuration as the example.
            • 04:00 - 05:00: Using Documentation and Ingestible Formats This chapter covers defining the LLM to use, choosing a model, and creating the agent in the context of the Airbnb MCP server. The agent takes the LLM and the client, defines a maximum number of steps, passes the prompt to the LLM, prints the result, and returns it to the user. This basic example can be modified into more interesting applications, with no separate client needed.
            • 05:00 - 06:00: Multiple Server Management This chapter discusses building modular applications by binding an LLM to a Model Context Protocol (MCP) server, with fully autonomous WhatsApp agents given as an example. It describes a run in which an error occurs but results still come through: listings from the Airbnb MCP, filtered by the preferences added to the prompt.
            • 06:00 - 06:30: Final Thoughts and Call to Action This chapter discusses the preference-based selection, such as favoring listings with a pool and good ratings, and highlights the framework's flexibility for building a variety of agents. The source code is available on GitHub for anyone who wants to modify it, though Cursor lacks context on the framework unless you supply its documentation.
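
For reference, a typical version of the setup commands described in the installation chapter; the package name mcp-use is an assumption based on the library the video demonstrates:

```bash
# Create and activate a virtual environment
python3 -m venv venv
source venv/bin/activate           # Windows: venv\Scripts\activate

# Install the library; use pip3 if your system's Python is version 3
pip3 install mcp-use

# Install the LangChain package for your model provider
pip3 install langchain-openai     # or: pip3 install langchain-anthropic
```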

            Transcription

            • 00:00 - 00:30 I'm going to show you a library that lets you communicate with your MCP servers directly in your code. You can use it with any LLM you prefer. Previously, communicating with MCP servers required specific MCP clients. We already have clients like Windsurf, Cursor, and Claude Desktop. Now, you can use this new MCP client library. It works by using an agent and comes with some pretty cool features. It's also really easy to install. I'll show you how to set it up. If you're not familiar with code, keep in mind that this does require some coding knowledge to use
            • 00:30 - 01:00 properly. Even if you're new to it, that's not a big issue. I'll also show you how to vibe code with it. Let's get into the video. Let's look at the installation of the library. It's a Python-based library. So, the first step is to check whether Python is installed on your system. Then, you'll create a virtual environment. Once it's created, you need to activate it. The commands for both Windows and macOS are shown here, and I'll paste them in the description below as well. If you don't know how to code, you can also get these commands from ChatGPT. These are the commands to install the library. They're
            • 01:00 - 01:30 available on the GitHub repo, too. If you're using Python version 3, which you'll know from the Python version command mentioned earlier, then make sure to run everything using pip3, not pip, but pip3. If you're planning to use an OpenAI key or model to interact with your MCPs, you need to install LangChain OpenAI. For Anthropic, you need to install LangChain Anthropic. For other providers like Groq or Llama, you can find the required libraries listed at this link. So once everything is installed, open up your terminal. You
            • 01:30 - 02:00 can see that I'm in a Python environment. I've installed the pip packages. And now I'll show you how to move forward. First open this directory in Cursor using this command. This will launch your project directly in Cursor. Once it opens in Cursor, create a new file named .env. Your project will only have the virtual environment folder. At this point, you need to create the other files yourself and add the code manually. Create the .env file and paste the following line with your API key. You only need to paste the key for the
            • 02:00 - 02:30 provider you're using. I'm using OpenAI in this example. So, I pasted that key. Let me quickly explain the code and how it works. At the top, you can see the different imports from mcp_use, LangChain, and OpenAI. As we're using OpenAI for the LLM, we define a function, and load_dotenv() loads the environment variables from the .env file. Then we create an MCP client using the MCP configuration from a separate file. I've placed it here. You can see the Airbnb MCP configuration is right
            • 02:30 - 03:00 here. Next, we define the LLM we want to use. If you're using Anthropic, the setup will be a bit different. Then we choose our model and create an agent. This agent takes the LLM, the client we created, and defines the maximum number of steps it can take. It also gives a prompt to the LLM which is used on the MCP server. It then prints the result and gives it to us. This is a very basic example of using the Airbnb MCP. You can modify it however you like and build really interesting applications. You don't need a separate client anymore.
            • 03:00 - 03:30 You can bind an LLM to an MCP and create modular applications. If you've seen our WhatsApp MCP video, that same concept can be used here to make fully autonomous WhatsApp agents. Now, let me run it for you. The server has started and it's running. It looks like there was some kind of error, but we still got the output. We received the listings from the Airbnb MCP. It gave us the links because we added a feature that filters listings by
            • 03:30 - 04:00 our preferences, like having a pool and good ratings. It handpicked based on those conditions. This is a cool implementation. It works, and the possibilities for creating different agents are endless. The code you just saw is already in the GitHub repository, so there's no need to include it in the description. If you want to modify the code, you can either write it yourself or ask Cursor to do it. One issue you might run into is that Cursor doesn't have the context of this framework. To give it that context, scroll down to the
            • 04:00 - 04:30 features section and go to docs. Add a new doc and in the link field, go back to the GitHub repo and open the readme file. You don't need to provide the link to the entire repository. Just use the readme file since it contains the full documentation. Copy the link and paste it into the doc section. Cursor will read it, index it, and use it as context. To use it in code, type the at sign, go into docs, and select the mcp-use
            • 04:30 - 05:00 docs. It will reference that and generate code based on the framework properly. Another thing you can do is convert the repo into an LLM-ingestible format if you have any questions about it. To do that, replace 'hub' with 'ingest' in the URL, turning github.com into gitingest.com. This will open the repository in GitIngest, which converts the entire repo into readable text that you can use with any LLM. You can then ask questions about it if you're ever confused or need clarification. You've seen it in action
            • 05:00 - 05:30 and you can check the repo for other example use cases like using Playwright and Airbnb. I used the Airbnb one, but with OpenAI. The Blender MCP server can also be used. This framework also supports HTTP connections, which means you can connect to servers running on localhost (see the sketch after this transcription). It includes multi-server support too, allowing multiple servers to be defined in a single file. If you're working with multiple MCP servers, you can either specify which result should come from which server or handle it dynamically. By setting use
            • 05:30 - 06:00 server manager to true, the agent will intelligently choose the right MCP server. You can also control which tools it has access to. This is a solid framework and I'm already thinking of all the wild ways to build new applications with the MCP library. You should check it out too. I'm working on a few projects with it right now. If you don't fully understand it, I've already shown how you can use an LLM to make sense of everything. You can also ask ChatGPT or let Cursor write the code for you. If you like the video, consider
            • 06:00 - 06:30 donating through the link in the description and do subscribe. Thanks for watching.
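
As mentioned in the transcription, the framework can also connect to MCP servers over HTTP. A minimal sketch, assuming the url-style server entry from the mcp-use docs; the port and endpoint are illustrative placeholders for whatever your local server exposes:

```python
from langchain_openai import ChatOpenAI
from mcp_use import MCPAgent, MCPClient

# Connect to an MCP server already running on localhost over HTTP
# (replace the port and path with your own server's address).
config = {"mcpServers": {"local": {"url": "http://localhost:8931/sse"}}}
client = MCPClient.from_dict(config)

agent = MCPAgent(llm=ChatOpenAI(model="gpt-4o"), client=client, max_steps=30)
```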