In the previous lesson, you connected your chatbot to one server that you built. Now you'll update your chatbot so that it can connect to any server. You'll learn more about the reference servers developed by the Anthropic team and how you can download them. Let's get to it! So far, we've seen how to build MCP servers as well as clients and connect those on a one-to-one basis. What we want to start introducing now is the ability to not only build multiple clients that can work with multiple servers, but also to bring in the entire ecosystem of servers that exists out there. So I'm going to start by taking a look at some of the reference servers from Anthropic on our repository. Let's go take a look on GitHub at the reference servers that we have for the Model Context Protocol. As you look through all of the servers here, there's a massive list, so we're just going to start with the reference servers. These are ones that we have worked on and built at Anthropic. There are also many third-party servers and official integrations. Any data source that you can imagine talking to at this point probably has an MCP server. Instead of you having to download these servers and run them locally, we're also going to see how we can add the command necessary to run each server without much hassle. The servers that we're going to be using are the fetch server as well as the filesystem server. So let's take a look at the fetch server. What's so interesting is that if you look at the underlying source code for many of these servers, it's actually going to look pretty familiar to what you built before. We can see here that the fetch MCP server exposes tools and a prompt to us, and we can see what the installation looks like as well.
Since this server is written in Python, we're going to use uv to directly run a command called mcp-server-fetch, which will download what we need and establish the connection. So instead of uv run, we're going to be using uvx. The fetch server allows us to retrieve content from web pages and convert HTML to markdown so that LLMs can better consume that content. The second server we're going to look at is the filesystem server. As you can imagine, this gives us a way to access our file system: reading and writing files, searching for files, getting metadata, and so on. We can see here there are resources and tools exposed, quite a few different tools for reading and writing files. If you take a look at the source code, you can see that this one is not written in Python; it's in fact written in TypeScript, which means instead of uvx we'll be using a slightly different command. If we look at the installation instructions, the command necessary is npx with the -y flag, so that we don't need to press enter to confirm any installation prompts, followed by the package name, @modelcontextprotocol/server-filesystem. So similar to uvx, where we can download what we need and execute it right away, we'll be using npx from the npm package manager. We then specify any paths that we allow for reading and writing files. As you can see, each of these reference servers has a bit of configuration required: the name of the server, the command necessary, and so on. So what we're going to do is make some updates to our chatbot. Instead of hardcoding these server parameters, we're going to make a small JSON file that we can read from to figure out the commands needed to interact with our servers. We'll be using the filesystem server, the research server that we're building, as well as the fetch server.
And we'll see how we can put all three of those together to create very powerful prompts. To make this happen, we're going to have to change the code in our MCP chatbot. The reference servers stay the same, and our research server stays the same, but we have to update the code a bit for our MCP chatbot. There's a good amount here that is relatively low level, with plenty of opportunity for refactoring, so I'll walk you through what we have, and I welcome any changes you'd like to make as this scales. To get this to work, we're going to set up our own JSON file to configure how we want to connect to each of the individual servers. Here's what that's going to look like: we start with a bit of JSON to contain all of our servers, then specify the name of each server, the underlying command necessary, and any arguments required. For the research server, this will look relatively familiar. For the reference servers, since we're not downloading them and running them locally ourselves, we use commands like npx and uvx to run them immediately. You can find this file in your list of files for this lesson under the mcp_project folder. For the filesystem server, if you remember, we had to specify the paths we wanted access to, and here we're specifying a dot, which means the current directory we're in. So this server is not going to be able to read or write files or folders outside of the current directory. Now let's take a look at the code necessary for our MCP chatbot to not only connect to multiple servers with multiple clients, but also correctly read the JSON file for the server configuration. Let's see what we need to update in our MCP chatbot to handle these connections.
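The configuration file described above might look something like the sketch below. The top-level key and the research server's exact command are assumptions based on the description in this lesson; the uvx and npx invocations follow the reference servers' published installation instructions.

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "."]
    },
    "research": {
      "command": "uv",
      "args": ["run", "research_server.py"]
    },
    "fetch": {
      "command": "uvx",
      "args": ["mcp-server-fetch"]
    }
  }
}
```

Note the `"."` passed to the filesystem server: that is the allowed-paths argument, restricting reads and writes to the current directory.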
If you take a look at the code we have here for our MCP chatbot, there's quite a bit more happening under the hood, including some lower-level ideas that I don't want you to feel too intimidated by. The most important takeaway is to understand how tools like Claude Desktop, Claude.ai, Cursor, and Windsurf work under the hood when they set up multiple connections to multiple servers. What I'm going to do is start by adding a little bit more to my MCP chatbot. I'm going to maintain a list of all of the sessions I've connected to, as well as all of the tools and the particular session each tool belongs to. Again, this is not production ready; it's really just giving you a sense of how to get started, and the focus is to make sure we correctly map a tool to the session it came from. We have a type definition here, since our tools are a little more complex than before. We're going to have some similar code to connect to a server, except that since we have multiple context managers inside an asynchronous environment, we have to set up our connection a little differently. So we use an AsyncExitStack to manage our connections for reading and writing, as well as the entire connection to the session. Below, we see some pretty familiar code: we initialize a session, we list the tools, and we append those tools to our list of available tools. You can imagine that this function is going to be run multiple times, once for each of the servers we want to connect to. And that is exactly what we're doing down here: we read from our server config file, parse that JSON into a dictionary we can iterate over, and for each individual MCP server, connect to it. If you're familiar with asynchronous programming, you can see that this code is blocking, and you could refactor it to use asyncio.gather or similar.
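The AsyncExitStack pattern and the tool-to-session mapping can be sketched with just the standard library. This is not the chatbot's actual code: the `connect` context manager below is a dummy stand-in for opening a real MCP stdio transport and `ClientSession`, and the tool names are invented for illustration.

```python
import asyncio
from contextlib import AsyncExitStack, asynccontextmanager

# Dummy stand-in for connecting to one MCP server; the real chatbot
# would open a stdio transport and an mcp ClientSession here.
@asynccontextmanager
async def connect(server_name):
    session = {"server": server_name}
    try:
        yield session
    finally:
        pass  # real transport teardown would happen here

async def main():
    servers = ["filesystem", "research", "fetch"]
    tool_to_session = {}  # maps each tool name to the session that owns it
    async with AsyncExitStack() as stack:
        for name in servers:
            # enter_async_context keeps every connection open until the
            # stack itself exits, so all sessions share one lifetime
            session = await stack.enter_async_context(connect(name))
            # pretend each server lists one tool named after itself
            tool_to_session[f"{name}_tool"] = session
        # routing a tool call means looking up the owning session
        assert tool_to_session["fetch_tool"]["server"] == "fetch"
    return sorted(tool_to_session)

print(asyncio.run(main()))
```

The key design point is that all connections are registered on one stack, so a single `aclose()` (or leaving the `async with` block) tears everything down in reverse order, which is exactly what the chatbot's cleanup step relies on.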
But again, the focus here is understanding conceptually what's going on, and I welcome any refactors you'd like to do. Once we connect to all of these servers, we use some logic that looks pretty familiar as well: we get access to our model, we pass in any information coming in from a query, and then if there's a tool we need, we go find it and call that particular tool. The rest of this logic is very familiar. The chat loop is exactly what we had before, with one small note: when we need to close the connections we have, we do this using our context manager for the multiple different connections. Our main function has a little bit more in it, to allow us to connect to all of the servers we need and then start the chat loop. Once that's all done, we clean up any lingering connections to these servers. And just like before, we start this application by calling asyncio.run with our main function. So let's write this file and hop back to the terminal. In the terminal, I'll first cd into mcp_project, and I can see that I again have a .venv folder, so let's activate that virtual environment with source .venv/bin/activate. Then let's run our chatbot; I'll clear the screen so we can take this from the top, and type uv run mcp_chatbot.py. What we're doing here is connecting to multiple MCP servers by setting up multiple clients. We can see that we've connected to the filesystem server with the allowed directory set to the current directory, along with its particular set of tools, and we've connected to our research server as well as the fetch server. We have the same exact chat interface we had before.
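To make the asyncio.gather refactor mentioned above concrete, here is a minimal sketch. The `connect_to_server` coroutine is a dummy placeholder for the real per-server connection logic, with a short sleep standing in for connection latency.

```python
import asyncio

async def connect_to_server(name):
    # stand-in for the real connection logic for one server
    await asyncio.sleep(0.01)  # simulate connection latency
    return f"session:{name}"

async def connect_sequential(names):
    # what the lesson's loop effectively does: one server at a time,
    # each await blocking until the previous connection finishes
    return [await connect_to_server(n) for n in names]

async def connect_concurrent(names):
    # the refactor: all connections proceed concurrently;
    # gather preserves the input order in its result list
    return await asyncio.gather(*(connect_to_server(n) for n in names))

names = ["filesystem", "research", "fetch"]
print(asyncio.run(connect_concurrent(names)))
# → ['session:filesystem', 'session:research', 'session:fetch']
```

One caveat if you try this refactor on the real chatbot: entering async context managers concurrently while registering them on a shared AsyncExitStack needs care, since the stack itself is not designed for concurrent entry, so you may want each connection task to manage its own resources.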
So I'm going to paste in this prompt, where I ask it to fetch the content of the Model Context Protocol site, save the content to a file called mcp_summary.md, and then create a visual diagram that summarizes the content. What we're doing here is using a multitude of tools to fetch information and then summarize it. We're then going to have it draw a nice little diagram for us. So let's go take a look at what that looks like. We can see that it's saved to a file called mcp_summary.md, so in our file system, let's go take a look at that file. We've got this nice little diagram for the Model Context Protocol. This was done by fetching information from the website, summarizing that information, and turning it into a visualization. We're going to see an even prettier one when we start bringing in tools like Claude Desktop, but for now the UI is totally up to you; you can do whatever you want with this file right now. So we've seen how a couple of these servers can work together. Let's try bringing all three together. We'll say: fetch DeepLearning.AI, find an interesting term to search papers around, then summarize your findings and write them to a file called results.txt. We make use of the fetch tool here to visit a website and retrieve its content. Based on that content, the model finds an interesting term; in this case, we've got multi-concept pre-training. It then finds papers related to that term, takes that data, and writes it to a file. You might not find yourself using this exact combination of servers in many real-world use cases, but now your imagination can carry you. Any existing MCP server can be added with minimal configuration, and you can take the results of these different MCP servers to add all the context you need to connect models like Claude to the outside world.
We can see here we've got a really nice summary. Let's see what's been written. We got a very interesting result from our research: it seems that while MCP, the Model Context Protocol, is a very powerful tool, MCP is also an acronym for multi-concept pre-training. So it looks like the model got a little confused here. When in doubt, this is why prompt engineering is so important; we could even follow up with a clarification that we mean the Model Context Protocol and not other concepts. As always, if we want to leave this chat session, we can type quit. Now that we have multiple servers connecting to multiple clients, let's start adding a few other primitives: resources, for read-only data, and prompt templates, which let the server generate prompts the user can use so that they don't have to write prompts completely from scratch. I'll see you in the next lesson.