In this first lesson, we'll go through how MCP makes AI development less fragmented and how it standardizes connections between AI applications and external data sources. Let's go.

To answer the question why MCP, or why the Model Context Protocol: we like to say that models are only as good as the context provided to them. You can have an incredibly intelligent model at the frontier, but if it doesn't have the ability to connect to the outside world and pull in the data and context it needs, it's not as useful as it could be. The Model Context Protocol is an open-source protocol that standardizes how your large language model connects to and works with your tools and data sources. The idea here is not to reinvent the wheel in how we do things like tool use, but instead to standardize the way our AI applications connect with data sources. The same way we standardize how web applications communicate with back ends and other systems using REST, where we specify the protocol, statelessness, and so on, we're trying to achieve the same thing with the Model Context Protocol. Everything you're going to see with MCP can be done without MCP, but as we think about a world in which many different models communicate with many different data sources, and even with each other, we want to make sure we're speaking the same language. We want to standardize how our AI applications interact with external systems: instead of building the same integration over and over again for every combination of model and data source, we build once and use everywhere. The Model Context Protocol borrows a lot of its ideas from other protocols with similar goals. For example, LSP, the Language Server Protocol developed by Microsoft in 2016, standardizes how integrated development environments interact with language-specific tools.
When you create extensions for particular languages in particular development environments, you don't want to have to rewrite them for every one of those environments. So while MCP is quite novel in what it's trying to do, it stands on the shoulders of many other protocols and ideas around standardization.

Let's look at a quick demo where, with just a few lines of code, we can bring context into our AI application. On the left-hand side here, I'm using Claude Desktop, and I'm asking a question about retrieving some issues from a GitHub repository. On the right we can see this GitHub repository. And immediately, through natural language, I'm able to talk to this data source. This is the power of MCP. I have connected to an MCP server that provides the necessary data from GitHub, and I'm also connected to another MCP server for Asana, a popular project management tool. What I'm doing here is reading data from GitHub, and then asking to triage particular issues and assign tickets in Asana. So I am reading from one data source and writing to another. We can see in this interface that there's a human in the loop verifying the actions I want to take. With very little code, I'm now communicating with external data sources with ease. Because MCP works with any model provider and is completely open source, it allows for seamless integration across different models and different data sources. We can see here that I've created these tasks. Things are updating for me in the browser, and I can continue to use natural language to iterate on this task. I'm going to assign a task to an individual and watch it get updated. Through the use of a very intelligent model, in this case Claude 3.5 Sonnet, a few tools provided by MCP, and an environment to run this loop over and over again, we're actually looking at a very lightweight agent powering this application.
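To make the "speaking the same language" idea concrete: MCP exchanges messages as JSON-RPC 2.0, with standard methods such as `tools/list` (discover a server's tools) and `tools/call` (invoke one). Here's a minimal sketch of what those messages look like; the tool name `list_issues`, its arguments, and the request `id` values are hypothetical, but the overall message shape follows the MCP specification.

```python
import json

# A client asking a server which tools it offers sends a "tools/list"
# request; invoking one of those tools is a "tools/call" request.
list_tools_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

call_tool_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "list_issues",  # hypothetical tool exposed by a GitHub server
        "arguments": {"repo": "octocat/hello-world"},  # hypothetical arguments
    },
}

# On the wire, each message is serialized as JSON.
wire = json.dumps(call_tool_request)
print(wire)
```

Because every MCP client and server agrees on these message shapes, a client like Claude Desktop can talk to a GitHub server and an Asana server in exactly the same way.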
Like I mentioned, everything you can do with MCP, you can do without it. But here's what that starts to look like. As you build these integrations, where do you store your tools? Where do you store your custom prompts? Where do you store the data access layer and authentication logic? We found ourselves, like many different teams, reinventing the wheel over and over again: many different AI applications talking to a similar data source, each written in a different way. With MCP, not only is this model agnostic, it's completely open source, so these tools and data connectivity are provided to you by the open-source community, or you can build them yourself.

With MCP, we shift the burden of responsibility and separate our concerns in a really clean fashion. We build or use MCP-compatible applications and connect to many different servers for the particular kind of data access we need. We can have servers for data stores, for customer relationship management tools like HubSpot or Salesforce, even servers for things like version control. The aim here is to use natural language to talk to these data stores without having to write all that logic ourselves. The beauty of an MCP server is that it's also reusable across many different applications. As we're going to see, there are reference servers we can use, or servers we can build internally and share among the many different applications we build. We might build or use an MCP server for Google Drive, and whatever application we're building, whether an AI assistant, an agent, or a desktop application, as long as it is MCP compatible, we can go and use that server. You can let your imagination carry you with all the different data access you can bring into your application, with minimal code and effort. With MCP, there are lots of wins for many different audiences.
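The "build once, use everywhere" idea rests on servers describing their tools with machine-readable schemas that any MCP-compatible client can discover. This is a toy sketch of that concept, not the official MCP SDK: the `tool` decorator, the `create_task` tool, and its schema are all made up for illustration.

```python
from typing import Any, Callable

# Registry of named tools, each with a JSON-Schema-style description
# that a client could discover (via tools/list) and invoke (via tools/call).
TOOLS: dict[str, dict[str, Any]] = {}

def tool(name: str, description: str, input_schema: dict[str, Any]):
    """Register a function as a callable tool with a machine-readable schema."""
    def decorator(fn: Callable[..., Any]) -> Callable[..., Any]:
        TOOLS[name] = {
            "description": description,
            "inputSchema": input_schema,
            "handler": fn,
        }
        return fn
    return decorator

@tool(
    name="create_task",  # hypothetical Asana-like tool
    description="Create a task in a project",
    input_schema={
        "type": "object",
        "properties": {"project": {"type": "string"}, "title": {"type": "string"}},
        "required": ["project", "title"],
    },
)
def create_task(project: str, title: str) -> dict[str, str]:
    # A real server would call the project-management API here.
    return {"project": project, "title": title, "status": "created"}

def call_tool(name: str, arguments: dict[str, Any]) -> Any:
    """Dispatch a tool call by name -- what a server does for a tools/call request."""
    return TOOLS[name]["handler"](**arguments)

result = call_tool("create_task", {"project": "Triage", "title": "Fix login bug"})
print(result["status"])  # -> created
```

Because the schemas live with the server, every client that connects gets the same tool descriptions for free, which is exactly the separation of concerns described above.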
For application developers, you can connect to an MCP server with very little work. For API developers, you can build the MCP server once and adopt it everywhere. For users of AI applications, the idea behind MCP can be abstracted away quite a bit, so that you can bring a URL for an MCP server and simply have the data access you need brought into your application. For enterprises and large organizations, think of the benefits of separating your concerns and building standalone integrations that different teams can use. As you might be aware, the MCP ecosystem is growing fast. We're seeing development not only from large companies, but also from startups at the frontier. And we're seeing many, many different servers being built, both privately and in the open-source community. The SDKs, or software development kits, that power MCP are written in many different languages and developed in the open-source community, led by many different companies and AI developers. We're seeing MCP-compatible applications across web applications, desktop applications, and even agentic products.

Before we wrap up, let's answer a couple of common questions you might have about MCP. These MCP servers we talk about, from GitHub to Asana to Google Drive: who actually writes them? Well, anyone can. You yourself as a developer can build them, or you can use community-adopted ones. In the next few lessons, we'll see how MCP servers are built, and we're going to build quite a few of our own. You might think of MCP servers as very similar to working with APIs, and in fact you're not totally off. You can think of an MCP server as a kind of gateway or wrapper on top of an API: if you don't want to bother calling the API directly, you can use natural language and let the MCP server handle that for you. MCP servers support tool use, but that's just one part of what MCP servers can do.
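The "wrapper on top of an API" framing can be sketched very simply: the model supplies structured arguments, and the tool translates them into the underlying HTTP request. The endpoint below matches GitHub's public REST API for listing issues; the request is only constructed here, not actually sent, and the function name is an illustrative invention.

```python
from urllib.parse import urlencode

def list_issues_request(owner: str, repo: str, state: str = "open") -> dict:
    """Build the HTTP request a GitHub-backed tool would send on the model's behalf."""
    query = urlencode({"state": state})
    return {
        "method": "GET",
        "url": f"https://api.github.com/repos/{owner}/{repo}/issues?{query}",
        "headers": {"Accept": "application/vnd.github+json"},
    }

req = list_issues_request("octocat", "hello-world")
print(req["url"])
```

The user never sees this request; they just ask in natural language, the model picks the tool and fills in `owner`, `repo`, and `state`, and the server does the API call.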
The servers make functions and schemas available to you, but as we're going to see in the next lesson, there's so much more that MCP provides. So we've answered the question: why the Model Context Protocol? We've seen a really nice demo of what it can do with very little work. In the next lesson, we're going to start to explore how MCP works a bit under the hood, introduce the ideas of hosts, clients, and servers, and talk a little bit about some of the underlying primitives in the protocol, like resources, tools, and prompts. See you then.