In this course, you've learned about the core concepts of MCP. You've built a server that exposes tools, resources, and prompts; developed a chatbot that can connect to multiple servers; used Claude Desktop to build more sophisticated applications; and deployed your own remote server. Congratulations! MCP is constantly evolving. In this last lesson, you'll learn about other features of MCP and some of the exciting things coming soon to the protocol. I'll see you there.

There's a lot you've learned about the Model Context Protocol. You've learned about hosts, clients, and servers, and about tools, resources, and prompts. And then you've gotten a chance to write some code to power larger applications using all of these ideas. But there's still a bit that we have not yet covered about the Model Context Protocol. Much of this is in active development, and you can always examine the latest in the specification, on GitHub, and in the discussions that you can find there.

The first piece we haven't covered yet is authentication in the Model Context Protocol. In the March specification update, OAuth 2.1 was added as the means of authentication with remote servers. What this allows is for clients and servers to authenticate and send authenticated requests to data sources. You can imagine that many different servers need to access data that requires some form of authentication. This requires the client making a request to the server, and the server then requiring a user to authenticate. Once the authentication process completes successfully, the client and server can exchange a token, and the client can make authenticated requests to the server and then to the data source. This part of the protocol is in active development, and there are always newer features and security pieces being added, but authentication will primarily be done with the OAuth 2.1 protocol.
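To make the token exchange described above concrete, here is a minimal sketch of the flow in Python. The function names, the token shape, and the authorization code are all illustrative stand-ins, not the real MCP SDK API; a real OAuth 2.1 client would also send a PKCE verifier to the server's token endpoint over HTTPS.

```python
# Sketch of the OAuth 2.1-style flow: exchange an authorization code for a
# token, then attach that token to every subsequent request to the server.
# All names here are illustrative, not a real server's or SDK's API.

from dataclasses import dataclass


@dataclass
class Token:
    access_token: str
    token_type: str = "Bearer"


def exchange_code_for_token(auth_code: str) -> Token:
    """Stand-in for the POST to the server's token endpoint.
    A real OAuth 2.1 client would also include a PKCE code verifier."""
    return Token(access_token=f"token-for-{auth_code}")


def authenticated_request_headers(token: Token) -> dict:
    """Each MCP request then carries the token, so the server can make
    authorized calls to the underlying data source on the user's behalf."""
    return {"Authorization": f"{token.token_type} {token.access_token}"}


token = exchange_code_for_token("abc123")
print(authenticated_request_headers(token))
# → {'Authorization': 'Bearer token-for-abc123'}
```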
To highlight that in some depth: this is an optional feature of the Model Context Protocol, but it is highly recommended for remote servers. With the standard I/O (stdio) transport, we use environment variables and don't need this kind of authentication. This is built on established standards that you can take a look at with the links below.

So while we've explored the primitives that can be exposed by the server (tools, resources, and prompts), we also have primitives that clients can expose. These include roots and sampling. Let's dive in. A root is a URI that a client suggests a server should operate in. The idea is centered around only looking in specific folders for files that you might need. When the client connects to a server, it can declare the roots that the server should work with. This is most useful for filesystem paths, but a root can be any valid URI, including HTTP URLs. Roots allow for security limitations, keep the server focused on a relevant file path or location, and have some versatility baked in, since, again, they work for file paths but also for any valid URI. We're slowly starting to see more and more clients adopt this primitive, and it's an important one to keep track of as the protocol evolves.

The last primitive that we're going to cover is sampling. Sampling allows servers to request inference from a large language model. It's kind of like the other side of the communication: instead of the client talking to the large language model, the server can talk back to the client and request inference. An example here might be a situation where your users report that a website is slow for some reason. Your MCP server can then collect server logs, performance metrics, and error logs, and communicate with a variety of data sources to see what's going on.
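As a sketch of what declaring roots looks like in practice: in the MCP spec, a client answers the server's roots/list request with a list of objects, each carrying a "uri" and an optional "name". The specific paths below are made up, and the boundary check is a simplified illustration of how a server might keep itself inside the declared roots.

```python
# Sketch of a client's roots/list response and a simple server-side check
# that stays within the declared roots. URIs here are invented examples.

roots_response = {
    "roots": [
        {"uri": "file:///home/user/projects/frontend", "name": "Frontend repo"},
        {"uri": "https://api.example.com/v1", "name": "Example API"},
    ]
}


def is_within_roots(uri: str, roots: list) -> bool:
    """Only operate on resources that fall under a declared root.
    A real server would normalize paths before comparing."""
    return any(uri.startswith(root["uri"]) for root in roots)


print(is_within_roots("file:///home/user/projects/frontend/src/app.ts",
                      roots_response["roots"]))   # → True
print(is_within_roots("file:///etc/passwd",
                      roots_response["roots"]))   # → False
```

The prefix check is deliberately naive; it's only meant to show the intent of roots, which is scoping what the server touches, not a production-grade sandbox.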
Instead of the server handing all of that back to the client, putting everything in the context window, or risking any kind of breach of security between the server and the client, the server can talk directly to the large language model and ask it to diagnose the performance issues. The large language model analyzes the patterns and returns its findings, and then the server can generate the steps to make the website a bit less slow. When there are concerns from a security standpoint about breaching boundaries, or when you don't want all that data coming back to be put into context, sampling, and creating sampling loops, is a very powerful way for servers to request inference and switch the direction of communication from what we've seen before.

This is also quite powerful as we explore agentic capabilities with the Model Context Protocol. As we start to move toward a world where more models are talking to different data sources, and we're giving models more autonomy to call different tools and go off on their own, we believe MCP will be a foundational protocol for agents. As we start to think about how MCP can be used with agentic capabilities, you can imagine a scenario where a user and a large language model need to access a variety of MCP servers. What's so powerful about the Model Context Protocol is that it has a composable and somewhat recursive nature, where clients can be servers and servers can be clients. This allows us to start creating an architecture where we take advantage of the ability for clients to communicate with servers, but also for servers to request the data they need through sampling back to a client. What we've set up here is the idea of a multi-agent architecture, where the application and a large language model communicate with an agent.
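On the wire, the sampling request described above is a JSON-RPC message from server to client. The method name, sampling/createMessage, comes from the MCP spec; the prompt text and log excerpt below are invented for illustration, and a real server would build this through its SDK rather than by hand.

```python
# Sketch of the JSON-RPC request a server sends back to the client when it
# wants model inference via sampling. The log excerpt is an invented example.

import json


def build_sampling_request(request_id: int, log_excerpt: str) -> str:
    """Assemble a sampling/createMessage request asking the client's model
    to diagnose a slowdown, without the raw logs ever entering the client's
    conversation context."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "sampling/createMessage",
        "params": {
            "messages": [{
                "role": "user",
                "content": {
                    "type": "text",
                    "text": f"Diagnose this slowdown:\n{log_excerpt}",
                },
            }],
            "maxTokens": 500,
        },
    })


print(build_sampling_request(1, "p95 latency 4.2s on /checkout"))
```

Note that the client stays in control: it can review, approve, or reject the server's request before any inference happens.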
This agent happens to be both an MCP client and a server, so it can serve data back to the application, but it can also connect to other clients and servers through the Model Context Protocol. You can imagine that we have agents for analysis, for coding, and for research that also happen to be MCP servers, and if they need to connect to other servers, they can act as clients as well. Through this composable nature, we can start to think about architectures that allow for multiple agents all speaking the same language with the same protocol.

The next large piece on the roadmap for the Model Context Protocol is the idea of a unified registry. The purpose of this is to standardize the way we think about discovering servers themselves. As we've seen before, there's a lot of excitement in the open source community, and there are many different servers for data providers. Tools like Google Drive, GitHub, and so on may have dozens of MCP servers, but just as with packages on npm or PyPI, there's an opportunity for malicious code to exist inside these servers. So the registry API serves the purpose of discovering servers, centralizing where these servers live, and verifying that these servers have been vetted by the community and by the companies themselves. It also allows for versioning of particular MCP servers, so you can lock in dependencies just like you would in your application. What also gets exciting is the ability for MCP servers to let agents discover them on their own. You can imagine a scenario where a user needs to fix a bug based on something in some logs. The agent then searches the registry API for the official MCP server, installs it, queries it, and suggests the fix. In this use case, instead of requiring the application to be connected to a variety of servers from the start, we can start to build applications where MCP servers are dynamically discovered and connected to.
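Since the registry API is still being designed, the catalog shape and search function below are assumptions, not the real API; they're only meant to sketch the discovery-plus-trust idea: find a server by name, prefer verified entries, and pin its version like any other dependency.

```python
# Hypothetical sketch of an agent querying a registry for a trusted server.
# The catalog fields and lookup logic are invented; the real registry API
# is still under discussion in the MCP community.

catalog = [
    {"name": "github-mcp", "version": "1.2.0", "verified": True},
    {"name": "github-mcp-unofficial", "version": "0.1.0", "verified": False},
]


def find_server(catalog, name_fragment, require_verified=True):
    """Return the first matching server, skipping unverified entries by
    default so the agent avoids potentially malicious lookalikes."""
    matches = [
        s for s in catalog
        if name_fragment in s["name"] and (s["verified"] or not require_verified)
    ]
    return matches[0] if matches else None


server = find_server(catalog, "github")
print(server["name"], server["version"])  # → github-mcp 1.2.0
```

The point of the verified flag is exactly the npm/PyPI lesson from the paragraph above: discovery only helps if the registry also tells you which servers can be trusted.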
As we think about layering this on with authentication, we can imagine that a user has a request that requires a server to be discovered. Similar to other protocols, like OAuth and the agent-to-agent protocol that Google recently announced, the idea of putting a JSON file in a well-known folder is something that's been done before, except here, in this MCP JSON file, we specify the endpoint of the server to connect to, the capabilities or primitives that it exposes, and the authentication that's required. So a user might ask how to manage their store on Shopify. The agent or AI application will check whether Shopify has a well-known MCP JSON file, and if it does, it will figure out what endpoint to connect to and what authentication is required. Once the user authenticates, the agent can perform the necessary action. Through a registry API, we can allow for this idea of dynamic discovery, and by layering on OAuth 2, we can ensure that these connections are secure.

As you might see in the suggestions and in the discussions, there's a lot more coming to the protocol. As more and more clients support streamable HTTP, the aim is to achieve a smooth transition between stateful and stateless capabilities. As remote MCP servers continue to be developed, it's important to expand the ecosystem to support even more of them. And as you can imagine, when multiple MCP servers are being used, it's very possible for tools to have naming conflicts. You can imagine servers with generic tool names like fetch_users or fetch_entities, where the model might get confused about what needs to be fetched, so it's important to think about preventing collisions and creating logical groups of servers or tools. We also spoke a bit about sampling, or proactively requesting context; there's a lot of work being put into the protocol and its surrounding conversations to make primitives like sampling much more widely adopted.
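A well-known MCP file along the lines described above might look like the sketch below. Since this is a roadmap idea rather than a finalized schema, every field name here is illustrative, and the Shopify-style URL is invented; the point is simply that the file answers three questions: where to connect, what the server exposes, and how to authenticate.

```json
{
  "endpoint": "https://shopify.example/mcp",
  "capabilities": ["tools", "resources", "prompts"],
  "authentication": {
    "type": "oauth2",
    "authorization_url": "https://shopify.example/oauth/authorize",
    "token_url": "https://shopify.example/oauth/token"
  }
}
```

An agent would fetch this from a well-known path on the provider's domain, then run the OAuth flow it describes before making any authenticated MCP requests.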
And finally, while OAuth 2 is relatively new to the specification, there's still quite a bit more to think about with regard to authentication and authorization at scale. In just a short amount of time, you've seen so much about the Model Context Protocol. You've learned conceptually about the primitives; you've built servers, clients, and hosts; and you've seen how to deploy remote MCP servers. There's still so much more to be discovered, so I encourage all of you to take a look at the discussions and the conversations, and keep building and researching as much as you can. Thank you so much for joining me on this journey, and I can't wait to see what you build with MCP.
MCP: Build Rich-Context AI Apps with Anthropic