Fantastic progress so far. You're almost there. We're close to completing this course, and you have learned so much, from designing to developing to deploying AI agents, and I'm very excited to see what you're going to build in the future now that you have learned all this. So let's explore a few different ways that AI agents are reshaping that future and how you can be part of it as well. Now, I want to make sure we spend some time talking about the future of AI agents. What is it actually going to look like six months or a year from now? We don't have as much control over that as we'd like to believe, but there are definitely some patterns emerging that hint at the direction things are going. The first is this idea of an agent management platform that has been coming up. That platform will take different shapes and forms depending on the stack that the users and the companies are choosing, but organizations definitely need a management layer to orchestrate agent adoption across the company. These platforms can serve as a single source of control, not only for reusable use cases, but for centralized monitoring and securing as well. And these agent management platforms will provide three core capabilities: the ability to build and integrate these use cases, so getting started extremely fast; the ability to build trust in these use cases, so observing and optimizing them as they go; and the ability to deliver value, basically managing and scaling these use cases as they get more and more traction. This orchestration layer sits on top of running agents and gives companies control over what happens, how it happens, and when it happens. So this is definitely one of the patterns we are seeing as people talk about adopting AI agents.
Now, if you zoom in on those specific capabilities, you will see that they break down into smaller, specific features, and these features are many to name. On the orchestration side, you're going to see planning, reasoning, memory, guardrails, and knowledge. When you're talking about observing, you'll see all the traces, training, and testing that we have talked about, including event-based automations. Then you have the build-and-integrate piece, where you might want to build with code or with no code, like what we did in Studio, and you might want to use local tools, other APIs, or even external triggers. And on the management and scale side, you want to make sure that you can deploy these things extremely fast, that you can invite your team members over, that you can control permissions, and that you actually have some monitoring behind it all. Some projects and companies address the entire workflow end-to-end, while others focus on specific capabilities. But the point is, there is an orchestration layer that encompasses the entire process here. Traditional software engineering patterns still apply, as we have talked about, so from observing and optimizing to building, you're going to find a lot of common ground. If you look across all these pieces, there are definitely patterns you can recognize between traditional software engineering and agent engineering, like the observation layers being kept very separate from the building and integration layers. So there is a separation of concerns, and this is just one example; there are many others in here.
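To make that separation of concerns concrete, here's a minimal sketch in Python. This is not CrewAI's API; the names `traced`, `plan`, and `execute` are hypothetical, and the point is only that the observation layer collects traces without the building layer knowing anything about it:

```python
import time
from functools import wraps

# Observation layer: collects trace events, separate from agent logic.
TRACES = []

def traced(fn):
    """Record each step's name, duration, and result as a trace event."""
    @wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.time()
        result = fn(*args, **kwargs)
        TRACES.append({
            "step": fn.__name__,
            "seconds": round(time.time() - start, 4),
            "result": result,
        })
        return result
    return wrapper

# Building layer: the agent steps know nothing about tracing.
@traced
def plan(goal):
    return f"plan for: {goal}"

@traced
def execute(plan_text):
    return f"executed {plan_text}"

execute(plan("summarize report"))
print([t["step"] for t in TRACES])  # → ['plan', 'execute']
```

Because tracing lives in its own decorator, you could swap it for a real observability backend without touching the agent steps themselves, which is exactly the kind of layering these platforms encourage.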
So what we're seeing is that as the world evolves, a new tech stack is coming together, and this stack extends from data management all the way to the large language models, the orchestration, the observability, the authentication, and the connectors, culminating in the GenAI applications, or what we call GenAI apps. At the outermost layer, users interact with agents through interfaces that can be conversational, like ChatGPT, or even traditional button-based designs. Three themes, though, consistently emerge from user feedback, and we talked about those before, but I really want to drive them home. Across the entire stack, there are a few frequently raised concerns, and one of them is something we mentioned briefly in earlier lessons: customers wanting to avoid vendor lock-in. That concern traces back to cloud migrations, where companies became dependent on managed services and found themselves locked in as costs increased, struggling to move workloads to alternatives like Kubernetes, for example. Leaders who have experienced those challenges approach GenAI and AI agent adoption much more cautiously. So companies do want flexibility, and it is not only about vendor lock-in, which is definitely a problem when you think about hyperscalers or different deployment approaches, but also about the ability to use the right tool for the job. Because you can use different LLMs for different agents, you get an extra layer of optimization, and that also opens room for you to use open-source models as well, or even fine-tunes. But the idea here is that organizations want the freedom to run their workloads anywhere: on their own servers, on-premises, or on whatever hyperscaler they're using.
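One lightweight way to picture "different LLMs for different agents" is a small model registry that maps each agent to a model identifier. This is a hypothetical sketch, not CrewAI's API; the registry name, the agent names, and the model strings are all illustrative, and in practice each framework has its own way of wiring a model to an agent:

```python
# Hypothetical registry: route each agent to the model that fits its job,
# so swapping a provider becomes a one-line config change, not a rewrite.
MODEL_REGISTRY = {
    "researcher": "openai/gpt-4o",         # strong general reasoning
    "summarizer": "ollama/llama3.1:8b",    # cheap, local, privacy-friendly
    "classifier": "my-org/finetuned-model" # illustrative in-house fine-tune
}

def model_for(agent_name: str, default: str = "openai/gpt-4o") -> str:
    """Return the configured model id for an agent, with a fallback."""
    return MODEL_REGISTRY.get(agent_name, default)

print(model_for("summarizer"))  # → ollama/llama3.1:8b
print(model_for("unknown"))     # → openai/gpt-4o
```

Keeping the mapping in one place is what gives you that extra optimization layer: you can move a single agent to an open-source model or a fine-tune without touching the rest of the system.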
And companies also want to use many different models, not only OpenAI or Claude, but to pick and choose from whatever vendors they have procured. So at CrewAI, our framework and platform was thought out from the ground up to let you tap into this interoperability, not only on the LLMs, but even on the crews themselves. Even with the no-code experience that we saw before, you're not locked in, because you can download the code at any time for anything you generate on the user interface and take it with you, because that's your IP. Everything you create remains your intellectual property, which addresses many of these concerns about platform dependency. So this is something we should keep top of mind. Again, choosing the right model can and will have a major impact. And since we're talking about models, it's important for us to understand when to use open-source models and when to use closed-source models, and some of the benefits of each. Companies usually optimize either for open or closed-source models, depending on their needs. Running models yourself has become significantly easier, especially with tools like vLLM or TensorRT-LLM, among others. And that means you can take some of these amazing models being produced all over the globe and host them yourself, or even fine-tune them to make them better. Closed models like GPT-4o and Claude Sonnet 4.5 definitely come with batteries included and work out of the box, which gives them an edge. But open models have actually caught up in capabilities, with many companies now open-sourcing their own models. And you can not only tap into those models, but actually fine-tune them for specific application domains, making them better agents for you and your use cases.
You can also run them locally for privacy, making sure that you stay compliant with whatever governance policies you have. And you can drastically reduce usage costs and avoid the rate limits you might encounter with external providers as they try to serve the entire globe. Now, if you look at the closed-source models, you will find that they offer more advanced reasoning models for complex tasks, and they provide a lot of safety and robustness features for predictable behavior, which allows for easy integration through managed services. There's no single right choice between open and closed models. However, you should be wary about running your agents on smaller models. Models smaller than 14 or 20 billion parameters typically don't work when you try to scale them for agentic behavior; they just cannot adhere to the instructions because they're not as capable as bigger ones. So agents usually require more powerful models to perform effectively. That said, as new models come out, they are usually more advanced, and you're now seeing an entire new breed of models that are smaller but extremely powerful as well. For example, GPT-OSS 20B can definitely be used for some agentic behavior. You will find better performance on bigger models, though. Now, I want to focus on fine-tuning for a second. Fine-tuning has been discussed extensively over the past few years, but most companies have not adopted it yet. And this is unfortunate, because fine-tuning offers significant potential for cost savings and performance improvements, particularly in speed. The barrier to adoption is high, though. Companies without AI training experience find it difficult to wrap their heads around fine-tuning models confidently. As the barrier to entry decreases and fine-tuning becomes more accessible for agents, adoption will likely increase.
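The guidance in this section, privacy needs, the roughly 14-to-20-billion-parameter floor for agentic work, and fine-tunes for narrow domains, can be sketched as a simple selection rule. This is a hypothetical illustration, assuming a few made-up model identifiers; it is not a recommendation of specific models:

```python
def pick_model(needs_privacy: bool, complex_reasoning: bool,
               has_domain_finetune: bool = False) -> str:
    """Hypothetical model-selection rule; model names are illustrative."""
    if has_domain_finetune:
        # A fine-tuned smaller model: cheaper and faster inside its niche.
        return "self-hosted/my-finetuned-20b"
    if needs_privacy:
        # Privacy-sensitive work stays on a self-hosted open model,
        # kept above the ~14-20B floor needed for agentic behavior.
        return "self-hosted/gpt-oss-20b"
    if complex_reasoning:
        # Closed models still lead on advanced reasoning, fully managed.
        return "anthropic/claude-sonnet-4.5"
    return "openai/gpt-4o"  # batteries-included managed default

print(pick_model(needs_privacy=True, complex_reasoning=True))
# → self-hosted/gpt-oss-20b
```

Real selection logic would weigh cost, latency, and rate limits too, but even a rule this small shows why there's no single right choice: the answer depends on the constraints of each use case.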
And companies do want the benefits that come from fine-tuning. For example, you save money because you can run smaller models, and they run faster because they're more efficient. So companies building these AI agent use cases actually want the results from fine-tuning, but they struggle with the execution. As friction decreases, though, more companies will definitely pursue fine-tuning, and fine-tuning models to run agents is a very effective approach to get better results as well. Looking ahead, there are two trends converging in AI agent development, and this is what we're hearing from the cutting edge of agent development: self-improvement and long-running agents. And this is very important. For self-improving agents, we not only want them to acquire memory over time the way they do right now, but we want them to keep getting even better at that, and to do it more effectively. We also want these agents to keep self-evaluating their performance and adjusting their behavior accordingly, improving continuously. The ability to do that across hundreds of thousands of executions is something people are aiming to get better and better at. This self-improvement capability is built into CrewAI, and we are still investing heavily in how to enhance it further. Now, long-running agents are something else companies are thinking about: the idea of agents that can operate for extended periods, sometimes even hours of work, without humans interacting with them, and still achieve an amazing outcome. The duration depends on your configuration, the number of agents, and your system setup, but there is definitely a sentiment that the industry is actively optimizing for self-improving and long-running agents and trying to crack the code on doing them effectively. So if you think about yesterday's agents, you will find that they wait idly for instructions.
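Before we compare those generations of agents, here's a tiny sketch of the self-evaluation loop described above: generate, score your own output, and retry until it's good enough. Everything here is a stand-in; in a real system `draft` and `self_evaluate` would be LLM calls, and the scoring formula below exists only to make the example runnable:

```python
def draft(attempt: int) -> str:
    """Stand-in for an agent producing output on a given attempt."""
    return f"answer v{attempt}"

def self_evaluate(output: str) -> float:
    """Stand-in critic scoring output between 0 and 1.
    In practice this would be another model judging the result."""
    version = int(output.rsplit("v", 1)[1])
    return min(1.0, 0.4 + 0.3 * version)  # toy rule: later drafts score higher

def self_improving_run(threshold: float = 0.9, max_attempts: int = 5) -> str:
    """Generate, self-evaluate, and retry until the score clears the bar."""
    best, best_score = None, -1.0
    for attempt in range(1, max_attempts + 1):
        output = draft(attempt)
        score = self_evaluate(output)
        if score > best_score:
            best, best_score = output, score
        if score >= threshold:
            break  # good enough: stop early instead of burning attempts
    return best

print(self_improving_run())  # → answer v2
```

The loop itself is the point: the agent judges its own work and adjusts, and doing that reliably across hundreds of thousands of executions is what the industry is chasing.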
Yesterday's agents execute very short-horizon tasks, maybe just a few minutes, and they forget about their tasks after execution. Tomorrow's agents, on the other hand, are autonomously triggered by events, they execute over a much longer horizon, sometimes hours, and they're great at storing all the insights from the runtime in a way that lets them self-improve through reflection over time. And that is a major driver of how the industry is trying to get to the next level of AI agents. Now, as we're wrapping things up, I do want to come back to one very important principle. Even after hearing about all these features, if there's one thing you have to take away from this course, it's that as you build agentic automations, you should not chase automation alone. You should build reliability into everything you create, and that is what makes your AI agents production ready. When you design, build, and deploy your agents with reliability in mind, you can achieve remarkable results. So coming up next, we're going to have a graded lab and a graded quiz, and then I will see you in our final video, and I'm very excited about that one. Hopefully you have a great time with this lesson. I know I have, and I will see you there in a second.