Very exciting topic now. We're going to talk about CrewAI flows and agents. And this is important because when you go into the wild, the actual use cases usually live on a spectrum. And we're going to talk about that in a second. But this spectrum goes from less agency, where you want more control, to more agency, where you want the agents to figure things out as they go. The important thing about flows is that they are a very thin, low-level layer that gives you all the control you need to decide what happens in what order, and then you can opt in to however much agency you want. This is a very exciting topic, and it's going to be a huge unlock for many use cases, especially the ones that require more precision. So let's look into that right now. Up to this point, we have been talking a lot about agents themselves, but there are other mental models for agentic systems. Generally, AI agents are understood through three main mental models: graphs, events, and agents. Agents have been very exciting for us, and we have seen how powerful they can be. Another mental model for agentic systems that has been picking up is the idea of graphs, which works with more traditional graph architectures, like nodes and edges. But adoption there has been a little slower, and friction a little higher, since there are many concepts in graphs that are harder to grasp, and not many people get exposed to them on a day-to-day basis. And then there is the events mental model, which is gaining a lot of popularity because it resembles the familiar pub/sub patterns common on the web, and most people have been developing web apps up to this point. So it makes it a little easier for most people to grasp these concepts and get up and running and productive.
Independently of the mental model you are using, they all share some patterns and some common ground: using LLMs, deciding when to use them, and trying to be very smart about what kind of context you are adding into them. Given these trends, most people, and CrewAI in particular, are focusing on the events and agents models when implementing agentic systems. We have talked a lot about agents and how you can use them in crews, and that has been very exciting. Now I want to focus a little more on the events-based structure, how that mental model applies, and how you can use it not only stand-alone, but with agents and with crews, and how powerful that can be. We're seeing millions of flow executions a day using these combinations of CrewAI flows with agents. So let's dive into that. Flows are a very modular orchestration layer. Think about them as the backbone of what you want your agents and your crews to be. What that means is that they allow you to define, at a very low level, everything that you want to happen and in what order. And there are a few different pieces that you can bring into your flows. You can bring LLMs, you can bring regular Python code, you can bring a single agent using memory and guardrails, for example, or you can bring an entire crew. So at the end of the day, you can bring the entire set of things we have learned up to this point inside the flow, and the flow will provide the backbone and the structure to define what is going to be executed and in what order. You can see how powerful that can be, especially when you want a lot of control as these flows get more complex. So a good example of a flow could be something like this. You might have initial code that gets executed, and this code might pull data from somewhere or check a specific behavior. This is just regular Python.
But then from that code, you're going to have it do two things in parallel. You might want that information to flow into a single LLM call. And this LLM can be doing anything: filtering out information, extracting context, or deciding whether it has all the information it needs or not. Meanwhile, we can also have an agent working alone. This single agent will do something with that information. It might have memory, it might have guardrails, it might be producing some sort of output that you want. From that parallel execution, you can then go back into one single function, and that's where you're going to have more code happening. This code will do something with that information, and then it might trigger two different things. Maybe it logs something into a specific database so that you can monitor it later, and kicks off an entire crew as a next step. So you can see how, through code and LLMs and agents and crews, this gives you a lot of control. In this example, you can see how flows become this automated control layer, the backbone that defines what happens throughout the execution. Again, it's a very low-level orchestration framework that gives you complete control over what is going to happen and lets you decide how much agency you want to bring into your automation. Throughout this execution, you not only control the order, but you also have a state that is shared across the entire flow. That means that throughout the different code blocks, functions, and LLM calls, you can put things into that state that you're going to reuse later on. So as these events happen, you can use this state to make sure you get all the information you need where it needs to be. At the end of the day, if we take one step back, what we're seeing is two different, complementary ways to build agentic systems.
One is more optimized for agency, where you have agents, tools, tasks, memory, and everything we have seen up to this point, and you're optimizing for these agents to just figure it out as they go. On the other side, you have flows, which provide this backbone of structure into which you can still bring agents, LLMs, and entire crews, but now you have way more control over how you set it up. They each have situations where they shine. Crews have more collaborative agents, so they are more autonomous and dynamic; they're better for exploratory, less deterministic tasks. If you want to write a screenplay, create a marketing strategy, or ideate for a landing page, crews are great at that. Flows, on the other hand, are a more low-level, event-driven control layer. They are ideal for structured, repeatable steps with tight control. You might auto-respond to an email, for example: on receiving an email, you might want to run some code blocks, and even a crew, to respond to it, depending on what is in it. Or a flow might be great as a meeting assistant, triggered when you save a meeting notes doc; from there it generates tasks from the transcript using a crew, and those tasks then go into Trello or get sent to Slack using traditional code. Again, the sky is the limit when you're talking about how you can weave these two together to create very complex use cases. A growing pattern among both enterprise and individual engineers that we have been seeing is this idea of an opt-in agency model, also what I like to call the "minimum that I can get away with" approach. It follows the long-standing engineering principle of keeping things simple. The idea is to start with a structured backbone, such as CrewAI Flows, and then add agency only where and when it's needed at each step.
So if plain code solves a particular problem on its own, you don't need to introduce the cost, the latency, and even the potential complications of an LLM or an agent. You should reserve agents and crews for the complex tasks that they are meant for, where they can actually add value. And even then, if you do need an LLM, you can start with just one single LLM call and eventually graduate to an agent or an entire crew. So there are a lot of different ways to choose how much agency you bring into your flow, and that allows you to make sure that you're being very effective and efficient with what you're building. So let's take a look at how we would structure a real-life flow. In this flow, you have a start conversation function. Once that conversation starts, it triggers a router function. And that router function is basically doing one single LLM request to decide whether it does a deep research or not. Let's say that this conversation has been going on for a while, and because of that, the LLM already has a lot of context; there are a lot of messages in the state of this flow. That means it doesn't necessarily need to trigger new research, because it already has all the answers in its context. So it can just go straight into an answer function, where we do another LLM call and get back to you with an answer. Now, what if you have to trigger an entire deep research? Well, then that goes into a research function that will run an entire crew. And that crew will behave the same way we have seen up to this point: it's going to go out and actually do the research, scrape pages, and put together a final report that will later be used to give you back your answer. So if you really look into flows, they have a set of building blocks that allows you to build these very complex use cases. And they're very simple building blocks, because this is just a very thin layer.
And we're going to talk about the three major building blocks here: start, listen, and router. All of those are just decorators that you can annotate your functions with. The first one is the start decorator, and that just marks the first function that is going to be executed. The next one is the listen decorator, and that marks a function that is going to be executed once another function is done. And then you also have the router decorator, which defines a conditional function that routes requests in different ways depending on what happens inside the function itself. With these three very simple building blocks alone, there is so much you can do. And as we put more blocks together, and we're going to talk about them in a second, you will see how you can do so much more. So here's an example of a very simple flow that leverages these building blocks. All you have to do is import those decorators from CrewAI; in this case, we're just importing listen and start. And you can see that this is regular Python, so it's a very Pythonic way to do things: you're just writing your functions and using these decorators to annotate them. We're using the start decorator to flag the first function as the starting point of the flow. Then you have the listen decorator, which is listening to the start conversation function; once that function is done, it will call the trigger research function that we're annotating with the listen decorator. So you can see how simple it is to use start and listen to set up the entire chain of events that you want to happen. And there's so much more you can do beyond this basic flow. If you look at some examples real quick, you can see how the combination of these patterns can build something truly interesting.
So you can have a first function using the start decorator that then triggers two other functions, both listening to that same first function. That way, you can have the second and the third function run in parallel just by using these decorators, and then the rest of your flow keeps going. But there are a lot more patterns in there as well. For example, you could have a fourth function that only gets executed when the second and the third function finish. When both are done, then that fourth function will be executed. Here we're using a combination of the listen decorator with the and_ helper, and you'll see how we can do that in code; it's extremely simple. But you don't need to end there. You can have that same fourth function set up to be executed when either the second or the third function is done, using the or_ helper. So that's another way you can set it up. And there are even more: for example, if we use the router decorator, we can have the second function trigger either a third one or a fourth one. So look at all these different combinations and all these abstractions. I know they can be a little hard to visualize, but when you apply them to a real use case, they add so much clarity and allow you to do so much more. So let's go back to the example we talked about before: the conversation with a crew, the deep research use case for flows. I want to go over this real quick. The start conversation function would be using the start decorator, indicating that this is where the whole flow starts. The trigger deep research function would use the router decorator, deciding whether it's going to send a request one way or another depending on what happens inside that function. Here we're doing an LLM call; we could be using a function call from the LLM, for example, to decide whether we go one route or the other.
Now, if we go one route, for the answer, that function is going to be listening to one specific side of the router. Or if we go with the research, that function is going to be listening to the other side of the router. And we're going to see how to do that in a second. But the cool thing is that as you build this entire workflow, CrewAI allows you to automatically plot it out, so you can actually have a visual cue: the entire system mapped out for you based on your code. That makes documentation so much easier and allows you to understand everything that is happening inside your flow. All you have to do is go into the CrewAI CLI and type crewai flow plot. What that will do is automatically look at all your functions, decorators, and annotation patterns and plot them out for you. And that allows you, again, to automatically generate these diagrams to help you not only understand what is happening, but also make sure that you can document it. Crews and flows can get very complex, especially when you're just staring at the code all day, so this really helps you see what is going on and spot issues with your process. So here is an example of a deep research app, actually mapped out by the CrewAI flow plot. You can see the getUserMessage function as the initial starting function, going into a router function that will either go into answering or clarifying. Clarifying here means asking the user extra questions in order to understand how deep it should research and where it should focus its deep research. After that is done, it goes into a crew function that does the entire research for you using the regular crew pattern we have seen before, which then goes into a generateReport function at the end that triggers back into the answer.
So here you have a very easy-to-parse visual model of your entire code, so that you understand what is happening and what each function is responsible for. And this is something that I'm so excited about, because as an engineer myself, it makes my life so much easier. So if you want to start a flow on your own and create an entire folder structure, there is one simple command that you can use, and that is crewai create flow. All you have to do is type that into your terminal and give your flow a name, and that will automatically create the entire project for you. Let's go over that folder structure for one second. You can see that this is just a regular Python project using pyproject.toml to define its versions and dependencies, and that it's structured with a source-first directory layout to support your flow. You're going to have one folder specifically for crews, where in this case we have a research crew, and a specific folder for tools that can be reused across all the crews. And your actual flow will live in that main.py file, where you can code all your functions. Again, you can split this into different files if you want to. At the end of the day, a flow is such a thin layer that it's all regular Python, so you can break things apart in any way that pleases you, to make sure this is something that is easily maintainable for you and for your team. Another very interesting thing about flows is that every flow has its own state and state management logic. For the sake of comparison, you can think about this as being like a database, where you can store anything during the flow execution. So let's say that from the start, from the first function, you produce some sort of data, either by doing LLM calls, or by executing code, or maybe as the final output of your crews or an agent, and you want to store that somewhere to be used later.
You can just write that into your flow state, and eventually, in other functions, you can read it back out. That allows you to have this one shared pocket of data that, during a flow execution, you can write things into and read things from. And that gives you so much flexibility in how you share all this data among the different functions as they get executed. But not only that, you can also persist that state, so that it is stored in an actual database. If you ever run that flow again, the state gets automatically preloaded from where it was saved, and you can pick things up from where you left off. In order to do that, you can do something as simple as annotating the entire flow with a persist decorator, and that will automatically store the state in a local database for you when you're running this on your own computer. You can see how important this is: in our earlier example about the deep research conversation, as we send new messages, we want to make sure we're persisting all the messages and outputs into this one single state that we can reuse as the conversation goes on. So there's an actual conversation history as the flow gets executed. Every time we run the flow from the top, it automatically preloads all the conversation up to that point and allows you to just keep chatting. And again, that information can not only be used across all the functions, but it's also being used across many flow executions, because it's actually being persisted and preloaded. And a very cool thing is that, for checkpoints, you can not only persist the entire state of the flow at the end of every single function, but you can be as granular as you want by going per function and defining which functions should actually persist the state.
All you have to do is use this persist decorator to annotate every function where you want the state, as it is at the end of that function, persisted to the database. And whichever is the final version of that state is going to be automatically reloaded on your next execution of the flow. You can still overwrite some of that if you want to, and we're going to see an example of that in a second, once we actually get our hands dirty. So I don't know about you, but flows feel incredibly useful. And frankly, I'm seeing a lot of people using them day to day, and they have become the chosen way to move some of these use cases into a production environment, because they allow you to really set this structure and have as much control as you want, and, by applying this opt-in agency model, decide how much agency and how much intelligence you want to add into your automation. That said, flows need to be designed well to work properly, and they need to be built reliably. So in the next video, we're actually going to get our hands dirty and build this entire flow ourselves. I'm very excited about this. It's going to look awesome and you're going to love it. So let's jump right in there.