This is another incredible use case. For this one, we're going to build a crew that can create content at scale. You will learn how to build crews that can not only monitor the web, but also figure out what they should be searching for, and even use RAG to create amazing blog posts on the fly. This is one of the use cases we're seeing a few customers deploy out there: they monitor news around specific topics and make sure they create timely, great content that can actually deliver results through content marketing.

For this use case, we're going to start by making sure that our crew can talk with the internet, searching it for content and relevant news. From that point on, we're going to have a few agents and tasks to parse this information. We're going to start with a market analyst agent, then a data analyst agent, a content creator, and a chief content officer. These four agents are going to do four different tasks: parse the latest news, search for market trends, write the blog and content for social media, and then review all this content to make sure it checks out. By running sequentially, these agents can build amazing blog posts and social media content for us about trending topics that people are talking about right now.

An interesting thing about this is that we can probably optimize some of these agents and tasks to be a little faster than they would be by default. To do this, we're going to use a major provider to power these agents: Groq. Here we're optimizing for speed. Groq is one of the fastest inference providers out there; their models run so fast that these agents can go through a bunch of news and market trends in the blink of an eye. So we're going to make sure we have these agents running on Groq.

For this crew, we're also using a combination of tools that we have not used before, and this is super interesting. We're using Serper, which allows us to search the internet. But then we're using RAG as a tool. So instead of only being able to scrape entire web pages, we're going to use a website search tool that allows our agents to automatically download the content, embed it on the fly, push it into a vector database, and then run searches against it. Your agents are going to be able to build these databases and search them all on their own. This unlocks so many potential use cases.

At the end of the day, what we're going to have is a content creation crew that does a few different things. It's going to search for the latest news around a financial topic, then search for and analyze any market data around it. It's going to create content for social media based on that, and write a full blog post on the same topic and everything it learned about it. And to wrap it up, it's going to review the blog post to make sure it sounds good. This content is going to be pretty great, especially if you use training and testing, as we learned in other lessons. Now let's jump into the code and build this crew yourself. We're going to start by importing the classes and modules that we're going to need for this one.
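Here's a minimal sketch of that setup cell. It assumes the API keys (OPENAI_API_KEY, SERPER_API_KEY, GROQ_API_KEY) live in a .env file, as in earlier lessons; swap in however you manage your environment.

```python
# Minimal setup sketch: core CrewAI classes plus environment loading.
# Assumes a .env file holding OPENAI_API_KEY, SERPER_API_KEY, GROQ_API_KEY.
import os
import yaml

from dotenv import load_dotenv
from crewai import Agent, Task, Crew

load_dotenv()  # pull the API keys into the environment
```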
We're just making sure that we set all the environment variables and import the main classes from CrewAI, as we have done before. Now we want to create a structured output to make sure that whatever data we get out of our crew, we can push into other systems. This structured output is going to be pretty straightforward: we're going to have a ContentOutput model that holds the article itself and the social media posts. The social media posts field is a list of a SocialMediaPost model that basically has a platform and the content. So during this crew's execution, it's going to write not only the article, but also social media posts to accompany it, so that we can post them on LinkedIn, Twitter, and everywhere else.

Now let's load our agents and tasks YAML files. Loading the YAML files is pretty straightforward; it's the same way we have been doing it so far. So let's take a look at our agents and tasks. In here we have four agents: the lead market analyst, the chief data strategist, the creative content director, and the chief content officer. Again, feel free to play around with these agents' roles, goals, and backstories as you tune things up and see if you can get different results.

Now let's check out the tasks. We also have four tasks. The first task is about monitoring and analyzing the latest news. The second one analyzes the market data and trends around a given subject. In this one, you can see that we are again interpolating a variable into our tasks, as we have done in other lessons. That means that when we kick off this crew, we need to make sure we pass it as an input. You can see that we also have a content creation task that, based on the insights, tries to write not only the blog post, but also social media updates to go along with it. And then we have the quality assurance task, which reviews and refines all the content about the subject, making sure that both the blog article and the social media posts check out.

Now let's go back into the code and keep on building. Let's import the CrewAI tools that we're going to use for this. The tools are pretty simple. We're using our usual SerperDevTool, which allows us to do Google searches, and our ScrapeWebsiteTool, which allows us to scrape an entire website's content. But we're bringing a new tool into this, and that is the WebsiteSearchTool. This is RAG as a tool. What it does is, as our agents search the internet and find websites that have interesting information, it automatically downloads those websites, breaks them into smaller chunks, embeds them, and saves them into a local embedding database so that it can do vector search in real time. Your agents will do all of that on their own; you don't have to worry about any of it. They will generate embeddings, and they will choose what to search and when to search it.

Now that we know what tools we're going to use, let's import and set up the LLMs that we're going to be using for this lesson. Keep in mind that we're using multiple models in this one to showcase how you can optimize for certain things on certain tasks. Let's start by setting up our OpenAI model. To do it, we're going to set the OPENAI_MODEL_NAME environment variable to GPT-4o-mini. That's already the default for CrewAI, but we're setting it anyway just to be more verbose and explicit.
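Pulling what we just walked through into one place, here's a sketch of the output models, the YAML loading, the tool imports, and the OpenAI model setting. The file paths and field names here are assumptions based on the narration, so adjust them to your own notebook.

```python
import os
from typing import List

import yaml
from pydantic import BaseModel, Field
from crewai_tools import SerperDevTool, ScrapeWebsiteTool, WebsiteSearchTool

# Structured output: an article plus a list of social media posts.
class SocialMediaPost(BaseModel):
    platform: str = Field(..., description="Platform, e.g. Twitter or LinkedIn")
    content: str = Field(..., description="The text of the post")

class ContentOutput(BaseModel):
    article: str = Field(..., description="The full blog article")
    social_media_posts: List[SocialMediaPost] = Field(
        ..., description="Posts to accompany the article"
    )

# Load the agent and task definitions, same as in earlier lessons.
with open("config/agents.yaml") as f:
    agents_config = yaml.safe_load(f)
with open("config/tasks.yaml") as f:
    tasks_config = yaml.safe_load(f)

# gpt-4o-mini is already CrewAI's default; we set it explicitly anyway.
os.environ["OPENAI_MODEL_NAME"] = "gpt-4o-mini"
```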
We're also going to use a Groq model, specifically Llama 3.1 70B. We can do that by setting up a variable that we're going to reference later when creating our agents, holding the string that represents the model: the provider is Groq, and the model name is llama-3.1-70b-versatile.

Now let's create our agents, our tasks, and our crew. Let's start with the agents. For our agents, we're going to use the agents config YAML file, and you can see that the market news monitor agent has a different set of tools from the data analyst and the content creator. The market news monitor has access to the SerperDevTool, which allows it to search the internet, and to the ScrapeWebsiteTool, which allows it to pull information from an entire website. That's a good fit for this agent, because it's trying to go broad and find as much information about the topic we care about as it can. Our data analyst and content creator agents can still do Google searches, but now they will also do RAG searches on whatever content they find. That way they're not parsing an entire website and all the information on it; they're searching for the specific topics they care about. And in the end, our quality assurance agent doesn't have any tools and is responsible for making sure that the content looks and sounds great.

Now that we've created our agents, let's create our tasks. We also have four different tasks, one for each agent. The first one monitors the news. The second one analyzes the market. The third one creates the content, and you can see that we're passing a context attribute that brings the results from both the monitor financial news and the analyze market data tasks into this one. So by the time it writes the content and the social media posts, it has all the information it needs to get the job done. Then our final agent, the quality assurance agent, is going to do the quality assurance task, and it's going to make sure that whatever output we get uses the model we declared above, so that we have not only the actual article, but also the social posts for all the different social networks.

Now that we have our tasks created, let's put it all together in our crew. For our crew, we just need to bring our agents and tasks together into one single object, and this crew is good to go. Let's kick it off and see what this article looks like. This is going to be a lot of fun, because think about everything that is happening here. We have agents that are researching the web and scraping web pages. Then we have agents that are downloading the content, breaking it into chunks, creating embeddings, storing them in a vector database, and automatically searching it to find every piece of information they need to write the most amazing article they can. This is such a nice setup, and it allows you to create so many different things. So let's kick off this crew, and don't forget that you need to pass inputs on this one, because we need to tell it what subject we want this content to be about. In this case, we're going extra hard: we want it to find specific information on inflation in the US and its impact on the stock market in 2024. This is a very hard task, but these agents can take care of it. So let's run them and see what we get.
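The wiring might look like the sketch below, building on the config dictionaries and ContentOutput model from the earlier sketch. The YAML key names and the "subject" input variable are assumptions, so match them to your own config files.

```python
from crewai import Agent, Task, Crew
from crewai_tools import SerperDevTool, ScrapeWebsiteTool, WebsiteSearchTool

groq_llm = "groq/llama-3.1-70b-versatile"  # Groq model string (needs GROQ_API_KEY)

# The two research-heavy agents run on Groq for speed.
market_news_monitor_agent = Agent(
    config=agents_config["market_news_monitor_agent"],
    tools=[SerperDevTool(), ScrapeWebsiteTool()],  # broad search + full-page scraping
    llm=groq_llm,
)
data_analyst_agent = Agent(
    config=agents_config["data_analyst_agent"],
    tools=[SerperDevTool(), WebsiteSearchTool()],  # search + RAG over found pages
    llm=groq_llm,
)
content_creator_agent = Agent(
    config=agents_config["content_creator_agent"],
    tools=[SerperDevTool(), WebsiteSearchTool()],
)
quality_assurance_agent = Agent(
    config=agents_config["quality_assurance_agent"],  # no tools: review only
)

monitor_financial_news_task = Task(
    config=tasks_config["monitor_financial_news"],
    agent=market_news_monitor_agent,
)
analyze_market_data_task = Task(
    config=tasks_config["analyze_market_data"],
    agent=data_analyst_agent,
)
create_content_task = Task(
    config=tasks_config["create_content"],
    agent=content_creator_agent,
    # Feed both research results into the writing task.
    context=[monitor_financial_news_task, analyze_market_data_task],
)
quality_assurance_task = Task(
    config=tasks_config["quality_assurance"],
    agent=quality_assurance_agent,
    output_pydantic=ContentOutput,  # enforce the structured output model
)

crew = Crew(
    agents=[market_news_monitor_agent, data_analyst_agent,
            content_creator_agent, quality_assurance_agent],
    tasks=[monitor_financial_news_task, analyze_market_data_task,
           create_content_task, quality_assurance_task],
    verbose=True,
)

result = crew.kickoff(inputs={
    "subject": "Inflation in the US and how it affects the stock market in 2024",
})
```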
Let's start by watching our first agent, the lead market analyst. Its first task is very straightforward: basically monitoring and analyzing the latest news and updates regarding financial markets. In this case, it goes ahead and searches for the latest news on inflation in the US and its impact on the stock market in 2024. We can see that it found good sources of information: you can see Deloitte in here, the Wall Street Journal, USA Today. So it found a bunch of information that it can now use. From this point on, you can see that it decides to scrape a website's content, looking at everything in there and making sure it finds all the information necessary to learn about this matter. Now that our lead market analyst has wrapped up its work, it gave us a full report and summary of everything it learned about the potential implications for the stock market of inflation rates in the US, including the consumer price index and so much more.

Now that this first agent is done with its work, let's see what happens from here. Here we can see our second agent, the chief data strategist, starting to analyze market data to better understand the impact of US inflation on the actual stock market. You can see that it starts by just doing a simple search. Once our chief data strategist wraps up its work, you can see that it gives a pretty long summary, including the current state, the inflation rate, the stock market impact, and even key trends, opportunities, and risks.

Now our content creators start writing the blog materials and the social media posts around all this. Here you can see our creative content director kick things off: based on the insights, it's going to write the initial version of the blog post. Let's see how that turns out. You can see that our agent does a search about US inflation trends in 2024 and their impact on the market, and it finds a lot of data throughout the web. But it doesn't stop there; it decides to look into a specific web page from Deloitte in order to understand the impacts of inflation in the United States. And this is not just scraping the website; this is using RAG as a tool. It automatically downloads the content of this web page, embeds it on the fly, and pushes it into ChromaDB so that it can actually search it. So here you can see the content coming back with only the specific things this agent actually cares about. This is so interesting, because your agent is basically doing RAG and building a vector database on the fly. Let's keep going and see how this agent does.

Here we can see our final agent, the chief content officer, finalize its review of the content and give us back an object that follows the ContentOutput model we created earlier in this lesson. Let's inspect it further so we can see what this article actually looks like. Let's start with the social media content that these agents created for us. Here you can see that it created posts for Twitter, LinkedIn, and Instagram, with emojis for Twitter and Instagram but none for LinkedIn. This is a very interesting detail these models can capture: the understanding that LinkedIn is probably not as receptive to emojis as other social networks.
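To pull those posts out of the structured result yourself, something like this sketch works, assuming the `result` object from the kickoff above and the ContentOutput field names.

```python
# Inspect the social media posts from the structured crew output.
for post in result.pydantic.social_media_posts:
    print(f"--- {post.platform} ---")
    print(post.content)
    print()
```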
Another thing worth highlighting here is that the LinkedIn post has a much more professional tone than the Instagram one. And on the Instagram post, you can see it mentioning to swipe up to read the expert analysis. So the model very much understands that Instagram has certain dynamics in how people use the product compared to something like LinkedIn.

Now let's look at the actual content of the blog post. I'm going to start by rendering this blog post as markdown so that we can really read it and appreciate some of the insights in there (a sketch of this rendering step is at the end of the lesson). And this is it: the "Inflation in the US and Its Impact on the Stock Market in 2024" blog post. It has an introduction, covers the current state of inflation, and talks about the key drivers of inflation, the projected inflation moving forward, its implications, and the impact on the stock market. It even goes into short-term and long-term considerations and makes some recommendations for investors out there.

So this is it: our final blog post, written for us by these agents, starting from a single, very complex subject, then going around researching the web to build a better understanding of the topic, then using RAG as a tool on specific web pages to gather the most important and relevant information out of them, and finally writing amazing content and reviewing it to make sure it's rock solid. This is a great use case, and we have been seeing some customers and users of CrewAI actually deploy it for doing content marketing at scale. It unlocks a lot of ideas for how you could use this, not only to produce content, but to combine these different tools and different models to build interesting things yourself.

Remember that two of these four agents were running Llama 3.1 70B on Groq, and that's super fast. So you can have a few agents optimized for speed, using models like this and providers like Groq, while other agents use other models, as with our content writer agents that were using OpenAI's GPT-4o-mini. This was a very interesting lesson, and I hope you're having a lot of fun, at least as much fun as I am. Now let's jump into the next one and start to talk about more use cases. Okay, I'm going to see you there in a few seconds.
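As a quick reference, the markdown rendering step mentioned above might look like this in a notebook cell, assuming the same `result` object.

```python
# Render the final article as markdown in the notebook.
from IPython.display import Markdown, display

display(Markdown(result.pydantic.article))
```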