Welcome to Serverless Agentic Workflows with Amazon Bedrock, built in partnership with AWS and taught by Mike Chambers, a senior developer advocate at AWS specializing in generative AI. Mike also taught our previous course, Serverless LLM Apps with Amazon Bedrock. It's great to have you back.

Thanks, Andrew. I'm happy to be back and excited to be sharing insights this time on agentic workflows on AWS. In this course, you'll learn how to deploy a responsible agentic application into production using serverless technology. You'll initialize an agent, add tools, code execution, and guardrails, and then deploy it in a serverless environment.

As generative AI grows, applications are becoming more complex and more sophisticated. In the past, you created a chatbot by adding conversational history to the LLM. Now, chatbots and RAG applications can be much more complex if we want to fetch information from the web or from local sources. And further, the application might determine by itself when the information is not enough and when to keep searching the web or other databases for more info. In other words, these applications have become much more agentic. But with these complex workflows, where an agent may call lots of APIs, the complexity of getting a system up and running has also grown. For example, there are now agents that have access to many dozens of APIs. Rather than keeping a lot of hot servers that you pay for by the minute, ready to serve any API call, a serverless architecture lets you achieve the same effect with compute resources that get turned on only as you need them, without you having to worry about maintaining and scaling a bunch of servers.

One of the important concepts you'll also see in this course is the development of agents as a standalone service. You can start with a pre-built agent and configure or customize it to support your application.
This change, where you view an agent as a building block rather than the LLM as a building block, is an important shift, and you'll also hear about this in this course.

That's right. In this course, you'll build a customer support agent for your business selling tea mugs. Here's how the course will progress. You'll get started with Amazon Bedrock, where you'll create your first serverless agent. You'll learn to invoke it and to examine its trace, giving you visibility into the agent's thought processes. Next, you'll connect your agent to external services. It will fetch customer details and log support tickets in real time, demonstrating how it can interact with business tools like CRM systems. You'll then equip your agent with a code interpreter, enabling it to actually perform calculations. This opens up possibilities for data-driven decision-making, and truth be told, this is absolutely my favorite lesson. After that, you'll implement guardrails to prevent your agent from revealing sensitive information or using inappropriate language. Then you'll implement a fully managed RAG solution, connecting your agent to support documents. This will help the agent to resolve issues independently and know when to escalate. Finally, we'll briefly tour the Amazon Bedrock Agents interface in the AWS console, setting you up for further experimentation. By the end, you'll have built a sophisticated AI agent capable of handling real-world customer support scenarios, fully serverless and ready to scale.

Many people have helped to develop this course, including Antje Barth, Joe Fontaine, Anastacia Vandenberg, and Benjamin Gruher from AWS, David Lin from Vocareum, and Geoff Ladwig from DeepLearning.AI. As disclosure, I also serve on Amazon's board of directors. And with that, let's go on to the next video to get started.