Welcome to Orchestrating Workflows for GenAI Applications, built in partnership with Astronomer. In this course, you'll build a RAG pipeline to ingest descriptions of books from text files, compute embeddings for the descriptions, and store the embeddings in a vector database. You'll automate the pipeline using Airflow, which is an orchestration tool that ensures the steps are executed in the correct order, and also that the pipeline is triggered at the right time. I'm delighted that your instructors for this course are Kenten Danas, who is Senior Manager of Developer Relations, as well as Tamara Fingerlin, who is a Developer Advocate at Astronomer.

Thanks, Andrew. We're excited to work with you on this course. To move your proof of concept from development to production, you need to transform your logic into an automated pipeline consisting of multiple steps, where each step represents one operation. Say, for example, you have access to a list of files containing product reviews, and you've written a large block of code that summarizes the reviews for each product. To automate this workflow, you can break the logic into a sequence of steps: first, find the location of the review text files for each product; second, aggregate the feedback across the reviews of each product; third, summarize the reviews for each product using an LLM; and finally, extract the sentiment from the summaries, again using an LLM. This approach helps you easily identify failure points across the pipeline and recover from them.

For many GenAI pipelines, failures can occur due to API rate limits or the API returning other errors, which can happen if the pipeline has to process large volumes of product reviews. In this course, you'll learn how you can configure retries for your tasks, so your pipeline can wait a little bit before trying again. You'll also learn how you can process large data in parallel. For example, at the summarization step, instead of creating summaries for all products in one step, you can process the reviews of each product in parallel. And finally, you'll also learn how the pipeline can be triggered whenever new data becomes available or gets updated, such as a new set of product review files. (Short code sketches of these ideas appear at the end of this introduction.)

You'll apply all of these practices to your RAG example. You'll start with a notebook that contains your RAG prototype, ingesting and embedding book descriptions. Then you'll turn the notebook into an actual pipeline that is triggered manually. After that, you'll schedule the pipeline to run automatically, make it adapt to your data at runtime, and add automatic retries and notifications to it. In the final lesson, you'll learn how to apply GenAI pipelines in real life. Many people worked to create this course. I'd like to thank, from Astronomer, Stephen Kilian, and from DeepLearning.AI, Hawraa Salami, who also contributed to this course.

The process of turning a Jupyter notebook into runnable production software is an important skill for any AI developer. In the next video, you'll go through this entire process and see how to do it yourself. One of the non-intuitive aspects for many developers doing this for the first time is that the process of breaking down the workflow into steps usually ends up with a larger number of smaller individual steps than you might expect, and gaining intuition on how to do this will make your applications run faster and more reliably. So please go on to the next video to learn about this.
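To make the workflow decomposition, retry, and parallelism ideas above concrete, here is a minimal sketch of the product-review example as an Airflow DAG, assuming the Airflow 2.x TaskFlow API. The file paths, function bodies, and names like summarize_reviews and extract_sentiment are hypothetical placeholders, not code from the course.

```python
from datetime import timedelta

from airflow.decorators import dag, task


@dag(schedule=None, catchup=False)
def product_review_pipeline():
    @task
    def find_review_files() -> list[str]:
        # Step 1: locate the review text files (hypothetical paths).
        return ["reviews/product_a.txt", "reviews/product_b.txt"]

    @task
    def aggregate_feedback(path: str) -> str:
        # Step 2: combine all reviews for one product into a single text.
        with open(path) as f:
            return f.read()

    # Retries let a task wait a little and try again after transient
    # failures such as API rate limits.
    @task(retries=3, retry_delay=timedelta(minutes=5))
    def summarize_reviews(feedback: str) -> str:
        # Step 3: call an LLM to summarize (placeholder logic).
        return f"Summary: {feedback[:100]}"

    @task(retries=3, retry_delay=timedelta(minutes=5))
    def extract_sentiment(summary: str) -> str:
        # Step 4: call an LLM to label the sentiment (placeholder logic).
        return "positive"

    files = find_review_files()
    # Dynamic task mapping (.expand) creates one parallel task instance
    # per product, instead of one big task processing every product.
    feedback = aggregate_feedback.expand(path=files)
    summaries = summarize_reviews.expand(feedback=feedback)
    extract_sentiment.expand(summary=summaries)


product_review_pipeline()
```

Because each operation is its own task, a failure at, say, the sentiment step retries or reruns only that step, which is why decomposition tends to produce more, smaller steps than you might expect.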
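The introduction also mentions triggering the pipeline whenever new data becomes available. One way to express this in Airflow 2.x is dataset-aware scheduling (renamed assets in Airflow 3); the sketch below assumes a hypothetical file URI and placeholder task bodies.

```python
from airflow.datasets import Dataset
from airflow.decorators import dag, task

# Hypothetical URI identifying the product-review files.
reviews = Dataset("file://data/product_reviews/")


@dag(schedule="@daily", catchup=False)
def ingest_reviews():
    # Declaring the dataset as an outlet tells Airflow this task
    # updates it each time the task succeeds.
    @task(outlets=[reviews])
    def fetch_new_reviews():
        ...  # download or copy new review files

    fetch_new_reviews()


# schedule=[reviews] runs this DAG whenever the dataset is updated,
# rather than at a fixed time.
@dag(schedule=[reviews], catchup=False)
def summarize_on_new_data():
    @task
    def summarize_all():
        ...  # kick off the summarization pipeline

    summarize_all()


ingest_reviews()
summarize_on_new_data()
```

This decouples the producer and consumer pipelines: the downstream DAG reacts to data arriving instead of polling on a clock.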