Welcome to Safe and Reliable AI via Guardrails, built in partnership with GuardrailsAI. Guardrails are safety mechanisms and validation tools built into AI applications, especially those that use large language models, or LLMs, to ensure at runtime that the application follows specific rules and operates within predefined boundaries. Guardrails act as a protective framework, preventing unintended outputs from LLMs and aligning their behavior with the developer's expectations. They also provide a critical layer of control and oversight within your application, and support building safe and responsible AI. This course will show you how to build robust guardrails from scratch that mitigate common failure modes of LLM-powered applications, like hallucinations or inadvertently revealing personally identifiable information.

I'm delighted to introduce Shreya Rajpal, who is CEO and Co-Founder of GuardrailsAI and your instructor for this course. Shreya has been working on AI reliability problems for most of her career, including for self-driving car systems, where reliable behavior is critical for the safety of pedestrians and riders. And that's when she and I first met, many years ago. She was also a founding engineer at Predibase, so she's deeply familiar with building LLM systems. Welcome, Shreya.

Thanks, Andrew. It's great to be here. I'm really excited to show you how guardrails can help you create reliable chatbot applications and help you realize the full potential of LLMs to power your projects.

I've talked to many teams that are working to build innovative applications using LLMs. The availability of APIs to access powerful models like GPT-4, Claude, Gemini, and many others has allowed developers to quickly build prototypes, which is great for early-stage development. But when you want to move beyond a proof of concept, teams often encounter problems with the reliability of the LLM that lies at the heart of the application. The core challenge is that the output of LLMs is hard to predict in advance. Techniques like prompting, model fine-tuning, alignment methods such as RLHF, and RAG have all helped significantly, but they still can't fully eliminate output variability and unpredictability. This can lead to significant challenges, especially when designing applications for industries with strict regulatory requirements, or for clients that demand high levels of consistency. Developers often find that techniques like RLHF and RAG are insufficient on their own to meet the stringent reliability and compliance standards required for many real-world applications.

At GuardrailsAI, we work with many clients in sectors like healthcare, government, and finance who are really excited about the possibilities of LLMs for their businesses, but can't use them in their products because, on their own, LLMs aren't trustworthy enough. This is where guardrails come in. These additional components of AI applications check that the output or input of an LLM conforms to a set of rules or guidelines, and this can be used to prevent incorrect, irrelevant, or sensitive information from being revealed to users. Implementing guardrails in your applications can really help you move beyond the proof-of-concept phase and get your application ready for production. At the heart of the guardrails implementation you'll learn in this course is a component called the validator.
This is a function that takes as input a user prompt and/or the response from an LLM, and checks to make sure that it conforms to a predefined rule. Validators can be quite simple. For example, if you want to build a validator that checks whether text contains any personally identifiable information, or PII, you can use a simple regular expression to check for phone numbers, emails, or similar types of PII, and if any are present, have the application throw an exception to prevent the information from being revealed to the user.

You can also create more advanced validators that use machine learning models, like transformers or even other LLMs, to carry out more complex analysis of the text. This can help you build systems that, for example, keep chatbots on topic by checking against a list of allowed discussion subjects, or prevent specific words from being included in an LLM's response, which is useful for avoiding trademarked terms or mentions of competitors' names. You can even use guardrails to help reduce hallucinations from LLMs. In one of the lessons in this course, you'll use a natural language inference, or NLI, model to build a hallucination validator that checks whether the answer to a question in a RAG system is actually grounded in the words of the retrieved text. This means that the source text actually confirms the truthfulness of whatever the LLM just generated. Guardrails are very flexible, and you can make use of many smaller machine learning models to carry out validation tasks. This helps keep your application performant and, for some failure modes, actually results in higher reliability than using LLMs alone.

This course will walk you through the coding pattern that we use to build guardrails at our company, and you'll implement individual guardrails for a number of failure modes. You'll also learn how to access many pre-built guardrails that are available on the Guardrails Hub. After completing the course, you'll be able to modify this pattern and build your own guardrails that are customized for your specific product or use case. I'd like to thank Zayd Simjee from GuardrailsAI and Tommy Nelson from DeepLearning.AI, who have worked to create this course.

A lot of companies worry about the safety and reliability of LLM-based systems, and this is slowing down investment in building such systems. Fortunately, putting guardrails on your system makes a huge difference in terms of creating safe, reliable applications. So I think the tools in this course will unlock more opportunities for you to build and deploy LLM-powered applications. Specifically, the next time someone expresses a worry to you about the safety, reliability, or hallucinations of LLM-based systems, I think this course will help you answer in a way that reassures them. With that, let's go on to the next video, where you'll explore the basic failure modes of a chatbot.
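To make the validator idea concrete before the next lesson, here is a minimal sketch of the kind of regex-based PII check described above. It is written as plain Python rather than against the Guardrails library, and the patterns, function name, and exception name are illustrative assumptions, not the implementation built in the course.

```python
import re

# Hypothetical patterns for illustration only; real PII detection would need
# broader coverage (names, addresses, account numbers, ...) or an ML detector.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\b(?:\+?\d{1,2}[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
}

class PIIDetected(Exception):
    """Raised when text appears to contain personally identifiable information."""

def validate_no_pii(text: str) -> str:
    """Return the text unchanged if no PII pattern matches; otherwise raise."""
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(text):
            raise PIIDetected(f"Possible {label} found in model output.")
    return text

# Example usage: run the validator on an LLM response before showing it to a user.
try:
    validate_no_pii("You can reach me at jane.doe@example.com.")
except PIIDetected as err:
    print(f"Blocked response: {err}")
```

In a real application, this check would typically run on both the user's prompt and the model's response, with the exception handled by returning a safe fallback message instead of the blocked text.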
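Similarly, here is a rough sketch of the NLI-based grounding check mentioned above, treating the retrieved passage as the premise and the LLM's answer as the hypothesis. The model checkpoint, threshold, and function name are assumptions for illustration; the hallucination validator you'll build in the course may differ.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Assumed NLI checkpoint for illustration; any checkpoint whose labels include
# "ENTAILMENT" (e.g. an MNLI-finetuned model) would work the same way.
MODEL_NAME = "microsoft/deberta-large-mnli"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)

def is_grounded(source_text: str, llm_answer: str, threshold: float = 0.8) -> bool:
    """Accept the answer only if the NLI model says the retrieved source text
    entails it with probability above the threshold."""
    inputs = tokenizer(source_text, llm_answer, return_tensors="pt", truncation=True)
    with torch.no_grad():
        probs = model(**inputs).logits.softmax(dim=-1)[0]
    label2id = {label.upper(): i for label, i in model.config.label2id.items()}
    return probs[label2id["ENTAILMENT"]].item() >= threshold

# Example: an answer unsupported by the retrieved text should fail the check.
source = "The company reported revenue of $10 million in 2023."
print(is_grounded(source, "Revenue in 2023 was $10 million."))   # likely True
print(is_grounded(source, "The company was founded in 1999."))   # likely False
```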