Instructor: Shreya Rajpal
Learn the common failure modes of LLM-powered applications that guardrails can help mitigate, including hallucinations and revealing sensitive information.
Understand how AI guardrails validate and verify your applications with input and output guards, ensuring reliable and controlled interactions.
Add guardrails to a RAG-powered customer service chatbot to create a new layer of control over the application's behavior.
Join our new short course, Safe and Reliable AI via Guardrails, and learn how to build production-ready applications with Shreya Rajpal, co-founder & CEO of GuardrailsAI.
The output of LLMs is fundamentally probabilistic: you cannot know in advance exactly what a model will return, or guarantee the same response twice. This makes it difficult to put LLM-powered applications into production for industries with strict regulations or clients who require highly consistent application behavior.
Fortunately, adding guardrails to your system gives you an extra layer of control for building safe and reliable applications. Guardrails are safety mechanisms and validation tools built into AI applications, acting as a protective framework that prevents your application from revealing incorrect, irrelevant, or sensitive information.
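To make the idea concrete, here is a minimal sketch of an output guard in plain Python (a simplified illustration, not the course's implementation): `call_llm`, the PII patterns, and the refusal message are hypothetical stand-ins.

```python
import re

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real model call (e.g., a chat-completion client).
    return "You can reach the owner at owner@example.com for refunds."

# Very rough PII patterns, for illustration only.
PII_PATTERNS = [
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),       # email addresses
    re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),   # US-style phone numbers
]

def output_guard(response: str) -> str:
    """Block responses that appear to leak PII before they reach the user."""
    if any(p.search(response) for p in PII_PATTERNS):
        return "Sorry, I can't share personal contact details."
    return response

def guarded_chat(prompt: str) -> str:
    # Input guards (e.g., topic checks on the user's prompt) would run here first.
    return output_guard(call_llm(prompt))

print(guarded_chat("How do I get a refund?"))
```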
This course will show you how to build robust guardrails from scratch that mitigate common failure modes of LLM-powered applications, such as hallucinating or revealing personally identifiable information (PII). You'll also learn how to access a variety of pre-built guardrails on the GuardrailsAI hub that are ready to integrate into your projects.
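The course covers the hub workflow in detail; as a rough sketch of how the guardrails Python package is typically wired up (the exact API calls and the DetectPII validator arguments below are assumptions about the current library, not taken from the course), it looks something like this:

```python
# Assumes: pip install guardrails-ai
#          guardrails hub install hub://guardrails/detect_pii
from guardrails import Guard
from guardrails.hub import DetectPII

# Build a guard that raises an exception if the text contains emails or phone numbers.
guard = Guard().use(
    DetectPII,
    pii_entities=["EMAIL_ADDRESS", "PHONE_NUMBER"],
    on_fail="exception",
)

# Validate a candidate chatbot response before returning it to the user.
guard.validate("Our pizzeria is open from 11 a.m. to 10 p.m. every day.")    # passes
# guard.validate("Email the owner at owner@example.com for a refund.")       # would raise
```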
You'll implement these guardrails in the context of a RAG-powered customer service chatbot for a small pizzeria.
In detail, you'll work through the lessons listed in the course outline below.
The tools in this course will unlock more opportunities for you to build and deploy safe, reliable LLM-powered applications ready for real-world use.
This course is for anyone with basic Python knowledge who wants to enhance the safety and reliability of LLM-powered applications with practical, hands-on guardrail techniques.
Introduction
Failure modes in RAG applications
What are guardrails
Building your first guardrail
Checking for hallucinations with Natural Language Inference
Using a hallucination guardrail in a chatbot
Keeping a chatbot on topic
Ensuring no personally identifiable information (PII) is leaked
Preventing competitor mentions
Conclusion
Course access is free for a limited time during the DeepLearning.AI learning platform beta!