DeepLearning.AI
AI is the new electricity and will transform and improve nearly all areas of human lives.

💻  Accessing Utils File and Helper Functions

In each notebook on the top menu:

1: Click on "File"

2: Then, click on "Open"

You will be able to see all the notebook files for the lesson, including any helper functions used in the notebook, in the left sidebar.


💻  Downloading Notebooks

In each notebook on the top menu:

1: Click on "File"

2: Then, click on "Download as"

3: Then, click on "Notebook (.ipynb)"


💻  Uploading Your Files

After following the steps shown in the previous section ("File" => "Open"), click on the "Upload" button to upload your files.


📗  See Your Progress

Once you enroll in this course (or any other short course on the DeepLearning.AI platform) and open it, you can click on 'My Learning' at the top right corner of the desktop view. There, you will be able to see all the short courses you have enrolled in and your progress in each one.

Additionally, your progress in each short course is displayed at the bottom-left corner of the learning page for each course (desktop view).


📱  Features to Use

🎞  Adjust Video Speed: Click on the gear icon (⚙) on the video and then, from the Speed option, choose your desired video speed.

🗣  Captions (English and Spanish): Click on the gear icon (⚙) on the video and then, from the Captions option, choose to see the captions in either English or Spanish.

🔅  Video Quality: If you do not have access to high-speed internet, click on the gear icon (⚙) on the video and then, from Quality, choose the quality that works best for your internet speed.

🖥  Picture in Picture (PiP): This feature allows you to continue watching the video when you switch to another browser tab or window. Click on the small rectangle shape on the video to go to PiP mode.

√  Hide and Unhide Lesson Navigation Menu: If you do not have a large screen, you may click on the small hamburger icon beside the title of the course to hide the left-side navigation menu. You can then unhide it by clicking on the same icon again.


🧑  Efficient Learning Tips

The following tips can help you have an efficient learning experience with this short course and other courses.

🧑  Create a Dedicated Study Space: Establish a quiet, organized workspace free from distractions. A dedicated learning environment can significantly improve concentration and overall learning efficiency.

📅  Develop a Consistent Learning Schedule: Consistency is key to learning. Set aside specific times in your day for study and make it a routine. Consistent study times help build a habit and improve information retention.

Tip: Set a recurring event and reminder in your calendar, with clear action items, to get regular notifications about your study plans and goals.

☕  Take Regular Breaks: Include short breaks in your study sessions. The Pomodoro Technique, which involves studying for 25 minutes followed by a 5-minute break, can be particularly effective.

💬  Engage with the Community: Participate in forums, discussions, and group activities. Engaging with peers can provide additional insights, create a sense of community, and make learning more enjoyable.

โœ ย  Practice Active Learning: Don't just read or run notebooks or watch the material. Engage actively by taking notes, summarizing what you learn, teaching the concept to someone else, or applying the knowledge in your practical projects.


📚  Enroll in Other Short Courses

Keep learning by enrolling in other short courses. We add new short courses regularly. Visit the DeepLearning.AI Short Courses page to see our latest courses and begin learning new topics. 👇

👉👉 🔗 DeepLearning.AI – All Short Courses


🙂  Let Us Know What You Think

Your feedback helps us know what you liked and didn't like about the course. We read all of your feedback and use it to improve this course and future courses. Please submit your feedback by clicking on the "Course Feedback" option at the bottom of the lessons list menu (desktop view).

Also, you are more than welcome to join our community 👉👉 🔗 DeepLearning.AI Forum


Welcome to Safe and Reliable AI via Guardrails, built in partnership with GuardrailsAI. Guardrails are safety mechanisms and validation tools built into AI applications, especially those that use large language models, or LLMs, to ensure at runtime that the application follows specific rules and operates within predefined boundaries. Guardrails act as a protective framework, preventing unintended outputs from LLMs and aligning their behavior with the developer's expectations. They also provide a critical layer of control and oversight within your application, and they support building safe and responsible AI. This course will show you how to build robust guardrails from scratch that mitigate common failure modes of LLM-powered applications, like hallucinations or inadvertently revealing personally identifiable information.

I'm delighted to introduce Shreya Rajpal, who is CEO and Co-Founder of GuardrailsAI and your instructor for this course. Shreya has been working on AI reliability problems for most of her career, including for self-driving car systems, where reliable behavior is critical for the safety of pedestrians and riders; that's when she and I first met many years ago. She is also a founding engineer at Predibase, so she's deeply familiar with building LLM systems. Welcome, Shreya.

Thanks, Andrew. It's great to be here. I'm really excited to show you how guardrails can help you create reliable chatbot applications and help you realize the full potential of LLMs to power your projects.

I've talked to many teams that are working to build innovative applications using LLMs. The availability of API access to powerful models like GPT-4, Claude, Gemini, and many others has allowed developers to quickly build prototypes, which is great for early-stage development. But when you want to move beyond a proof of concept, teams often encounter problems with the reliability of the LLM that lies at the heart of the application. The core challenge is that the output of LLMs is hard to predict in advance. Techniques like prompting, model fine-tuning, alignment methods such as RLHF, and RAG have all helped significantly, but they still can't fully eliminate output variability and unpredictability. This can lead to significant challenges, especially when designing applications for industries with strict regulatory requirements, or for clients that demand high levels of consistency. Developers often find that techniques like RLHF and RAG are insufficient on their own to meet the stringent reliability and compliance standards that are required for many real-world applications.

At GuardrailsAI, we work with many clients in sectors like healthcare, government, and finance who are really excited about the possibilities of LLMs for their businesses, but who can't use them in their products because, on their own, LLMs aren't trustworthy enough. This is where guardrails come in. These additional components of AI applications check that the output or input of an LLM conforms to a set of rules or guidelines, and this can be used to prevent incorrect, irrelevant, or sensitive information from being revealed to users. Implementing guardrails in your applications can really help you move beyond the proof-of-concept phase and get your application ready for production.

At the heart of the guardrails implementation you'll learn in this course is a component called the validator. This is a function that takes as input a user prompt and/or the response from an LLM and checks that it conforms to a predefined rule. Validators can be quite simple. For example, if you want to build a validator that checks whether the text contains any personally identifiable information, or PII, you can use a simple regular expression to check for phone numbers, emails, or similar types of PII data, and if any are present, have the application throw an exception to prevent the information from being revealed to the user.

You can also create more advanced validators that use machine learning models, like Transformers or even other LLMs, to carry out more complex analysis of the text. This can help you build systems that, for example, keep chatbots on topic by checking against a list of allowed discussion subjects, or prevent specific words from being included in an LLM's response, which is useful for avoiding trademarked terms or mentions of competitor names. You can even use guardrails to help reduce hallucinations from LLMs. In one of the lessons in this course, you'll use a natural language inference, or NLI, model to build a hallucination validator that checks whether the answer to a question in a RAG system is actually grounded in the words of the retrieved text. This means that the source text actually confirms the truthfulness of whatever the LLM just generated.

Guardrails are very flexible, and you can make use of many smaller machine learning models to carry out validation tasks. This helps keep your application performant and, for some failure modes, actually results in higher reliability than using LLMs alone. This course will walk you through the coding pattern that we use to build guardrails at our company, and you'll implement individual guardrails for a number of failure modes. You'll also learn how to access many pre-built guardrails that are available on the Guardrails Hub. After completing the course, you'll be able to modify this pattern and build your own guardrails that are customized for your specific product or use case.

I'd like to thank Zayd Simjee from GuardrailsAI and Tommy Nelson from DeepLearning.AI, who have worked to create this course. A lot of companies worry about the safety and reliability of LLM-based systems, and this is slowing down investment in building such systems. Fortunately, putting guardrails on your system makes a huge difference in terms of creating safe, reliable applications. So I think the tools in this course will unlock more opportunities for you to build and deploy LLM-powered applications. Specifically, the next time someone expresses a worry to you about the safety, reliability, or hallucinations of LLM-based systems, I think this course will help you answer in a way that reassures them. With that, let's go on to the next video, where you'll explore the basic failure modes of a chatbot.
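The regex-based PII check described in the introduction above can be sketched in a few lines of plain Python. This is only an illustration of the idea, not the Guardrails AI API: the function name, patterns, and exception type are hypothetical, and real PII detection needs far broader coverage than two toy patterns.

    import re

    # Toy patterns for two common PII types; production detection needs broader coverage.
    PII_PATTERNS = {
        "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
        "phone": re.compile(r"\b(?:\+?\d{1,2}[\s.-]?)?(?:\(\d{3}\)|\d{3})[\s.-]?\d{3}[\s.-]?\d{4}\b"),
    }

    class PIIDetectedError(Exception):
        """Raised when a prompt or LLM response appears to contain PII."""

    def validate_no_pii(text: str) -> str:
        """Return the text unchanged if no PII is found; otherwise raise an exception."""
        for label, pattern in PII_PATTERNS.items():
            if pattern.search(text):
                raise PIIDetectedError(f"Possible {label} detected; blocking this text.")
        return text

    # Usage: run the check on the LLM response (or the user prompt) before passing it on.
    print(validate_no_pii("Our support team is available on weekdays from 9 am to 5 pm."))

In an application, you would catch the exception and return a safe fallback message instead of the flagged text.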
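For the model-backed validators mentioned above, such as keeping a chatbot on topic, here is a minimal sketch using the Hugging Face transformers zero-shot-classification pipeline, which runs an NLI model under the hood. The topic list, threshold, and model choice are assumptions made for illustration; the guardrails built in the course lessons use their own validators and models.

    from transformers import pipeline

    # Zero-shot classification scores a text against candidate labels using an NLI model.
    classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

    # Hypothetical scope for a cooking-assistant chatbot.
    ALLOWED_TOPICS = ["cooking", "recipes", "kitchen equipment"]

    def is_on_topic(text: str, threshold: float = 0.5) -> bool:
        """Return True if the highest-scoring allowed topic clears the confidence threshold."""
        result = classifier(text, candidate_labels=ALLOWED_TOPICS)
        return result["scores"][0] >= threshold

    print(is_on_topic("How long should I roast a chicken?"))   # likely True
    print(is_on_topic("What stocks should I buy this week?"))  # likely False

A guardrail would run a check like this on each user message or model reply and refuse, or redirect, anything that falls outside the allowed subjects.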
Safe and Reliable AI via Guardrails
  • Introduction (Video · 6 mins)
  • Failure modes in RAG applications (Video with Code Example · 13 mins)
  • What are guardrails (Video · 6 mins)
  • Building your first guardrail (Video with Code Example · 11 mins)
  • Checking for hallucinations with Natural Language Inference (Video with Code Example · 12 mins)
  • Using hallucination guardrail in a chatbot (Video with Code Example · 6 mins)
  • Keeping a chatbot on topic (Video with Code Example · 9 mins)
  • Ensuring no personally identifiable information (PII) is leaked (Video with Code Example · 13 mins)
  • Preventing competitor mentions (Video with Code Example · 9 mins)
  • Conclusion (Video · 1 min)