DeepLearning.AI
AI is the new electricity and will transform and improve nearly all areas of human life.

💻   Accessing Utils File and Helper Functions

In each notebook on the top menu:

1: Click on "File"

2: Then, click on "Open"

You will see all the notebook files for the lesson, including any helper functions used in the notebook, in the left sidebar. See the following image for the steps above.


💻   Downloading Notebooks

In each notebook on the top menu:

1: Click on "File"

2: Then, click on "Download as"

3: Then, click on "Notebook (.ipynb)"


💻   Uploading Your Files

After following the steps shown in the previous section ("File" => "Open"), click on the "Upload" button to upload your files.


📗   See Your Progress

Once you enroll in this course (or any other short course on the DeepLearning.AI platform) and open it, you can click on "My Learning" at the top-right corner of the desktop view. There, you will see all the short courses you have enrolled in and your progress in each one.

Additionally, your progress in each short course is displayed at the bottom-left corner of the learning page for each course (desktop view).


📱   Features to Use

🎞   Adjust Video Speed: Click on the gear icon (⚙) on the video, then choose your desired speed from the Speed option.

🗣   Captions (English and Spanish): Click on the gear icon (⚙) on the video, then choose English or Spanish captions from the Captions option.

🔅   Video Quality: If you do not have access to high-speed internet, click on the gear icon (⚙) on the video, then choose the quality that works best for your internet speed from the Quality option.

🖥   Picture-in-Picture (PiP): This feature allows you to continue watching the video when you switch to another browser tab or window. Click on the small rectangle shape on the video to enter PiP mode.

√   Hide and Unhide the Lesson Navigation Menu: If you do not have a large screen, you may click on the small hamburger icon beside the title of the course to hide the left-side navigation menu. Click the same icon again to unhide it.


🧑   Efficient Learning Tips

The following tips can help you have an efficient learning experience with this short course and other courses.

🧑   Create a Dedicated Study Space: Establish a quiet, organized workspace free from distractions. A dedicated learning environment can significantly improve concentration and overall learning efficiency.

📅   Develop a Consistent Learning Schedule: Consistency is key to learning. Set aside specific times in your day for study and make it a routine. Consistent study times help build a habit and improve information retention.

Tip: Set a recurring event and reminder in your calendar, with clear action items, to get regular notifications about your study plans and goals.

☕   Take Regular Breaks: Include short breaks in your study sessions. The Pomodoro Technique, which involves studying for 25 minutes followed by a 5-minute break, can be particularly effective.

💬   Engage with the Community: Participate in forums, discussions, and group activities. Engaging with peers can provide additional insights, create a sense of community, and make learning more enjoyable.

✍   Practice Active Learning: Don't just read, run notebooks, or watch the material. Engage actively by taking notes, summarizing what you learn, teaching the concepts to someone else, or applying the knowledge in your own projects.


📚   Enroll in Other Short Courses

Keep learning by enrolling in other short courses. We add new short courses regularly. Visit the DeepLearning.AI Short Courses page to see our latest courses and begin learning new topics. 👇

👉👉 🔗 DeepLearning.AI – All Short Courses


🙂   Let Us Know What You Think

Your feedback helps us know what you liked and didn't like about the course. We read all your feedback and use it to improve this course and future courses. Please submit your feedback by clicking on the "Course Feedback" option at the bottom of the lesson list menu (desktop view).

Also, you are more than welcome to join our community: 👉👉 🔗 DeepLearning.AI Forum


Welcome to Quality and Safety for LLM Applications, built in partnership with WhyLabs. When building an LLM-powered app, you often want to use metrics to ensure it can handle inappropriate inputs and to ensure the quality and safety of its outputs. What I've seen in many companies is that an LLM app proof-of-concept can be quick to build: maybe you can throw something together in days or weeks, but the process of then understanding whether it's safe to deploy is what holds up getting it into actual usage. This short course goes over the most common ways an LLM application can go wrong. You'll hear about prompt injections, hallucinations, data leakage, and toxicity, plus tools to mitigate these risks.

I'm delighted to introduce the instructor for this course, Bernease Herman, who is a Senior Data Scientist at WhyLabs. Bernease has worked for the last six years on evaluation and metrics for AI systems, and I've had the pleasure of collaborating with her a few times already, since WhyLabs is a portfolio company of my team, AI Fund.

Thanks, Andrew. I've been seeing a lot of LLM safety and quality issues across a lot of companies, and I'm excited to share best practices from the field. In this course, you'll learn to look for data leakage, where personal information such as names and email addresses might appear in either the input prompts or the output responses of the LLM. You'll also learn to detect prompt injections, where a prompt attempts to get an LLM to output a response that it is supposed to refuse, for example, revealing instructions for causing harm. One method that you'll use is an implicit toxicity model. Implicit toxicity models go beyond identifying toxic words and can detect more subtle forms of toxicity, where the words may sound innocent but the meaning is not. You'll also identify when responses are more likely to be hallucinations using the SelfCheckGPT framework, which prompts an LLM multiple times and checks the responses for consistency, to determine whether the model is really confident about what it's saying.
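The SelfCheckGPT-style consistency check described above can be sketched in a few lines: sample the model several times for the same prompt and score how consistent the samples are with the first answer. In this sketch, `ask_llm` is a hypothetical stub standing in for a real stochastic LLM call, and the word-overlap score is a deliberately simplified stand-in for the framework's actual scoring methods.

```python
def ask_llm(prompt: str, seed: int) -> str:
    # Hypothetical stub standing in for a real (stochastic) LLM call.
    # A consistent model produces similar answers across samples.
    answers = [
        "The Eiffel Tower is in Paris, France.",
        "The Eiffel Tower is located in Paris.",
        "The Eiffel Tower stands in Paris, France.",
    ]
    return answers[seed % len(answers)]

def token_overlap(a: str, b: str) -> float:
    # Jaccard similarity over lowercase word sets -- a crude
    # stand-in for SelfCheckGPT's consistency scoring methods.
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def consistency_score(prompt: str, n_samples: int = 3) -> float:
    # Sample the model several times and compare each sample to the
    # first answer; a low average overlap suggests the model is not
    # confident and may be hallucinating.
    main = ask_llm(prompt, seed=0)
    samples = [ask_llm(prompt, seed=i) for i in range(1, n_samples + 1)]
    return sum(token_overlap(main, s) for s in samples) / len(samples)

score = consistency_score("Where is the Eiffel Tower?")
print(f"consistency: {score:.2f}")
```

With the stub's near-identical answers the score lands well above 0.5; for a hallucinating model, the sampled answers would diverge and the score would drop toward zero.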
Bernease will go through how to detect, measure, and mitigate these issues using open-source Python packages, including langkit and whylogs, as well as some Hugging Face tools. Practitioners and researchers have been experimenting with countless LLM applications that could benefit society, but measuring how well a system works is a necessary step in the development process. In fact, even after a system is deployed, ensuring the quality and safety of your AI application will continue to be an ongoing process. Ensuring your system works long-term requires techniques that work at scale, and in this course you'll see some of these techniques that make LLM-powered apps safer.

Many people have worked to make this course possible. I'd like to thank, on the WhyLabs side, Maria Karayanova, Kelsey O'Neill, Felipe Adachi, and Alicia Bicznek. From DeepLearning.AI, Eli Hsu and Diala Ezzedine have also contributed to this course.

The first lesson will give you a hands-on overview of the methods and tools that you'll see throughout the course to help you detect data leakage, jailbreaks, and hallucinations. That sounds great. Let's go on to the next video and get started.
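As a minimal illustration of the data-leakage checks described in the transcript above, the sketch below scans text for email addresses and phone-number-like strings using regular expressions. The patterns and example strings are illustrative assumptions only; the toolkits used in the course apply far richer pattern sets and models than two regexes.

```python
import re

# Illustrative patterns only: production PII detection uses much
# broader pattern sets plus named-entity recognition for names.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b")

def find_pii(text: str) -> dict[str, list[str]]:
    """Return email addresses and phone numbers found in a prompt or response."""
    return {
        "emails": EMAIL_RE.findall(text),
        "phones": PHONE_RE.findall(text),
    }

# The same check can run on input prompts and on output responses.
response = "Sure! Contact Jane at jane.doe@example.com or 555-123-4567."
leaks = find_pii(response)
print(leaks)
```

Running this on an LLM response before returning it to the user gives a simple guardrail: if either list is non-empty, the app can redact the match or refuse to show the response.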
Quality and Safety for LLM Applications
  • Introduction (Video, 3 mins)
  • Overview (Video with Code Example, 14 mins)
  • Hallucinations (Video with Code Example, 25 mins)
  • Data Leakage (Video with Code Example, 19 mins)
  • Refusals and Prompt Injections (Video with Code Example, 26 mins)
  • Passive and Active Monitoring (Video with Code Example, 28 mins)
  • Conclusion (Video, 1 min)
  • Course Feedback
  • Community