Introducing Multimodal Llama 3.2 (Meta): Try out the features of the new Llama 3.2 models to build AI applications with multimodality.
ChatGPT Prompt Engineering for Developers (OpenAI): Learn the fundamentals of prompt engineering for ChatGPT, including effective prompting and how to use LLMs for summarizing, inferring, transforming, and expanding.
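For a flavor of the summarization pattern this course teaches, here is a minimal sketch. It assumes the openai Python package (v1 client) and an OPENAI_API_KEY in the environment; the model name and review text are illustrative, not taken from the course.

```python
# Minimal summarization prompt; assumes the openai package and OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()

review = """I bought this blender last month. It is powerful and quiet,
but the lid is hard to seal and the jar is smaller than advertised."""

prompt = f"""Summarize the review below, delimited by <review> tags,
in at most 20 words.

<review>{review}</review>"""

response = client.chat.completions.create(
    model="gpt-4o-mini",                       # any chat-capable model works here
    messages=[{"role": "user", "content": prompt}],
    temperature=0,                             # deterministic output for summaries
)
print(response.choices[0].message.content)
```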
LangChain for LLM Application Development (LangChain): Use the powerful and extensible LangChain framework, working with prompts, parsing, memory, chains, question answering, and agents.
How Diffusion Models Work: Learn and build diffusion models from the ground up, understanding each step. Learn about diffusion models in use today and implement algorithms to speed up sampling.
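The heart of a diffusion model is the reverse (denoising) step that the course builds up to. Below is a minimal NumPy sketch of one DDPM-style update; the noise prediction and schedule values are placeholders for what a trained network and a real noise schedule would supply.

```python
# One DDPM reverse-diffusion step, sketched with NumPy.
# `predicted_noise` stands in for the output of a trained noise-prediction network;
# the schedule values (alpha_t, alpha_bar_t, sigma_t) are illustrative.
import numpy as np

def ddpm_step(x_t, predicted_noise, alpha_t, alpha_bar_t, sigma_t, add_noise=True):
    """Compute x_{t-1} from x_t with the standard DDPM update rule."""
    mean = (x_t - (1 - alpha_t) / np.sqrt(1 - alpha_bar_t) * predicted_noise) / np.sqrt(alpha_t)
    z = np.random.randn(*x_t.shape) if add_noise else 0.0   # no extra noise at the final step
    return mean + sigma_t * z

# Toy usage with random tensors and made-up schedule values.
x_t = np.random.randn(1, 3, 16, 16)
eps = np.random.randn(1, 3, 16, 16)            # would come from the trained model
x_prev = ddpm_step(x_t, eps, alpha_t=0.98, alpha_bar_t=0.5, sigma_t=0.1)
```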
Building Systems with the ChatGPT API (OpenAI): Learn to break down complex tasks, automate workflows, chain LLM calls, and get better outputs from LLMs. Evaluate LLM inputs and outputs for safety and relevance.
LangChain Chat with Your Data (LangChain): Create a chatbot with LangChain to interface with your private data and documents. Learn from LangChain creator Harrison Chase.
Building Generative AI Applications with Gradio (Hugging Face): Create and demo machine learning applications quickly. Share your app with teammates and beta testers on Hugging Face Spaces.
Evaluating and Debugging Generative AI (Weights & Biases): Learn MLOps tools for managing, versioning, debugging, and experimenting in your ML workflow.
Large Language Models with Semantic Search (Cohere): Learn to use LLMs to enhance search and summarize results, using Cohere Rerank and embeddings for dense retrieval.
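Dense retrieval of the kind this course covers boils down to comparing embedding vectors. Here is a minimal NumPy sketch; the embedding model itself is left as an assumption (any sentence-embedding API or open-source model would do), and the document vectors are random stand-ins.

```python
# Dense retrieval by cosine similarity, sketched with NumPy.
# Real systems would obtain `doc_vecs` and `query_vec` from an embedding model.
import numpy as np

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def search(query_vec, doc_vecs, docs, k=3):
    """Return the k documents whose embeddings are most similar to the query."""
    scores = [cosine_similarity(query_vec, d) for d in doc_vecs]
    top = np.argsort(scores)[::-1][:k]
    return [(docs[i], scores[i]) for i in top]

docs = ["rerank improves precision", "embeddings capture meaning", "keyword search misses synonyms"]
doc_vecs = [np.random.randn(384) for _ in docs]    # placeholder embeddings
query_vec = np.random.randn(384)
print(search(query_vec, doc_vecs, docs, k=2))
```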
Finetuning Large Language Models (Lamini): Discover when to use finetuning vs. prompting for LLMs. Select suitable open-source models, prepare data, and train and evaluate for your specific domain.
How Business Thinkers Can Start Building AI Plugins With Semantic Kernel (Microsoft): Learn Microsoft's open-source orchestrator, Semantic Kernel, and use LLM building blocks such as memory, connectors, chains, and planners in your apps.
Understanding and Applying Text Embeddings (Google Cloud): Learn how to accelerate the application development process with text embeddings for sentence and paragraph meaning.
Pair Programming with a Large Language Model (Google): Learn how to prompt an LLM to help improve, debug, understand, and document your code. Use LLMs to simplify your code and enhance productivity.
Functions, Tools and Agents with LangChain (LangChain): Learn about the latest advancements in LLM APIs and use LangChain Expression Language (LCEL) to compose and customize chains and agents.
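LCEL composes components with the pipe operator. The sketch below shows the basic prompt | model | parser pattern; it assumes the langchain-core and langchain-openai packages plus an OPENAI_API_KEY, and the model name is only an example.

```python
# Minimal LCEL chain: prompt | model | output parser.
# Assumes langchain-core, langchain-openai, and an OPENAI_API_KEY in the environment.
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template("Explain {topic} in one sentence.")
model = ChatOpenAI(model="gpt-4o-mini", temperature=0)   # model name is illustrative
chain = prompt | model | StrOutputParser()               # LCEL composition

print(chain.invoke({"topic": "retrieval augmented generation"}))
```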
Vector Databases: from Embeddings to Applications (Weaviate): Design and execute real-world applications of vector databases. Build efficient, practical applications, including hybrid and multilingual searches.
Quality and Safety for LLM Applications (WhyLabs): Learn how to evaluate the safety and security of your LLM applications and protect against risks. Monitor and enhance security measures to safeguard your apps.
Building and Evaluating Advanced RAG (TruEra, LlamaIndex): Learn advanced RAG retrieval methods like sentence-window and auto-merging that outperform baselines, and evaluate and iterate on your pipeline's performance.
Reinforcement Learning From Human Feedback (Google Cloud): Get an introduction to tuning and evaluating LLMs using Reinforcement Learning from Human Feedback (RLHF) and fine-tune the Llama 2 model.
Advanced Retrieval for AI with Chroma (Chroma): Learn advanced retrieval techniques to improve the relevancy of retrieved results. Learn to recognize poor query results and use LLMs to improve queries.
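As a point of reference for the retrieval workflow this course builds on, here is a minimal round-trip with the chromadb package; the collection name, documents, and query are illustrative, and the default embedding function is used.

```python
# Minimal add-and-query round-trip with Chroma (assumes the chromadb package).
import chromadb

client = chromadb.Client()                      # in-memory client
collection = client.create_collection("notes")  # name is illustrative
collection.add(
    ids=["d1", "d2", "d3"],
    documents=[
        "Sentence-window retrieval returns a matched sentence plus surrounding context.",
        "Query expansion rewrites a weak query with an LLM before searching.",
        "Cross-encoder reranking reorders candidates by relevance to the query.",
    ],
)
results = collection.query(query_texts=["How do I fix poor query results?"], n_results=2)
print(results["documents"][0])
```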
Build LLM Apps with LangChain.js (LangChain): Expand your toolkit with LangChain.js, a JavaScript framework for building with LLMs. Understand the fundamentals of using LangChain to orchestrate and chain modules.
LLMOps (Google Cloud): Learn LLMOps best practices as you design and automate steps to fine-tune and deploy an LLM for a specific task.
Automated Testing for LLMOps (CircleCI): Learn how to create an automated CI pipeline to evaluate your LLM applications on every change, for faster and safer development.
Building Applications with Vector Databases (Pinecone): Learn to build six applications powered by vector databases, including semantic search, retrieval augmented generation (RAG), and anomaly detection.
Serverless LLM Apps with Amazon Bedrock (AWS): Learn how to deploy an LLM-based application into production using serverless technology. Learn to prompt and customize LLM responses with Amazon Bedrock.
Prompt Engineering with Llama 2 & 3 (Meta): Learn best practices for prompting and selecting among Meta Llama 2 & 3 models. Interact with Meta Llama 2 Chat, Code Llama, and Llama Guard models.
Open Source Models with Hugging Face (Hugging Face): Learn how to easily build AI applications using open-source models and Hugging Face tools. Find and filter open-source models on the Hugging Face Hub.
Knowledge Graphs for RAG (Neo4j): Learn how to build and use knowledge graph systems to improve your retrieval augmented generation applications. Use Neo4j's query language, Cypher, to manage and retrieve data.
Efficiently Serving LLMs (Predibase): Understand how LLMs predict the next token and how techniques like KV caching can speed up text generation. Write code to serve LLM applications efficiently to multiple users.
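KV caching, which this course explains, avoids re-projecting the entire prefix at every decoding step: each new token's key and value are appended to a cache and reused. The toy NumPy sketch below illustrates the idea with random vectors in place of a real transformer's projections.

```python
# Toy single-head illustration of KV caching during autoregressive decoding.
import numpy as np

d = 8                                    # toy head dimension
cache_k, cache_v = [], []                # the KV cache, one entry per generated token

def attend(q, cache_k, cache_v):
    """Attention of the newest query over all cached keys/values."""
    K, V = np.stack(cache_k), np.stack(cache_v)      # (seq_len, d)
    scores = K @ q / np.sqrt(d)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ V

for step in range(5):                    # pretend we decode 5 tokens
    # In a real model, q, k, v come from projecting the newest token's hidden state.
    q, k, v = (np.random.randn(d) for _ in range(3))
    cache_k.append(k)                    # reuse cached K/V instead of recomputing the prefix
    cache_v.append(v)
    context = attend(q, cache_k, cache_v)
```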
JavaScript RAG Web Apps with LlamaIndex (LlamaIndex): Build a full-stack web application that uses RAG capabilities to chat with your data. Learn to build a RAG application in JavaScript, using an intelligent agent to answer queries.
Red Teaming LLM Applications (Giskard): Learn how to make safer LLM apps through red teaming. Learn to identify and evaluate vulnerabilities in large language model (LLM) applications.
Preprocessing Unstructured Data for LLM Applications (Unstructured): Improve your RAG system to retrieve diverse data types. Learn to extract and normalize content from a wide variety of document types, such as PDFs, PowerPoints, and HTML files.
Quantization Fundamentals with Hugging Face (Hugging Face): Learn how to quantize any open-source model. Learn to compress models with the Hugging Face Transformers library and the Quanto library.
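The core idea behind the quantization covered here (and in Quantization in Depth below) is mapping floats to low-precision integers with a scale and zero-point. Here is a minimal NumPy sketch of asymmetric int8 linear quantization, the same per-tensor idea that libraries such as Quanto apply layer by layer.

```python
# Asymmetric (zero-point) int8 linear quantization, sketched with NumPy.
import numpy as np

def quantize_int8(x):
    """Map a float tensor to int8; return everything needed to dequantize."""
    qmin, qmax = -128, 127
    scale = (x.max() - x.min()) / (qmax - qmin)
    zero_point = int(round(qmin - x.min() / scale))
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.int8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return (q.astype(np.float32) - zero_point) * scale

w = np.random.randn(4, 4).astype(np.float32)       # stand-in for a weight matrix
q, s, z = quantize_int8(w)
print("max reconstruction error:", np.abs(w - dequantize(q, s, z)).max())
```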
Getting Started with Mistral (Mistral AI): Explore Mistral's open-source and commercial models, and leverage Mistral's JSON mode to generate structured LLM responses. Use Mistral's API to call user-defined functions for enhanced LLM capabilities.
Prompt Engineering for Vision Models (Comet): Learn prompt engineering for vision models using Stable Diffusion, and advanced techniques like object detection and in-painting.
Quantization in Depth (Hugging Face): Customize model compression with advanced quantization techniques. Try out different variants of linear quantization, including symmetric vs. asymmetric mode, and different granularities.
Building Agentic RAG with LlamaIndex (LlamaIndex): Build autonomous agents that intelligently navigate and analyze your data. Learn to develop agentic RAG systems using LlamaIndex, enabling powerful document Q&A and summarization. Gain valuable skills in guiding agent reasoning and debugging.
Building Multimodal Search and RAG (Weaviate): Build smarter search and RAG applications for multimodal retrieval and generation.
Multi AI Agent Systems with crewAI (crewAI): Automate business workflows with multi-AI agent systems. Exceed the performance of prompting a single LLM by designing and prompting a team of AI agents through natural language.
Introduction to On-Device AI (Qualcomm): Deploy AI for edge devices and smartphones. Learn model conversion, quantization, and how to modify models for deployment on diverse devices.
AI Agentic Design Patterns with AutoGen (Microsoft, Penn State University): Use the AutoGen framework to build multi-agent systems with diverse roles and capabilities for implementing complex AI applications.
AI Agents in LangGraph (LangChain, Tavily): Build agentic AI workflows using LangChain's LangGraph and Tavily's agentic search.
Building Your Own Database Agent (Microsoft): Interact with tabular data and SQL databases using natural language, enabling more efficient and accessible data analysis.
Function-Calling and Data Extraction with LLMs (Nexusflow): Learn to apply function-calling to expand LLM and agent application capabilities.
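Function-calling means the model replies with a structured call to a tool you describe, and your code executes it. The sketch below is library-agnostic: the tool schema and the hand-written JSON reply merely stand in for what a real model and API would produce.

```python
# Library-agnostic sketch of the function-calling loop.
import json

def get_weather(city: str, unit: str = "celsius") -> dict:
    """Toy tool; a real application would call a weather API here."""
    return {"city": city, "temperature": 21, "unit": unit}

TOOLS = {"get_weather": get_weather}

tool_schema = {                      # shown to the model so it knows how to call the tool
    "name": "get_weather",
    "description": "Get the current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}, "unit": {"type": "string"}},
        "required": ["city"],
    },
}

# Pretend the model saw the schema plus "What's the weather in Lisbon?" and replied with:
model_reply = '{"name": "get_weather", "arguments": {"city": "Lisbon"}}'

call = json.loads(model_reply)
result = TOOLS[call["name"]](**call["arguments"])    # dispatch the structured call
print(result)    # the result would go back to the model to phrase the final answer
```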
Carbon Aware Computing for GenAI Developers (Google Cloud): Train your machine learning models using cleaner energy sources.
Prompt Compression and Query Optimization (MongoDB): Optimize the efficiency, security, query processing speed, and cost of your RAG applications.
Intro to Federated Learning (Flower Labs): Build and fine-tune LLMs across distributed data using a federated learning framework for better privacy.
AI Python for Beginners: Basics of AI Python Coding (DeepLearning.AI): Learn Python programming with AI assistance. Gain skills writing, testing, and debugging code efficiently, and create real-world AI applications.
Embedding Models: from Architecture to Implementation (Vectara): Learn how to build embedding models and how to create effective semantic retrieval systems.
Improving Accuracy of LLM Applications (Lamini, Meta): Systematically improve the accuracy of LLM applications with evaluation, prompting, and memory tuning.
Building AI Applications with Haystack (Haystack): Learn a flexible framework to build a variety of complex AI applications.
Large Multimodal Model Prompting with Gemini (Google Cloud): Learn best practices for multimodal prompting using Google's Gemini model.
Multimodal RAG: Chat with Videos (Intel): Build an interactive system for querying video content using multimodal AI.
Retrieval Optimization: From Tokenization to Vector Quantization (Qdrant): Build faster and more relevant vector search for your LLM applications.