In this lesson, you will use simple prompting to leverage the long-context capabilities of the Jamba model. Let's have some fun!

You have now learned how the context window was expanded for the Jamba model. In a previous lab, you used the documents parameter to process long documents. In this lesson, you will take a different approach to working with long documents: including the full text directly in the prompt and leveraging the long-context capability of the Jamba model. This approach has a very similar effect to using the documents parameter.

Here are several scenarios where you could consider including long context in the prompt or in the documents parameter to the Jamba model: summarizing a full-length document; extracting insights by comparing multiple documents; analyzing a long chat history or call transcript where the relevant information may be scattered across the entire content; and multi-hop reasoning in agentic systems, where a long chat history includes multiple rounds of tool-calling messages (like the example used in the SEC 10-Q function-calling lesson) as well as other steps such as planning, reflection, validation, and more.

Now it is your turn to work on some examples; code sketches for each step are collected at the end of this lesson. Again, start with two quick lines to ignore any unnecessary warnings. Then import the same libraries you used in the previous labs, load the API key that has been set for you, and create the AI21 client.

To get started, load the NVIDIA 10-K files, as we did in the previous lab. You can then ask the Jamba model to help you process the NVIDIA 10-K filing from 2024, which is a very long file, using a simple system message and prompt, without any need for chunking.

To make things even easier, you can create a simple function to chat with your document directly. In this Jamba chat function, the Jamba model is tasked with answering your question based on the document provided in the prompt, and, as the answer shows, the main revenue driver here is data center systems and products.

In agentic workflows, JSON format is often necessary for structured output from a large language model. Similar to the Jamba chat function, you can create a Jamba JSON function that provides an instruction in the system message and prompt, and specifies a JSON object in the response_format parameter of the Jamba model call.

Now it is time to write your query to ask the Jamba model to create a JSON output for you. In this example, we ask the Jamba model to generate JSON output including the fiscal year-end date, revenue, gross profit, net income per share, revenue by segment, and revenue by geographic region. You can then send the query and the document to the Jamba JSON function, and the Jamba model returns the answer in JSON format.

As we discussed in earlier lessons, with a long context window you can ingest multiple long documents and ask the Jamba model to provide an answer across all the documents provided. For example, you can combine the 10-K filings from 2023 and 2024 and pass them to the Jamba chat function with a request to create an HTML table encoding the requested financial information. You can then use the IPython display module to visualize the HTML table.

In this lesson, you have used Jamba with some simple prompting to process long documents. In the next lesson, you will work with RAG tools. Let's regroup there.
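A minimal sketch of the setup step described above, assuming the `ai21` Python SDK and an API key stored in an `AI21_API_KEY` environment variable (the lab's exact key-loading helper is not shown in the transcript):

```python
import warnings
warnings.filterwarnings("ignore")  # the two quick lines that silence unnecessary warnings

import os
from ai21 import AI21Client
from ai21.models.chat import ChatMessage

# Assumes the API key has already been set in the environment for you.
client = AI21Client(api_key=os.environ["AI21_API_KEY"])
```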
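Next, a sketch of loading the 2024 NVIDIA 10-K and asking about it with the full text placed in the prompt. The file path, system message, and model name are assumptions for illustration; use the values from your lab:

```python
# Hypothetical path; the lab provides the parsed NVIDIA 10-K text.
with open("nvidia_10k_2024.txt", "r") as f:
    nvidia_10k_2024 = f.read()

# Put the entire filing directly in the prompt -- no chunking needed,
# thanks to Jamba's long context window.
messages = [
    ChatMessage(
        role="system",
        content="You are a financial analyst. Answer questions based only on the provided document.",
    ),
    ChatMessage(
        role="user",
        content=f"Document:\n{nvidia_10k_2024}\n\nQuestion: What were the main revenue drivers in fiscal year 2024?",
    ),
]

response = client.chat.completions.create(
    model="jamba-1.5-large",  # assumed model name; use the model from your lab
    messages=messages,
)
print(response.choices[0].message.content)
```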
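A sketch of the Jamba chat helper described in the lesson. The function name `jamba_chat` and the prompt wording are assumptions based on how the transcript describes it:

```python
def jamba_chat(question: str, document: str) -> str:
    """Ask Jamba a question about a document included directly in the prompt."""
    response = client.chat.completions.create(
        model="jamba-1.5-large",  # assumed model name
        messages=[
            ChatMessage(
                role="system",
                content="Answer the user's question based on the provided document.",
            ),
            ChatMessage(
                role="user",
                content=f"Document:\n{document}\n\nQuestion: {question}",
            ),
        ],
    )
    return response.choices[0].message.content

# Example usage:
answer = jamba_chat(
    "What is the main driver of data center revenue?",
    nvidia_10k_2024,
)
print(answer)
```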
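A sketch of the Jamba JSON helper and the structured-output query from the lesson, assuming the SDK's `ResponseFormat` model for the response_format parameter (the function name `jamba_json` is likewise assumed):

```python
import json
from ai21.models.chat import ResponseFormat

def jamba_json(instruction: str, document: str) -> dict:
    """Ask Jamba for structured JSON output about a document in the prompt."""
    response = client.chat.completions.create(
        model="jamba-1.5-large",  # assumed model name
        messages=[
            ChatMessage(
                role="system",
                content="You output valid JSON only, based on the provided document.",
            ),
            ChatMessage(
                role="user",
                content=f"Document:\n{document}\n\nInstruction: {instruction}",
            ),
        ],
        response_format=ResponseFormat(type="json_object"),
    )
    return json.loads(response.choices[0].message.content)

# The query from the lesson: extract key financial fields as JSON.
query = (
    "Extract the following as JSON: fiscal year-end date, revenue, "
    "gross profit, net income per share, revenue by segment, and "
    "revenue by geographic region."
)
financials = jamba_json(query, nvidia_10k_2024)
print(json.dumps(financials, indent=2))
```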
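Finally, a sketch of combining the 2023 and 2024 filings into one long context, requesting an HTML comparison table, and rendering it with IPython's display module. The 2023 file path and the section separators are illustrative assumptions:

```python
from IPython.display import HTML, display

# Hypothetical path for the 2023 filing, loaded the same way as the 2024 one.
with open("nvidia_10k_2023.txt", "r") as f:
    nvidia_10k_2023 = f.read()

# Concatenate both filings into a single long context.
combined = (
    f"--- 10-K FY2023 ---\n{nvidia_10k_2023}\n\n"
    f"--- 10-K FY2024 ---\n{nvidia_10k_2024}"
)

html_table = jamba_chat(
    "Create an HTML table comparing revenue, gross profit, and net income "
    "per share across fiscal years 2023 and 2024.",
    combined,
)
display(HTML(html_table))
```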