In the previous lesson, you saw how controlled generative UI gives you full control and customizability, but it requires a dedicated component for every interaction. In this lesson, you'll take a different approach called declarative generative UI. We will use A2UI, an open spec for declarative generative UI led by Google, on which CopilotKit has closely collaborated. Instead of building a component for every single use case, you'll define a catalog of Lego-like building blocks and let the agent assemble them on demand. This is an exciting part of the generative UI spectrum, so let's dive in.

In the previous lesson, we defined a dedicated React component for each item that we wanted our agents to support. The benefit was great control over each and every item shown. The problem is that every new capability requires a new dedicated component built from scratch, which is fine at 20 components but quickly becomes painful at 200. Declarative generative UI flips this equation: rather than defining each component individually, we define an upfront catalog and then let the agent assemble the pieces on its own. Let's dive into what this means specifically. First, as we mentioned, developers declare a catalog of components, which should be thought of as Lego-like building blocks. The catalog is defined in two parts. First, there are definitions for each atom in the catalog, which include the item's name, description, and required props. And separately, there are renderers defined for each item, which take a JSON payload describing a component and return an actual rendered component for the relevant platform. In this lesson we will use React, but the same pattern extends elsewhere. When a user asks a question, the agent may decide to answer it by assembling a declarative generative UI component.
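The two-part catalog just described can be sketched in plain Python. This is an illustrative analogue only, not the real CopilotKit/A2UI API: the lesson's definitions use Zod schemas and the renderers are React functions, while here simple type annotations and HTML strings stand in for both.

```python
# Illustrative sketch of a two-part component catalog (not the real
# CopilotKit/A2UI API; HTML strings stand in for React components).

# Part 1: definitions -- each atom's name, description, and required props.
CATALOG_DEFINITIONS = {
    "Title": {
        "description": "A heading, used for section titles and page headers",
        "props": {"text": str},
    },
    "Text": {
        "description": "A paragraph of body text",
        "props": {"text": str},
    },
}

# Part 2: renderers -- take a JSON-like payload and return platform output.
CATALOG_RENDERERS = {
    "Title": lambda props: f"<h1>{props['text']}</h1>",
    "Text": lambda props: f"<p>{props['text']}</p>",
}

def render_component(payload: dict) -> str:
    """Look up the renderer by component name and apply it to the props."""
    renderer = CATALOG_RENDERERS[payload["component"]]
    return renderer(payload["props"])

print(render_component({"component": "Title", "props": {"text": "Q3 Sales"}}))
# -> <h1>Q3 Sales</h1>
```

The key point is the separation: definitions tell the agent what it may assemble, while renderers tell the frontend how to draw whatever the agent assembled.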
The agent first emits a schema, which is a structural assembly of items from the component catalog, not yet populated with data. Then the agent emits the data bindings that populate that schema with specific values. Finally, the schema and the data bindings are passed to the front end, where they serve as inputs to a renderer, which returns the fully assembled component for the native platform. Again, in this case we will be using React. The end result is a component that can vary considerably, but within the confines of a bounded menu that you control. Declarative generative UI has a number of distinct advantages. First, it offers flexibility, but with guardrails. Your agent can render custom components for a job, but will draw from a fixed menu of pre-created components, which can be made, for example, to conform to your design guidelines. Additionally, because declarative generative UI is rooted in a structured schema, typically JSON, it is suitable for any front-end environment: not just the web, but also mobile, Slack, and other environments. At the same time, declarative generative UI is less predictable and customizable than fully controlled generative UI. This unique profile of strengths and weaknesses makes declarative generative UI best suited for the long tail of product surfaces in consumer-facing applications, where flexibility is generally more important than pixel-perfect designs and predictability. It's also suitable for internal applications, where functionality and simplicity of implementation often take precedence over an optimized experience. For example, if you're an airline, you will likely want your flight card to be implemented with controlled generative UI, for maximum predictability and control over what is by far your application's most used surface.
If you think about features like tracking a lost laptop or getting refunded for a trip, it's more important that the agent can talk about and interact with those abstractions at all than that it offers a pixel-perfect experience. So these surfaces are great matches for declarative generative UI. In this course, we will implement declarative generative UI using A2UI. A2UI is an open declarative generative UI spec that is led by Google and on which CopilotKit has closely collaborated. A2UI implements the declarative generative UI approach we outlined earlier, with a component catalog, a schema, data bindings, and renderers, using A2UI operations. CopilotKit's middleware is fully integrated with A2UI, which means you can bring A2UI declarative generative UI to any agent in the stack. Declarative generative UI comes in two flavors: dynamic schema and fixed schema. With the dynamic schema variant, the agent is responsible both for assembling the schema on the fly and for populating that schema with data bindings. With the fixed schema variant, you as the programmer are responsible for defining the schema in advance, and the agent is only responsible for populating that schema with data bindings. Now let's jump into some code and take a look at how this works. First, just like in the previous modules, we're going to copy the reset session cell and run it to make sure we're starting from a fresh state. Next, just like before, we're going to set up our dependencies, starting by installing the Python dependencies, followed by the frontend dependencies. Now that those are installed, we'll load our API keys using our helper function. Remember, those are set up for you in advance by the learning platform. Now we're going to build the agent. We'll start the server that's going to serve the agent. Just like before, we're going to start with our scaffold server. That's it. Our FastAPI scaffold server is running. We're now going to serve a real agent from it after we define it.
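Before we define the real agent, the schema-plus-data-bindings flow described earlier can be sketched in plain Python. The `$bind` placeholder syntax here is a made-up illustration, not the actual A2UI wire format: the point is simply that the schema is structure without data, and the bindings fill it in before rendering.

```python
# Illustrative sketch: a schema is a structural assembly of catalog items
# with unpopulated "$bind" slots; data bindings supply the concrete values.
# (The "$bind" notation is invented for this sketch, not real A2UI syntax.)

SCHEMA = {
    "component": "Column",
    "children": [
        {"component": "Title", "props": {"text": {"$bind": "/heading"}}},
        {"component": "Text", "props": {"text": {"$bind": "/body"}}},
    ],
}

BINDINGS = {"/heading": "Total Revenue", "/body": "$1.2M this quarter"}

def resolve(node, bindings):
    """Recursively replace {"$bind": path} placeholders with bound values."""
    if isinstance(node, dict):
        if "$bind" in node:
            return bindings[node["$bind"]]
        return {k: resolve(v, bindings) for k, v in node.items()}
    if isinstance(node, list):
        return [resolve(n, bindings) for n in node]
    return node

resolved = resolve(SCHEMA, BINDINGS)
print(resolved["children"][0]["props"]["text"])  # Total Revenue
```

In the dynamic schema flavor, the agent emits both `SCHEMA` and `BINDINGS`; in the fixed schema flavor, you write `SCHEMA` yourself and the agent only supplies `BINDINGS`.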
This time around, we want the agent to query for and use data before it responds with any declarative generative UI components. To simulate this, we're going to define a get_sales_data tool that the agent can call in order to fetch sales data at any given time. In a real application, this would query some API or a database. In this example, we're just hardcoding some sales data for the agent to use. This is not our core focus, so if you're not sure what we're doing here, you can come back to this point later. Now that we've created the tool, we're going to create an agent. Just as before, this is a standard LangChain Deep Agent powered by the GPT-4.1 model. We're now giving this agent the get_sales_data tool, and just like before, we're passing it the CopilotKitMiddleware. You can read through the system_prompt of this agent. As you see here, the agent is simply instructed to use the generate_a2ui tool to visualize data for the user. Now we're going to set up our frontend with A2UI rendering. Just like before, we're setting up our Copilot runtime. Most of the code here is very familiar. We're setting up a standard connector to the LangChain agent that we set up on port 8004, and we're initializing the Copilot runtime with that agent as the default agent. You'll notice the only real addition we did not have before: the a2ui configuration, with the injectA2UITool flag set to true. Under the hood, the LangChain agent can speak A2UI by calling a specialized tool. This tool is provided by the CopilotKit and A2UI stacks and is included by default. If for any reason you do not want to include this tool automatically, you can always exclude it by passing false to this argument. We're now going to define our declarative generative UI component catalog. Remember, the component catalog consists of two parts: the component definitions and the component renderers. We'll begin by populating the component definitions.
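As an aside, the hardcoded get_sales_data tool described above might look roughly like this. This is a plain-Python stand-in (the notebook registers it as a LangChain tool), and the figures are invented mock data.

```python
# Simplified stand-in for the lesson's get_sales_data tool. The notebook
# defines it as a LangChain tool; here it's a plain function. The numbers
# are mock data -- a real application would query an API or database.
def get_sales_data() -> list[dict]:
    """Return hardcoded monthly sales figures for the agent to visualize."""
    return [
        {"month": "Jan", "revenue": 120_000, "new_customers": 42},
        {"month": "Feb", "revenue": 135_500, "new_customers": 51},
        {"month": "Mar", "revenue": 151_250, "new_customers": 60},
    ]

# The agent would call this tool, then visualize the result with A2UI.
total = sum(row["revenue"] for row in get_sales_data())
print(total)  # 406750
```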
Now, don't get scared: we're going to paste a big chunk of code, then scroll right to the top and explain what we pasted in a minute. All right. This is our demonstrationCatalogDefinitions. You see here there's a dictionary of components with their definitions. The first component is the Title component. It has a name, Title; a description, in this case "a heading, used for section titles and page headers"; and props, typed using Zod schemas just like before, for communicating the types of arguments this component expects. You see here we have a very long list of components that we're defining: Title, Text, Icon, Image, and so on. This is one of the inherent costs of declarative generative UI: you have to set up all of your Lego-like building blocks up front for your agent to be able to assemble them together. We won't go through each one of the components here, because they're essentially all the same. We're now going to define the renderers for our components. Once again, we're going to paste a long file first, then scroll up and walk through it. The beginning of the file consists of imports and some helper functions used by our renderers. If we scroll a little further down, we'll see the actual CatalogRenderers. First, you see that CatalogRenderers is a generic type that takes the catalog definition as its type parameter. This ensures that the renderers perfectly match the types of the components we defined earlier. Now you'll see that each of the components we previously defined has a renderer. The renderers are simple React functions. For example, let's look at the Title renderer. You see that it takes a props argument; props contains the types we defined earlier. And then it simply returns a React component using these arguments. As you see here, there is a long list of component renderers, one renderer for each component we defined earlier.
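As a rough Python analogue of one definition-plus-renderer pair: in the course these are a Zod-typed TypeScript definition and a React function, while here a plain type map stands in for the Zod schema and an HTML string stands in for JSX. The validation step illustrates what the type parameter on CatalogRenderers guarantees at compile time.

```python
# Illustrative analogue of one catalog entry (the real definitions use Zod
# schemas and the real renderers are React functions; this is Python-only).
TITLE_DEFINITION = {
    "name": "Title",
    "description": "A heading, used for section titles and page headers",
    "props": {"text": str},  # stands in for a Zod prop schema
}

def title_renderer(props: dict) -> str:
    """Renderer: check the props against the definition, then emit output."""
    for name, expected_type in TITLE_DEFINITION["props"].items():
        if not isinstance(props.get(name), expected_type):
            raise TypeError(f"prop {name!r} must be {expected_type.__name__}")
    return f"<h1>{props['text']}</h1>"

print(title_renderer({"text": "Total Revenue"}))  # <h1>Total Revenue</h1>
```

In TypeScript, the generic CatalogRenderers type enforces this definition/renderer agreement statically, so the runtime check above is never needed; the sketch just makes the contract visible.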
These custom renderers are how you get your own custom styling when using declarative generative UI: simply make your component renderers aware of your design style guides. Now let's scroll down to the bottom of the renderers file; there are quite a few renderers in there. All right. You now see the demonstrationCatalog. We call the createCatalog function and pass it as arguments the demonstrationCatalogDefinitions and the demonstrationCatalogRenderers, which we created just now. Finally, we give it the catalogId of app-dashboard-catalog. All right. Now we have our catalog definitions in place, and it's time to wire them up. We're now looking at application code that should look very familiar. Once again, we're wrapping our application with the CopilotKit provider so we can connect to agents anywhere within it. The only difference is that this time we're passing the CopilotKit provider another argument, the a2ui argument. In the a2ui argument, we're passing the demonstrationCatalog that we just created. This tells the agent it can use this component catalog when composing components together. Now we'll wire the CopilotChat component back into our application, along with some example suggestions, which once again simply hardcode a set of prompts into the chat. Having defined the front end, we'll now start it. We'll use our start_frontend helper method to start the frontend on port 3004. Now that it's running, we'll once again embed its iframe inside our Jupyter notebook for convenience. All right, our chat is now live. Let's test out some declarative generative UI rendering. First, we'll click Sales Dashboard. You see here that when we click the suggestion, we simply get a pre-canned prompt. All right. The agent dynamically combined a bunch of the components we defined in our catalog to show us this great sales dashboard.
We see here the total revenue with a title and then text, new customers, conversion rate, and even a pie chart and a bar chart. This is an example of dynamic schema declarative generative UI: the agent first assembled a schema from scratch, putting together a bunch of the components from our component catalog, and then populated that schema with data bindings. We're now going to see an example of fixed schema declarative generative UI, where we, as programmers, hardcode a predetermined schema in advance, which the agent will simply populate with data. The first step in fixed schema declarative generative UI is, of course, defining the schema. Where is the schema going to come from? Well, you're not going to want to handwrite it. The schema file is a big, messy JSON schema that is designed to be written by machines. There's a great tool in the A2UI ecosystem called the A2UI Composer, which essentially includes a copilot that helps you assemble a schema to match some parameters. We're going to drop screenshots of the A2UI Composer into the Jupyter notebook; you can find the tool itself on a2ui.org by clicking on the composer. This picture is what you'll see when you go to the A2UI Composer. There's a simple chat input box that lets you specify the type of component you want to build. The composer shows side by side the schema, sample data bindings that you can edit if you'd like, and a preview of what the rendered schema plus data bindings would look like. After some iterations, you'll end up with a schema to your liking. When you've done that, simply copy its JSON and prepare for the next step. We're now going to add a tool that returns a fixed schema declarative generative UI component. You will want to paste the schema definition you got from the composer into a variable somewhere in your application. If you look here at the detail, you will see the rough shape of the components assembled together.
But once again, this is not code that you want to be handwriting. Another detail you'll notice here is the CATALOG_ID, referencing the component catalog we defined on the front end, and a SURFACE_ID. The SURFACE_ID identifies a specific A2UI component. It's useful because it allows components to be updated, not just appended to. We're going to create a dummy search_flights tool for the agent to find relevant flights for the user. In a real application, this would hit an API or a database, but for the purposes of this example, we're simply going to hardcode some data here. Now we're ready to define the tool that will return a fixed schema declarative generative UI component. As you will see, any tool whatsoever can return such components. First, let's look at what this tool returns. It uses the a2ui.render helper function to return an array of A2UI operations. The first operation creates a new surface using our SURFACE_ID. The second operation updates the components for that surface using our FLIGHT_SCHEMA. And finally, the third operation populates the data for the components using the flights argument passed to this tool. Remember, in fixed schema declarative generative UI, we as programmers predefine the schema, but it's still the responsibility of the agent to populate the data for that schema. It's critical that the shape of the data we pass to the component matches the shape of the data expected by the schema. To ensure this happens, we have our tool take an array of flight arguments whose shape exactly matches the shape defined in our schema. Now that we've defined our tools, it's time to create our agent. Once again, we're going to create a simple LangChain Deep Agent. If you look at our list of tools, you will see the tool we defined previously, get_sales_data with mock sales data, as well as two additional tools. The first is the search_flights tool, which provides mock data for flight information.
Once again, in a real application, this would connect to an API or a database. The second is the display_flights tool, which the agent can call in order to show the A2UI component to the user. Once again, we specify the CopilotKitMiddleware, and we include a system_prompt that tells the agent how and when to display components. If you're curious, you can read this in more detail. Here, we're adding chat suggestions to let users easily search for flights. Now that we've defined our fixed schema tool, let's take a look at our application. The front end is identical to what we saw before. But now that we've given our agent the ability to search for and display flights, let's see what happens when we ask it for some flight information. Good job. The schema you created in the composer is now live on your screen, having been populated by the agent. We're now done with this section. Great job. You now have experience working with declarative generative UI. You first defined a component catalog, which is specified by a combination of component definitions and component renderers, and then exercised that catalog with both dynamic schema and fixed schema declarative generative UI. As a reminder, each declarative generative UI component is specified by a combination of a schema, relative to the component catalog, and data bindings, relative to that schema.
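As a recap, the three-operation shape of the fixed schema display_flights tool described above can be sketched like this. The operation names, the schema contents, and the flight fields here are illustrative stand-ins, not the real output of the a2ui.render helper.

```python
# Illustrative sketch of the fixed schema pattern: create a surface,
# attach the predefined schema, then populate it with the data the agent
# passes in. (Operation names and payload shapes are invented for this
# sketch; the real tool uses the a2ui.render helper.)
CATALOG_ID = "app-dashboard-catalog"
SURFACE_ID = "flight-results"          # identifies this component for updates
FLIGHT_SCHEMA = {"component": "List", "itemTemplate": {"component": "FlightCard"}}

def display_flights(flights: list[dict]) -> list[dict]:
    """Return A2UI-style operations rendering flights into the fixed schema."""
    return [
        # 1. Create a new surface tied to our frontend catalog.
        {"op": "createSurface", "surfaceId": SURFACE_ID, "catalogId": CATALOG_ID},
        # 2. Attach the programmer-defined schema to that surface.
        {"op": "updateComponents", "surfaceId": SURFACE_ID, "components": FLIGHT_SCHEMA},
        # 3. Populate the schema with the data the agent supplied.
        {"op": "updateData", "surfaceId": SURFACE_ID, "data": {"flights": flights}},
    ]

ops = display_flights([{"from": "SFO", "to": "JFK", "price": 320}])
print([o["op"] for o in ops])  # ['createSurface', 'updateComponents', 'updateData']
```

Because the operations all target the same SURFACE_ID, a later call can update the already-rendered component in place rather than appending a new one, which is exactly why the surface identifier exists.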