I'm Atai, co-founder and CEO of CopilotKit. In this course, you will learn to use the open-source CopilotKit stack and the AG-UI protocol, which has been adopted by Google, Microsoft, Amazon, LangChain, Oracle, and many of the other leading agent ecosystems. So everything you learn here can apply across the entire agentic landscape. In this lesson, you will learn about the generative UI spectrum and its three key pillars, from controlled, to declarative, all the way to fully open-ended generative UI: what makes each one different, and when to apply each. By the end of this lesson, you'll have the mental model for everything you're going to be building in this course. So let's go. All right, let's jump in. I want to start the course with a bit of a provocative statement: over the next few years, all UI will become AI, meaning every interaction between humans and technology will be increasingly mediated by agentic systems. Of course, this is true of complex software products like HubSpot, Zendesk, Sigma, and so on. But it may even be true of your refrigerator. That might sound a little preposterous, but wouldn't it be nice if you could walk by your refrigerator and tell it to order the missing ingredients for tonight's lasagna? So that's where we're going: ubiquitous interactions between agents and humans. But where are we now? Today's agentic interfaces look a lot like the MS-DOS command-line interfaces of the 1980s. These initial raw interfaces are good enough for early-adopter types, who drive the lion's share of initial use, but they're not good enough for mainstream mass adoption. To continue this analogy, today's agents are beginning to graduate from the MS-DOS era to the Windows and Mac era. If you're paying attention, you've likely noticed your customers are asking for this loudly. In this course, we will walk you through the concepts behind building agentic applications that are ready for mass adoption.
Great full-stack agentic applications leverage generative UI capabilities to build a great user experience for working alongside agents. For example, in Claude Code and in Cursor, we're able to seamlessly work alongside our agent and see what it is doing in clear, coherent steps. In Notion, our agents are able to take actions on our behalf in the application right as we work, and our agents stream what they're doing to us, presented with a nice UI. So why is building all of this so hard? Developers who first approach agentic application development start by focusing on the agent itself. They initially expect that once they build the agent, they'll just stand it up behind an API and connect it to any user-facing application, just like they've done with every front-end and back-end project before agents came along. But when we try to do that, we run into a second and initially unexpected set of problems. We discover that agents break the request-response paradigm, which has been powering the internet for the last 30 years. Agents are, of course, just software, but from the standpoint of pre-agent software, they're a bit of a weird bird. They're long-running, so they have to resume their work as they run and support reconnections, interruptions, and mid-run streams. They have to support structured and unstructured data exchange at the same time: text and voice alongside tool calls, status updates, and so on. So it's a little bit like implementing Slack and Zoom in a traditional application all at once. And there's really a long, long list of these peculiarities that agents have from the standpoint of pre-agent software. So when we try to stick an agent behind a traditional API, we end up running into wall after wall of complexity, and we discover that we have to re-implement 30 years of glue code, which we've all been taking for granted, for the agentic era. And that's what CopilotKit and AG-UI provide.
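To make the "agents break request-response" point concrete, here is a minimal TypeScript sketch of the idea: instead of one reply per request, the client consumes a typed stream of events and folds them into renderable state. The event names and shapes here are illustrative, not the actual AG-UI wire format.

```typescript
// Illustrative event types an agent run might emit over a long-lived
// connection (SSE, WebSocket). These names are hypothetical, not AG-UI's.
type AgentEvent =
  | { type: "text_delta"; content: string }            // streamed assistant text
  | { type: "tool_call"; name: string; args: unknown } // structured tool invocation
  | { type: "status"; message: string }                // mid-run progress update
  | { type: "run_finished" };

interface RunState {
  text: string;
  toolCalls: { name: string; args: unknown }[];
  status: string;
  done: boolean;
}

// Fold a stream of events into UI state. A real client would apply each
// event incrementally as it arrives, re-rendering along the way.
function reduceEvents(events: AgentEvent[]): RunState {
  const state: RunState = { text: "", toolCalls: [], status: "", done: false };
  for (const ev of events) {
    switch (ev.type) {
      case "text_delta": state.text += ev.content; break;
      case "tool_call": state.toolCalls.push({ name: ev.name, args: ev.args }); break;
      case "status": state.status = ev.message; break;
      case "run_finished": state.done = true; break;
    }
  }
  return state;
}

const run = reduceEvents([
  { type: "status", message: "searching recipes" },
  { type: "tool_call", name: "order_groceries", args: { items: ["ricotta"] } },
  { type: "text_delta", content: "Ordered the missing " },
  { type: "text_delta", content: "lasagna ingredients." },
  { type: "run_finished" },
]);
console.log(run.text); // "Ordered the missing lasagna ingredients."
```

Note how a single run interleaves unstructured text with structured tool calls and status updates, which is exactly what a plain request-response endpoint cannot express.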
CopilotKit is the open-source developer toolkit and platform for building user-facing agentic applications in a way that's optimized for the agentic era, as opposed to retrofitted from the request-response paradigm. The AG-UI protocol, which stands for the Agent-User Interaction Protocol, lets developers who build agent frameworks and harnesses bring their agents to user-facing ecosystems by building against a robust standard, which means they conform to a few simple requirements and everything just works. This, in turn, allows developers and organizations building agentic applications to build against all of these different agents in a streamlined fashion. To situate AG-UI in the landscape: you have MCP connecting agents to third-party tools and context, you have A2A connecting agents to other agentic systems, and AG-UI is the third leg of this triangle, connecting your agent to user-facing applications. AG-UI emerged from our initial partnership with LangGraph and CrewAI, and since we launched the protocol, it's been widely adopted by many of the top leaders in the agentic space, including Google, Microsoft, Amazon, Oracle, LlamaIndex, and others. Which means that everything you learn in this course can be applied to any agent ecosystem you work with out there in production. By the way, if you do come across a system that is not yet supported, or if your organization made its own agent ecosystem, you can also build your own AG-UI connectors for that system. However, we will not cover that side of the equation in this course. So you can use AG-UI to connect any agent to your users, but you can also use AG-UI to connect any user to your agents. Everything you will learn in this course can be applied to building agentic applications in any front end your users use, from web to mobile to Slack to text messaging and so on.
There's a quickly growing list of surfaces supported by AG-UI. In this course, we're going to focus on web, and specifically on React, which is the dominant web framework. So, having laid out the architecture, let's bring attention back to the world of generative UI. First, what is generative UI? Generative UI is a UI paradigm which is both enabled by, and exists in service of, LLM systems and agents. What does that mean? Well, LLM systems and agents introduce both new capabilities and new challenges to the software landscape. Generative UI is the UI paradigm that takes advantage of those new capabilities, both to improve experiences and to address those new challenges. Generative UI solutions exist on a spectrum where control flows from the developer to the agent. There are three pillars in this spectrum: controlled generative UI, declarative generative UI, and open-ended generative UI. The entire generative UI spectrum is needed in modern agentic applications, because different solutions along the spectrum have different pros and cons, and so are best suited for different use cases. In this lesson, we will briefly outline the three pillars of the generative UI spectrum, and then in the rest of this course, you will dive deep into each one and learn how and when to apply it. Controlled generative UI is a paradigm where the developer provides the agent with pre-built, fully custom UI components that the agent can call upon as needed to augment an interaction. It allows for maximum customization, predictability, and pixel-perfect designs. Controlled generative UI also allows for a straightforward development experience familiar from the pre-agentic back-end and front-end era. Controlled generative UI is considered the workhorse of the generative UI world, because maximum customization and predictability are important for your product's most-used surfaces.
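As a rough sketch of the controlled pattern (this is illustrative TypeScript, not the actual CopilotKit API): the developer registers a fully custom renderer per agent tool, and when the agent calls that tool, the front end looks up the matching pre-built component by name. Rendering to HTML strings here keeps the example self-contained; in React you would return components instead.

```typescript
// A renderer produces the UI for one specific, hand-designed interaction.
type Renderer = (args: Record<string, unknown>) => string;

// The developer-curated registry: one pixel-perfect component per tool name.
const componentRegistry: Record<string, Renderer> = {
  flight_card: (args) =>
    `<div class="flight-card">${args.origin} to ${args.destination}</div>`,
};

// When the agent emits a tool call, render the matching pre-built component,
// falling back to a raw dump for tools without a registered renderer.
function renderToolCall(name: string, args: Record<string, unknown>): string {
  const render = componentRegistry[name];
  if (!render) return `<pre>${name}(${JSON.stringify(args)})</pre>`;
  return render(args);
}

console.log(renderToolCall("flight_card", { origin: "SFO", destination: "JFK" }));
```

This is why implementation complexity scales linearly: every new interaction the agent should support means one more entry, hand-built, in the registry.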
The downside of controlled generative UI is that you have to design a specific component for each interaction you want your agents to support, and so implementation complexity scales linearly with capabilities. We will implement controlled generative UI using AG-UI and CopilotKit in this course. Declarative generative UI is a paradigm where developers declare a component catalog of pre-built, Lego-like building blocks, which the agent can then assemble into rendered components dynamically and on demand for any given interaction. Under the hood, the agent returns a structured schema that represents an assembly of building blocks, along with data bindings containing values that fill in that schema. The schema and the data bindings are then passed to front-end renderers, which can visualize the structured content as native UI components. We will soon see an example of this in action. In consumer-facing applications, declarative generative UI is most suited for the long tail of product surfaces, where flexibility is more impactful than perfection. Declarative gen UI also excels in internal applications, which generally favor pure functionality and ease of implementation over a perfectly optimized user experience. The reason is that declarative generative UI cannot be made pixel-perfect: components have to consist of a standard combination of pre-built building blocks. Declarative gen UI is also less deterministic than controlled generative UI; the same user query may be answered via a different combination of building blocks at different times. And because it requires more computation and generation time, it is also slower than controlled gen UI. The tools we will use here are CopilotKit and the A2UI spec, which is spearheaded by Google.
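A minimal sketch of the declarative pattern, under stated assumptions: the block names and binding shape below are invented for illustration and are not the A2UI spec. The agent emits a schema assembled from a fixed catalog of building blocks plus data bindings; a front-end renderer walks the tree and fills in the values. For self-containedness, this renderer emits HTML strings rather than native components.

```typescript
// The fixed catalog of building blocks the agent is allowed to assemble.
type Block =
  | { block: "card"; children: Block[] }
  | { block: "heading"; bind: string }
  | { block: "text"; bind: string };

// Data bindings: values the agent supplies to fill in the schema.
type Bindings = Record<string, string>;

// Walk the schema tree, resolving each binding. A real renderer would map
// each block type to a native UI component (e.g. a React component).
function renderBlock(node: Block, data: Bindings): string {
  switch (node.block) {
    case "card":
      return `<div class="card">${node.children
        .map((c) => renderBlock(c, data))
        .join("")}</div>`;
    case "heading":
      return `<h2>${data[node.bind] ?? ""}</h2>`;
    case "text":
      return `<p>${data[node.bind] ?? ""}</p>`;
  }
}

// The agent might emit this schema and bindings for "show my order status":
const schema: Block = {
  block: "card",
  children: [
    { block: "heading", bind: "title" },
    { block: "text", bind: "body" },
  ],
};
const bindings: Bindings = { title: "Order #1042", body: "Out for delivery" };
console.log(renderBlock(schema, bindings));
```

Notice the trade-off the lesson describes: the agent can compose these blocks into any layout on demand, but only layouts expressible in the catalog, which is why declarative gen UI is flexible but never pixel-perfect.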
CopilotKit and AG-UI are launch and design partners on A2UI, and AG-UI allows you to use the A2UI spec with any agent. Further along the generative UI spectrum, we find MCP Apps, which are an official extension of MCP. MCP Apps are widgets, or even fully featured applications, that get embedded inside the AI chat window. Under the hood, MCP Apps are implemented via an embedded iframe. MCP Apps are supported by the ChatGPT and Claude app stores, and they are deeply optimized for those app-store use cases. The advantage is that you can bring any third-party app into a chat experience while allowing it to keep its branding, and without having to customize the host app in any way. The disadvantage is that this constraint necessitates an inherent indirection in the interaction between an app and the host, which introduces additional implementation complexity (because you're essentially implementing an app inside of another app), security challenges, and limited customization outside of the MCP App window. Additionally, today, MCP Apps are inherently tied to an iframe implementation, which means they're less suitable for experiences that exist outside of the web, such as mobile or Slack. In this course, we will take advantage of the handshake between AG-UI and MCP Apps to bring any MCP App, including those apps originally built primarily for the ChatGPT and Claude app stores, into your own custom agentic applications. Which means that your users can bring the apps they're already familiar with, from Salesforce and HubSpot and Spotify, into your own custom agent application. Finally, at the very far end of the generative UI spectrum, we find fully open-ended generative UI. This paradigm consists of UI that is fully generated by the agent on demand, per user request. Fully open-ended generative UI is often what people have in mind when they first see the term generative UI.
This paradigm of generative UI is most suitable for the far long tail of user queries, and for delighting users with custom, embellished experiences that, by today's standards, are unexpected. That said, fully open-ended generative UI requires far more thinking tokens to work, which makes it both slower and more expensive, and it is far less predictable and robust than the alternatives on the generative UI spectrum. It's currently still in the experimental phase of adoption, but it represents the full promise of generative UI and captures our imagination. Given the tremendous pace of development in the agentic space, it will likely quickly become more and more production-ready. In this course, we will implement fully open-ended generative UI using CopilotKit and AG-UI middleware, which means you can bring it to any agentic backend right away. Let's do a quick recap. As you will soon see up close, the different parts of the generative UI spectrum have different strengths, which are suitable for different needs and surfaces. Controlled generative UI, the so-called workhorse of the generative UI world, is the most appropriate solution for the most-used surfaces in your application. Declarative generative UI, which is more flexible but less customizable and deterministic, is most suitable for the long tail of surfaces in your application. And open-ended generative UI is most suitable for interacting with third-party systems, and for the far, long tail of your application. I'm really looking forward to going on this journey with you. We will begin the course by building a basic agent chat UI on top of a solid foundation we can continue to build on. Then we'll go over controlled, declarative, and open-ended generative UIs, with hands-on examples for each. Finally, we'll close out with additional building blocks you need to create a great full-stack agentic application outside of the chat, including frontend tool calling and state synchronization.
This is a quickly evolving emerging space, and I really hope this course will get you ahead of the curve.