It already gives you suggestions based on your current conversation. So if you're looking at a team member's work, it'll suggest things like "show their open PRs" or "anything blocked." But learning your patterns across sessions and surfacing them when you first open the app is a great idea, I'm gonna add that next!
Yeah, we've been following it closely. We already support the majority of the MCP spec and plan to add support for UI over MCP.
But our use case is a little different. MCP Apps embed interfaces into other agents. Tambo is an embedded agent that can render your UI. There's overlap for sure, but many of the developers using us don't see themselves putting their UI inside ChatGPT or Claude. That's just not how their users interact with their apps.
That said, we're thinking about how we could make it easy to build an embedded agent and then selectively expose those UI elements over MCP Apps where it makes sense.
There's overlap for sure. I'd say we've built a more drop-in solution. We actually migrated to AG-UI events under the hood, and we have plans to expand cross-compatibility across standards.
The major difference is we provide an agent. You don't need to bring your own agent or framework. A lot of our developers are using our agent and are really happy with it, and we have a bunch of upcoming features to make it even better out of the box.
You install the React SDK, register your React components with Zod schemas, and then the agent responds to users with your UI components.
Developers are using it to build agents that actually solve user needs with their own UI elements, instead of replying with text instructions or taking actions with minimal visibility for the user.
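Mechanically, the flow described above boils down to a registry of components plus a schema check before rendering. Here's a minimal self-contained sketch of that idea in TypeScript (the names `registerComponent` and `agentRespond`, and the string-based schema, are illustrative only, not the actual Tambo SDK API, which uses React components and Zod schemas):

```typescript
// Illustrative sketch only — not the real Tambo API.
// A "schema" here is just a map from prop name to expected typeof result.
type PropsSchema = Record<string, "string" | "number">;

interface RegisteredComponent {
  name: string;
  description: string;          // tells the agent when to use this component
  propsSchema: PropsSchema;     // constrains what props the agent may fill in
  render: (props: Record<string, unknown>) => string;
}

const registry = new Map<string, RegisteredComponent>();

function registerComponent(c: RegisteredComponent): void {
  registry.set(c.name, c);
}

// The agent picks a registered component by name and fills in props;
// props are validated against the schema before rendering.
function agentRespond(name: string, props: Record<string, unknown>): string {
  const c = registry.get(name);
  if (!c) throw new Error(`unknown component: ${name}`);
  for (const [key, kind] of Object.entries(c.propsSchema)) {
    if (typeof props[key] !== kind) {
      throw new Error(`prop ${key} must be a ${kind}`);
    }
  }
  return c.render(props);
}

registerComponent({
  name: "PRList",
  description: "Shows a team member's open pull requests",
  propsSchema: { author: "string", count: "number" },
  render: (p) => `${p.count} open PRs by ${p.author}`,
});
```

In the real SDK the `render` function would be a React component and the schema a Zod schema, but the registration-then-validated-render shape is the same.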
We're building out a generative UI library, but as of right now it doesn't generate any code (that could change).
We do have a skill you can give your agent to create new UI components:
Basically it's just... agreeing upon a description format for UI components ("put the component C with params p1, p2, ... at location x, y") using JSON / zod schema etc... and... that's it?
Then the agent just uses a tool "putComponent(C, params, location)" which just renders the component?
I'm failing to understand how it would be more than this?
On one hand I agree that if we "all" find a standard way to describe those components, then we can integrate them easily in multiple tools so we don't have to do it again each time. At the same time, it seems like this is just a "nice render-based wrapper" over MCP / tool calls, no? Am I missing something?
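For what it's worth, the minimal version sketched in this comment really is small. A self-contained TypeScript sketch of that framing (`putComponent`, `Placement`, and `canvas` are just names taken from or invented for this comment, not any real SDK):

```typescript
// Sketch of the "one tool that places components" framing from the comment.
interface Placement {
  component: string;                    // which registered component, e.g. "Chart"
  params: Record<string, unknown>;      // props the agent fills in
  location: { x: number; y: number };   // where to put it
}

// The rendered surface: an ordered list of placements.
const canvas: Placement[] = [];

// The single tool the agent calls: record a placement and hand it
// back so the host can render it.
function putComponent(
  component: string,
  params: Record<string, unknown>,
  location: { x: number; y: number },
): Placement {
  const placement: Placement = { component, params, location };
  canvas.push(placement);
  return placement;
}
```

The open question in the thread is exactly whether the value lies beyond this wrapper: the agent choosing *which* component and *which* params from the schemas' descriptions, rather than the placement mechanics themselves.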
I have to keep asking the same questions. Do you think it could remember what I typically ask and suggest or generate it automatically?