Hacker News | grouchy's comments

It's been fun using this in our standups. Thanks for building this.

I have to keep asking the same questions. Do you think it could remember what I typically ask and generate or suggest those questions?


It already gives you suggestions based on your current conversation. So if you're looking at a team member's work, it'll suggest things like "show their open PRs" or "anything blocked." But learning your patterns across sessions and surfacing them when you first open the app is a great idea, I'm gonna add that next!

Glad you like the approach. When you give it a spin or look into the implementation, please let us know what you think.

We are constantly improving tambo. It's crazy to see how much it's improved since we first started.


It gets it wrong sometimes, but I think the alternative is the user getting it wrong trying to navigate your site.

I like to think about how much time I spend clicking through different nav links and dropdowns trying to find the functionality I need.

It's just a new way for the app to surface what the user needs when they need it.


It does!

```
import { z } from "zod";

inputSchema: z.object({ query: z.string() })
```

or

```
import * as v from "valibot";

inputSchema: v.object({ query: v.string() })
```

or

```
import { type } from "arktype";

inputSchema: type({ query: "string" })
```


I'm curious what makes you say that, because we haven't experienced this. We're being used by a Fortune 1000 fintech in production.

Any specific experience you had? Or more specifics on where batteries-included went too far?


Yeah, we've been following it closely. We already support the majority of the MCP spec and plan to add support for UI over MCP.

But our use case is a little different. MCP Apps embed interfaces into other agents. Tambo is an embedded agent that can render your UI. There's overlap for sure, but many of the developers using us don't see themselves putting their UI inside ChatGPT or Claude. That's just not how users use their apps.

That said, we're thinking about how we could make it easy to build an embedded agent and then selectively expose those UI elements over MCP Apps where it makes sense.


There's overlap for sure. I'd say we've built a more drop-in solution. We actually migrated to AG-UI events under the hood, and we have plans to expand cross-compatibility across standards.

The major difference is we provide an agent. You don't need to bring your own agent or framework. A lot of our developers are using our agent and are really happy with it, and we have a bunch of upcoming features to make it even better out of the box.


Awesome to meet another tambonaut.

We love Zod, and we also support Standard Schema, and thus most other popular validation libraries.
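Standard Schema is what makes that work: Zod, Valibot, and ArkType all expose a common `~standard` property, so a consumer only needs one generic validation path. A minimal self-contained sketch of the idea (the `querySchema` object below is a hand-rolled stand-in for what those libraries produce, not tambo's actual code):

```typescript
// Minimal shape of the Standard Schema v1 interface that zod, valibot,
// and arktype all implement via a `~standard` property.
interface StandardSchemaV1<T> {
  "~standard": {
    version: 1;
    vendor: string;
    validate: (value: unknown) =>
      | { value: T }
      | { issues: { message: string }[] };
  };
}

// Generic validator: works with any Standard Schema library, so a tool's
// inputSchema can come from zod, valibot, or arktype interchangeably.
function parse<T>(schema: StandardSchemaV1<T>, input: unknown): T {
  const result = schema["~standard"].validate(input);
  if ("issues" in result) throw new Error(result.issues[0].message);
  return result.value;
}

// Hand-rolled schema standing in for e.g. z.object({ query: z.string() }).
const querySchema: StandardSchemaV1<{ query: string }> = {
  "~standard": {
    version: 1,
    vendor: "demo",
    validate: (value) =>
      typeof value === "object" &&
      value !== null &&
      typeof (value as { query?: unknown }).query === "string"
        ? { value: value as { query: string } }
        : { issues: [{ message: "expected { query: string }" }] },
  },
};

console.log(parse(querySchema, { query: "open PRs" }));
```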

I'm curious how you found us?


Thank you. I just sent you an email. Looking forward to learning more about what you are building.


You install the React SDK, register your React components with Zod schemas, and then the agent responds to users with your UI components.

Developers are using it to build agents that actually solve user needs with their own UI elements, instead of text instructions or taking actions with minimal visibility for the user.
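The register-then-render flow described above can be sketched roughly like this. All names here (`registerComponent`, `renderAgentChoice`, `PRList`) are invented for illustration, not the actual SDK API, and the render function stands in for a real React component:

```typescript
type Validator<P> = (raw: unknown) => P; // stand-in for a Zod schema's parse

interface UIComponent<P> {
  name: string;
  description: string;          // read by the model when choosing a component
  parseProps: Validator<P>;     // model output is validated before rendering
  render: (props: P) => string; // stand-in for a React component
}

const registry = new Map<string, UIComponent<any>>();

function registerComponent<P>(component: UIComponent<P>): void {
  registry.set(component.name, component);
}

// Hypothetical component: a teammate's open pull requests.
registerComponent({
  name: "PRList",
  description: "List a teammate's open pull requests",
  parseProps: (raw) => {
    const props = raw as { prs: { title: string }[] };
    if (!Array.isArray(props?.prs)) throw new Error("prs must be an array");
    return props;
  },
  render: ({ prs }) => prs.map((pr) => `- ${pr.title}`).join("\n"),
});

// The agent answers with a component name plus props instead of plain text;
// the client validates the props and renders the registered component.
function renderAgentChoice(name: string, rawProps: unknown): string {
  const component = registry.get(name);
  if (!component) throw new Error(`unregistered component: ${name}`);
  return component.render(component.parseProps(rawProps));
}
```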

We're building out a generative UI library, but as of right now it doesn't generate any code (that could change).

We do have a skill you can give your agent to create new UI components:

```
npx skills add tambo-ai/tambo
```

/components


Okay, but I fail to see how this is "new tech"?

Basically it's just... agreeing upon a description format for UI components ("put the component C with params p1, p2, ... at location x, y") using JSON / zod schema etc... and... that's it?

Then the agent just uses a tool "putComponent(C, params, location)" which just renders the component?

I'm failing to understand how it would be more than this?

On one hand I agree that if we "all" find a standard way to describe those components, then we can integrate them easily in multiple tools so we don't have to do it again each time. At the same time, it seems like this is just a "nice render-based wrapper" over MCP / tool calls, no? Am I missing something?
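The wrapper being described really does boil down to a small wire format plus a dispatcher, which can be sketched as follows. The type and field names here are invented for illustration, not any actual spec:

```typescript
// The agent's reply is just a JSON message that either carries text or
// names a component with params -- the "description format" in question.
type AgentMessage =
  | { kind: "text"; content: string }
  | { kind: "component"; name: string; params: Record<string, unknown> };

// The "putComponent" tool boils down to a renderer keyed by component name;
// the render function here stands in for mounting a real UI component.
const renderers: Record<
  string,
  (params: Record<string, unknown>) => string
> = {
  Chart: (p) => `<Chart series=${JSON.stringify(p.series)}>`,
};

function handle(message: AgentMessage): string {
  switch (message.kind) {
    case "text":
      return message.content;
    case "component": {
      const render = renderers[message.name];
      if (!render) throw new Error(`no renderer for ${message.name}`);
      return render(message.params);
    }
  }
}
```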


It's that plus the hosted service, which interacts with the LLM, stores threads, handles auth, gives you observability of interactions in your app, etc.

