
I'm building something similar. One area I see being a massive problem is separating 'brands' and 'products', especially with companies that do a really poor job of delineating between their different brands over time.

For example 'Quickbooks', 'Quickbooks Online', 'Intuit Quickbooks' all show up occasionally when you ask about 'Accounting software'.

As an aside, on 'Accounting Software' I'm not seeing QBO in the top 3, and Freshbooks is at number one. I've never had that result when I've run reports.

https://productrank.ai/topic/accounting-software

https://www.aibrandrank.com/reports/89



Very cool!

Yup I definitely see confusion in our responses around the product and brand names. We do another pass through an LLM specifically aimed at ‘canonicalizing’ the names, but we’ll need to get more sophisticated to catch most issues.
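As a rough illustration of what that canonicalization pass does (the alias table and names here are made up for the example; the real pass goes through an LLM rather than a lookup table), a deterministic sketch might look like:

```python
# Hypothetical alias table collapsing brand-name variants to one canonical
# name. In practice an LLM handles the long tail these tables can't cover.
ALIASES = {
    "quickbooks": "QuickBooks",
    "quickbooks online": "QuickBooks",
    "intuit quickbooks": "QuickBooks",
    "qbo": "QuickBooks",
    "freshbooks": "FreshBooks",
}

def canonicalize(name: str) -> str:
    """Map a raw product mention to its canonical brand name, or pass it
    through unchanged if no alias is known."""
    return ALIASES.get(name.strip().lower(), name.strip())
```

A table like this catches the common variants cheaply; the LLM pass is there for everything the table misses.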

In the case you mentioned, brand confusion is what accounts for QBO's omission from the top three. Both OpenAI and Perplexity rank it #1, but Anthropic ranks the slightly different "Quickbooks" product as #1. Our overall ranking prioritizes products that appear in all three responses, so both are dropped down.
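A minimal sketch of that aggregation rule, assuming the rankings are lists ordered best-first (the model names and products below are illustrative, not the site's actual data or code):

```python
from statistics import mean

def overall_ranking(model_rankings: dict[str, list[str]]) -> list[str]:
    """Combine per-model product rankings: products mentioned by every
    model come first, then ties break on mean list position."""
    products = {p for ranks in model_rankings.values() for p in ranks}

    def sort_key(product):
        positions = [r.index(product)
                     for r in model_rankings.values() if product in r]
        in_all = len(positions) == len(model_rankings)
        # False sorts before True, so products in every list rank higher.
        return (not in_all, mean(positions))

    return sorted(products, key=sort_key)
```

Under this rule, if Anthropic names "QuickBooks" where the other two name "QuickBooks Online", both variants miss the appears-in-all boost and fall below products all three models agree on.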


Interesting, I thought it might be something like that.

Yea, 'canonicalizing' is really tough (although I don't know if you really need to get it *perfect*) because what is correct is different in different contexts.

Take Accounting Software as an example again: for the category overall, canonicalizing any reference to Quickbooks to the same company makes sense. If you're asking for more specific recommendations, though, like 'Accounting software for sole traders', you might have both Quickbooks Online and Quickbooks EasyStart mentioned, and those actually are slightly different products. Or Netsuite, which is actually a suite of products that might each make sense in slightly different contexts.
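One way to picture that context-dependence is a canonical map keyed by topic (everything below is a made-up sketch, not anyone's actual mapping): the same raw mention resolves differently depending on what's being ranked.

```python
# Hypothetical: the right canonical form depends on the topic being ranked.
CANONICAL_BY_TOPIC = {
    "accounting software": {
        # broad category: collapse everything to the company level
        "quickbooks online": "QuickBooks",
        "quickbooks easystart": "QuickBooks",
    },
    "accounting software for sole traders": {
        # specific query: these are genuinely different products here
        "quickbooks online": "QuickBooks Online",
        "quickbooks easystart": "QuickBooks EasyStart",
    },
}

def canonicalize(name: str, topic: str) -> str:
    """Resolve a product mention relative to the topic; unknown names or
    topics pass through unchanged."""
    return CANONICAL_BY_TOPIC.get(topic, {}).get(name.strip().lower(), name)
```

The hard part, of course, is that the real mapping isn't enumerable like this, which is why an LLM ends up doing the resolution.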


That nuance is really important/hard to piece apart. Have you found any good techniques to solve for it?


To be honest not really!

I get the output from the LLMs, compile it into a report, and then pass it back through an LLM to sense-check the result with the added context of what was requested in the report. I'm still not super happy with the outcome, though; some categories still come out a bit of a mess.
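That sense-check step could be sketched roughly like this, with the LLM abstracted as any prompt-to-text callable (the prompt wording and function names are invented for the example):

```python
def sense_check(report: str, request: str, llm) -> str:
    """Second pass over a compiled report: ask the model to review it
    against the original request. `llm` is any callable that takes a
    prompt string and returns the model's text response."""
    prompt = (
        f"The user requested a report on: {request}\n"
        f"Here is the compiled report:\n{report}\n"
        "Flag any entries that refer to the same brand under different "
        "names, and any entries that don't fit the requested category."
    )
    return llm(prompt)
```

Keeping the LLM as a plain callable also makes the step easy to exercise with a stub in tests.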



