
I've tried to use Perplexity after reading all of the hype, seeing it praised by so many VCs, and seeing it appear on so many different lists of essential AI tools.

Yet most of my Perplexity queries have produced poor results. It always feels like they optimized for minimizing latency and producing output that feels good instead of doing actual research. Most of the time it feels like the same quality of results I'd get from skimming the summaries at the top of the Google search page if I didn't filter out the spammy sites.

The product could be more useful if it spent several minutes researching, but that would defeat the wow factor that I'm sure their product managers are prioritizing.



Perplexity had a business case for one hot minute there, before OAI, Anthropic and Google all added search to their models, but now that they have it, Perplexity doesn’t have a reason to exist anymore. They’re kind of the poster child for “if you don’t have your own model, you’re basically VC-funded product-market-fit research for the companies that do, who will go on to copy and crush you.”


Hard disagree.

Even during ChatGPT's peak, when HN was buzzing with every other post being about how ChatGPT or some other LLM product had replaced Google for them, I could not honestly switch, or even meaningfully reduce my Google usage.

Until Perplexity.

It was the AI product that actually reduced my Google usage. Even with AI mode directly built into Google homepage now, Perplexity is still better.

It has basically zero hallucination, each paragraph/entry is backed by a URL, and it has lower latency than any other LLM product.

I don't know why you find it bad. I use it daily, and for serious searches.

It has fundamentally changed the way I search the web and ask questions online.


But it doesn’t show a source every time, and it doesn’t stop when it can’t find the data.

It answers regardless.


It sounds like you need to be using the research function, which takes ~3 minutes but does a much more in depth search to find more relevant data.


3 minutes is too long for exploratory searches, where I'm not sure what I'm even looking for. And 3 minutes feels too short for deep research, where I'm expected to trust some complex result that I either don't know enough about myself (that's why I'm searching for it) or know well enough that the AI probably can't do anything I couldn't already do within a couple of minutes.

I think the sweet spot for AI results is around 10-30 seconds. It's fast enough that I'm willing to wait for the results even if I'm not sure I'm exploring the right topic. And it's also fast enough that even if I knew what to search for, it can give me summarized results faster than I could read on my own.


Hm, I'd think, AI aside, that if 3 minutes is too long for exploratory research, it's not going to be good-quality exploratory research...


I remember when the hype first started around it, it was unusably slow and produced poor results. Granted, I haven't tried it lately to see if latency has improved, but the mismatch between the hype and the state of the product at the time really turned me off.


Same experience. Maybe I was using it wrong, but it always returned outdated results that weren't even completely related to my query.


I think the UX is good and could imagine it being applied well to a much better research tool.


Really? I’ve found it to be a fantastic product, and a part of my daily use.

It’s reduced my legacy search engine usage significantly.

Is there a better product? ChatGPT with web search enabled?

I guess Google’s AI is probably good, I just haven’t used Google in a while as I switched to DuckDuckGo.


I second this. Perplexity is the only AI I actually pay for. It absolutely excels at the kind of deep search into narrow domains where expertise is concentrated in forums and specialist sites. Things like mechanical work on obscure classic vehicles, vacuum tube electronics, company tax arcana. It's also very very good at those questions you sometimes wake up with, where something happened in the news six months ago and you think, "Whatever came of that?"

Its deep research and Pro modes are great at synthesizing thorough briefings on complex topics too, to get up to speed on a new client or job responsibility for example.

It's not a chatbot for me, it's a brilliant, tireless little research minion.

As always with any LLM, you should double-check its final, specific answers. It does occasionally hallucinate when information simply isn't available. Your research minion is just that: a minion. You have to supply the context. It's not a teacher or guru.

EDIT: the bottom line is, it came along at exactly the right time for me. Google's search results are pages of ads, and DuckDuckGo insists on showing page after page of content-farm blogspam for the types of topics I search for. It cuts right through all that crap for me.


> As always with any LLM, you should double-check its final, specific answers. It does occasionally hallucinate when information simply isn't available.

It also sometimes completely botches it when information is available. For example, a while back someone cited Musk only scoring 730 on the math SAT as evidence that there is something wrong with the test.

I looked up Musk's age to figure out about when he would have taken the SAT then asked Perplexity what percentile a 760 would have been then. It gave me an answer that as far as I can tell was right (~90th).

I then wondered what my percentile was, so asked it what percentile 790 would have been when I took it. It told me it would have been 17.something, where that something had 5 digits.

That was obviously completely wrong because (1) there is no possible way it could have data that would justify giving an answer with 5 digits after the decimal point, and (2) the maximum possible score was 800 and scores were a multiple of 10, so for 790 to have been 17th percentile would mean that 83% of people who took the test scored a perfect 800.
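The sanity check in point (2) is simple arithmetic; a minimal sketch (the 17th-percentile figure is the one Perplexity claimed, not real SAT data):

```python
# Old SAT math scores ran from 200 to 800 in multiples of 10.
# If a 790 really sat at the 17th percentile, then everyone above
# the 17th percentile -- the remaining 83% of test takers -- would
# have had to score strictly above 790, i.e. a perfect 800.
claimed_percentile = 17
share_scoring_above_790 = 100 - claimed_percentile
print(f"{share_scoring_above_790}% of test takers would need a perfect 800")
# -> 83% of test takers would need a perfect 800
```

That 83%-perfect-scores implication is what makes the answer obviously absurd without needing any percentile tables at all.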

I told it that this was clearly absurd.

It responded that I was completely right and said it was going to try again. On the retry it gave a reasonable answer that I knew from what I remembered was in the right ballpark and not given to ridiculous accuracy.


Couldn't agree more with this. A stock I hold suddenly started trending sharply upwards earlier in the year, and when I asked Perplexity to research why, it came back with a very detailed and well-cited explanation. It's far more efficient at distilling stuff down into a useful format than if I were to Google it myself.


The SEO-optimized blog spam is the worst.

Each page has paragraphs and paragraphs of pure filler. Let the AI crawlers read them so I don’t have to.


Same, it outshines Gemini and ChatGPT and hallucinates far less. The tone is less eager too, which makes it feel more like a tool than an unpaid assistant.


I completely agree. I also love the transparency that it provides as to where it is getting the reasoning for making a specific claim.

I can also ask it to just reference research papers and it will find relevant data relating to my query from peer reviewed sources.



