
For what it's worth, both the article you're linking to and the one this story is about are immediately flagged by AI text checkers as LLM-generated. These tools are not perfect, but they're right more often than they're wrong.



>These tools are not perfect, but they're right more often than they're wrong.

Based on what, in particular? The only times I've used them were for a laugh.


Based on experience, including a good number of experiments I've done with known-LLM output and contemporary, known-human text. Try them for real and be surprised. Some of the good, state-of-the-art tools include originality.ai and Pangram.
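To make that concrete, the kind of experiment I mean boils down to something like this rough sketch (Python; detect() is just a placeholder, not any particular checker's real API; you'd wire it up to originality.ai, Pangram, or whatever tool you want to evaluate):

    # Tally how often a detector is right on texts with known provenance.
    from typing import Callable

    def detect(text: str) -> bool:
        """Placeholder: return True if the detector flags the text as LLM-generated."""
        raise NotImplementedError("call your checker of choice here")

    def accuracy(samples: list[tuple[str, bool]], detector: Callable[[str], bool]) -> float:
        # samples are (text, is_llm_generated) pairs with known ground truth
        correct = sum(detector(text) == label for text, label in samples)
        return correct / len(samples)

Run that over a batch of known-human text and a batch of known-LLM output and you get a hit rate instead of anecdotes.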

A lot of people on HN have preconceived notions here based on stories they read about someone being unfairly accused of plagiarism or people deliberately triggering failure modes in these programs, and that's basically like dismissing the potential of LLMs because you read they suggested putting glue on a pizza once.


I had fun with AI detectors, particularly for images; even the best one (Hive, in my opinion) failed miserably in my tests. Maybe the ones trained on text are better, but I find them hard to trust, especially if someone knows how to fiddle with them.

> immediately flagged by AI text checkers as LLM-generated

Proof? Which one? I would like to run a few other articles through your checker to gauge its accuracy.


Hey! I'm not OP, but I've used originality.ai before and it saved my ass. It's super sensitive, but also super accurate.



