mold_aid's comments

You are not able to write like Shakespeare. Shakespeare isn't really even a great example of an "author" per se. Like anybody else you could get away with: "well I read a lot of Bukowski and can do a passable imitation" or "I'm a Steinbeck scholar and here's a description of his style." But not Shakespeare.

I get that you're into AI products and ok, fine. But no you have not "studied [Shakespeare] greatly" nor are you "able to write like [Shakespeare]." That's the one historical entity that you should not have chosen for this conversation.

This bot is likely just regurgitating bits from the non-fiction writing of authors, like an animatronic robot in the Hall of Presidents. Literally nobody would know if the LLM was doing even a passable job of Truman Capote-ing its way through their half-written attempt at NaNoWriMo.


>Literally nobody would know if the LLM was doing even a passable job of Truman Capote-ing its way through their half-written attempt at NaNoWriMo

As I look back on my day, I find myself quite pleased with this line.


A good point. "Famous author" is a marketing term for Grammarly here; it's easy to conceive of an "author" as being an individual that we associate with a finite set of published works, all of which contain data.

But authors have not done this work alone. Grammarly is not going to sell "get advice from the editorial team at Vintage" or "Grammarly requires your wife to type the thing out first, though."

I'll also note that probably no human would want advice from the living versions of the authors themselves.


>Yes, and it's a detection loop without feedback. You can never verify that a piece of work in the wild is actually AI. The poster is the only one who really knows, and they'll always say it's not.

Yes. People keep saying, in response to points like this, "oh but you/I can tell pretty easily." But it's not the detection, it's the verification! (see what I did there)

Where I'd push back is the idea that the problem is the boring "call out" discourse that follows each accusation. The problem of verifying human provenance is fundamental to the discussion of trust and argumentation, but the simple "the zone is flooded" problem is also an ecological one. There's terrible air/water/soil quality in the metro area I live in; people have to live with it w/o regard to how invested they are in changing it.


>So the marketing I've seen is intended to reassure skittish administrators that the software is not going to generate false accusations.

This is it, right here. All policy I've seen lately has been geared towards students having expanded "due process" rights.


You know you can't just say "I detect AI written prose" and then do whatever you want about it, right? It's not difficult, sure, to detect it. It's difficult to prove that it's true and then punish the student for it.

It's very easy to convince yourself it's true and then hand out punishment like it was on sale.

I suppose the hard part then is sitting through 26 grade appeals.

We can't, and neither can the machines that people build and/or use for "detection." Everyone in this thread also needs to recognize the entrenched differences between secondary educators, who have wholeheartedly adopted AI products into their teaching workflow, and tertiary educators, who have adopted them only by necessity. "By necessity" in this case means "having to spend a ton of time dealing with, talking about, and learning about this nonsense."

The discourse around "cheating" with these products has always been a mistake. We should have characterized them less as "cheating machines" and more as "expediency machines." Because once you're invested in describing students as having academic dishonesty issues rather than skill issues, you've made it an administrative problem. You never come back from that.

For my part, we lost the issue long ago when accountability culture won. We should never have bothered with the idea that "mechanics, grammar, and proofreading" should be part of a "rubric" that "assessed outcomes" for "good writing." We should have just said "we don't care if you don't think this is worthwhile, because your time is worth nothing." The last two years of student labor certainly suggest this.


Are you? How many preprints are posted here every day?


>Or will people realize they are programming and discipline up?

Or will there be coding across disciplines, and attendant theories of literacies in context?

What I like about the OP is its consonance with literate practices, which have gone through similar generations of "our children don't know how to [...]" alongside "our children will not need to [...] because of the machines."


>That phrasing makes me imagine a cultural anthropologist studying the behavior of programmers in the wild

Google Scholar is your friend here.


Can't wait for postmodern AI.


How to flip burgers better than an AI robot!


"This is the most used AI chatbot in the world!" "Who uses it?" "Other AI chatbots!"


:) Too true

But tbh, it'll more likely be repairing those burger-flippin' robots.

