You are not able to write like Shakespeare. Shakespeare isn't really even a great example of an "author" per se. Like anybody else you could get away with: "well I read a lot of Bukowski and can do a passable imitation" or "I'm a Steinbeck scholar and here's a description of his style." But not Shakespeare.
I get that you're into AI products and ok, fine. But no you have not "studied [Shakespeare] greatly" nor are you "able to write like [Shakespeare]." That's the one historical entity that you should not have chosen for this conversation.
This bot is likely just regurgitating bits from the non-fiction writing of authors, like an animatronic robot in the Hall of Presidents. Literally nobody would know if the LLM was doing even a passable job of Truman Capote-ing its way through their half-written NaNoWriMo attempt.
A good point. "Famous author" is a marketing term for Grammarly here; it's easy to conceive of an "author" as being an individual that we associate with a finite set of published works, all of which contain data.
But authors have never done this work alone. Grammarly is not going to sell "get advice from the editorial team at Vintage," or "Grammarly requires your wife to type the thing out first, though."
I'll also note that probably no human would want advice from the living versions of these authors themselves.
>Yes, and it's a detection loop without feedback. You can never verify that a piece of work in the wild is actually AI. The poster is the only one who really knows, and they'll always say it's not.
Yes. People keep saying, in response to points like this, "oh but you/I can tell pretty easily." But it's not the detection, it's the verification! (see what I did there)
Where I'd push back is on the idea that the problem is the boring "call out" discourse that follows each accusation. The problem of verifying human provenance is fundamental to the discussion of trust and argumentation, but the simple "the zone is flooded" problem is also an ecological one. There's terrible air/water/soil quality in the metro area I live in; people have to live with it without regard to how invested they are in changing it.
You know you can't just say "I detect AI written prose" and then do whatever you want about it, right? It's not difficult, sure, to detect it. It's difficult to prove that it's true and then punish the student for it.
We can't, and neither can the machines that people build and/or use for "detection." Everyone in this thread also needs to recognize the entrenched differences between secondary educators, who have wholeheartedly adopted AI products into their teaching workflow, and tertiary educators, who have adopted them only by necessity. "By necessity" in this case means "having to spend a ton of time dealing with, talking about, and learning about this nonsense."
The discourse around "cheating" with these products has always been a mistake. We should have characterized them less as "cheating machines" and more as "expediency machines." Because once you're invested in describing students as having academic dishonesty issues rather than skill issues, you've made it an administrative problem. You never come back from that.
For my part, we lost this issue long ago, when accountability culture won. We should never have bothered with the idea that "mechanics, grammar, and proofreading" should be part of a "rubric" that "assessed outcomes" for "good writing." We should have just said "we don't care if you don't think this is worthwhile, because your time is worth nothing." The last two years of student labor certainly suggest this.
>Or will people realize they are programming and discipline up?
Or will there be coding across disciplines, and attendant theories of literacies in context?
What I like about the OP is the consonance with literate practices, which have gone through similar generations of "our children don't know how to [...]" alongside "our children will not need to [...] because of the machines."