I’m playing with it. After giving it my name, it correctly stated that I moved to Poland in Summer ‘08, but then described how I became some kind of techno musician. I ran it again and it said wildly different things.
I have to say playing with GPT-3 has been a mind-blowing experience this week and you should all try it.
The most striking point was discovering that if I give it texts from my own chats, or copy-paste in RFPs, and ask it to write lines for me, it’s better at sounding like a normal person than I am.
Create an account at https://beta.openai.com/playground . You get $18 of free credits, and generating small snippets with the most powerful language model costs only a cent.
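If you’d rather script it than click around in the playground, the call is roughly this (a sketch assuming the official openai Python package as it shipped at the time; engine names change, so check the docs):

    import openai  # pip install openai

    openai.api_key = "sk-..."  # from your beta.openai.com account

    # One small completion against the largest available engine;
    # at roughly a cent per snippet, the free credits go a long way.
    response = openai.Completion.create(
        engine="text-davinci-002",
        prompt="Write a tagline for an ice cream shop.",
        max_tokens=32,
        temperature=0.7,
    )
    print(response.choices[0].text.strip())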
Another great option is https://textsynth.com/playground.html (made by the very impressive developer Fabrice Bellard, of Linux-in-JavaScript and world-record pi digit calculation fame). He deserves some money funneled through that site for his efforts over the decades (and the output is about as good as GPT-3, IMO).
When you’re in there, try to challenge it a bit beyond writing fiction.
A stock example was “write a tagline for an ice cream shop”. We tried changing it a bit and I’ll give you some of its punchlines.
“Write a tagline for an ice cream shop run by Bruce Wayne.” Result: “the only thing better than justice is ice cream”
“… run by an SCP”: “The SCP Ice Cream Shop: the only place where you can enjoy ice cream and fear for your life!”
“… run by Saddam Hussein”: “the best ice cream in the world, made by the worst man in the world!”
One thing to watch out for, though, is that it is not self-aware at all (at least in a practical sense) and can just make things up. For example, we tried giving it my daughter’s reading-comprehension homework questions on the book “W pustyni i w puszczy” and it gave cogent, plausible, and totally wrong answers that it made up on the spot. It would seem it had never been given the book, and it would have got an F.
And it can’t speak for itself: I can ask it directly “have you read the Tractatus?”, and it will insist “no, never”, yet it knows the text front to back like a scholar.
Oracle Query Builder, for example, but there have been dozens of tools like it over the past couple of decades.
Except of course that those tools are at least somewhat dependable in what they output, because they were created to generate queries, not roughly human-looking random text.
Markov chains look like absolute gibberish almost all the time, whereas GPT-2/3 (especially 3) generate natural-sounding sentences. If you think they’re equivalent in capability, you haven’t spent any time using GPT-3.
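For reference, a complete Markov chain text generator is about twenty lines (a word-level sketch in Python; "corpus.txt" is a placeholder for whatever plain-text file you want it to imitate):

    import random
    from collections import defaultdict

    def build_chain(text, order=2):
        # Map each run of `order` consecutive words to the words seen after it.
        words = text.split()
        chain = defaultdict(list)
        for i in range(len(words) - order):
            chain[tuple(words[i:i + order])].append(words[i + order])
        return chain

    def generate(chain, order=2, length=50):
        # Start from a random state and repeatedly sample one of its followers.
        out = list(random.choice(list(chain.keys())))
        for _ in range(length):
            followers = chain.get(tuple(out[-order:]))
            if not followers:
                break
            out.append(random.choice(followers))
        return " ".join(out)

    with open("corpus.txt") as f:
        print(generate(build_chain(f.read())))

Run that on a novel and you get locally plausible word salad that falls apart after a clause or two; there is no mechanism in there that could ever answer a question about the text.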
They sound more natural, sure, but semantically it’s the same: there are no signs of any intelligent thought there, it’s all gibberish that just happens to match the patterns it was trained against.
You could imagine a bot which takes your question, googles it, and then assembles the answer based on random pieces from millions of search results that happen to match the syntactical structure of the sentence - and you wouldn’t really be that far off.
Are you implying that those people literally never do anything other than regurgitate stuff from Facebook? Because yeah, such a person could probably be described as not having intelligence, but I also have never met or heard of such a person.
So make the bot. I paste in SAT reading-comprehension questions, including whole short stories, and GPT-3 gets them right. It’s not gibberish. It’s not even just cogent. Go throw your bot together and Show HN. I’ll wait.