As another data point as we chart our path to the singularity, I entered the same prompt into the API playground and got this response:
> A Jerry Seinfeld routine about cats.
> Why do cats always land on their feet? I've never seen a cat land on its feet. They don't; they land on you.
> I was at my friend's house the other day, and his cat was walking across the room. And I said to him, "How does that cat always land on its feet?" And he said, "Well, it's because they're lighter than air."
> And I said, "No, it's because they've got nine lives."
> And he said, "No, it's because they're lighter than air."
> And I said, "No, it's because they've got nine lives."
> And he said, "No, it's because they're lighter than air."
> And I said, "No, it's because they've got nine lives."
And then it just repeated those last two lines until my browser crashed. It cost me about a quarter.
Copilot gives me these kinds of two-line loops a lot. This is probably what AI will sit around doing until the sun flames out, after it's killed all of us off.
GPT-3 really reminds me of the kind of fluent, but ultimately incoherent, speech someone with Wernicke's aphasia produces. It's like we've done a really good job of artificially building the language part of the brain in isolation from all the other parts.
I'd never heard of that so I googled for a video because I wanted to know if GPT 3 is like that. After watching the video I can see what you mean. Fascinating but somewhat horrifying.
It just dawned on me how disruptive this tech is. If you think we are drowning in a sea of shitty content now, just wait until this type of thing is deployed on demand and at scale into people's feeds.
This actually had me laughing, which equals a dopamine hit. I can easily see people endlessly watching rapid-fire equivalents of this in a hypothetical future TikTok usurper.
> just wait until this type of thing is deployed on demand and at scale
South India averages 4 movies a week. Each movie has like 5 songs. So in a year you need lyrics for at least 1000 songs. Sure, lyricists are creative artists, but not that creative. So you hear a lot of catchy songs which have some AI input. Here's a song with 45 million views - I don't think a human wrote all of it because it's mostly garbage, but very catchy, so most likely AI augmented.
https://youtu.be/MJmxdlCtFWU
Only an AI would write something like - 2 2 2 2 2 2 2 2 2 2, i love u 2 really i love u 2.
Someone had to take the time to perform the song, there's no reason they wouldn't take a few minutes to change/improve the lyrics, even if an AI produced it.
I suspect it is not AI produced and is just a simple song.
You wouldn't even need your toes to count the number of people who sang like 80% of the songs in Bollywood film history. A random google claims that Kishore Kumar sang 2905 songs, Lata Mangeshkar somewhere around 25 thousand, and Asha Bhosle around 12 thousand.
The time taken to perform a song doesn't have to be much.
edit: that being said, the choreography probably took 50x as much work as the song.
There are already seas of AI content on YouTube, mostly based on a script that's a long post stolen verbatim from Reddit. SEO pushes people to the pages. The content doesn't even have to be good if it's cheap enough to make.
The future will be filled with ads for things that are real. It will be what’s scarce and exclusive. And half the things they advertise as real will probably be fake anyway.
Also as soon as deep fakes are as prevalent as photoshop, I'd want someone to verify the content. You could imagine hearing "Jerry Seinfeld" repeatedly using the n word. So I'd want to see the NYTimes or Bloomberg trying to verify it.
Cool in concept, but it shows how important comedic timing is. This is an example where the semantics require the pauses and drawn-out expressions to really make it work. It might be that the jokes could be written by AI but perfected by humans, especially as comedians use audience feedback to hone their material. Even the best don't know how any joke will land until it's delivered. How could any AI incorporate that real-world feedback loop?
I was pretty impressed with it, honestly. But I understand what you're saying - a comedian's delivery is different from normal speech. It may be that GPT-3 just wasn't trained with enough standup routines.
(I dread to think what a Sam Kinison AI would be like)
What's the training if every comedian has their own style, in words, intonation, and timing? Do we need to build personalized AI comedians? Or write new jokes for specific comedians, trained on their previous routines?
To be honest, given the complexity and contextual connections DALL·E 2 makes, it doesn't seem all that hard to learn the speech distinctions of a couple of hundred people. It's really not all that complex.
It also shows the importance of a laugh track. I think a lot of Seinfeld material would sound weird, even performed by Jerry himself, in the absence of audience laughter or other contextual background noise.
I actually think this should be possible to train an AI to do. It could alter timing and monitor for laughs, it will learn when and how to pause, and use intonation to maximize for laughter.
The trouble with jokes is they're only funny to people once. You'd need an adversarial laughing AI to train against. And boy would that thing be loony.
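The feedback loop described above could be sketched as a simple epsilon-greedy bandit that tunes one delivery knob (the pause before a punchline) against a measured laugh signal. Everything here is hypothetical: `laugh_meter` is a stand-in for a real audience, and all the numbers are made up.

```python
import random

# Pretend the audience laughs hardest at a ~1.5 second pause.
# In reality this signal would come from a microphone, not a formula.
def laugh_meter(pause_seconds):
    return max(0.0, 1.0 - abs(pause_seconds - 1.5))

def tune_pause(trials=200, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    pauses = [0.5, 1.0, 1.5, 2.0, 2.5]
    totals = {p: laugh_meter(p) for p in pauses}  # try each timing once
    counts = {p: 1 for p in pauses}
    avg = lambda p: totals[p] / counts[p]
    for _ in range(trials):
        if rng.random() < epsilon:
            p = rng.choice(pauses)    # explore a random timing
        else:
            p = max(pauses, key=avg)  # exploit the best timing so far
        totals[p] += laugh_meter(p)
        counts[p] += 1
    return max(pauses, key=avg)

print(tune_pause())  # → 1.5, the pause the fake audience rewards
```

The "jokes are only funny once" problem is exactly what this toy ignores: a real laugh signal would decay with repetition, which is where the adversarial laughing model would come in.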
It would be such a force for good in the world that Disney should allow it to be used for free like Volvo did with their patent for the three-point seat belt.
I wouldn't be surprised if this routine was guided by the creator. As in, they edited it to make it work and that was probably after discarding a bunch of terrible stuff. Nevertheless it's not unlike how Jerry actually works, he would write a lot, and then edit it down later. He worked hard at it and it sounded almost joyless.
All it needs is a layer of contrastive loss at the top, to tell real Seinfeld and SAInfeld (don't bother checking, it's already registered) apart, and boom! Indistinguishable delivery.
I just tried the prompt in the GPT-3 playground and got a similar result. I tried a bunch of other "A Jerry Seinfeld stand-up routine about X." prompts too - it's an interesting prompt format.
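For anyone wanting to reproduce this outside the playground, here's a minimal sketch of the kind of completion request involved. The model name, parameter values, and the helper function are assumptions for illustration, not what the playground actually sent.

```python
import json

def build_completion_request(prompt, temperature=0.7, max_tokens=256):
    # Hypothetical payload for the completions endpoint; the model name
    # and defaults are assumed, era-appropriate values, not confirmed ones.
    return {
        "model": "text-davinci-002",
        "prompt": prompt,
        "temperature": temperature,  # higher = more varied output
        "max_tokens": max_tokens,
    }

payload = build_completion_request("A Jerry Seinfeld routine about Datasette.")
print(json.dumps(payload, indent=2))
```

At the time, the official `openai` Python client exposed this as `openai.Completion.create(...)`; actually sending the request requires an API key.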
Here's what I got for "A Jerry Seinfeld routine about Datasette." (my open source project). Note that the "." is important - without it, GPT-3 tried to generate me a README file for a new Datasette plugin instead.
Jerry:
Hey, have you ever heard of this new website called Datasette?
George:
No, what is it?
Jerry:
Well, it's basically a way to store and share data sets.
George:
Huh. Sounds kind of boring.
Jerry:
Yeah, I know. But it's actually really useful. For example, say you wanted to find out how many people in your city use the same first name as you.
George:
Why would I want to know that?
Jerry:
I don't know. But with Datasette, you could easily find out.
George:
Really? That sounds kind of useful.
Jerry:
Yeah, it's actually really handy for all sorts of things. So if you ever need to find out anything about a data set, Datasette is the place to go.
> George:
> Why would I want to know that?
> Jerry:
> I don't know. But with Datasette, you could easily find out.
> George:
> Really? That sounds kind of useful.
Another instance of the lack of memory.
George goes from "Why would I want to know that?" to "kind of useful" two lines later.
Not that it isn't impressive, but that wouldn't get a pass anywhere.
I also like how this exhibits a classic GPT-3 trait: it's always biased towards having characters agree with each other. Here George breaks the entire bit by saying "Really? That sounds kind of useful."
GPT-3 is actually doing improv, not performing jokes. If it were really Seinfeld, it would be paying writers and scribbling notes down in a pocket notebook while golfing.
i.e. It's not Seinfeld, it's an improv guy who you just asked to "Do Seinfeld."
Nah. It's whatever the opposite of that is, directed at the people who do improv. Improv is just a series of associative, impulsive reactions to stimuli - exactly what these systems are doing, following a series of words with the first (most strongly connected) utterances that come to mind.
It could seem like anthropomorphism if you don't think about it, but it's not anthropomorphism to say robots walk just because humans also walk.
Yes, if you dehumanize human creativity to "just a series of associative, impulsive reactions to stimuli", then computers can think and GPT-3 is sentient.
> …it's not anthropomorphism to say robots walk just because humans also walk.
But it is to say that robots "do improv". Could GPT-3 mimic something that looks like improv by remixing an enormous corpus of human-created improv? Certainly not well, as we've all seen in non-cherry-picked GPT-3 output.
Funny that in a way we fall into the same pattern as GPT-3, always reproducing the same two lines. The first is "GPT-3 generated this", the second "but how do we know it really generated that?".
Just played it for my wife but didn’t tell her it was AI. Watching her facial expressions it looked like at points she was skeptical of the cadence, but she came away with it being Seinfeld, but described it as “definitely not one of his better jokes” and “something about it is really off.”
I think I would have had the exact same thought had I not known it was AI. It seems like AI has this rambling, dream-like quality where things just don't entirely make sense, probably because AI lacks understanding or context (my take as a layperson). The voice is surprisingly accurate sounding, but the cadence is stilted and really lacks any kind of comedic timing.
> [I worry more about] hack comedians generating entire sets based on other people's material.
The difference is that GPT-3 produces all kinds of hacky content based on other people's material at scale, producing an invasive species of impossible-to-control faux-written content. It's all fun and shitty Seinfeld routines now, but GPT-X will make it impossible to find non-computer-generated content on search indexes by 2030.
In contrast, comedians and comedy nerds know who the joke-stealing hacks are (e.g. "Carlos Men-Steal-ia"), and the splash damage is limited.
I think that's because a new cat food commercial a week isn't really a thing. I didn't really laugh; I do laugh at actual Seinfeld probably around 40% of the time, less with his more recent content and more with his older content.
In a word, 'derivative'. Not that pop culture isn't full of remakes and endless sequels, bands that sound like their predecessors, etc. But I am not optimistic that fully AI-generated content offers any exciting novelty; so far at least most of its entertainment value comes from where it fails to convince.
It lacks the cultural awareness to pick a topic that would actually surprise a listener, hence not really funny. But the tone and structure was nearly flawless.
I think the parent means the particular beats of the joke. There's clearly something you could do in Seinfeld style with "tuna surprise" that would be unexpected and funny, but the AI is missing all of the cultural context and understanding of comedy that would make it work.
One thing I really want to see in my lifetime is an "AI" I can feed episodes of a TV show and have it generate more episodes. Like a second season of Firefly or Gravity. Or re-create the finale of Battlestar Galactica and give that show the ending it deserved.
I don't think we're going to get there following the GPT path. I'm aware this is a bit of a contrarian take -- and that I have some degree of bias, since I've made literally tens and tens of dollars selling my own fiction -- but even the best generated text that I've seen suggests a few things:
- it can be very good at mimicking narrative styles
- it is good at close range context, what might be (uncharitably) described as "autocomplete on steroids"
- it doesn't understand overall structure
That last one is the contrarian bit, but I think it's absolutely true. Context gives some structure to generated text, but the longer the generated text runs the closer it gets to falling apart. Nothing we have now is remotely good enough to pull off things like "here is something planted in chapter two that will be referenced in chapters eight and fifteen and finally pay off in the climax in chapter twenty-five".
And I don't think this is just a matter of iterating on the current models; at some point this requires an understanding and application of story construction rules, of act structure, of character, that seems fundamentally incompatible with the way neural network-based machine learning currently works. If you feed a bunch of screenplays into an ML model, it can generate text in that style, but it's not going to write what we would consider a good screenplay. If you feed screenwriting textbooks into that same model, it won't help, because it doesn't have the ability to interpret those as structural guidelines for writing entirely unrelated texts.
I'm just saying that if we are about to use "AI" for all kinds of increasingly disturbing and dystopian purposes, at least I want to get a second season of Gravity out of it.
This is amazing, especially the punchline! Some of the Netflix specials I've watched had jokes that were not much better than this. I'm sure with a proper delivery (tone, pauses) it would get quite a few laughs if it was presented on the stage.
Playing around with gpt3 in the past I've noticed it lifts a lot of material and repurposes it. Does some variation of cat food skit exist in the wild? I wouldn't be surprised.
I'm having trouble believing this is real and not heavily massaged output. I worked on an AI-like project in 1988 as a programming gig in college, and the output was heavily edited by the composer.
If it is real, I'm actually somewhat frightened.
It seems unlikely that this is the real prompt. Even at temperature=1.0, I repeatedly get "Why did the cat cross the road? To get to the other side!" as the response.
I gave it some monologues and expanded the prompt a bit, and now it's a bit more talkative.
--
Cats are weird. They're like little old men in furry suits. Have you ever seen a cat drink? They stick their tongue in the bowl, and they just go "Nyahhh." It's like they're not even trying. Dogs are different. Dogs are eager to please. They're always happy to see you, they wag their tails, they bring you a stick. They're like, "Hey, look what I found! Isn't this great?" Cats are like, "Eh, what do you want?"
--
Why does it seem like every time you meet somebody, they're either a cat person or a dog person? What is that all about? You ever meet somebody and they're like, "I'm not really a cat person or a dog person, I'm more of a snake person." Never. You're either a cat person or a dog person. And I've never met a cat person who's also a dog person. Have you? It's always one or the other. It's very rare. In fact, I've never even met a cat person who's also a human person. They don't seem to like us very much, do they?
--
What's the deal with cats? I don't get it. They're so cute when they're kittens, but then they just turn into these lazy, aloof creatures that don't do anything. Just lay around all day, sleep all day. And they're always like, "I'm not really in the mood." "I don't really feel like it." "I'm not really in the mood for this." What are they in the mood for? What do they want? Nobody knows. Not even the cats. They don't know what they want. They're just like, "I'll know it when I see it."
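As an aside on the temperature parameter mentioned a few comments up: a minimal sketch of how temperature reshapes the sampling distribution. The logits here are made up; this is just the mechanism, not GPT-3's actual implementation.

```python
import math
import random

def sample_with_temperature(logits, temperature, rng):
    # Divide logits by temperature, softmax, then draw one index.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]  # subtract max for stability
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

# Low temperature concentrates mass on the top logit, which is why
# greedy-ish settings keep reproducing the same completion; temperature
# 1.0 samples from the unmodified distribution.
rng = random.Random(0)
cold = [sample_with_temperature([2.0, 1.0, 0.1], 0.1, rng) for _ in range(100)]
print(cold.count(0))  # near 100: almost always the argmax token
```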
Years ago on /r/aww someone posted their black cat named Jerry Seinfeld. This sounds like the kind of comedy that this cat Jerry Seinfeld would come up with.
I remember this cat because I improvised a similar comedy bit for him in the comments. Like "Humans, huh? You never see them hunt or fish, but they have all this meat just lying around!"
Very impressive. The only thing missing from AI is a perfected understanding of context without billions of data points, and it has some trouble with the consistency and novelty that humans intuitively understand because of our shared evolution.
All it needs to do now is to make some unbelievably pretentious waffle about how it's been perfecting a joke its entire career, only for the joke to turn out to be "What's the deal with poptarts!?".
"abysmal".. ? Either you're an expert on the field and know better (please send some link !) or i'm not even sure what you're comparing that too. This is the best thing i've heard from speech synthesis since i can't remember.