This is possibly a death spiral. GPT is only possible because it's been trained on the work humans have learned to do and then put out in the world. Now GPT is as good as them and will put them all out of work. How can it improve if the people who fed it are now jobless?
Presumably it will improve the same way humans did -- once it's roughly on par with us it'll be just as capable of innovating and trying new things. The only difference is that trying a truly new approach to something is rare for most humans. "GPT-9" might regularly and automatically recompute all the "tricky problems" it remembers from the past with updated models, or with a few tweaked parameters, and then analyze whether any of these experiments produced "better" solutions. And it might run this operation continuously during all idle cycles.
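The idle-cycle idea above can be sketched as a tiny loop: keep a bank of previously hard problems with the best solution cost seen so far, and whenever an updated (or parameter-tweaked) model is available, re-solve everything and keep whichever answer scores better. All of the names and the toy "model" below are invented for illustration; this is a minimal sketch of the bookkeeping, not of any real system.

```python
import random

def solve(problem, model_skill):
    # Stand-in for a model attempting a problem: higher skill yields a
    # lower-cost candidate solution, with a little noise.
    return max(0.0, problem["difficulty"] - model_skill + random.uniform(-0.1, 0.1))

def reevaluate(problem_bank, model_skill):
    """Re-run every stored problem; keep the new solution only if it
    beats the best cost recorded so far. Returns how many improved."""
    improved = 0
    for problem in problem_bank:
        cost = solve(problem, model_skill)
        if cost < problem["best_cost"]:
            problem["best_cost"] = cost
            improved += 1
    return improved

random.seed(0)
bank = [{"difficulty": d, "best_cost": d} for d in (3.0, 5.0, 8.0)]
reevaluate(bank, model_skill=1.0)       # the original model's pass
n = reevaluate(bank, model_skill=2.0)   # an "updated" model revisits the bank
print(n, [round(p["best_cost"], 2) for p in bank])
```

Because the bank only ever records improvements, the recorded costs are monotonically non-increasing across model versions -- which is what would make such continuous background re-solving cheap to exploit.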
Honestly as a human who grasps how the economy works, this doesn't sound like a good thing, but I don't see any path to trying the fundamental changes that would be required for really good general AI to not be an absolute Depression generator.
The only thing I'm wondering is: will the wealthiest, who actually have any power to influence these fundamental things, figure this out before it's too late? I really doubt your Musks and Bezoses would enjoy living out their lives on ring-fenced compounds or remote islands while the rest of the world devolves into the Hunger Games.
I keep seeing people post a line like this and it makes about zero sense to me...
Just what the shit do you think Google/Microsoft/Amazon are pouring billions of dollars into machine learning/AI for? The first one that creates a self-improving, self-learning machine wins the game (or destroys the earth with a paperclip maximizer).
You and your human intelligence are not magic. You're biological hardware and software that a lot of people are spending a lot of time and effort on reproducing in a digital format.
This is (materialistic) nihilism. Materialism is a philosophy and not a new one, but an exceedingly sterile one, in my view. If you want to take that philosophical position you can, but others are free to reject it (and most people do) because it is only a philosophical position, not a proved description of reality (how can it be?).
That was very bad for the weavers. Well-earning middle-class jobs were replaced by toiling in the mills. It then took a few generations for the Factory Acts to impose a 10-hour maximum workday, etc.
The Luddites were trying to defend their livelihoods, communities, etc. It's the same rational calculation people are making now: how to survive.
Consider how expensive textiles were before weaving machines. People wore their one set of clothes until they disintegrated.
Fast forward to today. I received a solicitation for a charity in the mail the other day. They enclosed pictures of the poor kids. The kids were all wearing fashionable, spotless clothes in perfect condition.
They did?! I thought they just had to learn and adapt, eventually doing something else during a transition that was not very fast, while also enjoying the new, better, and cheaper products available to all.
Any source for those starvation deaths? I would like to learn more about what prevented them from simply doing what the survivors did.
And weaving machines have not fully trickled down to citizens.
You cannot easily buy a weaving machine (there are some second-hand ones) or easily go to your local maker space and use the weaving machine to create the design you desire. Open source in the space of textile making is in its infancy, even though there are some projects. I bet it is easier to get a low-volume tape-out of some custom chips than it is to get a custom roll of textile. (You can get printing, but that's not the same thing.)
Textiles have become way cheaper and both higher quality (when demanded) and lower quality (cost saving fast fashion) and available in far higher quantities.
That medieval-era technology can still be manufactured at an individual scale was not my point.
My point was that access to the technological advancement has not trickled down and that this creates an imbalance of power.
Compare how easy it is to get a custom textile (not a custom print) made to how easy it is to get a custom PCB made (it is reasonably easy to etch a double-sided board, and multi-layer and flexible ones can easily be ordered online). The situation with regard to knitting is somewhat better.
Saying that "a basic loom isn't hard to make" in a world of high-speed air-jet digital looms is equivalent to saying that perf-board still exists in a world of SMD components.
This hoary take irks me. There were still places for human endeavour to go when the looms were automated.
That is no longer the case.
Think of it instead as cognitive habitat. Sure, there has been habitat loss in the past, but those losses have been offset by habitat gains elsewhere.
This time, I don't see anywhere for habitat gains to come, and I see a massive, enormous, looming (ha!) cognitive habitat loss.
--
EDIT:
Reply to reply, posted as edit because I hit the HN rate limit:
> Your job didn't exist then. Mine didn't, either.
Yes, that was my point. New habitat opened up. I infer (but cannot prove) that the same will not be true this time. At the least, the newly created habitat (prompt engineer, etc.) will be minuscule compared to what has been lost.
Reasoning from historical lessons learned during the introduction of TNT was of course tried when nuclear arms were created as well. Yet lessons from the TNT era proved ineffective at describing the world that was ushered into being. Firebombing, while as destructive as a small nuclear warhead, was hard, requiring fantastic air and ground support to achieve, whereas dropping nukes is easy. It was precisely that ease of use that raised the profile of game theory and Mutually Assured Destruction, tit-for-tat, and all the other novelties of the nuclear world that were absent from the one it supplanted.
Arguing from what happened with looms feels like the sort of undergrad maneuver that makes for a good term paper, but lousy economic policy. So many disanalogies.
> This prediction has occurred with every technology revolution. It hasn't been borne out yet.
So what? You are performing 'induction from history', which is possibly the hand-waviest possible means of estimating what is next to occur.
Discontinuities occur. Fire gets tamed. Alphabets get invented. What went before is only a solid guide to the future absent any major disruption to the status quo. There is no a priori reason to think that this time will be the same, either. The burden of proof is yours.
> It's a variation of the broken window fallacy.
I appreciate parsimony as much as the next academic but I'd appreciate you fleshing out your position here, so I can take it apart at the joints, in the custom and manner of my people >:)
You still haven't stepped away from historical induction -- your argument still depends on this time not being radically different than last time. There are good reasons -- presented everywhere, right now -- to suppose that this time is substantively different. Sundar Pichai called the invention of AI the most important thing humanity has worked on -- more important than fire, or the alphabet -- and I share his view. It's out there, commonly, in the intellectual wild; you cannot, on pain of being unconvincing, simply ignore it. "Big, if true," and it very well might be. https://www.youtube.com/watch?v=sqd516M0Y5A
I propose that you invest in a more convincing line of argument. The burden of proof lies heavy upon you.
Secondly -- for the life of me -- I don't see how we got from "prosperity doesn't come from jobs that are little more than make-work" (a claim, by the way, that Keynes would take exception to) to the view that automating most of the intellectual work on the planet will have nugatory impact, or that we'll all just vie to become celebrated Twitch streamers or influencers or whatever (assuming that synthetic influencers don't take off -- oh, wait, they did: https://www.synthesia.io/glossary/ai-influencer)
Even were you correct (and Keynes wrong), the instantaneous conversion of meaningful labour -- journalism, counselling, engineering -- into, as you say, "make-work" (the position I infer you are taking) would have tremendous cost.
At minimum, the psychological impact of such a transition would make the developed world's COVID hangover look like a day at the zoo.
Finally, the Parable of the Broken Window specifically refers to destructive work. Non-productive work is not covered. https://finshots.in/archive/dig-holes-and-get-paid-to-fill-t.... And that is to say nothing about how economic fruits are distributed -- a whole other matter, upon which, I again infer, you have no further comment.
Allow me to indulge in my own historical induction.
Up until Louis Pasteur established the germ theory of disease, it was broadly understood, across many different cultures, that disease had its origins in one or all of: witchcraft, possession, loose morals, blocked meridians, etc.
Were you to perform historical induction on theories of illness, you might well have concluded that no theory of illness would ever be scientifically verifiable. You might have argued that anyone claiming a radical change in medicine was deluded, alarmist, or simply excitable.
And you would have missed out on the multiple decades of extra health that you've had on account of antibiotics, sterile procedure, and disinfectant. Your induction from history would have caused you to miss the disanalogy.
Something to think about next time you're at the doctor.
Breaking a window and then fixing it is exactly non-productive work, as is doing work that is far better done by machine.
As to distribution of economic fruits, as I mentioned before, replacing labor with machines made the US the most prosperous country in history, along with the richest poor people.
> Non-productive work is not destructive work -- again, Keynes ditches.
Sorry, but breaking a window and then fixing it is non-productive.
> Your country, the USA, is very close to system collapse because of its inequal distribution of fruits.
Hardly. If the US will collapse, it's because of the current leftist swing of the government engaging in ever-increasing wealth redistribution.
> I daresay it makes good popcorn-time.
It's the equal distribution of income countries that repeatedly collapse. France is in the news currently because they've discovered that the math of redistribution does not work, and the people who cannot accept the math are rioting.
Competition still has potential for infinite growth. Even if AI is better than humans at everything, humans will be finite and will likely be better at making people with money feel important. Potentially the future economy is everyone just competing to make the wealthy feel important, whether fighting their wars, worshiping at their cults, or working at their “startups”.
You joke, but an economy that is 97% artists (aka content creators) sounds... good? Isn't this the utopian end goal after we automate the scarcity out of our lives?
Have you seen some of that content? This sounds like a level in Dante’s Inferno: all day, every day, all “these” people (and myself, probably) going blah blah blah into the ether. Navel-gazing to the extreme.
In theory it's great, in practice... who knows. The cynic in me would expect it to go worse than anyone could ever imagine. If everything is automated, why do you still need humans?
Horizon: Zero Dawn was so compelling (to me) because of the outright horrifying plausibility of its premise: a gassed-up tech CEO, convinced that software safeties were infallible, unleashes the consequences of his hubris upon the whole of humanity.
Ted Faro was a horrible human being blinded by delusions of grandeur, but he wasn’t “evil” - he was even convinced he was saving humanity by ending the threat of both war and climate change.
I don’t see GPT itself as representing a new Faro Plague, but I do see a lot of wannabe Ted Faros making the decisions at the top.
If LLMs come even close to achieving their short-term potential, we’re unleashing a bigger destabilizing force on the world than the smartphone/social media combo - and the world of 202x seems blatantly incapable of absorbing that level of disruption.
I saw a stream the other day that was just the output of an AI trained on a popular streamer’s past streams. It would select a random clip for video and respond to viewers’ comments in the voice of the streamer. It even superimposed roughly corresponding lip movements on the video.
I've listened to some popular podcasters. Over time, they all run out of material and their newer podcasts are just rehashes of the old ones. I suppose AI will take over that job!
Literally everything you do online is training data. This comment and discussion is future training data. Your browser history is logged somewhere and will be training data. Your OS probably spies on what you do...training data. It's training data all the way down. And they've hardly begun to take into account the physical world, video, music, etc. as training data.
Also what happens to the intuition and unwritten skills that humans learned and passed on over time? Sure, the model has probably internalized them implicitly from the training data. But what happens in a case where you need to have a human perform the task again (say after a devastating war)? The ones with the arcane knowledge are gone, and now humans are starting from scratch.
Incredible that we've been writing speculative fiction about this for decades and still we sleepwalk right into it. I'd love to be wrong, but I think we're all still too divided and self-interested for this kind of technology to be successfully integrated. A lot of people are going to suffer.
It’s not just sci-fi. It has already happened in the past with construction. Things like the pyramids and certain cathedrals and whatnot are no longer possible even with machines. At least this is what I’ve read and heard; I’m not actually an engineer or architect.
Tangent, I’m looking for some sci fi about this topic. Any suggestions?
No. Things like Greek fire or Roman cement aren't possible, because we don't know the precise mixture or formulation involved. The old descriptions are so vague that we don't know how to reproduce them.
But we can technically make much better waterproof concrete or whatever; our incentives just aren't aligned in the same ways anymore.
Here's a tangential link to monks building a Gothic cathedral with modern machines: https://carmelitegothic.com/
Presumably this problem is solved with technology improvements or the need is recognized to hire experts capable of generating high quality training material. In either situation, there's going to be extreme discomfort.
There is a problem, though: how will people become experts in the field? If all entry-level positions are taken by AI, nobody will be able to become an expert.
GPT is good because of collective knowledge, lots of data. What do you have in mind by "hire experts"? Isn't that what we have now? Many experts in many fields, hired to do their work. Cut this number down and you reduce training data.
Let's assume that GPT eliminates an entire field of experts, runs out of training data, and whoever is at the helm of that GPT program decides that it's lucrative enough to obtain more/better data. One alternative is subsidizing these experts to do this type of work and plug it directly into the model. I don't expect the nature of the work to change, more likely it's the signature on the check and the availability of the datasets.
It's important to note however, that GPT does not itself have any knowledge, only information. Knowledge implies it has comprehension or understanding. It can just as easily produce bad information as good and it has little to no ability to self-assess the accuracy of information it provides.
You also may underestimate how quickly AI could pass expert level. The experts out there still have many years of life left, so they won't be disappearing soon. But if we get self-improving, self-training AI sooner rather than later, then humans won't be the experts.