It's ridiculous to call this tulips, in the sense of a speculative asset whose price depends on resale. A closer recent example is the dotcom boom and bust built on internet infrastructure, or the 2008 crash, which was based on cyclical infrastructure overinvestment. Those crashes were characterized by demand growth not keeping up with investment because the target markets were tapped out. It's not clear when we'll get there with AI. The consumer market seems saturated on chatbots, but we're not even close to saturated for B2B or self-driving, for example. And this discounts other new technological offerings which may unlock larger consumer markets (products people are willing to pay $100 a month for instead of $10 or $20).
All that said, the dotcom boom is extremely analogous, and that crash was quite bad.
The dotcom boom was maybe $100B a year, focused on the US and mostly VC money. AI is perhaps $250B of global VC (with more than half of ALL VC concentrated in one sector) and another $800B+ from non-VC sources. These numbers are basically a guess, but structurally we are set up for something much, much worse.
But unlike the dotcom boom, demand for tokens has not let up; it keeps increasing. I don't know where it lands; companies certainly don't get it exactly right and they either over- or under-build. But at the current rate of demand growth, it's hard to understand why you would stop building today.
Demand for tokens exists, yes. On one end you have huge demand for infinitely subsidized tokens so that people can post a "unique" illustration on social media, and increasingly the text of the post itself.
On the other end you have professionals happy to pay a subscription for heavier use, building something in the hope of selling it.
I figured out I don't buy the value story when my dad explained to me that his mate fired his team once he realised he could just pay 20 bucks for a Gemini account and run his business himself. I asked, do you call this value add? He said it must be, since he can produce the same output with no staff.
There is a confusion between profiting from a circumstance and value creation.
You create value if, say, you cure a disease. That it takes you an army of staff, or that you extract maximum profit from it, is just a matter of feasibility.
That you make the cure more affordable is value creation.
That you cure the same disease but increase your profit doesn't create any value, except to yourself, for a while.
Maybe you don't, but it's fairly obvious that a lot of things are changing and moving.
Maybe your dad's mate didn't have to expand his business; good for him. Other businesses are expanding because they now can.
Will the positives outweigh the negatives? Not necessarily, but going "it's tulips" is the kind of argument so devoid of nuance that we shouldn't be having it on HN.
The overwhelming demand for tokens isn't coming from people wanting a unique illustration; it's coming from professional usage. In fact, I'm not even sure who is being subsidized. The $20 subscription surely isn't being fully used by everyone who pays for it.
I think the discrepancy here is that almost all these crashes would not have resulted in an insurance claim, e.g. backing into a pole at 1 mph -- this is not enough damage to report for an average driver.
That said, really bad numbers for an autonomous system which is supposed to be way better than humans.
It depends on what part of the car is crumpled, dented, scratched, or misaligned and what your deductible is. It doesn’t take much body work to hit $250, $500, or even $2000.
A hyperbolic curve doesn't model an underlying process; it's just a curve that goes vertical at a chosen point. It's a bad curve to fit to a process. Exponentials make sense for modeling a compounding or self-improving process.
I read TFA. They found a best fit to a hyperbola. Great. One more data point will break the fit, because it's not modeling a process; it's assigning an arbitrary blow-up point. Bad model.
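To make that concrete, here's a rough sketch (my own made-up numbers, not the article's data) of fitting both a hyperbola y = a/(t0 - t) and an exponential to the same noisy series. The hyperbola's blow-up date t0 is just another free parameter of the fit, so a single extra observation can shift it:

    # Toy comparison: hyperbolic vs. exponential fit to made-up data.
    # The "singularity date" t0 is only a fit parameter, not a prediction.
    import numpy as np
    from scipy.optimize import curve_fit

    def hyperbola(t, a, t0):
        return a / (t0 - t)

    def exponential(t, a, b):
        return a * np.exp(b * t)

    rng = np.random.default_rng(0)
    t = np.arange(2015, 2025, dtype=float)                 # hypothetical years
    y = 1.5 * np.exp(0.4 * (t - 2015)) * rng.lognormal(0, 0.1, size=t.size)

    (ha, ht0), _ = curve_fit(hyperbola, t, y, p0=[1.0, 2030.0])
    (ea, eb), _ = curve_fit(exponential, t - 2015, y, p0=[1.0, 0.3])
    print(f"hyperbola: blows up around t0 = {ht0:.1f}")
    print(f"exponential: doubling time = {np.log(2) / eb:.2f} years")

    # Add one more noisy observation and watch the blow-up date move.
    t2, y2 = np.append(t, 2025.0), np.append(y, 1.5 * np.exp(0.4 * 10) * 1.3)
    (_, ht0b), _ = curve_fit(hyperbola, t2, y2, p0=[1.0, 2030.0])
    print(f"with one extra point, t0 = {ht0b:.1f}")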
This is an excellent example to illustrate an S-curve. There is a certain amount of energy in a photon; it cannot be emitted with less energy. There is a 100% efficiency barrier that cannot be surpassed no matter how smart you are.
Sure, but the technology lifetimes and adoption rates have compressed exponentially despite that.
Efficiency is not the only relevant metric, there's also cost, flexibility, lifetime/durability, CRI, etc...
For example, OLEDs are (literally) flexible, but burn out faster than LEDs and are less efficient.
As another example, the light sources for televisions have undergone nearly annual changes! They started with CCFL backlights, then side illumination with white LEDs, then blue LEDs with quantum dots, OLED panels, backlights as controllable grids of LEDs, mini-LED, micro-LED, RGB micro-LED, etc.
We're up to something like 10K dimming zones with the latest TCL panels and 100K is just around the corner.
If you want to light an indoor room to be as bright as the outdoors on a sunny day, you're going to need a lot of heavy, expensive equipment that produces a lot of waste heat (LEDs produce way less than incandescent, but still a significant amount). It's also not going to be a full continuous spectrum of light the way that sunlight is.
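For a rough sense of scale (my own back-of-the-envelope numbers, not from the thread): direct sunlight is on the order of 100,000 lux, so matching it over even a modest room takes well over ten kilowatts of LED lighting, nearly all of which ends up as heat in the room:

    # Back-of-the-envelope: lighting a hypothetical 20 m^2 room to full-sun levels.
    SUNLIGHT_LUX = 100_000        # typical direct-sun illuminance
    ROOM_AREA_M2 = 20             # hypothetical room size
    LED_EFFICACY_LM_PER_W = 150   # decent modern LED fixture

    lumens_needed = SUNLIGHT_LUX * ROOM_AREA_M2           # 2,000,000 lm
    watts_needed = lumens_needed / LED_EFFICACY_LM_PER_W  # ~13 kW
    print(f"~{lumens_needed:,} lm, roughly {watts_needed / 1000:.0f} kW of LEDs")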
I think we can stop building new streetlights at the moment we have full daylight illumination on the visible spectrum 24x7 in urban areas. We’ll probably settle for much less and be happy with that.
If we need more light, we can deploy more power generators.
I think the mistake here is missing that there is a certain rate of progress beyond which humanity can no longer even collectively process it, which is effectively equivalent to infinite progress. That point is the singularity, and it requires non-human-driven progress; full automation is a prerequisite for reaching it. We may or may not get there. We may hit a hard wall and devolve to an S-curve, hit a maximum linear progress rate, hit a progress rate bounded by population growth and human capability growth (a much slower exponential), or pass the 1/epsilon slope point where we throw up our hands (the singularity). Or have a dark age where progress goes negative. Time will tell.
I think we are on the cusp of it, and that growing sense of chaos, acceleration, and fear, combined with a gravitational attraction towards it, is the beginning.
It's amusing to me that in the 90s you could easily play Quake or Doom with your friends by dialing their phone number with your modem, whereas now setting up any sort of multiplayer essentially requires a server unless you use some very user-unfriendly NAT busting.
Glad you mentioned DOOM! Sometimes people forget that DOOM supported multiplayer as early as December 1993 via a serial line, and February 1994 for IPX networking. 4-player games on a LAN in 1994! On release, TCP/IP wasn't supported at all, but as the Internet took off, that was solved as well. I remember testing an early-ish version of the third-party iDOOM TCP setup driver from my dorm room (10BASE-T connection) when I was supposed to be in class, and it was a true game changer.
What was even more amazing is you could daisy chain serial ports on computers to get multiplayer Doom running. One or more of those links could even be a phone modem.
Downside is that your framerate was capped to the person with the slowest computer, and there was always that guy with the 486sx25 who got invited to play.
Someone tried running that in one of the campus computer labs when I was a student, and the (probably misconfigured) IPX routers amplified it into... a campus-wide outage. Seems weird to me, but that's what the big sign on the door said the next day.
You usually just need to forward a port or two on your router. That gets through the NAT because you specify which destination IP to forward it to. You also need to open that port in your Windows firewall in most cases.
Some configuration, but you don't have to update the port forwarding as often as you would expect.
The reason you can't just play games with your friends anymore is that game companies make way too much money from skins and do not want you to be able to run a version of the server that doesn't check whether you paid real money for those skins. Weirdly, despite literally inventing loot boxes, Valve sometimes doesn't suffer from this: TF2 had a robust custom-server community that had dummied out the checks so you could wear and use whatever you wanted. Similar to how Minecraft still lets you turn off authentication so you can play with friends who have a pirated copy.
Starcraft could only do internet play through battle.net, which required a legit copy. Pirated copies could still do LAN IPX play though, and with IPX over IP software you could theoretically do remote play with your internet buddies.
By the way, this is why bnetd is illegal to distribute and was ruled such in a court of law: authenticating with battle.net counts as an "effective" copy protection measure under the DMCA, and providing an alternate implementation that skips that therefore counts as "circumvention technology".
Multiplayer started with Doom 2. The original Doom was single-player only. Doom 2 supported 4 players, which I used in my mod ArsDoom. Quake then extended it to scale via a dedicated Quake server.
Hamachi and STUN were what I was thinking of when I referred to user-unfriendly NAT busting. It's true that these are not much harder to get working than a modem, but they don't match up with modern consumer expectations of ease-of-use and reliability on firewalled networks. It would be nice if Internet standards could keep up with industry so that these expectations could be met. It's totally understandable where we've landed due to modern security requirements, but I still feel something has been lost.
Hamachi does not require you to open any ports on your firewall by nature, except maybe in the local firewall (likely the Windows firewall), which apps should automatically prompt for when they try to use a port.
I mean, internet standards kept up. IPv6 is a thing, and some form of dynamic IPv6 stateful firewall hole punching a la UPnP would be useful here. Particularly if the application used the temporary address for the hole punch--because once the address lifetime ends, it's basically not going to get used again (64-bit address space). So that effectively nullifies any longer term concerns about security vulnerabilities.
This is just not true. You can still write GTK2 or SDL apps, you just need to package your app for the target distro or open source it because it's an open-source-first ecosystem.
If you're looking for binary stability and want to ship your app as a file, ELF is extremely stable. If your app accesses files, accesses the network through sockets, and uses stable libraries like SDL or GTK, it will work fine as a regular binary and be easy to ship. People just don't want to write their apps in C, when the operating system is designed for that.
Many native apps like Blender, Firefox, etc. ship portable Linux x64 and arm64 binaries as tar.gz files. This works fine. You can also use Flatpak if you want automatic cross-platform updates, but yes, the format is unfortunately bloated.
It's not that easy to ship a JavaScript app on other OSes either, and Electron apps abound there too.
What does ELF being stable or people not writing apps in C have to do with Linux binary compatibility? No matter what language you use, it’s either dynamically linking to the distro’s libc or using Linux system calls directly.
Also, I recommend taking a gander at what the Linux build process/linking looks like for large apps that "just work" out of the box, like Firefox or Chromium. There are games they have to play just to get consistent glibc symbol versions, and basically anything graphics/GUI related has to do a bunch of `dlopen`s at runtime.
Flatpak and similar take a cop-out by bundling their own copies of glibc and other libraries, and then doing a bunch of hacks to get the system’s userspace graphics libraries to work inside the container.