I don't think AI is benefiting humanity when you consider:
- Its heavy use in military and surveillance engagements
- The billions spent, with no corresponding economic gains noted
- The pressure on white-collar jobs
The threat of AI far exceeds any benefits I can see.
Did something change? HN has always been very pro-AI until recently, and now it seems to have swung dramatically the other way. Not one comment even agreeing with me.
We focus these critiques far too much on the face rather than the underlying mechanics. Just like in politics, we critique the personality/politician while the underlying system architecture evades scrutiny.
Sam Altman clearly has a long history of nefarious activity. But the underlying threat posed by AI to society, the economy and human freedom persists with or without his presence.
> underlying threat posed by AI to society, the economy and human freedom persists
I would deny that AI poses any such threat. There are actors who would use the tool in ways that threaten as you described, but that is a threat from those actors, not from AI - unless you're claiming that an AGI would be capable of such independent action.
AI is similar in transformative power to the internet - it might even be greater, if it becomes more commonly available throughout the world. Whether that transformative power does good or bad really depends on the people wielding it, not on the tech. I would rather bet that the future is going to be better because of AI than imagine a worse future and act to stunt the tech.
> I would deny that AI poses any such threat. There are actors who would use the tool in ways that threaten as you described, but that is a threat from said actor, not AI
Of course, it is popular to deny it. People constantly tell themselves "it is people, not tech". That is a valid, yet banal and inconsequential statement. The distinction has no bearing on reality.
> So you're saying that if people hadn't invented weapons, there would be no violence?
If anything, had people not invented weapons, they would not use weapons to enact violence, and that in turn would change the practical nature of violence.
> The claim that AI is itself dangerous has no merit.
My claim is that considering any technology by itself is pointless. There is no such thing as a thing by itself. Technology always exists in a structural setting, and in turn shapes that structure.
Or perhaps, the underlying threat is personified by Altman, in that our country has repeated and widespread institutional failures to hold the wealthy accountable for wrongdoing.
The threat of AI is, after all, driven by the people who use it.
>But the underlying threat posed by AI to society, the economy and human freedom persists with or without his presence.
Without Sam Altman, the compute and the LLM improvements needed for them to be a threat wouldn't have existed at all. He was the one who got the ball rolling, driven by desperation (SVB collapsed right before the hype bubble started), ego, and quasi-religious desires.
Worth mentioning that Canadian PM Mark Carney is the ex-head of the Bank of England and has a long list of pro-UK/globalist affiliations. Given that the globalist-aligned states and territories are currently the most on board with progressing mass surveillance, it's sadly not a surprise.
It seems like at every technological step, we're sold the dream and delivered the meme. We always end up with the worst possible combination of players, ideas and outcomes, with the promised gains in freedom or free time never realised. How many more broken social contracts can society endure before it crumbles?
It's "socializing the losses and privatizing the gains"… but now alarmingly supercharged well beyond purely financial realms, and into really basic and fundamental matters of individual physical autonomy and liberty.
> How many more broken social contracts can society endure before it crumbles?
Having any kind of agency in those things would be a start.
If <FAANG bigcorp of your choice> announces with great fanfare "We're building this totally awesome new technology that will make everything better! And the best thing? You won't have to do anything, we will auto-update all your devices/accounts/etc with it for free! Trust us!", then whether you personally believe their enthusiastic predictions or not doesn't really matter a lot - you will get it anyway, unless you spend a lot of energy to deliberately avoid the new technology.
I felt compelled to write this email to 1password today:
Dear 1password,
Please stop trying to "innovate". I really like your password manager. That's all I want. I don't want "automatic watchtower AI phishing prevention" I just want a password manager that works across my devices. Make it simple, make it secure, and don't change it. You have a great product. Adding more features will only make it worse. If you keep this bullshit up I will churn.
From my understanding, we are pretty close to a dystopian world where all elites of a certain group collaborate to run a Super Leviathan. We still get to choose our flavors, which may not be feasible in maybe 5-10 years when those leviathans clash with each other.
Likewise, thank you for the recommendation. I obviously haven't read Goliath's Curse yet, but it seems like Joseph Tainter's The Collapse of Complex Societies (1988) might also be interesting for the same readers.
It's not like this is surprising; plenty of sci-fi books/movies have predicted this very thing. How many movies have the haves living above ground/off planet, while the have-nots live underground or are stuck on an apocalyptic planet?
This just furthers previous history. Currently, the lords have simply been able to keep the serfs appeased for longer. Every time, in history and in sci-fi, the serfs reach a breaking point and rise up.
You seem very confident. This seems to imply you feel the haves will know when to leave enough on the table for the have-nots to still feel like they are part of the haves. I'm not so confident in that.
People in technologically advanced societies have more than enough & the people who are not as advanced cannot do anything that will have any effect on the people who own the fighter jets, missiles, robot factories, & "internet" satellites. The current system has no historical precedent. It is very close to an almost perfect panopticon w/ an associated media & police apparatus to keep everyone docile & complacent. Like I said, this time is different.
Far more likely is that we head back to a feudal era where data mining tech is used to identify and eliminate potential rabble-rousers. Once enough production is automated, all remaining have-nots are exterminated.
The weak link is that for "the haves" to have, the "have-nots" are needed. Having or not having is just a comparison: a millionaire needs the poor in order to feel rich and special, because when everyone is special, nobody is.
It will instead eventually fall apart in more thoroughly destructive ways. But not until it first does a possibly unrecoverable (at least in the medium term) amount of damage to civilization, humanity, and life on Earth.
It's already crumbling. That's why we have AI-powered fascism in the first place. Society destabilizes and a significant fraction of the population says "perhaps authoritarianism is a good thing." It's never worth it, though.
The story here is that a FedRAMP-authorized system had 53MB of Vite dev source maps exposed on a production government endpoint. That's not "sold the dream, delivered the meme," that's a specific auditable compliance failure. Meanwhile a fintech engineer explaining that this is all standard legally-mandated KYC infrastructure got flagged to death. The interesting question isn't whether technology betrays us, it's why US law requires this surveillance apparatus in the first place and why the security assessment apparently missed checking for /vite-dev/ on a government system.
Also every technological step? Ever? Really? This wouldn't happen to be typed on a computer from a climate-controlled room on a nice global network or anything?
Except it wasn't a production endpoint, and there's no actual security risk in having source maps available. It's more annoying to read source code that has been minified, but if a security professional tells you that minifying source code increases security, you should wonder what other bullshit they've peddled to you.
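For what it's worth, whether maps ship at all is a one-line build setting. A minimal sketch, assuming a standard Vite project (this is an illustrative config, not the one from the system being discussed):

```typescript
// vite.config.ts - hypothetical example configuration
import { defineConfig } from 'vite'

export default defineConfig({
  build: {
    // false (the Vite default) emits no .map files at all;
    // 'hidden' emits maps for error-reporting tools but omits the
    // sourceMappingURL comment from the served bundles.
    sourcemap: false,
  },
})
```

So if full dev source maps ended up on a served endpoint, the question is less about this setting and more about why dev-server artifacts were deployed at all.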
I'm not a fan of persona and have gone out of my way to not provide my details to them even before this, and I really dislike Thiel, but... let's be honest about the stuff we're complaining about.
I think that's a natural outcome of a model where sociopaths climb to the top, with a layer of sycophants beneath them shielding normal workers from perceiving the amount of depravity going on at the top - which, if perceived, would make them unable to continue and would tank the business. AI might remove the reliance on regular folks and give sociopaths direct execution of every idea they have without any moral opposition, and that would explain a lot of the rush for AI everywhere we see nowadays.
I would be careful with this kind of reasoning, because it suggests corruption within a corporate model is inevitable, giving it implicit permission to continue existing. It's not inevitable.
I would suggest it is inevitable when the goal is to grow without end. The sociopaths buy the shares and push businesses to either become "evil" or get pushed out and taken over. It's what the current model leads to when there are no checks and balances.
Pursuing growth at all costs is inevitable though. If you don't continue to grow, you get superseded by entities that do. Goes for both countries and companies.
Communist countries like the Soviet Union and China have even had the explicit goal of outgrowing the US.
Yes. Local government has long failed to focus on its core mandate of base infrastructure, instead opting for vanity and ego projects like stadiums and convention centers.
Wellington in particular has had a string of divisive mayors. Simply google the previous mayor, Tory Whanau, for a never-ending list of controversy, incompetence and failure.
The previous socialist central government attempted to strip assets from the regional bodies and centralize them under a common scheme. It might have been successful; however, a lot of race-based ideology was peripherally injected into the process, which gave asset management an unaccountable and ultimately undemocratic race-based overlay and basically killed the idea (the central govt were voted out).
Central Government also has a fairly miserable history of asset management before privatisation. It's a multi-decade process of slow erosion and precedent.
The intrusion of government and the intrusion of identity politics seem to be the core issue. Failure to provide core services, failure to be competent - yet the conversation is almost always redirected towards "racism" and identity as the root attributes. We had no trouble producing high-quality, functional, well-managed assets before the arrival of modern identity politics. Bait and switch IMO.
There's a commercial product available from 6WIND that makes this much more supportable for mission-critical networks. It leverages DPDK and delivers excellent performance at scale.
Well, somebody has to go to jail if catastrophic decisions are made and you can't jail AI. We very often see CEOs being jailed in the real world, so the pay is actually a very fair compensation for the risk.
16GB is _not_ sufficient if you have Jellyfin or Immich or similar and a lot of media you want to scroll through quickly; I've found I need a lot of ZFS cache for that to be as responsive as I want, even with SSD storage.
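On Linux, the ZFS read cache (ARC) ceiling is a module parameter, so "a lot of ZFS cache" is something you can pin explicitly. A minimal sketch with illustrative numbers (24 GiB here; tune to your box):

```shell
# /etc/modprobe.d/zfs.conf - illustrative config fragment
# Cap the ZFS ARC at 24 GiB (value is in bytes) so metadata for a
# large media library can stay resident across scrolling sessions.
options zfs zfs_arc_max=25769803776
```

The same parameter can be changed live (as root) by writing the byte value to `/sys/module/zfs/parameters/zfs_arc_max`, which is handy for experimenting before committing the modprobe setting.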
Wonder when the TrumpDrone, "made in America", will be announced. Just like the TrumpPhone, no doubt it'll end up being made in China. The jokes really do write themselves.