It is absolutely not ok for "some people to want to hurt" someone who is running a company that is vying for contracts from a democratically elected government's defense department.
It's also ok to protest that, to boycott it, or to refuse to work for or with them over it. But escalating that to physical violence is not ok, nor should people be "confused that he seems surprised he is now in physical danger".
(As an aside, from the statements I've heard so far, it seems the person was more an anti-AI, anti-tech person than anti-war.)
I don't think anyone is saying this is justified. But that doesn't mean it's not going to happen, and I can understand why people would do this, especially people pushed beyond the limits they can endure.
Right now we have a huge imbalance in the world and more situations like this are going to manifest as we slide further and further into authoritarianism.
Firstly, True Names is an awesome read, and the real origin of cyberpunk. I much prefer it to Neuromancer or Diamond Age.
Secondly, I recently tried to work out which year's Top500 list[1] I could plausibly place on for around US$5000. It's surprisingly difficult to work out, mostly because they report double-precision (64-bit) FLOPS and few other systems quote that number.
Jeff Geerling made a $3000 raspberry pi cluster and shared the linpack scores, so I looked at when it’d hit different spots in the top500 list. He’d have won from ‘93 to June ‘96, and then been knocked out of the top 10 in November ‘97.
That’s with a pretty substantial constraint, making it out of Raspberry Pis, and a lower budget. With $5000, and your pick of chips… I bet you could hit the turn of the century…
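As a back-of-the-envelope way to frame the comparison, here's a tiny sketch that takes a measured HPL Rmax and checks it against a few historical #1 machines. The historical numbers below are approximate, from memory of published Top500 lists, and the $5000-cluster figure is purely hypothetical:

```python
# Rough sketch: given a measured HPL Rmax (in GFLOPS, double precision,
# which is what Top500 rankings use), find the last sampled year on which
# the machine would still have taken the #1 spot. Values are approximate.
TOP500_NO1_GFLOPS = {
    1993: 59.7,    # CM-5/1024 (June '93), ~59.7 GFLOPS
    1997: 1068.0,  # ASCI Red, first machine past 1 TFLOPS
    2000: 4938.0,  # ASCI White, ~4.9 TFLOPS
}

def last_year_at_number_one(rmax_gflops):
    """Return the latest sampled year whose #1 this Rmax beats, or None."""
    beaten = [y for y, r in sorted(TOP500_NO1_GFLOPS.items()) if rmax_gflops > r]
    return beaten[-1] if beaten else None

# A hypothetical $5000 build measuring ~2 TFLOPS (2000 GFLOPS) of FP64 HPL
# would beat the '93 and '97 leaders, but not the 2000 one:
print(last_year_at_number_one(2000.0))
```

This only samples a few lists, but it makes the gap concrete: the hard part is the measurement (getting an honest FP64 HPL number for consumer hardware), not the lookup.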
Isn't the Diamond Age something like post-cyberpunk already?
It came out three years after Snow Crash, which already ironically referenced "The sky above the port was the color of television, tuned to a dead channel".
I agree that Neuromancer wasn't a great novel, though it obviously had vibes that resonated with many people. That the novel was otherwise a bit of a dud actually speaks to how strong the vibes had to be to overcome that.
I feel that's a bit uncharitable. It wasn't just vibes; it was imaginative world building, with some truly interesting and novel concepts, tied into a decent enough story to enjoy the world within.
As with much from this thread of cyberpunk writing, the cities and world are the most important characters, and the storyline is just an excuse to wander through their streets.
That's true. I read Neuromancer pretty late, already well primed on the terms of art, which smoothed that over a bit. But a lot was still left to the imagination.
The price movement is the indicator that there is insider information.
Of course there are lots of problems with this theory: in large markets a single trader has to make large bets to move the market, and under the current leadership the price also moves by large amounts unpredictably based on the latest statements.
But the mechanism itself makes sense.
It's unclear if that's a good thing. Of course some people know secret information beforehand. Is disclosing that always good?
Another problem is that people may actually want to bet on random outcomes, because of money laundering or simply because this is how gambling essentially works. That huge account could be an insider, or a billionaire with a few hundred k to burn. Or maybe they want to steer people’s opinions towards a certain outcome.
Claiming that price movement in a prediction market reveals some amount of truth implicitly assumes that:
- people bet on something they believe to be true, and not to sway other people’s opinions or simply to burn money,
- people bet on something they believe to be true because they have specific private information (e.g. I bet on the Red Sox not because I think they’re good but because I know things others don’t about their opponents, their physical condition and so on),
- their belief is actually correct (e.g. if I’m in the CIA and I know that the Soviets are about to launch a nuclear missile I can bet on it… but I don’t know that an officer down the line will refuse to launch).
Even if this were true, there is an issue of timing and consequences. Example: imagine it’s 2011 and some CIA or DoD officer makes huge, sudden bets that Bin Laden will be caught. Some AQ people get wind of this and move Bin Laden somewhere else. Congrats, your price movement signaled non-public information to the market!
Another issue is that these bets tend to rely on public sources, news reports and so on. A journalist in Israel was threatened into changing his news reports so that certain people didn’t have to lose on a prediction market. This could become more and more common, and with the advent of AI-generated pictures, who are you going to believe? Are you losing money because you bet on the wrong outcome, or simply because someone with enough resources ensured that your outcome was never going to be reported?
> Congrats, your price movement signaled non public information to the market!
so from bin Laden's perspective, this would've been a good outcome, wouldn't it?
Can't say what a good outcome is without saying for whom.
What if enemies of the USA had corrupt generals who also make bets on anti-US actions to profit personally, and inadvertently reveal information to the CIA/NSA, who then prevent such anti-US actions? Would that not have been a good outcome as well?
Information is information, and one cannot say if it's good or not. However, I am a believer that more information generally does more good than harm, assuming the consumer of said information is smart.
> Are you losing money because you bet on the wrong outcome ...
It doesn't matter, because you chose to bet. You do not need to bet in order to make use of the information being revealed by those who are betting.
> so from bin Laden's perspective, this would've been a good outcome, wouldn't it?
Of course
> Information is information, and one cannot say if it's good or not. However, I am a believer that more information generally does more good than harm, assuming the consumer of said information is smart
Smart doesn’t always equal good.
The consumer can be smart and use the information to benefit themselves (and possibly harm others), but this doesn't necessarily justify releasing information. In fact, even Snowden, who famously released a lot of information, didn't release everything. He applied his judgment and avoided publishing some material. Was his judgment correct? I don't know. The question is: at some point, is releasing information always neutral?
> Are you losing money because you bet on the wrong outcome ...
It doesn't matter, because you chose to bet. You do not need to bet in order to make use of the information being revealed by those who are betting.
What I’m saying is, if I bet on event X and X happens, I would expect to be paid. Instead I may not get paid simply because someone who bet against X has the power to suppress any proof of X happening (via threats, money, …).

This doesn’t happen with regular sports bets because sporting events inherently have a lot of witnesses physically present where things are happening; there are referees, the teams themselves advertise the results, there is a professional league keeping score, and so on. But if you bet on someone getting killed abroad by some military, or on a skirmish happening in a remote place, or on some other plausible but hard-to-verify event, faking something with AI or a friendly reporter is easier. And because people use cryptocurrencies on these platforms, how can you prove active manipulation versus a good-faith mistake in some video a reporter published? “Hey, I just saw this video, who knew it was wrong?”
The argument that you can lose money simply because it’s a bet, even when you should have won, is not convincing. Ok, I can lose, but if I win, shouldn’t I get the money?
But that’s not the claim people bandy about on the gambling markets.
They claim it incentivizes sharing of info, but what you’re saying is that it only shares meta-info: the info itself remains secret, and the wagers merely reveal that insider info might exist, not what the info is.
Honestly, this has all the same smell as NFT justifications. I’d be surprised if the main touts of prediction markets weren’t previously touting NFTs and “smart” contracts. Actually, it even seems like the markets are inspired by those.
> Meta’s new foundational A.I. model, which the company has been working on for months, has fallen short of the performance of leading A.I. models from rivals like Google, OpenAI and Anthropic on internal tests for reasoning, coding and writing, said the people, who were not authorized to speak publicly about confidential matters.
> The model, code-named Avocado, outperformed Meta’s previous A.I. model and did better than Google’s Gemini 2.5 model from March, two of the people said. But it has not performed as strongly as Gemini 3.0 from November, they said.
> They added that the leaders of Meta’s A.I. division had instead discussed temporarily licensing Gemini to power the company’s A.I. products, though no decisions have been reached.
If you are trying to come up with anti-media conspiracies, there are always plenty of ways to do it against any media company.
The idea that the NY Times is particularly anti-Meta seems a stretch. Like most traditional media companies, they are anti-tech in general. The fact that they also collect data doesn't make their reporting untrue.
Personally I think a much more interesting rumor to make up would be that Yann LeCun (who famously had his reporting lines rearranged to go through Alexandr Wang after the Scale.ai acquihire) works at New York University.
New York University is in the same place as the New York Times.
There's a conspiracy for you. I made it up, but I mean it could be true I guess?
(Of course Lecun also publicly congratulated Wang on the launch of the model. But maybe that's a ruse to hide everything.. blah blah)
> They were ~gpt4o, with the added benefit that you could run them on premise.
No, they are bad models. They were benchmaxxed on LMArena and a few other benchmarks, but as soon as you try them yourself they fall to pieces.
I have my own agentic benchmark[1] I use to compare models.
Llama-4-scout-17b-16e scores 14/25, while llama-4-maverick-17b-128e scores 12/25.
By comparison gemma-4-E4B-it-GGUF:Q4_K_M scores 15/25 (that is a 4B parameter model!) - even GPT3.5 scores 13/25 (with some adjustment because it doesn't do tool calling).
> And to be clear, OpenAI/Anthropic most definitely know this: that's why they've been aquihiring like crazy, trying to find that one team that will make the thing.
Anthropic is up to $30B annual recurring revenue. I wish I had failing business models like that.
> Token prices are significantly subsidized and anyone that does any serious work with AI can tell you this. Go use an almost-SOTA model (a big Deepseek or Qwen model) offered by many bare-metal providers and you'll see what "true" token prices should look like.
I'm not sure what you are saying here, but if you look at the providers for an "almost-SOTA model (a big Deepseek or Qwen model)", or at the price for Claude on AWS Bedrock, Azure, or GCP, you will quickly see inference is very profitable.
Anthropic has raised $64B in total since they were founded.
Even if you measure profit in the very special Hacker News way of setting money taken in from customer revenue against money invested, and you say they can't count things like building data centers or buying GPUs as capital expenses but instead have to count them against profit, then in two years' time they will have made more money than they have taken in investment.
The proverbial "50B" is investment in next year's model. The current model cost under "30B", and therefore "is profitable". It is a bet on scaling, yes, but that's been common throughout the industry (see, eg, Amazon not being profitable for many years but building infrastructure)
> If every year we predict exactly what the demand is going to be, we’ll be profitable every year. Because spending 50% of your compute on research, roughly, plus a gross margin that’s higher than 50% and correct demand prediction leads to profit. That’s the profitable business model that I think is kind of there, but obscured by these building ahead and prediction errors.
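The quoted claim can be checked with toy arithmetic. This sketch assumes, as the quote does, that research compute is sized equal to serving compute (50% of total) and that gross margin applies to serving; all numbers are made up for illustration:

```python
# Toy model of the quoted claim: if serving compute matches actual demand,
# research takes 50% of total compute, and gross margin on serving exceeds
# 50%, the year ends in the black.
def annual_profit(revenue, gross_margin, research_share_of_compute=0.5):
    # cost of serving the demand that generated the revenue
    serving_cost = revenue * (1 - gross_margin)
    # research compute sized relative to serving compute
    # (at the default 0.5 share, research cost equals serving cost)
    research_cost = serving_cost * research_share_of_compute / (1 - research_share_of_compute)
    return revenue - serving_cost - research_cost

# 60% gross margin: roughly 100 revenue - 40 serving - 40 research = 20 profit
print(annual_profit(100.0, 0.60))
```

At exactly 50% margin the model breaks even (serving and research each consume half of revenue), which is why the quote needs margin strictly above 50%; mispredicted demand shows up as serving compute that earns no revenue, which this toy model deliberately ignores.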
You're missing the forest for the trees. Per-token pricing is irrelevant when you're just trying to get shit done. I pay 20 bucks a month for OpenAI, but I likely use $200+ a month of tokens just on coding (and that's only counting the raw tokens, ignoring all the harnessing on their end). Even OpenAI has said they're losing money on the $200 subscriptions[1]. This is not a viable business model. Why do you think they are introducing ads this year[2]?