Hacker News | pants2's comments

Removed the ring. Just watching three movies of Frodo and friends living life in Hobbiton.

Reminded me of "Garfield Minus Garfield" - https://garfieldminusgarfield.net/

"""

Garfield Minus Garfield is a site dedicated to removing Garfield from the Garfield comic strips in order to reveal the existential angst of a certain young Mr. Jon Arbuckle. It is a journey deep into the mind of an isolated young everyman as he fights a losing battle against loneliness and depression in a quiet American suburb.

"""


Removed the zombies. Just a guy in a sheriff's hat losing every group vote on where to camp next.

Removed the island. Now it's just a surgeon, a lottery winner, and a man carrying 40 knives all trying to get through TSA.

So what happens if the author ignores this judgement? Surely arbitration can't send someone to prison. According to the web they still need a court to even confirm a monetary penalty.

Per the article:

"facing fines of $50,000 for every statement that could be seen to be “negative or otherwise detrimental” to Meta"

> According to the web they still need a court to even confirm a monetary penalty.

No, not necessarily with arbitration. The judgement itself may need to be confirmed in some states; it likely already has.


While the D5 is a great camera, it's ~10 years old. Wonder why they didn't go for the Z9 which is its modern mirrorless equivalent.

"The Nikon D5 remains the camera of choice for the Artemis II mission and will be assigned primary photographic duties. It is a proven, highly-tested camera that the Artemis II team knows will excel in the high-radiation environment of space. However, as Artemis II Commander Reid Wiseman explained ahead of yesterday’s launch, he successfully fought to have a single Nikon Z9 added to Artemis II’s manifest."

https://petapixel.com/2026/04/02/a-nikon-z9-made-it-aboard-t...

There are more interesting details in the PetaPixel article, such as: "'That’s the camera that they’ll be using, the crew will be using on Artemis III plus, so we were fighting really hard to get that on the vehicle to test out in a high-radiation environment in deep space,' Wiseman said."

H/t to "SiliconEagle73" who linked to that PetaPixel article in the thread linked below.

https://old.reddit.com/r/nasa/comments/1sbfevm/new_high_reso...


> Wonder why they didn't go for the Z9 which is its modern mirrorless equivalent.

From [0], "The D5 was chosen for its radiation resistance, extreme ISO range (up to 3,280,000), and proven reliability in space."

[0] https://www.photoworkout.com/artemis-ii-nikon-d5-moon/


They did bring the Z9: https://petapixel.com/2026/04/02/a-nikon-z9-made-it-aboard-t...

But yeah the grainy photo of the Earth with the D5 at ISO 51200 shows the shortcomings of the ancient DSLR. Still, great shot.


I'd argue the D4s and D5 may be some of the best high-ISO cameras I'm aware of, maybe surpassed only by the D3s and that one Canon video camera that can seemingly see in the dark (sorry, I'm on mobile). I think the lower-numbered bodies produce nicer-looking max-ISO noise, but that's all preference. Sony has the A7S as well, but as with some of these, the overall resolution isn't extreme.

How does the age of the camera influence physics? The only thing that really helps would be increasing the aperture.

Lower noise sensors and better image stabilization for longer exposures

From what I recall reading, it's more or less: "we have established and validated processes for using the D5." It's less about getting the best possible photo, and more about making sure what they do take looks fine and doesn't waste a ton of time.

The D5 has flight heritage, to use the industry term.

It might be the newest thing on the ship.

Zero point in measuring camera sizes (or other sizes haha) when JWST is floating there.

Government budgets man…

I think it's gorgeous and reminiscent of the Sagrada Familia

Yes, you've described Tailscale + Exit Nodes + Tailnet that you invite your family to. Install Tailscale and enable some devices as exit nodes - it's pretty much as simple as that.

Doesn't this just look like another case of "count the r's in strawberry", i.e. not understanding how tokenization works?

This is well known and not that interesting to me - ask the model to use python to solve any of these questions and it will get it right every time.


It's not just an issue of tokenization, it's almost a category error. Lisp, accounting and the number of r's in strawberry are all operations that require state. Balancing ((your)((lisp)(parens))) requires a stack, count r's in strawberry requires a register, counting to 5 requires an accumulator to hold 4.

An LLM is a router and completely stateless aside from the context you feed into it. Attention is just routing the probability distribution of the next token, and I'm not sure that's going to accumulate much in a single pass.
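The stateful operations listed above can be made concrete in a few lines. This is just an illustrative sketch of the point, not anything from the paper: each task needs a small piece of mutable state (a depth counter standing in for the stack, an accumulator) that a single stateless forward pass has no equivalent of.

```python
def parens_balanced(s: str) -> bool:
    """Balancing parens requires a stack; for one paren type, a depth counter suffices."""
    depth = 0
    for ch in s:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:  # a closing paren with nothing open
                return False
    return depth == 0


def count_letter(word: str, letter: str) -> int:
    """Counting r's in strawberry requires a register/accumulator."""
    total = 0
    for ch in word:
        if ch == letter:
            total += 1
    return total


print(parens_balanced("((your)((lisp)(parens)))"))  # True
print(count_letter("strawberry", "r"))              # 3
```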


> An LLM is a router and completely stateless aside from the context you feed into it.

Not the latest SSM and hybrid attention ones.


Stateless router to router with lossy scratchpad is a step up, still not going to ask it to check my Lisp. That's what linters are for

It's not dismissible as a misunderstanding of tokens. LLMs also embed knowledge of spelling - that's how they fixed the strawberry issue. It's a valid criticism and evaluation.

The r's in strawberry presents a different level of task to what people imagine. It seems trivial to a naive observer because the answer is easily derivable from the question without extra knowledge.

A more accurate analogy for humans would be to imagine if every word had a colour. You are also told that certain sequences of different colours correspond to the same colour as that word. You are even given a book showing every combination to memorise.

You learn the colours well enough that you can read and write coherently using them.

Then comes the question of how many chocolate-browns are in teal-with-a-hint-of-red. You know that teal-with-a-hint-of-red is a fruit and you know that the colour can also be constructed by crimson followed by Disney-blond. Now, do both of those contain chocolate-brown or just one of them, how many?

It requires exercising memory to do a task that is underrepresented in the training data, because humans simply don't have to do the task at all when the answer can be derived from the question's representation. Humans also don't have the recall ability that LLMs need here, but working from the letter representation doesn't require that ability in the first place.
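The colour analogy can be sketched in code. The two-token split below is made up for illustration (it is not taken from any real tokenizer's vocabulary), and the `spellings` lookup table plays the role of the memorised "book of combinations":

```python
# The model "sees" opaque token IDs, not characters.
tokens = ["straw", "berry"]          # hypothetical token split, for illustration only
word = "".join(tokens)               # the character view humans work from

# From the character view, the answer is trivially derivable from the question:
print(word.count("r"))  # 3

# From the token view, the model must instead recall the spelling of each
# token and then aggregate -- an extra memorised lookup humans never need.
spellings = {"straw": "straw", "berry": "berry"}  # the memorised "book"
print(sum(spellings[t].count("r") for t in tokens))  # 3
```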


That’s what makes it a fair evaluation and something that requires improvement. We shouldn’t only evaluate agent skills by what is most commonly represented in training data. We expect performance from them in areas that existing training data may be deficient at covering. You don’t need to invent an absurdity to find these cases.

It's reasonable to test their ability to do this, and it's worth working to make it better.

The issue is that people claim the performance is representative of a human's performance in the same situation. That gives an incorrect overall estimation of ability.


I do think this is a tool issue. Here is what the article says:

> For the multiplication task, note that agents that make external calls to a calculator tool may have ZEH = ∞. While ZEH = ∞ does have meaning, in this paper we primarily evaluate the LLM itself without external tool calls

The models can count to infinity if you give them access to tools. The production models do this.

Not that the paper is wrong, it is still interesting to measure the core neural network of a model. But modern models use tools.
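A toy version of the tool-call idea makes the point: the model only has to emit a tool request, and the external computation is exact regardless of operand size (hence ZEH = ∞ for tool users). The `dispatch` shape and the "calculator" tool name here are hypothetical, not any specific vendor's API:

```python
import ast
import operator

def calculator(expression: str) -> int:
    """Evaluate a product of two integer literals, e.g. '123456 * 789012',
    by parsing it rather than eval()-ing arbitrary code."""
    node = ast.parse(expression, mode="eval").body
    assert isinstance(node, ast.BinOp) and isinstance(node.op, ast.Mult)
    return operator.mul(node.left.value, node.right.value)

def dispatch(tool_request: dict) -> int:
    """Route a model-emitted tool request to the matching external tool."""
    tools = {"calculator": calculator}
    return tools[tool_request["tool"]](tool_request["input"])

# The model's job reduces to producing this request; the multiplication
# itself is exact no matter how many digits the operands have.
print(dispatch({"tool": "calculator", "input": "123456789 * 987654321"}))
```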


So, the tools can count then?

Humans can fly, they just need wings!


It is academically interesting what pure neural networks can do, of course. But when someone goes to Claude and tries to do something, they don't care if it solves the problem using a neural network or a call out to Python. So long as the result is right.

More generally, the ability to use tools is a form of intelligence, just like when humans and crows do it. Being able to craft the right Python script and use the result is non-trivial.


Seems like it’s maybe also a tool-steering problem. These models should be reaching for tools to help solve factual problems; the LLM should stick to prose.

I think this is still useful research that calls into question how “smart” these models are. If the model needs a separate tool to solve a problem, has the model really solved the problem, or just outsourced it to a harness that it’s been trained - via reinforcement learning - to call upon?

Does it matter if the LLM can solve the problem or if it knows to use a resource?

There’s plenty of math that I couldn’t even begin to solve without a calculator or other tool. Doesn’t mean I’m not solving math problems.

In woodworking, the advice is to let the tool do the work. Does someone using a power saw have less claim to having built something than a handsaw user? Does a CNC user not count as a woodworker because the machine is doing the part that would be hard or impossible for a human?


It does matter because the LLM doesn’t always know when to use tools (e.g. ask it for sales projections which are similar to something in its weights) and is unable to reason about the boundaries of its knowledge.

Is your issue with math in this example the tediousness of the operations or a conceptual lack of understanding of how to solve them?

It has "outsourced" it to another component, sure, but does that matter?

What the user sees is the total behavior of the entire system, not whether the system has internal divisions and separations.


It matters if you’re curious about whether AGI is possible. Have we really built “thinking machines”, or are these systems just elaborate harnesses that leverage the non-deterministic nature of LLMs?

An "elaborate harness" that can break down a problem into sub-tasks, write Python scripts for the ones it can't solve itself, and then combine the results, seems able to solve a wide range of cognitive tasks?

At least in theory.


What's the difference? If the "elaborate harness" consists of a mix of "classical" code and ML model invocations, at what point is it disqualified from consideration as a "thinking machine"? Best we can tell, even our brains have parts that are "dumb", interfacing with the parts that we consider "where the magic happens".

Are you still talking about this paper? No tools were allowed in it.

Not really - I use Brave browser on iPhone, a simple app install, and it blocks ads extremely well, even on YouTube and Instagram.

1. Most AI datacenter plans and valuation are not tied to subscriptions, but from a more vague promise of "AGI," so this isn't likely to pop the bubble IMO (even if it does happen)

2. Historical precedent holds that governments are more likely to suppress rates to spur the economy during wartime.


Was Raycast bought by GitHub or something? Why would it be advertising for Raycast?

Brought to you by Wendy's.


Presumably you need to pay raycast once for a setup operation while you need to pay constantly for copilot. Why wouldn't you advertise for someone who makes you more money at the same time as advertising for yourself?

This is super cool, and fully agreed that dark patterns / performance issues in TurboTax are frustrating. That said, I'm probably not ready to delegate something that sensitive to AI.

What I'd love to see here is if you actually do use TurboTax, how does your final tax return differ from the vibe-coded one?

