We have speed pedelecs (45 km/h) where I live, with extra restrictions compared to normal e-bikes, which are capped at 25 km/h. In the countryside, 25 km/h just doesn't get you places fast enough.

> And I don't think I would want to share cycling lanes with people doing more than that either

The regulation I wish for is for speed pedelecs to be allowed to use cycle paths whenever the adjacent road has a speed limit above 50 km/h. Being on a 70-100 km/h road on a 45 km/h bike is needlessly dangerous when there's a usually empty bike path right next to it.


Also, I've heard there are tricks like showing up to the calibration with underinflated tires to eke out marginal speed gains.
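
Back-of-the-envelope, assuming the controller infers speed as wheel revolutions times the circumference stored at calibration (my assumption of how these limiters work; all numbers below are made up):

    # Sketch: how a circumference mismatch shifts the assist cutoff.
    # Assumes speed = wheel revolutions x configured circumference.
    CONFIGURED_CIRC_M = 2.10  # circumference recorded with underinflated tires
    ACTUAL_CIRC_M = 2.15      # rolling circumference at normal pressure
    CUTOFF_KMH = 45           # speed at which assistance legally cuts out

    # The controller under-reads true speed by the circumference ratio,
    # so assistance only stops at a slightly higher real speed.
    true_cutoff_kmh = CUTOFF_KMH * ACTUAL_CIRC_M / CONFIGURED_CIRC_M
    print(f"assist actually cuts out around {true_cutoff_kmh:.1f} km/h")  # ~46.1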

You've got to keep in mind that the primary goal of this statement is to avert invocation of the Defense Production Act.

He is trying to win sympathies even (or especially?) among nationalist hawks.


> moreso than humans

Citation needed.


Much of the field of artificial intelligence is oriented around the goal of a general reasoning machine comparable to a human. Many subfields are less concerned with this, but in practice, artificial intelligence is perceived to have that goal.

I am sure the output of current frontier models is convincing enough that some find it more human-seeming than actual humans. There is still ongoing outcry from users who had built a romantic relationship with GPT-4o after it was discontinued. However, I am not convinced that language models have actually reached the reliability of human reasoning.

Even a dumb person can hold consistent beliefs and apply them consistently. Language models strictly cannot. You can prompt them to maintain consistency according to some instructions, but you never quite have a guarantee; far less of one than you would have with a human holding those beliefs, or even a human following those instructions.

I don't have citations for the objective reliability of human reasoning. There are statistics about the unreliability of human reasoning, and statistics about the unreliability of language models that far exceed it. But both are subjective in many cases, and success or failure rates are no indication of reliability whatsoever anyway.

On top of that, every human is different, so it's difficult to make general statements. I only know from my work circles and friend circles that most of the people I keep around outperform language models in consistency and reliability. Of course that doesn't mean every human or even most humans meet that bar, but it does mean human-level reasoning includes them, which raises the bar that models would have to meet. (I can't quantify this, though.)

There is a saying about fully autonomous self-driving vehicles that goes something like: they don't just have to outperform the worst drivers; they have to outperform the best drivers for it to be worth it. Many fully autonomous crashes happen because the autonomous system screwed up in a way a human would not. An autonomous system typically lacks the creativity and ingenuity of a human driver.

Though they can already be more reliable in some situations, we're still far from a world where autonomous driving can take liability for collisions, and that's because they're not nearly reliable or intelligent enough to entirely displace the need for human attention and intervention. I believe Waymo is the closest we've gotten, and even they have remote safety operators.


It's not enough for them to be "better" than a human. When they fail, they also have to fail in a way that is legible to a human. I've seen ML systems fail in scenarios that were obvious to a human and succeed in scenarios a human would have found impossible. The opposite needs to be true for them to be generally accepted as equivalent: the failure modes especially need to be confined to cases where a human would also have failed. In the situations I've seen, customers were upset about the ML model's performance because the solution to the problem was patently obvious to them; probably more upset than in situations where the model failed and the end customer would have failed too.


That's not a citation.


It's roughly why I think this way, along with a statement that I don't have objective citations. So sure, it's not a citation. I even said as much, right in the middle there.


That’s because there’s no objective research on this. Similarly, there are no good citations to support your objection. They simply don’t exist yet.


Maybe not worth discussing something that cannot be objectively assessed then.


Then don't; all I did was offer my thoughts in a public comments section.


Pareto frontier is the term you are looking for.
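
For anyone unfamiliar: a point is on the Pareto frontier when no other point is at least as good on every objective. A minimal sketch in Python, with made-up sample points:

    # Keep only points that no other point dominates
    # (here: maximizing both coordinates).
    def pareto_frontier(points):
        return [
            p for p in points
            if not any(q != p and q[0] >= p[0] and q[1] >= p[1] for q in points)
        ]

    pts = [(1, 5), (2, 4), (3, 3), (2, 2), (4, 1)]
    print(pareto_frontier(pts))  # [(1, 5), (2, 4), (3, 3), (4, 1)]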


In the late 2000s, I remember that "nobody is willing to pay for things on the Internet" was a common trope. I think it'll take a while, culturally, before businesses and people understand what they are willing to pay for. For example, if you are a large business and you pay xxxxx-xxxxxx per year per developer, but are only willing to pay xxx per year in AI tooling, something's out of proportion.


> For example if you are a large business and you pay xxxxx-xxxxxx per year per developer, but are only willing to pay xxx per year in AI tooling, something's out of proportion.

One is the time of a human (irreplaceable), and the other is a tool for some human to use; that seems proportional to me.


> human (irreplaceable)

Everyone is replaceable. Software devs aren't special.


Domain knowledge is a real thing. Sure, I could be replaced at my job, but they'd have a pretty sketchy time until someone new got up to speed.


Yes, with another human. I meant more that you cannot replace a human with a non-human, at least not yet, and not if you care about quality.


Perhaps you can replace multiple developers with a single developer and an AI tool in the near future.

In the same way that you could potentially replace multiple workers using handsaws with one guy wielding power tools.

There could be a lot of financial gain for businesses in this, even if you still need humans in the loop.


That may be, but I still think

> if you are a large business and you pay xxxxx-xxxxxx per year per developer, but are only willing to pay xxx per year in AI tooling, something's out of proportion.

is way off base. Even if you replace multiple workers with one worker and a better tool, businesses still won't want to pay the "multiple-worker salary" to the single worker just because they use a more effective tool.


Yes, I agree. But do they have to?

It would seem to me that tokens are only going to get more efficient and cheaper from here.

Demand is going to rise further as AI keeps improving.

Some argue there is a bubble, but with demand from the public for private use, business, education, military, cybersecurity, and intelligence, it just seems like there will be no lack of investment.


Late 1990s maybe. Not late 2000s.


> indirection

Isn't NVC often about communicating explicitly instead of implicitly? So frequently it can be the opposite of indirection.


I guess so? I'm not well-versed, but the basics are usually around observation and validation of feelings. So instead of "you took steps a, b, c, which would normally be the correct course of action, but in this instance b caused side effect d, which triggered further issues e and f", it's something more like "I can understand how you were feeling overwhelmed and under pressure, and that led you to a, b, c ..."

Maybe this is an unhelpful toy example, but I would be frustrated to be on either side of the second interaction. Like, don't waste everyone's time giving me excuses for my screwup just to soothe my ego; let's talk about it plainly. The faster we can move on to identifying concrete fixes to process or documentation that will prevent this in the future, the better.


The surprising thing for me was how long it took to get old. I got a reward (and then immediate regret upon reflection) for way too long.


I think a prompt + an external dataset is a very simple, low-friction distribution channel right now for exploring anything quickly. The curl | bash of 2026.


Exactly. Prompt + Tool + External Dataset (API, file, database, web page, image) is an extremely powerful capability.
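
As a rough illustration of that pattern (a sketch only: the dataset URL and model id are placeholders, and it assumes the official Anthropic Python SDK with ANTHROPIC_API_KEY set in the environment):

    # Prompt + external dataset, end to end.
    import requests
    import anthropic

    DATASET_URL = "https://example.com/observations.csv"  # hypothetical dataset
    dataset = requests.get(DATASET_URL, timeout=10).text

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    reply = client.messages.create(
        model="claude-sonnet-4-5",  # placeholder model id
        max_tokens=512,
        messages=[{
            "role": "user",
            "content": f"Here is a CSV dataset:\n\n{dataset}\n\n"
                       "Summarize the three most notable trends.",
        }],
    )
    print(reply.content[0].text)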


I think you misunderstood. The API key is for their API, not Anthropic's.

If you take a look at the prompt, you'll find that they have a static API key created for this demo ("exopriors_public_readonly_v1_2025").


Yes, thanks for explaining it.

