Hacker News | timschmidt's comments

I've been using the T-Deck Pro and T-Lora Pager, so the device is the app.


oof. right in the self-image.


Have you ever watched an interview with Tom Homan? Man is single-minded and highly motivated. Like leashing the dog, and letting loose a wolf.


That's why Obama gave him a medal. Homan is great at what he does. Too bad for the selective outrage over this. He was doing his job non-politically in the background for years until TDS brain rot made the left so easily manipulated into hysteria.


he's a steamroller. and he said they would "do it by the book" or something along those lines. "dial it back" is probably coming from media.


And they've recently chewed through > 1 million young men: https://www.dw.com/en/12-million-russian-soldiers-killed-inj...


Have you read the article? It's casualties, which includes wounded. The article mentions that about 100-140k were killed, not >1m.


It seems like, in the course of calling out a perceived assumption, you may have made an assumption yourself. I'm aware of the difference between casualties and deaths. My chosen terminology applies equally to both. And I think both are relevant to a number of related stats like lifetime earnings, mental and physical health, family prospects, etc.

War is hell. And I don't think anyone comes out untouched by it. The stats on vets are brutal.


As state-of-the-art machines continue to chase the latest node, capacity on older nodes has become much less expensive, more openly documented, and actually accessible to individuals. Open source FPGA and ASIC synthesis tools have also immensely improved in quality and capability. The Raspberry Pi Pico RP2350 contains an open source RISC-V core designed by an individual. And 4G cell phones like the https://lilygo.cc/products/t-deck-pro, built around the very similar ESP32, are available on the market. The latest and greatest will always be behind a paywall, but the rising tide floats all boats, and hobbyist projects are growing more sophisticated. Even a $1 ESP32 has dual 240 MHz 32-bit cores, 8 MB of RAM, and fast network interfaces which blow away the 8-bit micros I grew up with. The state of the open-source art may be a bit behind the proprietary art, but it is advancing as well.

It's really fun to have useful hardware that's easy to program at the bare metal.


Even when technically accessible to individuals, it still costs at least $10k to get a batch of chips made on a multi-project wafer.


chipfoundry.io charges $14,950 for 100 packaged chips. As far as small-batch manufacturing goes, that's reasonably affordable: about $150 each. Occasionally I see better deals crop up as part of group buys or for bare dies. Presumably, one would prototype the design on an inexpensive FPGA board first to verify functionality, so as to be reasonably sure the first batch of chips worked. Folks like Sam Zeloof are working to build new tools for one-off and small-batch designs as well, which may further reduce small-quantity prices.


Some of the chips I use in my design that are not custom are in that price range, so to me that looks extremely affordable.


You can't order one chip. You need a whole batch.


Even with MoE, holding the model in RAM while individual experts are evaluated in VRAM is a bit of a compromise. Experts can be swapped in and out of VRAM for each token. So RAM <-> VRAM bandwidth becomes important. With a model larger than RAM, that bandwidth bottleneck gets pushed to the SSD interface. At least it's read-only, and not read-write, but even the fastest of SSDs will be significantly slower than RAM.

That said, there are folks out there doing it. https://github.com/lyogavin/airllm is one example.
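A toy sketch of the swapping cost described above, assuming a hypothetical LRU-managed set of VRAM "slots" (all names and numbers here are invented for illustration): every expert the router picks that isn't already resident costs one RAM-to-VRAM transfer, which is exactly the bandwidth pressure in question.

```python
from collections import OrderedDict

class ExpertCache:
    """Toy LRU cache modeling the VRAM-resident subset of MoE experts.

    Experts not in the cache must be 'transferred' from RAM (or SSD),
    which is the bandwidth cost being discussed.
    """
    def __init__(self, vram_slots):
        self.vram_slots = vram_slots
        self.cache = OrderedDict()   # expert_id -> resident flag
        self.transfers = 0           # count of RAM -> VRAM copies paid

    def fetch(self, expert_id):
        if expert_id in self.cache:
            self.cache.move_to_end(expert_id)  # hit: mark most-recently-used
            return
        self.transfers += 1                    # miss: pay one transfer
        if len(self.cache) >= self.vram_slots:
            self.cache.popitem(last=False)     # evict least-recently-used
        self.cache[expert_id] = True

# Per token, the router picks top-2 experts; each cache miss costs a transfer.
cache = ExpertCache(vram_slots=4)
routing = [(0, 1), (0, 2), (1, 2), (5, 6), (0, 1)]  # invented top-2 picks
for experts in routing:
    for e in experts:
        cache.fetch(e)
print(cache.transfers)  # 7 of the 10 fetches required a RAM -> VRAM transfer
```

With only 4 slots and poor temporal locality in routing, most fetches miss, which is why the RAM <-> VRAM link (or worse, the SSD) becomes the bottleneck.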


> Experts can be swapped in and out of VRAM for each token.

I've often wondered how much it happens in practice. What does the per-token distribution of expert selection actually look like during inference? For example, does it act like a uniform random variable, or does it stick with the same 2 or 3 experts for 10 tokens in a row? I haven't been able to find much info on this.

Obviously it depends on what model you are talking about, so some kind of survey would be interesting. I'm sure this must be something that the big inference labs are knowledgeable about.

Although, I guess if you are batching things, then even if a subset of experts is selected for a single query, over the batch the selection may appear completely random, which would destroy any efficiency gains. Perhaps it's possible to intelligently batch queries that are "similar" somehow? It's quite an interesting research problem when you think about it.

Come to think of it, how does it work then for the "prompt ingestion" stage, where it likely runs all experts in parallel to generate the KV cache? I guess that would destroy any efficiency gains due to MoE too, so the prompt ingestion and AR generation stages will have quite different execution profiles.


The model is explicitly trained to produce as uniform a routing distribution as possible, because it's designed for batched inference with a batch size much larger than the expert count. In that regime all experts are constantly activated, latency is determined by the highest-loaded expert, and you want to distribute the load evenly to maximize utilization.
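A rough way to see why uniform routing matters for batched latency: if the busiest expert sets the step time, the max-to-mean load ratio is the slowdown versus perfect balance. A toy calculation with made-up routing assignments:

```python
from collections import Counter

def load_imbalance(assignments, n_experts):
    """Max-to-mean load ratio across experts: 1.0 means perfectly uniform
    routing; higher means the busiest expert (which gates batched step
    latency) carries that many times the average load."""
    counts = Counter(assignments)
    mean = len(assignments) / n_experts
    return max(counts.get(e, 0) for e in range(n_experts)) / mean

balanced = [0, 1, 2, 3] * 25        # 100 tokens spread evenly over 4 experts
skewed = [0] * 70 + [1, 2, 3] * 10  # 100 tokens with one hot expert
print(load_imbalance(balanced, 4))  # 1.0: no slowdown vs perfect balance
print(load_imbalance(skewed, 4))    # 2.8: busiest expert has ~3x mean load
```

This is why MoE training objectives typically include a load-balancing term pushing this ratio toward 1.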

Prompt ingestion is still fairly similar to that setting, so you can first compute the expert routing for all tokens, load the first set of expert weights and process only those tokens that selected the first expert, then load the second expert and so on.
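That expert-at-a-time prompt pass can be sketched as: compute routing for every prompt token up front, bucket tokens by expert, then stream each expert's weights once. A toy top-1 illustration (the function name and routing values are invented):

```python
def group_by_expert(routing):
    """routing[i] = expert chosen for token i (top-1 for simplicity).
    Returns {expert_id: [token indices]} so that each expert's weights
    can be loaded exactly once and applied to just its tokens."""
    groups = {}
    for token, expert in enumerate(routing):
        groups.setdefault(expert, []).append(token)
    return groups

routing = [2, 0, 2, 1, 0, 2]   # precomputed router output for a 6-token prompt
groups = group_by_expert(routing)
# Process expert 0 on tokens [1, 4], expert 1 on [3], expert 2 on [0, 2, 5],
# loading each expert's weights once instead of once per token.
```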

But if you want to optimize for single-stream token generation, you need a completely different model design. E.g. PowerInfer's SmallThinker moved expert routing to a previous layer, so that the expert weights can be prefetched asynchronously while another layer is still executing: https://arxiv.org/abs/2507.20984


Thanks, really interesting to think about these trade-offs.


I thought paging was so inefficient that it wasn't worth doing vs using CPU inference for the parts of the model that are in system memory. Maybe if you have a good GPU and a turtle of a CPU, but still somehow have the memory bandwidth to make shuffling data in and out of the GPU worthwhile? I'm curious to know who is doing this and why.


With a non-sequential generative approach, perhaps the RAM cache misses could be grouped together and swapped on a when-available/when-needed prioritized basis.


The badness cannot be overstated. "Hostile codebase" would be an appropriate label. Much more information is available in Giovanni Bechis's presentation: https://www.slideshare.net/slideshow/libressl/42162879

If someone meant to engineer a codebase to hide subtle bugs which might be remotely exploitable, leak state, behave unexpectedly at runtime, or all of the above, the code would look like this.


> If someone meant to engineer a codebase to hide subtle bugs which might be remotely exploitable, leak state, behave unexpectedly at runtime, or all of the above, the code would look like this.

I wonder who could possibly be incentivized to make the cryptography package used by most of the world's computers and communications networks full of subtly exploitable, hard-to-find bugs. Surely everyone would want such a key piece of technology to be air tight and easy to debug.

But also: surely a technology developed in a highly adversarial environment would be easy to maintain and keep understandable. You definitely would have no reason to play whack-a-mole with random stuff as it arises.


> Surely everyone would want such a key piece of technology to be air tight and easy to debug

1. Tragedy of the Commons (https://en.wikipedia.org/wiki/Tragedy_of_the_commons) / Bystander Effect (https://en.wikipedia.org/wiki/Bystander_effect)

2. In practice, the risk of introducing a breakage probably makes upstream averse to refactoring for aesthetics alone; you’d need to prove that there’s a functional bug. But of course, you’re less likely to notice a functional bug if the aesthetic is so bad you can’t follow the code. And when people need a new feature, that will get shoehorned in while changing as little code as possible, because nobody fully understands why everything is there. Especially when execution speed is a potential attack vector.

So maybe shades of the trolley problem too - people would rather passively let multiple bugs exist, than be actively responsible for introducing one.


I wonder what adoption would actually look like.

It reminds me of Google Dart, which was originally pitched as an alternate language that enabled web programming in the style Google likes (strong types etc.). There was a loud cry of scope creep from implementors and undo market influence in places like Hacker News. It was so poorly received that Google rescinded the proposal to make it a peer language to JavaScript.

Granted, the interests point in different directions for security software vs. a mainstream platform. Still, audiences are quick to question the motives of companies that have the scale to invest in something like a net-new security runtime.


> undo market influence

Pointless nitpick, but you want "undue market influence." "Undo market influence" is what the FTC orders when they decide there's monopolistic practices going on.


Not pointless. I had no idea what the original wording meant.


> Surely everyone would want such a key piece of technology to be air tight and easy to debug

The incentives of different parties / actors are different. 'Everyone' necessarily comprises an extremely broad category, and we should only invoke that category with care.

I could claim "Everyone" wants banks to be secure - and you would be correct to reject that claim. Note that if the actual sense of the term in that sentence is really "almost everyone, but definitely not everyone", then the threat landscape is entirely different.


I read that whole paragraph with a tinge of sarcasm. There are bad actors out there who want to exploit these security vulnerabilities for personal gain, and then there are nation-state actors that just want to spy on everyone.


> highly adversarial environment

Except it's not. Literally nobody ever in history had their credit card number stolen because of SSL implementation issues. It's security theater.


Another great example from tedunangst's excellent presentation "LibreSSL more than 30 days later".

https://youtu.be/WFMYeMNCcSY?t=1024

Teaser: "It's like throw a rock, you're gonna hit something... I pointed people in the wrong direction, and they still found a bug".


I expected much worse to be honest. Vim’s inline #ifdef hell is on a whole other level. Look at this nightmare to convince yourself: https://geoff.greer.fm/vim/#realwaitforchar


That's a lot of ifdefs, sure. But at least Vim doesn't have its own malloc which never frees, can be dynamically replaced at runtime, and occasionally logs sensitive information.


As long as you don't statically link, you can easily replace malloc (LD_PRELOAD). Many debug libraries do. Why is this so special in OpenSSL? (I don't know if there is some special reason, though OpenSSL is a weird one to begin with.)


Using OpenSSL's malloc may bypass protections of hardened libc mallocs like OpenBSD's.

If memory crosses the boundary between OpenSSL and your app, or some other library, freeing it with a different allocator than the one it was allocated with is undefined behavior.

OpenSSL's allocator doesn't free in the same way other mallocs do, which prevents memory-sanitization tools like Valgrind from finding memory bugs.

OpenSSL has a completely separate idea of a secure heap, with its own additional malloc implementation, which can lead to state leakage or other issues if it is not used perfectly at the (non-existent, because the entire library surface is exposed) security boundary and is accidentally intermingled with calls to the (insecure?) malloc.

It's just a big can of security worms which may have been useful on odd platforms like VMS, though that's questionable, and today only serves to add additional layers of inscrutability and obfuscation to an already messy codebase. It's not enough to know what malloc does; one must become familiar with all the quirks of both(!) of OpenSSL's custom implementations, which are used precisely nowhere else, to judge the security or code-correctness implications of virtually anything in the codebase. There's no good reason for it.


See also The State of OpenSSL for pyca/cryptography

https://cryptography.io/en/latest/statements/state-of-openss...

Recently discussed: https://news.ycombinator.com/item?id=46624352

> Finally, taking an OpenSSL public API and attempting to trace the implementation to see how it is implemented has become an exercise in self-flagellation. Being able to read the source to understand how something works is important both as part of self-improvement in software engineering, but also because as sophisticated consumers there are inevitably things about how an implementation works that aren’t documented, and reading the source gives you ground truth. The number of indirect calls, optional paths, #ifdef, and other obstacles to comprehension is astounding. We cannot overstate the extent to which just reading the OpenSSL source code has become miserable — in a way that both wasn’t true previously, and isn’t true in LibreSSL, BoringSSL, or AWS-LC.

Also,

> OpenSSL’s CI is exceptionally flaky, and the OpenSSL project has grown to tolerate this flakiness, which masks serious bugs. OpenSSL 3.0.4 contained a critical buffer overflow in the RSA implementation on AVX-512-capable CPUs. This bug was actually caught by CI — but because the crash only occurred when the CI runner happened to have an AVX-512 CPU (not all did), the failures were apparently dismissed as flakiness. Three years later, the project still merges code with failing tests: the day we prepared our conference slides, five of ten recent commits had failing CI checks, and the day before we delivered the talk, every single commit had failing cross-compilation builds.

Even bugs caught by CI get ignored and end up in releases.


Wow, that is just crazy. You should investigate failing tests when developing any software, but for something like OpenSSL especially... Makes me think this must be a heaven for state actors.


We really need, as an industry, to move away from this cursed project entirely.


I'm surprised AI was even able to find bugs in that.

Given that it's been trained on "regular" code, and that the presentation points out that OpenSSL might as well be written in Brainfuck, it shocks me that AI could wrap its pretty digital head around it.


There is a reason AWS created their own TLS library.


> If someone meant to engineer a codebase to hide subtle bugs which might be remotely exploitable, leak state, behave unexpectedly at runtime, or all of the above, the code would look like this.

I'd wager that if someone did that, the codebase would look better than OpenSSL's.

A codebase designed to hide bugs would look just good enough that rewriting it doesn't seem worth it.

OpenSSL is so bad that looking at it provokes the urge to rip parts straight out and replace them, and frankly only fear-mongering around writing security code kept people from doing just that; only after Heartbleed did the forks start to try. And that would also get rid of any hidden exploit.


Tribalism is part of the brain rot. Divide and conquer. To paraphrase Carlin, wealth and power are a big club and we ain't in it.


> which means that now, they need to make a conversion, which is obviously slower than doing nothing.

One would think. But caches have grown so large, while memory speed and latency haven't scaled with compute, that as long as the conversion fits in cache and operates on data already cached from previous operations (which admittedly takes some care), there's often an embarrassing amount of compute sitting idle waiting for the next response from memory. So if your workload is memory-, disk-, or network-bound, conversions can often be "free" in terms of wall-clock time, at the cost of slightly more wattage burnt by the CPU(s). Much depends on the size and complexity of the data structure.
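A back-of-envelope version of this argument, with illustrative (assumed, not measured) bandwidth and throughput figures:

```python
# If the time to stream the data from memory exceeds the compute time for
# the conversion, the conversion can overlap with memory traffic "for free".
elems = 1_000_000
bytes_per_elem = 4
mem_bandwidth = 50e9       # assumed: 50 GB/s sustained from DRAM
conv_throughput = 100e9    # assumed: 100 G simple ops/s on one SIMD core
ops_per_elem = 2           # assumed: e.g. widen + scale per element

time_memory = elems * bytes_per_elem / mem_bandwidth    # 80 microseconds
time_convert = elems * ops_per_elem / conv_throughput   # 20 microseconds
print(time_convert < time_memory)   # conversion hides under the memory wall
```

Under these assumptions the CPU finishes converting each cache line well before the next one arrives, so the conversion adds no wall-clock time.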


An accurate statement. In places where guns are difficult to come by, you'll find knife crime in its place. Take the knives away and it'd be fists.


>In places where guns are difficult to come by, you'll find knife crime in it's place.

By how much and how consequential exactly, and how would we know?

There were 14,650 gun deaths in the US in 2025 apparently. There were 205 homicides by knife in the UK in 2024-2025. [0][1]. Check their populations. US gun deaths per capita seem to exceed UK knife deaths by roughly 15x.

[0] https://www.thetrace.org/2026/01/shooting-gun-violence-data-...

[1] https://commonslibrary.parliament.uk/research-briefings/sn04...


Good question. Canada has twice as many registered firearms as the US (though the number of unregistered firearms is likely greater in the US). It's certainly not difficult to purchase guns in either country. And Canada experiences an order of magnitude fewer gun deaths per capita than the US. The US is somewhat unique among western nations in how it handles mental illness, and crime, and I would suggest those are more fruitful avenues of inquiry.

So I'll stand by the stance that individuals are responsible for their own actions, that tools cannot bear responsibility for how they are used on account of being inanimate objects, and that all tools serve constructive and destructive purposes, sometimes simultaneously.


isn't that the point? the estimate is that the US has 1.2 guns per capita (compared to 0.34 for Canada)

and since the US handles guns so laxly, they are a problem

a vocal minority is causing a lot of problems (but the US is not even sufficiently enforcing its existing gun control laws)

individuals are responsible, but that doesn't mean that the tool is not a significant factor.

and hence the recommendation is to have better control of who gets the tool (and not an emotionally charged "scary rifle" ban)


I mentioned the estimated unregistered firearms, but they are just that: an estimate. I went looking for some references and found the following: household gun ownership is down over the last 50 years, hunting is down, gun ownership among men is down, gun ownership among women remains steady, and gun ownership by race has not appreciably changed: https://vpc.org/studies/ownership.pdf Declining gun ownership would be consistent with increased gun control.

Yet gun deaths by suicide and murder per 100k people have held between 5 and 7 over the same period: https://www.pewresearch.org/short-reads/2025/03/05/what-the-...

I also found the stats on this site interesting (many are estimates):

https://www.nationmaster.com/country-info/stats/Crime/Murder...

https://www.nationmaster.com/country-info/stats/Crime/Violen...

https://www.nationmaster.com/country-info/stats/Crime/Violen...

> individuals are responsible, but that doesn't mean that the tool is not a significant factor.

Individuals are responsible. No buts. And there is no solving violence on any scale without understanding and addressing the reasons someone might commit it. This is a rabbit hole of difficult and uncomfortable truths we must address as a society.


Responsibility is a very complex topic. Sometimes it seems straightforward. People training child soldiers are more responsible than the child soldiers, right? The USA financing, training, and arming this or that group seems to be also responsible if those groups do bad things. (Hence all the protests in the US against the way the IDF wages war in Gaza.)

People voting for or against gun control also have some responsibility. (Australia's National Firearms Agreement comes to mind.) Similarly, people in the EU who voted (or continued to vote) to use cheap Russian gas even after 2014, and even after 2022, certainly share some responsibility. Maybe even more than the conscripts coerced to be on the front.

I think structural effects dominate in many cases. (IMHO local crime surges are perfect evidence for this, and even though the FBI crime data is slow and not detailed enough, the city-level data is good enough to see things like a homicide spike after viral police misconduct incidents -- https://www.nber.org/papers/w27324, and this is even before George Floyd -- and https://johnkroman.substack.com/p/explaining-the-covid-viole... which shows how much of an effect policing has on homicides.)

Tool availability is an important factor, and in the US it's a drastically huge effect, because the other factors that could counteract it are also mostly missing.

We can simply apply the Swiss cheese model for every shooting and see that many things had to go wrong. Of course focusing only on guns while neglecting the others would lead to increase in knife-deaths.

