
If you make a billion but only pay $2M for a pardon it might be worth it: https://www.independent.co.uk/news/world/americas/us-politic...

Coincidental timing for Wolfram to pop up here just as it becomes clearer that he might actually have met Epstein after all.

Someone having met Epstein is about as interesting and concerning as having met Theodore McCarrick [1] or Harvey Weinstein [2].

Those kinds of people aren't cartoon villains who only meet people as part of their villainous activities.

The vast majority of their meetings are about their other activities and interests. Most people meeting with McCarrick, for example, were meeting for the reasons they would meet with any Archbishop (or priest or bishop, depending on when they met him), or met with him over some other mutual, legal interest or business reason.

Same with Weinstein. Most people he met would be meeting for the same reason they would meet any producer, or for some other mutual, legal interest or business reason.

And same with Epstein. Epstein fancied himself as a patron of the sciences and made contact with a lot of scientists over their research and possible funding of their labs. He also fancied himself a philanthropist and had many contacts related to that.

[1] Former Archbishop of Newark and of Washington.

[2] One of the biggest and most successful Hollywood movie producers.


Did you measure the performance impact of having multiple trees in a single file vs. having one tree per file? I'd assume one per file is faster, is that correct?

No, I don't know about that. I will check it out.

What are the odds that the Cloudflare CEO will have a twitter meltdown about this?


> Home Assistant [1] has been written using web components and it has been great.

That could explain why the percentage slider is not showing a current value tooltip when sliding it :P


> But I don't understand why they chose to include the account as a plain-text string in the DNS record.

Simple: it's for tracking. Someone paid for that.


That's what happens when the whole company uses high-end MacBooks and nobody has an older PC. It's been noted thousands of times on HN, but these US companies make money hand over fist and do not give a single damn about people on "lower-end" devices.


Large GitHub PRs are miserably slow even with a maxed out Mac Studio on gigabit fiber with single-digit ms ping to their server. It’s not an example of something that works well on high-end hardware but scales down poorly.


Are you suggesting a very large custom blocksize? I don't think this would be feasible beyond a few megabytes.


No, an FPE (format-preserving encryption) algorithm is a cryptographic construct that uses an existing block cipher (e.g. AES-256) to build a cryptographically secure permutation of the input without length extension. That is, input size = output size, for all sizes. Ideally, if input size >= block size of the underlying cipher, the resulting permutation is no weaker than using the cipher directly.

You could use FPE for multi-megabyte permutations, but I don't know why you would.
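A minimal sketch of the length-preserving idea, using a balanced Feistel network with HMAC-SHA256 as the round function. This is purely illustrative: real FPE modes such as NIST's FF1/FF3 build the round function from AES and handle arbitrary radixes, which this toy does not.

```python
import hashlib
import hmac


def feistel(key: bytes, data: bytes, rounds: int = 8, decrypt: bool = False) -> bytes:
    """Length-preserving permutation via a balanced Feistel network.

    Toy illustration only -- not FF1/FF3. Input length must be even
    (and at least 2 bytes) for the balanced split.
    """
    assert len(data) % 2 == 0 and len(data) >= 2
    half = len(data) // 2

    def round_fn(i: int, block: bytes) -> bytes:
        # Keyed PRF of one half, truncated to the other half's length.
        return hmac.new(key, bytes([i]) + block, hashlib.sha256).digest()[:half]

    left, right = data[:half], data[half:]
    if not decrypt:
        for i in range(rounds):
            left, right = right, bytes(a ^ b for a, b in zip(left, round_fn(i, right)))
    else:
        # Invert by running the same rounds in reverse order.
        for i in reversed(range(rounds)):
            left, right = bytes(a ^ b for a, b in zip(right, round_fn(i, left))), left
    return left + right
```

Note the defining property: the ciphertext is exactly as long as the plaintext, and decryption is the same network run backwards.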


Slightly unrelated, but aren't these AES-specific custom CPU instructions just a way to easily collect the encryption keys? There is a speedup but is it worth the risks?

If I were a nation state actor, I'd just store the encryption keys supplied to the AES CPU instruction somewhere, and when the data needs to be accessed, read out the stored keys.

No need to waste time deploying backdoored CPU firmware, waiting for days or weeks, and then touching the hardware a second time to extract the information.

When all AES encryption keys are already stored somewhere on the CPU, you can easily do a drive-by readout at any point in time.

The Linux kernel has a compile-time flag to disable use of these custom CPU instructions for encryption, but it can't be disabled at runtime. If "software encryption" is used, the nation state actor needs to physically access the device at least twice, or use a network-based exploit which could be logged.
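For what it's worth, you can see which AES implementations the running kernel has registered by parsing /proc/crypto. A small sketch (the parsing is a best-effort assumption about the file's `driver :` line format):

```python
def aes_drivers(path: str = "/proc/crypto") -> set:
    """Collect AES driver names registered with the running Linux kernel.

    Driver names containing 'aesni' indicate the hardware-instruction path,
    while 'aes-generic' is the pure-software fallback. Returns an empty set
    where the file does not exist (e.g. on non-Linux systems).
    """
    drivers = set()
    try:
        with open(path) as f:
            for line in f:
                if line.startswith("driver") and "aes" in line:
                    drivers.add(line.split(":", 1)[1].strip())
    except OSError:
        pass
    return drivers
```

On a machine using the hardware path you would expect to see aesni-flavored driver names in the result; only generic drivers suggests the software implementation is in use.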


There are serious risks about backdoors in CPUs, but they are not about the CPU gathering the AES keys.

The storage required for this would be humongous, and the CPU cannot know for which data the keys have been used. Moreover, this would be too easily defeated: even if the AES instructions allow you to specify a derived round key in them, you can always decline to do so and use a separate XOR instruction to combine the round keys with the intermediate states. Detecting such a use would be too difficult.

No, there is no basis for fearing that the AES keys can be stored in CPUs (on the other hand, you should fear that if you store keys in a TPM, they might never be erased, even if you demand this).

The greatest possible danger of adversarial behavior in a CPU exists in the laptop CPUs with integrated WiFi interfaces made by Intel. Unless you disconnect the WiFi antennas, it is impossible to be certain that the remote management feature of the WiFi interface is really disabled, preventing an attacker from taking control of the laptop in a manner that cannot be detected by the operating system.

The next danger in importance is in computers that have Ethernet interfaces with the ability to do remote management, where again it is impossible to be certain that this feature is disabled. (A workaround for the case when you connect such a computer to an untrusted network, e.g. directly to the Internet, is to use a USB Ethernet interface.)


I am not a chip designer but from my limited understanding, this "somewhere" is the problem. You can have secret memory somewhere that isn't noticed by analysts, but can it remain secret if it is as big as half the cpu? A quarter? How much storage can you fit in that die space? How many AES keys do you handle per day? Per hour of browsing HN with AES TLS ciphers? (Literally all supported ciphers by HN involve AES)

We use memory-hard algorithms for password storage because memory is more expensive than compute. More specifically, it's die area that is costly, but at least the authors of Argon2 seem to equate the two. (If that's not correct, I based a stackoverflow post or two on that paper so please let me know.) It sounds to me like it's easily visible to a microscope when there's another storage area as large as the L1 cache (which can hold a few thousand keys at most... how to decide which ones to keep)
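As a back-of-envelope check on that "few thousand keys at most" estimate, assuming a 48 KiB L1 data cache (a common size on recent cores; the figure is my assumption, not from the thread):

```python
L1_BYTES = 48 * 1024  # assumed L1 data cache size

# Keys that would fit if the entire cache held nothing but raw key material:
aes256_keys = L1_BYTES // 32  # 32-byte AES-256 keys
aes128_keys = L1_BYTES // 16  # 16-byte AES-128 keys

print(aes256_keys, aes128_keys)
```

That works out to 1536 AES-256 keys or 3072 AES-128 keys, i.e. "a few thousand at most" even before accounting for any bookkeeping overhead.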

Of course, the cpu is theoretically omnipotent within your hardware. It can read the RAM and see "ah, you're running pgp.exe, let me store this key", but then you could say the same for any key that your cpu handles (also rsa or anything not using special cpu instructions)


Good points, but this might be mitigated by knowing that the first key after boot is for HDD encryption; and if storage is limited, keep a counter for each key and always overwrite the least frequently observed one.
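The counter scheme above could be sketched roughly like this (purely hypothetical, illustrating the eviction policy, not any real hardware):

```python
class LfoKeyStore:
    """Fixed-slot key store that evicts the least frequently observed key.

    Hypothetical sketch of the eviction heuristic discussed above:
    a bounded number of slots, a counter per key, and the key with
    the lowest count is overwritten when the store is full.
    """

    def __init__(self, slots: int = 8):
        self.slots = slots
        self.counts = {}  # key bytes -> observation count

    def observe(self, key: bytes) -> None:
        if key in self.counts:
            self.counts[key] += 1
            return
        if len(self.counts) >= self.slots:
            # Evict the least frequently observed key to make room.
            victim = min(self.counts, key=self.counts.get)
            del self.counts[victim]
        self.counts[key] = 1
```

One obvious weakness of the policy, relevant to the feasibility question: a burst of fresh one-off keys (e.g. thousands of TLS session keys) would churn out the slots, so the valuable disk-encryption key would need special-casing, as the comment suggests.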


Could work. How do you know which key is least frequently used if you can't store them all, though? You would need some heuristic. Maybe it could record the first five keys it sees after each power-on, or apply some other useful heuristic.

Like, I do take your point but it does seem quite involved for the chance that it'll get them something useful, and they still need to gain physical access to the intact device, and trust that it never gets out or the chipmaker's reputation is instantly trash and potentially bankrupt. And we know from Snowden documents that, at least in ~2013 (when aes extensions weren't new, afaik), they couldn't decrypt certain ciphers which is sorta conspicuous if we have these suspicions. It's a legit concern or thing to consider, but perhaps not for the average use-case

edit: nvm, it was proposed in 2008, so it's not too surprising that it didn't show up in ~2013 publications. There might still be a general point that "they" haven't (or hadn't) infiltrated most CPUs.


I think Linux/LUKS software encryption was a very big challenge, and they solved it with multiple approaches:

- 2004: Linux LUKS disk encryption [0]

- 2008: ring −3 / intel management engine [1]

- 2009: TPM [3]

- 2010: AES instruction set [2]

[0] https://en.wikipedia.org/wiki/Linux_Unified_Key_Setup

[1] https://en.wikipedia.org/wiki/Intel_Management_Engine

[2] https://en.wikipedia.org/wiki/AES_instruction_set

[3] https://en.wikipedia.org/wiki/Trusted_Platform_Module


I don't imagine it would be too difficult to snoop the instruction stream to identify a software implementation of AES and yoink the keys from it, at least if the implementation isn't obfuscated. If your threat model includes an adversarial CPU then you probably need to at least obfuscate your implementation, if not entirely offload the crypto to somewhere you trust.
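One classic shortcut for spotting an unobfuscated software AES is to scan memory for the well-known S-box constant, which any table-based implementation carries verbatim, rather than analyzing the instruction stream itself. A sketch, assuming a raw memory dump is available as a bytes buffer:

```python
# First 16 bytes of the standard AES S-box, a fixed public constant.
AES_SBOX_PREFIX = bytes([
    0x63, 0x7C, 0x77, 0x7B, 0xF2, 0x6B, 0x6F, 0xC5,
    0x30, 0x01, 0x67, 0x2B, 0xFE, 0xD7, 0xAB, 0x76,
])


def find_aes_sbox(dump: bytes) -> int:
    """Return the offset of the S-box in a memory dump, or -1 if absent."""
    return dump.find(AES_SBOX_PREFIX)
```

Once the table is located, the key schedule is typically nearby, which is exactly why obfuscated or bitsliced implementations avoid keeping the table in cleartext.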


Yes but it's much easier to tell devs "put your keys here" and then just take that.


We’re talking about a hidden CPU backdoor that would let you secretly come in and retrieve keys you’ve squirreled away somewhere. I don’t think finding the keys is the hard part.


Are you serious?

The CPU firmware blobs are encrypted, and nobody except Intel can see what is running there. Only a handful of people on the planet have the tools and skills to analyze the chip for backdoors.

A small section of CPU cache could stay powered even after the OS has shut down, persisting the keys that were passed to the AES CPU instruction. As the CPU is directly linked to the WiFi/Bluetooth and USB chipsets, exfiltration could be possible both wirelessly and via a special USB payload.


Compared to all of that, looking for certain patterns in the instruction stream is barely any more effort than looking for specific instructions.


Wasn't that most likely related to the US government using claude for large-scale screening of citizens and their communications?


I assumed it's because everyone who works at Anthropic is rich and incredibly neurotic.


Paper money, and if they are like any other startup, most of that paper wealth is concentrated among the very few at the top.


That's a bad argument; did Anthropic have a liquidity event that made employees "rich"?


Yes.

https://www.maginative.com/article/anthropic-launches-first-...

Well, I think $2 million is pretty good, but maybe it's not much after taxes.

