When I read the original blog post by Cellebrite, which is on archive.org [1], it left me scratching my head too. Signal is open source. They had access to the device to dump everything. Then they went through the source code to figure out how to decrypt the data. Just as this blog response says, they could’ve just opened the app and retrieved the contents (and even forwarded that to another device if they wanted).
So someone enthusiastically posted about wasting their time as if it was a technological achievement. Then someone (else?) realized that the long technical post sounded stupid and had it replaced.
And some people wonder where their tax money goes: all these companies that are better at marketing themselves as experts are getting free lunches!
> [...] Once the decrypted key is obtained, we needed to know how to decrypt the database. To do it, we used Signal’s open-source code and looked for any call to the database. After reviewing dozens of code classes, we finally found what we were looking for
> [...] After linking the attachment files and the messages we found that the attachments are also encrypted. This time, the encryption is even harder to crack. We looked again into the shared preferences file and found a value under “pref_attachment_encrypted_secret” that has “data” and “iv” fields under it.
Today I learned that I can do code cracking too...
After opening the cabinet, we found that the box with the money is also locked. This time, the lock is even harder to crack. We looked again into every single coffee mug and bookshelf in the room, and found a keychain hanging on the second office desk drawer that has the "key" on it...
I needed to get into my office so I asked my boss for a key. She gave me an entire key ring of keys. After trying dozens of keys against my office door I finally found the key that worked.
It's actually how you decrypt an anonymous OpenPGP message (e.g. in GnuPG, you can create one using --throw-keyids or --hidden-recipient). Normally an OpenPGP message records its intended recipients in the header, so GPG knows which key to use right away. But when the message is anonymous, GPG must try all the private keys in the keyring one by one until it finds a valid decryption or fails. If you have multiple private keys, you'll go through many passphrase popups (and smartcard/USB swapping); the struggle is real!
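You can watch this happen yourself. A sketch (assumes GnuPG >= 2.1; the key name is made up, and a throwaway GNUPGHOME keeps this away from your real keyring):

```shell
# Use a throwaway keyring so nothing touches your real one.
export GNUPGHOME="$(mktemp -d)"

# Create an unprotected test key in batch mode.
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key 'anon-test@example.com' default default never

# Encrypt WITHOUT recording the recipient key ID in the header.
echo 'hello' | gpg --batch --encrypt --throw-keyids \
    --recipient anon-test@example.com --trust-model always -o msg.gpg

# Decryption now has to trial-decrypt with every available secret key;
# gpg logs "anonymous recipient; trying secret key ..." as it goes.
gpg --batch --decrypt msg.gpg
```

With several private keys in the keyring, you'd see one "trying secret key" line per attempt before the "okay, we are the anonymous recipient" message.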
Trial decryption. This is a potential server behaviour for Encrypted Client Hello (the current iteration of the work to encrypt SNI in TLS traffic) too.
ECH will be GREASEd. To prevent those who might want the capability to stop ECH in the future from getting a head start while it's uncommon, implementations would always pretend to be doing it anyway.
So when talking to any TLS server, even one that has no idea about ECH, the client says basically "Hi, here is a normal unencrypted TLS 1.3 Hello message for this.server.example, and also, here's an Encrypted Client Hello message". If the server actually does offer ECH, there could be a real Client Hello, perhaps addressed to another.server.example, encrypted inside the Encrypted Client Hello; if not, there's just random noise. An eavesdropper doesn't have the key, so they can't tell which is the case.
Obviously if your server can't do ECH, the Encrypted Client Hello is just a mysterious unintelligible extension with noise inside it, no further inspection needed.
And in some setups the server knows how to tell easily which key would have been used for any valid ECH, so if that key doesn't work then it was just noise, and can be ignored.
But in other cases the server knows two or more keys that might be valid, yet the client either can't or has chosen not to be open about which (if any) was used, so the server has to try them all until it finds out.
Simple: you just check whether the header, data structure, metadata, checksum, etc. in the trial-decrypted data is valid; no hack is needed. For example, something as simple as a 16-byte magic number gives a false positive rate of 1/2^128. Since any arbitrary binary data can be encrypted, a heuristic like "this is mostly ASCII" is unusable.
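A toy sketch of the idea (this is not OpenPGP's actual format; SHA-256 in counter mode stands in for a real cipher, and the magic constant is made up):

```python
import hashlib
import os

MAGIC = b"MYFORMAT00000001"  # 16-byte magic; false-positive odds ~ 2^-128

def keystream(key, n):
    # Toy stream cipher: hash (key || counter) repeatedly. A stand-in only;
    # a real system would use AES-CTR, ChaCha20, etc.
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(key, plaintext):
    # Prepend the magic number, then XOR everything with the keystream.
    data = MAGIC + plaintext
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

def trial_decrypt(keyring, ciphertext):
    # Try every key; the magic number tells us which decryption is valid.
    for key in keyring:
        ks = keystream(key, len(ciphertext))
        data = bytes(a ^ b for a, b in zip(ciphertext, ks))
        if data.startswith(MAGIC):
            return data[len(MAGIC):]
    return None  # no key produced a valid plaintext

keyring = [os.urandom(32) for _ in range(5)]
msg = encrypt(keyring[3], b"attack at dawn")
assert trial_decrypt(keyring, msg) == b"attack at dawn"
```

The server/recipient never learns in advance which key is right; it simply tries each one and keeps the decryption that validates.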
In GPG:
    /* if KeyID is empty... */
    if (!k->keyid[0] && !k->keyid[1])
      {
        log_info (_("anonymous recipient; trying secret key %s ...\n"),
                  keystr (keyid));
      }

    err = get_it (ctrl, k, dek, sk, keyid);
    k->result = err;
    if (!err)
      {
        /* If get_it() succeeds */
        if (!opt.quiet && !k->keyid[0] && !k->keyid[1])
          {
            log_info (_("okay, we are the anonymous recipient.\n"));
          }
      }
And in get_it()...
    if (sk->pubkey_algo == PUBKEY_ALGO_ECDH)
      {
        /* Now the frame are the bytes decrypted but padded session key. */
        if (!nframe || nframe <= 8
            || frame[nframe-1] > nframe)
          {
            err = gpg_error (GPG_ERR_WRONG_SECKEY);
            goto leave;
          }
      }
    else
      {
        if (padding)
          {
            if (n + 7 > nframe)
              {
                err = gpg_error (GPG_ERR_WRONG_SECKEY);
                goto leave;
              }
          }

        if (n + 4 > nframe)
          {
            err = gpg_error (GPG_ERR_WRONG_SECKEY);
            goto leave;
          }

        if (dek->keylen != openpgp_cipher_get_algo_keylen (dek->algo))
          {
            err = gpg_error (GPG_ERR_WRONG_SECKEY);
            goto leave;
          }

        /* Copy the key to DEK and compare the checksum. */
        csum = buf16_to_u16 (frame+nframe-2);
        memcpy (dek->key, frame + n, dek->keylen);
        for (csum2 = 0, n = 0; n < dek->keylen; n++)
          csum2 += dek->key[n];
        if (csum != csum2)
          {
            err = gpg_error (GPG_ERR_WRONG_SECKEY);
            goto leave;
          }
      }
You get the idea.
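That final branch is the OpenPGP session-key checksum: the last two bytes of the decrypted frame are the 16-bit sum (mod 65536) of the session key bytes, so decrypting with the wrong secret key yields garbage that almost never passes. A minimal Python sketch of the same check (simplified frame layout, illustrative only):

```python
def checksum_ok(frame, keylen):
    # Simplified frame layout: ... | session key (keylen bytes) | 2-byte checksum
    key = frame[-2 - keylen:-2]
    csum = int.from_bytes(frame[-2:], "big")  # like buf16_to_u16 (frame+nframe-2)
    return csum == sum(key) % 65536           # like the csum2 accumulation loop

# A well-formed frame passes; flipping one key byte breaks the checksum.
key = bytes(range(16))
frame = key + (sum(key) % 65536).to_bytes(2, "big")
assert checksum_ok(frame, 16)
assert not checksum_ok(b"\xff" + frame[1:], 16)
```

Two bytes of checksum means a wrong key slips through only about once in 65536 trials, which is why GPG can safely loop over the whole keyring.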
BTW, did I just "break GPG encryption and deanonymize users" according to Cellebrite because I found the correct function in the code? And no, I didn't "review dozens of code classes", I just grepped the word "anonymous"... /s
Having the actual key to the lock, then using that to specially craft a pick made in the exact same shape and size as the original key. And then using that to "pick" the lock.
So maybe more like the person who works the key cloning machine at the hardware store.
"they could’ve just opened the app and retrieved the contents" is not really sufficient.
First, doing it manually through the app is not okay since it does not scale: you don't want to read one message, you want to retrieve and index all messages, and you might want to process many devices quickly.
Second, apps usually do not show the user all the information that's available - often there is extra metadata (which may be as important as the message contents) so you do want to decode the actual message database.
Third, doing it through the app might change things - the app may change state (for example, mark an unread message as read), send some notification to central servers, alter metadata, etc. So it potentially disrupts evidence, and that's not okay.
So the original blog post from Cellebrite makes perfect sense: if you do want to do forensics, then a tool that does all that really is a requirement; it's not wasting time.
As a forensic tool, it surely does, and such tools are both common and have a robust client base. I think the mistake in that article was to present a forensic tool as some kind of advanced code-breaking. I guess it sounds more exciting that way, but it's also kind of misleading, as witnessed by the fact that the BBC was totally misled about it.
Well, starting a blog post by pointing out that criminals are using it to communicate secretly, and ending the post by concluding you can read it if you gain privileged access to the device they're using to communicate, is a bit of a let-down, to be honest.
Now if they were able to break the Keystore itself, then they might be on to something, but as it is, it requires convincing criminals to give up their phone and their password. If you translate this to a physical analogy, people's reaction would be 'no shit', but because it's digital you can apparently get away with it.
I agree that both the original post and the media coverage of this is extremely misleading.
> So someone enthusiastically posted about wasting their time as if it was a technological achievement.
I think there's plenty of value and achievement in understanding a program's functionality, even when the source is fully available to you.
We all (presumably) agree that source code isn't self-documenting and that understanding someone else's work usually involves a lot of individual comprehension and context; I read this blog post as someone (diligently) describing their mental process as they tried to understand Signal's internal formats. As others have pointed out, there are oodles of "legitimate"[1] reasons for doing so.
[1]: From the perspective of LEO and the legal system, anyways.
I agree as far as the content of the article is concerned. I think the main problem with it was its tone. I think if they had approached it the way you described, someone just going into detail about how they analyzed an unfamiliar application, it would have been fine. As written, it feels like a new programmer bragging to his friends about how he got "hello, world" to compile.
One has to bear in mind that the typical digital forensic examiner is not a programmer and would therefore be impressed. Perhaps impressed enough to purchase Cellebrite's products instead of the equivalents from MSAB, Oxygen, Elcomsoft etc.
In the usecase I'm particularly familiar with (law enforcement, specifically of violent crimes), it's pretty valuable to minimize the amount of manual data handling investigators have to do. The State's Attorneys office/US Attorneys office/Prosecutor's office have finite resources and have to be selective about the cases they decide to spend resources on. Even if the correct suspect(s) has(have) been identified and arrested, the case can be rejected if the decision-making prosecutor thinks the evidence isn't strong enough or defending the evidence will be too difficult because evidence collection was done in a nonideal way. It may be possible to forward Signal messages to another device, but A) that just adds more links in the chain that can be challenged, and B) most detectives don't know that's an option or have any idea how to do it, so you'll regularly see sloppy stuff like photos taken by the detective of a phone displaying the messages of interest.
It's just a lot easier for the investigator to just plug the phone into a Cellebrite UFED analyzer and extract as much as is covered either by their search warrant or by the signed consent form of the phone's user(s), and it's a lot easier to defend in court, as it eliminates room for accusations that investigators cherry-picked messages and data that look incriminating out of context.
TL;DR: Even if it's not an impressive feature technologically, it's still a valuable feature for some of Cellebrite's main customers.
Having some familiarity, how often do you think " ... and extract as much as is covered either by their search warrant or by the signed consent form of the phone's user" is an accurate description of what actually happens in the field?
Does a UFED even have a way of selectively extracting only what's covered by a warrant?
Are warrants usually granted that would be considered outrageous fishing trips by the more privacy aware of us here?
> all these companies who are better at marketing themselves well as experts are getting free lunches
Better than us, I presume? But unless you or I are prepared to do this work (to parse data from Signal and every other application out there), for the police, for other investigation agencies, for corporate and private investigators, for lawyers, for data protection officers etc., the vast majority of whom aren't programmers or reverse engineers, what's the problem if someone else is?
I agree with your post, except for this. It is NOT a waste of time to assert, once in a while, your power to examine, extract, and change anything running on your device.
Is it actually common for such hardware to be used in civil cases? I'm not sure I'm keen on someone crawling through all my data and potentially storing it indefinitely--including data with security implications (session cookies, partially-decrypted data from password managers, who knows what else) just because they managed to reach the discovery phase of a civil case.
I would expect the toolsets to converge over time. That is, I can't imagine a tool that a LEO would have access to that I would not. Nor can I imagine a tool they might have that I would not want to use myself.
That said, I think law enforcement has an intrinsic advantage in practical terms. They have a) the money to pay for a subscription to known exploits, packaged and made easy to attempt, and b) lots of experience using those tools (as well as ordinary persuasion and coercion) to crack into people's phones in the real world.
I'm not entirely sure that makes sense here. If I understand correctly, Cellebrite mostly just provides an interface and degree of automation designed for law enforcement; you could, in theory, retrieve the same data on your own without Cellebrite, just without the LEO-centric UX. Depending on the device, I would think the tools to which you have access likely make more sense for a consumer's use case.
Even for someone getting a bit more technical and tinkering with their device, tools like `adb` are fairly powerful on their own.
Exactly. I've so far decrypted my own messages from two different apps because I needed some specific information that would've been too hard to find without RegExp-capable search. In both cases I was glad to find guides online explaining how to pull the database, get the decryption key and decrypt the database with the key.
It may seem trivial from a security perspective since it doesn't involve breaking any cryptography, just using a decryption key as intended, but in practice the ability to get a plain-text dump of all messages is very useful.
+1. If what they were working on was not Signal but some proprietary application (possibly with DRM), the hilarious blog post about their "code cracking" effort could actually have been a legitimate one. In principle, the process is the same: break into something when you are already root. The only difference is the challenge of code obfuscation, which is a real one, unlike with Signal.
I greatly look forward to hearing of Cellebrite executives doing jail time for DMCA violations, stemming from Hollywood or the music industry going after them in court for removing encryption on copyrighted material...
Using only the homeowner's house key and extensive key-sliding-into-lock reverse-engineering, I'm able to break into their home. Whitepaper coming soon.
I'm guessing some overzealous 20-year-old at Cellebrite "hacked" Signal and wrote a silly blog post that no one at the company reviewed, and marketing was happy to have some engineering thing to blog about.
To me, what is embarrassing is that all of these major news outlets and professional journalists could not actually read the article and do some very basic research before blasting it out to the public. It just really shows how low the bar is to get something published. I could blow my nose on YouTube and make stock picks based on where the booger lands, and I wouldn't be surprised if BBC Business picked up the breaking story. That's how low the bar seems to be. Sad.
> I'm guessing some overzealous 20-year-old at Cellebrite "hacked" Signal and wrote a silly blog post that no one at the company reviewed
More likely the opposite: Some engineer was tasked with adding Signal database handling, marketing got wind of it, and they went to town on blog posts and PR pieces about it.
Really though, they don't care that it's technically wrong. The target audience for this stuff isn't other engineers or technical people. It's their potential customers, who don't know the difference.
> It's their potential customers, who don't know the difference.
Their potential customers (forensic examiners not just in law enforcement, but also corporate investigators, incident response etc.) do know the difference. Being able to get data from the endpoint is exactly what they are after, because the alternative is that some poorly paid soul has to sit and photograph thousands of pages in numerous applications. Having access to the database saves time, reduces interaction with the exhibit and gets metadata which isn't shown on the phone.
Right, a private company posting a misleading blog post is one thing; hell, even small news blogs posting about it I could understand. But a large news organization such as the BBC not even bothering to contact Signal to get a statement or their side of the story? What the hell... Someone should get fired over this.
> After getting the decryption key, we now needed to know what decryption algorithm to use. We went back to Signal’s open-source code and found this:
> Seeing that told us that Signal uses AES encryption in CTR mode. We used our decryption key with the AES encryption in CTR mode and decrypted the attachment files.
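For what it's worth, CTR mode itself is trivial once you have the key and IV: encrypt successive counter blocks to produce a keystream, then XOR it with the data; decryption is the exact same operation. A sketch of the structure (SHA-256 stands in for the AES block function, since Python's stdlib has no AES; with the third-party `cryptography` package you would use `Cipher(algorithms.AES(key), modes.CTR(iv))` instead):

```python
import hashlib

def ctr_keystream(key, iv, n):
    # CTR mode: hash/encrypt successive (key, iv, counter) blocks to get a
    # keystream. SHA-256 is only a stand-in for the AES block encryption.
    out = b""
    for counter in range((n + 31) // 32):
        out += hashlib.sha256(key + iv + counter.to_bytes(8, "big")).digest()
    return out[:n]

def ctr_crypt(key, iv, data):
    # XOR with the keystream; identical for encryption and decryption.
    ks = ctr_keystream(key, iv, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

key, iv = b"k" * 32, b"i" * 16
ct = ctr_crypt(key, iv, b"attachment bytes")
assert ctr_crypt(key, iv, ct) == b"attachment bytes"  # round-trips
```

So "we used our decryption key with AES in CTR mode" really is just feeding the key and IV into a standard library call, nothing more.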
It’s shameful that one of the world’s best journalistic sources didn’t even bother to reach out to Signal for comment on a story they ran about them.
I feel like a lot of today’s mistrust of news stems from publications not verifying sources, or checking evidence, or at least scrutinizing what others are saying.
Related: the Gell-Mann amnesia effect. You read a newspaper story on a topic about which you are knowledgeable and get mad at how wrong they've got everything. Then you turn the page and read the next story, on which you are not an expert, and take it at face value.
In my circles this gets me into arguments all the time. Everyone reads a book, everyone but me likes it. I point out how one section I know a lot about is deeply wrong, everyone else says versions of "well other than that part it's a great book!"
I wish LW-style rationalist circles didn't attract such obnoxious people, because I don't know of any other collection of people who recognize and try to adjust for things like this.
Rationalist circles make just as many horrific logical errors. If you want to dig into this stuff, philosophy has dug deep, and frankly all it came up with is that all sources should be treated as suspect. That is, even if a source got the stuff you know about right, you should still be skeptical. And yes, that includes purely logical reasoning about math.
> Rationalist circles make just as many horrific logical errors. If you want to dig into this stuff philosophy has dug deep, and frankly all they came up with is all sources should be treated as suspect.
Sharing some anecdata, because I enjoy when others do.
I've got some very dear friends born in a highly prescriptive culture where credentialism runs deep, being an intellectual is extremely valued (people are often denied leases due to their non-academic status), and authority figures and strangers get the "Mrs." and "Mr." treatment for years after you've met them. Early on in our friendship we would discuss and rant about everything and anything. We would often talk about my attitude of trusting no one, and how I believe there are, truly, no experts in the colloquial sense of the word. We've often had late-night discussions about my "stubborn and deeply misguided attitude", about how it's wrong that the only experts I trust are those who deeply distrust their own abilities and instincts and continuously attempt to disprove their own claims.
Shockingly, they defended the aforementioned views until many of their country's top virologists, and they themselves, often and loudly shared their opinion on covid-19 around March, April and May: covid-19 was nothing special, national mortality trends were unaffected, their country was rich and had a high number of ICUs, it was basically just like the flu and nothing to worry about. Then, as we all know, shit hit the fan and we saw cooling trucks being turned into mortuary vehicles. I simply told them what I've always told them, and for the first time since we've been friends they nodded with some sadness in their demeanour: trust no one, acquire evidence, make your own judgments, try to find hidden risks, and the only so-called experts worth trusting are the ones who, in their own words, publicly and candidly express skepticism towards their very own claims and try to help you reproduce their methods and conclusions.
I think people who are too in love with their own rationality leave themselves vulnerable to some pretty ridiculous beliefs that common sense would normally guard them against. A suspension of common sense is certainly important sometimes, because common sense can lead you astray, but at other times it will save you a lot of time and mental energy. A classic example from antiquity is Zeno's paradoxes of motion. It was not immediately clear to people trying to rationalize about these 'paradoxes' how they could be resolved. But somebody who is slightly less in love with their own rationality, who trusts their own common sense, would not waste their time wondering about a supposed paradox that can be refuted by standing up and walking around the room. Failing to balance rationalism with common sense can get you stuck in solipsist traps or worse.
I'll rephrase my point then; I think Less Wrong people spend a lot of time trying to figure out how many angels can dance on the head of a pin. Or: They should stop sniffing their own farts, pull their heads out of the clouds, try not to get lost up their own asses, etc.
Anyway, according to Wikipedia: "Some mathematicians and historians, such as Carl Boyer, hold that Zeno's paradoxes are simply mathematical problems, for which modern calculus provides a mathematical solution.[6] Some philosophers, however, say that Zeno's paradoxes and their variations (see Thomson's lamp) remain relevant metaphysical problems.[7][8][9]"
So it seems like a few people are still wasting their time with this.
I have bad news about philosophy circles; I agree that some philosophers have dug deep, and most people in philosophy circles (even those who have read widely) are much worse at this than the LW folks.
(I found LW/rationalism through philosophy circles, incidentally)
It can be perfectly rational to conclude a book is great despite a flaw in some part.
You can't simply extrapolate from finding one mistake in a book to declaring the whole book wrong. Likewise, you can't extrapolate from finding few bad publications to "everything is crap". I'm not trying to claim journalists are all good or even consistently good. But believing they are always bad is the same logical fallacy as believing they are always good.
It's not "believing they are always bad". It's "being unable to determine whether they are good". If you simply don't know enough to evaluate a source, it is correct as a matter of epistemology to view its contents with a large dose of suspicion. Any historian will tell you this; a pretty sizeable aspect of the study of history is the weighing of sources, working out why they said what they did, what they're not telling you, and what they're wrong about - because every source is incomplete, biased, and contains simple factual errors. I might sit back from a book and go "wow, that was good", and it's possible for a book to be "great but wrong" - but I can only reasonably conclude that a source's contents are correct if I have got more evidence for its thesis than just that one source.
So the situation is bad enough when I am encountering a new source. But if I'm already familiar with a source, and it's consistently wrong about the things I know, then I can only be even more suspicious of everything else they say; and sadly, an awful lot of publications do fall into that category. Finding a mistake in a book does make it more likely that it contains other mistakes, and it does make any given fact in the book more likely to be wrong.
I don't think it's rational unless and until you, or people who are knowledgeable in all the other topics covered, can assert that the other parts all check out.
If you can only check one thing and it's wrong, then it is entirely and clearly irrational to assume that everything else is correct.
It's not fully defensible to assume that it's 100% wrong either. Just by plain statistics you may assume almost anything must have some correct portion.
What's rational is to make as few assumptions as possible, and where guessing or informed guessing is necessary, use only the information you actually have. That means, back to the beginning: if you can only evaluate one part, and it has many errors, then that is the only thing you can use to make any assumptions about the rest, unless and until you get credible assertions about the rest from others who are themselves credible in that domain.
They're not talking about extrapolating from one mistake, though. They're talking about drawing one observation from a population with an unknown distribution. At that point, that single observation is the only estimate of the quality of the entire book. If you decide not to sample further (that is, find other chapters of the book in which you are an expert), then the conclusion that the book is trash is the only rational one.
(If you want to read more about this then google "single observation unbiased estimator")
> If you decide to not sample further (that is find other chapters in the book of which you are an expert) then the conclusion that the book is trash is the only rational one.
That's exactly the wrong kind of extrapolation I'm pointing out.
If you found a flaw in one chapter, the book can be trash, and the chance of it being trash is definitely higher than without any other data. But whether the absolute chance of it being trash is sufficiently high can only be determined based on the nature of the flaw, and even then the confidence in that conclusion can't really be that high.
The claim isn't really that the book as a whole is necessarily good or bad. It may be expressed as such, but Crichton was talking about the habits of readers.
Determining whether a source is trustworthy is an ongoing process: you're seeing one portion where you have some expertise or evidence and then another portion where you don't. You have to extrapolate from what you can validate to whether what you can't is valid.
The complaint behind Gell-Mann amnesia is that we're too quick to dismiss clear evidence that a source is untrustworthy, that we have a bias towards trust.
To put it in perspective, let's imagine the opposite, call it Crank Awareness. If you see a document and it's laid out poorly, uses weird boldface, all caps, colors and blinking text, you'll, at the very least, get the impression it's written by a crank before you even start reading.
> But believing they are always bad is the same logical fallacy as believing they are always good.
Again, this comes back to the problem of strict logical reasoning vs. treating trust as a larger process. We have limited resources to evaluate sources, so we're stuck making fallacious generalizations when we need to make a decision based on our sources. What we want in the long run is to incentivize authors to exercise care and diligence.
That would indicate we should punish known bad information by deprecating the authors. So another way of reading Gell-Mann amnesia is that readers aren't doing this. They're seeing stuff that they know is wrong and continuing to patronize the publications regardless, thus authors can be untrustworthy and still collect a paycheck.
> I'm suggesting the book should be treated very skeptically.
Well then, why did you write about liking the book and whether it's good as the cause of disagreement? That's a very different thing from whether the book is an accurate source!
Even a specific section you prove to be wrong can still be good.
Perhaps we should reframe how we approach new knowledge?
"That was a super interesting book; it brought up lots of ideas and explanations. Let's discuss what parts, if any, we think are true."
Folks call out information and facts as fake news, applying not healthy skepticism but a rejection of all knowledge, and then at the same time fall for hoaxes.
The whole country needs to take a gap year and learn the scientific method.
They are actually extremely rigorous about this in Swedish lower education. Source criticism is part of the curriculum. It becomes evident how much of a difference it makes when comparing the behavior of the elderly population at large (where only those who went to university have been taught it) to those who have had it from the start.
This seems like something different to me. Most books, TV shows, movies, etc that do anything with technology above the most basic level usually get at least one thing incredibly wrong. It's not necessarily a deal-breaker though - almost all stories are written to present and advance an interesting plotline, and almost all of them gloss over various inconvenient realities for the sake of a better story. Consumers can usually suspend their disbelief and accept the story for its own sake.
Problems only really come in when ignorant people read too many stories and start to think that how they present things is actually real. And some people who know a particular area very well may find whatever the story does too patently absurd to suspend disbelief.
> I point out how one section I know a lot about is deeply wrong, everyone else says versions of "well other than that part it's a great book!"
Sometimes books about magical properties of crystals get their geology right. Sometimes they get it wrong.
If I apply your superficial filter I essentially give up my ability to convey the difference to your friends.
If I ignore your filter and pay attention to the entirety of each book, it's trivial for me to help them separate wheat from chaff. (Or at least chaffy-wheat from pure chaff.)
Considering that Matticus there knows about LW and Rationalism, it seems obvious he isn't saying to zero out your coefficients.
Imagine I give you a dictionary purporting to contain descriptions of the referents of the following words: (I assume you know what a sprint is, but not a bilparyoti or a zambungar)
* sprint - to run at a rapid pace for a short distance
* bilparyoti - a kind of bright blue butterfly, found in Congo-Brazzaville
* zambungar - a muddy colour, specifically that created when a landslide enters a clear river
Now, based on knowing that 'sprint' is 'correct', what is your probability, posterior to being supplied the dictionary, that you know what a 'bilparyoti' references?
Now, imagine that I tell you that the 'bilparyoti' is wrong and you are able to be convinced that the 'bilparyoti' is wrong. Is your prior for the accuracy of the 'zambungar' reference the same as the posterior after being supplied the information about the 'bilparyoti'?
Matticus, there, presumably laments the fact that the zambungar accuracy probability has not moved. Rationally it should move towards zero. By varying amounts, certainly, but towards zero. With his friends, P(zambungar_correct | bilparyoti_wrong) = P(zambungar_correct|bilparyoti_unknown), truly a situation worthy of wailing and gnashing of teeth.
> With his friends, P(zambungar_correct | bilparyoti_wrong) = P(zambungar_correct|bilparyoti_unknown), truly a situation worthy of wailing and gnashing of teeth.
Granted. But OP seems to be at the other extreme which is equally problematic:
> Everyone reads a book, everyone but me likes it.
That strongly implies a boolean value judgment to me.
Nah, he's right. I often just end up with "I found parts of this persuasive and appealing, but I can't trust it because those parts are things I don't know much about, and in the parts I do know a lot about, there were serious issues." It's not that I think I know the rest is bad, too; it's that I know I don't know.
I think that's just a shortcoming of terse communication. I would wager he's not zeroing out the coefficients considering the context. Maybe $10 at 5:1 if there were a way to ensure fairness on the bet.
LW seems to stand for Less Wrong: LessWrong (also written Less Wrong) is a community blog and forum focused on discussion of cognitive biases, philosophy, psychology, economics, rationality, and artificial intelligence, among other topics.
Thank you for this! This happens to me all the time, e.g. when they label planes in pictures wrong (mixing up F-16s and F-18s, say). And every time I wonder what they get wrong where I'm not knowledgeable.
I like this meme, it is fun and so on, but I have to admit it is not really a thing: I am a professional physicist, and journalists at respected outlets are pretty good. NPR, PBS, and the NYT all do a pretty great job at science journalism. More often than not, the rare complaints from professional scientists are self-aggrandizement lacking awareness of the pedagogical constraints of the popular press.
You lost me at "NPR do a pretty great job at science reporting." In fact they are a punchline to the joke about how incredibly bad science reporting can be.
Right yeah. NPR great. NPR science reporting. Terrible. It's a shame. Sadly it's a strong pattern not a one off, hence the NPR search link on that site so you can satisfy yourself that "as heard on NPR" is anything but a strong signal of quality research.
And if we keep repeating that they'll likely fix it. It's absolutely fixable. NPR are not a total write off.
That website has logged 10 complaints in 13 years, most of them in 2016. This is a bit of a selection bias: yes, if you select the terrible examples, 100% of them will be terrible.
On the other hand, I listen daily to their flagship science show "Shortwave" and science-adjacent shows like "The Indicator" and their general news updates, and they are pretty great both in terms of being pedagogical and in not being misleading when they simplify something.
Same is true of criminal courts. "If you select only the examples of fraud of course my client looks like a fraudster."
Seriously, if NPR had good science reporters, why on earth aren't they tapping their colleagues on the shoulder and saying "Don't", or maybe "You need to correct that; it's NPR's reputation, not just yours"? I mean, even by the standards of popular journalism they're pretty bad. And NPR is the punchline whenever someone publishes research by media release, fitting some model to noise and booking their TED talk [1]. NPR are always there.
[1] Some people think TED is good science too. Maybe it's not all garbage? I don't know, my search is not exhaustive once the pattern is established.
Nitpick: It's Gell-Mann amnesia, Michael Crichton's name for Murray Gell-Mann's amnesiac behavior, not an effect discovered or promoted by Gell-Mann.
It's not a nitpick. The "Gell-Mann" in the name elevates a novelist and pundit's ideas to those of a Nobel physics laureate. It's worth pointing out!
It's also a frustrating argument. What does it actually say? "Journalists are often wrong." No shit! That's why it's called "the first draft of history". Meanwhile, everybody is often wrong. But we don't have a "Dijkstra amnesia" to describe all the times we fall short of the ideals of our own discipline, but forget about that when holding other people to our notional ideals of their disciplines.
It's not that journalists are often wrong, it's that journalists often say stuff that is obviously wrong to anyone who knows anything about the subject. And that indicates sloppy investigation, like not contacting Signal before reporting this story.
The question is are they more wrong than any other segment of the population that writes for consumption?
Journalists, unlike say bloggers or marketers or think tank authors or pundits, have a fairly robust ethics/rules system about how to publish. Does it fail them at times? Of course, but do they fail at a higher % than other outlets?
I’ve never seen any actual evidence to suggest that. That there is a pithy quote from a Nobel prize winner isn’t interesting.
The point is that journalists tend to be a lot less reliable than the reading public seems to think they are.
Most people would have enough sense to take information with a grain of salt if the source is "I heard it from a guy who heard it from a guy." Even if the guy is reliable, who is to say that the other guy is reliable?
Journalists aren't random people who have "heard it from a guy who heard it from a guy". They do reporting: they call people, develop sources, and work with fact checkers. That doesn't make them right all the time, or mean you should read them uncritically. But, of course, the bullshit meme is that you shouldn't take them seriously at all. It's premised on holding journalists to a standard we don't hold doctors to, let alone software developers.
> But we don't have a "Dijkstra amnesia" to describe all the times we fall short of the ideals of our own discipline, but forget about that when holding other people to our notional ideals of their disciplines.
Perhaps we should?
(And then, of course, someone can post:
> Nitpick: It's Dijkstra amnesia, tptacek's name for Edsger Dijkstra's amnesiac behavior, not an effect discovered or promoted by Dijkstra.
when it eventually gets rounded to the "Dijkstra effect".)
The strongest interpretation of the idea expressed by Gell-Mann amnesia is: "You shift your posteriors away from your priors based on journalism more than you should, and you do not adjust this upon seeing evidence of imperfection in journalism." I.e., it warns you that you are likely over-weighting journalism.
Fortunately, most journalism is useless for information transmission and likely rarely alters behaviour - the latter having been chosen first with the journalism being used as justification. To that degree, the fact that most journalists are usually low quality information sources is not particularly dangerous since you never use them to do anything different from what you'd do.
I’m just as concerned about:
> According to one cyber-security expert, the claims sounded "believable".
One anonymous source at the top of the article bolsters the claim, while all the experts who were willing to attach their names to their words, and who temper the article's claims, appear toward the end of the article.
The Signal blog post says from the beginning that:
> Since we weren’t actually given the opportunity to comment in that story
So it may just mean they were not given enough time to respond before publication, or even that they were contacted post-publication. In the race for front-page "breaking news", responses are expected to be published as updates to the story.
I've been on the receiving end of these calls. "This is so and so reporter from X News and we're running a story about X. Please call us back before 2pm so we can get a response."
So what? There's plenty of dubious blogs claiming all manner of things. The actual accomplishment isn't noteworthy (they wrote a scraper). If it were encryption-breaking, it would indeed be noteworthy, and thus worthy of fact-checking or at least waiting for Signal's rebuttal of "lolwut, no, that's nonsense".
"Accuracy" is moot here: BBC's headline is misinformation.
How and when they contacted Cellebrite/Signal is important, but even when you see "refused to comment" there really isn't a timestamp for when contact was attempted/initiated. Is there a reason for this?
Additionally shameful:
- they haven't printed a retraction yet
- the technology reporter in question doesn't understand the tech well enough to recognize the error, even when somebody states it explicitly (https://mobile.twitter.com/janewakefield/status/134141965721...)
It isn’t shameful. It is yet another indicator that the journalism industry is creating the intellectual equivalent of Animal Crossing.
It’s a time waster that entertains- not a reflection of truth. How could any business be considered “the best” in its field and create such a shitty product? Simplest explanation: they are not trustworthy and never were.
This is not due to incompetence. This is done with the objective of influencing public opinion of cryptographic tools so that people will stop using them. The system has no way of actually breaking the encryption, which is why it focuses on the other ways to circumvent it, one of them being making most people (non-experts) believe it doesn't work anyway so they stop using it.
This is a focused campaign, not just a random occurrence of incompetence.
I used to work at a traffic signal company. We got bad press whenever somebody "hacked" our infrastructure. It was always a super sophisticated "default password attack".
I tried to get the default password changed to a unique-per-unit, randomly chosen UUID, just to be obnoxious enough to convince the customer to set their own password. I encountered resistance of the "but then they'd need to be retrained" sort.
I wish I could say we didn't deserve the bad name, but we kinda did.
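The change lobbied for above is nearly a one-liner. A minimal sketch, where the provisioning function and record fields are hypothetical:

```python
import uuid

def provision_unit(unit_record: dict) -> dict:
    # Hypothetical provisioning step: instead of a shared factory default,
    # stamp each unit with its own random password. uuid4() is generated
    # from os.urandom, so every unit ships with a unique, unguessable
    # credential -- annoying enough that customers set their own.
    unit_record["default_password"] = str(uuid.uuid4())
    return unit_record

a = provision_unit({"serial": "TS-0001"})
b = provision_unit({"serial": "TS-0002"})
assert a["default_password"] != b["default_password"]
```

A shared default password makes every deployed unit fall together; a per-unit random default at least scopes a leak to one device.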
> I feel like a lot of today’s mistrust of news stems from publications not verifying sources
First, a nitpick: that is a thought, not a feeling. You didn't state how it made you feel; you stated an idea.
Moving on... That's not why people mistrust the media. They mistrust the media because they are told to by politicians seeking to discredit journalism and control the narrative.
Double nitpick: People often colloquially use "I feel like" in place of "It is my opinion that" and it doesn't even strike me as literally wrong to describe an opinion as a feeling...
And do you think this episode is evidence of trustworthiness on the part of the BBC?
I agree with you that politicians sow distrust, but poorly researched pieces are the fault of no one but the journalists.
I don't think this impacts the BBC's trustworthiness one lick. They reported a story objectively, stating each side, without their own opinion, even though one side was acting in bad faith. Is the BBC out of its element here? Absolutely. But are they engaging in lies and deceit, like Fox or OANN? Hardly.
> poorly researched pieces are the fault of no one but the journalists
Agreed. They should have consulted with HN; look at all the experts here.
> literally wrong to describe an opinion as a feeling.
I don't know the gender of the OP, but one area where therapists and psychologists struggle with male patients is that they often say "I feel" instead of "I think", and when asked what the feeling was, they struggle.
You can't feel an opinion. You can have one, and it can make you feel suspicious, or doubtful, or confused, because those are feelings.
This is a good example of how words used incorrectly are symptomatic of a deeper issue: male avoidance of feelings beyond hungry/angry/horny. It's not exclusive to men, but it is very common.
I don't think it's reasonable to conclude that there is no influence directing the overall thrust of stories, choosing which stories or which versions of stories get run, or which writers get published, any more than to conclude that every single story was scripted by the Illuminati or the Rothschilds.
We HAVE seen enough evidence to know that much, just by tabulating stats, and from things like that John Oliver bit showing all the TV news stations using the exact same supposedly off-the-cuff remarks.
I am a counterexample to your main point. I distrust most media sources because I've not once seen one present rigorous, transparent, verifiable research about a current event of interest to me.
I think I've seen every media source I've followed get significant facts wrong about things I know well.
I try to fight back against Gell-Mann amnesia in my own head.
By "distrust", I just mean that I do not default to trusting any news source I know of. I try to take what they say as plausible but not likely the complete picture. I also keep an eye out for what their overarching narratives tend to be and try to compensate for that in my own interpretation of events.
> I distrust most media sources because I've not once seen one present rigorous, transparent, verifiable research about a current event of interest to me.
"Because I've never seen it, clearly it does not exist."
"I have no reason to believe this, but it clearly must exist"
That's called faith and religion.
You cannot rationally make decisions based on anything other than what you know and have seen or can reasonably project from there.
I.e., I can't see an atom with my eyes, and I can't duplicate all the research of history myself, but I can see some things with my eyes, and I can duplicate some research myself, and I can follow a reasonable, logical, defensible chain of trust from what I can directly prove to myself, to proofs I can accept indirectly, and distinguish those from fairy tales.
> "I have no reason to believe this, but it clearly must exist"
Except I never said that, whereas the previous post had a congruent summary of OP.
Strike one.
> You cannot rationally make decisions based on anything other than what you know and have seen or can reasonably project from there.
True. But that isn't what we are talking about; you ventured into strawman territory all on your own. The person was claiming they've never seen a truthful source, which is the No True Scotsman fallacy. We're not talking about God; we're talking about the fact that some journalism is unbiased. There is much of it out there, but OP seems to think there isn't, or has convinced him/herself otherwise out of a desire not to be wrong.
Strike two.
> can follow a reasonable, logical, defensible chain of trust from what I can directly prove to myself, to proofs I can accept indirectly, and distinguish those from fairy tales.
You've demonstrated that you aren't very good at following a "logical chain of trust" because you've reasoned to yourself, poorly, something that wasn't even being discussed.
Strike three.
First time playing with logic?
It never ceases to astonish me how people who throw around the terms "logic" and "rational thought" really, reeeeeally don't know how to deploy either.
That is a fun presentation, although the title is misleading because it's 99% about the Web PKI which is orthogonal to SSL (and TLS). TLS doesn't care at all why you trust these certificates, if you want to trust certificates so long as the public key contains the decimal digits 42069 that's fine.
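To make that point concrete: certificate validation is a policy layered on top of the TLS handshake, not part of it. A deliberately absurd trust policy like the one above could be sketched as follows (`trust_decision` is a hypothetical name, and this is obviously not a secure policy):

```python
def trust_decision(public_key_der: bytes) -> bool:
    # Accept any peer whose public key, read as a big integer, contains
    # the decimal digits 42069. TLS itself doesn't care: the handshake
    # succeeds or fails on the cryptography; *why* you trust the
    # presented certificate is entirely up to the application/PKI layer.
    #
    # With Python's ssl module, for instance, you could set
    # verify_mode = ssl.CERT_NONE to disable the built-in (Web PKI)
    # checks and apply your own policy to the DER certificate from
    # SSLSocket.getpeercert(binary_form=True) -- extracting the public
    # key from it would need an X.509 parser, omitted here.
    return "42069" in str(int.from_bytes(public_key_der, "big"))

assert trust_decision((42069).to_bytes(2, "big"))
assert not trust_decision(b"\x01")
```

The Web PKI is just the particular trust policy the browsers standardized on; swapping it out (as Convergence tried to) leaves the TLS protocol untouched.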
Even PKIX (the IETF's profile for X.509 on the Internet) is orthogonal to TLS as designed, although in practice you're creating a world of pain for yourself if you decide you do want TLS but you don't want PKIX since the two have grown next to each other for decades.
Anyway, almost all of Moxie's talk is about the Certificate Authorities in the Web PKI, and not about SSL/TLS per se at all. It's about his attempt (Convergence) at multi-perspective peer validation for authenticity to eventually replace Certificate Authorities. Could that have worked? Maybe, sort of. It never went anywhere much.
Of course, in hindsight we can't blame Moxie for not guessing what would happen next - I expect few if any of us spent last Xmas thinking "Better enjoy this, next Xmas will be a totally different ball game because of a pandemic virus" either.
> Articles about this post would have been more appropriately titled “Cellebrite accidentally reveals that their technical abilities are as bankrupt as their function in the world.”
> If you have your device, Cellebrite is not your concern.
But if the attacker has a 0day, which likely all the big players do, they don't need your physical device. Which means Signal will do squat to protect your data in that case.
All nation-state governments are just buying 0days from companies like NSO Group and Zerodium.
The question is: are you a valuable enough asset that they are gonna burn their $50M 0day just to get your device?
I think Signal is pretty safe from such things - better than, for example, WhatsApp, which seems to be where a majority of these nation-states are using their 0days and exploits.
> All nation-state governments are just buying 0days from companies like NSO Group and Zerodium.
USA/Russia/Israel for sure have these programs.
> The question is are you a valuable enough asset that they are gunna burn their $50M 0day just to get your device.
You are overshooting the price by at least an order of magnitude. Also, what is the percentage of Android phones not on the latest security patches and pretty much wide open to known exploits? For sure 90%+.
This tech is available for anyone with enough money, there are plenty of bad guy rich people. An actual investigative journalist can easily make an enemy of a rich person.
> I think Signal is pretty safe from such things.
You base this on what? If someone is executing code as root on your phone, they can absolutely use the method described in the Cellebrite article.
If someone has gained root you're done. Every application must be assumed to be unsafe at that point. This isn't news, and it doesn't mean signal is broken.
Good luck finding a messenger app that can help you when "they have root access to my phone" is in your threat model. Not sure what you expect Signal to do about this...
> There are no apps that resist the phone being rooted. Everyone is vulnerable to 0days by definition.
I don't know why everybody is repeating this as if I somehow don't understand that. My point is Signal is promoted as some sort of panacea by security professionals even though all that security can be bypassed, likely routinely by actual bad guys.
I mean that Snowden specifically agitated against dragnet surveillance, so it's not at all surprising that he'd promote the encrypted messaging app that he thinks is the most effective against it.
Has he ever said that Signal is the end-all, perfect solution that will prevent all kinds of threats and provide perfect privacy? I am sure there is a lot of sloppy messaging out there, but an endorsement along the lines of "I trust Signal's encryption and that it's not backdoored" is not unreasonable
I think the problem here is that you have misinterpreted a recommendation that is using the median risk as a recommendation that is using the p100 risk. Allow me to correct that for you: security professionals are not recommending Signal as protection against the p100 risk. Hope that helps.
You can't say a firewall is insecure because someone installed it in a network where one can walk in freely and take the passwords off of the sticky note on the desk.
Are there no apps that do this? I notice that Signal unlocks when you unlock the phone; are there no e2e messaging apps that require authentication (whether passcode or biometric) even on unlocked devices?
Just checked, Signal has this; does it actually serve to decrypt the encryption key, or is the key still accessible as root?
If anyone has access to your device, your data can't be protected, no matter whether the access is physical or remote.
The attacker could simply log all your passwords, so there is nothing Signal nor any other software could do.
Those features help against Cellebrite but not against actual 0days which can read incoming messages in real-time. If the NSO has a rootkit installed on your phone, it doesn't matter that Signal is shredding messages after you read them.
Some links in most chains will have some weakness or other. So what?
That does not mean that there is no value in the strong links.
You might as well say, "But if the attacker has a sniper, which likely all the big players do, they don't need to read your communications to get you; they can just shoot you from across the street. Which means Signal will do squat to protect your life in that case."
There is something which Signal could do to help even in this scenario. Give the user a logout option which leaves nothing but ciphertext on the device (e.g. by encrypting any plaintext keys with the passphrase). To login again you need a passphrase. Then as long as the user has enough time to click logout they are safe even when the device is out of their hands. Of course, after that, it might be prudent to consider the device compromised, and thus not login again afterwards in case it has been backdoored.
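A minimal sketch of the "logout leaves only ciphertext" idea. Assumptions: the PBKDF2 parameters are illustrative, and a real implementation would wrap the data key with an AEAD cipher such as AES-GCM; the XOR used here is only a dependency-free stand-in for the wrap step:

```python
import hashlib, os

def wrap_key(data_key: bytes, passphrase: str, salt: bytes) -> bytes:
    # Derive a key-encryption key from the passphrase via PBKDF2-HMAC-SHA256,
    # then combine it with the data key. After "logout", only this wrapped
    # blob (plus the public salt) remains on disk; the plaintext key is
    # erased and can only be recovered with the passphrase.
    kek = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt,
                              200_000, dklen=len(data_key))
    return bytes(a ^ b for a, b in zip(data_key, kek))

unwrap_key = wrap_key  # XOR with the same derived pad is its own inverse

salt = os.urandom(16)
key = os.urandom(32)          # the database/data key held in memory while logged in
blob = wrap_key(key, "correct horse battery staple", salt)
assert unwrap_key(blob, "correct horse battery staple", salt) == key
```

The useful property is exactly the one described above: between logout and the next passphrase entry, a forensic image of the device contains no usable key material.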
The Signal app used to have an option to protect the critical data with a strong passphrase but that option was removed.
The developers might have considered that the real threat was a remote-access trojan that would just keylog the passphrase. I guess the Cellebrite thing is a reminder that there are other threats. As it is, you are pretty much lost if someone is willing to snatch the phone from your hand while you are looking at cat pictures on the web. Phones really need more than one level of "unlocked".
Signal can at least have timeouts for requiring re-auth (typically biometric).
The isolation concept can indeed apply to more than just functional separation, including separation in time, or in search depth (number of records returned) or by classification. It would be quite simple -- require auth to scan backwards more than an hour. Or 'press and hold to mark message as sensitive|expiring'. And possibly require a passphrase to access those sensitive messages / threads.
I've applied these techniques for database systems where NTK (need to know) rules apply, to force queries to be narrowly scoped. Likewise system backups don't all need to be online (or in the tape robot), most restores are from the most recent backup.
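The "require auth to scan backwards more than an hour" rule above is easy to sketch. The gate function and its names are hypothetical; the one-hour window is from the comment:

```python
import time

HISTORY_WINDOW = 3600  # seconds a query may reach back without re-auth

def check_query_scope(since_ts, reauthed=False, now=None):
    # Gate on search depth in time: recent history is freely readable,
    # but anything older forces a fresh authentication step, mirroring
    # need-to-know scoping on database queries.
    now = time.time() if now is None else now
    if now - since_ts > HISTORY_WINDOW and not reauthed:
        raise PermissionError("re-authentication required for history "
                              "older than one hour")

check_query_scope(since_ts=1000.0, now=1000.0 + 600)        # recent: allowed
denied = False
try:
    check_query_scope(since_ts=1000.0, now=1000.0 + 7200)   # deep scan: blocked
except PermissionError:
    denied = True
assert denied
check_query_scope(since_ts=1000.0, reauthed=True, now=1000.0 + 7200)  # allowed after auth
```

The same shape works for the other axes mentioned (record count, classification): the query layer enforces a narrow default scope and widens it only on explicit re-authentication.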
Exactly... It would also be possible to design an app that never persists any messages or other information on the phone, though at that point the whole thing is just a shell over a website.
There were applications that worked that way before mobile phones and the web.
If Cellebrite thought they had broken the signal encryption - why did they release it in a simple blog post? As responsible people in the sec community shouldn't they tell this to the Signal devs first so that they could review it and fix it if possible/needed? Or is that just something real security researchers are required to do and not something companies feel themselves bound to in any way?
Probably just the company wanting to push out some engineering blog content, and very unfortunate phrasing in the blogpost itself. If they'd framed it as "let's learn how Signal's encryption works" there would not have been any issue.
> the BBC ran a story with the factually untrue headline, “Cellebrite claimed to have cracked chat app’s encryption.” This is false
The headline is actually true: Cellebrite did claim that, after all. Whether or not Cellebrite is lying is another story. Although it was pathetic of them not to reach out to Moxie - it only shows the quality of the mainstream media.
That doesn't absolve BBC of spreading misinformation, which is what they are doing. It takes near zero effort to make a dubious claim on a blog. If BBC then writes, "kortex claimed to have hacked the NSA", that's accurate, I did make a wild claim. It's also misinformation if that headline is published without any investigation as to if that claim has any weight beyond internet rambling.
"You can't put anything on the internet that isn't true"
"2. Cellebrite is not magic. Imagine that someone is physically holding your device, with the screen unlocked, in their hands. If they wanted to create a record of what's on your device right then, they could simply open each app on your device and take screenshots of what's there."
Under the laws of many countries, that "record" alone would likely be inadmissible. Cellebrite's market is authorities who seize computers and then must follow forensics protocols for extracting digital evidence. It is not someone holding your device in their hands, opening up each app and taking screenshots.
>Not only can Cellebrite not break Signal encryption, but Cellebrite never even claimed to be able to.
But the original Cellebrite post says:
>Decrypting messages and attachments sent with Signal has been all but impossible…until now.
and
>This time, the encryption is even harder to crack.
It's just that Cellebrite's claim is totally baseless and what they're actually doing is not "breaking" or "cracking" anything. The BBC article should have been more critical of Cellebrite's language, but I don't agree with Signal that their headline was "false".
They automated taking pictures of app content on an unlocked Android phone, and bragged about it as a breakthrough tech, then deleted the article and replaced it with fluff. Embarrassing.
There's a world of difference between taking screenshots of an application and extracting messages and metadata from a database. I should imagine that Cellebrite's audience of corporate and law enforcement investigators would be very keen on getting access to this breakthrough. OTOH, Cellebrite's competitors have been able to do that for a while!
More clarification on the topic would be nice. When I open the Signal app right now, I am nagged to "Create a PIN. PINs keep information that's stored with Signal encrypted. Remind Me Later / Create PIN".
Would be interesting to know if an app specific PIN resists cellebrite analysis. Screen unlocked, Signal PIN enabled.
No, Signal does not ask you for your PIN to access your messages. It is required to transfer your account to a new device if you don't have access to the old one.
I suppose that the BBC article title could be considered to be correct in a narrow sense. Cellebrite makes products that can, in some cases, unlock phones. Signal Messenger depends on the phone OS to protect the key used to encrypt the saved data. So in some cases Cellebrite does have the power to break the saved data encryption.
The Signal case is interesting because the app used to have a feature where you could protect the data with a separate strong passphrase. That would have prevented this particular attack. For reasons that are not clear to me, Signal eliminated this feature.
So what? That's a procedural rule not the difference between "cracking encryption" and not. It still means all they did was automate something, not crack something.
Whoever wrote the article for Signal should be writing bars in rap songs. Such a great article. I was laughing the whole way through. The author manages to poke fun of Cellebrite and plug Signal.
It is true that the Cellebrite blog post looks like amateur work. However, an important piece of context is missing from Signal's blog post:
Cellebrite specializes in breaking into and extracting data from encrypted partitions, and (this is the important part) extracting keys from the secure keystore (Qualcomm/Exynos).
From Cellebrite's point of view, the data and the keystore are already "given"; all that remains is "breaking" the app's encryption scheme, which in Signal's case is trivial.
While Signal's encryption is good, I don't like that (1) you have to have a phone number to register, (2) it asks for access to your contacts on your phone (3) you have to install the app on a computer rather than being able to just use it through a browser. For these reasons I prefer Wire... and you can log into three accounts at once on the free tier.
- (1) They are working on this, but it serves to limit spam, and it is easily comprehensible by the non-technical.
- (2) You don't have to give it access; it works either way. It does its best to only use this for finding which of your contacts use Signal, rather than uploading the full address book.
- (3) In a browser, the client-side crypto would be supplied by the server, which is fundamentally a big problem for a system like Signal. Until web crypto is no longer a dumpster fire, you can't do better. (I personally think allowing the Electron app is a huge mistake, but it's their call.)
I found the original setup requiring a phone number off-putting as well. However, I'm not sure how much it is used. In my case, I had a phone number that was used to set up Signal. It was my number at the time. I now have a new number, yet there is no way within Signal to update that number. It still shows the old number in my settings. ???? Signal still works just fine for me and all of my contacts.
Any time an encryption-breaking claim includes the words "well, if we can retrieve the decryption key from the phone" and doesn't back that up with a mechanism by which this is feasible, it isn't encryption breaking so much as "if I had an already-decrypted device, then man, can I do cool stuff for you!"
> The types of people that use Cellebrite will never understand the nuance here.
Au contraire, I would imagine a law-enforcement or corporate investigator would know the difference between encryption in transit and encryption at endpoints, and would therefore be in the market for what Cellebrite has achieved (if they aren't using a competitor [1] already).
Ever since I learned Facebook greenlit "Signal is awesome, we're using it" I've been trying to headscratch why. Then I realised WhatsApp had kicked the whole thing off years ago and got even more confused. Why do you want to encrypt everything??? It makes your life harder. It makes cooperating with law enforcement harder. It means legitimate users can't recover their messages. It means you can't do fun things with analytics, which is extremely contentious but concedably valuable. So, why??
I think I've figured it out. A tiny little bit of it, anyway.
Imagine you're a multi-million (okay, multi-billion) dollar communications company. You're WhatsApp. Apple (iMessage). Facebook. Google (RCS).
You store trillions of messages.
In those trillions of messages, you are going to have the statistical >100.00% guarantee that there are chats and conversations between individuals and groups that would launch World Wars 4 through 16 if certain other individuals, groups, governments and so forth were to learn/verify that A did really say <thing> to B. The nuclear launch codes don't fit in a football anymore.
I have no hope of ever confirming the validity of that Bloomberg article about the alleged Supermicro hack. But it seems "well duh" simple enough to be concerningly plausible (custom silicon packaged in WLCSP or SC70, bit-twiddling SPI? Too easy... :S). As a technically-flawless plausibility, I say it can serve as a concrete reference example of a fraction of the persistent, sweeping, ruthless, and terrifying scale of the super-industrial, Eye Of Sauron-style attacks that these companies have very obviously been facing for some time now.
So, my possibly-not-really-a-conspiracy-theory-since-the-pieces-come-together-without-fantastic-levels-of-extrapolation theory is, someone stumbled on an idea one day, maybe in a stuffy committee meeting, or maybe in a bar, to solve the problem by giving the people what they wanted... end-to-end encrypt everything... and go from encryption at rest, which is basically nothing, to encryption everywhere; and you instantaneously divest the massive, massive burden of owning all that readable data.
True, now "accidentally" forgetting the `s` in backend URLs doesn't let the NSA read everything anymore, but that kind of pales in comparison to being able to incontrovertibly, mathematically prove that, since the data really is encrypted before it leaves the device, there really is no chance any readable plaintext is leaking and potentially being stored; so if the nation states would kindly take stock of this situation and point the coherence death ray beams elsewhere that would be great since we are kind of on fire here at the moment and it's too hOT we are meLTING--
Getting this to catch on was obviously difficult. Anybody that can scare multi-billion dollar companies obviously has the skill to steer collective opinion and impression at scale. Whoever came up with the idea to piggyback on top of individual privacy is... a task-focused genius, I'll put it that way. On the one hand, the idea has scaled beautifully: all the tech folks have gone "Is private. Respects freedom. Og like." and loudly pushed for the idea everywhere they can. And from a sociopolitical perspective, the narrative is faultless and blameless, which is where the genius definitely shines through.
The first bit I can't say I like is the narrative appearance of first-class support for end-to-end encryption as a Scientific Advancement™. It's not. It's an implementationally-scoped, crowd control spin campaign to increase datacenter security beyond what disk encryption at rest can ever achieve. The scale of wreckage in the form of technically minded people who really believe the privacy narrative is disillusioning to see.
The other bit that I find unamusing is the long-term shift in the attack landscape that will result from this. Specifically the fact that an Eye Of Sauron style adversary is not ultimately going to care what its attack target is, or how to attack it, only that it gets vaporised. End-to-end encryption shifts the burden of responsibility from the owner of the server to the owner of the client. I can see the positive angle here from a think-tank standpoint - literal decentralization as a defence strategy - but still, Android/iOS are now the focus of some laser beams that were terrifying a bunch of rather large companies. Maybe it'll seem reasonable to heavily fund the vulnerability research scene to maintain a favorable status quo, and we'll see some impressive hacks going forward (or, er, we won't). Or maybe things are already "that bad" and I don't have anything to worry about. But considering that users are now that much more responsible for devices that are interesting in a way they never were before, this whole strategy kinda feels irresponsible to me if you squint at it from a certain angle. At the same time, it might ironically be ensuring our survival.
In this picture, law enforcement really is the afterthought. It's well known the court system doesn't understand technology and is 20 (40? 50?) years in the past. That situation extends beyond the courts, with law enforcement generally in the same position. But it's worse than it may at first seem, because a notion of "the past" that refers to a collective public interpretation of "now" doesn't do justice to the technological development that has happened at these companies over the last 5-10 years - these private companies are internally fighting battles of a complexity that the public law enforcement system cannot hope to comprehend, let alone help with.
In this fight, the best way to avoid World War 4 is to encrypt everything. But Washington is still getting over how cool they handled the Cold War, and the police still think it's "hard" and "complicated" and "special" to "hack phones".
Signal seems a bit of a jerk company. They should have stayed above the fray. Instead, they ridiculed. They were within their rights to do so, but should have been the better party. Throwing insults, however well grounded, is never good for a brand and shows a lack of level-headedness. They could have simply stated that the story is a non-story and explained why. This could have been paired with working with the BBC to get a retraction in place, taught someone at the BBC something, and maybe even gotten a little free positive PR from them in the process.
Friendly reminder that reproducible F-Droid builds of Signal are still rejected for weak reasons [1]. You must trust the Signal binaries on popular app stores [2][3].
Don’t worry though; the Signal devs assure you that signed Android binaries from their website are reproducible [4]. As if checksum collisions aren’t something that state actors could trivially create [5].
Collision attacks are an issue when using MD5/SHA-1, but I don't see how that's relevant in this case. If state actors want to replace the Signal APK with a backdoored one, they'll need to pull off a preimage attack, not a collision attack. A collision attack wouldn't be useful because you would still need the original publisher to sign the APK for you, which seems unlikely considering OWS isn't a CA or anything. If OWS were compromised by state actors into signing, then they could just sign the backdoored version directly, no need for a collision attack.
I presume the idea is that the binary compiled from source and the binary distributed by Signal would be different but have the same SHA-1s. It does not make a lot of sense, though, because one could simply use another algorithm.
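The distinction the parent draws matters in practice: a collision attack lets the attacker choose *both* inputs, whereas substituting a backdoored APK whose digest is already published requires finding a second preimage for a fixed digest, which is a much harder problem. A minimal sketch in Python with `hashlib` (the byte strings are made-up stand-ins, not real APK contents):

```python
import hashlib

# Stand-ins for the published build and a tampered copy (illustrative bytes only).
official_apk = b"signal official build bytes"
backdoored_apk = b"signal official build bytes + backdoor"

# The defender fixes this digest first, e.g. by publishing it alongside the APK.
published_digest = hashlib.sha256(official_apk).hexdigest()

# A collision attack produces two attacker-chosen inputs with equal hashes.
# Here the attacker does NOT control the official build, so they would need a
# second preimage: different bytes matching the already-fixed digest above.
tampered_digest = hashlib.sha256(backdoored_apk).hexdigest()

print(tampered_digest == published_digest)  # False: no known way to force a match
```

For SHA-256 no practical collision, preimage, or second-preimage attack is publicly known; the published MD5/SHA-1 breaks are collision attacks only.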
AFAIK, you cannot inspect an app's files on iOS unless your device is jailbroken. Apps are completely opaque since there is no traversable filesystem. On Android, it is a little simpler.
The vast majority of non-technical users have no knowledge about hexadecimal file comparisons or checksums. They see an app that promises privacy, and they click download.
You can download the APK from their site[1] and have been able to for several years; that's how I install it on my devices (and it auto-updates).
As for hash collisions: as far as we know, SHA-256 hasn't been broken yet, and even if it were, you would need to be able to create a valid APK containing Signal's code plus the backdoor that hashes to the same value as Signal's build.
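Checking a downloaded APK against a published digest is straightforward. A sketch, assuming a hypothetical file name; here the expected digest is computed on the spot for the demo, whereas in practice you would copy it from the vendor's site:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream the file in chunks so a large APK need not fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Demo: write a stand-in "APK" and verify it against its known digest.
apk = Path("Signal-demo.apk")           # hypothetical file name
apk.write_bytes(b"not a real apk")
expected = hashlib.sha256(b"not a real apk").hexdigest()  # normally published by the vendor

print(sha256_of(apk) == expected)  # True
```

Note that this only proves the download matches what the publisher advertised; it says nothing about whether the published build itself matches the source, which is what reproducible builds are for.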
Signal's reasons for not wanting to maintain an F-Droid repository are terrible, but that doesn't in any way compromise their security. As others have pointed out, reproducibility goes further than just checksums.
[1]: https://web.archive.org/web/20201210150311/https://www.celle...