Oh, too bad. Law enforcement has had it absurdly easy for the past ~15 years; all they had to do was cry "terrorist" and they could read whatever they wanted (I'm glossing over a lot of details here). If the FBI can read everyone's email (and other communications), then the people who really have stuff to hide will find other ways of hiding it, and the FBI will waste their time (and our $$) screwing innocent people or catching dumb people they would've caught in other ways.
As encryption becomes more common, they've gotta choose whose communications they look at rather than entering "keyword=bomb" and checking the "wiretap everyone in the results" box. Bugging a room or vehicle isn't new, but you need a semi-legitimate reason to justify that sort of thing (reasonable investigations have an ample supply of legitimate reasons). They've got ways to look at a suspect's communications if they want to, especially on US soil.
The more I read about this debate, the more I think the FBI (and law enforcement in general) is just whining because, rather than simply tapping everyone of interest and sifting through the results, they have to find a different way of doing things, and that requires work, and work is hard. I don't feel sorry for them.
This is the same problem the music industry is possibly starting to get over. The way they do things is outdated and they're complaining that progress is causing more work for them. Legislating the past back into existence doesn't work well. Just ask the Taliban.
> “There is a misconception that building a lawful intercept solution into a system requires a so-called ‘backdoor,’ one that foreign adversaries and hackers may try to exploit,” Comey said. “But that isn’t true. We aren’t seeking a backdoor approach. We want to use the front door, with clarity and transparency, and with clear guidance provided by law.”
So, he is calling backdoors frontdoors, and then arguing that he's not seeking backdoors.
Isn't there a fair distinction between an undocumented weakness ("back door") to which LEOs have access, and a provider handing over a key ("front door") upon a lawful order, while maintaining the strength of the encryption scheme?
It seems some believe that Comey is playing with semantics in order to obfuscate, or that he doesn't understand the argument he's making -- I don't think that's the case.
Neither "backdoor" nor "frontdoor" is well defined (backdoor does not always imply secrecy). I had never heard the term "front door" until this recent push, and key escrow has generally been referred to as a backdoor despite being public knowledge.
My personal definitional taste would be:
* Backdoor - an additional way to decrypt a communication without the consent of the communicating parties.
* Secret Backdoor - a backdoor which the communicating parties are not aware of (DUAL_EC).
* Public Backdoor - a backdoor which is built into the public description of the encryption system, so that the communicating parties are aware of it (the Lotus Notes email backdoor).
* Frontdoor - a type of public backdoor which requires a warrant to access and whose key is controlled by a neutral (disinterested) third party. I'm not sure this is exactly what the FBI wants.
Thus, frontdoors are a very specific form of backdoors.
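To make the "public backdoor" category concrete, here's a minimal sketch of the Lotus-style construction (assuming the PyNaCl library; the key names and the escrow party are invented for illustration). The cipher itself stays strong; the message key is simply wrapped for an extra party by design:

```python
# Sketch of a "public backdoor" / escrow scheme, assuming PyNaCl
# (pip install pynacl). The crypto is standard; the policy choice is
# that the message key gets wrapped for an extra party by design.
import nacl.utils
import nacl.secret
from nacl.public import PrivateKey, SealedBox

recipient_key = PrivateKey.generate()
escrow_key = PrivateKey.generate()  # held by the "neutral" third party

def encrypt_with_escrow(plaintext: bytes):
    # Fresh symmetric key for this message.
    message_key = nacl.utils.random(nacl.secret.SecretBox.KEY_SIZE)
    ciphertext = nacl.secret.SecretBox(message_key).encrypt(plaintext)
    # The same message key, wrapped once per party allowed to open it.
    for_recipient = SealedBox(recipient_key.public_key).encrypt(message_key)
    for_escrow = SealedBox(escrow_key.public_key).encrypt(message_key)
    return ciphertext, for_recipient, for_escrow

def open_as(party: PrivateKey, ciphertext: bytes, wrapped_key: bytes) -> bytes:
    message_key = SealedBox(party).decrypt(wrapped_key)
    return nacl.secret.SecretBox(message_key).decrypt(ciphertext)

ct, w_rcpt, w_esc = encrypt_with_escrow(b"meet at noon")
assert open_as(recipient_key, ct, w_rcpt) == b"meet at noon"
assert open_as(escrow_key, ct, w_esc) == b"meet at noon"  # the "door"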
Exactly. The front door is what the end user uses in the regular operation of the system. If there is another "door" imposed under penalty of law then it isn't the front door.
Every user downloads and runs arbitrary code constantly, as updates. In the far future updates might come with a formal proof of their security, machine-verified on download, but for quite a few years still we will be stuck with just cryptography.
A front door would be using Microsoft's signing keys. As long as you don't leak the keys, you aren't diluting security in general. A back door would be just leaving vulnerabilities around. It's a meaningful distinction.
There is a meaningful distinction between lawful imprisonment and false imprisonment; that doesn't make it accurate to call lawful imprisonment freedom.
Moreover, the ability of software vendors to push malicious updates is a security vulnerability. Just because we haven't eradicated it yet doesn't mean we should codify our inability to address it in the future, e.g. by allowing users to choose what party they trust to verify and sign updates.
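To illustrate that last point, here's a minimal sketch (using PyNaCl's Ed25519 signing; the names are hypothetical, not any vendor's actual scheme) of verifying an update against a key the user chose to trust, rather than one baked in by the vendor:

```python
# Sketch of update verification against a user-chosen signing key,
# using PyNaCl's Ed25519. Names are illustrative only.
from nacl.signing import SigningKey, VerifyKey
from nacl.exceptions import BadSignatureError

signer = SigningKey.generate()        # in reality: whoever the user trusts
trusted_key = signer.verify_key       # pinned by the user, out of band

signed_update = signer.sign(b"...update bytes...")  # shipped alongside

def verify_update(signed: bytes, key: VerifyKey) -> bytes:
    try:
        return key.verify(signed)     # returns the payload if the signature holds
    except BadSignatureError:
        raise SystemExit("refusing to install: bad signature")

payload = verify_update(signed_update, trusted_key)
```

Nothing here dictates who controls `trusted_key`; that's exactly the policy question.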
I say we refrain from adopting any new silly terminology that anyone attempts to foist upon us regarding this issue.
Something is either cryptographically secure, or it isn't.
A "cryptographic" method that allows access to anyone not authorized by the one doing the encryption is not cryptographically secure. And in that case, what's the point in using it or even calling it cryptography?
That's some semantic gymnastics. So a "Public backdoor" is a "Frontdoor" when used by a law enforcement agency with a warrant. What about when the "neutral" third party uses the keys for some purpose without a warrant? What about when the third party is hacked? It seems confusing to refer to the same system as both a frontdoor and a backdoor.
The FBI would probably be happy with a front door. Unfortunately, a "frontdoor" means some "neutral" third party (or anyone who hacks it) then has the ability to decrypt all your communications. Furthermore, a "neutral" third party isn't necessarily that trustworthy. The saving grace of the CA system is that non-targeted attacks are likely to be detected: because the CAs don't hold the certificates' private keys, use of alternate keys is detectable, or even preventable in advance with pinning.
Everyone would balk at a CA system where the CAs had all the servers' private keys, no matter how trustworthy the CA. Abuse would be undetectable, and pinning wouldn't mitigate it. And that's exactly what the FBI wants for Google, Facebook, Microsoft, and WhatsApp communication products.
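For reference, pinning in the sense used above is just comparing the server's public-key (SPKI) hash against a previously recorded value. A rough sketch, assuming the Python `cryptography` package (the pinned value below is a placeholder):

```python
# Rough sketch of SPKI pinning: hash the server's public key and compare
# it to a pin recorded earlier. A CA that never held the private key can
# mis-issue a cert, but the attacker's key then fails this check.
# Assumes a recent `cryptography` package; the pin is a placeholder.
import ssl, hashlib, base64
from cryptography import x509
from cryptography.hazmat.primitives import serialization

EXPECTED_PIN = "base64-of-known-spki-hash"  # placeholder, recorded on first use

def spki_pin(host: str, port: int = 443) -> str:
    pem = ssl.get_server_certificate((host, port))
    cert = x509.load_pem_x509_certificate(pem.encode())
    spki = cert.public_key().public_bytes(
        serialization.Encoding.DER,
        serialization.PublicFormat.SubjectPublicKeyInfo,
    )
    return base64.b64encode(hashlib.sha256(spki).digest()).decode()

if spki_pin("example.com") != EXPECTED_PIN:
    raise SystemExit("key changed: possible mis-issued certificate")
```

The point being: this only works because the CA never held the private key, so a mis-issued certificate necessarily shows up with a different pin. If the CA (or escrow agent) held the key itself, there would be nothing to detect.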
If a neutral third party holds the keys, then you have the company (1) that makes the communications products having the keys, transferring them to the neutral third party (2) and deleting them ("we promise!"), so only the third party holds them for possible eventual use by the FBI (3). That's three entities that may potentially have access to key material in the future, not to mention anyone who hacks those three entities.
I'm not advocating such a system, and I agree with the points you make. I don't think a "frontdoor" would be effectively secure against either government abuse or key compromise. It's a bad idea. Not only that, but since it would be publicly known, such products would be at a competitive disadvantage.
To the extent that this is true (which I think is only in rather specific circumstances), backdoors are also ways of routing around constitutional protections.
First of all, encouraging Congress to change laws is not police work. It's political work.
Second, in this case, "routing around" is distinct from "respecting".
The courts and the Constitution state that in some circumstances, a person has the right to encrypt messages and not divulge the encryption key. The fundamental right here is the right to a private internal dialogue -- the state can't compel you to speak on certain questions. The FBI is trying to route around that fundamental right by creating a technical mechanism that lets them hear your internal dialogue without ever having to ask you.
In short, the existence of a technical means for violating the intent of a guaranteed right without technically violating the letter of constitutional law is a game that the courts eventually shut down as unconstitutional bullshit. But a lot of people get hurt in the in-between.
But again, just to be extremely clear on the most important issue here, encouraging Congress to pass laws is not in any way police work...
> encouraging Congress to pass laws is not in any way police work
Who said it was? Police work is obtaining evidence while respecting the Constitution. As technology changes, Congress and the courts must redefine exactly how that can be done, and the police participate in that discussion.
The fifth amendment guarantees the right not to be a witness against oneself. It doesn't guarantee unbreakable encryption.
That said, I don't think Congress can stop criminals from using encryption and I don't think Congress should stop law-abiding citizens from using encryption.
Because you still need the user's willful cooperation. It doesn't work if the user doesn't cooperate, or if he's not supposed to be aware that he is being listened to, which I believe is the case in most terrorism investigations.
Not always, especially when law enforcement can request data straight from the provider, which I would imagine happens in the majority of internet crime investigations. Because let's be real, how many internet companies have a zero knowledge policy towards their users' data?
> Because let's be real, how many internet companies have a zero knowledge policy towards their users' data?
Well, the whole point of TFA is that Apple, Google, Yahoo and the like want to progressively move in that direction (not all the way, for sure, but just far enough that the FBI/NSA doesn't like it).
This is a profoundly US-centric perspective. Consider Microsoft's efforts to isolate data in Irish servers from US jurisdiction. US providers have already lost substantial global business since Snowden's releases. If mandated lawful access could short circuit jurisdictional disputes, US providers would arguably lose far more global business.
That is a very important point from the other side too. Clearly other governments are not going to accept the US FBI having a world-wide backdoor into the communications of their citizens. But if all the "criminals" have to do is buy their phones in Mexico (or wherever) then what are we even talking about?
Consider guidance from New Zealand's Health Information Governance Expert Advisory Group (HIGEAG) [0]:
> Unless an exemption is granted by the National Health IT Board, all personal health information held in an identifiable form and associated clinical or administrative data must be fully domiciled in New Zealand.
I get that this is a tongue-in-cheek comment, and appreciate your sense of humor, but it also saddens/frightens me to think this could become the prevailing sentiment in the years to come.
I grew up believing the internet was a great equalizer, capable of providing opportunity to people from across the globe, irrespective of nationality, race, creed, etc.
But now I fear the actions of certain nation states could produce -- for lack of a better term -- a fervent "digital xenophobia" that causes people to delineate and enforce boundaries on the net mirroring real-world geopolitical borders maintained by a powerful minority.
That's not the kind of future I'd like to see.
For example, might people one day need a digital passport to send and receive data internationally? And might those data be subject to customs inspection before/after receipt?
Today these may seem like parts of a cyberpunk B-movie plot, but it's not inconceivable that legislators would push for similar measures in the future if, for example, "national security interests" hung in the balance and they had the power to push such measures through.
Short of banning math, how exactly can you prevent people from using strong encryption?
Even assuming the government can convince a lot of big corporations to put backdoors in their products, tech-literate people will just use niche open-source encryption for their own communications. Terrorists, or people who want to do evil, will switch to secure channels -- so the only people you'll be able to monitor via SIGINT will be average people who are not particularly tech literate. That doesn't seem like such a huge gain to me.
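For a sense of how low that bar is, authenticated end-to-end encryption with an open-source library (PyNaCl here) is a handful of lines, with no vendor in the loop for a mandate to attach to:

```python
# The "just use open source crypto" point in a few lines: authenticated
# end-to-end encryption with PyNaCl (libsodium bindings).
from nacl.public import PrivateKey, Box

alice, bob = PrivateKey.generate(), PrivateKey.generate()

# Each side needs only the other's *public* key.
to_bob = Box(alice, bob.public_key).encrypt(b"no backdoor here")
plain = Box(bob, alice.public_key).decrypt(to_bob)
assert plain == b"no backdoor here"
```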
> Short of banning math, how exactly can you prevent people from using strong encryption?
There are plenty of things that are technically possible but illegal. Most things that are illegal are physically possible to perform if I so wished. The efficacy of law is never 100%; how often people break a law depends on how easy it is to do, how easy it is to avoid getting caught, the scale of punishment, personal morals, interpersonal pressure, and societal standards.
I think the distinction is that cryptography is much closer to thought-crime than to any physical crime. It's on the continuum of banning speaking in private. There are a lot of ways to slice it, but it boils down to privacy being illegal per se. It would be unconscionable by the majority if the implications were clearly understood, but because it's "technology" somehow the fearmongers are able to spin it (as they do so many things) as a national security issue.
I think the point is to carve the division between tech-literate and common communications in stone -- to prevent privacy and security from ever going mainstream. If encryption is outlawed, people using it will be easy to identify and manipulate.
Bingo. There's no way these guys can truly believe that e.g., forcing Apple to decrypt iPhones on demand is going to put a big dent in terrorism. I guess they are a) hoping to get an upper hand on a few small-time/unsophisticated criminals and don't mind stretching the truth to sell it to the public, or b) they view this as the first of many long-term steps in killing encryption/privacy in the US altogether, or c) they plan to deploy this much more widely than they are letting on, possibly even internationally through things like trade agreements (in which case it could actually be useful for counter-terrorism.)
I would imagine here in the UK our politicians will use this narrative:
* Evildoers aren't born knowing how to use Tor and having all the right onion URLs memorised. New recruits are onboarded/radicalised with tools the average person regularly uses, like Facebook, Twitter, Gmail, https-encrypted websites and so on.
* Active terrorists learn to use encryption properly before they start committing crimes, so we can't monitor the active terrorists. As our only option is to monitor new and prospective recruits, we'll have to monitor people who aren't yet committing any crimes. As we'll be monitoring the widest part of the recruitment funnel, monitoring huge numbers of innocent people is just the only option.
* People might be recruited through channels we're not already aware of. When a terrorist is recruited we need to be able to find out how they were recruited, so we can find new channels we need to monitor. And it could be months or even years between the recruit learning to use effective encryption and them being identified as a terrorist. So we need to store an internet history for every person for several years.
* Privacy? Well, there will be strict legislative safeguards in place of course! Unfortunately all the work will be top secret so only insiders can audit compliance and it'll be illegal for them to take any evidence of wrongdoing to the press. But the legislative framework will be very strict, yes sireee.
* This is the best option we have to keep our children safe (http://www.bbc.co.uk/news/uk-31575908). You wouldn't want any more schoolgirls going off to join ISIS now, would you?
I think the argument of backdoors making services less secure is the wrong approach.
The government just needs to convince lawmakers they've come up with a scheme where only they can access the backdoor, not criminals. The opponents scream, "that's impossible," but LEOs only need to persuade lawmakers it's unlikely the backdoor will be compromised and they've won the argument with "good enough." If there's a terrorist attack and they claim weak encryption could've prevented it, "good enough" will suffice.
The more important argument is the government has repeatedly demonstrated they can't be trusted with the special access and they're pretty much guaranteed to abuse it. They'll even exploit those backdoors to spy on the very politicians from whom they're trying to get the new authority.
The threat of the government using new powers to violate the 4th amendment rights of innocent citizens is orders of magnitude higher than the threat of sophisticated criminals exploiting LEO backdoors.
The OPM leak is a good example of why we shouldn't just take their word for it that they can keep things secure -- they leaked everyone's personal details and security clearance interview details, why would we expect them to do a better job with people's communications?
Painting cryptography as a 'weapon,' as the header image of this article does and as was done in the past with the ban on its exportation, is a completely wrong analogy.
Cryptography simply translates to the digital world our right to be secure in our personal effects, and our freedom from unreasonable search and seizure.
Unencrypted digital communication removes the temporal aspect of surveillance. If you have a conversation with somebody in the physical world and you are confident it is not being spied on, you can be sure that at no point in the future will that conversation be spied on. Encrypted digital communication simply translates that assurance to the digital world.
If an authority has pretty good knowledge that someone is in possession of encrypted digital files that only they have the password to, this is simply the digital version of an authority having pretty good knowledge that someone has a buried treasure that only they can find. This person cannot be coerced against their will to travel to the location of the buried treasure and dig it up just because an authority is pretty sure it is out there somewhere.
Others in this thread have pointed out that this is just law enforcement being lazy. And that is a very good point: there are plenty of ways to do law enforcement even if your target is communicating in code, and crucially, it forces resources onto the important targets.
This is really the crux of the matter: encrypted communication is not going to prevent law enforcement from doing its job. What it does prevent, crucially, is nationwide and worldwide digital-technology-enabled surveillance dragnets.
This is what must be kept in mind when considering this issue: we must make sure that we are creating future societies that are under no threat of nationwide spying by tyrannical governments.
In fact, any nonsense about 'going dark,' or terrorism and criminals, is a smokescreen and should be struck from any reasonable, level-headed discussion of this matter. Terrorism and crime are rare enough, and law enforcement good enough, that we are at no risk of seeing an upswing in crime and terrorism due to the continued proliferation of strong (that is, actual) encryption. The continued development and use of cryptography is entirely about preventing the kind of massive-scale, unlawful spying that has been shown to be taking place.
I'm a little perturbed by the length of this article. Those who are aware of issues in crypto already know a lot of the background here. The challenge is to engage the public on the issue and break it down in a sensible way.
With the response to Paul Ford's "What Is Code?" article in Bloomberg, longer thought pieces on technology are now in vogue but the author could have made their argument in a more concise way.
I generally try to comment on substance and not make complaints like this one, but I bring it up because this new fight around crypto is an issue with as much public bearing as the net neutrality debate but with infinitely more complexity (which makes it more challenging to engage regular people on).
The article starts by talking about a discrepancy between technology and law, and I agree there is one, but not in the way Comey is saying.
The discrepancy is that where once the communications we considered private (with loved ones, close friends) were generally face to face, and phone lines were used for business or more innocuous conversation, now all of it travels over the very lines the FBI and NSA want access to, along with personal information and other things that used to exist only physically.
The discrepancy is that they want the law to grant them powers as if the internet is just a different type of phone line from the 60s.
One of the panelists from last Wednesday's Senate hearing, Peter Swire (Huang Professor of Law and Ethics, Georgia Tech), puts forward the case that we're actually in a Golden Age of Surveillance. I largely agree, but unfortunately most of the coverage I've seen starts by uncritically accepting the "going dark" narrative.
The funny thing is that extending CALEA to information services and requiring baked in facilities for government digital wiretapping would do nothing to stop a smart terrorist or criminal organization. Cryptography is a field of mathematics. It's not some physical thing like a mineral which can only be mined in the United States. The knowledge of how to build a proper cryptographic channel is freely available to anyone who can understand it.
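To underline the "it's just math" point, here is toy Diffie-Hellman key agreement in a few lines of modular arithmetic (parameters deliberately tiny and NOT secure; real systems use vetted groups or elliptic curves via a proper library):

```python
# "Banning math": Diffie-Hellman key agreement is literally a few lines
# of modular arithmetic. Toy parameters for readability only; real use
# needs a vetted large group or elliptic curves.
import secrets

p = 2**127 - 1                      # a Mersenne prime; far too small for real security
g = 3

a = secrets.randbelow(p - 2) + 2    # Alice's secret exponent
b = secrets.randbelow(p - 2) + 2    # Bob's secret exponent

A, B = pow(g, a, p), pow(g, b, p)   # exchanged in the clear

assert pow(B, a, p) == pow(A, b, p) # both sides derive the same shared secret
```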
Sounds like a solution which increases bureaucracy and spending yet has the plausible objective of fighting crime. I can see why this issue never goes away.
It would also require crypto systems to be licensed and regulated, effectively limiting their deployment to huge vendors. Look into what FIPS compliance entails. These kinds of things always have a regulatory capture / monopolistic land grab component.
The thing that gets me in all these discussions is people acting like we can't build a protection mechanism that stops A from getting in but not B. Yet these same people, in another context, say we can, using words such as VPN, port-knocking, TLS, SSH, and so on. We do it all the time, with some solutions being quite strong. If anything, a robustly-designed R.A.T. or escrow is nowhere near as threatening as the average app or network protocol on an endpoint: those are practically backdoor generators in practice, and their 0-days are the main way high-strength attackers hit targets.
So, first we should stop with the double standard. There exist mechanisms that work well enough in practice to allow selective access to systems (aka a front door). There's also decent work on limiting such access, auditing it, escrowing keys, and so on. The NSA already does escrow for its Type 1 encryption products, with all key material managed and protected by their EKMS system [1]. If it had been visibly breached, we'd probably have seen evidence of it. So, after a quick sketch of the key-splitting idea below, I'll go straight to the other issues that popped up in peer review when I published a few attempts at a secure solution to the L.I. problem.
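Here's that sketch: a toy version of Shamir secret sharing over a prime field, the standard way to split escrowed key material so that no single party can use it alone (the 3-of-5 policy and the parties named are my invention for illustration):

```python
# Toy Shamir secret sharing over a prime field: split escrowed key
# material so that no single party holds a usable key. Pure Python,
# for illustration only; the 3-of-5 policy is invented.
import secrets

P = 2**521 - 1   # a Mersenne prime comfortably larger than a 256-bit key

def split(secret: int, n: int, k: int):
    """Split `secret` into n shares, any k of which can reconstruct it."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def combine(shares):
    """Lagrange interpolation at x=0 recovers the secret."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

key = secrets.randbelow(2**256)    # the escrowed key material
shares = split(key, n=5, k=3)      # e.g. vendor, two courts, two agencies
assert combine(shares[:3]) == key  # any 3 shares suffice
assert combine(shares[1:4]) == key
```

Even granting that the math works, every issue below is about the people and process wrapped around it: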
Will it have security problems in general? Yes. Damage can be limited in a number of ways, especially if the scheme is decentralized. Yet it adds risk on top of what we already have, given that even rigorously-built solutions have had issues.
Will there be authentication forgeries for LEO requests? Yes. This might be a one-off, quickly detected thing. It might be unlimited depending on how the breach goes. Sender and receiver must be protected across the board. LEO's track record isn't good. High risk here.
Does the government have a history of abusing its access and deceiving about its usage? Yes. A great reason not to do L.I.
Will the government agree to an L.I. scheme with a strong security foundation that ensures read-only access for a limited period and can't subvert the system? Clive Robinson suggests that even a theoretically bulletproof L.I. system won't be acceptable if it limits them, and that it will be forced to become more "flexible" through their sway over courts and lawmakers. Seems true given the current situation.
Can they prevent attacks via malicious sys-admins or infiltrators? Snowden says no and with quite the credibility. ;) High risk here.
And the big one: will the KEYMAT-holder survive the inevitable attacks of all intelligence services, hackers, and organized crime combined? And all the things they will do to the organization or its employees? Gravely, high risk here.
Dirk Praet brought up the last question in a discussion where I had tried to cover every single angle in design, personnel, and ops. In the end, Dirk and Clive are right: the government will force the system to have too much privilege, centralization, and/or accessibility; data will be mass compromised in a free-for-all by the opponents. Whatever concrete requirements come out of such legislation will present vastly higher risk than with key escrow or remote administration tools in general.
So, there's extremely high risk in the warrant process, in key storage even with HSMs, and in the organizations administering all this, even at SAP security level. More risk than the DOD, NSA, or FBI have faced before, given the concentration of enemy effort that will occur. Their track record on smaller things shows they can't handle something that big. Even if they were honest as Boy Scouts, the risk assessment argues 100% against the L.I. option, regardless of what level of INFOSEC our mechanisms ensure.
Also, if they were Boy Scouts, they couldn't think like the attackers enough to stop them. Proposal is untrustworthy whether they are or not. That's No^2. ;)
Short-sighted bunch of fools...