FOR the record, I’m fine with Apple’s policy of providing secure, end-to-end encryption with no backdoor for the government, Apple, or anyone else.
But to play devil’s advocate just a little: One of the anti-backdoor arguments being bandied about is, “Even assuming we trust our government to use a backdoor for legitimate purposes only, and only when necessary, I don’t believe that Apple can create a backdoor for our government that wouldn’t also open the door to skilled hackers, and to other governments. If anyone can get in, bad guys will get in.”
The purpose of this article is to argue that no, the above argument is incorrect, and a secure, emergency backdoor can be created. Here’s how:
Apple creates a private key BDpriv (backdoor private) known only to Apple, and keeps this in a secure location (like a vault).
Apple includes the corresponding public key, BDpub, in iOS.
Once two iOS devices have successfully negotiated a temporary session key (SK) for a secure communication session, they include SK with each packet of communication, but in encrypted form: specifically, encrypted with BDpub. So each packet contains (A) the user data encrypted with SK, and (B) SK encrypted with BDpub.
The receiving iOS device ignores (tosses) part B, which it can’t decrypt anyway, and uses SK (which it already has) to decrypt part A, the user data.
If/when the government intercepts (and records) iOS-to-iOS packet data that it has a need to decrypt, then Apple can assist either by providing BDpriv (not such a good idea), or better yet, simply by using BDpriv to decrypt the data, but without ever handing over BDpriv to anyone.
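The five steps above amount to standard hybrid encryption plus key escrow, and they can be sketched in a few lines. In the toy sketch below, textbook RSA with tiny demo primes stands in for BDpub/BDpriv, and a SHA-256 counter keystream XOR stands in for AES under SK; every parameter is illustrative, nothing here is production-grade crypto.

```python
import hashlib
import secrets

def sym_crypt(key: bytes, data: bytes) -> bytes:
    """SHA-256 counter-keystream XOR, a stand-in for AES under SK.
    Applying it twice with the same key decrypts."""
    stream, counter = b"", 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

# Textbook RSA with toy primes, modeling BDpub / BDpriv.
p, q, e = 61, 53, 17                   # demo primes; real keys are 2048+ bits
n = p * q                              # 3233
d = pow(e, -1, (p - 1) * (q - 1))      # BDpriv, kept in the vault

# Each session negotiates a fresh random SK.
SK = secrets.randbelow(n - 2) + 2
sk_bytes = SK.to_bytes(2, "big")

plaintext = b"meet at the usual place"
part_a = sym_crypt(sk_bytes, plaintext)  # (A) user data encrypted with SK
part_b = pow(SK, e, n)                   # (B) SK encrypted with BDpub
packet = (part_a, part_b)

# Receiving device: already holds SK, ignores part B, decrypts part A.
assert sym_crypt(sk_bytes, packet[0]) == plaintext

# Escrow holder: recovers SK from part B with BDpriv, then decrypts part A.
recovered_sk = pow(packet[1], d, n)
assert recovered_sk == SK
assert sym_crypt(recovered_sk.to_bytes(2, "big"), packet[0]) == plaintext
```

Note that an eavesdropper without BDpriv sees only (A) and (B), neither of which it can decrypt; the receiving device never needs BDpriv at all.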
This is a very simple system, and if implemented correctly, it would be just as secure as SSL is when you access your bank account, and just as secure as iOS-to-iOS communications currently are (provided you don’t consider Apple’s ability to emergency-crack your encryption to be, in and of itself, an insecurity).
Under the above-described system, Apple would not store any user data that it isn’t storing now. It would be up to the government to intercept communication packets en route, then request that Apple decrypt them.
So in a nutshell: While I laud Apple’s policy of not having any such backdoor, I have to disagree with the technical claim that such a backdoor cannot be securely implemented. It can.
What if the criminals jailbreak their iOS devices and modify them to not include the BDpub-encrypted SK, or include instead fake, random data, that is not useful for backdoor decryption?
The system could be designed so that both participating criminals would have to have the modification. But, of course, they both could. Other than continuing to work toward a jailbreak-proof version of iOS, I don’t know what could be done about that. And any criminals who can physically crack open their iOS device and replace its ROM chip will always be able to jailbreak.
But at least anyone using out-of-the-box iOS would not be able to communicate criminal intent without worry that the government might read it. And in any case, jailbreaking to disable the backdoor is not a path to enabling hackers and other governments to use the backdoor.
Update 2015.08.10 — Witness the following conversation (Apple Context Machine #318), between Bryan Chaffin and Jeff Gamet:
BC: The reality is ... let’s say Apple and Google are forced to build-in backdoors. Let’s just make that an argument here.
BC: And so now, when the government delivers a warrant, Apple can turn around and use that backdoor, and decrypt iMessages between one target and another target, hand that information over to the government, and the bad guys are stopped. ’Kay?
JG: Well, on the surface, that doesn’t sound so bad, because now the bad guy’s caught.
BC: Right. Yayyy, everything’s wonderful. But the cascading effect is this: ... if Apple has the keys to the back door, the bad guys will find the keys.
JG: And that’s a huge problem.
BC: That’s a universally accepted truth in the world of people who actually know how this stuff works.
BC: I know we’ve talked about this before on the show. And none of that’s changed; these are the same old arguments. And the thing is that we had all these arguments in the ’90s, and the government lost. And the government should have lost. Encryption protects us from the bad guys even while it also prevents the good guys, the theoretical good guys, from accessing our stuff when they have a warrant and the right to it.
JG: Well, data protection: you can’t expect it to be a mutually exclusive thing, where it’s protected on one side, but— or from one group but not from another? It’s a binary thing; it’s protected or it’s not. And it doesn’t matter if it’s hardware-based or if it’s software-based; if you have created a backdoor, it’s no longer effectively protected. From anyone.
BC: Right. From anyone! From anyone. And yet, despite the fact that we’ve already gone through this, despite the numerous white papers, and all kinds of technical papers, and philosophical papers that’ve been written, and published, and filed, and just added to this topic, we’ve got the government doing it again ...
Yikes. Looks like this meme’s gonna be hard to dispel.
If the government has a backdoor into encrypted communications, then everybody has access to that same backdoor. And that will always happen; this is a known reality to security experts.
And today Glenn Fleishman wrote on Macworld:
The problem with a backdoor is that there’s no way to create a way into a secure system that only the “good guys” can use.
On an only dimly related topic in the same article, he approvingly reiterated Amnesty International’s claim that torture doesn’t work:
Sometimes, torture is involved, which Amnesty International would remind us is both against international law and doesn’t help acquire useful operational information.
Two bogus truisms in one article and I couldn’t resist posting to its comments section, as follows:
At the risk of sounding like an insensitive bastard, allow me to suggest that the idea that torture “doesn’t help acquire useful operational information” is pretty silly. If you torture someone until they reveal the password to their encrypted hard drive, and now you’re in and can see the contents of their hard drive, then guess what: it worked! The idea that torture “doesn’t work” is one of those moral-obligation memes — disagreement is silently understood to be a red-flag that you’re a bad person.
A similar moral-obligation meme seems to be forming around the idea that a hacker-proof backdoor isn’t feasible. Sorry again — it’s very feasible. Search “a secure backdoor is feasible” to find my detailed explanation.
My post went into “moderation” — and surprise, surprise, it was unceremoniously deleted. I didn’t even get a message telling me about that, much less mentioning why. Several other comments were subsequently allowed, all expressing fawning agreement with the author. I don’t know that Fleishman is responsible, but somebody at Macworld sure is.
There are true things that some people really don’t want to know, and they’ll do whatever they can to make sure no one else does either.
Update 2015.11.12 — Cory Doctorow, from last May:
It’s impossible to overstate how bonkers the idea of sabotaging cryptography is to people who understand information security. ... Use deliberately compromised cryptography, that has a back door that only the “good guys” are supposed to have the keys to, and you have effectively no security. You might as well skywrite it as encrypt it with pre-broken, sabotaged encryption.
I wonder if this is one of the aforementioned “security experts,” “who actually knows how this stuff works,” on whose authority we are supposed to take it that the above-described BDpriv scheme would magically collapse?
Doctorow, just in case you’d forgotten, is the guy who, when iPad launched, said that he wouldn’t buy one and thought you shouldn’t either because “open platforms and experimental amateurs ... eventually beat out the spendy, slick pros. ... Relying on incumbents to produce your revolutions is not a good strategy. They’re apt to take all the stuff that makes their products great and try to use technology to charge you extra for it, or prohibit it altogether.” The next year he informed us that, “A tablet without software is just an inconveniently fragile and poorly reflective mirror, so the thing I want to be sure of when I buy a device is that I don’t have to implicitly trust one corporation’s judgment about what software I should and shouldn’t be using.” And, of course, we all heeded his warning against iBooks, “Digital Distribution and the Whip Hand: Don’t Get iTunesed with your eBooks ... Any time someone puts a lock on something you own against your wishes, and doesn’t give you the key, they’re not doing it for your benefit.”
Update 2015.11.19 — Bryan Chaffin again, in ACM #333:
A backdoor available to one is available to all. This is known. This isn’t theory. This isn’t speculation. This isn’t me crying about the government coming to get us. This is a reality of encryption. A backdoor available to one is available — to — all.
I don’t mean to pick on Chaffin here. I really like his work, particularly on The Apple Death Knell Counter. But I have to say it: The above quote is one of the best examples I’ve ever found of the idea that if you repeat something over and over again, in the most dogmatic terms possible, you can make it true — even though it actually isn’t.
Update 2015.12.25 — Gamet and Chaffin again, with Dave Hamilton on TMO Daily Observations:
JG: What we’re seeing right now from, in this case, the UK government, are statements — and I’m paraphrasing — this [a law-required backdoor] is not going to impact security; this is not going to impact people’s privacy; this is all just about protecting everyone. Don’t worry. It’s all fine.
BC: The whole thing is nonsense. We’ve talked about this topic.
DH: Maybe it’s not nonsense. Right? I mean, maybe there’s some thing, that they could say, hey look, you haven’t thought about this. And we’d all say, hmmm. But we need to hear that first.
JG: We do. And I —
DH: Sorry, Bryan. I didn’t mean to interrupt.
JG: Oh, yes. Bryan, you were talking; go ahead.
BC: Uh, it’s nonsense.
DH: (laughing) OK, it’s nonsense.
JG: Well said.
BC: We’ve talked about this several times. We’ve talked about this on probably every podcast that we’ve been on. We’ve written about this a lot. And of course, lots of other people have talked about this and written about this, as well. The security experts all call it nonsense. Apple is essentially calling it nonsense. I mean, the politicians have the information they need to not be idiots about this. And yet they want, apparently, to be idiots. And it’s super frustrating.
JG: I agree. It is frustrating.
And on ACM #338:
JG: Like we have said so many times, a backdoor isn’t just for one person; it’s there for everyone.
BC: It’s there for everybody to find.
BC: And don’t believe that because we say it, because we’re just repeating our betters here. Right? We’re repeating the encryption nerds, and the math wonks, and the people who have been studying this for decades. They’re the ones who say this.
JG: Right. The serious privacy and encryption advocates.
BC: Yeah. The people who actually know.
I certainly can’t claim these guys aren’t in good company: Here’s Tim Cook himself, being interviewed by Charlie Rose on 60 Minutes last Sunday:
[I]f there’s a way to get in, then somebody will find the way in. There have been people that suggest that we should have a back door. But the reality is if you put a back door in, that back door’s for everybody, for good guys and bad guys.
Hats off to Cook for refusing (thus far) to put in a backdoor. Hats off to Cook for trying to discourage governments from passing laws that would require a backdoor. And when Cook says that a backdoor won’t do much to catch the really bad guys because they’ll just find a way to communicate that’s out of reach of that backdoor, I have to agree, he’s probably right.
But when he says what I just quoted him saying to Rose — then I have to say, no, that isn’t true.
What is Cook going to do if/when some major nation (e.g. the USA, the UK, or China) passes a law requiring a backdoor? My guess is, he’ll implement something pretty much exactly like I describe at the top of this article. And it will not give backdoor access to hackers, foreign governments, unspecified bad guys, or “everybody.” It will be usable only by Apple, and Apple will use it only when they get a court order telling them to do so.
Update 2016.01.15 — New York is working on a bill that would require all smartphones sold in the state to be unlockable/decryptable by their manufacturers. How could Apple comply with such a bill? Simple: Make available (to any connected device that requests it) a concatenation of the iPhone’s 256-bit data encryption key and the user’s unlock code, encrypted with BDpub. Only Apple (the holder of BDpriv) would be able to do anything with that value.
Again — I’ll be happy if New York doesn’t pass this bill. And if they do pass it, I’ll be fine if Apple decides not to sell its products in New York until the law is repealed. But if Apple wants to securely, safely comply with such a law? It can. Easily.
Update 2016.01.21 — Now California is working on a similar bill. Can’t see Apple stopping sales of its products in its home state.
Update 2016.02.20 — A bad analogy is making the rounds. It goes something like this: “Requiring Apple to put in a backdoor is like if door lock manufacturers had to give a master key to the government, so the police could enter your house any time they thought it was the right thing to do. If there’s no master key to the door locks on your house, why should there be a backdoor to your phone?”
The flaw in that analogy is that the government does have such a master key — it’s called a battering ram. And the nice thing about the ram is that it leaves obvious evidence of its use, so they can’t come in when you’re not there and snoop without you knowing something happened. Nor can a rogue officer use the ram to snoop without leaving ample evidence for other officers that it was done.
The analogy of the battering ram fits the BDpriv scheme pretty well. There would be no way for the government to use BDpriv to quietly snoop; they would have to serve Apple with an order to decrypt specific data, and such a request presumably would be a matter of public record. (If it wasn’t, Apple could simply refuse to do it, and/or publicize the request themselves.)
A police master key to your house’s door locks would be analogous to Apple handing over BDpriv to the government, saying, “Do with this whatever you think best.”
Manufacturer-Held Master Key?
Now — is BDpriv analogous to a door-lock master key that is held by the lock manufacturer, and used only by that manufacturer when issued a court order to do so? Kind-of. But the analogy is strained. First, the police have battering rams, and don’t need the lock manufacturer’s cooperation to use them. And second, a door-lock master key inevitably would be duplicated (or reverse-engineered from a lock), and fall into the wrong hands — including the hands of the police. But BDpriv — kept in an Apple vault, and used only by Apple, at Apple, under Apple’s in-house procedures designed to prevent both undocumented use by a rogue employee and copies of BDpriv leaving the vault — would not inevitably fall into the wrong hands.
The main weakness of BDpriv (as compared to battering rams) is that there would be no way to know if it had somehow gotten out of Apple’s control, and was being quietly used by the parties that possessed it.
Am I starting to become pro-backdoor? I’m aware that it must sound like I am. Let’s just say this: I would be OK with the BDpriv scheme I’ve outlined here. But I would also be very much OK with Tim Cook fighting this successfully, and never implementing any backdoor at all.
Update 2016.02.22 — Richard A. Epstein of Stanford University’s Hoover Institution, writes about “Apple’s iPhone Blunder”:
I participated in hearings that then-Senator John Ashcroft held in March 1998, and spoke in opposition to [a government-mandated backdoor], along with Kathleen Sullivan, then a professor of law at Stanford Law School. The greatest risk of the built-in back door is that the government will not be the only party that will enter through it. Back doors necessarily compromise the integrity of a security system.
Update 2016.03.08 — Something just like my BDpriv is apparently called a “golden key” (meaning a master key, a key that opens all doors). On Exponent #68, Ben Thompson describes a system where there is a golden key in (say) Apple’s possession, that can be used to unlock the encryption — to which James Allworth responds:
JA: What you’re describing is functionally the same as not having encryption at all.
BT: ... Having a golden key is, yes, in the long run the same as not having encryption at all. ... Encryption, the actual scrambling of data on a disk, is a binary thing. It’s either truly encrypted, or it’s not.
So having the user-data encryption key discoverable by the holder of BDpriv would be the equivalent of having no encryption at all? It wouldn’t even fall, say, in-between no encryption and Apple’s currently impervious-even-to-Apple encryption? Thompson answers that question a bit later in the show:
BT: There is a spectrum. There is not a spectrum when it comes to encryption; there is a spectrum when it comes to security.
Well, I guess that settles that. And lest I had any further doubts, here’s Jason Snell in Six Colors:
Piers Morgan: “Let Apple open the damn back door, let the FBI know what’s on the phone, then close it again. This isn’t difficult.”
Congratulations, Piers Morgan — you can join the tech geniuses on the Washington Post editorial board in the Golden Key society. They’re the ones who wrote: “With all their wizardry, perhaps Apple and Google could invent a kind of secure golden key they would retain and use only when a court has approved a search warrant.” (That’s not how cryptography works.) But nothing is difficult when you believe in wizardry.
So if I think BDpriv would actually work — to the point that I could be dissuaded only by someone explaining very specifically why it wouldn’t — then I literally believe in wizardry. OK, Jason. You must be right. Somehow. I certainly don’t want people to think I believe in wizardry.
But for the time being, the only weakness of BDpriv of which I’m aware is that it might leak out of Apple’s control. Somehow. Even though Apple currently signs its own OS with key(s) that it has to keep secret — and somehow, they don’t leak out.
Update 2016.03.19 — Fleishman was the guest on John “Daring Fireball” Gruber’s “The Talk Show With John Gruber” (#149). As a long-time fan, I’ve actually gone out of my way to keep Gruber’s name off of this page so far. But in this latest episode, he takes the bluster a full notch higher than anyone I’ve quoted here yet; therefore it becomes unavoidable.
JG: You know, I’ll just admit it, and Hillary Clinton has espoused the same opinion, is a belief in this magical thinking that if we just put smart enough people into a room together, that they can come up with a way that this backdoor solution would only be available to law enforcement.
Not saying I’m the smartest person in the room, but I described it all by myself (see the top of this page), and no magic was — nor is — required.
JG: That, we’re not asking you to make a backdoor that anybody could get into, we just want a backdoor that law enforcement can get into, when we have a warrant. Which sounds reasonable, and in some fictional, other universe, where that’s mathematically possible, that might be great.
Bad news: If you’re reading this article right now, you’ve just warped off to an alternate universe where secure backdoors are eminently possible (if not advisable). I sure hope you can find a way back to your own universe, where secure backdoors just don’t work.
JG: I actually, I think that there’s good reasons why a civil libertarian would be opposed even to that. Let me just put this out there. And I tend to lean that way. I would listen to the argument, but I tend to lean towards, even if that were possible, I don’t think it’s a good idea, and I think it’s contrary to the values that are already in our Bill Of Rights. But it is an idea.
I have to agree: If, if it could be done (somehow, who the hell knows how, like in an alternate universe with different math??) — maybe it still shouldn’t be.
JG: But the simple truth is that it’s math— all experts agree, and everybody who understands encryption. I mean, this is, it, I don’t think that you— it’s, it’s more than even, like, uh, I mean, it’s like provably incorrect.
The proof is out there? A link would be very helpful.
JG: You know, like, as opposed to, let’s say, climate change, where you can say, you can argue that only 98 or 99% of expert climate scientists agree that what we’re seeing is man-made. I mean, with cryptography and backdoors it’s 100% agreement.
GF: You’re totally right. I just realized I haven’t seen any crypto deniers out there saying this is possible. I’ve only seen politicians and law enforcement.
JG: Right. Right.
GF: That’s fascinating.
Let me make sure I understand this correctly: Describing my BDpriv scheme (scroll to top) makes me a crypto denier, in the same league with anti-science nutballs in general. (In this case, anti-math nutballs!) Well, that’s certainly one way to win an argument.
But in this case, we nutballs don’t even exist, because there is “100% agreement” on this issue, not even “98 or 99%.” How can that be? Oh yeah, I forgot — I’m from another universe. A fictional one.
I want Cook to win this battle, but not by any means necessary. The civil libertarian argument, I think, is strong. The “it can’t be done; everyone will have access to it” argument isn’t just weak — it’s completely false.
Update 2016.03.20 — Later in the same episode:
JG: One of the things that depresses me about the current state of decades-long discourse in the United States is the polarization of politics, and that so many issues are so clearly polarized, and that we’ve self-sorted on these various lines into the two parties, and that there’s no interchange between them. It warms my heart that on this particular issue, it doesn’t fall on one line or the other ...
Yes, the polarization of politics is quite depressing. Would you like to know why it happens? Because people on both sides of an argument think they can win by dismissing all in-between positions, by embracing a false dichotomy. It happens because both sides think, if the only choices are my way or Hitler’s way, then people will choose my way!
Here’s how it’s done:
Either there’s no backdoor for anyone, or there’s a backdoor everyone will be able to use! Either it’s encrypted with no backdoor, or it’s not encrypted at all, and may as well be sky-written! If you disagree, you’re a crypto-denier, you literally believe in wizardry, you’re engaging in magical thinking, you think you live in another, fictional universe, and you don’t understand how encryption works. Among those who do understand it, there is nothing less than 100% agreement on this!
That’s how things get polarized. Fast.
Update 2016.03.21 — added “a concatenation of the iPhone’s 256-bit data encryption key and”
Update 2016.03.31 — Still later in the same episode:
JG: It’s easy for a layperson to believe in the magic solution of a way for the government to get in, but nobody else. You kind-of have to be gently informed of the basic way that encryption works to understand just how dangerous and impossible it is to say, the only people who can get in are the U.S. federal government, with a warrant. It just doesn’t work that way, once you create a backdoor.
GF: I know; that’s right.
Update 2016.06.07 — UK backs down on requiring a backdoor.
Update 2016.06.20 — Russia considering requiring a backdoor in all messaging apps.
Update 2016.08.12 — Microsoft just leaked a private key, and the usual suspects are immediately buzzing with the idea that this somehow demonstrates the fundamental insecurity of encryption backdoors. On AppleInsider Podcast #81, Victor Marks and Mikey Campbell opine:
VM: You named the words, “golden key,” right? And once such a key exists, it is impossible to keep it secured.
VM: You can keep things quiet or secret for a time. But you can’t keep them secret indefinitely. Because Microsoft has a golden key that governs the secure boot of Windows, their core operating system, their core product, and the thing upon which they built their whole empire, right?
VM: The golden key escaped into the wild, didn’t it?
MC: It did.
VM: How apocalyptically bad is that for Microsoft?
MC: The ramifications are— they’re pretty extreme. ... [Apple] specifically designed theirs not to have this said golden key.
Microsoft Windows already has the encryption backdoor that Apple refuses to implement? Hmmm.
Jeff Gamet and Bryan Chaffin weigh in on TMO Daily Observations:
JG: Microsoft gave us a perfect example of why it’s so important not to have backdoors into our operating systems, by accidentally letting the master keys — the golden keys, so to speak — for their encryption, out into the wild. Bryan, you wrote about this. So, fill us in please?
BC: Well, what Microsoft did, apparently earlier in the year, was to accidentally release a key that allows — a backdoor, basically — that allows someone to install whatever they want, even on a device that has what is called secure-boot. ... [this leak] does serve as an expert example of why backdoors existing are a bad idea. Not only does the existence of a backdoor make a target for our government, foreign governments, authoritarian regimes, criminal organizations, terrorist organizations, curious hackers, not only does the existence become an automatic target, but sometimes even the legitimate holders of a key can mishandle it.
Immediately after the above, Dave Hamilton tries to inject a dose of sanity into the discussion by explaining the necessity of having these keys at all, and he comes tantalizingly close to saying that this Microsoft leak isn’t an example of backdoors being insecure.
It isn’t. Just like Apple, Microsoft has private/public-key pairs that it uses to secure its products’ bootup processes and OS update processes, with the private key never needing to exist anywhere outside of headquarters. Unlike Apple, Microsoft somehow managed to leak one of those private keys to the public. That key was not cracked, found, or otherwise extracted by any government, regime, criminal, terrorist, or hacker. It was accidentally leaked by Microsoft.
Could a backdoor key be accidentally leaked? Of course. But the security of your data already depends on avoiding such leaks. Nobody knows how to make a secure product that doesn’t depend on private keys that are held (and not leaked) by the maker of that product.
Later in the AppleInsider episode:
VM: The FBI ... they wanna restart the encryption debate.
VM: I know. Shocking, if true. People need to know how to back off of something when it’s not gonna work for them. Of course, people in government never do. But it’s frustrating that we’re still having this, after it’s been demonstrated repeatedly that math doesn’t work that way. Math doesn’t work the way that [FBI director James Comey] wishes it did.
Microsoft accidentally leaked a private key because math doesn’t work the way the FBI wishes it did? Hmmm.
Update 2016.12.21 — The U.S. Congress’s Encryption Working Group just issued a report in which it discouraged legislation to require backdoors. I’m glad that the U.S. government seems to be backing down from Apple on this issue. However, the report includes the following:
Any measure that weakens encryption works against the national interest.
Congress should not weaken this vital technology because doing so works against the national interest.
How would consumers’ privacy and data security suffer if encryption were weakened?
The direct implication is that a backdoor involves weakening the encryption. That is simply false. In the BDpriv scheme (watch the below-linked “Method...” video), the encryption of user data would not be weakened in the slightest. The only difference would be that the holder of BDpriv (Apple) would uniquely possess the ability to acquire the session (decryption) key.
Only by redefining “weakened” to include the existence of any backdoor, can backdoors be argued (circularly) to weaken encryption. But that’s like saying that your device’s encryption is “weakened” because you wrote down your device-unlock code on a piece of paper and put it in your safety deposit box at your bank. The encryption has not become weaker in any way — the hypothetical possibility that legal authorities might force their way into your deposit box, and find your unlock code there, is not an encryption “weakness.”
There seems to be an unstated assumption floating around that the way a backdoor works is that you introduce some hidden weakness to the encryption technique, so that a government agency with strong computers (and knowledge of the weakness and how to exploit it) can brute-force the encryption via this weakness. This idea is also vaguely implied when people say that hackers will “find” the backdoor — i.e. they’ll discover the weakness and figure out how to exploit it.
This idea is completely wrong. A backdoor does not (or certainly need not) involve any such hidden weakness. The only thing that needs to be hidden is the actual value of BDpriv; all other features of the backdoor scheme can be loudly publicized to the world without ill effect. And the user-data encryption is not weakened — or even changed, for that matter — at all. It’s just straight-up AES.
Update 2016.12.25 — That Congressional group has Chaffin and Gamet back on the subject (ACM #390):
BC: If we want to have security from all the bad guys out there — we gotta have security! And the good guys, we can’t give just the good guys access to this. Now we’ve talked a lot about this on this show, and on TDO. I believe that Dave Hamilton and John F. Braun have talked a lot about it on Mac Geek Gab. We’ve written thousands of words on the subject.
JG: And John Martellaro has had security experts on Background Mode.
BC: Yup. And, the long and the short of it is, there’s really no compromise.
JG: No. It’s either, we have true encryption and security for our data, or we do not.
BC: Or we do not. We do not! And so the Encryption Working Group was tasked with studying this, because— listen, to Jeff and I [sic], and again, to anyone with a basic understanding of reality, when it comes to encryption, it’s very obvious how this works. As we just described it.
BC: To, but— reasonably speaking, and yeah, I’m being a jerk when I put it that way, but reasonably speaking, there are lots of folks who don’t have a basic understanding of how encryption works. And to them, it’s just very obvious that law enforcement should be able to get into the device owned or used by a bad guy. It’s just obvious. You wanna stop the bad guys, and so of course they should have access to it. Because they don’t understand the repercussions; they don’t understand how this stuff actually works.
JG: Right. And, you know, that’s understandable that someone, or groups, could jump to a wrong conclusion based on their lack of knowledge and understanding about a topic like this. But for the people that are in a position where they will be setting policy for how our data is protected, like the government, if they’re going to mandate backdoors into our data, they should be really diving into this in a serious way, so they understand what the implications are, and can make an informed policy decision.
I’m no politician, but after hearing this, boy do I feel informed.
Chaffin quotes from the Encryption Working Group:
BC: “Technology companies, civil society advocates,” — this is a quote — “a number of federal agencies, and some members of the academic community argue that encryption protects hundreds of millions of people against theft, fraud, and other criminal acts. Cryptography experts and information security professionals believe that it is exceedingly difficult and impractical, if not impossible, to devise and implement a system that gives law enforcement exceptional access to encrypted data without also compromising security against hackers, industrial spies, and other malicious actors.”
JG: Ah, that’s awesome.
What's really awesome is that I devised just such a system about a year-and-a-half ago. And it wasn't even difficult, let alone impossible. As far as implementation is concerned, the only hard part is keeping BDpriv secure within Apple's HQ, and Apple apparently already nailed that problem about a decade ago when they figured out how to keep their iOS signing keys from leaking.
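For the doubters, here is the whole scheme as a minimal, runnable sketch. Everything cryptographic is a deliberately insecure stand-in: textbook RSA with tiny primes plays the role of BDpub/BDpriv, and a SHA-256 XOR keystream plays the role of whatever real cipher the devices would use with SK. All names and numbers are mine, for illustration only, not Apple's actual design.

```python
# Toy sketch of the escrow scheme described above. Textbook RSA with tiny
# primes stands in for BDpub/BDpriv; a SHA-256 XOR keystream stands in for
# a real cipher like AES-GCM. Insecure by construction -- illustration only.
import hashlib
import secrets

# --- Apple's long-term backdoor key pair (BDpriv stays in the vault) ---
p, q = 61, 53      # toy primes; a real key would be 2048+ bits
n = p * q          # public modulus (part of BDpub)
e = 17             # public exponent (part of BDpub)
d = 2753           # private exponent (BDpriv): e*d = 1 mod (p-1)(q-1)

def keystream(key: int, length: int) -> bytes:
    # Derive a XOR keystream from the session key (stand-in for a real cipher).
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key.to_bytes(4, "big")
                              + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def xor(data: bytes, ks: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, ks))

# --- Sender: build one packet ---
SK = secrets.randbelow(n - 2) + 2        # fresh, random per-session key
plaintext = b"hello, iMessage"
part_a = xor(plaintext, keystream(SK, len(plaintext)))  # (A) data under SK
part_b = pow(SK, e, n)                                  # (B) SK under BDpub

# --- Receiver: already has SK, so it tosses part B ---
assert xor(part_a, keystream(SK, len(part_a))) == plaintext

# --- Apple, under warrant: recovers SK from part B using BDpriv ---
recovered_sk = pow(part_b, d, n)
assert recovered_sk == SK
assert xor(part_a, keystream(recovered_sk, len(part_a))) == plaintext
```

Note that the receiving device never needs BDpriv at all; only the party holding the private exponent can unwrap part B.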
BC: “Further, requiring exceptional access to encrypted data would, by definition, prohibit encryption design best practices, such as,” quote-unquote, “forward secrecy, from being implemented.”
Nope. The session key (“SK” in my “Method…” video below) is different every session. So is the user’s unlock code. So is the phone’s data encryption key (DEK). If Apple reveals any of those things to the FBI for a mass shooter’s iPhone, it wouldn’t compromise any other past or future values of SK, DEK, or unlock codes. Only the security of the specific iPhone (or the specific transmissions) under warrant would be compromised. Not even future FaceTime calls, for example, made with the same phone would be compromised, at all. They use an entirely new, random session key.*
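To make the forward-secrecy point concrete, here is a trivial sketch (my illustration, not Apple's code) of what "a different session key every session" buys you: each SK is an independent random value, so disclosing one under warrant reveals nothing about any other.

```python
# Sketch: per-session keys are independent random values, so handing one
# over under warrant says nothing about any past or future session.
import secrets

session_keys = [secrets.token_bytes(32) for _ in range(3)]  # fresh SK per session
disclosed = session_keys[0]   # e.g., revealed for the one phone under warrant

# The other sessions remain intact: their keys are unrelated to the
# disclosed one, and (with 256-bit keys) collisions are negligible.
assert all(k != disclosed for k in session_keys[1:])
assert len(set(session_keys)) == 3
```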
JG: I’m totally loving this. This is exactly what they should be saying.
BC: Right, because it happens to be reality.
When people I respect, and with whom I almost always concur, insist that such-and-such is “reality” — I really would like to agree with them. But should I say that an encryption backdoor can’t work, even if I know it can? Somewhere, somehow — somebody has to explain, specifically, how my backdoor plan would fail, before I can say, with a straight face, that I believe it would.
*Update 2016.12.27 — I am assuming that Apple does not leak BDpriv. The only danger to “forward secrecy” is that if they did leak it, recorded data from years ago could be cracked. Beyond simply not leaking BDpriv (just as they currently must not leak SIGpriv), a “forward secrecy” solution might go like this: Use a different BDpriv every year, with iPhones pre-loaded with fifty years of BDpub keys. When any year’s BDpriv has been out of use for three years, permanently destroy it, so it can never be leaked (nor even obtained by any level of force or intimidation).
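That rotation schedule is simple enough to sketch. All the constants below (start year, fifty keys, three-year retention) are the hypothetical ones from the paragraph above, not anything that ships in iOS.

```python
# Hypothetical yearly BDpriv rotation schedule (illustration only).
# Phones ship pre-loaded with NUM_KEYS years of BDpub; each BDpriv is
# permanently destroyed once its year has been out of use for three years.
FIRST_YEAR = 2017
NUM_KEYS = 50
RETENTION_YEARS = 3

def key_index_for(year: int) -> int:
    """Which of the preloaded BDpub keys a phone uses in a given year."""
    idx = year - FIRST_YEAR
    if not 0 <= idx < NUM_KEYS:
        raise ValueError("no backdoor key provisioned for this year")
    return idx

def keys_to_destroy(current_year: int) -> list[int]:
    """Indices of BDpriv keys now out of use for three full years."""
    # A key for year Y is out of use during Y+1 .. Y+3; destroy it in Y+4.
    newest_destroyable = current_year - RETENTION_YEARS - 1
    return [i for i in range(NUM_KEYS) if FIRST_YEAR + i <= newest_destroyable]

assert key_index_for(2020) == 3
assert keys_to_destroy(2021) == [0]       # 2017's key: unused 2018-2020
assert keys_to_destroy(2022) == [0, 1]    # ...and now 2018's key as well
```

Once a year's BDpriv is shredded, any traffic recorded under that year's BDpub is permanently beyond reach, which is exactly the forward-secrecy property at issue.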
Update 2017.03.09 — Gamet again on TDO:
What [FBI Director James Comey] wants is backdoors. And that’s where the problem comes in. Because, as we’ve said so many times, a backdoor is not an exclusive thing. If you create a backdoor for the FBI, and the CIA, and whatever intelligence agency, you’ve created a backdoor that other governments, hackers, other criminals, can use. All they have to do is figure out how to get through the hoops to do it, and that’s it.
All they have to do is break the strong encryption in which the backdoor data is wrapped. Just jump through the hoops, and that’s it!
Come to think of it, the bad guys don’t even need a backdoor. They can break the existing system right now: Just figure out how to do it, jump through the hoops, and that’s it — encryption cracked! What is Apple going to do about that? We’re all totally vulnerable, right now.
Update 2017.05.01 — More wisdom from Doctorow: DRM will be all gone by 2025! So if you have a profitable app on the App Store, be sure to save some of the money: The party’s over just eight years from now.
Update 2017.06.19 — EU now considering requiring secure encryption, and outlawing backdoors.
Update 2017.07.14 — Australia working on law that would require government access to encrypted messages.
Update 2017.08.18 — Help Net Security has the definitive word:
How security pros look at encryption backdoors
The majority of IT security professionals believe encryption backdoors are ineffective and potentially dangerous, with 91 percent saying cybercriminals could take advantage of government-mandated encryption backdoors.
What do the other 9% say — that the Dead Sea Scrolls told them a secure backdoor is possible?
Encryption backdoors create vulnerabilities that can be exploited by a wide range of malicious actors, including hostile or abusive government agencies.
Oh, well, that settles it.
Update 2017.11.01 — Russia’s ban on VPNs goes into effect.
Update 2017.11.08 — It’s been over two years since he first started talking about this, but Chaffin’s tune has just gotten more entrenched than even I thought possible (ACM #436):
A backdoor created for one is available to all. This is a hard-and-fast rule. There is no way around the laws of reality in this particular regard.
The specific, exact way that my BDpriv plan would fail is — drumroll please — that it just would because that’s a hard-and-fast law of reality. Ahhh. I’m so glad I understand that now. Why did I ever think my idea could work?
Update 2017.11.09 — U.S. DOJ’s Rod Rosenstein (as quoted by Cyrus Farivar in Ars Technica):
[T]ech companies are moving in the opposite direction [of cooperation with government]. They’re moving in favor of more and more warrant-proof encryption.
Law 101, Rod: There’s no such thing as a legal order to do the impossible. No one can comply — or refuse to comply — with any such order. There would not even be a way to determine whether or not the subject of such an order was in compliance.
There is such a thing as crack-proof encryption. (It’s been around for many decades.) But no encryption system, or anything else in this world, is “warrant-proof.” That descriptor lies somewhere between oxymoronic and meaningless.
That said... here’s the opposite sophistry, from Farivar:
The DOJ’s position runs counter to the consensus of information security experts, who say that it is impossible to build the strongest encryption system possible that would also allow the government access under certain conditions.
The feasibility or infeasibility of a secure encryption backdoor, like the feasibility of an airplane or an atomic bomb, isn’t a matter of consensus. Either it can be done, or it can’t.
iOS Jailbreaking — A Perhaps-Biased Assessment
A Secure Backdoor Is Feasible
Method of Implementing A Secure Backdoor In Mobile Devices