Why The Government Shouldn’t Break WhatsApp


There’s been a lot of talk about whether
the British government will force companies like WhatsApp to introduce
a backdoor into their encryption, so that the police and government can read
your messages if they need to. As I record this, they haven’t done it yet, but the laws that could let them do so in
the future are already in place. And here’s something you
might not expect me to say: that sounds like a reasonable idea. After all, backdoors have been allowed for
old-school phone conversations for decades. They’re called wiretaps. And if a criminal investigation has enough
evidence that they can get a legal warrant, then they can look inside your postal mail, they can listen to your phone calls, and they can intercept your text messages. And it’s called a wiretap because, many years
ago, the police would literally be attaching a
device to a physical phone wire. So for anyone who grew up knowing that, anyone who grew up with computers like this, like pretty much every politician in government, well, it seems reasonable that that should
also extend to, for example, WhatsApp. So why not? Well, first, let’s look at the technical detail. It all depends on who is holding the keys. Modern encryption uses complicated math that is easy for a computer
to calculate one way, but almost impossible to work out in reverse. A really simple example: if I ask you to multiply two prime numbers
together, like 13×17, you can do that by just hitting a few keys
on your calculator. And because those were prime numbers, we know that’s the only way to make 221 by
multiplying two whole numbers together. Other than 221 times 1,
and that’s not really helpful. But if I ask you: what two prime numbers were
multiplied together to make 161? There is no way to work that out quickly. There are a few shortcuts that you can take, but it’s still basically a brute-force method. Now imagine that you’re
not trying to work out 161, but instead something like this… and you start to see the scale of the problem. And that’s just a simple example, modern cryptography uses way more complicated
one-way operations. The important part is that you can have a
computer do math that’s simple one way, but could take longer than the lifetime of
the universe to brute-force back. The result is that you can have two keys:
two massive numbers. One public, one private. You send your public key out to the world. Anyone can encrypt a message with it: the message gets converted to
what looks like random noise. Even that same public key
can’t convert it back. But you can take that noise
and use your private key — and only your private key — to decrypt it. When you want to send a message back, you
use their public key, and they use their private key
to decrypt it. And the beautiful part of this:
there’s no need to exchange keys in advance, you don’t have to work out old-school
one-time pads, or anything like that. You can post your public key
out on the internet for all to see. As long as you keep that private key secret, no-one else can read your messages. This is a system that has been tested under
incredibly harsh conditions for decades. It works. The catch is, it’s really unfriendly to use. It’s difficult enough to get someone to join
a new messaging service as it is, let alone bring their friends along. Now you have to generate
these weird key things as well? And if you lose your phone
or somehow forget that key, or your hard drive crashes
and you haven’t got a backup, all your messages are gone,
lost as random noise forever. Email that works this way
has been around for decades but it’s too complicated and it’s too unfriendly
for most people. The security wasn’t worth the effort. So instead, web mail services, along with Facebook, Twitter,
and everyone else, didn’t worry about that. Early on, they were mostly unencrypted, but rapidly realised that was a bad idea — so now, they use regular web encryption, that padlock in your browser, to make sure that no-one
on your network can see your password or your messages
when they’re in transit. And that’s the threat that most people
have to worry about. But they do have
the content of those messages in plain text, or something close to it, and those companies can give that back to you
whenever you want. Which means that when a government comes along
with a legal warrant, the companies can also
give the messages to them. And this was fine, right? This was reasonable. This was an acceptable compromise between
security and usability. Or at least it was, until it was revealed
that — in short — every major government was keeping a copy
of pretty much everything everyone ever wrote, at which point a few companies decided, that,
actually, they didn’t want to take the risk of anyone
— not even their own employees — being able to even theoretically access the
messages that people were sending. The result is WhatsApp, and iMessage, and
the many smaller apps like them. They have “end-to-end encryption”. Your phone generates a public and private
key for you, automatically. It exchanges public keys behind the scenes, while you’re writing your first message to
someone, and everything after that is encrypted. And it’s all automatic! Now, WhatsApp and iMessage
aren’t open source, so in theory they could steal your private key
as well, or quietly issue a fake one to someone and
sit in the middle listening, but in practice people would notice. Sure, there are small loopholes that could
work in particular circumstances, but the odds are remote, and security researchers
are already decompiling and tearing apart every version
of every messenger program just to see if someone’s
put a backdoor into it. The short version is: if any of these apps get served
with a government warrant right now, the most they could do is say how much two
people have been talking, and maybe roughly where they were: but never what they were talking about. More than that is
literally, mathematically impossible. But it’s impossible only because of the way
they’ve designed their systems. And that is the vulnerability. A government could make it a legal requirement
for Apple and Facebook to quietly add a backdoor in all their encryption if they want to sell anything in their country. I’ve heard this phrased as “outlawing maths”, but that’s a bit like saying that making punching a stranger in the face illegal
is “outlawing hands”. And if Apple and Facebook refuse to add a backdoor,
a government could… well, theoretically they could ban their phones
or ban their apps from sale, or prosecute the people in charge, or block Facebook, who own WhatsApp, or they could tell internet providers to block
their services, or they could… Look, in practice they’re going to fine the
company. Apple and Facebook have local addresses, they pay… some tax. Sitting on the sidelines, I would love to see
the British government go up against Apple and see who blinks first. But companies have bowed to foreign governments
loads of times in the past. BlackBerry let the Indian government have full access to users’ chats
and web history back in 2013. The only reason WhatsApp can’t read your messages is because they have deliberately chosen to
design their systems that way. They were just as popular without encryption: it was an afterthought, they’d been going for years before they switched
encryption on. This was a human decision, not an inevitable fact of technology. So why is an encryption backdoor
such a bad idea? Well, if there’s a backdoor, it can and will
be abused. Local British authorities already used our
surveillance laws, the ones that were brought in to stop terrorism, to monitor loud dogs barking, crack down on illegal feeding of pigeons, and to spy on some parents to see if they
actually lived near enough to a particular school they wanted to get
their kids into. Now, is this useful for preventing crime? Sure. And there’s the argument that “if you have nothing to hide,
you have nothing to fear”: maybe they shouldn’t have
illegally fed those pigeons. And yes, you, watching this, you probably
have nothing to hide and nothing to fear from the current government in your country. But laws and governments change,
and besides that: the internet, and the apps that we use on
our phones, are global. If you allow a backdoor here, you’re also allowing it for another country’s
government to spy on its opponents, and another to spy on people they suspect
might be gay, or who use marijuana, or who are Christian, or whichever thing is illegal in that country. In fifty years, maybe you’ll be part of a country where eating
meat has been outlawed, and the government will want to track down
the illegal bacon-trading ring that
your friends are part of. “Nothing to hide” only works
if the folks in power share the values of you and everyone you know
entirely and always will. To make it worse, on the surface this seems
like it’s equivalent to a regular, old-school wiretap,
but it’s not: depending on how the backdoor’s set up, a government might not just be able to get
what someone’s sending now. They could get the whole message history. Perhaps years of messages, back and forth with hundreds or thousands of other people. It’s not just a look into what a person’s
saying: it’s an overreaching look into the thoughts
of many, many people. It’s that long-forgotten naked picture that
someone sent five years ago. It’s that angry essay they wrote in school
and which they completely disagree with now. It’s not just “what are they saying”, it’s “what have they ever said”. That’s all assuming the backdoor doesn’t get
abused by folks with more personal grievances. All it takes is one rogue employee, in the government or at a messaging app, and we’ve got a huge amount of personal information
being leaked. Either of the public at large or of specific people that someone would like
to take revenge on. It fails the “bitter ex test”: can someone with an agenda use this to ruin a life? An AP investigation found hundreds of cases where police officers and civilian staff in
the US looked up private information
for personal reasons. And let’s not start on what would happen
if a hacker, or even some other government’s
intelligence service, got access to the backdoor. Or how it’d make it much more risky to report abuses of government power,
on any scale. There is an argument that
it would all be worth it, that all those drawbacks
would be a small price to pay for stopping very rare Bad Things. I disagree,
but that’s an opinion, not a fact. But an encryption backdoor wouldn’t stop
bad things happening. The problem with stopping terrorism right
now is not a lack of information. The Manchester bomber was reported to the
authorities five times, including by his own friends and family. One anonymous source inside
the UK security services told Reuters that at any time there are
500 people being investigated, and about 3,000 people “of interest”. For scale, just to reassure you, that’s only about 0.005% of the UK population. But the way to solve this is not more data, it’s having enough police officers
and security staff with enough time to do their jobs and investigate. And let’s be clear: anyone who wanted
secure communication for evil purposes would just use something else, any of thousands of smaller services that
the government hasn’t noticed yet or that they couldn’t possibly
have jurisdiction over. Or if even that is not an option, they can come up with a code themselves, even just in-jokes and references
that no-one else understands. So when I say that an encryption backdoor
sounds like a reasonable idea, I mean it. It sounds reasonable. Like a lot of ideas sound reasonable when
you express them in one or two sentences. But the devil is in the detail. If we could replicate the way
wiretaps used to work, limited in scope and time, requiring a warrant and some physical effort, not including the history of everything that
someone’s ever said, and not open to repressive governments
elsewhere in the world, then sure, I would absolutely
be in favour of it. Building an encryption backdoor isn’t impossible: but building a reasonable one is. Thank you to everyone who helped
proofread my script, and to everyone here at the
Cambridge Centre for Computing History, who let me film with this wonderful old equipment.

19 comments

  1. This is the first video from "The Basics", a series of three pilot computer-science videos I'm putting out in the next couple of months. This one's opinionated; one's explanatory; and one demonstrates coding. It's been a while since I've done this sort of thing — thanks to the folks who helped proofread my scripts!

  2. No broad backdoors should be used. However, I've worked with encryption and here is an easy solution.
    There are various derivations with public keys and/or AES.
    Phones are powerful enough to do these encryptions easily.

    1. do the normal end-to-end encryption SAME as today
    2. separately take each message and
    a. encrypt with a public key of the WhatsApp company (or whoever)
    b. encrypt with a public key assigned to the user account sending the message
    c. generate a public key daily for a user and encrypt with that. This limits scope of the user data and key.
    d. lastly encrypt with a public key of a firm whose job is auditing, certificate or key management, "Audit Firm X". But not WhatsApp in this case.
    send the message to the server with the account id/date in plain view but the data encrypted with a.-d.

    The message encrypted with the a.-d. method cannot be seen by any one organization. You need a warrant from the govt. to be sent to
    WhatsApp and "Audit Firm X" to even get to the message, and the warrant can limit the daily key scope.

    You could add encryption of the daily keys themselves as well using "Audit Firm X" or another firm Y so it keeps those locked down again without a warrant.

    Anyway, some variation of this will make the data available to the govt when needed, assuming they have the WARRANT, and prevents any one organization from exposing information.
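
[Editor's note: the layered scheme in the comment above can be sketched with toy textbook RSA. Tiny primes, purely illustrative, and the key names are hypothetical; the point is the split-warrant property steps b. and d. aim for: a message wrapped under two independent public keys cannot be unwrapped by any single organization's private key alone.]

```python
# Toy sketch of the comment's split-escrow idea: wrap a message under two
# independent public keys ("WhatsApp" and "Audit Firm X"), so that BOTH
# private keys -- i.e. both organizations, compelled by a warrant -- are
# needed to recover it. Textbook RSA with tiny primes; illustration only.

def make_keypair(p, q, e=5):
    """Return ((e, n) public key, (d, n) private key) for primes p, q."""
    n, phi = p * q, (p - 1) * (q - 1)
    return (e, n), (pow(e, -1, phi), n)

def crypt(x, key):
    """Encrypt or decrypt the integer x with an (exponent, modulus) key."""
    exp, n = key
    return pow(x, exp, n)

whatsapp_pub, whatsapp_priv = make_keypair(13, 17)  # hypothetical escrow key 1
audit_pub, audit_priv = make_keypair(19, 23)        # hypothetical escrow key 2

message = 42
# Layer the encryptions: inner layer for WhatsApp, outer for the audit firm.
# (Each layer's output must fit under the next modulus; true for these toys.)
wrapped = crypt(crypt(message, whatsapp_pub), audit_pub)

# One private key alone is not enough to recover the message:
assert crypt(wrapped, audit_priv) != message
# Both private keys, applied in reverse order, recover it:
assert crypt(crypt(wrapped, audit_priv), whatsapp_priv) == message
```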

  3. I'm personally very concerned about security and privacy on the internet, but at the same time I think limiting anonymity is not necessarily a bad thing. Both ideas of course conflict a bit, but as more of our lives move to the internet and the things we do there have more real consequences, there should also be more accountability.

  4. My experience is the FBI usually wiretaps innocent people who are threats to organized crime. It then uses these wiretaps to create entrapment scenarios to take the innocent people out.

  5. Very grateful you're brave enough to post a video like this. I see a lot of channels getting completely shadow banned, if not entirely deleted because they talk about something similar. Keep up the amazing work.

  6. Please, on these videos, can you remove the high-pitched noise? It's all I can hear when I'm watching. Probably because of the CRTs, I'm assuming.

  7. This is probably the best I’ve ever heard this put, I’ve always felt like it was wrong on so many levels but couldn’t express it in words until now

  8. This is just like the 9/11 theory. The twin towers attacks were staged to give the US a reason to go to war in the Middle East. They probably didn't stage the attack, but maybe the government are using this tragedy to get more access into our lives through security theatre.

  9. Am I the only one that thought having a back door at all was UNreasonable, even before the explanations? You can just see where the abuse would come from!
