The Fog of Cryptowar (3/4)
Key Escrow
Key escrow in the strict sense means that all keys (in this debate, confidentiality keys) must be shared with a trusted agent – such as a government agency – before they can be used for encryption. If encrypted data must be decrypted under a warrant, the police would request the key from the agent and perform the decryption.
While possible to implement from a purely theoretical point of view, key escrow mechanisms are inherently complex when deployed at larger scale. There must be a secure way of transmitting the secret keys between the user and the escrow agent, and those keys must be made accessible to law enforcement in some way.
Very naive approaches use only one additional, global key to secure this key transport. But this makes that global key a secret on which the confidentiality of all communication within the domain of regulation would rest. The escrowed keys must be stored, managed and protected against unlawful access.
If recent history is any indicator, building such a system even on a national scale is unrealistic. Many government agencies have suffered fatal data breaches in recent years, including the NSA (which specializes in keeping secrets), the CIA (likewise) and the Office of Personnel Management in the USA. This list of breaches is far from exhaustive, but it demonstrates the risk a key escrow agent would face.
This risk is compounded by the fact that an escrow agent faces two conflicting requirements. On the one hand it must protect all keys against unlawful access; on the other hand it must establish a way to share those keys with law enforcement in a timely manner. This makes it necessary to keep some form of the key digitally available and online – which in turn exposes that key to attacks.
To mitigate the risk of a single escrow key, some schemes suggest splitting the user’s key among multiple key escrow agents that then have to cooperate to reveal the key. While the security of these schemes is higher, they also multiply the complexity and cost of such a system, especially with regard to deployment and operation.
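To illustrate the simplest variant, the following sketch splits a key into XOR shares so that every agent must contribute its share before the key can be reconstructed. Real proposals typically use threshold schemes (such as Shamir’s secret sharing); the agent count and key size here are illustrative assumptions.

```python
# Minimal sketch of n-of-n key splitting via XOR shares: every escrow agent
# must contribute its share before the original key can be reconstructed.
import secrets

def split_key(key: bytes, num_agents: int) -> list[bytes]:
    """Split `key` into `num_agents` XOR shares; all shares are required."""
    shares = [secrets.token_bytes(len(key)) for _ in range(num_agents - 1)]
    last = key
    for share in shares:
        last = bytes(a ^ b for a, b in zip(last, share))
    return shares + [last]

def recover_key(shares: list[bytes]) -> bytes:
    """XOR all shares back together to recover the original key."""
    key = shares[0]
    for share in shares[1:]:
        key = bytes(a ^ b for a, b in zip(key, share))
    return key

user_key = secrets.token_bytes(32)           # the user's confidentiality key
shares = split_key(user_key, num_agents=3)   # one share per escrow agent
assert recover_key(shares) == user_key       # all three must cooperate
```

Note that this simple construction offers no redundancy: if a single agent loses its share, the key is unrecoverable, which is one reason real proposals reach for threshold schemes and thereby add further complexity.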
Furthermore, the process by which law enforcement can request keys from the escrow agent(s) must be secured and authenticated, meaning that law enforcement would require some form of authentication key to demonstrate legal access. Each authorized agency and office would need one of those authentication keys. However, since each of those keys comes with the ability to have an escrowed key revealed by the agent, the security of a key escrow scheme would also rest on the secrecy of each of those authentication keys.
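A minimal sketch of what such an authenticated release request could look like follows. The request format, agency identifiers and the use of Ed25519 signatures are our own illustrative assumptions, not part of any proposed scheme; the point is only that every registered authentication key becomes a secret whose compromise unlocks escrowed keys.

```python
# Hedged sketch: the escrow agent releases a key only after verifying a
# signed request from a registered agency.
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

# The agency's authentication key pair; the public half is pre-registered
# with the escrow agent.
agency_key = ed25519.Ed25519PrivateKey.generate()
registered_agencies = {"agency-17": agency_key.public_key()}

def release_allowed(agency_id: str, request: bytes, signature: bytes) -> bool:
    """Return True only if the request is signed by a registered agency."""
    public_key = registered_agencies.get(agency_id)
    if public_key is None:
        return False
    try:
        public_key.verify(signature, request)
        return True
    except InvalidSignature:
        return False

request = b"warrant=W-0042;escrow-id=user-123"   # hypothetical request format
signature = agency_key.sign(request)
assert release_allowed("agency-17", request, signature)
```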
Additional problems such as secure key rotation, availability of the agent, and cost of operation would likely turn this approach into the biggest and most complex government-mandated information system project in history. The risk of failure to deploy, the risk of security breaches, and the cost of operation make such an approach unrealistic.
Another problem of key escrow systems is the scope in which they are to be deployed. If they are deployed as a global infrastructure, their management and regulation would require global political coordination. If they are instead deployed on a national scale, they would require some means to enforce the specific demands of the jurisdiction on the user’s device – such as choosing the transport key of the national key escrow agent.
A further problem of key escrow mechanisms is that they conflict with cryptographic best practices, especially Perfect Forward Secrecy. Here a new key is generated for each message and old keys are immediately destroyed. This ensures that a leak of keys does not put all communication at risk of being decrypted, but only the communication during the short time frame for which the key was stolen. Key escrow systems, however, require that keys are shared with the agent, which both introduces long-term storage of secret keys that could decrypt years of communication and creates an enormous amount of traffic between user and escrow agent, since every new key needs to be escrowed.
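The following sketch illustrates the forward secrecy pattern: a fresh ephemeral key pair for every message, with the private part destroyed after use. It is loosely inspired by modern messaging protocols (which layer ratcheting and authentication on top); the function names and exact construction are illustrative assumptions.

```python
# Minimal sketch of per-message forward secrecy with ephemeral X25519 keys.
from cryptography.hazmat.primitives.asymmetric import x25519
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes

def message_key(peer_public: x25519.X25519PublicKey):
    """Derive a one-time message key from a fresh ephemeral key pair."""
    ephemeral = x25519.X25519PrivateKey.generate()   # new key for this message
    shared = ephemeral.exchange(peer_public)
    key = HKDF(algorithm=hashes.SHA256(), length=32,
               salt=None, info=b"per-message key").derive(shared)
    public_part = ephemeral.public_key()             # sent along with the message
    del ephemeral                                    # private key is destroyed
    return key, public_part

recipient = x25519.X25519PrivateKey.generate()
key, eph_pub = message_key(recipient.public_key())
# Escrowing every such key would mean contacting the agent for every single
# message and storing the keys long term -- exactly what forward secrecy avoids.
```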
Another best practice that is incompatible with key escrow is the use of authenticated encryption. Here the same key is used not only for confidentiality but also for integrity protection (and indirectly authentication) of the communication. Sharing this key with an escrow agent would allow the agent not just to read the communication, but also to manipulate it without the original parties being able to detect this. This means that not only the confidentiality of data is at risk, but also the security of the communicating devices.
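The sketch below shows this with a standard authenticated cipher (AES-GCM): anyone holding the escrowed key can not only decrypt messages but also forge messages that the recipient will accept as genuine. The scenario and message contents are, of course, illustrative.

```python
# With authenticated encryption, the key holder can both read and forge.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # shared between sender and recipient
escrowed_key = key                          # ...and, under escrow, the agent

# Sender encrypts a message.
nonce = os.urandom(12)
ciphertext = AESGCM(key).encrypt(nonce, b"meet at noon", None)

# The escrow agent can read it...
print(AESGCM(escrowed_key).decrypt(nonce, ciphertext, None))

# ...but can also forge a message indistinguishable from a genuine one.
forged_nonce = os.urandom(12)
forged = AESGCM(escrowed_key).encrypt(forged_nonce, b"meet at midnight", None)
print(AESGCM(key).decrypt(forged_nonce, forged, None))  # recipient accepts it
```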
Instead of the user generating a key and then sharing it with the escrow agent, the escrow agent could also generate keys for the user. This suffers from the same problems, but introduces an additional one: the security of all keys then relies on the security of the key generation method employed by the escrow agent. Implementation mistakes in cryptographic algorithms are commonplace enough that this could lead to a situation in which the security of all keys is undermined without anybody being able to detect it – except a successful attacker.
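A hypothetical illustration of how such a flaw could stay invisible: keys derived from a low-entropy seed look perfectly random to users, yet can be re-derived by anyone who knows or guesses the construction. The construction below is invented for illustration only.

```python
# A flawed generator at the agent: keys appear random but are predictable.
import hashlib
import time

def flawed_keygen(user_id: str) -> bytes:
    # Looks like a 256-bit key, but only a coarse timestamp and the user id
    # actually vary -- an attacker can brute-force the timestamp.
    seed = f"{user_id}:{int(time.time()) // 3600}".encode()
    return hashlib.sha256(seed).digest()

key = flawed_keygen("user-123")
# An attacker who suspects the construction simply tries plausible timestamps
# until a candidate key decrypts the traffic; no user can detect the weakness.
```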
Advances in cryptography may also lead to key escrow becoming much more secure. For example, various proxy re-encryption schemes could be employed to mitigate many of the security problems of previous approaches and reduce the complexity of implementing key escrow.
Content Escrow
Instead of encrypting data end-to-end between the intended sender and recipient only, a third party (called the agent) can be introduced to which all content is encrypted as well. Various protocols exist that make this possible and enforceable as long as at least one of the original parties is honest. The communication can then be intercepted by regular means and decrypted if the need arises.
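A minimal sketch of this idea using hybrid encryption: the content key is wrapped both for the recipient and for the escrow agent, and both wrapped copies travel with the message. The enforcement part – proving that the escrow copy is present and correct – is what the protocols mentioned above add, and is omitted here; all key choices are illustrative assumptions.

```python
# Content escrow via hybrid encryption: one content key, two wrapped copies.
import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

recipient = rsa.generate_private_key(public_exponent=65537, key_size=2048)
agent = rsa.generate_private_key(public_exponent=65537, key_size=2048)

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

content_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
ciphertext = AESGCM(content_key).encrypt(nonce, b"the actual message", None)

wrapped_for_recipient = recipient.public_key().encrypt(content_key, oaep)
wrapped_for_agent = agent.public_key().encrypt(content_key, oaep)
# The message carries both wrapped keys; interception plus the agent's private
# key is then enough to decrypt, without contacting sender or recipient.
```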
Content escrow schemes allow the continued use of some forward secrecy mechanisms, as long as the agent actively supports them.
An additional problem of content escrow mechanisms is that the agent plays an active role in communication, which increases the demands on the reliability and accessibility of the agent. Should the agent become unavailable, this could (depending on the protocol) prevent communication, which turns the agent into a single point of failure and would make it a prime target for denial-of-service attacks.
Key Recovery
Key recovery schemes are similar to key escrow schemes in that they make keys available to a trusted third party. However, keys are not directly handed to an escrow agent to be stored; instead, recovering them requires access either to one of the communicating devices or to real-time interception of the communication.
In key recovery schemes the confidentiality keys generated by the user are stored in a secure storage module of his device, stored in a remote cloud account, or transmitted with his communication. The keys are encrypted to one or more escrow agent keys.
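As a hypothetical illustration, loosely modelled on the Clipper Chip’s “Law Enforcement Access Field”: the session key is encrypted under a key held by the escrow authority and transmitted alongside the traffic, together with a short checksum. The details below are simplified assumptions; Clipper’s real field was more elaborate, and its 16-bit checksum is what turned out to be forgeable.

```python
# Simplified recovery field attached to each communication.
import os
import hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

family_key = AESGCM.generate_key(bit_length=256)   # held by the escrow authority

def recovery_field(session_key: bytes) -> bytes:
    """Build the escrow blob transmitted alongside the encrypted traffic."""
    nonce = os.urandom(12)
    wrapped = AESGCM(family_key).encrypt(nonce, session_key, None)
    checksum = hashlib.sha256(session_key).digest()[:2]   # 16 bits, as in Clipper
    return nonce + wrapped + checksum

session_key = os.urandom(32)
blob = recovery_field(session_key)
# Interception yields the blob; only the escrow authority's family key is then
# needed to recover the session key and decrypt the captured traffic.
```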
Key recovery schemes have the same problems as key escrow schemes, but they are less resource intensive because the user never has to communicate with the escrow agent. Instead, the existing interception capabilities of communications providers are used only in those cases where a need for interception actually arises.
Key recovery schemes for data at rest, especially encrypted devices, are a seemingly attractive approach because any access to the secret keys would require access to the device as well as the cooperation of the escrow agent(s). This could potentially satisfy part of law enforcement’s demands without undermining security too much. However, implementing such a recovery scheme would require the creation and deployment of special secure storage modules in all relevant devices – current devices would not be covered.
A final note should be added concerning key escrow, content escrow, and key recovery. All these approaches are brittle in the sense that there is no guarantee they will work when they are most needed. Verifying that such a scheme works in a specific case requires actually decrypting the data of interest. If such verification is not undertaken frequently, these schemes might break without anyone noticing. However, this creates new legal problems, since the interception and decryption of data for verification purposes is hardly justifiable by current standards of law. Attempts to verify these schemes by employing the (automated) cooperation of the communication partners only apply to data in transit, and always rely on the honesty of at least one party. Since these schemes are meant to catch criminals – people who actively and intentionally break the law – such cooperation cannot be assumed. It is this verification problem (among other aspects) that doomed the famous Clipper Chip key recovery system that the USA tried to roll out in the 1990s. Since then, no substantial improvement on this front has been made.
Mandatory Key Discovery
Several jurisdictions (the UK, indirectly the USA and Canada, amongst others) have codified laws that are meant to compel suspects to reveal their secret keys and passwords to law enforcement or the court. If the suspect does not comply, fines and prison time await him.
This approach suffers from technical, practical and legal problems:
First, it is of no use if the suspect employs Perfect Forward Secrecy in his communication, or uses timed encryption for his storage devices.
Second, it is hard – and sometimes impossible – to distinguish between a suspect who is unwilling to reveal his keys and one who is unable to, either because he forgot them or because he never actually knew them (a mis-attributed device, or a hardware security token that has been destroyed).
Third, it is questionable whether anybody should be compelled to produce incriminating evidence against himself. Since we are not legal experts, we refrain from further judgment. However, the legal implications are deeply troubling.
Insecure default settings
It seems that one of the approaches tried by both the USA and the UK is to influence software and hardware vendors to abstain from making strong cryptography the default configuration of their products, while keeping the capability intact.
This attempts to at least catch the low-hanging fruit, the fully incompetent criminals. Surprisingly, this might actually be a productive means, since criminals are generally caught because of their incompetence – until they learn.
Remote Access Schemes
A prominent approach to solving the Going Dark problem is to allow law enforcement remote access to the device of a suspect. Several variations of this method exist, which we cover below. Common to all of them is that they suffer from three problems:
- Access control for the use of these remote access methods is a hard problem. Only law enforcement, and ideally only with a warrant, should be able to use them. Hackers and foreign governments must be excluded. This essentially mirrors some of the problems of key escrow systems: there must be a secure way of targeting the device, and the necessary access credentials (or other secret knowledge required for access) must be securely managed.
As is evident from the NSA and CIA Vault 7 leaks, guaranteeing this is an enormous undertaking. Without such a guarantee, remote access schemes have the potential to undermine the digital infrastructure of nations, making it vulnerable to hackers and cyberwar.
From a purely national security perspective, this appears a price too high to be paid.
- Digital evidence gathered through remote access, as mentioned before, is of questionable evidentiary value. Since remote access necessarily allows control over the target system, any data on it could be manipulated and falsified, including the suppression of evidence or the creation of false evidence. Because all access happens covertly, legal recourse is at risk, and because the access methods must be closely guarded for security reasons, they cannot be revealed in legal discovery. This boils down to the necessity of simply trusting individual law enforcement officers to be honest – and that in light of cases in which police have planted drugs as evidence, and the proverbial “Saturday Night Special”.
- Devices may be hard to assign to a jurisdiction. It is necessary to determine the actual location of a device before infiltrating it; otherwise the police of country A could break into a device in country B, leading to potential diplomatic turmoil. It is unlikely that a country like the USA would welcome the remote searching of a domestic device by the police of China or Russia.
Mandatory Software Backdoors
Governments could mandate backdoors to be implemented in operating systems so that law enforcement can access any device remotely, given the necessary authentication credentials. This is highly problematic since such an intentional security hole risks the integrity of all devices. Securing the access credentials so that they do not fall prey to hackers and foreign adversaries would be an enormous, and potentially impossible, task. Furthermore, since software and devices are shipped internationally, such a backdoor would have to be deployed per jurisdiction – potentially at the border. This is frankly unrealistic and dangerous beyond words.
In addition, the backdoor itself would have to be securely programmed in the first place to prevent exploitation even without valid authentication credentials. Furthermore, the communication towards such a remote backdoor would have to pass through all firewalls on the way – meaning that firewalls would need to be configured accordingly as well. This applies not just to corporations but also to ordinary users, since off-the-shelf home routers ship with firewalls enabled. Beyond that, the targeting and reachability of the device must be guaranteed, even though NAT, and especially carrier-grade NAT, is widely deployed and does not support unsolicited incoming connections.
This would mean that government has to deploy something like current malware that actively reaches out to a command and control (C&C) server or network to request instructions. This C&C would become a prime target for denial-of-service attacks, but also a great source for finding out who is currently under investigation, counteracting investigative goals.
Lawful Hacking
Several countries, including Germany, the Netherlands and the USA, have created legal frameworks that allow law enforcement to use existing security holes in deployed software to break into systems in order to remotely identify, search or tap them.
The main problem with this approach is that it requires law enforcement to have access to exploits – software that uses security vulnerabilities in the target to gain system access. These exploits are highly sought-after knowledge, and with growing demand not only from cyber criminals but also from law enforcement, intelligence agencies and the military, they have become a tradeable good commanding ever higher prices.
This creates a dilemma. On the one hand, government has the mandate to protect its citizens (and that includes their computers) against crime and foreign aggression. On the other hand, government needs to keep exploits secret because law enforcement relies on them to execute remote access for investigative purposes.
In addition to the problem of deciding which security holes to make known to vendors for patching and which to keep secret, government demand for exploits potentially creates a market that further erodes security, because criminals are incentivized to introduce vulnerabilities into software. For example, contributors to open source software, or employees of software companies, might be tempted to introduce exploitable bugs into software and later auction exploits for them to the highest bidder.
Since these exploits often command prices beyond 500,000 USD, this is a pressing risk – especially for open source software, where contributors are usually not sufficiently vetted and identified.
One suggested escape from this multi-faceted dilemma is that government only uses security vulnerabilities that have already been made known to vendors but not yet fixed. For example, it is rumored that the NSA has access to the CERT feed over which vendors are informed about newly found vulnerabilities. While this softens the dilemma, it comes with its own problems:
- The time to create and deploy the exploit code is significantly shortened, requiring that the government employ highly skilled and motivated experts who program and test these exploits around the clock. Again, those exploits must not fall into the wrong hands, but at the same time they need to be quickly made available to authorized law enforcement entities.
- Giving government access to a stream of vulnerabilities also means that potentially many more people gain that knowledge, risking leaks. Furthermore: how to decide which government should have priority access to that knowledge, and what consequences does this have for national security?
At least the approach of using only 1-day exploits (vulnerabilities already made known to vendors) would contribute to drying up part of the market for exploits.
A variant of this method has recently become known. In some (unidentified) countries, internet service providers were enlisted to help the government target specific users by infecting downloads with remote access trojans on the fly. Such drive-by attacks depend, however, on insecure usage practices of the user and are unreliable. They also suffer from mistakenly attacking innocents.
Targeted Updates
A rarely discussed method for remote access is the subversion of update procedures. All devices require regular updates to fix existing security vulnerabilities or deliver new features.
Update processes inherently have the ability to change every part of the device’s software, and they often already provide targeting methods – through device identifiers or licenses.
As such, they could be considered to be intentional backdoors.
Software vendors currently employ digital signatures to secure and authorize their updates. This mechanism could, however, be used by law enforcement if software vendors can be convinced (or forced) to comply. Vendors would certainly resist such a move vehemently, but they also have a record of cooperating, especially when it comes to third-party software delivery.
Both Google (Android) and Apple (iOS/iPhone) have already suppressed and forcibly uninstalled software from their customers’ devices, which supports the assumption that they could also be made to install software – if government asks for it and a sound legal process is established.
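A sketch of why a signed update channel doubles as a targeting mechanism: the device accepts any package carrying a valid vendor signature, and update packages routinely carry targeting metadata such as a device identifier. The package format and key choice below are illustrative assumptions, not any vendor’s actual mechanism.

```python
# Signed, targeted update delivery in miniature.
import json
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

vendor_key = ed25519.Ed25519PrivateKey.generate()
vendor_public = vendor_key.public_key()            # baked into every device

def sign_update(payload: bytes, target_device: str) -> bytes:
    package = json.dumps({"target": target_device,
                          "payload": payload.hex()}).encode()
    return package + b"." + vendor_key.sign(package).hex().encode()

def device_accepts(update: bytes, device_id: str) -> bool:
    package, _, signature = update.rpartition(b".")
    try:
        vendor_public.verify(bytes.fromhex(signature.decode()), package)
    except InvalidSignature:
        return False
    return json.loads(package)["target"] == device_id   # targeted delivery

update = sign_update(b"\x90\x90", target_device="device-42")
assert device_accepts(update, "device-42")
assert not device_accepts(update, "device-43")
# Whoever controls -- or can compel the use of -- the signing key can ship
# such a package to a single device through the normal update channel.
```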
Common Problems with Various Regulatory Means
In the following we will touch on several open questions and problems that are common to all attempts to regulate cryptography, as well as engage with some of the frequently repeated arguments against such regulation.
Regulation undermines security
All means known to us that soften the Going Dark problem lower the security of information systems and communication to some extent. This is to be expected, since the whole point is to grant third parties access that is not necessary for operation itself. Security must therefore be lowered to include those parties even against the will of the user, thereby lowering the extent to which the user is able to control his devices and software. This is further amplified by the fact that any such approach increases the complexity of the software and infrastructure – and complexity is the enemy of security. Fundamentally, security and control are synonyms in this field.
However, security is not binary. It is a gradient on which we pick a value in light of trade-offs like convenience and cost. The public policy decision on how to deal with the Going Dark problem is just one of these trade-offs, namely that of public security and the enforcement of law.
That presents us with the question of how to balance individual control against the provision of (at least) the rule of law. This is not a question of cryptography or computer security, but one of social ethics, politics and statecraft. It therefore has to be answered in that domain.
Within that domain, previous answers have been to regulate gun ownership, doors that resist police raids, mandatory government identification schemes that enable identity theft, and TSA locks on luggage. For some special needs, licensing schemes have been introduced, which could apply to crypto regulation as well – allowing unrestricted use of cryptography for some purposes, like banking and e-commerce, while strictly regulating it everywhere else.
Our answer to the public policy question is radically on the side of individual control and security: cryptographic protections, privacy, control over our devices, and the integrity of information processing systems are among the most fundamental requirements in a world that relies on international communication and data processing for national, economic and personal wellbeing. This is especially true in the face of the risks of cyber crime and cyber warfare. Lowering our defenses would make us even more vulnerable than we already are, potentially risking our critical infrastructure and personal autonomy.