In the fourth installment of the FOC Working Group 1 (WG1) blog series, Mallory Knodel reflects on the link between cybersecurity and internet protocol engineering at the Internet Engineering Task Force (IETF). Mallory Knodel works for the Association for Progressive Communications (APC).
In a human rights-based understanding of cybersecurity – in which user and network security merit equal consideration – confidentiality, privacy and anonymity become essential parts of what it means to be secure in cyberspace. In this context, internet users' right to privacy must be protected at many levels. One of those levels is the lower layers of the internet itself, which may appear opaque to internet policy advocates because of the highly technical nature of the discussion and advocacy work required. This post explains why protocol engineering and standardization are discussions that civil society should take part in to advocate for internet rights, particularly user privacy. It also introduces the main space where multi-stakeholder protocol engineering takes place: the IETF.
INTRODUCTION TO IETF
The Internet Engineering Task Force (IETF) was established in 1986 and is an organizational body under the auspices of the Internet Society (ISOC). It has no formal membership requirements and is made up of volunteers from government, the private sector and civil society. Its stated mission is "to make the internet work better by producing high quality, relevant technical documents that influence the way people design, use, and manage the internet."
The outputs of the IETF are Requests for Comments (RFCs), which set and maintain basic, low-layer technical standards and norms for internet protocols. These standards and norms are adopted by the internet community, including software developers and internet service providers – not because they are binding but because their contributors and the IETF itself have great influence. Standards related to spectrum are set by the ITU, and those related to the web by the World Wide Web Consortium (W3C).
The main organizing mechanism of the IETF is its working groups, which are organized under eight areas of work, one of which is "Security". Working groups establish charters and are disbanded once they have reached their goals. Currently there are several interesting, active working groups in the security area producing RFCs that will set low-level standards and norms to make the internet better from a technical security perspective.
CONFIDENTIALITY IN “THE GOLDEN AGE OF SIGINT”
The fundamental assumption that policy advocates and engineers share is that cybersecurity is important. Security means many things, including confidentiality, privacy, anonymity and integrity, for users and networks alike. This piece focuses on the technical protections for privacy in cyberspace because, of the many new possibilities the internet age has brought, one is that surveillance has never before been as sweeping and pervasive as it is today.
Confidential communication is an antidote to surveillance: it simply means information shared in secret between two or more people. Often this exchange also requires authentication of the sender's or the receivers' identities. And in all cases trust is a required element, even if the trust is placed only in the protocols that enable secret communication.
Encryption keeps an information exchange confidential. Using PGP (Pretty Good Privacy) encryption ensures that the body of your email message is kept confidential between you and the person you exchange it with. Encryption protocols can also play a role in authentication, such as when a sender uses a PGP signature to prove their identity.
Encryption solves the problem of confidentiality, but this is actually a very narrow concern within the broader scope of privacy. In a message exchange using PGP encryption, the body of the message is the only thing kept confidential. Much more information about the exchange is in no way kept secret: the sender's email address, the recipients' email addresses, the subject line, message attachments (unless encrypted separately), the time sent, the sender's time zone, the sender's IP address, and the fact that encryption was used and how. This information is referred to as metadata. Note that the mere use of encryption creates more metadata, and encrypted messages can therefore themselves become a greater target of surveillance.
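To make this concrete, here is a minimal, hypothetical sketch – using only Python's standard email library, with made-up addresses and a placeholder ciphertext – of an email whose body has already been PGP-encrypted. Everything set as a header still travels in the clear:

```python
from email.message import EmailMessage

# Hypothetical example: the body is assumed to be PGP-encrypted already,
# represented here by a placeholder ASCII-armored blob.
encrypted_body = (
    "-----BEGIN PGP MESSAGE-----\n"
    "...ciphertext...\n"
    "-----END PGP MESSAGE-----\n"
)

msg = EmailMessage()
msg["From"] = "alice@example.org"                 # visible metadata
msg["To"] = "bob@example.net"                     # visible metadata
msg["Subject"] = "Meeting location"               # not protected by PGP
msg["Date"] = "Wed, 15 Apr 2015 10:00:00 +0200"   # reveals time and time zone
msg.set_content(encrypted_body)

# Only the body is confidential; every header above is readable in transit.
print(msg.as_string())
```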
THREE EXAMPLES THAT TRADE OFF USER PRIVACY
The first two examples below simply highlight how features in protocols and software implementations can sacrifice privacy for confidentiality and efficiency. The third example, on the Transport Layer Security (TLS) protocol, is a concrete proposal from within the IETF to improve this low-layer protocol from a privacy perspective.
Requesting Authentication Information from a Central Authority
To ensure a confidential exchange – using HTTPS, for example – a "handshake" is required, which essentially establishes the authenticity of the web server. In the case of HTTPS, a security certificate, issued by a certificate authority, authenticates the server. The client verifies the certificate and, if it is valid, proceeds to establish an encrypted connection to the server. This authentication step is important because it protects the client from malicious attacks; indeed, fake certificates and other attacks are designed specifically to exploit HTTPS, which is a security protocol. Certificate authentication also matters for server security, as illustrated by OpenSSL, widely used software that implements the encryption underlying HTTPS. One year ago, in April 2014, OpenSSL had a bug called Heartbleed, which could have exposed servers' secret keys. System administrators replaced the keys on all of their servers and revoked all of their old certificates.
The Online Certificate Status Protocol (OCSP) is how a client can tell whether a certificate has been revoked. The OCSP request is a plain-text request to the OCSP responder, passing a unique identifier that is easy to tie to the certificate in question, and a simple yes-or-no answer is sent back to the client. However, each OCSP responder serves many websites, so watching an OCSP responder's traffic is enough to learn who is visiting a particular website, the visitors of many websites, or even which websites were certified by the same certificate authority using the same OCSP responder. In privacy terms this is a disaster: clients are effectively reporting the websites they visit to a central authority.
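As a rough illustration of why this leaks so much, the hedged sketch below uses the third-party cryptography package; the certificate files and responder URL are assumptions for the example. It builds an OCSP request and sends it over ordinary, unencrypted HTTP, so the responder (and anyone watching the wire) can see exactly which certificate is being checked:

```python
import urllib.request

from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.x509 import ocsp

# Assumed inputs: the site's certificate and its issuer's certificate, in PEM form.
cert = x509.load_pem_x509_certificate(open("cert.pem", "rb").read())
issuer = x509.load_pem_x509_certificate(open("issuer.pem", "rb").read())

# Build the OCSP request; the serial number and issuer hashes uniquely
# identify which certificate (and therefore which site) is being checked.
builder = ocsp.OCSPRequestBuilder().add_certificate(cert, issuer, hashes.SHA1())
request = builder.build()

# The request travels over plain HTTP, not an encrypted channel.
responder_url = "http://ocsp.example-ca.com"  # hypothetical responder
http_req = urllib.request.Request(
    responder_url,
    data=request.public_bytes(serialization.Encoding.DER),
    headers={"Content-Type": "application/ocsp-request"},
)
with urllib.request.urlopen(http_req) as resp:
    ocsp_response = ocsp.load_der_ocsp_response(resp.read())

print(ocsp_response.certificate_status)  # e.g. OCSPCertStatus.GOOD or REVOKED
```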
The good news is that there is a mechanism called OCSP stapling, in which the server itself fetches the OCSP response from the OCSP responder, keeps a cache of up-to-date responses, and "staples" the OCSP response into its communication with the client, so the client never has to contact the OCSP responder at all. Some protocol engineers believe OCSP needs to be fixed in more fundamental ways, but this is a good start, especially for user privacy.
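As a client-side sketch of stapling – hedged, since it relies on the third-party pyOpenSSL library, a hypothetical host, and details that vary by version – a client can ask the server to staple its OCSP response into the TLS handshake like this:

```python
import socket

from OpenSSL import SSL  # third-party pyOpenSSL package


def ocsp_callback(connection, ocsp_data, data):
    # Called during the handshake with whatever OCSP response the *server*
    # stapled in; the client never talks to the OCSP responder itself.
    print("Stapled OCSP response: %d bytes" % len(ocsp_data))
    return True  # returning False would abort the handshake


ctx = SSL.Context(SSL.TLS_METHOD)
ctx.set_ocsp_client_callback(ocsp_callback)

sock = socket.create_connection(("example.com", 443))  # hypothetical host
conn = SSL.Connection(ctx, sock)
conn.set_tlsext_host_name(b"example.com")
conn.request_ocsp()        # ask the server to staple its OCSP response
conn.set_connect_state()
conn.do_handshake()
conn.close()
```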
The Fingerprintability of Customized Security Features
It's important to consider that customizations to personal computers, and to the software that runs on them, become unique identifiers for the person who uses the computer. Reducing this fingerprintability – or at the very least ensuring that security features do not increase it – is therefore something for protocol engineers and software developers alike to consider. The locally saved data that makes a user fingerprintable in this way is referred to as stored state. While stored state has many advantages, such as speed and efficiency, it has disadvantages for privacy. One approach is to offload some of that local, stored state to global state.
For example, HTTP Strict Transport Security (HSTS) stores, on the user's local machine, information about websites the user has already visited securely – in particular, that those sites should only ever be contacted over HTTPS. If that information could instead be stored globally, so that all users could query it, then the fact that specific users have visited certain websites but not others would not fingerprint them.
The solution is to make HSTS information available as global state, so that one's browser does not need to keep its own stored state, which reduces fingerprintability.
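For reference, the HSTS policy that ends up in the browser's stored state is just an HTTP response header; a minimal check with Python's standard library (and a hypothetical host) looks like this:

```python
import http.client

# Hypothetical host; any site that uses HSTS will return the header below.
conn = http.client.HTTPSConnection("example.com")
conn.request("HEAD", "/")
response = conn.getresponse()

# The browser saves this policy locally ("stored state") and will refuse to
# contact the host over plain HTTP until max-age expires.
print(response.getheader("Strict-Transport-Security"))
# e.g. "max-age=31536000; includeSubDomains"
conn.close()
```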
Securing Server Name Indication (SNI)
Server Name Indication is defined in TLS, the encrypted tunnel used for HTTPS, among other things. Again, for encryption the client must authenticate the server with a handshake that begins in plain text. And because multiple services can run on a single IP address, the request to the 10 or 100 services on that one IP address necessarily contains the name of the service the client is looking for. So even when a client is relying on the security of HTTPS, the web browser sends the name of the service – the hostname of the website – in plain text.
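A short sketch with Python's standard ssl module (and a hypothetical host) shows where that name is supplied: the server_hostname value becomes the SNI field of the ClientHello, which is sent before any encryption has been negotiated and is therefore visible on the wire.

```python
import socket
import ssl

context = ssl.create_default_context()

# Hypothetical host. The hostname passed as server_hostname is placed in the
# Server Name Indication (SNI) field of the ClientHello, sent in plain text.
with socket.create_connection(("example.com", 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname="example.com") as tls_sock:
        print(tls_sock.version())                  # negotiated TLS version
        print(tls_sock.getpeercert()["subject"])   # certificate selected via SNI
```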
For privacy's sake, it would be better to encrypt the server name indication. But the server has to send the correct certificate for the service requested, and the client has to validate that certificate, because confidentiality requires authentication. So if the string that tells the server which service the client is looking for were itself encrypted, the server would not know which certificate to send.
The IETF transport layer security working group is in the process of creating a new version of TLS (1.3; the current version is 1.2), which will encrypt as much of the handshake as possible. In many circumstances, encrypting the server name indication will require an additional round trip for the communication. But if it is not encrypted, this leak of metadata remains a problem for privacy.
So, like much of engineering, the TLS protocol prioritizes efficiency, which is one way of making the internet work better. However, privacy is sacrificed for speed and bandwidth. For transport-layer security it is therefore important to encrypt as much of the handshake as possible and to support padding of message timing and size.
SUMMARY AND CONCLUSION
Developers and engineers have important roles to play. While improving confidentiality by encrypting internet traffic with HTTPS is an important goal, users shouldn't have to trade away their privacy for it. That means developers implementing HTTPS must go further and take privacy into account. Furthermore, protocol engineers, particularly those working to improve TLS, should treat privacy as just as important a quality as speed and size.
Civil society organizations must also play their role as internet rights advocates by focusing more attention on the IETF. IETF processes are open and, though it is not explicitly stated, multi-stakeholder in purpose and intent. However, there is a distinct lack of representatives from civil society organizations compared with states and, of course, the private sector. The IETF is not just a space in which to increase influence but one in which to collaborate on building the internet from the bottom up, from its very foundation. Privacy and respect for users should be on the agenda of every working group.
The IETF is a notable multi-stakeholder space. IETF participants meet three times per year, interim progress is made on working group mailing lists, and many participants never attend meetings at all. IETF 92 was just held at the end of March in Dallas, and the next meeting, IETF 93, will be in Prague. The IETF website has a great deal of documentation to help incoming civil society advocates become acclimated to this critical arena.
This article is largely based on a talk given by Daniel Kahn Gillmor at the 10th Hackers on Planet Earth (HOPE) in New York on 18 July 2014. The talk is available for download in free/libre audio format and a transcript with slides is forthcoming.
The views expressed in this blog represent the views of individual authors, and do not represent the views of the Freedom Online Coalition or its members.