Saturday, August 10, 2013

Generating Self-Signed X.509 Certificates

People who know me know that I love to dis on X.509-based security solutions. Whether it's implementations that just plain ignore basic constraints, or popular certification authorities that add an extra zero byte to the end of their certs... it's all just so much fun.

But it's hard to argue with the utility of a properly configured TLS layer. And until we add a TLS extension for using OpenPGP to cart around public keys in TLS handshake sequences, we're sort of stuck with X.509.

I spend a surprising amount of time generating self-signed certificates for testing, so a few decades ago I came up with a bash script to eliminate the drudgery of this process. If you're interested, just grab a copy from GitHub.

To use it, just copy and paste it from the gist page into a file you've chmod +x'd, then run the script, passing the name of the host you're generating a certificate for as the first parameter. It defaults to making 2048-bit keys with no passwords, so don't use this to generate production certs (not that you should be using self-signed certificates in a production environment anyway.)

So if I wanted to create a certificate for www.example.com, and I named the script gssc, I would invoke it like so:
gssc www.example.com
and it would generate two files: www.example.com.key and www.example.com.crt. The former contains the private key and the latter is the X.509 certificate for www.example.com.

The -b, -p and -s options allow you to change the length of the private key, the password used to encrypt the private key and the certificate's subject name. So if I wanted to create a 1024-bit private key, encrypted with the password "blargh" and with the subject name "C=IO, ST=Chacos, L=Diego Garcia, CN=www.example.mil," I would use this command:
gssc www.example.mil -b 1024 -p blargh \
  -s "/C=IO/ST=Chacos/L=Diego Garcia/CN=www.example.mil"
Cheers!

Monday, August 5, 2013

In Defense of JavaScript Cryptography

Google "javascript cryptography" and you'll quickly find a fair number of people dismissing JS Crypto as a fool's errand. My favorite is the Matasano Security article entitled "JavaScript Cryptography Considered Harmful." The tone of the article seems a little alarmist to me. But... it also happens to bring up a few really great points. Its critique of the current state of web app crypto is mostly spot-on. However, the state of the art is evolving quickly and may soon make the Matasano article mostly irrelevant.

This post is a brief rebuttal to the assertion that JavaScript cryptography should be considered "harmful." I would completely agree with "fraught with serious challenges" and "difficult to do right," but certainly not harmful.

Why Do JavaScript Crypto?

Before you can make a blanket statement like "JS Crypto is EVIL," you really should list out a few use cases. I think it's fair to say replicating HTTPS functionality in JavaScript is a poor idea. All popular browsers provide built-in support for HTTPS. What's more, these implementations have all been reviewed by multiple people to help ensure correctness and freedom from obvious bugs. So if you're just trying to communicate a password from a browser to a web server, use HTTPS. Don't try to replicate that functionality by yourself with JavaScript.

But there are several use cases where JS Crypto may be advantageous. The two I can think of off the top of my head are end-to-end message security and Secure Remote Password (SRP) support. Neither of these use cases is directly supported by modern browsers, yet both are of interest to the general community.

End-to-end message security means encrypting a message in such a way that it can only be decrypted by its intended recipient. In the context of JavaScript crypto, this means your favorite email, microblogging or IM web app uses JavaScript to encrypt your message. The encrypted message is then sent to its destination by whatever means and is ultimately decrypted by a web app running on the recipient's machine. In the end-to-end encryption scenario, the server never has access to your decrypted message; and unless you explicitly share your keys with the server, it never will.

End-to-end message security contrasts with the "transport security" offered by SSL/TLS. HTTPS, which uses Secure Sockets Layer (SSL), a.k.a. Transport Layer Security (TLS), encrypts the link between the browser and the web server. To communicate securely with another person, you would send an unencrypted message to your web server over the encrypted HTTPS link. The server would then forward the message to its recipient over a different (hopefully) encrypted HTTPS link. Because the message is unencrypted when it gets to the server, the server operator can see the contents of the message. But because the link is encrypted, eavesdroppers listening in on the conversation should not be able to read it.

Secure Remote Password (SRP), developed at Stanford, is an authentication protocol with many desirable features: it is resistant to password dictionary attacks and establishes a shared session key which may be used to authenticate or encrypt messages between a client and server. Or, more likely, between a client and a piece of computing equipment "behind" the web server for which the web server acts as a proxy. To be sure, SRP's utility is diminished by the near-universal support for SSL/TLS, but there are definitely situations where it can be useful.

These are not the only reasons why you might want to use something other than HTTPS; but they are two reasonably important use cases not directly supported by SSL/TLS.

The Chicken and the Egg

The Matasano article assumes the reason you're using JavaScript crypto in your browser is to encrypt a user password for its trip from the browser to the server. It then presents this "chicken and egg" problem:
  • if you don't trust the internet to securely deliver a password from the browser to the server, why trust it to deliver a JavaScript encryption library?
  • and if you use HTTPS to ensure no one's tampered with your JavaScript encryption library, why not just use HTTPS to secure your password and be done with it?
I mostly agree with this assessment. However, there may be situations where your JavaScript encryption library is served off a different host than the one you're communicating with. Imagine you're trying to communicate with an 8- or 16-bit microcontroller. There are several on the market today with enough CPU horsepower, memory and IO to speak SLIP or PPP (or even IPv6.) Due to policy, debugging or legal reasons, you may serve TLS pages off the microcontroller using authentication only, with no bulk encryption. It's a bit of a corner case, but I've actually found myself in exactly that situation. My microcontroller could handle authentication with ECDSA, but couldn't cope with any bulk cipher I was willing to use.

But there are some interesting developments in the chicken and egg question. It turns out there's a group of people working on a specification to introduce cryptographic primitives to the JavaScript in browsers. The Web Cryptography API is an emerging standard from the W3C and will provide basic crypto functions to JS web apps. When widely deployed, this should eliminate most of the concerns dealing with the question "hey! where did my crypto implementation come from?"

Good Random Numbers

The Matasano article correctly observes that the JavaScript Math.random() function is inappropriate for use in "real" security protocols. It simply doesn't utilize sufficient entropy. Fortunately, Chrome and Firefox have implemented the random number generator from the Web Cryptography API in recent builds. According to this Mozilla Developer Network page, support for crypto.getRandomValues() was added in Chrome 11 and Firefox 21.

If you are truly interested in properly implementing security-related protocols, you must use this call instead of Math.random().
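For what it's worth, here's a minimal sketch of how I'd wrap that call; the randomBytes() helper is just my own name for illustration, not part of any standard:

function randomBytes( length ) {
  // use the Web Cryptography API RNG; bail out rather than silently
  // falling back to Math.random() if it isn't available
  if( !window.crypto || !window.crypto.getRandomValues ) {
    throw new Error( "no cryptographically strong RNG available" );
  }
  var bytes = new Uint8Array( length );
  window.crypto.getRandomValues( bytes );
  return bytes;
}

// e.g., a 16 byte nonce or salt
var nonce = randomBytes( 16 );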

Extensible Languages and Insecure Content

IMHO, the fundamental concern with web apps is the risk that occurs when JavaScript's extensible nature meets insecure content. The Matasano article talks about this in the context of downloading JavaScript to implement crypto primitives, but once a bad guy can inject code into your JS execution context, it's all borked, not just the crypto.

The problem here stems from the fact that JavaScript is, by design, an extensible programming language. It's possible to replace some of the basic functions provided by JavaScript and the DOM API. Here's a simple example where I replace the escape() function with a function that reverses a string before escaping it:

// keep a reference to the original implementation
window.prevescape = window.escape;
// replace escape() with a version that reverses its input before escaping
window.escape = function( input ) {
  var output = "";
  // walk the string backwards, one character at a time
  for( var i = 1, il = input.length; i <= il; i ++ ) {
    output += input.substr( input.length - i, 1 );
  }
  return prevescape( output );
};

This example doesn't do anything horrible, but it should demonstrate how easy it is to extend or even replace core JavaScript functionality. And it's just as easy to replace the code that manages import / export of cryptographic keys as it is to replace the escape() function.
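To put that in crypto terms, here's a sketch of what injected code could do to the getRandomValues() call from the previous section, assuming nothing protects the crypto object (which, at the moment, nothing does):

window.crypto.getRandomValues = function( array ) {
  // injected "RNG" that fills the array with a constant; every key, nonce
  // or salt the page generates from here on out is predictable
  for( var i = 0, il = array.length; i < il; i ++ ) {
    array[ i ] = 0;
  }
  return array;
};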

The ability to replace or extend JavaScript functionality is a good thing when you're using it to fix bugs or add useful features. But if a bad guy can insert a script tag into your page, all bets are off; you're completely 0wn3d. Since it's unlikely you're going to hack your own web app, we need to figure out a way to prevent black hat script tags from appearing in your web page.

In Conclusion

Securely executing JavaScript applications in a browser is not hopelessly borked. Neither is JavaScript Crypto. You have to take care to defend against common vulnerabilities introduced by user-generated content. And unless you defend against a man in the middle by sending any content capable of modifying the JavaScript execution context over TLS, it will be possible for a bad guy to insert bad guy code into your web application.

Progress is being made with the introduction of the Content Security Policy and Web Cryptography API specifications from the W3C. We're even starting to see browser developers implement them, which is a good thing.

But more work needs to be done to "secure" JavaScript code. It could be as simple as making the browser's crypto object read-only. This would not eliminate all vulnerabilities, but it would reduce the attack surface. We could also require that all scripts referencing the crypto object adhere to common same-origin protections (modulo CORS or CSP.)
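As a rough sketch of what "read-only" could look like, a page (or better, the browser itself) might lock things down along these lines. This narrows the hole rather than closing it, since code that runs first, or code that patches Crypto.prototype, still wins:

// prevent later scripts from replacing the crypto object itself...
Object.defineProperty( window, "crypto", {
  value: window.crypto,
  writable: false,
  configurable: false
});
// ...and from shadowing its methods, e.g. getRandomValues()
Object.freeze( window.crypto );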

This article reflects my personal opinion, and may not reflect opinions or policies of my employer.

Thursday, August 1, 2013

A Couple Useful Aliases for EMACS

Yes. I am an Emacs user. (or, as I call it... EMACS... the editor so ossm, you have to write it in all caps!) But there are a few things I don't like about Emacs, and here are the simple solutions I found for them.

Problem 1 : Trailing White-Space is Of the Devil

So if you look at the Mozilla bugs I tried to fix, I think they all have a comment from bsmith and ekr saying something like "uh.. trailing white-space." Yes. It is the sad truth, but God's own text editor has issues with leaving trailing white-space in code. I don't remember it having this problem in the 80's, so obviously this is Apple's fault.

Seriously though... I could have sworn this didn't use to be a problem. Maybe it's just that we had worse tools for detecting trailing white-space and I just didn't notice. But it's really noticeable when you try to generate diffs to attach to bug reports. (The Mozilla process is to attach a diff to a bug, get it reviewed and then apply it to a repository somewhere.)

At first, I simply tried to delete all trailing white-space in the file I was working on, but in any given file in the Firefox source base, one in a hundred lines has trailing white-space, so I wound up making diffs with bajillions of updates that had nothing to do with the issue at hand. To me, this stinks of bad form.

Yes, I should have created a bug titled "file foo.cpp has a lot of trailing white-space" and applied the change there, but there was about zero chance of the bug getting a positive review without someone saying "hey! why don't we refactor all the code and add these other features while we're removing all this trailing white-space." And honestly, I got tired of saying "don't make me slap you..." to all the people who suggested this.

So rather than debug a bunch of elisp code, I figured I would take inspiration from the hackers of old and just use a sed script to fix the problem. It removes trailing white-space from lines that begin with a plus ('+') character. If you're familiar with diff or patch tools you'll understand why I did this. Here's the alias I added in my .bashrc file:
alias bongo='sed -e '"'"'s/^\+\(.*[^ \t]\)[ \t]*$/\+\1/'"'"''
You can now do things like this if you don't trust your ability to spot trailing white-space in your code:
hg diff | bongo > current.diff
or if you don't trust other people, you can do this:
cat random.diff | bongo | patch -p1
Problem 2 : I Usually Don't Like EMACS in XWindows

But sometimes I do. So I do the following:
alias emacs='emacs -nw'
This tells emacs to launch in the current terminal window.

Hope these suggestions help, or inspire you to hack your own environment. -Cheers!