We Accidentally a Research Field: Some introspection before langsec con.

May 18, 2014 02:36

Tom Servo: Are you boys cooking up there?
Mike: No.
Tom Servo: Are you building an interociter?
Mike: No!

- Mystery Science Theater 3000: The Movie

If I had to sum up my life as an algorithm that can be expressed in one sentence, a good approximation might be "if it isn't working, try something else." I spent six and a half years in undergrad, for instance, bouncing from major to major like a robot turtle rebounding off obstacles, every time a course of study stopped being interesting, until I stumbled into linguistics looking for an easy upper-division English credit and found the interest-fueled momentum to graduate. My whole career has been like this, really. I can pretend, and on my resume I do pretend, that there's a coherent trajectory behind it all, but the farthest out any of it was planned was maybe fifteen months and that's because I deferred grad school by a year. (Also, if you think the coherent trajectory depicted on anyone's resume was intended, now you have been disabused of that notion. You're welcome.)

Having an operating principle like this means having, and constantly refining, a reliable sense of what "working" means and looks like. In day-to-day life this sense is fuzzy and subjective and took an awful lot of head-on wall collisions to develop into a robot turtle guidance system that mostly only glances off of obstacles these days. Contexts where it's possible to determine, objectively, a yes-or-no answer to some decision problem are outliers. They're really nice outliers, and I like them a lot, because the hardest thing about "solving" most kinds of problems isn't the work itself, but the uncertainty of not knowing whether or how your solution is going to fail on you. You might be able to make peace with the natural evils of flood and Halt and Catch Fire[1], but even an honest mistake is a kind of inaction -- the kind you train and improve your whole career to be able to avoid. Any situation where you can determine, with no uncertainty whatsoever, that an alternative is the correct or incorrect one is a refuge.

So that works well for some problems at the individual-problem scale, but back out even just a little beyond that and uncertainty floods in around the edges. Decidable problems are priceless; for everything else, there's pattern-matching. (And when that inevitably fails, there's MasterCard.) Most "is it working or not?" decisions I run into are ones I can only pattern-match about. There are so many ways that heuristic decision-making can fail that inevitably some edge case will present itself clearly enough that the scale tips in the "not working" direction. The robot turtle casts about randomly -- or, more realistically, casts about according to some learned casting-about heuristic -- and then goes ambling along its robot turtle way. In our case, we looked for problems that lent themselves to the tools we were learning to trust, and hopped from one to the next to the next.
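The contrast between deciding and pattern-matching is the heart of the langsec idea this post circles around: treat input as a language, write a recognizer that accepts or rejects the whole input up front, and only then act on it. Here's a minimal sketch of that stance (my illustration, not from the post) using a toy length-prefixed record format; the format and the `recognize` function are hypothetical:

```python
# Toy langsec-style recognizer for a length-prefixed record format.
# Grammar (decidable):
#   record := digits ":" payload    where len(payload) == int(digits)
# The whole input is accepted or rejected before any processing
# happens -- "full recognition before processing," no heuristics.

def recognize(data):
    """Return the payload if `data` is a valid record, else None."""
    head, sep, rest = data.partition(":")
    if sep != ":" or not head.isdigit():
        return None              # malformed or missing length prefix
    if len(rest) != int(head):
        return None              # payload length disagrees with prefix
    return rest

assert recognize("5:hello") == "hello"
assert recognize("5:hell") is None    # too short: reject, don't guess
assert recognize("x:hello") is None   # non-numeric length: reject
```

The point isn't the format, it's the shape of the decision: membership in this language is a yes-or-no question with a definite answer, which is exactly the kind of refuge the previous paragraph describes.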

It's a little startling at first to look back and realise that the robot turtle has been hopping along for the last few years with very little course correction because it keeps not being obviously wrong. Of course, langsec so far has mostly kept itself to syntax and those parts of semantics that syntax can constrain, and syntax is decidable. That's starting to change. Not the decidability of syntax, I mean, the scope of langsec. I even think it's a scope expansion that Len expected:
I believe that usability is a security concern; systems that do not pay close attention to the human interaction factors involved risk failing to provide security by failing to attract users.

Usability has been the bête noire of security tools since the beginning of security tools: the sheer number of potential adversaries, the broad differences in scope of their capabilities, and the wealth of strategies (some time-tested, some showing their age, some deprecated but still hanging on in legacy APIs, and some as yet unproven) for countering them means there are no one-size-fits-all tools, only tools that apply in a given context and tools that don't. Tens of thousands of hours go into the design, peer review, implementation, and implementation review of crypto libraries and the applications that use them, yet end-to-end-encrypted instant messaging is only just now coming to Facebook via a third-party plugin. OTR has been available in open-source clients for years, but the fraction of IM users who use these clients is vanishingly small compared to the crushing volume of Facebook. Getting realtime browser-to-browser instant messaging right is hard enough even when you're Facebook. The wealth of browser platforms (and platform versions) out there does not help the situation one bit, and if you want to provide end-to-end encryption in the browser, that's a problem you have to charge head-on. And when your business model is "moar users," fucking up your usability (or the usability goodwill you've developed over time) is Not Done. First they'd have to figure out what security properties they wanted to add to Messenger, then they'd have to work out a protocol that provides those properties, there'd be tons of cross-browser issues to work out, and they still wouldn't hit the mark because browser delivery of end-to-end encryption software doesn't protect the user from whoever's doing the delivering. That's a design issue that goes right to the metal of the browser, and I'll go so far as to argue that a lot of that is because crypto APIs are terrible. Yes, the ones your browser uses the linker for.

The problem cuts that deep for the inverse of the reason that Facebook is conservative about UX: cryptographers are conservative about correctness, because their jobs rely on it. It's not that security and usability are incompatible, it's that people who care more about security are more motivated to do security things and people who care more about usability are more motivated to do usability things. But when the access patterns of software languages and libraries make it easy for the developers who use them to model their intentions, and the design elements and interaction patterns of interfaces make it easy for their users to express their intentions to the software (and those models agree where they meet up) --
And every phrase
And sentence that is right (where every word is at home,
Taking its place to support the others,
The word neither diffident nor ostentatious,
An easy commerce of the old and the new,
The common word exact without vulgarity,
The formal word precise but not pedantic,
The complete consort dancing together

- T.S. Eliot, "Little Gidding"

-- the result is disruptive in ways with the potential to rock far-away foundations.

Justin Troutman recently contacted me to let me know that he's looking to meet up with people interested in the boundaries of competence between UX and crypto at HOPE X this summer. (I am assured by a reliable source that the keynote will be amazing. I can't make it, but you should go.) This is in preparation for a Much Bigger Thing to come, which I do not know how much I can speak publicly about yet, but I think it is pretty fair to say that Justin and I are looking at this problem in compatible ways and I think he's putting together a big step toward bridging the conceptual gaps that make Caring About the Opposite Problem(tm) hard.

Our first official academic workshop is tomorrow, at the conference Len always desperately wanted to get a paper into. We're a real little field now. C'mon, robot turtle, let's go try our hands at some even bigger problems.

[1] Okay, HCF isn't really a natural evil, but the joke wrote itself.

yes dan you were right, language-theoretic security, a large enough lever
