Phone developer kit security

Dec 20, 2007 09:28

Monday and Tuesday, I had some AAA (Authentication, Authorization, and Accounting/Auditing) training at work. The first part deals with how we can verify that someone is who they say they are in a secure manner. We discussed digital certificates quite a bit, including how they can be used to verify signatures made with the corresponding private key. A lot of this was stuff I already knew, but I made some very interesting connections going back over it a second time.

At some point, we discussed 802.1x. I wasn't sure how it was configured on the Mac, so I connected to my laptop at home from my phone and went to the network settings. Found it after a bit of poking around. After the class, I went up to show the instructor what I had found, and we got to discussing the iPhone and iPod Touch. Afterwards, I started thinking about how the concepts from the class may be used by Apple in the developer kit that is supposedly coming out in February.

First, what we know.
  • They said that the idea of everything being signed, like Nokia does, is "a step in the right direction".
  • They said that they are working on a way to keep the platform open while still making it secure.
  • The SDK will also cover the iPod Touch and iPhones sold outside the United States.

The third item may not seem like much, but it means that the app creation process most likely will not be at the mercy of AT&T. For those who haven't had to deal with AT&T, just know that this is an enormously good thing. They don't care. They don't have to. They're the phone company.

Now, related to the first item in that list, the desktop/laptop version of Leopard is known to have application signing support. In fact, this is used by Leopard's application firewall. Essentially, if an app doesn't have a developer-provided signature, the OS creates a signature for it. This broke some apps that do their own integrity checking, such as Skype, since the OS-generated signature goes into the application package. The iPhone and iPod Touch run an operating system not entirely unlike Leopard. In fact, when I hacked a terminal onto mine and ran the command 'uname -a' (a UNIX command that tells you what kind of environment you are using), it returned that I was running Darwin 9.0.0, the same version that Leopard is based on.

Now, normal code signing involves paying a certificate authority such as VeriSign, Thawte, or others for a developer certificate that you can use to sign code. This costs an obscene amount of money, considering what the certificate is and how little it actually means, but that is neither here nor there for this discussion.

For the purpose of simplicity, I will limit my discussion to the iPhone except where noted. It has the "advantage" of a constant Internet connection over its cell radio. My "solution" is designed with the iPod Touch in mind, though.

Option one
The way this has been done in the past is to require developers to submit applications to the maker of the device or OS (such as Microsoft for Windows Mobile or Nokia for their devices) for verification and signing. This process costs a certain amount per file and can take quite a long time if you aren't a very large company. The issue with this approach is that it requires that each and every update be signed separately, so if you have a bug in your program, you need to reproduce it on a testing unit, fix it, then submit the fixed version to be signed and wait a month or so. This isn't popular with developers, since it costs them money, and it isn't popular with users, since they have to wait (and ultimately pay) for fixes and new features.

I don't see this as being feasible because, as I mentioned, it takes time and money to get apps signed. There are ways to do it that don't require either.

Option two
Apple could start up their own certificate authority and issue signing certificates to developers directly. This would involve a bit of infrastructure on Apple's side, but less than creating the iTunes Music Store did. They would need to set up servers to let people check the revocation list (CRL) of the signing CA so that bad developers could be denied the ability to sign code as trusted in the future. Another possibility is to use OCSP, since that checks individual certificates rather than downloading a list of known bad ones. For simplicity, whenever I mention checking the CRL, any certificate status check could be used.

The way it works is this: a developer gets a certificate from Apple, then uses it to write and sign a virus. Users who download the app check to see whether the certificate that signed it has been revoked. It hasn't, so they install the app and use it. The virus is noticed at some point and Apple revokes that certificate. Future users who download the app and check its signature would see that the signing certificate has been revoked for some reason, so they wouldn't install the app. This presents a problem for people who have already installed the application, since they wouldn't be able to tell that the certificate had since been revoked.
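
Just to make the mechanics of that check concrete, here is a rough sketch in Python using the 'cryptography' package. The file paths, and the assumption that an app's signing certificate ships alongside the app, are purely illustrative; none of this is based on anything Apple has announced.

    # Sketch of a CRL lookup with the 'cryptography' package. The paths and
    # the idea that the app carries its signing certificate are assumptions.
    from cryptography import x509

    def signing_cert_is_revoked(app_cert_path, crl_path):
        """Return True if the app's signing certificate appears on the CRL."""
        with open(app_cert_path, "rb") as f:
            cert = x509.load_pem_x509_certificate(f.read())
        with open(crl_path, "rb") as f:
            crl = x509.load_pem_x509_crl(f.read())
        # An entry for this serial number means the CA has revoked the cert.
        return crl.get_revoked_certificate_by_serial_number(cert.serial_number) is not None

The same decision could be driven by an OCSP responder instead; only where the status information comes from changes, not what the device does with the answer.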

To fix that issue, there needs to be some regular checking on the part of the client device. For example, an iPhone would need to download the CRL from the certificate authority regularly and check to see if any installed apps were signed by certificates that have since been declared bad. This is a problem for AT&T, because it would mean that over a million people would be downloading a file, one that could grow quite large, over the cell network at some regular interval (probably more often than once a day, but less often than every half hour).

At this point, it becomes a serious trade-off between speed/ease-of-use and security. To be more secure, you check the CRL more frequently. This means the device has to have an Internet connection and has to download a file that could be rather large over said Internet connection. It also takes processing power (and therefore battery power) to check the signatures on all of the applications. To be easier to use, you check the CRL less frequently, but this carries the risk of having bad code on your device for a longer period of time.

My thoughts
I came up with an interesting way to potentially fix this problem. It involves moving the CRL checking to when the application is installed and having the phone just verify that the application hasn't changed since it was installed. Essentially, I pick the ease-of-use choice from the last paragraph.

Applications will almost certainly be installed through iTunes. Chances are good that they will even be sold through the iTunes Store. That provides a way for Apple to ensure that only code signed by developers currently in good standing is available for download. If something proves to be malware of some kind, they just remove it from the store and nobody new can be affected.

Now, the issue is people who have already downloaded whatever it is and installed it. My solution to that is to have iTunes check the installed applications' validity whenever the iPhone or iPod is synced. When it's plugged in and iTunes starts checking the music on it, add an extra step that goes out, fetches the CRL, and checks the installed applications against it. Have the 'home machine', which is likely to have an unmetered Internet connection and mains power, do the heavy lifting.
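
As a sketch of what that extra sync step might look like (every name here, fetch_crl, installed_apps, and quarantine, is a hypothetical stand-in for whatever iTunes would actually do):

    # Host-side check at sync time. Everything named here is a hypothetical
    # stand-in; the point is that the host, not the phone, pays the cost of
    # downloading the CRL and walking the list of installed applications.
    def check_apps_at_sync(device):
        crl = fetch_crl()                      # download the current CRL
        revoked = {entry.serial_number for entry in crl}
        for app in installed_apps(device):
            if app.signing_cert.serial_number in revoked:
                quarantine(device, app)        # disable it or flag it for the user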

At this point, we have the host computer doing the certificate status checking, but what happens if a rogue application somehow changes another application on the phone? Checking signatures is also used to verify the integrity of an app, so there needs to be another way to do that. Fortunately, that can be done rather simply.

Rather than have the client device check signatures for validity and so forth, have the host computer create a list of hashes of the installed apps, then sign the list. That way, the client device only needs to verify one signature, and that one check confirms that every app was good when the list was created. The issue now is how the host should sign the list. One way to do it is to have a global key, present in every iTunes install, that is only allowed to sign lists of good apps. The problem with that is that you have a one-key-opens-every-door situation. If someone malicious managed to get at that signing key, they could embed it in malware such that, when the malware was first run, it could create and sign a new approved-applications list. This is a huge problem.
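
Here is a rough sketch of the signed hash-list idea, again in Python. The manifest format, the bundle paths, and the use of Ed25519 as the signature scheme are all illustrative assumptions; whatever Apple actually does would look different, but the shape is the same: the host hashes everything once and signs a single list, and the device only has to verify that one signature.

    # Sketch of the signed hash-list ("approved applications") idea.
    import hashlib, json, os
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    def hash_app(bundle_path):
        """Fold every file in an app bundle into one SHA-256 digest."""
        digest = hashlib.sha256()
        for root, _, files in sorted(os.walk(bundle_path)):
            for name in sorted(files):
                with open(os.path.join(root, name), "rb") as f:
                    digest.update(f.read())
        return digest.hexdigest()

    # Host side: hash each installed app, then sign the whole list once.
    host_key = Ed25519PrivateKey.generate()
    manifest = json.dumps({app: hash_app(app) for app in ["/Apps/Example.app"]}).encode()
    signature = host_key.sign(manifest)

    # Device side: one signature check covers the entire list. After that,
    # each app only needs to be re-hashed and compared to its manifest entry.
    try:
        host_key.public_key().verify(signature, manifest)
        approved = json.loads(manifest)
    except InvalidSignature:
        approved = {}    # refuse to trust anything on an invalidly signed list

The device would need a trusted copy of the host's public key to do that verification, which could presumably be handed over the first time the device is paired with that copy of iTunes.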

The way to fix it would be to have each iTunes install (or each iTunes Store account) have a separate signing key. That way, if a malicious user manages to retrieve the key from their own machine, they can't do anything with it because everyone has a different key.
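
That per-install key only needs to be generated once and never leaves the host; a minimal sketch, assuming the same illustrative Ed25519 scheme as above:

    # Generate a signing key on first launch and keep it on this machine only.
    # The path and the lack of encryption are simplifications for illustration.
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    def create_install_key(key_path):
        key = Ed25519PrivateKey.generate()
        with open(key_path, "wb") as f:
            f.write(key.private_bytes(
                encoding=serialization.Encoding.PEM,
                format=serialization.PrivateFormat.PKCS8,
                encryption_algorithm=serialization.NoEncryption(),
            ))
        return key

Since a device only ever trusts lists signed by the install it was synced against, a key stolen from one machine is useless against everyone else's.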

And now, I need to head to work. I'll write more on this later.