Smart Reality, the "iWorld"

Apr 18, 2007 18:10

I hate technology. This is my final confession, a shocking revelation (especially when made over the Internet!). I am, truly, a closet Luddite who thinks technology should go away.

Okay, not really. What I do think is that technology should get out of the way. In the ultimate act of self-centered thinking, I think the world (insofar as technology allows) should revolve around _you_. I call it smart reality, though in a quirkier way it could be called the "iWorld".

Why should you have to enter a username and password? Why doesn't the computer just know who you are? Why, when I'm traveling, does the monitor in the airport show me every flight, when I only care about one of them? Why the specialized rituals for opening lockers- why not just have the locker recognize me and open for me?

There's a whole set of technologies just waiting to be tapped, technologies that would let the world selectively display just the information you want. Technologies that recognize you by actively working to identify you, rather than making you identify yourself. Technologies that vanish into the background and serve us without requiring us to work to make it happen.

An example that's been niggling at my brain, based on recent events and conversations, is a smart reality approach to a particular problem- keeping a firearm in the home for protection. My example is going to be somewhat farcical, almost science fiction-esque; at the moment, I'm less concerned with practicality than with potentiality. Let's ignore the political discussion of whether or not Joebob should own a pistol for personal protection, and focus instead on the technological issues.


Joebob purchases a pistol for the explicit purpose of home protection. He has all of the appropriate certifications, permits and training, and is an accomplished marksman. Joebob has a family, including children ranging in age from toddler to early teens.

For safety reasons, Joebob needs to secure the pistol. Obviously, he needs to keep his children away from it, but in the (unlikely) event his home is broken into, he must also keep the Bad Guy (Evilbob) away from it. To do that, the pistol must be secured with some sort of authentication mechanism- something that ensures, in a foolproof fashion, that only a valid user accesses the firearm.

Most solutions to this involve a lock, either a combination lock or a key. A key is another item that must be secured- if the key falls into the hands of a child or of Evilbob, the entire security mechanism is defeated. A carefully chosen combination doesn't have that vulnerability, but both approaches share another flaw- they get in the way of accessing the firearm for its original purpose.

If Evilbob does break into Joebob's home, it will most likely be at night. Lights will be off, people will be asleep. Joebob will want access to his firearm quickly and quietly. Fumbling with keys or with a combination gets in the way of this. It becomes even worse if Joebob has secreted his ammunition someplace separate from the pistol (again, a good safety practice)- he must first complete a scavenger hunt before he can arm himself.

How could we solve this, using Smart Reality? Well, let's examine our design goals for this problem:
  • Joebob needs access to his gun, quickly, quietly and on demand.
  • Joebob's family (most especially his children) must be prohibited from accessing the gun.
  • Evilbob and other unauthorized/malicious users must be prohibited as well.
  • Access must be relatively transparent.


In addition to those constraints, I'd like to append a few more:
  • The technology should actively work to recognize Joebob, not require that Joebob authenticate himself
  • The technology should integrate with existing systems and communicate with its environment openly
  • It should blend in aesthetically and make no real impact on the environment


Imagine, if you will, a small gun safe with an outer door that blends in with the wall- the safe is designed to nest in the wall, nearly invisibly. This device is layered into a HAN (home area network) that allows it to communicate with a variety of other "smart appliances" within the home, including Joebob's security system. This gives it the ability to gather a great deal of information about its environment.
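To make that concrete, here's a rough sketch of what I mean- the safe as just one more subscriber on the home network, quietly collecting context from the other appliances. None of this is a real API; HomeBus, GunSafe and the topic names are all invented for illustration.

```python
# Illustrative sketch only: the safe as one node on the home area network,
# subscribing to events published by other smart appliances. HomeBus,
# GunSafe and the topic names are all made up for this example.
from collections import defaultdict

class HomeBus:
    """A trivial publish/subscribe bus standing in for the HAN."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, event):
        for handler in self._subscribers[topic]:
            handler(event)

class GunSafe:
    def __init__(self, bus):
        self.environment = {}  # the latest facts the safe knows about the house
        bus.subscribe("security.front_door", self.on_event)
        bus.subscribe("sensors.hall_pressure_plate", self.on_event)
        bus.subscribe("cameras.hallway", self.on_event)

    def on_event(self, event):
        # The safe just accumulates context; the access decision comes later.
        self.environment.update(event)

bus = HomeBus()
safe = GunSafe(bus)
bus.publish("security.front_door", {"front_door": "opened", "alarm": "disarmed"})
```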

Joebob wants access to this safe. Knowing where the safe is located, he reaches for it, and from his perspective, the door simply slides aside- he now has access to the firearm contained within and can perform maintenance, or use it for self-defense. The safe, however, has made a series of analyses that resulted in the decision to allow Joebob entry. That chain of analysis has to be designed to provide a high degree of confidence that security is maintained, without requiring anything of Joebob. How can this be done?

Well, let's start with the basic constraint of keeping children out. There are several techniques that could be used to identify children- a pressure plate in the floor, for example, could measure the user's weight. A camera could estimate the user's height.

That makes it easy to keep children out, but it's a bad security design. You don't work to identify who you don't want in (a large list); you work to identify who you do want in (a short list- in this case, Joebob). So reverse it- instead of prohibiting children, we only allow in people who are not children. That's a start, anyway.
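To illustrate the difference, here's a toy sketch- the thresholds and Joebob's "profile" numbers are pure invention- contrasting blocklist thinking with allowlist thinking:

```python
# Toy example only: the thresholds and Joebob's "profile" are invented.

def is_probably_a_child(weight_kg, height_cm):
    # Blocklist thinking: enumerate who to keep out. Easy to get wrong-
    # a heavy teenager, a tall twelve-year-old, or Evilbob himself all
    # slip past a rule that only describes "child".
    return weight_kg < 45 or height_cm < 140

JOEBOB_PROFILE = {"weight_kg": (80, 95), "height_cm": (175, 185)}

def is_probably_joebob(weight_kg, height_cm):
    # Allowlist thinking: describe the one person we do want in,
    # and reject everyone who doesn't match.
    lo_w, hi_w = JOEBOB_PROFILE["weight_kg"]
    lo_h, hi_h = JOEBOB_PROFILE["height_cm"]
    return lo_w <= weight_kg <= hi_w and lo_h <= height_cm <= hi_h
```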

Obviously, there are a few other technologies I could use. Facial recognition, for example. Given that the face identifier would need to operate in low-light conditions, though, that's not a reliable one- too much risk of false negatives. Other biometrics would defeat our main goal- getting Joebob into the safe with no action on his part. The same goes for RFID- he'd have to carry the tag, which is no better than a key and worse in many ways (it transmits).

What else then? Well, let's start by not treating the interaction with the safe as a one-off transaction. Here's the thing- we don't need to authenticate Joebob when he goes to the safe. We can authenticate Joebob when he walks in the house. At the door to the house, we can have better lighting, we can have reliable facial recognition mixed with height/weight estimates. We can even have the house learn Joebob's schedule- if it looks like Joebob, weighs the same (within a margin), etc. and is coming home around the time Joebob usually comes home and is dressed the same way as when he left, it's probably Joebob. We can add more factors to the authentication process and we can really train the house to recognize Joebob. Once Joebob's in the house, we can track him, and when we see Joebob reaching for the gun in the safe, we open the safe.
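To picture how the house might weigh all of those factors, here's a very hand-wavy sketch- the signals, the weights and the 0.9 threshold are all numbers I'm inventing for illustration- of fusing several weak clues at the front door into a single confidence score, then carrying that identity forward until Joebob reaches for the safe:

```python
# Rough sketch of multi-factor recognition at the door. The signal names,
# weights and threshold are assumptions made up for this example.

SIGNAL_WEIGHTS = {
    "face_match":     0.40,  # facial recognition under good door lighting
    "height_weight":  0.20,  # matches Joebob's known build
    "schedule_match": 0.20,  # arriving around his usual time
    "clothing_match": 0.20,  # dressed as he was when he left this morning
}

OPEN_THRESHOLD = 0.90

def door_confidence(signals):
    """signals: dict of signal name -> score in [0, 1]."""
    return sum(SIGNAL_WEIGHTS[name] * signals.get(name, 0.0)
               for name in SIGNAL_WEIGHTS)

class House:
    def __init__(self):
        self.tracked_identity = None  # who the house believes is inside

    def person_enters(self, signals):
        if door_confidence(signals) >= OPEN_THRESHOLD:
            self.tracked_identity = "joebob"

    def person_reaches_for_safe(self):
        # No ritual at the safe itself: the decision was already made at
        # the door and maintained by tracking the person through the house.
        return self.tracked_identity == "joebob"

house = House()
house.person_enters({"face_match": 0.95, "height_weight": 1.0,
                     "schedule_match": 1.0, "clothing_match": 0.8})
print(house.person_reaches_for_safe())  # True -> the door slides aside
```

The point isn't the particular numbers; it's that no single factor has to be perfect, because the decision rests on the combination.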

The biggest change in "Smart Reality" is to stop viewing applications and technology as independent tools, and stop viewing our interactions with them in a transactional context. We don't engage in transactions with human beings except in very unnatural and contrived environments. Among our friends, we converse. Our friends talk to each other, and social networks form. The same sort of social networking is required for our technology. Technology needs to be smart enough to manage its own complexity.

Edited to add:
There's another important feature of Smart Reality- graceful degradation. As parts of the network of technologies fail, the ones still operating must not fail outright, but must degrade in a predictable and usable fashion. GMail, for example, uses lots of JavaScript, but there's a pure HTML version in case your browser doesn't allow scripts. That's graceful degradation- you don't get all of the features, but it works, at least. An elevator is an example of non-graceful degradation- when the power goes out, it stops working, even if there are people inside, and they must switch to an alternate technology.

In the gun safe example, graceful degradation would mean that there must be an alternative to opening the safe, like a combination, key, RFID tag, etc. A "manual override" if you will.
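Something like this, in other words- purely a sketch, with an invented keypad code standing in for whatever the real fallback mechanism would be:

```python
# Sketch of graceful degradation for the safe: try the smart-recognition
# path first, and fall back to a manual keypad when the house network or
# its sensors fail. The code "4821" and the function names are illustrative.

def smart_recognition():
    """Stands in for the house's tracking decision; raises when the
    network or sensors are down."""
    raise ConnectionError("home network unavailable")

def open_safe(keypad_code=None):
    try:
        if smart_recognition():
            return "opened (smart path)"
    except Exception:
        pass  # degrade, don't fail outright
    if keypad_code == "4821":
        return "opened (keypad fallback)"  # slower, but still works at 3 a.m.
    return "locked"

print(open_safe())        # locked: smart path down, no code given
print(open_safe("4821"))  # opened (keypad fallback)
```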

smart reality, technology, bad ideas, good ideas, design
