I’ve recently watched John Underkoffler’s presentation on 3D UIs and how he helped create the interface for the film Minority Report. You know the scene: the one where Tom Cruise is working his way through the UI with a series of hand gestures (although the one in Iron Man 2 is an upgrade). As I was watching the clip, I watched Underkoffler…
In my first application, I teach it that to select an item in 3D space, I touch it once. As I go from appA to appB, it remembers that. You, on the other hand, grab the object, and it remembers that for you.
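One way to make that concrete: gesture bindings live in a per-user profile that every application reads, rather than inside any single app. A minimal Python sketch of the idea; the file location, the teach/load_prefs helpers, and the "select"/"grab" vocabulary are all hypothetical, not any real API.

import json
from pathlib import Path

# A per-user bindings file that every app consults (hypothetical location).
PREFS_PATH = Path.home() / ".gesture_prefs.json"

# The "obvious" defaults a standard might ship with.
DEFAULTS = {"select": "touch_once"}

def load_prefs():
    # Merge the user's saved bindings over the defaults.
    if PREFS_PATH.exists():
        return {**DEFAULTS, **json.loads(PREFS_PATH.read_text())}
    return dict(DEFAULTS)

def teach(action, gesture):
    # Rebind an action and persist it, so appA and appB both see the change.
    prefs = load_prefs()
    prefs[action] = gesture
    PREFS_PATH.write_text(json.dumps(prefs))

# In appA, I teach it that selecting means touching once:
teach("select", "touch_once")
# appB just calls load_prefs() and gets the same binding.
# On your machine, your own profile would hold teach("select", "grab") instead.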
There should probably be a default standard for the "obvious" things. The LukeW page has some good "obvious" ones that, once you know them (zoom in/out), you automatically try to apply to other things. Don Norman's point (as voidrandom mentions) is that it isn't seamless inside a single application/shell, let alone across experiences.
I remember that with Quake, I customized the keyboard commands extensively. Unlike 99.9% of players, I use ESDF instead of WASD; it's simply more comfortable to me. I even reused the keybind file in other id games, and I hated it when I realized other games weren't going to use the same interface.
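For anyone who never opened it, the keybind file is just a list of console commands. An ESDF setup looks roughly like this (reconstructed from memory, so treat the exact lines as approximate):

// autoexec.cfg: ESDF movement in place of the usual WASD
bind e "+forward"
bind d "+back"
bind s "+moveleft"
bind f "+moveright"

Drop that same file into another id game and it mostly just works; drop it into anything else and you start over.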
Interfaces (touch, 3d, whatever) will change. The way that we "standardize", THAT'S what has to change.
Reply
Maybe we'll find a way to customize computer UI the same way we customize car UI, where high-end cars can remember a few user settings (seat and mirror positions), but I'm not optimistic.
I've long referred to "point and click" interfaces as "point and grunt", because I believe that the mouse (especially a single-button one) reduces our interaction with the computer to that level, but maybe that's what most people can tolerate. And the rest of us will continue to use the richer language structure of the command line...
Reply
What amuses me are the HP desktop computers with touch screens; they make my arms hurt just LOOKING at them! How long do they expect people to use one in a single sitting?
Reply