In the 2002 film Minority Report, Tom Cruise manipulates a wall of floating windows by making hand gestures while wearing special gloves. During the filming of this iconic scene, Cruise reportedly had to take frequent rests because his arms became tired. The term for this phenomenon, “gorilla arm syndrome,” was coined 30 years ago and still applies to many of the interfaces we use today, like the seatback screens we jab at on airplanes.

The Iron Man series has similar scenes of glorious computer interaction, but on top of navigating a series of windows, Tony Stark transfers holograms from one platform to another to manipulate them. Interface elements are rendered as 3D holograms that he spins, grabs, and throws; virtual objects are treated like real ones.

While touchscreens and holograms look spectacular on film, an astute article by WIRED proposes that the real future will likely be similar to the one depicted in the Spike Jonze film Her. Rather than a series of screens and windows, most of the people in the film use a ubiquitous earpiece/microphone that’s paired with a device not unlike the smartphones we use today. Most interaction is through speaking, and users only whip out the smartphone when taking a picture, or when viewing something on-screen. Not a lot of movie magic is required.

Keys Open Doors

As more devices come out, and screen sizes fragment even more, it’s easy to get caught up in how elements are arranged on a screen.

At last year’s SXSW, Golden Krishna gave a presentation on how a user interface can get in the way of the user. His example: the simple problem of unlocking your car door.

Viper is a leader in car security. Given that many drivers also own a smartphone, the company’s solution is an app with a series of screen interfaces that lets users open their car. A typical experience: as you approach the car, you whip out your phone, open the Viper app, tap the “Unlock” button on the home screen, and the car door unlocks. Pretty handy, right?

Not really. That’s a lot of steps. More still if you need to unlock your phone. Also, what if you’re carrying something that requires two hands? Like groceries, or a baby?

This problem was actually solved by Mercedes fifteen years ago. How did they approach it? The car detects the smart key in your pocket as you approach, so the door simply unlocks the moment you pull the handle. That’s it. No user interface necessary.
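The interaction the smart key enables boils down to a tiny bit of event logic. The sketch below is purely illustrative (the class, method names, and range threshold are assumptions, not Mercedes’ actual system), but it shows how little is needed when the interface disappears:

```python
# Hypothetical sketch of passive keyless entry: the car listens for the
# key fob, and the handle pull is the only "input" the driver gives.

UNLOCK_RANGE_METERS = 1.5  # assumed proximity threshold


class SmartKeySystem:
    def __init__(self):
        self.key_distance = None  # meters; updated when the fob responds

    def key_detected(self, distance_meters):
        """Called when the car's short-range antenna hears the key fob."""
        self.key_distance = distance_meters

    def handle_pulled(self):
        """Unlock only if the paired key is in range -- no screen, no app."""
        if self.key_distance is not None and self.key_distance <= UNLOCK_RANGE_METERS:
            return "unlocked"
        return "locked"


car = SmartKeySystem()
car.key_detected(1.0)       # owner approaches, fob in pocket
print(car.handle_pulled())  # -> unlocked
```

Compare that with the Viper flow above: the “interface” here is a single physical action the driver was going to perform anyway.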

Screens can help us interact with a computer, but sometimes they just get in the way.

Humans and the Future of Interface Design

Our interfaces will eventually be shaped by how humans think and what our habits are. Interactions with our computers will be as natural as the conversations we have with fellow humans. A reliable AI removes the need for most interactions with a screen, and then we’re all one step closer to the singularity.

When it first came out, the Minority Report interface was thrilling and felt genuinely innovative. It even spawned real-life technology like Pranav Mistry’s SixthSense. As helpful as touchscreens are right now, the design solutions of the future won’t require much interaction, and might not even require our arms.

The shrinkage and eventual disappearance of screens is already happening. Nest, a hardware company recently bought by Google, makes a thermostat for your house that learns your schedule and adapts accordingly. Kinect and Leap Motion register increasingly subtle body movements as their sensors improve. And Siri, the iOS personal assistant, is becoming more convenient and reliable (and less awkward to talk to in public).

In the end, we design technology for humans, not screens. Humans interacting with computers should be more like humans interacting with humans. Learning to operate a computer should be as natural as any good relationship. We learn each other.