
Grand Gestures

Our next user interface technology has also had its moment of Hollywood stardom. It's practically impossible to mention gesture recognition without talking about that cool wall display and glove interface Tom Cruise uses to analyse the pre-cogs’ visions in Steven Spielberg's Minority Report, and with good reason. The futuristic UI being shown was actually based on real-world gesture recognition tech developed by one of the film's many advisers, John Underkoffler of the legendary Massachusetts Institute of Technology (MIT). Underkoffler's work is now being developed for commercial release at his new home, Oblong Industries Inc., where it forms the backbone of the company's g-speak ‘spatial operating environment’.

What does this mean? Well, g-speak is designed to let users work across multiple ‘room-scale’ (i.e. really big) displays, drawing resources from multiple processors and collaborating with multiple users, no matter where those processors and people are in the world. The cool bit is that you do all of this by grabbing and dragging objects and applications from one screen to another in the sort of fast, intuitive way we glimpsed in Minority Report.

Oblong's g-speak environment gives you the Minority Report experience today
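
To give a feel for what a ‘spatial operating environment’ actually involves, here's a rough sketch, in Python, of the core idea: each display is described by its real position in room coordinates, and a pointing hand becomes a ray that the system resolves to a pixel on whichever screen it hits. Everything below, names and geometry alike, is invented for illustration; Oblong hasn't published g-speak's internals in this form.

import numpy as np

class Screen:
    def __init__(self, name, origin, right, up, size_m, size_px):
        self.name = name
        self.origin = np.asarray(origin, float)  # top-left corner, in metres
        self.right = np.asarray(right, float)    # unit vector along the width
        self.up = np.asarray(up, float)          # unit vector along the height
        self.size_m = size_m                     # physical (width, height) in metres
        self.size_px = size_px                   # resolution (width, height) in pixels

    def hit(self, ray_origin, ray_dir):
        """Return the (x, y) pixel this pointing ray lands on, or None."""
        normal = np.cross(self.right, self.up)
        denom = ray_dir @ normal
        if abs(denom) < 1e-9:          # ray is parallel to the screen plane
            return None
        t = ((self.origin - ray_origin) @ normal) / denom
        if t <= 0:                     # the screen is behind the hand
            return None
        p = ray_origin + t * ray_dir - self.origin
        u, v = p @ self.right, p @ self.up       # metres across the surface
        w_m, h_m = self.size_m
        if not (0 <= u <= w_m and 0 <= v <= h_m):
            return None
        w_px, h_px = self.size_px
        return int(u / w_m * w_px), int(v / h_m * h_px)

def resolve_pointing(screens, hand_pos, hand_dir):
    # Return the first screen the pointing ray hits (fine for a sketch).
    for screen in screens:
        pixel = screen.hit(np.asarray(hand_pos, float), np.asarray(hand_dir, float))
        if pixel is not None:
            return screen.name, pixel
    return None

Dragging an object from one display to another then falls out naturally: track where the ray lands frame by frame, and when the hit point moves from one screen to the next, hand the object over.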

Oblong's demo at this year's Sundance Film Festival shows real-time video editing and compositing across two screens: the user sweeps through poster-sized thumbnails to select clips on the larger screen, rotates his hand to scrub backwards and forwards through the footage, holds his hand vertical to pause, then grabs and drags clips from the big screen to a smaller one nearby for detailed work. Watch the video and you'll agree it's breathtaking stuff. This is a user interface designed from the ground up to work in 3D spaces, to handle complex, networked applications and to operate in a way analogous to how we use tools in physical space. Oblong's plans are nothing if not ambitious. "We want to redress the conversational imbalance that we have with our computers -- great graphical output but very limited user input," says Kwindla Hultman Kramer, Oblong's CEO, on the company blog. "We want all our applications to make use of all our computers. We want a common interface for all the screens we use every day: laptop and desktop computers, televisions, the nav systems in our car dashboards."
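
To make that gesture vocabulary concrete, here's a speculative Python sketch of how a stream of recognised gestures might be mapped onto editing actions. All of these names -- GestureEvent, VideoEditor and so on -- are made up for illustration; this is not Oblong's API.

from dataclasses import dataclass

@dataclass
class GestureEvent:
    kind: str            # "sweep", "rotate", "hold_vertical" or "grab_drag"
    value: float = 0.0   # sweep distance or rotation angle
    source: str = ""     # screen the gesture started on
    target: str = ""     # screen it ended on

class VideoEditor:
    def select_clip(self, offset): ...
    def scrub(self, delta): ...
    def pause(self): ...
    def move_clip(self, source, target): ...

def dispatch(editor: VideoEditor, event: GestureEvent) -> None:
    # Map each hand gesture from the demo onto an editing action.
    if event.kind == "sweep":
        editor.select_clip(event.value)       # sweep through the thumbnails
    elif event.kind == "rotate":
        editor.scrub(event.value)             # rotate the hand to scrub footage
    elif event.kind == "hold_vertical":
        editor.pause()                        # a vertical palm pauses playback
    elif event.kind == "grab_drag":
        editor.move_clip(event.source, event.target)  # drag a clip between screens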

Gesture recognition also goes hand-in-hand with another technology beloved of sci-fi aficionados, Augmented Reality. We're already seeing the first real-world apps on smartphones, with the likes of Layar overlaying Web-based data feeds on live camera views, using GPS and image recognition to give you relevant information about the world around you. MIT's SixthSense research project takes this to a different level, combining a portable computer, a wearable pico-projector and gesture recognition to project data displays onto real-world objects and let you manipulate them with your hands. It looks a little clumsy at the moment, but there are some interesting ideas here. Out shopping for groceries? Let SixthSense scan them and give you all the info you need. Or why not take a snap just by holding your fingers in that classic ‘frame’ shape? In a way, SixthSense points us towards a future where we won't need an interface to tell our gadgets what we're looking at or what we need more information on. With their digital eyes and ears, they'll already know.
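
For a flavour of how that ‘frame’ snapshot might work under the hood, here's a speculative Python sketch that checks whether four tracked fingertip positions roughly form the corners of a rectangle and, if they do, fires the camera. The real SixthSense prototype tracks coloured fingertip markers with computer vision; the logic and names below are purely illustrative.

import math

def is_frame_gesture(fingertips, tolerance=0.15):
    """Return True if four (x, y) fingertip positions -- both thumbs and
    both index fingers -- roughly form the corners of a rectangle."""
    if len(fingertips) != 4:
        return False
    xs = [p[0] for p in fingertips]
    ys = [p[1] for p in fingertips]
    width, height = max(xs) - min(xs), max(ys) - min(ys)
    if width == 0 or height == 0:
        return False
    scale = max(width, height)
    # Every fingertip should sit close to a corner of the bounding box.
    corners = [(min(xs), min(ys)), (min(xs), max(ys)),
               (max(xs), min(ys)), (max(xs), max(ys))]
    return all(
        any(math.dist(tip, corner) / scale < tolerance for corner in corners)
        for tip in fingertips
    )

# Inside the hand-tracking loop, this would trigger the snap:
#   if is_frame_gesture(tracked_fingertips):
#       camera.capture()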
