
Could mind control be the next major user interface?


MindRDR uses thought control with Google Glass

VIDEO: Mind Control and Google Glass Combine

User interface design studio This Place recently announced a new thought-control app for use with Google Glass. We went hands-on with MindRDR to see whether it offers a realistic vision of how we will interact with machines in the future.



Interacting with computers hasn’t moved on all that much since the PC became a household staple.

The mouse and keyboard became the de facto computing user interface (UI) in the 80s and are still forces to be reckoned with today – simply because of their precision and ability to improve productivity.

The 90s brought us touch, a UI that languished until Apple released the first iPhone and made it a true alternative to the BlackBerry keyboard on a mobile device.

Today we see gesture becoming a bigger part of phones and PCs – laptops with Intel’s RealSense camera have started shipping, and phones like the Samsung Galaxy S5 and LG G3 incorporate gestures in neat little ways, such as letting you take a selfie without touching the screen. Smart, yes, but is it revolutionary? Not quite yet – you still need to move around, and precision is more miss than hit. We’re still some way away from Minority Report-style shenanigans.

Perhaps the power of the mind is the true answer, or rather the ability to control our technology using thoughts alone. Now that is sci-fi.

How does it work?

The MindRDR app uses electroencephalography (EEG), a technology that has been around for a long time and is used primarily in the medical field. Essentially a measure of the brain’s electrical activity, EEGs are recorded by placing electrodes on specific areas of the scalp.

MindRDR uses a NeuroSky MindWave (available for 89 Euros) to read the electrical activity of your mind. A sensor that sits on your forehead detects your brain’s electrical signals and offers a simple on/off interface. However, ambient noise generated by muscle movement and nearby electronic devices can affect readings. To combat this, the MindWave uses an ear clip to “ground” the distractions and provide better accuracy.

There are already a number of apps you can use the MindWave with, predominantly simple games that require concentration and blinking to control an object. MindRDR is more interesting in that it allows you to control a specific feature of Google Glass, allowing you to take a picture with the camera and then tweet it without any need for speech or touch. 

Being able to take a picture in this way may not sound that revolutionary, but there is potential for future applications that could teach us to use tech in a whole new way. And, of course, there is the potential of mind control helping individuals who have difficulties with motion or speech to communicate. In fact, This Place is now investigating whether a mind-controlled keyboard is a possibility.

How far away are we?

Before getting too carried away we need to be realistic about the current limitations of this technology. For a start the MindWave is not the most comfortable thing to have on your head – for mind control to become a viable alternative to other UIs it needs to be far less intrusive. Let’s face it, wearing Google Glass makes you look like a bit of a berk in the first place, add the MindWave and you’re entering hipster-Borg territory. 

From a practical perspective, too, this technology currently only offers the simplest of commands – on while concentrating and off when not. Concentration leads to your brain activity increasing, which triggers MindRDR to take a photo. And even though this technology has been around for several decades we’re still only able to get the most basic of interactions out of it.
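The “on while concentrating, off when not” pattern described above boils down to a simple threshold check. As a rough illustration only – `read_attention()` and `take_photo()` are hypothetical stand-ins, not real MindRDR or Glass APIs, and the 0–100 attention scale and threshold values are assumptions – the trigger logic might look something like this:

```python
# Hypothetical sketch of a "concentrate to trigger" interface.
# Assumes a headset that reports an attention level from 0 to 100;
# the threshold and sample count are illustrative, not MindRDR's values.

ATTENTION_THRESHOLD = 80   # sustained concentration above this fires the action
REQUIRED_SAMPLES = 3       # consecutive readings needed, to ignore brief spikes


def should_trigger(readings, threshold=ATTENTION_THRESHOLD, needed=REQUIRED_SAMPLES):
    """Return True once `needed` consecutive readings meet `threshold`."""
    streak = 0
    for value in readings:
        streak = streak + 1 if value >= threshold else 0
        if streak >= needed:
            return True
    return False


# Example: background noise, one brief spike, then sustained focus.
samples = [12, 35, 85, 40, 82, 88, 91]
print(should_trigger(samples))  # True – the final three readings clear the bar
```

Requiring several consecutive high readings rather than a single one is a plausible way to filter out the muscle-movement and ambient noise the ear clip is there to reduce.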

There’s a great deal more work to be done before mind control interfaces replace the current ways we interact with technology. And even then, it’s unlikely to take over entirely from other UIs. What it could do, though, is offer an additional interface – the ability to perform simple tasks without needing to touch or talk to your phone or wearable.

We’re happy with anything that means we won’t have to speak to our wrists or glasses in public. That’s progress.
