We hope we aren’t dating ourselves too much by mentioning computer punch cards, but they were once the primary means of inputting data into a computing device… at least, until the now-ubiquitous mouse and keyboard came onto the scene. This way of interfacing with our devices now seems like one of the only practical options. However, other interfaces have emerged - do any of them stand a chance of unseating the keyboard and mouse?
Honestly, it all depends on how (and if) the interfaces being developed are practically adopted. Here, we’ll go over how these interfaces are likely to develop and how they may shape technology in the future.
As you’re reading this blog right now, there’s a pretty fair chance that you accessed it using a keyboard and mouse. Despite other interfaces, like touch gestures and voice responsiveness, being popular on many new devices, there are still plenty of devices that leverage the mouse and keyboard combination. However, this is not to say that other interfaces aren’t already emerging. Take augmented and virtual reality, for example: both are appearing more and more commonly, making it likely that they will be leveraged for practical work purposes sooner rather than later. The same can be said of digital assistants and their capacity to streamline many of the day-to-day operations that would otherwise take up your employees’ time.
The whole point behind a user interface is to make it easier for a user to, for lack of a better term, pick up and play with whatever solution they happen to be leveraging. It was precisely this phenomenon that made the smartphone such a successful technology, as the highly hands-on UI made the devices remarkably intuitive to use. However, we may not even need to touch our devices at all in the near future. One of Google’s many ventures, Project Soli, is dedicated to creating a touch-free manual interface that uses radar to “read” the gestures of a user.
This kind of user interface could likely lead to a more three-dimensional version, similar to what can be seen in films like Minority Report. In fact, John Underkoffler, the researcher who helped design the gestural interface featured in the movie, has made strides toward the practical creation of such an interface.
Another potential route for UI to take in the future, haptic holograms, can be seen in the Iron Man franchise. Instead of the “typical” holograms, which function as little more than projections of light, haptic holograms can be manipulated by the user - allowing them to be edited, reorganized, and reexamined. As seen in the movies, these holograms could even mimic physical computing components, like keyboards. This suggests that, in the relatively near future, these physical components may be less a necessity than a preference. Putting it simply, we find this idea pretty darn cool - and this is just the tip of the UI iceberg.
Yes, you read that right… the future of UI will likely allow us to use computers with only our minds. Advancements in bioelectronics and what is known as Brain-Computer Interface (BCI) technology have enabled us to scan a user’s brain waves and have a computer translate these waves into actionable commands.
Tests are already in progress to apply this technology to robotic limbs, motorized wheelchairs, and other accessibility tools. There is also research being pursued to enable us to use BCI technology to control the many devices and household utilities in our lives.
BCI has also been heavily featured in solutions to help restore a person’s capacity to communicate, or simply to augment it. There are implants that now allow a user to use their mind to type, while others are in development to directly convert brain waves into text. Astoundingly, there have been experiments conducted that have pretty much granted human beings the gift of telepathy. A subject in India was instructed to think the word “hello.” This thought was then converted into binary code and emailed to France, where it was reformatted into brainwaves and received by a second subject.
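The text-to-binary step in the experiment described above is straightforward to illustrate. The following is a minimal sketch - the experiment’s actual encoding scheme isn’t specified here, so standard 8-bit ASCII is used as an assumption:

```python
def text_to_bits(text: str) -> str:
    """Encode each character as an 8-bit ASCII binary string."""
    return " ".join(format(ord(ch), "08b") for ch in text)

def bits_to_text(bits: str) -> str:
    """Decode a space-separated string of 8-bit values back into text."""
    return "".join(chr(int(b, 2)) for b in bits.split())

# The word the subject "thinks" is encoded for transmission...
encoded = text_to_bits("hello")
print(encoded)  # 01101000 01100101 01101100 01101100 01101111

# ...and decoded on the receiving end before being reformatted as brainwaves.
print(bits_to_text(encoded))  # hello
```

Of course, the hard part of the experiment was reading and writing the brainwaves themselves; the binary leg of the journey is ordinary computing.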
BCI has even been used to record dreams. Admittedly, these recordings aren’t of the highest quality, but the fact of the matter is that we have recorded dreams.
Of course, it should go without saying that practically using BCI for computing is a long way off, so you don’t have to give up your mouse and keyboard just yet. However, it is also a little fun to imagine how these advanced interfaces could potentially improve the human experience sooner rather than later. We’ll just have to wait and see.
But what do you think? Are any of these applications of advanced user interfaces of particular interest to you? Discuss it with us in the comments!