From the punch cards of weaving looms in the early 1800s to the screens of the 1950s and the mouse of the 1960s, people have been exploring the interfaces between human and machine. The rise of modern computing has made the screen the default output for data. Screens are now on nearly everything, from fridges to cars, phones to mirrors.
The interface between humans and computers is constantly under review; researchers are always asking "can we interact differently?" In the last few years the pace of change has accelerated.
Sense, gesture, voice, VR and AR are some of the key areas where we see this rapid change. Of course, anyone who says the current push into virtual reality is new is forgetting the 80s and 90s, when VR headsets were all the rage (and maybe that era should stay forgotten).
So what are these key areas and how can you embrace them?
Sense - or ambient interfaces
The rise of small yet powerful computing platforms has seen the birth of the Internet of Things, with computer chips, sensors and almost anything now connected to the internet. This is giving rise to computer systems that constantly monitor an environment: as a person (or another device) enters, performs a task, or the environment changes (a shift in temperature, say), the system can read and respond to that activity.
From warehouse shelves that know when they are light on stock, to lighting systems that respond to movement, these sense interfaces can be simple or complex. On one hand there is nothing new here; we've had sense-based interfaces for years (automatic doors at the supermarket, for example). The change now is that these sensors are connected to computers and smart software.
Amazon takes sense interfaces to an extreme with Amazon Go, a supermarket where you simply walk in and scan your phone to identify yourself. From there, various sensors work out what you are 'buying' as you remove it from a shelf or fridge. Then you simply walk out. The entire interface is built on ambient sensors that feed data back to Amazon's systems.
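The core idea behind a sense interface like this can be sketched in a few lines. The example below is a toy model, not Amazon's actual system: it assumes a shelf with a weight sensor and a known catalogue of item weights, and infers what was taken from the drop in total weight. All names and numbers here are illustrative.

```python
# Toy "smart shelf" sketch: infer which item a shopper removed
# from the change in shelf weight. Purely illustrative.

ITEM_WEIGHTS = {"milk_1l": 1030, "bread": 450, "coffee_250g": 250}  # grams

def infer_removed_item(previous_weight, current_weight, tolerance=20):
    """Match a weight drop to the closest known item, within tolerance (grams)."""
    drop = previous_weight - current_weight
    if drop <= 0:
        return None  # nothing removed (or something was put back)
    best = min(ITEM_WEIGHTS, key=lambda item: abs(ITEM_WEIGHTS[item] - drop))
    if abs(ITEM_WEIGHTS[best] - drop) <= tolerance:
        return best
    return None  # drop doesn't match anything we stock

print(infer_removed_item(5000, 3970))  # a ~1030 g drop -> "milk_1l"
```

A real system fuses many such signals (cameras, weight, shelf position) and handles ambiguity, but the principle is the same: the environment itself is the input device.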
Gesture - from touch to mid-air
Humans use gestures all the time to convey a message or perform an action, so it's only natural that researchers take gestures and apply them to computer interfaces. The first of these came through touch: a swipe with one or two fingers, or pinch and pull gestures, performed on a physical surface such as a touch screen or trackpad.
Now however we are seeing these applied where there is no physical platform. The gesture is performed in mid-air and a computer translates this to an action.
2015 was the year gesture interfaces arrived in earnest, with BMW bringing gesture control to their cars (check out the video Motoring Trend Australia did on this interface) and Google launching Soli, a chip dedicated to gesture sensing.
Voice - conversational interfaces
It's hard to escape the world of voice interfaces now, having become mainstream in the home thanks to Amazon Alexa and Google Home. It took a long time for the voice interface to come to fruition, with products like Dragon NaturallySpeaking in the 1990s exploring voice input. Today we have Siri, Alexa, Cortana and Google Assistant, with the capability to be almost fully conversational computer interfaces.
Voice interfaces specialise in bringing contextual data to the fore when you need it, without the need for a screen. Standing at the kitchen sink making your morning coffee, "Alexa, what's the traffic like this morning?" will see you informed and updated about your commute, quickly and easily.
Voice interfaces work well when augmenting other interface types: talking to a TV to change the channel, asking Alexa to change the lights or temperature, checking the health of business finances with Siri, or sending a message to a loved one via Google Assistant.
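Under the hood, once speech has been converted to text, a voice assistant's job is largely intent matching: route the utterance to the right handler. The sketch below is a deliberately simplified stand-in, not any vendor's real API; the intents and responses are made up for illustration.

```python
# Toy intent router for a voice assistant, assuming speech-to-text
# has already produced the utterance string. Illustrative only.

import re

INTENTS = [
    (re.compile(r"what'?s the traffic", re.I),
     lambda: "Traffic is light on your commute."),
    (re.compile(r"turn (on|off) the lights", re.I),
     lambda: "Okay, done."),
]

def handle_utterance(text):
    """Return the first matching intent's response, or a fallback."""
    for pattern, handler in INTENTS:
        if pattern.search(text):
            return handler()
    return "Sorry, I didn't catch that."

print(handle_utterance("Alexa, what's the traffic like this morning"))
```

Real assistants replace the regexes with machine-learned intent and entity models, which is what makes them feel conversational rather than command-driven.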
VR - virtual reality
Leave the reality of where you are and step into a virtual world. Today the leading application of VR interfaces is entertainment, gaming in particular, where VR interfaces range from a simple headset to a full-body harness environment.
There are of course applications of VR outside of entertainment, for example education, where stepping into an environment allows for a better learning experience. Put on a VR headset and step back in time to understand history.
An exciting application of VR interfaces is the world of medicine and psychology. Here in Australia, the Sydney Phobia Clinic uses virtual reality to help patients overcome their fears in a safe, controlled environment.
AR - augmented reality
Augmented reality is the display of additional information over the existing world. This area is booming right now, with both Apple and Google building AR toolsets directly into their mobile operating systems. This allows developers to leverage a phone's camera and screen for an augmented world view.
In 2017 Darwin-based developer ServiceM8 added AR Measure to their popular job management app, allowing a tradie to use the phone camera to quickly measure a room. In the education realm, JIG Space allows iOS users to learn in a 3D augmented way.
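It's worth noting how little of an AR measuring feature is actually "measuring". The AR framework does the hard part, resolving screen taps into 3D world coordinates; the measurement itself is plain geometry. The sketch below assumes the framework has already returned two points in metres (the values are hypothetical, and this is not ServiceM8's implementation).

```python
# Core of an AR measure feature: Euclidean distance between two
# 3D points (in metres) resolved by the AR framework's hit-testing.

import math

def measure(point_a, point_b):
    """Distance in metres between two 3D world-space points."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(point_a, point_b)))

# Two corners of a wall as resolved by the AR session (made-up values)
print(round(measure((0.0, 0.0, 0.0), (3.0, 0.0, 4.0)), 2))  # 5.0 metres
```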
An exciting recent development in the world of AR interfaces comes from Leap Motion's Creative Director Keiichi Matsuda, who is exploring what it could mean to not just display, but interact with, a full AR interface - check out the amazing video below.
Introducing Virtual Wearables pic.twitter.com/LPvknKBlnO— Keiichi Matsuda (@keiichiban) March 22, 2018
The way people interact with the electronic world around them is changing, quickly. The kids of today are growing up in a world where things are aware of their presence, where they have conversations with inanimate objects, and where they expect data or answers wherever they are, right now.
So how are you preparing for a change in the way people interact with your data, your software or your business? Take a leaf from the playbook of ServiceM8 and the Sydney Phobia Clinic: experiment now, and find what these new interfaces mean for you, today and tomorrow.