The first time I heard about spatial computing, it all made sense to me. Put simply, it’s the use of the space around us as a medium to interact with technology. It’s the purest form of “blending technology into the world”.
Realities
When we’re talking about spatial computing, we are talking about virtual, augmented, and mixed reality. What is so interesting is that we describe this technology by the type of interaction we have with it, not by the object we interact with. And this revolution might be as big as mobile computing (which was defined by the places we could take our devices), because it’s a fundamentally new paradigm of computing. Currently, there are macOS and iOS; one day, I can picture Apple introducing xOS for mixed and augmented reality.
Form follows function, level 99
This interaction point is precisely why spatial computing is a real game changer: it takes the function-over-form debate to the next level. Spatial computing makes the hardware disappear. Not physically, but digitally: we only perceive the output of the machine, nothing else. The trend of making the hardware fade away to let the software take over has been on the rise for a long time. The most blatant example is phone design over the last decade: from big, bulky plastic boxes to sleek black slabs. Spatial computing follows the same logic: the hardware is purely an engine oriented toward the display and the world. Nothing more. Nothing less.
The fact that the hardware is fading away doesn’t mean it’s easier to design. I’d say exactly the opposite: now that there are fewer and fewer ways to make a phone’s design stand out, you have to get those few things 100% right.
Spatial software
As the physical object to design almost disappears, the most significant part of creating a meaningful experience becomes the software, but a new kind of software: spatial software. The UX/UI we know from computers fits a 2D screen, but with spatial computing we can bring in new interactions. Copy-pasting what we’ve learned from 30 years of 2D software design into spatial computing would not be appropriate; new approaches have to be explored.
Natural interaction for spatial computing
Changing the computing platform already brings a radical change in human-computer interaction. The consequence is that we have to reinvent the whole UX/UI around it. The challenge is to make users feel familiar in a new environment. Creating a meaningful experience in spatial computing means using natural ways of interacting with this new technology. A good rule of thumb for meaningful spatial interactions is “Is this how I interact with the real world?” If the answer is no, then it’s probably not a good idea to force people to do it.
1 — Eye-controlled interactions
Eyes are our input sensors in the real world, and that’s what they should remain in digital ones. In virtual reality and immersive experiences, eye-like controls are widespread, which can seem counter-intuitive. The key point is that it is not strictly “eye control”; it’s “head-position control”. We select through the position of our head, not the exact position of our stare. Eye-tracking studies have already shown that our gaze is erratic and unstable; it would take a lot of attention and energy to control things with our eyes.
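To make this concrete, here is a minimal sketch of how head-position selection typically works under the hood: cast a ray from the head along its forward direction and pick the nearest target it hits. This is a self-contained, illustrative TypeScript sketch, not code from any real XR framework; all types and names are my own assumptions.

```typescript
// Minimal sketch of head-position ("gaze") selection: cast a ray from the
// headset along the head's forward vector and pick the nearest target it hits.
// All types and names here are illustrative, not from any real XR API.

type Vec3 = { x: number; y: number; z: number };

interface Target { id: string; center: Vec3; radius: number }

const sub = (a: Vec3, b: Vec3): Vec3 => ({ x: a.x - b.x, y: a.y - b.y, z: a.z - b.z });
const dot = (a: Vec3, b: Vec3): number => a.x * b.x + a.y * b.y + a.z * b.z;

// Returns the distance along the ray to the sphere, or null on a miss.
function raySphere(origin: Vec3, dir: Vec3, t: Target): number | null {
  const oc = sub(origin, t.center);
  const b = dot(oc, dir);                      // dir assumed normalized
  const c = dot(oc, oc) - t.radius * t.radius;
  const disc = b * b - c;
  if (disc < 0) return null;                   // ray misses the sphere
  const d = -b - Math.sqrt(disc);
  return d >= 0 ? d : null;                    // ignore hits behind the head
}

// Pick the closest target along the head's forward direction.
function gazePick(headPos: Vec3, headForward: Vec3, targets: Target[]): Target | null {
  let best: Target | null = null;
  let bestDist = Infinity;
  for (const t of targets) {
    const d = raySphere(headPos, headForward, t);
    if (d !== null && d < bestDist) { bestDist = d; best = t; }
  }
  return best;
}
```

In practice, a picker like this is usually paired with a dwell timer or a button press to confirm the selection, precisely because hovering alone is too twitchy to act on.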
2 — Hand gestures
Hands are at the core of our natural interactions. On classical computers, they have been the standard input. It all started with the mouse, which was purely 2D. Laptop trackpads introduced a supercharged 2D with multi-finger gestures, but it remained 2D (and of course, the iPhone’s 3D Touch is a great feature, but there’s definitely nothing 3D about it).
The HTC Vive controllers were the first to truly introduce 3D hand controls. The next generation of interaction is best exemplified by LEAP Motion: an amazing technology that detects the movements of our individual fingers and leads the way to supercharged 3D. This level of precision in using our fingers is simply mind-blowing.
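To show what building on finger tracking can look like, here is a minimal sketch of pinch detection on top of a hand-tracking layer that reports 3D fingertip positions (in the style of what LEAP Motion-class devices provide). The HandFrame type and the thresholds are illustrative assumptions, not a real device API.

```typescript
// Minimal sketch of pinch detection from tracked fingertip positions.
// The HandFrame type and thresholds are illustrative assumptions.

type Vec3 = { x: number; y: number; z: number };

interface HandFrame { thumbTip: Vec3; indexTip: Vec3 }

const dist = (a: Vec3, b: Vec3): number =>
  Math.hypot(a.x - b.x, a.y - b.y, a.z - b.z);

// Hysteresis: a pinch starts below 2 cm and releases above 3 cm, so the
// state doesn't flicker when the fingertips hover near a single threshold.
const PINCH_START_M = 0.02;
const PINCH_END_M = 0.03;

class PinchDetector {
  private pinching = false;

  // Feed one tracking frame; returns "start", "end", or null (no change).
  update(frame: HandFrame): "start" | "end" | null {
    const d = dist(frame.thumbTip, frame.indexTip);
    if (!this.pinching && d < PINCH_START_M) {
      this.pinching = true;
      return "start";
    }
    if (this.pinching && d > PINCH_END_M) {
      this.pinching = false;
      return "end";
    }
    return null;
  }
}
```

The two-threshold design matters: raw tracking data is noisy, and a single cutoff would make the gesture stutter between “grabbed” and “released”.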
3 — Voice controls
Just as the eyes are our input, the voice is only an output. So commanding things by voice is extremely natural, and already pretty well accepted (Siri, Cortana, OK Google…). The agenda for voice controls is very clear, though: we have to match what humans can do in terms of understanding, analyzing, and contextualizing.
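As a small illustration of why context is the hard part, here is a sketch of the simplest possible layer above speech-to-text: mapping a transcript onto app commands with a handful of patterns. The grammar and command names are hypothetical; real assistants replace this brittle matching with models that handle phrasing variation and context.

```typescript
// Minimal sketch of mapping a transcript (from some speech-to-text layer,
// not shown) onto app commands. The rules and intents are illustrative.

interface Command { intent: string; slots: Record<string, string> }

// Each rule pairs a regex with the intent it triggers; named groups become slots.
const rules: { pattern: RegExp; intent: string }[] = [
  { pattern: /^open (?<app>.+)$/i, intent: "open_app" },
  { pattern: /^move (?<object>.+) to the (?<side>left|right)$/i, intent: "move_object" },
];

function parseCommand(transcript: string): Command | null {
  for (const { pattern, intent } of rules) {
    const m = transcript.trim().match(pattern);
    if (m) return { intent, slots: { ...(m.groups ?? {}) } };
  }
  return null; // no rule matched: fall back, or ask the user to rephrase
}

// Example: parseCommand("Move the browser to the left")
// -> { intent: "move_object", slots: { object: "the browser", side: "left" } }
```

A fixed grammar like this breaks as soon as someone says “put the browser on the left side”, which is exactly the gap between today’s voice controls and human-level understanding.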