In the past two years, the pandemic has affected nearly all markets and the world economy. Changes in people's behaviour and new consumption habits have forced companies to reconsider their strategies and how they relate to customers, accelerating the digital transformation in many sectors.
As expected, the virtual reality (VR) market, already prominent before the pandemic, has continued to grow in hardware sales, content production, and application development. Due to COVID-19, the areas of the industry furthest from entertainment have started to grow as well, driven by the many new users who first discovered VR through entertainment and then sought other types of services, such as training, education, retail, health care, and tourism.
The next step as the technology evolves – spatial computing
Spatial computing is the digitization of the activities of machines, people, objects, and the environments in which they take place, to enable and optimize actions and interactions. Spatial computing is broadly synonymous with extended reality (XR), an umbrella term for VR, augmented reality (AR), and mixed reality (MR). It is the practice of using physical space as a computer interface, so that machines no longer need to be tied to a fixed location.
Is spatial computing a new term?
The short answer is no. Spatial computing as a term has existed since the early 2000s. It was defined by Simon Greenwold (an MIT Media Lab alumnus) in his 2003 thesis as "human interaction with a machine in which the machine retains and manipulates referents to real objects and spaces."
Spatial computing is the next step in the ongoing convergence of the physical and digital worlds: it augments our reality, but it also understands the space around us, allowing projected content to interact with the surroundings.
Spatial computing does everything VR and AR apps do, and combines these capabilities with high-fidelity spatial mapping so that a computer can track and control the movements and interactions of objects as a person navigates the digital or physical world. Spatial computing will soon bring human-machine and machine-machine interactions to new levels of efficiency in many walks of life, such as transportation, health care, and the home. Major companies, including Microsoft and Amazon, are investing heavily in the technology.
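The core idea of spatial mapping and tracking can be illustrated with a small sketch. This is not a real XR API; all the names (Vec3, SpatialAnchor, relative_to) are hypothetical, and it only shows the underlying principle: virtual content is pinned to a fixed pose in the mapped space, and its apparent position is recomputed as the user moves through that space.

```python
from dataclasses import dataclass

# Illustrative sketch only: virtual content is anchored in world
# coordinates, and the system recomputes where it appears relative
# to the user as the user moves. Names are hypothetical, not a real SDK.

@dataclass
class Vec3:
    x: float
    y: float
    z: float

    def __sub__(self, other: "Vec3") -> "Vec3":
        return Vec3(self.x - other.x, self.y - other.y, self.z - other.z)

@dataclass
class SpatialAnchor:
    """Virtual content pinned to a fixed position in the mapped space."""
    label: str
    world_position: Vec3

    def relative_to(self, user_position: Vec3) -> Vec3:
        # Where the content appears from the user's point of view.
        return self.world_position - user_position

# A virtual sign anchored two metres in front of the room's origin.
sign = SpatialAnchor("exit-sign", Vec3(0.0, 1.8, 2.0))

# As the user walks forward, the sign's apparent position changes,
# while its world position stays fixed.
for user in [Vec3(0.0, 1.7, 0.0), Vec3(0.0, 1.7, 1.0)]:
    rel = sign.relative_to(user)
    print(f"user at z={user.z}: sign appears at z={rel.z}")
```

Real spatial-computing platforms add far richer machinery (6-degree-of-freedom head tracking, scene meshes, occlusion), but the separation between a persistent world pose and a per-frame user-relative pose is the common thread.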
How will this technology shape our future? Probably in many ways, but let's focus on two: the digital twin and the Metaverse.
Digital twin concept
As with VR and AR, spatial computing builds on the "digital twin" concept familiar from computer-aided design (CAD). In CAD, engineers create a digital representation of an object. This twin can be used in a variety of ways: to 3D-print the object, design new versions of it, provide virtual training on it, or join it with other digital objects to create virtual worlds.
Spatial computing makes digital twins not just of objects but also of people and locations, using GPS, radar, video, and other geolocation technologies to create a digital map of a room, a building, or a city. Software algorithms integrate this digital map with sensor data and digital representations of objects and people to create a digital world that can be observed, quantified, and manipulated.
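A minimal sketch may help make the "digital twin of a location" idea concrete. This is not a real spatial-computing SDK; RoomTwin, ingest, and the sensor IDs are all invented for illustration. The twin simply mirrors the latest sensor readings about tracked entities in a space, so the space can then be observed and queried digitally.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: a digital twin of a room that mirrors
# (hypothetical) geolocation-sensor events about people and objects.

@dataclass
class Entity:
    kind: str        # e.g. "person", "forklift", "pallet"
    position: tuple  # (x, y) in metres on the room's floor plan

@dataclass
class RoomTwin:
    name: str
    entities: dict = field(default_factory=dict)  # sensor id -> Entity

    def ingest(self, sensor_id: str, kind: str, position: tuple) -> None:
        """Update the twin from a sensor event; later events overwrite earlier ones."""
        self.entities[sensor_id] = Entity(kind, position)

    def query(self, kind: str) -> list:
        """Observe the twin: latest positions of all entities of one kind."""
        return [e.position for e in self.entities.values() if e.kind == kind]

warehouse = RoomTwin("warehouse-A")
warehouse.ingest("badge-17", "person", (3.0, 4.5))
warehouse.ingest("tag-02", "forklift", (10.0, 1.0))
warehouse.ingest("badge-17", "person", (3.5, 4.5))  # the person moved

print(warehouse.query("person"))
```

A production twin would of course track full 3D poses, timestamps, and sensor fusion across many feeds; the point here is only the pattern of continuously mirroring physical state into a queryable digital model.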
In other words, spatial computing provides the tools for producing a digital twin of virtually any process, one the user can easily revisit and experiment with in VR at any time, modifying the process as needed in the shortest time possible. This will be even more true when 5G networks become widely available, thanks to major improvements in latency, capacity, and bandwidth.
The Metaverse
The user interface for spatial computing will be completely different from the way most of us interact with computers today (i.e., via keyboard, touch, and screen). It will rely on eye-controlled interactions, body or hand gestures, and voice controls; the hardware will be invisible. Our descendants will look back on fixed computers and staring at flat screens as absurd, just as we look back on sharing files via floppy disks today.
A renaissance of interest in spatial computing, and in the merging of physical and digital personas, identities, and spaces, is now further propelled by several large tech companies boldly establishing their visions of, and claims to, the Metaverse. Big tech, from Facebook to Google and Apple, is moving towards spatial computing and XR devices.
Where are we heading?
Spatial computing provides dynamic, real-time 3D visualization of products, industrial spaces, and workers, and of all their interactions. In other words, it is the ability to virtualize, or digitize, how machines, objects, people, and environments relate to each other in space.
Chances are that spatial computing will be the next stage of the digital transformation. It will affect nearly everyone, from shoppers and retailers to patients and physicians, employees and CEOs. Robotics, and machine-to-machine communication in particular, will also gain significantly in functionality.
From the digital twin concept to the Metaverse and many other opportunities, the possibilities for spatial computing technology are nearly limitless, so it seems that interesting times are ahead of us.
Written by Asaf Green, Director of Technological Partnerships at NTT Innovation Laboratory Israel
