Apple generated considerable excitement last week when it announced a new computer. The Vision Pro is a computer worn on your face, but the novel aspect is how you use it. Instead of viewing the computer’s output on a physical screen, that output is projected directly to your eyes by two tiny, high-resolution displays sitting a short distance in front of them. Instead of controlling the computer with a keyboard, mouse, or touch screen, the primary interface is eye tracking and hand gestures.
Just as Apple ditched the stylus in favor of the touch screen when it launched the iPhone, it requires no physical controller for this computer. The device senses what you are interested in interacting with by tracking your eye movements, then watches your hands to determine what you want to do.
There are antecedents for each of these elements: a variety of wearable viewing devices such as Google Glass and Meta’s Quest Pro, and motion-control technologies such as Leap Motion and the Myo armband. But none of these predecessors united them into a coherent whole.
Apple called this new device a spatial computer. The name is apt, as the device can use any physical space around you as a canvas for displaying digital output. No table (or lap) is needed to place the device, and there are no limits on the size of the viewing area. That means you can sit in a cramped space, like an airplane seat, and watch a movie at the scale of a movie-theater screen.
What should you do with a spatial computer? So far, Apple has outlined use cases that seem pedestrian. You can use it like a normal computer or iPad, but with today’s 2D content presented on a more flexible and unrestricted display. There is a need for that. It is valuable when you don’t have a lot of space, and it can be valuable for those who currently fill their space with many large displays. In that sense, the closest analogue is a very large-screen TV. Would people pay $3,500 for that? They already do. Apple itself sells a display (the Pro Display XDR) that can cost up to $6,000. From that perspective, the Vision Pro is easily in the cost ballpark for current use cases. This strategy also has the benefit of seeding the new platform with the large number of applications already available for the iPad and iPhone.
A better and easier display for 2D content, however, does not seem to justify the technological and R&D weight that has gone into the Vision Pro. The real question is whether this device can lead to augmented and virtual reality applications that justify putting a computer on your head. It certainly has the technical capability to do so. The Vision Pro can display 3D objects in your current space or even transport you to new spaces. Apple, however, barely mentioned the terms AR and VR during the announcement. In doing so, it drew a line no one had drawn before: the Vision Pro is not an AR or VR device or technology. It is a spatial computer, and if there is a role for AR and VR, it is as applications running on a spatial computer.
Let’s review those concepts. Augmented reality (AR) involves taking the environment around you and altering your perception of it. Google Glass does this by showing you notifications through smart glasses. The Vision Pro does it by placing 2D displays in that environment and anchoring them so that when your head moves, the displays don’t; they appear to exist in the world around you. This is achieved by passing a very accurate video feed of the real world through the device to you. You never see your surroundings directly, but you believe you do. Technically, then, Apple is augmenting captured video of your surroundings, not overlaying objects on a direct view of them. To the user, there is no real difference.
Virtual reality (VR) involves immersing the user in a virtual environment. The Vision Pro occupies your entire field of view, so you are, by definition, immersed in a virtual environment. In one mode, that environment looks just like the one you are actually in. Turn the dial, and that can change: you can be transported somewhere else, with the video pass-through of your surroundings replaced by a digitally created 3D environment. From that perspective, it is clearly a VR device.
The important thing to note is that although the AR and VR capability is there, Apple is not promoting these use cases. It built a device capable of both but did not find compelling use cases in either domain. This is one reason it announced the device at its annual developer conference: Apple needs apps, and it needs other people to imagine them.
In a new paper, we outline the ways we believe AR and VR apps can add real value. Aside from gaming and entertainment, our focus is on economic tools, specifically those that increase users’ productivity. In this regard, we ask: What AR and VR applications can create real value by helping users make better decisions? For anyone who intends to develop applications for Apple’s new platform, understanding the possibilities is important.
Most decisions involve some degree of uncertainty. Information is the cure: it allows you to know more and, therefore, make fewer mistakes. But using information in decisions has two requirements. First, you need the right information available. Second, you need the cognitive space to digest the information and parse it for usefulness.
As it turns out, AR and VR map onto these two requirements. VR can present the user with highly relevant information, especially when that information is otherwise unavailable or expensive to obtain. By immersing users in new contexts, VR brings information to them. In some cases, that is a realistic view of what happens inside a building, say, during a fire. In other cases, it is a safe, simulated environment, like a flight simulator that enables training without high stakes.
In contrast, AR takes the information present in a given context and parses it to surface what is relevant. For example, if you meet someone at a conference, AR can tell you who that person is without you having to search your own memory. Or it can overlay helpful exit routes if you’re dealing with a fire. In each case, the goal is to distill the flood of information in the user’s environment and present only what is needed. One caveat: the Vision Pro is not intended as a portable computer for use outside the home or workplace, which rules out applications that involve navigating external environments (such as driving).
This perspective highlights why many previously purported AR and VR use cases have had little value. VR meetings with avatars in beautiful rooms don’t obviously give participants more useful information than a Zoom call would. AR glasses that push text notifications as you walk increase your cognitive load rather than reducing it. Our framework suggests that the best use cases are contexts where information is expensive or risky to obtain, which highlights the value of VR; where the environment is so complex that digital overlays explaining it through AR are highly valuable; or both. Consider applications such as prototyping the design of a new aircraft or building, or assisting in remote medical procedures. The Vision Pro demonstrates the capability to do each of these things, but the work of experimenting and designing for these use cases is left to others. Developers seeking to profit from Apple’s platform would do well to focus on applications that give users hard-to-access contextual information at the right level of detail.
This is par for the course for Apple when it first introduces a device. The iPod was a digital Walkman. The iPhone was a connected iPod. The iPad was a larger iPhone. The Apple Watch was a better smartwatch. And the Vision Pro is an irresistible 3D screen. In each previous case, the device started as something familiar and, by empowering developers, became much more than its initial use. The Vision Pro is a welcome experiment along computing’s well-trodden path of innovation.