Is Your Face Really the Best Place for a Computer?

The leading AR headsets and glasses are all unibody designs: the compute lives on the user's head. Headsets with multiple components are treated as a limitation of the present rather than a deliberate design choice. Instead, designers could strip headsets down so they only display content, with compute, battery, and the rest in an external puck. This solves so many problems. Yes, some existing devices take steps in this direction, such as the Vision Pro or Xreal's devices, but they are the exception.

The primary case against separation is latency. However, presenting an image involves multiple steps: images can be rendered without regard for their final position, and a compositor chip in the headset can then do minor rescaling and warping to fit them into the scene. Separating rendering from compositing means the computationally intensive work can move to an external device. The tradeoff is higher latency for applications that depend on head tracking during rendering. That sounds bad for augmented reality, but most applications, at least for now, are webviews or 3D models that can be rendered independently. Wireless VR is much harder, since every frame depends on tracking data and an entire scene must be rendered; despite this, wireless VR products have been available for nearly a decade. AR is much simpler: a cable is acceptable because the puck sits on the wearer's body, and it's very rare for windows to fill a user's entire field of view.
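
To make the split concrete, here is a toy sketch in C of the idea behind late-stage reprojection, the trick shipping compositors call asynchronous timewarp or reprojection. Everything in it is invented for illustration: the `Pose` struct, `reproject`, and the small-angle pixel shift stand in for the full homography a real compositor chip applies in hardware.

```c
/* Toy model of headset-side reprojection: the puck rendered a frame at
 * one head pose; by display time the head has moved, so the compositor
 * nudges the image instead of waiting for a freshly rendered frame. */
#include <stdio.h>

typedef struct { float yaw, pitch; } Pose;  /* radians; toy model */

/* Focal length in pixels, roughly display_width / (2 * tan(fov / 2)). */
#define FOCAL_PX 900.0f

/* Small-angle approximation: an angular pose error maps linearly to a
 * pixel offset. Content shifts opposite to head motion so it appears
 * locked to the world rather than to the display. */
static void reproject(Pose rendered, Pose current, float *dx, float *dy) {
    *dx = -(current.yaw   - rendered.yaw)   * FOCAL_PX;
    *dy = -(current.pitch - rendered.pitch) * FOCAL_PX;
}

int main(void) {
    Pose rendered = { 0.000f,  0.000f };  /* pose the puck rendered at */
    Pose current  = { 0.010f, -0.004f };  /* pose at display time      */
    float dx, dy;
    reproject(rendered, current, &dx, &dy);
    printf("warp frame by (%.1f, %.1f) px\n", dx, dy);  /* -9.0, 3.6 */
    return 0;
}
```

The correction is cheap and local: it needs only the latest tracking sample and the already-rendered frame, so it can live in the headset while the expensive rendering stays on the puck.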

There are drawbacks to integrated designs that a distributed design eliminates:

  1. Weight and size are far less of a concern when the device on the user's face only has to display content. Unnecessary weight strains neck muscles and puts pressure on the wearer's nose, which is uncomfortable or painful. Extra weight also makes the headset more likely to fly off when the user moves their head quickly. Even the sleekest smart glasses still look thick and bulky. These harms are exacerbated by the fact that smart glasses are ideally supposed to be worn all day, in all social situations.

  2. Upgrades and repairs are easier when components are modular. It is easier to upgrade compute if the headset operates more like a dumb display; think of laptops compared to desktops. This means less e-waste and cheaper upgrades. It probably won't be an issue in the short term, while development is rapid and it's worthwhile to replace the whole system anyway, but it will become more frustrating as progress slows and people are less willing to upgrade. I don't imagine companies will mind getting to sell more devices, but I care.

  3. External batteries are easier to hot-swap or to charge while the device is in use. Smart glasses are designed to be worn all the time, so manufacturers offer prescription lenses for people who want to replace their regular glasses. If the glasses have to be taken off to charge, those people are left effectively blind while they wait. Sure, they could keep a set of dumb glasses on hand, but that undermines the convenience these devices are aiming for. Maybe the devices could be used while plugged in, but at that point why not make external power part of the normal design?

  4. Corporate environments need the flexibility. Glasses are personalised, and can serve an accessibility function, so employees need their own custom pairs. With headsets as dumb displays, employees can connect their own glasses to the company puck at work without needing an entirely new device. This would require some sort of protocol for connecting headsets from different manufacturers, but standards such as OpenXR already exist to solve this problem (see the sketch after this list). If designers insist on unified designs, this becomes a much harder problem. I can see four paths if headsets remain integrated:

    1. Buy glasses with prescription lenses for those who need them. Note that prescriptions are personalised and very specific, and even small errors can cause headaches and nausea, so keeping a full set of spares in stock is expensive. Some devices, particularly headsets, use magnetic lens inserts, but the inserts are device-specific, and they are less common in glasses because of the constrained form factor; Meta's display glasses, for example, have the prescription lenses bonded to the display elements. This means ordering custom devices for each employee, which is a logistical pain, and it means new hires can't start working until their glasses arrive. When an employee leaves or is due for an upgrade, their glasses can't be reassigned or resold, so they are worthless. This also means companies hold fewer valuable assets, which matters for financial reasons.

    2. Design smart goggles to go over glasses. This seems like a design nightmare, and anything that does exist is likely to be bulky, uncomfortable, and lower quality than regular smart glasses. Some VR headsets can fit over some glasses, but they are unpleasant to wear for extended periods. It seems like a difficult challenge to get optical passthrough working when there’s an unknown lens between the display and the wearer’s eye.

    3. Let employees use their own devices. This seems like a security nightmare. People don't like being forced to install company surveillance on their personal devices, and it isn't very effective when they can take those devices home and install whatever software they want or tamper with the hardware. Alternatively, all company apps could be redesigned so that employees' devices are mere controllers and all data stays on company servers. Admittedly, lots of basic tools such as Word have online versions, but lots of companies have custom software, probably written long ago, that would need to be redesigned at great cost. Web apps tend to be bloated, slow, and generally more unpleasant to use than their native versions, and they require the company to purchase yet another computer to run them on. Even if those tradeoffs are acceptable, they only mitigate the security concerns; data exfiltration, for example, remains trivial.

    4. Stick with monitors. This is probably what will happen, and it represents a failure of the technology. Even for basic word processing, AR headsets can display more windows at a more comfortable distance. This is particularly important for jobs that involve lots of travel, but everyone can benefit from more screen space. For more advanced tools, being able to arrange components in 3D space makes it easier to think. And of course, for 3D design software, actually being able to interact in 3D is transformative. It's also likely the public will get irritated that the technology at work is so much worse than in their personal lives.
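
Here is the sketch promised above: a minimal C program written against OpenXR rather than any vendor's SDK. The calls are genuine OpenXR; the loader binds the app to whichever runtime, and therefore whichever headset, is active. `CorpDashboard` is a made-up name, and error handling is mostly trimmed.

```c
/* An app that names no manufacturer: the OpenXR loader routes these
 * calls to whatever runtime (and headset) the machine has active. */
#include <stdio.h>
#include <string.h>
#include <openxr/openxr.h>

int main(void) {
    XrInstanceCreateInfo info = {XR_TYPE_INSTANCE_CREATE_INFO};
    strncpy(info.applicationInfo.applicationName, "CorpDashboard",
            XR_MAX_APPLICATION_NAME_SIZE - 1);
    info.applicationInfo.applicationVersion = 1;
    info.applicationInfo.apiVersion = XR_CURRENT_API_VERSION;

    XrInstance instance = XR_NULL_HANDLE;
    if (XR_FAILED(xrCreateInstance(&info, &instance))) {
        fprintf(stderr, "no OpenXR runtime available\n");
        return 1;
    }

    /* Ask whichever runtime answered what hardware it represents. */
    XrSystemGetInfo sysInfo = {XR_TYPE_SYSTEM_GET_INFO};
    sysInfo.formFactor = XR_FORM_FACTOR_HEAD_MOUNTED_DISPLAY;
    XrSystemId systemId = XR_NULL_SYSTEM_ID;
    xrGetSystem(instance, &sysInfo, &systemId);

    XrSystemProperties props = {XR_TYPE_SYSTEM_PROPERTIES};
    xrGetSystemProperties(instance, systemId, &props);
    printf("running on: %s\n", props.systemName);

    xrDestroyInstance(instance);
    return 0;
}
```

An employee's personal glasses and a company-issued pair would both satisfy this program, which is exactly the property a company puck needs.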

There will likely be pushback over the cost and difficulty of disaggregating devices, except that it's already happening: Meta's neural wristbands are a fundamental, external part of the system. Computers generally need to interface with accessories anyway. This is important for accessibility, but also so people can modify their tools to actually help them.

There are so many more environments than I or anyone in Silicon Valley can think of, and giving people control over their devices means they can find something that works for them. Of course there are tradeoffs, but this basic level of flexibility is so valuable. So at this point the question remains: is your face really the best place for a computer?