The first milestone we had to reach while building our prototype was to align the virtual representation of a table to a physical table or desk. Since Oculus Quest 2 does not yet offer a way to accurately detect planes in Unity, we adopted a manual approach using the controllers, so that users could quickly and precisely align the virtual desk to the physical table.
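
As a rough illustration of what that manual step can look like, alignment can be reduced to sampling two adjacent corners of the physical table with a controller and deriving the virtual desk's pose from those two points. The sketch below is a minimal, hypothetical version of that idea rather than the prototype's actual code; the `virtualDesk` reference and the corner-sampling flow are assumptions.

```csharp
using UnityEngine;

// Minimal sketch: align a virtual desk to a physical table using two
// controller-sampled corner positions. All field names are illustrative.
public class DeskAligner : MonoBehaviour
{
    public Transform virtualDesk;   // root of the virtual table model
    private Vector3? firstCorner;   // first sampled corner, if any

    // Call this (e.g. on a controller button press) with the controller's
    // current world position while it rests on a table corner.
    public void SampleCorner(Vector3 controllerPosition)
    {
        if (firstCorner == null)
        {
            firstCorner = controllerPosition;
            return;
        }

        Vector3 a = firstCorner.Value;
        Vector3 b = controllerPosition;

        // Place the desk midway between the two sampled corners.
        virtualDesk.position = (a + b) * 0.5f;

        // Face the desk along the table edge, ignoring any vertical tilt.
        Vector3 edge = b - a;
        edge.y = 0f;
        if (edge.sqrMagnitude > 0.0001f)
        {
            virtualDesk.rotation = Quaternion.LookRotation(edge.normalized, Vector3.up);
        }

        firstCorner = null; // ready for re-alignment
    }
}
```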

Once the alignment was complete, we needed to find a way to network it. While this might seem simple, there are a number of factors to consider when centering your social experience around a table. Whether you’re in the same space or connected remotely, the tangible table interface is shared and needs to look and feel right for everyone participating.
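
One way to keep the table consistent for everyone is to treat the aligned desk as the shared origin and exchange object poses relative to it, so each participant reconstructs them against their own physical table. The snippet below is a rough, transport-agnostic sketch of that idea; the `TableRelativePose` struct is hypothetical, and how it actually travels over the network (Netcode, Photon, or anything else) is deliberately left out.

```csharp
using UnityEngine;

// Sketch: express any object's pose relative to the aligned table so remote
// peers can reconstruct it against their own table alignment.
[System.Serializable]
public struct TableRelativePose
{
    public Vector3 localPosition;
    public Quaternion localRotation;

    // Capture an object's pose in the table's local space before sending it.
    public static TableRelativePose From(Transform table, Transform obj)
    {
        return new TableRelativePose
        {
            localPosition = table.InverseTransformPoint(obj.position),
            localRotation = Quaternion.Inverse(table.rotation) * obj.rotation
        };
    }

    // Re-apply a received pose against the local user's own table alignment.
    public void ApplyTo(Transform table, Transform obj)
    {
        obj.position = table.TransformPoint(localPosition);
        obj.rotation = table.rotation * localRotation;
    }
}
```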

As such, we fleshed out what the shared computing experience would look like. With support from Oculus Quest 2’s advanced, articulated hand tracking, we managed to build a system that allowed us to turn any table into a giant touchscreen. But this is VR, and our aim was to do more than build flat interfaces, so we began experimenting with reactive 3D objects.
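
Conceptually, the 'table as touchscreen' part boils down to projecting a tracked fingertip onto the table plane and treating a small hover height as a touch. Below is a minimal sketch under that assumption; the fingertip transform would come from whatever hand-tracking layer you use, and the threshold is an illustrative value (assuming the table transform has roughly unit scale).

```csharp
using UnityEngine;

// Sketch: treat the table surface as a touchscreen by checking how far a
// tracked fingertip hovers above the table plane. Field names and thresholds
// are illustrative.
public class TableTouchDetector : MonoBehaviour
{
    public Transform tableSurface;       // aligned virtual desk, +Y is up
    public Transform indexFingertip;     // fingertip from hand tracking
    public float touchThreshold = 0.01f; // metres above the surface, assuming unit scale

    public bool IsTouching { get; private set; }
    public Vector2 TouchPoint { get; private set; } // table-local XZ coordinates

    void Update()
    {
        // Convert the fingertip into table-local space.
        Vector3 local = tableSurface.InverseTransformPoint(indexFingertip.position);

        IsTouching = local.y >= 0f && local.y <= touchThreshold;
        TouchPoint = new Vector2(local.x, local.z);
    }
}
```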

We primarily tested a game of chess, prototyping voluminous 3D pieces that collapsed vertically as the user's hand approached. However, early user testing revealed that the shape of the objects could lead users to misunderstand how to interact with them. Since the pieces collapsed vertically, users thought they had to raise their hand and point downward to interact with the piece they wanted to move. This was a problem because hand tracking systems don't work as well when they can't clearly discern the top silhouette of your hand.
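
For context, the collapse effect we tested amounts to shrinking a piece's height as the tracked hand gets closer. A hedged sketch of that behaviour is below; the distances and scale values are placeholders rather than the prototype's actual tuning.

```csharp
using UnityEngine;

// Sketch: vertically collapse a 3D chess piece as the user's hand approaches.
// Distances and scale values are placeholders.
public class CollapsingPiece : MonoBehaviour
{
    public Transform hand;               // tracked hand or fingertip
    public float farDistance = 0.30f;    // fully extended beyond this distance
    public float nearDistance = 0.05f;   // fully collapsed within this distance
    public float collapsedScaleY = 0.1f; // fraction of the original height

    private float originalScaleY;

    void Start()
    {
        originalScaleY = transform.localScale.y;
    }

    void Update()
    {
        float d = Vector3.Distance(hand.position, transform.position);
        // 0 when the hand is near, 1 when it is far (clamped by InverseLerp).
        float t = Mathf.InverseLerp(nearDistance, farDistance, d);

        Vector3 s = transform.localScale;
        s.y = Mathf.Lerp(originalScaleY * collapsedScaleY, originalScaleY, t);
        transform.localScale = s;
    }
}
```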

Despite our reluctance to turn away from the appeal of futuristic 3D interfaces, we decided to make the pieces users interact with flat. This had two key benefits: first, it was easier to determine which piece a user was targeting; second, users no longer felt the urge to contort their hands, they simply touched and dragged.
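
In practice the flat pieces behave much like icons on a touchscreen: when a touch lands on a piece, the piece follows the finger along the table plane until the finger lifts. A minimal sketch of that interaction follows, with its own hypothetical touch check and illustrative thresholds.

```csharp
using UnityEngine;

// Sketch: flat chess pieces that are simply touched and dragged along the
// table surface, similar to a touchscreen. Field names are illustrative.
public class DraggablePiece : MonoBehaviour
{
    public Transform tableSurface;     // aligned virtual desk, +Y is up
    public Transform indexFingertip;   // fingertip from hand tracking
    public float touchHeight = 0.015f; // hover height that counts as touching
    public float pickupRadius = 0.05f; // how close to the piece a touch must land

    private bool dragging;

    void Update()
    {
        // Fingertip in table-local space; a small height above the plane is a touch.
        Vector3 local = tableSurface.InverseTransformPoint(indexFingertip.position);
        bool touching = local.y >= 0f && local.y <= touchHeight;

        // Project the touch onto the table plane in world space.
        Vector3 touchOnTable = tableSurface.TransformPoint(new Vector3(local.x, 0f, local.z));

        if (!dragging && touching &&
            Vector3.Distance(touchOnTable, transform.position) <= pickupRadius)
        {
            dragging = true;
        }

        if (dragging)
        {
            // Slide the piece along the surface while the finger stays down.
            transform.position = touchOnTable;
            if (!touching) dragging = false;
        }
    }
}
```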

We learned that, while interactions along a surface should remain simple and loosely similar to traditional touchscreen interfaces, there is new potential in what can be derived from a user's hand position relative to an object in 3D space. Unlike touchscreens, these interfaces can light up in anticipation of being touched, which makes them more playful and predictable.
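
That anticipation can be as simple as driving a highlight from the fingertip's distance to an object, so a piece starts to glow before it is actually touched. A small sketch of that idea follows; the colour-based highlight and the range value are illustrative choices, not the prototype's actual shading.

```csharp
using UnityEngine;

// Sketch: fade in a highlight as a tracked fingertip approaches, so the
// interface reacts before it is touched. Values are illustrative.
public class AnticipatoryHighlight : MonoBehaviour
{
    public Transform indexFingertip;
    public Color idleColor = Color.white;
    public Color hoverColor = Color.cyan;
    public float highlightRange = 0.15f; // distance at which the glow starts

    private Renderer pieceRenderer;

    void Start()
    {
        pieceRenderer = GetComponent<Renderer>();
    }

    void Update()
    {
        float d = Vector3.Distance(indexFingertip.position, transform.position);
        // 1 when the finger is on the piece, 0 at or beyond highlightRange.
        float t = 1f - Mathf.Clamp01(d / highlightRange);
        pieceRenderer.material.color = Color.Lerp(idleColor, hoverColor, t);
    }
}
```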

It took us a while to get to a system that worked smoothly, but the moment we first hopped into a networked session with expressive avatars, and could both see and hear the other person tapping our table over voice chat as if we were in the same room, was truly mind-blowing. It felt almost magical to bring this tangible part of our reality into a shared experience.

Source: Unity Technologies Blog