Creating this update presented some challenges worth working through. What seems like the most obvious answer is not always the best one, especially when developing for accessibility. We took the time to work through each aspect of this update to make sure the features we were adding worked well for the people who need them most.

When we enabled TTS early in the project, we started with automatic narration. This meant that any object a player’s hands waved over would be described, even if that meant speaking on top of a previous description that was still playing. For audio descriptions to be valuable, they need to be heard without other audio fighting for priority.

This led to a few changes that worked well in playtesting. For example, we decided to have players press a button to activate descriptions instead of having them read automatically, which made for a more comfortable experience. This gives players agency to decide whether a description is read to them when their hand is placed over an object or pointing at something in the distance. It also prevents accidental TTS when a player moves their hand over objects they didn’t mean to have read.
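To make the flow concrete, here is a minimal sketch of a button-activated description trigger. It assumes a hypothetical `Describable` component that stores an object’s name and visual description, and a hypothetical `TtsSpeaker` stand-in for a platform TTS service (Unity has no built-in TTS API); the input binding and class names are illustrative, not the shipped implementation.

```csharp
using UnityEngine;

// Hypothetical stand-in for a platform text-to-speech service.
public static class TtsSpeaker
{
    public static void Speak(string text) => Debug.Log($"[TTS] {text}");
}

// Attach to any object that should be describable.
public class Describable : MonoBehaviour
{
    public string ObjectName;
    [TextArea] public string VisualDescription;
}

// Raycasts along the player's hand and speaks a description only on button press.
public class DescriptionTrigger : MonoBehaviour
{
    [SerializeField] Transform handPointer;      // forward axis of the player's hand or controller
    [SerializeField] float maxDistance = 20f;    // reach for pointing at distant objects
    [SerializeField] KeyCode describeKey = KeyCode.JoystickButton0; // placeholder input binding

    void Update()
    {
        // Only speak when the player explicitly asks for a description.
        if (!Input.GetKeyDown(describeKey))
            return;

        if (Physics.Raycast(handPointer.position, handPointer.forward,
                            out RaycastHit hit, maxDistance))
        {
            var target = hit.collider.GetComponentInParent<Describable>();
            if (target != null)
                TtsSpeaker.Speak($"{target.ObjectName}. {target.VisualDescription}");
        }
    }
}
```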

The TTS reads the object’s name followed by a short visual description, so it doesn’t take very long. Even with short descriptions, though, we knew people would want behavior similar to a screen reader (i.e., the ability to cancel audio while it’s being read).
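A sketch of that screen-reader-style cancellation, assuming description audio plays through a dedicated `AudioSource` (for example, pre-rendered or streamed TTS clips). The cancel binding and component name are illustrative.

```csharp
using UnityEngine;

// Plays one description at a time and lets the player cut it off mid-sentence.
public class DescriptionPlayback : MonoBehaviour
{
    [SerializeField] AudioSource descriptionSource;               // dedicated source for TTS audio
    [SerializeField] KeyCode cancelKey = KeyCode.JoystickButton1; // placeholder input binding

    public void Play(AudioClip descriptionClip)
    {
        // Starting a new description always interrupts the previous one,
        // so descriptions never talk over each other.
        descriptionSource.Stop();
        descriptionSource.clip = descriptionClip;
        descriptionSource.Play();
    }

    void Update()
    {
        // Cancel the current description on demand, like a screen reader.
        if (descriptionSource.isPlaying && Input.GetKeyDown(cancelKey))
            descriptionSource.Stop();
    }
}
```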

Another thing we learned about audio is the importance of ducking: lowering other game audio so that the TTS descriptions take priority. We ran into a problem where TTS could be triggered during an interaction with an NPC, making the NPC’s dialog quieter and easy to miss. At this time, players are not able to “rewind” or “retrigger” the same audio; however, they can wave their hands and NPCs will respond again to help.
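One common way to implement this kind of ducking in Unity is with AudioMixer snapshots. The sketch below assumes a mixer with two snapshots authored in the editor, here called “Normal” and “Ducked” (with dialog and SFX groups lowered in the latter); the snapshot names and hook-up are illustrative, not necessarily how the game does it.

```csharp
using UnityEngine;
using UnityEngine.Audio;

// Fades other game audio down while a TTS description plays, then restores it.
public class TtsDucking : MonoBehaviour
{
    [SerializeField] AudioMixerSnapshot normalSnapshot;  // mixer at normal levels
    [SerializeField] AudioMixerSnapshot duckedSnapshot;  // dialog/SFX groups lowered
    [SerializeField] float fadeTime = 0.25f;             // how quickly other audio fades

    // Call when a TTS description starts playing.
    public void OnDescriptionStarted() => duckedSnapshot.TransitionTo(fadeTime);

    // Call when the description finishes or is cancelled.
    public void OnDescriptionFinished() => normalSnapshot.TransitionTo(fadeTime);
}
```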

But one of the hardest parts of building features like these is making sure they will actually help the users who need them. The best way to determine whether new accessibility features are useful is testing. Of course, being able to quickly make new builds for all of our platforms after each round of feedback and development was essential to making this update the best it could be. One unlikely tool we found useful for fast iteration on our designs was Unity’s Post-Processing Stack.

Before sending builds to playtesters, our developers wanted to test the effectiveness of features internally. Since many of our developers are sighted, we used the Post-Processing Stack and created entries in our debug menu that allowed us to modify the visual clarity of what we were seeing in the headset. This helped our developers simulate roughly what it is like to have different levels of reduced vision while playing the game. Since we could now rapidly identify and tackle the most obvious issues, we were able to iterate on designs more quickly and make sure we were getting the most out of the external playtest sessions with blind and low-vision testers.
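A minimal sketch of that kind of debug hook, assuming the Post-Processing Stack v2 package and a global `PostProcessVolume` whose profile approximates a vision condition (for example, heavy blur via Depth of Field or reduced contrast via Color Grading). The hotkeys, profile contents, and class name are illustrative assumptions, not the game’s actual debug menu.

```csharp
using UnityEngine;
using UnityEngine.Rendering.PostProcessing;

// Steps a low-vision simulation up and down by blending a post-process volume's weight.
public class LowVisionSimulator : MonoBehaviour
{
    [SerializeField] PostProcessVolume simulationVolume; // global volume with the simulation profile
    [Range(0f, 1f)]
    [SerializeField] float intensity = 1f;               // 0 = off, 1 = full simulation

    void Update()
    {
        // Debug-only hotkeys: adjust the simulation strength while in the headset.
        if (Input.GetKeyDown(KeyCode.LeftBracket))
            intensity = Mathf.Clamp01(intensity - 0.25f);
        if (Input.GetKeyDown(KeyCode.RightBracket))
            intensity = Mathf.Clamp01(intensity + 0.25f);

        // Blending the volume weight approximates different levels of reduced vision.
        simulationVolume.weight = intensity;
    }
}
```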

Source: Unity Technologies Blog