Computer vision training requires large amounts of labeled images to be successful, but labeling real-world data is a long and tedious process. To address this, we create custom synthetic datasets for customers, powered by the Unity Perception package. With this technology, we can create a large variety of environments populated with various objects and humans. Through Randomizer scripts, we can randomize several parameters, such as objects’ position, rotation, animation, texture, and lighting. The resulting images, referred to as frames, are generated almost instantly, thanks to real-time rendering of lifelike 3D scenes.

New features are continually being added to the Perception package, and this project addresses rig automation and resizing. The project’s goal is to randomize digital humans’ blend shapes and automatically adapt their rigs by using a Blend Shape Randomizer script and other rigging and skinning tools currently in development. We worked closely with one of our customers to create the synthetic dataset they needed over the span of four weeks. We modified existing randomizers and set up interior and exterior scene environments with people and lighting randomization to meet the customer’s needs. After prioritizing this work, I was able to return my focus to the human rigging and skinning automation tools. I expect to complete a Bones Placer tool by the end of my internship; it will work alongside a Skinning automation tool.
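To make the Randomizer workflow more concrete, here is a minimal sketch of a rotation randomizer in the spirit of the Perception package tutorials. It is only an illustration: the tag class name is hypothetical, and exact namespaces and APIs can differ between Perception package versions.

```csharp
using System;
using UnityEngine;
using UnityEngine.Perception.Randomization.Parameters;
using UnityEngine.Perception.Randomization.Randomizers;
using UnityEngine.Perception.Randomization.Samplers;

// Illustrative tag: add this component to any object whose rotation should be randomized.
public class MyRotationRandomizerTag : RandomizerTag { }

[Serializable]
[AddRandomizerMenu("Perception/My Rotation Randomizer")]
public class MyRotationRandomizer : Randomizer
{
    // Each Euler angle is sampled uniformly between 0 and 360 degrees.
    public Vector3Parameter rotation = new Vector3Parameter
    {
        x = new UniformSampler(0f, 360f),
        y = new UniformSampler(0f, 360f),
        z = new UniformSampler(0f, 360f)
    };

    protected override void OnIterationStart()
    {
        // Apply a freshly sampled rotation to every tagged object at the start of each iteration.
        foreach (var tag in tagManager.Query<MyRotationRandomizerTag>())
            tag.transform.rotation = Quaternion.Euler(rotation.Sample());
    }
}
```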

My experience on both projects was very exciting and rewarding, as I was able to learn more about synthetic data and work in a fast-paced environment on challenging problems with the support of managers, mentors, and colleagues. Working on a customer project was initially very daunting, but it turned out to be invaluable. Iterating on their feedback was instructive, as machine learning has different needs than gaming, where I have more experience. I also gained more knowledge of the HD Render Pipeline and Shader Graph through lighting, post-processing, and the creation of a Shader Graph that randomizes the appearance of textures.

I quickly familiarized myself with the Perception package, and more specifically with its Randomizers’ logic, so that I could modify them as needed. I then used this newfound knowledge to write the Blend Shape Randomizer from start to finish; it adds meshes as new blend shapes to a target mesh and randomizes their weights (a sketch of this approach follows below). This taught me more about blend shapes as well as the APIs specific to Unity and the Perception package. In addition, I delved further into Houdini Python scripting as I worked on exporting vertex data from a mesh in Houdini to a .json file. This file is then handed to a Bones Placer tool in Unity, which uses the vertex data to calculate expected bone positions and generates those bones onto a target mesh that has the same vertex IDs as the mesh the data was collected from (also sketched below). This is the tool I am currently developing, and it is due to be completed by the end of my internship, as mentioned above. The generated bones will then be used to skin the mesh with a Skinning automation tool, currently being developed by my colleague. Overall, I acquired a vast amount of technical knowledge, for which I am very grateful, and I am looking forward to learning even more to help Unity and its customers succeed in synthetic data generation!
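As a rough illustration of the Blend Shape Randomizer’s logic, the sketch below shows the two pieces described above: adding a source mesh as a new blend shape frame on a target mesh, and randomizing blend shape weights at the start of each iteration. The tag class, parameter range, and helper method are assumptions made for illustration, not the shipped implementation.

```csharp
using System;
using UnityEngine;
using UnityEngine.Perception.Randomization.Parameters;
using UnityEngine.Perception.Randomization.Randomizers;
using UnityEngine.Perception.Randomization.Samplers;

// Illustrative tag: add this component to characters whose blend shapes should be randomized.
public class BlendShapeRandomizerTag : RandomizerTag { }

[Serializable]
[AddRandomizerMenu("Perception/Blend Shape Randomizer (sketch)")]
public class BlendShapeRandomizerSketch : Randomizer
{
    // Blend shape weights in Unity typically range from 0 to 100.
    public FloatParameter weight = new FloatParameter { value = new UniformSampler(0f, 100f) };

    // Adds a source mesh as a new blend shape on the target mesh.
    // Assumes both meshes share the same vertex count and ordering.
    public static void AddMeshAsBlendShape(Mesh target, Mesh source, string shapeName)
    {
        var baseVertices = target.vertices;
        var sourceVertices = source.vertices;
        var deltas = new Vector3[baseVertices.Length];
        for (var i = 0; i < deltas.Length; i++)
            deltas[i] = sourceVertices[i] - baseVertices[i];
        target.AddBlendShapeFrame(shapeName, 100f, deltas, null, null);
    }

    protected override void OnIterationStart()
    {
        // Give every blend shape on every tagged character a freshly sampled weight.
        foreach (var tag in tagManager.Query<BlendShapeRandomizerTag>())
        {
            var renderer = tag.GetComponent<SkinnedMeshRenderer>();
            if (renderer == null)
                continue;

            for (var i = 0; i < renderer.sharedMesh.blendShapeCount; i++)
                renderer.SetBlendShapeWeight(i, weight.Sample());
        }
    }
}
```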
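The Bones Placer tool itself is still in development, so the following is only a minimal sketch under stated assumptions: the .json layout (named vertex groups exported from Houdini) and the centroid-based bone placement are hypothetical simplifications, not the final tool.

```csharp
using System;
using System.IO;
using UnityEngine;

// Hypothetical layout of the .json exported from Houdini: one entry per bone,
// listing the vertex IDs used to derive that bone's position.
[Serializable]
public class BoneVertexGroup
{
    public string boneName;
    public int[] vertexIds;
}

[Serializable]
public class BoneVertexData
{
    public BoneVertexGroup[] groups;
}

public class BonesPlacerSketch : MonoBehaviour
{
    public SkinnedMeshRenderer targetRenderer;
    public string jsonPath;

    // Creates one bone Transform per vertex group, placed at the centroid of that
    // group's vertices on the target mesh. The target mesh is assumed to share
    // vertex IDs with the Houdini mesh the data was exported from.
    public void PlaceBones()
    {
        var data = JsonUtility.FromJson<BoneVertexData>(File.ReadAllText(jsonPath));
        var vertices = targetRenderer.sharedMesh.vertices;

        var root = new GameObject("GeneratedRig").transform;
        root.SetParent(targetRenderer.transform, false);

        foreach (var group in data.groups)
        {
            var centroid = Vector3.zero;
            foreach (var id in group.vertexIds)
                centroid += vertices[id];
            centroid /= group.vertexIds.Length;

            var bone = new GameObject(group.boneName).transform;
            bone.SetParent(root, false);
            // Mesh vertices are in the mesh's local space; this sketch assumes it
            // coincides with the renderer's local space.
            bone.localPosition = centroid;
        }
    }
}
```

In the real pipeline, the generated bones would then be handed to the Skinning automation tool mentioned above.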

Source: Unity Technologies Blog