Neill Blomkamp: We were lucky to find Volumetric Capture Systems (VCS) in Vancouver. They built the rig, which was 265 4K cameras on a scaffold. It’s usually a hemisphere, but we needed more room on the sides, so ours was actually a cylinder. Then on top of that, we had these mobile hemispheres that were a meter wide, with 40 or 50 cameras in them, which would be brought in closer for facial capture.

The truth is I couldn’t imagine a worse environment to put actors in if I tried! I mean, I guess the only other thing you could do is maybe add water. If they were semi-underwater, maybe that would be the only thing that would make it worse. So hats off to the actors Carly Pope and Nathalie Boltt for doing awesome work in that insane environment.

The other thing that was extremely weird to get your head around, for me at least, was that there was no clear way to observe the performances other than through witness cameras. So you’d have witness cameras with operators who were moving and trying to follow the actors, and I just got the feed from those cameras. Because, obviously, the other 265 cameras are just static, recording wherever the actor happens to be in the frame at that moment.

That means you don’t get any feedback from the volume capture rig, and you certainly don’t have a virtual camera, because the data hasn’t been calculated yet. You’re basically just watching it live, like a stage play.

On a mocap (motion capture) set, that’s different. In mocap, you grab a virtual camera and you shoot it yourself, with the actors there or not there; it doesn’t matter. With vol-cap, what ended up happening was that, after many months of crunching down the data, we could load it into Unity. Then we had this awesome real-time environment where we could bring in virtual cameras and just look at it. At that point it almost leapfrogs normal motion capture, because now, all of a sudden, what you’re looking at is final. Everything is final. So now it’s just a question of, well, where do you want the lights?

You have nothing to start with, and then suddenly you have a final character. I mean, you’re not assigning a 3D mesh to a rig. There’s no retargeting. There are no morph targets. It just comes in, and it’s done, so that was pretty cool.
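To make that contrast concrete, here is a minimal, purely illustrative sketch of the two data models; the type and field names are hypothetical, not anything from the actual production pipeline:

```python
from dataclasses import dataclass

# Hypothetical, simplified data models -- illustration only.

@dataclass
class TexturedMesh:
    vertices: list[tuple[float, float, float]]  # world-space positions
    triangles: list[tuple[int, int, int]]       # vertex indices
    texture: bytes                              # encoded image data

@dataclass
class MocapTake:
    """Motion capture output: per-frame joint rotations that still
    have to be retargeted onto a rigged 3D mesh before you can see
    a finished character."""
    joint_rotations: list[dict[str, tuple[float, float, float]]]

@dataclass
class VolCapTake:
    """Volumetric capture output: a baked, textured mesh for every
    frame. No rig, no retargeting, no morph targets -- each frame
    loads in as final geometry."""
    meshes: list[TexturedMesh]
```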

The data management and logistics were an absolute goddamn nightmare too, because 265 4K cameras times 30 minutes of footage meant, I think, 12 to 15 terabytes of downloads per night. So we actually had to supplement VCS’s computers. I think we brought 24 computers of our own to the set, just so that they could start shooting the next morning.
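Those nightly totals also hold up as a rough back-of-envelope calculation. The per-camera bitrate below is an assumption, not a figure from the shoot, but it is a plausible range for compressed 4K capture:

```python
# Back-of-envelope check on the nightly data volume.
# Assumption (not from the interview): each 4K camera records a
# compressed stream at roughly 200-250 Mbps.

CAMERAS = 265
MINUTES = 30  # minutes of footage per night

def nightly_terabytes(mbps_per_camera: float) -> float:
    """Total data per night in terabytes (1 TB = 1e12 bytes)."""
    bytes_per_camera = mbps_per_camera * 1e6 / 8 * MINUTES * 60
    return CAMERAS * bytes_per_camera / 1e12

for mbps in (200, 250):
    print(f"{mbps} Mbps/camera -> {nightly_terabytes(mbps):.1f} TB/night")
# 200 Mbps/camera -> 11.9 TB/night
# 250 Mbps/camera -> 14.9 TB/night
```

At roughly 200 to 250 Mbps per camera, the math lands almost exactly on the quoted 12 to 15 terabytes per night.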

Source: Unity Technologies Blog