I've pretty exhaustively covered my process for putting together an 18-camera stereo 360 rig - now I'm going to cover a mock commercial VR shoot from preproduction to final exports. This first project is going to be a week-long, 7-part series:
1. Planning and shooting the apartment
2. File ingest and management
3. Stitching a stereo panorama
4. Postproduction Pt 1: Patching the nadir
5. Postproduction Pt 2: Removing seams
6. Postproduction Pt 3: 3D motion graphics
7. 3D Spatialized Audio
Embedded above is a preview of where we'll be after Day 5. We've still got a few kinks to work out, and we're going to be adding some 3D motion graphics, but the 3D 360 video is pretty darn seamless.
This series is going to be based around the 18-camera rig, but the vast majority of the content will be applicable to any multi-camera 360 setup. If you're reading this and getting excited to buy and assemble your own 3D 360 rig, I implore you to just wait a few more days. I've been working on an improved version of my prototype that requires half as many cameras, and I think you'll be as happy with it as I am.
So, let's get started!
If you've got a headset, you should really take a moment to check out the 3D side-by-side video linked above - with a little pixel-pushing to eliminate our unavoidable stereo stitching errors, it's a nice-looking and promising shot.
Initially, I was concerned that there wouldn't be enough light to shoot this indoors at 60fps - remember that I'm shooting 60fps to sync cameras more precisely in post, and that a camera captures half as much light at 60fps as at 30fps. Fortunately, the positioning of the apartment and the time of day gave us ideal lighting conditions - plenty of cool blue diffused light, the sun casting dramatic shadows on the buildings outside, and pretty great visibility for a megacity like Seoul. I especially love the contrast between the warm light from inside the apartment and the cool blue light from outside - this happened naturally, with only minimal color correction tweaks.
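If the "half as much light" claim isn't obvious, the arithmetic is simple: the shutter can stay open for at most one frame interval, so doubling the frame rate halves the maximum per-frame exposure. A tiny Python sketch (assuming the shutter stays open for the full frame interval - real cameras often expose for less, e.g. a 180-degree shutter, but the ratio comes out the same):

```python
def max_exposure_seconds(fps: float) -> float:
    """Longest possible per-frame exposure: one full frame interval."""
    return 1.0 / fps

# Doubling the frame rate halves the light each frame can gather.
ratio = max_exposure_seconds(60) / max_exposure_seconds(30)
print(ratio)  # 0.5
```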
Since we had our blocking planned out in advance, we knew we could capture Mark's whole movement in the frame of a single stereo pair. This is ideal for avoiding moving objects that cross camera seams - as you'll see in Day 5, it's much easier to remove seams from static objects than moving ones.
Checking our framing using the Yi's smartphone app, I rotated the rig until my front cameras captured both the hallway entrance in the back left, and the doorway exit on the right. I also showed Mark on the smartphone how close he could stand to the camera, and how widely he could gesture with his hands. This is going to make seam removal much, much easier for us.
Additionally, since I knew all my action would be captured by my front stereo pair ('Cam01' and 'Cam02'), I only shot with all 18 cameras once, and then filmed subsequent takes with just Cam01 and Cam02. This allowed me to save battery life and SD card space on the other 16 cameras, and made file management and import much simpler.
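Keeping takes tied to camera names like this also makes ingest trivially scriptable. As a taste of the kind of batch organization coming in Day 2, here's a hypothetical Python sketch that sorts clips into one folder per camera - the 'Cam01_take1.mp4' naming scheme is made up for illustration, so adjust the pattern to whatever your cameras actually write:

```python
import re
import shutil
from pathlib import Path

def organize_by_camera(src_dir, dest_dir):
    """Move clips into one folder per camera.

    Assumes a hypothetical 'Cam01_take1.mp4'-style naming scheme;
    change the regex to match your rig's real filenames.
    """
    src, dest = Path(src_dir), Path(dest_dir)
    moved = []
    for clip in sorted(src.glob("*.mp4")):
        m = re.match(r"(Cam\d+)_", clip.name)
        if not m:
            continue  # skip files that don't follow the scheme
        cam_dir = dest / m.group(1)
        cam_dir.mkdir(parents=True, exist_ok=True)
        target = cam_dir / clip.name
        shutil.move(str(clip), str(target))
        moved.append(target)
    return moved
```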
To answer a common question - yes, I individually push all 18 'record' buttons to start and stop the cameras. I trim and sync the videos in post. Not a big hassle, but I'm working on a better solution now.
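On the trim-and-sync step: with manually started cameras, each clip starts at a slightly different moment, so every track needs a per-camera offset. I line mine up by hand, but one common automated approach (not what I used on this shoot) is to cross-correlate the cameras' scratch audio and read off the sample offset of a sharp transient like a clap. A minimal NumPy sketch, using synthetic spikes in place of real audio:

```python
import numpy as np

def sync_offset(ref: np.ndarray, other: np.ndarray) -> int:
    """Sample offset s such that ref[n] lines up with other[n - s].

    Positive s means the same event (e.g. a clap) happens s samples
    later in `ref`, so trim s samples from the start of `ref`.
    """
    corr = np.correlate(ref, other, mode="full")
    return int(np.argmax(corr)) - (len(other) - 1)

# Synthetic 'clap': a spike at sample 5 in one track, sample 3 in the other.
ref = np.zeros(10)
ref[5] = 1.0
other = np.zeros(10)
other[3] = 1.0
print(sync_offset(ref, other))  # 2
```

On real recordings you'd want to normalize levels and maybe band-pass around the clap before correlating, but the core idea is just this one `np.correlate` call.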
Forgetting to start (or stop) one of those cameras is a really frustrating and easily-avoidable mistake to make, if you're working with a manual-start rig - double-check every record light before the take.
This was my first day with the Zoom H2n audio recorder. As I'll show in Day 7 of this tutorial, the device by itself is sufficient to capture pretty decent 3D spatialized audio for interactive playback in VR. Unfortunately, I didn't know this at the time of shooting, so I came up with a hacky solution on the spot. When we get to 3D spatialized audio, we're going to look at both the new (presumably better) solution I've been using, as well as try to make the stereo audio that I recorded on this set work.
Stay tuned for Day 2, where I'll go into file ingest, renaming and management. Batch processing! Organized file structures! What could be more exciting!