Seamless 3D 360 Day 2: File Management, Sync and Preview

So, we've shot video on 18 cameras. On an all-day shoot, that can sometimes mean up to 10-12 GB per camera - hope you folks have a spare hard drive or two.

Here are my goals for this next step:

  1. Transfer all video files to an external hard drive
  2. Sort and rename all video files
  3. Sync video files in Adobe Premiere
  4. Render sample frames from Adobe After Effects
  5. Use sample frames to create a 2D and a 3D sample panorama

I'll use this final sample panorama to judge the quality of the shot, as well as to identify any potential trouble spots (actors too close to the camera, lens flares, etc.).

I thought it would be nice to show you guys the process step-by-step, and so I cover steps 2-5 in the YouTube video linked below. First, however, we need to copy over all our footage.

To get started, I transfer the footage from each camera into a corresponding subfolder - Cam01's footage goes into folder 01, and so on. I quickly look through the folders to ensure I have the expected number of files, deleting any unplanned extras and noting any missing ones (in case I didn't double-check that my cameras were recording on set).
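If you'd rather script this copy-and-check step than do it by hand, a rough sketch of the idea in Python might look like this (the paths, camera number, and expected clip count are placeholders, not my actual setup):

```python
import shutil
from pathlib import Path

# All paths and counts below are placeholders for illustration only.
CARD_ROOT = Path("/Volumes/YI_CAM/DCIM/100MEDIA")    # one camera's SD card
DEST_ROOT = Path("/Volumes/VR_Footage/Apartment")    # project folder on the external drive
CAMERA = "01"                                        # which camera this card came from
EXPECTED_CLIPS = 4                                   # how many takes were recorded on set

dest = DEST_ROOT / CAMERA
dest.mkdir(parents=True, exist_ok=True)

clips = sorted(CARD_ROOT.glob("*.MP4"))
for clip in clips:
    shutil.copy2(clip, dest / clip.name)   # copy2 keeps timestamps, handy for sorting later

# Flag missing or unexpected files before the cards get wiped
if len(clips) != EXPECTED_CLIPS:
    print(f"Cam{CAMERA}: expected {EXPECTED_CLIPS} clips but found {len(clips)} - check this card!")
else:
    print(f"Cam{CAMERA}: {len(clips)} clips copied OK")
```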

From here, just check out the YouTube video:

I use a blank folder template to save the trouble of creating 18 new folders for each project. You can download the folder template here.

I use a free bulk renaming utility to manage my 360 video files - download link here.
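The renaming utility is a GUI tool, but the end result is simply a predictable naming scheme. As a rough illustration of the same idea in Python (the Cam01_Shot01 pattern and the project path are hypothetical, not necessarily what you'll see in the video):

```python
from pathlib import Path

# Hypothetical illustration of the renaming step -- in practice I use the GUI utility above.
PROJECT = Path("/Volumes/VR_Footage/Apartment")   # project root containing folders 01..18

for cam_folder in sorted(PROJECT.glob("[0-9][0-9]")):
    # Assumes the camera writes sequentially numbered files, so name order == shooting order.
    for shot_index, clip in enumerate(sorted(cam_folder.glob("*.MP4")), start=1):
        new_name = f"Cam{cam_folder.name}_Shot{shot_index:02d}{clip.suffix}"
        clip.rename(clip.with_name(new_name))
        print(f"{cam_folder.name}/{clip.name} -> {new_name}")
```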

I also make use of a free After Effects script called "rd_RenderLayers" - download link here.

Finally, here are the PTGui templates (18 camera template and masks) that I use for my 18 camera rig. You can use them as described in the video - hopefully you'll have a much better understanding of how they work after the next tutorial.

I may have overdone the coffee for this tutorial, but I had a lot of fun recording. If I cruised too fast past anything, feel free to leave questions in the comments or on the forum!

Seamless 3D 360 Day 1: Shooting the Apartment

I've pretty exhaustively covered my process for putting together an 18-camera stereo 360 rig - now I'm going to cover a mock commercial VR shoot from preproduction to final exports. This first project is going to be a week-long, 7-part series:

1. Planning and shooting the apartment
2. File ingest and management
3. Stitching a stereo panorama
4. Postproduction Pt 1: Patching the nadir
5. Postproduction Pt 2: Removing seams
6. Postproduction Pt 3: 3D motion graphics
7. 3D spatialized audio

Thanks to Mark Barthelemy from Kynda.org for arranging, acting in and helping plan this shoot.

Download the 3D 360 video file for Oculus, Cardboard or Gear VR by clicking here.

Embedded above is a preview of where we'll be after day 5. We've still got a few kinks to work out, and we're going to be adding some 3D motion graphics, but the 3D 360 video is pretty darn seamless.

This series is going to be based around the 18-camera rig, but the vast majority of the content will be applicable to any multi-camera 360 setup. If you're reading this and getting excited to buy and assemble your own 3D 360 rig, I implore you to just wait a few more days. I've been working on an improved version of my prototype that requires half as many cameras, and I think you'll be as happy with it as I am.

 

So, let's get started!

 

If you've got a headset, you should really take a moment to check out the 3D side-by-side video linked above - with a little pixel-pushing to eliminate our unavoidable stereo stitching errors, it's a nice-looking and promising shot.

Initially, I was concerned that there wouldn't be enough light to shoot this indoors at 60fps - remember that I'm shooting 60fps to sync cameras more precisely in post, and that a camera captures half as much light at 60fps as at 30fps. Fortunately, the positioning of the apartment and the time of day gave us ideal lighting conditions - plenty of cool blue diffused light, the sun casting dramatic shadows on the buildings outside, and pretty great visibility for a megacity like Seoul. I especially love the contrast between the warm light from inside the apartment and the cool blue light from outside - this happened naturally, with only minimal color correction tweaks.

Since we had our blocking planned out in advance, we knew we could capture Mark's whole movement in the frame of a single stereo pair. This is ideal for avoiding moving objects that cross camera seams - as you'll see in Day 5, it's much easier to remove seams from static objects than moving ones.

Checking our framing using the Yi's smartphone app, I rotated the rig until my front cameras captured both the hallway entrance in the back left, and the doorway exit on the right. I also showed Mark on the smartphone how close he could stand to the camera, and how widely he could gesture with his hands. This is going to make seam removal much, much easier for us.

Additionally, since I knew all my action would be captured by my front stereo pair ('Cam01' and 'Cam02') I only shot with all 18 cameras once, and then filmed subsequent takes just with Cam01 and Cam02. This allowed me to save battery life and SD card space on the other 16 cameras, and made file management and import much simpler.

To answer a common question - yes, I individually push all 18 'record' buttons to start and stop the cameras. I trim and sync the videos in post. Not a big hassle, but I'm working on a better solution now.

If you take nothing else away from anything I write - always double-check that all of your cameras are recording.

It's a really frustrating and easily avoidable mistake to make if you're working with a manual-start rig.

This was my first day with the Zoom H2n audio recorder. As I'll show in Day 7 of this tutorial, the device by itself is sufficient to capture pretty decent 3D spatialized audio for interactive playback in VR. Unfortunately, I didn't know this at the time of shooting, so I came up with a hacky solution on the spot. When we get to 3D spatialized audio, we'll look at both the new (presumably better) solution I've been using and try to make the stereo audio I recorded on this set work.

Stay tuned for Day 2, where I'll go into file ingest, renaming and management. Batch processing! Organized file structures! What could be more exciting?

3-camera zenith for better 3D

I get a lot of questions about my 3-camera zenith and nadir setup. While the 3-camera zenith is still not a perfect solution, I'll show in this post that it's a better way of creating 3D than any 2-camera zenith setup. The takeaway from this article should be - if you're thinking about buying a 3D 360 camera, please buy one that actually does what it claims.

Let's start with a look at how 3D 360 rigs work.

It should seem obvious that one of the rigs shown below wouldn't work - even though two cameras are used in the front, the cameras are vertically oriented, and human beings just aren't shaped that way.

 

 
[Images: HorizontalCamsOnly.jpg and Eyes.jpg]

Makes sense, right? So, taking a look at the 2-camera zenith/nadir setup, it should become apparent that there's a significant problem with the image captured by this style of rig:

If you imagine a viewer staring at the ceiling and swiveling in their chair, their eyes would be constantly changing orientation, but our cameras are locked in space when we shoot. With the camera technology that exists today, it's difficult to completely avoid this problem - but adding a third camera to zenith and nadir significantly improves the result.

But don't take my word for it - click here to download a sample stereo panorama, and check out the 3D on the lanterns hanging above my rig. The 3D effect should be stronger and more comfortable across the whole zenith than with standard 2-camera zenith rigs.

For a look at how exactly I'm stitching and blending together these three zenith cameras, please check out my upcoming tutorial on stitching the 18-camera rig.

These days, I'm working on a newer version of my rig that should more completely address this issue - but in the meantime, if nothing else, please don't spend thousands of dollars on a 3D 360 rig built on an ineffective marketing gimmick.

Process overview

Setup and teardown take me about 8-10 minutes per shot. I manually start all 18 cameras, then walk away from the rig and wait. The Xiaomi Yi can't natively sync/genlock with other cameras to start them all at the same moment; however, shooting at 60fps lets me manually sync the files in Adobe Premiere with a maximum offset of half a frame - 1/120th of a second - between cameras.

I copy the files from each of the 18 cameras to my external hard drive. I use a free batch file renaming utility to quickly label the source files. This is useful for keeping track of which shot came from which camera, and for ensuring I have all 18 files for every shot.

I then load all 18 cameras' worth of video into a new Premiere project. Premiere is able to time-sync the clips automatically, and I trim off the excess video. I send this project to Adobe After Effects, where I use a free script to render a single frame from each clip.
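In the video I use the rd_RenderLayers script inside After Effects for this step; as an alternative sketch, you could pull one preview frame from each clip with ffmpeg, assuming the clips have already been trimmed so they share a common start point (the paths and sample timestamp below are hypothetical):

```python
import subprocess
from pathlib import Path

SYNCED_CLIPS = Path("/Volumes/VR_Footage/Apartment/synced")         # trimmed, time-aligned clips
PREVIEW_DIR = Path("/Volumes/VR_Footage/Apartment/preview_frames")  # one still per camera
PREVIEW_DIR.mkdir(exist_ok=True)
SAMPLE_TIME = "00:00:05"   # same timestamp in every clip, since they're already in sync

for clip in sorted(SYNCED_CLIPS.glob("*.MP4")):
    out_frame = PREVIEW_DIR / f"{clip.stem}.png"
    subprocess.run(
        ["ffmpeg", "-ss", SAMPLE_TIME,   # seek to the sample point
         "-i", str(clip),
         "-frames:v", "1",               # write exactly one frame
         str(out_frame)],
        check=True,
    )
```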

With these 18 image files, I use PTGui Pro to stitch together the panorama. At this point, I can finally preview a still frame on my headset. If I like the shot, I'll render the full sequence to frames from After Effects and use PTGui Pro's batch stitching capabilities to process the image sequence. For high-precision work, I generate a layered panorama from PTGui and further tweak the image in After Effects.

Finally, I take the stitched panoramic image sequence, apply some sharpening and color correction in After Effects, and render out a 4k video file in Adobe Media Encoder.

In total, capturing, ingesting, editing, stitching, and postprocessing a 30-second shot takes me about three hours. High-precision post work can add an extra hour or two. A full day of shooting produces about 15 shots, which at roughly three hours each works out to about a work week of processing.

 

Bulk File Rename Utility: Free

I use this all the time. Absolutely invaluable for keeping track of all the files you'll be handling.

Adobe Premiere / After Effects: $50/mo

I already pay for these programs for my job. If you work in video, you likely already own these or comparable software. If you don't work in video, they may not be worth the investment on a budget – I'll look into cheaper/free options here.

AE RenderLayers Script: Free

This is how I convert my time-synced video files to frame sequences.

PTGui Pro: €149 ($232 USD)

I believe Kolor Autopano and Hugin are also viable alternatives; I've only tried PTGui, and it works fantastically. VideoStitch is another option, but it's $1000, and I haven't found anything it can do that PTGui Pro + After Effects can't.

Why Xiaomi Yi?

Most of the professional 360 rig solutions expect you to use GoPros, which, at $400-500 apiece, are simply not an option for me. Research and comparison led me to the Xiaomi Yi, a $75 camera with specs that rival the $400 GoPro. It's got nearly everything I want in a camera for VR, namely:

 

Price

This was a huge factor. This technology is going to keep evolving over the next few years – I don't need something future-proof, I need the best value I can get for my money today. Especially when considering 6 or more cameras, the Yi's $75 price tag was a huge plus.

 

Frame rate

To ensure good sync between all cameras, we either need cameras that can start simultaneously (genlock), or we need to shoot at a high enough frame rate to sync in post. I've found that shooting with the Yi at 60fps is enough for good sync in most situations – when synchronizing 60fps footage in post, clips can only be out of sync by at most half a frame, or 1/120th of a second.
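A quick back-of-the-envelope check of that figure - since you can only shift clips by whole frames when aligning them in post, the leftover offset is at most half a frame period:

```python
# Worst-case residual offset after frame-accurate alignment of free-running cameras
for fps in (30, 60, 120):
    frame_period_ms = 1000 / fps
    worst_case_ms = frame_period_ms / 2     # can't shift by less than one whole frame
    print(f"{fps} fps: frames every {frame_period_ms:.1f} ms, "
          f"residual offset up to {worst_case_ms:.2f} ms")
# At 60 fps that's ~8.3 ms, i.e. 1/120th of a second
```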

 

Resolution

The higher the better. The Yi natively does 1920x1080 @ 60fps, and can be hacked to shoot 2304x1296 @ 30fps or 1600x1200 @ 60fps. I shoot at 1600x1200 because the 4:3 aspect ratio uses the entire image sensor, giving a greater field of view.
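For a sense of the trade-off between those modes (numbers taken straight from the specs above), here's a quick comparison of pixel counts and aspect ratios:

```python
# Comparing the shooting modes mentioned above
modes = {
    "1920x1080 @ 60fps (stock)":  (1920, 1080),
    "2304x1296 @ 30fps (hacked)": (2304, 1296),
    "1600x1200 @ 60fps (hacked)": (1600, 1200),
}
for name, (w, h) in modes.items():
    print(f"{name}: {w * h / 1e6:.2f} MP, aspect ratio {w / h:.2f}:1")
# 1600x1200 trades some pixels for the full-height 4:3 sensor readout
```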

 

Field of view

Following the formula I listed in my rig design post, I knew I needed at least 90° VFOV for a 6-camera cube rig. This ruled out the Mobius Action Camera with its 132° diagonal FOV (87° VFOV). The Xiaomi Yi, with its 155° diagonal FOV (100° VFOV), provides plenty of overlap and makes stitching easier.
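As a rough sanity check of that requirement (this is my simplification, not the exact formula from the rig design post): each camera on a 6-camera cube rig has to cover at least a 90° slice of the sphere vertically, and anything beyond that becomes stitching overlap.

```python
# Vertical FOV check for a 6-camera cube rig, using the figures quoted above
REQUIRED_VFOV = 90                           # degrees -- one face of the cube
cameras = {"Mobius": 87, "Xiaomi Yi": 100}   # vertical FOV in degrees

for name, vfov in cameras.items():
    margin = vfov - REQUIRED_VFOV
    if margin < 0:
        print(f"{name}: {vfov}° VFOV, {-margin}° short of covering a cube face")
    else:
        print(f"{name}: {vfov}° VFOV, {margin}° of overlap to spare for stitching")
```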

 

Image quality

Especially at 30fps, the Yi produces a great-quality image with a desirably flat color profile. At 60fps, the image is a little murkier and struggles more in low light (twice the frame rate means the shutter admits only half as much light per frame). Shooting on a bright day, plus downsampling and sharpening in post, mostly alleviates this issue.

 

Of course, the camera has downsides:

 

Battery

Like all action cameras, the Yi is limited by its battery life. I'm able to take about an hour of footage before batteries start to die.

 

Audio

Another issue common to all action cameras – I wouldn't use the in-camera audio for professional gigs.

 

Lens

Xiaomi Yi lenses sometimes arrive slightly out of focus from the factory. Out of the 18 cameras I purchased, I chose to refocus eight – only two of those were pretty bad. I followed an online tutorial to open the front of the camera, remove the glue securing the lens to the body, and refocus the lens while monitoring in real time through HDMI out.

 

Lack of inbuilt professional controls

Unlike the GoPro, the Xiaomi Yi lacks a 'Protune' feature to lock white balance or exposure. This would be a big plus for any multi-camera VR rig, but I haven't been unduly handicapped without it. There's a very active modding community at DashcamTalk that has hacked the Yi firmware to unlock higher bitrates and resolutions, plus advanced controls like white balance and exposure lock.

 

 

All things considered, the Yi was the only camera that ticked all my boxes. If budget weren't remotely an issue, I'd pick the Hero 4 Black - but with specs comparable to the $400 GoPro Hero 3+, the Yi really is great value for the money.

Here are the pros and cons to other popular options as I see them:

 

GoPro Hero 3+

Pro: Comparable specs to Xiaomi Yi

Con: $400

 

GoPro Hero 4 Black

Pro: 2K 60fps, Protune

Con: $500

 

Mobius

Pro: Smaller size good for rig construction

Con: Stock lens has low FOV, no 1080p 60fps