The cinematic VR market is finally at a point where there are cameras available in the wild to shoot stereoscopic 360º content. There is the awesome-looking Jaunt ONE, the pricey Nokia OZO, the Google/GoPro JUMP rig, Facebook’s open-source camera, or, if you like stitching and making your life difficult, our custom 16-camera GoPro rig. However, having a decent camera is step zero, and it is worth next to nothing without a proper post-production pipeline.

If you thought 4K mono content was hard to deal with, expect file sizes, and thus transfer and render times, to double or triple when you start shooting stereo content. Also, none of your editing hardware and software can play back 4096×4096@60fps smoothly, which makes editing feel like waiting for your dial-up modem to connect to the internet… every time you move to the next frame! On top of that, you’re dealing with buggy software and semi-finished plugins that crash your system halfway through a several-day-long render…

We previously wrote a post on Scaling Up a VR Company, in which we thought we had figured out how to deal with the aforementioned issues, but boy, were we wrong… When we worked on two productions of 12+ minutes each, our pipeline fell hopelessly short. In this follow-up post, we share the result of three months’ worth of frustration, in which we managed to 10x our file transfer speeds, cut render times by a factor of 12, grow our storage capacity to 100+ terabytes, and significantly increase the quality of our productions. If you also struggle with moving these insanely large video files through your pipeline to get them to your video specialists (compositors, 3D artists, VFX, colorists) quickly, you will hopefully find value in this post and save yourself and your team several nervous breakdowns.


Moving from shoot to post

The last project we shot with the OZO was a 13-minute fictional film called Superstar VR. We used the OZO for this job because it has an extremely useful live preview function and works well in small spaces like dressing rooms and limos.

The footage was recorded on set on two different systems:

  1. The internal SSD module of the OZO camera
  2. A Blackmagic Design HyperDeck Studio Pro 2 hooked up to a Mac Pro over Thunderbolt for previewing and immediate playback of the recorded footage

This is by far the safest option, because you will always leave the set with two copies of your footage. We typically shot a little under 1TB of data each day, which we copied to our RAID systems overnight, for a total of three copies. You get the point: the data started piling up, and we hadn’t even started post-production yet!
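The nightly offload can be sketched as a simple rsync with checksum verification. The paths below are hypothetical stand-ins for the camera SSD and the RAID share; point them at your own volumes.

```shell
# Nightly offload sketch (paths are hypothetical; in practice SRC would be
# the mounted OZO SSD module and DST a folder on the RAID system)
SRC="${SRC:-$(mktemp -d)}"   # e.g. /Volumes/OZO_SSD/day_03
DST="${DST:-$(mktemp -d)}"   # e.g. /Volumes/RAID/superstar_vr/raw/day_03

touch "$SRC/clip_001.mov"    # stand-in for a recorded clip

# -a preserves timestamps/permissions; --checksum re-reads both copies,
# catching silent corruption at the cost of extra read time
rsync -a --checksum "$SRC/" "$DST/"
```

The `--checksum` flag makes overnight copies slower but verifiable, which matters when these are your only backups of a shoot day.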


Picking the right hardware & software for the job

During all of our experimentation, we have tried dozens of software packages and burned money on complex hardware that didn’t always work. Now we believe we finally found a solution that works well for 360º 3D video productions.



We’ve been working with the combination of Autopano, Premiere, and After Effects for years, and these tools have definitely progressed, but they never reached a point where they were stable, powerful, and intuitive enough for a stereo 360º workflow.

To be honest, though, at around $1,000 this combination of tools is the perfect starting point for learning the basics of stitching stereoscopic footage. That knowledge will prove valuable once you move to the far more expensive, far more complex, but far more powerful combination of NUKEX and CARA VR from The Foundry: a toolset that is fi-na-llyyyy able to properly handle 360º 3D footage. NUKE gives you far greater control over your stitch, lets you see where you f*cked up, and offers flexible tools to correct your mistakes. If you have the budget, make the switch!

But believe me, if you’ve never worked in NUKE before (like me), it has a very steep learning curve. My two years of training in Autopano definitely helped me identify some of the mistakes I made, but it took me a couple of weeks to really get the hang of it. Once you get it running (or if you already have years of NUKE experience), it will dramatically improve the quality of, and your control over, your image, and it brings things like proper reviewing, compositing, grading, and retouching within reach.

In part 2 of this blog post I will do an in-depth review of our NUKE + CARA workflow, including example OZO footage. For now, you only need to know that there are two types of NUKE licenses: 1) node-locked and 2) floating. A node-locked license is tied to a specific workstation, so you can only use NUKE on that machine. A floating license is a bit harder to set up but more flexible, since you can use NUKE on different machines, albeit not at the same time. That’s why we recommend you choose the floating license (click here for a tutorial on how to install your license).

You now have a decent stereoscopic camera and a powerful post-production suite, but then you run into a lot of hardware limitations. Moving files over your average office network takes ages, and most disk configurations cannot play back uncompressed stereo footage smoothly. And don’t even get me started on the insane render times, especially when compositing! There are three main issues we had to tackle to optimize our post-production workflow:

  1. Increase network speed
  2. Increase storage capacity and speed
  3. Decrease render time

Let’s dive into these one by one.



Increase network speed

Because we don’t want to lose quality during post, we work with 4096×4096 uncompressed DPX sequences rather than video files (more on this in part 2 of this post). These files have a lot of advantages, but the major disadvantage is that they are HUGEEEE. A typical sequence of about a minute consists of thousands of files and ends up being hundreds of gigabytes in size. So if you want to finish your production before the third generation of the Oculus Rift is released, you need to find a way to move these files fast.
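The sizes are easy to sanity-check. Assuming 10-bit DPX (three 10-bit channels packed into one 32-bit word per pixel, headers ignored), the math works out roughly like this:

```shell
# Back-of-the-envelope size of a 10-bit 4096x4096 DPX sequence
# (3 channels packed into a 32-bit word per pixel, so ~4 bytes/pixel)
BYTES_PER_FRAME=$((4096 * 4096 * 4))            # 67,108,864 bytes ≈ 64 MiB
FRAMES_PER_MIN=$((60 * 60))                     # 60 fps for 60 seconds
GB_PER_MIN=$((BYTES_PER_FRAME * FRAMES_PER_MIN / 1000000000))
echo "${GB_PER_MIN} GB per minute of footage"   # roughly 241 GB
```

So a single minute of stereo footage is already well over 200 GB, before you count any intermediate renders.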

The easiest and fastest way to do this is to set up a 10 gigabit network between your workstations. In our case we had to hook up a couple of Windows machines and a couple of Mac Pros into one network. When trying to do this, we rediscovered the age-old problem that Macs come from Mars and Windows machines from Venus.

First, hook the machines up with CAT7 cables to a dedicated 10 gigabit switch; we highly recommend the Netgear ProSafe XS716T-100 for this (make sure you pick a model with more ports than you currently need, because you will run out of them very quickly as your company grows).

Our Windows machines have a dedicated 10 gigabit network port, and our Macs are equipped with SANLink2 10G Base-T external Thunderbolt 2 to 10 gigabit adapters, because Macs only support gigabit Ethernet out of the box. Now, when you try to transfer a file, you will see… the same low speeds. Huh?!

The culprit is something called jumbo frames, which you have to enable for each 10 gigabit port on your switch and for every 10 gigabit network card inside your workstations. A standard Ethernet frame carries a 1500-byte MTU, while jumbo frames carry a 9000-byte MTU, allowing faster transfers of large files because far fewer packets (and packet headers) are needed.

On a Mac, go to System Preferences > Network > Advanced > Hardware, select Configure: Manually, and set MTU to Custom: 9000. On Windows, go to Control Panel > Network and Sharing Center > Change adapter settings, right-click your adapter, and go to Properties > Configure > Advanced > Jumbo Packet and select 9014 bytes (the extra 14 bytes on Windows account for the Ethernet header, so it is in fact the same as 9000 on Mac).
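You can verify the jumbo-frame path end to end with a non-fragmenting ping. A sketch, with the interface name (`en5`) and target host as assumptions you'll need to replace:

```shell
# A 9000-byte MTU minus the 20-byte IP header and 8-byte ICMP header
# leaves an 8972-byte payload that must fit in a single frame
PAYLOAD=$((9000 - 20 - 8))
echo "test payload: ${PAYLOAD} bytes"

# macOS (set MTU from the CLI; en5 is an assumed interface name):
#   sudo ifconfig en5 mtu 9000
#   ping -D -s $PAYLOAD other-workstation    # -D forbids fragmentation
# Windows:
#   ping -f -l 8972 other-workstation
# If this ping fails while a normal ping works, some hop is still at MTU 1500.
```

Run the test between every pair of machines; a single switch port left at 1500 will silently drag transfers back down to normal speeds.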

Congratulations, you should now have a network capable of moving files at blistering speeds between your workstations! That is, once you have upgraded your storage to handle these speeds as well.



Increase storage capacity and speed

So where do we store the hundreds of gigabytes of raw footage and DPX files we’re working with?

Because we had a lot of Macs in our office, we initially tried Thunderbolt 2 technology, since it promised read/write speeds of up to 20 gigabits, or 2.5 gigabytes, per second. So we ordered a 48TB LaCie 8Big RAID and set up a Mac Mini as a server. However, this turned into one big mess when we tried to connect our LTO tape drive, which needed an expensive Thunderbolt 2 to Mini-SAS converter (be sure to get the RocketStor 6328L version, or your LTO drive won’t work… one more expensive lesson on our side :-p). Also, the LaCie 8Big cannot handle multiple employees reading and writing at the same time, resulting in extremely slow read/write speeds.

Another issue is that Windows and Mac systems don’t really like talking to each other over a network, especially when the storage is connected over Thunderbolt. Finder and Explorer windows jam and connections fail if more than one user tries to access the drive. Seriously, if you want to save yourself months of frustration, this is the time to spend a bit more money on dedicated network storage.

We decided to go with the Synology RS3617xs+ with 72TB of storage, which we had working at full speed within an hour! I love it when something lives up to its promise. The Synology has two dedicated 10 gigabit network ports and a quad-core CPU to manage the data flow, and it is relatively cheap to expand with the RX1217/RX1217RP expansion units, which let you add up to 24 extra disks for up to 240TB of high-speed storage!

Because the Synology holds 12 disks, you’ll want to configure it as RAID 6, which keeps your data safe even if two disks fail at the same time; RAID 5 only survives a single disk failure, which is cutting it too close with an array this size. For the disks we used 6TB Western Digital Red Pro drives, which are comparable to the Western Digital Gold drives but around $80 cheaper per disk. So unless you are running a data center, we suggest the Red Pros.
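The capacity trade-off between the two RAID levels is simple to work out: RAID 6 reserves two disks' worth of space for parity, RAID 5 only one.

```shell
# Usable capacity of a 12-bay array with 6TB disks
DISKS=12
DISK_TB=6
RAID6_TB=$(( (DISKS - 2) * DISK_TB ))   # two parity stripes: 60 TB usable
RAID5_TB=$(( (DISKS - 1) * DISK_TB ))   # one parity stripe: 66 TB usable
echo "RAID 6: ${RAID6_TB} TB usable, RAID 5: ${RAID5_TB} TB usable"
```

In other words, RAID 6 costs you 6TB of usable space compared to RAID 5, in exchange for surviving a second disk failure during a rebuild, which is exactly when failures tend to cluster.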

When everything is hooked up properly, you should see read/write speeds of around 800 megabytes per second throughout your entire network (even from Mac to Windows!). This is around 8 to 9 times faster than your average gigabit network, which allows you to move files from workstation to workstation in minutes instead of hours.

Although 800 MB/s sounds very fast, it’s still not fast enough to smoothly play back 4096×4096 footage in real time on your workstations. So you first need to copy the files you are working on to an even faster disk system (hallelujah, 10 gigabit network). We chose to install 3 × 1TB internal SSDs in RAID 0, connected directly to the motherboards of our workstations via PCIe. We suggest you have a specialist build you a custom PC, because things get very tricky with PCIe lanes once you combine three internal SSDs, a 10 gigabit card, one or more video cards, and the amount of data your motherboard has to handle from your RAID 0 setup.
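Why the NAS alone can't do real-time playback becomes obvious when you compute the required bandwidth (again assuming ~4 bytes per pixel for packed 10-bit DPX):

```shell
# Bandwidth needed for uncompressed 4096x4096 playback at 60 fps
BYTES_PER_FRAME=$((4096 * 4096 * 4))             # ~64 MiB per frame
MB_PER_SEC=$((BYTES_PER_FRAME * 60 / 1000000))
echo "${MB_PER_SEC} MB/s required"               # ~4026 MB/s
```

That is roughly five times what the NAS delivers over the network, which is why the working set has to live on fast local SSDs.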

With this setup we can use the full 10 gigabit speed between workstations, transferring files at 1.25 gigabytes per second! We can even work in real time from other machines over the network.


Decrease render time

Render times meme

Now let’s look at one of the most urgent issues: those darn render times! Especially when you’re doing heavy compositing, render times grow so long that your jobs might still be running after you’ve landed a spot in a retirement home. By setting up a render farm that combines the power of multiple machines, you can cut render times from several days to just a few hours for one minute of footage (4096×4096@60fps DPX).

Someone recommended we use Muster, but its documentation wasn’t clear at all, and after three days of frustration I switched to the industry-standard render farm manager, Deadline by Thinkbox, which I had up and running within a few hours. Deadline also requires a solid network and a central server to host the license manager, so it’s important to work through the storage and network sections above first.

One piece of unwelcome news: to set up a render farm, you need a NUKE Render license, a CARA Render license, and a Deadline license per workstation! This adds up to about $1k per machine you want to use in your farm.

The steps to set up a render farm with NUKE, CARA, and Deadline:

  1. Install Foundry License Tools and NUKE licenses on server
  2. Install NUKEX and CARA VR on all nodes
  3. Install Foundry License Utility on all NUKEX nodes
  4. Install Deadline Repository & Database on server
  5. Install Deadline Client on server and on all nodes
  6. Install Deadline’s NUKE plugin on nodes
  7. Open ports in firewall on license server/repository
  8. Map paths in Deadline Monitor > Repository Config if you use both Mac and Windows
  9. Give write permission to everyone on shared drive
  10. Make sure that Deadline Slave runs on all nodes
  11. Set which node should use the NUKE Interactive license (the other nodes will use the NUKE Render licenses) by going to Deadline Monitor > Tools > Plugin Config > NUKE

This should get your farm up and running in no-time!
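Once the farm is running, jobs can also be submitted without the GUI via Deadline's `deadlinecommand`, which takes a job info file and a plugin info file. This is only a sketch: the file names, paths, and exact keys below are assumptions based on a typical NUKE submission, so check them against your Deadline version.

```shell
# Hypothetical command-line submission to Deadline for a NUKE stitch job.
# Job info file: scheduling parameters for the farm.
cat > job_info.job <<'EOF'
Plugin=Nuke
Name=superstar_stitch_sc01
Frames=1-3600
ChunkSize=50
ConcurrentTasks=5
EOF

# Plugin info file: what the NUKE plugin needs to run the script
# (SceneFile path and version number are placeholders).
cat > plugin_info.job <<'EOF'
SceneFile=/mnt/projects/superstar/stitch_sc01.nk
Version=10.5
EOF

# Actual submission (requires Deadline Client installed on this machine):
# deadlinecommand job_info.job plugin_info.job
```

Scripted submission like this becomes handy once you are queueing dozens of shots per day instead of clicking through the submitter for each one.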

While everything worked, we noticed that only a fraction of our total CPU power was being used on each render node. In fact, each node used only one CPU core, while our workstations have six! A massive waste. We figured that while NUKE supports multi-threaded rendering, CARA is a much newer tool and might not support it yet.

We then found out that when you submit a job from NUKE to Deadline, you can set the Concurrent Task Limit, which lets you run multiple render tasks on one machine at the same time. By setting the Concurrent Task Limit to 5 (the number of CPU cores we have available, minus one), almost the full power of each machine was used, cutting render times by a factor of five! Just make sure your machines have enough RAM (ours have between 64 and 128GB).
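The sizing rule above is easy to generalize for your own nodes: one concurrent task per core, minus one core left free for the OS and the Deadline Slave, then a sanity check that RAM divided across the tasks is still plentiful.

```shell
# Concurrent-task sizing for a render node (values match our workstations)
CORES=6
RAM_GB=64
TASKS=$((CORES - 1))               # leave one core for OS + Deadline Slave
RAM_PER_TASK=$((RAM_GB / TASKS))   # each NUKE render process gets ~12 GB
echo "ConcurrentTasks=${TASKS} (${RAM_PER_TASK} GB RAM per task)"
```

If `RAM_PER_TASK` gets too low for your frame size, lower the task count rather than risk the node swapping mid-render.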


Closing Thoughts

We love our job, and we love a challenge, but sometimes you just want things to work as they should. We hope this post will save you some headaches and precious time setting up a fast and powerful post-production pipeline.

In the second part of this blog post, which will hopefully launch in a week or so, we will use NUKE and OZO Creator to move some files through our pipeline: from unloading and stitching, through compositing and grading, back into our final edit, and finally distribution with our new VR app creation platform Headjack.