It’s been a while since our last blog post, not because we were out of inspiration, but mostly because this one took waaaaaaaaayyyy longer than anticipated to write. To start at the beginning, let’s go back a little over a year, to January 2016 to be precise. At that time I was seriously considering selling my firstborn to get my hands on one of the OZOs that had just come out, to avoid the process of juggling 18 GoPros on set to get a decent stereoscopic image.
At that time, we were finally capable of producing stereo video that could be viewed without causing immediate migraine because we had messed up stereo for the ten-millionth time during the stitching process. We got to the point where we thought we had it all figured out, which is when you tend to become a little overambitious.
This blog is about the things we learned from switching our entire post-production process to a proper, stable workflow which not only scales, but which also gives us the tools to do much more creative work than ever before.
We hope you will forgive us for the radio silence and hopefully this blog will help you or at least give you some insight on how and why to make the switch from AVP/Adobe to NUKE + CARA. Our next blogs will be out there much faster, pinky swear 😉
How did we end up in this mess?
In 2016, we did a lot of agency work and we were looking for an R&D project that wasn’t just a technical challenge, but which would also heavily lean on storytelling and musical guidance. We decided to write our own script, fund and produce this piece ourselves, and apply all the lessons we learned the hard way in the previous years.
So we finished up our last agency work, cleared our agendas for the next two months, and started filming in May. By that time nobody had bid on my firstborn yet, so we were still stuck with our old-but-gold stereo GoPro rig, and decided this would be her retirement job before we switched to more professional cameras for future jobs, as these would become available for rent later that summer.
Little did I know that ‘April’, which is how we decided to name the piece, would cause a complete meltdown of our post-production pipeline. By the end of the two months we were stuck with only a first draft that was nowhere near the level we had set our sights on a couple of months before (did I mention already that we got a little overambitious at the time?), and because we had already planned our next agency productions, April disappeared into the dark corners of our RAID systems, waiting to be brought back to life.
After some serious hardware and software upgrades and a little gap in the agenda of our compositor, we decided to give it another go in November/December last year. Unfortunately, April lived up to her name, and it seemed she wanted to be released in the month that carries her name (instead of January as we had planned).
Luckily we found the time to work on and off on this project in between our commercial work, and we finally managed to release April to the public!
In the next part of the blog we will describe in detail how April was created, which tools were used, and which hurdles had to be overcome. But before we move on to the more technical part of this blog (and if I still want to get into the office tomorrow and not be found dead in a ditch somewhere), I have to thank a couple of people for making this all possible: Joris Seghers, Cassandra de Klerck, Casper Corba and Henk van Engelen. Without you guys April never would have seen the light of day, you guys rock!!!
So let’s start with the conditions April was shot in. The shoot took place both inside a studio and outside in nature. We needed to find a way to switch from the inside studio shots to the outside shots without doing hard cuts on every scene. We were aware of most of the problems we would encounter during the making of a film like this, which we already laid out in a previous blog post.
As already discussed, we went hopelessly past our anticipated time frame, and with that, over budget. Luckily it wasn’t such a big deal for this project, because we did everything in house and it was an R&D project. But it taught us some very valuable lessons on how to price our next productions and keep them within the time frame agreed with our clients.
This may sound obvious, but when you decide to switch to another post workflow, don’t do it while working on a paid job. Hardware and software will fail, and on some occasions you need to re-learn skills you already mastered in your previous workflow, entirely or partially, from the ground up. Also, projects are doomed to miss anticipated deadlines because of simple “problems” you can’t fix immediately, simply because you don’t know the answer yet, so even massive overtime won’t help you here.
So unless you are very good at explaining to your clients why they should wait a couple of months longer for their production, and like to see your margins dry up faster than you can say “how the hell am I going to rotoscope this out”, just don’t even think about switching to a new workflow.
Toolbox and Hardware
For me, and probably most of you, stitching the footage is the most frustrating part of any 360º post-production (although rendering those 4K x 4K images comes in a close second), so any tool or piece of software that makes our lives a little less frustrating and speeds up the workflow seems like a solid investment.
There are some great, (semi-)automated solutions on their way, from Pixvana or Mistika for instance, and a couple we did not use at all, like the GoPro Odyssey (no top or bottom cameras though, and again those darn GoPros) or Facebook’s Surround 360 solution (did anybody actually build their camera yet?). So for the sake of this blog, we will stick with the less exotic solutions and the ones we tested. If you are interested in other solutions, I highly recommend reading this blog from the guys at Visualise, who did a great job comparing all the available hardware systems out there. For most of you, this part of the blog may sound obvious, but it will give you some insight into why we moved away (entirely or partially) from these solutions in the first place before switching to NUKE.
Multi Cam set-up + Autopano
Autopano Video Pro is probably the first software you will encounter when you start stitching for the first time. Although it is fairly good at its job, and definitely good value for money, it has some major flaws and limitations. First off, it doesn’t scale in terms of rendering, and it has very limited output options (most of them are not industry standards, or are not handled well during rendering or the rest of the post workflow). Probably the biggest issue is the lack of optical-flow stitching algorithms (although they are working on it), and the very limited control over your stitch during the stitching process itself.
This problem gets even more apparent when doing stereo. We had a complete checklist, not only for how to do a stitch, but even for things like importing the images themselves. The stereo workflow is so frustrating and error-prone that it sometimes felt like we were doing not twice the amount of work, but four or five times, before getting a decent (not perfect) stitch.
I have to be honest, we haven’t used Autopano for a while now, so hopefully the software has progressed and become less buggy, but for all the stereo work we do and the quality we expect from our stitch, we had to look for a different solution.
Pros:
- Great value for money

Cons:
- Doesn’t scale
- Limited output
- Not enough control over stitch
- No proper stereo workflow
- No review before rendering
Jaunt ONE + cloud stitching

Jaunt is definitely aware of all the problems that occur in the making and processing of high-resolution stereo 360º video, and has developed some great tools to solve them. Not only have they single-handedly pioneered VR camera design, cloud stitching and distribution, they also opened up the market for all of us and are one of the main reasons we are working in live-action VR right now.
That being said, let’s have a look at their pipeline. I personally am not a big fan of cloud stitching (yet). Although the process is very fast and you can essentially perform high-quality stitches from anywhere with a stable internet connection, these algorithms are not perfect and still require some manual fixes in post, especially around close-up faces, where the stitching algorithm seems a little off and sometimes causes weird artefacts. The complete lack of control over the stitch, colour, and output after uploading your source files is also a big downside. Oh, and it’s quite expensive too.
However, these algorithms develop very fast and will hopefully soon become a go-to solution when you don’t have the horsepower on site, or the time to stitch things on a local machine. The process of organising your files, uploading, stitching and downloading is very solid though. Next to that, their camera has the best dynamic range and colour depth we have seen in a single-camera design, thanks to the large sensors, but it is a little big and not suitable for all shoots. Probably the biggest features I am missing on their camera are live preview inside a headset during recording, and external recording on another system than the internal SD cards, but since they control both the hardware and the software (which is a big plus in this industry) I am sure they will fix this in future updates. On top of that, when you work with Jaunt they will give you access to their distribution channels, which almost guarantees your work will get some serious eyeballs. So the Jaunt pipeline is definitely not suitable for all shoots, but if you are looking for a fast all-in-one package without a very steep learning curve, and you want the best image quality from a single-cam design, Jaunt is a very solid choice.
Pros:
- Good support
- Very solid data handling and toolset
- Great dynamic range (large sensor)

Cons:
- No review on set (yet)
- No control over stitch, colour or quality
- Needs a very fast and stable internet connection
- Very large camera
- Limited outputs from the cloud
- No local rendering (cloud only)
- Limited camera availability
Nokia OZO + OZO Creator

Lastly, let’s have a look at probably the most widely used (professional) solution out there: the Nokia OZO. This is the camera we currently use most for our shoots. It’s definitely not the camera with the best dynamic range, the best frame rate, or the sharpest lenses (it needs a lot of light to avoid noise in the image), but it actually works like a camera should work and is very stable and safe, apart from its tendency to overheat in warm environments, but that’s a common problem with all these cameras.
Nokia really did an amazing job integrating their camera with their on-set and off-set software suites, and some of these features became an absolute necessity for us during shoots: direct playback during recording (an absolute must when shooting fiction), SDI out (which allows us to record not only to the SSD module itself, but also to external systems like Blackmagic recorders), proper access to camera controls through a laptop or computer, seeing stitch lines during the shoot and the ability to move them, and switching between different lens setups to avoid extra noise, just to name a few of the features we almost can’t live without now.
When moving to the local stitching process, the integration of software and hardware becomes even more apparent. The stitching quality of the camera is 90% spot on, with a decent amount of stereo. Although the OZO shoots only partial stereo, the falloff to mono at the back is not as apparent as in previous software versions, and for most jobs it is more than sufficient. The OZO Creator software supports proper professional output formats like EXR and DPX, and even supports depth maps, which will make integrating 3D objects in post much easier.
Last but not least, the documentation on their website is very good and the software progresses extremely fast compared to other solutions. It is also the only live stitching/broadcasting solution I would be comfortable selling to my clients, because it integrates neatly into existing broadcasting protocols and workflows.
Pros:
- Small camera
- Live preview on set
- Live streaming
- Easy setup
- Free software suite
- Local rendering
- Very fast stitching, both preview and HQ
- Very good stitching quality
- Double storage (very safe)
- Integrated ambisonic mic
- Depth maps
- A lot of outputs to professional formats
- A lot of options to stitch
- Software improves very fast
- Good availability for rent

Cons:
- Limited dynamic range
- Partial stereo
- Smaller sensor (noise)
- Overheating (specific conditions)
- Support limited to forum only
As mentioned, there are some great solutions out there, but none of them are 100% perfect, so you do need a backup solution when everything falls apart. Also, when the cameras listed above are still not sufficient and you want to work with cinema-quality cameras (Radiant Images just announced their AXA camera solution, which looks very promising, or Facebook’s ridiculously cool 6DOF camera), or you’re still stuck with stereo GoPro footage from more than a year ago like we were, there is currently only one powerful software package that is up for the job: NUKE + CARA.
Before moving further in this blog I advise you to read our previous blog about setting up the hardware pipeline so you know what gear we used during the making of April, and to follow these tutorials to get yourself familiar with the tools NUKE offers.
Why switch? Node vs. Layer
So before we made the leap to NUKE, we worked, like most people, in a combination of the Adobe suite with various plug-ins like Mettle, Mocha and Dashwood. Although these are very good and improve very fast, they never got to a point where we could completely rely on them. They did not always allow us to do what we wanted creatively, and were often buggy and unstable. Especially the Adobe suite does not seem to like large video formats and became very unstable under heavy load, which resulted in countless crashes and more than a few failed renders. Also, the lack of proper stereo tools and of HMD preview before rendering was starting to hold us back dramatically.
By the time we started working on April we discussed the possibility of switching our entire workflow to NUKE + CARA, which we already had been testing during their beta program. However, this required us to completely rethink our post workflow. After a month of total frustration inside the Adobe workflow, we were ready to throw everything we knew overboard and make the switch.
Boy, little did we know we were in for a treat! We had to make some ridiculous hardware investments, on top of the serious sum for a NUKE license, and had to completely restructure our internal network to get NUKE running the way we wanted it to run. But after a month of IT struggles, and with support from The Foundry, we got it running and ready to go.
Although cheap and a relatively good production suite, the Adobe workflow has one major disadvantage: it is based on a layer system. This means you can only change the parameters that are available in each specific layer, and you stack different layers on top of each other to create the final image.
NUKE, on the other hand, is node based: your image flows down a stream of nodes, and each node manipulates the channels of the image before passing it further downstream to produce the final result. So essentially, if you want to change something at a later point, you can just move up the stream and put in an extra node that makes the change. This means you can completely go back in time, or make different versions of your work, without screwing up all the work you have already done. It also allows for more complex compositing by manipulating the different channels in your image (most of the time the alpha channel) to produce the final image. For a better explanation of node- vs layer-based systems, click here.
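The node-vs-layer difference can be illustrated with a toy sketch in plain Python (this is an illustration only, not NUKE’s API): treat an image as a dict of channels and each node as a function in a chain, so “going back in time” is just inserting a node upstream.

```python
# Toy model of node-based compositing (illustration only, not NUKE's API).
# An "image" is a dict of channels; each node transforms the image and
# hands it to the next node downstream.

def grade(gain):
    """Node factory: multiply the rgb channel by a gain."""
    def node(img):
        out = dict(img)
        out["rgb"] = [v * gain for v in img["rgb"]]
        return out
    return node

def invert_alpha(img):
    """Node: invert the alpha channel."""
    out = dict(img)
    out["alpha"] = [1.0 - v for v in img["alpha"]]
    return out

def render(graph, img):
    """Push the image through the node graph, top to bottom."""
    for node in graph:
        img = node(img)
    return img

graph = [grade(2.0), invert_alpha]

# "Going back in time": insert an extra node upstream without touching
# the nodes (and the work) further down the graph.
graph.insert(1, grade(0.5))

result = render(graph, {"rgb": [0.1, 0.2, 0.3], "alpha": [1.0, 0.0, 0.5]})
```

In a layer stack, a change like this would mean re-doing or re-ordering everything above; in a node graph the downstream work is untouched.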
Next to the layer vs node based advantage, NUKE has a couple more tricks up its sleeve which make it the go-to tool for more complex compositing or retouching. It has a proper stereo workflow due to the fact that CARA is built partly on the technology of OCULA, which has been used in regular 3D productions for quite some years now. It also scales and behaves well in render farms, especially in combination with the Deadline render manager, it has proper HMD preview, and gives complete control over the stitching process.
You also have everything in one package: stitching, VFX, compositing and grading tools which all surpass their Adobe counterparts in terms of quality. So there is no need to switch between different programs or workflows. There are a lot of plug-ins and gizmos available, and last but not least, support is very good! Especially when you work in VR, they really listen to your input for upcoming features to improve their software.
Downsides are that a NUKE license is very expensive, it requires some serious hardware investments, the floating license is a pain to set up, and it has a very steep learning curve. If you have never worked with NUKE before, like us, a licence at FXPHD is an absolute necessity to get you started, as they have some amazing tutorials on VR compositing with NUKE.
So if you have money and time to burn, you want absolute control over every step in the process, and you serve clients that both appreciate and have the money to spend on your extra effort, make the switch. At first you will occasionally think: how did I get myself into this pile of shit (trust us you will), but when you really get the hang of it you will never go back.
Testing the waters
As mentioned earlier in this blog, we do not recommend switching when you are working for a client, but as you have probably already noticed, we are a little bit stubborn so we did it anyways.
After our first failed attempt at April, we started working on SuperstarVR: a 12-minute fictional film about a superstar DJ, which would premiere at the Cinekid film festival five weeks after filming. This piece was shot on the OZO, so we could do most of the stitching in OZO Creator, which saved us some valuable time at the beginning of the post-production process.
Because we had some extra time, we decided to move some of our post work to NUKE, just to get ourselves familiar with the workflow and tools. As mentioned before, the OZO doesn’t really shine in terms of dynamic range, and especially on a couple of the scenes we shot outdoors, some images were completely blown out. So we decided to shoot some clean plates to boost dynamic range later in post.
In some other scenes we tried out different tools, like the rotopaint and grading tools, and learned how to set up a script and render it within NUKE using our local render farm. Of course this is all easier said than done (this project also caused a serious workflow meltdown), and five weeks of sleepless nights later we were luckily able to deliver a final product that looked good, especially given the amount of work (more than 12 scenes, all needing some form of compositing or retouching and grading) that went into it and the tight time frame we had.
This project taught us some very valuable lessons on how to set up a proper project structure, and allowed us to estimate how long specific tasks take in this new workflow. This way we could at least make a plan that would work without us having to do double shifts for five weeks straight. We were getting more confident. We’ll now describe our NUKE workflow in detail.
Setting up a structure and choosing the right format
One thing that became very apparent during SuperstarVR is that we had to rethink how we structured our work. Before, we could get away with SS1_final.mov, SS1_final2.mov and SS1_finalfinal.mov, but with NUKE these finals account for thousands of separate frames that end up as multiple terabytes of data. The Superstar project was about 20TB in total, and April, because we also did the stitching in NUKE, accounts for more than 40TB of project data before rendering the final video. You get the point: you need some sort of data plan or structure in place if you don’t want to drown in files later on.
NUKE generally works in frames. This has one very big advantage: you don’t have to render out complete scenes to fix just one small part of your video, you simply replace a few frames. The most common formats for frame sequences are DPX and EXR. We use EXR, because EXR files change in size depending on the amount of data in a specific frame, whereas DPX assigns a fixed data amount to each frame, which in the end generates unnecessarily large files.
NUKE also handles EXR better than DPX in terms of performance, and EXR can hold a lot more information in extra layers within the file itself, for instance depth information or extra colour info. EXR files can have hundreds of layers, so the amount of information you can bundle is almost limitless.
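To make the data plan concrete, here is a minimal sketch in plain Python of the kind of folder and frame-naming scheme we mean. The specific names (scene/stage/eye folders, six-digit frame padding) are our illustration, not a standard; adapt them to your own plan.

```python
import os

# Hypothetical layout: one folder per scene, per stage, per eye, with the
# frame number zero-padded in the file name.
STAGES = ["01_stitch", "02_clean", "03_final"]
EYES = ["left", "right"]

def frame_path(root, scene, stage, eye, frame):
    """Build the path of a single EXR frame inside the structure."""
    name = "%s_%s_%s.%06d.exr" % (scene, stage, eye, frame)
    return os.path.join(root, scene, stage, eye, name)

def make_structure(root, scene):
    """Create the stage/eye folder tree for one scene."""
    for stage in STAGES:
        for eye in EYES:
            os.makedirs(os.path.join(root, scene, stage, eye), exist_ok=True)
```

With a scheme like this, a render of a given stage and eye always lands in a predictable place, which is what makes replacing a frame range later on cheap.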
Now that you have chosen the right format, let’s talk about how those thousands of frames move through your pipeline and connect at the different stages of the post process. Essentially, we split every scene into three different stages:

- Stitching
- Cleaning comp
- Final comp + grade
In theory you could do all these steps in a single NUKE script, but then it becomes way too slow to work with (remember, you’re dealing with at least 4K x 4K images for stereo files), so you need to divide the work over more than one rendering step.
Because of this necessary evil, the output of one stage is the input of the next stage, and every stage gets its own NUKE project file. What this means is that you can go back in time, even when you are already in the final comp.
Let me give you an example. Say I have a sequence of 3000 frames; I already did the stitching in stage one, did rig removal and sky replacement in stage two, and am now working on the final comp in stage three. I notice a stitching error between frames 1500 and 1800, and I don’t want to redo the complete video. So I open the project file of stage one, fix the error, render out frames 1500 to 1800, and replace those files in the sequence of 3000 frames that served as my input for stage two. I then fire up the project file of stage two, render out frames 1500 to 1800, and replace those in the sequence of 3000 frames that served as my input for stage three. If you structured your files correctly, this will take you about an hour. If you did not take the time at the beginning of your project to set up this structure correctly, it could easily end up costing you a day or more.
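The replace step in the example above is easy to script. A sketch in plain Python (the frame-name pattern is hypothetical; NUKE does the actual re-render, this only swaps the files into the next stage’s input sequence):

```python
import os
import shutil

def patch_frames(fix_dir, stage_input_dir, first, last,
                 pattern="frame.%06d.exr"):
    """Copy re-rendered frames [first, last] from a fix render over the
    matching frames in the next stage's input sequence."""
    patched = []
    for f in range(first, last + 1):
        name = pattern % f
        shutil.copy2(os.path.join(fix_dir, name),
                     os.path.join(stage_input_dir, name))
        patched.append(name)
    return patched

# e.g. patch_frames("renders/sc01/stage1_fix", "renders/sc01/stage2_in",
#                   1500, 1800), then repeat the render+patch for stage 2 -> 3.
```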
To speed up the post workflow itself, you can let your editor work on an edit based on low-res stitched files, which we also render out in stage one. This gives your editor time to iterate more often on the edit and to communicate with the director or client, for instance to fix things like timing. At the same time, your compositor can work on finishing the individual scenes themselves. When stage three is finished, you just replace the low-res stitches (which run smoothly in real time on any editing machine and software) with your high-res comp files, et voilà! You just saved yourself a couple of weeks and a lot of headaches, because those 4K x 4K EXR sequences won’t play back in any editing software, no matter how fast your computer or network storage is.
Let’s start with stage #1, the job no one likes to do: stitching!
1. Stereo stitching in NUKE
While OZO Creator produces good stitches, it was useless for the GoPro footage we shot for April. So, since we now had at least some idea how to work with NUKE, we dove into the stitching process itself. This is probably where your stitching experience in AVP comes in handy, because the process is somewhat more complicated, but based on the same principles. So let’s start with importing the footage.
If your camera rig is not genlocked, make sure to sync your footage before stitching, because NUKE has no syncing options; you will have to do this manually by setting the starting frame of each imported video. Also make sure that GPU availability is checked in your project settings. The stitching process in NUKE is built around three different tools:
- Solver
- Color Matcher
- Stitcher
Let’s go through them one by one.
The Solver aligns your imported camera footage and works out, by building a virtual model of your camera setup, where each camera is located inside your rig. You can specify some additional hardware info, like the focal length of your lenses and the size of your rig. When using the OZO, for instance, you can even import metadata about the camera itself directly into NUKE; we had some problems with this feature in the past, but since the latest update it seems to work well.
Now it’s time to improve the solve of your cameras, which is a delicate dance of several different parameters and a repeated match > solve > refine pass to improve the quality of the solve. You can also play with both the strength (how aggressively NUKE forces overlapping pixels together to create a stitch) and the converge (the distance of your stitch). After you’ve repeated the match > solve > refine process, you should see an error threshold of about 7, which is often good enough to move on to the next step.
The Color Matcher is very similar to the workflow you are used to in AVP, and makes sure all of your cameras have the same exposure and colour space. 95% of the time it’s spot on, and for the remaining 5% it’s very easy to set parameters for each camera individually by using an extra grade node on the source. When you have footage that changes colour or exposure over time, you can also let the Color Matcher calculate the change in colour/exposure over time, or even at every frame if you like. Of course, this will seriously increase the render time of your stitch.
Now let’s move on to the final part of the stitching process, the Stitcher itself. The Stitcher allows for more manipulation of the stitch, based on the input of the Solver. One important parameter is the converge (this value overrides the one set in the Solver). It could be that your footage has such a clear distinction between fore- and background that you have to do two stitches: one with the converge at the distance of the foreground, and one at the distance of the background, and then later use some clever roto-painting to put them on top of each other to get a perfect stitch.
Another important value is the number of steps in the Stitcher. This value determines how often NUKE recalculates the stitch points when performing the stitch. So essentially, when you have a lot of moving pixels and you don’t want a ghosting effect caused by objects moving over the stitch lines, let the Stitcher do a recalculation every frame. When you have a more static scene, move the number up to save some valuable render time.
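Our reading of the steps value (the exact semantics here are our assumption): with steps = n, stitch points are re-solved every n frames, so steps = 1 recalculates on every frame. A one-liner makes the trade-off visible:

```python
def recalc_frames(first, last, steps):
    """Frames at which stitch points would be recalculated, assuming the
    Stitcher re-solves every `steps` frames (steps=1 -> every frame)."""
    return list(range(first, last + 1, steps))

# A 3000-frame scene at steps=1 means 3000 re-solves; at steps=25 only 120,
# which is why static scenes render so much faster with a higher value.
```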
Once you have calculated the stitch, it’s also possible to leave certain parts of the source images out of the stitch by manipulating the alpha layer (quite similar to drawing masks in AVP). The downside is that you lose stereo when you leave too many lenses out of your stitch. Inside the Stitcher you can also find some dedicated stereo enhancements, like falloff to mono at the poles, or stereo disparity to determine the IPD. Also, using the Anaglyph node is a perfect way to make sure your stereo aligns properly.
When you are satisfied with your stitch, it’s time to render out the final stitched sequence before moving on to the cleaning process. We usually start with about 100 frames, to test whether our farm gives errors and whether we set up our Write nodes properly. When working in stereo, make sure you render out one sequence for the left eye and one for the right eye, because you need them as separate inputs for the rest of the comp process (so no over/under!). Also make sure the side-by-side node, used for reviewing in a headset, is removed from your script or disabled when rendering, because sometimes it will render out the left eye twice instead of left and right separately.
When you have reviewed the first 100 frames, it’s time to render out the complete EXR image sequence. At this point we also render out a low-res proxy, which will be replaced with the final image sequence at the end of the post pipeline. The high-res left- and right eye renders will serve as input for the next stage, cleaning the image itself.
2. Cleaning comp: rig removal & sky replacement
In this phase of compositing we start cleaning the image before moving on to the final comp. Let me start off by saying there are 1001 ways to comp in NUKE and none of them are right or wrong. Although you could do everything from stitching to final comp in one script, this is not advisable, because the script itself will get too heavy and will take ages to render, as we mentioned before.
Since we stitched our images in the previous stage, it is now time to remove the rig at the bottom and do some roto-cleaning on places where the stitch is still visible. In this stage we also do sky replacements and process our clean plates to digitally fake the dynamic range of our cameras or to further clean the image itself.
Here are some examples from April where we cleaned the complete image and replaced some blue screens to set up the basic scene.
Before moving on to the heavier comp work or transitions, we also render out the left- and right-eye EXR image sequences to serve as input for the following stage.
Using the Disparity Generator
When working in stereo, there is one node in the NUKE workflow you can’t live without, and that is the Disparity Generator. The Disparity Generator calculates the offset between a pixel in the left eye and the corresponding pixel in the right eye. This node allows you to do some rotoscoping on, for instance, the left eye, and then copy it to the right eye with the correct stereo offset. This essentially means you are doing the comp for both eyes while working on just one (!), which dramatically speeds up your workflow. As with all tools, this one is not 100% perfect all the time and you still have to do some manual retouching to get the image just right, but it’s an invaluable tool nevertheless.
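Conceptually, what such a disparity-driven roto copy does can be sketched in a few lines of plain Python. Real disparity maps are sub-pixel and per-view; this toy version rounds to whole pixels and is only meant to show the idea of shifting work from one eye to the other.

```python
def shift_mask_by_disparity(mask, disparity):
    """Toy sketch: move a left-eye mask to the right eye by shifting each
    covered pixel horizontally by its per-pixel disparity value."""
    h, w = len(mask), len(mask[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if mask[y][x] > 0.0:
                nx = x + int(round(disparity[y][x]))
                if 0 <= nx < w:  # pixels shifted out of frame are dropped
                    out[y][nx] = mask[y][x]
    return out

# A mask pixel at x=1 with a disparity of +1 ends up at x=2 in the other eye.
left_mask = [[0.0, 1.0, 0.0, 0.0]]
disparity = [[1.0, 1.0, 1.0, 1.0]]
right_mask = shift_mask_by_disparity(left_mask, disparity)
```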
3. Final comp & transitions
So we finally arrived at the last stage of our comping process, before grading and replacing the files in the proxy edit. For April we wanted a more natural way of transitioning between scenes than doing hard cuts all the time, so we decided to make transitions by blending two different scenes in a single comp. By taking the last frames of the first scene and the first frames of the second scene, you can treat the transition as a separate comp. Here are some examples.
A couple of things to keep in mind though when using this method:
- Use the disparity generator to your advantage
- Don’t use the rotopaint tool too often, because it’s very heavy on the render time (that is why you do cleaning in a separate phase)
- Group your nodes by function to make your script easier to read
- Be careful when using stereo that you avoid strain caused by stereo elements sticking through one another
- Sometimes it is wise to make gizmos for tasks that you have to do more than once, or in every scene/project.
A gizmo is essentially a group of nodes that do one specific task structured under a newly created node. This can dramatically reduce the time required for setting up specific tasks within your script. You can download complete gizmos made by other people in the community from Nukepedia.
The insanely cool thing about NUKE is that it already has all the professional compositing tools you would expect from a $10,000+ software suite, and you can also use these to work in 360 formats, sometimes with only minor tweaks or by using the spherical transform node. This means you can leverage The Foundry’s decades of experience in making the best available compositing tools out there, and immediately use them in your post workflow without having to wait for specific 360/3D comp tools to be developed.
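The spherical transform trick works because a latlong (equirectangular) frame is just a flat parameterisation of the sphere. A minimal sketch of that mapping, from normalized latlong coordinates to a 3-D viewing direction (conventions for axis orientation vary between tools; this one is an assumption):

```python
import math

def latlong_to_direction(u, v):
    """Map normalized equirectangular coords (u, v in [0, 1]) to a unit 3-D direction.

    u = 0.5 is straight ahead, v = 0 is the zenith (straight up),
    v = 1 is the nadir (straight down).
    """
    lon = (u - 0.5) * 2 * math.pi   # longitude: -pi .. pi
    lat = (0.5 - v) * math.pi       # latitude:  pi/2 .. -pi/2
    return (math.cos(lat) * math.sin(lon),   # x: right
            math.sin(lat),                   # y: up
            math.cos(lat) * math.cos(lon))   # z: forward
```

Any regular 2-D tool can be applied in a stable orientation by resampling through this mapping and back, which is essentially what a spherical transform node does for you.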
One last thing we encountered during the final comp process is that it’s OK to exaggerate comps and colours a little. Due to the limited resolution of the current generation of VR headsets, and because you inevitably lose detail in the final compression for distribution, super fine details tend to be less visible when reviewing inside a headset, or disappear completely at lower bitrates.
4. Review & grading
Reviewing inside the headset is probably the NUKE feature we use most, and one of the main reasons we switched about a year ago. In other solutions we were not satisfied with the playback inside headsets during post at all. Most of the time stereo was completely off, or colours looked way different inside a headset than on screen or compared to our final renders, due to the decode protocols used in players like the one from GoPro (I believe GoPro fixed the colour issues in their latest release though).
Also, these reviewing tools were super unstable, or hard to get working in the first place. NUKE, on the other hand, has a neat integration with both the Vive and the Rift. It’s nothing more than hooking up the HMD before starting NUKE, checking some boxes in the project settings, and adding a side-by-side node to your script.
The biggest advantages of using the review feature in NUKE are that it’s relatively colour neutral (of course you still have slight differences caused by the different displays used in each headset), it has the best and most natural stereo offset we have seen in any player, and it can play back frames at native resolution. Although you can watch frames at full resolution inside the headsets, don’t expect live playback of your comped scenes, not even at ⅛th of the final quality when pre-rendered. We heard from The Foundry that NUKE Studio is far more optimised for playing back image sequences, but since we work with NukeX we can’t tell for sure.
The need for proper playback becomes even more urgent when moving to the grading process itself. We have graded on various systems and software in the past, ranging from the Adobe suite to professional solutions like Baselight with Dolby grading monitors, but all these solutions lack one crucial feature: playback in a VR headset. Even on those high-end systems, colours seemed way off compared to what we saw when we put on the headset. Again, this is due to the colour space and light emission of the displays themselves, and the distance from your eyes to the display. So to do proper grading for VR, you need to be able to look at your work in a headset while grading, and as we mentioned before, NUKE can do this right out of the box. With this workflow, your colourist can sit next to you and make changes in real time while you review inside the headset.
The biggest difference between grading a normal 2D image compared to a latlong image, is that in a latlong you are essentially grading different parts of a complete scene.
We start by applying a basic grade to determine the overall look of the image. Then we grade the complete image from left to right, treating every part as a separate grade with its own grading masks and keyframes. This way, your whole scene gets the attention it needs, and you can create even more depth in your image by making objects pop out, or hiding them, by grading them a little lighter or darker than the rest of the image.
Although this is probably one of the most overlooked processes in the overall post pipeline, for us it feels a little like restoring an old painting to its former glory. By properly grading your image, or at least giving it some attention, you can give it the feel that fits the story, which can determine the mood of the complete scene. And by using different headsets for review, you can make sure your image looks consistent on each of them. Although the grading tools in NUKE are quite good, they can’t compare with solutions like DaVinci Resolve or Baselight. We are testing those solutions right now and will cover grading in more detail in our upcoming blogs.
Video color range
So you are done grading your image in NUKE and are ready to review your work in one of the available players out there, like the GoPro VR Player or a different solution. One thing becomes very apparent very quickly: the colours in these players are not always the same as what you saw in your headset in NUKE…
For storing color values in a video file, video codecs have multiple options. Some are simply competing standards, but most have some effect on the color detail and on how the video looks on different displays. One such option is the color range, which can be set to either full or limited range. Limited color range is a TV standard, whereas full color range is used on PCs (and mobile hardware).
Limited range has become the de facto standard for most video content. However, when a limited range video is not corrected by the VR player, or a full range video is assumed to be limited range, black and white in the video will appear grey and washed out when viewed.
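The mapping between the two ranges is a simple linear scale: for 8-bit luma, limited range squeezes 0–255 into 16–235. A minimal sketch (the 16–235 figures follow the common TV-range convention for luma; chroma uses 16–240):

```python
def full_to_limited(y):
    """Map an 8-bit full-range luma value (0-255) to limited/TV range (16-235)."""
    return 16 + round(y * 219 / 255)

def limited_to_full(y):
    """Inverse mapping; clamps so out-of-range ('illegal') values stay in 0-255."""
    return min(255, max(0, round((y - 16) * 255 / 219)))
```

This also shows where the washed-out look comes from: if a player assumes a full-range file is limited range, it expands 16→0 and 235→255 a second time, and if it assumes the opposite, your pure black (0) is displayed as the grey that 16 maps to.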
In an attempt to preserve as much color information as possible, we used to render video at full range. Those videos displayed washed out in some VR players, like the GoPro VR Player, but not in others. This inconsistency can really mess up your workflow, so the final render should always use the limited color range for maximum compatibility, even though (the old version of) the GoPro VR Player does not decode limited range videos properly. This is an issue with the player, not with your source file.
Oculus Rift black levels
Another thing to keep in mind when reviewing your grade is that the Oculus Rift HMD suffers from some color space issues around completely black pixels. The issue is noticeable on the edges between pure black pixels and slightly off-black pixels, like in a gradient to black. The Rift’s OLED display turns off completely black pixels entirely, resulting in perfectly dark regions. However, almost black pixels are displayed quite brightly by the Rift, resulting in a very noticeable edge in a gradient to black.
Reports online suggest this affects some Oculus Rift headsets and not others, and might also be dependent on specific settings on the PC the Rift is connected to. This means that dark video content can look quite different between different PCs and different Rift headsets.
In order to combat this and achieve a consistent viewing experience, we have changed the way video is rendered in Windows apps built with the Headjack VR app creation platform. Completely black pixels are rendered slightly brighter. This means dark video in the Rift does not display as completely black, but there is no longer an ugly edge where video goes from dark to light.
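We won’t publish the exact numbers Headjack uses, but the idea can be sketched as a small linear lift. The floor value below is a made-up placeholder; the point is that a rescale keeps the gradient smooth where a plain clamp would not:

```python
def lift_blacks(value, floor=2):
    """Remap an 8-bit value so pure black becomes `floor` while white stays 255.

    A plain clamp (max(value, floor)) would flatten all near-black shades
    to the same level, recreating a visible edge. This linear rescale keeps
    the ramp monotonic, so there is no discontinuity between 'off' and
    'almost off' pixels on the Rift's OLED panel.
    """
    return floor + round(value * (255 - floor) / 255)
```

The cost is a tiny loss of contrast at the very bottom of the range, which in practice is much less noticeable than the hard edge it removes.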
Final render & distribution
We are finally at the stage of the final render and our distribution renders. I am going to keep this short, because this blog is becoming waayyyyy too long. We render the following files, usually in the following order:
- A high-res DPX or EXR sequence of the final comps & grades
- A CineForm file of the high-res edit, which serves as our master file
- A high-bitrate 4096×4096 H.265 file (the Headjack setting in our recently updated VREncoder tool), using the CineForm file as input; this has a much more manageable file size
- We then upload this H.265 file to Headjack, which transcodes it to a crisp 3840×2160 VP9 file for distribution and playback in our apps
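If you roll your own encodes instead of using VREncoder, the H.265 step can be approximated with FFmpeg. A hedged sketch that just builds the command line (the bitrate, filenames, and square 4096×4096 scale are placeholders, not our exact settings):

```python
import subprocess

def h265_encode_cmd(src, dst, bitrate="60M"):
    """Build an FFmpeg command for a high-bitrate 4096x4096 H.265 render.

    `-color_range tv` tags the output as limited range, matching the
    advice above about maximum player compatibility.
    """
    return [
        "ffmpeg", "-i", src,
        "-c:v", "libx265",          # H.265/HEVC encoder
        "-b:v", bitrate,            # high bitrate for a mezzanine-quality file
        "-vf", "scale=4096:4096",   # square layout, e.g. for top-bottom stereo
        "-pix_fmt", "yuv420p",
        "-color_range", "tv",       # limited/TV range
        dst,
    ]

# To actually run it (requires ffmpeg with libx265 on your PATH):
# subprocess.run(h265_encode_cmd("master.mov", "out_h265.mp4"), check=True)
```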
If you prefer to fiddle with the command line, you can use the powerful FFmpeg for your renders, in which case our FFmpeg cheat sheet for 360º video might come in handy. In our next blog post we will talk in far more detail about next-gen codecs and playback, but for now I hope this crazy long blog post has given you some insight into the NUKE + CARA workflow, and an idea of what is in store for you if you are considering making the switch.
Thanks for reading this far and for your support! Let us know in the comments what you think 🙂