HDR Video Part 4: Shooting for HDR

To kick off our new weekly blog here on mysterybox.us, we’ve decided to publish five posts back-to-back on the subject of HDR video.  This is Part 4: Shooting for HDR.

The first HDR project I graded was a set of space shuttle launch shots, filmed on the RED ONE camera by NASA. The footage wasn't filmed with HDR in mind. In fact, HDR wasn't anything close to 'a thing': the shuttle last flew in 2011, and Dolby didn't present their proposal of "Perceptual Signal Coding for More Efficient Usage of Bit Codes" (what we now call PQ) until 2012. And yet despite the age of the footage and the lack of consideration for HDR when it was filmed, I had no problem grading the footage into HDR space and getting a pretty awesome image out of it.

HDR Grade from NASA Archive Footage, shot on Red One, circa 2011

I bring this up simply as a point of perspective; while I’m going to offer some suggestions here to help make your footage better in HDR, it’s important to realize that, in general, all footage is better in HDR, regardless of its age or how it was filmed.  That being said, there are things you can do while filming to best prepare for an HDR finish, which is what we’re going to discuss here.


The Kit

Choosing a digital camera today is like choosing a film stock used to be - each one responds differently and can create a slightly different look. This doesn't change when you're shooting HDR. Beyond a certain point, your camera choice doesn't matter; camera choice is a creative (or budgetary) decision.

But there are a few features that are important, if not essential, when planning an HDR shoot. Think of these as the minimum level of kit needed; I'll outline those first. Then there are niceties you can add on that will make your life a little easier; I'll outline those next.

First and foremost for HDR recording: never, ever, EVER shoot with a standard Rec 709 / BT.1886 / Gamma 2.4 contrast curve.

It's possible to grade footage that uses one, but the results are pretty poor. There's too much clipping in the darks and whites, and the loss of detail kills you. Linear is okay if it's at a high enough bit depth; a LOG format is better, but native RAW is really the best. LOG and RAW preserve more of the detail through the darks while retaining a better roll-off into the whites, which makes HDR grading easier, or even possible at all.

When you're shooting with HDR mastery in mind, use the highest bit depth (and bit rate) available. If you're using a camera that stores its footage in a compressed 8 bit format, you're doing yourself a world of disservice when it comes to grading in HDR. The same reason that all HDR formats require a 10 bit minimum applies to the camera: 8 bits causes stepping. If you've shot in a LOG format, it's possible to get away with 8 bit sources, but you can't push the footage as far as it needs to go, and you're very likely to see stepping in the whites.
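
To make the stepping problem concrete, here's a minimal Python sketch (the ramp values are illustrative) that quantizes the same subtle highlight gradient at 8 and 10 bits and counts how many code values are left to describe it:

```python
# A minimal sketch of 8 bit stepping: quantize a subtle highlight ramp at
# 8 and 10 bits and count the distinct code values left to describe it.
import numpy as np

gradient = np.linspace(0.40, 0.50, 1920)  # a subtle 1920-pixel ramp, normalized 0-1

for bits in (8, 10):
    levels = 2**bits - 1
    quantized = np.round(gradient * levels) / levels
    print(f"{bits}-bit: {len(np.unique(quantized))} distinct code values")

# 8-bit:  ~26 code values -> each step spans ~74 pixels and reads as banding
# 10-bit: ~103 code values -> steps are ~4x finer and blend smoothly
```

Push that ramp around in an HDR grade and those 26 steps only get wider.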

If an 8 bit camera is your only option and you still want or need HDR, consider using an external ProRes, DNxHR, or other high bitrate, intraframe, 10+ bit per channel recorder. You'll save yourself a world of hurt in post.

Of course, the ideal format to shoot in is a camera RAW format, like Cinema DNG, RED RAW, ARRI RAW, Sony RAW, Phantom RAW, etc.  You’ll love yourself in HDR post for using RAW, even if you typically prefer the turnaround speed offered by a ProRes or DNxHR workflow.  Here’s why:

  1. Most ProRes and DNxHR workflows normalize the RAW footage into a LOG format, which is fine, but they collapse the bit depth range used to 10 bits. With RAWs you typically have access to the full 12, 14 or 16 bits offered by the sensor! Your grading application typically uses even higher bit depth internals for color processing, so even if you’re only grading in 10 bit at the display, having the extra 2, 4, or 6 bits per channel, per pixel, is a major advantage in grading latitude, something I can’t emphasize enough as being important to HDR.

  2. ProRes and DNxHR workflows typically normalize the camera primaries into Rec. 709 color space. Most professional cameras use primaries that are wider than Rec. 709, but the video signals they output are typically conformed to Rec. 709 for 'no brains' compatibility - that is, it'll just work. While this isn't a problem per se for HDR, it does restrict the volume of colors available for grading, and it will require a LUT or a manual shift of the primaries in the grade to match HDR's BT.2020 or DCI-P3 space (see the matrix sketch after this list).

    Typically, RAW formats record their data using the camera's native RGB values, and the RAW interpreter in your color grading program can then renormalize them to whatever target space you're working in. Since BT.2020 is the widest of all display spaces, you'll be better able to reproduce what your camera is already capturing.

  3. RAW formats often provide highlight and lowlight recovery not available in fixed video formats, even when using LOG or linear recording. The RAW formats here give you access to as much information as the sensor actually recorded, which is invaluable in post. Because of the extended dynamic range of the HDR environment, you’ll want as much of the highlights as you possibly can get, and may even at times push further into the noise floor, because the noise is less perceptible in the deep HDR darks.
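
To make the second point concrete, here's a minimal Python sketch of the renormalization a RAW interpreter does for you, using the linear-light 3x3 matrix published in ITU-R BT.2087 for converting Rec. 709 primaries to BT.2020 (the function name is mine; input is assumed to be linear, not gamma-encoded, RGB):

```python
# A sketch of remapping linear Rec. 709 RGB into BT.2020 primaries using the
# 3x3 matrix from ITU-R BT.2087.
import numpy as np

M_709_TO_2020 = np.array([
    [0.6274, 0.3293, 0.0433],
    [0.0691, 0.9195, 0.0114],
    [0.0164, 0.0880, 0.8956],
])

def rec709_to_bt2020(rgb_linear):
    """Map linear Rec. 709 RGB (shape (..., 3)) into BT.2020 primaries."""
    return rgb_linear @ M_709_TO_2020.T

# A fully saturated Rec. 709 red lands well inside the BT.2020 red axis,
# showing how a 709-conformed signal uses only part of the BT.2020 volume:
print(rec709_to_bt2020(np.array([1.0, 0.0, 0.0])))  # -> [0.6274 0.0691 0.0164]
```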

If you’re shooting at the professional level, and using professional cinematography equipment, what you already have is probably okay for shooting in HDR.  Cameras like the RED Epic or Weapon (or pretty well all of RED’s cameras because of their RAW format), Sony’s F55 or F65, Arri’s Alexa, or any camera in the same class are perfect.  As are most film sources (S35mm or greater, for resolution), when captured with a high bit depth scanner.  Using any of these, you’ll be well suited for HDR, assuming you follow the format’s best shooting practices, which I’ll discuss in the technique section below.

If you're using a prosumer or entry level professional camera, taking a few preparatory steps to set up how you'll actually capture the image can mean the difference between footage that can be used in HDR mastering and footage that can't.

So in summary, when choosing your kit for HDR, consider:

  1. 16 bpc > 14 bpc > 12 bpc > 10 bpc > 8 bpc: 10 bpc should be your minimum spec.

  2. RAW > LOG > Linear > Gamma 2.4: avoid baking in your gamma at all costs!

  3. Camera Native / BT.2020 > DCI-P3 > Rec. 709 color primaries

  4. RAW > Compressed RAW > Intraframe compression (ProRes, DNxHR, AVC/HEVC Intra) > Interframe compressed (AVC/H.264, HEVC).

As a quick side note, some cameras offer a SMPTE ST.2084 signal or other HDR signal out of the camera for use with an external recorder. These are a useful alternative to recording externally in a LOG or gamma format: they can lead to faster turnaround times, but they commit you to an HDR grade (or a dedicated conversion step out of HDR), versus footage that's ready to be graded in HDR while keeping the option of grading normally.
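
For reference, ST.2084 (PQ) is fully specified by a handful of published constants, so you can see exactly how such a signal allocates its range. Here's a minimal Python sketch of the encode direction, mapping absolute luminance in nits to a normalized signal value:

```python
# A sketch of the SMPTE ST.2084 (PQ) inverse EOTF: absolute luminance in
# nits -> normalized 0-1 signal, using the constants from the standard.
m1, m2 = 2610 / 16384, 2523 / 4096 * 128
c1, c2, c3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def pq_encode(nits):
    """Encode absolute luminance (0 to 10,000 nits) as a PQ signal value."""
    y = (nits / 10000) ** m1
    return ((c1 + c2 * y) / (1 + c3 * y)) ** m2

for nits in (0.1, 10, 100, 1000, 10000):
    print(f"{nits:>7} nits -> PQ signal {pq_encode(nits):.3f}")
# About half of the signal range lands below 100 nits: PQ spends its code
# values where the eye is most sensitive, which is why 8 bits isn't enough.
```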


The Technique

First things first, some general, good advice: take some time to learn how your camera and lenses respond to various lighting situations. How is the roll-off into the highs? What's the noise level in the darks? How does the color response change at different exposure levels? While this is generally good practice, it's the kind of forethought you really need when planning an HDR shoot.

When shooting for HDR mastery, you may find that you’ll need to modify your typical shooting technique.  There are three things that are important above all others: protecting your highlights, protecting your darks, and planning the expansion of the dynamic range of your scene.


Protecting Your Highlights

Most of us are rightfully excited about the creative possibilities that come with increased brightness at the display, and the expanded range of highlight detail that comes with it. The catch is that some things that used to work well now, frankly, look like shit.

Clipping.  May you rot in hell.

The large area to the right of the sun has no detail retention in the RAW.

All sensors clip at some level of exposure. Film does too. It's unavoidable. The goal when shooting for HDR is to expose your whites to eliminate clipping in the RAW data when possible, and to minimize it when it's not.

Unlike traditional mastering workflows, where images clipped to white are simple to correct (set the clipped area to true white), clipping in HDR becomes problematic very quickly. In HDR there is no longer such a thing as 'true white'. Instead, in the grading process (which we'll discuss in Part 5), we make a creative decision about how bright white should be and how to roll into it. That roll into whatever white you pick is essential to tricking the eye into believing that what you've picked as white is, in fact, white.

The same shot can be graded with different white points in HDR, depending on the goals of the cinematographer & colorist. Both of these grades work with the snow reading as white; the lighter image feels brighter, while the darker image feels more oppressive and foreboding

The human visual system perceives any object or region of a scene as a shade of white (that is, not as a shade of grey, but as one of varying intensities of white) so long as three conditions are met:

  1. The brightness level is above a specific threshold relative to the rest of the scene, which is usually around 100 nits

  2. The chromatic characteristics are relatively balanced (that is, low saturation)

  3. The area is not both completely uniform in brightness and juxtaposed with a scene (or part of the same frame) that has a brighter or more natural roll-off into the whites

When talking about clipping, it's that third condition that ends up being the problem. Clipped footage typically has large swaths of 'white' with an abrupt transition into the patch (once the rest of the footage is graded to a normalized brightness level).

Gentle rolls into clipped white areas appear more natural than abrupt transitions

Deciding what brightness level to place this ‘white’ at becomes problematic for a couple of reasons.

First, you have to limit the brightness of the white patch with respect to the rest of the scene. If it's too much brighter than everything else (say, everything is under 100 nits and you put the patch at 1000 nits), without a roll into the whites you get an obnoxiously bright patch that dominates and overwhelms the rest of the scene.

Second, because these patches typically have a large area - that is, they make up a significant portion of the pixels on screen - they skew the distribution of brightnesses when calculating the MaxFALL (Maximum Frame-Average Light Level), meaning that everything else in the scene has to be significantly darker than you might like, or you have to bring down the brightness of the white to bring up the brightness of everything else. (I'll sketch how MaxFALL is computed below.)

The overall brightness around the sun limits the overall peak image brightness due to MaxFALL. For contrast, I've included both the direct SDR down grade (roll into white between 200 and 500 nits), and the same with the white point restored to full

Third, with the first two effects limiting the overall brightness of the uniform patch, it's likely to appear grey when cut together with footage that has a proper roll into the whites, since that footage is likely to have parts that are much brighter than whatever white you're able to use for the clipped value. The overall effect: grossness that pulls you out of the 'magic' that HDR creates.

In this sequence, the peak available white point of the middle shot is lower than the two shots that surround it, due to MaxFALL. In the final grade, the first and third shots were graded with lower peak whites to match
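
Since MaxFALL keeps coming up, here's a minimal Python sketch of how MaxCLL and MaxFALL are derived (per CTA-861.3: the per-pixel light level is max(R, G, B) in nits, MaxCLL is the brightest pixel in the program, and MaxFALL is the brightest frame average). The frame itself is hypothetical:

```python
# A sketch of MaxCLL / MaxFALL: a large clipped patch drags the frame
# average (MaxFALL) up even though the peak pixel (MaxCLL) is unchanged.
import numpy as np

def max_cll_fall(frames):
    """frames: iterable of (H, W, 3) arrays of linear light in nits."""
    max_cll = max_fall = 0.0
    for frame in frames:
        pixel_levels = frame.max(axis=-1)              # max(R,G,B) per pixel
        max_cll = max(max_cll, pixel_levels.max())     # brightest pixel
        max_fall = max(max_fall, pixel_levels.mean())  # frame-average level
    return max_cll, max_fall

# Hypothetical frame: a 100 nit scene with a clipped 1000 nit patch over 20%
frame = np.full((1080, 1920, 3), 100.0)
frame[:, :384] = 1000.0                                # 384/1920 = 20% of width
print(max_cll_fall([frame]))                           # (1000.0, 280.0)
```

In that toy frame the clipped patch alone pushes the frame average to 280 nits; everything else in the scene now has to fit under whatever MaxFALL ceiling you're mastering to.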

Stop down or use ND: protect your highlights and avoid clipping like the plague.

Some parts of your image, like the sun or bright lights for instance, may clip and that’s okay, so long as they don’t dominate your scene.  You can typically roll into these whites much more subtly than larger clipped areas.

Not all clipping is unnatural, even in HDR

White, puffy clouds also tend to want to clip on most cameras, but don't let them if you can help it. Because of how frequently people see clouds, and the details in them, you need to preserve as much of that detail as you can, or risk your viewers looking into them and being jarred by the bright uniform shapes that come with clipping instead of the gentle gradients that come with their more rounded textures.

In HDR the contrast in clouds is much more significant than in SDR, and the clipping in the clouds hurts the realism of the scene

Coupled with this is the fact that you can't assume you'll get to hide things on the other side of bright windows. If your camera retains any detail through a window or doorway, it'll probably be visible. If you're hiding crew or equipment behind a blown out window, you'll need to be doubly sure that the window will, in fact, be blown out in HDR. (The same, by the way, is true for your darks - don't assume they'll be crushed out. More on that in a second.)

If the monitor out from your camera allows for colorimetry separate from the recorded image signal, you may want to switch it to a LOG curve out, so that you can see on your field display or eyepiece where the scene is clipping, if at all, and what details are visible in the brights.


Protecting Your Darks

While the brights tend to get the love when talking about HDR, personally, I love what HDR does for the darks.

Just like with the whites in the image, we have to get rid of the concept of 'true black' when discussing HDR. Instead, we have a range of blacks, just like we have a range of whites. Two of the three conditions we discussed above for how the brain perceives whites hold true for blacks as well: they need to be below a certain value threshold, and they can't be large uniform areas juxtaposed with darker regions. The only difference is that below the brightness threshold the brain typically stops perceiving chromatic value anyway, so saturation doesn't matter (unless you're trying to supersaturate your darks?)

Just like with our whites, sensors eventually clip to black. In most video signals this is a hard clip, but in many RAW formats (especially those that offer ISO adjustments in post production), the blacks are typically recoverable down into the noise floor of the camera.

If you're planning on a PQ HDR mastering workflow, you'll need to assume that most of these darks are in fact visible, beyond what you'd normally consider available. That means you may need to pay attention to the overall exposure level of the detail in the darks: you can't necessarily hide equipment there, and your production design needs to extend deep into the visible darks.

Details lost in the darks in SDR are often visible in HDR

Even worse - or better, depending on whether you've planned for it - even after the image is properly graded, areas that appear black in the full HDR grade can 'open up' to the eye when you block the brightest regions of the image with your hand, just like how blocking the light from a spotlight pointed at you lets you see behind the light source.

Simulated images showing details visible in the darks when you block the lights in HDR. This does not happen in SDR.

The good news is that noise is far less perceptible in the darkest depths of HDR than when the footage is normalized into SDR, largely because of the lack of saturation and our vision's greater tolerance for luminance noise than for chromatic noise. So while it's important to keep important details above the noise floor, it's not as essential as protecting against clipping.

The same rule applies to the darks: open up or increase your ISO to avoid clipping them.


Planning the Scene in HDR

You may have noticed that the last two sections offer contradictory advice: stop down to protect your highlights, and open up to protect your darks. A paradox. Something has to give: how do you plan for that?

Don’t worry: planning your scene for HDR is actually even more complicated than that.

When you shoot for HDR, you can't assume that every consumer display will be HDR. So you need to consider how the darks and lights will play in both HDR and SDR. With the whites, it's relatively simple to adjust your roll-off or clip so that it plays well in SDR, but crushing the blacks isn't always the best option. Creatively, you may want to highlight action or detail in the darks in a way that would be lost with a simple crush.

Crushing the darks in SDR maintains the mood of the HDR image, but at the expense of detail retention

One solution, of course, would be to bring up part of the darks in post, but that increases the visible noise in SDR and may require clipping or flattening the whites to maintain the contrast and detail across the scene.

Noise is more perceptible in SDR darks than in HDR darks

Alternatively, you can adjust your lighting to bring up the darks and compress the scene's range, then re-expand the range while color grading in HDR space. So long as you're shooting in a RAW format and capturing 12+ bits per channel, you won't see stepping with this technique, since the mid gradients on a log curve are allocated enough bits to survive the expansion.
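
Here's a minimal sketch of why the bit depth matters for that trick, using a generic log curve as a stand-in for a real camera LOG format: quantize a compressed mid-tone ramp at different bit depths, re-expand it 2x as a grade would, and count the steps that survive.

```python
# A sketch of compress-then-expand: more capture bits leave more code values
# across the stretched range, so the 2x expansion doesn't band.
import numpy as np

def log_encode(x):
    # Generic log curve for illustration, not any specific camera's LOG
    return np.log2(x * 1023 + 1) / 10

ramp = np.linspace(0.18, 0.36, 1920)       # one compressed stop of scene light

for bits in (10, 12):
    levels = 2**bits - 1
    coded = np.round(log_encode(ramp) * levels) / levels
    expanded = (coded - coded.min()) * 2.0  # re-expand the range 2x in the grade
    print(f"{bits}-bit log: {len(np.unique(expanded))} steps after expansion")

# 10-bit: ~102 steps over the doubled range - visible stepping territory
# 12-bit: ~408 steps - fine enough that the expansion stays smooth
```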

Another thing to consider when planning the scene is the MaxFALL limitation of HDR mastering. The overall dynamic range of the scene needs to be planned so that the super bright / HDR elements are restricted to a small portion of the overall frame, so as not to push up the frame-average light level. Interiors with a few bright windows or patches of direct sunlight tend to be fine; larger bay windows with cloudy or limited outdoor light also work, so long as the external ambient light isn't too high (dusk / dawn, not noonday sun).

Both of these shots were done in the same space, about a year apart. The time of day plays an important role in how much the windows affect the MaxFALL of the scene, with the blown out windows limiting overall brightness.

Particularly problematic are blue skies. Why? Because blue skies often take up a much bigger part of the frame than you expect, and they contribute more to the MaxFALL than they appear to, since our eyes perceive blue values of similar absolute brightness as darker than other colors. What we see as mid-range blues can suddenly push up MaxFALL and limit your overall scene brightness while still looking 'normal' or 'average' to the eye. Exposing for blue skies often means keeping the blue in the traditional light level range, which can leave the rest of your brights muddied (especially when shooting into the sun).

The amount of the image taken up by the blue sky limited the overall MaxFALL of this image. The result: in HDR, the sky never felt 'bright' like the trees or the tower.
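
A quick worked example of the mismatch, with a hypothetical sky pixel in linear nits: the light-level metrics count max(R, G, B) at face value, while perceived luminance (using the BT.2020 luma weights) counts blue at barely 6%.

```python
# A sketch of why blue skies eat the MaxFALL budget: the metric sees the
# blue channel at face value; the eye barely weights it.
sky = {"R": 80.0, "G": 150.0, "B": 400.0}  # hypothetical blue-sky pixel, nits

metric_level = max(sky.values())           # what MaxFALL averaging sees
perceived_y = 0.2627 * sky["R"] + 0.6780 * sky["G"] + 0.0593 * sky["B"]

print(f"light-level metric:  {metric_level:.0f} nits")  # 400 nits
print(f"perceived luminance: {perceived_y:.0f} nits")   # ~146 nits
```

That pixel reads to the eye like a mid tone around 150 nits, but it charges 400 nits against the frame average.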

Essentially, when designing your scene for HDR, you need to plan for the bulk of each frame to land below traditional film standard light levels, so as not to push up your MaxFALL / average light level. Of particular concern here is planning your edits for HDR: small patches of direct sun in a darker scene are fine, until you move in for the close up and that small patch behind the actor's face dominates the frame.

In this wide and close pair, the wide shot is only limited by the available peak brightness of the display, while the close up is limited by the MaxFALL

While it's fine, as an individual shot, to limit the MaxCLL (Maximum Content Light Level) / peak light level of a close up's bright patch, when you're cutting between two shots you'll need to adjust the wider shot's MaxCLL to match the MaxCLL permitted by the close up's MaxFALL.

Or, in plain English: the maximum brightness available in the wide will be capped, because the close up's maximum brightness is more limited when its bright areas take up more of the frame. If you're looking to push the 1000 nit limit of current HDR displays for creative reasons, your scene blocking needs to take into account the average brightness of the close up: plan on minimizing bright areas around the talent or inserts to keep the bright patches bright across a sequence.

Otherwise the shifting brightness levels can be much more visible and leave a ‘greying’ feeling in the more restricted close ups (which the eye would normally perceive as white, except in contrast with something whiter).
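
If you want to rough this out while blocking a scene, a crude model helps: treat the frame as a bright patch covering a fraction f of an otherwise uniform base level, so the frame average is roughly f * peak + (1 - f) * base. The MaxFALL target and base level below are hypothetical, and real frames aren't two-level, but the arithmetic shows how the close up, not the wide, sets the sequence's ceiling:

```python
# A rough two-level model: solve the frame-average budget for the brightest
# the patch can be: peak <= (MaxFALL - (1 - f) * base) / f
def max_patch_nits(maxfall, base, f):
    return (maxfall - (1 - f) * base) / f

BUDGET, BASE = 400.0, 50.0  # hypothetical MaxFALL target and scene base level

print(f"wide  (patch =  5%): {max_patch_nits(BUDGET, BASE, 0.05):.0f} nits")
print(f"close (patch = 40%): {max_patch_nits(BUDGET, BASE, 0.40):.0f} nits")
# wide:  7050 nits -> capped in practice by the display's ~1000 nit peak
# close:  925 nits -> the close up forces the whole sequence down to ~925
```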

Because of the expanded darkness range of HDR, you can design much more 'dark and moody' lighting setups than you normally would for standard film or video exhibition. Whole detail-filled scenes can play out at levels under 30 nits! However, be aware that this is a bad idea if your work is intended for consumer displays. In a darkened reference environment, our eyes adjust to the lower light levels and we see deeper into the darks. But in a consumer's home, where the ambient light around the display may be higher than in cinema or grading spaces, the viewer's adaptation may be limited.

You can, however, still let scenes play much darker than is typically possible in television exhibition, keeping your maximum brightness below 80 nits. This can be used to great effect when cutting between darker-average and lighter-average scenes: it's in this contrast that HDR really pops.

In HDR darker footage can be cut in with brighter footage without the details in the blacks feeling milky, or the drop in brightness being jarring.


RED HDRx

If you don't shoot using the RED ecosystem, ignore this section (seriously). But if you do, all this talk of shooting in HDR may tempt you to shoot using RED's HDRx. I'm not saying this is a bad idea, but I am saying it's difficult to execute.

The real problem is getting the HDR grade right using the HDRx footage. We shot with it once, and our takeaway has been: only shoot it if you absolutely, certainly, without a doubt need it - meaning you need the increase in recorded dynamic range of the scene (or of scene elements).

RED HDRx Blend in HDR and SDR

The reason not to use it comes down to grading. Blending the two separately exposed elements is fine in REDCINE, but you're going to run into difficulties with HDR grading in REDCINE simply because of its limited grading toolset. When you grade in DaVinci, you run into severe performance issues using the blending tool in the RAW decoding. DaVinci's split input tool is better, but you still run into problems compressing the larger dynamic range while maintaining the overall look of HDR video.

In the end, the most efficient (yet still inefficient) workflow was actually to grade the shot twice in HDR: once with the standard exposure to grade the darks, and a second time with the blended exposure to grade the lights. Then both versions need to be passed through a compositing program like After Effects to selectively decide which contrast you want for which parts of the image - far more like traditional HDR photography than HDR video.

Dark and Light Plates in HDR and SDR with Final Image Blend

You can get great results this way, but it’s way, way more involved.

 

A Grain of Salt

Cameras, settings, best practices, planning.  Here’s the caveat: take all this advice with a grain of salt, not as a set of hard and fast rules.

Going back to the story I opened with: even footage never planned to be shown in HDR can give excellent results. Comparing the SDR version of the shuttle launch footage to the HDR grade, the HDR simply looks better. The darks are darker while preserving all the detail, and the range is higher. This is, of course, an ideal case, since high quality RAWs were available; the same is true for film sources when the negatives are available.

We've done a lot of HDR regrading of our back catalog of footage, and I haven't found a single shot that looks worse in HDR than in SDR (even when ignoring the benefits of BT.2020 and 10 bit displays).

 

But even when you're limited to just 8 bit log or standard gamma footage, you can often find more detail within a scene when grading in HDR than is perceptible in SDR. You'll want to be far more cautious with how far you push the footage, but you'll still be able to get good results.

Detail recovery is often possible when grading from sufficiently high quality SDR graded sources



Generally, if you're already following best practices for digital cinematography, and if you spend a little time reviewing HDR grades of your existing footage with a colorist, you'll quickly get a feel for how the HDR space works and what you can do with it - and that's when you can unleash your own creative potential.


But once it’s shot and edited, what happens next?  Grading, mastering, and delivering in HDR is our next topic, so stay tuned for Part 5.

Written by Samuel Bilodeau, Head of Technology and Post Production