Update as of June 23, 2019
Plenty has changed since the original posting. I figured it was worth updating this post to highlight two excellent videos provided by two of the celebrities in the community, Gleb Alexandrov and Andrew Price. Both videos are top-shelf quality and well worth viewing.
Mr. Price was largely responsible for kick-starting the monumental interest in camera rendering transforms, including exposing plenty of folks to ACES. The Secret Ingredient to Photorealism has now passed 1.7 million views, attracting viewers from many different domains.
Mr. Alexandrov's video came out slightly thereafter, and has some terrific examples with synthetic imagery, as well as a humorous dive into some of the more nuanced details.
Update as of October 5, 2016
There has been a huge amount of interest in the creative affordances colour management provides an imager. If you want to jump right in and know what you are doing, there is an updated set of tools available in the new filmic Blender set. The new set offers:
- Very easy view-based selection of five basic contrasts, added to support new imagers who are interested in trying the package out but worry about grading. The new set should be turn-key for this audience.
- Much improved contrast curve selection. In addition to a simplified naming convention, the basic transfer curves are much more refined, pegging the scene referred middle grey value of 0.18 to 0.50 in every single transform.
- Much improved desaturation and crosstalk film emulation. This new transform is complex and offers imagers an extremely graceful roll-off to display referred white. In addition, a crosstalk element has been added which carefully mixes the primaries as values approach the peak. This results in images much closer to what one would expect from typical photographic media.
It would be excellent if the talented imagers out there would try this new set and render out some sample images for this posting.
Issues
Post your issues to the GitHub repository.
Original Update
If you are seeking the original OpenColorIO set, you can find it at the original link.
Following a recent presentation I gave on the subject, there is now a Google Slides deck on dynamic range and intensity. Anyone interested in this subject, or seeking a deeper understanding, is welcome to view the presentation at this link.
Sample Images
Here are some sample images generated from the OCIO configuration. You should notice immediately how all of the physically based light interactions are enhanced, such as subsurface scattering, indirect lighting, etc. Also note how the configuration allows for proper photographic-style highlights in the display referred transform. Sample files courtesy of Eugenio Pignataro, Mike Pan, Henri Hebeisen, Tynaud, Mareck, Dmitry Ryabov, Rachel Frick, Marius Kreiser, and Andrew Price. Click to enlarge.
[Image gallery: renders by Mareck, Dmitry Ryabov, Rachel Frick, Marius Kreiser, Andrew Price, Eugenio Pignataro, Mike Pan, Henri Hebeisen, and Tynaud.]
Simple Example of the Filmic Desaturation / Crosstalk 3D LUT
Two simple images demonstrate the critical difference at high intensity values, and how purely saturated colours fail to behave photographically. Note how the average greyscale values desaturate and bloom to display referred white as expected, while the purely saturated colours break. Compare against the transformed version, which blooms as one would expect as values increase:
[Image: greyscale and purely saturated colour values under the default transform versus the desaturation / crosstalk transform]
The Question
"To get images that seem more photorealistic, not only a much wider dynamic range is needed, but also having the color information desaturate towards white as it would happen in an overexposed photograph."
The key word here is photorealistic. While many imagers focus on modeling, texturing, and other critical nuances, this term is often overlooked despite having profound implications for imaging.
What is Photorealistic?
While this seems an obvious question, breaking it down into components will help us provide a solution to the initial question. Photorealism has a direct link to the photographic world. This means that to solve the problem at hand, we need to break down what exactly a photograph is and how it has influenced our learned aesthetic response. Having done so, we can compare how the photographic model relates to a CGI model, and build the connective tissue needed to derive a solution.
What is Film?
The advent of film provided a unique aesthetic transformation of a physical scene into a convention. This convention brought with it particular nuances of photographic emulsion, and later digital sensors that sought to imitate the medium. These nuances can be loosely broken down into two categories when evaluating CGI and its relationship to the photographic and photorealistic.
Breaking Down Film, and the Photograph
"The vectorscope reveals another issue as well: the Red, Green and Blue colors also reach a point in saturation at 1 and keep going past what is possible to represent correctly."
For our purposes, we will examine the later era of colour photographic reproduction. The first concept we need to address is why, when photographing something of intense light, the image desaturates. First, consider the composition of the film itself:
[Image: cross section of a colour film emulsion, showing its layered composition]
Given that the spectral locus, or range of all visible light, is a strangely curved mapping of wavelengths to colour, we can begin to see some of the reasons that images blow out to white. In the above example, we see that there are three primary layers, each sensitive to a loose region of spectral wavelengths, crystallising and "recording" those wavelengths. Here are some simple spectral responses based on the layers:
[Image: simple spectral response curves for the three emulsion layers]
What we learn immediately is that film is not a narrow band recording medium. Contrast this with the primaries of an RGB encoding such as sRGB: those primaries are extremely narrow band, each representing a unique and singular colour of light per channel.
What are the Implications of Non-Narrow Band Recording?
Both film and DSLRs use this filtering technique to record their data. With regard to colour, we know that the filtering mechanism, for a variety of complex reasons, is sensitive to a broad range of actual physical wavelengths in the visible spectrum. This means that even though the "green" layer is attempting to record only a specific colour of green, the neighbouring emulsion layers / DSLR photosites will also register information because of the wider wavelength gathering.
The net sum is that when a specific "blue" light lands on a photograph, it is also crosstalking with the other layers or photosites, creating a unique mixture of values. Part of this is colour response, and forms another discussion. The critical part we need to grasp regarding the desaturation of film / DSLRs is that there is no single, physically plausible colour of light that will solely trigger an isolated emulsion layer or photosite. As a result, the stock or sensor will bloom out to white given enough exposure time. This is very much unlike the default sRGB transfer curve applied to Blender's default view.
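To make the crosstalk idea concrete, here is a tiny sketch; the Gaussian sensitivity curves and the 460 nm test light are purely illustrative assumptions, not measured film or sensor data:

```python
import numpy as np

# Illustration only: these Gaussian sensitivities and the 460 nm test light are
# assumptions for demonstration, not measured film or sensor data.
def gaussian(wavelength, centre, width):
    return np.exp(-0.5 * ((wavelength - centre) / width) ** 2)

def layer_responses(wavelength):
    """Loose, overlapping sensitivity curves for the 'red', 'green' and 'blue' layers."""
    return {
        "red":   gaussian(wavelength, 600.0, 50.0),
        "green": gaussian(wavelength, 550.0, 45.0),
        "blue":  gaussian(wavelength, 450.0, 40.0),
    }

# A single, purely "blue" wavelength still registers in every layer because the
# sensitivities overlap; no physically plausible light isolates one layer.
print(layer_responses(460.0))
```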
The Intensity of Light in Relation to a Photograph
Photography has the unique colour characteristics described above, which result in an image "blowing out to white", as well as many more subtle crosstalk features that yield the unique looks of film and DSLRs. At least as important as this facet is the dynamic range of the medium itself.
Film had a unique feature that even DSLRs struggle to match today: a logarithmic encoding scheme. That is, as the particles of silver were exposed to light and crystallised, it became harder and harder for further light to influence the negative; once a granule was exposed, it became physically harder to expose the grains behind or around it. This meant that film responded to light in a roughly logarithmic form and, in doing so, recorded a tremendous range of light intensity.
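As a rough illustration only (a toy saturating-response model, not a measured characteristic curve), the "exposed grains resist further exposure" behaviour can be modelled so that each additional stop of exposure produces a smaller and smaller increase in response:

```python
import numpy as np

# Toy model only (an assumption for illustration, not a measured film curve):
# treat each grain as exposing with probability 1 - exp(-K * E), so grains that
# are already exposed no longer contribute and further exposure has a
# diminishing effect. The constant K is arbitrary.
K = 1.0

def toy_film_response(exposure):
    return 1.0 - np.exp(-K * np.asarray(exposure, dtype=np.float64))

# Each additional stop (doubling) of exposure yields a smaller and smaller
# increase in recorded response -- a compressive, log-like behaviour.
for stops in range(-4, 5):
    exposure = 0.18 * 2.0 ** stops
    print("{:+d} stops: response {:.3f}".format(stops, float(toy_film_response(exposure))))
```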
Film and a Camera
When we dial in a camera to record a scene, we set an aperture, shutter speed, and typically select an ISO sensitivity. These three facets restrict the scene's intensity values that the logarithmic film records.
To understand this better, we need to consider two CGI terms that break our computerized models down into a more granular format: Scene Referred and Display Referred (aka Output Referred, Device Referred, etc.)
Scene Referred Capture of a Photograph to a Display
If we consider that the scene, or the scene referred data in our photographic examples above, covers a vast range of intensities, we can see a transformation happen at the camera / emulsion level: a mapping of the scene referred linear light values to the logarithmic encoding structure of film. The following image shows an arbitrary twelve-and-a-bit stop mapping of scene referred values to the display referred / device referred encoding of film or DSLRs, as viewed on an sRGB device:
[Image: roughly twelve stops of scene referred values mapped to a display referred encoding]
What is Happening in Blender from Cycles?
In Blender, and in particular when using a raytracing engine such as Cycles, we are generating scene referred values in the internal model and passing those values through a display referred transform to output. The default "sRGB" display referred viewing transform is effectively a blind hard cut. While some might call this "clipping", it is more accurate to consider it a transform from the scene referred domain to the display referred domain, where the scene referred value of 1.0 happens to land on the value 1.0 in the display referred domain. The values, despite being numerically identical, represent different things.
The "Default" transform is a strict inversion of the sRGB transfer curve that was developed as part of the sRGB specification. Here is roughly what it looks like from a layperson's vantage:
[Image: the default sRGB transfer curve, mapping scene referred values to display referred output]
This transform is particularly confusing for a number of reasons:
- It grossly ignores much of the scene referred data present in Cycles.
- It maps the range of light intensity to the display referred output image in a manner uncharacteristic of any photographic medium.
- Imagers unaware of the transformation from scene referred to display referred values conflate the two domains as a single continuum, and end up mangling their rendered scenes to fit under this completely arbitrary, yet extremely important, transformation.
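To make the hard cut concrete, here is a minimal sketch of what the default view effectively does to scene referred values; this is an illustration of the behaviour, not Blender's actual implementation:

```python
import numpy as np

def srgb_oetf(x):
    """Piecewise sRGB transfer function from the sRGB specification."""
    x = np.asarray(x, dtype=np.float64)
    return np.where(x <= 0.0031308, 12.92 * x, 1.055 * np.power(x, 1.0 / 2.4) - 0.055)

def default_view(scene_linear):
    """Sketch of the default view: hard cut at 1.0, then the sRGB transfer curve."""
    clipped = np.clip(scene_linear, 0.0, 1.0)  # everything above 1.0 is discarded
    return srgb_oetf(clipped)

# A scene referred value of 1.0 and one of 500.0 produce the identical display value.
print(default_view([0.18, 1.0, 500.0]))
```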
Almost At the Solution
Before we "solve" the above two issues of desaturation and latitude, it is worth revisiting how a clear division between scene referred data and display referred data can greatly elevate an imager's ability to craft work.
How is Scene Referred Data Different?
"1 is white and, of course, there is nothing whiter than white"
- Scene referred data has no notion of white or black. Those concepts do not exist until the display referred transformation. "Whiter than white" is an anachronistic term that typically referred to safe video encodes.
- Scene referred data, much like a true scene in reality, can represent a colossal, effectively unbounded range of data. Imagine a planet with one sun. Now imagine one with two suns. Three? The only limit on scene referred data is the bit depth of the actual architecture, and even then, it is constantly evolving.
- Scene referred data is rendered from architecture such as Cycles that seeks to model a version of reality. If an imager is unaware of the transformation from scene referred to the display referred domain, they may end up artificially mangling their lighting, data, textures, etc. to fit in under an arbitrary view transform.
- Scene referred data is stored linearly, or more specifically, in a radiometrically linear fashion. That means the ratios of light values emulate a physical model of light and respond accordingly.
- Very few formats store scene referred data effectively. EXR is the most robust format for such storage.
How is Display Referred Data Different?
- Display referred data has a minimum and maximum creative point, typically zero and one respectively.
- Display referred data is most typically stored nonlinearly, with the arbitrary middle grey point mapped to a particular middling value in the display referred encoding.
- Only at the display referred transform do values end up mapped from a given high and low point to white and black respectively. Speaking in terms of white or black prior to this transform is utterly meaningless. Only terms like achromatic, or without colour, apply.
- The display referred transformation is handled via OpenColourIO in Blender. This transformation is arbitrary and a creative tool for imagers.
- Display referred encodings will almost always be discarding information when stored on disk, and as such imagers should be well aware of formats used to store their data. This extends to alpha storage concerns, as some formats such as PNG mangle alpha.
The Long Path to the Solution
In summary, we are faced with two unique problems posed in the original question by @cegaton.
- Latitude or dynamic range of the encoded image.
- Unique colour characteristics such as desaturation to emulate the photographic.
We know that OpenColorIO controls the transformation from the scene referred domain to the display referred domain, and as such, the solution will revolve around our manipulation of the OpenColorIO configuration.
What Might a Solution Look Like?
Dealing with Latitude / Dynamic Range
With regard to capturing the latitude range, we need to consider what a better solution than the default sRGB display referred transform would look like. We could suggest it might look something like the following:
[Image: a candidate transfer curve mapping roughly 6.5 stops above middle grey to display referred white]
The above image maps approximately six and a half stops above middle grey to our display referred notion of white. It also maps a scene referred value of 0.2 (again, in scene linear) to our middle grey value. This keeps roughly to the values that many display referred images would have their middle grey values mapped to when converted to display linear. The above ignores a more complex display referred black range mapping, and simply maps zero to zero.
This can be accomplished via a 1D display referred viewing transform in OpenColourIO. An imager can use a spreadsheet or other tools to generate such a LUT. More information on this can be provided if someone chooses to ask the question.
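As a sketch of the idea only (not the exact curve shipped in the configuration), the core of such a 1D transform is a normalised log2 encoding of the scene referred values around middle grey; the assumed range of -10 to +6.5 stops and the middle grey of 0.18 mirror the view described later, and the printed table could just as easily be produced in a spreadsheet and baked into a LUT:

```python
import numpy as np

# Assumptions for this sketch: scene referred middle grey of 0.18, covering
# -10 stops to +6.5 stops, mapped onto a normalised 0..1 display range.
MIDDLE_GREY = 0.18
LOW_STOPS, HIGH_STOPS = -10.0, 6.5

def view_transform_1d(scene_linear):
    """Purely logarithmic placeholder for a 1D display referred view transform."""
    x = np.maximum(np.asarray(scene_linear, dtype=np.float64),
                   MIDDLE_GREY * 2.0 ** LOW_STOPS)
    stops = np.log2(x / MIDDLE_GREY)  # exposure above or below middle grey
    return np.clip((stops - LOW_STOPS) / (HIGH_STOPS - LOW_STOPS), 0.0, 1.0)

# A table of sample values; the same numbers could be generated in a spreadsheet
# and written out as a 1D LUT for OpenColorIO.
for value in [0.0, 0.01, 0.18, 1.0, 3.0, 16.291]:
    print("scene {:>7.3f} -> display {:.3f}".format(value, float(view_transform_1d(value))))
```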
Dealing with Desaturation or Crosstalk
Dealing with the desaturation or crosstalk issue is more subtly complicated. In the case of desaturation, we expect that as, say, the blue primary pushes up toward the display referred maximum, the other channels move up as well. This is impossible to achieve with any number of per-channel curves: a curve applied to a pure primary such as (0, 0, 1) can only scale the blue channel, while the red and green channels remain at zero, so the colour can never converge toward white.
It should be noted that no matter how hard one tries with the default sRGB display referred viewing transform, colours will always be mangled as they near the ceiling of the viewing transform. Why? Because the 1D transform simply hard cuts the scene referred data. While it is possible to force colours to clip to white, they do so in a manner entirely unlike any sort of desaturation known in a photographic medium.
The technique to achieve this is typically a 3D LUT. A 3D LUT differs from a 1D LUT in terms of input and output influence: while a 1D LUT converts each channel's input value to an output value independently, a 3D LUT considers all three channels together, so a single channel's input can influence the output of the other channels as well. This provides us the magic tool to simulate not only desaturation but also the complexities of filmic crosstalk.
3D LUTs have a unique problem, however: the input range must be very well defined, as their size and resolution grow with the cube of the per-axis sample count. To build a suitable 3D LUT, it is prudent to first convert the scene referred linear data to a perceptual, display referred domain via a shaper. This allows the 3D LUT to be applied in a roughly perceptually uniform manner, increasing the quality of the transform.
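As a sketch of the idea only, a desaturation / crosstalk 3D LUT can be baked by evaluating a function over a lattice of display referred input triplets; the blend onset, lattice size, and the straight use of Rec. 709 luminance weights here are illustrative assumptions rather than the published transform:

```python
import numpy as np

# Assumptions for this sketch: a 33x33x33 lattice, Rec. 709 luminance weights,
# and a crosstalk onset of 0.8 in the shaped, display referred 0..1 domain.
REC709_WEIGHTS = np.array([0.2126, 0.7152, 0.0722])
CROSSTALK_START = 0.8

def desaturate_crosstalk(rgb):
    """Push a colour toward its luminance as its brightest channel nears 1.0."""
    rgb = np.asarray(rgb, dtype=np.float64)
    luminance = np.dot(rgb, REC709_WEIGHTS)
    # Blend factor rises from 0 at the onset point to 1 at the display maximum.
    t = np.clip((rgb.max() - CROSSTALK_START) / (1.0 - CROSSTALK_START), 0.0, 1.0)
    return rgb * (1.0 - t) + luminance * t

# Bake the function over the lattice: one output triplet per input triplet.
size = 33
grid = np.linspace(0.0, 1.0, size)
lut = np.array([desaturate_crosstalk([r, g, b])
                for b in grid for g in grid for r in grid])
print(lut.shape)  # (33**3, 3), ready to be written out as a 3D LUT file
```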
One Possible Solution
While the above hopefully highlights how much creative control an imager has, it should also shed light on how not to deal with the complexities of image based lighting or high dynamic range lighting. Instead of mangling and crunching scene values to fit the display referred transform, it is much more prudent for an imager to create a clear division between the scene referred data and the display referred encoding. Doing so will not only elevate the imager's work, but also her creative control when grading at a later step.
While the actual generation of 1D and 3D LUTs, as well as the OpenColorIO configuration details, is beyond the scope of the original question, the following is left here for any imager to experiment and light with. It is a fully compatible OpenColorIO configuration, ready for immediate use.
What it does:
- The "-10-+6.5" view represents a viewing transform that grabs from approximately 10 stops below to 6.5 stops above middle grey and maps that range to the display referred encode.
- In the Looks you will find a number of different options. While they are well documented in the README, the "Basic" look adds a desaturation / convergence toward display referred white on top of the above transform. It uses the same shaper transformation, followed by a 3D LUT that deals with the desaturation component explained above. The LUT was generated on the idea that the luminance weights of the primaries would be a decent entry point for emulating the desaturation of the layers / photosites. As such, it uses the sRGB / 709 primary weights to desaturate as intensity values near the display referred maximum, beginning at approximately two and a half stops below the maximum scene referred value of 16.291, or roughly 3.0 in scene referred linear.
Other useful Looks:
- A False Color look that offers a visually shifted "heatmap" of exposure useful for lighting.
- Several Sharp variants which interpret the data under a power curve that increases contrast. Useful as a rough approximation of a grade.
- A Scaled set that maps middle grey from the view's 0.6 down to sRGB's 0.466. This can be considered training wheels for those not used to grading footage.
- A Greyscale Look on both the desaturated and standard views for evaluating contrast. It uses 709 primaries as weights.
Updated LUTs are located at this GitHub link. Please read the README to spot specific issues with some of Blender's yet-to-be-addressed shortcomings when operating on scene referred imagery. Sadly, many of the problems present in Blender exist simply because very few imagers realise the extent to which the default view impacts their view of the scene referred data. In this regard, the default view makes the scene referred data appear as though it were display referred, simply because imagers are rarely aware of the actual data in their scene.
The more imagers give the LUT pack a spin, the more likely it is that Blender will evolve as a tool. That is, of course, in addition to almost magically transforming their imagery.
To use it, an imager merely needs to:
- Backup / copy / move their [BLENDER DIR]/bin/[VERSION NUMBER]/datafiles/colormanagement directory to a different location.
- Link or copy the files into a fresh colormanagement directory in the datafiles directory.
- Change between the rendered views using the Color Management panel in the Scene properties.
References and Further Reading / Viewing: