Implementing a Physically Based Camera: Automatic Exposure

Implementing a Physically Based Camera:

  1. Understanding Exposure
  2. Manual Exposure
  3. Automatic Exposure

Automatic exposure is the process of adjusting camera settings in response to a given scene to achieve a well exposed image. This is sometimes referred to as eye adaptation in games. However, the goal here is to obtain settings that will be used by other rendering effects such as Depth of Field and Motion Blur. As you will see, instead of directly adjusting the exposure, we will move through the camera settings in a couple of different ways to reach our desired exposure. Assuming that we already have a linear HDR render target containing the scene, the automatic exposure process starts with metering.

Metering

Metering is the process of the camera measuring the current average scene luminance. The most basic implementation would simply calculate the log average of the scene; however, there are a few other methods of metering a scene. We will look at Spot, Center Weighted, and Multizone metering modes in addition to Average. Each of these methods weights the luminance contribution of the scene using a weighting function w(x, y), resulting in a weighted average luminance calculation of the form:

w_{total} = \sum_{x=0}^{Width} \sum_{y=0}^{Height} \mathrm{w}(x, y)

\log(L_{avg}) = \frac{\sum_{x=0}^{Width} \sum_{y=0}^{Height} \log(L_{x,y}) \times \mathrm{w}(x, y)}{w_{total}}
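As a CPU-side sketch, the weighted log-average above might look like the following (the buffer layout and the `Weight` callback are illustrative assumptions; a real implementation would typically run on the GPU via downsampling or a compute shader):

```cpp
#include <cmath>

// Compute the weighted log-average luminance of a scanline-ordered buffer.
// Weight(x, y) is any of the weighting functions described in this article.
float ComputeWeightedLogAverage(const float* luminance, int width, int height,
                                float (*Weight)(int, int))
{
	float weightTotal = 0.0f;
	float weightedLogSum = 0.0f;
	for (int y = 0; y < height; ++y)
	{
		for (int x = 0; x < width; ++x)
		{
			float w = Weight(x, y);
			weightTotal += w;
			// Small epsilon guards against log(0) for black pixels
			weightedLogSum += std::log(luminance[y * width + x] + 1e-6f) * w;
		}
	}
	// The sums give log(L_avg); exponentiate to recover L_avg
	return std::exp(weightedLogSum / weightTotal);
}
```

Exponentiating at the end converts the averaged logarithm back into a luminance value usable by the rest of the pipeline.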

Average metering

Average metering is the simplest, but also a very effective, method of metering. To perform average metering we simply assign every pixel in the scene the same weight, making our weighting function:

\mathrm{w}(x,y) = 1


Spot metering

Spot metering involves measuring only the scene luminance that overlaps with a chosen region of the scene. Typically this is done in a small circle in the center of the image (1-5% of the image) but it can be performed by using any window of data. This type of metering is useful in photography when you wish to make a targeted object or region well exposed without worrying about the exposure of the rest of the scene.

\mathrm{w}(x,y) = \begin{cases} 1 & \|P_{x,y} - S\| \leq S_{r}\\ 0 & \|P_{x,y} - S\| > S_{r}\end{cases}

where P_{x,y} is the position of pixel <x, y>, S is the center of the spot, and S_{r} is the spot radius.


Center weighted metering

Center weighted metering gives more influence to scene luminance values located in the center of the screen. This type of metering was popular in the film days and has been carried over into digital.

\mathrm{w}(x,y) = \mathrm{smooth}(\frac{\|P_{x,y} - C\|}{\frac{Width}{2}})

where smooth(x) is a smoothing function that falls off with distance from the center, such as 1 - smoothstep(0, 1, x).


Matrix/Multizone metering

The final type of metering is called matrix or multizone metering, depending on the manufacturer of the camera. Each manufacturer has a different proprietary formula for this type of metering, but the intention is more or less the same: prioritize exposure for the most “important” parts of the scene. Another way to think of this type of metering is as informed metering. First, picture the scene broken up into a grid. Processing is then performed on each cell to give it a classification, e.g. as a function of its focus and min/max luminance. The results of all of the cells are read back and used to index into a database of exposure patterns to obtain a final exposure target. For the sake of this article, we will define our informed weighting function for multizone metering as a simple function of the distance from the focal plane and bypass the grid-based approach:

\mathrm{w}(x, y) = \frac{1}{0.1 + |z(x,y) - z_{f}|}

where z(x,y) is the viewspace z at pixel <x, y> and z_{f} is the viewspace z of the focal plane.
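The four weighting functions described above can be sketched in code as follows (parameter names such as `spotRadius` and `focusZ` are illustrative assumptions; pixel positions are in screen space):

```cpp
#include <cmath>

// Average metering: every pixel contributes equally.
float WeightAverage(float, float)
{
	return 1.0f;
}

// Spot metering: 1 inside the spot circle, 0 outside.
float WeightSpot(float x, float y, float spotX, float spotY, float spotRadius)
{
	float dx = x - spotX, dy = y - spotY;
	return (std::sqrt(dx * dx + dy * dy) <= spotRadius) ? 1.0f : 0.0f;
}

// Standard smoothstep, clamped to [0, 1].
float Smoothstep(float edge0, float edge1, float x)
{
	float t = (x - edge0) / (edge1 - edge0);
	t = t < 0.0f ? 0.0f : (t > 1.0f ? 1.0f : t);
	return t * t * (3.0f - 2.0f * t);
}

// Center weighted metering: influence falls off smoothly from the center.
float WeightCenter(float x, float y, float centerX, float centerY, float width)
{
	float dx = x - centerX, dy = y - centerY;
	float d = std::sqrt(dx * dx + dy * dy) / (width * 0.5f);
	return 1.0f - Smoothstep(0.0f, 1.0f, d);
}

// Informed multizone metering: weight by distance from the focal plane,
// where viewZ is the view-space z at the pixel and focusZ that of the plane.
float WeightMultizone(float viewZ, float focusZ)
{
	return 1.0f / (0.1f + std::fabs(viewZ - focusZ));
}
```

Any of these can be plugged into the weighted average luminance calculation from the beginning of this section.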


Comparison

The described metering modes result in very different exposures for the same simple scene. There is no correct answer for what type of metering you should use, but Average seems to be the most used currently. The Multi (informed) metering may be a good fit for certain types of games because the focal plane of the camera is often changed to emphasize objectives, or simply follows where the player is aiming. Here is a final comparison of the different modes.

Metering Comparison

Results using the mentioned metering modes in a simple rendering engine.

Comparing these results to those of a real camera, we can see that they behave similarly (with the exception of Multi, due to proprietary weighting functions).

Metering Comparison 2

Real world results from various metering modes for a back lit scene.

Smoothing the results of metering

To create a smooth transition and avoid large changes in exposure in a small period of time, it is a good idea to smooth the metering results. This can be done by recording N frames of average scene luminance and then taking the average of those averages. Alternatively, you can use a simple exponential feedback loop to smooth the results of scene metering. I first saw this used in the Average luminance calculation using a compute shader1 post by MJP, which was originally from a SIGGRAPH paper about visual adaptation models, Time-Dependent Visual Adaptation For Fast Realistic Image Display2. It is really easy to implement and doesn’t involve storing a histogram or large buffer of previous values:

L_{avg} = L_{avg} + (L_{new} - L_{avg}) \times (1 - e^{-\Delta t \tau})

where \Delta t is the delta time from the previous frame and \tau is a constant that controls how fast the model adapts to a new target luminance (higher is faster). For my implementation I use \tau=1.
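A minimal sketch of this feedback loop, with \tau = 1 as used here (deltaT is the frame time in seconds; the function name is illustrative):

```cpp
#include <cmath>

// Exponential feedback smoothing of the metered average luminance.
// Higher tau adapts faster toward the newly metered value.
float AdaptLuminance(float currentAvg, float newAvg, float deltaT, float tau = 1.0f)
{
	return currentAvg + (newAvg - currentAvg) * (1.0f - std::exp(-deltaT * tau));
}
```

Called once per frame with the latest metering result, this converges toward the new luminance without ever jumping to it instantly.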

Applying the results from metering

As originally stated we want to use our metered scene value to change the camera settings in our physical camera model. We will do this through the use of the EV_{100}, or exposure value at ISO 100. To calculate the EV_{100} from the metered scene luminance we will borrow some calculations from light meters.3

EV_{100} = \log_{2}(\frac{L \times {100}}{K})

where K is the reflected-light meter calibration constant, assumed to be 12.5.3 EV_{100} relates to camera settings using the following formula:

EV_{100}= \log_{2}(\frac{N^2 \times 100}{t \times S})

We can use these functions to calculate our desired camera settings by fixing some of the variables to a user defined input or supplying an initial guess. Depending on the settings that become fixed, we will be putting the camera into one of three modes: Aperture Priority, Shutter Speed Priority, or Program Auto. For the following calculations to be valid, we first state that for each camera setting there exists a minimum and maximum value (either physical in the case of aperture or subjective in the case of ISO) that the camera model is bound to.

Exposure Compensation

Almost all cameras have an additional feature called exposure compensation (EV_{comp}). This is a simple bias to correct the automatic exposure system. This compensation, specified in EV, adjusts the EV_{100} prior to executing the settings determination stage. It allows the end user to inform the metering system that there is a discrepancy between what they are seeing and what the camera is capturing. Obviously the camera doesn’t know the intended subject, so it might be fooled into exposing for a neon sign rather than a person standing in front of it. By dialing in the exposure compensation, the user can get the image as they see it. To factor in exposure compensation we just have to modify the original EV_{100} to reach the final target exposure value, EV_{t}:

EV_{t}=EV_{100}-EV_{comp}

It makes sense to subtract the exposure compensation because, for a positive adjustment, we are telling the exposure system that the average scene is darker than it actually is, resulting in a higher overall exposure. This target EV is then fed into one of the adjustment modes. One interesting thing to note in the series of images below is that we are applying a modification of 1 EV, which represents a change in the amount of light by a factor of 2. This results in the ISO setting changing between each shot: at −1 EV we are at ISO 100, at 0 EV we are at ISO 200, and at +1 EV we are at ISO 400. This confirms that we are capturing twice as much light between each image.

Exposure Compensation

Comparison of the same scene shot with different exposure compensation values.

Aperture Priority

Aperture priority mode allows the user to select an aperture for aesthetic reasons, and the camera will adjust the remaining variables to obtain a well exposed image. Since we only have one equation and two remaining unknowns, we will have to make an initial guess. There is a general rule that it is preferable to have a shutter speed that is faster than \frac{1}{f}, where f is the focal length of the lens in mm. The intended effect of this rule is to negate the camera shake introduced by the hands of an average photographer. If you look back at the exposure compensation images (shot at 70mm), the shutter speed chosen by the camera was \frac{1}{80}, which is the closest shutter speed (Price Is Right rules) that satisfies the \frac{1}{f} rule. This rule is the basis for the initial shutter speed, t_{i}:

t_{i} = \frac{1}{f}

Given the initial shutter speed and user defined final aperture, N_{f}, we can determine the desired ISO:

S_{d} = \frac{N_{f}^2 \times 100}{t_{i} \times 2^{EV_{t}}}

The ISO is then clamped to the range provided by the camera to find the final ISO value.

S_{f} = \mathrm{clamp}(S_{d}, S_{min}, S_{max})

With the final aperture and the final ISO known we can adjust our initial shutter speed guess to arrive at the final camera settings. To do this we compute the exposure value obtained with the current settings:

EV_{c} = \log_{2}(\frac{N_{f}^2 \times 100}{t_{i} \times S_{f}})

We use this to figure out the difference from the original target exposure value:

EV_{diff} = EV_{t} - EV_{c}

Recalling that leaving the shutter open for twice as long will allow twice as much light into the sensor (i.e. represent a 1 EV shift), we can calculate the desired shutter speed.

t_{d} = t_{i} \times 2^{-EV_{diff}}

Once again we have to clamp the desired value to the range that the camera is capable of:

t_{f} = \mathrm{clamp}(t_{d}, t_{min}, t_{max})

Shutter Speed Priority

Similar to the aperture priority mode, the user provides the final value for the shutter speed (t_f). This time we need to make an initial guess for the aperture. From inspecting the behavior of my personal camera, f/4.0 is a good place to start for the initial aperture (N_i). Following the aperture priority example, we first find the final ISO, and then the difference in exposure value from our target. Finally, we adjust our initial aperture guess to arrive at our final settings. Once again, recall that increasing the surface area of the aperture opening by a factor of 2 will double the amount of light hitting the sensor (a 1 EV shift); assuming that the aperture is round, this means the radius of the aperture must increase by a factor of \sqrt{2}. Using this relationship, we apply the adjustment and clamp the aperture in similar fashion:

N_{d} = N_{i} \times {\sqrt{2}}^{EV_{diff}}

N_{f} = \mathrm{clamp}(N_{d}, N_{min}, N_{max})

Note: The sign changes on the adjustment factor because the f-number (N) is actually the ratio of the focal length to the aperture diameter N = \frac{f}{D}.

Program Auto

Program auto mode is a hybrid of the first two approaches. It aims to keep the shutter speed sufficiently fast and set the aperture for a reasonable depth of field. We will follow the same approach as the previous two examples, except that we will guess both the aperture and the shutter speed, using the same best guesses as before. This time, when we go to apply EV_{diff}, we will divide the adjustment in half and apply it to the aperture. We then recompute EV_{diff} with the adjusted aperture and apply the remaining adjustment to the shutter speed. This can be seen in the code example below.

Implementing automatic exposure in code

As explained in the first post, the aperture, shutter speed, and ISO are properties of the lens, camera body, and sensor respectively. The following values will be used:

t_{min} = \frac{1}{4000}s
t_{max} = \frac{1}{30}s
N_{min} = 1.8
N_{max} = 22.0
S_{min} = 100
S_{max} = 6400

// References:
// http://en.wikipedia.org/wiki/Film_speed
// http://en.wikipedia.org/wiki/Exposure_value
// http://en.wikipedia.org/wiki/Light_meter

// Notes:
// EV below refers to EV at ISO 100

// Given an aperture, shutter speed, and exposure value compute the required ISO value
float ComputeISO(float aperture, float shutterSpeed, float ev)
{
	return (Sqr(aperture) * 100.0f) / (shutterSpeed * powf(2.0f, ev));
}

// Given the camera settings compute the current exposure value
float ComputeEV(float aperture, float shutterSpeed, float iso)
{
	return Log2((Sqr(aperture) * 100.0f) / (shutterSpeed * iso));
}

// Using the light metering equation compute the target exposure value
float ComputeTargetEV(float averageLuminance)
{
	// K is a light meter calibration constant
	static const float K = 12.5f;
	return Log2(averageLuminance * 100.0f / K);
}

void ApplyAperturePriority(float focalLength,
						   float targetEV,
						   float& aperture,
						   float& shutterSpeed,
						   float& iso)
{
	// Start with the assumption that we want a shutter speed of 1/f
	shutterSpeed = 1.0f / (focalLength * 1000.0f);

	// Compute the resulting ISO if we left the shutter speed here
	iso = Clamp(ComputeISO(aperture, shutterSpeed, targetEV), MIN_ISO, MAX_ISO);

	// Figure out how far we were from the target exposure value
	float evDiff = targetEV - ComputeEV(aperture, shutterSpeed, iso);

	// Compute the final shutter speed
	shutterSpeed = Clamp(shutterSpeed * powf(2.0f, -evDiff), MIN_SHUTTER, MAX_SHUTTER);
}

void ApplyShutterPriority(float focalLength,
						  float targetEV,
						  float& aperture,
						  float& shutterSpeed,
						  float& iso)
{
	// Start with the assumption that we want an aperture of 4.0
	aperture = 4.0f;

	// Compute the resulting ISO if we left the aperture here
	iso = Clamp(ComputeISO(aperture, shutterSpeed, targetEV), MIN_ISO, MAX_ISO);

	// Figure out how far we were from the target exposure value
	float evDiff = targetEV - ComputeEV(aperture, shutterSpeed, iso);

	// Compute the final aperture
	aperture = Clamp(aperture * powf(Sqrt(2.0f), evDiff), MIN_APERTURE, MAX_APERTURE);
}

void ApplyProgramAuto(float focalLength,
					  float targetEV,
					  float& aperture,
					  float& shutterSpeed,
					  float& iso)
{
	// Start with the assumption that we want an aperture of 4.0
	aperture = 4.0f;

	// Start with the assumption that we want a shutter speed of 1/f
	shutterSpeed = 1.0f / (focalLength * 1000.0f);

	// Compute the resulting ISO if we left both shutter and aperture here
	iso = Clamp(ComputeISO(aperture, shutterSpeed, targetEV), MIN_ISO, MAX_ISO);

	// Apply half the difference in EV to the aperture
	float evDiff = targetEV - ComputeEV(aperture, shutterSpeed, iso);
	aperture = Clamp(aperture * powf(Sqrt(2.0f), evDiff * 0.5f), MIN_APERTURE, MAX_APERTURE);

	// Apply the remaining difference to the shutter speed
	evDiff = targetEV - ComputeEV(aperture, shutterSpeed, iso);
	shutterSpeed = Clamp(shutterSpeed * powf(2.0f, -evDiff), MIN_SHUTTER, MAX_SHUTTER);
}

Other Considerations for Exposure

Filters

If a photographer is presented with a tricky exposure, they always have the option of using filters on their lens. Filters serve many different purposes, from decreasing the incoming light to reducing reflections through polarization. A popular type of filter used in sunny situations is the Neutral Density, or ND, filter. This type of filter blocks a percentage of light as it enters the camera, usually specified in stops; so a 3 stop ND filter would reduce the scene brightness by a factor of 2^3. These are useful when the light is so bright that it becomes impossible to open the aperture past a certain point. To factor these into the camera model, we would simply scale the rendered scene before it goes into metering or the rest of the post processing pipeline. Another type of filter used in landscape photography is the graduated ND filter. Think of this filter as a gradient, from bottom to top, where the effect of the ND filter is weaker at the bottom and stronger at the top. These are used to reduce the brightness of the sky so its values do not clip to white when exposing for foreground objects.


Looking at a light through an ND filter brings it back into a range that the camera is able to capture
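Factoring a uniform (non-graduated) ND filter into the model could be sketched as a simple pre-metering scale; the function name is an illustrative assumption:

```cpp
#include <cmath>

// Attenuate scene luminance by an N-stop ND filter: each stop halves the
// light, so the scene is scaled by 2^-stops before metering.
float ApplyNDFilter(float sceneLuminance, float stops)
{
	return sceneLuminance * std::pow(2.0f, -stops);
}
```

In practice this scale would be applied per pixel to the HDR render target (or folded into the metering pass) before the rest of the post processing pipeline runs.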

Valid settings

We made the assumption above that the camera settings can be anything between the min and max settings. On a still camera this is often not the case. Most of the time the shutter speed will have a number of preset speeds that it can operate at, moving in a fixed EV increment like ±0.3. Similarly, the aperture will have discrete settings following 0.3 or 0.5 EV increments. One exception is lenses designed for cinema. These have a “de-clicked” aperture ring, which means the aperture can be any value between min and max, allowing the operator to adjust the exposure smoothly within a single take. This doesn’t need to be explicitly considered, but it may suggest that placing the camera in shutter priority makes the most sense.
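If you did want to model discrete settings, one option is to quantize in log2 (EV) space. This sketch snaps a continuous shutter speed to third-stop increments; the step size and function name are assumptions:

```cpp
#include <cmath>

// Quantize a continuous shutter speed to discrete EV increments, as a real
// camera body would. Shutter speed doubles per EV, so snap in log2 space.
float SnapShutterToStops(float shutterSpeed, float stepEV = 1.0f / 3.0f)
{
	float ev = std::log2(shutterSpeed);
	float snapped = std::round(ev / stepEV) * stepEV;
	return std::pow(2.0f, snapped);
}
```

The same quantization could be applied to the aperture by snapping log2(N^2) instead, since EV varies with the square of the f-number.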

Conclusion

Automatic exposure in games is likely here to stay. It would be extremely time consuming, and in some cases completely impossible, to provide camera settings and exposure values for every situation in a game. Hopefully this article provided some insight into how a camera deals with auto-exposure and how it relates back to the camera model settings. At the very least, it may give you some additional ideas for constructing a better interface to control the look and feel of the scene through the use of exposure compensation, setting priority modes, and scene metering modes. Now that we have a well exposed scene that can adjust to various lighting conditions, we will start to use the camera settings in some post effects. Stay tuned for the next post!
