The Implementation Notes series of blog posts is an attempt to outline the process of implementing a research paper or technique from a presentation. I find that, more often than not, when trying to recreate others’ results, not all the dots are connected. It takes some time and digging to get the whole picture; this is my attempt at sharing that background work.

In this edition of Implementation Notes (actually the first edition) we will be looking at two papers relating to rendering lens flares.

Physically-Based Real-Time Lens Flare Rendering^{1}

Practical Real-Time Lens-Flare Rendering^{2}

Some of the foundation work from the first paper will be presented and then an implementation based on the second paper will be outlined.

# Background

Cameras are composed of many complex systems, each of which can introduce visual artifacts. If these artifacts are not modeled to some extent in our rendered images, the results look suspiciously clean and artificial. If we introduce some of them, however, the images appear more natural to the viewer. One such artifact is internal reflection, commonly referred to as lens flare. These two papers model lens flares as the result of light reflecting off the surfaces of the optics inside the lens, hence the “Physically-Based”.

The configuration inside most lenses is surprisingly complex and can consist of dozens of individual lens elements. As light from the scene passes through each optical interface, a portion is reflected and the rest is refracted. The reflected light then moves backwards through the optical system, striking other optical interfaces along the way, and the process repeats. After any even number of bounces the reflected light is directed back towards the image sensor; however, the rays are no longer focused. This light produces a single lens flare, or ghost as the papers refer to it.

# Optical Systems

This technique is heavily based on the representation of the lens elements in an *Optical System.* This refers to the combination of lenses and empty spaces designed in such a way that light is focused onto an image plane. We begin with the assumption that the image plane *I* represents standard film (24x36mm) with an image circle diagonal of 43.3mm.

The optical axis of the system is the axis that runs through the center of the lens elements and intersects the center of the image plane. Lens elements are arranged along the optical axis, either directly next to each other or separated by a small amount of air. The standard description of an optical system is a table of data defining the interfaces between air and the lens elements. The data looks something like this:

| # | r | d | n |
|---|---------|-------|-------|
| 1 | 30.810 | 7.700 | 1.652 |
| 2 | -89.350 | 1.850 | 1.603 |
| … | … | … | … |

Here *r* is the radius of curvature for the interface, *d* is the distance to the **next** interface, and *n* is the refractive index of the lens element. The radius can be positive, indicating a convex surface; negative, indicating a concave surface; or 0, indicating a flat surface. Using this definition of an optical system allows us to use existing optical system data. The best place to find optical system descriptions is the patent database, though be warned: I don’t know the legality of using the optical design data to reproduce a lens digitally.

When reading a lens patent there are a couple of things to note. The first is that there is **a lot** of information in the patent! Most of it you don’t need and shouldn’t care about. The main thing you are looking for is the tabular data describing the lens system. There is usually a header that gives the *focal length* of the lens and the *f-number*; these determine the field of view of the lens and, essentially, how much light the lens can let in, respectively. If you are a really ambitious go-getter you can use this to drive the settings on your camera model.

It took me a while to find a zoom lens that had a focal length covering a normal game field of view (~70 degrees) and a low f-number, meaning the lens lets in a lot of light. So I can save you the pain of patent searching and point you at a couple of options:

- Nikon 28-75mm f/2.8 (US5835272)
- Canon 28-80 f/2.8 (US5576890)
- Canon 36-135mm f/3.5-f/4.6 (USPN4629294)

I have found that there is some information missing when reading the patent, namely the heights of the various elements. Luckily the height that matters the most is the *aperture stop*, and this can be calculated using the *effective aperture diameter*^{3}. The effective aperture represents what the physical aperture looks like if you were looking through the front of the lens. Since the elements in front of the physical aperture often have a magnification property, it will differ from the physical aperture stop. The formula is based only on the focal length *f* and the f-number *N*: *D = f / N*.

So for the Nikon zoom lens we can determine that, at a focal length of 28mm and an f-number of roughly 2.8, the effective aperture diameter should be 10mm. Once we are able to cast rays through the optical system this will be used to determine the size of the physical aperture.

Quick note: on more modern lenses (1980s and onward) there are often aspherical lens elements in the optical system. Since the intent here is not to rely on the precise qualities of the optical system, I chose to ignore this property and treat these elements as spherical.

# Ray tracing an Optical System

The first step towards modeling lens flares is to establish the optical system we will be evaluating. This can be validated by simply drawing (in 2D) the profile of the optical system. To perform the drawing you simply walk over the elements of the optical system, drawing each and then incrementing the draw cursor down the optical axis by the *d* value for that element. Each interface is drawn as an angular patch using the radius of curvature (*r*) and an angle of *asin(h / r)*, where *h* is the element height. For the first pass I just took a guess at the various element heights, making sure they didn’t intersect with the next elements. As mentioned before, we only care about the height of the aperture, which will be computed. Here is what the result should look like:

Next we will trace a single ray through the optical system considering only refraction. The pseudo code for this is:

```
OpticalInterface i = first;
while (i) {
    col = intersect(ray, i);
    if (!col.valid)
        break;
    ray.pos += ray.dir * col.dist;
    ray.dir = refract(ray.dir, col.norm, i.n1 / i.n2);
    i = i.next;
}
```

The result should look something like this:

Be mindful that there can be elements with radius 0; in these cases you should just do a ray-plane intersection. Once we have a single ray going through, we can cast a bunch of rays to see if they all converge on the focal plane. It is a good time to mention that the patents are specified assuming that you are focusing on an object at infinity. This means that all rays entering the lens are travelling in parallel.

As you can see, the rays that make it through the optical system are focusing to a single point. The reason many rays are terminated prior to reaching the sensor is that the *effective aperture* is much smaller than the first element. Recall that we can compute the *effective aperture* with *D = f / N*, and that for the Nikon lens we found *D = 10mm*. Now if we cast rays 5mm above and 5mm below the optical axis we can determine the height of the aperture (green line) as the height of the ray when it passes through the aperture stop, in this case ~22mm diameter.

To further validate the optical system, we can sweep rays through the full FOV range specified by the patent^{4}. Remember it isn’t going to be perfect since we ignore aspherical elements, but it should be relatively close. On a side note, this is an interesting experiment because you can see that fewer rays make it through at steeper angles, causing *Vignetting*^{5}.

Using this as the final test we can consider the optical system validated. If something were horribly wrong with it, you would notice at this point. From here we can start to trace the actual flares.

# Flares/Ghosts

The first thing we need to do is enumerate the different light paths that can end up back at the image plane. Since we lose a significant amount of intensity each time a ray reflects at an optical surface, we will follow the papers and only consider rays that have bounced exactly twice. An additional consideration is that a significant amount of light is lost when it passes through the aperture stop, so we can further restrict ourselves to light that passes the aperture stop exactly once and bounces exactly twice. Another way of thinking about this is that we only consider paths whose two bounces both occur *before* the aperture or both occur *after* it. The pseudo code for enumerating the flares/ghosts is below:

```
a = getApertureIndex()
n = getNumElements()
for (i = 0; i < a; ++i)
    for (j = i + 1; j < a; ++j)
        OpticalGhost ghost;
        ghost._reflections[0] = j;
        ghost._reflections[1] = i;
        ghosts.push_back(ghost);
for (i = a + 1; i < n; ++i)
    for (j = i + 1; j < n; ++j)
        OpticalGhost ghost;
        ghost._reflections[0] = j;
        ghost._reflections[1] = i;
        ghosts.push_back(ghost);
```

Now that we have a list of all possible ghosts in the system, we need to modify the ray tracing to account for reflecting at the chosen interfaces. I’ll leave that as an exercise for the reader since it is fairly straightforward. The result of tracing a single ghost should look like this:

This result suggests that we will have a flare at the center of the screen and that it will be clipped by the aperture plane (thus aperture shaped). This is the end of the ray-tracing approach presented by the first paper, but we will use it to confirm an alternative approach, the *Matrix Method.*

# Matrix Method

The matrix method is a general optics technique that approximates the effects of an optical system as a linear system. Since each optical interface is represented as a linear system, the results can be concatenated into a single matrix describing the entire system. At the core of this approach is the *paraxial approximation*^{6}, which states that for small angles *θ* (< 10 degrees): *sin θ ≈ θ* and *tan θ ≈ θ*.

Using this approximation we can represent the transforms of our optical system, namely translation (T), reflection (L), and refraction (R), as 2×2 matrices operating on the height and angle of a ray, *[h, θ]ᵀ*. For full derivations of these matrices see Matrix Methods in Paraxial Optics.

So now, instead of tracing N rays for each ghost, we simply concatenate the matrices that represent each optical interface and project a flare quad. Generally there will be pairs of transforms that translate to and then refract through the interface; since we are talking about matrices, the order is reversed: *R T*. When we hit the interface we should reflect at, we multiply by the reflection *L*, and then, as we progress backwards through the system, we apply the reversed versions of the above pairs, *R̃ T* (translation is unaffected by the change in direction), until we hit the second reflection *L̃*, at which point we resume concatenating the remaining pairs using the original *R T*. A simple example for a 3-interface system, with reflections at the second and then the first interface: the ray translates to and refracts through the first interface (*R₁T₁*), translates to and reflects off the second (*L₂T₂*), travels back to the first (*T₂*), reflects off it (*L̃₁*), and then propagates out of the system (*R₃T₃R₂T₂*), giving *M = R₃T₃R₂T₂ · L̃₁ · T₂ · L₂ · R₁T₁*.

**Important!** The inverse in this case doesn’t refer to the matrix inverse, but rather the same operation performed in the reverse direction, i.e. *T̃ = T*, *R̃* is the refraction from *n₂* back into *n₁*, and *L̃* is the reflection hit from the opposite side. This isn’t mentioned in the original paper, and if you use the standard matrix inverse the results will be quite a bit off.

# Placing the ghosts

Using the matrices that describe the system, we generate a flare quad for the final flare rendering. To do this we actually need two matrices: the first describes the optical system up to the aperture stop (*M_a*), the second describes the optical system from the aperture stop to the sensor plane (*M_s*). *M_a* will be used to find the entrance *pupil* of the ghost and *M_s* will be used to find the projected aperture. If you recall from earlier, the *entrance pupil* is the aperture as seen from the front of the lens following the path of light that we care about. Essentially we want to solve for the height of the *entrance pupil* *H_e* using the direction of the light *θ_e*, the height of the aperture *H_a*, and the optical system matrix of the lens prior to the aperture *M_a*.

Disregarding the output angle, since we only care about the height of the entrance pupil, we can expand the matrix multiply:

*H_a = M_a[0][0] · H_e + M_a[1][0] · θ_e* (column major)

solving for *H_e*:

*H_e = (H_a − M_a[1][0] · θ_e) / M_a[0][0]*

Using this equation and plugging in *±H_a* we can find tightly fitting bounds for the aperture. These bounds are then projected through the entire optical system (*M_s M_a*) to find the final flare quad position on the sensor. As a final step the positions are projected onto the sensor’s image circle, which for 35mm film is 43.3mm in diameter (21.65mm radius), to get the final normalized coordinates and size.

These values represent offsets along the vector from the center of the screen to the light position (in my case the sun). I found the screen-space position of the sun by projecting a point a far distance from the camera in the direction of the sun, although I’m sure there are more accurate ways. The starting angle is simply *θ_e = acos(E · L)*, where *E* is the forward vector of the camera in world space and *L* is the direction of the sun. The resulting pseudo code looks like this:

```
Theta_e = acosf(dot(E, L));
Axis = Project(CameraPos + L * 10000.0).Normalize();
H_a = GetApertureHeight();
foreach ghost
    // Aperture projected onto the entrance
    H_e1 = (H_a - ghost.M_a[1][0] * Theta_e) / ghost.M_a[0][0];
    H_e2 = (-H_a - ghost.M_a[1][0] * Theta_e) / ghost.M_a[0][0];

    // Flare quad bounds projected through the system
    H_p1 = ((ghost.M_s * ghost.M_a) * Vector2(H_e1, Theta_e)).x;
    H_p2 = ((ghost.M_s * ghost.M_a) * Vector2(H_e2, Theta_e)).x;

    // Project on to image circle
    H_p1 /= 21.65;
    H_p2 /= 21.65;

    Center = (H_p1 + H_p2) / 2;
    Radius = abs(H_p1 - H_p2) / 2;

    Transform = CreateScale(Radius, Radius * AspectRatio());
    Transform *= CreateTranslation(Axis * Center);

    RenderFlare(Transform);
```

# Intensity and Color

Once we have the placement of the ghost we need to determine what its intensity and color should be. I chose to deviate from the papers slightly for both of these values. For the intensity, I chose to make it relative to that of the optical system’s effective aperture. I calculate this by determining the entrance pupil height for the optical system without introducing any reflections. Then I calculate the projected size of the aperture with *θ_e = 0*; since it is symmetrical you only need to do this once:

Then the intensity is initialized to the ratio between the surface area of the ghost’s effective pupil and the surface area of the system’s effective pupil. The intensity is then divided by the final projected area of the flare quad. The final intensity represents the relative intensity compared to an in-focus pixel, so you multiply the light color by it to get your final pre-exposed color.

```
entrancePupil = H_a / system.M_a[0][0];
Intensity = Sqr(H_e1 - H_e2) / Sqr(2 * entrancePupil);
Intensity /= Sqr(2 * Radius);
```

With the intensity calculated, the color must be determined. Once again I deviated slightly from the papers. The main contributor to a flare’s color is the anti-reflective coating on the lens. These coatings are designed to create an interference pattern that cancels out light reflected at each interface, similar to noise-cancelling headphones. The shift in color occurs because the coatings are designed to minimize reflections at a specific wavelength, generally a mid-length wavelength like 500nm; this results in non-uniform reflection across the various wavelengths. I found the best explanation and reference here; this is the same as is presented in the papers. I deviate from the authors in that, rather than trace a single ray to determine the color, I chose to evaluate the reflectance at a random angle per ghost (once each for red, green, and blue). This produces a stable color per flare and makes it easier to ensure that the angle doesn’t exceed the critical angle of the system, which would result in invalid reflectance values.

```c
// http://www.gen-opt.com/lamda_admin/Lamda_Edit/UploadFile/2011222112937198.pdf
// lambda - Wavelength of light being tested for reflectance
// d      - Thickness of the anti-reflective coating (lambda0 / 4.0f / n2) where
//          lambda0 is the wavelength of light the coating was designed for,
//          typically a midrange wavelength like green (550 nm)
// theta1 - Angle of incidence at the edge of the coating
// n1     - The index of refraction for the incoming material
// n2     - The index of refraction for the coating, max(sqrt(n1 * n3), 1.38) where
//          1.38 is the index of refraction for a common magnesium fluoride coating
// n3     - The index of refraction for the outgoing material
static float Reflectance(float lambda, float d, float theta1, float n1, float n2, float n3)
{
    // Apply Snell's law to get the other angles
    float theta2 = asinf(n1 * sinf(theta1) / n2);
    float theta3 = asinf(n1 * sinf(theta1) / n3);

    float cos1 = cosf(theta1);
    float cos2 = cosf(theta2);
    float cos3 = cosf(theta3);

    float beta = (2.0f * MATH_PI) / lambda * n2 * d * cos2;

    // Compute the fresnel terms for the first and second interfaces for both
    // s and p polarized light
    float r12p = (n2 * cos1 - n1 * cos2) / (n2 * cos1 + n1 * cos2);
    float r12p2 = r12p * r12p;

    float r23p = (n3 * cos2 - n2 * cos3) / (n3 * cos2 + n2 * cos3);
    float r23p2 = r23p * r23p;

    float rp = (r12p2 + r23p2 + 2.0f * r12p * r23p * cosf(2.0f * beta)) /
               (1.0f + r12p2 * r23p2 + 2.0f * r12p * r23p * cosf(2.0f * beta));

    float r12s = (n1 * cos1 - n2 * cos2) / (n1 * cos1 + n2 * cos2);
    float r12s2 = r12s * r12s;

    float r23s = (n2 * cos2 - n3 * cos3) / (n2 * cos2 + n3 * cos3);
    float r23s2 = r23s * r23s;

    float rs = (r12s2 + r23s2 + 2.0f * r12s * r23s * cosf(2.0f * beta)) /
               (1.0f + r12s2 * r23s2 + 2.0f * r12s * r23s * cosf(2.0f * beta));

    return (rs + rp) * 0.5f;
}
```

Anti-reflective coating is one of the areas where lens making has made significant strides over the past 20 years. Below is a comparison between a lens from the 80s and a lens from today.

# Flare Texture

Now that we have a color, intensity, and placement for the flare, we need to generate a texture. This would be an ideal place to hand it over to an artist to whip something up, but I don’t have that luxury at the moment so we will just have to make do. The flare texture should be a projection of the aperture’s shape. On most lenses this is an n-gon, where n is the number of aperture blades. On higher-end lenses a lot of effort goes into maintaining a circular aperture shape at all f-numbers. The simplest way to support either of these is to generate a small distance field texture, 64×64 in my implementation. If you simply want a circular aperture then the distance is just:

```
distance = length(In.Texcoord * 2.0f - 1.0f);
```

If you want an n-gon then you can calculate the distance as the max over each axis of the coordinate dotted with that axis, where an axis is a vector perpendicular to an aperture edge:

```
distance = 0;
coord = In.Texcoord * 2.0f - 1.0f;
foreach axis
    distance = max(distance, dot(axis, coord));
```

This is the most flexible representation, but you can also exploit symmetry to execute fewer dot products. For example, an octagon can be evaluated with 1 dot and 2 max operators instead of 8 dot products and 8 max operators.

```
coord = abs(Texcoord * 2.0f - 1.0f);
distance = max(coord.x, coord.y);
distance = max(distance, dot(coord, float2(0.70710678118f, 0.70710678118f)));
```

## Smoothing the result

Directly using this shape gives acceptable results, but when applying a falloff based on the distance field, some peaks at the aperture edges can be seen. To eliminate these I applied a smooth max operator, adapted from Inigo Quilez’s Smooth Min article and the identity *max(a, b) = −min(−a, −b)*.

```
// polynomial smooth min
// from: http://www.iquilezles.org/www/articles/smin/smin.htm
float smin(float a, float b, float k)
{
    float diff = b - a;
    float h = saturate(0.5 + 0.5 * diff / k);
    return b - h * (diff + k * (1.0f - h));
}

float smax(float a, float b, float k)
{
    float diff = a - b;
    float h = saturate(0.5 + 0.5 * diff / k);
    return b + h * (diff + k * (1.0f - h));
}
```

Using this technique you can get lots of different source aperture shapes.

# Entrance Clipping

To get the final shape of the flare we need to account for the case where our effective pupil lies outside the physical entrance height. In this case we should clip the aperture shape with the entrance shape. Incorporating this effect is relatively simple and can break up the otherwise repetitive look. Pseudo code follows:

```
...
H_e = GetEntranceHeight();
foreach ghost
    ...
    // Entrance projected on to the aperture
    E_a1 = (ghost.M_a * Vector2(H_e, Theta_e)).x;
    E_a2 = (ghost.M_a * Vector2(-H_e, Theta_e)).x;

    // Move the projection into aperture space as a position and radius
    float C_e = ((E_a1 + E_a2) * 0.5) / H_a - Center;
    float R_e = (abs(E_a1 - E_a2) * 0.5) / H_a;
```

These two parameters are uploaded along with the screen-space light direction and are combined with the distance field using the same smooth max approach as before:

```
// Sample the distance field
float fDistance = FlareTexture.SampleLevel(LinearSampler, Texcoord, 0.0f).x;

// Clip against the entrance in aperture space
float2 coord = Texcoord * 2.0f - 1.0f;
fDistance = smax(fDistance, length(coord - vEntranceCenterRadius.x * vViewspaceDirection) / vEntranceCenterRadius.y);

// Finally fade the edge to get the final aperture mask
float fApertureMask = smoothstep(0.95, 0.80, fDistance);
```

# Final Results

This brings us to the end of what I decided to implement from the papers. Here are some final results using the info presented in the post.

# Optimizations

The authors present some recommendations for performance, such as culling flares whose intensity falls below a certain value or only rendering the brightest 40% of flares. These ideas could be extended by rendering the flares at multiple resolutions to avoid the fill-rate cost of the larger flares that cover a majority of the screen. On the CPU side, the system matrices only need to be computed once, so there is no need to recompute them every frame.

# Conclusion

The papers present an interesting method for a “Physically Based” lens flare model. If I were to implement it in a production engine I would leave the flare shapes and colors artist-configurable and simply implement the flare placement and entrance clipping. The method outlined in this post has its limitations since it still represents the entire optical system with a linear approximation. Perhaps an interesting approach would be to apply this method to the majority of the flares in a system and use the more expensive ray-tracing technique proposed by the first paper for the complex flares. Well, I hope you enjoyed the first installment of Implementation Notes. If you liked it let me know, and if you do this type of thing in your own time you should write about it!

- http://resources.mpi-inf.mpg.de/lensflareRendering/pdf/flare.pdf ↩
- https://graphics.tudelft.nl/Publications-new/2013/LE13/PracticalReal-TimeLens.pdf ↩
- http://en.wikipedia.org/wiki/F-number ↩
- http://en.wikipedia.org/wiki/Angle_of_view ↩
- http://en.wikipedia.org/wiki/Vignetting ↩
- http://en.wikipedia.org/wiki/Paraxial_approximation ↩


Your Implementation Notes made it possible for me to create a simple alpha implementation. I’ve built a few matrices M_a and M_s by hand and the rendering results look promising. Still, there is no mention of whether reflections can occur only in M_a or also in M_s.

If they can also occur in M_s (which would be logical to me), the light ray could go through the iris aperture more than once. If that’s really the case, I don’t know which matrix multiplications to put into M_a and which into M_s.

Can you provide me some useful hints?


Only about 4% of light reflects off of an interface, so it takes 2 reflections to direct it back to the sensor. A path that crosses the aperture more than once would need additional reflections, leaving a very small amount of light actually making it back, so such paths are ignored in this model.
