Temporal anti-aliasing
Temporal anti-aliasing (TAA) is a spatial anti-aliasing technique for computer-generated video that combines information from past frames with the current frame to remove jaggies in the current frame. In TAA, each pixel is sampled once per frame, but in each frame the sample is taken at a different location within the pixel. Pixels sampled in past frames are blended with pixels sampled in the current frame to produce an anti-aliased image. Although this allows TAA to achieve results comparable to supersampling, the technique inevitably introduces some ghosting and blurring.[1]
TAA compared to MSAA
Prior to the development of TAA, MSAA was the dominant anti-aliasing technique. MSAA samples (renders) only the edges of polygons at a higher rate, then averages the samples to produce the final pixel value, making it considerably cheaper than supersampling the entire image. In contrast, TAA reuses samples from previous frames, which makes it faster than MSAA but can introduce artifacts such as ghosting. In parts of the picture without motion, TAA effectively computes MSAA over multiple frames, though it does not always achieve the same quality.
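For instance, the "resolve" step of 4× MSAA averages the subsamples stored for each pixel. The following is a minimal sketch of that averaging step only; the `Color` struct and function name are illustrative and not part of any real graphics API:

```cpp
#include <array>

struct Color {
    float r, g, b;
};

// Resolve one 4x MSAA pixel: average its four subsamples to produce
// the final, anti-aliased pixel value.
Color resolve_msaa_pixel(const std::array<Color, 4>& subsamples) {
    Color sum{0.0f, 0.0f, 0.0f};
    for (const Color& s : subsamples) {
        sum.r += s.r;
        sum.g += s.g;
        sum.b += s.b;
    }
    return {sum.r / 4.0f, sum.g / 4.0f, sum.b / 4.0f};
}
```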
TAA compared to FXAA
TAA and FXAA both sample each pixel only once per frame, but FXAA does not take pixels sampled in past frames into account, so it is simpler and faster but cannot achieve the same image quality as MSAA or TAA. Like TAA, FXAA is known for the blur it applies to the image, which is not ideal for detail-heavy games; the difference is that FXAA's blur is inherent to its edge-smoothing filter, while TAA's blur is a byproduct of accumulating samples over time.
Implementation
Sampling the pixels at a different position in each frame can be achieved by adding a per-frame "jitter" when rendering the frames. The jitter is a 2D offset that shifts the pixel grid, and its X and Y magnitudes are between 0 and 1 pixel.[2][3]
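As a concrete example, implementations commonly draw the per-frame jitter from a low-discrepancy sequence; the Halton sequence in bases 2 and 3 is one frequently cited choice. The sketch below is illustrative, assuming an 8-frame jitter cycle, and is not tied to any particular engine:

```cpp
#include <cstdio>

// Radical inverse of `index` in the given base: one element of the
// Halton low-discrepancy sequence, with values in [0, 1).
float halton(int index, int base) {
    float result = 0.0f;
    float fraction = 1.0f;
    while (index > 0) {
        fraction /= static_cast<float>(base);
        result += fraction * static_cast<float>(index % base);
        index /= base;
    }
    return result;
}

int main() {
    // Per-frame subpixel jitter, cycling through 8 sample positions.
    // Halton(2, 3) spreads the offsets evenly over the pixel, and both
    // components stay between 0 and 1, as described above.
    for (int frame = 0; frame < 16; ++frame) {
        int i = (frame % 8) + 1;        // start at 1 to avoid a (0, 0) sample
        float jitter_x = halton(i, 2);  // X offset within the pixel
        float jitter_y = halton(i, 3);  // Y offset within the pixel
        std::printf("frame %2d: jitter = (%.3f, %.3f)\n",
                    frame, jitter_x, jitter_y);
    }
    return 0;
}
```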
When combining pixels sampled in past frames with pixels sampled in the current frame, care must be taken to avoid blending pixels that contain different objects, which would produce ghosting or motion-blurring artifacts. Different implementations of TAA achieve this in different ways. Possible methods, combined in the sketch after this list, include:
- Using motion vectors from the game engine to perform motion compensation before blending.
- Limiting (clamping) the final value of a pixel by the values of pixels surrounding it.[2]
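A minimal per-pixel sketch combining the two methods above, assuming that motion compensation has already produced a reprojected history value; the names, the scalar (single-channel) values, the 3×3 neighborhood, and the blend weight of 0.1 are illustrative assumptions rather than a fixed part of TAA:

```cpp
#include <algorithm>

// One TAA accumulation step for a single pixel, using scalar values in
// place of full RGB colors for brevity.
//   current          - this frame's jittered sample at the pixel
//   history          - last frame's accumulated value, reprojected to this
//                      pixel using the engine's motion vectors
//   neighborhood_min - minimum over the current frame's 3x3 neighborhood
//   neighborhood_max - maximum over the current frame's 3x3 neighborhood
float taa_accumulate(float current, float history,
                     float neighborhood_min, float neighborhood_max) {
    // Clamp the reprojected history to the range of the current
    // neighborhood, rejecting stale values that would otherwise ghost.
    float clamped_history =
        std::clamp(history, neighborhood_min, neighborhood_max);

    // Exponential moving average: keep most of the (clamped) history and
    // blend in a small fraction of the current sample, so the jittered
    // samples accumulate into an anti-aliased result over several frames.
    const float alpha = 0.1f;  // weight given to the new sample
    return alpha * current + (1.0f - alpha) * clamped_history;
}
```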
TAA compared to DLSS
Nvidia's DLSS operates on similar principles to TAA. Like TAA, it uses information from past frames to produce the current frame. Unlike TAA, DLSS does not sample every pixel in every frame. Instead, it samples different pixels in different frames and uses pixels sampled in past frames to fill in the unsampled pixels in the current frame. DLSS uses machine learning to combine samples in the current frame and past frames, and it can be thought of as an advanced TAA implementation.[4][5]
See also
- Multisample anti-aliasing
- Fast approximate anti-aliasing
- Deep learning super sampling
- Deep learning anti-aliasing
- Supersampling
- Deinterlacing
- Spatial anti-aliasing
- Morphological antialiasing
References
- [1] Yang, Lei; Liu, Shiqiu; Salvi, Marco (2020-06-13). "A Survey of Temporal Antialiasing Techniques". Computer Graphics Forum. 39 (2): 607–621. doi:10.1111/cgf.14018. ISSN 0167-7055. S2CID 220514131.
- [2] Brian Karis, Epic Games. "High Quality Temporal Supersampling".
- [3] Ziyad Barakat. "Temporal Anti Aliasing – Step by Step".
- [4] Edward Liu, NVIDIA. "DLSS 2.0 – Image Reconstruction for Real-time Rendering with Deep Learning".
- [5] yellowstone6. "How DLSS 2.0 works (for gamers)".