zaro

How does a fragment shader work?

Published in Graphics Programming

The fragment shader is a powerful, programmable stage in the graphics pipeline responsible for determining the final color and depth of each pixel on your screen. It processes individual "fragments" generated during rasterization, transforming them into the visible output.

The Core Function of the Fragment Shader

A fragment shader is the shader stage that processes a fragment generated by rasterization into a set of colors and a single depth value. In other words, it takes preliminary per-pixel data and calculates what you ultimately see.

It runs in the OpenGL pipeline after a primitive (such as a triangle) has been rasterized. For each sample of each pixel covered by a primitive, a "fragment" is generated. Each fragment represents a potential pixel on the screen and carries interpolated data from the previous stages.
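As a concrete starting point, here is roughly the smallest useful GLSL fragment shader. Note that the output name FragColor is a common convention, not a built-in:

```glsl
#version 330 core

// User-declared output: the final RGBA color for this fragment.
out vec4 FragColor;

void main()
{
    // Every fragment covered by the primitive gets this solid orange color.
    FragColor = vec4(1.0, 0.5, 0.2, 1.0);
}
```

The GPU invokes main() once for every fragment the rasterizer produces, which is why this stage is so highly parallel.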

How Fragments Are Processed

The fragment shader's primary job is to take the interpolated data associated with a fragment and compute its final color and depth. This stage is executed for every single fragment, making it highly parallel and computationally intensive.

Key Inputs to the Fragment Shader

Before a fragment reaches the fragment shader, it has already been processed by earlier pipeline stages, primarily the Vertex Shader. The vertex shader calculates attributes for each vertex, and during rasterization, these attributes are interpolated across the surface of the primitive to create per-fragment values.

Common inputs include:

  • Interpolated Varying Variables: Values such as colors, texture coordinates (UVs), and normal vectors that have been smoothly interpolated across the triangle's surface. These are declared as varying in legacy GLSL, or as in variables in the fragment shader (matching the vertex shader's out variables) in modern GLSL.
  • Uniform Variables: Global data that remains constant for an entire draw call, such as light positions, camera view/projection matrices, time, or global color tints.
  • Sampler Objects: References to bound textures, allowing the shader to sample (read) color data from texture images.
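The three kinds of inputs look like this in a modern GLSL fragment shader; all variable names here are illustrative, not required:

```glsl
#version 330 core

// Interpolated per-fragment inputs, written as "out" by the vertex shader.
in vec2 vTexCoord;   // texture coordinates (UVs)
in vec3 vNormal;     // surface normal
in vec3 vFragPos;    // position in world space

// Uniforms: constant for the entire draw call.
uniform vec3  lightPos;
uniform vec4  tintColor;
uniform float time;

// Sampler: a reference to a bound 2D texture.
uniform sampler2D diffuseMap;

out vec4 FragColor;

void main()
{
    // Sample the texture at the interpolated UVs, then apply a global tint.
    FragColor = texture(diffuseMap, vTexCoord) * tintColor;
}
```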

Core Operations Performed

Within the fragment shader, complex calculations transform the input data into the final pixel properties. These operations often involve:

  • Texture Sampling: Reading color information from a texture map based on interpolated texture coordinates. This is how images are applied to 3D models.
  • Lighting Calculations: Determining how light interacts with the surface at the fragment's position, taking into account surface normals, light sources, and material properties. This can range from simple Lambertian shading to advanced Physically Based Rendering (PBR).
  • Color Manipulation: Adjusting the sampled color based on various factors like tint, brightness, or contrast.
  • Alpha Blending/Transparency: Setting the fragment's opacity through its alpha output. If the fragment is semi-transparent, the later blending stage mixes its color with whatever is already behind it.
  • Depth Testing & Writing: Optionally writing a gl_FragDepth value to override the fragment's interpolated depth. The depth value (written or interpolated) is later compared against the depth buffer to decide whether this fragment is closer to the camera than what is already at that pixel.
  • Discarding Fragments: A fragment shader can choose to discard a fragment, effectively preventing it from being drawn. This is useful for creating cut-out effects (e.g., foliage with transparent areas).
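Several of these operations can be combined in one shader. The sketch below (uniform and varying names assumed, as before) samples a texture, discards nearly invisible fragments for a cut-out effect, and applies simple Lambertian lighting:

```glsl
#version 330 core

in vec2 vTexCoord;
in vec3 vNormal;
in vec3 vFragPos;

uniform sampler2D diffuseMap;
uniform vec3 lightPos;
uniform vec3 lightColor;

out vec4 FragColor;

void main()
{
    vec4 texel = texture(diffuseMap, vTexCoord);

    // Cut-out transparency: skip nearly invisible fragments entirely.
    if (texel.a < 0.1)
        discard;

    // Lambertian (diffuse) lighting from a single point light.
    vec3  N    = normalize(vNormal);
    vec3  L    = normalize(lightPos - vFragPos);
    float diff = max(dot(N, L), 0.0);

    vec3 ambient = 0.1 * lightColor;
    vec3 diffuse = diff * lightColor;

    FragColor = vec4((ambient + diffuse) * texel.rgb, texel.a);
}
```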

Outputs of the Fragment Shader

The output of a fragment shader dictates what appears on the screen:

  • Color Data: The final Red, Green, Blue, and Alpha (RGBA) color for the pixel. Commonly declared as out vec4 FragColor.
  • Depth Value: A single floating-point value giving the fragment's depth from the camera, used for depth testing. The built-in float gl_FragDepth.

These outputs are then typically sent to the Blending and Depth Testing stage of the pipeline, where they are compared with existing pixel data in the framebuffer before being written.
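Both outputs together look like this in GLSL. Writing gl_FragDepth is optional; if you don't, the interpolated depth is used automatically (and the small offset below is purely illustrative):

```glsl
#version 330 core

out vec4 FragColor;

void main()
{
    // Color output: solid red.
    FragColor = vec4(1.0, 0.0, 0.0, 1.0);

    // Optional depth output: override the interpolated depth.
    // Writing gl_FragDepth disables early depth testing on most
    // hardware, so only do this when you actually need to.
    gl_FragDepth = gl_FragCoord.z + 0.001;
}
```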

Fragment Shader in the Graphics Pipeline Flow

To understand its context, consider the simplified flow of the graphics pipeline:

  1. Application: Your program sends model data (vertices, textures) to the GPU.
  2. Vertex Shader: Processes individual vertices, transforming their positions from model space to clip space and calculating other per-vertex attributes.
  3. Primitive Assembly: Connects vertices into primitives (points, lines, triangles).
  4. Rasterization: Converts primitives into a set of fragments. This is where the "fragment" is generated for each potential pixel covered by a primitive.
  5. Fragment Shader: This is where each generated fragment is processed into a final color and depth value.
  6. Blending & Depth Test: The fragment's color and depth are compared with existing pixel data in the frame and depth buffers. If it passes, it's written.
  7. Framebuffer: The final image is stored here, ready to be displayed on your screen.
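The handoff between stages 2, 4, and 5 is visible in a minimal vertex/fragment shader pair: the vertex shader writes per-vertex values as out variables, the rasterizer interpolates them, and the fragment shader receives them as in variables (the mvp uniform and the attribute layout here are assumptions for the sketch):

```glsl
// Vertex shader: runs once per vertex (stage 2).
#version 330 core
layout (location = 0) in vec3 aPos;
layout (location = 1) in vec3 aColor;

uniform mat4 mvp;   // model-view-projection matrix

out vec3 vColor;    // interpolated across the triangle during rasterization

void main()
{
    gl_Position = mvp * vec4(aPos, 1.0);
    vColor = aColor;
}
```

```glsl
// Fragment shader: runs once per fragment (stage 5).
#version 330 core
in vec3 vColor;     // already interpolated by the rasterizer

out vec4 FragColor;

void main()
{
    FragColor = vec4(vColor, 1.0);
}
```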

Practical Applications and Examples

The versatility of the fragment shader makes it indispensable for creating rich, realistic, and visually appealing graphics:

  • Realistic Lighting: Implementing sophisticated lighting models like Phong or PBR to simulate how light reflects off surfaces.
  • Texture Mapping: Applying images (textures) onto 3D models for detailed surfaces like wood grain, brick walls, or character skin.
  • Normal Mapping: Using a texture to store surface normal information, allowing for highly detailed lighting without needing complex geometry.
  • Environmental Effects: Simulating fog, atmospheric scattering, or underwater distortion.
  • Post-Processing Effects: Applying visual filters to the entire rendered scene, such as blur, bloom, color correction, or chromatic aberration.
  • Transparent Objects: Rendering glass, water, or smoke with correct blending.
  • Special Effects: Creating outlines, heat haze, force fields, and more.
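As one small example of post-processing, a full-screen grayscale filter is just a fragment shader run over a quad that covers the screen, sampling the previously rendered scene (the sceneTexture and vTexCoord names are assumptions):

```glsl
#version 330 core

in vec2 vTexCoord;              // UVs of the full-screen quad
uniform sampler2D sceneTexture; // the previously rendered scene

out vec4 FragColor;

void main()
{
    vec3 color = texture(sceneTexture, vTexCoord).rgb;

    // Luminance-weighted grayscale: a classic post-processing filter.
    float gray = dot(color, vec3(0.2126, 0.7152, 0.0722));
    FragColor = vec4(vec3(gray), 1.0);
}
```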

In essence, the fragment shader is the artist's canvas in the 3D rendering pipeline, allowing for fine-grained control over the final appearance of every visible pixel.