Triangles on Web Ch7: WebGPU Shading Language
We have used the WebGPU Shading Language (WGSL) frequently, but we haven't actually talked about it in detail. In this chapter, we will cover the syntax and semantics of WGSL, focusing mainly on how it differs from general-purpose programming languages like C and JavaScript.
Annotations
The very first significant difference is the annotations starting with @ (the WGSL specification calls them attributes). There are mainly two kinds of annotations: function annotations and parameter annotations.
Function Annotations
We have used only four of them so far: @vertex, @fragment, @compute and @workgroup_size. The first three specify which shader stage a function is an entry point for, and the last one specifies the workgroup size of a compute shader. We have already introduced their usage in the previous chapters.
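As a minimal sketch, a compute entry point combines two of these annotations; the entry point name, the workgroup size and the empty body here are arbitrary illustrations, not anything WebGPU requires:

@compute @workgroup_size(8, 8, 1)
fn compute_main(@builtin(global_invocation_id) id: vec3u) {
    // each invocation can use its global id to decide which element to process
}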
Parameter Annotations
Unlike general-purpose programming languages, a shading language requires you to explicitly declare the memory layout of the parameters and the return value of a function.
We have already used @builtin and @location; now let's dive deeper into them.
Let's consider the following shader function:
struct VertexInput {
    @builtin(vertex_index) index: u32,
    @location(0) position: vec3f,
    @location(1) norm: vec3f,
};

struct VertexOutput {
    @builtin(position) position: vec4f,
    @location(0) norm: vec3f,
};

@vertex
fn vertex_main(input: VertexInput) -> VertexOutput {
    // ...
}
A struct in the shading language is more like a struct in C: it specifies a memory layout.
First, let's look at the input. The input struct of a function in the shading language consists of two parts: the locational fields and the builtin fields.
The locational fields are marked with @location, which specifies which attribute slot the field is read from. For example, @location(1) norm: vec3f declares that norm is filled from the vertex attribute bound at location 1.
Aside from the locational fields, there are some hidden fields that are automatically filled by the WebGPU runtime. For example, vertex_index is provided by the runtime for every vertex. These fields are distinct from the locational ones and are handled by the runtime, so you mark them with @builtin instead of assigning them a location.
The input layout is specified by the vertex buffer layout when you create the render pipeline, whereas the output layout of the vertex shader is constrained by the WebGPU runtime: a @builtin(position) field is required.
As for the fragment shader, the input is simply the output of the vertex shader, and the output depends on the attachments you define. Color attachments are addressed with @location; for the depth attachment, you can optionally write @builtin(frag_depth) to override the depth value.
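For instance, a fragment shader that writes to the first color attachment and overrides the depth could look like the following sketch, reusing the VertexOutput struct from above; the struct name, the color computation and the constant depth are purely illustrative:

struct FragmentOutput {
    @location(0) color: vec4f,
    @builtin(frag_depth) depth: f32,
};

@fragment
fn fragment_main(input: VertexOutput) -> FragmentOutput {
    var out: FragmentOutput;
    // visualize the interpolated normal as a color
    out.color = vec4f(input.norm * 0.5 + vec3f(0.5), 1.0);
    // overriding the depth is optional; omit this field to keep the interpolated depth
    out.depth = 0.5;
    return out;
}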
You can also tear the struct apart and specify the location or builtin fields directly in the function signature. For example:
@vertex
fn vertex_main(
    @builtin(vertex_index) index: u32,
    @location(0) position: vec3f,
    @location(1) norm: vec3f,
) -> VertexOutput {
    // ...
}
Types
Types here work much the same as in other languages, but there are many builtin types for vectors and matrices. The commonly used ones are:
- u32, i32, f32, for unsigned integers, signed integers and floats. The number indicates the bit width.
- vec2<T>, vec3<T>, vec4<T>, for vectors. Because f32 vectors are used most frequently, you can use vec2f, vec3f, vec4f as shorthand (see the sketch after this list).
- mat2x2<T>, mat3x3<T>, mat4x4<T>, for matrices. Because f32 matrices are used most frequently, you can use mat2x2f, mat3x3f, mat4x4f as shorthand. There are also mat2x3, mat3x2, mat3x4, mat4x3, mat2x4 and mat4x2 for other combinations.
- bool and the boolean vectors vec2<bool>, vec3<bool>, vec4<bool>.
- sampler and sampler_comparison, for samplers.
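A quick, purely illustrative sketch of constructing and combining these types:

fn demo_types() -> vec3f {
    // the shorthand names construct the same types as vec3<f32> and mat3x3<f32>
    let offset = vec3f(1.0, 2.0, 3.0);
    let scale = mat3x3f(
        2.0, 0.0, 0.0,
        0.0, 2.0, 0.0,
        0.0, 0.0, 2.0,
    );
    // matrix * vector multiplication is built in
    return scale * offset;
}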
Builtin Functions
There are many builtin functions in the shading language:
- math functions, like sin, cos, ceil, floor, clamp, exp, etc.
- type converters, like f32().
- sampler functions, like textureSample, which returns a vec4f color; we have introduced it in the previous chapters.
- derivative functions, like dpdx and dpdy. They are only valid in fragment shaders, and the way they compute derivatives is special: the GPU shades fragments in 2x2 pixel blocks and takes the difference of the expression between neighboring pixels in the block, using the pixel positions indicated by @builtin(position).
- vector functions, like dot, cross and normalize. Since lighting calculations are frequent, there are also reflect, refract and faceForward (see the sketch after this list).
- matrix functions, like transpose and determinant.
- packing functions, like pack4x8snorm. It takes a vector and packs it into a 32-bit integer; the snorm suffix means the components are normalized to the range [-1, 1]. Such functions are handy for storing compact data in buffers and textures.
- unpacking functions, like unpack4x8snorm, the reverse of the packing functions.
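Here is a rough sketch of how the vector builtins come together in a simple lighting term; the function, its parameters and the constants are made up for illustration:

fn shade(norm: vec3f, light_dir: vec3f, view_dir: vec3f) -> f32 {
    let n = normalize(norm);
    let l = normalize(light_dir);
    // clamp keeps the diffuse term in [0, 1]
    let diffuse = clamp(dot(n, l), 0.0, 1.0);
    // reflect gives the mirror direction of the incoming light
    let r = reflect(-l, n);
    let specular = pow(clamp(dot(r, normalize(view_dir)), 0.0, 1.0), 32.0);
    return diffuse + specular;
}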
Control Flow
The control flow is similar to that in C, but there are some differences.
- There is no goto statement.
- There is a switch statement, but a default clause is required and cases do not fall through.
- You can use discard to abandon the current fragment; it is only allowed in fragment shaders (see the sketch after this list).
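discard is commonly used for alpha testing. A sketch, assuming a texture and sampler bound at group 0; the binding numbers, names and threshold are arbitrary:

@group(0) @binding(0) var color_texture: texture_2d<f32>;
@group(0) @binding(1) var color_sampler: sampler;

@fragment
fn fragment_main(@location(0) uv: vec2f) -> @location(0) vec4f {
    let color = textureSample(color_texture, color_sampler, uv);
    // discard abandons this fragment: nothing is written to any attachment
    if (color.a < 0.01) {
        discard;
    }
    return color;
}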
Trivia
Here are some more trivial differences:
- There is no null value.
- There is no undefined value.
- ++, -- and compound assignments like += or -= exist only as standalone statements, never as expressions, so you cannot write something like x = i++.
- let declares an immutable value and var declares a mutable variable (see the sketch after this list).
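A small sketch putting the last two points together; the function itself is arbitrary:

fn sum_even(limit: u32) -> u32 {
    // var declares a mutable variable; let bindings cannot be reassigned
    var total: u32 = 0u;
    for (var i: u32 = 0u; i < limit; i++) {
        let step = i * 2u;  // immutable for the rest of this iteration
        total += step;      // compound assignment used as a statement
    }
    return total;
}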