Triangles on Web Ch6 Lighting
Previously, we got the shape of our model right with perspective projection. Now, we will get the color right with lighting.
There are many lighting models; each specifies how the fragment color is computed. The most common is the Phong lighting model. It has three components: ambient, diffuse, and specular, which we will implement in this chapter.
Ambient
Ambient simply refers to the, well, ambience: the light that is present everywhere.
Ambient light sheds a constant illumination on the object, and thus renders it with a constant color.
If we take a simple model, where the ambient light is white and the color is proportional to the ambient light, then the color with ambient light taken into account is:

$$c = I \cdot c_{\text{albedo}}, \qquad I = k_a i_a$$

Where $i_a$ is the intensity of the ambient light. In computer graphics, the intensity of light is usually expressed as a plain number around one, without any unit. $I$ is the intensity of the light considering all factors, which in this case is only the ambient light.

$c_{\text{albedo}}$: albedo is just a fancy word for the color of the object when it is not illuminated by any light.

$k_a$ is a factor that denotes the reflectivity of the object. You can take arbitrary values, but usually it is taken as 1.0.

Normally, we can take $i_a = 0.2$.
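As a quick worked example with these values, take a hypothetical pure-red object, $c_{\text{albedo}} = (1.0, 0.0, 0.0)$. Then:

$$I = k_a i_a = 1.0 \times 0.2 = 0.2, \qquad c = 0.2 \cdot (1.0, 0.0, 0.0) = (0.2, 0.0, 0.0)$$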
Diffuse
However, ambient light scattered everywhere is not the only kind of light; usually, there are also directed light sources. Their light shines on the object, reflects off its surface, and reaches the camera (more detailed light transport is ignored; we only consider reflection, and only the first reflection).
In physics, we know that when light reflects off a surface, there are two cases. If the surface is smooth, we get specular reflection: the reflection is sharp and clear, and the reflected light follows the law of reflection. If the surface is rough, each part of it can be viewed as many smaller smooth surfaces at different angles, so the light is reflected in all directions; this is called diffuse reflection.
In the Phong lighting model, both kinds of reflection are present on every surface at once, even though this is a simplification of real life.
Diffuse reflection follows the Lambert cosine law. The law states that the intensity of the light reflected off a surface is proportional to the cosine of the angle between the direction to the light source and the surface normal.

That is, if the unit normal vector is $\hat{n}$, and the light arrives from the unit direction $\hat{l}$ (pointing from the surface to the light) with intensity $i_d$, the intensity of the light diffusely reflected off the surface is:

$$I_d = k_d \, i_d \, \max(\hat{n} \cdot \hat{l}, 0)$$

where $k_d$ is the diffuse reflectivity of the object.
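As a small TypeScript sketch of this term (the Vec3 type and dot helper are hypothetical, written here just for illustration):

type Vec3 = [number, number, number];
const dot = (a: Vec3, b: Vec3): number => a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
// Lambert diffuse term: k_d * i_d * max(n . l, 0)
// n: unit surface normal; l: unit vector from the surface point toward the light
const diffuse = (n: Vec3, l: Vec3, kd: number, id: number): number =>
  kd * id * Math.max(dot(n, l), 0);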
Specular
We also consider the specular reflection: the light that is reflected directly toward the camera.
If we have a light with intensity $i_s$ reflected off a surface, the intensity of the light perceived by the camera is determined by the Phong reflection model:

$$I_s = k_s \, i_s \, \max(\hat{r} \cdot \hat{v}, 0)^{\alpha}$$

Where $\hat{r}$ is the reflection of the light direction, $\hat{v}$ is the unit vector from the fragment to the camera, $k_s$ is the specular reflectivity, and $\alpha$ is the shininess of the object, or say, how tight the highlight is. The higher the value, the more concentrated the highlight, and vice versa.
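Continuing the hypothetical TypeScript helpers, the specular term could be sketched like this (reflectDir implements the reflection formula we derive below):

const scale = (a: Vec3, s: number): Vec3 => [a[0] * s, a[1] * s, a[2] * s];
const sub = (a: Vec3, b: Vec3): Vec3 => [a[0] - b[0], a[1] - b[1], a[2] - b[2]];
// r = 2 (n . l) n - l: the light direction mirrored about the normal
const reflectDir = (n: Vec3, l: Vec3): Vec3 => sub(scale(n, 2 * dot(n, l)), l);
// Phong specular term: k_s * i_s * max(r . v, 0)^alpha
const specular = (n: Vec3, l: Vec3, v: Vec3, ks: number, iSpec: number, alpha: number): number =>
  ks * iSpec * Math.pow(Math.max(dot(reflectDir(n, l), v), 0), alpha);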
Putting it all together
Now, we have all the components of the Phong lighting model. We can put it all together to get the final color of the fragment.
First off, consider a point at $\mathbf{p}$ (in world coordinates), with the normal vector of the primitive at this point being $\hat{n}$, the camera position being $\mathbf{c}$, and the light position being $\mathbf{q}$.
Now, let's first consider how we can get all the vectors.
$\mathbf{p}$ is directly given by the model.

$\hat{n}$ can be calculated as the cross product of two edges of the triangle, then normalized. So we pass both the normal vector and the position of each vertex through the vertex shader.

$\hat{v}$ is simply the camera position minus the position of the fragment, normalized: $\hat{v} = \operatorname{normalize}(\mathbf{c} - \mathbf{p})$.

$\hat{l}$ is the light position minus the position of the fragment, normalized: $\hat{l} = \operatorname{normalize}(\mathbf{q} - \mathbf{p})$. This is determined by the kind of light source we want.

$\hat{r}$ is the reflection of the light direction, which can be calculated by the formula:

$$\hat{r} = 2(\hat{n} \cdot \hat{l})\,\hat{n} - \hat{l}$$
Now, we can calculate the intensity of the light by the formula:

$$I = k_a i_a + k_d i_d \max(\hat{n} \cdot \hat{l}, 0) + k_s i_s \max(\hat{r} \cdot \hat{v}, 0)^{\alpha}$$

And the final color of the fragment is, for each component (channel) of the light:

$$c = I \cdot c_{\text{albedo}}$$
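Putting the hypothetical helpers together on the CPU side, and using the same constants we will later use in the shader ($k_a i_a = 0.2$, $k_d i_d = 0.4$, $k_s i_s = 0.8$, $\alpha = 32$), a sketch of the whole model is:

const normalize = (a: Vec3): Vec3 => scale(a, 1 / Math.hypot(a[0], a[1], a[2]));
const phong = (p: Vec3, n: Vec3, camera: Vec3, light: Vec3, albedo: Vec3): Vec3 => {
  const l = normalize(sub(light, p)); // fragment -> light
  const v = normalize(sub(camera, p)); // fragment -> camera
  const I = 0.2 // ambient: k_a * i_a
    + diffuse(n, l, 1.0, 0.4) // diffuse: k_d * i_d = 0.4
    + specular(n, l, v, 1.0, 0.8, 32); // specular: k_s * i_s = 0.8, alpha = 32
  return scale(albedo, Math.min(I, 1.0)); // clamp so the color does not blow out
};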
Implementation
Now let's add a point light source to our cube.
First, we need to calculate the normal vector, then pass it to the vertex shader.
const get_shader = async (device: GPUDevice): Promise<GPUShaderModule> => {
return device.createShaderModule({
label: "sm",
code: `
@group(0) @binding(0) var<uniform> projection: mat4x4<f32>;
@group(0) @binding(1) var<uniform> angle: f32;
struct VertexOutput {
@builtin(position) pos: vec4f,
@location(0) @interpolate(flat) face: u32,
};
@vertex
fn vertexMain(@location(0) position: vec3f, @location(1) norm: vec3f, @builtin(vertex_index) vertexIndex: u32) -> VertexOutput {
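// norm is received at @location(1) but not used yet; we will forward it to the fragment stage in the next step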
let rotation = mat3x3<f32>(
vec3<f32>(1.0, 0.0, 0.0),
vec3<f32>(0.0, cos(angle), sin(angle)),
vec3<f32>(0.0, -sin(angle), cos(angle)),
);
let rotated = vec4<f32>(rotation * (position - vec3f(0.4, 0.4, 0.0)), 1.0);
var projected = projection * (rotated - vec4<f32>(0.0, 0.0, ${f}, 0.0));
let final_position = vec4<f32>(projected.xy, 1.0 - rotated.z, projected.w);
var output = VertexOutput(final_position, vertexIndex / 6); // 6 vertices per face, so this is the face index
return output;
}
struct FragmentOutput {
@location(0) color: vec4<f32>,
};
@fragment
fn fragmentMain(input: VertexOutput) -> FragmentOutput {
var output = FragmentOutput(vec4<f32>(1.0, 1.0, 1.0, 1.0));
if (input.face == 0u) {
output.color = vec4<f32>(1.0, 0.0, 0.0, 1.0);
} else if (input.face == 1u) {
output.color = vec4<f32>(0.0, 1.0, 0.0, 1.0);
} else if (input.face == 2u) {
output.color = vec4<f32>(0.0, 0.0, 1.0, 1.0);
} else if (input.face == 3u) {
output.color = vec4<f32>(1.0, 1.0, 0.0, 1.0);
} else if (input.face == 4u) {
output.color = vec4<f32>(1.0, 0.0, 1.0, 1.0);
} else {
output.color = vec4<f32>(0.0, 1.0, 1.0, 1.0);
}
return output;
}
`,
})
}
const get_vertices = (device: GPUDevice): [GPUBuffer, GPUVertexBufferLayout] => {
// cube
// use triangle
const vertices = new Float32Array([
// xy
1.0, 0.0, 0.0,
0.0, 1.0, 0.0,
0.0, 0.0, 0.0,
1.0, 0.0, 0.0,
0.0, 1.0, 0.0,
1.0, 1.0, 0.0,
1.0, 0.0, 1.0,
0.0, 1.0, 1.0,
0.0, 0.0, 1.0,
1.0, 0.0, 1.0,
0.0, 1.0, 1.0,
1.0, 1.0, 1.0,
// yz
0.0, 1.0, 0.0,
0.0, 0.0, 1.0,
0.0, 0.0, 0.0,
0.0, 1.0, 0.0,
0.0, 0.0, 1.0,
0.0, 1.0, 1.0,
1.0, 1.0, 0.0,
1.0, 0.0, 1.0,
1.0, 0.0, 0.0,
1.0, 1.0, 0.0,
1.0, 0.0, 1.0,
1.0, 1.0, 1.0,
// zx
0.0, 0.0, 1.0,
1.0, 0.0, 0.0,
1.0, 0.0, 1.0,
0.0, 0.0, 1.0,
1.0, 0.0, 0.0,
0.0, 0.0, 0.0,
0.0, 1.0, 1.0,
1.0, 1.0, 0.0,
1.0, 1.0, 1.0,
0.0, 1.0, 1.0,
1.0, 1.0, 0.0,
0.0, 1.0, 0.0,
]).map((v) => v * 0.5 - 0.25);
const primitiveCount = vertices.length / 9; // 9 floats (3 vertices x 3 coordinates) per triangle
const verticesWithNormals = new Float32Array(primitiveCount * 6 * 3); // interleaved [x, y, z, nx, ny, nz] per vertex
const cross = (a: number[], b: number[]): number[] => {
return [
a[1] * b[2] - a[2] * b[1],
a[2] * b[0] - a[0] * b[2],
a[0] * b[1] - a[1] * b[0],
]
}
for (let i = 0; i < primitiveCount; i++) {
const a = [vertices[i * 9 + 0], vertices[i * 9 + 1], vertices[i * 9 + 2]];
const b = [vertices[i * 9 + 3], vertices[i * 9 + 4], vertices[i * 9 + 5]];
const c = [vertices[i * 9 + 6], vertices[i * 9 + 7], vertices[i * 9 + 8]];
const ab = [b[0] - a[0], b[1] - a[1], b[2] - a[2]];
const ac = [c[0] - a[0], c[1] - a[1], c[2] - a[2]];
const normal = cross(ab, ac); // face normal from two edges of the triangle
const length = Math.sqrt(normal[0] * normal[0] + normal[1] * normal[1] + normal[2] * normal[2]);
normal[0] /= length; // normalize to unit length
normal[1] /= length;
normal[2] /= length;
for (let j = 0; j < 3; j++) {
verticesWithNormals[i * 18 + j * 6 + 0] = vertices[i * 9 + j * 3 + 0];
verticesWithNormals[i * 18 + j * 6 + 1] = vertices[i * 9 + j * 3 + 1];
verticesWithNormals[i * 18 + j * 6 + 2] = vertices[i * 9 + j * 3 + 2];
verticesWithNormals[i * 18 + j * 6 + 3] = normal[0];
verticesWithNormals[i * 18 + j * 6 + 4] = normal[1];
verticesWithNormals[i * 18 + j * 6 + 5] = normal[2];
}
}
const layout: GPUVertexBufferLayout = {
arrayStride: 3 * 4 * 2, // two vec3<f32>s per vertex: position + normal
attributes: [{
format: "float32x3", // position
offset: 0,
shaderLocation: 0,
}, {
format: "float32x3", // normal
offset: 3 * 4,
shaderLocation: 1
}]
}
const buffer = device.createBuffer({
size: verticesWithNormals.length * 4,
usage: GPUBufferUsage.VERTEX | GPUBufferUsage.COPY_DST,
})
device.queue.writeBuffer(buffer, 0, verticesWithNormals.buffer);
return [buffer, layout];
}
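For context, the buffer and layout returned above plug into pipeline creation roughly as follows. This is only a sketch: the bind group and render pass setup follow the previous chapters and are assumed unchanged.

// Inside an async setup function, after acquiring the device:
const [vertexBuffer, vertexLayout] = get_vertices(device);
const shaderModule = await get_shader(device);
const pipeline = device.createRenderPipeline({
  layout: "auto",
  vertex: {
    module: shaderModule,
    entryPoint: "vertexMain",
    buffers: [vertexLayout], // position at @location(0), normal at @location(1)
  },
  fragment: {
    module: shaderModule,
    entryPoint: "fragmentMain",
    targets: [{ format: navigator.gpu.getPreferredCanvasFormat() }],
  },
});
// vertexBuffer is bound later in the render pass with pass.setVertexBuffer(0, vertexBuffer)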
Then we simply put the normal vector into the output of the vertex shader, asking for linear interpolation of it. In addition, we also pass along the real (pre-projection) position of the vertex for calculating lighting.
struct VertexOutput {
@builtin(position) pos: vec4f,
@location(0) @interpolate(flat) face: u32,
@location(1) @interpolate(linear) real_position: vec3f,
@location(2) @interpolate(linear) normal: vec3f,
};
@vertex
fn vertexMain(@location(0) position: vec3f, @location(1) norm: vec3f, @builtin(vertex_index) vertexIndex: u32) -> VertexOutput {
let rotation = mat3x3<f32>(
vec3<f32>(1.0, 0.0, 0.0),
vec3<f32>(0.0, cos(angle), sin(angle)),
vec3<f32>(0.0, -sin(angle), cos(angle)),
);
let rotated = vec4<f32>(rotation * (position - vec3f(0.4, 0.4, 0.0)), 1.0);
var projected = projection * (rotated - vec4<f32>(0.0, 0.0, ${f}, 0.0));
let final_position = vec4<f32>(projected.xy, 1.0 - rotated.z, projected.w);
var output = VertexOutput(final_position, vertexIndex / 6, rotation * position, rotation * norm);
return output;
}
Now, in the fragment shader, we have our normal vector and position vector, so we can calculate the light intensity with the formulas above.

First, the albedo. For now, it is just a constant color:
@fragment
fn fragmentMain(input: VertexOutput) -> @location(0) vec4<f32> {
var output = vec4<f32>(0.3, 0.3, 0.3, 1.0);
return output;
}
Then calculate all the vectors. Here we use a point light source.
@group(0) @binding(0) var<uniform> projection: mat4x4<f32>;
@group(0) @binding(1) var<uniform> angle: f32;
struct VertexOutput {
@builtin(position) pos: vec4f,
@location(0) @interpolate(flat) face: u32,
@location(1) @interpolate(linear) real_position: vec3f,
@location(2) @interpolate(linear) normal: vec3f,
};
@vertex
fn vertexMain(@location(0) position: vec3f, @location(1) norm: vec3f, @builtin(vertex_index) vertexIndex: u32) -> VertexOutput {
let rotation = mat3x3<f32>(
vec3<f32>(1.0, 0.0, 0.0),
vec3<f32>(0.0, cos(angle), sin(angle)),
vec3<f32>(0.0, -sin(angle), cos(angle)),
);
let rotated = vec4<f32>(rotation * (position - vec3f(0.3, 0.3, 0.0)), 1.0);
var projected = projection * (rotated - vec4<f32>(0.0, 0.0, ${f}, 0.0));
let final_position = vec4<f32>(projected.xy, 1.0 - rotated.z, projected.w);
var output = VertexOutput(final_position, vertexIndex / 6, rotated.xyz, rotation * norm);
return output;
}
@fragment
fn fragmentMain(input: VertexOutput) -> @location(0) vec4<f32> {
let light_source = vec3<f32>(-1.0, 0.0, -5.0); // position of the point light
let l = normalize(light_source - input.real_position); // fragment -> light
let n = normalize(input.normal); // re-normalize: interpolation may shorten it
let r = reflect(-l, n); // reflection of the incoming light direction
let camera = vec3<f32>(0.0, 0.0, -1.0);
let v = normalize(camera - input.real_position); // fragment -> camera
let spec = pow(max(dot(r, v), 0.0), 32.0) * 0.8; // k_s * i_s = 0.8, alpha = 32
let diff = max(dot(-n, l), 0.0) * 0.4; // k_d * i_d = 0.4; note the sign on n (see below)
let amb = 0.2; // k_a * i_a
let albedo = vec3<f32>(1.0, 1.0, 1.0);
let color = albedo * min(amb + diff + spec, 1.0); // clamp the total intensity
return vec4<f32>(color, 1.0);
}
Please note that we never actually set the position of the camera before: since we did not scale based on a field of view, you can basically choose any camera position. Here we choose (0, 0, -1).

Also, some signs were flipped relative to the formulas above, since l here is the vector pointing from the fragment to the light source.
Now, you should see a cube with lighting. However, you may encounter an issue where some primitives are properly lit while others are not. This is because the normal vector has two possible directions. When creating the vertices, you should make sure that the normal vectors are oriented consistently across all faces, matching the sign convention used in the shader. The vertices and cross products shown above are correct.
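If your own vertex data is not consistent, one way to enforce an orientation, valid for a convex shape centered at the origin like our cube, is to compare each face normal with the triangle's centroid and flip it when they disagree. A minimal sketch, reusing the hypothetical Vec3 helpers from above:

// Flip a face normal so it points outward from the origin.
// Only valid for convex shapes centered at the origin, like our cube.
const orientOutward = (normal: Vec3, a: Vec3, b: Vec3, c: Vec3): Vec3 => {
  const centroid: Vec3 = [
    (a[0] + b[0] + c[0]) / 3,
    (a[1] + b[1] + c[1]) / 3,
    (a[2] + b[2] + c[2]) / 3,
  ];
  // A normal pointing against the centroid direction faces inward; flip it.
  return dot(normal, centroid) < 0 ? scale(normal, -1) : normal;
};

If you flip normals this way, remember to make the signs in the shader match the new convention.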