Triangles on Web Ch8: OBJ Model

Previously, we set the vertices and indices of our triangles by hand. In real-world applications, however, models are usually created in 3D modeling software and saved to a file, and one of the most common file formats is the OBJ format.

In this chapter, we will talk about the OBJ file format and how to load models from it.

Parsing OBJ File

Of course, you don't want to parse the OBJ file by hand. I'd recommend this library for parsing it.

Actually, after writing this tutorial, I found that that library was not, well, good enough for all cases, so I wrote one myself, called obj-model-parser. If you'd like to use it, you can find it here.

If you are using my library, simply call the parseObj function, and you will get something containing the following structure:

{
    objects: [
        {
            name: string | undefined;
            groups: {
                groupNames: string[] | undefined;
                componentsChunks: {
                    polygons: (d.Polygon | d.Line | d.Face)[];
                };
            }[];
        }
    ]
}
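
For example, collecting every polygon in a file could look roughly like this. This is only a sketch: the exact import path is an assumption on my part, but the structure is the one shown above.

import { parseObj } from "obj-model-parser"; // hypothetical import path

const text = await fetch("model.obj").then((res) => res.text());
const parsed = parseObj(text);

// Flatten objects -> groups -> componentsChunks down to a single list of polygons.
const polygons = parsed.objects.flatMap((obj) =>
    obj.groups.flatMap((group) => group.componentsChunks.polygons)
);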

All you need is the polygons. Also, if you like the library (or maybe this tutorial), consider giving me a star on GitHub; I'd appreciate it.

Anyway, let's get back to our tutorial.

First, let's copy our previous lighting demo- we don't want to write it all over again. Then install this library for loading the OBJ file.

bun install obj-file-parser-ts

Then let's just grab a random OBJ file from the internet. I found this bird and downloaded it. Put it into the static assets (the public folder if you use Vite). Do not rename the files, because they reference each other by name: the OBJ file points at the MTL file, and the MTL file points at the textures.

Then we can write the code to load the OBJ file.

import ObjFileParser from "obj-file-parser";

const objFile = await fetch("bird/12213_Bird_v1_l3.obj").then((res) => res.text());
const model = new ObjFileParser(objFile).parse();
console.log(model);

The parsed result looks like this:

{
  models: [
    {
      name: 'unit_cube',
      vertices: [
        { x: 1.0, y: 2.0, z: 3.0 },
        ...
      ],
      textureCoords: [
        { u: 1.0, v: 2.0, w: 3.0 },
        ...
      ],
      vertexNormals: [
        { x: 1.0, y: 2.0, z: 3.0 },
        ...
      ],
      faces: [
        {
          material: 'brick',
          group: 'group1',
          smoothingGroup: 0,
          vertices: [
            { vertexIndex: 1, textureCoordsIndex: 1, vertexNormalIndex: 1 },
            ...
          ]
        }
      ]
    },
    {
      ...
    }
  ],

  materialLibraries: [
    'mat_lib1.mtl',
    ...
  ]
}

  • materialLibraries: the material library (MTL) file names. We will talk about textures later.
  • models: the models in the OBJ file. Each model has vertices, textureCoords, vertexNormals, and faces. Each face consists of several vertices, and each of those vertices holds indices into the vertices, textureCoords, and vertexNormals arrays (see the small example below).
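
For example, resolving the first vertex of the first face back into concrete data looks like this (note the 1-based indices, which we'll come back to):

// "model" here is the full parse result from above.
const firstModel = model.models[0];
const faceVertex = firstModel.faces[0].vertices[0];
// OBJ indices start at 1, hence the "- 1".
const position = firstModel.vertices[faceVertex.vertexIndex - 1];          // { x, y, z }
const normal = firstModel.vertexNormals[faceVertex.vertexNormalIndex - 1]; // { x, y, z }
const uv = firstModel.textureCoords[faceVertex.textureCoordsIndex - 1];    // { u, v, w }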

That is to say, we first change our code to:

import ObjFileParser from "obj-file-parser";

const objFile = await fetch("bird/12213_Bird_v1_l3.obj").then((res) => res.text());
const model = new ObjFileParser(objFile).parse().models[0];

Remember that previously we had a get_vertices function that returned the vertices and normals? We just change it to:

const get_vertices = (device: GPUDevice, model: ObjFileParser.ObjModel): [GPUBuffer, GPUVertexBufferLayout] => {
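    // verticesWithNormals will be built from the model data in the next section.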


    const layout: GPUVertexBufferLayout = {
        arrayStride: 3 * 4 * 2,
        attributes: [{
            format: "float32x3",
            offset: 0,
            shaderLocation: 0,
        }, {
            format: "float32x3",
            offset: 3 * 4,
            shaderLocation: 1
        }]
    }
    const buffer = device.createBuffer({
        size: verticesWithNormals.length * 4,
        usage: GPUBufferUsage.VERTEX | GPUBufferUsage.COPY_DST,
    })
    device.queue.writeBuffer(buffer, 0, verticesWithNormals.buffer);
    return [buffer, layout];
}

Show the Model

Now we need to fill verticesWithNormals with the data from the OBJ file. We can do this as follows:

const get_vertices = (device: GPUDevice, model: ObjFileParser.ObjModel): [GPUBuffer, GPUVertexBufferLayout] => {
    const verticesWithNormals = new Float32Array(model.faces.length * 3 * 2 * 3);
    for (let i = 0; i < model.faces.length; i++) {
        const face = model.faces[i];
        for (let j = 0; j < 3; j++) {
            const vertex = model.vertices[face.vertices[j].vertexIndex - 1];
            const normal = model.vertexNormals[face.vertices[j].vertexNormalIndex - 1];
            verticesWithNormals.set([vertex.x, vertex.y, vertex.z], (i * 3 + j) * 6);
            verticesWithNormals.set([normal.x, normal.y, normal.z], (i * 3 + j) * 6 + 3);
        }
    }

    const layout: GPUVertexBufferLayout = {
        arrayStride: 3 * 4 * 2,
        attributes: [{
            format: "float32x3",
            offset: 0,
            shaderLocation: 0,
        }, {
            format: "float32x3",
            offset: 3 * 4,
            shaderLocation: 1
        }]
    }
    const buffer = device.createBuffer({
        size: verticesWithNormals.length * 4,
        usage: GPUBufferUsage.VERTEX | GPUBufferUsage.COPY_DST,
    })
    device.queue.writeBuffer(buffer, 0, verticesWithNormals.buffer);
    return [buffer, layout];
}
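
Wiring this into the rest of the setup stays the same as in the previous chapters; roughly something like this (the surrounding variable names are just placeholders for whatever you used before):

// vertexBufferLayout goes into the render pipeline's vertex.buffers,
// and the buffer is bound on the render pass before drawing.
const [vertexBuffer, vertexBufferLayout] = get_vertices(device, model);
pass.setVertexBuffer(0, vertexBuffer);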

Please note that indices in an OBJ file start from 1, so we need to subtract 1 to get the correct array index.

Lastly, change the count of the draw call to model.faces.length * 3.
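
If your render loop looks like the one from the previous chapters, that means something like this (pass being your render pass encoder):

// One vertex per face corner, three corners per (triangle) face.
pass.draw(model.faces.length * 3);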

Now you should see something on the screen- but if you used the same model as me, you will notice that there are missing triangles. This is because the model uses quad (four-vertex) faces.

To fix this, no significant changes are needed (thank god). As before, we just break each quad into two triangles.

const get_vertices = (device: GPUDevice, model: ObjFileParser.ObjModel): [GPUBuffer, GPUVertexBufferLayout] => {
    const verticesWithNormals = new Float32Array(model.faces.length * 2 * 3 * 2 * 3);
    let groupOffset = model.faces.length * 2 * 3 * 3;
    for (let i = 0; i < model.faces.length; i++) {
        const face = model.faces[i];

        const firstGroup = [
            face.vertices[0],
            face.vertices[1],
            face.vertices[2]
        ]

        const secondGroup = [
            face.vertices[0],
            face.vertices[2],
            face.vertices[3],
        ]

        for (let j = 0; j < 3; j++) {
            const vertex = model.vertices[firstGroup[j].vertexIndex - 1];
            const normal = model.vertexNormals[firstGroup[j].vertexNormalIndex - 1];
            verticesWithNormals.set([vertex.x, vertex.y, vertex.z], (i * 3 + j) * 6);
            verticesWithNormals.set([normal.x, normal.y, normal.z], (i * 3 + j) * 6 + 3);
        }

        for (let j = 0; j < 3; j++) {
            const vertex = model.vertices[secondGroup[j].vertexIndex - 1];
            const normal = model.vertexNormals[secondGroup[j].vertexNormalIndex - 1];
            verticesWithNormals.set([vertex.x, vertex.y, vertex.z], (i * 3 + j) * 6 + groupOffset);
            verticesWithNormals.set([normal.x, normal.y, normal.z], (i * 3 + j) * 6 + 3 + groupOffset);
        }
    }

    const layout: GPUVertexBufferLayout = {
        arrayStride: 3 * 4 * 2,
        attributes: [{
            format: "float32x3",
            offset: 0,
            shaderLocation: 0,
        }, {
            format: "float32x3",
            offset: 3 * 4,
            shaderLocation: 1
        }]
    }
    const buffer = device.createBuffer({
        size: verticesWithNormals.length * 4,
        usage: GPUBufferUsage.VERTEX | GPUBufferUsage.COPY_DST,
    })
    device.queue.writeBuffer(buffer, 0, verticesWithNormals.buffer);
    return [buffer, layout];
}
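
One thing to keep in mind: the buffer now holds two triangles per face, so the draw count presumably has to double as well. This sketch assumes every face in the file is a quad, which happens to be true for this bird model:

// Two triangles per quad face, three vertices each.
pass.draw(model.faces.length * 2 * 3);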

Now you can see the model- but it's a bit too big. The fix is simple: we just move the camera back a bit or scale the model down- whichever you prefer.

let rotated = vec4<f32>(rotation * 0.1 * (position - vec3f(0.0, 0.0, 0.5)), 1.0);

This will do the job. The `0.5` offset is there so the whole model stays visible- otherwise it would be clipped by the near plane.

However, the axes don't seem right- currently we get a top view, and we generally want a front view. We can fix this by swapping the `z` and `y` axes when reading the vertices, and negating the resulting y. The following code will do the work:

for (let j = 0; j < 3; j++) {
    const vertex = model.vertices[firstGroup[j].vertexIndex - 1];
    const normal = model.vertexNormals[firstGroup[j].vertexNormalIndex - 1];
    verticesWithNormals.set([vertex.x, -vertex.z, vertex.y], (i * 3 + j) * 6);
    verticesWithNormals.set([normal.x, -normal.z, normal.y], (i * 3 + j) * 6 + 3);
}

for (let j = 0; j < 3; j++) {
    const vertex = model.vertices[secondGroup[j].vertexIndex - 1];
    const normal = model.vertexNormals[secondGroup[j].vertexNormalIndex - 1];
    verticesWithNormals.set([vertex.x, -vertex.z, vertex.y], (i * 3 + j) * 6 + groupOffset);
    verticesWithNormals.set([normal.x, -normal.z, normal.y], (i * 3 + j) * 6 + 3 + groupOffset);
}

Now you should see a bird model successfully loaded on the screen.

Apply Texture

First, of course, we need to load the texture. Texture resources are defined in the MTL file, so we also need to parse that.

Sadly, I haven't found an appropriate library for parsing the MTL file (well, I did find one, but it didn't work for me), so I guess we just have to parse it manually.

Luckily, like the OBJ file, the MTL file is also plain text. You can just open it in any text editor and look at the content. It looks like this:

# 3ds Max Wavefront OBJ Exporter v0.97b - (c)2007 guruware
# File Created: 08.12.2011 13:29:39

newmtl 12213_bird
    Ns 10.0000
    Ni 1.5000
    d 1.0000
    Tr 0.0000
    Tf 1.0000 1.0000 1.0000 
    illum 2
    Ka 1.0000 1.0000 1.0000
    Kd 1.0000 1.0000 1.0000
    Ks 0.0000 0.0000 0.0000
    Ke 0.0000 0.0000 0.0000
    map_Ka 12213_bird_diffuse.jpg
    map_Kd 12213_bird_diffuse.jpg
    map_bump Map__7_Normal Bump.jpg

Let's explain the content. Basically, it's a bit like YAML: newmtl 12213_bird declares a new material, and the indented lines that follow are the properties of that material.

  • Ns: the shininess of the material
  • Ni: the optical density
  • d: the dissolve factor
  • Tr: the transparency factor
  • Tf: the transmission filter
  • illum: the illumination model
  • Ka: the ambient color
  • Kd: the diffuse color
  • Ks: the specular color
  • Ke: the emission color
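
If you want a typed shape for these in code, it could look roughly like this (just a sketch mirroring the keys above; we won't actually need most of these fields):

// Rough shape of one parsed MTL material; field names mirror the MTL keys.
interface MtlMaterial {
    name: string;                    // from "newmtl"
    Ns?: number;                     // shininess
    Ni?: number;                     // optical density
    d?: number;                      // dissolve factor
    Tr?: number;                     // transparency
    Tf?: [number, number, number];   // transmission filter
    illum?: number;                  // illumination model
    Ka?: [number, number, number];   // ambient color
    Kd?: [number, number, number];   // diffuse color
    Ks?: [number, number, number];   // specular color
    Ke?: [number, number, number];   // emission color
    map_Ka?: string;                 // ambient texture file
    map_Kd?: string;                 // diffuse texture file
    map_bump?: string;               // bump map file
}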

We don't really care about most of them, since we have only built one lighting model. What matters here is map_Kd, the diffuse texture (in this file, map_Ka, the ambient texture, points to the same image).

map_bump is the bump map, which is a bit too complex for now. Basically, such a texture lets you simulate small bumps on the surface. We don't really care about it right now.

So, all that said, we only need to find the map_Kd entries and load them. We can do this as follows:

const parseMtl = (mtl: string) => {
    const ret: { name: string; map_Kd: string }[] = [];

    // handle both \r\n and \n line endings
    for (const line of mtl.split(/\r?\n/)) {
        const tokens = line.trim().split(" ");
        if (tokens[0] === "newmtl") {
            ret.push({
                name: tokens[1],
                map_Kd: "",
            });
        }
        if (tokens[0] === "map_Kd") {
            // texture file names may contain spaces, so join the rest of the line back
            ret[ret.length - 1].map_Kd = tokens.slice(1).join(" ");
        }
    }
    return ret;
}

Yeah, this code is crude, but whatever, it works.
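
To use it, fetch the MTL file and run it through the parser. A sketch, assuming you kept the full parse result around (e.g. const parsed = new ObjFileParser(objFile).parse()) so you can read materialLibraries from it, and that the .mtl file sits next to the .obj in the public folder:

const mtlText = await fetch("bird/" + parsed.materialLibraries[0]).then((res) => res.text());
const materials = parseMtl(mtlText);
console.log(materials); // e.g. [{ name: "12213_bird", map_Kd: "12213_bird_diffuse.jpg" }]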

Now let's check the faces property of our model once again:

faces: [
    {
        material: 'brick',
        group: 'group1',
        smoothingGroup: 0,
        vertices: [
            { vertexIndex: 1, textureCoordsIndex: 1, vertexNormalIndex: 1 },
            ...
        ]
    }
]

Now it's obvious- for each face, we just need to look up the material whose name matches this material field.

Since our file only has one material (and thus one texture), I'll just load that one. If you have multiple, load them one by one.

Do you still remember how to load the texture?

const get_texture = async (device: GPUDevice, url: string): Promise<GPUTexture> => {
    // fetch the image and decode it into an ImageBitmap
    const blob = await fetch(url).then((res) => res.blob());
    const bitmap = await createImageBitmap(blob);
    const texture = device.createTexture({
        size: [bitmap.width, bitmap.height, 1],
        format: "rgba8unorm",
        usage: GPUTextureUsage.RENDER_ATTACHMENT | GPUTextureUsage.COPY_DST | GPUTextureUsage.TEXTURE_BINDING,
    });
    device.queue.copyExternalImageToTexture({ source: bitmap }, { texture }, [bitmap.width, bitmap.height, 1]);
    return texture;
}
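
Putting the pieces together for our single-material case: look up the material named by the face and load its diffuse map. This is a sketch, assuming materials is the array we got from parseMtl above and that the texture image sits in the same bird/ folder as the model:

// The bird only uses one material, so any face's material name will do.
const materialName = model.faces[0].material;
const material = materials.find((m) => m.name === materialName)!;
const texture = await get_texture(device, "bird/" + material.map_Kd);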

We also need a sampler,

const get_sampler = (device: GPUDevice): GPUSampler => {
    return device.createSampler({
        magFilter: "linear",
        minFilter: "linear",
    })
}

How do we pass the texture to the shader? We need to use texture and sampler bindings.

const get_texture_bind_group = (device: GPUDevice, texture: GPUTexture, sampler: GPUSampler): [GPUBindGroup, GPUBindGroupLayout] => {
    const layout = device.createBindGroupLayout({
        entries: [{
            binding: 0,
            visibility: GPUShaderStage.FRAGMENT,
            texture: {
                sampleType: "float",
            }
        }, {
            binding: 1,
            visibility: GPUShaderStage.FRAGMENT,
            sampler: {}
        }]
    });
    const bindGroup = device.createBindGroup({
        layout: layout,
        entries: [{
            binding: 0,
            resource: texture.createView()
        }, {
            binding: 1,
            resource: sampler,
        }]
    });
    return [bindGroup, layout];
}
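
// Then include the new bind group layout in the pipeline layout and bind the texture
// group at index 1. (bindGroupLayout here is presumably the uniform bind group layout
// from the previous chapters.)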
const pipelineLayout = device.createPipelineLayout({
    bindGroupLayouts: [bindGroupLayout, textureBindGroupLayout]
});
pass.setBindGroup(1, textureBindGroup);

Now, for each vertex, we need to pass its position, normal, and texture coordinates.

const get_vertices = (device: GPUDevice, model: ObjFileParser.ObjModel): [GPUBuffer, GPUVertexBufferLayout] => {
    const verticesWithNormals = new Float32Array(model.faces.length * 2 * 3 * 3 * 3);
    let groupOffset = model.faces.length * 3 * 3 * 3;
    for (let i = 0; i < model.faces.length; i++) {
        const face = model.faces[i];
        const firstGroup = [
            face.vertices[0],
            face.vertices[1],
            face.vertices[2]
        ]

        const secondGroup = [
            face.vertices[0],
            face.vertices[2],
            face.vertices[3],
        ]

        for (let j = 0; j < 3; j++) {
            const vertex = model.vertices[firstGroup[j].vertexIndex - 1];
            const normal = model.vertexNormals[firstGroup[j].vertexNormalIndex - 1];
            const tex = model.textureCoords[firstGroup[j].textureCoordsIndex - 1];
            verticesWithNormals.set([vertex.x, -vertex.z, vertex.y], (i * 3 + j) * 9);
            verticesWithNormals.set([normal.x, -normal.z, normal.y], (i * 3 + j) * 9 + 3);
            verticesWithNormals.set([tex.u, tex.v, tex.w], (i * 3 + j) * 9 + 6);
        }

        for (let j = 0; j < 3; j++) {
            const vertex = model.vertices[secondGroup[j].vertexIndex - 1];
            const normal = model.vertexNormals[secondGroup[j].vertexNormalIndex - 1];
            const tex = model.textureCoords[secondGroup[j].textureCoordsIndex - 1];
            verticesWithNormals.set([vertex.x, -vertex.z, vertex.y], groupOffset + (i * 3 + j) * 9);
            verticesWithNormals.set([normal.x, -normal.z, normal.y], groupOffset + (i * 3 + j) * 9 + 3);
            verticesWithNormals.set([tex.u, tex.v, tex.w], groupOffset + (i * 3 + j) * 9 + 6);
        }
    }

    const layout: GPUVertexBufferLayout = {
        arrayStride: 3 * 4 * 3,
        attributes: [{
            format: "float32x3",
            offset: 0,
            shaderLocation: 0,
        }, {
            format: "float32x3",
            offset: 3 * 4,
            shaderLocation: 1
        }, {
            format: "float32x3",
            offset: 3 * 4 * 2,
            shaderLocation: 2,
        }]
    }
    const buffer = device.createBuffer({
        size: verticesWithNormals.length * 4,
        usage: GPUBufferUsage.VERTEX | GPUBufferUsage.COPY_DST,
    })
    device.queue.writeBuffer(buffer, 0, verticesWithNormals.buffer);
    return [buffer, layout];
}

Now, we need to change the shader to use the texture.

@group(0) @binding(0) var<uniform> projection: mat4x4<f32>;
@group(0) @binding(1) var<uniform> angle: f32;
@group(1) @binding(0) var texture: texture_2d<f32>;
@group(1) @binding(1) var tx_sampler: sampler;

struct VertexOutput {
    @builtin(position) pos: vec4f,
    @location(0) @interpolate(flat) face: u32,
    @location(1) @interpolate(linear) real_position: vec3f,
    @location(2) @interpolate(linear) normal: vec3f,
    @location(3) @interpolate(linear) tex_coords: vec2f,
};

@vertex
fn vertexMain(@location(0) position: vec3f, @location(1) norm: vec3f, @location(2) tex: vec3f, @builtin(vertex_index) vertexIndex: u32) -> VertexOutput {
    let rotation = mat3x3<f32>(
        vec3<f32>(cos(angle), 0.0, sin(angle)),
        vec3<f32>(0.0, 1.0, 0.0),
        vec3<f32>(-sin(angle), 0.0, cos(angle)),
    );
    let rotated = vec4<f32>(rotation * 0.1 * (position - vec3f(0.0, 0.0, 0.5)), 1.0);
    var projected = projection * (rotated - vec4<f32>(0.0, 0.0, ${f}, 0.0));
    let final_position = vec4<f32>(projected.xy, 1.0 - rotated.z, projected.w);

    var output = VertexOutput(final_position, vertexIndex / 6, rotated.xyz, rotation * norm, tex.xy);
    return output;
}

@fragment
fn fragmentMain(input: VertexOutput) -> @location(0) vec4<f32> {

    let light_source = vec3<f32>(-5.0, -2.0, -9.0);
    let l = normalize(light_source - input.real_position);
    let n = normalize(input.normal);
    let r = reflect(-l, n);
    let camera = vec3<f32>(0.0, 0.0, -1.0);
    let v = normalize(camera - input.real_position);
    let spec = pow(max(dot(r, v), 0.0), 32.0) * 0.8;
    let diff = max(dot(-n, l), 0.0) * 0.4;
    let amb = 0.2;
    let albedo = textureSample(texture, tx_sampler, input.tex_coords).xyz;
    let color = albedo * min(amb + diff + spec, 1.0);
    return vec4<f32>(color, 1.0);
}

However, now the bird is textured- but the wrong way around. This is because we didn't enable flipY: as we said in the texture chapter, the texture coordinate system is different from the screen coordinate system.

const get_texture = async (device: GPUDevice, url: string): Promise<GPUTexture> => {
    const blob = await fetch(url).then((res) => res.blob());
    const bitmap = await createImageBitmap(blob);
    const texture = device.createTexture({
        size: [bitmap.width, bitmap.height, 1],
        format: "rgba8unorm",
        usage: GPUTextureUsage.COPY_DST | GPUTextureUsage.TEXTURE_BINDING | GPUTextureUsage.RENDER_ATTACHMENT,
    });
    // flipY flips the image vertically during the copy, matching the texture coordinate convention
    device.queue.copyExternalImageToTexture({ source: bitmap, flipY: true }, { texture }, [bitmap.width, bitmap.height, 1]);
    return texture;
}

Now the bird looks nice, albeit a bit metallic. We can tweak the parameters of our lighting model to make it look more like a bird, for example by reducing the specular term (the alpha exponent or the specular weight). But to achieve the best result, you should use the lighting model the MTL file suggests, which is far too complex for this tutorial.