
Wireframes with barycentric coordinates

Often when writing WebGL programs, it takes quite some time until we reach the stage where we can tell whether our shapes render correctly. That's when wireframes come in as a handy debugging tool: seeing all the triangles is valuable. And the least disruptive, simplest-to-use method is the one we want.

Some time ago I learned about a cool trick that uses barycentric coordinates for drawing wireframes. Here's how it looks in practice:

What does it take to draw them? Some modifications to the vertex and fragment shaders, another vertex attribute, and generating a corresponding barycentric coordinate for each point.

What are those barycentric coordinates?

In general, barycentric coordinates define locations in a chosen simplex (triangle, tetrahedron and so on, whatever comes after that). In our case, they express the position of any point in a triangle using three scalars:

P = xA + yB + zC
x + y + z = 1
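
Here A, B and C are the triangle's vertices and (x, y, z) is the barycentric coordinate of the point P. As a quick illustration in plain JavaScript (the helper below is made up for this post, not part of the rendering code):

const fromBarycentric = ([x, y, z], A, B, C) => [
  x * A[0] + y * B[0] + z * C[0],
  x * A[1] + y * B[1] + z * C[1],
]

// The vertices themselves have coordinates (1, 0, 0), (0, 1, 0), (0, 0, 1)...
fromBarycentric([1, 0, 0], [0, 0], [3, 0], [0, 3]) // [0, 0], vertex A
// ...and the centroid sits at (1/3, 1/3, 1/3).
fromBarycentric([1 / 3, 1 / 3, 1 / 3], [0, 0], [3, 0], [0, 3]) // [1, 1]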

Quite a lot of smart math can be written about them, so instead I will provide an example. Imagine I have a simple triangle and an additional attribute, a_barycentric, whose vectors are respectively (1, 0, 0), (0, 1, 0), (0, 0, 1). I will just show the shaders:

attribute vec2 a_position;
attribute vec3 a_barycentric;
varying vec3 vbc;

void main() {
  vbc = a_barycentric;
  gl_Position = vec4(a_position.xy, 0, 1);
}

And the fragment shader:

precision mediump float;
varying vec3 vbc;

void main() {
  gl_FragColor = vec4(vbc, 1.0);
}

And the result:

As it turns out, the values of the vbc vector are perfectly interpolated.

By looking at the code, you can guess that passing a value from the vertex shader to the fragment shader makes the GPU interpolate it. Just as a reminder, here are the variable qualifiers in the shaders:

  • const – compile-time constant
  • attribute – assigned per vertex, visible only in the vertex shader, read-only
  • uniform – set once per draw call, visible in both shaders, read-only
  • varying – used for interpolating data between the vertex and fragment shaders; writable in the vertex shader, read-only in the fragment shader

Back to the wireframes

The coordinates we will pass to the GPU look like this: for every triangle, which takes up six entries in the position array (3 points of two coordinates each), we will generate three points of three coordinates, respectively: p_1 = (1, 0, 0), p_2 = (0, 1, 0), p_3 = (0, 0, 1).

const calculateBarycentric = length => {
  // length is the size of the position array; each triangle takes up
  // 6 entries there and needs 9 barycentric values.
  const n = length / 6
  const barycentric = []
  for (let i = 0; i < n; i++) barycentric.push(1, 0, 0, 0, 1, 0, 0, 0, 1)
  return new Float32Array(barycentric)
}

Usual setup

I am leaving out the setup and draw functions; they have nothing special in them. It's just a matter of acquiring a_position, a_barycentric and u_matrix along with the buffers, and then rendering the shape in a 2D projection, restricting the size of the target view to the max x and y positions in the vertex arrays. Check out the sources below for the code; a rough sketch follows here.
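
If you want an idea of what those two functions could look like, here is a minimal sketch. The shape of the scene object and the projection matrix are my assumptions, not the original code:

const setup = (gl, program, vertices, barycentric) => {
  gl.useProgram(program)

  // Upload positions and point a_position at them (2 floats per vertex).
  gl.bindBuffer(gl.ARRAY_BUFFER, gl.createBuffer())
  gl.bufferData(gl.ARRAY_BUFFER, vertices, gl.STATIC_DRAW)
  const position = gl.getAttribLocation(program, 'a_position')
  gl.enableVertexAttribArray(position)
  gl.vertexAttribPointer(position, 2, gl.FLOAT, false, 0, 0)

  // Same for the barycentric coordinates (3 floats per vertex).
  gl.bindBuffer(gl.ARRAY_BUFFER, gl.createBuffer())
  gl.bufferData(gl.ARRAY_BUFFER, barycentric, gl.STATIC_DRAW)
  const bc = gl.getAttribLocation(program, 'a_barycentric')
  gl.enableVertexAttribArray(bc)
  gl.vertexAttribPointer(bc, 3, gl.FLOAT, false, 0, 0)

  return {
    matrixLocation: gl.getUniformLocation(program, 'u_matrix'),
    count: vertices.length / 2,
  }
}

const draw = (gl, program, scene) => {
  // Match the drawing buffer to the displayed size and clear it.
  gl.canvas.width = gl.canvas.clientWidth
  gl.canvas.height = gl.canvas.clientHeight
  gl.viewport(0, 0, gl.canvas.width, gl.canvas.height)
  gl.clearColor(1, 1, 1, 1)
  gl.clear(gl.COLOR_BUFFER_BIT)

  // A column-major mat3 mapping pixel coordinates to clip space,
  // with y pointing down.
  const w = gl.canvas.width
  const h = gl.canvas.height
  gl.uniformMatrix3fv(
    scene.matrixLocation,
    false,
    [2 / w, 0, 0, 0, -2 / h, 0, -1, 1, 1]
  )
  gl.drawArrays(gl.TRIANGLES, 0, scene.count)
}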

Whole program

One thing here that is definitely non-standard is the gl.getExtension call. It allows us to use standard derivatives in shaders (more on that below).

const vertices = new Float32Array([
  // x, y pairs; each line is one triangle.
  45, 95, 50, 185, 0, 80,
  45, 95, 0, 80, 100, 0,
  190, 85, 270, 35, 345, 140,
  190, 85, 345, 140, 255, 130,
  190, 85, 255, 130, 215, 210,
  190, 85, 215, 210, 140, 70,
  140, 70, 45, 95, 100, 0,
  140, 70, 100, 0, 190, 85,
])
const barycentric = calculateBarycentric(vertices.length)

const canvas = setUpCanvas()
const gl = canvas.getContext('webgl')
const vertexShader = createShader(gl, gl.VERTEX_SHADER, vertex)
const fragmentShader = createShader(gl, gl.FRAGMENT_SHADER, fragment)
const program = createProgram(gl, vertexShader, fragmentShader)
const scene = setup(gl, program, vertices, barycentric)
const render = () => draw(gl, program, scene)

render()
window.addEventListener('resize', render)

Shaders

And the last part – the shaders. I've saved them for last on purpose, since they are the most interesting part.

The vertex shader is simple. It's all about using the projection matrix to calculate the vertex position and passing the barycentric coordinate on to the fragment shader as a varying.

attribute vec2 a_position;
attribute vec3 a_barycentric;
uniform mat3 u_matrix;
varying vec3 vbc;

void main() {
  vbc = a_barycentric;
  gl_Position = vec4((u_matrix * vec3(a_position, 1)).xy, 0, 1);
}

It gets more interesting in the fragment shader. Assuming we only know that the barycentric coordinates are the key here, let's start with something simple: if any of the coordinates is lower than 0.01, render the fragment black.

precision mediump float;
varying vec3 vbc;

void main() {
  if(vbc.x < 0.01 || vbc.y < 0.01 || vbc.z < 0.01) {
    gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0);
  } else {
    gl_FragColor = vec4(0.5, 0.5, 0.5, 1.0);
  }
}

The result:

It looks like it doesn't work. And if you think about it for a moment, it makes perfect sense. We made ourselves dependent on a constant value: the border is wherever a fragment sits within 0.01 of the way from an edge to the opposite vertex. For a triangle spanning 300 pixels that band is about 3 pixels wide; for one spanning 30 pixels it is a fraction of a pixel. As every triangle is stretched differently, we get borders of varying width. That's not exactly what we expected.

Fixing it

As always, math comes to the rescue. If the exact value of the function is not the one we want, how about the pace of its change? In other words: let's go for the derivative. That is exactly what the fwidth function from the standard derivatives extension computes, summing the absolute partial derivatives with respect to the screen-space x and y:

fwidth(v) = |∂v/∂x| + |∂v/∂y|

Having the derivative, we have to somehow calculate whether we are close enough to the edge to consider the fragment part of the border. For that I am using the step function. If you are not familiar with it, here is how it can be implemented (GLSL's step(edge, x) returns 0.0 where x is below edge and 1.0 elsewhere):

const step = (edge, v) => v.map((x, i) => (x < edge[i] ? 0.0 : 1.0))

Then I take the lowest of those three resulting coordinates. This value is extended to a vector of three (it gets copied to each coordinate, like (v, v, v)). Since those values will be either 0 or 1, taking the min of it and the color results in 0 (black) for the borders and the color elsewhere. Seems to do the trick.

#extension GL_OES_standard_derivatives : enable
precision mediump float;
varying vec3 vbc;

const float lineWidth = 1.0;
const vec3 color = vec3(0.7, 0.7, 0.7);

float edgeFactor() {
  // Rate of change of vbc between neighbouring fragments.
  vec3 d = fwidth(vbc);
  // 0.0 for coordinates closer to an edge than lineWidth * d, 1.0 otherwise.
  vec3 f = step(d * lineWidth, vbc);
  return min(min(f.x, f.y), f.z);
}

void main() {
  gl_FragColor = vec4(min(vec3(edgeFactor()), color), 1.0);
}

One more thing, but a very crucial one: add this line to your code. It enables the extension on the JavaScript side; the #extension directive at the top of the fragment shader asks for it on the shader side as well.

// const gl = canvas.getContext('webgl') <-- under this line
gl.getExtension('OES_standard_derivatives')
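
By the way, gl.getExtension returns null when the extension is not available, so if you want to fail loudly (my addition, not in the original sources):

if (!gl.getExtension('OES_standard_derivatives')) {
  throw new Error('OES_standard_derivatives is not supported')
}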

Result

And the finished wireframe once again:

Source: github.com.

Hint: in this particular case you could encode the barycentric coordinate using binary encoding on two values, with (1, 0, 0) -> (0, 1), (0, 1, 0) -> (1, 0), (0, 0, 1) -> (1, 1), or any other way you want. Why use two values? Because we could then take a vec4, use the first two coordinates for the position and the other two for the barycentric coordinate. It's common among GPU programmers to use tricks like that in order to send less information to the GPU; it quickly becomes crucial for performance.
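
For completeness, here is one way to decode that particular mapping back into three coordinates. This sketch is mine, not from the sources; the same arithmetic ports directly into the vertex shader:

// (s, t) -> (x, y, z) with (0, 1) -> (1, 0, 0), (1, 0) -> (0, 1, 0),
// (1, 1) -> (0, 0, 1).
const decodeBarycentric = (s, t) => [(1 - s) * t, s * (1 - t), s * t]

decodeBarycentric(0, 1) // [1, 0, 0]
decodeBarycentric(1, 0) // [0, 1, 0]
decodeBarycentric(1, 1) // [0, 0, 1]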

Cool resources

codeflow – easy wireframe display with barycentric coordinates – the place where I originally learned about this technique.

scratchapixel on barycentric coordinates – great information regarding barycentric coordinates in general.

glsl-solid-wireframe – there are libs for achieving this effect, too.
