Learn Creative Coding (#64) - Custom Materials with ShaderMaterial


Last episode we built meshes from raw vertex data -- typed arrays of positions, normals, colors, all computed procedurally. Terrain from noise, parametric surfaces from equations, lathe solids from profiles. We had full control over geometry. But we were still stuck using Three.js's built-in materials for the surface appearance. MeshStandardMaterial, MeshNormalMaterial, MeshBasicMaterial -- they're good. But they're someone else's shaders.

Remember episodes 21 through 45? Twelve episodes of GLSL. We wrote fragment shaders from scratch -- SDFs, noise patterns, cosine palettes, raymarching, feedback loops, post-processing. That was all flat. Fullscreen quads, 2D UV coordinates, no geometry. But the GLSL itself? That knowledge transfers directly to 3D. A fragment shader doesn't care if it's running on a fullscreen quad or on the surface of a torus knot. It receives interpolated values, computes a color, outputs it. Same language, same concepts, different context.

Three.js gives us ShaderMaterial -- a material where YOU write the vertex and fragment shaders. Three.js handles the scene graph, the projection matrices, the attribute binding. You handle what the surface actually looks like. It's the best of both worlds: Three.js's scene management with your own GPU code.

This is where the 2D shader arc meets the 3D scene arc. And honestly it's the episode I've been most excited about writing since we started this Three.js section :-)

ShaderMaterial basics

A ShaderMaterial takes two strings: a vertex shader and a fragment shader. Three.js compiles them, passes built-in uniforms (projection matrix, view matrix, model matrix), and binds vertex attributes (position, normal, uv) automatically.

import * as THREE from 'three';

const vertexShader = `
  void main() {
    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
  }
`;

const fragmentShader = `
  void main() {
    gl_FragColor = vec4(1.0, 0.4, 0.2, 1.0);
  }
`;

const material = new THREE.ShaderMaterial({
  vertexShader,
  fragmentShader
});

const mesh = new THREE.Mesh(
  new THREE.SphereGeometry(1, 32, 32),
  material
);
scene.add(mesh);

That's a sphere with a flat orange color. No lighting, no shading -- just a solid color on every pixel. The vertex shader transforms each vertex position from local object space to clip space (which the GPU then maps to screen coordinates) using the matrices Three.js provides. The fragment shader outputs a constant color.

Three.js injects several things into your shader automatically:

  • projectionMatrix -- camera projection (perspective or orthographic)
  • modelViewMatrix -- combined model transform + camera view transform
  • viewMatrix -- camera view transform only
  • modelMatrix -- object's world transform only
  • normalMatrix -- for correctly transforming normals (it's the transpose of the inverse of the upper-left 3x3 of modelViewMatrix -- sounds intimidating but you rarely need to think about why)
  • position -- vertex position attribute (vec3)
  • normal -- vertex normal attribute (vec3)
  • uv -- texture coordinate attribute (vec2)

These are the same matrices you'd have to set up manually in raw WebGL. Three.js just does it for you.

Adding uniforms

Just like in our 2D shader episodes, uniforms are how you send data from JavaScript to the GPU. Time, mouse position, colors, parameters -- anything that's the same across all vertices/fragments but changes per frame.

const material = new THREE.ShaderMaterial({
  vertexShader: `
    varying vec2 vUv;

    void main() {
      vUv = uv;
      gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
    }
  `,
  fragmentShader: `
    uniform float uTime;
    uniform vec3 uColor;
    varying vec2 vUv;

    void main() {
      float pattern = sin(vUv.x * 20.0 + uTime * 2.0) *
                      sin(vUv.y * 20.0 + uTime * 1.5);
      vec3 col = uColor * (0.5 + 0.5 * pattern);
      gl_FragColor = vec4(col, 1.0);
    }
  `,
  uniforms: {
    uTime: { value: 0.0 },
    uColor: { value: new THREE.Color(0.2, 0.6, 0.9) }
  }
});

The uniforms object defines what gets sent to the GPU. Each entry has a value property. In the animation loop, update the value:

const clock = new THREE.Clock();

function animate() {
  requestAnimationFrame(animate);
  material.uniforms.uTime.value = clock.getElapsedTime();
  renderer.render(scene, camera);
}

Notice the varying vec2 vUv -- varyings are how you pass data from the vertex shader to the fragment shader. The vertex shader writes vUv = uv for each vertex, and the GPU interpolates the value across each triangle's surface. When the fragment shader reads vUv, it gets the smoothly interpolated UV coordinate for that specific pixel. Same concept as the varying keyword in our 2D shader work (ep032), just now applied to actual 3D geometry instead of a fullscreen quad.

Vertex displacement: making meshes breathe

This is where it gets really fun. In the vertex shader, you can modify the vertex position before projecting it. Displace vertices along their normals using noise and the mesh deforms in real time:

const vertexShader = `
  uniform float uTime;
  uniform float uAmplitude;
  varying vec3 vNormal;
  varying vec3 vPosition;

  // simple noise (you can use the functions from ep035)
  float hash(vec3 p) {
    p = fract(p * vec3(443.897, 441.423, 437.195));
    p += dot(p, p.yzx + 19.19);
    return fract((p.x + p.y) * p.z);
  }

  float noise3D(vec3 p) {
    vec3 i = floor(p);
    vec3 f = fract(p);
    f = f * f * (3.0 - 2.0 * f);

    float a = hash(i);
    float b = hash(i + vec3(1, 0, 0));
    float c = hash(i + vec3(0, 1, 0));
    float d = hash(i + vec3(1, 1, 0));
    float e = hash(i + vec3(0, 0, 1));
    float f2 = hash(i + vec3(1, 0, 1));
    float g = hash(i + vec3(0, 1, 1));
    float h = hash(i + vec3(1, 1, 1));

    return mix(mix(mix(a, b, f.x), mix(c, d, f.x), f.y),
               mix(mix(e, f2, f.x), mix(g, h, f.x), f.y), f.z);
  }

  void main() {
    // layered noise displacement
    float n = 0.0;
    n += noise3D(position * 1.5 + uTime * 0.3) * 0.6;
    n += noise3D(position * 3.0 + uTime * 0.5) * 0.3;
    n += noise3D(position * 6.0 + uTime * 0.8) * 0.1;

    // displace along the normal
    vec3 displaced = position + normal * n * uAmplitude;

    vNormal = normalize(normalMatrix * normal);
    vPosition = (modelViewMatrix * vec4(displaced, 1.0)).xyz;

    gl_Position = projectionMatrix * modelViewMatrix * vec4(displaced, 1.0);
  }
`;

Three octaves of 3D noise, evaluated at the vertex position, displacing along the vertex normal. The mesh pushes outward where noise is positive and pulls inward where it's negative. Add time to the noise input and the displacement animates -- the surface ripples, breathes, morphs organically.

This is the 3D equivalent of what we did in the noise shader episode (ep035). Same technique, same layered noise, but now it deforms actual geometry instead of coloring pixels. On a sphere, it creates a blob that looks like it's alive. On a plane, it creates animated terrain (like ep063 but on the GPU -- way faster for high-resolution grids).

One thing to know: the normals we pass to the fragment shader are the ORIGINAL normals, not the displaced ones. Computing correct normals for displaced geometry in the vertex shader is possible but requires either computing partial derivatives numerically (sample noise at neighboring positions and compute the cross product) or using a normal map approach. For creative coding purposes, the original normals are usually close enough -- the visual difference is subtle unless the displacement is extreme.
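
If you do need better normals, here's a minimal sketch of the neighbour-sampling approach. It assumes a displace() helper wrapping the same three-octave noise as the vertex shader above; the helper name, the epsilon, and the reuse of the centre normal for the neighbour points are all simplifications for illustration:

// assumed helper: the same displacement as in main(), factored into a function
vec3 displace(vec3 p, vec3 n) {
  float d = noise3D(p * 1.5 + uTime * 0.3) * 0.6
          + noise3D(p * 3.0 + uTime * 0.5) * 0.3
          + noise3D(p * 6.0 + uTime * 0.8) * 0.1;
  return p + n * d * uAmplitude;
}

vec3 displacedNormal(vec3 p, vec3 n) {
  float eps = 0.01;
  // two tangent directions perpendicular to the original normal
  vec3 helper = abs(n.y) < 0.99 ? vec3(0.0, 1.0, 0.0) : vec3(1.0, 0.0, 0.0);
  vec3 tangent = normalize(cross(n, helper));
  vec3 bitangent = normalize(cross(n, tangent));

  // displace the vertex and two nearby points on the surface
  vec3 d0 = displace(p, n);
  vec3 d1 = displace(p + tangent * eps, n);
  vec3 d2 = displace(p + bitangent * eps, n);

  // the cross product of the two surface directions is the new normal
  return normalize(cross(d1 - d0, d2 - d0));
}

In main() you would then write vNormal = normalize(normalMatrix * displacedNormal(position, normal)); instead of forwarding the original normal.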

Fresnel effect: edge glow

The Fresnel effect makes edges of an object glow brighter than the center. It's based on the angle between the surface normal and the view direction -- at steep angles (edges), the surface reflects more. This is why the rim of a glass catches more light, why water surfaces reflect more at shallow angles, and why soap bubbles have bright edges.

const fragmentShader = `
  uniform float uTime;
  varying vec3 vNormal;
  varying vec3 vPosition;

  void main() {
    // view direction (from surface point toward camera)
    vec3 viewDir = normalize(-vPosition);

    // fresnel: how much the surface faces away from the camera
    float fresnel = 1.0 - dot(vNormal, viewDir);
    fresnel = pow(fresnel, 3.0);

    // base color
    vec3 baseColor = vec3(0.05, 0.1, 0.2);

    // rim color
    vec3 rimColor = vec3(0.3, 0.6, 1.0);

    vec3 col = mix(baseColor, rimColor, fresnel);

    gl_FragColor = vec4(col, 1.0);
  }
`;

dot(vNormal, viewDir) gives 1.0 when the surface faces directly toward the camera (center of the sphere) and approaches 0.0 at the edges. We invert it (1.0 - dot(...)) so the edges get the high value. pow(fresnel, 3.0) sharpens the falloff -- a higher exponent makes the glow thinner and more concentrated at the very edge. Exponent 1.0 gives a broad, soft glow. Exponent 5.0 gives a razor-thin rim.

The result is a dark sphere with glowing blue edges. It looks like a hologram, or a force field, or a deep-sea bioluminescent creature. And it's maybe 10 lines of GLSL. This is one of those effects that impresses people way out of proportion to its complexity.

Cosine palettes on 3D meshes

Remember cosine palettes from episode 37? The Inigo Quilez technique: color = a + b * cos(2pi * (c * t + d)) where a, b, c, d are vec3 parameters that define the palette. We used it extensively in the 2D shader episodes. It works exactly the same on 3D geometry.

const fragmentShader = `
  uniform float uTime;
  varying vec3 vNormal;
  varying vec3 vPosition;
  varying vec2 vUv;

  vec3 cosinePalette(float t, vec3 a, vec3 b, vec3 c, vec3 d) {
    return a + b * cos(6.28318 * (c * t + d));
  }

  void main() {
    vec3 viewDir = normalize(-vPosition);
    float fresnel = pow(1.0 - dot(vNormal, viewDir), 2.5);

    // use Y position + time as the palette input
    float t = vPosition.y * 0.3 + uTime * 0.15;

    // iridescent palette
    vec3 col = cosinePalette(t,
      vec3(0.5, 0.5, 0.5),
      vec3(0.5, 0.5, 0.5),
      vec3(1.0, 1.0, 1.0),
      vec3(0.0, 0.10, 0.20)
    );

    // boost the rim
    col += fresnel * 0.4;

    gl_FragColor = vec4(col, 1.0);
  }
`;

Using the vertex's Y position as the palette input creates horizontal color bands that flow over the mesh surface. Adding uTime * 0.15 makes the colors slowly shift and cycle. Combined with the Fresnel rim boost, you get an iridescent, color-shifting surface that looks like oil on water or a pearl.

Swap the palette input from vPosition.y to the UV coordinates, or to the dot product of the normal with a fixed direction, or to the distance from a point -- each gives a completely different visual pattern using the exact same palette function. This is the power of separating the "what drives the color" (the input t) from the "what colors are produced" (the palette parameters).
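
A few of those alternative drivers, written as drop-in replacements for the float t = ... line in the fragment shader above (the reference point in the last one is an arbitrary spot in view space, purely for illustration):

// bands that follow the UV wrap of the geometry
float t = vUv.x * 3.0 + uTime * 0.1;

// bands driven by how much the surface faces a fixed direction
float t = dot(vNormal, normalize(vec3(0.0, 1.0, 0.0))) + uTime * 0.1;

// rings expanding from a point (vPosition is in view space here)
float t = length(vPosition - vec3(0.0, 0.0, -3.0)) * 0.5 + uTime * 0.1;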

RawShaderMaterial: full control

ShaderMaterial is convenient because Three.js auto-injects uniforms and attributes. But sometimes you want complete control -- no magic, no hidden includes. That's RawShaderMaterial. You write everything yourself, including the attribute and uniform declarations:

const rawVertexShader = `
  precision mediump float;

  // you must declare these yourself
  uniform mat4 projectionMatrix;
  uniform mat4 modelViewMatrix;

  attribute vec3 position;
  attribute vec3 normal;
  attribute vec2 uv;

  varying vec2 vUv;
  varying vec3 vNormal;

  void main() {
    vUv = uv;
    vNormal = normal;
    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
  }
`;

const rawFragmentShader = `
  precision mediump float;

  varying vec2 vUv;
  varying vec3 vNormal;

  void main() {
    vec3 light = normalize(vec3(1.0, 1.0, 1.0));
    float diff = max(dot(vNormal, light), 0.0);
    vec3 col = vec3(0.8, 0.4, 0.2) * (0.3 + 0.7 * diff);
    gl_FragColor = vec4(col, 1.0);
  }
`;

const rawMaterial = new THREE.RawShaderMaterial({
  vertexShader: rawVertexShader,
  fragmentShader: rawFragmentShader
});

Same result, more typing. So why use it? When you want to know exactly what's going into the shader pipeline. No surprises, no auto-injected code. If you're debugging a shader bug and can't figure out what Three.js is sneaking in, switch to RawShaderMaterial and you see every line. Also useful when your shaders need to be compatible with specific WebGL versions or when you're porting shader code from another framework.

For creative coding, ShaderMaterial is almost always the right choice. RawShaderMaterial is for when you need to understand the plumbing or strip away everything for performance.

Writing your own lighting

Three.js's built-in materials use physically-based shading -- accurate, realistic, and computationally expensive. For creative coding, you often want simpler or stylized lighting. With ShaderMaterial, you write your own:

const fragmentShader = `
  uniform float uTime;
  uniform vec3 uLightPos;
  varying vec3 vNormal;
  varying vec3 vPosition;

  void main() {
    // diffuse lighting (Lambert)
    vec3 lightDir = normalize(uLightPos - vPosition);
    float diff = max(dot(vNormal, lightDir), 0.0);

    // specular highlight (Blinn-Phong)
    vec3 viewDir = normalize(-vPosition);
    vec3 halfDir = normalize(lightDir + viewDir);
    float spec = pow(max(dot(vNormal, halfDir), 0.0), 32.0);

    // combine
    vec3 ambient = vec3(0.08, 0.06, 0.12);
    vec3 diffColor = vec3(0.6, 0.2, 0.3) * diff;
    vec3 specColor = vec3(1.0, 0.9, 0.8) * spec * 0.5;

    vec3 col = ambient + diffColor + specColor;

    gl_FragColor = vec4(col, 1.0);
  }
`;

const material = new THREE.ShaderMaterial({
  vertexShader,
  fragmentShader,
  uniforms: {
    uTime: { value: 0.0 },
    uLightPos: { value: new THREE.Vector3(3, 5, 4) }
  }
});

Lambert diffuse shading is the simplest physically-motivated lighting model: max(dot(normal, lightDirection), 0.0). The brighter the surface faces the light, the brighter it is. Blinn-Phong adds a specular highlight -- a shiny hotspot where the light direction and view direction are mirrored around the surface normal. The exponent (32.0 here) controls the tightness of the specular spot.

This is the same math from ep040 (raymarching materials and lighting), just applied to Three.js geometry instead of raymarched SDFs. The lighting model doesn't care how the geometry was produced -- SDF or mesh, the normal and position are what matter.

You can get creative with this. Toon shading: quantize the diffuse value into discrete steps (floor(diff * 3.0) / 3.0). Rim lighting: add the Fresnel term on top of the diffuse. Holographic: replace the diffuse color with a cosine palette driven by the view angle. The freedom to write whatever lighting you want is the whole point of ShaderMaterial.
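
As a concrete example, a toon-shaded variant of the fragment shader above only needs that quantization step (the band count, rim exponent, and colors here are placeholder choices):

const toonFragmentShader = `
  uniform vec3 uLightPos;
  varying vec3 vNormal;
  varying vec3 vPosition;

  void main() {
    vec3 lightDir = normalize(uLightPos - vPosition);
    float diff = max(dot(vNormal, lightDir), 0.0);

    // toon shading: quantize the diffuse term into discrete bands
    float bands = 3.0;
    float toon = floor(diff * bands) / bands;

    // optional rim on top -- the same Fresnel trick as before
    float rim = pow(1.0 - max(dot(vNormal, normalize(-vPosition)), 0.0), 3.0);

    vec3 col = vec3(0.6, 0.2, 0.3) * (0.2 + 0.8 * toon) + rim * vec3(0.9);
    gl_FragColor = vec4(col, 1.0);
  }
`;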

Combining displacement and shading

Time to put the pieces together. A sphere with vertex noise displacement, Fresnel edge glow, cosine palette coloring, and custom lighting. The living, breathing, iridescent orb:

import * as THREE from 'three';
import { OrbitControls } from 'three/addons/controls/OrbitControls.js';

const scene = new THREE.Scene();
scene.background = new THREE.Color(0x050508);

const camera = new THREE.PerspectiveCamera(
  60, window.innerWidth / window.innerHeight, 0.1, 50
);
camera.position.set(0, 0, 3.5);

const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(window.innerWidth, window.innerHeight);
renderer.setPixelRatio(window.devicePixelRatio);
document.body.appendChild(renderer.domElement);

const controls = new OrbitControls(camera, renderer.domElement);
controls.enableDamping = true;

The scene setup -- dark background, camera pulled back enough to see the sphere comfortably.

const vertShader = `
  uniform float uTime;
  uniform float uNoiseScale;
  uniform float uDisplaceAmp;

  varying vec3 vNormal;
  varying vec3 vPosition;
  varying float vDisplacement;

  float hash(vec3 p) {
    p = fract(p * vec3(443.897, 441.423, 437.195));
    p += dot(p, p.yzx + 19.19);
    return fract((p.x + p.y) * p.z);
  }

  float noise3D(vec3 p) {
    vec3 i = floor(p);
    vec3 f = fract(p);
    f = f * f * (3.0 - 2.0 * f);
    float a = hash(i);
    float b = hash(i + vec3(1,0,0));
    float c = hash(i + vec3(0,1,0));
    float d = hash(i + vec3(1,1,0));
    float e = hash(i + vec3(0,0,1));
    float f2 = hash(i + vec3(1,0,1));
    float g = hash(i + vec3(0,1,1));
    float h = hash(i + vec3(1,1,1));
    return mix(mix(mix(a,b,f.x), mix(c,d,f.x), f.y),
               mix(mix(e,f2,f.x), mix(g,h,f.x), f.y), f.z);
  }

  void main() {
    float n = 0.0;
    n += noise3D(position * uNoiseScale + uTime * 0.25) * 0.55;
    n += noise3D(position * uNoiseScale * 2.0 + uTime * 0.4) * 0.3;
    n += noise3D(position * uNoiseScale * 4.0 + uTime * 0.6) * 0.15;

    n = n * 2.0 - 0.8;  // center around zero

    vec3 displaced = position + normal * n * uDisplaceAmp;

    vNormal = normalize(normalMatrix * normal);
    vPosition = (modelViewMatrix * vec4(displaced, 1.0)).xyz;
    vDisplacement = n;

    gl_Position = projectionMatrix * modelViewMatrix * vec4(displaced, 1.0);
  }
`;

The vertex shader passes vDisplacement to the fragment shader as a varying -- this lets the fragment shader know how much the surface was pushed out at each point. We'll use it for coloring.

const fragShader = `
  uniform float uTime;

  varying vec3 vNormal;
  varying vec3 vPosition;
  varying float vDisplacement;

  vec3 cosinePalette(float t, vec3 a, vec3 b, vec3 c, vec3 d) {
    return a + b * cos(6.28318 * (c * t + d));
  }

  void main() {
    vec3 viewDir = normalize(-vPosition);

    // fresnel rim
    float fresnel = pow(1.0 - max(dot(vNormal, viewDir), 0.0), 3.0);

    // simple diffuse light
    vec3 lightDir = normalize(vec3(2.0, 3.0, 4.0));
    float diff = max(dot(vNormal, lightDir), 0.0);

    // color from displacement + time
    float t = vDisplacement * 0.8 + uTime * 0.08;
    vec3 palColor = cosinePalette(t,
      vec3(0.5, 0.5, 0.5),
      vec3(0.5, 0.5, 0.5),
      vec3(1.0, 1.0, 0.5),
      vec3(0.8, 0.90, 0.30)
    );

    // combine: palette * lighting + rim glow
    vec3 col = palColor * (0.15 + 0.85 * diff);
    col += fresnel * vec3(0.4, 0.7, 1.0) * 0.5;

    gl_FragColor = vec4(col, 1.0);
  }
`;

And the material + animation loop:

const orbMaterial = new THREE.ShaderMaterial({
  vertexShader: vertShader,
  fragmentShader: fragShader,
  uniforms: {
    uTime: { value: 0 },
    uNoiseScale: { value: 1.8 },
    uDisplaceAmp: { value: 0.35 }
  }
});

const orb = new THREE.Mesh(
  new THREE.SphereGeometry(1, 128, 128),
  orbMaterial
);
scene.add(orb);

const clock = new THREE.Clock();

function animate() {
  requestAnimationFrame(animate);
  const t = clock.getElapsedTime();

  orbMaterial.uniforms.uTime.value = t;
  orb.rotation.y = t * 0.1;

  controls.update();
  renderer.render(scene, camera);
}

animate();

window.addEventListener('resize', () => {
  camera.aspect = window.innerWidth / window.innerHeight;
  camera.updateProjectionMatrix();
  renderer.setSize(window.innerWidth, window.innerHeight);
});

128x128 segments on the sphere gives enough resolution for smooth noise displacement. The displacement amplitude of 0.35 creates visible deformation without the mesh tearing apart. The cosine palette cycles through warm-to-cool colors based on displacement value plus time, so peaks (positive displacement) have different colors than valleys (negative displacement), and the whole thing slowly shifts.

Orbit around this thing. The Fresnel rim catches the edges with a cool blue glow. The surface ripples and morphs organically. The color shifts through the palette as displacement changes. It genuinely looks like a living organism or a small alien planet. And it's all GPU-side -- JavaScript just updates one float (time) per frame. The GPU does everything else.

Post-processing with custom shaders

Here's another place your 2D shader skills become useful. EffectComposer lets you apply fullscreen shader passes to the rendered image -- exactly like the post-processing effects from episode 43. Same concept, different API:

import { EffectComposer } from 'three/addons/postprocessing/EffectComposer.js';
import { RenderPass } from 'three/addons/postprocessing/RenderPass.js';
import { ShaderPass } from 'three/addons/postprocessing/ShaderPass.js';

// custom post-processing shader
const vignetteShader = {
  uniforms: {
    tDiffuse: { value: null },  // previous pass texture (auto-set)
    uIntensity: { value: 0.7 }
  },
  vertexShader: `
    varying vec2 vUv;
    void main() {
      vUv = uv;
      gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
    }
  `,
  fragmentShader: `
    uniform sampler2D tDiffuse;
    uniform float uIntensity;
    varying vec2 vUv;

    void main() {
      vec4 color = texture2D(tDiffuse, vUv);

      // vignette: darken edges
      vec2 center = vUv - 0.5;
      float dist = length(center);
      float vignette = 1.0 - dist * uIntensity;
      vignette = clamp(vignette, 0.0, 1.0);
      vignette = smoothstep(0.0, 1.0, vignette);

      color.rgb *= vignette;

      gl_FragColor = color;
    }
  `
};

const composer = new EffectComposer(renderer);
composer.addPass(new RenderPass(scene, camera));
composer.addPass(new ShaderPass(vignetteShader));

In the animation loop, replace renderer.render(scene, camera) with composer.render():

function animate() {
  requestAnimationFrame(animate);
  orbMaterial.uniforms.uTime.value = clock.getElapsedTime();
  controls.update();
  composer.render();  // not renderer.render!
}

The tDiffuse uniform is the magic glue. EffectComposer renders the scene to a texture (via RenderPass), then passes that texture as tDiffuse to the next ShaderPass. Your fragment shader samples from it and outputs the processed result. You can chain as many passes as you want -- vignette, then chromatic aberration, then film grain, then color grading. Each one reads the output of the previous.

Any 2D shader effect from episodes 32-45 can become a Three.js post-processing pass. That feedback loop shader from ep036? Make it a ShaderPass. The noise-based distortion from ep035? ShaderPass. The color manipulation from ep037? ShaderPass. All that GLSL knowledge plugs directly into this pipeline. You're not relearning anything -- you're reusing it in a 3D context.
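
Chaining extra passes just means more addPass calls -- each new ShaderPass takes the same { uniforms, vertexShader, fragmentShader } object shape as the vignette shader above. The extra shader objects below are hypothetical placeholders for effects you'd write yourself, not Three.js built-ins:

// continuing the composer set up earlier
composer.addPass(new ShaderPass(chromaticAberrationShader)); // hypothetical: offsets R/G/B samples of tDiffuse
composer.addPass(new ShaderPass(filmGrainShader));           // hypothetical: adds animated noise on top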

ShaderChunks: surgical shader modification

Three.js's built-in materials (MeshStandardMaterial, MeshPhysicalMaterial, etc.) are themselves built from modular shader pieces called ShaderChunks. You can override specific chunks while keeping everything else. This is useful when you want 90% of the standard PBR pipeline but need to tweak one specific thing:

const customMaterial = new THREE.MeshStandardMaterial({
  color: 0x4488aa,
  roughness: 0.4,
  metalness: 0.6
});

// override the normal calculation
customMaterial.onBeforeCompile = (shader) => {
  shader.uniforms.uTime = { value: 0 };

  // inject the time uniform declaration
  shader.vertexShader = shader.vertexShader.replace(
    '#include <common>',
    `#include <common>
     uniform float uTime;`
  );

  // modify the vertex position before projection
  shader.vertexShader = shader.vertexShader.replace(
    '#include <begin_vertex>',
    `#include <begin_vertex>
     float wave = sin(position.x * 3.0 + uTime * 2.0) * 0.1;
     transformed.y += wave;`
  );

  // store a reference so we can update the uniform
  customMaterial.userData.shader = shader;
};

onBeforeCompile gives you access to the shader source before it's compiled. You find a chunk (#include <begin_vertex>) and replace it with your modified version. The begin_vertex chunk sets vec3 transformed = position; -- by appending code after it, you can displace the position. The rest of the PBR pipeline (lighting, shadows, reflections) continues to work normally.
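
One gotcha: the injected uniform lives on the compiled shader object, not on the material itself, so the per-frame update goes through that stored reference -- and the reference only exists once the material has actually compiled. A minimal loop might guard for that:

function animate() {
  requestAnimationFrame(animate);

  // userData.shader is set inside onBeforeCompile, i.e. after the first compile
  const shader = customMaterial.userData.shader;
  if (shader) {
    shader.uniforms.uTime.value = clock.getElapsedTime();
  }

  renderer.render(scene, camera);
}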

This is a power-user technique. You need to know which chunks exist and what they do. The Three.js source code has them all in src/renderers/shaders/ShaderChunk/. The main ones you'd want to override:

  • begin_vertex -- vertex position (for displacement)
  • beginnormal_vertex -- vertex normal (for normal mapping)
  • color_fragment -- runs right after diffuseColor is set up, a convenient hook for custom color effects
  • emissivemap_fragment -- emissive color (for custom glow)

It's less clean than writing a full ShaderMaterial but WAY less work when you only want to change one thing. You get all of Three.js's lighting, shadows, and PBR for free, plus your custom modification on top.

Transferring 2D shader skills to 3D

Let's make this connection explicit. Everything from the shader arc works on 3D geometry. The trick is choosing what drives the shader input:

const fragmentShader = `
  uniform float uTime;
  varying vec2 vUv;
  varying vec3 vNormal;
  varying vec3 vPosition;

  // SDF circle (from ep033)
  float sdCircle(vec2 p, float r) {
    return length(p) - r;
  }

  // from ep037: cosine palette
  vec3 palette(float t) {
    return vec3(0.5) + vec3(0.5) * cos(6.28 * (vec3(1.0) * t + vec3(0.0, 0.1, 0.2)));
  }

  void main() {
    // use UVs as 2D coordinates -- same as a fullscreen shader
    vec2 uv = vUv * 2.0 - 1.0;

    // draw SDF circles on the surface of the mesh
    float d = sdCircle(uv, 0.5 + sin(uTime) * 0.1);
    float rings = sin(d * 30.0 + uTime * 3.0);

    vec3 col = palette(d * 0.5 + uTime * 0.1);
    col *= smoothstep(0.0, 0.02, abs(rings));

    // add some depth from the normal
    float light = max(dot(vNormal, normalize(vec3(1, 1, 1))), 0.0);
    col *= 0.3 + 0.7 * light;

    gl_FragColor = vec4(col, 1.0);
  }
`;

SDF rings from episode 33, painted on the UV coordinates of a 3D mesh. On a sphere, the UVs map the 2D pattern onto the spherical surface. On a torus, the pattern wraps around both loops. On a custom mesh from ep063, the pattern follows whatever UV mapping you gave the geometry.

This is the mental shift: in 2D shaders, the UV coordinates were always (0,0) at bottom-left to (1,1) at top-right of the screen. On 3D geometry, the UVs are a 2D parameterization of the 3D surface. But once the fragment shader has vUv, it doesn't know or care whether those coordinates came from a fullscreen quad or a twisted Mobius strip. The shader math is identical. The visual result is radically different because the geometry maps those coordinates onto a 3D shape.

You can also use world-space position instead of UVs. This tiles the pattern across the mesh in 3D:

// triplanar-ish: use world position for seamless tiling
vec3 wp = vPosition * 2.0;
float pattern = sin(wp.x * 10.0) * sin(wp.y * 10.0) * sin(wp.z * 10.0);

No UV seams, no stretching at the poles of a sphere. The pattern exists in 3D space and the mesh "cuts through" it like a cookie cutter through patterned dough. Strictly speaking this is solid (volumetric) texturing; true triplanar mapping goes a step further and blends three planar projections weighted by the surface normal (see the sketch below), but both avoid seam and stretching problems and are extremely useful for procedural surfaces on arbitrary geometry.
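
A minimal triplanar blend might look like this. It assumes the vertex shader passes world-space position and normal as vWorldPos / vWorldNormal (illustrative names), and pattern2D() stands in for any 2D pattern function from the shader arc:

// stand-in 2D pattern -- swap in any function from the 2D shader episodes
float pattern2D(vec2 p) {
  return 0.5 + 0.5 * sin(p.x * 10.0) * sin(p.y * 10.0);
}

float triplanar(vec3 worldPos, vec3 worldNormal) {
  // weight each planar projection by how much the surface faces that axis
  vec3 w = abs(normalize(worldNormal));
  w /= (w.x + w.y + w.z);

  float px = pattern2D(worldPos.yz); // projected along X
  float py = pattern2D(worldPos.xz); // projected along Y
  float pz = pattern2D(worldPos.xy); // projected along Z

  return px * w.x + py * w.y + pz * w.z;
}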

Creative exercise: shader gallery

Build three objects, each with a different custom ShaderMaterial, displayed together in one scene. Different geometry, different vertex effects, different fragment shading:

// object 1: noisy displaced torus knot with iridescent coloring
const torus = new THREE.Mesh(
  new THREE.TorusKnotGeometry(0.8, 0.25, 200, 32),
  new THREE.ShaderMaterial({
    vertexShader: vertShader,
    fragmentShader: fragShader,
    uniforms: {
      uTime: { value: 0 },
      uNoiseScale: { value: 2.2 },
      uDisplaceAmp: { value: 0.15 }
    }
  })
);
torus.position.x = -3;
scene.add(torus);

// object 2: icosahedron with a volumetric pattern and Fresnel glow
const ico = new THREE.Mesh(
  new THREE.IcosahedronGeometry(1, 3),
  new THREE.ShaderMaterial({
    vertexShader: `
      varying vec2 vUv;
      varying vec3 vNormal;
      varying vec3 vPos;
      void main() {
        vUv = uv;
        vNormal = normalize(normalMatrix * normal);
        vPos = (modelViewMatrix * vec4(position, 1.0)).xyz;
        gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
      }
    `,
    fragmentShader: `
      uniform float uTime;
      varying vec2 vUv;
      varying vec3 vNormal;
      varying vec3 vPos;
      void main() {
        vec3 wp = vPos * 3.0;
        float p = sin(wp.x * 5.0 + uTime) * sin(wp.y * 5.0) * sin(wp.z * 5.0 + uTime * 0.7);
        float fresnel = pow(1.0 - abs(dot(vNormal, normalize(-vPos))), 2.0);
        vec3 col = mix(vec3(0.1, 0.05, 0.15), vec3(0.9, 0.4, 0.1), smoothstep(-0.3, 0.3, p));
        col += fresnel * vec3(0.2, 0.5, 0.8);
        gl_FragColor = vec4(col, 1.0);
      }
    `,
    uniforms: { uTime: { value: 0 } }
  })
);
scene.add(ico);

// object 3: plane with animated wave displacement
const plane = new THREE.Mesh(
  new THREE.PlaneGeometry(3, 3, 128, 128),
  new THREE.ShaderMaterial({
    vertexShader: `
      uniform float uTime;
      varying vec2 vUv;
      varying vec3 vPos;
      void main() {
        vUv = uv;
        vec3 p = position;
        p.z += sin(p.x * 4.0 + uTime * 2.0) * cos(p.y * 3.0 + uTime * 1.5) * 0.2;
        vPos = (modelViewMatrix * vec4(p, 1.0)).xyz;
        gl_Position = projectionMatrix * modelViewMatrix * vec4(p, 1.0);
      }
    `,
    fragmentShader: `
      uniform float uTime;
      varying vec2 vUv;
      varying vec3 vPos;
      void main() {
        float wave = sin(vUv.x * 20.0 + uTime * 3.0) * sin(vUv.y * 15.0 + uTime * 2.0);
        vec3 col = vec3(0.1 + wave * 0.3, 0.3 + wave * 0.2, 0.5 - wave * 0.1);
        gl_FragColor = vec4(col, 1.0);
      }
    `,
    uniforms: { uTime: { value: 0 } },
    side: THREE.DoubleSide
  })
);
plane.position.x = 3;
plane.rotation.y = -0.4;
scene.add(plane);

Update all three materials in the loop:

function animate() {
  requestAnimationFrame(animate);
  const t = clock.getElapsedTime();

  torus.material.uniforms.uTime.value = t;
  ico.material.uniforms.uTime.value = t;
  plane.material.uniforms.uTime.value = t;

  torus.rotation.y = t * 0.2;
  ico.rotation.y = t * 0.15;
  ico.rotation.x = t * 0.1;

  controls.update();
  renderer.render(scene, camera);
}

Three objects, three different shader approaches, one scene. The torus knot with organic noise displacement and iridescent coloring. The icosahedron with volumetric patterns and Fresnel glow. The plane with wave displacement and animated UV patterns. Orbit around the scene and each object shows different shader techniques working on different geometry. It's a gallery of what's possible when you combine GLSL knowledge with Three.js's scene management.

Performance notes

A few things worth knowing about ShaderMaterial performance:

Vertex displacement recalculates on the GPU every frame -- that's the point. But the number of vertices matters. A SphereGeometry(1, 32, 32) has ~1,000 vertices. SphereGeometry(1, 128, 128) has ~16,000 vertices. SphereGeometry(1, 512, 512) has ~260,000. The noise function runs once per vertex per frame. For the creative coding projects in this series, 128x128 is plenty smooth and runs well on any modern GPU. Only go higher if you need very fine displacement detail.

Fragment shader complexity matters more than vertex count in most cases. Every pixel covered by the mesh runs the fragment shader. A fullscreen object at 1080p runs the fragment shader ~2 million times per frame. Complex noise calculations, deep raymarching loops, or heavy texture sampling in the fragment shader can tank performance. Keep it simple where you can -- precompute what doesn't change, use step() and smoothstep() instead of branching, avoid unnecessary texture lookups.
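
For example, a hard color split can usually be written branchlessly (d, colInside, and colOutside are placeholders for whatever your shader computes):

// branching version -- readable, but a conditional per fragment
vec3 col;
if (d < 0.0) { col = colInside; } else { col = colOutside; }

// branchless version -- step() returns 0.0 or 1.0, mix() picks between the two
vec3 col2 = mix(colInside, colOutside, step(0.0, d));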

needsUpdate on ShaderMaterial properties: unlike BufferGeometry attributes (where you set needsUpdate = true to re-upload data), ShaderMaterial uniforms update automatically every frame. You just set the value and Three.js sends it to the GPU. No flag needed. This confused me at first coming from the geometry animation in ep063 where you had to explicitly flag position updates.
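
Side by side, the two update patterns look like this (the array index and value are placeholders):

// BufferGeometry attribute (ep063 style): edit the data, then flag the re-upload
geometry.attributes.position.array[i] = newY;
geometry.attributes.position.needsUpdate = true;

// ShaderMaterial uniform: just assign -- Three.js uploads it on the next render
material.uniforms.uTime.value = clock.getElapsedTime();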

What's coming

We've bridged the gap between 2D shaders and 3D scenes. The GLSL skills from episodes 21-45 now work on actual geometry -- vertex displacement, custom lighting, cosine palettes on meshes, post-processing through EffectComposer. ShaderMaterial gives you full GPU control while Three.js handles the boring parts (matrices, attributes, the render pipeline).

Next up we're looking at particles in 3D. Not the 2D particle systems from episode 11 -- actual 3D point clouds and instanced meshes. Thousands of objects moving through space, each with its own position and velocity, rendered efficiently on the GPU. The techniques from our emergence arc (boids, swarms, trails) are about to get a whole new visual dimension.

Alright, so what do we know now?

  • ShaderMaterial lets you write custom vertex and fragment shaders while Three.js handles scene management, matrix uniforms, and attribute binding. You write the GLSL, Three.js handles the pipeline. RawShaderMaterial is the same but without auto-injected code -- full manual control
  • Three.js auto-injects projectionMatrix, modelViewMatrix, normalMatrix, position, normal, and uv into ShaderMaterial shaders. You just use them. Uniforms are passed via the uniforms object and updated by setting .value in JavaScript -- no needsUpdate flag required
  • Vertex displacement: offset position along normal in the vertex shader using noise. Three octaves of 3D noise with time offset creates organic breathing/morphing geometry. Same fractal noise layering from ep035, now deforming actual meshes instead of coloring pixels
  • Fresnel effect: pow(1.0 - dot(normal, viewDirection), exponent) gives edge glow. Low exponent = broad glow, high exponent = thin rim. Combined with a rim color, you get holographic/force-field/bioluminescent aesthetics with about 5 lines of GLSL
  • Cosine palettes (ep037) work on 3D meshes. Drive the palette input t from vertex position, displacement, UV, view angle -- each gives a different visual pattern. Add time for color-shifting animations
  • Post-processing via EffectComposer + ShaderPass applies fullscreen fragment shaders to the rendered image. Any 2D shader from ep032-045 can become a post-processing pass. tDiffuse is the previous pass's texture, automatically bound
  • onBeforeCompile on standard materials lets you modify specific ShaderChunks without rewriting the entire PBR pipeline. Override begin_vertex for displacement, color_fragment for custom coloring. Best of both worlds: PBR lighting + custom effects
  • All 2D shader techniques transfer directly to 3D. SDFs on UV coordinates, noise patterns on world positions, cosine palettes on normals. The fragment shader doesn't care where its input coordinates come from -- fullscreen quad or torus knot, the math is the same. Triplanar mapping (using world position instead of UVs) avoids seam issues on complex geometry

Cheers! Thanks for reading.

X: @femdev