Learn Creative Coding (#67) - 3D Animation and Motion

Last episode we built complex 3D forms from pure math -- terrain islands, noise-displaced asteroids, marching cubes metaballs, recursive crystals, tapered tubes. All of those meshes just sat there. Static sculptures in a 3D gallery. Pretty, but lifeless.
Now we make them move.
Animation in Three.js follows the same core loop we've been using since episode 16 (easing and lerp) and episode 11 (particle systems): update state, redraw, repeat. The difference is we're now working with 3D transforms, quaternion rotations, and vertex-level deformation on the GPU. The principles are identical -- sin waves for oscillation, lerp for smooth transitions, noise for organic drift -- but the extra dimension and the power of the GPU shader pipeline open up motion that wasn't possible on a flat canvas.
We'll cover the animation loop itself (frame-independent timing), procedural animation from pure math, spring physics for natural feel, morph targets for shape-shifting, skeletal concepts for articulated motion, keyframe animation via Three.js's built-in system, and vertex animation in shaders for massive-scale deformation. By the end of this episode you'll have all the tools to bring your procedural meshes to life.
The animation loop: time, not frames
We've been calling requestAnimationFrame since episode 62. But there's a subtle trap: if you animate by incrementing values per frame (x += 0.01), the animation speed depends on the frame rate. A 60fps display moves twice as fast as a 30fps display. That's fine for personal sketches but terrible for anything you want to look consistent across devices.
THREE.Clock solves this by giving you frame-independent time:
import * as THREE from 'three';
import { OrbitControls } from 'three/addons/controls/OrbitControls.js';
const scene = new THREE.Scene();
scene.background = new THREE.Color(0x080810);
const camera = new THREE.PerspectiveCamera(
60, window.innerWidth / window.innerHeight, 0.1, 100
);
camera.position.set(0, 2, 6);
const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(window.innerWidth, window.innerHeight);
renderer.setPixelRatio(window.devicePixelRatio);
document.body.appendChild(renderer.domElement);
const controls = new OrbitControls(camera, renderer.domElement);
controls.enableDamping = true;
scene.add(new THREE.AmbientLight(0x222244, 1.5));
const sun = new THREE.DirectionalLight(0xffeedd, 2.0);
sun.position.set(4, 6, 3);
scene.add(sun);
const clock = new THREE.Clock();
function animate() {
requestAnimationFrame(animate);
const delta = clock.getDelta(); // seconds since last frame
const elapsed = clock.getElapsedTime(); // total seconds since start
// use 'elapsed' for absolute positions, 'delta' for velocities
controls.update();
renderer.render(scene, camera);
}
animate();
window.addEventListener('resize', () => {
camera.aspect = window.innerWidth / window.innerHeight;
camera.updateProjectionMatrix();
renderer.setSize(window.innerWidth, window.innerHeight);
});
getDelta() returns the time since the last call in seconds. A 60fps frame gives you ~0.0167, a 30fps frame gives you ~0.033. Multiply your velocity by delta and the object moves the same distance per second regardless of frame rate. getElapsedTime() gives total time since the clock started -- use it for absolute oscillations like Math.sin(elapsed * 2.0) where you want the same position at the same wallclock time on every device.
This is the foundation for everything that follows. From here on, all animation code uses elapsed or delta, never raw frame counters.
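To make the difference concrete, here's a tiny standalone sketch (plain JS, no Three.js, with a hypothetical `simulate()` helper) that steps a position at two frame rates using delta-scaled movement:

```javascript
// delta-scaled movement: position advances by speed * dt each frame,
// so total distance depends only on elapsed time, not frame rate
function simulate(fps, seconds, speed) {
  const dt = 1 / fps; // what clock.getDelta() would return each frame
  let x = 0;
  for (let i = 0; i < fps * seconds; i++) x += speed * dt;
  return x;
}

// 1 second at 2 units/sec lands in the same place at any frame rate
const at60 = simulate(60, 1, 2.0); // 2 units, spread over 60 frames
const at30 = simulate(30, 1, 2.0); // 2 units, spread over 30 frames
```

Swap `speed * dt` for a fixed per-frame increment and the two results diverge by a factor of two, which is exactly the trap described above.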
Transform animation: position, rotation, scale
The simplest animation: change an object's transform properties over time. Three.js objects have .position, .rotation, .scale -- all mutable, all updated before each render:
const cube = new THREE.Mesh(
new THREE.BoxGeometry(1, 1, 1),
new THREE.MeshStandardMaterial({ color: 0x4488aa })
);
scene.add(cube);
const sphere = new THREE.Mesh(
new THREE.SphereGeometry(0.5, 32, 32),
new THREE.MeshStandardMaterial({ color: 0xaa4466 })
);
sphere.position.x = 3;
scene.add(sphere);
const torus = new THREE.Mesh(
new THREE.TorusGeometry(0.5, 0.2, 16, 32),
new THREE.MeshStandardMaterial({ color: 0x66aa44 })
);
torus.position.x = -3;
scene.add(torus);
function animate() {
requestAnimationFrame(animate);
const t = clock.getElapsedTime();
// oscillating position (sin/cos from ep013)
cube.position.y = Math.sin(t * 1.5) * 0.5;
// continuous rotation
cube.rotation.y = t * 0.8;
cube.rotation.x = t * 0.3;
// pulsing scale
const pulse = 1.0 + Math.sin(t * 3.0) * 0.15;
sphere.scale.set(pulse, pulse, pulse);
// orbital motion (parametric circle)
torus.position.x = Math.cos(t * 0.7) * 3;
torus.position.z = Math.sin(t * 0.7) * 3;
torus.rotation.x = t * 1.2;
controls.update();
renderer.render(scene, camera);
}
Same trig from episode 13. sin(t) for up-down bobbing, cos(t) and sin(t) together for circular orbits, continuous t multiplication for steady rotation. The cube bobs and tumbles, the sphere breathes, the torus orbits and spins. Layer these and you get complex-looking motion from very simple math.
For smooth transitions between states (A to B over time), use lerp -- just like episode 16:
// lerp: move toward target smoothly
function lerp(a, b, t) {
return a + (b - a) * t;
}
// in the animation loop:
let targetY = 2;
cube.position.y = lerp(cube.position.y, targetY, 0.03);
// change targetY to -2 and the cube smoothly drifts there
That 0.03 is the easing factor. Low values give slow, floaty movement. High values snap quickly. Because we're lerping toward the target every frame, the motion has natural ease-out -- it starts fast and decelerates as it approaches the target. Same exponential decay behavior from ep016.
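One caveat: lerping by a fixed 0.03 every frame is itself frame-rate dependent (a 120fps display converges twice as fast). A common fix is to derive the factor from delta -- sketched here as a hypothetical `damp()` helper; recent Three.js versions ship an equivalent as `THREE.MathUtils.damp`:

```javascript
// exponential damping with a dt-derived factor: after total time T the
// value is target + (start - target) * exp(-k * T), at any frame rate
function damp(current, target, k, dt) {
  return current + (target - current) * (1 - Math.exp(-k * dt));
}

// in the animation loop (k ~ 3 feels like the 0.03-per-frame version):
// cube.position.y = damp(cube.position.y, targetY, 3, delta);
```

The `k` parameter plays the role of the easing factor: higher is snappier, lower is floatier, but now the feel is identical on every display.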
Quaternions: no gimbal lock
Euler angles (.rotation.x, .rotation.y, .rotation.z) are intuitive but have a fundamental problem: gimbal lock. When two rotation axes align, you lose a degree of freedom -- the object can't rotate in certain directions anymore. You've probably seen this in games where a character's head snaps weirdly at extreme angles.
Three.js uses quaternions internally for rotation. A quaternion is a four-component number (x, y, z, w) that represents a rotation without gimbal lock. You don't need to understand the math deeply (it involves 4D complex numbers, and honestly even after years of graphics programming I still find them a bit mystical). What matters is how to use them:
// rotate from one orientation to another smoothly
const startQuat = new THREE.Quaternion();
startQuat.setFromEuler(new THREE.Euler(0, 0, 0));
const endQuat = new THREE.Quaternion();
endQuat.setFromEuler(new THREE.Euler(Math.PI / 2, Math.PI, 0));
function animate() {
requestAnimationFrame(animate);
const t = clock.getElapsedTime();
// oscillate between two orientations
const blend = (Math.sin(t * 0.5) + 1) / 2; // 0..1
// slerp = spherical linear interpolation (smooth rotation blend)
cube.quaternion.slerpQuaternions(startQuat, endQuat, blend);
controls.update();
renderer.render(scene, camera);
}
slerpQuaternions interpolates between two rotations along the shortest path on a 4D sphere. The motion is smooth and constant-speed -- no acceleration, no deceleration, no weird flipping. It's the rotation equivalent of lerp, but for orientations.
When to use Euler angles vs quaternions:
- Simple continuous rotation (rotation.y += delta * speed) -- Euler is fine, clear to read
- Smooth transition between two specific orientations -- quaternion slerp
- Combining multiple rotations -- quaternion multiplication (avoids order-dependent Euler weirdness)
- Animation data from external files -- usually quaternions (glTF format stores rotation as quaternions)
For most creative coding, Euler angles work perfectly. Quaternions become necessary when you need precise interpolation between orientations, like camera transitions or character animation.
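For the curious, the "combining rotations" bullet is just the quaternion product. Here's a minimal pure-JS sketch of what `quaternion.multiply()` computes, using plain `{x, y, z, w}` objects instead of THREE.Quaternion purely for illustration:

```javascript
// Hamilton product: the combined rotation "apply b, then a"
function qmul(a, b) {
  return {
    x: a.w * b.x + a.x * b.w + a.y * b.z - a.z * b.y,
    y: a.w * b.y - a.x * b.z + a.y * b.w + a.z * b.x,
    z: a.w * b.z + a.x * b.y - a.y * b.x + a.z * b.w,
    w: a.w * b.w - a.x * b.x - a.y * b.y - a.z * b.z
  };
}

// a rotation of angle theta about a unit axis (ax, ay, az)
function qFromAxisAngle(ax, ay, az, theta) {
  const s = Math.sin(theta / 2);
  return { x: ax * s, y: ay * s, z: az * s, w: Math.cos(theta / 2) };
}

// two 90-degree turns about Z compose into one 180-degree turn
const quarter = qFromAxisAngle(0, 0, 1, Math.PI / 2);
const half = qmul(quarter, quarter); // ~{x: 0, y: 0, z: 1, w: 0}
```

Note the product is order-dependent (qmul(a, b) differs from qmul(b, a)), but unlike Euler angles there's no hidden axis-convention ambiguity -- which is why stacked rotations behave predictably.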
Camera animation: fly-throughs and paths
Moving the camera along a path creates cinematic fly-through effects. Three.js has CatmullRomCurve3 -- a smooth spline through a set of 3D points:
const cameraPath = new THREE.CatmullRomCurve3([
new THREE.Vector3(0, 3, 8),
new THREE.Vector3(5, 2, 4),
new THREE.Vector3(6, 4, -2),
new THREE.Vector3(0, 3, -6),
new THREE.Vector3(-5, 2, -2),
new THREE.Vector3(-4, 4, 4),
new THREE.Vector3(0, 3, 8) // back to start for a loop
], true); // closed=true for seamless loop
// visualize the path (optional, for debugging)
const pathGeo = new THREE.BufferGeometry().setFromPoints(
cameraPath.getPoints(100)
);
const pathLine = new THREE.Line(
pathGeo,
new THREE.LineBasicMaterial({ color: 0x444466 })
);
scene.add(pathLine);
// a target for the camera to look at
const lookTarget = new THREE.Vector3(0, 1, 0);
function animate() {
requestAnimationFrame(animate);
const t = clock.getElapsedTime();
// move camera along path (t * speed, modulo 1 for looping)
const pathT = (t * 0.04) % 1;
const pos = cameraPath.getPointAt(pathT);
camera.position.copy(pos);
// always look at the target
camera.lookAt(lookTarget);
renderer.render(scene, camera);
}
The camera smoothly glides along the Catmull-Rom spline, always looking at the target. Because the curve is closed (last point = first point, closed=true), it loops seamlessly. Speed is controlled by the multiplier on t -- lower for slow cinematic, higher for a quick flyby.
You can combine this with OrbitControls for interactive + automated camera:
// let the user take control by clicking, auto-resume after inactivity
let userControlling = false;
let lastInteraction = 0;
controls.addEventListener('start', () => {
userControlling = true;
lastInteraction = clock.getElapsedTime();
});
controls.addEventListener('end', () => {
// restart the inactivity timer when the drag ends,
// so a long continuous drag isn't cut short mid-gesture
lastInteraction = clock.getElapsedTime();
});
function animate() {
requestAnimationFrame(animate);
const t = clock.getElapsedTime();
// resume auto camera after 5 seconds of inactivity
if (userControlling && t - lastInteraction > 5) {
userControlling = false;
}
if (!userControlling) {
const pathT = (t * 0.04) % 1;
camera.position.copy(cameraPath.getPointAt(pathT));
camera.lookAt(lookTarget);
} else {
controls.update();
}
renderer.render(scene, camera);
}
This creates a presentation-mode experience: the camera automatically glides around the scene, but as soon as the viewer clicks and drags, they take over. After 5 seconds of not touching anything, the automated path resumes. Great for exhibitions, portfolio pieces, or any creative work you want to show off.
Morph targets: shape-shifting
Morph targets (also called blend shapes) let you define multiple versions of the same mesh and smoothly interpolate between them. Same vertex count, same triangle connectivity, different vertex positions. A sphere can become a cube, a face can shift from neutral to smile, an asteroid can breathe.
// create a sphere
const geo = new THREE.SphereGeometry(1, 64, 64);
const positionArray = geo.attributes.position.array;
const vertexCount = positionArray.length / 3;
// morph target 1: cube-ish shape
const cubePositions = new Float32Array(vertexCount * 3);
for (let i = 0; i < vertexCount; i++) {
const i3 = i * 3;
const x = positionArray[i3];
const y = positionArray[i3 + 1];
const z = positionArray[i3 + 2];
// push toward cube corners (normalize each axis independently)
const ax = Math.abs(x), ay = Math.abs(y), az = Math.abs(z);
const maxAxis = Math.max(ax, ay, az, 0.001);
cubePositions[i3] = (x / maxAxis) * 1.0 - x;
cubePositions[i3 + 1] = (y / maxAxis) * 1.0 - y;
cubePositions[i3 + 2] = (z / maxAxis) * 1.0 - z;
}
// morph target 2: spiky shape
const spikePositions = new Float32Array(vertexCount * 3);
for (let i = 0; i < vertexCount; i++) {
const i3 = i * 3;
const x = positionArray[i3];
const y = positionArray[i3 + 1];
const z = positionArray[i3 + 2];
// push vertices outward based on a pattern
const spikeFactor = Math.sin(x * 5) * Math.sin(y * 5) * Math.sin(z * 5);
const dir = Math.max(0, spikeFactor) * 0.6;
spikePositions[i3] = x * dir;
spikePositions[i3 + 1] = y * dir;
spikePositions[i3 + 2] = z * dir;
}
geo.morphAttributes.position = [
new THREE.BufferAttribute(cubePositions, 3),
new THREE.BufferAttribute(spikePositions, 3)
];
const mesh = new THREE.Mesh(
geo,
// no material flag needed: since r133, Three.js enables morph targets
// automatically when the geometry has morphAttributes
new THREE.MeshStandardMaterial({
color: 0x6688cc,
roughness: 0.5
})
);
scene.add(mesh);
The morph target arrays store OFFSETS from the base position, not absolute positions. That's why we compute (cubePos - basePos) for each vertex. Three.js adds these offsets scaled by the morph influence:
function animate() {
requestAnimationFrame(animate);
const t = clock.getElapsedTime();
// oscillate between sphere, cube, and spiky
mesh.morphTargetInfluences[0] = (Math.sin(t * 0.7) + 1) / 2; // cube blend
mesh.morphTargetInfluences[1] = (Math.sin(t * 0.5 + 1.5) + 1) / 2; // spike blend
mesh.rotation.y = t * 0.2;
controls.update();
renderer.render(scene, camera);
}
Each influence goes 0 to 1. At 0, no effect. At 1, fully morphed. Both can be active simultaneously -- the sphere becomes a spiky cube. The interpolation happens on the GPU, so even with 64x64 segments (thousands of vertices) it's instant.
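The blending itself is simple enough to state in one line: per vertex component, the result is the base value plus the influence-weighted sum of offsets. A toy `morphedVertex()` helper (hypothetical, for illustration) makes it concrete:

```javascript
// final = base + sum(influence[i] * offset[i]) for each vertex component
function morphedVertex(base, offsets, influences) {
  let v = base;
  for (let i = 0; i < offsets.length; i++) v += influences[i] * offsets[i];
  return v;
}

// base x of 1.0, cube offset +0.5, spike offset -1.0:
morphedVertex(1.0, [0.5, -1.0], [1.0, 0.0]); // → 1.5 (fully cube)
morphedVertex(1.0, [0.5, -1.0], [1.0, 0.5]); // → 1.0 (cube + half spike)
```

The GPU evaluates exactly this sum for every vertex, which is why stacking several active morph targets costs almost nothing.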
Morph targets are used heavily in facial animation (blend shapes for smile, blink, frown, etc.) and in creative coding for shape-shifting generative forms. The limitation: both shapes must have the same vertex count and topology. You can't morph a 32-vertex sphere into a 64-vertex one.
Spring physics: bounce and overshoot
Linear lerp gets you from A to B smoothly but it feels mechanical. Springs add physical character -- overshoot, bounce, settle. The same spring physics from episode 18 (physics lite) applied to 3D object properties:
class Spring3D {
constructor(stiffness, damping) {
this.stiffness = stiffness;
this.damping = damping;
this.position = new THREE.Vector3();
this.velocity = new THREE.Vector3();
this.target = new THREE.Vector3();
}
update(dt) {
// spring force: pull toward target
const dx = this.target.x - this.position.x;
const dy = this.target.y - this.position.y;
const dz = this.target.z - this.position.z;
// acceleration = stiffness * displacement - damping * velocity
this.velocity.x += (dx * this.stiffness - this.velocity.x * this.damping) * dt;
this.velocity.y += (dy * this.stiffness - this.velocity.y * this.damping) * dt;
this.velocity.z += (dz * this.stiffness - this.velocity.z * this.damping) * dt;
this.position.x += this.velocity.x * dt;
this.position.y += this.velocity.y * dt;
this.position.z += this.velocity.z * dt;
}
}
// a bouncy follower
const spring = new Spring3D(120, 8);
const leader = new THREE.Mesh(
new THREE.SphereGeometry(0.3, 16, 16),
new THREE.MeshStandardMaterial({ color: 0xffaa33 })
);
scene.add(leader);
const follower = new THREE.Mesh(
new THREE.SphereGeometry(0.25, 16, 16),
new THREE.MeshStandardMaterial({ color: 0x33aaff })
);
scene.add(follower);
function animate() {
requestAnimationFrame(animate);
const dt = clock.getDelta();
const t = clock.elapsedTime; // read the property; calling getElapsedTime() here would reset the delta timer
// leader moves in a figure-8
leader.position.x = Math.sin(t * 0.8) * 3;
leader.position.y = Math.sin(t * 1.6) * 1.5;
leader.position.z = Math.cos(t * 0.8) * 2;
// follower chases with spring physics
spring.target.copy(leader.position);
spring.update(dt * 60); // stiffness/damping here are tuned in per-frame-at-60fps units
follower.position.copy(spring.position);
controls.update();
renderer.render(scene, camera);
}
The follower bounces and overshoots as it chases the leader. High stiffness (200+) gives tight, snappy following. Low stiffness (30) gives lazy, floaty motion. High damping kills the overshoot (no bounce), low damping lets it ring (lots of oscillation before settling).
Springs work on any property, not just position. Apply a spring to rotation and objects tilt with momentum when turning. Apply it to scale and things bounce when they pop into existence. Apply it to a uniform in a shader and material properties respond with physical spring feel. The character difference between "lerp to target" and "spring to target" is dramatic -- springs feel alive, lerp feels robotic.
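As a concrete example of "springs on scale", here's the same update rule reduced to one dimension, driving a pop-in effect -- a hypothetical `springStep()` sketch using the same constants as Spring3D above. The scale shoots past 1.0 (the bounce), then settles:

```javascript
// 1-D spring: scale starts at 0, target 1, overshoots, then settles
function springStep(s, dt) {
  s.v += (s.target - s.x) * s.k * dt - s.v * s.d * dt;
  s.x += s.v * dt;
}

const pop = { x: 0, v: 0, target: 1, k: 120, d: 8 };
let peak = 0;
for (let i = 0; i < 600; i++) {   // ten simulated seconds at 60fps
  springStep(pop, 1 / 60);
  peak = Math.max(peak, pop.x);   // record the overshoot
}
// peak lands well above 1.0; pop.x ends back at ~1.0
// in a real scene you'd call mesh.scale.setScalar(pop.x) each frame
```

With k=120 and d=8 this system is underdamped, which is exactly where the overshoot character comes from; raise d past ~22 (critical damping for this stiffness) and the bounce disappears.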
The keyframe system: AnimationClip and AnimationMixer
Three.js has a built-in keyframe animation system. You define property values at specific times, and the mixer interpolates between them. This is the same system that plays back animations from glTF files, but you can create clips programmatically too:
// keyframe track: position.y bounces from 0 to 3 and back
const yTrack = new THREE.NumberKeyframeTrack(
'.position[y]', // property path
[0, 0.5, 1.0, 1.5, 2.0], // times (seconds)
[0, 3, 0, 2, 0] // values at those times
);
// keyframe track: rotation.y makes a full spin
const rotTrack = new THREE.NumberKeyframeTrack(
'.rotation[y]',
[0, 2.0],
[0, Math.PI * 2]
);
// keyframe track: scale pulses
const scaleTrack = new THREE.VectorKeyframeTrack(
'.scale',
[0, 1.0, 2.0],
[1,1,1, 1.5,1.5,1.5, 1,1,1] // three vec3 values flattened
);
const clip = new THREE.AnimationClip('bounce-spin', 2.0, [
yTrack, rotTrack, scaleTrack
]);
const animatedBox = new THREE.Mesh(
new THREE.BoxGeometry(0.8, 0.8, 0.8),
new THREE.MeshStandardMaterial({ color: 0xcc6644, roughness: 0.6 })
);
scene.add(animatedBox);
const mixer = new THREE.AnimationMixer(animatedBox);
const action = mixer.clipAction(clip);
action.play();
function animate() {
requestAnimationFrame(animate);
const dt = clock.getDelta();
mixer.update(dt); // advance the animation
controls.update();
renderer.render(scene, camera);
}
NumberKeyframeTrack for scalar properties (single float), VectorKeyframeTrack for vec3 properties (position, scale), QuaternionKeyframeTrack for rotations. The mixer handles interpolation, looping, crossfading between clips, adjusting playback speed -- all the things you'd want from an animation system.
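Under the hood, sampling a track is just "find the bracketing keyframes and lerp between them". This sketch (a hypothetical `sampleTrack()` helper; the real mixer adds looping, blending, and quaternion-aware interpolation) shows the core of it, using the bounce track from above:

```javascript
// linear keyframe sampling: locate the surrounding times, lerp the values
function sampleTrack(times, values, t) {
  if (t <= times[0]) return values[0];
  if (t >= times[times.length - 1]) return values[values.length - 1];
  let i = 1;
  while (times[i] < t) i++;
  const f = (t - times[i - 1]) / (times[i] - times[i - 1]);
  return values[i - 1] + (values[i] - values[i - 1]) * f;
}

const times = [0, 0.5, 1.0, 1.5, 2.0];
const ys = [0, 3, 0, 2, 0];
sampleTrack(times, ys, 0.25); // → 1.5, halfway up the first hop
sampleTrack(times, ys, 1.75); // → 1.0, halfway down the last descent
```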
For creative coding, the keyframe system is useful when you want choreographed, timed sequences rather than freeform procedural motion. Music sync, title sequences, narrative animations where specific things need to happen at specific times. For organic, generative motion, procedural approaches (sin, noise, springs) are usually more flexible.
Procedural animation: no keyframes, just math
This is the creative coding sweet spot. Instead of predefining keyframes, you compute every property from mathematical functions every frame. A walking cycle from sin waves, tentacles that follow noise paths, organic pulsing from layered oscillations. The motion is generated, not authored.
// a row of pillars that wave like kelp
const pillarCount = 15;
const pillars = [];
for (let i = 0; i < pillarCount; i++) {
const height = 2 + Math.random() * 2;
const geo = new THREE.CylinderGeometry(0.06, 0.08, height, 8);
// shift pivot to bottom
geo.translate(0, height / 2, 0);
const mat = new THREE.MeshStandardMaterial({
color: new THREE.Color().setHSL(0.3 + Math.random() * 0.1, 0.5, 0.3),
roughness: 0.7
});
const mesh = new THREE.Mesh(geo, mat);
mesh.position.x = (i - pillarCount / 2) * 0.5;
mesh.position.z = (Math.random() - 0.5) * 2;
scene.add(mesh);
pillars.push({
mesh,
height,
phase: Math.random() * Math.PI * 2,
speed: 0.8 + Math.random() * 0.6,
amplitude: 0.15 + Math.random() * 0.1
});
}
function animate() {
requestAnimationFrame(animate);
const t = clock.getElapsedTime();
for (const p of pillars) {
// sway from the base using rotation
p.mesh.rotation.z = Math.sin(t * p.speed + p.phase) * p.amplitude;
p.mesh.rotation.x = Math.sin(t * p.speed * 0.7 + p.phase + 1) * p.amplitude * 0.5;
}
controls.update();
renderer.render(scene, camera);
}
Each pillar has its own phase offset and speed, so they sway independently but with a coherent wave-like pattern. The geometry is translated so the pivot point is at the bottom (base of the pillar), which means rotating it makes it swing from the root -- like a plant swaying in current.
The beauty of procedural animation: change a few parameters and the entire feel shifts. Lower the speed for lazy deep-water kelp. Raise the amplitude for a storm. Add a second sin layer at a different frequency for more complex motion. Multiply amplitude by (1 + noise(t * 0.1 + p.phase) * 0.5) for organic variation in sway strength. No keyframes to edit, no timeline to manage -- just functions and their parameters.
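Those variations can be folded into a single function. A sketch -- the `sway()` helper is hypothetical, and the "noise" is just two incommensurate sines, the cheap trick from ep012 -- that would drop into the pillar loop in place of the bare sin:

```javascript
// layered sway: base wave + faster harmonic, with a slow strength drift
function sway(t, phase, speed, amplitude) {
  // gust drifts between 0.5 and 1.5, modulating overall sway strength
  const gust = 1 + Math.sin(t * 0.13 + phase) * Math.sin(t * 0.07) * 0.5;
  const base = Math.sin(t * speed + phase);
  const harmonic = Math.sin(t * speed * 2.3 + phase * 1.7) * 0.3;
  return (base + harmonic) * amplitude * gust;
}

// in the loop: p.mesh.rotation.z = sway(t, p.phase, p.speed, p.amplitude);
```

The 2.3 frequency ratio is deliberately non-integer so the harmonic never locks into a repeating pattern with the base wave -- the motion reads as irregular even though it's fully deterministic.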
Vertex animation in shaders
When you need to animate thousands or millions of vertices, doing it in JavaScript is too slow. The vertex shader runs on the GPU, processes every vertex in parallel, and can handle dense meshes effortlessly. We touched on this in ep064 (vertex displacement) -- now we're going deeper with time-driven deformation.
const waveVertShader = `
uniform float uTime;
varying vec3 vNormal;
varying vec3 vPos;
void main() {
vec3 p = position;
// wave along X axis
float wave = sin(p.x * 3.0 + uTime * 2.0) * 0.15;
wave += sin(p.x * 7.0 - uTime * 3.0) * 0.05;
// wave along Z axis (cross-wave)
wave += sin(p.z * 4.0 + uTime * 1.5) * 0.1;
p.y += wave;
vNormal = normalize(normalMatrix * normal);
vPos = (modelViewMatrix * vec4(p, 1.0)).xyz;
gl_Position = projectionMatrix * modelViewMatrix * vec4(p, 1.0);
}
`;
const waveFragShader = `
varying vec3 vNormal;
varying vec3 vPos;
void main() {
vec3 light = normalize(vec3(2.0, 4.0, 3.0));
float diff = max(dot(vNormal, light), 0.0);
vec3 viewDir = normalize(-vPos);
float fresnel = pow(1.0 - max(dot(vNormal, viewDir), 0.0), 3.0);
vec3 deep = vec3(0.02, 0.08, 0.18);
vec3 surface = vec3(0.1, 0.35, 0.5);
vec3 col = mix(deep, surface, diff * 0.7 + 0.3);
col += fresnel * vec3(0.3, 0.5, 0.7) * 0.4;
gl_FragColor = vec4(col, 1.0);
}
`;
const waterPlane = new THREE.Mesh(
new THREE.PlaneGeometry(10, 10, 200, 200),
new THREE.ShaderMaterial({
vertexShader: waveVertShader,
fragmentShader: waveFragShader,
uniforms: { uTime: { value: 0 } },
side: THREE.DoubleSide
})
);
waterPlane.rotation.x = -Math.PI / 2;
scene.add(waterPlane);
200x200 segments is about 40,000 vertices (a 201x201 grid of points), all displaced by layered sine waves every frame. One thing the snippet doesn't show: uTime has to advance each frame -- set waterPlane.material.uniforms.uTime.value = clock.getElapsedTime() in the animation loop, or the surface sits frozen. On the GPU this runs at a full 60fps without breaking a sweat. Try doing that in a JavaScript loop -- you'd be updating 120,000 floats per frame and killing your frame rate. The vertex shader does it in microseconds because every vertex is processed in parallel.
The wave equation is simple: multiple sine waves at different frequencies and directions, summed together. sin(x * 3 + t * 2) is a wave moving along X. sin(z * 4 + t * 1.5) is a wave moving along Z. Add them and you get a cross-hatched wave pattern that looks like an ocean surface. Layer more frequencies (like noise octaves from ep012) for more realistic waves.
For even more complex vertex animation, use noise in the shader:
const noiseVertShader = `
uniform float uTime;
varying vec3 vNormal;
varying float vHeight;
float hash(vec3 p) {
p = fract(p * vec3(443.897, 441.423, 437.195));
p += dot(p, p.yzx + 19.19);
return fract((p.x + p.y) * p.z);
}
float noise3D(vec3 p) {
vec3 i = floor(p);
vec3 f = fract(p);
f = f * f * (3.0 - 2.0 * f);
float a = hash(i);
float b = hash(i + vec3(1,0,0));
float c = hash(i + vec3(0,1,0));
float d = hash(i + vec3(1,1,0));
float e = hash(i + vec3(0,0,1));
float f2 = hash(i + vec3(1,0,1));
float g = hash(i + vec3(0,1,1));
float h = hash(i + vec3(1,1,1));
return mix(mix(mix(a,b,f.x), mix(c,d,f.x), f.y),
mix(mix(e,f2,f.x), mix(g,h,f.x), f.y), f.z);
}
void main() {
vec3 p = position;
// noise-based displacement with time evolution
float n = 0.0;
n += noise3D(vec3(p.xz * 0.5, uTime * 0.2)) * 0.5;
n += noise3D(vec3(p.xz * 1.2, uTime * 0.3 + 10.0)) * 0.25;
n += noise3D(vec3(p.xz * 2.5, uTime * 0.5 + 20.0)) * 0.12;
p.y += n;
vHeight = n;
vNormal = normalize(normalMatrix * normal);
gl_Position = projectionMatrix * modelViewMatrix * vec4(p, 1.0);
}
`;
Now the terrain deforms procedurally on the GPU, evolving over time. The noise field slides through the plane surface, creating hills that rise and valleys that sink, all animated. This is basically the animated terrain from ep063 but orders of magnitude faster because the GPU handles it.
Animation composition: layering motion
Real organic motion isn't one oscillation -- it's many layered on top of each other. A fish swimming has body undulation (big sine wave) plus tail flutter (fast small sine wave) plus subtle drift (noise) plus response to obstacles (spring). Composition means adding these layers together:
// a chain of segments that compose multiple animation sources
const segmentCount = 20;
const segments = [];
const segGroup = new THREE.Group();
scene.add(segGroup);
for (let i = 0; i < segmentCount; i++) {
const seg = new THREE.Mesh(
new THREE.SphereGeometry(0.12 - i * 0.004, 8, 8),
new THREE.MeshStandardMaterial({
color: new THREE.Color().setHSL(0.55 + i * 0.01, 0.6, 0.4)
})
);
segGroup.add(seg);
segments.push(seg);
}
function animate() {
requestAnimationFrame(animate);
const t = clock.getElapsedTime();
for (let i = 0; i < segmentCount; i++) {
const phase = i * 0.3;
const seg = segments[i];
// layer 1: primary body wave (slow, big)
const bodyWave = Math.sin(t * 2.0 + phase) * 0.3;
// layer 2: secondary flutter (fast, small)
const flutter = Math.sin(t * 8.0 + phase * 2) * 0.03;
// layer 3: forward motion
const forward = t * 0.5;
// layer 4: gentle vertical drift (noise-like)
const drift = Math.sin(t * 0.3 + i * 0.5) * Math.sin(t * 0.7 + i * 0.3) * 0.15;
seg.position.x = -i * 0.15 + forward;
seg.position.y = drift;
seg.position.z = bodyWave + flutter;
}
// keep group centered
segGroup.position.x = -t * 0.5;
controls.update();
renderer.render(scene, camera);
}
Each segment's position is the SUM of four independent animation layers. The body wave creates the big S-curve undulation. The flutter adds high-frequency vibration (like a tail fin). The forward motion moves the whole chain. The drift adds subtle vertical bobbing. Together they produce movement that looks organic and alive -- no single sine wave could produce this, but layering four simple ones gets remarkably close to real fish motion.
This composition approach scales to any complexity. Add a noise layer for random perturbation. Add a spring layer that responds to mouse clicks. Add a fear response that pulls away from a predator. Each layer is simple and independent. The sum is complex and lifelike.
Creative exercise: underwater scene
Alright, time to bring it all together. A procedurally animated underwater scene: waving kelp, drifting particles, a camera on a slow orbit, and animated wave lighting on the ground:
import * as THREE from 'three';
const scene = new THREE.Scene();
scene.background = new THREE.Color(0x061622);
scene.fog = new THREE.FogExp2(0x061622, 0.08);
const camera = new THREE.PerspectiveCamera(
60, window.innerWidth / window.innerHeight, 0.1, 80
);
const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(window.innerWidth, window.innerHeight);
renderer.setPixelRatio(window.devicePixelRatio);
document.body.appendChild(renderer.domElement);
// lighting: blue-green ambient + caustic-like moving light
const ambient = new THREE.AmbientLight(0x0a2040, 2.0);
scene.add(ambient);
const causticLight = new THREE.PointLight(0x44aacc, 3.0, 20);
causticLight.position.set(0, 8, 0);
scene.add(causticLight);
const clock = new THREE.Clock();
// kelp: cylinders that sway procedurally
const kelpGroup = new THREE.Group();
scene.add(kelpGroup);
const kelps = [];
for (let i = 0; i < 30; i++) {
const segments = 6 + Math.floor(Math.random() * 4);
const kelpParts = [];
const baseX = (Math.random() - 0.5) * 12;
const baseZ = (Math.random() - 0.5) * 12;
for (let s = 0; s < segments; s++) {
const radius = 0.04 - s * 0.003;
const seg = new THREE.Mesh(
new THREE.CylinderGeometry(
Math.max(radius, 0.01),
Math.max(radius + 0.005, 0.015),
0.5, 6
),
new THREE.MeshStandardMaterial({
color: new THREE.Color().setHSL(
0.28 + Math.random() * 0.08,
0.5 + Math.random() * 0.2,
0.15 + Math.random() * 0.1
),
roughness: 0.8
})
);
seg.position.set(baseX, s * 0.45, baseZ);
kelpGroup.add(seg);
kelpParts.push(seg);
}
kelps.push({
parts: kelpParts,
baseX,
baseZ,
phase: Math.random() * Math.PI * 2,
speed: 0.4 + Math.random() * 0.3,
amplitude: 0.08 + Math.random() * 0.06
});
}
// floating particles (plankton / debris)
const particleCount = 3000;
const pPositions = new Float32Array(particleCount * 3);
const pSizes = new Float32Array(particleCount);
for (let i = 0; i < particleCount; i++) {
pPositions[i * 3] = (Math.random() - 0.5) * 15;
pPositions[i * 3 + 1] = Math.random() * 8;
pPositions[i * 3 + 2] = (Math.random() - 0.5) * 15;
pSizes[i] = 1.0 + Math.random() * 3.0;
}
const pGeo = new THREE.BufferGeometry();
pGeo.setAttribute('position', new THREE.BufferAttribute(pPositions, 3));
pGeo.setAttribute('aSize', new THREE.BufferAttribute(pSizes, 1));
const pMat = new THREE.ShaderMaterial({
vertexShader: `
attribute float aSize;
varying float vAlpha;
void main() {
vec4 mvPos = modelViewMatrix * vec4(position, 1.0);
gl_PointSize = aSize * (100.0 / -mvPos.z);
gl_Position = projectionMatrix * mvPos;
vAlpha = 0.3 + 0.4 * (1.0 - clamp(-mvPos.z / 15.0, 0.0, 1.0));
}
`,
fragmentShader: `
varying float vAlpha;
void main() {
float d = length(gl_PointCoord - 0.5);
if (d > 0.5) discard;
float alpha = smoothstep(0.5, 0.1, d) * vAlpha;
gl_FragColor = vec4(0.6, 0.8, 0.7, alpha);
}
`,
transparent: true,
depthWrite: false,
blending: THREE.AdditiveBlending
});
scene.add(new THREE.Points(pGeo, pMat));
// sea floor
const floorGeo = new THREE.PlaneGeometry(20, 20, 1, 1);
const floor = new THREE.Mesh(
floorGeo,
new THREE.MeshStandardMaterial({
color: 0x1a2a20,
roughness: 0.95
})
);
floor.rotation.x = -Math.PI / 2;
floor.position.y = -0.1;
scene.add(floor);
// camera path: slow circular orbit
const camRadius = 6;
const camHeight = 3;
function animate() {
requestAnimationFrame(animate);
const t = clock.getElapsedTime();
// camera orbits slowly
camera.position.x = Math.cos(t * 0.08) * camRadius;
camera.position.z = Math.sin(t * 0.08) * camRadius;
camera.position.y = camHeight + Math.sin(t * 0.15) * 0.5;
camera.lookAt(0, 2, 0);
// animate kelp
for (const k of kelps) {
for (let s = 0; s < k.parts.length; s++) {
const seg = k.parts[s];
const heightFactor = s / k.parts.length;
// sway increases with height (rooted at the base)
const sway = Math.sin(t * k.speed + k.phase + s * 0.4)
* k.amplitude * heightFactor;
const swayCross = Math.sin(t * k.speed * 0.6 + k.phase + s * 0.3 + 2)
* k.amplitude * 0.4 * heightFactor;
seg.position.x = k.baseX + sway;
seg.position.z = k.baseZ + swayCross;
seg.rotation.z = sway * 1.5;
seg.rotation.x = swayCross * 1.2;
}
}
// drift particles slowly
const pos = pGeo.attributes.position.array;
for (let i = 0; i < particleCount; i++) {
const i3 = i * 3;
pos[i3] += Math.sin(t * 0.1 + i * 0.01) * 0.001;
pos[i3 + 1] += 0.002 + Math.sin(t * 0.3 + i * 0.05) * 0.001;
pos[i3 + 2] += Math.cos(t * 0.1 + i * 0.01) * 0.001;
// wrap around
if (pos[i3 + 1] > 8) pos[i3 + 1] = 0;
if (pos[i3] > 7.5) pos[i3] = -7.5;
if (pos[i3] < -7.5) pos[i3] = 7.5;
if (pos[i3 + 2] > 7.5) pos[i3 + 2] = -7.5;
if (pos[i3 + 2] < -7.5) pos[i3 + 2] = 7.5;
}
pGeo.attributes.position.needsUpdate = true;
// caustic light movement
causticLight.position.x = Math.sin(t * 0.3) * 3;
causticLight.position.z = Math.cos(t * 0.4) * 3;
causticLight.intensity = 2.5 + Math.sin(t * 2.0) * 0.5;
renderer.render(scene, camera);
}
animate();
Everything in this scene is procedural and animated. The kelp sways using layered sine waves with per-segment increase (higher parts move more). The particles drift upward in a gentle current and wrap around when they leave the volume. The caustic light drifts and pulses, washing moving light across the floor (shadow maps aren't enabled here, so it's shifting illumination rather than true shadows). The camera orbits slowly, giving you a full view of the scene.
The fog (FogExp2) adds depth -- distant kelp and particles fade into the dark blue, creating a sense of underwater atmosphere. Combined with the additive blending on the particles (which makes them glow slightly against the dark background), it reads as an aquatic scene from pure code. No textures, no models, no assets -- just math and Three.js primitives.
Performance notes for animation
A few things to keep in mind:
JavaScript loop vs GPU: any per-vertex animation should be in a shader if possible. JavaScript loops over 10,000+ elements per frame will drop frames on weaker hardware. The vertex shader does the same work in a fraction of the time because it runs in parallel across thousands of GPU cores.
needsUpdate flags: when you modify BufferGeometry attributes in JavaScript (like the particle positions above), you MUST set geometry.attributes.position.needsUpdate = true. Without it, Three.js doesn't re-upload the data to the GPU and the mesh appears frozen. Uniform updates on ShaderMaterial do NOT need this flag -- they're sent automatically.
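The pattern looks like this -- sketched with a plain object standing in for a BufferAttribute so it runs anywhere; on a real geometry the flag lives at geometry.attributes.position.needsUpdate:

```javascript
// Stand-in for a THREE.BufferAttribute: a typed array plus the flag
// Three.js checks before re-uploading the buffer to the GPU.
const position = {
  array: new Float32Array(100 * 3), // 100 particles, xyz each
  count: 100,
  needsUpdate: false,
};

function driftParticles(attr, t) {
  const pos = attr.array;
  for (let i = 0; i < attr.count; i++) {
    // gentle upward drift, like the underwater particles above
    pos[i * 3 + 1] += 0.002 + Math.sin(t * 0.3 + i * 0.05) * 0.001;
  }
  attr.needsUpdate = true; // forgetting this leaves the mesh frozen on screen
}

driftParticles(position, 0);
console.log(position.needsUpdate); // true
```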
Object count matters: 30 kelp stems with 8 segments each = 240 meshes. Each mesh is a separate draw call. At 240, this is fine. At 2400, you'd start noticing frame drops. For massive counts, use InstancedMesh (ep065) instead of individual Mesh objects.
getDelta() vs getElapsedTime(): call getDelta() once per frame at the top of your animation loop and store it. Calling it multiple times per frame gives wrong values (it resets the timer each call). getElapsedTime() is safe to call multiple times -- it returns the same value within a frame.
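The frame-rate dependence that delta timing fixes is easy to verify in plain JavaScript (no Three.js needed): simulate one second of motion at two frame rates and compare.

```javascript
// Simulate `seconds` of motion at a given frame rate, applying `step`
// once per frame. dt is the per-frame time slice in seconds.
function simulate(fps, seconds, step) {
  let x = 0;
  const dt = 1 / fps;
  for (let i = 0; i < fps * seconds; i++) x = step(x, dt);
  return x;
}

const speed = 0.6; // units per second
const timeBased = (x, dt) => x + speed * dt; // frame-rate independent
const perFrame = (x) => x + 0.01;            // depends on frame rate

console.log(simulate(60, 1, timeBased).toFixed(3)); // 0.600
console.log(simulate(30, 1, timeBased).toFixed(3)); // 0.600 -- same distance
console.log(simulate(60, 1, perFrame).toFixed(3));  // 0.600
console.log(simulate(30, 1, perFrame).toFixed(3));  // 0.300 -- half speed!
```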
What comes next
We've covered the animation toolkit: frame-independent timing, transform animation, quaternion slerp, camera paths, morph targets, spring physics, the keyframe system, procedural animation from math, shader vertex animation, and composition of multiple animation layers. These techniques apply to every 3D creative coding project from here on.
Next episode we'll look at creative lighting and shadows -- how light itself becomes a creative tool. Colored lights, volumetric beams, baked and dynamic shadows, light as a storytelling element. The meshes and animations we've built need light to come alive visually, and Three.js gives us a lot of control over how that light behaves.
What it comes down to...
- THREE.Clock provides frame-independent timing. getDelta() gives seconds since the last frame (multiply by velocity for consistent speed). getElapsedTime() gives total seconds (use for absolute oscillations like sin(elapsed)). Never animate by incrementing per frame -- always use time-based values
- Transform animation: set .position, .rotation, .scale each frame using sin/cos for oscillation, lerp for smooth transitions, parametric equations for orbits. Same trig and easing from ep013 and ep016, now with three axes
- Quaternions avoid gimbal lock. slerpQuaternions() interpolates between two orientations along the shortest path -- the rotation equivalent of lerp. Use Euler angles for simple continuous rotation, quaternions for smooth transitions between specific orientations
- Camera paths with CatmullRomCurve3: define control points, call getPointAt(t) to position the camera along a smooth spline. Combine with lookAt() for fly-through cinematics. Add an inactivity timer to blend between automated and user-controlled camera
- Morph targets: define multiple vertex position sets (same topology), interpolate with morphTargetInfluences[n] from 0 to 1. Sphere to cube, neutral to smile, calm to storm. Morph arrays store OFFSETS from the base position, not absolutes. The GPU handles the blending
- Spring physics (stiffness + damping) make objects bounce and overshoot toward targets. Same concept as ep018 springs, applied to 3D position, rotation, or any property. High stiffness = snappy, low stiffness = floaty, low damping = bouncy oscillation
- AnimationClip + AnimationMixer: Three.js's built-in keyframe system. Define values at specific times, the mixer interpolates. Good for choreographed sequences and glTF playback. For generative work, procedural math is usually more flexible
- Vertex shader animation runs per-vertex on the GPU in parallel. Layered sine waves for ocean surfaces, noise displacement for organic terrain, all at 60fps on meshes with tens of thousands of vertices. Move anything computationally heavy from JavaScript to the vertex shader
- Animation composition: sum multiple independent layers (body wave + flutter + drift + spring response). Each layer is simple, the combined motion is complex and organic. This is the procedural animation mindset -- don't author motion, generate it from layered functions
Cheers! Thanks for reading.