Learn Creative Coding (#53) - Creative Reaction-Diffusion

Last episode we built the Gray-Scott reaction-diffusion model from scratch. Two chemicals, A and B, diffusing at different rates, reacting with each other, producing spots, stripes, spirals, and coral growth from pure math. We got the simulation running, explored the parameter space, added interactive seeding and parameter animation. The system works and it looks great.
But we only scratched the surface of what you can DO with it creatively. Everything in episode 52 used uniform parameters -- the same feed rate and kill rate across the entire grid. The same diffusion constants everywhere. The same chemistry, top to bottom, left to right. That's fine for understanding the model, but for making art? The real power comes when you break uniformity.
Today we make reaction-diffusion into a creative tool. Spatially varying parameters. Image-driven chemistry. Text dissolving into biological patterns. Anisotropic diffusion that creates wood-grain and muscle-fiber textures. Multiple chemical species. And a GPU version that runs at full resolution in real time. Same underlying math as episode 52 -- we're just getting creative with HOW we apply it.
Quick recap: the core we're building on
From episode 52, the Gray-Scott update for each cell:
// per-cell update (from ep052)
const lapA = laplacian(gridA, x, y, W, H);
const lapB = laplacian(gridB, x, y, W, H);
const reaction = a * b * b;
nextA[idx] = a + (Da * lapA - reaction + f * (1.0 - a)) * dt;
nextB[idx] = b + (Db * lapB + reaction - (k + f) * b) * dt;
Da = 1.0, Db = 0.5. The feed rate f and kill rate k determine the pattern. The laplacian measures the difference between a cell and its neighbors (the 3x3 weighted kernel from last time). That's our foundation. Everything today modifies some part of this equation -- the parameters, the diffusion, the seeding, or the rendering.
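For reference, here's a minimal laplacian in the shape the code below assumes -- toroidal wrap, weight 0.2 for direct neighbors and 0.05 for diagonals, matching the kernel weights in the GPU shader later. If your ep052 version differs slightly, use that one.
function laplacian(grid, x, y, W, H) {
  const xm = (x - 1 + W) % W, xp = (x + 1) % W; // wrapped neighbor columns
  const ym = (y - 1 + H) % H, yp = (y + 1) % H; // wrapped neighbor rows
  return (grid[ym * W + x] + grid[yp * W + x]
        + grid[y * W + xm] + grid[y * W + xp]) * 0.2
       + (grid[ym * W + xm] + grid[ym * W + xp]
        + grid[yp * W + xm] + grid[yp * W + xp]) * 0.05
       - grid[y * W + x];
}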
Noise-driven parameter fields
In episode 52 we did a simple linear gradient for spatial variation -- left side had different f/k than the right side. That was a proof of concept. The real technique is to use a noise field (from episode 12, our Perlin noise implementation) to set f and k at every cell.
// using a simple value noise (or use the perlin from ep012)
function valueNoise(x, y, scale) {
  const ix = Math.floor(x / scale);
  const iy = Math.floor(y / scale);
  const fx = (x / scale) - ix;
  const fy = (y / scale) - iy;
  // hash function for pseudo-random grid values
  // (Math.imul keeps the multiplications in 32-bit integer range;
  // a plain * would lose low bits once the intermediate exceeds 2^53)
  function hash(px, py) {
    let h = (Math.imul(px, 374761393) + Math.imul(py, 668265263)) | 0;
    h = Math.imul(h ^ (h >>> 13), 1274126177);
    return (h & 0x7fffffff) / 0x7fffffff;
  }
  const tl = hash(ix, iy);
  const tr = hash(ix + 1, iy);
  const bl = hash(ix, iy + 1);
  const br = hash(ix + 1, iy + 1);
  // bilinear interpolation with smoothstep
  const sx = fx * fx * (3 - 2 * fx);
  const sy = fy * fy * (3 - 2 * fy);
  const top = tl + (tr - tl) * sx;
  const bot = bl + (br - bl) * sx;
  return top + (bot - top) * sy;
}
// precompute parameter maps
const fMap = new Float32Array(W * H);
const kMap = new Float32Array(W * H);
for (let y = 0; y < H; y++) {
  for (let x = 0; x < W; x++) {
    const n1 = valueNoise(x, y, 60);
    const n2 = valueNoise(x + 500, y + 500, 40);
    fMap[y * W + x] = 0.020 + n1 * 0.025; // f: 0.020 to 0.045
    kMap[y * W + x] = 0.055 + n2 * 0.012; // k: 0.055 to 0.067
  }
}
Two separate noise fields for f and k, with different offsets and scales so they don't correlate. The ranges are chosen to span several pattern regimes -- from stripe-forming (low f) to spot-forming (high f) to coral growth (high f, moderate k) to dead zones (k too high, B can't survive).
Now use these in the update loop:
function stepWithNoise() {
  for (let y = 0; y < H; y++) {
    for (let x = 0; x < W; x++) {
      const idx = y * W + x;
      const a = gridA[idx];
      const b = gridB[idx];
      const localF = fMap[idx];
      const localK = kMap[idx];
      const lapA = laplacian(gridA, x, y, W, H);
      const lapB = laplacian(gridB, x, y, W, H);
      const reaction = a * b * b;
      nextA[idx] = a + (Da * lapA - reaction + localF * (1.0 - a)) * dt;
      nextB[idx] = b + (Db * lapB + reaction - (localK + localF) * b) * dt;
      nextA[idx] = Math.max(0, Math.min(1, nextA[idx]));
      nextB[idx] = Math.max(0, Math.min(1, nextB[idx]));
    }
  }
  const tmpA = gridA; const tmpB = gridB;
  gridA = nextA; gridB = nextB;
  nextA = tmpA; nextB = tmpB;
}
The result is a landscape of different pattern types. Islands of spots surrounded by oceans of stripes. Regions where patterns dissolve into uniform substrate because k is too high locally. Boundaries where spots elongate into stripes as f changes gradually. The noise field creates organic transitions between regimes that look like satellite photos of different biomes. No hard edges, no sharp transitions -- just smooth gradients between chemistries producing smooth gradients between pattern types.
The noise scale matters a lot. Large scale (80-100) produces big regions of uniform pattern with wide transition zones. Small scale (20-30) produces a chaotic patchwork where patterns barely have room to form before the parameters change. Medium scale (40-60) is the sweet spot for most creative work -- large enough for recognizable patterns, small enough for visual variety.
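You can also layer scales. A sketch, inside the map-building loop from above: blend a coarse field with a fine one so big regions keep some internal variety (the 0.7/0.3 split is just a starting point):
const n1 = valueNoise(x, y, 80) * 0.7   // coarse: big regions
         + valueNoise(x, y, 25) * 0.3;  // fine: local variety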
Image-driven parameters: photographs as chemistry
This is where it gets properly wild. Take a photograph, read its pixel brightness, and use that brightness to set the reaction-diffusion parameters. The pattern grows differently in bright areas vs dark areas, and the result is a biological interpretation of the image.
First, load an image and extract brightness (we covered pixel reading in episode 10):
const img = new Image();
img.crossOrigin = 'anonymous';
img.src = 'your-image-url.jpg';
img.onload = function() {
  // draw image to offscreen canvas to read pixels
  const offscreen = document.createElement('canvas');
  offscreen.width = W;
  offscreen.height = H;
  const offCtx = offscreen.getContext('2d');
  offCtx.drawImage(img, 0, 0, W, H);
  const imageData = offCtx.getImageData(0, 0, W, H);
  // build parameter maps from brightness
  for (let i = 0; i < W * H; i++) {
    const r = imageData.data[i * 4];
    const g = imageData.data[i * 4 + 1];
    const b = imageData.data[i * 4 + 2];
    const brightness = (r * 0.299 + g * 0.587 + b * 0.114) / 255;
    // bright areas: spots (high f, high k)
    // dark areas: stripes or dead (low f, moderate k)
    fMap[i] = 0.020 + brightness * 0.025;
    kMap[i] = 0.058 + brightness * 0.008;
  }
  // seed B across the entire grid randomly
  for (let i = 0; i < W * H; i++) {
    if (Math.random() < 0.05) {
      gridB[i] = 1.0;
      gridA[i] = 0.0;
    }
  }
  startSimulation();
};
The brightness-to-parameter mapping is the creative decision. In this version, bright image areas produce spot-forming chemistry and dark areas produce stripe-forming or dead chemistry. The face in a portrait emerges as a region of active spots. The dark background stays quiet -- stripes at most, or no pattern at all if k is pushed high enough.
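Flipping that decision is one line each -- dark areas become the active chemistry instead (same ranges as above, mirrored):
// invert: dark = spots, bright = quiet
fMap[i] = 0.045 - brightness * 0.025;
kMap[i] = 0.066 - brightness * 0.008;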
The seeding strategy changes too. Instead of one centered seed, we scatter random seeds across the entire grid. This way the pattern has starting material everywhere, and the parameter landscape determines where it survives and what type it becomes. Seeds in dead-zone regions just decay. Seeds in active regions grow into the locally-determined pattern type.
After a few hundred iterations, the image's structure emerges through the lens of reaction-diffusion. Not a reproduction of the photograph -- a reinterpretation. A face becomes an island of organic spots. A landscape becomes a terrain of competing pattern types. The detail isn't pixel-level; it's structure-level. You recognize the shapes and the composition, but every surface is alive with chemical texture.
Text as parameter map
Same technique, different source. Render text to a hidden canvas, use the text shape as a parameter mask:
function createTextMask(text, fontSize) {
  const offscreen = document.createElement('canvas');
  offscreen.width = W;
  offscreen.height = H;
  const offCtx = offscreen.getContext('2d');
  offCtx.fillStyle = 'black';
  offCtx.fillRect(0, 0, W, H);
  offCtx.fillStyle = 'white';
  offCtx.font = fontSize + 'px monospace';
  offCtx.textAlign = 'center';
  offCtx.textBaseline = 'middle';
  offCtx.fillText(text, W / 2, H / 2);
  const textData = offCtx.getImageData(0, 0, W, H);
  for (let i = 0; i < W * H; i++) {
    const isText = textData.data[i * 4] > 128;
    if (isText) {
      // inside text: active pattern
      fMap[i] = 0.035;
      kMap[i] = 0.065;
    } else {
      // outside text: different pattern or dead
      fMap[i] = 0.025;
      kMap[i] = 0.060;
    }
  }
}
createTextMask('TURING', 80);
Inside the letterforms: spot-forming parameters. Outside: stripe-forming parameters. The text emerges as islands of spots in a sea of worms. Or flip it -- make the text the dead zone and the background the active zone. The letters appear as voids in a living field, outlines maintained by the parameter boundary.
The transition at the text boundary is sharp here (binary: is-text or not-text). For smoother results, blur the text mask before mapping to parameters. A Gaussian blur on the text canvas produces a soft falloff at the edges, which translates to a gradual parameter transition. The patterns morph smoothly from one regime to another instead of having a hard edge.
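A minimal sketch of that blur using the Canvas2D filter property (this assumes createTextMask hands back its offscreen canvas instead of reading it directly; the 6px-ish radius is a creative choice, and a hand-rolled Gaussian kernel works just as well):
function blurMask(maskCanvas, radiusPx) {
  const blurred = document.createElement('canvas');
  blurred.width = maskCanvas.width;
  blurred.height = maskCanvas.height;
  const bctx = blurred.getContext('2d');
  bctx.filter = 'blur(' + radiusPx + 'px)'; // built-in canvas blur
  bctx.drawImage(maskCanvas, 0, 0);
  return bctx.getImageData(0, 0, blurred.width, blurred.height);
}
// then lerp the parameters by the blurred mask value instead of branching:
// const t = blurredData.data[i * 4] / 255;
// fMap[i] = 0.025 + t * 0.010;
// kMap[i] = 0.060 + t * 0.005;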
You can animate the text too. Change the text content every 500 frames, recompute the mask, and watch the old text dissolve as the new text crystallizes. The chemical field needs time to re-equilibrate, so for a while you see ghosts of the previous word being overwritten by the new one. It's beautiful -- biological typography that grows and decays.
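A minimal sketch of the swap (the word list and the 500-frame interval are arbitrary, and render() stands in for whatever draw routine you're using):
const words = ['TURING', 'GRAY', 'SCOTT'];
let frame = 0;
function loop() {
  stepWithNoise();
  if (++frame % 500 === 0) {
    // recompute the mask; the old pattern dissolves into the new one
    createTextMask(words[(frame / 500) % words.length], 80);
  }
  render();
  requestAnimationFrame(loop);
}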
Anisotropic diffusion: directional patterns
In standard reaction-diffusion, chemicals diffuse equally in all directions. The laplacian kernel treats all neighbor directions the same (modulo the diagonal weight). This produces isotropic patterns -- spots are round, stripes curve freely in any direction.
Anisotropic diffusion changes that. Instead of equal diffusion everywhere, we make diffusion stronger in one direction than another. The direction can come from a noise field, an image, or a mathematical function.
function anisotropicLaplacian(grid, x, y, angle, strength) {
  const c = grid[y * W + x];
  const t = grid[((y - 1 + H) % H) * W + x];
  const b = grid[((y + 1) % H) * W + x];
  const l = grid[y * W + ((x - 1 + W) % W)];
  const r = grid[y * W + ((x + 1) % W)];
  // direction vector from angle
  const dx = Math.cos(angle);
  const dy = Math.sin(angle);
  // weights along and perpendicular to the direction
  const along = 0.3 + strength * 0.2; // stronger along the flow
  const perp = 0.3 - strength * 0.15; // weaker perpendicular
  // project neighbor directions onto the flow direction
  // (left/right share a weight, as do top/bottom -- the kernel is symmetric)
  const wR = perp + (along - perp) * Math.abs(dx);
  const wL = wR;
  const wT = perp + (along - perp) * Math.abs(dy);
  const wB = wT;
  // normalize so weights sum to 1 (before subtracting center)
  const total = wR + wL + wT + wB;
  return (t * wT + b * wB + l * wL + r * wR) / total - c;
}
The angle parameter sets the preferred diffusion direction at each cell. The strength parameter controls how anisotropic the diffusion is (0 = isotropic, 1 = strongly directional). Chemicals spread faster along the flow direction and slower perpendicular to it.
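Inside the update loop, swap the isotropic laplacian calls for the directional one (strength 0.8 here is just a starting point):
const ang = angleMap[idx];
const lapA = anisotropicLaplacian(gridA, x, y, ang, 0.8);
const lapB = anisotropicLaplacian(gridB, x, y, ang, 0.8);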
Map the angle to a noise field:
const angleMap = new Float32Array(W * H);
for (let y = 0; y < H; y++) {
  for (let x = 0; x < W; x++) {
    angleMap[y * W + x] = valueNoise(x, y, 50) * Math.PI * 2;
  }
}
Now stripes align with the noise flow. Instead of curving randomly, the patterns follow the directional field like wood grain following the tree's growth rings. The result looks like muscle fibers, flowing water, or wind-sculpted sand dunes. Round spots become elongated ellipses stretched along the flow direction. Stripes that would normally meander straighten out and follow the grain.
For an even stronger effect, use the Perlin noise curl field (we built flow fields in episode 11 with particles -- same noise, different application). The curl of a 2D noise field produces a divergence-free flow that creates beautiful swirling patterns. Use that as the anisotropy direction and the reaction-diffusion stripes become swirling, flowing streams.
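A sketch of that idea using the valueNoise from earlier instead of Perlin (the curl of a 2D scalar field n is the vector (∂n/∂y, -∂n/∂x), which is divergence-free; eps is the finite-difference step):
function curlAngle(x, y, scale) {
  const eps = 1.0;
  // finite-difference partial derivatives of the noise field
  const dndx = (valueNoise(x + eps, y, scale) - valueNoise(x - eps, y, scale)) / (2 * eps);
  const dndy = (valueNoise(x, y + eps, scale) - valueNoise(x, y - eps, scale)) / (2 * eps);
  // curl vector is (dndy, -dndx); return its direction as an angle
  return Math.atan2(-dndx, dndy);
}
// angleMap[y * W + x] = curlAngle(x, y, 50);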
Multiple chemical species: three-way reactions
The standard Gray-Scott model has two chemicals. What if we add a third?
let gridC = new Float32Array(W * H).fill(0);
let nextC = new Float32Array(W * H);
const Dc = 0.3; // C diffuses slowly
function stepThreeSpecies() {
  for (let y = 0; y < H; y++) {
    for (let x = 0; x < W; x++) {
      const idx = y * W + x;
      const a = gridA[idx];
      const b = gridB[idx];
      const c = gridC[idx];
      const lapA = laplacian(gridA, x, y, W, H);
      const lapB = laplacian(gridB, x, y, W, H);
      const lapC = laplacian(gridC, x, y, W, H);
      // A + 2B -> 3B (standard reaction)
      const reactionAB = a * b * b;
      // B + 2C -> 3C (second reaction)
      const reactionBC = b * c * c;
      nextA[idx] = a + (Da * lapA - reactionAB + f * (1.0 - a)) * dt;
      nextB[idx] = b + (Db * lapB + reactionAB - reactionBC - (k + f) * b) * dt;
      nextC[idx] = c + (Dc * lapC + reactionBC - (k * 0.8 + f) * c) * dt;
      nextA[idx] = Math.max(0, Math.min(1, nextA[idx]));
      nextB[idx] = Math.max(0, Math.min(1, nextB[idx]));
      nextC[idx] = Math.max(0, Math.min(1, nextC[idx]));
    }
  }
  const tmpA = gridA; const tmpB = gridB; const tmpC = gridC;
  gridA = nextA; gridB = nextB; gridC = nextC;
  nextA = tmpA; nextB = tmpB; nextC = tmpC;
}
Chemical A feeds B (same as before). But now B also feeds C through a second autocatalytic reaction: B + 2C -> 3C. So C parasitizes B the same way B parasitizes A. It's a food chain. A -> B -> C.
The patterns are qualitatively different from two-species systems. You get layered structures: A dominates in areas B and C haven't reached. B forms patterns where A is available but C hasn't consumed it yet. C forms patterns on top of B's patterns, eating into them. The three chemicals compete for space and produce nested structures -- spots within spots, stripes bordering different stripes, regions where one species dominates surrounded by regions where another dominates.
For rendering, map the three chemicals to RGB:
function renderThreeSpecies() {
  for (let i = 0; i < W * H; i++) {
    const a = gridA[i];
    const b = gridB[i];
    const c = gridC[i];
    // A = blue channel, B = green channel, C = red channel
    imgData.data[i * 4 + 0] = Math.floor(c * 255);
    imgData.data[i * 4 + 1] = Math.floor(b * 255);
    imgData.data[i * 4 + 2] = Math.floor(a * 200);
    imgData.data[i * 4 + 3] = 255;
  }
  ctx.putImageData(imgData, 0, 0);
}
The color shows which chemical dominates where. Blue regions are A-dominated (no activity). Green regions are B territory. Red regions are where C has taken over. The boundaries between colors are the reaction fronts -- where one species is actively consuming another. These fronts are often the most visually interesting parts because that's where the dynamics are fastest.
RD as a displacement map: connecting to shaders
Reaction-diffusion output makes excellent input for other systems. One powerful technique: use the B concentration field as a displacement map for an image or a 3D mesh.
function renderDisplaced(sourceImage) {
  const srcCanvas = document.createElement('canvas');
  srcCanvas.width = W;
  srcCanvas.height = H;
  const srcCtx = srcCanvas.getContext('2d');
  srcCtx.drawImage(sourceImage, 0, 0, W, H);
  const srcData = srcCtx.getImageData(0, 0, W, H);
  for (let y = 0; y < H; y++) {
    for (let x = 0; x < W; x++) {
      const idx = y * W + x;
      const b = gridB[idx];
      // displace the pixel lookup by B concentration
      // (same offset on both axes gives a diagonal shear; for a radial
      // push, displace along the gradient of B instead)
      const displaceAmount = 8;
      const dxOffset = Math.floor((b - 0.5) * displaceAmount);
      const dyOffset = Math.floor((b - 0.5) * displaceAmount);
      const srcX = Math.max(0, Math.min(W - 1, x + dxOffset));
      const srcY = Math.max(0, Math.min(H - 1, y + dyOffset));
      const srcIdx = srcY * W + srcX;
      imgData.data[idx * 4 + 0] = srcData.data[srcIdx * 4 + 0];
      imgData.data[idx * 4 + 1] = srcData.data[srcIdx * 4 + 1];
      imgData.data[idx * 4 + 2] = srcData.data[srcIdx * 4 + 2];
      imgData.data[idx * 4 + 3] = 255;
    }
  }
  ctx.putImageData(imgData, 0, 0);
}
Where B is high (spots, stripes), pixels get displaced. Where B is low, pixels stay in place. The original image appears to be growing organic bumps -- spots push pixels outward, stripes create ripple distortions. It's like the image has a skin condition. In the best possible way.
This connects directly to the shader work from episodes 32-46. A displacement map in a shader works identically -- sample a texture at an offset determined by another texture. The reaction-diffusion grid IS the offset texture. If you port the simulation to a fragment shader with ping-pong framebuffers (which we'll get to shortly), the displacement rendering is trivial -- just an extra texture sample in the output shader.
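A sketch of that output shader -- uSource (the photograph texture) and uState (the RD state) are assumed names, not part of the earlier setup:
precision highp float;
uniform sampler2D uState;  // RD state: A in r, B in g
uniform sampler2D uSource; // the image being displaced
uniform vec2 uResolution;
void main() {
  vec2 uv = gl_FragCoord.xy / uResolution;
  float b = texture2D(uState, uv).g;
  // same idea as the CPU version: ~8-pixel offset scaled by B
  vec2 offset = vec2((b - 0.5) * 8.0) / uResolution;
  gl_FragColor = texture2D(uSource, uv + offset);
}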
Combining RD with particle systems
Another powerful crossover: use the reaction-diffusion field to drive a particle system. Particles spawn where B exceeds a threshold. They move along the gradient of B (toward higher concentrations). They die when B drops below the threshold. The result is a swarm of particles that traces out the pattern's structure in real time.
const particles = [];
const MAX_PARTICLES = 2000;
function spawnParticles() {
  if (particles.length >= MAX_PARTICLES) return;
  for (let attempt = 0; attempt < 20; attempt++) {
    const x = Math.floor(Math.random() * W);
    const y = Math.floor(Math.random() * H);
    const idx = y * W + x;
    if (gridB[idx] > 0.3) {
      particles.push({
        x: x,
        y: y,
        life: 100 + Math.random() * 200,
        size: 1 + gridB[idx] * 3
      });
      break;
    }
  }
}
function updateParticles() {
  for (let i = particles.length - 1; i >= 0; i--) {
    const p = particles[i];
    const px = Math.floor(p.x);
    const py = Math.floor(p.y);
    if (px < 1 || px >= W - 1 || py < 1 || py >= H - 1) {
      particles.splice(i, 1);
      continue;
    }
    // move along B gradient
    const bHere = gridB[py * W + px];
    const bRight = gridB[py * W + px + 1];
    const bLeft = gridB[py * W + px - 1];
    const bDown = gridB[(py + 1) * W + px];
    const bUp = gridB[(py - 1) * W + px];
    const gx = (bRight - bLeft) * 0.5;
    const gy = (bDown - bUp) * 0.5;
    p.x += gx * 5 + (Math.random() - 0.5) * 0.5;
    p.y += gy * 5 + (Math.random() - 0.5) * 0.5;
    p.life--;
    if (p.life <= 0 || bHere < 0.1) {
      particles.splice(i, 1);
    }
  }
}
function drawParticles() {
  for (const p of particles) {
    const alpha = Math.min(1, p.life / 50);
    ctx.fillStyle = 'rgba(255, 220, 180, ' + (alpha * 0.6) + ')';
    ctx.fillRect(p.x - p.size / 2, p.y - p.size / 2, p.size, p.size);
  }
}
The particles follow the B concentration gradient -- they climb toward the peaks of the pattern. When B drops (because the pattern is shifting or the particle drifted off a stripe), the particle dies. New particles spawn where B is active. The particle cloud becomes a luminous trace of the pattern's living edges.
This is the same gradient-following technique from the flow field particles in episode 11, just with a different field. Back then we used Perlin noise as the flow field. Now we're using a reaction-diffusion concentration field. Same particle motion code, completely different visual character.
GPU version: the fragment shader
We've been doing all this on the CPU with JavaScript arrays. It works at 256x256 or even 512x512, but for full HD or larger, the CPU can't keep up -- especially with multiple steps per frame. The reaction-diffusion update is embarrassingly parallel: every cell computes its own next state based on neighbor values. That's exactly what fragment shaders do. Each pixel runs the same code independently.
The GPU approach uses ping-pong rendering (two framebuffers, alternating between them, same concept as our double-buffered arrays). Here's the GLSL fragment shader for the Gray-Scott update:
// reaction-diffusion update shader
precision highp float;
uniform sampler2D uState; // current state (A in r, B in g)
uniform vec2 uResolution;
uniform float uFeed;
uniform float uKill;
uniform float uDt;
void main() {
  vec2 uv = gl_FragCoord.xy / uResolution;
  vec2 texel = 1.0 / uResolution;
  // read current state
  vec2 state = texture2D(uState, uv).rg;
  float a = state.r;
  float b = state.g;
  // laplacian via neighbor sampling
  // direct neighbors (weight 0.2 each)
  vec2 tState = texture2D(uState, uv + vec2(0.0, texel.y)).rg;
  vec2 bState = texture2D(uState, uv - vec2(0.0, texel.y)).rg;
  vec2 lState = texture2D(uState, uv - vec2(texel.x, 0.0)).rg;
  vec2 rState = texture2D(uState, uv + vec2(texel.x, 0.0)).rg;
  // diagonal neighbors (weight 0.05 each)
  vec2 tlState = texture2D(uState, uv + vec2(-texel.x, texel.y)).rg;
  vec2 trState = texture2D(uState, uv + vec2(texel.x, texel.y)).rg;
  vec2 blState = texture2D(uState, uv + vec2(-texel.x, -texel.y)).rg;
  vec2 brState = texture2D(uState, uv + vec2(texel.x, -texel.y)).rg;
  float lapA = tState.r * 0.2 + bState.r * 0.2 + lState.r * 0.2 + rState.r * 0.2
             + tlState.r * 0.05 + trState.r * 0.05 + blState.r * 0.05 + brState.r * 0.05
             - a;
  float lapB = tState.g * 0.2 + bState.g * 0.2 + lState.g * 0.2 + rState.g * 0.2
             + tlState.g * 0.05 + trState.g * 0.05 + blState.g * 0.05 + brState.g * 0.05
             - b;
  // Gray-Scott equations
  float reaction = a * b * b;
  float newA = a + (1.0 * lapA - reaction + uFeed * (1.0 - a)) * uDt;
  float newB = b + (0.5 * lapB + reaction - (uKill + uFeed) * b) * uDt;
  gl_FragColor = vec4(clamp(newA, 0.0, 1.0), clamp(newB, 0.0, 1.0), 0.0, 1.0);
}
This is almost a line-by-line translation of the JavaScript version. The key differences: we read from a texture instead of arrays, we write to a pixel instead of an array element, and we use vec2/float instead of separate variables. The laplacian is computed with 8 texture samples (same weights as our 3x3 kernel from ep052).
The ping-pong setup uses two framebuffers: render the update shader to framebuffer B while reading from framebuffer A's texture, then swap. Next frame, render to A while reading from B. This is the GPU equivalent of our double-buffered Float32Arrays.
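A sketch of that swap loop, assuming framebuffer/quad helpers along the lines of the ep032-37 setup (createFramebuffer, drawQuad, and updateProgram are stand-in names, not a real API):
let read = createFramebuffer(gl, W, H);  // texture + fbo pair
let write = createFramebuffer(gl, W, H);
function simulate(steps) {
  for (let s = 0; s < steps; s++) {
    gl.bindFramebuffer(gl.FRAMEBUFFER, write.fbo); // render INTO write
    gl.bindTexture(gl.TEXTURE_2D, read.texture);   // read FROM read
    drawQuad(updateProgram);                       // full-screen RD update pass
    const tmp = read; read = write; write = tmp;   // swap roles
  }
  // afterwards, read.texture holds the latest state for the display pass
}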
The performance difference is dramatic. A 256x256 grid with 10 steps per frame runs at maybe 30fps on the CPU in JavaScript. The same simulation on the GPU runs at 60fps at 1920x1080 with 20 steps per frame. The GPU processes every pixel in parallel -- a million pixels updating simultaneously instead of sequentially. For creative exploration at high resolution, the GPU version is essential.
You don't need to write the WebGL boilerplate from scratch. We set up shader rendering infrastructure in episodes 32-37. The framebuffer ping-pong is the same setup we used for shader feedback loops in episode 36. If you've done those episodes, you have all the scaffolding already.
Creative exercise: bio-digital portrait
Here's the big payoff exercise. Combine everything from this episode into one piece: load a portrait photo as the parameter map, use anisotropic diffusion aligned to the image gradients, render with a color palette, and overlay particles.
const W = 400, H = 400;
// 1. load image and build parameter maps
function buildFromImage(imageData) {
  for (let y = 0; y < H; y++) {
    for (let x = 0; x < W; x++) {
      const i = y * W + x;
      const r = imageData.data[i * 4];
      const g = imageData.data[i * 4 + 1];
      const bl = imageData.data[i * 4 + 2];
      const brightness = (r * 0.299 + g * 0.587 + bl * 0.114) / 255;
      // bright = active spots, dark = quiet stripes
      fMap[i] = 0.022 + brightness * 0.022;
      kMap[i] = 0.057 + brightness * 0.009;
      // compute image gradient (red channel as a brightness proxy)
      if (x > 0 && x < W - 1 && y > 0 && y < H - 1) {
        const iRight = imageData.data[(i + 1) * 4];
        const iLeft = imageData.data[(i - 1) * 4];
        const iDown = imageData.data[(i + W) * 4];
        const iUp = imageData.data[(i - W) * 4];
        const gx = (iRight - iLeft);
        const gy = (iDown - iUp);
        // the gradient points across edges; rotate 90 degrees so the
        // anisotropy flows along them
        angleMap[i] = Math.atan2(gy, gx) + Math.PI / 2;
      }
    }
  }
}
// 2. scatter seeds everywhere
for (let i = 0; i < W * H; i++) {
  if (Math.random() < 0.04) {
    gridB[i] = 1.0;
    gridA[i] = 0.0;
  }
}
// 3. main loop: simulate + particles + render
function loop() {
  for (let s = 0; s < 6; s++) {
    stepWithNoise(); // the fMap/kMap step from earlier, with its laplacian
                     // calls swapped for anisotropicLaplacian + angleMap
  }
  spawnParticles();
  updateParticles();
  // render RD with color palette
  for (let i = 0; i < W * H; i++) {
    const b = gridB[i];
    const ratio = b / (gridA[i] + b + 0.001);
    const r = Math.floor(20 + ratio * 180);
    const g = Math.floor(10 + ratio * 80);
    const bl = Math.floor(30 + ratio * 40);
    imgData.data[i * 4 + 0] = r;
    imgData.data[i * 4 + 1] = g;
    imgData.data[i * 4 + 2] = bl;
    imgData.data[i * 4 + 3] = 255;
  }
  ctx.putImageData(imgData, 0, 0);
  drawParticles();
  requestAnimationFrame(loop);
}
The image gradients drive the anisotropy: the flow direction is the gradient rotated 90 degrees, so patterns align with the edges in the photograph. Spots form along contours, stripes follow the natural flow of the image's structure. The face doesn't appear as a recognizable photo -- it appears as a coherent structure made entirely of biological patterns. The forehead might be one pattern regime, the cheeks another, the background a third. The overall shape is recognizable but every surface is alive.
The particles add a secondary layer of luminous detail. They trace the pattern edges, highlighting the active chemistry. Against the dark organic palette, they look like bioluminescence -- firefly trails mapping the contours of a living surface.
Run it for a few hundred frames. The pattern stabilizes into a portrait-shaped living system. Then slowly animate the parameters (shift f and k over time) and watch the portrait morph. Spots become stripes. Stripes dissolve. New patterns grow from the boundaries. The face remains recognizable but its texture is in constant flux. It's the biological portrait of someone who is literally growing :-)
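A minimal way to do that drift -- keeping baseF/baseK copies of the original maps (an assumption; without them the drift would accumulate):
const baseF = fMap.slice();
const baseK = kMap.slice();
function driftParams(frame) {
  const wobble = Math.sin(frame * 0.002) * 0.004; // slow, small sweep
  for (let i = 0; i < W * H; i++) {
    fMap[i] = baseF[i] + wobble;
    kMap[i] = baseK[i] + wobble * 0.4;
  }
}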
Turing was right
Everything we did today -- noise fields, image-driven chemistry, text masks, anisotropic diffusion, multiple species -- is still Turing's reaction-diffusion mechanism from 1952. We're not adding new math. We're just varying the parameters and initial conditions that go into the same equations he wrote seventy years ago. The diversity of output comes not from complex rules but from creative input. Same system, infinite variation.
This is the theme of the emergent systems arc. Simple rules, complex output. The Game of Life had four rules and produced gliders and computers. Boids had three rules and produced flocking and murmurations. Reaction-diffusion has two equations and produces every pattern from leopard spots to brain coral to fingerprint ridges. And by treating these equations as creative tools -- feeding them images, text, noise fields -- we get output that no fixed parameter set could ever produce.
Seven episodes into the arc. Coming up we'll leave chemistry behind and look at formal grammars as a growth mechanism. Simple substitution rules applied recursively, producing branching structures that look like plants, blood vessels, river networks. Different formalism, same magic.
What it comes down to...
- Noise-driven parameter fields use Perlin or value noise to set different feed rate (f) and kill rate (k) at every cell. This creates a landscape of competing pattern types -- spots, stripes, coral, dead zones -- with smooth organic transitions between regimes. The noise scale controls region size
- Image-driven parameters map a photograph's brightness to f and k values. Bright areas produce active patterns, dark areas stay quiet. The photograph's structure emerges as a biological reinterpretation -- not a pixel copy but a pattern-level echo of the original shapes
- Text masks render text to a hidden canvas and use it to set different parameters inside vs outside the letterforms. Text appears as pattern islands in a different-patterned sea. Blurring the mask softens the transitions for smoother results
- Anisotropic diffusion makes chemicals spread faster in one direction than another. The direction comes from a noise field or image gradients. Produces wood-grain, muscle-fiber, and flow-aligned patterns instead of isotropic spots and stripes
- Three-species reactions create a food chain: A feeds B feeds C. The layered competition produces nested patterns and colored territory maps that two-species systems can't generate
- The GPU shader version uses ping-pong framebuffers (same as ep036 feedback loops) with a fragment shader that's nearly identical to the JavaScript update loop. Runs at 1080p in real time because every pixel updates in parallel
- RD output works as input for other systems: displacement maps for images, gradient fields for particles, flow fields for boids. Crossing systems multiplies the creative possibilities
Sallukes! Thanks for reading.