Learn Creative Coding (#49) - Continuous Cellular Automata

in StemSocial · 8 days ago

In episode 47 we built 1D cellular automata -- rows of cells that were either on or off, 0 or 1. In episode 48 we went 2D with Conway's Game of Life -- grids of cells, still binary, still alive-or-dead. And both times, the patterns were blocky. Pixel grids. Sharp edges. Digital in the most literal sense.

What if cells could be 0.37? Or 0.91? Or any floating-point value between 0.0 and 1.0?

That's continuous cellular automata. Instead of a cell being alive or dead, it has a state -- a real number. Instead of counting discrete neighbors (3 alive neighbors = birth), you integrate over a region and pass the result through a smooth function. The output isn't pixel grids anymore. It's smooth, organic, almost biological. Blob-like structures that move, pulse, split, and merge. Things that genuinely look alive.

This is where cellular automata stop being a computer science curiosity and start being a creative coding tool that produces things you can't get any other way.

From discrete to continuous: the core idea

In the Game of Life, the update rule checks: how many of my 8 neighbors are alive? Then applies hard thresholds. Exactly 3 = birth. 2 or 3 = survival. Everything else = death. There's no in-between. No "almost alive." No gradual transition.

A continuous CA replaces those three components:

  1. Discrete neighborhood (8 adjacent cells) becomes continuous kernel (a smooth function over a region, typically a ring or disk)
  2. Integer neighbor count becomes weighted integral (sum of cell values weighted by the kernel)
  3. Hard threshold rules become smooth growth functions (sigmoid-like curves that transition gradually between growth and decay)

The result: cells don't snap between 0 and 1. They drift smoothly. A cell at 0.3 might increase to 0.35 next step if conditions are favorable, or decrease to 0.27 if not. The grid evolves continuously instead of jumping between discrete states.
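
To make the contrast concrete, here's a toy sketch of my own (not the actual SmoothLife rule, which comes next): Life's hard threshold returns a 0/1 verdict, while a smooth growth function returns a signed rate that nudges the state.

```javascript
// toy contrast: hard Life-style threshold vs a smooth growth curve
// (illustrative only -- the real SmoothLife rule is covered below)
function lifeRule(aliveNeighbors, alive) {
  if (alive) return (aliveNeighbors === 2 || aliveNeighbors === 3) ? 1 : 0;
  return (aliveNeighbors === 3) ? 1 : 0;
}

// smooth version: a Gaussian bump centered on a target density mu;
// returns a signed growth rate in -1..+1 instead of a 0/1 verdict
function smoothGrowth(n, mu = 0.35, sigma = 0.07) {
  return 2 * Math.exp(-((n - mu) ** 2) / (2 * sigma * sigma)) - 1;
}

// a cell drifts instead of snapping between states:
let state = 0.3;
const dt = 0.1;
state = Math.min(1, Math.max(0, state + dt * smoothGrowth(0.35)));
// state moved up a little because n hit the sweet spot
```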

SmoothLife: continuous Game of Life

SmoothLife, published by Stephan Rafler in 2011, is the direct continuous generalization of Conway's Game of Life. The idea is surprisingly clean.

In Life, each cell has an inner state (itself) and an outer neighborhood (8 surrounding cells). SmoothLife replaces this with two concentric circles:

  • Inner disk (radius ri): represents the cell itself. The average value inside this disk is the cell's effective state
  • Outer ring (inner radius ri, outer radius ra): represents the neighborhood. The average value in this ring is the effective neighbor density

The inner disk average tells you "how alive is this spot." The outer ring average tells you "how alive is the neighborhood around this spot." Same conceptual split as Life, but smoothed over circular regions instead of a 3x3 grid.

const canvas = document.getElementById('canvas');
const ctx = canvas.getContext('2d');
const W = 256;
const H = 256;
canvas.width = W;
canvas.height = H;

// grid of floats between 0 and 1
let grid = new Float32Array(W * H);
let next = new Float32Array(W * H);

// random initialization with smooth blobs
// note: pick the blob centers ONCE, before the pixel loop --
// generating them per pixel gives high-frequency noise, not blobs
const blobs = [];
for (let i = 0; i < 6; i++) {
  blobs.push({
    x: (Math.random() - 0.5) * W,
    y: (Math.random() - 0.5) * H,
    r: 15 + Math.random() * 10
  });
}

for (let y = 0; y < H; y++) {
  for (let x = 0; x < W; x++) {
    // centered coordinates, matching the blob centers
    const cx = x - W / 2, cy = y - H / 2;
    let v = 0;
    for (const b of blobs) {
      const d = Math.sqrt((cx - b.x) * (cx - b.x) + (cy - b.y) * (cy - b.y));
      if (d < b.r) v = 0.8 + Math.random() * 0.2;
    }
    grid[y * W + x] = v;
  }
}

The initialization matters more in continuous CAs than in discrete ones. Random noise per pixel doesn't work well -- the spatial frequency is too high and everything just averages out to mush. You want blobs: smooth regions of high values surrounded by zeros. Think of it like placing droplets of paint on a surface.

The kernel: rings and disks

The kernel defines what "neighborhood" means for each cell. In SmoothLife, it's a ring -- zero inside the inner radius, positive between inner and outer radius, zero outside. But not a hard ring. The edges are smoothed with a sigmoid so there's no sharp cutoff.

const ri = 4.0;   // inner radius (the "cell")
const ra = 12.0;  // outer radius (the neighborhood)

// sigmoid function for smooth transitions
function sigma(x, a, alpha) {
  return 1.0 / (1.0 + Math.exp(-(x - a) * 4.0 / alpha));
}

// compute inner disk average and outer ring average
function computeAverages(grid, cx, cy) {
  let innerSum = 0, innerCount = 0;
  let outerSum = 0, outerCount = 0;

  const r = Math.ceil(ra);

  for (let dy = -r; dy <= r; dy++) {
    for (let dx = -r; dx <= r; dx++) {
      const dist = Math.sqrt(dx * dx + dy * dy);
      if (dist > ra) continue;

      const nx = (cx + dx + W) % W;
      const ny = (cy + dy + H) % H;
      const val = grid[ny * W + nx];

      if (dist <= ri) {
        innerSum += val;
        innerCount++;
      } else {
        outerSum += val;
        outerCount++;
      }
    }
  }

  const m = innerCount > 0 ? innerSum / innerCount : 0;
  const n = outerCount > 0 ? outerSum / outerCount : 0;
  return { m, n };
}

m is the inner disk average -- how alive the cell is. n is the outer ring average -- how alive the neighborhood is. These two numbers drive the growth function.

The ratio between ri and ra matters a lot. Too small and the cells can't "see" far enough to form structures. Too large and the computation becomes expensive and the patterns are huge and slow. ri=4, ra=12 is a good starting point -- gives you patterns that are visible on a 256x256 grid.

The growth function: smooth birth and death

Here's where SmoothLife diverges from Life in the most interesting way. Instead of "if neighbors == 3 then birth", we have a smooth function that maps the neighborhood density n and the cell state m to a growth rate. The function produces values around 0 (no change), positive (growth), or negative (decay).

// SmoothLife transition function
// b1, b2: birth range (neighborhood density that triggers growth)
// d1, d2: survival range (density that allows continued existence)
// sigmaM: smooth interpolation based on inner state

function smoothLifeTransition(n, m) {
  const b1 = 0.278, b2 = 0.365;  // birth thresholds
  const d1 = 0.267, d2 = 0.445;  // death/survival thresholds
  const alpha_m = 0.147;
  const alpha_n = 0.028;

  // smooth interpolation between birth and survival thresholds
  // based on inner state m
  const s1 = sigma(m, 0.5, alpha_m);
  const threshold1 = b1 * (1 - s1) + d1 * s1;
  const threshold2 = b2 * (1 - s1) + d2 * s1;

  // growth function: 1 if n is in range, 0 if not (smoothly)
  const growth = sigma(n, threshold1, alpha_n) * (1 - sigma(n, threshold2, alpha_n));

  return growth;
}

The magic is in that smooth interpolation. When m is low (cell is nearly dead), the thresholds are b1 and b2 -- the birth range. When m is high (cell is alive), the thresholds shift to d1 and d2 -- the survival range. The smooth sigma function blends between them based on the cell's current state. No hard if-else branching. Just continuous functions all the way down.
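
You can sanity-check the blend at the extremes. When m is near 0, the effective lower threshold sits at b1 (birth range); when m is near 1, it sits at d1 (survival range). A small standalone check, repeating sigma and the constants from the snippet above:

```javascript
// standalone check of the threshold blend at the extremes of m
// (sigma and the constants repeat the SmoothLife snippet above)
function sigma(x, a, alpha) {
  return 1.0 / (1.0 + Math.exp(-(x - a) * 4.0 / alpha));
}
const b1 = 0.278, d1 = 0.267, alpha_m = 0.147;

// m near 0 (dead): s1 ~ 0, so the lower threshold is b1 (birth range)
const sDead = sigma(0.0, 0.5, alpha_m);
const tDead = b1 * (1 - sDead) + d1 * sDead;    // ~0.278

// m near 1 (alive): s1 ~ 1, so the lower threshold is d1 (survival range)
const sAlive = sigma(1.0, 0.5, alpha_m);
const tAlive = b1 * (1 - sAlive) + d1 * sAlive; // ~0.267
```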

The output of smoothLifeTransition is between 0 and 1. We use it as the new cell value, or mix it with the old value for a more gradual transition:

function step(dt) {
  for (let y = 0; y < H; y++) {
    for (let x = 0; x < W; x++) {
      const { m, n } = computeAverages(grid, x, y);
      const target = smoothLifeTransition(n, m);

      // gradual update instead of snap
      const i = y * W + x;
      next[i] = grid[i] + dt * (2.0 * target - 1.0);

      // clamp to 0..1
      if (next[i] < 0) next[i] = 0;
      if (next[i] > 1) next[i] = 1;
    }
  }

  [grid, next] = [next, grid];
}

The dt * (2.0 * target - 1.0) part converts the 0..1 growth function output into a signed change. If target > 0.5, the cell grows. If target < 0.5, it decays. The dt parameter controls how fast the system evolves -- smaller dt means smoother, more gradual transitions. Typically 0.1 to 0.5 works well.

Rendering continuous states

Binary CAs render trivially: alive = color, dead = background. Continuous CAs give you a gradient, which is much more interesting to render. Each cell has a value between 0 and 1 that you can map to any color scheme.

function draw() {
  const imageData = ctx.createImageData(W, H);

  for (let y = 0; y < H; y++) {
    for (let x = 0; x < W; x++) {
      const v = grid[y * W + x];
      const i = (y * W + x) * 4;

      // smooth gradient: dark -> amber -> white
      if (v < 0.5) {
        const t = v * 2.0;
        imageData.data[i + 0] = Math.floor(10 + t * 220);
        imageData.data[i + 1] = Math.floor(10 + t * 150);
        imageData.data[i + 2] = Math.floor(15 + t * 40);
      } else {
        const t = (v - 0.5) * 2.0;
        imageData.data[i + 0] = Math.floor(230 + t * 25);
        imageData.data[i + 1] = Math.floor(160 + t * 80);
        imageData.data[i + 2] = Math.floor(55 + t * 180);
      }
      imageData.data[i + 3] = 255;
    }
  }

  ctx.putImageData(imageData, 0, 0);
}

function loop() {
  step(0.2);
  draw();
  requestAnimationFrame(loop);
}
loop();

Dead cells (0.0) are dark. Partially alive cells (0.3-0.5) glow amber. Fully alive cells (0.8-1.0) shift toward bright white-amber. The gradient reveals the internal state of each cell in a way that binary rendering never can. You see the edges of structures as gradients, not hard lines. Cells that are "about to die" look different from cells that are "about to be born." The whole grid has this soft, organic quality.

Run it and you'll see something that looks nothing like the Game of Life. Instead of blocky gliders and blinkers, you get smooth blobs. They drift slowly across the grid, change shape, sometimes split into two smaller blobs, sometimes merge when they get close. The motion is fluid, not jerky. It genuinely looks like watching organisms under a microscope.

Lenia: creatures from convolution

SmoothLife takes Life and makes it continuous. Bert Chan's Lenia (2018) takes that idea much further. Instead of just smoothing the neighborhood, Lenia uses arbitrary convolution kernels and growth functions. The system becomes a general framework for continuous cellular automata that can produce stunningly lifelike "creatures."

The core loop is simple:

  1. Convolve the grid with a kernel K (compute the weighted average of surrounding cells for every point)
  2. Pass the convolution result through a growth function G
  3. Add the growth to the current state, clamped to 0..1

// Lenia kernel: ring-shaped with Gaussian cross-section
function leniaKernel(r, peakRadius, width) {
  // r is distance from center, normalized to 0..1
  const d = Math.abs(r - peakRadius);
  return Math.exp(-(d * d) / (2 * width * width));
}

// build the kernel array
function buildKernel(radius) {
  const size = radius * 2 + 1;
  const kernel = new Float32Array(size * size);
  let sum = 0;

  for (let dy = -radius; dy <= radius; dy++) {
    for (let dx = -radius; dx <= radius; dx++) {
      const dist = Math.sqrt(dx * dx + dy * dy) / radius;
      if (dist > 1.0) continue;

      const k = leniaKernel(dist, 0.5, 0.15);
      const i = (dy + radius) * size + (dx + radius);
      kernel[i] = k;
      sum += k;
    }
  }

  // normalize
  for (let i = 0; i < kernel.length; i++) {
    kernel[i] /= sum;
  }

  return { data: kernel, size, radius };
}

const K = buildKernel(13);

The kernel is ring-shaped: zero at the center, peaks at a certain radius, drops back to zero at the edge. The Gaussian cross-section makes the ring smooth instead of sharp. This kernel shape is what makes Lenia produce round, blob-like creatures instead of blocky pixel patterns.

The peak radius and width parameters control the shape of the ring. A peak at 0.5 with narrow width gives you a thin ring -- creatures are defined by what's at a specific distance, not nearby. A wider ring means the creature "sees" a broader region. Different kernel shapes produce completely different creature morphologies.
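
A quick sanity check of the ring shape (repeating leniaKernel from above): the kernel peaks at the ring radius and falls off toward both the center and the outer edge.

```javascript
// same leniaKernel as above, repeated for a standalone check
function leniaKernel(r, peakRadius, width) {
  const d = Math.abs(r - peakRadius);
  return Math.exp(-(d * d) / (2 * width * width));
}

const atPeak = leniaKernel(0.5, 0.5, 0.15);   // 1.0 at the ring's peak
const atCenter = leniaKernel(0.0, 0.5, 0.15); // ~0.004 near the center
const atEdge = leniaKernel(1.0, 0.5, 0.15);   // ~0.004 at the outer edge
```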

The growth function: where life happens

The growth function is what determines whether a cell grows or decays based on the convolution result. In Lenia, it's typically a Gaussian bump centered on a target value:

// growth function: Gaussian bump
function growth(u, mu, sigma) {
  return 2.0 * Math.exp(-((u - mu) * (u - mu)) / (2 * sigma * sigma)) - 1.0;
}

// mu = target neighborhood density (where growth is maximized)
// sigma = width of the growth peak
const mu = 0.15;    // sweet spot for neighborhood density
const gSigma = 0.015;

When the convolution result equals mu, the growth function returns its maximum (close to +1.0). Too much or too little neighborhood activity and the growth drops below zero -- the cell decays. This creates a self-regulating system: regions that are too dense shrink, regions that are too sparse die, and only regions with the right density of neighbors sustain themselves.
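
Plugging in a few values shows the self-regulation (same growth function and parameters as above): only densities near mu are rewarded, everything else decays.

```javascript
// growth function repeated from above for a standalone check
function growth(u, mu, sigma) {
  return 2.0 * Math.exp(-((u - mu) * (u - mu)) / (2 * sigma * sigma)) - 1.0;
}
const mu = 0.15, gSigma = 0.015;

const atSweetSpot = growth(mu, mu, gSigma);  // +1: maximum growth
const tooSparse = growth(0.0, mu, gSigma);   // ~-1: too little neighborhood, starves
const tooDense = growth(0.5, mu, gSigma);    // ~-1: overcrowded, decays
```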

The full Lenia step:

function leniaStep(dt) {
  // step 1: convolve grid with kernel
  const conv = new Float32Array(W * H);

  for (let y = 0; y < H; y++) {
    for (let x = 0; x < W; x++) {
      let sum = 0;
      for (let ky = -K.radius; ky <= K.radius; ky++) {
        for (let kx = -K.radius; kx <= K.radius; kx++) {
          const ki = (ky + K.radius) * K.size + (kx + K.radius);
          if (K.data[ki] === 0) continue;

          const nx = (x + kx + W) % W;
          const ny = (y + ky + H) % H;
          sum += grid[ny * W + nx] * K.data[ki];
        }
      }
      conv[y * W + x] = sum;
    }
  }

  // step 2: apply growth function and update
  for (let y = 0; y < H; y++) {
    for (let x = 0; x < W; x++) {
      const i = y * W + x;
      const g = growth(conv[i], mu, gSigma);
      next[i] = Math.max(0, Math.min(1, grid[i] + dt * g));
    }
  }

  [grid, next] = [next, grid];
}

That nested loop in the convolution is O(W * H * K.size^2). For a 256x256 grid with a radius-13 kernel, that's about 256 * 256 * 27 * 27 = ~48 million multiply-adds per frame. JavaScript handles this at maybe 2-5 fps depending on your machine. Not great.

FFT convolution: making it fast

The convolution is the bottleneck. The brute-force nested loop is O(n * k^2) where n is the number of cells and k is the kernel diameter. But convolution in the spatial domain equals multiplication in the frequency domain. Fourier transform the grid, Fourier transform the kernel, multiply element-wise, inverse Fourier transform the result. The FFT reduces O(n * k^2) to O(n log n), independent of kernel size. For large kernels this is a massive speedup.

// using a simple 2D FFT implementation
// (a full FFT is ~100 lines, I'll use the concept here)

// pseudocode for FFT-based convolution:
// 1. fft2d(grid) -> Grid_freq
// 2. fft2d(kernel_padded) -> K_freq  (precompute this once!)
// 3. multiply: Result_freq = Grid_freq * K_freq (element-wise complex multiply)
// 4. ifft2d(Result_freq) -> convolution result

// the kernel FFT only needs to be computed once
// since the kernel doesn't change between frames
const kernelPadded = new Float32Array(W * H);
// copy kernel into center of padded array...
// const K_freq = fft2d(kernelPadded);

function leniaStepFFT(dt) {
  // const gridFreq = fft2d(grid);
  // const convFreq = complexMultiply(gridFreq, K_freq);
  // const conv = ifft2d(convFreq);
  // ... apply growth function same as before
}

I'm showing pseudocode here because a full FFT implementation is a detour from the creative coding point. In practice, you'd use a library (there are several JavaScript FFT packages on npm) or move to the GPU. The important takeaway: FFT makes Lenia run at interactive framerates even with large kernels, and the kernel transform is precomputed once. Only the grid needs to be transformed each frame.
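
To see the convolution theorem in action without pulling in a full FFT, here's a tiny naive DFT on an 8-sample signal (my own illustration -- the O(n^2) DFT has the same algebra as the FFT, just without the speed). The frequency-domain product matches direct circular convolution:

```javascript
// naive DFT (and inverse) on parallel real/imaginary arrays
function dft(re, im, inverse = false) {
  const n = re.length;
  const outRe = new Array(n).fill(0), outIm = new Array(n).fill(0);
  const sign = inverse ? 1 : -1;
  for (let k = 0; k < n; k++) {
    for (let t = 0; t < n; t++) {
      const ang = sign * 2 * Math.PI * k * t / n;
      outRe[k] += re[t] * Math.cos(ang) - im[t] * Math.sin(ang);
      outIm[k] += re[t] * Math.sin(ang) + im[t] * Math.cos(ang);
    }
  }
  if (inverse) for (let k = 0; k < n; k++) { outRe[k] /= n; outIm[k] /= n; }
  return [outRe, outIm];
}

// direct circular convolution, for comparison
function circConv(a, b) {
  const n = a.length, out = new Array(n).fill(0);
  for (let i = 0; i < n; i++)
    for (let j = 0; j < n; j++)
      out[i] += a[j] * b[(i - j + n) % n];
  return out;
}

const sig = [1, 0.5, 0, 0, 0, 0, 0, 0.5];
const ker = [0.25, 0.5, 0.25, 0, 0, 0, 0, 0];

const [sRe, sIm] = dft(sig, new Array(8).fill(0));
const [kRe, kIm] = dft(ker, new Array(8).fill(0));
// element-wise complex multiply in the frequency domain
const pRe = sRe.map((v, i) => v * kRe[i] - sIm[i] * kIm[i]);
const pIm = sRe.map((v, i) => v * kIm[i] + sIm[i] * kRe[i]);
const [convRe] = dft(pRe, pIm, true);
// convRe now matches circConv(sig, ker) up to floating-point error
```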

On the GPU this is even better. The convolution is naturally parallelizable -- each cell's convolution is independent. You could use the compute shader techniques from episode 46, or even do it with fragment shaders and ping-pong framebuffers. Store the grid as a texture, apply the kernel as a weighted texture read in a fragment shader, write the result to a second framebuffer. Same feedback loop pattern we've used since episode 36.

Lenia creatures: what emerges

The remarkable thing about Lenia is what comes out. With the right kernel and growth parameters, you get self-organizing structures that look and behave like simple organisms. Bert Chan named several species discovered through parameter exploration:

  • Orbium: a smooth circular blob that moves steadily in one direction. It maintains its shape while traveling, like a cell swimming. If it hits the edge (with wrapping), it comes back around
  • Hydrogeminium: a more complex creature with internal structure. It has a dense core and a thinner outer layer. It moves and rotates
  • Scutium: a creature with bilateral symmetry. It looks like a tiny swimming organism seen from above

None of these were designed. They emerge from the parameter space. You set the kernel shape, the growth function parameters, initialize the grid with random blobs, and evolution produces these structures spontaneously. Some parameter combinations produce nothing (everything dies or everything fills up). Some produce static patterns. And some produce motile creatures that genuinely appear to be alive.

// parameter presets for known Lenia creatures

const orbium = {
  radius: 13,
  peakR: 0.5,
  peakW: 0.15,
  mu: 0.15,
  sigma: 0.015,
  dt: 0.1
};

const hydrogeminium = {
  radius: 13,
  peakR: 0.5,
  peakW: 0.23,
  mu: 0.14,
  sigma: 0.03,
  dt: 0.05
};

function initCreature(cx, cy, radius, density) {
  // place a circular blob as initial condition
  for (let y = 0; y < H; y++) {
    for (let x = 0; x < W; x++) {
      const dx = x - cx, dy = y - cy;
      const dist = Math.sqrt(dx * dx + dy * dy);
      if (dist < radius) {
        const edge = 1.0 - dist / radius;
        grid[y * W + x] = density * edge * edge;
      }
    }
  }
}

// place an orbium-like blob in the center
initCreature(W / 2, H / 2, 15, 1.0);

The initialization is a smooth circular blob with a quadratic falloff (edge * edge) so the edges taper gently. Hard-edged circles don't work well -- the sharp boundary creates artifacts in the convolution. Smooth initial conditions produce smoother evolution.

Try different blob sizes and densities. A blob that's too small won't have enough mass to sustain itself and it'll decay. Too large and it might split into multiple creatures. The sweet spot depends on the parameters, which is part of the exploration.

Multi-channel Lenia: predator and prey

Standard Lenia has one channel -- each cell has one float value. Multi-channel Lenia gives each cell multiple values (like RGB). Each channel has its own kernel and growth function, and channels can interact: channel A's density influences channel B's growth, and vice versa.

// two-channel Lenia
const channels = 2;
let gridA = new Float32Array(W * H);
let gridB = new Float32Array(W * H);

// channel A: "prey" -- grows where A density is moderate, shrinks near B
// channel B: "predator" -- grows where A is present, dies without food

function growthA(convA, convB) {
  const g = growth(convA, 0.15, 0.015);
  const inhibition = convB * 3.0;  // B suppresses A
  return g - inhibition;
}

function growthB(convA, convB) {
  const g = growth(convB, 0.2, 0.02);
  const food = convA * 2.0;  // A feeds B
  const starvation = (convA < 0.05) ? -0.5 : 0.0;
  return g + food + starvation;
}

This is conceptual -- the actual multi-channel implementation needs separate convolutions for each channel and cross-channel growth terms. But the result is stunning. Channel A forms creatures that roam around. Channel B forms creatures that chase channel A creatures. When a B creature catches an A creature, A shrinks and B grows. When B can't find any A, it starves and dies. Actual predator-prey dynamics emerging from convolution kernels and growth functions. No agent logic, no if-statements about "if predator is near prey then eat." Just math.

If you render channel A as one color and channel B as another (say amber and teal), you get a living ecosystem on your screen. Prey blobs drifting around, predator blobs chasing them, populations oscillating. It looks like a nature documentary shot through a microscope.
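
A minimal sketch of that rendering, assuming gridA and gridB hold 0..1 values as above (the amber and teal constants are just one choice):

```javascript
// map the two channel values to an amber/teal blend; returns [r, g, b]
function blendAB(a, b) {
  return [
    Math.min(255, Math.round(a * 255 + b * 40)),
    Math.min(255, Math.round(a * 180 + b * 200)),
    Math.min(255, Math.round(a * 60 + b * 190))
  ];
}

// write both channels into one ImageData (browser context)
function drawTwoChannel(ctx, gridA, gridB, W, H) {
  const img = ctx.createImageData(W, H);
  for (let i = 0; i < W * H; i++) {
    const [r, g, b] = blendAB(gridA[i], gridB[i]);
    const p = i * 4;
    img.data[p + 0] = r;
    img.data[p + 1] = g;
    img.data[p + 2] = b;
    img.data[p + 3] = 255;
  }
  ctx.putImageData(img, 0, 0);
}
```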

Connection to reaction-diffusion

You might have noticed that Lenia's loop -- convolve with a kernel, apply a growth function, update -- looks a lot like a reaction-diffusion system. That's not a coincidence. The convolution kernel IS the diffusion: it spreads each cell's influence to its neighbors. The growth function IS the reaction: it determines whether the local state grows or decays based on the concentrations around it.

Reaction-diffusion systems (like Gray-Scott, which produces leopard spots and coral-like patterns) are continuous PDEs that can be discretized the same way. The main difference is that classic reaction-diffusion uses a Laplacian kernel (which measures how different a cell is from its neighbors) while Lenia uses a ring-shaped kernel (which measures the average state at a specific distance). The ring kernel is what gives Lenia its motile, creature-like behavior -- the Laplacian produces stationary or slowly-drifting patterns.
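
For contrast with Lenia's ring, here's the standard 9-point discrete Laplacian kernel commonly used in Gray-Scott-style reaction-diffusion:

```javascript
// 9-point discrete Laplacian (a common reaction-diffusion choice)
const laplacian = [
  0.05, 0.2, 0.05,
  0.2, -1.0, 0.2,
  0.05, 0.2, 0.05
];

// the weights sum to zero: this kernel measures how a cell differs
// from its neighbors, not the density at a given distance -- which is
// why it diffuses patterns in place instead of propelling creatures
const weightSum = laplacian.reduce((s, v) => s + v, 0);
```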

We'll explore reaction-diffusion in depth soon. For now, know that continuous CA and reaction-diffusion are closely related mathematically. If you understand one, you're already halfway to understanding the other.

GPU implementation: real-time at high resolution

The brute-force JavaScript version works for learning but it's painfully slow at useful resolutions. For real-time interaction, move the convolution to the GPU. The approach is the same ping-pong framebuffer technique from episode 36 and 48:

precision mediump float;
uniform sampler2D u_state;
uniform sampler2D u_kernel;
uniform vec2 u_resolution;
uniform float u_dt;
uniform float u_mu;
uniform float u_sigma;
uniform float u_kernelRadius;

// growth function
float growth(float u) {
  return 2.0 * exp(-pow(u - u_mu, 2.0) / (2.0 * u_sigma * u_sigma)) - 1.0;
}

void main() {
  vec2 uv = gl_FragCoord.xy / u_resolution;
  vec2 pixel = 1.0 / u_resolution;

  // convolution with ring kernel
  float conv = 0.0;
  float kernelR = u_kernelRadius;

  for (float dy = -13.0; dy <= 13.0; dy += 1.0) {
    for (float dx = -13.0; dx <= 13.0; dx += 1.0) {
      float dist = length(vec2(dx, dy)) / kernelR;
      if (dist > 1.0) continue;

      // ring kernel with Gaussian cross-section
      // (squared manually -- pow() is undefined for negative bases in GLSL ES)
      float dr = dist - 0.5;
      float k = exp(-(dr * dr) / (2.0 * 0.15 * 0.15));

      vec2 sampleUV = uv + vec2(dx, dy) * pixel;
      sampleUV = fract(sampleUV);  // toroidal wrap
      conv += texture2D(u_state, sampleUV).r * k;
    }
  }

  // normalize (approximate -- precompute for accuracy)
  conv /= 120.0;

  float current = texture2D(u_state, uv).r;
  float g = growth(conv);
  float next = clamp(current + u_dt * g, 0.0, 1.0);

  gl_FragColor = vec4(next, next, next, 1.0);
}

This runs the Lenia step entirely on the GPU. Each pixel reads its neighborhood from the state texture, applies the kernel and growth function, writes the new state. The fract() call handles toroidal wrapping. A second render pass reads the state texture and applies the color mapping for display.

The loop hardcodes -13 to 13 which limits the kernel radius. In practice you'd use a uniform for the loop bounds or tile the computation. But even with this naive approach, a 512x512 Lenia runs at 30-60 fps on a modern GPU -- compared to 2-5 fps in JavaScript. The difference is night and day.

For even better performance, use the FFT approach on the GPU. WebGPU compute shaders can run FFTs efficiently, making kernels of any size essentially free. But the brute-force fragment shader approach is good enough for creative coding experiments and it's simpler to set up.

Parameter exploration: finding interesting patterns

The fun part of continuous CA is exploring the parameter space. Small changes in the growth function or kernel shape produce dramatically different behavior. Here's a systematic way to explore:

// parameter explorer
const params = {
  radius: 13,
  peakR: 0.5,     // where the kernel ring peaks (0..1)
  peakW: 0.15,    // width of the kernel ring
  mu: 0.15,       // growth function center
  sigma: 0.015,   // growth function width
  dt: 0.1,        // time step
};

// keyboard controls for live tweaking
document.addEventListener('keydown', function(e) {
  if (e.key === 'q') params.mu += 0.005;
  if (e.key === 'a') params.mu -= 0.005;
  if (e.key === 'w') params.sigma += 0.002;
  if (e.key === 's') params.sigma -= 0.002;
  if (e.key === 'e') params.peakR += 0.02;
  if (e.key === 'd') params.peakR -= 0.02;
  if (e.key === 'r') params.peakW += 0.01;
  if (e.key === 'f') params.peakW -= 0.01;
  if (e.key === 't') params.dt += 0.02;
  if (e.key === 'g') params.dt -= 0.02;

  // reset with current params
  if (e.key === ' ') {
    grid.fill(0);
    initCreature(W/2, H/2, 15, 1.0);
  }

  console.log(JSON.stringify(params));
});

Start from known-good parameters and nudge them. Push mu higher -- creatures need more neighborhood density to survive. They either die out (too demanding) or become more compact (packing tighter to meet the threshold). Push sigma wider -- the growth function is more forgiving, creatures become fuzzier and more irregular. Push dt higher -- evolution is faster but less stable, creatures move faster but might disintegrate. Each parameter has a visual effect you can feel after a few minutes of tweaking.

When you find something interesting, log the parameters. Lenia creatures are reproducible -- same parameters and same initial conditions produce the same evolution every time. Your parameter log becomes a collection of species.
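
One way to make that "collection of species" concrete -- a sketch of mine, with localStorage behind a try/catch since it only exists in the browser:

```javascript
// keep discovered parameter sets as named presets
const species = [];

function savePreset(name, params) {
  // copy the params so later live tweaks don't mutate the saved preset
  species.push({ name, params: { ...params } });
  try {
    localStorage.setItem('lenia-species', JSON.stringify(species));
  } catch (e) { /* not in a browser, or storage unavailable */ }
}

// usage: savePreset('wobbly-orbium', params) after a good find
```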

Creative exercise

Build a continuous CA explorer with these features:

  1. A 256x256 (or larger) grid of floats, rendered with a smooth color gradient
  2. SmoothLife or Lenia update rule with adjustable parameters
  3. Click to place blobs (initial conditions)
  4. Keyboard controls for tweaking mu, sigma, kernel shape, and dt in real time
  5. Spacebar to reset the grid with a fresh blob

If you want to push it:

  • Implement two-channel Lenia with different colored channels
  • Move the convolution to a GPU shader for real-time performance at higher resolutions
  • Record parameter presets for interesting creatures you discover
  • Add the cosine palette from episode 37 for rendering -- map cell value to the palette instead of a simple gradient

The parameter exploration is genuinely addictive. You'll spend hours nudging values, watching creatures form and die, finding the sweet spots where complex behavior emerges. It's the same feeling as the rule explorer from episode 47, but the continuous state space makes the output so much richer.

And this is only the beginning of the emergent systems arc. Continuous CA shows how smooth, organic patterns emerge from local rules. The same principles -- local interactions producing global structure, self-organization without central control -- appear in everything coming next. Flocking simulations where autonomous agents create swarm patterns. Reaction-diffusion where chemical interactions produce animal markings and coral structures. The math changes but the core idea stays the same: define simple rules, watch complex beauty emerge.

Three episodes into the arc now. From 1D rows (episode 47) to 2D grids (episode 48) to continuous floating-point states (this one). We've gone from blocky pixel patterns to smooth organic forms that look like actual biology. The rules got smoother and the output got wilder. Makes sense, right? :-)

Alright, so what do we know now?

  • Continuous cellular automata replace binary on/off states with floating-point values between 0.0 and 1.0. Instead of hard birth/death rules, smooth growth functions determine whether a cell increases or decreases. The output is organic and smooth instead of blocky
  • SmoothLife is the continuous generalization of Conway's Game of Life. It uses an inner disk (the "cell") and an outer ring (the "neighborhood") with smooth sigmoid transitions. The birth and survival thresholds blend based on the cell's current state
  • Lenia generalizes further with ring-shaped convolution kernels (Gaussian cross-section) and Gaussian growth functions. The loop is: convolve grid with kernel, apply growth function, update state. Three operations producing genuinely lifelike behavior
  • Lenia creatures (Orbium, Hydrogeminium, etc.) are self-organizing structures that move, maintain shape, split, and merge. They are not designed -- they emerge from the parameter space. Different kernel and growth parameters produce different species
  • The convolution bottleneck: brute-force is O(n * k^2). FFT reduces it to O(n log n), independent of kernel size. For real-time performance, move to the GPU using ping-pong framebuffers (same technique as episodes 36 and 48)
  • Multi-channel Lenia gives each cell multiple float values with separate kernels and growth functions per channel. Cross-channel interaction creates predator-prey dynamics and ecosystems
  • Continuous CA and reaction-diffusion are mathematically related. The convolution kernel is diffusion, the growth function is reaction. The ring-shaped kernel is what gives Lenia motile creatures -- Laplacian kernels produce stationary patterns
  • Parameter exploration is the creative practice: nudge mu, sigma, kernel shape, and dt to discover new creatures. Same parameters and initial conditions always produce the same evolution -- creatures are reproducible and collectible

Cheers! Thanks for reading.

X: @femdev