Learn Creative Coding (#61) - Emergence: When Simple Rules Create Complex Beauty

in StemSocial • 5 days ago



Fourteen episodes. That's how many we've spent building emergent systems. Grid automata (ep047-049), free-moving flocks (ep050-051), continuous chemistry (ep052-053), formal grammars (ep054-055), autonomous crawlers (ep056), erosion and growth (ep057), swarm intelligence (ep058), wave simulation (ep059), and the full ecosystem combining everything (ep060). Each one took a different approach to the same underlying idea: define simple local rules, run many agents following those rules, and watch complex structure appear from nothing.

But we never stopped to ask the big question. What IS this? What exactly happens when a hundred boids following three steering forces produce a flock that looks alive? When a grid of cells checking their neighbors creates patterns no individual cell could predict? When ants depositing chemical find the shortest path without any ant knowing geometry?

The word for it is emergence. And it's not just a computer science concept -- it's one of the deepest ideas in philosophy, physics, and biology. Today we're going to look at what we've built through that lens. What emergence actually means. Why it works. When to use it as a creative tool and when deterministic code is the better choice. And then we'll build one more thing -- not a specific algorithm this time, but a framework for designing your OWN emergent systems from scratch.

This episode is more conceptual than usual. There's still code -- we'll build a generic emergence playground and design a novel emergent system from first principles. But the core of it is understanding WHY all those simulations worked the way they did. Once you see the pattern, you can invent new ones without following someone else's recipe :-)

The common pattern

Look at everything we've built. Strip away the specifics -- the grid sizes, the force coefficients, the pheromone decay rates -- and every single simulation shares the same skeleton:

  1. Many agents (cells, boids, particles, ants) exist in a shared space
  2. Each agent follows simple local rules based on its immediate surroundings
  3. No agent has access to the global state or the "big picture"
  4. Agents interact indirectly (through the environment) or directly (through local sensing)
  5. Run it for enough steps and structure appears at the collective level

That's it. That's emergence in five bullet points. The structure that appears -- flocking formation, Turing patterns, transport networks, predator-prey oscillations -- exists ONLY at the collective level. No single boid is "flocking." No single ant has "found the shortest path." The pattern is a property of the system, not of any individual component.

Let me make this concrete with a minimal example. The simplest possible emergent system I can think of:

const canvas = document.getElementById('canvas');
const ctx = canvas.getContext('2d');
const W = canvas.width = 400;
const H = canvas.height = 400;

// 500 particles with random positions and velocities
const particles = [];
for (let i = 0; i < 500; i++) {
  particles.push({
    x: Math.random() * W,
    y: Math.random() * H,
    vx: (Math.random() - 0.5) * 2,
    vy: (Math.random() - 0.5) * 2
  });
}

function step() {
  for (const p of particles) {
    // one rule: move toward the average position of nearby particles
    let avgX = 0, avgY = 0, count = 0;
    for (const other of particles) {
      const dx = other.x - p.x;
      const dy = other.y - p.y;
      if (dx * dx + dy * dy < 2500 && other !== p) {
        avgX += other.x;
        avgY += other.y;
        count++;
      }
    }

    if (count > 0) {
      avgX /= count;
      avgY /= count;
      p.vx += (avgX - p.x) * 0.001;
      p.vy += (avgY - p.y) * 0.001;
    }

    // damping (friction so velocities don't grow without bound)
    p.vx *= 0.99;
    p.vy *= 0.99;
    p.x += p.vx;
    p.y += p.vy;

    // wrap
    p.x = ((p.x % W) + W) % W;
    p.y = ((p.y % H) + H) % H;
  }
}

Render it with simple dots and you can watch the clustering happen in real time:

function render() {
  ctx.fillStyle = 'rgba(0, 0, 0, 0.1)';
  ctx.fillRect(0, 0, W, H);

  ctx.fillStyle = 'rgba(200, 180, 140, 0.6)';
  for (const p of particles) {
    ctx.fillRect(p.x - 1, p.y - 1, 2, 2);
  }
}

function loop() {
  step();
  render();
  requestAnimationFrame(loop);
}

loop();

One rule: drift toward the average position of nearby particles. That's cohesion from the boids model (ep050), isolated. No separation, no alignment. Run it and the particles clump into clusters. Multiple clusters form, merge, sometimes split. The final state depends on the initial conditions -- different random seeds produce different cluster arrangements. From one rule applied 500 times per frame, spatial structure emerges.

Now add one more rule -- separation when too close:

function stepWithSeparation() {
  for (const p of particles) {
    let avgX = 0, avgY = 0, count = 0;
    let sepX = 0, sepY = 0;

    for (const other of particles) {
      if (other === p) continue;
      const dx = other.x - p.x;
      const dy = other.y - p.y;
      const d2 = dx * dx + dy * dy;

      if (d2 < 2500) {
        avgX += other.x;
        avgY += other.y;
        count++;

        if (d2 < 200) {
          // too close - push apart
          sepX -= dx;
          sepY -= dy;
        }
      }
    }

    if (count > 0) {
      avgX /= count;
      avgY /= count;
      p.vx += (avgX - p.x) * 0.001;
      p.vy += (avgY - p.y) * 0.001;
    }

    p.vx += sepX * 0.02;
    p.vy += sepY * 0.02;

    p.vx *= 0.99;
    p.vy *= 0.99;
    p.x += p.vx;
    p.y += p.vy;
    p.x = ((p.x % W) + W) % W;
    p.y = ((p.y % H) + H) % H;
  }
}

Now instead of collapsing into dense clumps, particles form stable clusters with spacing between members. They look like molecules. Or cells. Or colonies. Two rules -- attract at medium range, repel at close range -- and you get structured spatial organization. This is fundamentally the same mechanism that gives atoms their arrangement in crystals, cells their spacing in tissues, and galaxies their distribution in the cosmos. Different physics, same math.
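You can see exactly where those two rules balance by reducing the system to one dimension. This sketch (my own check, using the same coefficients as the code above) pits two particles against each other: cohesion pulls with strength 0.001 per unit of distance, separation pushes with 0.02 inside d² < 200, so the net force flips sign at d = √200 ≈ 14.1 and the pair settles into a narrow band around that spacing:

```javascript
// Two particles on a line, same coefficients as the 2D code above.
// Attraction (0.001 per unit of distance) acts inside the cohesion radius;
// repulsion (0.02 per unit) kicks in when d^2 < 200. The net force flips
// sign at d = sqrt(200), so that's where the pair equilibrates.
let x1 = 0, x2 = 40;       // start inside the cohesion radius (50)
let v1 = 0, v2 = 0;

for (let step = 0; step < 5000; step++) {
  const d = x2 - x1;
  const d2 = d * d;

  let f1 = 0, f2 = 0;
  if (d2 < 2500) {          // cohesion: accelerate toward the other
    f1 += d * 0.001;
    f2 -= d * 0.001;
  }
  if (d2 < 200) {           // separation: push apart, stronger coefficient
    f1 -= d * 0.02;
    f2 += d * 0.02;
  }

  v1 = (v1 + f1) * 0.99;    // same 0.99 damping as the 2D version
  v2 = (v2 + f2) * 0.99;
  x1 += v1;
  x2 += v2;
}

const spacing = x2 - x1;    // oscillates in a narrow band near sqrt(200)
```

That balance point is the "lattice constant" of the clusters you see on screen: change either coefficient or the separation radius and the spacing shifts accordingly.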

Levels of emergence

Not all emergence is equal. Philosophers distinguish between two types, and it's worth knowing the difference because it affects how you think about your art.

Weak emergence: the global pattern is surprising but in principle predictable from the rules. Given enough computational power, you could simulate the rules forward and predict exactly what the system will do. All of our simulations are weakly emergent. The boids flocking pattern is a deterministic consequence of the three steering forces. There's nothing mysterious about it -- it's just hard to predict without actually running the simulation.

Strong emergence: the global pattern is genuinely novel -- it has properties that can't even in principle be derived from the rules of the components. This is the controversial one. Some philosophers argue consciousness is strongly emergent -- neurons fire according to electrochemistry, but subjective experience is something qualitatively new that can't be reduced to neural signals. Others argue strong emergence doesn't really exist and everything is ultimately reducible.

For creative coding purposes we're firmly in the weak emergence camp. And that's fine! Weak emergence is plenty powerful. The whole point is that although the patterns ARE predictable in principle, they're NOT predictable in practice without running the simulation. You can look at the boids rules all day and you won't be able to visualize the complex flock dynamics they produce. You have to run it to see it. That's why it feels like magic even though it's just math.

This has a practical implication for your workflow: you can't design emergent art by planning the output. You design the rules and then discover the output. The gap between intention and result is the creative space. If you knew exactly what the rules would produce, there'd be no point running them.

The edge of chaos

Here's something I noticed while tuning all those simulations over the past fourteen episodes. The most interesting emergent behavior always happens at a specific parameter sweet spot -- not too orderly, not too chaotic. Too much structure and you get boring crystals or frozen patterns. Too much randomness and you get noise. The good stuff is in between.

This has a name: the edge of chaos. And it's not just an observation about our simulations -- it shows up everywhere in complexity science. Conway's Game of Life, the most famous cellular automaton, lives at the edge of chaos. Its rules (B3/S23) sit in a narrow band where patterns neither die out (too sparse) nor fill the grid (too dense). Instead they produce gliders, oscillators, and universal computation. Shift one number in the birth/survive rules and you get either a dead grid or total chaos.
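That claim about B3/S23 is easy to verify directly. Here's a standalone sketch (bounded grid, no wrap -- a simplification that's fine while the pattern stays away from the edges) stepping the classic five-cell glider: after four generations the same shape reappears shifted one cell right and one cell down:

```javascript
// Conway's Game of Life (B3/S23) on a small bounded grid -- no wrap,
// which is fine as long as the glider stays away from the edges.
const N = 16;
let grid = new Uint8Array(N * N);

// the classic five-cell glider, offset into the grid at (2, 2)
const glider = [[1, 0], [2, 1], [0, 2], [1, 2], [2, 2]];
for (const [x, y] of glider) grid[(y + 2) * N + (x + 2)] = 1;

function lifeStep(g) {
  const next = new Uint8Array(N * N);
  for (let y = 1; y < N - 1; y++) {
    for (let x = 1; x < N - 1; x++) {
      let n = 0;
      for (let dy = -1; dy <= 1; dy++)
        for (let dx = -1; dx <= 1; dx++)
          if (dx !== 0 || dy !== 0) n += g[(y + dy) * N + (x + dx)];
      const alive = g[y * N + x];
      // B3/S23: birth on exactly 3 neighbors, survival on 2 or 3
      next[y * N + x] = alive ? (n === 2 || n === 3 ? 1 : 0) : (n === 3 ? 1 : 0);
    }
  }
  return next;
}

// the glider's period is 4: same shape, translated by (+1, +1)
for (let i = 0; i < 4; i++) grid = lifeStep(grid);

const moved = new Set();
for (let i = 0; i < N * N; i++) if (grid[i]) moved.add(i);
```

Five cells, four generations, and a coherent "object" walks across the grid. Nudge the rule to B3/S234 or B2/S23 and the same starting cells smear into noise or die out.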

Let's build a tool for finding that edge. A parameter scanner that runs a CA with different rules and measures the "interestingness" of the output:

function measureComplexity(grid, w, h) {
  // simple measure: count distinct local patterns (2x2 blocks)
  const patterns = new Set();

  for (let y = 0; y < h - 1; y++) {
    for (let x = 0; x < w - 1; x++) {
      const key = grid[y * w + x] +
                  grid[y * w + x + 1] * 2 +
                  grid[(y + 1) * w + x] * 4 +
                  grid[(y + 1) * w + x + 1] * 8;
      patterns.add(key);
    }
  }

  // complexity peaks when pattern diversity is moderate
  // dead grid: 1 pattern (all zeros). Noise: 16 patterns (all possible)
  // edge of chaos: 6-12 patterns (structured but varied)
  return patterns.size;
}

function runCAWithRule(birthSet, surviveSet, steps, w, h) {
  let grid = new Uint8Array(w * h);
  // random initial state
  for (let i = 0; i < w * h; i++) {
    grid[i] = Math.random() < 0.4 ? 1 : 0;
  }

  for (let s = 0; s < steps; s++) {
    const next = new Uint8Array(w * h);
    for (let y = 1; y < h - 1; y++) {
      for (let x = 1; x < w - 1; x++) {
        let neighbors = 0;
        for (let dy = -1; dy <= 1; dy++) {
          for (let dx = -1; dx <= 1; dx++) {
            if (dx === 0 && dy === 0) continue;
            neighbors += grid[(y + dy) * w + (x + dx)];
          }
        }

        const alive = grid[y * w + x];
        if (alive && surviveSet.has(neighbors)) {
          next[y * w + x] = 1;
        } else if (!alive && birthSet.has(neighbors)) {
          next[y * w + x] = 1;
        }
      }
    }
    grid = next;
  }

  return { grid, complexity: measureComplexity(grid, w, h) };
}

The measureComplexity function counts distinct 2x2 block patterns in the grid. A dead grid has 1 pattern (all empty blocks). Pure random noise has 16 patterns (all possible 2x2 combinations). Interesting emergent behavior sits between these extremes -- enough variety to be visually rich, but enough structure to have recognizable features.

Run this across different birth/survive rule combinations and you'll find that the Game of Life (B3/S23) scores higher than most alternatives. Rules like B1/S12 (too eager to birth) produce noise. Rules like B5/S45 (too restrictive) produce dead grids. The sweet spot is narrow. That's the edge of chaos.
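A quick sanity check on the block-counting measure itself (same logic as measureComplexity, repackaged so it runs standalone): an empty grid scores 1, a checkerboard -- perfectly ordered -- scores only 2, and random noise fills in nearly all 16 possible blocks:

```javascript
// Same 2x2-block diversity measure as measureComplexity above.
function blockDiversity(grid, w, h) {
  const patterns = new Set();
  for (let y = 0; y < h - 1; y++) {
    for (let x = 0; x < w - 1; x++) {
      patterns.add(
        grid[y * w + x] +
        grid[y * w + x + 1] * 2 +
        grid[(y + 1) * w + x] * 4 +
        grid[(y + 1) * w + x + 1] * 8
      );
    }
  }
  return patterns.size;
}

const w = 40, h = 40;

const dead = new Uint8Array(w * h);                    // all zeros

const checker = new Uint8Array(w * h);                 // rigid order
for (let i = 0; i < w * h; i++) checker[i] = (Math.floor(i / w) + (i % w)) % 2;

const noise = new Uint8Array(w * h);                   // no order at all
for (let i = 0; i < w * h; i++) noise[i] = Math.random() < 0.5 ? 1 : 0;

const scores = {
  dead: blockDiversity(dead, w, h),        // 1: only the empty block
  checker: blockDiversity(checker, w, h),  // 2: two alternating blocks
  noise: blockDiversity(noise, w, h)       // high: nearly every block appears
};
```

If your own complexity metric can't separate these three cases, it won't find the edge of chaos either.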

Self-organized criticality: the sandpile

Some systems don't need you to tune them to the edge of chaos -- they drive themselves there. This is self-organized criticality, and the classic example is the sandpile model. Drop grains of sand one at a time onto a pile. Each grain either stays put (if the local slope is shallow enough) or triggers an avalanche (if the slope exceeds a threshold). The pile naturally evolves to a critical state where avalanches of ALL sizes occur -- tiny ones (one grain sliding) and huge ones (the whole face collapsing) -- following a power law distribution.

const SIZE = 100;
const grid = new Int32Array(SIZE * SIZE);
const threshold = 4;

function addGrain(x, y) {
  grid[y * SIZE + x]++;

  if (grid[y * SIZE + x] >= threshold) {
    topple(x, y);
  }
}

function topple(x, y) {
  // stack-based toppling to avoid recursion overflow
  const stack = [{ x, y }];

  while (stack.length > 0) {
    const pos = stack.pop();
    const idx = pos.y * SIZE + pos.x;

    if (grid[idx] < threshold) continue;

    grid[idx] -= 4;

    // distribute to 4 neighbors
    const neighbors = [
      { x: pos.x - 1, y: pos.y },
      { x: pos.x + 1, y: pos.y },
      { x: pos.x, y: pos.y - 1 },
      { x: pos.x, y: pos.y + 1 }
    ];

    for (const n of neighbors) {
      if (n.x >= 0 && n.x < SIZE && n.y >= 0 && n.y < SIZE) {
        grid[n.y * SIZE + n.x]++;
        if (grid[n.y * SIZE + n.x] >= threshold) {
          stack.push(n);
        }
      }
      // grains at the edge fall off (open boundary)
    }
  }
}

// drop 50000 grains at random positions
for (let i = 0; i < 50000; i++) {
  const x = Math.floor(Math.random() * SIZE);
  const y = Math.floor(Math.random() * SIZE);
  addGrain(x, y);
}

After many grains, the pile reaches a state where it's perpetually "almost unstable." Every new grain has a chance of triggering cascades. Small cascades happen often, big cascades happen rarely, and the frequency follows a power law (log-log plot is a straight line). No tuning required -- the system organizes itself to this critical state.

The visual output is gorgeous. Color each cell by its height (0 to 3, since 4 triggers toppling):

function renderSandpile() {
  const imgData = ctx.createImageData(SIZE, SIZE);

  const colors = [
    [15, 15, 30],    // 0: near empty - dark blue
    [40, 80, 120],   // 1: moderate - steel blue
    [120, 160, 60],  // 2: filling up - yellow-green
    [220, 180, 40]   // 3: critical - gold (about to topple)
  ];

  for (let i = 0; i < SIZE * SIZE; i++) {
    const h = Math.min(3, grid[i]);
    imgData.data[i * 4 + 0] = colors[h][0];
    imgData.data[i * 4 + 1] = colors[h][1];
    imgData.data[i * 4 + 2] = colors[h][2];
    imgData.data[i * 4 + 3] = 255;
  }

  ctx.putImageData(imgData, 0, 0);
}

To animate it, drop one grain per frame and watch cascades happen live:

function animateSandpile() {
  // drop a grain at a random position
  const x = Math.floor(Math.random() * SIZE);
  const y = Math.floor(Math.random() * SIZE);
  addGrain(x, y);

  renderSandpile();
  requestAnimationFrame(animateSandpile);
}

animateSandpile();

The pattern that forms has fractal-like structure. Regions of height 3 (gold, critical) form connected clusters bordered by lower-height cells. The boundaries between clusters are where avalanches propagate. Run the animation and you can watch avalanches cascade across the surface, some tiny, some sweeping across half the grid. The unpredictability of avalanche size from a single grain drop is what makes it captivating.

The art of rule design

So how do you actually design a new emergent system? Not follow a tutorial for boids or Game of Life -- but invent something original? Here's the framework I've been using across all these episodes, made explicit:

Step 1: choose your agents. What are they? Particles? Grid cells? Creatures with internal state? The agent type constrains what kinds of rules make sense. Grid cells have discrete neighbors. Free-moving particles interact at arbitrary distances. Agents with internal state (energy, direction, memory) can have richer behavior.

Step 2: define local rules. Each agent should only look at its immediate neighborhood. The temptation is to add global rules ("steer toward the center of all agents") -- resist it. Global rules don't produce emergence. They produce predictable convergence. Keep the sensing radius small relative to the world.

Step 3: include competing forces. This is the key insight. A single attractive force produces clumping. A single repulsive force produces uniform spacing. Neither is interesting. But attraction at one scale competing with repulsion at another scale -- that produces patterns. Turing proved this mathematically for reaction-diffusion: you need a short-range activator and a long-range inhibitor. The same principle applies to every emergent system we've built. Boids: cohesion (attract) vs separation (repel). Ants: pheromone reinforcement (amplify) vs evaporation (decay). Ecosystem: reproduction (growth) vs starvation (death).

Step 4: add feedback. Agents should modify the environment and the environment should modify agents. This circular causality is what makes emergence work. Ants deposit pheromone (agent -> environment) and follow pheromone (environment -> agent). Crawlers leave trails (agent -> surface) and sense trails (surface -> agent). Without feedback, agents are just independent walkers that don't create collective structure.

Step 5: tune to the edge. Start with aggressive parameters (strong forces, fast rates) and watch the system. It will probably either collapse to a static state or explode into noise. Dial things back toward the transition zone. The interesting behavior is always at the boundary between order and chaos.

Let me demonstrate with a novel system that doesn't exist in any textbook. I'm calling it "magnetic ink" -- particles that deposit trails and are attracted to existing trail density, but repelled by other particles at close range:

const W = 500;
const H = 500;
const trail = new Float32Array(W * H);

const agents = [];
for (let i = 0; i < 800; i++) {
  agents.push({
    x: Math.random() * W,
    y: Math.random() * H,
    dir: Math.random() * Math.PI * 2,
    speed: 1.2
  });
}

function stepMagneticInk() {
  for (const a of agents) {
    // sense trail ahead (three sensors, like Physarum)
    const sAngle = 0.5;
    const sDist = 10;

    const fL = sampleTrail(
      a.x + Math.cos(a.dir - sAngle) * sDist,
      a.y + Math.sin(a.dir - sAngle) * sDist
    );
    const fC = sampleTrail(
      a.x + Math.cos(a.dir) * sDist,
      a.y + Math.sin(a.dir) * sDist
    );
    const fR = sampleTrail(
      a.x + Math.cos(a.dir + sAngle) * sDist,
      a.y + Math.sin(a.dir + sAngle) * sDist
    );

    // steer toward trail (attraction to existing marks)
    if (fC >= fL && fC >= fR) {
      a.dir += (Math.random() - 0.5) * 0.1;
    } else if (fL > fR) {
      a.dir -= 0.3;
    } else {
      a.dir += 0.3;
    }

    // repulsion from nearby agents
    for (const other of agents) {
      if (other === a) continue;
      const dx = other.x - a.x;
      const dy = other.y - a.y;
      const d2 = dx * dx + dy * dy;
      if (d2 < 100 && d2 > 0) {
        // steer toward the heading that points away from the neighbor,
        // wrapping the angle difference into [-PI, PI]
        const away = Math.atan2(-dy, -dx);
        const diff = Math.atan2(Math.sin(away - a.dir), Math.cos(away - a.dir));
        a.dir += diff * 0.05;
      }
    }

    // move
    a.x += Math.cos(a.dir) * a.speed;
    a.y += Math.sin(a.dir) * a.speed;
    a.x = ((a.x % W) + W) % W;
    a.y = ((a.y % H) + H) % H;

    // deposit trail
    const ix = Math.floor(a.x);
    const iy = Math.floor(a.y);
    trail[iy * W + ix] += 0.5;
  }

  // decay trail (diffusion omitted here -- decay alone keeps old marks fading)
  for (let i = 0; i < W * H; i++) {
    trail[i] *= 0.98;
  }
}

function sampleTrail(x, y) {
  const ix = Math.floor(((x % W) + W) % W);
  const iy = Math.floor(((y % H) + H) % H);
  return trail[iy * W + ix];
}

This isn't Physarum (no food sources, different force balance). It isn't boids (no alignment, no cohesion -- just trail-following). It's a new system that combines elements. The agents are attracted to ink (trail following creates reinforcement) but repelled by each other (preventing pile-ups). The result? Lines. The agents naturally organize into flowing streams that trace filament-like paths across the canvas. Where two streams merge, the trail brightens and attracts more agents. Where a stream thins out, agents wander and eventually find other streams. The network of lines that forms is genuinely beautiful and it's not something I could have predicted from reading the rules.

That gap between the rules and the output is the whole point.

When NOT to use emergence

This is important because emergence isn't always the right tool. If you know exactly what you want the output to look like -- a specific pattern, a particular arrangement, a precise shape -- emergence is a terrible approach. You can't control the output directly. You can nudge it by adjusting parameters, but the specific configuration is up to the system.

Use emergence when:

  • You want to discover visual forms you couldn't have imagined
  • You want organic, natural-looking structure (biological, geological, fluid)
  • You want behavior that adapts and responds to changing conditions
  • You want variation -- every run producing a slightly different result

Use deterministic code when:

  • You need exact reproducibility (same output every time)
  • You need a specific composition (logo, typographic layout, precise geometry)
  • You need guaranteed performance (emergent systems can have unpredictable computational cost when populations explode)
  • The visual goal is well-defined enough to code directly

Most generative art projects use BOTH. A deterministic composition framework (where things go) populated with emergent details (how things look). The fractal tree from ep054 has a deterministic branching structure but emergent variation in leaf placement. The ecosystem from ep060 has deterministic energy conversion ratios but emergent population dynamics.

The artist's role in emergent art

So if the system generates itself, what does the artist do? Everything except the final image.

You design the rules. You choose the initial conditions. You set the parameters. You pick the color map. You decide when to stop the simulation. You curate -- running it hundreds of times and selecting the outputs that resonate. The system explores a possibility space. You navigate that space by adjusting knobs. And you select the destinations worth keeping.

This is fundamentally different from traditional art where you place every mark. It's closer to gardening than painting. You prepare the soil, plant seeds, control water and sunlight, and then you see what grows. You don't control the shape of each leaf. But you absolutely control what kind of garden it is.

Let's build a minimal curation tool -- a system that generates variants and lets you compare them:

function generateVariant(seed) {
  // deterministic RNG from seed
  let s = seed;
  function rand() {
    // Math.imul keeps the 32-bit product exact; a plain * loses precision here
    s = (Math.imul(s, 1103515245) + 12345) & 0x7fffffff;
    return s / 0x7fffffff;
  }

  const particles = [];
  for (let i = 0; i < 200; i++) {
    particles.push({
      x: rand() * W,
      y: rand() * H,
      dir: rand() * Math.PI * 2,
      turnRate: 0.1 + rand() * 0.5,
      depositRate: 0.3 + rand() * 0.7
    });
  }

  const field = new Float32Array(W * H);
  const decay = 0.95 + rand() * 0.04;
  const senseDistance = 5 + rand() * 15;

  return { particles, field, decay, senseDistance, seed };
}

// generate 4 variants, render each to a quadrant
function showVariants() {
  const seeds = [42, 137, 256, 999];
  const quadW = W / 2;
  const quadH = H / 2;

  for (let q = 0; q < 4; q++) {
    const variant = generateVariant(seeds[q]);

    // run 500 steps
    for (let step = 0; step < 500; step++) {
      for (const p of variant.particles) {
        // sense and move (simplified Physarum-like)
        const ahead = sampleField(variant.field,
          p.x + Math.cos(p.dir) * variant.senseDistance,
          p.y + Math.sin(p.dir) * variant.senseDistance, W, H);

        // hold course when the sensed trail ahead is strong, wander otherwise
        p.dir += (Math.random() - 0.5) * p.turnRate * (ahead > 0.5 ? 0.2 : 1.0);

        p.x += Math.cos(p.dir) * 1.2;
        p.y += Math.sin(p.dir) * 1.2;
        p.x = ((p.x % W) + W) % W;
        p.y = ((p.y % H) + H) % H;

        const ix = Math.floor(p.x);
        const iy = Math.floor(p.y);
        variant.field[iy * W + ix] += p.depositRate;
      }

      // decay
      for (let i = 0; i < W * H; i++) {
        variant.field[i] *= variant.decay;
      }
    }

    // render to quadrant (renderToQuadrant is left to you: any of the
    // field-to-pixels loops from earlier episodes, offset into quadrant q)
    renderToQuadrant(variant.field, q, quadW, quadH);
  }
}

function sampleField(field, x, y, w, h) {
  const ix = Math.floor(((x % w) + w) % w);
  const iy = Math.floor(((y % h) + h) % h);
  return field[iy * w + ix];
}

Four different random seeds, four different emergent patterns. Same rules, different initial conditions and parameters derived from the seed. As the artist you look at all four and decide which one to keep, refine, or use as a starting point for further exploration. This is how generative artists actually work -- generating batches, curating, adjusting, generating more. The algorithm is the collaborator. Your taste is the filter.
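One detail worth checking: the whole curation workflow collapses if the same seed doesn't regenerate the same variant. This standalone sketch verifies the LCG is reproducible (I've swapped the raw multiply for Math.imul, since 1103515245 times a 31-bit state overflows JavaScript's exact integer range):

```javascript
// Seeded LCG stream factory; Math.imul keeps the 32-bit product exact,
// which a plain `*` would not for states this large.
function makeRand(seed) {
  let s = seed;
  return function rand() {
    s = (Math.imul(s, 1103515245) + 12345) & 0x7fffffff;
    return s / 0x7fffffff;
  };
}

function sample(seed, n) {
  const r = makeRand(seed);
  const out = [];
  for (let i = 0; i < n; i++) out.push(r());
  return out;
}

const a = sample(42, 8);
const b = sample(42, 8);   // same seed: identical stream
const c = sample(137, 8);  // different seed: different stream
```

Once reproducibility holds, a curation note can be as short as "seed 137, 500 steps" and you can reconstruct the exact piece later.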

Natural precedents

Every emergent system we've coded has a natural counterpart. This isn't a coincidence -- it's because emergence is how nature builds complex things from simple parts.

  • Cellular automata (ep047-049): crystal growth, mineral patterns, zebra stripes, seashell markings. All follow local interaction rules between cells or molecules.
  • Boids flocking (ep050-051): bird murmurations, fish schooling, herding behavior. Three-force models match observed animal movement with high accuracy.
  • Reaction-diffusion (ep052-053): Turing patterns in animal coats (leopard spots, angelfish stripes), chemical oscillations (BZ reaction), morphogenesis in embryo development.
  • L-systems (ep054-055): plant growth, branching structures in lungs and blood vessels, river delta networks. Recursive grammars match botanical structure across species.
  • Agent crawlers (ep056): root growth, fungal mycelium, neural axon pathfinding. Organisms that navigate by sensing local chemical gradients.
  • Erosion (ep057): actual geological erosion, river formation, glacier carving, wind-sculpted dunes. Physical processes that shape terrain through local material transport.
  • Swarm intelligence (ep058): ant colony foraging, bee pollination networks, immune system response. Collective computation in biological colonies.
  • Waves (ep059): water surface waves, sound propagation, light diffraction, earthquake seismology. The wave equation describes all of them.

Nature doesn't have a rendering engine. It doesn't call requestAnimationFrame. But it does have particles (atoms, molecules, cells, organisms) following local rules (physics, chemistry, biology) without global coordination. The complexity of life -- from protein folding to ecosystem dynamics to consciousness -- is emergence all the way up.

This is why emergent art looks "natural." It's not imitating nature's output. It's imitating nature's process. A reaction-diffusion simulation doesn't paint spots that look like a leopard. It runs the same math that actually makes leopard spots. The resemblance isn't cosmetic -- it's mechanistic.

The philosophical edge

Here's where it gets weird. And I mean genuinely weird, not just "oh cool, math produces patterns."

When a cellular automaton produces a glider -- a pattern that moves across the grid, maintains its shape, and can interact with other gliders -- is the glider "real"? It's not a thing in the code. There's no glider object, no glider class, no variable called glider. It's a pattern in the cell states that we, as observers, recognize and name. The cells don't know they're part of a glider. The glider exists only in our description of the system, not in the system's description of itself.

And yet the glider has causal power. It moves. It collides with things. It's destroyed by some interactions and passes through others. It behaves AS IF it's a thing. This is the philosophical puzzle of emergence: patterns that act like objects without being objects.

The same question applies to everything we've built. Is the "flock" in the boids simulation a real entity? The predator-prey oscillation in the ecosystem -- does it exist, or is it just a pattern in the population counts that we've decided to call a cycle? When the ant colony "finds" the shortest path, who found it? Not any individual ant. The colony? But the colony is just a bunch of ants. There's no colony entity with a brain.

I don't have answers. Philosophers have been arguing about this for decades and haven't settled it either. But I think it's worth sitting with these questions as a creative coder because they change how you relate to your own work. You're not just making pretty pictures. You're creating systems that exhibit behavior you didn't program. Something emerges that you didn't put in. Where did it come from?

Creative exercise: design your own

Time to put the framework into practice. Design an emergent system that's genuinely yours -- not a variant of boids, not a tweaked CA, not Physarum with different colors. Something new.

Here's a starter template. Fill in the sensing, force, and deposit functions with your own invention:

const W = 500;
const H = 500;
const field = new Float32Array(W * H);

class EmergentAgent {
  constructor() {
    this.x = Math.random() * W;
    this.y = Math.random() * H;
    this.dir = Math.random() * Math.PI * 2;
    this.state = 0;  // internal state -- use for whatever you want
    this.energy = 1.0;
  }

  sense(field, agents) {
    // YOUR RULES HERE
    // what does this agent perceive about its local environment?
    // return forces or desired direction change
    return { turn: 0, speedMod: 1.0 };
  }

  act(field) {
    // YOUR RULES HERE
    // how does this agent modify the environment?
    // deposit, erase, transform the field
    const ix = Math.floor(this.x);
    const iy = Math.floor(this.y);
    if (ix >= 0 && ix < W && iy >= 0 && iy < H) {
      field[iy * W + ix] += 0.5;
    }
  }

  update(field, agents) {
    const perception = this.sense(field, agents);
    this.dir += perception.turn;
    const speed = 1.0 * perception.speedMod;

    this.x += Math.cos(this.dir) * speed;
    this.y += Math.sin(this.dir) * speed;
    this.x = ((this.x % W) + W) % W;
    this.y = ((this.y % H) + H) % H;

    this.act(field);
  }
}

const agents = [];
for (let i = 0; i < 400; i++) {
  agents.push(new EmergentAgent());
}

function simulate() {
  for (const a of agents) {
    a.update(field, agents);
  }

  // field decay (essential for dynamics)
  for (let i = 0; i < W * H; i++) {
    field[i] *= 0.97;
  }
}

Some ideas to get you started:

  • Agents that avoid their own trail but follow others' trails (inverse stigmergy)
  • Agents that change state (color, behavior) based on field density -- one behavior in low-density areas, different behavior in high-density
  • Agents that "eat" the field (subtract instead of add) and starve when the field is depleted
  • Two species of agents with different rules occupying the same field, competing or cooperating
  • Agents that align their direction to the local field gradient (moving uphill or downhill in the concentration landscape)

The key: keep the rules simple and local. Resist the urge to add more rules. Three rules maximum. Run it. See what happens. If it's boring (uniform or noisy), adjust one parameter at a time. If something interesting appears, freeze the parameters and watch it longer. Document what you see versus what you expected -- that gap is where the learning is.
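To make the last idea on that list concrete, here's a hedged sketch of a sense() body that climbs the field gradient: two angled sensors, turn toward the stronger reading. The sensor angle, distance, and turn gain are arbitrary choices of mine, and the test field (concentration increasing to the right) exists only to demonstrate the behavior:

```javascript
// Gradient-climbing sense(): two angled sensors, turn toward the higher reading.
const W = 100, H = 100;
const field = new Float32Array(W * H);
// demo field: concentration increases to the right
for (let y = 0; y < H; y++)
  for (let x = 0; x < W; x++)
    field[y * W + x] = x;

function sampleField(f, x, y) {
  const ix = Math.floor(((x % W) + W) % W);
  const iy = Math.floor(((y % H) + H) % H);
  return f[iy * W + ix];
}

function sense(agent, f) {
  const sAngle = 0.6, sDist = 8, gain = 0.3;
  const left = sampleField(f,
    agent.x + Math.cos(agent.dir - sAngle) * sDist,
    agent.y + Math.sin(agent.dir - sAngle) * sDist);
  const right = sampleField(f,
    agent.x + Math.cos(agent.dir + sAngle) * sDist,
    agent.y + Math.sin(agent.dir + sAngle) * sDist);
  // turn toward the stronger reading (flip the comparison to descend instead)
  return { turn: right > left ? gain : right < left ? -gain : 0, speedMod: 1.0 };
}

// heading +y: the (dir - sAngle) sensor points more toward +x (uphill),
// so the returned turn rotates the heading toward +x
const agent = { x: 50, y: 50, dir: Math.PI / 2 };
const turnDecision = sense(agent, field).turn;
```

Flip the comparison and the same agent flees concentration instead -- one sign change, qualitatively different collective behavior.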

Looking back and looking forward

This episode closes a chapter. Fourteen episodes of emergent systems -- the longest arc in this series so far. We started with the simplest possible cellular automaton (Wolfram's elementary rules, ep047) and ended with a three-layer ecosystem combining cellular growth, flocking, hunting, energy metabolism, and seasonal variation (ep060). Along the way we implemented algorithms that have real applications in optimization (ant colony), biology (reaction-diffusion), ecology (predator-prey), physics (wave equation), and computer graphics (L-system trees, Physarum networks).

The through-line across all of it: complexity from simplicity. Local rules producing global structure. No central control. No blueprint. Just agents and interactions and time.

Everything we've done so far has been flat. Two-dimensional. Pixels on a canvas, cells in a grid, particles moving on a plane. But the world has depth. Objects occlude each other. Light bounces off surfaces at angles. Shapes have volume and cast shadows. The next part of this series steps off the flat plane and into three dimensions. Different tools, different math, different creative possibilities -- but the same fundamental approach of writing code that produces visuals you couldn't have made by hand.

It comes down to this...

  • All fourteen emergent systems we built share the same skeleton: many agents, local rules, no global knowledge, indirect interaction, and structure that appears only at the collective level. Emergence is the pattern -- from cellular automata to ecosystems
  • Weak emergence (surprising but predictable from rules) describes everything we've coded. Strong emergence (genuinely irreducible novelty) is the philosophical version that may or may not exist. Both are interesting but weak emergence is already powerful enough to produce endlessly surprising visual output
  • The edge of chaos is the parameter sweet spot where emergent behavior is most interesting -- not frozen order, not random noise, but the narrow band between them. Conway's Game of Life sits at this edge. Finding it in your own systems requires experimentation and tuning
  • Self-organized criticality (the sandpile model) shows that some systems drive themselves to the critical state without tuning. Drop grains one at a time, the pile evolves to a state where avalanches of all sizes occur following a power law distribution
  • The framework for designing emergent systems: choose agents, define local rules, include competing forces (attraction vs repulsion at different scales), add feedback between agents and environment, tune to the edge of chaos
  • Every emergent system we coded has a natural counterpart because nature uses the same mechanism -- local rules without global control produce complex biological structure from molecules to ecosystems
  • The artist's role in emergent art is designing rules, setting initial conditions, adjusting parameters, and curating outputs. You don't control the final image -- you control the process that generates it. Gardening, not painting
  • The emergent systems arc is complete. Fourteen episodes from elementary cellular automata to full ecosystems. Next: we leave the flat plane behind and step into three dimensions

Sallukes! Thanks for reading.

X

@femdev