Learn Creative Coding (#8) - Mini-Project: Generative Poster

Seven episodes in and we've covered shapes, animation, randomness, grids, interactivity, and color theory. Time to smash all of it together into one project that actually produces something you could print and frame.
We're building a generative poster system. Not a toy — a proper composition engine with deterministic randomness, golden ratio placement, noise-driven flow fields, perceptually balanced color, and high-res export. Every click generates a unique poster. The good ones will genuinely surprise you.
Fair warning: this one goes deep. I'm going to show you the math behind things p5.js normally hides from you. If you've been comfortable so far — good, because comfortable ends here :-)
Planning the piece
Before I write a single line, I think about three things:
- Structure — what gives the piece visual order? (Grid, golden ratio anchors, alignment)
- Variation — what changes between runs? (Color, position, size, flow field seed)
- Palette — what colors, and are they perceptually balanced? (Decide early, not as an afterthought)
For this poster: a grid-based composition with geometric shapes placed at golden ratio focal points, noise-driven flow lines adding organic texture, a perceptually balanced warm palette, and deterministic seeds so you can recreate any poster you like. Like a Swiss design poster that got fed through a physics simulator.
Deterministic randomness: building your own PRNG
Here's the thing about random() in p5.js — it's a black box. Sure, randomSeed(42) makes runs reproducible, but do you know what algorithm is running underneath? Can you serialize the state and resume it later? Can you run two independent random streams simultaneously?
No. So let's build our own.
The xoshiro128** (pronounced "xo-shee-ro") algorithm belongs to the family behind the default generators in several modern languages — .NET, Julia, and Java 17 all ship xoshiro variants. It was designed by David Blackman and Sebastiano Vigna in 2018, and it's both fast and statistically excellent. The ** in the name refers to the output scrambler it uses (multiply, rotate, multiply). Here's the full implementation:
class Xoshiro128 {
  constructor(seed) {
    // initialize state from a single seed using splitmix32
    // (you need a seeding function to expand one number into four)
    let s = seed >>> 0;
    this.state = new Uint32Array(4);
    for (let i = 0; i < 4; i++) {
      s += 0x9e3779b9; // golden ratio constant
      let z = s;
      z = Math.imul(z ^ (z >>> 16), 0x85ebca6b);
      z = Math.imul(z ^ (z >>> 13), 0xc2b2ae35);
      z = (z ^ (z >>> 16)) >>> 0;
      this.state[i] = z;
    }
  }

  // core xoshiro128** generator
  next() {
    const s = this.state;
    // scrambler: multiply s[1]*5, rotate left 7, multiply by 9
    const result = (Math.imul(this.rotl(Math.imul(s[1], 5), 7), 9)) >>> 0;
    // state update (the "xoshiro" part - XOR, shift, rotate)
    const t = s[1] << 9;
    s[2] ^= s[0];
    s[3] ^= s[1];
    s[1] ^= s[2];
    s[0] ^= s[3];
    s[2] ^= t;
    s[3] = this.rotl(s[3], 11);
    return result;
  }

  rotl(x, k) {
    return ((x << k) | (x >>> (32 - k))) >>> 0;
  }

  // normalized float in [0, 1)
  random() {
    return this.next() / 4294967296; // divide by 2^32
  }

  // float in [min, max)
  range(min, max) {
    return min + this.random() * (max - min);
  }

  // integer in [min, max] inclusive
  randInt(min, max) {
    return min + Math.floor(this.random() * (max - min + 1));
  }
}
What's happening here? The seeding phase uses splitmix32 — a simpler hash function that expands our single seed number into four 32-bit state values. That constant 0x9e3779b9 is the integer form of 2^32 / φ (golden ratio again — it shows up everywhere). The core next() function does three things: applies the ** scrambler to produce output, then updates internal state via XOR and bit shifts, then rotates one state word. The rotation in rotl wraps bits around — bit 31 becomes bit 0 — which is a single CPU instruction on modern hardware.
Why does this matter for art? Because now:
let rng = new Xoshiro128(42);
// this ALWAYS produces the exact same poster
let x = rng.range(0, 800); // always the same x
let y = rng.range(0, 1100); // always the same y
Give someone the seed number and they can reproduce your exact poster. Put the seed in the corner of the image. Gallery-ready generative art always includes the seed — it's like the edition number on a print.
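And remember those two questions from earlier — serializing state, running independent streams? Both are now trivial. A quick sketch (saveState and loadState are my own helper names, not methods of the class above):
// two fully independent streams: layout and color never interfere
let layoutRng = new Xoshiro128(42);
let colorRng = new Xoshiro128(1337);

// the entire generator state is just four 32-bit words
function saveState(rng) {
  return Array.from(rng.state);
}

function loadState(rng, saved) {
  rng.state = Uint32Array.from(saved);
}

let snapshot = saveState(layoutRng);
let a = layoutRng.random();
loadState(layoutRng, snapshot); // rewind the stream
let b = layoutRng.random();     // a === b — resumed exactly where we saved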
We'll use this throughout the rest of the project instead of p5's random().
Golden ratio composition
The golden ratio φ = (1 + √5) / 2 ≈ 1.618 isn't just a math curiosity — it defines where the human eye naturally rests in a composition. Photographers know this as the "phi grid", and it's been used in visual art since at least the Parthenon (447 BC, for the architecture nerds).
The focal points sit at the intersections of lines placed at 1/φ and 1 - 1/φ of the width and height:
function goldenPoints(w, h) {
  const phi = (1 + Math.sqrt(5)) / 2;
  const gx1 = w / phi; // ≈ 0.618 * width
  const gx2 = w - gx1; // ≈ 0.382 * width
  const gy1 = h / phi;
  const gy2 = h - gy1;
  return [
    { x: gx2, y: gy2, weight: 1.0 },  // top-left intersection (primary)
    { x: gx1, y: gy2, weight: 0.85 }, // top-right
    { x: gx2, y: gy1, weight: 0.85 }, // bottom-left
    { x: gx1, y: gy1, weight: 0.7 },  // bottom-right (weakest)
  ];
}
The weights represent visual importance — the top-left focal point gets the strongest elements because Western eyes scan left-to-right, top-to-bottom. Bottom-right is weakest.
But here's where it gets interesting. We don't just place shapes AT these points — we create a gravitational field where shapes cluster more densely near focal points. The probability of placing a shape decreases with distance from the nearest focal point, following an inverse-square falloff:
function focalDensity(x, y, focalPoints, falloff) {
  let density = 0;
  for (let fp of focalPoints) {
    let dx = x - fp.x;
    let dy = y - fp.y;
    let distSq = dx * dx + dy * dy;
    // inverse square with softening (prevents division by zero)
    density += fp.weight / (1 + distSq / (falloff * falloff));
  }
  return density;
}
The falloff parameter controls how tightly shapes cluster. Small falloff = tight clusters around focal points, large falloff = more even distribution. We're essentially modeling gravitational attraction — each focal point pulls shapes toward it proportional to weight / distance².
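If you want to see the field itself (like the heat-zone figure below), you can render focalDensity directly. A throwaway debug sketch, not part of the poster — debugDensityMap is my own name for it:
// paint the density field as a grayscale heat map (assumes HSB color mode)
function debugDensityMap(focalPoints, falloff) {
  noStroke();
  for (let x = 0; x < width; x += 8) {
    for (let y = 0; y < height; y += 8) {
      let d = focalDensity(x, y, focalPoints, falloff);
      fill(0, 0, constrain(d * 100, 0, 100)); // brighter = denser
      rect(x, y, 8, 8);
    }
  }
}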

The four golden ratio focal points with their inverse-square gravity fields visualized as heat zones. Shape density follows this field.
The palette: perceptually balanced
From episode 7 we know that equal HSB brightness does NOT produce equal perceived brightness. Yellow screams while blue hides. Let's fix this properly for our poster.
function buildPalette(rng) {
  colorMode(HSB, 360, 100, 100, 100);
  // start from a random base hue
  let baseHue = rng.range(0, 360);
  // golden angle distribution for maximally spread hues
  const goldenAngle = 137.508;
  let hues = [];
  for (let i = 0; i < 5; i++) {
    hues.push((baseHue + i * goldenAngle) % 360);
  }
  // correct brightness for perceptual uniformity
  let palette = hues.map(h => {
    let bri = correctedBrightness(h, 0.50);
    return {
      hue: h,
      sat: 70 + rng.range(0, 20),
      bri: bri,
    };
  });
  return palette;
}

function correctedBrightness(hue, targetLum) {
  // binary search for HSB brightness that gives target
  // perceptual luminance at this hue (ITU-R BT.709)
  let lo = 0, hi = 100;
  for (let i = 0; i < 20; i++) {
    let mid = (lo + hi) / 2;
    let c = color(hue, 80, mid);
    let r = red(c) / 255, g = green(c) / 255, b = blue(c) / 255;
    let lum = 0.2126 * r + 0.7152 * g + 0.0722 * b;
    if (lum < targetLum) lo = mid;
    else hi = mid;
  }
  return (lo + hi) / 2;
}

Top: same HSB brightness — yellow screams, blue vanishes. Bottom: BT.709 perceptual correction — all hues equally loud.
Five hues spread by the golden angle (they'll never cluster, no matter what base hue we start from), each brightness-corrected so they look equally "loud" to the eye. The luminance formula 0.2126R + 0.7152G + 0.0722B comes from the ITU-R BT.709 standard — the same specification used in HDTV broadcasting. Green dominates at 71.5% because the eye's sensitivity to brightness peaks around 555nm, squarely in the green: both the red-sensitive L-cones (peaking at ~560nm) and the green-sensitive M-cones (~530nm) respond strongly there, while the blue-sensitive S-cones (~420nm) make up only a few percent of all cones and contribute almost nothing to perceived brightness. Our poster palette compensates for this biological asymmetry.
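A sanity check worth running once: compute the BT.709 luminance of each corrected swatch. They should all land near the 0.5 target — near, not exactly on it, because the palette's saturation varies around the fixed 80 that correctedBrightness searches at. A debug snippet of my own, not part of the poster:
// verify the palette is perceptually level (run in HSB color mode)
function checkPalette(palette) {
  for (let p of palette) {
    let c = color(p.hue, p.sat, p.bri);
    let lum = 0.2126 * red(c) / 255
            + 0.7152 * green(c) / 255
            + 0.0722 * blue(c) / 255;
    console.log(`hue ${p.hue.toFixed(0)}: luminance ${lum.toFixed(3)}`);
  }
}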
Noise field flow lines
Here's where this poster goes from "nice grid layout" to "wait, how did code make that?"
A flow field is a 2D vector field where every point has a direction, defined by Perlin noise. If you drop a particle into this field and let it follow the local direction at each step, it traces a curved path — a flow line. Hundreds of these flow lines create organic, hair-like textures that look hand-drawn.
The math: at any point (x, y), the noise value noise(x * scale, y * scale) gives us a number between 0 and 1. Multiply by TWO_PI to get an angle. That angle is the direction a particle at that point should move.
function traceFlowLine(startX, startY, steps, stepSize, noiseScale) {
  let points = [];
  let x = startX;
  let y = startY;
  for (let i = 0; i < steps; i++) {
    points.push({ x, y });
    // sample the noise field to get direction
    // multiply by 2 for more curl — single TWO_PI gives
    // gentle curves, double gives tighter spirals
    let angle = noise(x * noiseScale, y * noiseScale) * TWO_PI * 2;
    // euler integration: step in that direction
    x += cos(angle) * stepSize;
    y += sin(angle) * stepSize;
    // stop if out of bounds (generous margin for edge flow)
    if (x < -80 || x > width + 80 || y < -80 || y > height + 80) break;
  }
  return points;
}
function drawFlowField(rng, palette, focalPoints) {
  let numLines = 800;
  let noiseScale = 0.0025;
  for (let i = 0; i < numLines; i++) {
    // bias starting positions toward focal points
    let sx, sy;
    let attempts = 0;
    do {
      sx = rng.range(-20, width + 20);
      sy = rng.range(-20, height + 20);
      attempts++;
    } while (rng.random() > focalDensity(sx, sy, focalPoints, 300) * 2.5
             && attempts < 30);
    let points = traceFlowLine(sx, sy, 120, 2.2, noiseScale);
    if (points.length < 15) continue;
    // pick color from palette — vary opacity and weight for depth
    let ci = rng.randInt(0, palette.length - 1);
    let c = palette[ci];
    let opacity = rng.range(12, 35);
    let weight = rng.range(0.4, 2.2);
    stroke(c.hue, c.sat * 0.55, c.bri * 0.9, opacity);
    strokeWeight(weight);
    noFill();
    beginShape();
    for (let p of points) {
      curveVertex(p.x, p.y);
    }
    endShape();
  }
}
What's actually happening mathematically is Euler integration of a velocity field. The continuous system is
dx/dt = cos(θ(x, y))
dy/dt = sin(θ(x, y))
where θ(x, y) = noise(x·s, y·s) · 4π is the angle field, and each loop iteration advances it by one Euler step:
xₙ₊₁ = xₙ + cos(θ(xₙ, yₙ)) · stepSize
yₙ₊₁ = yₙ + sin(θ(xₙ, yₙ)) · stepSize
This is the same numerical method used in fluid dynamics simulations — we're just using it to draw pretty lines instead of modeling airflow over a wing. The stepSize parameter controls spatial resolution: smaller steps = smoother curves but slower. 120 steps at size 2.2 gives lines about 264 pixels long, which works nicely at poster scale.

800 flow lines traced through a Perlin noise vector field — starting positions biased toward golden ratio focal points. Varying opacity and stroke weight creates depth.
The starting positions are biased toward the golden ratio focal points using rejection sampling — the do/while loop tries random positions and rejects them with a probability that grows with distance from the nearest focal point. This means flow lines cluster where the eye naturally looks. The * 2.5 factor controls how aggressively they cluster.
The shape layer
On top of the flow lines, a small number of geometric shapes placed directly at focal points provides structure. Less is more — instead of filling a grid, we scatter ~18 shapes tightly around the four golden ratio intersections. The flow field is the hero; the shapes are accents.
function drawShapeLayer(rng, palette, focalPoints) {
  let maxShapes = 18;
  let placedShapes = 0;
  for (let fpi = 0; fpi < focalPoints.length && placedShapes < maxShapes; fpi++) {
    let focal = focalPoints[fpi];
    // 4-5 shapes per focal point, scattered within radius
    let numHere = fpi < 2 ? 5 : 4;
    let radius = 120;
    for (let s = 0; s < numHere && placedShapes < maxShapes; s++) {
      let angle = rng.range(0, TWO_PI);
      let dist = rng.range(20, radius);
      let cx = focal.x + cos(angle) * dist;
      let cy = focal.y + sin(angle) * dist;
      // skip if out of bounds
      if (cx < 30 || cx > width - 30 || cy < 30 || cy > height - 30) continue;
      let n = noise(cx * 0.01, cy * 0.01);
      let shapeType = floor(n * 4); // 0-3
      let size = rng.range(35, 70);
      let ci = rng.randInt(0, palette.length - 1);
      let c = palette[ci];
      let hueShift = rng.range(-10, 10);
      push();
      translate(cx, cy);
      if (shapeType === 0) {
        // filled circle
        noStroke();
        fill((c.hue + hueShift + 360) % 360, c.sat * 0.9, c.bri, 55);
        ellipse(0, 0, size, size);
      } else if (shapeType === 1) {
        // arc (pac-man)
        noStroke();
        fill((c.hue + hueShift + 360) % 360, c.sat * 0.9, c.bri, 55);
        let startAngle = floor(rng.random() * 4) * HALF_PI;
        arc(0, 0, size, size, startAngle, startAngle + PI + HALF_PI);
      } else if (shapeType === 2) {
        // ring
        noFill();
        stroke((c.hue + hueShift + 360) % 360, c.sat * 0.8, c.bri, 50);
        strokeWeight(2.5);
        ellipse(0, 0, size, size);
      } else {
        // rectangle
        noStroke();
        fill((c.hue + hueShift + 360) % 360, c.sat * 0.9, c.bri, 50);
        rotate(floor(rng.random() * 4) * HALF_PI);
        rectMode(CENTER);
        rect(0, 0, size * 0.9, size * 0.4, 4);
      }
      pop();
      placedShapes++;
    }
  }
}
Four shape types, placed in tight clusters around each focal point with a 120px scatter radius. The shapes are semi-transparent (alpha 50-55) so the flow lines show through. The key insight: placing shapes directly at computed positions rather than filtering a grid gives you precise control over density and breathing room.
Accent lines and dot texture
Thin lines crossing the poster add subtle rhythm. The dot pattern goes down as the very first layer — a halftone-style texture that gives the background a print feel:
function drawDotPattern(darkColor) {
  noStroke();
  fill(darkColor.hue, 15, 75, 12);
  for (let x = 10; x < width; x += 14) {
    for (let y = 10; y < height; y += 14) {
      if (noise(x * 0.015 + 50, y * 0.015 + 50) > 0.5) {
        ellipse(x, y, 1.5, 1.5);
      }
    }
  }
}

function drawAccents(rng, palette) {
  stroke(palette[0].hue, 20, 40, 18);
  strokeWeight(0.8);
  for (let i = 0; i < 3; i++) {
    let y = rng.range(height * 0.15, height * 0.85);
    line(rng.range(0, width * 0.15), y, rng.range(width * 0.85, width), y);
  }
  for (let i = 0; i < 2; i++) {
    let x = rng.range(width * 0.2, width * 0.8);
    line(x, rng.range(0, height * 0.1), x, rng.range(height * 0.7, height * 0.95));
  }
}
Deep dive: high-resolution export and pixel density
When you call save('poster.png') in p5.js, you get an 800x1100 image. That's fine for screens but terrible for print. A 300 DPI poster at A3 size (297×420mm) needs 3508×4961 pixels. How do we get there?
pixelDensity() is the key, but most people don't understand what it actually does. Here's the math.
A canvas of width × height in CSS pixels actually has (width × pixelDensity) × (height × pixelDensity) physical pixels in the backing buffer. On a Retina display, pixelDensity() returns 2 by default — your 800×1100 canvas is really 1600×2200 pixels. When you call pixelDensity(4), it becomes 3200×4400 — approaching print resolution.
But there's a catch. Everything renders at the higher resolution: line widths, font sizes, shape coordinates all scale automatically through the canvas transform matrix. Your code doesn't change. EXCEPT for things measured in pixels: loadPixels()/pixels[] now has (width*4)*(height*4)*4 values instead of width*height*4. And performance: a 4x density means 16x the pixels to render. Flow lines with 800 curves at density 4 will pause for a few seconds.
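You can verify the backing-buffer math from the console — a throwaway check, assuming the sketch is currently at density 4:
// logical size stays 800×1100; the pixel buffer is density² times larger
pixelDensity(4);
loadPixels();
console.log(width, height);  // 800 1100 — CSS pixels, unchanged
console.log(pixels.length);  // (800*4) * (1100*4) * 4 = 56,320,000 values
console.log(pixels.length === (width * 4) * (height * 4) * 4); // true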
function exportHighRes(seed, density) {
  // save current pixel density
  let originalDensity = pixelDensity();
  pixelDensity(density);
  // recreate canvas at same logical size but higher physical res
  createCanvas(800, 1100);
  // regenerate with same seed = identical output
  let rng = new Xoshiro128(seed);
  generatePoster(rng);
  // physical resolution
  let physW = width * density;
  let physH = height * density;
  console.log(`Exporting at ${physW}x${physH} pixels (density ${density})`);
  console.log(`At 300 DPI: ${(physW/300*25.4).toFixed(0)}×${(physH/300*25.4).toFixed(0)}mm`);
  save(`poster-seed-${seed}-${physW}x${physH}.png`);
  // restore — createCanvas clears the screen, so redraw the preview
  pixelDensity(originalDensity);
  createCanvas(800, 1100);
  generateFromSeed(seed);
}
At density 4: 3200×4400 pixels, which at 300 DPI gives 271×373mm — a touch under A3 (297×420mm). At density 5: 4000×5500, or 339×466mm — covers A3 at 300 DPI with trim to spare (true A2 would need density 7). The deterministic seed means the high-res version is pixel-for-pixel identical to the preview, just sharper.
The formula for any target print size:
density = ceil(targetWidthMM / 25.4 * targetDPI / canvasWidth)
For A3 portrait at 300 DPI: ceil(297 / 25.4 * 300 / 800) = ceil(4.38) = 5.
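Wrapped as a tiny helper (densityFor is my name for it, not a p5 function):
// pixel density needed to hit a target print width at a given DPI
function densityFor(targetWidthMM, targetDPI, canvasWidthPx) {
  return Math.ceil(targetWidthMM / 25.4 * targetDPI / canvasWidthPx);
}

densityFor(210, 300, 800); // A4 portrait → 4
densityFor(297, 300, 800); // A3 portrait → 5
densityFor(420, 300, 800); // A2 portrait → 7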
Putting it all together
Here's the complete poster system. Every function we built above, wired together:
let currentSeed;
let rng;

function setup() {
  createCanvas(800, 1100);
  colorMode(HSB, 360, 100, 100, 100);
  // millis() is ~0 inside setup, so take the seed from the clock instead
  currentSeed = Date.now() % 1000000;
  generateFromSeed(currentSeed);
}

function generateFromSeed(seed) {
  currentSeed = seed;
  rng = new Xoshiro128(seed);
  noiseSeed(seed);
  generatePoster(rng);
}

function generatePoster(rng) {
  // 1. palette
  let palette = buildPalette(rng);
  let darkColor = { hue: palette[0].hue, sat: 40, bri: 15 };
  // 2. background
  background(palette[0].hue, 6, 97);
  // 3. focal points
  let fp = goldenPoints(width, height);
  // 4. layers (back to front)
  drawDotPattern(darkColor);        // subtle halftone texture
  drawFlowField(rng, palette, fp);  // the hero — 800 flow lines
  drawShapeLayer(rng, palette, fp); // ~18 shapes at focal points
  drawAccents(rng, palette);        // thin crossing lines
  drawSeedLabel(currentSeed);
}

function drawSeedLabel(seed) {
  // small seed number in bottom corner — the "edition number"
  fill(0, 0, 50);
  noStroke();
  textSize(9);
  textAlign(RIGHT, BOTTOM);
  text('seed: ' + seed, width - 15, height - 12);
}

function mousePressed() {
  generateFromSeed(Date.now() % 1000000);
}

function keyPressed() {
  if (key === 's') {
    save('poster-' + currentSeed + '.png');
  }
  if (key === 'h') {
    // high-res export at 4x density
    exportHighRes(currentSeed, 4);
  }
  if (key === 'r') {
    // re-enter a seed
    let input = prompt('Enter seed number:');
    if (input) generateFromSeed(parseInt(input));
  }
}
Click for a new poster. Press s to save at screen resolution, h for high-res (4x), r to replay a specific seed. The seed label in the bottom-right corner means you can always go back to a composition you liked.

Output of the complete poster system with seed 42. Flow field + focal point shapes + accent lines + dot texture, all composed around golden ratio intersections with a perceptually balanced palette.
Every click generates something completely different. Here are four seeds — same code, different palettes and flow patterns:

Seeds 42, 7, 256, and 1337. Same system, wildly different results. That's the power of deterministic generative art — infinite variety from one codebase.
Okay but what IS noise actually?
Since we've gone this far, let me show you what noise() actually IS. p5.js documents it as Perlin noise — the algorithm Ken Perlin refined in 2002 as "improved noise". It's not random — it's a deterministic function that generates smooth, continuous pseudorandom values.
The algorithm works in three steps:
Step 1 — Grid setup. Imagine a grid of integer coordinates. At every grid point, there's a random gradient vector — a direction, like a tiny arrow pointing somewhere. These are determined by the seed and never change.
Step 2 — Interpolation. For a point like noise(3.7, 2.4), find the four surrounding grid points: (3,2), (4,2), (3,3), (4,3). At each grid point, compute the dot product between that point's gradient vector and the vector FROM the grid point TO your sample point. This gives four scalar values.
Step 3 — Smoothing. Interpolate those four values using a quintic fade curve: f(t) = 6t⁵ - 15t⁴ + 10t³. Why not linear interpolation? Because linear produces visible grid artifacts — straight lines where cells meet. The quintic curve has zero first AND second derivatives at t=0 and t=1, which means the transitions between grid cells are smooth up to the second derivative. No seams. No visible grid.
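In code, the fade and the step-3 interpolation are tiny. A sketch — fade is the Horner form of Perlin's reference implementation, while smoothLerp is just my name for the faded interpolation:
// quintic fade: 6t⁵ - 15t⁴ + 10t³, written in Horner form
// f'(t) = 30t²(t-1)² and f''(t) = 60t(2t-1)(t-1) are both 0 at t=0 and t=1
function fade(t) {
  return t * t * t * (t * (t * 6 - 15) + 10);
}

// interpolate two neighboring dot products a and b with the faded t
function smoothLerp(a, b, t) {
  return a + fade(t) * (b - a);
}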
This is why noise at different scales looks self-similar — the same grid-and-interpolate process works at any frequency. When you write noise(x * 0.003) vs noise(x * 0.3), you're sampling the same function at different zoom levels. Stack multiple scales with decreasing amplitude and you get fractal Brownian motion (fBm):
function fbmNoise(x, y, octaves, lacunarity, gain) {
  // lacunarity: frequency multiplier per octave (typically 2.0)
  // gain: amplitude multiplier per octave (typically 0.5)
  let value = 0;
  let amplitude = 1;
  let frequency = 1;
  let maxValue = 0; // for normalization
  for (let i = 0; i < octaves; i++) {
    value += amplitude * noise(x * frequency, y * frequency);
    maxValue += amplitude;
    amplitude *= gain;
    frequency *= lacunarity;
  }
  return value / maxValue; // normalize to [0, 1]
}
With 4 octaves, lacunarity 2.0, gain 0.5: each octave doubles the frequency and halves the amplitude. The first octave gives broad structure, the second adds medium detail, the third adds fine texture, the fourth adds grain. Natural phenomena — clouds, terrain, marble — follow this exact pattern, which is why Perlin noise looks "natural". Perlin won an Academy Award (Technical Achievement, 1997) for this contribution to computer graphics. Not bad for a gradient grid with fancy interpolation.
You can swap noise() for fbmNoise() in the flow field to get dramatically more detailed flow patterns:
// in traceFlowLine, replace:
let angle = noise(x * noiseScale, y * noiseScale) * TWO_PI * 2;
// with:
let angle = fbmNoise(x * noiseScale, y * noiseScale, 4, 2.0, 0.5) * TWO_PI * 2;
The flow lines go from smooth curves to intricate, turbulent paths that look like wind patterns on a weather map. Same code, same structure — just a deeper noise function underneath.
What it boils down to...
This was a big one. Here's everything we built:
- xoshiro128** PRNG — a real random number generator with seedable, reproducible state, using splitmix32 for initialization and bit rotation for state updates
- Golden ratio composition — focal points at 1/φ intersections with inverse-square gravitational density falloff for shape placement
- Perceptually balanced palettes — golden angle hue distribution + ITU-R BT.709 luminance correction via binary search
- Noise field flow lines — Euler integration of particle trajectories through a Perlin noise angle field, with rejection sampling to bias starting positions toward focal points
- Fractal Brownian motion — stacking noise octaves with lacunarity and gain for self-similar detail at multiple scales
- High-res export — pixel density scaling with exact DPI-to-millimeter calculations for print-ready output
- The actual math inside noise() — gradient grids, dot products, quintic fade curves (6t⁵ - 15t⁴ + 10t³), and why Perlin won an Oscar for it
That's Phase 1 done. Eight episodes and you've gone from ellipse(200, 200, 50, 50) to building a deterministic generative art engine with proper composition theory, perceptual color science, and numerical integration. Not bad at all :-)
Sallukes! Thanks for reading.