Learn Creative Coding (#26) - Generative Typography

in StemSocial · 21 hours ago


cc-banner

Letters are shapes. Fonts are systems of shapes. And when you treat typography as raw visual material rather than just "text to read," incredible things happen. Letters can flow, dissolve, grow, morph, explode, and reconstruct. Text becomes texture. Words become landscapes.

Generative typography is one of the most immediately impressive creative coding techniques because everyone recognizes letters. When familiar shapes behave in unfamiliar ways, it creates a visceral reaction that abstract art sometimes can't match. A field of particles that spell out a word? People get it instantly. Text that assembles itself from scattered dots? That draws people in before they even know what the code is doing.

This episode combines almost everything we've built so far: Canvas text measurement, pixel manipulation (remember reading pixel data from a hidden canvas in episode 10?), particle systems from episode 11, noise from episode 12, and the composition thinking from last episode. We'll turn text into point clouds, animate letter transitions, create typographic flow fields, and build a morph effect between words. It's one of my favorite topics in all of creative coding :-)

Text as a pixel source

The core technique behind generative typography: draw text to a hidden canvas, read the pixel data, and use those pixel positions as input for something else entirely. This converts letterforms into a density map you can use for anything -- particles, lines, noise fields, whatever.

It's the same principle we used back in episode 10 for pixel manipulation, but applied to text instead of photographs. Draw something, read the pixels, transform the data. The only difference is that this time we're drawing text to get our source pixels.

function getTextPixels(text, fontSize, w, h) {
  // create an offscreen canvas we'll never actually show
  let offscreen = document.createElement('canvas');
  offscreen.width = w;
  offscreen.height = h;
  let ctx = offscreen.getContext('2d');

  // a fresh canvas is transparent black, so fill it black explicitly,
  // then draw white text on top
  ctx.fillStyle = 'black';
  ctx.fillRect(0, 0, w, h);
  ctx.fillStyle = 'white';
  ctx.font = `bold ${fontSize}px Arial`;
  ctx.textAlign = 'center';
  ctx.textBaseline = 'middle';
  ctx.fillText(text, w / 2, h / 2);

  // read the pixel data
  let imageData = ctx.getImageData(0, 0, w, h);
  let pixels = imageData.data;

  // collect positions where text exists
  let positions = [];
  let step = 4;  // sample every 4th pixel for performance

  for (let y = 0; y < h; y += step) {
    for (let x = 0; x < w; x += step) {
      let i = (y * w + x) * 4;
      if (pixels[i] > 128) {  // bright pixel = text is here
        positions.push({ x, y });
      }
    }
  }

  return positions;
}

That step = 4 is important. Sampling every single pixel gives you way too many points for most effects (on an 800x400 canvas that's potentially 320,000 positions). Every 4th pixel is a good balance between density and performance. For denser text effects, use step = 2. For sparse, airy looks, try step = 6 or step = 8. The step size is essentially the resolution of your typographic effect.
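If the index arithmetic and the effect of step look opaque, here they are isolated as plain JavaScript (the helper names are mine, not part of the sketch):

```javascript
// Index of pixel (x, y) in a canvas-style RGBA array of width w.
// Each pixel occupies 4 consecutive bytes: R, G, B, A.
function pixelIndex(x, y, w) {
  return (y * w + x) * 4;
}

// Rough count of candidate positions getTextPixels visits for a given step.
function sampleCount(w, h, step) {
  return Math.ceil(w / step) * Math.ceil(h / step);
}

console.log(sampleCount(800, 400, 1)); // every pixel: 320,000 candidates
console.log(sampleCount(800, 400, 4)); // step = 4: 20,000 candidates
```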

Now you have an array of {x, y} coordinates that form the shape of the text. Replace the text with literally anything. Dots, circles, lines, particles, flow field seeds. The text is just the map. What you draw at those positions is up to you.

Particle text with spring physics

Allez, let's do the most satisfying version first. Scatter particles at random positions, then spring them toward the text positions. The text assembles itself.

let particles = [];
let textPositions;

function setup() {
  createCanvas(800, 400);
  textPositions = getTextPixels('HELLO', 200, 800, 400);

  for (let pos of textPositions) {
    particles.push({
      x: random(width),   // start at random position
      y: random(height),
      targetX: pos.x,
      targetY: pos.y,
      vx: 0,
      vy: 0,
      size: random(2, 5)
    });
  }
}

function draw() {
  background(15);

  for (let p of particles) {
    // spring force toward target
    let fx = (p.targetX - p.x) * 0.05;
    let fy = (p.targetY - p.y) * 0.05;
    p.vx += fx;
    p.vy += fy;
    p.vx *= 0.85;  // damping
    p.vy *= 0.85;
    p.x += p.vx;
    p.y += p.vy;

    fill(100, 200, 255, 200);
    noStroke();
    ellipse(p.x, p.y, p.size);
  }
}

Each particle starts scattered across the canvas and springs toward its target position in the word "HELLO." The spring constant (0.05) controls how fast they converge. The damping (0.85) prevents infinite oscillation -- without it the particles would overshoot and bounce back and forth forever. This is the same spring physics from episode 18, applied to thousands of particles simultaneously.
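The update rule is compact enough to pull out as plain JavaScript and watch converge (the helper name is mine, not part of the sketch above):

```javascript
// One Euler step of the spring-toward-target update: acceleration
// proportional to displacement, then velocity damping, then integration.
function springStep(p, k = 0.05, damping = 0.85) {
  p.vx = (p.vx + (p.targetX - p.x) * k) * damping;
  p.vy = (p.vy + (p.targetY - p.y) * k) * damping;
  p.x += p.vx;
  p.y += p.vy;
  return p;
}

// A particle far from its target overshoots, wobbles, and settles.
let p = { x: 0, y: 0, vx: 0, vy: 0, targetX: 100, targetY: 50 };
for (let i = 0; i < 300; i++) springStep(p);
console.log(p.x.toFixed(2), p.y.toFixed(2)); // settles near 100, 50
```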

The motion is beautiful. Particles swarm inward, overshoot slightly, wobble, settle. The text assembles itself organically. Some particles arrive fast (the ones that started close to their target), others take longer. That staggered arrival gives the animation depth without you having to choreograph anything.

Mouse disruption

Now let's add interactivity. Push particles away from the mouse cursor, then let them spring back:

function draw() {
  background(15);

  for (let p of particles) {
    // repel from mouse (d, not dist: avoids shadowing p5's dist() function)
    let dx = p.x - mouseX;
    let dy = p.y - mouseY;
    let d = Math.sqrt(dx * dx + dy * dy);

    // the d > 0 check guards against dividing by zero when a
    // particle sits exactly on the cursor
    if (d > 0 && d < 100) {
      let force = (100 - d) / 100;
      p.vx += (dx / d) * force * 3;
      p.vy += (dy / d) * force * 3;
    }

    // spring toward target position
    p.vx += (p.targetX - p.x) * 0.03;
    p.vy += (p.targetY - p.y) * 0.03;
    p.vx *= 0.9;
    p.vy *= 0.9;
    p.x += p.vx;
    p.y += p.vy;

    fill(100, 200, 255, 200);
    noStroke();
    ellipse(p.x, p.y, p.size);
  }
}

Run your mouse through the text. The letters explode outward, then reassemble. Deeply satisfying. The repulsion radius (100 pixels) and force multiplier (3) control how dramatic the disruption feels. A bigger radius makes the mouse feel like a wrecking ball. A smaller radius feels like a gentle poke. The spring constant here is slightly lower (0.03 vs 0.05) which means the particles take longer to recover. That gives you more time to appreciate the destruction before the text heals itself :-)

Notice how dx and dy get divided by the distance before being scaled -- that converts the offset vector into a unit direction, which we then multiply by force. Without that normalization, the push would scale with the raw offset: particles right next to the cursor, where the repulsion should be strongest, would barely move, while particles near the edge of the radius would get shoved hardest. Normalizing first and keeping all the distance falloff in the force term keeps the physics smooth.
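The normalize-then-scale step is small enough to pull out as a standalone helper (the function name is mine):

```javascript
// Repulsion impulse away from (cx, cy): normalize the offset to a unit
// direction, then scale by a force that falls off linearly with distance.
function repel(px, py, cx, cy, radius = 100, strength = 3) {
  const dx = px - cx;
  const dy = py - cy;
  const d = Math.sqrt(dx * dx + dy * dy);
  if (d === 0 || d >= radius) return { fx: 0, fy: 0 }; // outside, or exactly on the cursor
  const force = ((radius - d) / radius) * strength;
  return { fx: (dx / d) * force, fy: (dy / d) * force };
}

// 10px from the cursor: a strong horizontal push, no vertical component.
console.log(repel(110, 100, 100, 100)); // roughly { fx: 2.7, fy: 0 }
```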

Text made of lines

Different rendering, completely different feel. Instead of dots at each text position, connect nearby positions with lines:

function setup() {
  createCanvas(800, 400);
  let positions = getTextPixels('CODE', 180, 800, 400);

  background(15);
  stroke(100, 200, 255, 20);
  strokeWeight(0.5);

  // connect each point to nearby neighbors
  for (let i = 0; i < positions.length; i++) {
    let p = positions[i];

    for (let j = i + 1; j < positions.length; j++) {
      let q = positions[j];
      let d = dist(p.x, p.y, q.x, q.y);

      if (d < 15) {
        line(p.x, p.y, q.x, q.y);
      }
    }
  }
}

The text appears as a mesh of fine lines. Ghostly, almost holographic. The low alpha (20 out of 255) is crucial -- where many lines overlap, the brightness accumulates. Dense areas of the letterform glow brighter than sparse areas. The connection distance (15 pixels) controls the mesh density. Increase it for a denser web, decrease for something more skeletal.

Be warned: this has O(n^2) complexity -- every point checks every other point. With step=4 in getTextPixels, you might have a few thousand points, which means millions of distance checks. It runs fine as a one-time setup() computation, but you wouldn't want to recalculate this every frame. For animated line-mesh text, you'd need a spatial hash grid to check only nearby neighbors. But for static renders, this is fine.
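A minimal sketch of that spatial-hash idea, assuming a cell size equal to the connection distance (names are mine):

```javascript
// Bucket points into square grid cells of size `cell`; neighbor queries
// then inspect only the 3x3 block of cells around a point instead of
// every point in the array.
function buildGrid(points, cell) {
  const grid = new Map();
  for (const p of points) {
    const key = Math.floor(p.x / cell) + ',' + Math.floor(p.y / cell);
    if (!grid.has(key)) grid.set(key, []);
    grid.get(key).push(p);
  }
  return grid;
}

function neighbors(grid, p, cell, maxDist) {
  const cx = Math.floor(p.x / cell);
  const cy = Math.floor(p.y / cell);
  const found = [];
  for (let gx = cx - 1; gx <= cx + 1; gx++) {
    for (let gy = cy - 1; gy <= cy + 1; gy++) {
      for (const q of grid.get(gx + ',' + gy) || []) {
        if (q !== p && Math.hypot(p.x - q.x, p.y - q.y) < maxDist) {
          found.push(q);
        }
      }
    }
  }
  return found;
}
```

With cell equal to maxDist, any point within maxDist of p is guaranteed to sit in one of the nine surrounding cells, so the brute-force pass over all n points shrinks to a handful of bucket lookups.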

Noise-displaced text

Draw text to a buffer, then displace each pixel using Perlin noise. The text stays recognizable but warps and flows like it's underwater:

function setup() {
  createCanvas(800, 400);

  // render text to a graphics buffer
  let pg = createGraphics(800, 400);
  pg.pixelDensity(1);  // keep pixels[] indexing simple on high-DPI displays
  pg.background(0);
  pg.fill(255);
  pg.textSize(200);
  pg.textAlign(CENTER, CENTER);
  pg.textFont('Georgia');
  pg.text('NOISE', 400, 200);

  // read buffer pixels and displace them
  pg.loadPixels();
  background(15);

  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      let i = (y * width + x) * 4;

      if (pg.pixels[i] > 128) {
        // displace this pixel using noise
        let noiseVal = noise(x * 0.01, y * 0.01);
        let displaceX = x + (noiseVal - 0.5) * 30;
        let displaceY = y + (noise(x * 0.01 + 100, y * 0.01) - 0.5) * 30;

        let hue = map(noiseVal, 0, 1, 180, 280);
        stroke(color(`hsl(${hue}, 70%, 60%)`));
        point(displaceX, displaceY);
      }
    }
  }
}

The text is recognizable but warped. Like looking at it through running water or heat haze. The noise scale (0.01) controls how smooth the displacement field is -- lower values give gentle, broad warps, higher values give choppy, pixelated distortion. The displacement amount (30 pixels here) controls how far pixels can drift from their original position. Push it to 50 or 60 and the text starts to dissolve. Pull it back to 10 and it's barely noticeable. That sweet spot around 20-40 is where it's clearly distorted but still readable.

The two different noise samples (one for X displacement, one for Y) ensure the displacement isn't correlated in both axes. That + 100 offset in the second noise call shifts us to a different region of the noise field. Same technique we used back in episode 12 when building flow fields -- offset sampling to get independent noise values from the same function.
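p5's noise() is Perlin noise, but the offset-sampling trick works with any smooth deterministic noise function. Here's a toy value-noise stand-in (entirely my own construction, not p5's implementation) that makes the idea concrete:

```javascript
// Deterministic pseudo-random value in [0, 1) for an integer lattice point.
function hash2(ix, iy) {
  let h = Math.imul(ix, 374761393) + Math.imul(iy, 668265263);
  h = Math.imul(h ^ (h >>> 13), 1274126177);
  return ((h ^ (h >>> 16)) >>> 0) / 4294967296;
}

// Value noise: bilinear blend of the four surrounding lattice values,
// with smoothstep fading so the result varies smoothly.
function valueNoise(x, y) {
  const ix = Math.floor(x), iy = Math.floor(y);
  const fx = x - ix, fy = y - iy;
  const sx = fx * fx * (3 - 2 * fx);
  const sy = fy * fy * (3 - 2 * fy);
  const top = hash2(ix, iy) + sx * (hash2(ix + 1, iy) - hash2(ix, iy));
  const bot = hash2(ix, iy + 1) + sx * (hash2(ix + 1, iy + 1) - hash2(ix, iy + 1));
  return top + sy * (bot - top);
}

// Sampling a far-away region (the "+ 100" trick) gives an unrelated value.
console.log(valueNoise(1.5, 2.5), valueNoise(1.5 + 100, 2.5));
```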

Kinetic typography: letters in motion

Everything so far has been static. Let's animate individual letters -- give each one its own position, offset, and rotation that evolves over time.

let letters;
let word = 'CREATIVE';

function setup() {
  createCanvas(800, 400);
  textFont('monospace');
  textSize(80);
  textAlign(CENTER, CENTER);

  letters = [];
  let totalWidth = textWidth(word);
  let startX = (width - totalWidth) / 2;

  for (let i = 0; i < word.length; i++) {
    let charWidth = textWidth(word[i]);
    letters.push({
      char: word[i],
      baseX: startX + textWidth(word.substring(0, i)) + charWidth / 2,
      baseY: height / 2,
      offsetY: 0,
      rotation: 0
    });
  }
}

function draw() {
  background(15);

  for (let i = 0; i < letters.length; i++) {
    let L = letters[i];

    // wave offset based on index and time
    L.offsetY = sin(frameCount * 0.05 + i * 0.5) * 30;
    L.rotation = sin(frameCount * 0.03 + i * 0.4) * 0.1;

    // color cycle per letter
    let hue = (frameCount * 2 + i * 40) % 360;

    push();
    translate(L.baseX, L.baseY + L.offsetY);
    rotate(L.rotation);

    fill(color(`hsl(${hue}, 80%, 65%)`));
    noStroke();
    text(L.char, 0, 0);
    pop();
  }
}

Each letter bobs independently in a wave pattern. The i * 0.5 offset in the sin function creates a ripple effect -- letter 0 starts its wave first, letter 1 follows slightly behind, letter 2 even later. The whole word undulates like a flag in the wind.

The textWidth() function is doing important work here. We're measuring the exact pixel width of each character and the substring before it so we know precisely where each letter sits. Monospace fonts make this easier (every character is the same width), but textWidth() works with proportional fonts too -- the 'i' in "CREATIVE" takes less horizontal space than the 'A', and the measurements reflect that.

The color cycling (frameCount * 2 + i * 40) means each letter is at a different point in the hue cycle at any given moment. The i * 40 offset spreads the letters across 320 degrees of the color wheel (8 letters * 40 degrees). Looks like a rainbow ripple through the word. The trig here is the same sin-wave thinking from episode 13 -- just applied to letter offsets instead of geometric shapes.

Morphing between words

This is the showstopper. Blend particle positions between two different texts. "HELLO" dissolves into "WORLD" and back again.

The key challenge: the two words produce different numbers of pixel positions. We need to match the arrays so each particle has a position in both words to lerp between.

let particles = [];
let positionsA, positionsB;
let morphT = 0;
let morphDir = 1;

function setup() {
  createCanvas(800, 400);
  positionsA = getTextPixels('HELLO', 180, 800, 400);
  positionsB = getTextPixels('WORLD', 180, 800, 400);

  // equalize array lengths by duplicating random entries
  let maxLen = Math.max(positionsA.length, positionsB.length);

  while (positionsA.length < maxLen) {
    positionsA.push(positionsA[Math.floor(Math.random() * positionsA.length)]);
  }
  while (positionsB.length < maxLen) {
    positionsB.push(positionsB[Math.floor(Math.random() * positionsB.length)]);
  }

  for (let i = 0; i < maxLen; i++) {
    particles.push({
      x: positionsA[i].x,
      y: positionsA[i].y
    });
  }
}

function draw() {
  background(15, 15, 20, 30);

  // oscillate morph parameter
  morphT += 0.005 * morphDir;
  if (morphT > 1 || morphT < 0) morphDir *= -1;
  morphT = constrain(morphT, 0, 1);

  // smoothstep easing
  let eased = morphT * morphT * (3 - 2 * morphT);

  for (let i = 0; i < particles.length; i++) {
    let ax = positionsA[i].x;
    let ay = positionsA[i].y;
    let bx = positionsB[i].x;
    let by = positionsB[i].y;

    particles[i].x = lerp(ax, bx, eased);
    particles[i].y = lerp(ay, by, eased);

    fill(255, 200);
    noStroke();
    ellipse(particles[i].x, particles[i].y, 2);
  }
}

"HELLO" dissolves into "WORLD" and back. Each particle travels from its position in one word to its position in the other. The smoothstep easing (t * t * (3 - 2 * t)) makes the transition feel organic rather than linear -- it accelerates out of the starting position and decelerates into the target. We used lerp for interpolation back in episode 16. Now we're applying it to thousands of particles at once.

The low-alpha background (background(15, 15, 20, 30)) creates a motion trail effect. Instead of a clean wipe each frame, the previous frame fades slightly, leaving ghost trails of the particles' paths. The transition mid-point -- where particles are between words -- looks like a swirling cloud of dots that's neither word. That chaotic middle state is often the most beautiful moment of the whole animation.

One thing to watch for: the equalization step where we duplicate random entries to match array lengths. This means some positions in the shorter word have two particles targeting them. You'll see slightly denser spots in the shorter word. For production work, you'd want a smarter matching algorithm (like Hungarian algorithm or nearest-neighbor pairing), but random duplication works surprisingly well for a first pass.
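For the curious, a minimal sketch of the nearest-neighbor pairing idea mentioned above -- greedy matching, still O(n²) and not optimal like the Hungarian algorithm, but it shortens travel distances compared to arbitrary pairing (the function name is mine):

```javascript
// Greedy nearest-neighbor pairing: each source point claims the closest
// target that hasn't been claimed yet. Assumes equal-length arrays.
function pairNearest(sources, targets) {
  const taken = new Array(targets.length).fill(false);
  return sources.map(s => {
    let best = -1;
    let bestD = Infinity;
    for (let j = 0; j < targets.length; j++) {
      if (taken[j]) continue;
      const d = (s.x - targets[j].x) ** 2 + (s.y - targets[j].y) ** 2;
      if (d < bestD) { bestD = d; best = j; }
    }
    taken[best] = true;
    return { from: s, to: targets[best] };
  });
}
```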

Text as a flow field mask

Use the text shape to constrain a flow field. Flow lines appear only inside the letters, revealing the text through flowing curves:

function setup() {
  createCanvas(800, 400);
  let positions = getTextPixels('FLOW', 200, 800, 400);

  background(15);

  // draw flow field lines starting from random text positions
  for (let i = 0; i < 2000; i++) {
    let start = positions[Math.floor(Math.random() * positions.length)];
    let x = start.x;
    let y = start.y;

    stroke(random(150, 255), random(100, 200), random(180, 255), 40);
    strokeWeight(0.8);
    noFill();

    beginShape();
    for (let s = 0; s < 50; s++) {
      vertex(x, y);
      let angle = noise(x * 0.005, y * 0.005) * TWO_PI * 2;
      x += cos(angle) * 2;
      y += sin(angle) * 2;
    }
    endShape();
  }
}

The flow lines start inside the text and wander outward following the noise field. The text shape is revealed by the density of starting points -- inside the letters, lines cluster densely and the shape is clear. As lines wander outside the letterforms, they spread out and thin, creating a feathered edge effect.

The noise scale (0.005) produces very smooth, sweeping curves. Bump it up to 0.02 and the lines get tight and curly. The step count (50) controls how long each line can wander. Short lines (20 steps) stay tightly inside the letters. Long lines (100+ steps) escape the text entirely and create wispy tendrils that extend outward. Both are beautiful -- short lines for crisp typography, long lines for that ethereal dissolving effect.

This is basically the same flow field technique from episode 12, but seeded from text positions instead of a uniform grid. The noise doesn't care where the lines start. The text shape is just the starting condition. Everything after that is pure flow.

Font choice matters

The visual weight of your font dramatically affects how these techniques look:

Bold sans-serif (Impact, Arial Black, Helvetica Bold): dense pixel coverage, lots of sample points, strong geometric shapes. Best for particle text because more pixels means more particles means denser, more readable results.

Serif fonts (Georgia, Times New Roman): more detail in the letterforms, thinner strokes, serifs create extra texture. Beautiful for line-mesh and noise displacement, but thin strokes can look sparse as particles.

Monospace (Courier, Consolas): uniform character width makes layout predictable. Great for kinetic typography because spacing is consistent. The blocky shapes work well as particles too.

Display fonts (if you load custom fonts via loadFont()): the more distinctive the letterform, the more interesting the generative treatment. A script font turned into particles creates organic, flowing shapes. A geometric font like Futura creates clean, architectural particle clouds.

For any pixel-sampling technique, bold fonts work best. More pixels lit up = more sample points = denser output. A thin font at 200px might only produce a few hundred positions. The same word in a bold font produces thousands. That density difference changes the whole character of the effect.

You can load custom web fonts in p5.js with loadFont() in preload(), or reference web-safe fonts directly as strings. For pixel sampling, the font just needs to render to the offscreen canvas -- it doesn't need to be installed on the viewer's machine because we're reading pixels, not displaying text.

Per-character control with measureText

For kinetic typography and per-letter effects, you need to know exactly where each character sits. ctx.measureText() gives you precise widths:

function getCharPositions(text, fontSize, canvasWidth, canvasHeight) {
  let tempCanvas = document.createElement('canvas');
  let ctx = tempCanvas.getContext('2d');
  ctx.font = `bold ${fontSize}px Helvetica`;

  let chars = [];
  let totalWidth = ctx.measureText(text).width;
  let x = (canvasWidth - totalWidth) / 2;

  for (let i = 0; i < text.length; i++) {
    let metrics = ctx.measureText(text[i]);
    chars.push({
      char: text[i],
      x: x,
      y: canvasHeight / 2,
      width: metrics.width
    });
    x += metrics.width;
  }

  return chars;
}

This gives you an array where each character has its own position and width. Use this for per-character animations: staggered reveals, individual rotations, character-specific colors, explode-and-reform effects. The key advantage over text() with alignment is precision -- you know exactly where each letter is in pixel coordinates, which means you can animate each one independently.

Combine this with the pixel sampling approach and you can get pixel positions per character, not just per word. Sample each letter separately onto its own offscreen canvas, collect the positions, tag them with the character index. Now you can dissolve individual letters, morph specific characters while others stay still, or animate words letter-by-letter.

Animated noise text

Let's make the noise displacement animated. Instead of a static warp, the noise field drifts over time so the text continuously flows:

let pg;

function setup() {
  createCanvas(800, 400);
  pg = createGraphics(800, 400);
  pg.pixelDensity(1);  // keep pixels[] indexing simple on high-DPI displays
  pg.background(0);
  pg.fill(255);
  pg.textSize(200);
  pg.textAlign(CENTER, CENTER);
  pg.textFont('Georgia');
  pg.text('DRIFT', 400, 200);
  pg.loadPixels();
}

function draw() {
  background(15);
  let t = frameCount * 0.008;

  for (let y = 0; y < height; y += 2) {
    for (let x = 0; x < width; x += 2) {
      let i = (y * width + x) * 4;

      if (pg.pixels[i] > 128) {
        let n1 = noise(x * 0.008 + t, y * 0.008);
        let n2 = noise(x * 0.008, y * 0.008 + t * 0.7);
        let dx = (n1 - 0.5) * 40;
        let dy = (n2 - 0.5) * 40;

        let brightness = 150 + n1 * 105;
        fill(brightness, brightness * 0.8, brightness * 1.2);
        noStroke();
        rect(x + dx, y + dy, 2, 2);
      }
    }
  }
}

The t variable shifts the noise field each frame. The text is constantly warping and flowing, never settling. The y += 2 and x += 2 step sizes (drawing 2x2 rects instead of individual pixels) keep it running smoothly -- iterating every pixel at 60fps would be too heavy for most machines.

The different time multipliers for X and Y displacement (t vs t * 0.7) mean the horizontal and vertical warps drift at different speeds. This prevents the displacement from looking like a uniform scroll and instead creates a swirling, turbulent motion. Looks like the text is being pulled by invisible currents.

Putting it together: generative poster

Let's combine techniques into a single generative composition. A poster with text as the main element, flow lines for texture, and noise displacement for atmosphere:

function setup() {
  createCanvas(600, 800);
  background(245, 240, 230);

  let margin = 60;
  let positions = getTextPixels('ART', 250, 600, 400);

  // shift positions to center vertically with margin
  let yOffset = 200;
  for (let p of positions) {
    p.y += yOffset;
  }

  // layer 1: flow field background (subtle)
  stroke(200, 195, 185);
  strokeWeight(0.3);
  noFill();  // without this, each flow line would render as a filled white shape
  for (let i = 0; i < 400; i++) {
    let x = random(margin, width - margin);
    let y = random(margin, height - margin);
    beginShape();
    for (let s = 0; s < 40; s++) {
      vertex(x, y);
      let a = noise(x * 0.003, y * 0.003) * TWO_PI * 2;
      x += cos(a) * 2;
      y += sin(a) * 2;
    }
    endShape();
  }

  // layer 2: text as particle cloud
  noStroke();
  for (let pos of positions) {
    let n = noise(pos.x * 0.02, pos.y * 0.02);
    let size = 1 + n * 4;
    let gray = 20 + n * 40;
    fill(gray);
    ellipse(pos.x, pos.y, size);
  }

  // layer 3: a few accent lines through the text
  stroke(180, 60, 40, 80);
  strokeWeight(1);
  for (let i = 0; i < 30; i++) {
    let start = positions[floor(random(positions.length))];
    let x = start.x;
    let y = start.y;
    noFill();
    beginShape();
    for (let s = 0; s < 80; s++) {
      vertex(x, y);
      let a = noise(x * 0.004, y * 0.004 + 50) * TWO_PI * 2;
      x += cos(a) * 2;
      y += sin(a) * 2;
    }
    endShape();
  }
}

Three layers: subtle flow field background, particle text as the main element, and red accent flow lines seeded from inside the text. The layers interact visually -- the background flow lines are barely visible but add texture, the text dominates with its dark particle cloud, and the accent lines add a splash of color that emerges from within the letterforms and wanders outward.

This is composition thinking from last episode applied to typography. The text is the focal element (hierarchy). The background flow provides rhythm and texture. The margins frame everything. The accent color creates contrast. One algorithm, layered carefully, producing a poster that feels designed rather than generated.

Tips from experience

A few things I've learned working with text in generative pieces:

Choose your words. The text content matters as much as the visual treatment. A beautiful typographic animation of a boring sentence falls flat. But a single powerful word -- "NOISE", "DRIFT", "FLOW" -- rendered with generative techniques can be genuinely moving. Short words work better than long sentences for most particle/displacement effects. The visual density stays consistent across fewer characters.

Bold fonts, always. I keep coming back to this because it's the number one mistake. Thin fonts produce thin particle clouds that are hard to read. Start with the boldest font you can find and experiment from there. You can always reduce sampling density (increase the step parameter) if it's too heavy.

Match the technique to the word. "FLOW" rendered as a flow field. "NOISE" rendered with noise displacement. "SHATTER" rendered as exploding particles. When the visual treatment reinforces the meaning, the piece becomes more than just a visual experiment -- it becomes communication. That intersection of meaning and form is where typographic art lives.

Performance matters. Pixel-level iteration is expensive. For animated effects, don't iterate every pixel every frame. Use the particle-based approach (sample once, animate the particles) rather than the pixel displacement approach (resample every frame). Particles are lightweight objects -- moving 2000 particles per frame is trivial. Re-reading 320,000 pixels per frame is not.

't Komt erop neer...

  • Draw text to a hidden canvas, read its pixels, use those positions for anything -- particles, lines, noise, flow fields
  • The step size in your pixel sampling controls the density and resolution of the effect
  • Particle text with spring physics creates the classic assemble/disassemble effect
  • Mouse repulsion through particle text gives satisfying interactive disruption
  • Connect nearby text positions with lines for a holographic mesh look
  • Noise displacement warps text into flowing, underwater shapes -- animate it with a time offset
  • Kinetic typography: animate individual letters with wave offsets, staggered timing, and per-letter color
  • Morph between words by lerping particle positions (equalize array lengths first)
  • Text-masked flow fields reveal letterforms through flowing curves
  • Bold fonts produce better results for all pixel-sampling techniques -- more pixels, more density, more readable
  • measureText() gives per-character positioning for precise letter-level animations
  • Layer techniques for composed pieces: background texture + particle text + accent elements

The text techniques we covered today treat letterforms as pure shape data. But shapes need surface quality too. Next time we're looking at techniques that add visual depth to flat digital output -- grain, paper texture, hatching, the kind of subtle surface detail that makes digital art feel analog and tactile. It's the difference between a clean vector and something that feels like it was printed on real paper :-)

Sallukes! Thanks for reading.

X: @femdev
