Learn Creative Coding (#45) - Mini-Project: Generative Shader Artwork

This is it. The mini-project that ties the entire shader arc together. Since episode 21 we've been building tools -- SDFs and shapes (33), patterns and repetition (34), noise on the GPU (35), feedback loops (36), color palettes (37), raymarching (38-40), fractals (41-42), post-processing (43), and textures (44). Fourteen episodes of individual techniques. Today we combine them into one gallery-ready piece.
The goal is a self-contained GLSL artwork. Something you could render full-screen, run at a party, upload to Shadertoy, screenshot for your portfolio. Not a demo of one technique, but a composition that uses several techniques together to create something with actual aesthetic intent. We're going to design the piece first -- mood, palette, motion style -- then build it layer by layer, then add post-processing, then make it seed-based for infinite variation.
I'm going to walk through my creative process as I build this, mistakes and iteration included. The point isn't to copy my exact shader. It's to see how you go from "blank fragment shader" to "finished piece" using the tools we've collected.
Design phase: what are we making?
Before touching code, I think about the piece. What mood? What motion? What palette?
I want something contemplative. Slow, evolving, organic. Not flashy, not glitchy, not geometric. Think about looking into a tide pool, or watching clouds form, or the surface of oil on water. Layered noise fields with color that shifts gradually. Soft edges, no hard geometry. The SDF work from episode 33 is useful for masking and distance-based effects, but I don't want visible shapes.
Motion: slow. Everything on timescales of 5-30 seconds per full cycle. No rapid flickering, no sudden transitions. The viewer should be able to stare at it for minutes and keep noticing new details.
Color: warm earth tones fading into deep blues. Think sunset over water. Amber-to-teal with dark areas going almost black. We'll use the cosine palette from episode 37 -- it's perfect for this kind of smooth gradient coloring.
Structure: layered. A base noise field provides the primary texture. A second noise layer provides movement at a different scale. Distance from center controls the color shift (inner = warm, outer = cool). Post-processing adds bloom, vignette, and film grain for the final cinematic look.
Let me sketch the layer stack:
- Base layer -- multi-octave noise field for texture and color mapping
- Flow layer -- domain-warped noise for organic motion
- Color mapping -- cosine palette driven by noise value + distance from center
- Depth layer -- secondary noise as brightness modulation (simulates depth/shadow)
- Post-processing -- bloom glow, chromatic aberration, vignette, grain
That's the plan. Let's build it.
Layer 1: the base noise field
Start with our standard noise setup from episode 35. Multi-octave value noise, nothing fancy yet:
precision mediump float;
uniform vec2 u_resolution;
uniform float u_time;
float hash(vec2 p) {
return fract(sin(dot(p, vec2(127.1, 311.7))) * 43758.5453);
}
float noise(vec2 p) {
vec2 i = floor(p);
vec2 f = fract(p);
f = f * f * (3.0 - 2.0 * f);
float a = hash(i);
float b = hash(i + vec2(1.0, 0.0));
float c = hash(i + vec2(0.0, 1.0));
float d = hash(i + vec2(1.0, 1.0));
return mix(mix(a, b, f.x), mix(c, d, f.x), f.y);
}
float fbm(vec2 p) {
float total = 0.0;
float amp = 0.5;
for (int i = 0; i < 6; i++) {
total += noise(p) * amp;
p *= 2.0;
amp *= 0.5;
}
return total;
}
void main() {
vec2 uv = (gl_FragCoord.xy - u_resolution * 0.5) / u_resolution.y;
float n = fbm(uv * 3.0 + u_time * 0.05);
gl_FragColor = vec4(vec3(n), 1.0);
}
Six octaves of fractal Brownian motion. The * 3.0 on UV sets the base frequency -- low enough to see broad shapes, high enough that the six octaves add visible fine detail. The u_time * 0.05 makes it drift slowly. If you run this you get animated grayscale clouds. Decent starting point, but it's just noise. No personality yet.
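If you want to sanity-check this noise stack off the GPU, the same hash/noise/fbm chain can be mirrored in plain JavaScript. This is a CPU sketch for verification only -- JS Math.sin won't match GPU sin bit-for-bit, so exact values differ, but the ranges and structure carry over:

```javascript
// CPU-side mirror of the GLSL hash/noise/fbm, for sanity checks only.
function fract(x) { return x - Math.floor(x); }
function mix(a, b, t) { return a + (b - a) * t; }

// Same constants as the shader's hash
function hash(x, y) {
  return fract(Math.sin(x * 127.1 + y * 311.7) * 43758.5453);
}

// Value noise: hash the four cell corners, bilinear blend with a cubic fade
function noise(x, y) {
  const ix = Math.floor(x), iy = Math.floor(y);
  let fx = x - ix, fy = y - iy;
  fx = fx * fx * (3 - 2 * fx);
  fy = fy * fy * (3 - 2 * fy);
  const a = hash(ix, iy), b = hash(ix + 1, iy);
  const c = hash(ix, iy + 1), d = hash(ix + 1, iy + 1);
  return mix(mix(a, b, fx), mix(c, d, fx), fy);
}

// Six octaves with amplitudes 0.5, 0.25, ... summing to under 1
function fbm(x, y) {
  let total = 0, amp = 0.5;
  for (let i = 0; i < 6; i++) {
    total += noise(x, y) * amp;
    x *= 2; y *= 2;
    amp *= 0.5;
  }
  return total;
}
```

Since the octave amplitudes are 0.5 + 0.25 + ... + 0.015625 = 0.984375, the fbm output can never quite reach 1.0, which is why the shader treats it as a roughly 0-1 value.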
Layer 2: domain warping for organic flow
Domain warping is where noise goes from "clouds" to "something alive." We talked about this concept in episode 35 but didn't fully explore it. The idea: use noise to offset the input coordinates of a second noise call. The second noise reads from warped positions, creating swirling, folding patterns that look biological.
vec2 warp(vec2 p, float t) {
// first noise pass: offset coordinates
float nx = fbm(p + vec2(1.7, 9.2) + t * 0.04);
float ny = fbm(p + vec2(8.3, 2.8) - t * 0.03);
return p + vec2(nx, ny) * 0.8;
}
void main() {
vec2 uv = (gl_FragCoord.xy - u_resolution * 0.5) / u_resolution.y;
// domain-warped noise
vec2 warped = warp(uv * 3.0, u_time);
float n = fbm(warped);
gl_FragColor = vec4(vec3(n), 1.0);
}
The vec2(1.7, 9.2) and vec2(8.3, 2.8) are arbitrary offsets that make the two noise channels sample from different regions -- without them both X and Y warping would use the same noise pattern, making the motion look uniformly diagonal. The different time multipliers (0.04 vs -0.03) give the X and Y warping different speeds and directions. The * 0.8 controls how far the domain gets warped -- higher values create more extreme folding, lower values give gentler undulation.
Run this and you'll see the noise has completely changed character. Instead of uniform cloudy texture, it has these flowing, folding structures that look like cells under a microscope, or coffee creamer swirling in a cup. That's the domain warping doing its thing. The patterns evolve slowly because the time factors are small.
Double domain warping
We can warp the warped coordinates again for even more complexity. This is what Inigo Quilez calls "warp of a warp" and it creates incredibly rich organic structures:
vec2 warp(vec2 p, float t) {
float nx = fbm(p + vec2(1.7, 9.2) + t * 0.04);
float ny = fbm(p + vec2(8.3, 2.8) - t * 0.03);
return p + vec2(nx, ny) * 0.8;
}
vec2 warp2(vec2 p, float t) {
vec2 w1 = warp(p, t);
float nx2 = fbm(w1 + vec2(4.1, 3.9) + t * 0.02);
float ny2 = fbm(w1 + vec2(5.6, 7.2) + t * 0.025);
return w1 + vec2(nx2, ny2) * 0.4;
}
void main() {
vec2 uv = (gl_FragCoord.xy - u_resolution * 0.5) / u_resolution.y;
vec2 w = warp2(uv * 3.0, u_time);
float n = fbm(w);
gl_FragColor = vec4(vec3(n), 1.0);
}
The second warp layer has smaller intensity (* 0.4 vs * 0.8) because it adds fine detail on top of the broad flow from the first warp. If both warps have the same intensity it gets too chaotic. The layered approach -- big broad flow from warp 1, smaller detail from warp 2 -- creates that natural hierarchy you see in fluid dynamics. Large vortexes with smaller eddies inside them.
This is computationally heavy. Six octaves of FBM called four times for the warping, then one more time for the final value. That's 30 octaves of noise per pixel. On a decent GPU it's fine. If it's slow, drop to 4 octaves in the FBM or remove the second warp layer. Always profile -- if your frame time is above 16ms (the 60fps budget) you need to simplify somewhere.
Layer 3: color from cosine palettes
Now we map our noise value to color using the cosine palette technique from episode 37. The formula: a + b * cos(2*PI * (c*t + d)) where t is our input value and a, b, c, d are vec3 parameters that define the palette.
vec3 palette(float t, vec3 a, vec3 b, vec3 c, vec3 d) {
return a + b * cos(6.28318 * (c * t + d));
}
void main() {
vec2 uv = (gl_FragCoord.xy - u_resolution * 0.5) / u_resolution.y;
vec2 w = warp2(uv * 3.0, u_time);
float n = fbm(w);
// distance from center for color shift
float dist = length(uv);
// blend noise and distance for palette input
float colorVal = n * 0.7 + dist * 0.3;
// warm amber to cool teal palette
vec3 color = palette(colorVal,
vec3(0.5, 0.4, 0.3),
vec3(0.5, 0.4, 0.4),
vec3(1.0, 0.7, 0.4),
vec3(0.0, 0.15, 0.35)
);
gl_FragColor = vec4(color, 1.0);
}
The colorVal mixes noise (70%) with distance from center (30%). The noise gives the organic texture its color variation. The distance component pushes the outer regions toward cooler tones and keeps the center warm. The exact palette parameters take some tuning -- I usually start with Inigo Quilez's palette explorer values and adjust the d parameter (phase shifts) to get the hue range I want. The vec3(0.0, 0.15, 0.35) phase shift means red peaks first, green slightly later, blue later still -- giving that amber-to-teal transition.
Try tweaking the palette coefficients. Change d from (0.0, 0.15, 0.35) to (0.0, 0.33, 0.67) and you get a rainbow. Change b from (0.5, 0.4, 0.4) to (0.2, 0.2, 0.2) and the palette becomes more muted. Change a to (0.3, 0.2, 0.2) and the overall image gets darker. Five minutes of parameter tweaking changes the whole mood.
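To see what those parameters actually produce, here's the palette evaluated on the CPU -- same formula and same coefficients as the shader, just in JavaScript so you can print the numbers:

```javascript
// Cosine palette: a + b * cos(2*PI * (c*t + d)), evaluated per channel.
function palette(t, a, b, c, d) {
  return a.map((ai, i) => ai + b[i] * Math.cos(2 * Math.PI * (c[i] * t + d[i])));
}

// The amber-to-teal parameters from the shader
const A = [0.5, 0.4, 0.3];
const B = [0.5, 0.4, 0.4];
const C = [1.0, 0.7, 0.4];
const D = [0.0, 0.15, 0.35];

// At t = 0 (the warm center of the piece) the phase shifts in D
// put red at its peak, green partway down, blue near its trough.
const warm = palette(0.0, A, B, C, D);
console.log(warm); // red-dominant amber: roughly [1.0, 0.64, 0.16]
```

That red > green > blue ordering at low t is exactly the amber end of the gradient; as t grows, the per-channel frequencies in C rotate the channels at different rates, which is what walks the hue toward teal.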
Layer 4: depth and brightness modulation
The piece needs depth. Right now every part has roughly the same brightness, which makes it feel flat. Let's add a second noise evaluation that controls brightness -- some areas bright, some dark, creating the impression of light falling across a textured surface.
void main() {
vec2 uv = (gl_FragCoord.xy - u_resolution * 0.5) / u_resolution.y;
vec2 w = warp2(uv * 3.0, u_time);
float n = fbm(w);
// separate noise for brightness variation
float brightness = fbm(uv * 2.0 - u_time * 0.03 + 50.0);
brightness = smoothstep(0.25, 0.75, brightness);
float dist = length(uv);
float colorVal = n * 0.7 + dist * 0.3;
vec3 color = palette(colorVal,
vec3(0.5, 0.4, 0.3),
vec3(0.5, 0.4, 0.4),
vec3(1.0, 0.7, 0.4),
vec3(0.0, 0.15, 0.35)
);
// apply brightness modulation
color *= 0.4 + 0.8 * brightness;
// darken edges (distance-based falloff)
color *= smoothstep(0.9, 0.3, dist);
gl_FragColor = vec4(color, 1.0);
}
The brightness noise uses different coordinates (uv * 2.0 vs uv * 3.0) and a different time offset so it moves independently from the color pattern. The smoothstep(0.25, 0.75, ...) remaps the noise from its 0-1 range into a more contrasty 0-1 range -- dark areas get darker, bright areas get brighter. The 0.4 + 0.8 * brightness multiplication means the dimmest areas are 40% brightness and the brightest are 120% (slightly over 1.0, which the post-processing will handle).
The distance falloff smoothstep(0.9, 0.3, dist) acts like a soft vignette built into the artwork itself. Not a post-processing vignette (we'll add that too) but a structural darkening at the edges. The piece should feel like it's emanating from the center.
Putting it all together: the complete artwork shader
Here's the full shader with all four layers assembled, before post-processing:
precision mediump float;
uniform vec2 u_resolution;
uniform float u_time;
float hash(vec2 p) {
return fract(sin(dot(p, vec2(127.1, 311.7))) * 43758.5453);
}
float noise(vec2 p) {
vec2 i = floor(p);
vec2 f = fract(p);
f = f * f * (3.0 - 2.0 * f);
float a = hash(i);
float b = hash(i + vec2(1.0, 0.0));
float c = hash(i + vec2(0.0, 1.0));
float d = hash(i + vec2(1.0, 1.0));
return mix(mix(a, b, f.x), mix(c, d, f.x), f.y);
}
float fbm(vec2 p) {
float total = 0.0;
float amp = 0.5;
for (int i = 0; i < 6; i++) {
total += noise(p) * amp;
p *= 2.0;
amp *= 0.5;
}
return total;
}
vec2 warp(vec2 p, float t) {
float nx = fbm(p + vec2(1.7, 9.2) + t * 0.04);
float ny = fbm(p + vec2(8.3, 2.8) - t * 0.03);
return p + vec2(nx, ny) * 0.8;
}
vec2 warp2(vec2 p, float t) {
vec2 w1 = warp(p, t);
float nx2 = fbm(w1 + vec2(4.1, 3.9) + t * 0.02);
float ny2 = fbm(w1 + vec2(5.6, 7.2) + t * 0.025);
return w1 + vec2(nx2, ny2) * 0.4;
}
vec3 palette(float t, vec3 a, vec3 b, vec3 c, vec3 d) {
return a + b * cos(6.28318 * (c * t + d));
}
void main() {
vec2 uv = (gl_FragCoord.xy - u_resolution * 0.5) / u_resolution.y;
// domain-warped noise
vec2 w = warp2(uv * 3.0, u_time);
float n = fbm(w);
// brightness variation
float brightness = fbm(uv * 2.0 - u_time * 0.03 + 50.0);
brightness = smoothstep(0.25, 0.75, brightness);
// color from palette
float dist = length(uv);
float colorVal = n * 0.7 + dist * 0.3;
vec3 color = palette(colorVal,
vec3(0.5, 0.4, 0.3),
vec3(0.5, 0.4, 0.4),
vec3(1.0, 0.7, 0.4),
vec3(0.0, 0.15, 0.35)
);
// brightness modulation + edge falloff
color *= 0.4 + 0.8 * brightness;
color *= smoothstep(0.9, 0.3, dist);
gl_FragColor = vec4(color, 1.0);
}
Run this and you should see flowing, amber-to-teal organic forms slowly evolving against a dark background. The motion is gradual -- things shift and fold like slow fluid dynamics. The center is warm and bright, the edges are cooler and darker. It already looks like a piece, not a tech demo.
But it's not finished. The output still has that raw, slightly flat quality of ungraded footage. Post-processing will give it that final polish.
Layer 5: post-processing stack
We covered all of these effects in episode 43. Now we stack them. I'm skipping chromatic aberration in the final version because re-evaluating the full warp pipeline three times per pixel is too expensive for a default build. If your GPU handles it, add it back -- see episode 43 for the technique.
void main() {
vec2 uv = (gl_FragCoord.xy - u_resolution * 0.5) / u_resolution.y;
vec2 screenUV = gl_FragCoord.xy / u_resolution;
// --- artwork layers ---
vec2 w = warp2(uv * 3.0, u_time);
float n = fbm(w);
float brightness = fbm(uv * 2.0 - u_time * 0.03 + 50.0);
brightness = smoothstep(0.25, 0.75, brightness);
float dist = length(uv);
float colorVal = n * 0.7 + dist * 0.3;
vec3 color = palette(colorVal,
vec3(0.5, 0.4, 0.3),
vec3(0.5, 0.4, 0.4),
vec3(1.0, 0.7, 0.4),
vec3(0.0, 0.15, 0.35)
);
color *= 0.4 + 0.8 * brightness;
color *= smoothstep(0.9, 0.3, dist);
// --- post-processing ---
// 1. bloom-like glow on bright areas
vec3 glow = max(color - vec3(0.5), vec3(0.0)) * 0.3;
color += glow;
// 2. color grading
color *= 1.1; // exposure
color = (color - 0.5) * 1.15 + 0.5; // contrast
color *= vec3(1.04, 1.0, 0.94); // warm tint
// 3. vignette
vec2 centered = screenUV - 0.5;
color *= smoothstep(0.75, 0.35, length(centered));
// 4. film grain
float grain = (hash(gl_FragCoord.xy + fract(u_time) * 173.0) - 0.5) * 0.07;
color += grain;
// 5. gamma
color = pow(max(color, vec3(0.0)), vec3(0.9));
gl_FragColor = vec4(clamp(color, 0.0, 1.0), 1.0);
}
The color grading is minimal. Slight exposure boost, slight contrast increase, slight warm tint. The palette already does most of the color work -- the grading just nudges it. Over-grading kills the subtlety of the cosine palette.
The grain at 0.07 intensity is barely visible but adds analog texture. Without it, the smooth gradients of the cosine palette can show banding on 8-bit displays. The grain breaks up the banding.
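The grain amplitude is worth pinning down numerically. This is a CPU mirror of the shader's hash-based grain (remember, JS sin won't match GPU sin exactly, but the range is the same):

```javascript
function fract(x) { return x - Math.floor(x); }
function hash(x, y) {
  return fract(Math.sin(x * 127.1 + y * 311.7) * 43758.5453);
}

// Grain sample: hash output recentered to [-0.5, 0.5), scaled to +/-0.035.
// The fract(t) * 173 jitter re-seeds the pattern every frame so the
// grain animates instead of sticking to the screen.
function grain(px, py, t) {
  const jitter = fract(t) * 173.0;
  return (hash(px + jitter, py + jitter) - 0.5) * 0.07;
}
```

So each pixel gets nudged by at most 3.5% of full brightness -- enough to dither away banding in the slow gradients, not enough to read as visible noise.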
Making it seed-based
Right now the piece looks the same every time you load it. For generative art, we want variation -- same code, different output each time. The technique from episode 24: derive all variation from a single integer seed.
// add seed uniform
uniform float u_seed;
// seed-derived values replace the hardcoded offsets
float seedHash(float n) {
return fract(sin(n * 127.1) * 43758.5453);
}
vec2 warp(vec2 p, float t) {
// offsets derived from seed instead of hardcoded
vec2 off1 = vec2(seedHash(u_seed), seedHash(u_seed + 1.0)) * 20.0;
vec2 off2 = vec2(seedHash(u_seed + 2.0), seedHash(u_seed + 3.0)) * 20.0;
float nx = fbm(p + off1 + t * 0.04);
float ny = fbm(p + off2 - t * 0.03);
return p + vec2(nx, ny) * 0.8;
}
Pass a random integer as u_seed from JavaScript. Each seed produces different warp offsets, which produces a completely different flow pattern. Same aesthetic, different composition. Run through seeds 1-100 and you'll get 100 unique artworks.
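A quick CPU check of the seed-to-offset mapping makes the determinism concrete (same formula as the shader's seedHash; exact values won't match the GPU, but the behavior does):

```javascript
function fract(x) { return x - Math.floor(x); }

// Same constants as the shader's seedHash
function seedHash(n) {
  return fract(Math.sin(n * 127.1) * 43758.5453);
}

// The two warp offsets derived from a seed, scaled to 0-20 as in the shader
function warpOffsets(seed) {
  return [
    [seedHash(seed) * 20, seedHash(seed + 1) * 20],
    [seedHash(seed + 2) * 20, seedHash(seed + 3) * 20],
  ];
}
```

Same seed in, same offsets out, every time -- and neighboring seeds land in completely unrelated places thanks to the hash.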
You can also derive the palette from the seed:
vec3 seedPalette(float t) {
// derive palette parameters from seed
vec3 a = vec3(0.45 + 0.1 * seedHash(u_seed + 10.0),
0.35 + 0.1 * seedHash(u_seed + 11.0),
0.3 + 0.1 * seedHash(u_seed + 12.0));
vec3 d = vec3(seedHash(u_seed + 20.0) * 0.3,
0.1 + seedHash(u_seed + 21.0) * 0.3,
0.2 + seedHash(u_seed + 22.0) * 0.4);
return a + vec3(0.5, 0.4, 0.4) * cos(6.28318 * (vec3(1.0, 0.7, 0.4) * t + d));
}
The key is constraining the seed-derived parameters to a range that always looks good. The a components stay between 0.3 and 0.55 -- always a warm midtone. The d phase shifts stay in a range that keeps the palette in the amber-to-teal family. Every seed gives a different palette, but they all share the same mood. If you let the parameters go fully random, some seeds produce ugly muddy palettes and others produce garish neon. Constrain the randomness.
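You can verify those constraints hold across the whole seed space with a quick CPU loop, mirroring the parameter derivation from seedPalette:

```javascript
function fract(x) { return x - Math.floor(x); }
function seedHash(n) { return fract(Math.sin(n * 127.1) * 43758.5453); }

// Derive the palette's a and d vectors for a given seed,
// using the same offsets and ranges as the shader's seedPalette
function seedPaletteParams(seed) {
  const a = [
    0.45 + 0.1 * seedHash(seed + 10),
    0.35 + 0.1 * seedHash(seed + 11),
    0.3 + 0.1 * seedHash(seed + 12),
  ];
  const d = [
    seedHash(seed + 20) * 0.3,
    0.1 + seedHash(seed + 21) * 0.3,
    0.2 + seedHash(seed + 22) * 0.4,
  ];
  return { a, d };
}

// Every seed should stay inside the "warm midtone" band
for (let s = 0; s < 1000; s++) {
  const { a, d } = seedPaletteParams(s);
  console.assert(a.every(v => v >= 0.3 && v <= 0.55));
  console.assert(d.every(v => v >= 0.0 && v <= 0.6));
}
```

Because seedHash returns values in [0, 1), the offset-plus-scale construction guarantees the bounds -- no seed can escape the mood you designed.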
The JavaScript side is straightforward:
// on page load:
const seed = Math.floor(Math.random() * 10000);
// or from URL hash:
// const seed = parseInt(location.hash.slice(1)) || 42;
const seedLoc = gl.getUniformLocation(program, 'u_seed');
gl.uniform1f(seedLoc, seed);
Add the seed to the URL so you can share specific outputs: myart.html#7342. Reproducible randomness -- same seed, same artwork, every time.
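A minimal sketch of that URL handling (the seedFromHash helper is my own name, pulled out so the parsing logic is testable):

```javascript
// Parse a seed from a URL hash like "#7342"; fall back to a default
// when the hash is empty or not a number.
function seedFromHash(hash, fallback) {
  const n = parseInt(hash.slice(1), 10);
  return Number.isFinite(n) ? n : fallback;
}

// In the page you'd wire it to the live location:
//   const seed = seedFromHash(location.hash, 42);
//   location.hash = String(seed); // makes the current output shareable
```

Writing the seed back into location.hash means every output you're looking at has a copy-pasteable URL.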
Export: high-resolution renders
For portfolio or print, you need higher resolution than your screen. The approach: multiply the canvas size by a factor (3x for print, 2x for web), scale the u_resolution uniform accordingly, render one frame, and save.
function exportHighRes(scale) {
const w = canvas.width * scale;
const h = canvas.height * scale;
// create offscreen canvas at high res
const offCanvas = document.createElement('canvas');
offCanvas.width = w;
offCanvas.height = h;
const offGl = offCanvas.getContext('webgl');
// ... set up same program, same uniforms ...
// IMPORTANT: pass scaled resolution
offGl.uniform2f(resLoc, w, h);
offGl.uniform1f(timeLoc, currentTime);
offGl.uniform1f(seedLoc, currentSeed);
offGl.viewport(0, 0, w, h);
offGl.drawArrays(offGl.TRIANGLES, 0, 6);
// download as PNG
const link = document.createElement('a');
link.download = 'artwork-seed-' + currentSeed + '.png';
link.href = offCanvas.toDataURL('image/png');
link.click();
}
// trigger on keypress
window.addEventListener('keydown', function(e) {
if (e.key === 's') exportHighRes(3);
});
Press 's' and you get a 3x resolution PNG. The shader math is resolution-independent (everything uses normalized UV coordinates), so scaling up just gives you more pixels with the same patterns. The u_resolution uniform must match the actual canvas size, otherwise the aspect ratio and coordinate mapping break.
For video, you'd render frame by frame using requestAnimationFrame with a fixed time step, export each frame as PNG, and assemble them with ffmpeg:
ffmpeg -framerate 30 -i frame-%04d.png -c:v libx264 -pix_fmt yuv420p output.mp4
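The frame loop itself just needs deterministic time steps and zero-padded filenames that match ffmpeg's frame-%04d.png pattern. A sketch (renderFrame and savePNG stand in for your own render/export functions):

```javascript
// Fixed-timestep schedule: frame k at a given fps is at k/fps seconds.
// Using this instead of wall-clock time makes the export deterministic.
function frameTime(k, fps) {
  return k / fps;
}

// Zero-padded name matching ffmpeg's frame-%04d.png input pattern
function frameName(k) {
  return 'frame-' + String(k).padStart(4, '0') + '.png';
}

// Export loop sketch (renderFrame/savePNG are your own helpers):
//   for (let k = 0; k < 30 * 10; k++) {   // 10 seconds at 30fps
//     renderFrame(frameTime(k, 30));      // set u_time, draw one frame
//     savePNG(frameName(k));              // write the canvas out
//   }
```

The fixed time step matters: if you drive u_time from requestAnimationFrame timestamps during export, any slow frame shows up as a stutter in the final video.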
Performance profiling
Before you call it done, check the frame time. Open your browser's developer tools, go to the Performance tab, record a few seconds, and look at the frame duration. If it's consistently under 16ms, you're at 60fps; under 33ms, you're at 30fps. Anything above 33ms and you should simplify.
The biggest cost in our shader is the warp2 function. Each call to warp2 evaluates fbm four times (two for the inner warp, two for the outer warp), and each fbm is six noise evaluations. That's 24 noise calls per warp2. The main function calls warp2 once (24 evals), runs fbm on the warped coordinates (6 more), and runs the separate brightness fbm (another 6). That's 36 noise evaluations per pixel total.
If that's too slow, here's what to cut, in order of biggest savings:
- Drop warp2 to just warp (saves 12 noise evals per call)
- Reduce fbm octaves from 6 to 4 (saves 33% of all noise evals)
- Lower canvas resolution (halving width and height gives 4x fewer pixels)
- Remove the separate brightness noise (saves 6 evals, use the base noise for brightness instead)
The visual impact of each simplification is different. Dropping the second warp layer loses some organic complexity but the piece still works. Reducing octaves makes the noise smoother and less detailed. Reducing resolution makes everything blurry. Choose based on what matters most for your output format.
Comparing seed outputs
Generate 5-10 different seeds and look at them side by side. Are they all good? Or do some look muddy, washed out, too dark, too uniform?
If all seed outputs look different but equally appealing, your parameter constraints are right. If some seeds look obviously worse, your ranges are too wide. Tighten the constraints until every seed in a range of 1000 produces something you'd be happy to display.
Common issues with seed-based variation:
- Some seeds too dark: the brightness noise happens to land in a mostly-dark region. Fix by clamping the brightness floor higher: brightness = 0.3 + 0.7 * brightness
- Some seeds too uniform: the warp offsets happen to produce low warping, making the noise look flat. Fix by enforcing a minimum warp intensity: return p + vec2(nx, ny) * max(0.5, intensity)
- Palette too different between seeds: the phase shifts are too random. Narrow the range: 0.1 + seedHash(...) * 0.2 instead of seedHash(...) * 0.5
This tuning process is the creative work. The code is done. Now you're curating the output space. Making sure the system produces consistently good results, not just occasionally good ones.
The finished piece
Here's the complete shader, with seed-based variation and post-processing, ready to deploy:
precision mediump float;
uniform vec2 u_resolution;
uniform float u_time;
uniform float u_seed;
float hash(vec2 p) {
return fract(sin(dot(p, vec2(127.1, 311.7))) * 43758.5453);
}
float seedHash(float n) {
return fract(sin(n * 127.1) * 43758.5453);
}
float noise(vec2 p) {
vec2 i = floor(p);
vec2 f = fract(p);
f = f * f * (3.0 - 2.0 * f);
float a = hash(i);
float b = hash(i + vec2(1.0, 0.0));
float c = hash(i + vec2(0.0, 1.0));
float d = hash(i + vec2(1.0, 1.0));
return mix(mix(a, b, f.x), mix(c, d, f.x), f.y);
}
float fbm(vec2 p) {
float total = 0.0;
float amp = 0.5;
for (int i = 0; i < 6; i++) {
total += noise(p) * amp;
p *= 2.0;
amp *= 0.5;
}
return total;
}
vec2 warp(vec2 p, float t) {
vec2 off1 = vec2(seedHash(u_seed), seedHash(u_seed + 1.0)) * 20.0;
vec2 off2 = vec2(seedHash(u_seed + 2.0), seedHash(u_seed + 3.0)) * 20.0;
float nx = fbm(p + off1 + t * 0.04);
float ny = fbm(p + off2 - t * 0.03);
return p + vec2(nx, ny) * 0.8;
}
vec2 warp2(vec2 p, float t) {
vec2 w1 = warp(p, t);
vec2 off3 = vec2(seedHash(u_seed + 4.0), seedHash(u_seed + 5.0)) * 20.0;
vec2 off4 = vec2(seedHash(u_seed + 6.0), seedHash(u_seed + 7.0)) * 20.0;
float nx2 = fbm(w1 + off3 + t * 0.02);
float ny2 = fbm(w1 + off4 + t * 0.025);
return w1 + vec2(nx2, ny2) * 0.4;
}
vec3 palette(float t) {
vec3 a = vec3(0.45 + 0.1 * seedHash(u_seed + 10.0),
0.35 + 0.1 * seedHash(u_seed + 11.0),
0.3 + 0.1 * seedHash(u_seed + 12.0));
vec3 d = vec3(seedHash(u_seed + 20.0) * 0.3,
0.1 + seedHash(u_seed + 21.0) * 0.3,
0.2 + seedHash(u_seed + 22.0) * 0.4);
return a + vec3(0.5, 0.4, 0.4) * cos(6.28318 * (vec3(1.0, 0.7, 0.4) * t + d));
}
void main() {
vec2 uv = (gl_FragCoord.xy - u_resolution * 0.5) / u_resolution.y;
vec2 screenUV = gl_FragCoord.xy / u_resolution;
// artwork
vec2 w = warp2(uv * 3.0, u_time);
float n = fbm(w);
float brightness = fbm(uv * 2.0 - u_time * 0.03 + 50.0);
brightness = smoothstep(0.25, 0.75, brightness);
float dist = length(uv);
float colorVal = n * 0.7 + dist * 0.3;
vec3 color = palette(colorVal);
color *= 0.4 + 0.8 * brightness;
color *= smoothstep(0.9, 0.3, dist);
// bloom glow
vec3 glow = max(color - vec3(0.5), vec3(0.0)) * 0.3;
color += glow;
// color grading
color *= 1.1;
color = (color - 0.5) * 1.15 + 0.5;
color *= vec3(1.04, 1.0, 0.94);
// vignette
vec2 centered = screenUV - 0.5;
color *= smoothstep(0.75, 0.35, length(centered));
// grain
float grain = (hash(gl_FragCoord.xy + fract(u_time) * 173.0) - 0.5) * 0.07;
color += grain;
// gamma
color = pow(max(color, vec3(0.0)), vec3(0.9));
gl_FragColor = vec4(clamp(color, 0.0, 1.0), 1.0);
}
The piece responds to aspect ratio automatically because we normalize UV with / u_resolution.y. Portrait, landscape, square -- the composition adjusts. The distance-based falloff and vignette center on the screen regardless of shape. Full-screen it on any monitor and it just works.
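That normalization is easy to check for a couple of canvas shapes with a CPU version of the shader's UV mapping:

```javascript
// The shader's UV mapping: center the pixel, then divide by height
// so vertical extent is always -0.5..0.5 regardless of aspect ratio.
function toUV(px, py, width, height) {
  return [(px - width * 0.5) / height, (py - height * 0.5) / height];
}

// 1920x1080 landscape: y spans -0.5..0.5, x spans about -0.89..0.89
console.log(toUV(0, 0, 1920, 1080));
console.log(toUV(1920, 1080, 1920, 1080));

// 1000x1000 square: both axes span -0.5..0.5, center lands at (0, 0)
console.log(toUV(500, 500, 1000, 1000));
```

Only the horizontal extent changes with aspect ratio, so a wider monitor simply reveals more of the noise field at the sides instead of stretching it.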
What we built and what comes next
This mini-project demonstrated the creative pipeline for shader-based generative art: design first (mood, palette, motion), build the layers one at a time (noise, warping, color, brightness), apply post-processing (glow, grading, vignette, grain), add seed-based variation, profile and optimize, and export at high resolution.
Every technique in the shader came from earlier episodes. FBM noise from episode 35. Cosine palettes from episode 37. Domain warping is a new application of noise concepts. Post-processing from episode 43. Seed-based variation from episode 24. The mini-project isn't about learning new techniques -- it's about combining existing ones into something coherent.
After this, we leave the fragment-shader-only world and look at what happens when the GPU does more than just color pixels. Compute pipelines, data textures as simulation state, massive parallel particle systems -- the concepts from episode 44's data textures section taken to their logical conclusion. The fragment shader is powerful, but it has limits. Each pixel runs independently with no knowledge of its neighbors (except through texture reads). Breaking that constraint opens up fluid simulation, physics, and particle systems with millions of elements running at full framerate.
But this piece we built today? This is something you can show people. Put it on Shadertoy. Run it full-screen at a party. Export a high-res still for your wall. You just built a generative artwork from scratch, in one file, running in real-time in a browser. That's the whole point of creative coding :-)
What it comes down to...
- Design before coding: decide the mood, motion style, palette, and layer stack before writing GLSL. The creative decisions matter more than the technical ones
- Domain warping: use noise to offset the input coordinates of another noise call. Double warping (warp the warped coordinates) creates rich, fluid-like organic patterns
- Layer architecture: base texture (FBM), flow (domain warping), color (cosine palette), depth (brightness modulation), post-processing. Each layer is independent and can be tuned separately
- Cosine palette parameters: a controls brightness center, b controls color amplitude, c controls frequency, d controls phase (hue). Constrain ranges for a consistent aesthetic across seeds
- Seed-based variation: derive all randomness from one integer seed using a hash function. Constrain parameter ranges so every seed produces good output -- curate the output space
- Performance budget: count your noise evaluations. Domain warping is expensive (24 noise calls per warp2). Reduce octaves or drop the second warp layer if the frame time exceeds 16ms
- High-res export: scale the canvas, match u_resolution to the actual size, render one frame, save as PNG. The math is resolution-independent
- Post-processing stack: bloom (threshold + self-glow), color grading (exposure, contrast, warmth), vignette, film grain, gamma. Order matters -- glow and grading first, then vignette, then grain, with gamma last
- The goal isn't a tech demo -- it's a finished piece with aesthetic intent. One shader file, one HTML page, runs in any browser
Bye for now! Thanks for reading.