Learn Creative Coding (#10) - Pixel Manipulation: Reading and Writing Image Data

Every image on your screen is just a grid of numbers. Not metaphorically — literally. When you really internalize that, a whole world of creative possibilities opens up. You can build Instagram filters from scratch, create glitch art that would make Kim Asendorf proud, or process images in ways no preset filter could dream of.
This episode is the payoff for going vanilla in episode 9. The Canvas API gives you direct access to every single pixel on the canvas — read them, manipulate them, write them back. p5.js wraps this with loadPixels() and pixels[], but the underlying mechanism is getImageData() and putImageData(). Understanding the raw API means understanding what your code actually does to image data, and that's where the real creative power lives.
Fair warning: this one gets math-heavy in places. We're going to build real filters — the kind that Photoshop charges money for — and I'm going to explain exactly why the math works, not just show you the code.
The pixel array: how images actually work
A 500×500 canvas has 250,000 pixels. Each pixel has four values: Red, Green, Blue, Alpha. That's 1,000,000 numbers stored in a single flat array. The Canvas API gives you access to all of them through a Uint8ClampedArray — a typed array where every value is an integer between 0 and 255, automatically clamped (no overflow, no underflow).
const canvas = document.getElementById('canvas');
const ctx = canvas.getContext('2d');
// draw something first
ctx.fillStyle = '#2a9d8f';
ctx.fillRect(0, 0, 500, 500);
ctx.fillStyle = '#e76f51';
ctx.beginPath();
ctx.arc(250, 250, 100, 0, Math.PI * 2);
ctx.fill();
// now read the pixels
const imageData = ctx.getImageData(0, 0, canvas.width, canvas.height);
const pixels = imageData.data; // Uint8ClampedArray
console.log(pixels.length); // 1,000,000 (500 * 500 * 4)
console.log(pixels[0], pixels[1], pixels[2], pixels[3]); // R, G, B, A of first pixel
The imageData object has three properties: .width, .height, and .data. The .data property is where the actual pixel values live — a flat, one-dimensional array laid out like this:
[R0, G0, B0, A0, R1, G1, B1, A1, R2, G2, B2, A2, ...]
Every four consecutive values represent one pixel. Pixel 0 is indices 0-3. Pixel 1 is indices 4-7. Pixel at row y, column x:
let index = (y * canvas.width + x) * 4;
let r = pixels[index];
let g = pixels[index + 1];
let b = pixels[index + 2];
let a = pixels[index + 3];
That (y * width + x) * 4 formula is the single most important thing in this episode. Every filter, every glitch effect, every manipulation technique builds on it. The * 4 is because each pixel occupies four slots in the array. The y * width converts a 2D coordinate to a 1D offset (row-major order — same memory layout as C arrays, NumPy, and basically every image format ever).
Why Uint8ClampedArray instead of a regular array? Two reasons. First, typed arrays are dramatically faster — the JavaScript engine knows every element is an 8-bit unsigned integer, so it can pack them tightly in memory and skip per-element type checks. Second, clamping: if you assign pixels[i] = 300, it silently becomes 255. Assign -50, it becomes 0. No overflow bugs, no need for manual Math.max(0, Math.min(255, val)) on every write. The typed array does it for free.
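To make the index formula and the clamping concrete, here's a tiny standalone sketch — no canvas needed, just a bare Uint8ClampedArray standing in for a 4×3 image. The pixelIndex/pixelCoords helpers are illustrative names, not Canvas APIs:

```javascript
// A bare Uint8ClampedArray standing in for a tiny 4x3 "image"
const w = 4, h = 3;
const pixels = new Uint8ClampedArray(w * h * 4);

// 2D -> 1D: the core formula from above
function pixelIndex(x, y) {
  return (y * w + x) * 4;
}

// 1D -> 2D: the inverse, occasionally handy for effects
function pixelCoords(index) {
  const p = index / 4; // which pixel
  return { x: p % w, y: Math.floor(p / w) };
}

const i = pixelIndex(2, 1); // pixel at column 2, row 1
// i === (1 * 4 + 2) * 4 === 24

// clamping in action: out-of-range writes are silently clamped
pixels[i] = 300;     // stored as 255
pixels[i + 1] = -50; // stored as 0
```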
The read-modify-write cycle
The pattern for every pixel manipulation is the same:
// 1. read
let imageData = ctx.getImageData(0, 0, canvas.width, canvas.height);
// 2. modify (loop through pixels, change values)
let px = imageData.data;
for (let i = 0; i < px.length; i += 4) {
// do something with px[i], px[i+1], px[i+2], px[i+3]
}
// 3. write back
ctx.putImageData(imageData, 0, 0);
getImageData() creates a COPY of the pixel data. You're not editing the canvas live — you're editing a snapshot. Nothing changes on screen until you call putImageData() to write the modified data back. This means you can read the original pixel values while writing modified ones without interference.
One gotcha: getImageData() is subject to CORS restrictions. If you draw an image from a different domain onto the canvas (like loading a photo from an external URL), the browser marks the canvas as "tainted" and getImageData() throws a security error. For creative coding, either use same-origin images, or set img.crossOrigin = 'anonymous' before loading (and hope the server sends CORS headers).
Grayscale: the classic first filter
Average the RGB channels and assign the result to all three:
function grayscale(imageData) {
const px = imageData.data;
for (let i = 0; i < px.length; i += 4) {
let avg = (px[i] + px[i + 1] + px[i + 2]) / 3;
px[i] = avg; // R
px[i + 1] = avg; // G
px[i + 2] = avg; // B
// alpha stays the same
}
return imageData;
}
let imgData = ctx.getImageData(0, 0, canvas.width, canvas.height);
grayscale(imgData);
ctx.putImageData(imgData, 0, 0);
That's a real grayscale filter in seven lines of actual logic. But it's not a GOOD grayscale. The human eye doesn't perceive all colors equally — we're far more sensitive to green than to blue (evolutionary adaptation: green = vegetation = food/safety). A simple average treats all three channels as equal weight, which makes greens look too dark and blues too bright.
The ITU-R BT.709 standard (used by HDTV and virtually every photo editor including Photoshop) defines luminance weights:
function luminanceGrayscale(imageData) {
const px = imageData.data;
for (let i = 0; i < px.length; i += 4) {
// BT.709 luminance coefficients
let lum = px[i] * 0.2126 + px[i + 1] * 0.7152 + px[i + 2] * 0.0722;
px[i] = lum;
px[i + 1] = lum;
px[i + 2] = lum;
}
return imageData;
}
Green gets 71.5% of the weight, red gets 21.3%, blue gets 7.2%. These aren't arbitrary — they're derived from the spectral sensitivity of the human retinal cone cells. There's an older standard (BT.601, from NTSC/PAL television) that uses 0.299, 0.587, 0.114 — slightly different weights because CRT phosphors had different characteristics than modern LCD/OLED subpixels. Both look good; BT.709 is technically more accurate on modern screens.
The difference between simple averaging and weighted luminance is subtle but visible. Put them side by side and you'll notice the weighted version has more natural contrast — greens and skin tones especially look right.
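If you want to quantify how much the approaches disagree, here's a small comparison sketch — plain arithmetic, no canvas, with hypothetical helper names — run on a pure green pixel:

```javascript
// Compare the two luminance standards against the naive average
function lum709(r, g, b) { return r * 0.2126 + g * 0.7152 + b * 0.0722; }
function lum601(r, g, b) { return r * 0.299 + g * 0.587 + b * 0.114; }
function lumAverage(r, g, b) { return (r + g + b) / 3; }

// pure green: the naive average undervalues it badly
const g709 = lum709(0, 255, 0); // ~182 — bright, as perceived
const g601 = lum601(0, 255, 0); // ~150
const gAvg = lumAverage(0, 255, 0); // 85 — far too dark
```

The naive average renders pure green as a fairly dark gray (85), while both standards correctly treat it as one of the brightest colors the screen can show.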
Invert
The simplest possible filter — subtract each channel from 255:
function invert(imageData) {
const px = imageData.data;
for (let i = 0; i < px.length; i += 4) {
px[i] = 255 - px[i];
px[i + 1] = 255 - px[i + 1];
px[i + 2] = 255 - px[i + 2];
}
return imageData;
}
Mathematically this is a reflection around 127.5 (the midpoint of the 0-255 range). Black becomes white, red becomes cyan, green becomes magenta, blue becomes yellow. The alpha channel stays untouched — inverting alpha would make opaque pixels transparent and vice versa, which is rarely what you want.
Fun fact: inverting an image twice gives you back the original. 255 - (255 - x) = x. This property (involution) is useful for glitch art: invert a region, apply a different filter, invert again — the inversion acts as a "transform space" that changes how other operations behave.
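A quick sanity check of that involution property on raw channel values (no canvas needed — invertChannels is just the loop body from above):

```javascript
// invert twice on a bare pixel array: should be a no-op
const data = new Uint8ClampedArray([10, 200, 77, 255]);
const before = Array.from(data);

function invertChannels(px) {
  for (let i = 0; i < px.length; i += 4) {
    px[i] = 255 - px[i];
    px[i + 1] = 255 - px[i + 1];
    px[i + 2] = 255 - px[i + 2];
    // alpha untouched
  }
}

invertChannels(data); // [245, 55, 178, 255]
invertChannels(data); // back to [10, 200, 77, 255]
```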
Threshold: high-contrast black and white
Convert to grayscale, then snap each pixel to either full black or full white:
function threshold(imageData, level) {
const px = imageData.data;
for (let i = 0; i < px.length; i += 4) {
let lum = px[i] * 0.2126 + px[i + 1] * 0.7152 + px[i + 2] * 0.0722;
let val = lum > level ? 255 : 0;
px[i] = val;
px[i + 1] = val;
px[i + 2] = val;
}
return imageData;
}
// threshold(imgData, 128) — 128 is the cutoff
With a threshold of 128, roughly half the pixels become black and half become white (on a typical photograph). Lower the threshold and more pixels survive as white — you get a light, airy stencil. Raise it and you get a dark, heavy one. This is exactly what screen printing shops use to generate stencils from photographs.
For creative coding, threshold is amazing for creating high-contrast masks. Threshold an image, then use the result to decide where to place shapes, particles, or text. White areas get elements, black areas stay empty. Instant image-driven composition.
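As a sketch of that composition idea — assuming the image data has already been thresholded — collecting the white pixel coordinates gives you instant placement targets. samplePoints is a hypothetical helper operating on any { width, height, data } object, not a Canvas API:

```javascript
// Collect coordinates of white pixels from thresholded image data.
// `step` skips pixels so you don't get one particle per pixel.
function samplePoints(imageData, step = 4) {
  const { width, height, data } = imageData;
  const points = [];
  for (let y = 0; y < height; y += step) {
    for (let x = 0; x < width; x += step) {
      const i = (y * width + x) * 4;
      if (data[i] === 255) points.push({ x, y }); // white survived threshold
    }
  }
  return points;
}

// tiny 2x2 "image": top-left and bottom-right pixels are white
const tiny = {
  width: 2, height: 2,
  data: new Uint8ClampedArray([
    255, 255, 255, 255,   0,   0,   0, 255,
      0,   0,   0, 255, 255, 255, 255, 255,
  ]),
};
const pts = samplePoints(tiny, 1); // [{x:0,y:0}, {x:1,y:1}]
// then: for each point, draw a circle / particle / glyph at (x, y)
```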
Brightness and contrast: the math behind every photo editor
Brightness is addition. Contrast is multiplication around a pivot point:
function brightnessContrast(imageData, brightness, contrast) {
const px = imageData.data;
for (let i = 0; i < px.length; i += 4) {
for (let c = 0; c < 3; c++) {
let val = px[i + c];
val += brightness; // shift up or down
val = (val - 128) * contrast + 128; // stretch from center
px[i + c] = val; // Uint8ClampedArray handles clamping
}
}
return imageData;
}
// brightnessContrast(imgData, 20, 1.3) — brighter + punchier
// brightnessContrast(imgData, -30, 0.8) — darker + flatter
The brightness part is obvious: add a positive number to shift all values up (brighter), negative to shift down (darker).
The contrast formula deserves explanation. (val - 128) * contrast + 128 means: measure how far each value is from middle gray (128), multiply that distance by the contrast factor, then add 128 back. If contrast > 1, values far from 128 get pushed farther away — lights get lighter, darks get darker. If contrast < 1, everything gets compressed toward 128 — flat and muddy. At contrast = 0, every pixel becomes exactly 128 (solid gray).
The Uint8ClampedArray handles the clamping automatically — values that would exceed 255 or go below 0 get silently clamped. This is one of those cases where the typed array saves you from writing defensive code.
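Worked through on concrete numbers (a standalone sketch of just the contrast mapping, no canvas required):

```javascript
// the contrast mapping from the function above, isolated
function applyContrast(val, contrast) {
  return (val - 128) * contrast + 128;
}

applyContrast(200, 1.5); // (200-128)*1.5+128 = 236 -> pushed brighter
applyContrast(60, 1.5);  // (60-128)*1.5+128  = 26  -> pushed darker
applyContrast(200, 0.5); // (200-128)*0.5+128 = 164 -> compressed toward gray
applyContrast(128, 9.9); // 128 — middle gray never moves
// applyContrast(250, 2) = 372, which exceeds 255 — inside a
// Uint8ClampedArray the write would clamp it to 255 automatically
```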
Channel manipulation: how Instagram filters actually work
Want to make everything cooler (more blue)? Just multiply the blue channel:
function channelShift(imageData, rMult, gMult, bMult) {
const px = imageData.data;
for (let i = 0; i < px.length; i += 4) {
px[i] = px[i] * rMult;
px[i + 1] = px[i + 1] * gMult;
px[i + 2] = px[i + 2] * bMult;
}
return imageData;
}
// channelShift(imgData, 0.8, 0.9, 1.3) — cool blue tint
// channelShift(imgData, 1.3, 1.1, 0.7) — warm vintage
// channelShift(imgData, 1.0, 0.7, 0.7) — red-dominant dramatic
This is essentially what Instagram's color filters do. "Valencia" is roughly (1.1, 0.9, 0.75) plus some contrast adjustment. "Nashville" is (1.2, 1.0, 0.8) plus a brightness shift. A couple of multipliers, maybe a brightness/contrast pass, and you've got a "filter" that some app charges money for.
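As a sketch of such a chain — using the rough multipliers quoted above, which are eyeballed approximations and not Instagram's actual recipe — a "Valencia-ish" pass over a bare pixel array might look like:

```javascript
// A "Valencia-ish" pass: channel multipliers plus a mild contrast bump.
// The numbers are rough eyeballed guesses, not Instagram's real recipe.
function valenciaIsh(px) {
  for (let i = 0; i < px.length; i += 4) {
    // warm tint: boost red, mute blue
    const r = px[i] * 1.1;
    const g = px[i + 1] * 0.9;
    const b = px[i + 2] * 0.75;
    // mild contrast stretch around middle gray
    px[i]     = (r - 128) * 1.08 + 128;
    px[i + 1] = (g - 128) * 1.08 + 128;
    px[i + 2] = (b - 128) * 1.08 + 128;
  }
}

const sample = new Uint8ClampedArray([100, 100, 100, 255]);
valenciaIsh(sample); // neutral gray drifts warm: red up, blue down
```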
For more sophisticated color grading, you can combine channel multiplication with per-channel offset (lift/gamma/gain — the same controls you find in DaVinci Resolve):
function colorGrade(imageData, rLift, gLift, bLift, rGain, gGain, bGain) {
const px = imageData.data;
for (let i = 0; i < px.length; i += 4) {
px[i] = px[i] * rGain + rLift;
px[i + 1] = px[i + 1] * gGain + gLift;
px[i + 2] = px[i + 2] * bGain + bLift;
}
return imageData;
}
// cinematic teal-orange look:
// shadows pushed toward teal, highlights toward orange
colorGrade(imgData, 0, 10, 15, 1.1, 0.95, 0.85);
Convolution: the engine behind blur, sharpen, and edge detection
Up to now, every filter looked at one pixel in isolation. Convolution is different — it considers each pixel AND its neighbors. This is how blur, sharpen, emboss, and edge detection work. The concept: for each pixel, multiply it and its surrounding pixels by a small matrix of weights (called a kernel), sum the results, and that's the new pixel value.
function convolve(imageData, kernel) {
const px = imageData.data;
const w = imageData.width;
const h = imageData.height;
const output = new Uint8ClampedArray(px.length);
const kSize = Math.sqrt(kernel.length);
const kHalf = Math.floor(kSize / 2);
// calculate kernel weight (for normalization)
let kWeight = kernel.reduce((a, b) => a + b, 0);
if (kWeight <= 0) kWeight = 1;
for (let y = 0; y < h; y++) {
for (let x = 0; x < w; x++) {
let r = 0, g = 0, b = 0;
for (let ky = 0; ky < kSize; ky++) {
for (let kx = 0; kx < kSize; kx++) {
// sample coordinates (clamped to canvas edges)
let sx = Math.min(w - 1, Math.max(0, x + kx - kHalf));
let sy = Math.min(h - 1, Math.max(0, y + ky - kHalf));
let si = (sy * w + sx) * 4;
let kVal = kernel[ky * kSize + kx];
r += px[si] * kVal;
g += px[si + 1] * kVal;
b += px[si + 2] * kVal;
}
}
let di = (y * w + x) * 4;
output[di] = r / kWeight;
output[di + 1] = g / kWeight;
output[di + 2] = b / kWeight;
output[di + 3] = px[di + 3]; // preserve alpha
}
}
// copy output back to imageData
imageData.data.set(output);
return imageData;
}
The kernel is a flat array representing a small matrix. Here are the classic ones:
// Box blur (3x3) — averages all neighbors equally
const BLUR_3x3 = [
1, 1, 1,
1, 1, 1,
1, 1, 1
];
// Gaussian blur (3x3) — weighted, center pixel matters most
const GAUSSIAN_3x3 = [
1, 2, 1,
2, 4, 2,
1, 2, 1
];
// Sharpen — enhances edges by subtracting neighbors
const SHARPEN = [
0, -1, 0,
-1, 5, -1,
0, -1, 0
];
// Edge detection (Laplacian) — finds boundaries
const EDGE_DETECT = [
-1, -1, -1,
-1, 8, -1,
-1, -1, -1
];
// Emboss — creates a 3D lighting effect
const EMBOSS = [
-2, -1, 0,
-1, 1, 1,
0, 1, 2
];
Why does box blur work? Each pixel becomes the average of itself and its 8 neighbors. Detail gets averaged out → blurriness. Gaussian blur weights the center pixel more heavily, which produces smoother, more natural-looking blur (it approximates a true Gaussian distribution, hence the name).
Sharpen is the inverse of blur. The center weight is 5 and the four neighbors are -1, summing to 1. This means: take the center pixel, amplify it, subtract its neighbors. Edges (where neighbors differ from center) get amplified. Smooth areas (where neighbors equal center) stay the same.
Edge detection uses a kernel that sums to 0 — in flat areas (all pixels identical), the result is 0 (black). Only where neighboring pixels differ (i.e., at edges) does a non-zero value emerge.
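You can verify the sums-to-zero behavior without a canvas — dot the kernel against a flat 3×3 patch versus a patch containing an edge (a standalone sketch of the inner loop above, single channel only):

```javascript
// Dot product of a 3x3 kernel with a 3x3 patch of single-channel values
function applyKernel(patch, kernel) {
  let sum = 0;
  for (let i = 0; i < 9; i++) sum += patch[i] * kernel[i];
  return sum;
}

const EDGE_DETECT = [
  -1, -1, -1,
  -1,  8, -1,
  -1, -1, -1,
];

// flat region: every neighbor equals the center -> exactly 0 (black)
const flat = [90, 90, 90, 90, 90, 90, 90, 90, 90];
applyKernel(flat, EDGE_DETECT); // 0

// vertical edge: dark left column, bright right column -> non-zero
const edge = [0, 90, 255, 0, 90, 255, 0, 90, 255];
applyKernel(edge, EDGE_DETECT); // nonzero (clamped to 0-255 on write)
```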
For creative coding, convolution is amazing for creating painterly effects. Apply a large blur, then composite the original back on top at partial opacity — instant soft-focus glow. Or run edge detection, then use the result as a brightness mask for the original image.
Glitch art: pixel sorting
Now the fun stuff. Pixel sorting is a glitch art technique popularized by Kim Asendorf — you sort a run of pixels in a row by brightness, creating streaky, melted distortion:
function pixelSort(imageData, low, high) {
const w = imageData.width;
const h = imageData.height;
const px = imageData.data;
for (let y = 0; y < h; y++) {
// collect pixels in this row
let row = [];
for (let x = 0; x < w; x++) {
let i = (y * w + x) * 4;
let brightness = px[i] * 0.2126 + px[i + 1] * 0.7152 + px[i + 2] * 0.0722;
row.push({ r: px[i], g: px[i + 1], b: px[i + 2], a: px[i + 3], brightness });
}
// find runs of pixels within the brightness window and sort them
let start = -1;
for (let x = 0; x <= w; x++) {
let bright = x < w ? row[x].brightness : 0;
if (bright > low && bright < high && start === -1) {
start = x;
} else if ((bright <= low || bright >= high || x === w) && start !== -1) {
// sort this run by brightness
let run = row.slice(start, x);
run.sort((a, b) => a.brightness - b.brightness);
for (let j = 0; j < run.length; j++) {
row[start + j] = run[j];
}
start = -1;
}
}
// write back
for (let x = 0; x < w; x++) {
let i = (y * w + x) * 4;
px[i] = row[x].r;
px[i + 1] = row[x].g;
px[i + 2] = row[x].b;
px[i + 3] = row[x].a;
}
}
return imageData;
}
// pixelSort(imgData, 50, 200) — sort mid-brightness pixels, leave darks and highlights intact
The low and high thresholds define which pixels get sorted. Pixels below low (deep shadows) and above high (blown-out highlights) stay in place — only the midrange melts. This is what gives pixel sorting its signature look: recognizable shapes (the dark outlines and bright highlights anchor the image) with liquid-smooth midtones flowing between them.
Try sorting vertically instead (swap the x/y loops). Or sort by hue instead of brightness. Or sort only every third row. Each variation creates a completely different aesthetic from the same algorithm.
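Sorting by hue just means a different sort key — here's a sketch of an RGB-to-hue helper (the standard HSL hue formula) you could swap in for the brightness computation:

```javascript
// Hue in degrees (0-360) from RGB — the standard HSL hue formula
function hue(r, g, b) {
  const max = Math.max(r, g, b);
  const min = Math.min(r, g, b);
  const delta = max - min;
  if (delta === 0) return 0; // gray has no hue; pick 0 by convention
  let h;
  if (max === r)      h = ((g - b) / delta) % 6;
  else if (max === g) h = (b - r) / delta + 2;
  else                h = (r - g) / delta + 4;
  h *= 60;
  return h < 0 ? h + 360 : h; // wrap negatives into 0-360
}

// then inside pixelSort, sort runs with:
// run.sort((a, b) => hue(a.r, a.g, a.b) - hue(b.r, b.g, b.b));
hue(255, 0, 0); // 0   (red)
hue(0, 255, 0); // 120 (green)
hue(0, 0, 255); // 240 (blue)
```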
Glitch art: channel offset
Shift the red channel a few pixels to the right while leaving green and blue in place. This creates chromatic aberration — the "broken TV" look:
function channelOffset(imageData, redOffX, redOffY, blueOffX, blueOffY) {
const w = imageData.width;
const h = imageData.height;
const px = imageData.data;
const original = new Uint8ClampedArray(px);
for (let y = 0; y < h; y++) {
for (let x = 0; x < w; x++) {
let di = (y * w + x) * 4;
// red channel: sample from offset position
let rsx = x - redOffX;
let rsy = y - redOffY;
if (rsx >= 0 && rsx < w && rsy >= 0 && rsy < h) {
px[di] = original[(rsy * w + rsx) * 4];
}
// green channel: stays in place (already correct)
// blue channel: sample from different offset
let bsx = x - blueOffX;
let bsy = y - blueOffY;
if (bsx >= 0 && bsx < w && bsy >= 0 && bsy < h) {
px[di + 2] = original[(bsy * w + bsx) * 4 + 2];
}
}
}
return imageData;
}
// channelOffset(imgData, 8, 0, -5, 2) — red shifted right, blue shifted left+down
Notice we copy the pixel data into original BEFORE modifying. This is critical — without it, we'd be reading already-shifted values as we work through the array left-to-right. The copy ensures every lookup reads from the untouched original.
This version shifts both red and blue channels in independent X and Y directions. Real chromatic aberration (from cheap camera lenses) actually radiates outward from the center, but the flat offset version looks more "digitally broken" which is usually what glitch artists want.
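For the curious, here's a sketch of that radial variant — red sampled slightly toward the center, blue slightly away, so the fringing grows with distance from the middle. It operates on any { width, height, data } object; the scale factors are arbitrary picks:

```javascript
// Radial chromatic aberration: fringing grows toward the image edges.
function radialAberration(imageData, strength = 0.01) {
  const { width: w, height: h, data: px } = imageData;
  const original = new Uint8ClampedArray(px); // snapshot before writing
  const cx = w / 2, cy = h / 2;
  for (let y = 0; y < h; y++) {
    for (let x = 0; x < w; x++) {
      const di = (y * w + x) * 4;
      // red: sample from a point scaled toward the center
      const rsx = Math.round(cx + (x - cx) * (1 - strength));
      const rsy = Math.round(cy + (y - cy) * (1 - strength));
      // blue: sample from a point scaled away from the center
      const bsx = Math.round(cx + (x - cx) * (1 + strength));
      const bsy = Math.round(cy + (y - cy) * (1 + strength));
      if (rsx >= 0 && rsx < w && rsy >= 0 && rsy < h)
        px[di] = original[(rsy * w + rsx) * 4];
      if (bsx >= 0 && bsx < w && bsy >= 0 && bsy < h)
        px[di + 2] = original[(bsy * w + bsx) * 4 + 2];
    }
  }
  return imageData;
}
```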
Glitch art: scanlines and data corruption
Add horizontal scanlines and randomly corrupt pixel data for a full VHS aesthetic:
function vhsGlitch(imageData, intensity) {
const w = imageData.width;
const h = imageData.height;
const px = imageData.data;
for (let y = 0; y < h; y++) {
// scanlines: darken every third row
if (y % 3 === 0) {
for (let x = 0; x < w; x++) {
let i = (y * w + x) * 4;
px[i] *= 0.7;
px[i + 1] *= 0.7;
px[i + 2] *= 0.7;
}
}
// random horizontal shift for some rows
if (Math.random() < intensity) {
let shift = Math.floor((Math.random() - 0.5) * 40);
let rowCopy = [];
for (let x = 0; x < w; x++) {
let i = (y * w + x) * 4;
rowCopy.push(px[i], px[i + 1], px[i + 2], px[i + 3]);
}
for (let x = 0; x < w; x++) {
let sx = x - shift;
if (sx >= 0 && sx < w) {
let di = (y * w + x) * 4;
let si = sx * 4;
px[di] = rowCopy[si];
px[di + 1] = rowCopy[si + 1];
px[di + 2] = rowCopy[si + 2];
}
}
}
}
return imageData;
}
// vhsGlitch(imgData, 0.05) — 5% of rows get shifted
The scanline effect darkens every third row to simulate the visible scan pattern of a CRT monitor. The random row shifting simulates magnetic tape tracking errors — the signal briefly loses sync and the row jumps left or right. intensity controls how many rows get corrupted. At 0.02 you get subtle vintage vibes. At 0.15 the image is barely recognizable.
Loading external images
All these filters work on canvas content. To process photographs, load an image first:
const img = new Image();
img.src = 'photo.jpg';
img.onload = function() {
canvas.width = img.width;
canvas.height = img.height;
ctx.drawImage(img, 0, 0);
let imgData = ctx.getImageData(0, 0, canvas.width, canvas.height);
// chain filters
luminanceGrayscale(imgData);
brightnessContrast(imgData, 10, 1.4);
channelOffset(imgData, 5, 0, -3, 1);
ctx.putImageData(imgData, 0, 0);
};
Filters compose by chaining — each one modifies the imageData in place, so you can stack them. Order matters: grayscale followed by channel offset looks different from channel offset followed by grayscale (in the second version, the grayscale pass collapses the colored fringes into faint luminance ghosting instead of preserving the color split). Think of it like guitar pedals: the signal flows through each effect in sequence.
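The guitar-pedal metaphor suggests a tiny helper that makes the order explicit — applyChain is a hypothetical utility, not a Canvas API, shown here with a minimal in-place filter:

```javascript
// Run a pipeline of in-place filters over one ImageData-like object.
// Order is the array order — swap entries to reorder the "pedals".
function applyChain(imageData, filters) {
  for (const filter of filters) filter(imageData);
  return imageData;
}

// demo filter operating on { data } only
const invertFilter = (img) => {
  const px = img.data;
  for (let i = 0; i < px.length; i += 4) {
    px[i] = 255 - px[i];
    px[i + 1] = 255 - px[i + 1];
    px[i + 2] = 255 - px[i + 2];
  }
};

const img = { width: 1, height: 1, data: new Uint8ClampedArray([10, 20, 30, 255]) };
applyChain(img, [invertFilter, invertFilter]); // two inversions cancel out

// in the browser it would read something like:
// applyChain(imgData, [luminanceGrayscale, d => brightnessContrast(d, 10, 1.4)]);
// ctx.putImageData(imgData, 0, 0);
```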
Complete project: interactive pixel laboratory
Let's build something that ties everything together — an interactive filter chain where you draw on the canvas and apply effects in real time. This uses the animation loop from episode 9, mouse input, and the filter functions from above.
const canvas = document.getElementById('canvas');
const ctx = canvas.getContext('2d');
const W = 600, H = 600;
canvas.width = W;
canvas.height = H;
// --- utilities ---
function rand(min, max) { return Math.random() * (max - min) + min; }
const TAU = Math.PI * 2;
// --- state ---
let mouse = { x: 0, y: 0, pressed: false };
let activeFilter = 'none';
let brushHue = 0;
let frame = 0;
canvas.addEventListener('mousemove', (e) => {
let rect = canvas.getBoundingClientRect();
mouse.x = e.clientX - rect.left;
mouse.y = e.clientY - rect.top;
});
canvas.addEventListener('mousedown', () => mouse.pressed = true);
canvas.addEventListener('mouseup', () => mouse.pressed = false);
window.addEventListener('keydown', (e) => {
switch(e.key) {
case '1': activeFilter = 'none'; break;
case '2': activeFilter = 'grayscale'; break;
case '3': activeFilter = 'invert'; break;
case '4': activeFilter = 'threshold'; break;
case '5': activeFilter = 'channelShift'; break;
case '6': activeFilter = 'pixelSort'; break;
case '7': activeFilter = 'glitch'; break;
case 'c': // clear canvas
ctx.fillStyle = '#111';
ctx.fillRect(0, 0, W, H);
break;
}
});
// --- filter functions (from above) ---
function grayscale(imageData) {
const px = imageData.data;
for (let i = 0; i < px.length; i += 4) {
let lum = px[i] * 0.2126 + px[i + 1] * 0.7152 + px[i + 2] * 0.0722;
px[i] = lum; px[i + 1] = lum; px[i + 2] = lum;
}
}
function invert(imageData) {
const px = imageData.data;
for (let i = 0; i < px.length; i += 4) {
px[i] = 255 - px[i]; px[i + 1] = 255 - px[i + 1]; px[i + 2] = 255 - px[i + 2];
}
}
function threshold(imageData) {
const px = imageData.data;
for (let i = 0; i < px.length; i += 4) {
let lum = px[i] * 0.2126 + px[i + 1] * 0.7152 + px[i + 2] * 0.0722;
let val = lum > 128 ? 255 : 0;
px[i] = val; px[i + 1] = val; px[i + 2] = val;
}
}
function channelShift(imageData) {
const px = imageData.data;
for (let i = 0; i < px.length; i += 4) {
px[i] = Math.min(255, px[i] * 1.3);
px[i + 2] = Math.min(255, px[i + 2] * 0.7);
}
}
// --- drawing ---
function drawBrush() {
if (!mouse.pressed) return;
// paint with shifting hue
brushHue = (brushHue + 0.5) % 360;
for (let i = 0; i < 5; i++) {
let angle = rand(0, TAU);
let dist = rand(0, 30);
let x = mouse.x + Math.cos(angle) * dist;
let y = mouse.y + Math.sin(angle) * dist;
let size = rand(3, 12);
ctx.fillStyle = `hsla(${brushHue}, 80%, 60%, 0.7)`;
ctx.beginPath();
ctx.arc(x, y, size, 0, TAU);
ctx.fill();
}
}
// --- main loop ---
function draw() {
drawBrush();
// apply active filter to entire canvas
if (activeFilter !== 'none') {
let imgData = ctx.getImageData(0, 0, W, H);
switch(activeFilter) {
case 'grayscale': grayscale(imgData); break;
case 'invert': invert(imgData); break;
case 'threshold': threshold(imgData); break;
case 'channelShift': channelShift(imgData); break;
case 'pixelSort':
// sort only a horizontal band near the mouse
let bandY = Math.max(0, Math.floor(mouse.y) - 20);
let bandH = Math.min(40, H - bandY);
let band = ctx.getImageData(0, bandY, W, bandH);
pixelSort(band, 30, 220);
ctx.putImageData(band, 0, bandY);
imgData = null; // skip the full putImageData
break;
case 'glitch':
vhsGlitch(imgData, 0.03);
break;
}
if (imgData) ctx.putImageData(imgData, 0, 0);
}
// HUD
ctx.save();
ctx.fillStyle = 'rgba(0, 0, 0, 0.6)';
ctx.fillRect(0, 0, W, 28);
ctx.fillStyle = 'white';
ctx.font = '13px monospace';
ctx.fillText(`Filter: ${activeFilter} | Keys 1-7 to switch | C to clear`, 10, 18);
ctx.restore();
frame++;
requestAnimationFrame(draw);
}
// init
ctx.fillStyle = '#111';
ctx.fillRect(0, 0, W, H);
draw();
Paint with the mouse to create colorful splatter art, then press keys 1-7 to apply different filters in real time. The pixel sort mode only sorts a 40-pixel-tall band near your cursor, so you can "paint" the sort effect across specific areas. The glitch mode progressively corrupts the image — leave it running and watch your artwork dissolve into static.
This is the creative coding loop at its best: make something, destroy it, make something new from the wreckage.
Performance: what to know
Pixel manipulation can be slow on large canvases because you're iterating over every single pixel. A 1920×1080 image has 8.3 million values per filter pass. For context: at 60fps, each frame gets about 16.6ms. A single convolution pass on a 1080p canvas takes roughly 30-60ms in JavaScript — already too slow for real-time.
Some practical optimization tips:
- Skip pixels for preview: iterate by 8 instead of 4 (process every other pixel) to get a rough preview in roughly half the time. Write the final version at full resolution.
- Use createImageData() (or a separate Uint8ClampedArray) for output instead of modifying in place — neighborhood filters like convolution must not read back values they've already overwritten.
- Avoid Math.floor in the inner loop — for positive numbers, bitwise | 0 truncates faster: let index = (y * w + x) * 4 | 0;
- Web Workers for heavy filters: move the pixel processing to a background thread so the UI doesn't freeze. Transfer the underlying imageData.data.buffer as a transferable object for zero-copy handoff.
- For real-time effects, use WebGL shaders — fragment shaders run per-pixel ON THE GPU, which is orders of magnitude faster. We'll get there in episode 21.
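The preview-stride idea from the first tip, sketched on a bare array (step by 8 instead of 4 to touch every other pixel):

```javascript
// Half-work grayscale preview: step by 8 to process every other pixel.
// Untouched pixels keep their old values — fine for a live preview.
function grayscalePreview(px) {
  for (let i = 0; i < px.length; i += 8) {
    const lum = px[i] * 0.2126 + px[i + 1] * 0.7152 + px[i + 2] * 0.0722;
    px[i] = lum;
    px[i + 1] = lum;
    px[i + 2] = lum;
  }
}

// two red pixels: only the first gets converted to gray
const px = new Uint8ClampedArray([255, 0, 0, 255, 255, 0, 0, 255]);
grayscalePreview(px);
// px[0..2] are now equal (gray); px[4..6] are still pure red
```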
For creative coding at 500×500 or 600×600? Don't worry about it. Modern browsers handle it fine at 60fps. Optimize when you actually hit a bottleneck, not before.
What it comes down to...
- Images are flat arrays of [R, G, B, A, R, G, B, A, ...] — every four values is one pixel
- The index formula (y * width + x) * 4 converts 2D coordinates to array position
- getImageData() reads a copy, putImageData() writes it back — the read-modify-write cycle
- Uint8ClampedArray auto-clamps to 0-255 — no overflow bugs possible
- Grayscale = weighted luminance (BT.709: 0.2126R + 0.7152G + 0.0722B), not simple averaging
- Invert = 255 - value per channel — a reflection around the midpoint
- Threshold = snap to black or white — amazing for generating masks and stencils
- Brightness = addition, contrast = multiplication around 128
- Channel multipliers = color tinting — the secret behind every Instagram filter
- Convolution = kernel matrix applied to each pixel and its neighbors — blur, sharpen, edge detect, emboss
- Pixel sorting = sort rows by brightness for Kim Asendorf-style glitch art
- Channel offset = shift individual color channels for chromatic aberration
- VHS glitch = scanlines + random row displacement
- Filters compose by chaining — order matters, like guitar pedals
- For performance: stay under ~600×600 for real-time, use Web Workers for heavy offline processing, use WebGL for GPU-powered filters
Next episode: particle systems from scratch. No library, just classes, velocity, gravity, and thousands of tiny dots creating beautiful emergent behavior. We'll build a complete particle engine with forces, lifetimes, emitters, and attraction fields.
Bye for now! Thanks for reading.