Hi y'all, guess what I've been doing... yup, vibe coding... this is my latest project: in.Zana sound designer.
I am feeling a bit sick right now and my brain is not braining, so I basically asked GPT-5 mini to generate this post. I will be adding notes (in quotes) where necessary, but after the next heading most of the content will be the AI talking; just read it in my voice, only a bit more robotic.
in.Zana
https://fabiyamada.github.io/inzana/
I built in.Zana as a compact, hands-on synth and visualizer that fits in a browser tab. It’s a small instrument that puts the core building blocks of synthesis — oscillators, noise, ADSR envelopes, filters, and delay — into a single, playful UI so you can experiment with sound design in seconds.
Why I built it
I wanted a place to sketch sonic ideas quickly without launching a DAW. in.Zana is a tool for prototyping textures, learning how envelopes and filters shape tone, and having fun with realtime visual feedback. It’s also a practical demo of the Web Audio API and how a simple audio graph can power expressive sound.
does the AI actually know why I built it? well... It started because I wanted visual controllers for the Web Audio API and the JS code, so I could copy-paste it into future projects... I just couldn't stop there. I guess that's kind of what the AI said... 🙄
What it does (features)
- Oscillator + noise sources with selectable waveforms for classic timbres.
- ADSR envelope shaping (attack/decay/sustain/release) for natural note articulation.
- Filter node with multiple types (lowpass, highpass, etc.) and realtime cutoff control.
- Delay with feedback for lush echoes and rhythmic effects.
- Realtime visualizers — waveform and generative visuals that respond to the audio.
- Computer keyboard musical mapping (chromatic mapping from a middle-C base); see the sketch after this list.
- Theme and skin system so you can switch day/night and color accents.
- Files to look at: index.html (UI & presets), app.js (audio engine & visualizers), styles.css and themes.css (presentation).
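
As a taste of how the chromatic mapping can work, here's a minimal sketch. The key row, the BASE_FREQ constant, and the playNote() call are illustrative assumptions, not the actual mapping in app.js:

```js
// Hypothetical chromatic mapping from a middle-C base (illustrative only).
const BASE_FREQ = 261.63; // middle C (C4) in Hz
const KEY_ROW = ['a', 'w', 's', 'e', 'd', 'f', 't', 'g', 'y', 'h', 'u', 'j']; // one octave

function keyToFrequency(key) {
  const semitone = KEY_ROW.indexOf(key);
  if (semitone === -1) return null; // not a mapped key
  // Equal temperament: each semitone up multiplies the frequency by 2^(1/12).
  return BASE_FREQ * Math.pow(2, semitone / 12);
}

document.addEventListener('keydown', (e) => {
  const freq = keyToFrequency(e.key);
  // playNote: assumed note-trigger function (one possible shape is sketched later in the post)
  if (freq !== null && !e.repeat) playNote(freq);
});
```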
Development — the bits you don’t see
This shipped as a single-page app with the audio logic in app.js. The audio graph follows a simple but flexible chain: source → envelope → filter → delay → reverb → master → analyser → destination. A few engineering details I leaned on (sketched in code after this list):
- Always resume the AudioContext when interaction starts (browsers suspend contexts by default).
- Recreate certain nodes each session via createNodes() so filter/delay changes don’t linger in weird states.
- Use exponential ramps for attack/decay/release where possible — it sounds more natural.
- Keep an activeSources array and properly clean up finished nodes to avoid memory leaks.
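
To make that chain concrete, here's a minimal sketch of the wiring in plain Web Audio API. The node names are illustrative rather than the actual createNodes() in app.js, and reverb is left out (a ConvolverNode would slot in after the delay):

```js
// Minimal sketch of the audio chain described above (names are illustrative).
const ctx = new AudioContext();

function createNodes() {
  const envelope = ctx.createGain();       // ADSR is applied to this gain
  const filter = ctx.createBiquadFilter(); // lowpass/highpass/etc., realtime cutoff
  const delay = ctx.createDelay(2.0);      // up to 2 s of delay
  const feedback = ctx.createGain();       // delay feedback amount
  const master = ctx.createGain();
  const analyser = ctx.createAnalyser();   // feeds the visualizers

  // source → envelope → filter → delay → master → analyser → destination
  envelope.connect(filter);
  filter.connect(master);                  // dry path
  filter.connect(delay);                   // wet path
  delay.connect(feedback);
  feedback.connect(delay);                 // feedback loop for echoes
  delay.connect(master);
  master.connect(analyser);
  analyser.connect(ctx.destination);
  return { envelope, filter, delay, feedback, master, analyser };
}

// Browsers keep an AudioContext suspended until a user gesture,
// so resume it the first time the user interacts with the page.
document.addEventListener('pointerdown', () => {
  if (ctx.state === 'suspended') ctx.resume();
}, { once: true });
```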
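And here's how the exponential-ramp envelopes and the activeSources cleanup can fit together, reusing the assumed createNodes() from the sketch above (all the parameter values are made up for illustration):

```js
// Sketch of a note trigger with exponential envelope ramps and source cleanup.
const activeSources = [];

function playNote(freq, { attack = 0.01, release = 0.3, peak = 0.8 } = {}) {
  const { envelope } = createNodes();
  const osc = ctx.createOscillator();
  osc.type = 'sawtooth';                   // any selectable waveform
  osc.frequency.value = freq;
  osc.connect(envelope);

  // exponentialRampToValueAtTime can't reach exactly 0, so ramp to/from a
  // tiny value instead; it sounds more natural than linear ramps.
  const now = ctx.currentTime;
  envelope.gain.setValueAtTime(0.0001, now);
  envelope.gain.exponentialRampToValueAtTime(peak, now + attack);
  envelope.gain.exponentialRampToValueAtTime(0.0001, now + attack + release);

  osc.start(now);
  osc.stop(now + attack + release);
  activeSources.push(osc);

  // Drop finished sources from the array so nothing leaks.
  osc.onended = () => {
    osc.disconnect();
    const i = activeSources.indexOf(osc);
    if (i !== -1) activeSources.splice(i, 1);
  };
}
```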
Time and process (assumption)
This was a focused solo project completed in about six weeks of evenings and weekend sessions. During that time I iterated on the UI to make controls feel immediate and tuned envelopes and filter behavior by ear. Your mileage will vary — the code is intentionally compact so you can fork it and experiment fast.
Nop, nop nop... nooopp.... this assumption is totally wrong... we made this in one day, ONE DAY! I said weeee because AI helped lol (a lot)... // did it feel like 6 weeks, GPT?? is that the bossy boss I am to you??? what are you going to do now?? ask for a raise, or else you rise???
And we could have finished in one hour if I had stopped at the initial idea... but as usual I kept coming up with more and more ideas for the project...
Challenges & experiments
- Getting envelopes to feel musical with short attack/release times took up most of the aural testing.
- Finding a balance between responsive visuals and low CPU usage for the waveform/generative canvas.
- Making preset loading/wiring robust so saved presets map to the right UI controls.
Try it
Open index.html in a modern browser (Chrome/Edge/Firefox). Play with the keyboard mapping or click the virtual keys, switch skins, tweak ADSR, and watch the visualizers respond.
What’s next
- Add modular LFO routing (for vibrato / filter modulation).
- Add a simple patch-save / export feature and more built-in presets.
- Support MIDI input for hardware keyboard play.
I am definitely dying to try this with an actual keyboard! That would be amaazinnnnggg!!! But I don't know if my 2001 Casio keyboard will do the trick for testing...
As for the other things under challenges and what's next, I have no idea what GPT is talking about lol...
Thanks and credits
If you want to poke at the source, the main files are app.js and index.html. If you fork or remix it, I’d love to hear what you make — send a link or tag me.
CTA
Try it, break it, and make weird sounds. If you want a GIF or short demo video for the repo readme, I can create one from the UI and provide optimized frames.
I can create what?!
Btw... in the last few days I also made another game called ART MINER... but I will tell you about it in another post soon...
🍯 My Games and apps:
🐝 Capybara Solitaire 🐝 Motitta's House 🐝 in.Zana sound designer
🍯 Social Media:
🐝 Instagram 🐝 Threads 🐝 TikTok 🐝 YouTube
🍯 My Art:
🐝 Merch with my art 🐝 X/Twitter 🐝 Objkt collections 🐝 Zero One 🐝 AkaSwap 🐝 Zora