Cool, I like the split keyboard feature.
I've not really thought about a license, tbh; I'll look into which one would be best and add a note about that. I don't really mind what anyone does with it, so maybe MIT would be the one.
Cool. Great to meet you virtually, btw. Your web audio synth is awesome. Adding the oscilloscope made me appreciate how your different patches generate their unique sounds. And I added a few preset patches I liked too.
Because of the mapping of the typing keyboard to musical notes. F would represent E#, which doesn't exist on a piano. H should work, though.
I found a pic that explains the keyboard layout better than I could:
Since it looks like we're all sharing our web-based synthesizers, here's an FM synthesizer I built earlier this year with Rust + Wasm (with SIMD): https://notes.ameo.design/fm.html
Gets a bit crazy with an n-key rollover keyboard, hitting 31 notes simultaneously! But what happened to the F♯ that I expected on the = key, between [’s F and ]’s G?
(Also, since I'm writing something already: you're serving your site on both HTTP and HTTPS, and only some URLs are doing the appropriate 301 HTTP → HTTPS redirect, presumably because it's being done at the Wordpress/PHP level, so any direct file references handled directly by Apache don't get redirected. You should redirect all URLs, and ideally add an HSTS header.)
Not sure what went wrong, and I don't know the exact measurements, but your synth has huge input or output latency, maybe around 100ms or so. Other WASM/web synths I've tried don't suffer from the same problem.
Otherwise it's a nifty little toy, thanks for sharing :) Lots of fun.
Yeah, some browsers/operating systems have worse latency issues than others with Web Audio. However, it's possible to tune the `latencyHint` a bit when constructing the `AudioContext` to specify a custom value. You have to find a balance between too low, where you get buffer underruns and miss samples, and too high, where it adds noticeable delay.
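Rough sketch of what I mean (the value here is made up; the browser treats the hint as a request, not a guarantee):

```js
// Ask for a specific base latency in seconds instead of the 'interactive' default.
const ctx = new AudioContext({ latencyHint: 0.02 }); // ~20 ms requested

// Check what the browser actually gave you (outputLatency isn't supported everywhere).
console.log(ctx.baseLatency, ctx.outputLatency);
```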
This is doing it the hard way if you just want to generate a sine wave, because WebAudio’s OscillatorNode [1] will do it for you, no WebAssembly required. It likely works in more browsers too.
You can also use setPeriodicWave() to have it iterate over any sampled waveform you like. There is enough in the JavaScript API to fool around with basic subtractive synthesis where you connect an oscillator to a filter.
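Something like this is the whole "hello world" of it (untested sketch; the parameter values are arbitrary):

```js
const ctx = new AudioContext();

const osc = ctx.createOscillator();
osc.type = 'sawtooth';           // 'sine' if you only want the sine wave
osc.frequency.value = 220;

const filter = ctx.createBiquadFilter();
filter.type = 'lowpass';         // subtractive synthesis: cut harmonics from a rich wave
filter.frequency.value = 800;

osc.connect(filter).connect(ctx.destination);
osc.start();
```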
I also recommend Syntorial [2] for understanding what subtractive synthesis is capable of. It won’t help you synthesize real instruments, but you learn what the various knobs on synthesizers do by trying to reproduce increasingly sophisticated synthesizer sounds.
I think you provided a great guide for people looking to do the same; I appreciate the write-up! Real-time audio is a tricky thing to get into, and even more so in the browser; starting with a sine wave here is a great launch point for more experiments.
Is there a way to supply a PeriodicWave with a specific wavetable or series of floating-point samples to represent the waveform? It appears from the docs that PeriodicWave is a way to generate waveforms by shaping sine waves.
One problem I've had in the past trying to do audio on the web is that I struggled to find any reference material on how to generate my own waveforms and rely on the browser APIs only for playback. This tutorial, with the AudioWorklet, is the first piece I've seen on how to do this easily.
Oops, I misremembered. For looping a wavetable, you'd want to use AudioBufferSourceNode [1].
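Roughly like this (untested; `singleCycle` is a stand-in for a Float32Array holding one period of your waveform):

```js
const ctx = new AudioContext();

// Copy the single-cycle table into an AudioBuffer.
const buffer = ctx.createBuffer(1, singleCycle.length, ctx.sampleRate);
buffer.copyToChannel(singleCycle, 0);

const src = ctx.createBufferSource();
src.buffer = buffer;
src.loop = true;
// At playbackRate 1 the loop plays at sampleRate / tableLength Hz,
// so scale it to get the pitch you want (440 Hz here).
src.playbackRate.value = 440 * singleCycle.length / ctx.sampleRate;
src.connect(ctx.destination);
src.start();
```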
If it's a static waveform, you could also do a Fast Fourier Transform and hard-code the resulting table into your program as a PeriodicWave. Apparently, this is how the predefined waveforms of OscillatorNode work. [2]
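That route looks something like this (made-up coefficients approximating a square wave; the real/imag arrays are the cosine/sine amplitudes per harmonic, and index 0 is ignored):

```js
const ctx = new AudioContext();
const real = new Float32Array([0, 0, 0, 0, 0, 0]);
const imag = new Float32Array([0, 1, 0, 1 / 3, 0, 1 / 5]);

const osc = ctx.createOscillator();
osc.setPeriodicWave(ctx.createPeriodicWave(real, imag));
osc.frequency.value = 220;
osc.connect(ctx.destination);
osc.start();
```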
Generally speaking, you need to read about all the different kinds of nodes that WebAudio gives you and wire them together, sort of like you would with a modular synthesizer. For experimenting with audio synthesis, I'd recommend trying out Rack [3] to figure out what kinds of sounds you want to make, then coding it in JavaScript if you can find equivalent nodes. (Rack is going to have much more capable modules, though.)
This probably isn't much help, but if you've got an arbitrary waveform you can do a Fourier transform on it to decompose it into a list of additive sine waves (each wave parameterized by amplitude and frequency, ignoring phase), and then play all those sine waves together to make your sound. I have no idea about the implementation details here (I can imagine that this might get computationally intensive!) but in theory this should work.
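A minimal sketch of that idea (the partial list is invented; using one OscillatorNode per partial also lets the partials be inharmonic, which a single PeriodicWave can't do):

```js
const ctx = new AudioContext();

// Pretend these came out of an FFT of the source waveform.
const partials = [
  { freq: 220, amp: 1.0 },
  { freq: 663, amp: 0.3 },   // slightly stretched, i.e. not an exact harmonic
  { freq: 1105, amp: 0.15 },
];

for (const { freq, amp } of partials) {
  const osc = ctx.createOscillator();  // defaults to a sine
  const gain = ctx.createGain();
  osc.frequency.value = freq;
  gain.gain.value = amp;
  osc.connect(gain).connect(ctx.destination);
  osc.start();
}
```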
If this stuff interests you, have a look at Pianoteq. They take this to a whole new level: through physical modelling they get extremely close to generating realistic-sounding pianos.
Interesting one! Back when I played piano, I ended up buying myself 2 ADAM A7X monitors and played with Ableton's Grand Piano. I also downloaded another library that let me use sampled grand piano sounds, which was supposed to improve the sound quality.
It, however, never ended up sounding quite as good as my brother's simple 300€ Yamaha keyboard.
Even after having trained my ears for many months, I still don't know why the piano sounds from my speakers just weren't as pleasing as what my bro's Yamaha produced.
The modern Yamaha keyboards use samples from real pianos. In mine there are both a Yamaha and a Bösendorfer, and both sound quite good. But being sample-based, they are essentially just playing back recorded sounds.
Pianoteq generates the sounds using nothing but a bit of software and it is really most impressive.
I've never been able to work out if Pianoteq uses physical modelling - modelling the strings and soundboard as the solutions of differential equations - or spectral modelling - overtone resynthesis, which is rooted in sampling but reassembles the harmonics in samples dynamically instead of playing back a fixed sample series at different rates.
I suspect it's the latter, because there's a hint of detail missing in the way the overtones move.
The wikipedia article[1] says it's "Fourier construction" but without reference (that I can find) and without elaboration. At their website[2] they list some of their staff; I looked up research by one of their researchers and found a paper "Modeling and simulation of a grand piano"[3] which looks quite heavy on the physical modeling of strings and soundboard. I'd expect that to work better than spectral modeling because I think the latter would introduce (too much?) latency via needing to collect an entire spectral window (plus extra computation to compute the phases, and even then I don't think it could sound good enough?). Whereas physical modeling works directly in the time domain and there's a wealth of literature around it. See e.g. J. O. Smith III's waveguide synthesis work[4].
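For anyone curious what the very bottom rung of that physical-modeling ladder looks like, Karplus–Strong plucked-string synthesis is only a few lines (this is emphatically not what Pianoteq does, just the textbook toy model from the waveguide family):

```js
// Fill a delay line (≈ one period long) with noise, then repeatedly read it back
// while lowpass-filtering the loop; the noise burst decays into a pitched pluck.
function pluck(freq, seconds, sampleRate = 44100) {
  const n = Math.round(sampleRate / freq);
  const delay = Float32Array.from({ length: n }, () => Math.random() * 2 - 1);
  const out = new Float32Array(Math.round(seconds * sampleRate));
  for (let i = 0, j = 0; i < out.length; i++, j = (j + 1) % n) {
    out[i] = delay[j];
    delay[j] = 0.996 * 0.5 * (delay[j] + delay[(j + 1) % n]); // loop filter + decay
  }
  return out; // copy into an AudioBuffer to hear it
}
```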
This problem is similar to the one faced by photographers: why do in-camera shots look so much better from one camera vs another? A big reason these days is whether the camera's default software post-processing matches your taste. I bet with a little fiddling (particularly with things like EQ, compression, reverb, maybe resonance, all of which Live has in abundance!) you could find a great sound. It's just easier because Yamaha has great taste in sound. (One day I swear I will get a C5!)
The other thing that affects your experience, and that doesn't have anything to do with timbre, is latency. Ableton through a USB audio interface into monitors is going to take (much) longer than the onboard sound generation + speakers on a digital piano. It's going to add at least 20ms plus whatever time is required to compute the sound. Meanwhile any cheap digital piano is going to do better than that.
I agree that unchecked latency (no matter how imperceptible) can sour our perception of an instrument's playability. That said, latency of 3ms is very attainable as long as your interface isn't a decade old. That's a very usable speed.
Software instruments are usually about as quick as you'd ever need them to be (no latency). So just make sure your audio buffer is set low (32 - 128 samples) and that you're not doing any heavy DSP processing that's going to add extra latency.
It takes some vigilance to do and it can be a pain in the ass to manage when you're trying to play the instrument in a CPU intensive session, but if you do it right you'll only get latency at the output stage (3ms).
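For concreteness, the buffer math (assuming a 44.1 kHz session): 32 samples / 44100 Hz ≈ 0.7 ms and 128 / 44100 ≈ 2.9 ms, which is where that ~3 ms output figure comes from; the interface's converters and driver add a bit on top.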
I have Live 11 on macOS 11.2.3 on an Intel 2.3GHz MBP, and Live states output latency as 13ms for a 64-sample buffer at a 44.1kHz sample rate. Apparently latency is reported by the audio interface driver itself, rather than measured, because Live has a parameter to override that latency, including with negative values (using time travel?). It's something I should fiddle with, for sure. It would be nice to have playable virtual instruments!
Negative values typically offset things like MIDI output to a point earlier than when they would normally be sent, so this only works for items that have a timestamp. A soft-synth, for instance, could be driven that way to anticipate a longer delay further down the chain, so that the overall delay is more neutral once the whole chain has been traversed.
I guess the Yamaha has dedicated DSP; it might also have some analog circuitry as part of the signal path. Both of these will change the quality of the sound coming out, potentially for the better. Also, for some reason, downsampling to 12-bit sometimes gives a pleasing character, as can be heard on some of the hardware samplers from the 90s. As for the sounds actually on the keyboard, it's possible they were recorded from a different piano, with different mic placement and a different mic preamp than the Ableton sounds, all of which could lead to a nicer sound being heard.
I haven't used their product, but the examples of all the effects they model impressed me quite a bit.
Unfortunately it is not possible to link to the examples page directly. To see what I mean, click on the Fine details of sound tab here:
https://www.modartt.com/pianoteq#acoustic
Pianoteq is great, and they have some really classic keyboards in their collection.
I really wish there were more open source physical modeling synths out there. I'd love to play with physical modeling code, but it's just not that common.
A small note on your LaTeX use: if you use \sin, the "sin" will be set upright (instead of in italics), which is considered standard for math operators (other operators off the top of my head that work the same way: tan, cos, log, dim, deg).
If you encounter an operator for which the corresponding command with \ does not exist, you can create it yourself with:
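```latex
% amsmath provides \DeclareMathOperator (put this in the preamble):
\DeclareMathOperator{\sinc}{sinc}   % now $\sinc(x)$ is set upright like \sin
% or, for a one-off inside math mode:
\operatorname{sinc}(x)
```

(The operator name "sinc" is just an example.)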
I had the opposite experience (based on the title): I read Gödel, Escher, Bach because I thought programming and math were really interesting, and it ended up getting me into music.
To answer the question in footnote 1, off the top of my head: setTimeout and setInterval have no timing guarantees because in the browser window/tab context there's no guarantee that they're active/visible. It doesn't make sense for a UI event to fire (their intended use) if the UI is hidden/inactive, and timing guarantees are therefore deferred to specific implementations.
requestAnimationFrame does make those timing guarantees.
RAF is still imprecise for audio use. For audio in the browser, you usually want to defer any timing events that must be precise to a scheduler that uses the Web Audio context time, or, even better, a lookahead scheduler that schedules events per audio frame using an internal counter and the sample rate.
A great programming exercise that forces you to wrap your head around JavaScript timing events is writing a small step sequencer that can be used while it is running with minimal latency.
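Something like this is the usual shape of it (sketch only; `playStep` is a hypothetical function that starts a voice at the given AudioContext time):

```js
const ctx = new AudioContext();
const lookahead = 0.1;               // seconds of audio to schedule ahead
const tickMs = 25;                   // how often the (imprecise) JS timer wakes up
const secondsPerStep = 60 / 120 / 4; // 16th notes at 120 BPM
let nextNoteTime = ctx.currentTime;

function scheduler() {
  // Schedule every step that falls inside the lookahead window on the audio clock.
  while (nextNoteTime < ctx.currentTime + lookahead) {
    playStep(nextNoteTime);
    nextNoteTime += secondsPerStep;
  }
  setTimeout(scheduler, tickMs);     // jitter here doesn't matter, only the window size does
}
scheduler();
```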
Interesting point, thank you. I was reading the docs today and encountered the following (presented fyi):
It turns out that this is currently true [0], because the base audio context (BAC) time is accurate to within 20 µs, whereas RAF is only accurate to within 1 ms [1]. However, in future browser versions the default precision of the BAC clock is going to be coarsened to 2 ms. This is done to help improve user privacy by hampering browser fingerprinting.
This is all configurable and its materiality depends on use case, but interesting read: thank you.
- Nested timeouts
> As specified in the HTML standard, browsers will enforce a minimum timeout of 4 milliseconds once a nested call to setTimeout has been scheduled 5 times.
- Timeouts in inactive tabs
> To reduce the load (and associated battery usage) from background tabs, browsers will enforce a minimum timeout delay in inactive tabs. It may also be waived if a page is playing sound using a Web Audio API AudioContext.
> The specifics of this are browser-dependent.
- Throttling of tracking scripts
> Firefox enforces additional throttling for scripts that it recognises as tracking scripts. When running in the foreground, the throttling minimum delay is still 4ms. In background tabs, however, the throttling minimum delay is 10,000 ms, or 10 seconds.
- Late timeouts
> The timeout can also fire later than expected if the page (or the OS/browser) is busy with other tasks.
- Deferral of timeouts during pageload
> Firefox will defer firing setTimeout() timers while the current tab is loading.
I thought they weren't accurate because the callback is only added to the task queue once the timer expires, which means it may have to wait for the call stack to clear before it actually runs.
In a default installation, as far as I am aware, Emscripten uses Clang as its compiler, while the code at the bottom implies it uses GCC. (To support existing build pipelines, it attempts to recognize arguments for either compiler.) Is this in error, or can Emscripten be configured to use GCC?
OP here. Hey, if you have a source on your statement I'm happy to change the blog post towards what is correct.
It may be that I assumed Emscripten uses GCC, but since it's been over a year since I wrote about it, I'm not sure anymore, so I default to trusting what I wrote. Happy to change it given a source.
It appears to be noted in the readme for the repo [0]: "Emscripten compiles C and C++ to WebAssembly using LLVM and Binaryen." A 32-bit Clang is present in my own emsdk installation.
Synth: http://www.errozero.co.uk/stuff/poly/ Source: https://github.com/errozero/poly-synth
Works best in Chrome.