Similar to this, a while ago I made this online playground (Lambda Musika) where you can program sound in real time in your browser (using JS) in a functional-ish way:
The basic idea is you write a function `t => [l, r]` where `t` is time and `l`, `r` are the output samples for the left and right channels in the `[-1, 1]` range. You can think of it like ShaderToy but for sound synthesis.
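For example, something along these lines gives you a stereo tone with a slow tremolo (a rough sketch just to show the shape of it, with `t` taken to be in seconds and `dsp` being an arbitrary name, not part of any API):

```js
// Toy sketch: a 220 Hz tone on the left, a fifth above (330 Hz) on the right,
// both with a slow 4 Hz tremolo. Assumes `t` is in seconds.
const TAU = 2 * Math.PI;

const dsp = t => {
  const tremolo = 0.5 + 0.5 * Math.sin(TAU * 4 * t); // amplitude wobble, stays in [0, 1]
  const l = Math.sin(TAU * 220 * t) * tremolo;       // left channel
  const r = Math.sin(TAU * 330 * t) * tremolo;       // right channel, a perfect fifth up
  return [l, r];                                     // both samples stay within [-1, 1]
};
```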
It includes a small utility library, but it's meant to be just a few helper functions rather than a full-fledged framework like SuperCollider, Sonic Pi, et al. In other words, it's still sample-oriented instead of module-oriented: in Sonic Pi you script modules, their parameters, and how they connect to each other, while Lambda Musika is all about outputting the samples of a waveform directly.
It's very barebones -- I'd love to get some time to upgrade it to the Monaco editor and add TS, intellisense, etc. -- and possibly buggy, but I still find myself coming back to it from time to time to have some fun.
There's also stuff like Sonic Pi (https://sonic-pi.net/) and most things live-coding related, but I've found that I don't really like that approach, even though I love synths and programming. For some reason I don't think they go together well. But some people are really good at it, and it's fascinating.
Yes, I feel the same way. Then again, I started making music to get away from the computer rather than to find even more things to spend time on with it, so maybe that's why.
There are many, many more languages for playing around with audio (and video) synthesis than that. The domain is typically called livecoding. Here's a good list of languages for that: