It took some time, but we finally got Kokoro TTS (v1.0) running in-browser w/ WebGPU acceleration! This enables real-time text-to-speech without the need for a server. Looking forward to your feedback!
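For anyone wanting to try it from JS, usage looks roughly like this. This is a sketch based on the kokoro-js README; the exact dtype options and the "af_heart" voice name are assumptions if you're on a different version:

    import { KokoroTTS } from "kokoro-js";

    // Load the ONNX export of Kokoro; device: "webgpu" runs inference on
    // the GPU ("wasm" is the CPU fallback in Transformers.js).
    const tts = await KokoroTTS.from_pretrained(
      "onnx-community/Kokoro-82M-v1.0-ONNX",
      { dtype: "q8", device: "webgpu" },
    );

    // "af_heart" is one of the bundled voices (assumed present in v1.0).
    const audio = await tts.generate("Hello from the browser!", {
      voice: "af_heart",
    });
    audio.save("audio.wav"); // Node; in the browser, the returned audio object can be turned into a Blob for an <audio> element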
This is brilliant. All we need now is for someone to code a frontend for it so we can input an article's URL and have this voice read it out loud. The built-in local voices on macOS are not even close to this Kokoro model.
Yes, I am saying they might add TTS features alongside their current STT feature set. It seems like many of these sorts of apps are looking to add both to become more full-fledged.
Incredible work! I have listened to several TTS systems, and having one that's free and completely under the user's control is remarkable. This will unlock new use cases.
Brilliant job! Love how fast it is. If the rapid pace of speech ML continues, I'm sure we'll soon have speech-to-speech models running directly in our browsers!
Kokoro gives pretty good voices and is quite light, which makes it useful despite its lack of voice cloning. However, I haven't figured out how to run it as a TTS server without homebrewing the server... which maybe is easy? IDK.
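For what it's worth, homebrewing one might not be much more than this. An untested sketch, assuming the kokoro-js API; the toWav() helper on the returned audio object is also an assumption:

    import http from "node:http";
    import { KokoroTTS } from "kokoro-js";

    const tts = await KokoroTTS.from_pretrained(
      "onnx-community/Kokoro-82M-v1.0-ONNX",
      { dtype: "q8" },
    );

    // POST plain text to the server, get a WAV response back.
    http.createServer(async (req, res) => {
      let text = "";
      for await (const chunk of req) text += chunk;
      const audio = await tts.generate(text, { voice: "af_heart" });
      res.writeHead(200, { "Content-Type": "audio/wav" });
      res.end(Buffer.from(audio.toWav())); // toWav() assumed to return WAV-encoded bytes
    }).listen(5002);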
Fantastic work. My dream would be to use this for a browser audiobook generator for EPUBs. I made a CLI audiobook generator with Piper [0] that got some traction, and I wanted to port it to the browser, but there were too many issues. [1]
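If someone picks this up: the EPUB side is mostly unzipping and stripping XHTML. A rough browser-side sketch, assuming JSZip for decompression (chapter discovery via the OPF manifest is elided here):

    import JSZip from "jszip";

    // An EPUB is a zip archive of XHTML chapters; pull one chapter's plain text.
    async function chapterText(epub: ArrayBuffer, path: string): Promise<string> {
      const zip = await JSZip.loadAsync(epub);
      const xhtml = await zip.file(path)!.async("string");
      // EPUB chapters are XHTML documents; strip the markup.
      const doc = new DOMParser().parseFromString(xhtml, "application/xhtml+xml");
      return doc.body?.textContent ?? "";
    }

Each chapter's text could then be chunked by sentence and fed to the in-browser TTS model.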
Is the source available anywhere? It seems the assets/ folder is just bundled JS.
In my opinion, there's a ton of opportunity for private, progressive web apps built on this while WebGPU support is still relatively new.
Would love to collaborate in some way if others are also interested in this.
The fact that everyone in the space cares only about (and therefore tests only on) Nvidia/CUDA, as suggested in the GP, is exactly why a bug that seriously impacts results but only affects AMD GPUs could easily slip into released software.
The WebGPU version actually generates the speech entirely in the browser. The Web Speech API is great too, but it's less practical if the model you want is complicated to set up and integrate with the host's speech API.
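For comparison, the Web Speech API is basically a one-liner, but the voice comes from whatever the host exposes:

    // Standard Web Speech API: no model download, but voice quality and
    // privacy depend entirely on the browser/OS implementation.
    const utterance = new SpeechSynthesisUtterance("Read aloud by the host's TTS engine.");
    const voices = speechSynthesis.getVoices(); // may be empty until the voiceschanged event fires
    if (voices.length > 0) utterance.voice = voices[0];
    speechSynthesis.speak(utterance);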
Implementations of the Web Speech API usually involve the browser vendor calling out to its own proprietary, cloud-based TTS API. I say "usually" because, for a time, Microsoft used the local Windows Speech API in Edge, but I believe they've stopped that and have largely deprecated Windows Speech in favor of Azure Speech even at the OS level.
Just to be clear, are you really saying that text-to-speech on Windows is server-hosted and not on-device?
You could do text-to-speech on a 1 MHz Apple //e using its 1-bit speaker back in the '80s (Software Automatic Mouth), and MacinTalk was built into the Mac in 1984. I know TTS is built into both Macs and iOS devices and runs offline.
But I do see how cross-platform browsers like Firefox would want a built-in solution that doesn't depend on the OS vendor.
If the application is still using the deprecated Microsoft Speech API (SAPI), it's being done locally, but that API hasn't received updates in like a decade and the output is considerably lower quality than what people expect to hear today.
Firefox on Windows is one such application that still uses SAPI. I don't know what it uses on other operating systems. On Android, I imagine it uses whatever the built-in OS TTS API is, which likely goes through Google Cloud.
But anything that sounds at all natural, from any of the OS or browser vendors, is going through some cloud TTS API now.
Any luck getting this running in Safari on iOS 18.2.1? I have the WebGPU feature flag turned on (Settings -> Safari -> Advanced), and I've successfully tried a few other WebGPU demos.
Generating audio takes a bit, but wow, a 92 MB model for really decent-sounding speech. Is there a way to plug this into Speech Dispatcher on Linux and use it for accessibility?