Do you work with blind developers on building this at all? I know HN has had a few big threads discussing workflows for blind programmers, and I feel like this could potentially be a part of that, assuming stutter could be interpreted into another language?
Currently I'm the only one working on the project. I know there are probably a lot of potential applications for the language, but I haven't written a specification yet, nor a working parser or interpreter.
My plan so far is to make it embeddable, with a voice recording demo, so that it can run inside a web browser using the browser's voice recognition APIs.
The idea behind the language is that it embeds native data types and keeps the grammar as predictable as possible, so that recognition becomes more fail-safe.
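Roughly what I have in mind, sketched in TypeScript against the browser's Web Speech API. The command phrases and the `interpret` function below are just placeholder examples to show the approach, not the actual grammar:

```typescript
// Sketch: feed browser speech recognition into a small, predictable
// command grammar. The phrases and handlers here are hypothetical.

// Each spoken command starts with exactly one fixed phrase, so the
// recognizer has fewer ways to mishear it.
const commands: Record<string, (arg: string) => string> = {
  "define variable": (name) => `let ${name};`,
  "print value": (name) => `console.log(${name});`,
};

function interpret(transcript: string): string | null {
  const text = transcript.toLowerCase().trim();
  for (const [phrase, emit] of Object.entries(commands)) {
    if (text.startsWith(phrase)) {
      return emit(text.slice(phrase.length).trim());
    }
  }
  return null; // not part of the grammar; ask the user to repeat
}

// Wire it up to the browser's recognition API (vendor-prefixed in some browsers).
const SpeechRecognitionImpl =
  (window as any).SpeechRecognition || (window as any).webkitSpeechRecognition;
const recognition = new SpeechRecognitionImpl();
recognition.lang = "en-US";
recognition.onresult = (event: any) => {
  const transcript: string = event.results[0][0].transcript;
  const code = interpret(transcript);
  console.log(code ?? `Unrecognized command: "${transcript}"`);
};
recognition.start();
```

The point of the fixed leading phrases is that a mismatch can be rejected outright instead of being silently turned into the wrong code, which is where a predictable grammar helps the recognition step.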