The next time someone tries to tell me that a true screen reader should use computer vision and machine learning (including OCR) rather than requiring applications to implement accessibility APIs, I will bring up this case.
"Why can't we just, you know, direct blind users to a special protocol that structures the data appropriately and then lets them parse it however they want?"
Me: "We did! It's called HTML! Designers just broke it!"
IMO, HTML is still closer to that ideal than anything else we have. My guess is that given a random web application and a random non-web GUI (especially if the latter is multi-platform), the web application will be more usable with a screen reader.
I'd say Markdown is even better than HTML for writing generic documents, since it enforces simplicity. In particular, it forces a linear flow through the document and has no support for stuff like JS.
Is it possible for a developer to make a canvas accessible? For example, have it read out something like "You are on a road that goes from left to right; there is a shop above you and an inn below you", like in a MUD.
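Roughly what I mean, as an untested sketch (the id and wording are made up): give the canvas role="img" and stuff the whole scene into an aria-label, so the screen reader announces it as one blurb.

    <canvas id="game-map" role="img"
            aria-label="You are on a road running left to right; there is a shop above you and an inn below.">
    </canvas>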
Real accessibility is about presenting the same information your other users have. So, instead of you typing the description, let each of the drawn objects have its own description, and make them discoverable and navigable. I think Google was trying to make Flutter 2 components accessible, but it means starting from ground zero and building the same stuff anew.
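For an HTML canvas specifically, the usual trick is the fallback subtree: whatever you nest inside the <canvas> element is what assistive technology sees, so you mirror each drawn object as its own focusable, labelled element and keep it in sync with what you draw. A rough, untested sketch (ids and wording made up):

    <canvas id="game-map" width="640" height="480">
      <!-- Screen readers read this subtree instead of the pixels. -->
      <p>Village map.</p>
      <button id="obj-road">Road, running from left to right</button>
      <button id="obj-shop">Shop, just north of you</button>
      <button id="obj-inn">Inn, just south of you</button>
    </canvas>

Each of those buttons is discoverable and tabbable on its own, and when one gets keyboard focus you can draw a matching focus ring on the canvas (drawFocusIfNeeded() exists for exactly that).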
HTML could have been that - or rather, it was at first - but instead of creating a more specialized solution for running rich apps, we decided to exploit HTML.
Right now we are in what I'd call the worst of both worlds: we rely on HTML to do things it wasn't designed for, and there's no longer any purity in the HTML out in the wild.