Just curious, but where do you draw the line? To use a silly example: we don't legally require that everyone who posts an image on social media include a written description. There must be some ratio of cost to benefit at which accommodations stop being reasonable.
If we required that screen sharing tools were compatible with screen readers, we'd have to revamp many layers of abstractions. It would require changes to every operating system, every UI framework, every browser, and every screen sharing application. An alternative would be to throw a bunch of machine learning at the problem (to try to turn pixels back into meaning), but that would have a lot of broken corner cases. The issues would likely be as bad as auto-generated subtitles, which are generally not good enough to be considered ADA compliant.[1]
My guess is that if the law changed tomorrow and mandated that screen sharing tools accommodate the blind, we'd end up with no cross-platform screen sharing tools. Microsoft would make their Windows screen sharing. Apple would make their macOS screen sharing. Google would make their ChromeOS screen sharing, and none of them would be interoperable. Also desktop Linux would be SOL.
> My guess is that if the law changed tomorrow and mandated that screen sharing tools accommodate the blind, we'd end up with no cross-platform screen sharing tools.
Solving this problem in a cross-platform way is hard, but not impossible, especially for a company as well-funded as Zoom. And yes, I have ideas about how it could be done, though like my suggestion about the Chromium accessibility tree, they're not necessarily fully baked.
> we don't legally require every that everyone who posts an image on social media include a written description
Not that it takes too much away from your point, but I've experienced an interesting gap in this example. While not legally required, big chunks of the short-form-text fediverse (Mastodon/Pleroma/…) have had circulating posts recommending descriptive text for image posts, and I'm actually surprised by how many people get into the habit of complying naturally—perhaps because there's also an easily-noticeable slot in the UI for it? Ten or so years ago I remember it being like pulling teeth to explain to some people doing media projects on the Web that this kind of accessibility was important, and now with what seems to be culturally a similar crowd… huh, y'know?
> If we required that screen sharing tools were compatible with screen readers, we'd have to revamp many layers of abstractions. It would require changes to every operating system, every UI framework, every browser, and every screen sharing application.
Why?
You're basically putting half the screen reader on each side of the screen sharing tool. This requires a significant number of changes to the screen sharing tool, but shouldn't require changing anything else.
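To make the "half on each side" idea concrete, here's a minimal sketch of what the screen sharing tool would carry: the presenter's side captures an accessibility tree from the platform API, serializes it, and the viewer's side rebuilds it for the local screen reader. Everything here (the node shape, the JSON wire format) is a hypothetical illustration, not any real tool's protocol:

```python
import json
from dataclasses import dataclass, field, asdict
from typing import List

# Hypothetical accessibility-tree node: role, label, and on-screen bounds,
# as a presenter-side capture from the platform accessibility API might look.
@dataclass
class A11yNode:
    role: str
    label: str
    bounds: tuple  # (x, y, width, height) in shared-screen pixels
    children: List["A11yNode"] = field(default_factory=list)

def serialize(root: A11yNode) -> bytes:
    """Presenter side: flatten the tree to JSON, sent alongside the video."""
    return json.dumps(asdict(root)).encode("utf-8")

def deserialize(payload: bytes) -> A11yNode:
    """Viewer side: rebuild the tree to hand to the local screen reader."""
    def build(d: dict) -> A11yNode:
        return A11yNode(d["role"], d["label"], tuple(d["bounds"]),
                        [build(c) for c in d["children"]])
    return build(json.loads(payload.decode("utf-8")))

# Example: a shared window containing one button.
shared = A11yNode("window", "Quarterly Report", (0, 0, 1280, 720),
                  [A11yNode("button", "Export PDF", (40, 600, 120, 32))])
received = deserialize(serialize(shared))
print(received.children[0].role, received.children[0].label)
```

The point being: the screen sharing tool only needs to read the accessibility API each OS already exposes and replay it on the other end, rather than every OS, framework, and browser changing.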
> UC Berkeley was forced to delete over 20,000 videos of lectures because their auto-generated subtitles weren't accurate enough
Compared to the effort of setting up all those courses, captioning services are really minor. I feel like they should have just fixed that. According to the document they even have an internal unit specifically for doing this.
When it comes to the complaints that the presentations themselves were done wrong, that seems more like a situation where "fix it or delete it" is a problem.
> Compared to the effort of setting up all those courses, captioning services are really minor.
Setting up those courses is how the university makes its money. The university exists for its students, current and former, and not necessarily the general public.
Spending money on transcription services, on the other hand, would not have benefited their students, who are already accommodated regarding accessibility in compliance with the law. Making lecture videos publicly available online doesn't exactly help students either, but there are plenty of good reasons to record lectures for students (if they miss a lecture or want to review), and beyond the initial cost of setting up a camera, recording is inexpensive. And once you're recording them anyway, it doesn't really hurt to make those lectures available online.
Meanwhile, captioning is expensive. It's not a trivial fixed cost: standard rates run $1.50-3.00 per minute of audio ($90-180 an hour), and that's before accounting for the other transcription problems, including (but not limited to):
- technical vocabulary many people may not understand
- professors who are not native English speakers and speak with heavy accents
- students positioned far from the microphone who ask questions during the lecture
And for what? If they have a deaf or hard-of-hearing student, they can accommodate that student's specific classes, but otherwise it's an extremely expensive proposition to caption every single class whether or not any student needs it, plus all the previously recorded lectures on top of that. Obviously in this case taking down the lectures was the rational thing to do, especially considering that people were going to download and mirror them afterwards anyway, so it wasn't as if the lectures would be lost to the public.
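To put those rates in perspective, a back-of-the-envelope estimate for the roughly 20,000 videos (the ~1 hour average lecture length is my assumption, not a figure from the case; the rates are the ones quoted above):

```python
# Rough cost of captioning the whole archive at quoted market rates.
videos = 20_000
avg_minutes = 60                  # assumed average lecture length
rate_low, rate_high = 1.50, 3.00  # dollars per minute of audio

low = videos * avg_minutes * rate_low
high = videos * avg_minutes * rate_high
print(f"${low:,.0f} - ${high:,.0f}")  # $1,800,000 - $3,600,000
```

Even under these rough assumptions, captioning the back catalog lands in the millions of dollars, which is the scale the "just fix it" suggestion has to contend with.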
1. UC Berkeley was forced to delete over 20,000 videos of lectures because their auto-generated subtitles weren't accurate enough: https://news.berkeley.edu/wp-content/uploads/2016/09/2016-08...