Hacker News
I am no longer able to use Google with Lynx (blind.guru)
575 points by mlang23 on Nov 25, 2019 | 318 comments



Hi Mario,

I am an engineer on Google Search frontend.

Thank you for posting this, and I'm really sorry and saddened to see this broken; this is certainly not intended behavior.

I've reproduced the issue you described in the blog (Lynx does not allow clicking on the search results page). Even though Google serves valid HTML to Lynx, it's probably HTML that Lynx cannot parse, and since it used to work before, this is a regression. I filed a bug for our team to look further into it and address the root cause.

Interestingly enough, pressing <L> shows a list of links in the current document, and it does show all the available result links, so Lynx does see and parse them, it's just not rendering the links inline with the search results, so that's something we have to investigate as well.

In the meantime, as a temporary workaround, if you're open to using Chrome with a simple UI that would be amenable to a screenreader and keyboard navigation, you can use a User Agent switcher extension by Google [1] to set the user agent header to Lynx [2] and Google will serve the same HTML that it would have served to Lynx. You can then use the <Tab> key to iterate over the search results, and <Enter> to select a particular result.

I look forward to seeing this bug resolved, and will be personally following up on the status of this bug.

Again, I'm really sorry to see this, and I hope we'll be able to restore your ability to use Google with Lynx shortly!

[1] https://chrome.google.com/webstore/detail/user-agent-switche...

[2] https://developers.whatismybrowser.com/useragents/explore/so...


Thanks a lot for this positive reply! I am thrilled to read that this might be counted as a regression and actually fixed. I really hope that can happen.

Regarding 'L', Lynx sometimes "hides" links if the anchor is around a div. Maybe it is just that simple. IIRC, <a href=...><div>...</div></a> will trigger a similar behaviour.
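
To illustrate, a minimal sketch of the pattern (hypothetical markup, not Google's actual HTML):

    <!-- Older Lynx versions may not render this as a followable inline link: -->
    <a href="https://example.com/result"><div>Result title</div></a>

    <!-- The inverse nesting renders fine everywhere: -->
    <div><a href="https://example.com/result">Result title</a></div>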

Regarding your Chrome suggestion, that really doesn't help me much since I spend 99% of my workday on a plain virtual console. The context switch of moving to another computer that runs Windows for simple searches is really not practical.

Again, thanks for spotting this and acting on it!


Hi Mario,

Your analysis is correct; the issue was due to <div> tags appearing inside <a> tags. This should be fixed now; I've verified that I can follow result links using Lynx.

Once again, my apologies for you running into this issue! Thank you for reporting & debugging it and thank you for your patience as well.

I hope this is resolved for you now; please try it out and let me know whether or not it works for you, or if you run into any other issues.


Will bet good money this is due to your user-agent-based content serving. I have similar issues with non-standard browsers I use. I don't know why Google and Google-based sites (including reCAPTCHA) are the only ones having this issue. It really is bad for the web to have this sort of user agent discrimination.


> It really is bad for the web to have this sort of user agent discrimination.

Eh, if the issue really was (as figured out below) that lynx doesn't support divs inside anchor tags, that seems like the best possible solution if you aren't going to drop lynx support altogether. Even IE6 allows that.

It just isn't worth tying everything into knots trying to keep the page strict HTML 4 in the name of progressive enhancement.


Doesn't matter. Google can make a site that works great with all browsers. Not that you have no point, but Google is the only company whose products I struggle to get working without conforming to their assumptions about user agents. Don't break things just because only a fraction of users use or don't use a feature (this includes JS).


Another win for Hanlon.


I feel like many of the text-mode browsers have failed to keep up with changing web standards. We were all mad when IE was holding back the Internet, and I'm not sure we should give lynx and w3m a pass because they're geek tools. (Accessibility is an important concern, but web browsers running under a GUI system support screen readers.)

https://www.brow.sh/ is a console-based browser that claims to support modern standards. Perhaps that is what we should be using.

(I am now prepared for 6 comments replying to me saying that anything that can be implemented with HTML from 1999 should be, and a list of search results can be. I guess. If all that stuff works for everyone, why did we invent new stuff? Just because? Or perhaps it wasn't really as amazing as we all remember.)


I'll be the first (EDIT: third) of six.

> If all that stuff works for everyone, why did we invent new stuff? Just because? Or perhaps it wasn't really as amazing as we all remember.

To better track people and push ads. It's really mostly just that. Modern web has very little to do with providing value to the end-user; any utility that's provided is mostly a side effect, and/or a vector to lure people into situations where they can be monetized.

Text browsers aren't holding the web down, they're anchoring it in the port of productivity, even as the winds of commerce desperately try to blow it onto the open seas of exploitation.


Come on, you can't be serious about this.

Creating sophisticated web pages is massively easier than 10 or 20 years ago. Yes, HTML of plain simple text-only pages is still pretty much the same, but most users actually prefer visually fancier content with pictures and colors.

Yes, companies presenting themselves online profit from more capabilities. And yes, presenting ads is probably easier too. But if you think those changes were made just out of monetary greed, you could say the same about almost any technological advancement, like color photography or electric cars, because all of these had a commercial side to them too.


I am serious. Yes, it's true it's easier than ever to create sophisticated websites. But it's also true that almost all this sophistication delivers negative value to users - it distracts them, slows them down, and forces them to keep buying faster hardware. It doesn't have to be like this - but it currently is. It's not even just a problem of explicit business incentives - the technical side of the industry has been compromised. The "best practices" in web development and user experience all deoptimize end-user ergonomy and productivity.


The mistake you are making is that you are trying to answer the question of what the average user wants by looking at what you want. Developers are not representative of users.


The average user, by Google's own studies, wants a faster experience.

By far.

Is there any evidence that the web apps of today are faster at achieving what equivalent non-appy web pages of the past managed? Despite the fact that those older "apps" were running on computers that were orders of magnitude slower than our cell phones today.

I've worked with 2 companies now where their users (and both these companies have users who pay 4 digit annual fees per user) have refused to migrate to the new Web 2.0 apps these companies have tried to foist on them.

The difference in these cases is that the users, by virtue of paying, actually have a say and so have required the companies to provide parity with the older and newer applications, and usage continues to not just be higher, but grow faster on the older versions (despite having a larger base).

Regular users have no such option. Google changes Gmail, but its users still insist on using the older versions of the app, which is why they provide HTML mode, etc. However, its users do not pay Google, and are forced to go along with whatever Google wants to do, which is constantly hiding the older version and making it progressively worse to use.

It's not evident to me at all that regular users "want" to use these new web 2.0 apps, as much as they don't have a choice.


I find it interesting that, by default, I switch back to classic if possible. All newer interfaces seem to remove features and slow things down.


Yes and no: Ajax allows interactive (snappier/faster) behavior in most cases, especially for complex interaction flows. Using the minimal HTML Gmail interface vs the modern one, the modern one is quicker for complex interactions because I end up loading fewer pages, even if the average load is more expensive.


It doesn't feel faster to me. And that's the case with Ajax in general - in principle, it can allow for snappier, faster experience. In practice, it rarely does.


I just measured it: switching between "drafts" and "inbox" on the new Gmail takes ~15ms, while on the simple HTML view, it takes 300-500ms per load.

It's literally 20 times faster, and importantly, it cuts across the human visual perception boundary, which is at ~200ms. So the old HTML version's delay is humanly perceptible, while the new version renders in ~1 frame.


Fair enough. On a fast enough machine. This is off-topic, but while on my beefy dev desktop I get similar times to you, this was absolutely not the case on a much weaker laptop I used sometimes during travel - and the latter is more representative of how regular people experience things.

(Also, FWIW, loading delays of HTML pages on said beefy desktop, as tested right now, are consistently about 750-1500ms for me.)

More on-topic, these times apply only when the pages are hot in the browser cache. On my desktop, the first-time switch between "drafts", "sent" and "inbox" takes around 3 seconds each, and only after that is it instantaneous. So regular users are likely to experience these long Ajax delays more often than not.


It depends on the application though. Mapping apps (e.g google maps) without ajax are awful.

Even basic UIs like filtering can be bad if you want to change multiple filters, and you have to wait for a whole page load in between each change (page load times for pages with filters are often slow as they're performing complex queries).

It's a case of different things being appropriate for different use cases I think. There definitely are still times when plain HTML is best, but it's also not always faster.

I've built a React app that's under 300KB (cached with a service worker for subsequent page loads) that loads almost instantly and works offline. On the other hand, plenty of plain HTML sites include heavy CSS frameworks, or 5MB images, GIFs, etc., and load pretty slowly, especially on poor connections.


TBH, that's a side effect of "EVERYthing is a webpage, because that's the way it is!" Of course a map application is SLOOOOOOW when forced to go through all the intermediate steps to resemble a web page. Mapping apps that forgo this convention are snappy; but no: a chat client is a web browser, a map client is a web browser, the "show me two lines of text" notification bar is a God-damned web browser.


Snappier! You mean "with more spinners, now with added lag". (Get off my lawn.)


That's a cop-out. Being a developer and a long-time computer user biases me, but also gives me a more informed perspective on what's productive and ergonomic, and what's distracting and annoying. I can name the issues I see, instead of writing off the frustration as "this computer must have viruses" or "this is just how things have to be".

Bloat from inefficient design isn't delivering some value to regular people that a developer is merely oblivious to. It's just bad engineering. Similarly for distractions, abandoning all UI features provided by default for the sake of giving a control rounded corners, setting the proficiency ceiling so low that no user can improve their skill with a product over time, etc.


In my experience (years of talking IRL with thousands of users of my B2B SaaS product), there exists a large cohort of users that don't want to improve their computer skills. They want the software to make things as absolutely "user friendly" as possible.

As an example, I tried standardizing on <input type="date" /> across our product (hundreds of fields). Within 24 hours we logged >1,000 tickets with users saying they disliked the change. They preferred the fancy datepicker because it let them see a full calendar and that enabled a more fluid conversation with the customer (like "next Wednesday").

Yes, Chrome does offer a calendar for this field type, but Safari for desktop does not (just off the top of my head).

I'm a vim-writing, tmux-ing, bash-loving developer. If it were up to me, I'd do everything in the terminal and never open a browser.

I recognize that the world doesn't revolve around me and my skills, interests and tastes. If a large cohort of my customers tell me they don't want to improve their computer skills and want a fancy UI, who am I to tell them they're wrong? They're paying me. They get what they want.


You're conflating a couple of things here. It's true that users don't like change - and for good reason; messing with UI invalidates their acquired experience, and even if you know you've changed only one small thing, they don't know that. It quite naturally stresses people out.

Two, I'll grant you that you sometimes have to use a custom control, because web moves forward faster than browsers, and so you can't count on a browser-level or OS-level calendar widget being available. But then the issue is, how do you do it. Can the user type in the date directly into your custom field? Is the calendar widget operable by keyboard? Such functionality doesn't interfere with first-time use, but is important to enable users to develop mastery.

A lot of my dissatisfaction with modern UI/UX trends comes from that last point: very low ceiling for mastery. Power users aren't born, they're made. And they aren't made in school, but through repeated use that makes them seek out ways to alleviate their frustrations. A lot of software is being used hours a day, day in, day out by people in offices. Users of such software will be naturally driven towards improving efficiency of their work (if only to have more time to burn watching cat photos). If an application doesn't provide for such improvements, it wastes the time of everyone who's forced to interact with it regularly.


Extending Web capability by building features into HTML, such as calendar-based date-pickers (sometimes useful, often tedious), is one thing. Those standards can either be retrofitted into a console-mode browser (it is possible to display a calendar in plain text, see e.g. cal(1)), or degrade gracefully to a text-based input field of, oh, take your pick, YYYY-MM-DD, YY-MM-DD, DD-MM-YYYY, MM-DD-YY, etc.

Better forms inputs could very well be useful, I agree.

The recent MSFT + GOOG snogfest announcing major improvements to HTML by ... improving form and input fields in their (GUI-only) browsers strikes me, in light of the rather ominous icebergs looming on the HTML horizon, as rather gratuitous deckchair-rearranging. No matter how fine those arrangements might be.


This seems to be exactly where progressive enhancement is preferred. If you use input=date, it'll probably work much better on mobile than a JavaScript calendar solution. Also, I hope your date picker also allows typing by hand on desktop, otherwise may God have mercy on your soul...
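
A minimal sketch of that baseline (hypothetical field name; a script could still layer a calendar widget on top where one is wanted):

    <!-- Native control as the baseline: mobile browsers show their own
         picker, and browsers without date support fall back to a plain
         text field where the format hint stays visible. -->
    <input type="date" name="appointment" placeholder="YYYY-MM-DD">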


[flagged]


> Greed and morality can't mix. I personally support morals.

Hold up, your position is that users that prefer something 'easy to use' as opposed to something 'powerful' are immoral? What am I missing here?


I thought the parent was referring to the developers/business choice as being immoral, not the customers.

If I choose to offer predatory loans which I would never accept for my friends or family to a community that is not financially savvy, and someone calls me out on it, it doesn't fly to say "hey, what do you have against these people taking advantage of my easy to use service?".


What you're missing is that I'm talking about developers having a choice between providing the best possible product in a technical sense, or simply going the way of profit and greed. To me, when you knowingly produce a substandard product and seek non-savvy users, that's an immoral act. I believe the goal should be to constantly raise the floor, not lower the ceiling.


How can what the parent did be considered immoral? It's not like they pushed tracking and ads down their customers' throats; they just made the interface easier for people to use.


Parent's idea, as far as I can tell (not that I lean any which way), is that if you create a deficient product because you're financially incentivized to, that's immoral.


This makes me wonder. If we could have had all of the SPA and frontend-framework overcomplication, and the ability to make four asynchronous loading screens to load what could be rendered server-side as an HTML table with a navbar - if we had all of that technological progress two decades ago, would we have seen what benefits it gives over minimalist design, if any?

Sometimes it feels like the web of yore was so simple to use and free of unnecessary bloat simply because that was as far as the technology had progressed at that point. React didn't exist and the browser was limited from a technical perspective so the best people could manage was some clever CSS hacks to cobble together something mildly attractive. It might have taken a while to render even those simple pages on computers of the time, so back then those pages might have hit a performance ceiling of some kind.

Maybe as more and more features get added to a piece of technology, there's some kind of default instinct that some people have to always fully exercise all of it even if it's not at all necessary. Simply because you can do more with it that you couldn't do years ago, there's some assumption that it's just better for some vague reason. It's easier to overcomplicate when everyone else is doing it also, so as to not get left behind.

Then everyone who doesn't have knowledge about web technologies and the like gets used to it, and people's expectations change, so this "new Web" becomes the new standard for them and they start enjoying it in some Stockholm syndrome manner - or not, and the product managers mistakenly come to this conclusion from useless KPIs like "time spent on our website", which will obviously increase dramatically if it takes orders of magnitude more CPU cycles just to render the first few words of an article's headline.

I'm only speculating though.

Personally, as someone with a headless server running Docker, it pains me to no end I can't browse Docker Hub with elinks.


> Maybe as more and more features get added to a piece of technology, there's some kind of default instinct that some people have to always fully exercise all of it even if it's not at all necessary.

I suspect this is very true. It seems true in my experience. I think the reason may be just that our field is so young and dynamic, that everyone learns everything on the job. If you want to stay up to date, you have to gain experience with the new tools, and the best way to do it is to find a way to make using them a part of your job. It saves you time after work, and you even get paid for it.

It takes good judgement to experiment with new technologies on the job without compromising the product one's working on. I feel that for most developers, the concerns of end-users rarely enter the picture when making these calls.


[flagged]


It's analogous to the "should you listen to parents or childless people on the topic of parenthood/children issues". Parents are strongly biased, but they also know much more about the issue.

Or: when seeking driving advice, would you reject a priori the opinions of professional drivers, just because they're not representative of the general population? You would be justified in rejecting their views when considering your marketing strategy. Which kind of makes me think that a lot of reasons for pushing the "devs are not representative users" thought aren't about how to best serve users.


It’s nothing like that at all. Those are opinions influenced by technical expertise and experience. User interfaces and design that people like is completely based on personal preferences and tastes. It’s more like the driving instructor telling the pupil that they are factually incorrect for liking Ford cars because of some technical inferiority the instructor believes they have over some other car.

Your technical insight doesn’t make your user experience any more or less valuable or important.


I never said that technical expertise makes my experience more (or less) important. I'm saying that my technical expertise lets me understand my experience better, reason about it better, and communicate it better. I know concepts and vocabulary that a non-tech-savvy user doesn't, which makes it easier for me to pinpoint sources of frustration.

User interfaces are an acquired taste. A shiny-looking app built on the newest design trends may look appealing at first. But once you're a couple of hours into using it, your outlook changes. It suddenly starts to matter whether the application is able to handle reasonably sized workloads without slowing to a crawl (many web applications can't). It matters whether you can use the keyboard instead of clicking on everything. It matters whether the application is fast and responsive, and doesn't. lag. on. you. every. click.

What long experience with software - both as a creator and as a consumer - gives me is the language and ability to look past the shiny facade, and spot the issues that will matter long-term.


You say:

> I never said that technical expertise makes my experience more (or less) important.

But then you immediately dive into a long-winded explanation of why your own personal opinions are superior. If you have some deeply academic reason for not liking something, but millions of users just absolutely love how shiny it is, then you can't prove they're wrong for doing so, nor that your opinion is in any way better or more valuable than theirs.


When I meet the user that prefers shiny over functional after working for a few hours with each, I'll change my mind. I haven't met that user yet.

What I've seen though is that users almost never have any say in the matter. Software is generally forced upon users - mandated by their employers, being the only way to access a particular service or a social network, etc. Users have little say in how the software works.


A faster horse.

People value privacy when made aware of the issue, but very few people are aware of how websites track (and manipulate) them.


I feel like this claim needs support.

All of the studies I'm aware of that ask people to choose between different price points based on privacy end up showing minimal valuing of privacy.


It is clear that many people know that many services they enjoy compromise their privacy. These stories have been covered by mainstream media ad nauseam. Jokes about privacy compromises are part of pop-culture at this point.

Most people, however, are not enthusiasts or activists who are wholly invested in these topics. Even being aware of the issue, most act based on their immediate needs. Immediacy is an extremely important factor in decision-making.

If we were to give this a utilitarian assessment formula:

[severity of problem] * [immediacy of problem] <||> [importance of need] * [immediacy of need]

Many people using social media to communicate with their family may evaluate that as:

[100] * [1] < [100] * [100]


GP's claims and the studies you mention are about two different things. GP claimed that users aren't generally aware of the extent to which their privacy is violated, not that they would choose privacy over price if made aware.


Did those studies give the big picture about privacy?

Why is it that people working in the field tend to be paranoid about privacy, and people outside are not? It can't be that we are just on average more paranoid. It's more likely that we know exactly how this data can be aggregated, stored forever, analysed in the future with capabilities that don't exist yet, etc. I think most people don't understand privacy in those terms.


Counterpoint: all the purchasing sites I've used so far track me (or at least try to). I am tracked even though I pay for the site.

You have to factor in the fact that people know they are tracked whether they are paying or not. Given that, I'd prefer not paying as well.


There's a difference between valuing privacy and being willing to pay an extortion fee.


I think there's a faulty premise in there somewhere. An architect or an engineer is not a candy salesman. A candy salesman really shouldn't have a say in what flavor, texture, sweetness, etc. is appropriate for any given client.

However, the way the poster you're replying to sees this issue is not as a candy salesman, but as a public engineer. There's a problem, like "what should the web be", "how should the web work", which is akin to "how to build a railway over this valley", "how to minimally disturb the ecosystem", etc. That's not a realm of likes and dislikes, but of practicality.

One of the realities of today is that the average user is extremely distanced from the technicalities of Web, whether that's desirable or not. That puts a lot of burden on the informed and on the developers, which are often the same people. The few are obligated to make decisions for the many.

Do you deliver a box-shadow, but increase technical debt? Do you migrate to a more energy efficient platform, but alienate some users? Do you broaden the scope of your system, in turn increasing system complexity, or do you delegate to a dedicated third-party system, having the user possibly learn to use that third-party system?

It's a question of which compromise best serves the user. It shouldn't be a question of likes and dislikes. This is a complex situation rife with miscommunication, ignorance, conflict of interests, and inertia. Any simple solution, such as disregarding the opinions of developers, should be regarded with great suspicion.


Many "average" users don't know what they want, don't even realize the options that are available to choose from (i.e. the different ways an app can be built), and/or will accept whatever is offered.

Though it wasn't patently insulting, you have gone ad hominem - "to the man" - rather than to the idea. Whether one is a developer or not has no bearing on whether economy and efficiency are worthwhile values in a computer application. Although it so happens that developers tend to also be users; if they ARE users, then they represent users. Also, in the sense of acting as an advocate of sorts for the user, developers represent users.

"Average" users having never been offered the thing being proposed here (faster and ad-free versions of the same apps, with the same network effects etc.), I don't see how you can state with any confidence that they wouldn't have chosen them, were they available.


It's ironic that Google got "lucky" because of its minimalistic approach back in the day. Now, it's just a huge mess.


> The "best practices" in web development and user experience all deoptimize end-user ergonomy and productivity.

What are you seeing that leads you to think this? The ads and engagement drivers (autoplaying videos of other content on the site) on sites that need eyeballs to keep the lights on, or the articles showing how to download the minimum usable assets so you don't waste the user's bandwidth, battery, and disk space[1]? The latter is what I tend to see when I'm looking at pages describing "best practice".

1: https://alistapart.com/article/request-with-intent-caching-s...


The best practices that encourage you to minimize content and maximize whitespace on your screen. To change text for icons. To hijack the scrollbar. To replace perfectly good default controls with custom alternatives that look prettier, but lose most of the ergonomic features the default controls provided. To favor infinite scrolling. Etc.

Some "best practices" articles discourage all this, but in my experience, that's ignored. The trend is in ever decreasing density.

It all makes sense if you consider apps following the practices I mention as sales drivers and ad delivery vectors. Putting aside the ethical issues of building such things, my issue is that people take practices developed for marketing material, and reapply them to tools they build, without giving it any second thought.


This sounds like the kind of argument that would have said that the algorithm for rounded rectangles in the Mac OS toolbox was superfluous fluff.

The world is bigger and more interesting than screens and screens of uninterrupted plaintext.


Rounding rectangles is superfluous fluff, but it is also nice, and serves a purpose in the context of the whole design language they're using. I'm not against rounded rectangles and other such UI fluff in general. But I am against throwing away perfectly working controls, with all the ergonomy they offer, and replacing them with a half-broken, slower version of that control that only works if you use it in one particular way - but hey, it has rounded rectangles now.


Sounds like you're against bad dynamic HTML then.

Fortunately, good dynamic HTML also exists.

And I wouldn't write off rounded rectangles as superfluous fluff. They're fairly ubiquitous in user interface design because round cornered structures are fairly common in nature. They make a UI look more "real". And decreasing the artificiality of a user interface isn't superfluous; it lets more users interoperate with the interface without feeling like they've strapped an alien abstraction onto themselves. A lot of people in the computer engineering space have no trouble working with alien abstractions for hours at a time, but it's an extremely self-selecting group. We are often at risk of believing that what is normal for us should feel normal for everybody.

https://bgr.com/2016/05/17/iphone-design-rounded-squares-exp...


> most users actually prefer visually fancier content with pictures and colors.

You're aware that pure HTML and CSS alone can produce visually fancy content with pictures and colors, right? It honestly seems like a lot of web developers are starting to forget this, but it's true, I swear. My personal web site (https://coyotetracks.org/) is minimalist by design, but it has accent colors, custom fonts, Retina-ready images, and that silly little fade in and out when you hover over links, all without any JavaScript whatsoever. Also: turns out it works pretty well on Lynx!
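
To give a flavor, the hover fade is just a CSS transition - a minimal sketch of the technique (hypothetical styling, not my exact stylesheet):

    <style>
      /* Fade links slightly on hover; no JavaScript involved. */
      a { transition: opacity 0.3s ease; }
      a:hover { opacity: 0.6; }
    </style>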

I think JS gets a bit of a bad rap these days and am willing to leap to its defense even though I don't like it much as a language, but a huge chunk of the reason it has a bad rap is because people do bad things with it, by which I mean either malicious things or just unnecessary things. An awful lot of modern web sites could run just fine with far, far fewer scripts than they're deploying.

(And, yes, I can even do web analytics without JavaScript, because there are these things called "server logs" I can analyze with utilities like GoAccess.)


Your site is nice and fast


Sure, new technologies have changed things for the better. Sophisticated web pages should be created when sophisticated web pages are necessary.

Sophisticated web pages are not necessary to disseminate text-only content. 90's HTML is perfectly capable of doing that.

I have no problem with loading 30MB of JS libraries into the browser for an application that actually does something. I have a problem with loading 30MB of shit to read 10kB worth of news.


It's actually because we programmers like to recreate things; there's this itch and wonder about how stuff works. HTTP hasn't changed that much.

HTML did. It was hypertext, and rendering was done for document flow.

Then we could script a bit, and soon after we wanted "web applications". We then lost probably 15 years trying to fit an application UI and lifecycle model into a document-flow model.

HTML, or rather XML, or rather trees, is a good way to represent a user interface. Unfortunately, back then the only languages available for any proper work were C++ and Java (oh yeah, and Visual Basic!).

JavaScript, PHP, and Perl were a godsend in terms of productivity. Just like the '80s home computers and BASIC. It just worked. Type and run. This is also why bad software gets popular, btw.

Coming back to the post: Lynx renders HTML how it was intended: as a document.


There's little value in me doing this, but I always have to push back against this idea. This is the hill I'll die on.

Most application UIs are not complicated. Most applications are just interactive documents. There is nothing special about 90% of the apps I use on my native computer that means they couldn't be rendered as HTML. The document model is fine for applications. Preferable, even.

Heck, a reasonable portion of the native applications on my computer have optional pure-terminal interfaces. If your application can work in a terminal, it can work with HTML -- and embracing a document model for your application UI would be a better user experience than whatever the heck pixel-perfect framework that everyone is chasing nowadays. That's true on the web, and it's true on native.

The problem with the web is not that HTML is incapable of being used as an application platform; it's not that we're trying to fit a square peg into a round hole. It's that both web and native developers overestimate the interface requirements of their applications, and bring in unnecessary cruft to achieve pixel-perfect layouts that are worse for end-users.

We have a square peg, and a square hole, but we like to go to conferences and pretend that our peg is actually some convoluted, recursive star shape.


Thank you for writing this. I had this feeling that the document model isn't that bad for software in the first place (a feeling that I noticed when I prototyped some of the stuff I was working on in Observable, and realized the prototype in an interactive notebook format fits the problem more than the SPA we were building). But I haven't considered the implications. You might be very right that the document model could support a lot more than we give it credit for, if we just stopped pushing the established "appliance" UI paradigm.


If you're talking about "forms" as the interface, then sure, I 100% agree. But the moment things get more interesting, you'll fail - video editing, for example. Lastly, I believe that if forms suffice, you don't need any of the recent CSS features. Simple forms without CSS should be fine. So no panels, modals, popups, nested navigation, or single-page apps.


>Creating sophisticated web pages is massively easier than 10 or 20 years ago.

And still, average people aren't taking advantage of this and creating their own websites, because despite it being massively easier, the way the web is now has pushed it beyond the average person's reach. If these "sophisticated" stacks and technologies weren't the norm, and the web instead focused on being a place where the average person can easily put up their own fancy-looking simple webpage, maybe we wouldn't be so dominated by these massive companies that have become the gatekeepers by providing limited platforms for people to do what used to be a relatively easy thing even back in the day.

There's a reason these companies fund and push these technology stacks: it gives them huge control over the internet, and in the end they don't really do anything fundamentally different from what good old-fashioned HTML and CSS can do. Hell, especially the HTML of today.


A lot of today's websites are evil. Just today, I couldn't even select and copy simple, plain-looking text.


Holding ALT while highlighting in the browser will often allow you to select text that is otherwise overly JavaScript-ed.


>most users actually prefer visually fancier content with pictures and colors

Are you sure this is the case? Because I think it may be a shiny object trap, where on first view a visually fancy site is great and appealing but in the long run a simple, fast-loading site is preferable.


Is it now? My parents, who were 100% tech-illiterate, were able to put up a moderately complex website with FrontPage in 1996 without touching a single line of code and with a 100% custom look. Would they be able to do the same today with current tech stacks? Not so sure...


I think he was serious, and I agree with him — and I believe that most users don’t want the majority of what JavaScript-laden pages offer over clean HTML-only pages.


The crux of the problem is that the page in this case doesn't need to be sophisticated. It lists result links from top to bottom. Things afforded by JavaScript that may enhance the interface, like autoloading the next page of results, can be added progressively. In particular, "visually fancier" content doesn't have to come at the expense of accessibility with limited browsers. That's the point of CSS.

That is really the fault of the "modern web": web pages are more "sophisticated" than they need to be to present the information they contain in a visually pleasing and usable manner. There are so many roundabout approaches to the problem that people don't concern themselves with the most straightforward one. I can't say it's somehow massively easier to create a simple list of links with some excerpts in a way that doesn't work on a 15-year-old browser than it is to create one that does. You really have to go out of your way to break something as simple and fundamental as linking.


> Creating sophisticated web pages is massively easier than 10 or 20 years ago.

Well, if we're talking progress, we can also compare, say, the energy efficiency of information transmission, or the number of well-maintained Web clients. That doesn't look so good, does it? The question is what problem we are solving - or, in your words, what "sophistication" is. According to some measures we did achieve impressive things. But a lot of us experience heartache, because we think we didn't do all that well.


To say that text browsers are "anchoring" the web to those text-only standards would imply that developers are making design decisions based on testing and feedback from text-only browsers.

There is no way that the percentage of developers doing that isn't vanishingly small. Like 0.1% or less. I always chuckle when one person chimes in on a Show HN post to complain that the site doesn't work well in Lynx... Ya, I'll get right on that, top priority!


You can make that choice. On the other hand, if you decide not to make the situation right, you'll alienate not only accessibility-focused users, but also a very loud minority - some of whom are influential across tech (or tech-adjacent) communities.


Amen; fourth here... don't need js cycles for html query


From another comment digging into the issue.[0]

> Regarding 'L', Lynx sometimes "hides" links if the anchor is around a div. Maybe it is just that simple. IIRC, <a href=...><div>...</div></a> will trigger a similar behaviour.

I'm generally against unnecessary web complexity, but I don't understand how anyone can paint Lynx as a hero for randomly ignoring anchor tags.

I embrace progressive enhancement where possible, all of my blogs/sites will load and function without Javascript. I'm not going to serve alternative HTML in a scenario like this. There has to be a give and take towards Lynx supporting objectively valid pure HTML content.

It wouldn't violate any of Lynx's pure-text principles to parse modern HTML correctly.

[0]: https://news.ycombinator.com/item?id=21636159


>Modern web has very little to do with providing value to the end-user

I disagree strongly with this. The web has moved a lot in the direction of developer experience (ES6, modules) and new capabilities (WebSockets, WebRTC, WebAudio, SVG, canvas...). Yes, most of this happened as a side effect of big surveillance-capitalism companies wanting to make that sweet, sweet digital pollen even sweeter, but that doesn't make it any less sweet just because it was made in bad faith.


The capabilities are there. The dev experience is there (somewhat; JS ecosystem is a mess, but I guess that's just a side effect of moving very fast). But the capabilities are not used for end-user benefit. Not much, anyway. Yes, I benefit from Netflix, I benefit from Google products (not as much as I would if they didn't keep on worsening the UX every few months) - and such functionality requires some of the new capabilities. But 99% of other sites I visit don't use these capabilities for anything good. A pizza ordering site shouldn't need a React frontend. An e-commerce store shouldn't use WebSockets. Every other site out there should not ask me to enable notifications. But they all do. I blame a mix of CV-driven development, designers showing off, "data-driven" bandwagon, and surveillance capitalism.


I agree with you on all points - I think we're just speaking past each other. I'm speaking about the platform, and you're speaking about applications. The applications written for the web today are exactly as you say; the browser platform itself, however, is quite extraordinary and I remain deeply impressed with it.


Right. I admit that I was strongly biased against browsers because of my distaste with web applications. The browser as a platform is slowly growing on me these days, and I won't deny that modern browsers are some of the most impressive feats of software engineering.


I think a pizza ordering site is a good SPA example, but progressive enhancement still applies. React probably makes its design simpler. Elm would, even more so. But it’d be nice to access it with Lynx.


And I think a pizza ordering site is a good case for server-side, mostly static HTML. There are very few things on such a page that change more often than every few days, and as for the dynamic stuff, all you really need to manage is a client-side basket, which is trivial in isolation. I believe you could easily cut the time spent on such a site by a third if you approached it this way. Even more for people accessing the site from older machines.

I can tell, because I order pizza quite frequently from a variety of sites.
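
A minimal sketch of the kind of client-side basket I mean (hypothetical markup; the menu itself would stay server-rendered static HTML):

    <ul id="basket"></ul>
    <button data-item="Margherita" onclick="addToBasket(this)">Add Margherita</button>
    <script>
      // The basket is the only dynamic piece; everything else stays static.
      function addToBasket(button) {
        var entry = document.createElement("li");
        entry.textContent = button.dataset.item;
        document.getElementById("basket").appendChild(entry);
      }
    </script>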


Heavy graphics with static HTML? That's going to be a crummy experience for mobile users.


CSS supports media queries.
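
A minimal sketch (hypothetical class and file names); note that a background image behind a non-matching media query generally isn't even downloaded:

    <style>
      /* Small screens get a plain background; the heavy image is only
         referenced - and fetched - on wider viewports. */
      .hero { background: #eee; }
      @media (min-width: 800px) {
        .hero { background-image: url("hero-large.jpg"); }
      }
    </style>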


I'm not talking about choosing by screen resolution. I'm talking about choosing whether and when the media is needed at all, which becomes much more tractable when you add some JavaScript to control it.


> I am now prepared for 6 comments replying to me saying that anything that can be implemented with HTML from 1999 should be, and a list of search results can be.

They would be correct replies!

In addition there is this concept called "graceful degradation", where if the browser has more advanced features, you support them, otherwise you work anyway. It's not like supporting Lynx means you can't have a map in the search results when using Chrome. Certainly not for a company with the resources of Google.
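
A minimal sketch of that kind of graceful degradation (hypothetical URLs): serve a plain link every browser can follow, and let capable browsers upgrade it in place:

    <a id="map-link" href="https://example.com/map?q=pizza">View results on a map</a>
    <script>
      // Where scripting is available, swap the link for an inline embedded map;
      // Lynx and friends simply keep the link.
      document.getElementById("map-link").outerHTML =
        '<iframe src="https://example.com/embed?q=pizza" title="Map of results"></iframe>';
    </script>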

Also, they should probably send something like the Lynx version of Google down to people with a poor internet connection.


They are probably hiring only the best machine-learning, cloud-native engineers fresh out of university, who have never heard of Lynx, and now the institution doesn't even realize it broke support for it.


> We were all mad when IE was holding back the Internet

As I recall it, we weren't mad about IE "holding back the Internet", we were mad about IE encouraging web designers to stick a bunch of dynamic clutter such as ActiveX controls into their webpages. Largely because they created this lock-in where sites only worked well on one browser.

It turns out that JavaScript has been co-opted into being the new ActiveX, and Chrome is the new IE. But since JavaScript is nominally an open standard, and Chrome runs on the big 3 OSes, nobody seems to get mad that alternative browser projects are dying because they can't keep up with all the stuff that needs to be implemented in order to work well with sites that were only tested on Chrome and WebKit.


Amen, amen! The ignominious "best viewed in" which we fought in the Second Browser War is creeping back in - except now it says "this hour's current Google Chrome" instead of "MSIE 6".


TBH, I think it's worse than it was with IE6. Targeting just IE was annoying, but, even so, 20 years ago, I could use BeOS as my primary desktop OS, and Net+ was a decent enough browser. Nowadays, the bar to successfully render a modern JavaScript-heavy website is so high that even Microsoft couldn't successfully maintain their own independent rendering engine, and is shifting over to Google's.

Anyone who has heard of the phrase "embrace, extend, extinguish" should be at least a little bit uncomfortable with this situation. For example, it implies that any potential alternative OS needs to be able to compile Chromium (or, as what can only be seen as a second-class substitute nowadays, Gecko) before it can really be viable. If you're the kind of person who likes a free, open and competitive software landscape, that's a looming threat.


That's a different issue altogether, but still salient.

1. You need a beefy device to render in a reasonable timeframe, and

2. You need The One Blessed Renderer to even see anything.

I do recall the situation 20 years ago, when I could use about 2/3 of the relevant Net, with the rest going "meh, chronic complainer, just use IE. What? Yeah, that's your own fault for not choosing Windows." The second part is far more worrying, as it is not resolvable locally, either by optimization, or by horsepower.


One does not need to time travel to 1999 to accommodate text browsers, current HTML works just fine.

OTOH I don't worry too much, accessibility-enforcing laws will provide plenty of job opportunities for future developers... So yeah, good move, I guess.


> I am now prepared for 6 comments

Not sure that's the case; if so, I'd expect comments that show a little deeper understanding of the issues involved.

"HTML from 1999" isn't the issue. Lynx did fine on that... and HTML from 5 years ago, just like a whole host of non-visual or semi-visual user agents.

brow.sh is... OK, I guess, nice to have around in a pinch. But like most other schemes that rely on a headless full-fledged browser to work with applications that have become dependent on JS merely to render content, it introduces a glorified screen-scraping layer on top of something that could easily be much simpler, assuming web application developers could be bothered to think about it.

We don't need to freeze the web at 1999, and some applications fit poorly in a non-visual context. But a little reflection on the merits of progressive enhancement, and on moving forward without losing the benefits we had at that stage, would be nice. The stage where people cared about such things was pretty amazing in terms of the breadth of devices web applications would in fact work on pretty well.

Also, if you're not thinking about a pretty plain HTML version of your app, chances are half-decent that you're missing an opportunity to engineer your application better, whether or not you care about UA interop.

But, you know, if you're pretty sure the browser should be "thought" about as nothing more than The VM That Lived™, by all means, carry on.


Thanks for making me aware of brow.sh. What an amazing project.


I use text browsers because they are efficient and fast. Also, static content ideally shouldn't need JS, because it is a security concern.

AFAIK most of us don't complain when a dynamic website doesn't work. We just use a modern browser.


> I feel like many of the text-mode browsers have failed to keep up with changing web standards

Yeah, web standards such as images and video and audio. Not keeping up with those standards is kind of the point.


How about tables, or frames? Lynx doesn't even support those.


Links does, and Emacspeak with Eww is the best browser for the blind ever.


> brow.sh

Does headless Firefox (which is what brow.sh is at its core) even launch if there's no X11 available? Is it that headless?


I haven't used it, but it works in a Docker container without any device mounting or access to the host X server, so I'd assume that it does indeed work without any X available.


I've played with it and it works over a headless server instance through an ssh session, so I think it does not require X11


It does work without an external X11 all right (trick question, actually, there's an Xvfb underneath all the turtles - so X11 actually is required, even though it's all hidden inside the container ;))


w3m gets a pass because JavaScript is bad. The FSF is right about that. WASM will be even worse - not technically worse, but for removing another layer of control. DNS over HTTPS is just as bad in that sense.

If you can make something without JS, you should. cryptomarketplot.com has an accessible mode FOR CRYPTO!! If crypto sites can, everybody can.


It's funny because "JavaScript is bad" was common geek sentiment at the turn of the century. Now it is flamebait, or a sign of being caught up in the past [itself possibly a bit of a code for ageism]. Seeing this attitude change is one of the most interesting things I've seen in tech nerd circles in the last decade or so.


Can you elaborate on why it's funny? JavaScript (on its own) has changed a lot in 20 years, and it was originally designed in, what, one week due to external time constraints? I think the criticisms are valid if $Your_favorite_language had only ~4 days for language design. From my recollection, there were no frameworks, no tooling, no linters, no jQuery. It was basically a primitive language for browsers that had no compiler and didn't do anything useful (back in the '90s). About as exciting as VBA 1.0.


JavaScript was made to add a bit of interactivity to webpages, like "click the button to expand the text". What is it now? There are whole JavaScript frameworks being made. Maybe soon websites won't even use HTML anymore, but rather render custom UI to a canvas. Browsers are new operating systems running on top of the OS. Lots of performance is wasted by this. Yes, there are webapps using it wisely, but JS is often misused on sites which could have been plain HTML or HTML+Ajax. In many cases the CPU is not even doing much, yet some sites take 2 seconds or more until I start seeing any graphical output after entering the URL. It takes a lot of time to render templates and construct the DOM. Pretty sad.


No, I'm not a fan either. I'd much prefer a site that loads in <50ms.


It's funny to see attitudes change so dramatically is all I mean.

I think you may be blind to the harshest criticisms, though. It was a common view that the web was built for static content (or server-side dynamic content) and that code execution by the client is mostly a nuisance at best. And there is some validity to that. When JS was developed, roughly contemporaneous with worse ideas like Java applets and ActiveX on the web, it was thought by proponents that untrusted code execution is OK to good. 20 years later, it was still assumed that if JS is sandboxed, memory-safe, and has a different page table from the rest of user mode, untrusted code execution is safe. Then Spectre and Meltdown happened.


Yeah there's some hindsight there... but are you saying that views have gone full circle? Where are these attitudes changing? JS has historically been sandboxed better than Java Applets and ActiveX, which is one reason why it's popular now.


Nope, not saying they went full circle, just that they went from one extreme (client side code bad) to another (client side code as inevitable fact of life and why question that?).

And tangentially, I was pointing out more recently that the grumpy position from older times actually would have prevented some real world problems. (Spectre and meltdown go from being local privilege escalation bugs to "holy shit you can own my machine if I visit a web page")


The redesigned version of Google Search that is being A/B tested no longer shows links. Despite being a developer, I'm anxious about clicking search results, especially because when results are filtered to the last day or week, they are full of phishing sites and pages with scraped content that immediately redirect to malware.

This change can't possibly be beneficial to users. It makes people even more ignorant about the technologies they depend on, and exposes them to further risk of being exploited.

UPDATE: This is the new design I've seen; the domains are missing: https://i.imgur.com/5RTdXI1.png


I'm in this A/B test too, and like you, it makes me anxious. I've become so accustomed to looking at the full URL (in green) of what I'm about to click on, that without having it there, I trust Google search results less as a whole.

The one that got me was when a search result pointed me to a site that was something like:

    example.com/?ipaddr=10.3.4.3
And Google, in an attempt to be helpful, showed me this:

    example.com > ...

> It makes people even more ignorant about the technologies they depend on, and exposes them to further risk of being exploited.

This is the point. If you've ever viewed an AMP site using Mobile Safari, you'll still see "google.com" in the Location Bar, instead of the site's own domain name. Google's fix for this is to try to kill the URL.


They took URLs away and then added this: https://i.imgur.com/RI4xxgs.png

For example, I searched "hierarchy" and it shows "en.wikipedia.org > wiki > hierarchy" above the Wikipedia search result.

> This change can't possibly be beneficial to users.

You're right if the URL/domain isn't shown at all. But I can think of a few benefits of showing the domain as it currently does, like avoiding phishing. It also basically parses the URL and interprets it for less technical users, which is something more technical users already do when they read the URL.

I don't think it's so bad as a default if there's a config option for displaying the full url for more technical users, or the necessary data available to at least write a browser extension.


Multiple versions are being tested. The one I've seen does not contain the domain, not even in a tokenized form.

It looked like this: https://www.searchenginejournal.com/google-is-testing-search...


Yeah, the iteration I'm being served is definitely an upgrade over that. The one shown in your link is what I had previously, so it seems that mine is a refinement of it and a middle ground that I don't think is so bad.


I don't see the issue with showing both a URL and a tokenized representation, outside Google's seeming motive to "kill the URL". In both versions, information that would be immediately available in the URL is potentially left out, which can't possibly help prevent phishing unless Google goes out of their way to actually verify that websites are what they say they are.

It's also hard to trust Google engineering to get something like this right after the Chrome "trivial subdomain" stripping debacle (https://bugs.chromium.org/p/chromium/issues/detail?id=881694).


What happens when you click the down arrow next to the domain? Google cache/similar links?

Edit: I can see it on all my searches on https://www.google.co.uk/ - it is cache/similar.


I got this crap on my work PC, and it was the last straw for me. I've switched to DuckDuckGo (to train myself, what I did was actually change my dynamic bookmark, so that when I type "google X" it goes to the DuckDuckGo search page for X instead of Google, as it used to). This morning I tried entering Google and it's showing the domains again, but I don't care any more; they've lost me.


After seeing these, I switched my phone and browser to search with DDG by default. Most of the time, I don't notice, although Google definitely catches news and blogs much quicker and has a bigger shopping portfolio. Other than those two, DDG has been good enough for me.


The URL is right there, above the search result title. Just the / has been replaced by a > and it's been made more human-readable. To the average person it's even more prominent now.

I'm looking forward to this change. It will incentivise websites to make their URL paths more human-readable, because now their

example.com > cgi > html > static > actually_human_readable_part.html

noise is seen by everyone, not just weirdos like me that look at the URL bar.


You should take another look at the image—there's no domain suffix, and they've also removed any arguments in the URL, both of which are _incredibly_ important.


Speaking of recent Google changes - Has anyone else noticed Google has removed the 'sign out' link? Before, you could click the upper right corner icon and "sign out", but that is now gone and I cannot find any way to sign out of Google, anywhere!


When I click on my profile icon at the top-right, I have a sign out button at the bottom of the menu (well, technically it's "Sign out of all accounts" since I'm logged into multiple), fwiw.


This happened to me after I had formatted (new PC), and since I had just started using Firefox as my main browser and did not see this change on Chrome, I believed that Firefox had been gimped by Google in this specific manner.

Needless to say, this was the straw that made me switch to DuckDuckGo instantly, and I've been happy with it, especially with the ability to use the Google bang (!g) in the infrequent case it's required.

This has taught me a valuable thing about A/B testing though—don't make an experiment any longer than it needs to be, and a refresh should bring users back to the old behaviour, just in case the change is bad enough to make them switch completely.


I wonder if Google does proper X-testing (X as in exodus), but I guess they don't care about a couple of users leaving, as they're still busy spreading through the rest of the world. I still hope that just means the clock's ticking for the next dot-com bubble to burst, so that a new generation of websites can blow up big.


At least in chrome, mousing over a hyperlink pulls up the destination address in the bottom left corner (for now).


Is there a feedback button near the bottom? Maybe you can voice your concerns.


Surely these are just all ads? If not, that's a shame.


There used to be a saying in a11y circles, "Google is a blind user".[1]

Now Google are manifestly anti-blind-user.

Shouldn't be anything a major ADA lawsuit couldn't fix.

Meantime, DDG is actually pretty damned good. For console users: https://duckduckgo.com/lite

________________________________

Notes:

1. https://www.w3.org/2006/Talks/06-08-steven-web40/


Blind users do not, as a rule, use Lynx. This is a common misconception. The Lynx interface is, in fact, very poorly suited for visually impaired users, as it relies heavily on visual layout, color, and cursor positioning to convey information.

The majority of blind users use standard desktop web browsers with screen reader addons like JAWS or VoiceOver.


The blind user in TFA might be surprised at your assertion.

Blind users typically rely on screen readers, including tools such as Emacspeak (which relies on either Emacs's built-in eww browser, w3m (on which I believe eww is based), lynx, etc.).

The ability to rely on console-based tools with text-to-speech capability, and receiving typed input, is fairly widespread.

The requirement that interactive content be rendered directly to speech is key.


I am blind. I know or have known at least 20 other blind people sufficiently to know what their browsers are. None of them used Links. One of them used Edbrowse. The rest (including myself) are Firefox, Chrome, or Safari. I have at least one personal project (not public) which uses React heavily. Saying that Links is necessary is an outdated view, so much so that we have things like the accessibility object model [0] in progress to possibly go so far as even supporting use cases like remote desktop connections in the browser by making fake screen reader only nodes in the accessibility tree.

In general, the terminal itself is not even so great. There are efforts like Emacspeak which mandate learning what is essentially a second desktop environment, but outside that it turns out that offering semantics (which only non-text browsers and apps can do) is useful: for example, knowing whether or not the cursor is in a text editor, so that deletions are significant, or whether text is a table.

The idea that JS is bad for screen readers--or indeed that we use text-based browsers--is a consistent misconception that is no longer true. It was true 10 or 15 years ago, if not longer, but everything AT has come a very long way since then.

For a source that's not just anecdotal, this has info on primary browser: https://webaim.org/projects/screenreadersurvey8/

0: http://wicg.github.io/aom/explainer.html


> The idea that JS is bad for screen readers--or indeed that we use text-based browsers--is a consistent misconception that is no longer true.

To be perfectly blunt, I feel this misconception is pushed mainly by people with an "anti-javascript" agenda.

If one can no longer argue that "supporting non-javascript clients is the only way to support accessibility", one is only left with "if you break support for non-javascript clients, you will only be excluding people who deliberately disable javascript". And at that point, the amount of effort to support non-javascript vs. the return on investment shifts heavily in favor of not caring about users who intentionally disable javascript. This is an argument I've had in every shop I've worked at and at the end of the day in every instance we decided it was simply not worth the hassle to support people who intentionally disable javascript.

In fact, I'm pretty sure any competitive search engine these days has to have a very complex crawler that is more than able to deal with javascript-rendered pages. If they didn't, they'd be leaving a ton of content out of their indexes--not a good look for a search engine. So even the "you have to support text-only browsers to please Google" argument has most likely fallen out of favor.


I see this like supporting DOS in 2019 or some such. There might be an esoteric reason to do so, but when 99% of the userbase has left and the old thing can't support new technologies, saying that we need to support the old thing forever because a tiny subset of users still use it stops meaningful progress. If there weren't plenty of good, modern options I would be all for Links, but there are, so I'm not. At some point it's on the user for choosing not to leave their little island of familiarity, especially when the user is technical enough to be using Links.


> If there weren't plenty of good, modern options I would be all for Links but there are, so I'm not.

What are the good, modern alternatives to Lynx?


Firefox and Chrome both have mature accessibility API implementations at this point. Edge is also at least okay. Internet Explorer has worked forever. You then couple those with a screen reader--most commonly Jaws or NVDA--and you get something that very much resembles Emacs or what have you: there's around 50 or 60 keystrokes I use on a regular basis. It's like a local client-server model (indeed documentation on this topic uses those terms). You couple something implementing the server with a client, i.e. a screen reader, a one-switch controller, speech recognition, what have you, and those consume exposed semantic information. Browsers then map web pages to the platform's accessibility model for consumption.

NVDA offers scriptability for the web and otherwise in Python as well, so anything it can't do can probably be added. For instance, there's an add-on for using your local screen reader to control a remote machine, provided that both run it (not the most applicable to accessibility, but a good example of how far you can take NVDA's scripting). Jaws also does much of this but is much more proprietary, including an only half-documented scripting language.

The quickest way to get some idea is to probably look at the NVDA user guide: https://www.nvaccess.org/files/nvda/documentation/userGuide....

iOS is also good. Unfortunately Apple very much dropped the ball on OS X and hasn't picked it up again, but my brother (also blind; it's genetic) did an entire business degree on an iPhone because he didn't want to be bothered learning a laptop. That's a loss in efficiency, but even the lesser options are now sufficient for a non-programmer to pick up and go get a college degree.

There is an idea that goes something like "Obviously screen readers have to struggle to present information, therefore dedicated text-based browsers are better". That was true in 1995 when we didn't even have MSAA. I know people from that era and they had to hook APIs in other processes at runtime. But in actuality, once you expose the accessibility tree and hand it over to the people who want to use it, good things happen.


Ah, I'm sorry, I misunderstood what you meant. You're talking about screen reader compatibility only.

I was interested in hearing about browsers that do what Lynx does, but are better. Unfortunately, the browsers you mention are graphical, and so are not Lynx replacements.


From my perspective there is very little difference. The interface I get out of Firefox is exposed as if it were a text-based browser for lack of a better analogy (it's not quite the same, but the differences are subtle and not obvious at first glance). But I also get the ability to do all the non-text-based-browser things with that interface instead of being limited to what a text-based browser supports, and those things can be made accessible to me. But the really big advantage is that my skills at driving Firefox also work with Chrome, IE, and Edge, and any web view on the platform. Plus there is a large common subset that is shared with all the desktop apps as well.

I'm not the right person if you're looking for someone who shares enthusiasm for text-based browsers, in other words. In general I would like it if people would stop using blindness as a point in their arguments that they're necessary because it shows a massive misunderstanding of what the world of accessibility is like.


In that sense, there's links2 (http://links.twibright.com/). (While it supports graphical output, it's a text-mode browser at heart.)


> In fact, I'm pretty sure any competitive search engine these days has to have a very complex crawler that is more than able to deal with javascript-rendered pages.

Save the region-specific engines which likely lag behind, Google [0] and Bing [1] both support crawling javascript, and Bing is generally the search engine index of choice for all other search engines like Yahoo, DDG (at least for now, I occasionally get crawls from duckduckgobot), etc.

0: https://developers.google.com/search/docs/guides/javascript-...

1: https://blogs.bing.com/webmaster/october-2018/bingbot-Series...


I admit I have an anti-JavaScript agenda, since most JavaScript on the web is used against me: to track me, show ads, autoplay videos, pop things up, deliver exploits, etc. I don't trust you; I don't want you to run code on my computer.


You're already visiting web pages that are directing your computer to access servers somewhat arbitrarily. You're running quite a bit of other people's code on your computer.


I don't consider displaying static documents to be running code.


[flagged]


I appreciate your assistance but I can check spelling; a simple "Did you know that it's Lynx" would have sufficed. Good to know there's two text-based browsers. I didn't, but I and everyone else I know will go on not using either.


Experiences vary, I'll grant that.

There's also a difference between those who acquire perceptive limitations (sight, hearing, also motor control, etc.) later in life, whether through accident, injury, illness, or degeneration, and those who have limitations from birth or a young age. Having to learn some (admittedly arcane) interface such as emacs late in life, with fewer capabilities and often declining cognitive capabilities, is difficult.

And yes, mainstream commercial software and OS offerings are improving. Slightly. (Most are still abysmally poor.)

I'm hard-pressed, though, to see how an increased dependence on dynamic and programmatic web design elements improves accessibility. Especially when wielded by technologists, managers, and clients with little awareness or concern for such access.

Again: Google should be much better positioned to grasp this than most. They clearly don't.


Google themselves are your example. Leaving aside some horrible accessibility keybindings in Docs, both Docs and Sheets are basically fully accessible. In fact Sheets is the best spreadsheet program I've used. It's not as powerful as Excel, but Excel is laggy for a variety of technical reasons that shouldn't exist, at least with NVDA. I can also read presentations in Slides, and I might be able to make one. I've never tried; web or not, making slides just isn't something super feasible for a blind person if it's going to look any good.

I have gripes about ARIA. It's definitely possible to abuse this stuff and end up with an inaccessible mess, but overall we have been trending toward a more accessible internet, and things like the aforementioned do exist.

I've been blind since birth. I started on a device called the Braille 'N Speak 2000, which functioned very much like Emacs. I don't use Emacs because Emacspeak requires a Linux desktop and adds a ton of extra complexity on top for very little gain. Linux dropped the ball big time on accessibility and audio in general, and never really recovered. Obviously this is opinionated, but I feel like you're implying that I lost my vision later in life and am forming my opinion around that perspective. You might additionally want to look into Jaws and NVDA. Learning those is about as bad as learning Emacs; knowledge from when you were sighted doesn't transfer in the slightest, and the interface is much more arcane than you probably imagine it to be.


> Docs and Sheets are basically fully accessible. In fact Sheets is the best spreadsheet program I've used.

This is off topic, and I don't want to distract from the current conversation, but speaking of sheets -- as a web developer, I often build SVG charts with d3, and I've been racking my brain lately trying to figure out how to make them more accessible to blind users beyond just linking to tables of data.

If you're using Sheets, are you also regularly consuming charts as well? Is there a common auditory shorthand for representing something like a pie chart?


Sadly no. Making charts accessible is an unsolved problem. There have been some efforts for accessible graphing calculators that work more or less, but it's not trivial to make a generic one-size-fits-all solution.

For Sheets, the underlying stuff that runs it is quite complicated. They ended up doing something akin to an offscreen model with HTML to make it work, because afaik they use a canvas of some sort to draw everything. In fact, unless you turn on braille mode, both products actually have a built-in screen reader that talks via ARIA live regions. That's terrible practice, but to their credit they got ahead of what the internet was providing for accessibility and didn't have a choice in that regard.

For something you can practically implement without a huge project, I suggest text descriptions of the data. If you want to do a bit better, make it an HTML table--that'll give some convenient navigability for free.
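
A minimal sketch of that fallback pattern (hypothetical data; "visually-hidden" is a made-up CSS class that keeps the table off screen but exposed to screen readers):

    <figure>
      <svg aria-hidden="true">
        <!-- d3-rendered chart; hidden from the accessibility tree -->
      </svg>
      <figcaption>Sales by quarter, 2019</figcaption>
      <table class="visually-hidden">
        <tr><th>Quarter</th><th>Sales</th></tr>
        <tr><td>Q1</td><td>120</td></tr>
        <tr><td>Q2</td><td>180</td></tr>
      </table>
    </figure>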


You are falling for a pretty common misunderstanding. While many blind people use text-to-speech in combination with a classical "screen reader", that is not the end of the story. There is another major technology called Braille. And in countries where there is a good social system, blind people actually own so-called Braille displays. That applies to me, for instance. I am pretty much a pure braille user. Sometimes, when I use a Windows machine, speech will rumble along, but I really primarily rely on what I can feel beneath my fingers. And for braille display users, Lynx is really a nice option.


I own a braille display, and I use my 93-volume high school trigonometry textbook (and the logistics of making sure the right chapter was in the classroom with me) as an analogy when explaining CPU caches to people. In the long run I discovered that incredibly fast speech rates scale better than braille for most tasks outside educational settings, but if it works for you, by all means use it. And before someone inevitably makes the "but braille is important for literacy and brain development" point: it is, and I advocate learning it.

However, putting the burden on site developers to support text-based browsers for this use case is O(sites) but putting it on the screen reader developers is O(1). In other words only the latter scales. Braille isn't a very good argument for site authors supporting text-based browsers from any practical perspective, and in all honesty I think most of them would find this off-putting. It's already hard enough to get people to do accessibility; if we make the bar as high as that and go around claiming that it's necessary for accessibility, no one will ever bother.


It belatedly occurred to me that blind people who primarily use braille may have a different perspective. I and most of the blind people I know are American, so we're accustomed to primarily using TTS. (I'm partially sighted, but I often use a screen reader for non-programming tasks.)

Still, it seems to me that accessing a GUI through a structured accessibility API would have advantages over accessing screen-oriented terminal programs, even for braille-only users. For example, there are all the quick navigation commands that JAWS introduced and most other GUI screen readers have copied. The screen reader is also free to reformat text in a way that's optimal for a braille display. Wouldn't it be nice to be able to read text in smoothly flowing paragraphs, uninterrupted by screen line breaks? I suppose that's not an issue if you exclusively use computer braille and the width of your console is a multiple of the length of your braille display.


Sadly the screen readers do actually do a not-so-good job of this, but I think that may in practice be more a function of poor UI for switching indicators on and off. You get 80 cells at most, and even the most conservatively unambiguous abbreviations for controls are going to use 2-3 letters of that. Hit a line with a few links and half your display is gone. I'm not enough of a braille user to know how to go about fixing this for sure, but there is definitely an inefficiency. Ending up in a "but my favorite text-based browser is more efficient" position because it turns a bunch of this off, or because it's configurable in a way your favorite screen reader isn't, or etc is something I can see happening, but nonetheless the real issue there is that screen readers need to be fixed, not that we should go ask all the sighted people to support Lynx.

NVDA's flow for deciding which formatting you care about is to tab through a list of 30 checkboxes. They have hotkeys when the dialog is open but it's still less than ideal if, as I suspect, braille users need to change them more often. And there is also a potential education problem around teaching braille users that the way they get more efficiency is to change them around all the time.

My solution in the world of infinite resources would be to make the cells 5 or 6 dots high so that you can put the formatting in line with the characters it's for. That's something I thought would be useful for a long time. But sadly we live in the world where good braille displays will forever be expensive and thus doubling the price isn't doable.


I used to be legally blind and heavily vision-impaired. Even then, the most accessible web browser was Lynx, since it runs in a terminal, which has a fixed and rigid UI.

I'm frustrated for you.


Are you saying the OP is lying about being blind or is a fake?


Perhaps I spoke a little too broadly. What I meant is that the bulk of blind users don't use lynx. This particular one appears to -- sometimes? -- but this usage is not typical.


...in your circle


You could say that about literally any anecdotal observation. I’m sure you notice trends around you and generalize. In fact, if you didn’t, you would find it hard to pretty much do anything in life. In short, your comment is myopic and unhelpful.


I don't draw broad software trends from just my own perspective though, especially when my own perspective is directly contradicted by the OP.


Nowhere does OP suggest that using lynx is common among blind users. He simply stated that he uses lynx. Throughout this thread, multiple other blind users have noted that use of lynx is uncommon, even among blind users.


But Google has the access logs to their own website to know how frequent use of any particular browser is, regardless of circles.


Back when I did web design stuff and couldn't get anyone to put any effort into accessibility, I would "sell" the concept as SEO: search engines see what we put in for text, not images. Accessibility for humans is accessibility for Google.

It was pitiful that I had to do this but there you have it.


a11y = accessibility; I had to look it up so I'll save everyone else the trouble.


And it reads as "ally"?


The 11 stands for 'eleven letters omitted'. Cf. k8s (Kubernetes), i18n (internationalisation), l10n (localisation).


Okay, that explains that trend, but wow. We actually managed to come up with something arguably more obtuse than an acronym (and they're hardly the most friendly of things)


I'm pretty sure that "i18n" and "l10n" are about fifteen years old -- not exactly a trend any more (at least, not a new one).


They're a bit older than that – this credible report suggests 'i18n' was in use at DEC in 1985, with appearances in public online discussions by 1989, and in books by the early 1990s:

http://www.i18nguy.com/origini18n.html


Amazing. I wouldn't have guessed it was that far back.



Fancy web people are too cool for acronyms


What was the acronym for "accessibility" again?


    PS C:\> 'accessibility' -replace '(?<=^.)(.+)(?=.$)', {$_.Length}
    a11y
(PowerShell 7)


That's not an acronym; GP's point is that a single-word acronym would contain even less information than this (which is apparently called a 'numeronym').


I'm resisting the urge to rewrite it in APL. :) Nicely done.


No need to resist

  (⊃,(⍕2-⍨≢),⊃∘⌽)'accessibility'


I tried to write it a couple of different ways, but I keep coming back to approximately your code as the most direct way, if written slightly differently with (⍕¯2+⍴) in the middle; I'm puzzled by the behaviour of being able to drop and take negative numbers to index from the end of the array, but not being able to use negative indexes in other contexts. Why is (¯1⌷word) an error, instead of working like (¯1↑word) works?

Here's a much more convoluted way, because everything is difficult in this language:

    ,/((⊂∘⍕∘⍴∘⊃@2)⊢⊂⍨((1@2)1,⍨¯1↓1⌷(↑(1,⊂))))'accessibility'
    a11y
a frustratingly roundabout way to build up the boolean vector:

    'accessibility'
     1100000000001   ⍝ for penclose
without counting the length first.

(Is there a way to drop from the middle of an array? "delete index 4 5 6"? Or to insert into the middle of an array? "Insert between elements 2 and 3"?)


"Everything is difficult in this language" made me smile. :) I mostly agree. There are things things that APL / J make amazingly easy, too, but a lot of simple things are just brutal to me. I'm not a hardcore APL programmer -- I've played with J more than APL, but I'm still a novice at both. I just enjoy the mental challenge sometimes.

To insert items from B into A at point N, you could (in pseudocode, I don't have time to play with APL right now!):

(Take N of A) , B , (Drop N of A)

To remove items from the middle of an array between N and M, I can think of two ways (I might have off-by-one errors here):

- (Take N of A) , (Drop M of A)

- Bools / A

where "Bools" is an array of 1/0 values, and the 1's indicate the elements to keep. As a dyadic verb, "/" means "compress", that is, to keep only the indicated elements from the right-hand object.

I'm sure there are other ways! These are the few ways that my limited brain could come up with. :)
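
A minimal Dyalog-style sketch of those recipes, using the same N, M, A, B, and Bools names (untested, and the same off-by-one caveats apply):

    ⍝ insert the items of B into A after position N
    (N↑A),B,(N↓A)
    ⍝ remove items N+1 through M from A
    (N↑A),(M↓A)
    ⍝ keep only the elements of A flagged 1 in Bools
    Bools/A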


I'm also a novice at APL, despite thousands of lines of session scrollback building up basic lines that almost work then I change one character and INSCRUTABLE ERROR, over and over. Your take,B,drop design looks simple and sensible but the repetition of N is bugging me - I tend towards codegolf more than code clarity. The kind of drop I was hoping to find would be like you can index non-contiguous indices and update them:

    Vector[2 8 5]←100
but instead of them getting value 100, they get deleted.

Or like you can do ~ for set subtraction "without some values" like (⍳9)~5 7 but treating the right argument as indices to remove from the array, rather than values to remove from the array. I feel like there's two patterns which work in the same way as the thing I imagine and a space where the thing I imagine could exist but doesn't. Which I've now tried to code up:

    4 9 8 7 10 11 {(~(⍳⍴⍵)∊⍺)/⍵}'accessibility'
    accssty
It's not so ugly, but that took me down another rabbit hole of why I can't turn that into a train because there seems to be no way to force monadic iota when the trains design wants it to be dyadic. That is I want 3 4(⍳⍴)'accessibility' to come out the same as (⍳⍴)'accessibility' by somehow blocking the iota from having a left arg. Like 3 4(⍬∘⍳⍴)'accessibility' but Dyalog shoves the numbers in as a left arg then complains that the left arg is unavailable. Is this a problem with having to overload every glyph with a double meaning because of limitations on IBM 1960s printer technology, or is this my misunderstanding of tacit code and limited knowledge of operator behaviour, who knows. Maybe in another thousand errors I'll know a little more.

> I just enjoy the mental challenge sometimes.

There's no accounting for taste ;)


Sorry about the pseudocode, but I don't have an APL font on my phone:

Vector[(iota rho Vector) set-minus indices to remove]
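
In glyphs, that's presumably something like this (IndicesToRemove being a hypothetical name for the indices to drop):

    Vector[(⍳⍴Vector)~IndicesToRemove]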


No accounting for taste -- truer words never spoken. :)

---

Abandon all hope, ye who read further.

I try not to worry too much about making things tacit. But there's a neat trick you can play in J to find a tacit definition for a non-tacit expression (if one can be found). I don't know if there is a counterpart in any common APL distributions.

First you define an expression in terms of values named 'x' and 'y'. Your goal is to find a tacit verb -- let's say 'T' -- that you can call as 'x T y'. (X is always on the left and y is always on the right, by convention -- compare with ⍺ and ⍵, sort of.) There is a J operation, cryptically named '13 :', which will try to make such an expression tacit.

Here's a quick attempt at writing 'N M drop Y', which will return Y but with the N:M section deleted. Again there might be off-by-ones here (you could use Increment/Decrement to fix that):

Define some 'x' and 'y' values for experimenting:

    x =: 5 10     NB. these will be our N:M indexes
    y =: 'abcdefghijklmnopqrstuv'
Here are examples of Take and Drop (these are just to help you translate between APL and J):

       5 {. y
    abcde
       10 }. y
    klmnopqrstuv
Monadically the same verbs mean First and Last:

       {. x
    5
       }. x
    10
With N and M defined as the first ({.) and last (}.) values of X, take N from Y, and join (,) that with the result of dropping M from Y:

       (({. x) {. y ) , (}. x) }. y
    abcdeklmnopqrstuv
Close enough! Now here's where we invoke the 'make tacit' verb, '13 :'. We have to wrap the original x/y expression in quotes, to make it a string, and then call it like this:

       13 : '(({. x) {. y ) , (}. x) }. y'
    (] {.~ [: {. [) , ] }.~ [: }. [
There's the magic. The tilde (~) is like ⍨ in APL, and the bare [ and ] represent 'take left value' and 'take right value'. The [: word is called a 'cap' and is used to limit the left side of a verb fork. (Sorry, I don't know the equivalent in APL.)

And that's our tacit version. We can try it out, give it a name, and try it out again:

       x ( (] {.~ [: {. [) , ] }.~ [: }. [ ) y
    abcdeklmnopqrstuv
       drop =: (] {.~ [: {. [) , ] }.~ [: }. [
       x drop y
    abcdeklmnopqrstuv
       5 10 drop 'abcdefghijklmnopqrstuv'
    abcdeklmnopqrstuv
 
If you've truly abandoned all hope, the J vocabulary is listed here: https://code.jsoftware.com/wiki/NuVoc


Ooh, that's interesting, and I can read enough J to see how the basic version works, and a bit of what the tacit version is doing; the left and right functions map to ⊣ and ⊢ in APL. I don't know that the cap [: needs an equivalent, because Dyalog APL recognises two functions alone without it and calls that an "atop": (f g) is f(g(x)), or "f atop g".

But I'm going to have to read more to work out how the tacit version works, because I have no intuition for telling where the x and y (or ⍺ ⍵) enter into it.

Dyalog have this document for tips for translating d-fns into tacit form: https://dfns.dyalog.com/n_tacit.htm but how that compares to what '13 :' does internally..


Cool! I didn't know that about APL, and thanks for the Dyalog 'n_tacit' link.

This sort of feels like we are two tourists, trying to help each other translate between two languages that neither of us speaks very well. :)

These two pages might help you to build an intuition about how J does verb trains. The diagrams help to explain where the x and y values are used throughout (and where they are not).

https://code.jsoftware.com/wiki/Vocabulary/hook

https://code.jsoftware.com/wiki/Vocabulary/fork

I'm starting to be able to write long-ish forks in J without mechanical help, although I usually get them wrong the first dozen times. Practice makes perfect, I guess!

I'm planning to do this year's Advent of Code puzzles (https://adventofcode.com/) in J this year... at least until I reach a problem that explodes my brain. I'm hoping that this will give me something concrete to sharpen my skills on.


APL is a strong drug, I need to regulate my dosage. :)


Why can I not resist these urges? Here it is in J, which is basically APL but in ASCII:

       ({.,(":@(_2+$)),{:) 'accessibility'
    a11y


Did you reply to the wrong person?

This is a cool one-liner though!


TIL


Thanks. I should have spelled that one out.


LOL. Do you know how goddamn strict Google is, internally, regarding a11y/i18n/l10n?

The idea that your precious text browser == blind people is wildly presumptuous. Don’t use people with disabilities as your human shield.


Strict or no, Google actually fails at accessibility a lot. The basically broken, unergonomic keybindings in Docs for starters, and the entire fiasco that is Android, come to mind immediately. For a really "fun" fail that's recent, YouTube recommendations are now a live region. This means that if you leave that tab focused, your screen reader just starts reading things while you're trying to play music and are across the room and can't stop it. That seems like a minor example, except the whole point of YouTube is hearing things, so thinking live regions are a good plan is kind of a big ball for someone to drop. Things have improved some (I no longer hate the GCP console), but my point is that being strict about accessibility doesn't mean being good at accessibility by any means, and Google has a deservedly bad reputation in blindness circles that they earned over a very long time.


Is there a way to make DDG Lite chrome's default search engine?

Chrome seems to let me create non-default search engines, set the full DDG as the default, but not set lite as the default.


The problem is that the search query does not go in the URL in the Lite version. That really sucks. It makes it useless in the browser history as well. In fact, because all the URLs are the same on each query, they're not even added as separate entries in the history.


They don't?

Search term(s) are the URL parameter:

   ?q=<query>
Which does show in my history AFAICT.


On the regular version, not the lite. Here's my URL after searching for "asdf":

https://duckduckgo.com/lite/

In the search form, it uses POST instead of GET to perform the searches.


Even if the search form uses POST, the server does read the parameter if it's specified in the querystring. That is, https://duckduckgo.com/lite/?q=asdf works, so adding it as a search engine with URL .../?q=%s should work as well.


That's nice I guess. Still a shame the text input can't be used without forgoing the history.


For desktop, I believe so. Not for Android.

I've fully ditched Chrome desktop for Firefox, however.

Other than the landing page for search itself, the distinction doesn't much matter. If you set web search as a homepage or bookmark, you can definitely do it there, however.


How would I get into such accessibility circles?



Another person notes that ddg is inferior. Maybe people get used to a certain way of searching with Google that doesn't translate to ddg. Haven't noticed a drop in quality myself and I think I might have been retrained to use different patterns and techniques in structuring my queries.


DDG results are so much worse for me, especially anything longer tail or in Spanish, that I switch to Google when I'm actually getting work done. I find myself adding "!g" to an important search just to check for any results that DDG doesn't know about and it's almost always an upgrade to see Google's results.

Search is hard.

I don't like to chime in to say something negative about an underdog like DDG, but I see this "people probably just don't know how to use DDG" suggestion a lot and it's quite the opposite: Google feels like it can practically read my mind with minimal context, like knowing I also may be talking about a recent event that shares the name with a generic search term. And I'm not talking about personalized search.

Or consider how "elm dict" in Google takes me to https://package.elm-lang.org/packages/elm/core/latest/Dict (#1 result), but https://duckduckgo.com/?q=elm+dict&t=h_&ia=web in DDG doesn't (nowhere on page 1).

Run into this enough and it becomes hard to willfully use DDG when you know you're likely missing out on good results when trying to do real work.


I know you're probably annoyed that I'm telling you that you're using ddg wrong. That's not exactly what I'm saying. It's more like: we're trained to expect certain things from the search engine, and so it's hard to switch.

> Or consider how "elm dict" in Google takes me to https://package.elm-lang.org/packages/elm/core/latest/Dict (#1 result), but https://duckduckgo.com/?q=elm+dict&t=h_&ia=web in DDG doesn't (nowhere on page 1).

ddg gives me the source code to elm Dict (8th hit, so it's in the first page): https://github.com/ivanov/Elm/blob/master/libraries/Dict.elm

I assume it's the same because ddg is claiming not to affect search results by anything except time and user configuration.

Google's results (for me) are a full page of references to every different version of elm's documentation for dict. Not exactly a wide net, and frankly pretty redundant. To see anything else, I have to click at the bottom of the page. It doesn't show me the source code. I went through the first ten pages and didn't see any link to it.

For ddg, I just use the arrow key to scroll down, and I can press enter to follow the link I want, changing the meaning of "first search page" for me quite a bit.

> DDG results are so much worse for me, especially anything longer tail or in Spanish, that I switch to Google when I'm actually getting work done. I find myself adding "!g" to an important search just to check for any results that DDG doesn't know about and it's almost always an upgrade to see Google's results.

I have a completely different experience in Italian. They're actually pretty good, which is surprising given the small audience.

For work, usually I directly search for documentation in reference systems (e.g. en.cppreference.com). Neither ddg nor google will consistently direct me to the "best" documentation. YMMV.


A comment on my comment. It is really true that Google indexes the deep web of generated content (like AliExpress, eBay, and others) better than ddg. That's part of the long tail that's costly to cover.


Sure, but 90% of the time, I and most people I know don't do very sophisticated searches. I'm actually mostly using google as a billion dollar search engine for wikipedia / stackoverflow / arch wiki / bbc / nyt / ft / whatever big site there is in a given domain. Because these sites happen to have 90% of what i'm looking for. For the rest, we all have our own little forums we follow: fb, hn, email etc.

So instead of trying to beat google on full web searches, the trick might actually be to index the 100 best-ranking websites according to some metric (Alexa rank, for instance) and do it better than google. Then, maybe you can grab over 50% of the search traffic. For broader queries (in the search knowledge graph sense), in this scenario, people would fall back to google.


Search is ripe for disruption, and ddg is the lead candidate right now. One wonders if such a strategy could give rise to more challengers more quickly. I’ve certainly considered it myself (alexa top 10k though).


> Google feels like it can practically read my mind with minimal context, like knowing I also may be talking about a recent event that shares the name with a generic search term. And I'm not talking about personalized search.

I wish there were some compromise, because Google regularly seems to read a mind other than my own: it automatically "corrects" search terms to similarly spelled terms that are totally irrelevant to my search, or includes results that are superficially synonymous but irrelevant for my purposes. I frequently feel like I have to convince Google to stop second-guessing me and actually consider what I wrote rather than what it assumes I meant.

A few more knobs and switches to adjust that behavior would be helpful at least for power users.


> Another person notes that ddg is inferior

I'll agree with this too. Thing is, 99.XX% of the time DDG works fine. 1% of the time if I can't find a thing, I try with google and probably 50% of the time I can then find what I wanted. E.g. DDG has a 99.5% success rate, Google has a 99.75% success rate. Not too bad by DDG, as I know that last .25% is REALLY hard.

Either way, google is seeing only a tiny % of my search queries, so I'm happy.

Of note, I switched to DDG earlier this year. I've tried to do it in the past and found that the DDG/Google ratios were more like 80%/99+%, which is WAY too much of a tradeoff to make. DDG has MASSIVELY improved; I'm using google search <1/day now.


Same here: 99% of the time. If I want to dig deeper than DDG goes in some cases, I just add a !b in front (for Bing). I do a lot of research; haven't used Gargle for 10 years.


What many people don't realise (especially on HN) is that DDG is not as good as Google, in my experience, at finding non-English content. One of the things that stopped me was the lack of good Dutch results, which Google can pick up easily with its internal translations and whatnot.


I use DuckDuckGo when I know what the first result is likely to be. If I'm actually _searching_ for something (i.e. the majority of the time) I add !g. I can feel myself flinch every time I submit a DuckDuckGo query. It's just much, much worse.


During my first 2 weeks I definitely noticed a difference in quality, but I guess over time you get an intuition for how to combine search terms when it comes to less common queries. Now I prefer DDG over Google, even without all the nice shortcuts.


What's really weird about DDG is that the results from the no-JavaScript version seem far inferior to the results from the JS-enabled version of DDG.


I think Google's results have become dramatically worse, and I've become better at DDG. I now often find it difficult or impossible to find what I want on Google, and straightforward on DDG. I also use the bangs all the time now, they're great.


> However, being a blind person, I guess I have to accept that Google doesn't care anymore.

Jumping from "this isn't working on my incredibly niche browser" to "Google don't care about blind people" is completely ridiculous.


Defaulting to simple HTML allows one to support every incredibly niche browser. That is the beauty of protocols and standards, and it is particularly relevant when it comes to accessibility. Google search results are literally lists of web links, so this is absolutely doable.

They don't do it because they are more preoccupied with extracting data about their users than they are about accessibility, and yes, this includes blind people. There is no way around it.

It is perfectly legal to be selfish, but let's not bullshit ourselves about what is really going on...


Yep. Following standards has great side effects all around. The same things that break sites for the blind also break them for UX-enhancement extensions like Tridactyl, which lets you click elements from the keyboard, so long as sites don't go out of their way to make clickable buttons undiscoverable.

(Extreme apologies for any implied equivalence between myself and the blind.)


I actually think this is a great example of the Curb Cut effect, where accessibility features that are vitally important to one group of people (wrt curb cuts, wheelchair users) also provide broad benefits to many others (wrt curb cuts, one example would be parents with small children in strollers).


Blind enablement also allows easier web scraping, which is what I think Google is more worried about.

It has solutions, like Google voicing out the contents, instead of serving a webpage so scrapable that screen readers can parse it.


Web scraping isn't bad per se. It's being made to look bad by parties that want to eat their cake and have it too. If you show something to a person, that person should also be able to use custom automation to view it. If you show something publicly, the public should be able to use custom automation just as well. Don't want someone scraping your site and using it for profit? Limit the audience and sue people making profit off your data for copyright infringement.


What about bot-throttling heuristics and captchas for suspected ones?

They seem to have captchas down to an art in every other context...


> It has solutions, like Google voicing out the contents, instead of serving a webpage so scrapable that screen readers can parse it.

The real solution is to find a business model, or a way to organize society, that does not depend on us building nonsensical prisons or restrictions for each other. It's almost as if surveillance capitalism is not the final answer...


Google is mostly focused on trying to kill URLs at the moment, both in their pages and Chrome.

Accessibility is "just" a side effect in their EEE war on users.


I don't feel that's fair -- yes, Lynx is not really updated much anymore and at this point very niche. But it must work for their workflow, and I feel like something as critical as search should have a fallback that works with very 'primitive' browsers and older W3 standards.

(FWIW I work at Google, but not on the search team. I might go looking at internal discussions to see if this is being looked at at all)


Recently I read announcements that Google was making it harder to view internal projects/discussions on other teams [0] (suspiciously, this came after the leak that they were still working on a censored search engine for China [1]). Have you, as a Google employee, found it hard to audit or observe previously visible projects?

[0]: Couldn't find a quick source on this

[1]: https://theintercept.com/2019/03/04/google-ongoing-project-d...


Perhaps you could contribute better by citing examples where Google does care about blind people instead of calling an actual blind person's argument ridiculous.


It doesn't matter if he's blind; I'm calling the connections ridiculous. Lynx isn't a browser for blind people; it's a browser for the terminal. The terminal isn't some accessibility tool either, and was never made to be one. Drawing a connection between Lynx and blindness accessibility is tenuous, saying that this change means Google is attacking Lynx is even more tenuous, and drawing some sort of transitive connection between all of them to say that the change is somehow against accessibility because it doesn't work on lynx is doubly so, bordering on...

It should also be noted that it seems that this is a bug with Lynx, not Google.

There's this, if you care to read it: https://www.google.com/accessibility/

But that's not the point.


Please, learn a bit about edbrowse, made by a blind developer, and think again.

https://edbrowse.org/


Perhaps we could contribute ourselves instead of asking if others can.

Google seems to be a heavy user of https://developer.mozilla.org/en-US/docs/Web/Accessibility/A... which offers a lot more power and flexibility for accessibility than plain text, by making full-featured web interfaces accessible rather than just basic versions.

See other methods https://www.chromium.org/developers/design-documents/accessi...


How about the built-in screen reader in ChromeOS?


Lynx is niche now? Low user count, yeah, but it's a standards-compliant browser.


The definition of niche is literally "appeals to a small, specialized section of the population" so, er, yes.


Perhaps, but in this case Google would just have to comply with the standards, which are not niche at all.


The standards just describe what to do to make a standards-compliant HTML page: what tags are allowed, etc.

There are no standards that say that e.g. your page can't be all dependent on JS.

In other words, Google could be 100% standards compliant, and not work in Lynx.


The grammar of the English language does not forbid you from writing in Greek, but if you choose to write only in Greek, you compromise on a lot of people being able to understand you.

The same goes for standards, and you are misunderstanding on purpose.

If Google chooses to not support simple HTML, then they are choosing to not support countless accessibility tools, and they know it. Some blind people will have a more miserable life because Google attained a de facto monopoly, but does not recognize some of the moral obligations that people like me feel should come with such a position. "With great power comes great responsibility", or maybe not.


Is lynx up to spec on HTML5 ARIA attributes? My understanding is that that's how accessibility is "supposed" to be done now, but if lynx hasn't been updated in a while, it might not support those HTML5 features, and thus not be standards compliant.

(edit: Someone below notes that lynx appears to incorrectly parse valid HTML5 on the google homepage, so it sounds like Lynx's lack of updates is hurting here).


> Is lynx up to spec on HTML5 ARIA attributes? My understanding is that that's how accessibility is "supposed" to be done now

No, that's not true at all and is unfortunately a common anti-pattern. Accessibility is supposed to be done by using standard HTML elements and attributes. ARIA is there to extend / fill in the blanks and to fix things when people deviate from the norm. For instance, if you have a button, you should almost always use the standard HTML <button> and only use some other element type with an ARIA role=button if it's unavoidable. And <button role=button> is redundant. Best practice is still to use the semantics defined by HTML, as it always was.
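
A minimal sketch of that guidance (hypothetical markup):

    <!-- Preferred: native semantics, focus and keyboard handling for free -->
    <button type="button">Search</button>

    <!-- Last resort: role=button restores the semantics, but you must
         re-add tabindex and key handling yourself -->
    <div role="button" tabindex="0">Search</div>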


I should have been clearer, but imo correctly using html5 node types is part of correctly using html5 attributes.


>The same with standards, and you are misunderstanding on purpose.

While I get what you mean, the use of the term "standards" just conflates an orthogonal issue.

You can be 100% standards compliant and not readable on Lynx, or 100% standards compliant and readable on Lynx.

Relying on JS is not some niche obscure corner or some bypass of the standards as per the English/Greek analogy. It's basically the norm for most SPAs today.

The problem is that the standards are not compliant with Lynx (or rather that Lynx is not compliant with the standards).

What you want is not for Google to use the standards, but for it to use the part of the standard that is about simple, non-JS-dependent HTML.


SPAs are shit for the blind.


Google shouldn't be restricted to a subset of the available technology because a niche browser isn't updating to available technology. Yes, a side-effect of this is that the blind community using Lynx can't use Google. While unfortunate, it's also a tiny, tiny, TINY community.

If you want to be upset, be upset with Lynx for falling behind. Or don't be upset and switch to JAWS, BRLTTY, Orca, etc. But the idea that anyone is supposed to support every possible browser is just silly.


According to another comment thread, this isn't accurate: https://news.ycombinator.com/item?id=21629207


It seems that the problem is caused by the fact that Lynx isn't standards compliant anymore, and fails to interpret valid HTML5 structures correctly because they aren't valid under older standards.


Lynx is the definition of niche...


According to some other comments, the problem might indeed be that it is not a browser compliant with the current HTML standard: https://news.ycombinator.com/item?id=21629207


> it's a standards-compliant browser.

What does that even mean? Which standards does it comply with? Is there a compliance test suite or report somewhere? Because there are definitely a bunch of standards it does not comply with.


Curious, I installed lynx just to check this out.

I find that I physically cannot navigate to the links in the page except the first few at the top.

But.

On the pages I get, the <a ... href="..." ...>...</a> structure is still 100% intact. It's buried in a table and div soup, but it's there.

So, I argue Lynx parsing bug!

The author of this article would have done well to save and diff the working/not-working HTML they received. :(


Not really a bug so much as an outdated browser that hasn't been updated for HTML 5.

In HTML 4, <a href="..."><div>...</div></a> is an error and Lynx deals with this by implicitly closing the <a>, turning it into a hidden link (which can still be followed by pressing 'l').

In HTML 5, <a href="..."><div>...</div></a> is valid.
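
Concretely, a sketch of the two shapes (hypothetical markup, not Google's actual page):

    <!-- Valid in HTML 5; an HTML 4-era parser implicitly closes the <a>
         before the <div>, hiding the link text -->
    <a href="https://example.com/"><div>Result title</div></a>

    <!-- Renders as a link in both: only inline content inside the <a> -->
    <div><a href="https://example.com/">Result title</a></div>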


Google actually detects the Lynx user agent and sends an HTML 4 page, but apparently this new code wasn't written with that in mind.


I think it is because in html4 the content of an <a> element is restricted to inline elements, whereas in html5 <a> is transparent so its content can be block elements if its parent allows them.


That would explain why my blog is terribly broken in lynx. The links from my category pages to article pages are usually done with <a href...><figure>...</figure></a>, which breaks the <a> in lynx.


Why should anyone have to do that? If the site stops working for the user, then it no longer works.

Should they be expected to make their own ‘re-Googler’ to fix the page so they can use it again?


>Why should anyone have to do that?

When there's a browser bug, someone needs to debug it and fix the browser. Otherwise the browser bug will remain forever.


> Luckily, there is duckduckgo. However, I have to admit, the search results of duckduckgo are by far inferior to what Google used to give.

For cases where DuckDuckGo isn't quite enough, I usually rely on StartPage [1]. It uses results from Google, but like DuckDuckGo it doesn't track its users.

StartPage, like DuckDuckGo, also works well in Lynx (I've just tested in Lynx 2.8.9 on Ubuntu 16.04).

Additionally, you can get StartPage results right from within DuckDuckGo by just appending the !sp shortcut to your search query [2]

Edit: you may want to keep in mind, however, that StartPage is now owned by an advertising company. Some users took issue with that. Personally, I'm OK with that as long as users are not being tracked. Relevant HN thread: https://news.ycombinator.com/item?id=21371577

[1] https://www.startpage.com/

[2] https://duckduckgo.com/bang?c=Online+Services&sc=Search


DDG has been getting better. I've switched my browser's search to it and only occasionally go back to Google for something specific.


DDG's results are still very bad, and they will likely always be terrible. It's not hard to find a query whose results are objectively much worse than Google's. DuckDuckGo will never be able to catch up to Google's search quality. It doesn't have Google's data: Google relies on the search behavior of most of the people on the internet to guide its results, along with vast human and hardware resources, to create results that even Microsoft can't match.

https://www.bloomberg.com/news/articles/2019-07-15/to-break-...


From my view, I think it must depend on your use case and common searches. I've been using DDG for years now without any major qualms on search results. For my purposes it finds what I need, and for the few cases where it doesn't (perhaps 1 out of 50 searches?) it's easy enough to just add on a !g to my query to use Google instead.


Whenever I do the !g these days, I never find that Google has any better results. Usually, I just get 32,000,000 more results listed that are auto-generated junk. Most of those don't even contain the words I searched for, so I don't understand how they come up.


Google's results are getting steadily shittier, so they're a bad benchmark to beat.

The behavior-based searching is crap that has been usurped by people seeking profits. Optimizing users' dopamine feeds and publishers' ad impressions isn't a good thing or something to aspire to.


Or you could use searx.


Did they also remove link wrapping with this? The HREF goes straight to the destination for me now on Chrome, where previously it went to some Google domain redirect. It's there on the first HTML load too, it's not a JS thing after the fact. Is there a different response for Lynx or are they formatting it in such a way that Lynx doesn't pick it up?


They're using the ping attribute of the <a> element now; it's the only good thing to come out of this.
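
For the curious, a sketch of what that looks like (hypothetical URLs). When the link is followed, the browser navigates straight to the href and asynchronously POSTs to the ping URL:

    <a href="https://example.com/result"
       ping="https://tracker.example/click">Result title</a>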


On a more substantial note, that's documented here:

https://www.w3.org/TR/2008/WD-html5-20080122/#hyperlink0

It's a little odd to see that a browser "must parse", but also "may either ignore the ping attribute altogether, or selectively ignore URIs".

It strikes me as a bit clumsy compared to the typical MUST/SHOULD/MAY wording.

Anyone (other than Google) using a-pings?


Ah yeah that's definitely going to be different HTML on Lynx, then, since I bet they don't support that and Google's not missing out on tracking.

Yup - curling with a Lynx user agent gets targets of href="/url?q=<whatever>" rather than href="<whatever>" ping="tracking".
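
For anyone who wants to reproduce that, a rough sketch (the UA string is just a plausible Lynx one; Google's exact response may vary):

    curl -s -A 'Lynx/2.8.9rel.1 libwww-FM/2.14' \
         'https://www.google.com/search?q=test' | grep -c '/url?q='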


Hm, so now I can go straight to the Google hosted AMP version without bumping through Google an extra time first? /s



googler no longer works either. I went to use it and got no results. Since they just scrape the results page, a layout change is probably going to break everything.

This issue was opened recently for it.

https://github.com/jarun/googler/issues/306


It got updated and is working again.


Also ddgr: https://github.com/jarun/ddgr

Same author, no tracking, works over Tor.


I like the idea behind DDGR. However, I was surprised that it doesn't offer a pager for search results. Whenever I search something, I need to use the scrollback buffer to actually see the first results... That is not very user-friendly, even for a CLI tool.


A fundamental principle of the Web is that site-specific clients or apps shouldn't be necessary.

That they are only underscores Google's massive blunder here.

Yes, there've been CLI wrappers around web queries before. Until the past few years, these simply addressed search format, URI arguments, and namespace. They launched in the user's choice of browser, text or graphical. Surfraw is the classic; I've written a few very brief bash functions for equivalent capabilities, again launching any arbitrary browser (though usually w3m, by personal preference).

Now what's needed, and what you're recommending, is a content-side wrapper as well. This story ends poorly.


> A fundamental principle of the Web is that site-specific clients or apps shouldn't be necessary.

I think this ship, if it hasn't sailed already, is at least starting its engines and getting ready to leave the harbour.

User agents and servers are increasingly trending towards an adversarial relationship, where the user doesn't want to do much of what the server is asking of it. This has been true from the first pop-up blockers, through to modern adblocking and anti-tracking measures (the situation now being so bad that tracking protection is a built-in default feature in some browsers).

Eventually, a "filter out the crap" strategy becomes too onerous, an "extract what looks good" strategy starts to look better, and you end up with tools like Reader Mode. Custom clients are a natural next step - when someone gets desperate enough to write an article-dl to match youtube-dl, we'll be there.
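For flavor, a crude sketch of what the "extract what looks good" step could look like (stdlib Python, zero real heuristics; an actual Reader Mode does vastly more):

    # Naive content extraction: keep only the text inside <p> tags,
    # dropping everything else (scripts, nav, trackers, the lot).
    from html.parser import HTMLParser

    class ParagraphText(HTMLParser):
        def __init__(self):
            super().__init__()
            self.depth = 0      # how deep we are inside <p> tags
            self.chunks = []

        def handle_starttag(self, tag, attrs):
            if tag == "p":
                self.depth += 1

        def handle_endtag(self, tag):
            if tag == "p" and self.depth:
                self.depth -= 1

        def handle_data(self, data):
            if self.depth:
                self.chunks.append(data)

    p = ParagraphText()
    p.feed("<div><p>Keep this.</p><script>dropThis()</script></div>")
    print("".join(p.chunks))   # -> Keep this.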


Oh, I agree that the ship is now barely visible over the horizon.

That doesn't diminish the fact that the original intent was to have a common, freely-available mechanism for accessing, viewing, and presenting content.

(I'll probably be asked for citations. TBL has probably written on this, and Tim O'Reilly had an essay on his early response to the WWW as opposed to alternative, proprietary, systems, when O'Reilly & Associates were plotting their early course.)


https://weboob.org/ will become mainstream.


> A fundamental principle of the Web is that site-specific clients or apps shouldn't be necessary.

Perhaps, but the Web now effectively requires Chrome, or things which look enough like it.


When it comes to technical searches (dev-related, scientific, even government), DuckDuckGo blows Google out of the water. Junk random searches, trivial things, yeah... it's weak. My experience, I should say.


Serious question, why search Google in your terminal with Lynx as opposed to `googler`? The actual result pages can still open in Lynx but the experience of navigating the results is very nice.


Not pretending to be able to answer for OP, but one answer would be that it's because lynx is a browser, and Google is a web site. Traditionally you read web sites with browsers, rather than requiring special tools for every particular site.


Author of the post is blind so I'd imagine the usage of Lynx is for screen reading/accessibility reasons.


Spivak is asking why Mario isn't using https://github.com/jarun/googler .

If Mario is able to use Lynx, the expectation is that Mario can probably also use googler.


I would imagine that most are not aware of googler, and/or do not want to use one tool to search and another tool to browse.


There's a patch over v3.9 that works with the new layout: https://github.com/jarun/googler/issues/306

Hopefully we'll have a PR merged soon.


> However, being a blind person, I guess I have to accept that Google doesn't care anymore.

This is not a blindness issue. It would be more accurate to say that Google doesn't care about geeks who cling to old ways of doing things long after there's any good reason to do so. As others on the thread have pointed out, blind people can use graphical web browsers with a screen reader, even under GNU/Linux. I know you know this; I'm pointing it out for the benefit of everyone watching the thread.


On the other hand I don't feel like I'm in a position to blame a blind user for clinging to something they already know instead of learning how to use a graphical browser. A lot of people in general have good reasons for sticking to what they know. Especially if what they stuck to worked fine until recently, and doesn't now only for some trivial reason that's easy to fix.


> It would be more accurate to say that Google doesn't care about geeks who cling to old ways of doing things long after there's a good reason to do so.

Is it that you can't think of a good reason to use text-based browsers at all, or is lynx itself the issue? Command-line web tools like lynx and curl are pretty handy to have available.


> However, I have to admit, the search results of duckduckgo are by far inferior to what Google used to give.

I've switched to DuckDuckGo since I read its CEO's book "Super Thinking", and I'm not feeling that it's inferior. Sure, it doesn't have rich cards and other goodies, but I've come to realize that these are nice-to-have, but not essential. On the other hand, reducing the confirmation bias by getting out of the filter bubble is, I believe, essential.


It's probably been years since I've used Google search. It's entirely possible that I'm missing out on desired results, but as far as I can tell, I'm not. Because I don't land on answer sites, I end up (most of the time) going directly to read source code and documentation. This seems to have helped create something of a better understanding of whatever tech I'm using at the time.


Startpage.com, a Google proxy, still works fine in lynx.


Unfortunately, they got bought: https://news.ycombinator.com/item?id=21371577


Wow, first Private Internet Access, now Startpage? What's next, the next Waterfox version being based on Chrome!?

Starting to feel like Luke Skywalker and Princess Leia in the trash compactor: the walls are closing in.


True, and it's a shame. But it might not matter that much in this accessibility context, as Google is also an advertising company.


How does Qwant perform? It's supposed to be privacy-focused like DDG and SP, and it's European, so it's covered by the GDPR too:

https://lite.qwant.com


Hey! We would be happy to give you unlimited access to our search API: https://serpapi.com

It will be slower than regular Google, but at least you are not going to be blocked.

Edit: create an account (no credit card details required), send an email to julien _at_ serpapi.com with it, and I'll make sure you have an active account.


w3m works fine for me.

Any reason people prefer lynx over w3m or eww?


I'm surprised elinks doesn't get more mention whenever text browsers come up, and I wonder why. I prefer it over the others.


It hasn't worked for me for quite a while now (close to a year), even with recent versions from git [0]. My preliminary guess is that it's because Google renders each result with a <div> (a block element) inside an <a> tag. I haven't had the spare time to test that further (and report it) though, so I simply ditched Google and went with duck.com from that point.

[0] https://github.com/tats/w3m
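That guess is easy to test without involving Google at all: generate a throwaway page with one plain anchor and one anchor wrapping a <div> (the file name below is arbitrary), then open it in w3m or lynx and see whether the second link survives:

    # Write a minimal test page reproducing the suspected markup pattern.
    PAGE = """<!DOCTYPE html>
    <html><body>
    <a href="https://example.com/plain">plain link</a>
    <a href="https://example.com/wrapped"><div>anchor wrapping a div</div></a>
    </body></html>"""

    with open("test.html", "w") as f:
        f.write(PAGE)

Then: w3m test.html (or lynx test.html).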


Also works with elinks.


Indeed.


What? w3m gives me the same result: no actual links for what comes up, sometimes links to some videos.


Compatibility with screen readers that blind people use.

Moreover, fuck Google and non-standard practices in general.


Lynx is not a screen reader. (In fact, it's considerably less accessible to blind users than a typical desktop browser -- as a console application, it has no way to provide accessibility data to a screen reader.)

A screen reader is a tool like JAWS or VoiceOver which interacts with desktop software (including web browsers like Chrome or Safari) to provide information about what the user is interacting with.


>as a console application, it has no way to provide accessibility data to a screen reader.)

FFS, Linux has had console screen readers since forever.

And even OpenBSD has one: yasr. It works fine with speech-dispatcher.

Less accessible? Maybe in your limited world, but this is "Hacker" "News".

Well, I guess the newer IT generations are just less aware of TTS systems, which have been around since 1998 or so.


This is a trend I've noticed also: behemoths breaking standards without permission or apology, crushing the canary in the coal mine (w3m/lynx; the emacs eww browser still works, though). As a member of an IT development team creating various web applications, I've found that different approaches are needed based on client needs and workflow. We develop many complex database-backed query/display single-page web apps, usually with zero JS. Where it isn't needed, JS is little more than a security hole begging for trouble, and where no 'live' user experience is required it's just wasted resources (extra payload, and extra client CPU cycles if there's no JS blocker), especially in many work environments.

As a user/engineer, I am annoyed when any site sends me worthless JS to execute unnecessarily, wasting my device's CPU cycles, battery, and my life, and for what? In most cases, nothing I want, desire, or need; it just makes a bigger tool of me than before, and not for the better. That said, web communications and live/simulated data visualizations benefit greatly from JS.

Separation of concerns is what's missing. JS was created to benefit and enhance the user experience; the ne'er-do-wells have hacked it into a tool that is mindlessly used and, often enough, screws the user over without permission or apology...

Interestingly, the creators of what became Google got their start with NSF funding. Good way to finish off by giving everyone the finger, Google; continue to 'do only evil'...


From jwz.org:

Greetings, Lynx users. There is a reason this page doesn't use ALT tags on the images. The reason is that the bozos responsible for both MSIE and Netscape Confusicator 4.0 decided that they would display the ALT tags of images every time you move the mouse over them -- even if the images are loaded, and even if they are not links. The ALT attribute to the IMG tag is supposed to be used instead of the image, not in addition to the image.

This looks absolutely terrible, so I don't use ALT tags any more in self-defense.

If they wanted to implement tooltips, they should have used the TITLE attribute to the A tag. That's in the HTML 1.2 spec and everything.

I had to decide between making this page look good for the vast majority of viewers, or making it be readable by the miniscule minority of you stuck in the 70s. Those of you in the retro contingent lost. Sorry.

from view-source:https://web.archive.org/web/20000304020552/http://www.jwz.or...


Someone got really worked up over text appearing when they moused over an image, and broke accessibility because of it?


jwz has a... strong personality.


And several more in ready reserve!


Yeah, jwz.


Google locked me out of Lynx nearly a year ago. I'm surprised it took this long to affect other people.


I'm Google's public liaison for search. Thank you for bringing this to our attention, and our apologies for the inconvenience caused. This was indeed a bug that our engineers have explored, and it should be fixed now.


I am willing to bet that attorneys who specialize in extracting money from e-commerce sites via fluke accessibility lawsuits will not take notice of this problem, which presents an actual hardship.


Sounds like a bug in lynx.


Solution: use Lynx's alternative HTML parser:

   lynx -tagsoup https://www.google.com/search?q=whatever

The program will now display search result URLs as visible links. (The -tagsoup switch tells Lynx to parse with its forgiving TagSoup mode instead of the stricter SortaSGML default, which apparently chokes on block elements nested inside anchors.)


"Bye bye mainstream, hello ghetto."

Yes, because your deprecated software no longer being supported is TOTALLY the same thing as being relocated to the Warsaw Ghetto.


Strange, with elinks I can still click on the links.

With lynx, not.


This appears to have been fixed since the posting. Links in the search results work again in lynx.


I am able to Google using Links (another excellent text-mode web browser).


duck.com works well with lynx.


Sounds like an ADA lawsuit is in the works...


That's what happens when Google releases their own console killer.


I think you may be confusing console (the terminal emulator, command line interface) with console (video game console like Xbox or Playstation or Stadia).


Google search no longer working with text readers should be an ADA violation. https://www.ada.gov/complaint/


Lynx is not a screen reader.

Screen readers are tools like JAWS or VoiceOver which interact with desktop applications, including desktop web browsers such as Chrome or Safari. Google works fine with these.


And yasr under Linux with lynx/edbrowse works 200,000 times better than JAWS, starting with a logical layout for the blind.



