Why browse the Web in Emacs? (2008) (sachachua.com)
152 points by pmoriarty on Nov 27, 2014 | 85 comments



This covers many of the reasons I prefer web browsing without JS enabled: speed, lower memory usage, and better safety. Before I started doing so, I thought that disabling JS would make the internet unusable, but the vast majority of pages display their content just fine without it. In addition, it is really easy to add domains to the JS whitelist in Chrome without needing to go into the options menu every time (less so in FF). AND, Chrome uses separate whitelists for normal and private browsing, and will clear the private browsing whitelist when you close the window! There are some periodic annoyances, but I wouldn't think of going back to default-enabled.

Browsing the web in Emacs is a bit too hardcore for me though :P


"I thought that disabling JS would make the internet unusable, but a vast majority of pages display their content just fine without it"

I also browse the web with Javascript disabled (using Noscript), but have formed the opposite impression: more and more sites won't display their content (or display their content incorrectly) if Javascript is disabled. Most of the sites I'm looking at are not "web apps" or SPAs (single page apps), they are sites with text articles or links so they have no real reason to break without Javascript. (It's always annoying when you attempt to click a link only to find it won't work unless Javascript is enabled.)

The rise of Javascript frameworks is only fuelling this trend of Javascript-dependent web sites. I've said this before, but Web developers pick the tools that make their lives easier (as you'd expect), but that doesn't always mean that users get the best experience.


> It's always annoying when you attempt to click a link only to find it won't work unless Javascript is enabled.

Ah, the number of times I've asked colleagues to stop doing this! If you want a JS-powered link/button/whatever on a page, insert it using JS; then you're guaranteed that it will only show up for those who can use it.

Likewise, all togglable content should begin visible and be selectively hidden by JS during page load; that way, it only gets hidden for the users who are capable of showing it again.

Also, although this is rarer, all work should be done in small, isolated event handlers. That way, when some unexpected situation arises (eg. the user is blocking your chosen spyware platform), that particular handler dies, but all of the rest keep working (eg. the button handlers, the slideshows, etc.).
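
To make that concrete, here's a rough sketch in plain DOM JavaScript (the element IDs and the "analytics" object are made up for illustration):

    document.addEventListener('DOMContentLoaded', function () {
      // The JS-powered button is created from JS, so it only ever appears
      // for users who can actually use it.
      var shareButton = document.createElement('button');
      shareButton.textContent = 'Share';
      document.getElementById('toolbar').appendChild(shareButton);

      // Togglable content starts visible in the HTML and is hidden here,
      // so it is only hidden for users who can show it again.
      var details = document.getElementById('extra-details');
      details.style.display = 'none';

      // Small, isolated handlers: if one throws (say, `analytics` is
      // undefined because the tracker is blocked), the other keeps working.
      shareButton.addEventListener('click', function () {
        analytics.track('share-clicked'); // only this handler dies if blocked
      });
      document.getElementById('show-details').addEventListener('click', function () {
        details.style.display = '';
      });
    });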


The truth is almost nobody cares about JS being disabled anymore. Even if today you think that's an exaggeration (I don't think so anymore), in a few more years it won't be one even for you. Just 2 years ago it was like "What if a customer doesn't have JS enabled? We don't want to lose him just over that! Let's support both versions."; today it is already more like "No JS? Weird, lol. Well, it's his own problem anyway, no reason to spend time&money on that in 2014."

When I was for some reason forced to use lynx, which I don't normally do, I was simply amazed by how smooth browsing was: it doesn't feel like the usual "internet" anymore, more like navigating your project in a text editor on the local filesystem. But unfortunately, as soon as you go further than browsing the ArchWiki, it becomes almost unusable due to the scarcity of static (in the sense of "no JS") sites.


> no reason to spend time&money on that in 2014

If you're considering whether to go back and change it, don't bother; you've failed. The point is to do it that way the first time, instinctively.

The time&money spent writing element-inserting JS corresponds to time&money saved by not adding a bunch of links/buttons/etc. to HTML templates.

If you're tied into a framework which makes the right way more difficult, then file a bug with the framework devs ;)


http://stackoverflow.com/questions/764624/how-can-you-use-ja...

W3m-js is/was a thing. Also, w3m can apparently draw images inline via xterm, though I wasn't able to get this working in the ~10 minutes I tried on OS X Yosemite with X11 and xterm, using the w3m from Homebrew. uzbl is OK too, if you just want a minimalistic WebKit browser with vim keybindings.


If you want vim keybindings in your browser you can just use Vimperator or Pentadactyl; that's not the point here. The web without pictures/JS on today's connections is almost as smooth as navigating your local file system, while "normal" browsing isn't, and you (well, I) don't even notice that until you eventually try JS-free browsing from lynx or something like that. That is, it would be, if not for the fact that the JS-free web is a "Red Book animal" already.


It probably depends on the exact type of sites you normally visit; I default to JS off but the majority of sites I visit are "pre-Web 2.0" informational types which are perfectly readable without JS. As you notice, it's mostly the newer sites which are problematic.

> Web developers pick the tools that make their lives easier (as you'd expect), but that doesn't always mean that users get the best experience.

...which I think is both selfish and somewhat ironic since web developers are almost certainly users too.


More and more websites now reimplement loading the page in JavaScript. Just thinking about it makes me angry.


I really dislike how a lot of multi-page navigation (i.e. getting results 51-100) is now commonly done with AJAX. It's not really faster than loading a new page, but it breaks the back button. "Oh you clicked on the 287th link and hit back? Here are results 1-50." So annoying.


The pushState API has been available for years. This isn't an inherent limitation of single-page apps. Even where the pushState API isn't available (older versions of IE), the anchor tag can be used to maintain browser history so the back button doesn't break. https://developer.mozilla.org/en-US/docs/Web/Guide/API/DOM/M...
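
For instance, a rough sketch (the "/results" endpoint and element IDs are placeholders; plain XHR for older-browser compatibility):

    // Fetch one page of results and swap it into the list.
    function loadResults(page) {
      var xhr = new XMLHttpRequest();
      xhr.open('GET', '/results?page=' + page);   // placeholder endpoint
      xhr.onload = function () {
        document.getElementById('results').innerHTML = xhr.responseText;
      };
      xhr.send();
    }

    // Navigating forward records the page in browser history...
    function goToPage(page) {
      loadResults(page);
      history.pushState({ page: page }, '', '?page=' + page);
    }

    // ...so Back/Forward can restore it instead of dumping you at results 1-50.
    window.addEventListener('popstate', function (event) {
      var page = (event.state && event.state.page) || 1;
      loadResults(page);
    });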


But here's the deal: now the web developer has to manually do something that used to be intrinsic to the way the web worked. Really feels like a step backwards to me.


What's interesting is that in the past, all the Real web devs knew that frames suck because they break navigation. Anyone who showed their framed site in a community of cool web devs would be laughed at.

Now it's cool to break the back button (and links and everything) and make me watch spinners. It's as if the text and thumbnails they're serving today somehow take two orders of magnitude more bandwidth to load...


Devs and designers are now (more so than ever) different sets of people, and designers are now calling the shots in most places.

I'm a web dev, I know that ajax loading sucks, that without enough budget to do it right (and we never have enough budget) UX ends up broken, messing with scrolling sucks, page-based is best, don't make an app if it should actually be a site, etc.

Do I get any say? Not really. What do I end up building? Magic-scrolling ajax-loading apps, and if I have any budget left at the end of the job it goes on tweaking typography (i.e. designer-visible stuff) rather than fixing back buttons.

Which is to say, devs still know that this stuff sucks, but we're no longer in charge of the relevant decisions.


Please don't make this about developers versus designers. Good web designers will not make those kinds of mistakes. Bad developers might.


DISCOURSE I AM LOOKING AT YOU. In a real forum I'd open pages 1-n in tabs and read them on PT. On discourse I have to open the same page N times and scroll to a spread of points down the page.


Why does that make you angry? Sometimes, reimplementing how loading works can make your website more performant, for example. If JavaScript gives developers the power to make their websites better, why shouldn't they take advantage of that power?


It makes me angry because it's unnecessary, adds complexity, and requires me to allow whatever website to execute code on my computer. I don't want that code, I just want to read the page.

It makes me angry because it is a gross abuse of the web platform. The web is not for applications, it's for documents. I would be just as angry if I received a Word document that contained scripts that it needed to run to display the document (actually I would be angry if I received any Word document).

I don't want 7 tracking scripts, 4 advertisement scripts, jQuery, Angular.js, and whatever other junk modern webdevelopers include on their "websites". I just want the content, and that's what HTML is for. You don't need to include any JavaScript programs to show me formatted text, which is what I visited the page for. I didn't visit to see fancy scrolling effects, to have my every interaction with the document tracked, or to see nothing at all if I don't run foreign programs. I just want the content. Just give me a [motherfucking website](http://motherfuckingwebsite.com/).


> It makes me angry because it is a gross abuse of the web platform. The web is not for applications, it's for documents. I would be just as angry if I received a Word document that contained scripts that it needed to run to display the document (actually I would be angry if I received any Word document).

That ship has already sailed, Pam. There are two webs now: the document-centric, platform-agnostic, user-controlled presentation of content as envisioned by Tim Berners-Lee at CERN, and the over-specified development platform that is HTML5, that ultimately originated in Win98's Active Desktop "push technology".

The latter has gained world-wide adoption because developers and industries couldn't agree to build a standard software repository that was distributed and allowed independent re-implementations of the running engine. Java tried to be that standard, but didn't have a convenient way to deliver software to end users. App stores have lately become a close second, but being walled gardens they'll never fill that role in full.

It's still useful to think of "the two webs" as different purposes for the HTML5 technology; in particular, it's a good question to ask yourself before you start a new website which of the two models you want to support. Requesting that all websites be coded assuming the "web of documents" view is not realistic anymore, though.


I might agree with you for the minority of things that are literally documents. But that's not what most of the pages I visit are. Twitter is a stream of little posts and real-time notifications. My webmail client isn't a document, it's a browser for documents. Lots of sites let me add my own content, and it's a painful kludge to re-render the static HTML for every page every time anyone makes a change. Let that data get pulled out of a database and sent to the application running in my browser. That's a much better fit for what's actually happening with the content.


I have nothing against applications. I just don't want applications inside my document browsing application.

Some uses of client-side scripting on the web are useful and necessary. When that is the case, make a real application instead of abusing a platform for publishing documents. Web app developers are just making things worse for everyone.

When client-side scripting is not necessary, web developers use it anyway for some reason. Blogger is a great example of this: a blog post is definitely a document, it doesn't need any client-side scripting, yet Blogger blogs just show an empty page if you visit them with JS disabled. Why would they do that?! It's almost as if they want Blogger to be as inaccessible as possible.

>My webmail client isn't a document, it's a browser for documents.

That's exactly why it shouldn't be on the web! Abusing the web to make it into an application platform is like forcing a square into a circle-shaped hole. The web doesn't have to be everything to everyone, just let it be a platform for documents.


Not every page has to be a web app - the Blogger thing annoys me too. But if it's an app you're making, the web browser is the biggest deployed platform and therefore an appealing target. http://xkcd.com/1367/


> Twitter is a stream of little posts and real-time notifications.

The stream part is easy enough, with pagination. I have no objection to hiding the pages with JavaScript, making any page 'bottomless.' Real-time notification, obviously, can't work without some sort of execution, but one could simply show notifications on the next page viewed (or calculate them when sending that page, or whatever).

> My webmail client isn't a document, it's a browser for documents.

Good thing that you're using a browser…

I'm not opposed to using JavaScript to speed things up or enhance them, but using it to deliberately break the Web is just wrong.
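
For the 'bottomless' stream specifically, here is a sketch of doing it that way (the IDs and the fragment-returning endpoint are assumptions): keep a plain "Next" link in the HTML and let JS merely intercept it.

    // Assumes the page already has <a id="next-page" href="?page=2">Next</a>
    // and a container <div id="stream">, so pagination works with JS off.
    document.getElementById('next-page').addEventListener('click', function (e) {
      e.preventDefault();            // JS path: append inline instead of navigating
      var link = this;
      var xhr = new XMLHttpRequest();
      xhr.open('GET', link.href);
      xhr.onload = function () {
        // Assumption: the endpoint returns an HTML fragment of the next items.
        document.getElementById('stream').insertAdjacentHTML('beforeend', xhr.responseText);
      };
      xhr.send();
    });

With JS off, the link still navigates to page 2 as usual.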


If they haven't benchmarked their site on my computer, they don't know whether it's making their websites more performant. But years of experience with JS disabled by default (and only enabled when necessary) makes it very clear to me that using JS, at least on my system, is almost never a way to make any site more performant. On the contrary, it increases load times, cpu usage, and memory usage. It might also force me to use bloated crappy browsers like Firefox or Chrome where I could otherwise use something like w3m or links, which will display the page in a fraction of the RAM, much faster.

I wouldn't be disabling JS if it weren't such a pain in the ass. Or maybe I still would, just for security.


Plenty of sites do use client-side instrumentation, and so they do indeed benchmark their site on your machine, and then make decisions based on the results of those benchmarks (likely aggregated with many other benchmarks from many other users). Yes, sometimes JavaScript is used poorly in ways that make sites bloated and slow, but plenty of sites use it properly in ways that only make their sites better (collecting the appropriate data to confirm that what they are doing is making their site better).


People talk about these sites every once in a while. But my experience is that enabling JS makes things worse across the board. If there are sites that would perform better with it, they are very much in the minority. And in my experience, "old web" sites without JS loaders and such are almost always much faster & easier on the resources.

Even if some sites deploy instrumentation, it doesn't mean they're necessarily lifting a finger to make it faster for me.


I do not find adding another vector for problems, including malicious ones in the face of persistently broken sandboxes, to be "better".

If it's a "page" I am going to "read", I don't feel the need to enable arbitrary execution of whatever gets stuffed down the pipe, beyond rendering the more restrictively defined HTML and some hopefully well-debugged -- although even with them problems continue to crop up -- image formats.

It's like a biological viral infection. Hygiene can keep it out, but once it's inside, you may have a very persistent and possibly quite debilitating problem.

I don't want to be disconnected from the world. Neither, however, do I want to engage in, um... "unprotected casual browsing".


NoScript and Ghostery (for the less hardcore) do something similar for Firefox. Don't browse without them! It's like a condom for the internet.

I cringe every time I load up a browser on the iPad and wonder what information sites are gathering.


HTTPSwitchboard for Chrome-like browsers, and Policeman for Firefox. Those 2 extensions replace NoScript, RequestPolicy, AdBlock, you name it.


HTTPSwitchboard is so awesome for this. https://github.com/gorhill/httpswitchboard

P.S. The next version might be a local proxy (well, usually local), which will improve browser performance, cross-compatibility with other browsers and across OSes, and all around fit better with my view, imho, that a browser's only job is to render web content.


Does anyone remember Gopher:// ?

My experience was that content was a first-class citizen of the gopher-net, because we didn't have "hyper media".


Damn kids and their "hyper" media. Why couldn't they all just be satisfied with good old fashioned calm media.


In the text-based browser I used back then, Gopher appeared nice and neat, with easy-to-read lists, whereas whatever web browser I had presented stuff in a sort of jumbled way. Gopher seemed like the better system and I wasn't really impressed with the www stuff I'd load.


Wow, I had never heard of it, but that looks pretty cool. Gopherpedia is a complete interface to Wikipedia which I'll probably make use of.

Anyone else running Lynx? I don't use it for normal browsing (FF+NoScript like many others here) but I do fire it up for some of the purposes listed in the original article - works well with my workflow, no distractions, and of course it's really safe. Incidentally, it still fully supports Gopher.

On Mac OS, I highly recommend Lynxlet http://habilis.net/lynxlet/


I used Lynx back in my glorious startup days (12-15 years ago :). It was a great tool to see what the various crawlers (which were AltaVista, Lycos, Yahoo and HotBot at the time) were actually "seeing" on the pages of my site. It helped solve issues with spacing, alt texts...


I'm a big fan of Lynx. (Haven't changed my profile in a while.) Though it is sad that it breaks on most of the modern web.

I've often wished someone would buy the .text TLD and have strict bans on images and restrictions on scripts.

It'd probably just quickly and abysmally fail, but hey, 140 character limits caught on, who knows.


I was working at a university in the UK and got to see Mosaic in '93 (I think; it was a very early beta). I said to the main sysadmin, "It's very pretty, but it will never replace gopher". He still reminds me of that occasionally. Even then it wasn't as fast or efficient to get data/documents out of, and I stand by that judgement.


> You’re safe from browser exploits.

I'm skeptical of that. Of course no one is actually targeting whatever vulnerabilities there are, and getting rid of JS helps a lot. But is all the code sandboxed or anything?


Given that eww (the latest Emacs web browser) is written in elisp, it's safe from buffer overflows and such, which are the most common form of vulnerability in software written in C. The actual parsing code uses libxml, which isn't elisp, but is one of the most widely-deployed parsers around.

Obviously there's no such thing as perfect security, but I would guess it's way more difficult to exploit than your conventional browsers.


Did you mean the browser code itself? We are actually trying to do that in gngr[1].

(1) https://gngr.info


I use w3m from Emacs all the time. It is not my primary browser, but I do like it for reading documentation while hacking. It is ridiculously fast and uses very little memory when compared to Firefox.

It is very convenient for copying snippets of code and pasting them in whatever file you are editing (or a REPL), too.

The same, to a lesser degree, goes for Dillo. It is amazing how Dillo can have dozens of tabs open and never use more than 50-100 MB of RAM... :)


Have you tried eww in Emacs 24.4? I already like it better than w3m.


It's better once you keep it from messing with your text background color. I think this does it:

    ;; shr-color-check takes FG and BG arguments; this before-advice
    ;; overrides BG with the default face's background before shr uses it.
    (defadvice shr-color-check (before unfuck compile activate)
      "Don't let stupid shr change background colors."
      (setq bg (face-background 'default)))


I use Emacs to read my email and use w3m to format HTML emails (which, sadly, most are these days, and more and more don't include a plain text alternative). In particular it does a good job with tables. I have not tried eww yet.


I agree. Haven't had a chance to try eww yet.

One step up from Dillo is Netsurf, which seems to handle more CSS than Dillo but is still pretty snappy.


I think what would be interesting is not bringing the web to Emacs but rather bringing Emacs to the web: a web browser that exposed everything right down to the rendering primitives in a decent (well, better than elisp or JavaScript) language, allowing users to customize how they interact with the web in the way that Emacs users customize how they interact with text.



Firefox is already pretty heavily customisable through XUL and, of course, is open source so you can modify every aspect of it...


I guess XUL is the stumbling block for me there. And of course, the browser itself can be modified, for which yay! But really I'm looking for a small set of primitives exposed in a high-level language that can be combined to create web browsers specific to certain tasks; perhaps something like this could be retrofitted onto Firefox, in the way that Emacs was a macro-extension of an existing editor.


Vimperator for Firefox, then.


Customizing something and making it yours is a powerful thing. I think that ultimately that's the reason. One reason is that you make things uniquely fit you. But I think in more cases than we are aware of or want to admit, the emotional reasons are the real reasons. Sometimes it's nice to have something you made, not something that was mass-produced for everyone.

"Resource hogging" always seems like such a crazy reasoning. It's more aesthetic than practicalities. Is your memory really so scarce?


There's a line between "use" and "hogging"...

My computer is noticeably slower when Firefox is running. Quite often, Firefox will grab the audio device and not give it back, silencing all other programs. Sometimes the fan will just spin up and the computer will sound like a hoover until I quit it. Over time it just uses up more and more memory, rarely giving it back. It's not unusual to look at the task list and see Firefox taking up 400+MBytes, even though you're doing very little. (Right now, it's taking up 420MBytes. All I have open is the HN page with this text box I'm typing in.) I also think it does something rude with the GPU, too - when I'm not running Firefox, the auto-hide dock pops up perfectly smoothly. When Firefox is running, though, it stutters as it pops up.

The X11 version is terrible to use over a network too.

(I had a whole different set of complaints about Safari.)


Firefox is the only application that regularly crashes (due to running out of memory) on my system. I could increase memory limits and it'll end up causing so much swapping I wish it had just crashed.


I don't know if Emacs is so secure. Years ago I wrote a virus that would propagate between files in Emacs lisp.


Years ago, huh? Was it before Emacs added support for whitelisting file-local variables? These days you can't just place elisp in arbitrary files and have it get executed.

(Of course, propagating to other elisp files is both easy and completely orthogonal to Emacs, since you could do it in basically any language.)


The same problem happened with Microsoft Word, which, in my mind, is a product a lot like Emacs. Emacs is extensible in a special dialect of Lisp, Word in a special dialect of Visual Basic. Both have a mind-boggling array of features, etc.

The difference was that the Word viruses got a lot more fame because more people used Word.


Could it propagate from computer A with an infected Emacs to computer B with a clean Emacs, simply by opening a file saved by A on B? If so, I would be pretty surprised and impressed. If not, I don't think I would consider it a virus.


If it can propagate from machine to machine it would be considered a worm.

If it only infects files locally, it's a virus.


Fair enough. I always considered computer viruses to be self-replicating, which I thought meant among computers, but Wikipedia supports your definition. Still, is this really self-replicating at all? It sounds more like an Emacs extension that simply "colors" files in some unique way.


   M-x eww http://www.lispworks.com/documentation/HyperSpec/Front/index_tx.htm


The example is well chosen because the Common Lisp Hyperspec is actually nicer to view in a text browser than in a graphical one ;)


All she had to talk about was scripting. It is super convenient to be coding in your language and have another window open beside your code that automatically does your searches on a certain website based on what you have open in the other window.


This is true! I've used keyboard macros with webpages and textareas to save myself lots of time. =)


A great reason not to do this, if you are building anything that will ever be touched by end users (or anything that is an application in the web browser), is that it isolates you from how real users consume internet content.


From the other comments, I feel more alone in this camp than I thought. I've taken the stance of not even considering users who browse sites I develop with JavaScript off. It's such an integral part of the web by now, and without it, interactivity doesn't really exist.

Is it really so prevalent to browse without JS (I doubt the casual user would do that, since they're probably not even aware that JS exists)?

The only reason I'd care about displaying content without JS, is specifically for crawlers.


> ...without it, interactivity doesn't really exist.

This is what it comes down to for me. I started browsing with JavaScript off after both my cores were pegged at 100% for forty-five seconds to load a page with a 500-word movie review. The page wasn't nearly as interactive without the JavaScript, but I didn't care. I didn't want to interact. I just wanted to read the text.

If your site is an actual app that I'm going to interact with, I'll turn on Javascript. That makes sense. However, if it's something that I'm just going to read, then Javascript stays off.


I browse with JavaScript turned off by default, and have ever since I saw a pretty cool DEFCON (I think) presentation on the number of gaping holes it makes possible. I'll sometimes enable it for a site, but I'll also sometimes just not use a site.

I really don't like the fact that this is getting more and more difficult to do. The Web is not about allowing random people to execute code on my computer; it's about reading documents and sharing information.


It's not prevalent at all. The HN echo chamber makes it seem like there are many people who do this, but the number of people in the real world is basically zero.


The problem with JS is not interactivity but the aggressively annoying advertisements and data grabbing which usually come along with JS.

My solution is to use different browsers. My default browser has JS turned off. If I encounter a possibly interesting site which requires JS then I use another browser for that. This solution keeps my default browser clean without need to configure any JS blocker.


disclaimer: not a web dev, don't really have a dog in this race.

The reaction here strikes me quite a bit - talk about supporting an old web browser which has a small market share, and the reaction will often be that it's a waste of time. No-JS users are a minuscule fraction of a typical site's audience, so why would those sites bother to support them any more than they would bother to support an old browser?


I believe there's a difference between not supporting an old, non-standard-compliant browser, where you must have the same functionalities but the platform is different, and not supporting users who have turned off javascript, because in this case the functionalities themselves are different. You almost have 2 different "products" in this case.

Also, angry users are always the most vocal. The huge majority that doesn't bother to turn off JavaScript (or doesn't even know how to) will not complain on HN.


I guess - from my point of view it would be lumped under the wider group heading of 'how much should I worry about going to significant effort supporting stuff that few of my customers will care about?'


Off-topic: What does the [noobtor] mark on the dead comment below mean?


Guess: a new ("noob") account registered/used via Tor?


Just yesterday, I tried it the other way around with a terminal emulator in the browser using FireSSH [1]. However, I’m not sure whether it is particularly safe to have an SSH connection to localhost in Firefox. It also needs some key rebinding to be of any use, which I haven’t done yet.

[1]: http://firessh.net/


https://chrome.google.com/webstore/detail/secure-shell/pnhec...

Also crosh on my Chromebook, which uses the same URL, just with nassh replaced by crosh.


Yeah right, I have no idea why I deserve a downvote.


LOL, are you fucking kidding me…

One of the points of the blog post is that web browsing in Emacs allows you to integrate browser work and Emacs work, which for me is the most compelling one. But you can possibly achieve something similar by integrating Emacs into Firefox via SSH and using, for example, .js [1] for automation work. That way you wouldn’t have the disadvantage of basically losing most formatting and layout, which are actually quite helpful if you stay away from the loud and annoying websites. That’s why I thought it was worth mentioning. I also use Tree Style Tabs, and it could be helpful for keeping work-related websites and Emacs buffers organized that way.

[1]: https://github.com/defunkt/dotjs


It is a dream of mine to do exactly this, but even eww.el just wasn't any good to actually use. I can do everything else in Emacs, except that, but that's a pretty big something.


elinks is quite useful on a Raspberry Pi


No eww.el mention? It comes w/24.4


This article was written in 2008.


Quick answer: because you are a masochist


Is terrible, this idea.


Can you elaborate please?



