
Try loading any Meteor-based site (such as the linked story, or http://www.meteor.com/ itself) with cookies disabled on Chrome or Firefox and watch them break.

(Meteor doesn't appear to trap exceptions when using localStorage/sessionStorage, and even something as simple as "if (window.sessionStorage) ..." may throw an exception.)


Mentioning CDs reminds me that the price of physical CDs on Amazon in some cases has dipped below $0. For some CDs, Amazon throws in the MP3 download (with the usual cloud storage) for "free". The funny thing is, the price of the CD+MP3 is sometimes less than the MP3 alone, even with shipping. Check the Velvet Underground's back catalog for some examples.


Now, if you pirated that mp3 track (with negative sale price), would that be "stealing"? :-)


But seriously: if you sell the CD, should you delete your MP3s?


That is an awesome tip.


Also, this will turn on vi-style line editing in programs using GNU readline (things like mysql, psql, sqlite3):

    echo "set editing-mode vi" > ~/.inputrc
For programs linked against libedit instead (as on OS X):

    echo "bind -v" > ~/.editrc


Take a look at bcvi if your preferred local editor is vim: http://sshmenu.sourceforge.net/articles/bcvi/


I remember reading that longboxes were a way to prop up the LP jacket manufacturers, since the materials and process were similar, but... "citation needed".



I'd take that further. Is there any good reason for anyone to run an FTP server (public or otherwise) in 2012?


If you want a shared folder among people you trust, it's still the simplest solution. It's very low-level, but it works.

Yes, you can buy a cloud offering, but physical disk is still way cheaper than "cloud disk". You don't have all the cloud features, but on the other hand, the data are 100% yours, on a server that you control.


I don't buy that it's "the simplest". Just about every major Linux distro ships w/ SFTP enabled out-of-the-box. How is installing an FTP server easier than just using the built-in SFTP server?
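
For example, transferring a file needs nothing beyond the stock OpenSSH tools (host and paths here are made up):

    # interactive session against the built-in SFTP service
    sftp user@example.com

    # or a one-shot copy over the same SSH channel
    scp report.tar.gz user@example.com:/srv/shared/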

I've been trying to actively discourage the use of FTP for the last 10+ years. It's not an option because it passes passwords in the clear. Protocols that pass cleartext authentication should just be off the table today.
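
If anyone doubts this, it's easy to see for yourself, assuming tcpdump is handy (interface name is a guess):

    # FTP sends USER/PASS as plain ASCII on the control channel
    sudo tcpdump -i eth0 -A 'tcp port 21' | grep -E 'USER|PASS'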


I find that positive attitudes towards FTP often seem to correlate with positive feelings towards Telnet, and both seem to correlate with "not really that comfortable with *nix".

The number of times I have had to correct tech-ish friends when they talk about "telnet-ing" into servers is frightening. All of them were relatively technology literate but either didn't do it for a living or got into doing it for a living "by accident". Think your physics-major buddy who has only ever used Windows on any computer that he owns.

You put people like that in the position to make a call, and I assure you you'll have an FTP server running somewhere in 5 minutes flat.


I believe that random Windows authoring software is vaguely more likely to have FTP built in than SFTP. (But finding good SFTP software for Windows, e.g. WinSCP, is not hard, so this is not much of an argument.)

It's also the case that SFTP requires giving someone an actual user account on your UNIX box, and preferably knowing enough about how to set SSH up to restrict them to SFTP access only. If your server is much more valuable than the data and you don't trust yourself not to get SSH configuration subtly wrong, it's not terribly unreasonable to prefer installing an FTP server to adding a local user and giving someone else a password to it.
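
For reference, the OpenSSH incantation is short but full of sharp edges; something like this in sshd_config (group name and path are placeholders):

    # confine members of 'sftponly' to SFTP inside a chroot
    Match Group sftponly
        ChrootDirectory /srv/ftp/%u
        ForceCommand internal-sftp
        AllowTcpForwarding no

Note that OpenSSH refuses to chroot into a directory that isn't root-owned and unwritable by the user, which is exactly the kind of subtlety that's easy to get wrong.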


Oh sure, SFTP is better; I thought we were comparing FTP/SFTP to Dropbox and the like.


I usually have an internal TFTP server set up and lying around somewhere. A lot of embedded devices provide simple support for updating their firmware over TFTP.
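
For example, with a tftp-hpa-style client (address and image name are placeholders):

    # push a firmware image to a device listening on TFTP
    tftp 192.168.1.1 -c put firmware.bin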


Engineers and scientists have large datasets to share. Gigabytes. Terabytes. FTP can handle it.


How do you dir on http?


WebDAV [0]

If Microsoft had ever built a decent client into Windows Explorer like Mac OS X has (rather than the crufty, half-baked one they ran with), it could have been great. As it turns out, WebDAV is only really easy to access through FTP-like programs (separate from Windows Explorer).

Having said that, we had pretty good experiences with WebDrive [1] allowing us to mount WebDAV directories in Windows. Also, Gnome does a pretty good job on Linux with GVFS [2].
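
To answer the "how do you dir" question concretely: PROPFIND with a Depth of 1 is WebDAV's directory listing, and you can poke at it with curl (URL and credentials are made up):

    # returns an XML "multistatus" listing of the collection
    curl -u alice -X PROPFIND -H "Depth: 1" https://example.com/dav/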

[0] https://en.wikipedia.org/wiki/Webdav

[1] http://www.webdrive.com/products/webdrive/index.html

[2] https://en.wikipedia.org/wiki/GVFS


If this is your problem, FTP is not your answer.


Which makes me wonder why approximately everyone has a parser for the common FTP directory listing formats, but I'm not sure I've seen any HTTP client that parses Apache mod_autoindex output or the other big servers' equivalents.


This made me smile, but I think Perl is more like Robert Pollard/Guided by Voices. Both seem like kind of a mess to the uninitiated (and even the initiated). Both hit their stride in the mid-90s. And both are still around, remain influential, and are just as productive today, even if they have a lower profile.


That's the thing. Once you have "fixed" PHP, you're left with a brand-new language that few people know, that won't run your existing code, isn't included in Linux distributions or hosting packages, has no books about it, and has no answers on StackOverflow.

Of course these things will all happen in time if the fixed version takes hold. But if you're willing to throw out PHP and start anew, you can have all of the above right now by learning Node, Rails, Django, etc.

I think the comment above about needing a Jeremy Ashkenas (that is, a CoffeeScript for PHP) points to the only realistic way for a "fixed" PHP to succeed. There needs to be a smooth transition, and a FixedPHP-to-PHP5 compiler could provide it.


I think some organizations see this kind of implementation leakage as a feature, not a bug.

I remember when http://microsoft.com/ began doing external redirects to "default.asp" circa 1997. If you were a "webmaster" (do these exist anymore?), this was a dog whistle. They were not using static .html (or .htm) but not any of the common dynamic methods like .cgi or .shtml either. And using "default" rather than "index" indicated a break from NCSA/Apache convention. They were using a different web server. Those 11 extra characters said a lot.


While seeing "index.html" (or "index.ANYTHING") makes me cringe, I have some sympathy for the developers who do this.

Let's say I have an old-school, all-static site with pages at http://example.com/x/index.html and http://example.com/x/about.html. I would like to make a link to the "index" page from the "about" page. What are my choices?

<a href="/x/"> will work, but will break when someone decides to move "/x" to "/y".

<a href="."> will work from the server, which does an internal redirect, but not on the static version on my local drive I'm going to demo to my boss. (I also have a hunch a significant number of developers aren't aware of "." and don't know this is an option.)

So we end up with <a href="index.html"> for better or worse.


  <a href="/x/"> will work, but will break when someone decides to move "/x" to "/y".
If /x redirected to /y, that wouldn't be a problem.
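
And the redirect itself is a one-liner in most servers; e.g. with Apache's mod_alias (assuming Apache here, with the paths from the example above):

    Redirect permanent /x /y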


If you're having problems like this, your web development environment is total garbage and should be fixed.

99% of the time, the reason for index.html is that the developer was viewing static files in their browser because they didn't have a proper server or test environment. This is inexcusable in 2012.
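
And a proper-enough local server is a one-liner these days (Python 2 here, since it's what most people have lying around):

    # serve the current directory over HTTP instead of file://
    python -m SimpleHTTPServer 8000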

