I'm referring to the "nothing is perfect" argument. It gets used to muddy the waters so that one can imply it's all a wash. "Nothing is perfect" should be taken as a given on almost any topic, so it's useless except as a dishonest rhetorical device.
I don't like to compare programming languages and human languages, but for different reasons. (Hint: because they have so little in common.)
I always thought Tor hidden services are a great way to host your own cloud. Not only can you host on your own hardware without messing with DNS, ISP reps, and router settings, but you get privacy features for free.
IMO, local hosting is the primary use case the Tor Project should be advertising.
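For anyone who hasn't tried it, the Tor side of such a setup is roughly two lines of configuration in `torrc`: a hidden-service directory and a port mapping. The directory path and ports below are placeholders for illustration; the service itself is whatever you run on localhost.

```
# torrc excerpt (directory and ports are placeholders).
# Tor writes the generated .onion hostname and keys into this directory:
HiddenServiceDir /var/lib/tor/my_cloud/
# Map port 80 of the .onion address to a web service on localhost:8080:
HiddenServicePort 80 127.0.0.1:8080
```

After a restart, Tor drops the onion address into the `hostname` file in that directory; since clients reach it over the Tor network, no DNS record or router port-forwarding is involved.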
Hidden services are not a primary concern of the Tor Project, or a primary use of the Tor network. They are a novel way of using a system designed for anonymous and censorship-resistant communications.
Doesn't solve the problem. The essence of the problem is that your data can disappear under circumstances you can't control: the server died, the host has problems, the host is going away entirely, they decide they don't like you anymore. To fix all permutations of this problem, you have to replicate: host the data in multiple physical places.
It introduces another problem, managing the replication and testing to make sure it's working, but if the data you're trying to protect is important enough, you devote the resources.
Looked at in this light, hosting your own physical box is less useful than using two separate online services. You still have the management and testing overhead either way, but with your own physical box you also have to maintain and manage that hardware. That only makes sense if you have existing infrastructure you can plug into, and sometimes it doesn't even make sense then.
Or vice versa... all the solutions you mentioned are liabilities if given enough thought. The cost of hosting in multiple places, coupled with the very real possibility of other circumstances outside your control, is enough to make hosting your own stuff a viable option. The cognitive overhead of learning the API, writing tools to automate processes, testing, and so on is not free in either time or money.
I don't think a clear-cut winner emerges in all cases. Cloud vs. DIY is situationally dependent and requires a great deal of deliberation. I think the pendulum will swing between cloud and DIY for a long time, with proponents of each making awesome cases for either one.
When it comes to risk, honestly, most people and companies are their own worst enemies, much more inclined to hose themselves than, say, Linode or Amazon would be.
The cost of using multiple solutions should be negligible compared to the cost of losing the information. If it isn't, then store it wherever, and if you lose it, so what.
A competent developer should not have to spend a lot of time learning an API, automating the process, or testing the automation. If you don't have a competent developer, you should just use COTS. If you have a developer who complains about the time these tasks take, you don't have a developer, you have a technical handyman. In which case you can't engineer anything and should just use COTS, because that's all he'll realistically be able to handle anyway. If the developer is you and you can't afford to waste your time engineering your infrastructure, then yours is not a technology company and, again, you should just use COTS.
Your operating profits should support the cost of your engineering, including the salary of a competent developer, which will typically dwarf your hosting costs. If they don't, then you don't have a real business and need to spend more time figuring out how you're going to make money and less time on the technology.
If you're storing and using big data, and currently using a cloud provider, then your roadmap should include a plan for eventual self-hosting, as that's one of the few areas where self-hosting still makes sense: costs can diverge very quickly. It's not big data unless building your own Backblaze storage pod is a viable option.
For all other applications, self-hosting can quickly become a boondoggle unless you have competent systems administration, the cost of which will again dwarf your hosting costs. If you do not have competent systems administration, and you own and manage systems, then your business is a disaster waiting to happen. If the hard drives fill up on your home-built server, you will have downtime until you can figure it out and fix it. You will not have at your disposal your hosting company's skilled customer support team, which handles the common cases that trip people up all the time.
Any time you touch the machine to do anything other than deployments, you run the risk of breaking something important. If your development is not competent either, then you run the risk of having your hygienic development process dirtied by, say, someone working directly on the production server. The problems caused by this are insidious and can take up time and attention that is better spent pushing your business forward.
I worked for a company that had hundreds of boxes held hostage after a colo attempted to dramatically raise prices mid-contract. The company eventually sent a bunch of employees with vans over a weekend to the city the colo was in to physically move the machines and reconstitute ~20 racks in a different colo.
Getting serious power / AC / battery switch-over / TB/s internet is a lot of work, hence datacenters. If you're under thousands of boxes, it's not cost-effective.
Unless you also have your own physical network infrastructure connecting all the things it needs to connect to, and also control all those endpoints, having your own physical server doesn't solve the problem; it just pushes it around a bit.
This is not about a company closing down. It's about a company deciding to close its services to a user, arbitrarily, at its own discretion. Files you upload can be permanently deleted, at any time, at the whim of the service provider or any of their employees. You have no rights and no appeal.
The difference is that a user doesn't feel like just using his computer normally gives him any backup. But if his folder is inside Dropbox, he might be lulled into a sense of comfort since he thinks of Dropbox as a kind of backup[1]. So if he uses Dropbox instead of just using his plain old computer (hard drive), then he might start to get more careless than when he didn't feel that he had a safety net.
> We called ourselves Autistici, instead, for the passion we have for understanding the technical tools and for exposing the politics implicit in the digital world; even if software is created in a virtual world it doesn't mean it doesn't have a political impact on reality.
> Starting from the technical tools we use we came to develop a clear array of political stances, crucial to both cyber and material world and lives: privacy, anonymity, free sharing of knowledge just to mention a few.
...
> Autism with invention generates sharing
So uh, yeah, that actually seems to kind of be the case. Inasmuch as "passion for understanding technical tools" is about autism.
> and the article linked from that comment is why I think Go is poised to become dominant over other new languages like Rust.
Is there some law that says that in any top comment on HN that mentions either Rust or Go, a mention of the other is bound to happen in the same sentence? Like some kind of magnetic, rhetorical force. Heh.
If either Rust or Go succeeds, or both do, it will be for widely different reasons. Edit: to whoever is adamant about downvoting my questions for today: you forgot the one previous to this one.
I think your comment had enough respect/content not to warrant a downvote. Sorry to see the haters out today, comrade. Rust and Go seem to have the potential for some overlap, but it seems Rust will be successful as a lower-level systems language while Go may be easier to maintain when building highly concurrent, networked applications.
A sort of humanistic science, since almost everything we touch has been made by other humans. But somehow STEM people are famous for looking down their noses at the non-hard sciences.
> As it should. If it weren't for capitalism we would not enjoy any of the abundance we have become accustomed to having in the past century.
That argument can't be directly applied to the future. Just because a system worked in the past doesn't mean it will work in a future world with different constraints. That's why many think that today's capitalism is not a good political model for the envisioned future of mass automation.
And of course we can't hold on to an idea out of reverence and respect, i.e. "be grateful for what it has done for you, you privileged first-world dweller, you".
I certainly sympathize with the meat of your arguments, but I take issue with the use of the word "capitalism". That "capitalism" is used at all to describe our current system is a misnomer.
> That "capitalism" is used at all to describe our current system is a misnomer.
Well then I'm confused, since I was just going by what you seemed to be referring to. And I assumed that you were talking about a contemporary (now, plus recent history) capitalistic system. What kind of capitalism were you talking about? Something that existed before but doesn't anymore?
> I had been using it for a while, and assumed it had been around for ages, based on how it seemed to be so oriented around a unix file system and using the command line.
Unix culture seems to be alive and well to this day, and still seems to get infused with new blood. So I wouldn't even have been surprised if something like git had been made by a college student instead of by the 40-something creator of the most popular Unix-like OS.
Logic systems can have bugs that make them inconsistent, meaning you can prove false (or bottom, or whatever), which in turn means you can prove anything in that logic. That makes the logic system useless, since every formula becomes provable in it. And being able to prove everything is not fun.
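To make the "prove anything" step concrete, here's a tiny Coq snippet (the lemma name is just for illustration) showing ex falso quodlibet: a single proof of `False` is enough to prove any proposition.

```coq
(* Ex falso quodlibet: a proof of False yields a proof of anything. *)
Lemma anything_goes (P : Prop) (contradiction : False) : P.
Proof.
  (* False has no constructors, so case analysis leaves nothing to prove. *)
  destruct contradiction.
Qed.
```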
Then you have to patch your logic and make sure that none of your proofs relied on that inconsistency. Or something like that.
You don't, but so what? Even proofs done well by hand can have obscure mistakes; most of the time they don't invalidate the proof. Take a proof as strong evidence of correctness and it all works out from an epistemological point of view.
The program defines a type `t` with 257 constructors (`C_0` through `C_256`). As an analogy, the type `bool` has two constructors (`true` and `false`).
Each of these constructors takes a natural number as an argument.
It then defines a function `is_256` which checks whether its argument has the form `C_256 n` for some `n`, returning a boolean.
It then defines the value `falso` of type `False` (which shouldn't be possible). It does this by defining two values: one has type `is_256 (C_256 0) = true`, the other has type `is_256 (C_256 0) = false`. This is clearly a contradiction, which is used to construct the value `falso`.
The reason one value gives `true` and the other `false` is that the former is calculated by the regular Coq implementation, whilst the latter is calculated by the buggy `vm_compute` alternative. `vm_compute` should give identical results, just faster; but since there's a bug, we can derive a contradiction by mixing correctly-calculated values with incorrectly-calculated values.
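Putting that description together, the shape of the program is roughly the sketch below. It is only an outline: the middle 254 constructors are elided, so it won't compile as written, and on a fixed Coq the `vm_compute` step fails anyway, so no contradiction can be built. The names `t`, `C_0` ... `C_256`, `is_256`, and `falso` come from the description; the proof-script details are my own reconstruction.

```coq
(* Sketch only: most constructors elided, and the final lemma only goes
   through on a Coq version with the old vm_compute bug. *)
Inductive t : Type :=
| C_0 : nat -> t
| C_1 : nat -> t
(* ... C_2 through C_255 elided ... *)
| C_256 : nat -> t.

(* true exactly when the argument was built with C_256 *)
Definition is_256 (x : t) : bool :=
  match x with
  | C_256 _ => true
  | _ => false
  end.

(* Ordinary reduction computes the correct answer. *)
Lemma ok : is_256 (C_256 0) = true.
Proof. reflexivity. Qed.

(* The buggy vm_compute (incorrectly) reduced this to false;
   on a fixed Coq this proof fails. *)
Lemma broken : is_256 (C_256 0) = false.
Proof. vm_compute. reflexivity. Qed.

(* Chaining the two equations gives true = false, hence False. *)
Lemma falso : False.
Proof.
  assert (contra : true = false).
  { rewrite <- ok. exact broken. }
  discriminate contra.
Qed.
```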
There has got to be some name for this argument tactic. It's so trite and useless.
https://news.ycombinator.com/item?id=9259764