You need a cache in front of Wordpress so that it doesn't hit the database on every read; then you can run Wordpress anywhere. I survived close to 3,000 req/sec against a Wordpress entry using a single Linode 360, back when those were available.
wp_supercache, nginx, varnish, etc. Rinse and repeat.
I appreciate the advice, but I knew this already. I've been using WP Supercache since it was released. Nginx for I think 4 years at this point. I have it configured to the point that nginx serves the gzipped pages WPSC writes out to disk without ever hitting PHP.
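(For the curious, the rule boils down to something like the sketch below. It assumes WP Super Cache's default cache directory and that cookied/logged-in users are routed past the cache elsewhere in the config, so treat it as a sketch rather than a drop-in.)

    location / {
        gzip_static on;   # serve index.html.gz when the client accepts gzip
        try_files /wp-content/cache/supercache/$http_host${uri}index.html
                  $uri $uri/ /index.php?$args;
    }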
It's been a few years since I worked on a large WP install, but I wonder if you could write a plugin that replaced the comments section with an ESI (edge side include) directive. That would allow Varnish to cache the whole page and then call into WP using the URL in your ESI include to build the comments section. You could also then set the TTL on the ESI comments URL so that you can fragment-cache the comments for non-logged-in users (for, say, 10 seconds).
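Roughly, in VCL terms (Varnish 3-era syntax from memory, and the fragment URL is invented, so this is only a sketch):

    sub vcl_fetch {
        if (req.url ~ "^/esi/recent-comments") {
            set beresp.ttl = 10s;         # fragment-cache comments briefly
        } else {
            set beresp.do_esi = true;     # scan the page for <esi:include>
        }
    }

The page template would then emit <esi:include src="/esi/recent-comments"/> wherever the widget used to render.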
It's funny you should mention this. I was chatting to a mate about this thread and he suggested using an ESI approach.
I think it would work well if the Recent Comments widget was modified to spit an HTML fragment to predictably-named files that varnish could pick up and include with ESI.
I've never installed Wordpress in production, this is the first time I've heard about ESI, and in general I have almost no relevant experience, but maybe this suggestion has some worth:
ESI sounds like it would couple your web application code to your cache. This sounds negative to my ear. How about modifying the Recent Comments widget to work with an IFRAME or some AJAX? It adds another request to the server, but now both requests can be cached and compressed.
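(Something like the sketch below; the fragment URL is invented, and the AJAX flavor assumes jQuery is already on the page.)

    <!-- IFRAME flavor: the widget renders in its own cacheable request -->
    <iframe src="/recent-comments" width="300" height="400"></iframe>

    <!-- AJAX flavor: pull the cacheable fragment after page load -->
    <div id="recent-comments"></div>
    <script>
      $('#recent-comments').load('/recent-comments');
    </script>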
That's why Varnish is more effective, because you can configure it to cache the results of things that WP Supercache misses by design, including the Recent Comments widget and its data. If you need SSDs to run Wordpress, there's a flaw somewhere. Every time I've scaled Wordpress, caching has been the answer. Apply liberally.
WP Supercache is a hack anyway, for folks running WP on shared hosts without root. If you have root, there's a plethora of better things for caching, even something as ancient as Squid as a reverse proxy. You can get your MySQL traffic down to <1 QPS fairly trivially, no matter what kind of traffic is hitting the frontend.
Don't forget wordpress.com is a huge MU installation, and they've existed since before SSDs became popular. The disk is not your issue here.
Varnish is not effective in the face of Recent Comments because that widget breaks whole-page caching fatally. Every time anyone leaves a comment anywhere, the entire cache for the entire site is invalid.
When I looked at where the slow runtimes were occurring on Linode, it was always jammed on disk I/O and it was always on PHP functions that were reaching into MySQL.
In my experience the MySQL query cache + an object cache do more for sites with a Recent Comments widget than whole page caching.
As it happens, I do all of the above. And I was doing all of the above. And still getting jammed on I/O. Because MySQL likes to join on disk. Whole page caching is useful only if you prevent that from happening. It's useless if the cache is rendered invalid every few seconds on a chatty site.
Varnish gives you your own throttle for how often you want invalidation. It's a tool specifically designed to make misbehaving apps -- i.e., that widget -- behave. You just have to slap the dog on the nose when it misbehaves. I'm just saying you could have made this work on Linode (and I have), but I do see your post-purchase rationalization at work, so I know anything I say will be fruitless anyway.
There's also the possibility that you had shitty neighbors.
Right, and when I hosted on WPEngine and then on Pagely (and both of them choked), my users immediately piped up that the recent comments were inaccurate. In fact there were a number of page-freshness anomalies, which I believe were down to whole-page caching, that users frequently quizzed me about.
If your page is basically static, then yes, whole-page caching will fly. But several of the sites under my supervision are, to quote Pagely's founder, used "like a chat room".
Edit, per your edit:
> I'm just saying you could have made this work on Linode (and I have), but I do see your post-purchase rationalization at work, so I know anything I say will be fruitless anyway.
Basically, I was there, I saw the numbers and I know why they wound up looking the way they did. I suspect that anyone in my particular situation would have evolved their approach in the same way that I have. I've been running Wordpress blogs since 2004. I feel that I've picked up some ideas on how to make it fast, but sometimes the general solutions don't work because you have a specific problem.
Got it. So you have users that expect to have real-time conversations in the comments on a blog, meaning you can't optimize a blog application like a blog is designed to be used, meaning you have to pay for SSDs in order to make your blog function at all because apparently MySQL can't handle N inserts/second and however many people have these conversations refreshing every ten seconds, generating a few SELECTs that are rapidly in query cache.
I completely understand how this could be a problem and how switching providers would fix it.
Sarcasm aside, you clearly don't understand what my problem is.
1. Recent Comments invalidates every page it appears on whenever a comment is posted in any thread. In practice that means that the entire site cache is invalid. That breaks whole-page caching models.
2. This means that Wordpress will regenerate from scratch.
3. This means first of all generating the page, which joins multiple tables including TEXT fields. Because of the brilliant design of MySQL, these joins ignore indices on the joining fields and frequently the join will occur on disk.
4. The Recent Comments plugin also causes joins on disk because it too refers to tables with TEXT fields (a way to confirm this is sketched after this list).
5. The query cache helps a lot, but the site on Linode was still observably jammed on I/O, even when MySQL was given an entire server to itself.
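For anyone who wants to check this on their own install, a rough sketch using the standard status counters and stock Wordpress table names (the query is illustrative, not the widget's actual SQL):

    -- How many temp tables spilled to disk?
    SHOW GLOBAL STATUS LIKE 'Created_tmp%tables';

    -- "Using temporary" on a join that drags in a TEXT column
    -- (comment_content here) means the temp table is built on disk:
    EXPLAIN SELECT p.post_title, c.comment_content
    FROM wp_posts p
    JOIN wp_comments c ON c.comment_post_ID = p.ID
    ORDER BY c.comment_date_gmt DESC
    LIMIT 10;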
However, if you feel you can do it better, I am happy to engage your services as a fulltime replacement. WPEngine said they could do it for $250/month (they couldn't). Pagely said they could do it for $149/month (they couldn't). I invite your bid.
> Sarcasm aside, you clearly don't understand what my problem is.
Yeah, I spent this entire thread clueless about the issue you're running into, even though you spelled it out a few different times because you think I don't get it. Wordpress falls over under normal site load, film at eleven.
Since you want to switch to condescension, I'm assuming wise sir moved MySQL's tmpdir to a RAM disk and found that unsatisfactory for his mystical, MySQL-breaking SELECT/INSERT workload? Also, I'm far more expensive, and I know that WPEngine is multitenancy Wordpress on Linode in the backend. (That one's free.)
We're not being very productive here, are we? I could nitpick your comment just now but it wouldn't change your mind either.
You think I'm an idiot. Possibly you think I'm a liar.
I don't think you're an idiot. All I can do is point out that I looked at the numbers, I've tested various strategies or tools (and adopted most of them), I referred the problem to the experts, and this is where I've had to go.
I think you're unnecessarily combative in the face of advice and sitting comfortably atop your pillar of experience, ready to shoot down anyone who dares to take the time to offer you advice. You've appealed to your own authority on this matter more times than I can count. Look at how you've approached the conversation from the very first reply, which set the tone for the rest:
- I've been using WP Supercache since it was released
- [I've been using] Nginx for I think 4 years at this point
- Basically, I was there
- I saw the numbers and I know why they wound up looking the way they did
- I've been running Wordpress blogs since 2004
- You clearly don't understand what my problem is
Now I've asked you something specific. You've lamented that you identified the issue as on-disk joins, when MySQL has to resort to an on-disk temporary table due to a TEXT column. That's discussed here[1]. I'm assuming, because I didn't assume you are stupid (unlike in the inverse), that you deduced this was the case by inspecting created_tmp_disk_tables. I then asked if you tried removing the disk from the picture by creating a RAM disk, mounting it somewhere, then instructing MySQL to use it for its temporary disk table area by setting tmpdir. I also assume you know that tmpdir defaults to the system /tmp, which might not be on a filesystem that you prefer[2]. Again, I assumed you knew these things, and just asked if you tried them.
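(For reference, the whole experiment is only a few lines; the mount point and size here are arbitrary:)

    # Give mysqld a RAM-backed temp-table area.
    mkdir -p /var/lib/mysqltmp
    mount -t tmpfs -o size=512M tmpfs /var/lib/mysqltmp
    chown mysql:mysql /var/lib/mysqltmp

    # my.cnf, then restart mysqld:
    [mysqld]
    tmpdir = /var/lib/mysqltmp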
How do you respond? "Let's ignore each other." So now I'm left wondering if you genuinely don't know how to scale MySQL, and you've tired yourself of appealing to your own authority in order to prove me wrong. What I'm telling you is that the notion of your blog network creating a workload that MySQL is incapable of handling on commodity disk is completely ridiculous, and I'd laugh you out of an interview if I pressed you like this. I thought you had given up, and I wasn't going to say it, but now that you've gone at me like this, I will. You're basically saying you couldn't make MySQL work with a <50QPS write load (I refuse to believe you're writing more than 50QPS to MySQL) because of some TEXT columns.
I'd have far more respect for you if you'd just say, yeah, I probably could make MySQL keep up with my blog workload, I just didn't put much effort into it and bought SSDs on a provider I don't prefer instead.
(But wait: I don't understand. Username oddly appropriate.)
> I think you're unnecessarily combative in the face of advice and sitting comfortably atop your pillar of experience, ready to shoot down anyone that dare take time to offer you advice.
I regret now being such a grump about it. But nothing you've so far suggested is new. I felt lectured down to and I felt supremely pissed off by it.
My remark that we should stop talking was because it was becoming increasingly acrimonious and I didn't see the point in further e-peen waving.
> you deduced this was the case by inspecting created_tmp_disk_tables
I did.
> I then asked if you tried removing the disk from the picture by creating a RAM disk
I did in 2007, actually, on a physical server I had access to. It would reliably lock up the DomU. I might not have been the only one[1]. I think I moved to Linode in 2008.
> So now I'm left wondering if you genuinely don't know how to scale MySQL
Entirely possible. I have as little to do with MySQL as I can. When the site slows down I learn a little more.
Take for example the documentation you referred to, in particular:
    Some conditions prevent the use of an in-memory
    temporary table, in which case the server uses an
    on-disk table instead:

      * Presence of a BLOB or TEXT column in the table
I learnt about that after a long period of fiddling with the tmp_table_size and max_heap_table_size values.
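That is, knobs along these lines (values arbitrary), which turned out to be moot for these queries:

    # Raising these only helps temp tables that qualify for memory in
    # the first place; anything with a TEXT/BLOB column skips them
    # and goes straight to disk.
    [mysqld]
    tmp_table_size      = 256M
    max_heap_table_size = 256M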
> What I'm telling you is that the notion of your blog network creating a workload that MySQL is incapable of handling on commodity disk is completely ridiculous, and I'd laugh you out of an interview if I pressed you like this.
I didn't believe it either. Yet there it was, chewing up disk. I got a lot of relief from implementing various caching strategies, switching web servers and so on and so forth. But eventually it was consistently bottlenecked on the database. So I broke the site into two servers, which gave me a few more years. But eventually it was, again, bottlenecked on MySQL.
> You're basically saying you couldn't make MySQL work with a <50QPS write load (I refuse to believe you're writing more than 50QPS to MySQL) because of some TEXT columns.
I didn't say that the problem is inserts. I'm saying that it creates temp tables on disk to satisfy fairly standard page and widget queries. If you thought I was talking about insertions then I can understand your skepticism.
88 QPS since last restart, FWIW. Hardly the world's biggest installation. About 90% of queries are served from the query cache; but of those that aren't, around 44% of joins are performed on-disk. That's pretty much what I've seen every time I look: around 45% of joins going to disk.
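(Those figures come straight from the standard counters; roughly:)

    SHOW GLOBAL STATUS LIKE 'Qcache_hits';        -- served from query cache
    SHOW GLOBAL STATUS LIKE 'Com_select';         -- SELECTs that missed it
    -- hit rate ~= Qcache_hits / (Qcache_hits + Com_select)

    SHOW GLOBAL STATUS LIKE 'Created_tmp%tables';
    -- on-disk share ~= Created_tmp_disk_tables / Created_tmp_tables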
> I probably could make MySQL keep up with my blog workload, I just didn't put much effort into it and bought SSDs on a provider I don't prefer instead.
At the start of this thread you said that you've had varnish serve 3k RPS in front of a Wordpress instance on modest hardware. I agree that such performance is doable, even quite straightforward, for the common use case.
But if you take away caching, Wordpress is not quite so performant. And that's my problem; the whole-page caching strategy that makes thousands of RPS fairly straightforward doesn't work for me, because Recent Comments invalidates the entire cache.
So I have two choices: either I do without that particular widget and let varnish or nginx serve up what are essentially static pages 95% of the time (and I have an nginx rule that does this with the gzipped pages that WP Supercache writes to disk).
Or I can accept that, because of the unusual pattern of usage, I am closer to the uncached baseline than most Wordpress installations are. Because the bloggers I host asked nicely, I have chosen the latter.
Putting my own anger down for a minute, I am happy to take any other advice you have. I projected onto you my own frustration.
Her first statement while being interviewed was that she was performing a science experiment. That story changed under interview, and she later admitted someone told her how and encouraged her to do it. The outpouring of support from renowned scientists and legal funds happily ignored that point, and stuck with the "science experiment" angle.
So, no, this is not an anti-science campaign by the government, regardless of how really hard we want to believe that she's aspiring to science. These things will blind you and take limbs off if you mishandle them. They have blown up on police officers after being left in an alley because they didn't detonate when kids made them. Then the kids just leave them for other people to clean up.
Even if you buy the science experiment angle, which she herself went back on, this is a safety issue from the completely reckless way in which she performed a "science experiment", similar to detonating a pipe bomb on school property just to see what happens. Oh, how the narrative would be different if she had built a pipe bomb instead.
Honestly, guys, we have this one wrong. Here are several other cases for your consideration, showing how big this problem is (the reason arrests are picking up is because LEOs are communicating about this, now that kids are starting to do it more):
This is a serious problem, not an opportunity to "win another scientist," and the narrative around this case has been disgusting. Reckless behavior and endangering yourself and others intentionally is not the hallmark of a scientist, no matter how much you want the science angle to be true.
No, we have it right. People like you need to stop convicting everybody in our society for momentary instances of foolish behavior. Who cares if it was an ad-hoc experiment or something she saw on YouTube that her friends were trying to get her to do? It's still science, and performing it without all the permissions and safety precautions doesn't change that. Even if it was just a bunch of kids who wanted to see something blow up. It's cause and effect. Learning and growing. Yeah, it was stupid, but you learn from stupid things.
Regardless of all that, the charges are completely over the top. Community service, detention, suspension, sure. But the county was trying to charge her with a felony! I did stuff like that in junior high too, because I wanted to see the reaction. I learned from it though, and grew, because I didn't become an instant criminal and have my entire life ruined.
Congratulations, now we just put another young person into the system that strips away the chance to succeed and condemns her, for life, to one childhood mistake.
Endangering yourself for curiosity is pretty much the absolute definition of a scientist. By definition, when you do something where you're not sure what it will do - that is, science - it's possibly dangerous.
Let's keep in mind: this incident, where no one was injured or harmed in the slightest, is being punished infinitely more severely than destroying the entire world financial system has been. Because, hey, poor black girl in Florida.
Interestingly, of all your links, not one caused even the slightest injury or slightest property damage. Injuries seem to be rather rare (and plastic shrapnel would lose velocity within a couple of feet).
I can think of another female who endangered herself for curiosity's sake, and the body of scientific knowledge this world possesses is infinitely better for it.
In that particular person's case she wasn't arrested. Instead, it resulted in her becoming the only person ever to receive two Nobel prizes in two separate areas of science.
Did she (or anyone else) realise the magnitude of the danger at the time though? I'd imagine she would have taken more precautions if so. Even now, her papers require protective clothing and cautious handling due to their contamination.
Oh please, I've blown up so much stuff when I was younger, and it was instrumental in encouraging me to learn about science.
Did people get hurt? No.
Was she doing it to hurt people? No.
Was she doing it for attention? No.
She was doing it out of curiosity. She doesn't deserve to be treated like a criminal.
We must look at this in terms of mens rea.
She needs a stern talking to, and probably at least a suspension from school. If they really want to escalate this to an expulsion, fine, but criminal consequences? Probation? Jail? Completely ridiculous.
Like there aren't enough actual criminals to prosecute.
About the same time the store workers in my town would call the police / fire department when we bought aqua net and/or starter fluid. (I guess 5 teenage dudes buying multiple cans of hairspray was a bit obvious.)
Never did get in trouble with our potato guns. I'm sure I'd already be in jail if it were this day.
> Her first statement while being interviewed was that she was performing a science experiment. That story changed under interview, and she later admitted someone told her how and encouraged her to do it.
How is the former different from the latter? Is that not still science? Are you not doing science if a teacher directs you to perform experiments for a grade?
The point of mixing the chemicals and doing the procedure yourself is to understand it properly. Just being told about it in vague terms is far inferior in terms of learning.
It becomes a mere 'demo' once you're well-practiced in the specific procedure. And she wasn't.
Look around at the replies to me and the downvote beating I'm taking for sharing my opinion. Turn on showdead for an even more depressing trip through my replies. Then ask yourself if this can ever come to an understanding without me seeing the "error of my ways" and adopting the "acceptable" position of this community. Then, go even further, and ask yourself if it matters one iota of a shit more than the several minutes I've already spent subjecting myself to this beating, including someone telling me I should play with anthrax and die for fuck's sake. Then turning it around on me and calling it my fault after I called him out on it (and got downvoted yet fucking again for doing so!).
The answer is, very clearly, absolutely not on all accounts. Hacker News does not tolerate dissenting opinion and that should scare you. The continued existence of this community and any contribution is plainly pointless, and while I'd invest a little time demonstrating that to you, I know that nobody will listen, I'll turn light gray, and this community will go back to having a grand old time patting themselves on the back throwing money at a Florida teenager they have never, and will never, meet nor hear about ever again.
One of the (many) problems in our discourse today is that there is no middle. No compromise. Ever. Look at American politics for an example. Even in this discussion here, there was no possible suggestion that I might be a little right, no possible seeking of an understanding, no middle ground. You either think this girl deserves our showering of praise and affection or you suffer the consequences and get grayed to nonexistence.
No, nitrogen. I'm done. You guys can have your polarized, pointless discussion, and I will go back to keeping my opinions to myself. Just like I'm pretty much done with this industry, as well, because it seems like the number of intolerable people that I have to spend nine to five arguing with, over pointless shit like this, continues to multiply until I don't want to listen any more.
I started this account when a good guy, Jesse Noller, had his life threatened by the very people in this industry for daring to intervene in the PyCon situation. I hoped that maybe, just maybe, I could effect some change and get people to see that there is a middle, there is a compromise, there is an opportunity for discussion. When I follow the community, I am rewarded with heaps of karma -- including one comment with dozens of upvotes for saying what HN wants to hear. When I dare speak against the community, as I have here, I am reminded why I keep my distance and why those of us who think rationally consider this community a heaping pile of arrogant shit. You should hear what people outside HN say about it, based on discussions like this. Congratulations, HN. You're now Slashdot. You're now the community that in a single thread compared a teenager that built a bomb in a two-liter bottle to Alfred Nobel and Marie Curie.
In the hours since you left this question for me, I went to see a film and forgot about Hacker News for four hours, and it was four glorious, wonderful hours I intend to repeat. Continuously, for the rest of my days.
I'm glad you took the time to write this response. Part of the problem with this thread is that people defending Kiera Wilmot aren't just defending her, they're defending themselves. They did similar (or more extreme) things, and they know their lives would have been ruined by the kind of response demonstrated by school and government officials in her case. They know that society might lose decades of valuable contribution from a healthily curious girl, and think of what their lives would be like if their youthful indiscretions had destroyed their own careers.
People implicitly defending their own identities have a much harder time backing down or seeing the implicit defense of identity in the other side's arguments, so you have two sides escalating to ever more extreme examples until the discussion devolves into shouting and namecalling.
There were some deplorable comments that will no doubt be cited by others as examples of HN's terribleness (though you should note that you're the one who brought up anthrax). But there was also an inspiring rally around someone who looked like she could use some support. Sadly, not everyone who deserves this kind of attention gets it (like the Novato teenagers you mentioned in another comment).
In the end, your life will be perfectly fine without HN, HN will carry on without you, and eventually it'll either get better, or reach the point where everyone interested in rational discussion will leave. But if not HN, where? HN seems to have much less of a hivemind than, say, Groklaw. What other community is as consistently articulate (outside of stories about Apple ;-P), even if they're articulately vile and wrong?
If it means anything, in my opinion, we are observing a selection bias here: sociopaths and psychopaths feel the need to demonstrate the acceptable social behavior and, as an additional safety measure, outrage over deviant behavior. Since they cannot genuinely feel either of these, they end up mimicking each other and going ballistic over anything that differs from themselves.
If you take them seriously it's rather disgusting. However, I look at this as teenagers talking about sex (that they have not had yet), when everyone thinks he is the only virgin and is vigorously trying to hide this fact from his peers (who are doing exactly the same).
So I should be able to manufacture pipe bombs because I'm curious how big the explosion is, then? Maybe I can try one out on my school's football field? No malicious intent, just curiosity to 'see what happens', so by your logic I should be square, right?
Or perhaps I could study the inhalant properties of anthrax powder at my desk at work, because I have no malicious intent and just want 'see what happens'?
There's a difference between science and doing something reckless that could, theoretically, endanger others without scientific safeguards in place. Public commentary on this issue is completely ignoring that.
You're completely ignoring the context and scale. She put some household chemicals in a soda bottle that caused a minor explosion. You're attempting to compare that to highly controlled biological weapons.
Do you really think it's justified to charge her with a felony?
> Public commentary on this issue is completely ignoring that.
No, lots of public commentary is condemning her for what she did but does not want to see her entire future destroyed by a felony conviction. Even if she's acquitted, it's traumatic and destructive.
It's the kind of thing that could have been dealt with entirely within the school.
I condemned her for what she did, but I also said that it was stupid for the school to involve police, and that it was something that should have been dealt with in school.
Yes, if you're curious what a pipe bomb explosion would be like, then I think you should investigate that curiosity safely with safeguards in place.
If you want to 'see what happens' with anthrax, then by all means you should study its properties at your desk at work--inside a laboratory--with safeguards in place.
And if you want to see what happens when you combine two chemicals in a science class, then by all means. I think students should be able to reasonably assume that the safeguards are in place. Because if we're giving our children materials to build the equivalent of a pipe bomb in science class, then that is our fault. We're the adults. We bring the safeguards, and they bring the scientific curiosity.
And by the way, a discussion of recklessly carrying out science is independent of a discussion of whether an act can be scientific in nature.
Yeah, that was called for. Thanks for the rational, objective discourse, there, by telling me I should kill myself with anthrax because I disagree with your opinion. Just reminds me why I love the Internet.
If you knew inhaling anthrax powder would kill you, why did you suggest it in the first place?
To be fair, it wouldn't kill all of us to do such an experiment. Not that anyone would think it was a good idea either way. So even assuming the other comment was telling you to "kill yourself" is dishonest on your part.
Most of the backlash comes from the overreaction of the school and the DA. While her decision was not wise and definitely had some risk (I do not know exactly how she carried out this demonstration, so the amount of risk is rather unknown to me), she is faced with punishment that could reasonably cripple her future.
I completely agree. She went back on her story that she was doing a science experiment while being interviewed, but everybody wants to stick to that story anyway. It's almost like everybody in the world wants her to be a scientist, and I pay dearly every time I raise this point here. I don't get it.
Maybe it's because actual, grown-up scientists who nurtured their youthful interest in science by blowing things up were hoping that she'd be impressed by their support and decide to become a scientist herself.
And if she had hurt or, worse, killed herself or other people? Reckless is not a quality of a scientist. Why didn't we, the public, extend an invitation to "become a scientist" and pour support and legal defense funds to the Novato teenagers arrested for making the exact same thing OFF school property[1]? It's a felony charge in California, as well.
> But officers did arrest a group of teenagers at the end of January for making and setting off Drano bombs in an open space off Palmer Drive.
Let me be clear: mishandling one of these devices can blind you and remove limbs, and even if you buy the "I'm doing a science experiment" angle, she's doing it without training or safety considerations. This is a safety issue, not a science-hating issue, and there have been many charged and convicted before this young lady. I hate that we absolutely cannot have an objective conversation about this.
> And if she had hurt or, worse, killed herself or other people?
God damn it this is the whole point! Obviously if she had killed other people the story would be different. But she didn't. She blew shit up in an open place for reasons that were not sinister.
Motive matters. The reason she's getting so much support is because what she did resonates with so many of us, and we sense a kindred spirit.
Maybe you see her as a little terrorist-in-training who will use her new-found knowledge to blow up a marathon or something, but I have no reason to think that. I see a kid who heard about this from someone and thought "Wow, if I mix these ingredients in a plastic bottle there will be an explosion. Cool!" And then because she got off her ass and actually DID something rather than watch youtube videos or TV, she's far ahead of her peers.
I lived at a dorm at MIT which was known for several times a year making a coffee-mate bomb. The explosion was loud and the flames leapt up 5 stories, and then everyone would scatter before the MIT police inevitably arrived. It was fun.
How many people out there didn't do something reckless in their youth? By the standards modern society seems to be applying to youth, the vast majority of present day adults should have been charged with felonies in their teens.
From your link:

> Police said no one in Novato has been injured by a bottle bomb so far. But officers did arrest a group of teenagers at the end of January for making and setting off Drano bombs in an open space off Palmer Drive.
Note that nobody was hurt, and that the teenagers were using an open space. That sounds like the teenagers were following reasonable safety precautions.
From another HN thread on the subject:

> ... 2Al + 3H2SO4 -> 3H2 + Al2(SO4)3 is not on the prohibited list of reactions that are federally impermissible without a license. (http://news.ycombinator.com/item?id=5636823)
What felony would these kids be guilty of committing?
You know who should have been arrested? The guy I knew years ago (whose dad was a sheriff letting him off the hook, and who was definitely not a friend) who threw a Drano+foil bomb at a pedestrian and drove off. His favorite part of the story was the "hilarious" screams of the victim as the bomb exploded: "It burns!"
Can you point to any specific incident of anyone anywhere on Earth at any time in human history being blinded or losing a limb, or even a finger, by either a dry ice bomb, Drano bomb, or an HCl bomb in a plastic bottle?
As duaneb below points out, her intentions were not sinister. Negligence can be a crime, but do we really charge minors with crimes of negligence? Isn't that part of the problem of being a minor?
Given that it wasn't done with malicious intent, this is a matter for school disciplining, not court time.
The point is that the spirit of scientific inquiry cannot be divorced from risk, danger, and even the occasional mischief. As we've heard over and over and over again, it is commonplace for folks who are going through their formative years and interested in science to do things like create experiments which go horribly awry or even to get up to dangerous mischief. As I mentioned elsewhere, there are appropriate ways to ensure that people learn the right lessons about potentially dangerous activities. And the right way is not "don't ever do it, don't even think about it, you'll be punished severely".
For most customers, maybe, but I suspect there's a pretty sizable portion of the population (myself included) that will happily pay more for an airline that treats me extraordinarily well, considers my comfort while on board, and gives me reasons to shout their name down the street. For example, Virgin America is one of those airlines for me, and I will happily pay a premium to fly one of their tragically few routes (they're one of the few airlines I fly first on, as well). I blacklist Southwest entirely because of the complete disregard for my experience shown on the three miserable flights I've had with them. I don't think I fit in your business bucket, but that's a data point, at least.
In another industry where profit is the focus, domain registration, I'm absolutely dying for a $100-$200/year registrar that knows what they're doing and isn't awful to deal with. I will happily pay that premium since my hosting bill far outweighs my domain registration, and handling support tickets expediently and providing features I want are far more important to me than the bottom line. If an extra $10/year from all customers means I get IPv6 glue or a ticket answered inside of 72 hours, please, do it! (This is less relevant now, but was a concern for me in the past.) I'm willing to part with cash to be treated better in almost all cases.
Sure, people would pay more to get better treatment but most people don't fly enough to know the difference. Ask a few people around which airlines give you the entire can of coke and which only give you half the can in a tiny plastic cup. Ask them which airlines give out free pretzels on medium length flights and which ones give out a menu where you have to pay for everything. Ask them how expensive the cheese sampler is off that menu and if they remembered to factor it into their cost of better treatment. Which airline has the widest seats? I actually don't know the answer to that. Thinnest stewards/stewardesses? I could care less about how attractive they are, but I like aisle seats and don't want fat stewardess butt in my face when they walk up and down the aisle. Are you going to be flying in a widebody jet or a tiny regional jet? You get the picture. There's lots of stuff to consider, and 1 inch less of legroom may not pop into people's minds (certainly not mine since I am always in an exit row. It's my mini-first class upgrade).
The fact of the matter is that most travelers aren't informed enough to know the difference between airlines except for the prices that show up on Priceline/Travelocity/Kayak/etc. I would like to see a feature chart on each airline and an estimated value to see if it was worth the extra money. I would pay extra to get updated intel like that before I book a flight.
> For most customers, maybe, but I suspect there's a pretty sizable portion of the population
Yup, and for those two things you mention that you are willing to pay more for, there are probably 98 other things you aren't. Each person will have the things they are willing to pay more for, and most will be different, so for most services, 98% of people are going to choose the cheaper option.
Hence most companies won't give a shit, and almost everyone has moments where they wonder why there isn't an option to pay more for higher quality of X product.
Yeah, apartments aren't a commodity (they're pretty much the farthest from, given their absolute uniqueness in terms of location). Plane seats are, unfortunately.
Yes, but presumably you looked at the apartment first.
Unless airline search sites start listing seat pitch, amenities, seat recline, in-flight entertainment quality, etc. next to the ticket price when selecting a fare, it's hard to see how these factors would play into a purchase decision.
Even if the data were there, I'm wagering that the lion's share of customers would keep $20 in their pocket at the cost of some minor discomfort.
For domain registration at the $100-200/yr range, just become a Tucows/OpenSRS reseller yourself. You get a lot of power to do things, minimal hassle, etc. I own maybe 200 domains (because I tend to get all variations of the ones I use) and it's totally worth it (and still only $7-8/domain/yr)
That's interesting. I fly first on a bunch of random airlines but most often fly SWA. I've had very few unpleasant SWA experiences, and lots of excellent ones where they bent over backwards to make sure things worked out for me.
On the other hand, almost every other airline I've flown has been nothing but bad experiences: repeated last-moment cancellations, surly staff, lost luggage.
> For most customers, maybe, but I suspect there's a pretty sizable portion of the population (myself included) that will happily pay more for an airline that treats me extraordinarily well, considers my comfort while on board, and gives me reasons to shout their name down the street.
I'm not an air travel expert, but as I understand it many airlines offer seats in classes such as 'economy', 'business class' and 'first class' with differing prices and comfort levels. Isn't there both demand and supply for higher comfort levels at higher prices?
When you fly, do you pay a premium for business class or first class accommodation?
The difference between economy and business class is 3-5X. It's like a completely different world/price level, one that those of us stuck in economy can only dream of.
To sleep on a 12 hour flight...that would be great!
I don't understand why economy is so berated by passengers. It's fine for me; I'm 5' 11" and I have plenty of legroom, and the people I sit with are generally polite and well-groomed. All of the nightmare stories I hear just do not make sense to me. Perhaps it's because I fly internationally, and people are more willing to be neat.
Although, I must say one last thing: do not bring an infant onto the plane. They're messy, hard to deal with, and give everyone a headache in an enclosed area where you can never get any peace.
My parents had the sense to keep me offboard until I was 4. I think all parents should do so.
On a 12 hour flight, you want to sleep, but it's impossible in economy. At least we can watch movies now during our flight, but the gap in comfort between economy and business is vast.
I've never had a problem with babies on flights. I travel between the states and China often, and there are always more than a few of them on board.
...probably didn't build the actual clock himself; he's just cutting a base for it and applying a photograph. So, that's interesting w/r/t the article. (Edit: Oh, someone below me said that already.)
At any rate, thank you for showing me Regretsy. I didn't want to work today, and this is a good start.
You are correct. Go's remote imports are dangerous for long-term project maintenance, but the feature is still useful for quick, throwaway projects.
Go badly needs two things:
1. A best practice that dictates that you never import from remote repositories in production, long-term code; the feature is fine for one-offs and experimentation, but the article summarizes only one way this style of work can lead you into a maintenance world of pain. What happens if the repo you're importing from Github is deleted? What do you do for fresh clones? You're going to end up changing the URL anyway. I feel the Go community has kind of glossed over this (and I like Go).
2. An equivalent of CPAN or PyPI, which you could then import from in concert with a tool to manage those dependencies, a la:
    import (
        "cgan/video/graphics/opengl"
    )
This model works for CPAN, PyPI, and so on for a reason, and that reason is avoiding several of the dependency/merge hells that remote repos can create. CPAN provides Perl with a lot, such as distributed testing in a variety of environments. I personally think such a thing is necessary for long-term maintenance of any software project that utilizes third-party libraries. This is one of Google's oversights in Go, because they have an (obviously) different take on third-party code. Here's a good case:
Developer A checks out the code clean. Five minutes later, developer B checks out the code clean. In both cases, your "go get" bootstrap script fetches two different commits, because in that five minutes, upstream committed a bug. Developer B cannot build or, worse, can build but has several tests fail for unknown reasons or, even worse, the program no longer functions properly. Developer A has none of those problems. In a world with a CPAN-like, developer B can see that he has 0.9.1 and developer A has 0.9.0; developer B can commit "foo/bar: =0.9.0" to the project's dependency file, and then everybody else doesn't suffer the same fate. In the current world, you're either massaging your local fork to keep the new commit out, or taking any other troublesome, non-scalable approach to this.
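To make that concrete, the dependency file I have in mind needn't be fancy. Something like this entirely hypothetical format (no such standard tool exists for Go today, and both paths are made up):

    # deps -- one pinned constraint per import path, resolved by a
    # CPAN/PyPI-style fetch tool instead of cloning repository HEAD.
    foo/bar                     =0.9.0
    cgan/video/graphics/opengl  >=1.2, <2.0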
Building large software projects against a repository never works. You need tested, versioned, cut releases to build against, not master HEAD. It only takes one bad upstream commit to entirely torpedo your build, and you've now completely removed the ability to qualify a new library version against the rest of your code base. Other people are suggesting "well, maintain your own forks," so you're basically moving merge hell from one place to another. I, personally, have better things to do with my time; I've seen (Java) apps with dozens of dependencies before, and keeping dozens of repositories remotely stable for a team of people will rapidly turn into multiple full-time jobs. Do you want to hire two maintenance monkeys[0] to constantly keep your build green by massaging upstream repositories, or do you want to hire two feature developers? Exactly.
I've started writing a CPAN-like for Go a couple times but I'm always held back by these threads:
The second one highlights how difficult Go is to package as a language -- my personal opinion is to treat Go just like C and distribute binary libraries in libpackage, then the source in libpackage-src. If one message in that thread is true and binaries built by one compiler version refuse to work with another, I'm troubled about Go long-term.
[0]: I'm not calling all build engineers maintenance monkeys. I'm saying the hypothetical job we just created is a monkey job. I love you, build engineers, you keep me green.
You're right that syncing directly with the net is a problem. However, the problem is basically developer education. Go doesn't work like other languages and the consequences aren't well-documented. If you do it right, the process works fine; it's basically how Google works internally.
The basic idea is to commit everything under $GOPATH/src so that you never need to touch the net to reproduce your build.
After running "go install" you should check in the third-party code locally, just like any other commit.
Then updating a new third-party library to sync with their trunk is like any other commit: run "go install", test your app, and then commit locally if it works. If it doesn't work, don't commit that version; either wait for them to fix it, sync to the last known good version, or patch it yourself.
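In practice the loop is something like this (a sketch; the package path is invented, and it assumes your app repo is rooted at $GOPATH so third-party source lands under src/):

    go get -u github.com/someone/somepkg   # sync to their trunk
    go test ./...                          # does the app still pass?
    git add src/github.com/someone/somepkg
    git commit -m "vendor somepkg at a known-good revision"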
If you aren't committing your dependencies then you're doing it wrong.
> If you aren't committing your dependencies then you're doing it wrong.
Disagree strongly (and hate absolutes like "doing it wrong"). Their metadata, yes, by all means, commit that. There is absolutely no reason, however, to have the source code for a dependency in my tree, Go or not.
Give me a binary I can link against, or at least the temporary source in a .gitignored location, and let's call it a day. When I want to bump to a new version my commit should be a one-line version bump in a metadata file, not the entirety of the upstream changes as a commit. I've seen a sub-1MLoC project take 10 minutes just to clone. Internally! You're telling me you want to add all the LoC and flattened change history of your dependencies in your repo? Egads, no thanks! Where do you draw that line? Do you commit glibc?
There's just no reason to store that history unless you are in the business of actively debugging your dependencies and fixing the problems yourself, rather than identifying the issue and rolling back to a previous version after reporting the problem upstream. I guess it's paying your engineers to fix libopenal versus paying your engineers to work on your product; one's a broken shop, the other isn't. Some people will feel it's one, some the other.
Actually what is insane is to have your production code do anything else. "pip install foo" and similar schemes open your code up to the following problems:
- incompatibilities that were introduced in version 1.2.1 while you've only tested your code with 1.2
- the host is down so you can't compile your own code because your dependency is not available
- the host is hacked and "foo" was replaced with "malicious foo"
- an exponential increase in testing (you really should test with all versions of the dependencies you use)
Ultimately, I don't understand the doom and gloom point of view. C, C++, Java, C# etc. programmers have been pulling dependencies in their repos for ages. In my SumatraPDF I have 12 dependencies. I certainly prefer to manually update them from time to time than to have builds that work on my machine but fail for other people or many other problems that are a result of blindly pulling third party code.
None of the things you listed are problems. The other comment demonstrates solutions to all of them, and I do not understand your fourth bullet point in context at all.
> C, C++, Java, C# etc. programmers have been pulling dependencies in their repos for ages.
I'm not making this up: in my career, I have never worked on a project where this is the case, and I've worked for shops that write in three of those languages.
> I certainly prefer to manually update them from time to time than to have builds that work on my machine but fail for other people
That's your choice, and it's a little bit different because I'm assuming "other people" are end users -- those that want to recompile SumatraPDF from source for some bizarre reason -- not developers. Fixing a broken build is a skill that every developer should have, but not an end user. Once I learned how to write software, I never came across a situation as an end user compiling an open-source Unix package that I could not solve myself.
The opinion I'm sharing here is related to developing on a team, not distributing source for end-user consumption. It sounds like you don't develop SumatraPDF with many other people, either. Nothing like merge failures on a huge dependency that I didn't write to ruin a Monday.
Also, wait, SumatraPDF is built with dependencies in the codebase? What if a zero-day is discovered in one of your dependencies while you're on vacation for a month; what do distribution maintainers do? Sigh? Patch in the distribution and get to suffer through a merge when you return?
> C, C++, Java, C# etc. programmers have been pulling dependencies in their repos for ages.
The first time I worked on a C# project started in an age where nu-get was not widespread, I saw with dismay a "lib" directory with vendored DLLs. It does happen.
- Binary artifacts under version control are a no-no for me, unless we're talking assets. Third-party libraries are not assets.
- Where do these DLLs come from? How do I know it's not some patched version built by a developer on his machine? I have no guarantee the library can be upgraded to fix a security issue.
- Will the DLLs work on another architecture?
- What DLL does my application need, and which ones are transitive dependencies?
That's many questions I shouldn't have to ask, because that's what a good package management system solves for you.
Spot on. We had this problem taking on some legacy code during a round of layoffs. They had checked in /their own/ DLLs from subprojects. It turned out that one DLL had unknown modifications not in the source code, and another had no source at all.
Another problem was that by building the way they had, they'd hidden the slowness and complexity of the build - including the same code from different branches via a web of dependencies, and with masses of unused code. They never felt this pain, so had no incentive to keep the code lean.
Sure. But at the same time, if you make it a policy to forbid nailguns at the workplace, you have fewer people shooting themselves in the foot while you're not looking.
Anyway, this analogy isn't helping anyone. You think libs in source control is a problem because some people might not do it properly. I'm contending that there's nothing wrong with libs in source control--there's something wrong with letting people who might not do it properly near your source control.
There are clear benefits from having a package manager (if anything, pesky things like version numbers, direct dependencies, etc are self-documented). In addition, it does prevent people from taking shortcuts, and even good people take shortcuts when the deadline is short enough.
But if you didn't write the dependency (and thus presumably don't commit to it), why would there be a merge conflict?
As for upstream maintainers rebuilding your package, I don't see how having to update a submodule is vastly different from updating the relevant versions in a file. Both seem like they'd take mere seconds.
It's not like you're writing gotos straight into library code; it's merely a bookkeeping change. You're just importing everything into your repo instead of referring to it by ambiguous version numbers. In the end, the code written and the binary produced should be identical.
- you don't need the internet to install dependencies. There are many options e.g., a locally cached tarball will do (no need to download the same file multiple times). Note: your source tree is not the place to put it (the same source can be built, tested, staged, deployed using different dependencies versions e.g., to support different distributions where different versions are available by default)
- if your build infrastructure is compromised; you have bigger problems than just worrying about dependencies
- you don't need to pull dependencies and dependencies of dependencies, etc into your source tree to keep the size of the test matrix in check even if you decided to support only a single version for each your dependencies.
As usual, different requirements may lead to different trade-offs. There could be circumstances where vendoring dependencies is a valid choice, but not for the reasons you provided.
No really! Go isn't like other languages. You have to think differently. Please try it!
There's no such thing in Go as, say, Maven binary dependencies for Java. If you don't check in the source code for your dependencies, your team member won't get the same version as you have and builds won't be reproducible. You won't be able to go back to a previous version of your app and rebuild it, because the trunk of your dependencies will have changed. By checking in the source code you avoid a whole lot of hurt.
Checking in source is okay because Go source files are small and the compiler is fast. There isn't a huge third-party ecosystem for Go yet.
> There's no such thing in Go as, say, Maven binary dependencies for Java.
I'm saying there should be, but not necessarily the same thing. That's my entire point.
I'm also not a fan of the "you have a dissimilar opinion to mine, so obviously you've never used Go properly" attitude in this thread. One way to read your last comment is that I've never used Go at all, though I'm giving you the benefit of the doubt and assuming you meant used Go properly. Either way, I don't get the condescension of assuming I'm unaware of everything you're explaining to me simply because I have an opinion that is different from yours. Especially since half of your comment is repeating things to me that I said earlier.
Maybe it sounds like condescension. I was in the same place at the beginning. No exceptions? No generics? Heresy. How dare you Go people ignore my many years of experience? I wrote a few rants to the mailing list, which were basically ignored.
The reason I assume you haven't used Go much is that your examples of problems with checking stuff in aren't examples of problems happening in Go. It's an analogy with other languages and other environments. Such arguments don't seem to get very far.
Maybe it won't scale and something will have to give. I expect the Go maintainers will find their own solution when it happens, and it won't look like Maven or traditional shared libraries. (If anything, it might be by replacing Git/hg with something that scales better.)
You already have that in Javaland. It's called Maven, and it allows you to change one number in one file to upgrade the dependency version. Clojure also has that with Leiningen.
Almost. `go get` will not resolve your sub-moduled dependencies. It'll work well enough for developing your library, but will break when people try to consume your library (at least, without git cloning it)
I agree 100% with your first point. It should be spelled out.
Your second point I am less sold on. Transitive dependencies (pkgA imports pkgB@v1 but my code needs pkgB@v2, which is incompatible with v1) are the stuff of nightmares in large-systems development, which is what Go is designed for... the lack of versioned imports wasn't an oversight; it is a feature.
Centralized repos are centralized points of failure, and only as good as they are well managed. NPM versus CPAN, if you will. Any serious project will localize dependencies; even if they are in CPAN, you never know when CPAN will be down or when other unforeseen things might happen.
Instead what we have is that pkgA needs pkgB@then (which happens to be when the author of pkgA last cached pkgB) but my code needs pkgB@now. That's worse in pretty much every way, mostly because there are no identifiers anywhere to clearly work around or even detect the problem. I'm all for "your build can only use a single version of pkgB" (linking two versions of pkgB into the same binary is insane) but I need to say what version that is, not leave it nondeterministic and dependent on uncontrolled, unrecorded state of the machine running the build.
No, you just mirror CPAN. This is already done in lots of shops I know of for PyPI. IME, I've only ever had PyPI down on me once, and there are mirrors (that are usually up) if that is ever the case[0]. I think localizing dependencies as you say is a waste of time.
I do understand the basics of probability. The likelihood of your serving infrastructure or application being compromised is an order of magnitude higher than the most popular repositories in software development. I'm not saying it doesn't happen, but I also don't walk around worried about having an asteroid land on me simply because I understand probability. If it happens, it happens, and we deal accordingly, but using a much more difficult software engineering process because of (arguably) paranoia is silly.
And that assumes the package(s) you're trojaning aren't signed[1].
(I'm not immediately sure if new releases are automagically signed/digested when uploaded via PAUSE, or what fraction of current packages are signed.)
Their movies are great as a computer person and moviegoer, since each usually contains at least one "wow, that must be really hard with computers" moment. Like the fur simulation in Monsters, Inc. or the crowd physics(ish) simulation in WALL-E, or Merida's hair as you mention. Not to mention the incredible render quality progression from Toy Story to Toy Story 3.