> It might make us weirdoes, but when cabling looks this neat it is a sexy and sleek piece of art.
I always tell people that server cabling is as much art as it is science. When I was early in my career, I had some great mentors in this respect, and now when I see a well cabled rack, it really speaks to me.
Just a quick note from an old military avionics tech: "Tape, Lacing and Tying, Aramid" (sorry, can't remember the NATO Stock Number offhand, but I used to order a couple of skeins a week at one time) will cut you up badly until you develop the appropriate scars and calluses. It's basically 25-pound test waxed flat dental floss. Because the knots (a clove hitch secured with a reef/square knot) are quick, easy and repeatable, you'll be through the skin before you notice the beginnings of the damage. (And taking a month off of lacing is enough for the toughened skin to get back to its ordinary, weak self.)
I find cabling is a fantastic visual indicator of quality: if the cabling isn't done right, there is a high probability that a lot of other things aren't done right either.
If you can't do the simple things, how can you do the complex?
Sometimes it's just interesting to walk down the datacenter aisle and look at others' wiring... you can see some where nightmares live, and others which are quite sexy and interesting to study.
As someone who is responsible for data center design and deployment - if anyone on my team was not acutely OCD over how the cabling looked, well, they would not be a member of the team very long!
It's being worked on :) The plans I've seen use 60 GHz spectrum, which is very short-range (shorter than an average datacenter) and line-of-sight. The latest proposal used beamforming to bounce the signal off a ceiling or wall to further reduce interference among nodes. https://www.cs.ucsb.edu/~ravenben/publications/abstracts/bea... Making this automatic would be awesome :)
Probably because there's limited wireless bandwidth. It'll work, but it doesn't scale - doubly so for colocation, where other customers might eat your bandwidth. The "short distances" would help, but there might not be enough throughput to make it worthwhile.
At Blekko we ended up moving 600 servers from one co-location facility over to a neighboring town about 5 miles away. It is a lot of work, and it's a lot of coordination. When I was at Google I thought they were just being arrogant by designing their own racks, building them and shipping them in pre-built "chunks" to the data center. I was very wrong about that; it saves a huge amount of time.
I look forward to a time when a colo facility says something like "We can lease you 50 OpenCompute 2.0 slots for $3,000 a month" and know that I can just populate the hardware, plug my switches into the structured wiring solution and be done.
On lesson 2, power is always the biggest concern when it comes to colo so it may not necessarily be that the facility is attempting to nickel and dime you (although that is often also the case), but that they need to restrict power density to a certain level so they can ensure they have enough power for the rest of the facility and so that they can provide adequate cooling. Pricing is also highly dependent on how much available space there is in a facility; the best deals are to be had when a facility is brand new and needs to fill the space. Whereas a facility that's almost full may stick to full list price and be content to have you walk if you don't want to pay those rates.
On lesson 6, when the hosting provider does own the building, chances are there will be less bandwidth options available within the building unless the provider specifically offers bandwidth neutral services. May not be a concern if you're happy just using the provider's own bandwidth mix, but if you're pushing a significant amount of traffic, or need higher quality transit, you are much better off shopping around for different bandwidth options.
On 2: Many of the providers we looked at did say it was cooling. They would give higher power to single racks, but then they would make us take up more floor space. So really, floor space is often how they express their cooling capacity.
On 6: That is a very good point. We ended up going to the Google building, which has a lot of transit options. Transit is definitely a factor.
I think the biggest lesson learned here is to not co-locate in the first place. Almost everything they encountered could have been avoided by moving to a "cloud" provider (AWS, RackSpace, etc.)
While I get a kick out of a well wired rack, I think it's a waste of time to do work you shouldn't be doing in the first place. And how much time did it take to do all the experimentation with different rack combinations? Purchasing the components, installing them, and then sending them back when they don't fit. Those things don't matter and don't move your company forward, unless you're in the rack management business (like Rackspace or AWS).
Really? Last time the subject was mentioned, the StackOverflow people basically said "if you don't buy your own servers then you don't love computers and we love computers so..." - there wasn't an actual cost comparison, maybe I missed it. It seemed to be more just giddiness at buying servers and wiring stuff up. Which seemed odd, seeing as last I heard, SO doesn't even run their own routing.
That said, buying servers, especially a gen or two older and refurb, can be a major cost win, so it wouldn't surprise me. Running the actual racks, meh, I'd rather have the colo people handle that unless there's so many racks that it's simply not feasible. For a couple dozen servers, I can't see the point in running it all yourself; it's pretty straightforward plugging stuff in, ain't it?
Yep, totally agree. There's this great 'OMG THE CLOUD' attitude here on HN, but really, as I'm sure you guys did, you have to sit back, actually do a cost-benefit analysis (even if it's two guys on a whiteboard and not a full report) and decide what is best for the way the company/product are structured.
Yes, you do give up a lot for the cloud and your example is a great one. However, those types of problems are fairly rare and can generally be designed around.
I think S.O. would be a great service to move off of real hardware. They don't seem to be doing things where dedicated hardware would give you an advantage (big data, video transcoding, 3D graphics, etc.), so why go through the trouble of managing it yourself? Plus, if the system were designed correctly, you get fail-over for "free".
Thanks for posting this. With some new hardware we purchased we'll be doing the same move Broadcom -> Intel, so I think I'll actually be bookmarking this to come back to later!
But... I also highly doubt most cloud providers are bothering to tune NICs. They are probably just throwing KVM/Xen up on some standard hardware (whatever is cheapest with the highest memory density) and opening the machine up for access. There are going to be parts they tune as a default stack, but that far? I somehow doubt it.
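For the curious, here's a minimal sketch of the kind of host-side NIC tuning that sort of move typically involves - ring buffers, interrupt coalescing, and queue counts via ethtool. The interface name and the specific values are illustrative assumptions, not settings from the article.

    import subprocess

    IFACE = "eth0"  # assumption: replace with your Intel NIC's interface name

    def ethtool(*args):
        """Run an ethtool command, printing it first for visibility."""
        cmd = ["ethtool", *args]
        print("running:", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # Enlarge RX/TX ring buffers to absorb traffic bursts (values illustrative).
    ethtool("-G", IFACE, "rx", "4096", "tx", "4096")

    # Tune interrupt coalescing: fewer interrupts per second under heavy load.
    ethtool("-C", IFACE, "adaptive-rx", "off", "rx-usecs", "50")

    # Spread traffic across multiple hardware queues / CPU cores.
    ethtool("-L", IFACE, "combined", "8")

The right numbers depend entirely on the NIC and workload, which is exactly why a generic cloud host is unlikely to tune this far for you.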
There's something between "cloud" and "build your own servers and rack them yourselves" -- I think managed colo/dedicated hosting is probably the sweet spot for most companies which don't need the burstiness of cloud servers and aren't in the dedicated datacenter business themselves (or with unique hardware, performance, security, etc. needs).
e.g. SoftLayer dedicated servers, or a quality local/smaller/industry-specific provider. IIRC hackernews runs on a SoftLayer dedicated server.
I didn't see that as a lesson learned. You have effort involved in moving 'sites' regardless of whether you host your own equipment or 'rent'. If you're on the 'cloud' you have to potentially learn a new provider's API, and test to make sure that the server blocks you rent are of the same level and capabilities as what your old provider had (meaning performance testing, etc.).
Basically you have pain regardless of what you do, one just abstracts a very specific piece of the equation away from you. Frankly I see it as they have the talent in-house to manage their own equipment, and thus they can realize a tremendous cost savings for their monthly bills by not running everything on top of AWS... and no matter which way you shake it, five racks of equipment on a cloud service is going to be more expensive.
It's easy to say this when you're hacking on a side project and just need the smallest instance possible, or just a medium instance. But when you're scaling for millions of users, those costs add up... really, really quick. AWS especially will nickel-and-dime you for the smallest of things, such as the total number of I/O requests.
Plus I find the whole myth about being able to scale faster... somewhat misleading. I use EngineYard, and you need to ALLOCATE a certain amount of space. Sure you can increase it in the future, but you'd need to shut down the instance and create a new one with the extra space... not exactly fast or convenient. And it's not like you can't add an extra hard drive if you hosted your own servers...
I agree that the cloud can make the best case for hacking side projects. It's quick to deploy and quick to destroy. However, scaling your system depends on how you architect the system. Building in the cloud is not the same as building in your colo cabinet.
Your example of moving your instance to a bigger one is a classic example of doing it wrong (so to speak). In the cloud you don't scale up, you scale out: many machines doing the work. And bringing those machines on-line takes minutes, which is certainly faster than shutting down and rebuilding. I've played with EngineYard & Heroku and while I really like their services, they tend to reinforce the "old world" way of doing things. Break out and play with "raw" AWS and I think you'll find a lot to like.
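To make the "scale out" point concrete, here's a minimal sketch of bringing extra workers online (and tearing them down) programmatically. It assumes boto3 and uses a hypothetical AMI ID and instance type; the point is just that capacity is an API call away rather than a hardware order.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")  # assumption: your region

    def add_workers(count):
        """Launch `count` identical worker instances from a pre-baked image."""
        resp = ec2.run_instances(
            ImageId="ami-0123456789abcdef0",  # hypothetical worker AMI
            InstanceType="m5.large",          # illustrative size
            MinCount=count,
            MaxCount=count,
        )
        return [i["InstanceId"] for i in resp["Instances"]]

    def remove_workers(instance_ids):
        """Tear the extra capacity back down when load drops."""
        ec2.terminate_instances(InstanceIds=instance_ids)

    ids = add_workers(3)
    print("launched:", ids)

Of course this only pays off if the application is architected so that any worker can pick up the load, which is the real point of the comment above.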
I think there is always going to be some amount of overhead with physical or cloud services for scaling. I can say, though, that one thing you can do with physical servers which is pretty neat is shut down entire physical nodes when you don't need them, to save on resources. Your scaling software has to know how to do this, but IPMI enables fun stuff like that.
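As a rough illustration, here's a sketch of powering nodes off and on out-of-band by shelling out to ipmitool. The BMC address and credentials are placeholders, and real scaling software would drain and health-check a node before cutting power.

    import subprocess

    def ipmi_power(host, user, password, action):
        """Send a chassis power command (status/on/off/soft) to a BMC over IPMI LAN."""
        subprocess.run(
            ["ipmitool", "-I", "lanplus", "-H", host, "-U", user, "-P", password,
             "chassis", "power", action],
            check=True,
        )

    # Example: gracefully shut down an idle node, then bring it back later.
    ipmi_power("10.0.0.42", "admin", "secret", "soft")  # placeholder BMC address/creds
    # ... later, when load picks up:
    ipmi_power("10.0.0.42", "admin", "secret", "on")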
Is it really worth getting colo space for just one rack of servers? To me this seems to be well below the threshold where owning your own hardware makes sense.
Probably depends on what direction you think you will go in long term. Companies tend to gain knowledge and expertise over time, so it's less common to shift hosting models (except early on).
At just one rack, if you want to own hardware, you would usually just rent a single cabinet (can be pretty cheap depending on location).
The number of servers in a rack is variable depending on the power. We got 208V/30A, so we can pretty much fill up a rack with redundant power.
In our main facility, we currently have 4 racks in use which are not full yet and a 5th already provisioned for growth.
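For anyone budgeting a similar cabinet, here's the back-of-the-envelope math on a 208V/30A circuit, assuming the usual 80% continuous-load derating; the per-server wattage is just an illustrative guess, not a figure from the thread.

    VOLTS, AMPS = 208, 30
    DERATE = 0.8  # typical 80% continuous-load derating on a branch circuit

    usable_watts = VOLTS * AMPS * DERATE   # ~4992 W usable per feed
    watts_per_server = 350                 # illustrative 1U dual-socket estimate
    print(f"usable: {usable_watts:.0f} W -> "
          f"~{int(usable_watts // watts_per_server)} servers per feed")

With redundant A/B feeds you generally size to what a single feed can carry, so that losing one feed doesn't trip the survivor.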
I would disagree. With modern processors and memory you can cram a LOT into one cabinet. We run with Dell C8000s and R620s/R720s and they are 'wow' level fast. Given that you can cram eight nodes, each with dual processors and 256GB of RAM, into 4U with the new C8000s, you basically can have around 2TB of memory and 16 physical CPUs in 4U... that's a lot of hardware.
Addendum: I should say, though, be aware that the C8000 servers are hungry beasts for power. I believe they are nearly 9A at full draw.
I worked at a place where the hardware in one rack easily exceeded 150k. We started saving money vs. a cloud solution in about 6 months. You can pack a lot of firepower in a virtualized environment.
> Our biggest error with this was not making one person ultimately responsible for the physical design. Choices need to be made and not everyone’s ideas can be reconciled with each other and the constraints of reality. For a holistic design, eventually someone has to reconcile reality with what everyone wants or you end up with a bunch of individually well thought out pieces that don’t fit together (as well as a bit of frustration.)
That doesn't just apply to server room design. It applies to just about ANY design. Including software.
Here is the reddit server rack just before we tore it down: http://imgur.com/DlaX4