One problem I see, specifically for the environment, is that "dev" is now a TLD (thanks Google!), so you have to be careful: if you take a shortcut like "web01.dev" you may get a surprise depending on your resolv.conf:
The article describes geographic sub-domains, and uses "nyc" as an example, but that's a TLD now as well. It may be better to use UN/LOCODE as a starting point:
Once I was moved to a new team at work, and the team leader really wanted a team name. I generated like 5 or 10 words from /usr/share/dict/words, one of which was profanity. I could tell that everyone liked Team Profanity, but no one was brave enough to adopt it. I really wish we had, because there was much cursing on that team.
Back at Uni we got four new workstations so obviously I named them 'death', 'war', 'famine' and 'pestilence'. Then shortly after we added a fifth, it ended up being 'mayhem'.*
Some time later we got two more boxen and the female members of our research group were given the job of naming them, and we ended up with 'itchy' and 'scratchy'...
* for the Pratchett readers I was wrong, it should of course have been 'Kaos'...
I really like the functional naming part of this, but I don't like the idea of using random names. I think that server names must be functional, otherwise you will tend to think of them as an abstract box that, if you just install that little extra package, will be a fine webserver on top of that mailer.
Don't do it! If you're labeling hardware and need multiple functions, use virtualisation or containers, and then the host is named after that (e.g. we use Ganeti, so gnt-01, or say kube-01).
At home, I name machines after inspiring political figures, and while I like the naming, it does discourage me from treating those machines as cattle, which I should do more.
I prefer to have mostly meaningless names for machines, and then give them additional names in DNS for their various functions. That way the functions can be moved between machines, and if the function is what you care about, that's how you refer to them (deploy the app Foobar to web01 and web02); but when a machine goes down, you can shout, "clark is down!" web01/dns01/sto01 being down only means the service is down, whereas clark being down is something more severe.
I manage my stuff with Ansible, which makes all this pretty easy.
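For what it's worth, a minimal sketch of what that split can look like in an Ansible inventory (the hostnames clark, lois, jimmy and the group layout are invented for illustration): machines keep their meaningless names, functions are just groups, and a function moves by editing the group, never by renaming the box.

```ini
# inventory.ini -- hypothetical layout: hosts have meaningless names,
# functions (web, dns, storage) are groups that can be reassigned freely
[web]
clark
lois

[dns]
jimmy

[storage]
clark
```

Playbooks then target `hosts: web`, so "deploy Foobar to the web servers" never needs to care which physical box that currently is.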
here, if "fsn-node-01" is down, we know it's a major ganeti node that died. if "unifolium" dies... can you guess what happens?
at first I couldn't either, but then I learned it's an old KVM node hosting a bunch of virtual machines, and it means pain, definitely something severe. yet it's not meaningfully different from "corsicum" dying (which is "just" a VM).
adding meaning to machines reduces the cognitive burden (for oldies) and discovery burden (for newcomers) of figuring out what is what.
that assumes you don't run services bare-metal. if you do, then maybe it makes sense to use meaningful names for those, but in general, I strongly try to avoid doing that anyways... and if I do, having a functional name for that metal is mostly harmless: if the function changes, the name changes, and then I need to do a name change, which I allow for.
but that is rare, because if i dedicate full metal for a function, you can bet it won't be readily available for other purposes...
Oh, and I forgot to mention: at my current job, there was this practice of naming servers after onions (I'll let you guess where that is).
I had no idea how many kinds of onions there were; turns out there are many, especially if you start to take in Latin names and translations...
But then I need to learn how to spell colchicifolium, and neriniflorum, and no, those two are not related (or are they) and what does meronense do again? and oh yeah corsicum is the replica of one of those...
Those naming conventions are cool until you start to document your stuff properly and get serious about training and including new people. Then you should focus on names that are meaningful and easy to remember, type, and generate.
This kind of naming is fine for small projects and personal setups. In professional settings I would suggest using descriptive names and relying more on DNS.
It's not that I don't enjoy fun and personal naming schemes, it's just that it's a constant annoyance when dealing with a large number of different systems.
We've been hired to deal with different companies, who pick some random naming scheme, cars, athletes, plants, cities and so on, and it's confusing as hell. How am I supposed to remember that Ford and Volvo are your two web servers? Now I need to maintain a list of mappings for your servers and look them up every time I need to change your web configuration. Just call them prod-web01.company.com and prod-web02.company.com; it's fine, and everyone will be able to guess what those servers do.
You can also do web01.prod.company.com and web01.test.company.com, but while it looks cleaner (and I personally prefer it), it does hide the "prod" or "test" in most shells, so you constantly need to check that you're not messing with a production box.
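One way around that (my own sketch, not from the comment above) is to make the shell prompt itself flag production, so it doesn't matter where "prod" sits in the FQDN. The hostname is hard-coded below for illustration; a real ~/.bashrc fragment would use `$(hostname -f)`:

```shell
# Toy demo: colour-code the prompt by environment so "prod" can't hide
# at the tail end of web01.prod.company.com.
host="web01.prod.company.com"   # stand-in for $(hostname -f)
case "$host" in
  *.prod.*|prod-*) PS1='\[\e[1;31m\]\u@\h\[\e[0m\]:\w\$ '   # bold red on prod
                   echo "prod box: $host" ;;
  *)               PS1='\u@\h:\w\$ '
                   echo "non-prod box: $host" ;;
esac
```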
Functional names for servers are IMHO one of the worst possible options, because functions drift over time.
Have individual hw named in unique way that isn't related to functionality it runs, then use CNAMEs specific to functionality - i.e. never point a DNS client at "freddy.dc.somecorp.com", you point it at "dns01.dc.somecorp.com".
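A rough sketch of that in a BIND-style zone fragment (the address and the web03 line are made up; freddy and dns01 come from the comment, and 192.0.2.10 is a documentation-range placeholder):

```
; fragment of dc.somecorp.com -- illustrative only
freddy  IN A      192.0.2.10   ; the hardware's one unique name
dns01   IN CNAME  freddy       ; function names are aliases...
web03   IN CNAME  freddy       ; ...and can be repointed at other boxes later
```

Moving a function to new hardware is then a one-line CNAME change, with no renaming of the machine itself.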
The decision on what machines need unique names, not just pseudorandom IDs or similar crap, is well described by another comment here https://news.ycombinator.com/item?id=26054487 - which I like to put as "named systems are the systems you care about"
In fact, I'll go with seemingly unpopular opinion that there are only pets, never cattle. The pets just happen to have components, but at some level of the stack you're hitting a precious pet. Even if said pet is "us-east-1 Lambda service"
There is a good point to the drift of functions. You just need to run this one thing, and you do have that server which could just run it. It's a very real thing.
Where I see this most often is with companies that place an unusually high cost on servers, virtual machines or containers. In those cases I normally just call them application servers; that's as good a description as any. The only difference is that I won't think twice about deleting a server called app01 and recreating it using Ansible, Puppet, docker-compose or whatever deployment tool that customer uses.
In my mind there aren't "pets" any more, and if there are, that's a mistake that needs to be corrected. The customers I deal with who have pet servers are the most dysfunctional and the ones with the most challenges, both technically and organisationally.
Why can't you rename the host if its function changes? Like if "the 1U box with asset tag XY1Z234" goes from being a web server to a DB server, why not reinstall it and call it db03?
I think the approach of naming the host "freddy" works for small installations (and all installations in 1990 were small), where reinstalling a single server is a manual process and impacts your capacity, and where humans remember "Oh, freddy is the one with a very large hard drive." If you've got any sort of automation, let alone virtualization, you should be keeping facts about hardware somewhere other than people's heads and so you can index them by the actual identity of the hardware - the fact that web04 had a large disk last year is remembered by a field on your inventory entry for XY1Z234, not by any human. And reinstalling web04 as db02 is just a matter of running a script from the comfort of your work-from-home laptop - certainly no need to visit the datacenter.
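As a toy sketch of that bookkeeping (all field names and values here are invented), the inventory is keyed by the permanent asset tag, and the role name is just another mutable attribute:

```python
# Hardware facts live under the permanent asset tag, not the hostname;
# "web04" or "db02" is just a field that can change on reinstall.
inventory = {
    "XY1Z234": {"disk_tb": 12, "ram_gb": 64,  "role": "web04"},
    "XY1Z235": {"disk_tb": 2,  "ram_gb": 256, "role": "db01"},
}

def reassign(inventory, asset_tag, new_role):
    """Repurposing a box updates one field; the asset tag never changes."""
    inventory[asset_tag]["role"] = new_role

reassign(inventory, "XY1Z234", "db02")
print(inventory["XY1Z234"]["role"])  # -> db02
```

The fact that this box has a very large disk stays attached to XY1Z234 no matter how many times its role flips between web and db.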
I think this lines up with your point about pets being higher levels of abstraction - I wouldn't point any DNS client at dns01, since that's a specific server, I'd point it at a virtual IP that can be bound (possibly multicast) by any dnsNN server that happens to be up. That virtual IP is the pet and the API surface, and it belongs to no actual server.
Maybe it depends on how big pockets one used to have, and how willing one was to follow vendors jealously declaring their software should be the only thing running on the server.
In my experience, unless a server was a VM host (or, these days, a k8s cluster), it was rare that it would be single-function. We just didn't have the money to allow such waste of resources. So unless you had a single piece of software with enough requirements to hog the whole box (usually DB servers), then everything tended to have multiple functions, and reimaging was a rarer event.
So I prefer to have "Freddy the web serving system", which might contain several machines named in the style of R04-24.SFO.i.contoso.com ;) - because "freddy the web serving system" is the part that I care about, and individual components enter everyday care only in the metrics of the capacity planning dashboard.
> I'll go with seemingly unpopular opinion that there are only pets, never cattle.
This is emphatically not true beyond a certain point, perhaps around 500 boxes or so. My last job involved working on a fleet of about 15,000 hosts; I can promise that extremely few of the servers were precious pets. They really were cattle, and we would blow away and reimage them into different functions (with hostnames matching the function) moderately frequently.
Ehh, it's actually a topic for a yet-unfinished article of mine, but note that I pointed out that "us-east-1 lambda service" is still a pet.
In the case you described, whatever drives the replacement system is the pet - the so-called cattle are just "components" that became interchangeable for the bigger "pet" system. As another commenter put it, you need good names at the edge - where the edge for me is the level at which you care about the system. We used to care about individual components of a computer, and those would be effectively "pets" of the time. Nowadays, you do not give a name to a DIMM in your server nor take extra care to ensure it works. And so on, till you reach the level you actually have to care about, and that's where you need to name it (IMO), because that's your pet, even if it's composed of what people call cattle.
As for hostnames matching functions - I found it much more useful to make hostnames reflect the basic type of hw and physical location, and to keep the name as long as the location didn't change. The rest depended on what functions were currently applied to the machine.
I used to name physical servers like pets (greek gods or whatever) and virtuals like cattle (function), with the idea that, yes, function of physical machines can change over time...
But the problem with that logic is that the name is not only attached to the metal, it's set (and mostly so, I'd argue) in software, as the OS hostname. And then what happens next is that the motherboard of that precious pet melts down and you scramble to move those drives to a spare, and now you have two metals with the same name.
It just doesn't work so well.
And your fancy random or pseudorandom looking naming scheme is cool and fun until someone new comes in and asks you wtf colchicifolium is...
Name machines on purpose. If the purpose changes, the OS will likely follow as well. If not, renaming shouldn't be that big of a deal, or, to put it another way: if you can't rename, you can't reinstall, and then you'll fail.
Now naming hardware is the "fun" part. When was the last time you did an inventory of your motherboards? Or is it disks we're naming? The raid array or individual ones? How about memory sticks? Or do you only name precious server cases? ;) After all, that is where the label ends up...
Used to name arrays, at one point had a hell of a problem because we found out that we had no way to consistently name, and thus communicate, PCI-X I/O-bays (it was a big POWER server...)
But if a pet "melts", well, the pet died. You moved the functions over to another pet. Whether it's a move from Zeus to Jupiter, or rebuilding your precious cattle management system on eu-central-1 after us-east-1 burned down and you got slapped with data locality requirements, is less relevant ;)
Cute naming schemes are a holdover from the days of physical servers when location, function, etc were subject to change so an immutable machine name couldn't contain them. Now that most things are virtual and easily created/destroyed there's no good reason _not_ to name functionally. The most common refrain I've heard in defense of these archaic naming practices is that "we don't want the hackers to know which server does what" but if they've managed to get your root DNS zone file or a shell on a box then you're already pretty far gone anyway (not to mention that nmap exists). All it does is impede productivity and create the illusion of security.
If you are dealing with assets in a physical environment then obviously the above doesn't apply.
As a side note: a long time ago (before I knew what I was doing) I ran a Windows Server environment for my dad's business. The main server was Jake and the off-site backup was Elwood.
> Yes, baremetal, the only way to guarantee performance!
Single function bare metal, that is. At which point, the boring functional naming scheme re-applies easily.
If you're running multiple different functions on a single box, how are you guaranteeing any performance for any single function? How does that differ from using a hypervisor with similar limiting features?
You'll need a mapping for the server names anyway. If you have a large datacenter, I hope it's stored as configuration in your infrastructure-as-code tool; if it's small, you won't need to check it every time.
Anyway, is prod-web01 that server with an outdated web server that you keep because of that one system that couldn't migrate, the one with a Java server that runs those proprietary tools, the one with IIS that runs that FOSS code that the developer decided to write in C#, the one supporting your main applications or the one you insulated in a special network because it hosts a powerful API that you don't want to expose?
"prod-web02" implies that this is the second web server in a cluster of web servers. That makes a lot of sense in 2021, where "prod-web02" is likely an AWS instance or a docker VM that is doing nothing but serving web pages, and which exists in some cloud server where you will never see the physical machine it is running on.
This was written in 1990, though, and there was no cloud, and VMs were quite rare things, (and we had to walk to school uphill, both ways, in the snow). Machines were physical things, and no one had the budget to buy an entire computer to be the second production web server - a machine ran a lot of different daemons that did a lot of different things, and the roles and responsibilities of a given physical piece of hardware would change as time went on. Which is still true today, but today it's just that a machine runs a lot of docker containers instead of running a lot of daemons, and we don't care where the docker containers are running, whereas back then we very much cared what machine it was running on, because sometimes we had to go to that machine and reboot it or kick it or install more RAM.
At the time, I was working at Alcatel (before it was Alcatel-Lucent, before that became Nokia), on a 450Gbps switch which was used for things like routing satellite TV feeds or handling distribution from big fiber backhauls, or running LANE for small nations. On the third floor, we had several racks of these switches in "the lab" which we used for testing and development work. While you could push a new software load from the comfort of your desk, if you borked things badly enough you'd sometimes have to go downstairs and plug a serial cable into the front of the control card so you could go poke around and force the card into a sane state.
So, when a new hire came in, they might say, "Hey mrweasel, I uploaded a bad load to Spock, and it's stuck in a boot loop. Where can I find that physical machine?" And you'd say, "Head down to the third floor, turn left out of the elevators, walk to the second last row of racks. On your left there should be switches named "Picard", "Riker", "Deanna", etc..., and on your right should be "Kirk", "McCoy", and friends. Spock is third on the right between McCoy and Uhura."
That conversation is way more confusing if all of these machines are named "test01-test20". You couldn't even name them that - there were lots of different products in different areas of the lab, so you'd need the product name in there, and they'd be "rsp7670-test01" through "rsp7670-test20".
And, when you want to push a new build from your desk, it's way less likely you'll mistype "picard" when you want to update "data", but it's quite easy to accidentally clobber "rsp7670-test05" when you mean to overwrite "rsp7670-test06", which was sure to summon an angry developer to your desk asking why you just killed their 48 hour validation testing, 45 hours in.
I had the joy of working on several ATM to the desktop campus networks using LANE and an ATM WAN using LANE to connect several thousand locations for an organization belonging to a large nation state. Thanks for triggering some horrible memories. ;)
I spent an entire weekend, dialed in by modem from Canada to a certain oil exporting middle eastern country, trying to figure out why the control plane for the ATM WAN that ran LANE for all their hospitals and 911 services was down. Fun times. :P
Best naming scheme I ever saw was at Harbinger in the late nineties. They used the periodic table. The last octet of the IP address corresponded to the atomic number, and you could use the full element name, or the abbreviation. So carbon.harbinger.com was x.x.x.6 and c.harbinger.com was a CNAME to carbon. oxygen or o was .8, etcetera.
> So carbon.harbinger.com was x.x.x.6 and c.harbinger.com was a CNAME to carbon. oxygen or o was .8 etcetera.
No wonder helium was mostly hanging ...
Sorry, could not resist. It is a clever naming scheme. At one site a client used names from The Three Stooges, then they got the 6th server (exhausting the list of the lead characters).
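The Harbinger-style scheme described above is mechanical enough to generate; a small sketch (the subnet, record layout, and function name are invented, and only the first ten elements are listed for brevity):

```python
# Sketch: last octet = atomic number, full name gets the A record,
# chemical symbol gets a CNAME to it.
ELEMENTS = [
    ("hydrogen", "h"), ("helium", "he"), ("lithium", "li"),
    ("beryllium", "be"), ("boron", "b"), ("carbon", "c"),
    ("nitrogen", "n"), ("oxygen", "o"), ("fluorine", "f"), ("neon", "ne"),
]

def records(subnet="10.0.0"):
    """Yield (A record, CNAME record) pairs for each element."""
    for octet, (name, symbol) in enumerate(ELEMENTS, start=1):
        yield f"{name} IN A {subnet}.{octet}", f"{symbol} IN CNAME {name}"

for a_rec, cname_rec in records():
    print(a_rec, "|", cname_rec)
```

So carbon lands on .6 and oxygen on .8, exactly as in the anecdote.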
One company that I worked at... while the hardware people had their boring "r17s4ad" type name (rack 17, shelf 4, application, development), our team used the naming scheme of Caribbean islands. This worked rather well.
Unfortunately, I only remember the test and production server names. The test systems were "Trinidad" and "Tobago". The production systems were "Nevis" and "Nassau" (the app started with an 'N').
The thing that made this work really well was that the first syllable hinted at what the rest of the word would be, and everything beyond that reinforced that you read it right. This even held with foreign accents: while slightly off, they still reinforced the "you heard this right".
The machines _also_ had names of "test01" and "test02" or similar... but we (as devs) never used those names because you had to listen to the end of it to be sure that you had the right machines.
Alternative method for Arch Linux: Install the `rfc` package, then find the RFC at /usr/share/doc/rfc/txt/rfc1178.txt. A very nice package to have installed when travelling far from an internet connection.
> Nobody expects to learn much about a person by their name.
That's a good point, but humans are very different to computers - machines (be they physical or virtual) are provisioned for a specific purpose (even if that is to be "Fred's PC"), whereas humans need time to grow and can decide for themselves what to do as a career / what their hobbies will be, and change it any time.
But realistically, yes, the RFC is right that you may end up with machines whose purpose changes, or multiple machines with the same purpose, etc. But when they don't, it is much easier to have meaningful names. If you name them differently, then please make sure all developers can access the list/mapping document and that it is kept updated. It's frustrating if you want to investigate some production problem but end up looking at the wrong server's logs because devops didn't tell anyone they moved an application/website etc.
I like someone else's comment about app01 being easier to reason about and recreate than something named more specifically, and in the modern world it's easy to spin up a new docker instance or VM, so there's less need for "let's just add this small service on that machine because it has spare capacity" (where capacity could be CPU/RAM etc.)
A long-time personal favourite naming scheme of mine has been Pokemons (and by extension, Digimons).
There's a good selection, it's pretty varied, and there's no shortage of 'em (assuming you don't exceed 100-200 new systems in a 3 year period). This also has an accidental side-benefit of not being tasked to name new hosts at work, unless you really want to call the new database server Stufful, for example.
Another thing to keep in mind with more descriptive hostnames like `database` is to number them from the start. It's a very minor thing, but after you have three hosts with one of them missing the number, it's going to stand out like a sore thumb, and it could be a major undertaking to change that name everywhere it's used.
When it comes to project names, there is a certain level of permanence that name (or codename) is going to have. Once chosen, that name will be thrown around in the codebase almost universally. The same does apply to the hostnames, at least in part when it comes to configuration files (and by extension, certain hard-coded hostnames that could linger around in the code years after the host in question has been decommissioned).
The weirdest naming scheme for computers I've seen is the one used some years ago by my college: It named Linux workstations after Linux kernel committers' email usernames!
There were the better-known folks like linus (Linus Torvalds [1]) and gregkh (Greg Kroah-Hartman [2]), but also relatively more obscure people, like shemminger (Stephen Hemminger [3]) and stelian (Stelian Pop [4]).
I didn't recognize most of those names at the time, but now that I do, I wonder what those people would have thought about having a large organization's computers named after them. A bit creeped out, I would think.
The most sensible naming scheme for us was to distinguish them by index. But there were a few important dimensions. TPUs have many sizes, which means some are larger than others; if you're using a v3-256, you're very likely the only researcher doing so. They are also distinguished by type: v3 is more powerful than v2. Finally, they are region-based: the less powerful v2s are in the US, whereas the v3 fleet is mainly EU-based.
That led to the convention of tpu-v3-8-euw4a-1, tpu-v2-256-usc1a-0, and so on.
The "tpu-" prefix might seem redundant, but I find it's helpful in conversation. That's a personal preference though, and if I had to do it again I'd probably drop the tpu- prefix entirely.
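That convention is regular enough to build programmatically; a tiny sketch (the function name and parameter names are mine, not from the original setup):

```python
# Sketch of the tpu-<type>-<size>-<zone>-<index> convention described above.
def tpu_name(tpu_type, size, zone, index, prefix="tpu"):
    """Compose a TPU hostname; pass prefix="" logic elsewhere to drop it."""
    return f"{prefix}-{tpu_type}-{size}-{zone}-{index}"

print(tpu_name("v3", 8, "euw4a", 1))    # -> tpu-v3-8-euw4a-1
print(tpu_name("v2", 256, "usc1a", 0))  # -> tpu-v2-256-usc1a-0
```

Because every field is positional, the names also sort and grep predictably in scripts.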
I found this scheme was horrible for VMs though. TPUs are often used for specific training runs, and the scheme above is easily added to bash files / config scripts. But for VMs, you're often SSH'ing into them all the time.
Ultimately we started naming the VMs after the researchers who originally needed them. Our current primary training box is song.tensorfork.com, named after researcher songpeng who it was created for. So the SSH scheme was pleasant: song@song.tensorfork.com for him, shawn@song.tensorfork.com for me, arfa@, aydao@, etc.
When arfa needed a VM, I simply named it arfa.
All other more complicated naming schemes failed with time. No one (including me) could remember long VM names, let alone ones with numbers in them.
The other scheme that persisted was to use anime characters, as emersion mentioned. Tensorfork itself runs off of vegeta, which is my personal Hetzner server. "goku" was one of our primary workhorses at one point, due to its large VM size.
Our final two VMs are named "test" and "nuck", which also seem to work quite well (much to my surprise). "Is test down?" is almost completely unambiguous. And it's easy to remember which one is which: "nuck" is in Canada, so therefore "test" is the one in Europe.
A pattern emerges here: most of our VM names are short, four-letter identifiers: arfa, song, test, nuck, goku, with vegeta being the standout. All other conventions failed with time.
For personal stuff (and even early machines for some companies which blurred the lines) I always chose the names of forests in the Magic: the Gathering multiverse. I try and help keep this page updated purely for that purpose:
These days, for work, it’s just boring unambiguous stuff. I think with more cloud infrastructure and the rarity of shared unix servers with home directories for people, it’s rare that I have any emotional investment in a machine. They’re basically soulless now.
To date, Dryad Arbor is still one of the coolest cards in MtG, simply because of both the simplicity and the universality of its quote. When it came out, I was very surprised by the idea that a land could also be a creature. These days we have way wonkier mechanics, but that novelty along with the quote has a special place in my heart.
An opponent being able to remove one of your lands with small creature removal or any ping effect or -1/-1 counter is scary.
It sees play (I think) in Legacy and Commander where there are a bunch of specific powerful ways to abuse it (e.g., combo off early with Arbor + Gaea's Cradle to generate a bunch of mana), but you're not going to pull that off often in cube, so it would mostly be a basic forest that your opponent can easily remove.
For personal machines I use fiction locations from media I enjoy. My laptop is Hyrule, my desktop is Konoha, and my Pi is MotherBase, but I spell mine all lowercase.
For servers I run for my roommates and me (Jellyfin and the like), we name them after quirky students at our university whom we appreciate.
The IP datagram is printed, on a small scroll of paper, in
hexadecimal, with each octet separated by whitestuff and blackstuff.
The scroll of paper is wrapped around one leg of the avian carrier.
A band of duct tape is used to secure the datagram's edges. The
bandwidth is limited to the leg length. The MTU is variable, and
paradoxically, generally increases with increased carrier age. A
typical MTU is 256 milligrams. Some datagram padding may be needed.
Upon receipt, the duct tape is removed and the paper copy of the
datagram is optically scanned into a electronically transmittable
form.
It's easy to be nostalgic for the days of big iron servers or clusters that were around for years as a sort of network landmark. We used to have dev and QA clusters named after movie monsters, and you'd get some great hallway conversations about who was using Godzilla that week.
Made for great mascot toys and posters on the rack doors.
A server can be around for years and years while being different underneath. For instance my main debian package server has been "basson" since 2003. In 2003 it was a large, old SGI server; later on it became a 1U rack server, which was replaced a couple of times; nowadays it's a VM. But the internal sources.list still points to "basson" :)
That's a great scheme. I hope that one day amongst the university computer labs named for Greek/Roman mythology or rock artists, we have a lab of anime characters. At least at my uni, I think a lot of people would get a kick out of that.
For some reason, when doing a fresh OS install, the hostname step is the bottleneck and takes too long to complete. Distros should really prioritize optimizing this part of the installer!
Back when I was a sysadmin at a university CS department, we always picked namespaces for machines---everything in a given namespace had related names and were identical (or nearly identical) machines. If someone mentioned toque or glenfiddich, you could tell what lab they were in and what kind of machine they were.
I always wanted to use 'Starships from Iain M. Banks' science fiction', which is why no one ever listened to me.
For personal machines, I have been using adjectives that start with 'i'.
I use Harry Potter character names. There were hundreds of them mentioned offhand in the books, so I usually use obscure ones. They have the benefit of being easy to pronounce and spell: Mafalda Hopkirk, Silvanus Kettleburn, Irma Pince, etc.
The biggest mistake I've made, once myself, and once inheriting a network someone else passed on to me: Picking a theme that's too small.
I inherited a couple servers "jules" and "vincent" [after the Pulp Fiction characters]. I added mia and ringo, I started finding remaining names weren't great. Butch is a bit homoerotic. "zed" is one I've used, but I can't wait for the day someone without a sense of humor figures that out. I've also used "chopper" now. "Marsellus" is too hard to spell. "watch" is next on my list, but that's really dredging the bottom of the barrel.
The other network, we used Scooby Doo characters because there was no way that room would ever have more than five systems... of course the next tranche of added systems had to go with a totally different theme. So the room now has the mystery machine occupants, and ... dragons from how to train your dragon.
Some time ago I got into the habit of naming my home network after Pokemon that I happened to think fit well. My Windows desktop is Charizard and its Ubuntu dual boot is DarkCharizard. My fileserver/Docker box is Metagross. My wireless SSID is Raichu, my laptop is Pikachu, and my phone is Pichu. And so on.
At a previous employer many moons ago we had three SunOS servers named Shooty and Bangbang after these two clueless coppers in Hitchhiker's guide to the galaxy. The third was named tartsdrawers as it was up and down all the time. Good memories.
Periodic table entries here. Lots of ways to divide that space, but generally for me laptops were lighter elements, desktops heavier, and servers the heaviest. Virtual machines were all heavily radioactive elements (e.g. Uranium and Plutonium). Can't remember what I used the noble gases for.
Now I don't bother and just select the default. Or name it after where it's going to be, or what it's going to do, or when I got it. If I'm going to repurpose or move it I'll probably just reimage it anyway.
The wifi is still named "Periodic" though and it still uses a chemical makeup as the password (e.g. c3h5n3o9).
In my own private home lab -- which has grown way too large -- I like using names of the Garbage Pail Kids [0] as hostnames.
There's a few benefits to this naming scheme, when dealing with physical machines at least: 1) you can usually find a name that suits the "temperament" of the particular host and 2) you can tape the trading card to the host (or the rack, next to it) to make finding / identifying it easier.
Like a lot of things, people can remember names better than numbers. Genes, for example: it's easier to remember their names, lz, wnt, than their associated gene ID numbers. But like stars and many other things, there are too many to give each a unique name.
Naming can make sense; our old cluster was named after orchestra parts. When you logged in you were placed in the lobby. The machines were clustered (violin01, violin02, tuba01) and grouped by function type (percussion were the web servers).
New cluster it's login01, login02; the work cluster names I don't remember...
On the other hand, I really hate code names for software versions... I can never remember which name is which version. I wish everyone would just say “16.04” and “18.04”, that makes it really obvious which version came before the other one.
I never bothered to figure out if Ubuntu kept parts of the name between the 16.04 and the 16.10 version, or if they just used the next letter in the alphabet.
The ISO images are numbered (ubuntu-server-16.04.02-x64.iso or whatever), thankfully, so I have 4 Ubuntu templates: "leaving LTS this year", the next two LTSes, and whatever the 9-month version is currently (20.10 being the current 9-month version).
Thankfully qemu/docker/lxc have made it easy to "freeze" a stable, working machine at 16.04 (or even 14.04, if you had the foresight to set it up on qemu at least). But now I have tons of naming issues. We use a pooled hypervisor system, so sometimes I name things after which pool server they are on, because I will spin up two VMs on different servers in the pool with the same VM name (say asterisk-voip); when you log in it will say asterisk-pve2 or asterisk-wok3 as a quick "hint" as to where I actually "finished" installing it.
For DNS I only name stuff when an "app" requires it, either for TLS/SSL or whatever reason. I don't maintain a real DNS server anywhere, and I really should, but I don't like naming things, for literally all of the reasons hashed over by everyone in this thread!
Everything in our system is named <purpose><number>.<location>, but that is because everything is set up automatically by provisioning systems that way. We have thousands of servers in hundreds of data centers, so the only time you ever go to a particular machine is to debug something.
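A minimal sketch of that kind of convention in shell; the purposes ("web", "db") and location codes ("fra", "ams") here are invented examples, not the poster's actual ones:

```shell
# Hypothetical generator for <purpose><number>.<location> names.
# All purposes and location codes below are made-up examples.
make_hostname() {
  # $1 = purpose, $2 = instance number, $3 = location code
  printf '%s%02d.%s\n' "$1" "$2" "$3"
}

make_hostname web 1 fra    # web01.fra
make_hostname db 12 ams    # db12.ams
```

The zero-padded number keeps names sortable, and because the whole name is derived from facts the provisioning system already knows, no human ever has to pick one.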
I'm assuming (based on how they described the grouping) that percussion01 didn't exist, but perhaps cymbal01 did (since a cymbal is a type of percussion instrument). Therefore cymbal01 would be a web server of some type. The benefit of naming it this way is that you could have multiple types of web servers (internal, dev, prod), and by using a more generic name you could more easily change the function of that web server (so cymbal servers could be dev, then move to prod, without needing a name change).
My feeling, though, is that you care more about the function of a machine than the physical machine itself. Why not change the machine name when you change its purpose? There is no reason you need to know that the current prod web server used to be the dev server.
Given that it seems you don't upgrade Linux installations between major revisions, I use short names that are unique in the first character or two and encode the OS/version. Sure, it doesn't scale much, but I don't have many servers, and ssh u<tab> gets me where I need to be.
Over 5 years working at Amazon (quite some time ago; I don't know what they do now) we happily violated "no domain names" without any issue for Linux servers. If you don't have Linux boxes that span different networks, it causes no problem for a server to have one single domain name with the hostname being the same. Linux boxes aren't proper security devices or routers, so don't use them as such and don't multihome them. The routers and switches were given short names. These RFCs simply don't capture that there are rules you can adapt, and if you're big enough (which is not terribly big) you can discard some of these suggestions completely. There is no one-size-fits-all. I ignored this RFC's suggestion at a fairly massive scale for 5 years and never had an issue.
Also don't fall into the trap of just assigning random serial numbers to servers. I later worked somewhere that did this and it makes it difficult to communicate about the servers in the middle of outages. I've had communication issues where I've been talking to another engineer and I was using the first hex digits of the server name and they were using the last as shortcuts and we thought we were logged into different servers and it was the same one. You hardly ever want humans dealing with your server names, but when humans do need to use your server names it is one of the times that really matter because shit is on fire.
Group them by the single purpose of what the cluster does, with some kind of incrementing number. The idea of using theme names rather than "project" names is also deeply, 100% wrong. When you have 100,000s of servers you run out of theme names, and you'll fail to remember the naming schemes in the middle of an outage. Name them after what the servers do, and keep the names more or less reflecting their purpose. Consider carefully some kind of numbering scheme that keeps the short names unique across datacenters, so you don't have a dozen foo-101 servers. You may want to use incrementing serial numbers for both datacenters and cluster members, something like "foo-1-101", numbering your datacenters (or your logical clusters, if you're really big and stamp them out 30,000 at a time or something).
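The purpose-datacenter-serial idea above could be sketched like this; "foo" and all the numbers are invented examples:

```shell
# Hypothetical sketch of purpose-DC-serial naming, which keeps short
# names unique across datacenters. Names and numbers are made up.
cluster_member() {
  # $1 = cluster purpose, $2 = datacenter number, $3 = member serial
  printf '%s-%d-%d\n' "$1" "$2" "$3"
}

cluster_member foo 1 101   # foo-1-101
cluster_member foo 2 101   # foo-2-101 -- same serial, different DC, no collision
```

During an outage the name alone then tells a responder what the box does and roughly where it lives, which is exactly when that matters.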
Oh right this is the RFC from 1990. Yeah, shit has changed, this RFC needs to evolve.
Back in 1990 when this was written a single system admin hand managing 20-30 servers was a lot. Web didn't exist. I can't recall any kind of load balancing or much clustering. You might have SunOS boxes running RIP doing routing across internal subnets that were 10baseT. NAT and firewalls weren't used much at all and servers would just sit on public IPs. This RFC is prehistoric.
I am fondly recollecting my time as an undergraduate at CMU in the 80s. All the machines were named from cities and towns in Pennsylvania. Old-school bare-metal on-prem indeed. That time has largely passed as machines are ephemeral. Functional names make more sense now.
Resource tags, like the ones supported by VMware, AWS, and other cloud providers, would be more important to me than a server name. I suppose for old-school bare-metal on-prem, you could put tags in DNS TXT records and some database.
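For the DNS TXT idea, a hedged sketch of what that might look like in a zone-file fragment; the host name, tag keys, and values are all invented examples:

```
; Hypothetical zone fragment: key=value resource tags in TXT records
clark   IN  A    192.0.2.10
clark   IN  TXT  "role=web" "env=prod" "owner=platform-team"
```

Any resolver can then fetch a machine's tags with an ordinary TXT lookup, no extra inventory service required, though you still want a database as the source of truth.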
For virtual machines I tend to use names that belong to some larger entity and name the hypervisor after that entity: for example Saturn and its moons, Africa and its countries, or an author and their works.
> Certain sets are finite, such as the seven dwarfs. When you order your first seven computers, keep in mind that you will probably get more next year.
We ran into this years ago when we named our machines after the Marx Brothers. We started out with Harpo, Groucho and Zeppo. When two more arrived, we used Chico and Gummo. IIRC we added Karl, Deutsche, Skid, Birth and Spencer before giving in and adopting a proper 'cattle not pets' convention (which, by the way, isn't covered by TFA).
> a proper 'cattle not pets' convention (which, by the way, isn't covered by TFA).
It is:
...
Of course, they could have called the second one "shop2" and so on. But then one is really only distinguishing machines by their number. You might as well just call them "1", "2", and "3". The only time this kind of naming scheme is appropriate is when you have a lot of machines and there are no reasons for any human to distinguish between them. For example, a master computer might be controlling an array of one hundred computers. In this case, it makes sense to refer to them with the array indices.
I can't find the post that linked it, but it had a very nice scheme.
EDIT: found it: https://mnx.io/blog/a-proper-server-naming-scheme/
EDIT2: one-liner to get a random word from the file:
tail -n +2 wordlist.txt | sed -E 's/\s*(\w+)\s+/\1\n/g' | sed -E '/^$/d' | shuf -n 1
i.e. <skip first line of file> | <split lines into words> | <remove empty lines> | <choose random line>
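Roughly the same thing can be done with a single tr pass instead of the two sed invocations (assuming GNU coreutils); wordlist.txt here is a stand-in example file created just for the demo:

```shell
# Sketch of an equivalent pipeline, assuming GNU coreutils:
# squeeze every run of whitespace into a single newline, skip the
# header line, drop empties, pick one word at random.
printf 'header line\nalpha beta\ngamma\n' > wordlist.txt
word=$(tail -n +2 wordlist.txt | tr -s '[:space:]' '\n' | sed '/^$/d' | shuf -n 1)
echo "$word"   # one of: alpha, beta, gamma
```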