Hacker News | halfstar91's comments

Why is my 14-year-old niece now collecting vinyl? I can guarantee it's not nostalgia. There's obviously more at play there, even acknowledging your point about relative market size.


Perhaps it is _anemoia_ - nostalgia for a time you've never known https://www.dictionaryofobscuresorrows.com/post/105778238455...

In this case, it's for the harmless charm of an imagined past, but the same forces are at play in some more dangerous forms of social conservatism.


It's a very narrow subgroup.

But things can coexist. It's now easier to create music than ever, and there is more music created by more artists than ever. Most music is forgettable and just streamed as background music. But there is also room for superstars like Taylor Swift.

Things don't have to be either-or.


How many 14-year-olds do you know who collect vinyl?


The medium is the message. I know several people born post 2000 who are embracing records and tapes.


I started when I was pretty much exactly that age, ten years ago.


What happens if nobody wants to use your spot to stay in? Presumably you never build up credits and can't use the service effectively.


Probably an interesting business challenge. I could see a few options:

1. Add the ability to buy credits. Hurts the "community" aspect, but gets around that problem and acts as a revenue stream.

2. "Joe Smith is a new host! Camp at his place for zero credits in exchange for writing a review!"

3. Just accept that people in undesirable locations don't get to use the tool.


Incrementally usually :)


Yes, but a common pattern for smaller DBs is, for example, to run a full backup weekly and then daily incremental backups. Also, SQL dumps make it easier to do partial recovery, e.g. of a single table.

At this size class (~1 TB), I see PITR backups using something like Barman as the only viable alternative. (Of course, you can also copy the whole VM/disk, etc.)
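For the single-table case, pg_dump's custom format plus pg_restore --table is usually all you need. A rough sketch below (Python only as glue around the CLI tools; the database name, table name, and paths are made up for illustration, and the daily incremental/PITR side isn't shown):

  import subprocess
  from datetime import date

  # Hypothetical names, for illustration only.
  DB = "appdb"
  DUMP = f"/backups/{DB}-{date.today()}.dump"

  # Weekly full logical backup in pg_dump's custom format (-Fc),
  # which is what lets pg_restore pull out individual tables later.
  subprocess.run(["pg_dump", "-Fc", "-f", DUMP, DB], check=True)

  # Partial recovery: restore just one table from that dump.
  subprocess.run(["pg_restore", "-d", DB, "--table=orders", DUMP], check=True)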


If you think Kubernetes is unnecessary overhead, you've never operated at significant scale.


Actually, I have experience working in high-load environments operated by k8s.

But for 99% of projects I see, it’s a waste of time and resources (mostly people's time, not just CPU). HN is a perfect example of a project that doesn’t need it, no matter the traffic.

If you need some additional flexibility and scalability over a “bare metal” setup, you can go far with just Docker Compose or Swarm until you have no choice but to use k8s.

Again, if you know what you are doing.


I worked in an academic library where the scale was on the order of hundreds of users per day, but many of those users were faculty/researchers spread out across the globe who got very grumpy when whatever online archive/exhibit was not available.

I migrated our services from a very “pet”-oriented architecture to a 4-node Proxmox cluster with Docker Swarm deploying containers across four VMs, and it worked great. Services we brought up on this infra have had 100% uptime to this date, through updates and server reboots and other events that formerly would have taken sites offline temporarily.

I looked at k8s briefly and it seemed like total overkill. I probably would have spent weeks or months learning k8s for no appreciable advantage over swarm.


And how many k8s believers ever reach significant scale? It's just like back when NoSQL was the trendy thing and people thought a couple gigabytes meant "big data". Mostly it’s simply cargo culting.


Brilliant rephrasing of my point.


Not at all; k8s is not designed for very large scale. Unsurprisingly, FAANGs don't use it to manage their own platforms.

Edit: Google's Borg is a very different beast.

Edit: no need to patronize me. I worked on massive-scale deployments; otherwise I would not be commenting.


Many large organisations use Kubernetes (Google, Spotify, IBM, etc.). Regardless, large scale and very large scale are different. Kubernetes is well suited to controlling fleets of compute resources on the order of tens of thousands of CPU cores and terabytes of memory.

The compute overhead to orchestrate these clusters is well worth the simplicity/standardisation/auto-scaling that comes with Kubernetes. Many people have never had to operate VMs in the hundreds or thousands and do not understand the challenges that come with maintaining varied workloads, alongside capacity planning at that scale.


A million nodes running a single application is scale, but a thousand nodes running a thousand applications is also scale, and they are very different beasts.

The FAANGs operate the first kind; k8s is mostly aimed at the second kind of scale, so it's designed "for scale", for some definitions of scale.


K8s spun off of Google’s Borg operator software specifically designed for high availability at FAANG scale. So essentially K8s is the “community edition.” Go read the Google SRE Book for context.

We use it to serve Ruby at 50 million requests per minute just fine. And the best part is the Horizontal Pod Autoscaler, which saves our ass during seasonal spikes.

While serverless/Lambda is great, I do think K8s is the most flexible way to serve rapidly changing containerized workloads at scale.
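For context, the HPA itself is tiny to set up. A sketch with the official kubernetes Python client (the "web" Deployment name, namespace, replica bounds, and CPU target are all invented for illustration, not our real config):

  from kubernetes import client, config

  config.load_kube_config()  # or config.load_incluster_config() inside the cluster

  # autoscaling/v1 HPA: scale a hypothetical "web" Deployment between
  # 10 and 200 pods, targeting 70% average CPU utilization.
  hpa = client.V1HorizontalPodAutoscaler(
      metadata=client.V1ObjectMeta(name="web"),
      spec=client.V1HorizontalPodAutoscalerSpec(
          scale_target_ref=client.V1CrossVersionObjectReference(
              api_version="apps/v1", kind="Deployment", name="web"
          ),
          min_replicas=10,
          max_replicas=200,
          target_cpu_utilization_percentage=70,
      ),
  )
  client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
      namespace="default", body=hpa
  )

(Roughly the same effect as kubectl autoscale deployment web --min=10 --max=200 --cpu-percent=70, but it shows where the knobs live.)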


Kubernetes is fantastic, though I think of it more as a tool for managing organizational complexity than ops. You can vertically scale beyond the level needed by most commercial applications nowadays using ad-hoc approaches on a machine or two with 16+ cores, 256GB+ RAM, terabytes of NVMe storage, etc., but many (even small) companies have policies, teams, and organizational challenges that demand a lot of the structure and standardization that tools like Kubernetes bring to the table.

So I'm not disagreeing with your assertion, but I'd perhaps scope it to saying it's useful overhead at significant organizational scale, but you can certainly operate at a significant technical scale without such things.. and that can be quite fun if you have the team or app for it :)


Kubernetes is great at significant scale, that's what it's designed for. It has significant overhead if you don't need that scale.


Exactly. And probably not at significant complexity either.


Was this work published and peer reviewed?


Think of it this way: the two cars (if they’re the same weight) create a virtual immovable object at the impact point.

“Brick wall” is doing a lot of work there - note that it’s not the same as hitting a stationary car at 50 mph; that car would move. We rarely encounter actual immovable objects on or near the road, which is why a 50+50 head-on is more violent than a 50+0 collision into something softer.


It makes sense as well. If we assume each car has the same mass, and picture the collision from the side, it would appear as if each car was hitting an invisible brick wall that separated them. Or, picture hitting a stationary car while driving 100 km/h (perhaps on some frictionless surface) - afterwards, both cars would be sliding at 50 km/h.
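A quick back-of-the-envelope check makes this concrete (equal unit masses, perfectly inelastic collisions, arbitrary units - all assumptions for illustration):

  def energy_dissipated_per_car(v1, v2):
      # Equal masses, perfectly inelastic: momentum conservation gives the
      # common final velocity, and the lost kinetic energy is split evenly.
      v_final = (v1 + v2) / 2
      ke_before = 0.5 * v1**2 + 0.5 * v2**2
      ke_after = 2 * 0.5 * v_final**2
      return (ke_before - ke_after) / 2

  print(energy_dissipated_per_car(50, -50))  # head-on at 50 each        -> 1250.0
  print(energy_dissipated_per_car(100, 0))   # 100 into a parked car     -> 1250.0
  print(0.5 * 50**2)                         # 50 into an immovable wall -> 1250.0
  print(energy_dissipated_per_car(50, 0))    # 50 into a parked car      -> 312.5

The last line is why hitting a stationary car at 50 is less violent: half the kinetic energy survives as bulk motion and the rest is shared between two cars.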


I know this isn't what you mean.. but I'm imagining someone blindfolded in a research lab testing this out...


Blindfolded person: "I can't see a difference!"


And based on recent discoveries it sounds like Xiaomi should be trusted less than others.


As stupid as it might sound, I trust Pixel phones, and a hypothetical iPhone running a different OS, the most of all the alternatives. That's if one wants a smartphone at all; if not, just get a 20+ year old dumb phone. Or a BlackBerry.


Good luck getting that 20-year-old phone to connect to any modern wireless network. I don't think anyone is running a 2G network anymore.


I totally forgot that even 3G is going to be phased out.

