“If you were using federated Kubernetes with node auto-scaling and the latest cloud-native AI-enabled service-discovery OSS tools for geographically aware traffic distribution, your static site wouldn’t have gone down.”
I think the most prototypical "hugged to death" personal website is a stock WordPress setup on shared hosting or a low-spec VM, without any caching plugins, and perhaps with some popular plugins that happen to be database-heavy.
It's easy to click a few buttons, install some plugins and themes, and wind up running 100+ SQL queries on every page load. The various caching plugins work very well, but turning them on in advance of getting a lot of traffic all at once isn't necessarily something everyone thinks of.
Can't say if that's what happened here, but it's super common when personal sites linked here go down.
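For illustration, a toy sketch of what a page-cache plugin buys you (the names `render_page` and `cached_page` are made up for this example; real plugins cache the rendered HTML so the query storm happens once per TTL, not once per request):

```python
import time

QUERY_COUNT = 0  # counts simulated SQL queries across all requests

def render_page(slug):
    """Stand-in for an expensive render that fires 100+ SQL queries."""
    global QUERY_COUNT
    QUERY_COUNT += 100  # simulate the query-heavy render
    return f"<html>post: {slug}</html>"

_cache = {}        # slug -> (expires_at, html)
CACHE_TTL = 600    # seconds; one render per slug per 10 minutes

def cached_page(slug, now=None):
    now = time.time() if now is None else now
    hit = _cache.get(slug)
    if hit and hit[0] > now:
        return hit[1]                      # cache hit: zero queries
    html = render_page(slug)               # cache miss: pay the cost once
    _cache[slug] = (now + CACHE_TTL, html)
    return html

# A front-page "hug": 1000 requests for the same post render only once.
for _ in range(1000):
    cached_page("hello-world")
print(QUERY_COUNT)  # 100
```

Without the cache layer, the same 1000 requests would have cost 100,000 queries.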
Ah, thanks. This makes sense now. Mine was entirely frontend and even had a decent-sized JS game on it, but it never had any trouble being posted. Likely because I never needed to deal with SQL (or because of low traffic).
People don't take advantage of caching headers. Put Cloudflare in front of your blog with 10-minute caching headers and it isn't going down for almost anything.
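As a sketch, framework-agnostic and with a hypothetical helper name, the headers in question look like this. `s-maxage` is the directive a shared cache such as a CDN edge honours, while `max-age` applies to browsers (note that whether a CDN caches HTML at all still depends on its configuration, e.g. Cloudflare needs caching enabled for HTML responses):

```python
def cache_headers(cdn_seconds=600, browser_seconds=60):
    # s-maxage=600 is the "10 minute" figure above: the CDN edge may serve
    # the cached copy for 10 minutes before revisiting the origin.
    return {
        "Cache-Control": f"public, max-age={browser_seconds}, s-maxage={cdn_seconds}"
    }

print(cache_headers()["Cache-Control"])  # public, max-age=60, s-maxage=600
```

With that in place, a traffic spike mostly hits the CDN, and the origin sees roughly one request per page per ten minutes.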
They don't even need Cloudflare. Just a typical cache plugin would allow a WordPress/Django/Rails/whatever site to survive HN. Every page request turning into 90 database calls to satisfy "related posts", "word clouds", "previous", "next", "related" and whatnot just doesn't scale :) Yay, abstraction.
You can read about his tech stack here[1]. Curiously, his "About" page loads fine. He is using a Django REST API with DynamoDB and the "whole site is hosted serverlessly on AWS Lambda."
DynamoDB can be expected to adjust capacity every 5 minutes when set up with an auto-scaling configuration, or on the pay-per-request model.
Lambda will scale up in seconds.
Dynamo, however, can be set not to scale (if you wish to stay within free-tier limits). The DynamoDB free tier allows 5 reads/second of up to 4 KB per key, with a buffer of roughly 5 minutes' worth of "tokens" at that rate.
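The burst behaviour described is essentially a token bucket. A toy model using the commenter's numbers (5 reads/sec refill, ~300 seconds of banked burst; the class name and mechanics are illustrative, not AWS's actual implementation) shows what happens when a spike lands:

```python
class ReadCapacityBucket:
    """Toy token bucket: refills at `rate` tokens/sec, holds ~`burst_seconds` worth."""

    def __init__(self, rate=5, burst_seconds=300):
        self.rate = rate
        self.capacity = rate * burst_seconds  # 1500 tokens of banked burst
        self.tokens = self.capacity           # bucket starts full

    def tick(self, seconds=1):
        # Background refill at the provisioned rate, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + self.rate * seconds)

    def read(self, cost=1):
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False  # throttled: DynamoDB would raise a throughput-exceeded error

bucket = ReadCapacityBucket()
# A sudden burst of 2000 reads: the banked 1500 succeed, then throttling starts.
results = [bucket.read() for _ in range(2000)]
print(results.count(True))  # 1500
```

So a front-page spike works fine for about five minutes at full burst, then the site falls back to 5 reads/second until traffic subsides.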
If poorly designed (e.g. using Dynamo like a relational database), you chew through this with app-side joins very quickly. Even if well designed, looking up related articles or re-loading records on page transitions will eat up DB time.
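Back-of-envelope, with hypothetical numbers (say 10 related-article lookups per page on top of the post itself, against the 5 reads/sec free-tier rate above):

```python
# Why app-side "joins" eat free-tier capacity fast: each page view multiplies
# the read cost, so sustained throughput divides by the reads-per-page.
READS_PER_PAGE = 1 + 10        # the post itself + 10 related-article lookups
FREE_TIER_READS_PER_SEC = 5    # free-tier sustained rate from above

pages_per_sec = FREE_TIER_READS_PER_SEC / READS_PER_PAGE
print(round(pages_per_sec, 2))  # 0.45
```

Under half a page view per second sustained, which an HN front-page link exceeds easily once the burst tokens are gone.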
As someone else mentioned, caching is critical for this setup to survive this load, especially if you want to stay in the free tier... and CloudFront costs pennies compared to Dynamo and Lambda... though both can easily be cheaper than $5/month.
Scotty or even McCoy might be a better fit, but not having seen enough TOS, my impression is that Spock is prominent enough in out-technobabbling his captain in this context.
It's a static site, FFS! Erols and ServInt were serving 3k requests per second on a Pentium in the nineties using Apache and SCSI disks, with no acceleration, no HTTP cache, no memcache, because that 2 KB page fits into the OS disk cache!
I wrote an article here, and the traffic I got was 1.3 million web requests. All I did was use memcache and have all my static assets served with nginx.
Just like the article implies, if it requires any tweaking and configuration, most people won't/can't do it.