I called the ICO a few years ago asking how to comply with an ex-employee's GDPR subject access request for their emails. Their recommendation: read them all to determine which contained personal data.
When I told them that I (as a five-person business) obviously don't have time to go through thousands of old emails, they reacted with surprise at the number of emails. I guess they don't send many. They didn't offer any other solution.
As others have mentioned, this org is a tax on all UK business.
Yeah, this. The easiest way to comply with the GDPR is not to store personal data. The second easiest is to delete it as soon as it is no longer required (and that includes deleting it from backups!).
Do you actually want those emails to be unearthed during a lawsuit 5 years from now?
At least one firm I worked with had a mandatory 180-day delete of any correspondence not specifically tagged for archival, and the stated reason was to prevent all their random conversations being exposed during discovery if they were prosecuted.
I long for a US-like entrepreneurial attitude here in the UK. Business people here think that they are playing zero-sum games. Everyone looks to the govt for funding and direction.
I am reflecting on DeepMind. That company saw the AI revolution before anyone else. London based. They did an amazing job. Sold to Google. The problem is that, long term, the growth and money now flow back to the US. Maybe the UK is just too small to produce world-leading companies.
The UK already has world-leading companies (as do many smaller countries), but it's very, very hard to produce planet-scale, wide-appeal consumer companies due to market size. Even if it does happen, the temptation to appeal more and more to the US market is irresistible.
I'm sure it's not nothing, but to what extent does it matter that Alphabet is headquartered in the US? DeepMind still seems to be based in King's Cross, and London benefits.
> I can't see how it can take 4 months to figure it out.
Well, have you ever tried moving a company with a dozen services onto Kubernetes piece by piece, with zero downtime? How long would it take you to correctly move and test every permission and environment variable, and deal with every issue you run into?
Then if you get a single setting wrong (e.g. memory size) and don't load-test with realistic traffic, you bring down production, potentially lose customers, and have to do a public post-mortem about your mistakes? [true story for current employer]
I don't see how anybody says they'd move a large company to kubernetes in such an environment in a few months with no screwups and solid testing.
It took us three to four years to go from self-hosted multi-DC to getting the main product almost fully into k8s (some parts didn't make sense in k8s and were pushed to our geo-distributed edge nodes). Dozens of services and teams, keeping the old stuff working while changing the tire on the car while driving, all while the company continued to grow and scale doubled every year or so. It takes maturity in testing and monitoring, and it takes longer than everyone estimates.
It sounds like it's not easy to figure out the permissions, envvars, memory size, etc. of your existing system, and that's why the migration is so difficult? That's not really one of Kubernetes' (many) failings.
Yes, and now we are back at the ancestor comment’s original point: “at the end of the day kubernetes feels like complexity trying to abstract over complexity, and often I find that's less successful than removing complexity in the first place”
Which I understand to mean “some people think using Kubernetes will make managing a system easier, but it often will not do that”
It's good you asked, but I'm not ready to answer it in a useful way. It depends entirely on your use cases.
Some un-nuanced observations as starting points:
- Helm sucks, but so does Kustomize
- Cluster networking and security is annoying to set up
- Observability is awkward. Some things aren't exposed as cluster metrics/events, so you need to look at, say, service and pod state. It's not easy to see, e.g. how many times your app OOMed in the last hour.
- There's a lot of complexity you can avoid for a while, but eventually some "simple" use case will only be solvable that way, and now you're doing service meshes.
Maybe "wrong" is the wrong word, but there are spots that feel overkill, and spots that feel immature.
I'd argue that Kustomize is the bee's knees, but editor support for it sucks (or, I'd also accept that the docs suck and/or are missing a bazillion examples so we mere mortals could enlighten ourselves about which nouns and verbs are supported in the damn thing)
> how many times your app OOMed in the last hour.
heh, I'd love to hear those "shell scripts are all I need" folks chime in on how they'd get metrics for such a thing :-D (or Nomad, for that matter)
That said, one of the other common themes in this discussion is how Kubernetes jams people up because there are a bazillion ways of doing anything, with wildly differing levels of "it just works" versus "someone's promo packet that was abandoned". Monitoring falls squarely in the bazillion-ways category, in that it for sure does not come batteries included but there are a lot of cool toys if one has the cluster headroom to install them
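For the OOM question specifically, one low-tech answer that needs no extra monitoring stack is to ask the API server directly. A rough sketch, assuming kubectl access and jq installed; the namespace and the one-hour window are placeholders:

```sh
# Count containers whose last termination in the past hour was an OOM kill
# ("my-namespace" is a placeholder)
kubectl get pods -n my-namespace -o json \
  | jq '[ .items[].status.containerStatuses[]?
          | select(.lastState.terminated.reason? == "OOMKilled")
          | select(.lastState.terminated.finishedAt > (now - 3600 | todate)) ]
        | length'
```

If kube-state-metrics and Prometheus are installed, watching something like the kube_pod_container_status_last_terminated_reason{reason="OOMKilled"} series gives the same signal continuously, which is exactly one of those "cool toys if you have the cluster headroom".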
It largely depends on how customized each microservice is, and how many people are working on the project.
I've seen migrations of thousands of microservices happen within the span of two years. A longer timeline, yes, but the number of microservices is orders of magnitude larger.
Though I suppose the organization works differently at this level. The Kubernetes team built a tool to migrate the microservices, and each owner was asked to perform the migration themselves. Small microservices could be migrated in less than three days, while the large and risk-critical ones took a couple of weeks. This all happened in less than two years, but it took more than that in terms of engineer-weeks.
The project was very successful, though. The company spends way less money now because of the autoscaling features and the ability to run multiple microservices on the same node.
Regardless, if the company is running 12 microservices and this number is expected to grow, this is probably a good time to migrate. How did they account for the different shapes of services (stateful, stateless, leader-elected, cron, etc.), networking settings, styles of deployment (blue-green, rolling updates, etc.), secret management, load testing, bug bashing, gradual rollouts, dockerizing the services, and so on? If it's taking 4x longer than originally anticipated, it seems like there was a massive failure in project design.
2000 products sounds like you made 2000 engineers learn kubernetes (a week, optimistically, 2000/52 = 38 engineer years, or roughly one wasted career).
Similarly, the actual migration times you estimate add up to decades of engineer time.
It’s possible kubernetes saves more time than the alternatives cost, but that definitely wasn’t the case at my previous two jobs. The jury is still out at the current job.
I see the opportunity cost of this stuff every day at work, and am patiently waiting for a replacement.
> 2000 products sounds like you made 2000 engineers learn kubernetes (a week, optimistically, 2000/52 = 38 engineer years, or roughly one wasted career).
Learning k8s well enough to work with it isn't that hard. Have a centralized team write up a decent template for a CI/CD pipeline, a Dockerfile for the most common stacks you use, and a Helm chart with an example Deployment, PersistentVolumeClaim, Service and Ingress; distribute that; and be available for support in case a team's needs go beyond "we need 1-N pods for this service, configured via some environment variables, plus maybe a Secret/ConfigMap if the application wants its configuration in files". That is enough in my experience.
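The shape of the starter template being described is roughly the following; a minimal sketch, with hypothetical names and image, not anyone's actual chart:

```yaml
# Minimal Deployment + Service template (names, image and env vars are placeholders)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: example-api
  template:
    metadata:
      labels:
        app: example-api
    spec:
      containers:
        - name: example-api
          image: registry.example.com/example-api:1.0.0
          ports:
            - containerPort: 8080
          env:
            - name: LOG_LEVEL
              value: "info"
          envFrom:
            - secretRef:
                name: example-api-secrets   # optional Secret holding further env vars
---
apiVersion: v1
kind: Service
metadata:
  name: example-api
spec:
  selector:
    app: example-api
  ports:
    - port: 80
      targetPort: 8080
```

A Helm chart for the common case mostly just parameterizes the names, image tag and replica count in something like this.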
For sure, but that's the job of a good ops department. Where I work, for example, every project's CI/CD pipeline has its own IAM user mapping to a Kubernetes role that only has explicitly defined capabilities: create, modify and delete just the utter basics. Even if they committed something into the Helm chart that could cause an annoyance, the service account wouldn't be able to call the required APIs. And the templates themselves come with security built in: privileges are all explicitly dropped, pod UIDs/GIDs are hardcoded to non-root, and we're now deploying network policies at least for ingress as well. Only egress network policies are missing; we haven't been able to make those work with services.
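That kind of hardening usually lives in the shared template as a pod securityContext plus an ingress-only NetworkPolicy. An illustrative sketch, with assumed labels, UIDs and controller namespace rather than anything from the parent's setup:

```yaml
# Fragment of a pod template: non-root, no privilege escalation, all capabilities dropped
# (UIDs/GIDs are illustrative)
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 10001
    runAsGroup: 10001
  containers:
    - name: app
      securityContext:
        allowPrivilegeEscalation: false
        capabilities:
          drop: ["ALL"]
---
# Ingress-only NetworkPolicy: only the ingress controller's namespace may reach the pods
# ("ingress-nginx" namespace and port 8080 are assumptions)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: app-allow-from-ingress
spec:
  podSelector:
    matchLabels:
      app: app
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: ingress-nginx
      ports:
        - protocol: TCP
          port: 8080
```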
Anyone wishing to do stuff like use the RDS database provisioner gets an introduction from us on how to use it and what the pitfalls are, plus regular reviews of their code. They're flexible, but we keep tabs on what they're doing, and when they've done something useful we aren't shy about integrating it into our shared template repository.
> 2000 products sounds like you made 2000 engineers learn kubernetes (a week, optimistically, 2000/52 = 38 engineer years, or roughly one wasted career).
Not really; they only had to use the tool to run the migration and then validate that it worked properly. As the other commenter said, a very basic Kubernetes setup is not that hard; the difficult setup is left to the devops team, while the service owners just need to know the basics.
But sure, we can estimate it at 38 engineering years. That's still 38 years for 2,000 microservices; way better than 1 year for 12 microservices as in OP's case. The savings we got were enough to offset those 38 years of work, so the project is now paying dividends.
Comparing the simplicity of two PHP servers against a setup with a dozen services is always going to be one-sided. The difference in complexity alone is massive, regardless of whether you use k8s or not.
My current employer did something similar, but with fewer services. The upshot is that with terraform and helm and all the other yaml files defining our cluster, we have test environments on demand, and our uptime is 100x better.
Memory size is an interesting example. A typical Kubernetes deployment gives you much more control over this than a typical non-container setup. It costs you effort to figure out the right setting, but in the long term you are rewarded with a more robust and more re-deployable application.
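Concretely, that control is the per-container resources block. A sketch with made-up numbers, which, per the earlier comment, you'd want to validate with realistic load tests before trusting in production:

```yaml
# Per-container memory/CPU control; the numbers are placeholders
resources:
  requests:
    memory: "256Mi"
    cpu: "250m"
  limits:
    memory: "512Mi"   # exceeding this OOM-kills the container instead of destabilizing the node
```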
> I don't see how anybody says they'd move a large company to kubernetes in such an environment in a few months with no screwups and solid testing.
Unfortunately, I do. Somebody says that when the culture of the organization expects to hear what it wants to hear rather than the cold hard truth. And likely the person saying it says it from a perch up high, not responsible for the day-to-day work of actually implementing the change. I see this happen when the person in management/leadership lacks the skills and knowledge to perform the work themselves. They've never been in the trenches and had to deal face to face with the devil in the details.
Canary deploy, dude (or dudette): route 0.001% of service traffic and then slowly move it over. Then set error budgets. Then a bad service won't "bring down production".
That's how we did it at Google (I was part of the core team responsible for ad-serving infra - billions of ads to billions of users a day).
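The Google-internal tooling isn't public, but a hedged example of the same idea on plain Kubernetes is a weighted canary Ingress using ingress-nginx annotations. Names, host and the 1% weight are illustrative; the annotation only takes whole-percentage weights, so splits as fine as 0.001% generally need a service mesh or an L7 proxy with finer-grained routing:

```yaml
# Second Ingress marked as canary, taking 1% of traffic for the host
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-api-canary
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "1"
spec:
  ingressClassName: nginx
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-api-canary
                port:
                  number: 80
```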
Using microk8s or k3s on one node works fine. As the author of "one big server," I am now working on an application that needs some GPUs and needs to be able to deploy on customer hardware, so k8s is natural. Our own hosted product runs on 2 servers, but it's ~10 containers (including databases, etc).
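For the single-node case the setup really is small. A sketch of the k3s route, per its official quick-start; the manifest name is a placeholder:

```sh
# Single-node k3s install (assumes a Linux host with curl);
# microk8s via snap is a similar one-liner
curl -sfL https://get.k3s.io | sh -

# k3s ships its own kubectl; "app.yaml" is a hypothetical manifest
sudo k3s kubectl get nodes
sudo k3s kubectl apply -f app.yaml
```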
Yup, I like this approach a lot. With cloud providers considering VMs durable these days (they move your VM to new hardware if the hardware it's on dies, without dropping any TCP connections), I think a one-node approach is enough for small things. You can get something like 192 vCPUs per node. That's enough for a lot of small companies.
I occasionally try non-k8s approaches to see what I'm missing. I have a small ARM machine that runs Home Assistant and some other stuff. My first instinct was to run k8s (probably kind, honestly), but I didn't really want to write a bunch of manifests and let myself scope-creep into running ArgoCD. I settled on `podman generate systemd` instead (with nightly re-pulls of the "latest" tag; I live and die by the bleeding edge). This was OK until I added zwavejs, and now the versions sometimes get out of sync, which I notice when a certain light switch stops working. What I should have done instead was keep the versions of those two things in some sort of git repository and update them atomically, both at the exact same time. Oh wow, I really did need ArgoCD and Kubernetes ;)
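For reference, the `podman generate systemd` route looks roughly like this as a rootless user service; the zwave-js-ui image and port are assumptions about the setup described, not something stated in the comment:

```sh
# Create the container with the auto-update label so "podman auto-update" can re-pull :latest
podman create --name zwavejs \
  --label io.containers.autoupdate=registry \
  -p 8091:8091 \
  docker.io/zwavejs/zwave-js-ui:latest

# Generate a systemd unit for it and enable it as a user service
podman generate systemd --new --name zwavejs \
  > ~/.config/systemd/user/container-zwavejs.service
systemctl --user daemon-reload
systemctl --user enable --now container-zwavejs.service

# The nightly "re-pull latest" is then a systemd timer around:
podman auto-update
```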
I get by with podman by angrily ssh-ing in, in my winter jacket, when I'm trying to leave my house but can't turn the lights off. Maybe this can be blamed on auto-updates, but frankly anything exposed to a network that is out of date is also a risk, so I don't think you can ever really win.
I've always assumed that Google ads subsidises everything else at Google. So my concern with a split would be YouTube increasing price or less investment in the service.
YouTube just reported revenue of $8.92 billion, out of a total of $88.27 billion. Not a small number. Is there any information on how much of that YouTube revenue is profit?
I have a hard time imagining that youtube is subsidized given the ridiculous number of obnoxious ads you're forced to watch and the steep price of removing them. Surely if they were subsidized this wouldn't be worth the cost of ruining their product.
What's ironic about it? YouTube premium removes ads that YouTube adds to videos. Those sponsor ads in the videos are put there by the content creator who made the video.
And what would be the reason for this subsidy? Do they think YouTube is going to keep growing and it's worth waiting for them to become somehow more popular? Would it be catastrophic for Alphabet if other large players entered YouTube's market?
And why would losing a subsidy mean increasing prices? As far as I can tell, consumers think YouTube's offerings are overpriced as is, and they could probably increase profit by lowering them, especially if it's not their parent's ad business they'd be cannibalising.
If an entity takes measures to ensure that its service becomes the de-facto default in an area, that entity gives up its entitlement to dictate the terms of use of that service. We need something like this in our systems of ethics, or we permit Freedom Monsters (ref: https://existentialcomics.com/comic/259). Note that this isn't the only solution, but I expect other solutions to have the same shape.
If Google didn't promote YouTube so heavily, permitted channels to migrate to other services (like how they permit Blogger blogs to migrate to other websites), bundled a generic streaming video player with Android (e.g. VLC) instead of the YouTube app… then maybe I'd be more sympathetic to the position of content blocker opponents. To convince me to pay for YouTube, you have to offer me something other than "we've locked a capability of your computer away, but you can get it back if you pay us!".
iOS can play audio with the screen off and doesn’t require YT premium. Bit of an OS hack but it’s just a few taps:
- Start your YT video (from the iOS app).
- Swipe up to get the floating player while it plays.
- Swipe down from the top center to bring down your notifications bar. The video will automatically pause, but you'll have the large play button right there.
- Hit play.
- While still on the notification bar screen, and while the audio starts playing again, swipe down from the top right corner to bring up your Control Center.
- Now turn off your screen; the audio should still play.
This has been working for me for a while. Don’t recall where I first came across it, but it’s been a few years now.
I don’t use it often because I have streaming services with higher audio quality but it’s nice to not worry about accidentally tapping the screen while listening to YT. Esp if you wanna keep the phone in your pocket. Also saves battery juice.
You can also start the video in the web browser (haven’t tried the app), turn off the screen, turn it back on, hit play on the lock screen, then turn the screen off again. Podcasts for poor people.
Hackers will spend unlimited amounts of time and energy to argue why they shouldn't pay for something. I think it was the Dalai Lama who said that the highest ethical principle that exists is to use things without paying for them, and that it is in fact the people providing goods and services who are oppressing the people who are using them. But I could have remembered wrong.
There are options for you, like Floatplane and Nebula. The problem is universal - their curated content has limited appeal. The YouTube model is more attractive to people, so more people upload content to a larger audience. I have no confidence that a paid-only platform could reach 1/100th of the traffic YouTube gets in a similar timespan.
As a customer you really just have to ask yourself what you're willing to give up when paying for a YouTube analog. Content creators aren't going to engage in a mass exodus unless they're convinced their audience will follow them to other platforms.
You seriously think they haven't thought of that? I have no association with this project but it has been going for many years, has sold to many customers and institutions and the pictures certainly look like many healthy plants. Probably there is a cost/benefit trade-off to engineering watering at the soil level. Perhaps leaves would get damaged by the hardware.
It should be possible to have a private chat without spying.
However I have an unpopular opinion, interested to hear what others might think:
We should eliminate anonymity online. If you go on the internet everything you do should be tied back to your name. This can be done using device attestation. Everyone gets a private key tied to their name/address.
This is compatible with free speech. In fact it promotes free speech because being a "troll" becomes a lot more personal.
I think this way of living would be closer to our nature as tribal primates. It would improve behaviour and overall quality of life. Our brains are designed to have checks and balances from wider society which you don't get anonymously online.
This would also reduce the need for govt monitoring because any chat online could be "turned in" by an informer and then any criminals identified.
What technological measures do you propose to block Tor and VPN services to achieve this? Not even China's Great Firewall completely achieves this, though not for lack of trying.
Device attestation is a way for a server to attest that a requester is e.g. an iPhone. Sure, it can be expanded to cover if the requester is, say, John Smith. But the server has to demand it.
Decentralized platforms tout not doing this stuff as a feature. You'd have to roll out the attestation system and require everyone running a web server to set up this attestation infrastructure; that is, the small guy running a model train forum on his laptop or whatever must risk prison time if he doesn't do attestation. That'd be so draconian that afaik not even China does it.
The difference is that they aren't using this to make online conversations more polite or something (not that I agree at all with your point that removing anonymity is somehow better for free speech). This isn't some sort of initiative to promote self-policing. It's to get information that will allow authorities to arrest people and put them in jail. That automatically makes the entire argument that it could be beneficial for free speech and promote more "real life"-like interactions online irrelevant, because that's not the point of this law.
> This would also reduce the need for govt monitoring because any chat online could be "turned in" by an informer and then any criminals identified.
This is where you completely break your premise: this is Stasi levels of informing, asking the population to spy on each other. It's not healthy for a society when you feel that any other person you interact with might be informing on you to the State, and you leave a very wide avenue open for misuse when the State changes its rulings on what's considered criminal.
Good points. Difficult questions. I was thinking it would be tied to the physical device. So you would register a laptop with the state when you buy it, and the key would live in the HSM. So the main differences would be:
1. Give my name and address to activate a device
2. "The internet" requires authentication via the HSM.
Kind of like how a car is tied to an individual via the logbook (in the UK at least). You need to think about who you let use your laptop, lest they get you in trouble. If it's stolen or you sell it, you report it. To be fair, people were against passports and license plates when these were first introduced, and it hasn't led to the problems people envisaged.
That said... I don't know if this is feasible with a laptop. It's much easier to pawn my laptop than it is to steal my car and drive it without me knowing. And at what point does a computer become a server, and are those regulated differently?
Knowing that you're never anonymous online would certainly improve some conversations, and mitigate some of the ability for state actors to e.g. sow discontent online. But it would arguably be a huge inconvenience and risk for everybody, so I don't know if it's worth the cost.
for certain platforms. IMO platforms should be able to decide for themselves whether they want the option to have people verify themselves via ID or not.
It sounds like a solid idea (bot rejection, anyone?), but I wonder if governments would use it to quarantine users instead. Would I have to pay a bi-yearly fee to maintain and reissue my online passport?
All in all I would be willing to be quarantined if that meant the bots would suddenly die.
That’s an easy one to shoot down, because it would lead to endless harassment of innocent individuals and suppression of popular opinion or critical thought. It goes counter to everything that the internet is about. It would also lead to genocide by governments seeking to kill off any resistance.
I didn’t take the OP that way. I think it’s a gentle suggestion that each topic has far more depth than might first be apparent. As someone who has a tendency to want to dive in and learn everything possible about a subject in a short period of time, I appreciate the reality check.
He literally suggests waiting 6 months. I don't think dampening enthusiasm like that is helpful coaching.
Any subject worth learning has depth to it. Nothing worth knowing can be fully understood in a short period of time. That doesn't mean you can't start today.
Six months to understand and learn. If you’re completely new to MFR and supply chain, it will take you about that long to start understanding how to think and what questions to ask. This is also about how long it will take to build up rapport with the shop floor crew.
Six months should give you time to fully unwind the dependencies of everything. And learn the history and context of decisions.
You will be working with guys who have been welding metal longer than you’ve been alive. Guys who have machined parts longer than the internet has been around. This is a deep, deep field. Six months is pretty quick.
I'm sorry I jumped on your post earlier, since you clearly meant it in a positive way. I agree that listening to your colleagues and understanding the business you've joined should be your top priority.
The thing is that the original post never said they wouldn't do this. The post asked for resources and industry best practice. For all we know, they would use this information with discretion and practicality.
You touched a nerve because I get deeply frustrated by the attitude in manufacturing where process knowledge is not readily shared like it is in software. The reality is that many businesses don't follow best practice.