chousuke's comments

I think "core" kubernetes is actually pretty easy to understand. You have the kubelet, which just cares about getting pods running, which it does by using pretty standard container tech. You bootstrap a cluster by reading the specs for the cluster control plane pods from disk, after which the kubelet will start polling the API it just started for more of the same. The control plane then takes care of scheduling more pods to the kubelets that have joined the cluster. Pods can run controllers that watch the API for other kinds of resources, but one way or another, most of those get eventually turned into Pod specs that get assigned to a kubelet to run.

Cluster networking can sometimes get pretty mind-bending, but honestly that's true of just containers on their own.

I think just that ability to schedule pods on its own requires about that level of complexity; you're not going to get a much simpler system if you try to implement things yourself. Most of the complexity in k8s comes from components layered on top of that core, but then again, once you start adding features, any custom solution will also grow more complex.

If there's one legitimate complaint when it comes to k8s complexity, it's the ad-hoc way annotations get used to control behaviour in a way that isn't discoverable or type-checked like API objects are, and you just have to be aware that they could exist and affect how things behave. A huge benefit of k8s for me is its built-in discoverability, and annotations hurt that quite a bit.
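To illustrate the contrast: through the API, annotations are just an untyped string map, so nothing validates the keys or values the way typed spec fields are validated. A small sketch with the official Python client (the pod name and annotation key here are hypothetical):

    from kubernetes import client, config

    config.load_kube_config()
    v1 = client.CoreV1Api()

    # metadata.annotations is a free-form dict[str, str]; the API server
    # accepts any keys, and a typo in one simply does nothing.
    pod = v1.read_namespaced_pod(name="my-pod", namespace="default")
    print(pod.metadata.annotations)
    # e.g. {'some-controller.example.com/enable-feature': 'true'}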


Things can be simple to fix, but actually getting people to implement the fix won't be simple when there are incentives against it.

They are two different problems.


If resolving your simple problem successfully hinges on solving a harder problem, then it's not a simple problem.


I like abstractions when they are transparent enough that it's easy to tell how it could be implemented one layer down.

There might be many implementation details that you hide under the abstraction, but if the interface is so abstract that I can't envision a straightforward implementation of it just based on the interface, there's probably something wrong with the abstraction.

Additionally, if the behaviour of the implementation conflicts with the simplified model communicated by the interface, that'll also cause issues.


Similar to liking abstractions because it's easy to see what they mean one layer down, it's also nice to know what they mean one layer up. Your program has to inhabit a middle layer between what it is you want and how it will actually be executed.

Sometimes, we can get lucky and a declarative statement of what we want works. Often, that isn't the case.


The problem is that a secure, verifiable computing environment is also important for your privacy.

If you use this system with free software components, the dystopia won't materialize. It's the lack of transparency with proprietary components that causes problems.


The problem is not secure boot itself, but that the designer, primary enforcer, primary CA, and key-control authority of said system is the worst imaginable candidate in the whole universe (and possibly beyond).


Sure, it's important that I can verify what my computer is running. But it's also important that the MAFIAA et al. cannot. If there were some way of guaranteeing that attestation could only work for the owner, then I'd support it, but if that's not possible, then I'd rather it not exist at all.


Remote attestation? Why?


For example, I use the SIM-embedded digital signature on my mobile phone. Being able to verify that it's not been tampered with, and being able to verify this state with a remote secure entity, sounds nice.

Assuming you can select/provide the baseline state to be verified against, I fail to see how this is harmful.

Of course this can be used to force “desired configuration” on anyone, but this is a social problem rather than technical.


> Assuming you can select/provide the baseline state to be verified against, I fail to see how this is harmful.

Okay, but that has 0% chance of being what happens.


> Of course this can be used to force “desired configuration” on anyone, but this is a social problem rather than technical.

No, remote attestation as laid out with escrowed keys is a technical vulnerability through and through, one which upends the existing social power relationships. You wouldn't write off the installation of police surveillance cameras in your house as a mere "social problem" even though you could still organize politically to create restrictions on their use. Rather, the social aspect only becomes an issue due to the addition of the technical ability (vulnerability).

Key-escrowed remote attestation is fundamentally a rejection of the longstanding concept of mediation by open protocols. Right now, the demarcation point between independent parties is what goes over the wire. On my computer I run software that represents my interests, on a server a company runs software that represents their interests, and we temporarily cooperate by communicating in a well-known manner.

No matter how powerful the remote entity is, they still cannot force me to run software contrary to my interests. Sure they can make it harder by only shipping proprietary executables and obfuscating the protocols, but ultimately if the interaction is important enough then it can be reimplemented in Free software to appropriately represent users' rights.

Meanwhile, key-escrowed remote attestation lets each party insist on what software the other party can run while they're communicating. Of course, an individual user will have zero negotiating power to affect what software a company is running, just as end users have zero negotiating power to cross out objectionable terms in those blobs of legalese shoved at us. Rather, commercial services will be provided on the same take-it-or-leave-it basis, now with the addition of mechanically enforced conditions of only running specific software. Once this is easy enough to do that insisting upon it will only marginalize a small number of customers, companies will reflexively adopt it - remember, "security" departments love checking boxes.

Facilitating key-escrowed remote attestation on Free operating systems undermines our hard-won freedoms, and splits the Free software market-power bloc. Right now, a basic binary-distributed Firefox-on-Ubuntu user appears essentially the same as a user who has modified their software. The basic Ubuntu user is happy to be running Free software, but if we're honest it's more of a theoretical/upstream concern until they start hacking. However, if Ubuntu gets the vulnerability to attest exactly what software it's running, then that's a stark difference between them. The basic Ubuntu user won't notice that they have lost some freedoms they weren't using, whereas the smaller contingent of people that wish to run modified software will have directly lost FSF Freedom 1.

The only way to keep remote attestation honest is for there to be no privileged signing keys embedded by the manufacturer. Ideally the end user would prompt their generation, but if initial keys are created at the factory then nothing about them (including the public identity) must be recorded.

Then there would be no way for a random third party to tell whether you are running on bare-metal hardware or within virtualization with mock attestation. True owners of the hardware can still record the signing keys and build their own trust relationships. But there must be no centralized databases that would allow arbitrary third parties to rely on users' own hardware to undermine its owners' interests.


I've learned that sometimes there's literally nothing you can do to directly solve a problem; you might not even know what the problem is, so how could you come up with solutions to it?

In those cases it helps me to then think that maybe there's something in my circumstances that is creating my problem, rather than it being a problem with me directly.

For example, if your problem is that it's difficult to get out of the house for a daily walk, a solution to "just do it" will not accomplish anything. It's extremely difficult to just start wanting something out of thin air.

However, if instead of focusing on "how do I go for a walk" you think about "what am I doing when I go for a walk", you immediately open up questions that are very easy to act on: Are you dressed comfortably? Is the path you take for your walk enjoyable, or do you have options? Is there too much noise? Maybe headphones would help.

These are things that are easy to try and change, and free you from having to blame yourself for lacking some ill-defined quality of "having willpower" that no-one can even measure.

If there's something you can easily change, but don't want to, then you know it's a problem with you, and you need to deal with it accordingly. But many things are not a problem with you.

I use the walking example because I literally solved my own "I don't want to go for walks" problem by realizing that I was habitually walking along a noisy road and hated that, not the walking itself; once I found a more pleasant path, the "chore" became something I could enjoy instead. My problem wasn't "I'm lazy and I hate walking"; it was "cars are noisy and the environment is too grey".


it's not that "best practices" or any of those things are what causes trouble; it's failing to recognize that they're just tools, and people will still be the ones doing the work. And people should never be treated as merely tools.

You can use all of those things to enable people to do things better and with less friction, but you also need to keep in mind that if a tool becomes more of a hindrance than a help, you should go looking for a new one.


> it's not that "best practices" or any of those things are what causes trouble; it's failing to recognize that they're just tools, and people will still be the ones doing the work. And people should never be treated as merely tools.

For me, the concept of best practices is pernicious because it is a delegation of authority to external consensus which inevitably will lead to people being treated as tools as they are forced to contort to said best practices. The moment something becomes best practice, it becomes dogma.


Imagine your doctor or pilot eschewing “best practices” and what your reaction would be. There’s a reason knowledge communities build consensus.

Best practice doesn’t mean you’re at the mercy of the consensus, it just means you have to justify why you should stray from it.


Doctors’ “best practices” are handed down by the AMA (or local equivalent). Pilots’ “best practices” are handed down by the FAA (or local equivalent).

Programmers’ best practices are handed down by the Twitter accounts of consultants. It’s not quite the same thing.


This comment perfectly encapsulates the point that I am making about best practices: the concept is used as a cudgel to silence debate and to confer a sense of superiority on the practitioner of "best practice." It is almost always an appeal to authority.

No one wants cowboy pilots ignoring ground control. Doctors though do not exactly have the best historical track record.

Knowledge communities should indeed work towards consensus and constantly be trying to improve themselves. Consensus though is not always desirable. Often consensus goes in very, very dark directions. Even if there is some universal best practice for some particular problem, my belief is that codifying certain things as "best practice" and policing the use of alternative strategies is more likely to get in the way of actually getting closer to that platonic ideal.


Perhaps a better example might be "covering indexes," or what Oracle would call an "index full scan."

It is an idea so efficient that to disregard it is inefficiency.

"I had never heard of, for example, a covering index. I was invited to fly to a conference, it was a PHP conference in Germany somewhere, because PHP had integrated SQLite into the project. They wanted me to talk there, so I went over and I was at the conference, but David Axmark was at that conference, as well. He’s one of the original MySQL people.

"David was giving a talk and he explained about how MySQL did covering indexes. I thought, “Wow, that’s a really clever idea.” A covering index is when you have an index and it has multiple columns in the index and you’re doing a query on just the first couple of columns in the index and the answer you want is in the remaining columns in the index. When that happens, the database engine can use just the index. It never has to refer to the original table, and that makes things go faster if it only has to look up one thing.

"Adam: It becomes like a key value store, but just on the index.

"Richard: Right, right, so, on the fly home, on a Delta Airlines flight, it was not a crowded flight. I had the whole row. I spread out. I opened my laptop and I implemented covering indexes for SQLite mid-Atlantic."

This is also related to Oracle's "skip scan" of indexes.

https://corecursive.com/066-sqlite-with-richard-hipp/
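The effect Hipp describes is easy to reproduce with SQLite today. A small sketch using Python's built-in sqlite3 module (the schema is made up); EXPLAIN QUERY PLAN confirms the base table is never consulted:

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE users (last TEXT, first TEXT, email TEXT)")
    # The index "covers" the query below: the filter column comes first,
    # and the columns the query returns are also stored in the index.
    con.execute("CREATE INDEX idx_users ON users (last, first, email)")

    plan = con.execute(
        "EXPLAIN QUERY PLAN SELECT first, email FROM users WHERE last = ?",
        ("Hipp",),
    ).fetchall()
    print(plan)
    # Expect a detail string along the lines of:
    #   SEARCH users USING COVERING INDEX idx_users (last=?)
    # i.e. the answer comes entirely from the index.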


Most software “best practices” are a poorly structured replacement for a manual.

Aviation best practices were written from the outcome of minor and major disasters.


> And people should never be treated as merely tools.

Maybe on a tight-knit team people don't mind being treated like tools, because they understand what needs to get done next and see that it makes the most sense for them to do it; it's nothing personal.

At my freshman year "1st day", our university president gave us an inspirational speech in which he said, "People say our program just trains machines... I want you to know we don't train machines. We educate them."


I'd say that if you have a tight-knit team, you are already doing the very opposite of treating people as tools. There's nothing wrong with having a shared understanding of a goal and then assuming a specific role in the effort to accomplish that goal; people are very good at that.

The problem is when you think of people the same way you think of a hammer when you use it to hit nails: The hammer doesn't matter, only that the nail goes in.


Best practices are subjective. What is best practice for C is not the same as for Python.

SQL DBs provide consistency guarantees around mutating linked lists. It’s not hard to do that in code and use any data storage format.

Imo software engineers have made software “too literal” and generated a bunch of “products” to pitch in meetings. This is all constraints on electron state given an application. A whole lot of books are sold about unit tests but I know from experience a whole lot of critical software systems have zero modern test coverage. A lot of myths about the necessity of this and that to software have been hyped to sell stock in software companies the last couple decades.


"Best practices" are just a summary of what someone (or a group of someones) thinks is something that is broadly applicable, allowing you to skip much of the research required to figure out what options there are even available.

Of course, dogmatic adherence to any principle is a problem (including this one). Tools can be misused, but that doesn't really affect how useful they can be; though I think better tools are generally the kind that people will naturally use correctly, that's not a requirement.


If there's one thing that is true in this world, it's that you can't rely on people not fucking up.

All you can do is try making fuckups difficult by figuring out how to implement systems where doing the right thing is the path of least resistance. You won't stop fuckups, but a system where people will do the right thing 90% of the time instead of 50% is so much better.


It's a bit difficult to disagree with "not everything", because that's obviously true; not everything even can be perfectly analyzed.

However, I think you can (and should) try to at least maintain awareness of your behaviour and its effects, if only because, when you don't, you may end up acting in ways that contradict your values; or, if your values are ill-defined or vague, you might end up not understanding what it is that really matters to you. I think rationality is a necessary part of having a well-defined value system, even if the axioms are arbitrary. "Letting people enjoy things that aren't harming anyone" is a rational decision based on my value system.

What I can use rationality for, really, is figuring out if there's something I'm doing that's actually in conflict with my own values, or if there's something I can do to nudge the world (or at least my own life) to have more of the things I value.


I don't really see it as "giving" a commercial advantage to anyone if the new rule's purpose is to prevent something harmful and someone happens to benefit because they're already not doing that harmful thing.

In my view it's really a separate issue if SpaceX has too many advantages and that levelling the playing field somehow would be useful; allowing companies to grow too powerful does cause problems, and I don't think there's a moral requirement for regulators to be "fair" when dealing with corporations. They are not humans.

The need for that sort of intervention should not keep us from instituting otherwise beneficial rules, though.


> In my view it's really a separate issue if SpaceX has too many advantages

That's not what I was saying. I was offering an observation, not a critique. I think this new rule is good.


Oh, I didn't really read it as a critique; mostly just the phrasing of "giving another commercial advantage" made me want to comment since it can be read as if that's the (or even just a) purpose of the rule.


Steampipe is pretty much "just" PostgreSQL with a foreign data wrapper and some fancy extras on top. The data is in tables from the database's perspective, so pretty much everything you can do with PostgreSQL, you can do with steampipe, including creating your own tables, views, functions and whatnot.
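For instance, because Steampipe exposes a standard Postgres endpoint, ordinary client libraries work against it unchanged. A sketch using psycopg2, assuming a locally running `steampipe service start` with its defaults (port 9193, database "steampipe") and the AWS plugin installed; the view name is made up, and you'd supply the password the service prints on startup:

    import psycopg2

    con = psycopg2.connect(
        host="localhost", port=9193, dbname="steampipe", user="steampipe"
    )
    con.autocommit = True
    cur = con.cursor()

    # Plugin tables are foreign tables that call cloud APIs on demand, but
    # they behave like tables, so normal Postgres features layer on top.
    cur.execute("""
        CREATE OR REPLACE VIEW public.bucket_names AS
        SELECT name, region FROM aws_s3_bucket
    """)
    cur.execute("SELECT * FROM public.bucket_names LIMIT 5")
    print(cur.fetchall())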

