To add some context: the title, "Building the mouse Logitech won't make", is referring to (or at least implying) the entire PC industry, Logitech included.
Which would make the statement that "they no longer make wired trackballs" a falsehood, so that's likely not the "they" basscomm had in mind, if you think about it.
That's a terrible suggestion. A car is 20x the price of a generator and at least 10x larger. A car can't power an entire household for days on end using a few gallons of gasoline per day, etc. A car is a transportation device, not a stationary energy generation machine designed as a backup in case of power failure.
> A car can't power an entire household for days on end
This is underestimating the ludicrous amount of power an EV's batteries have. You can absolutely power your house for days on end using one. (Of course, that should also give us pause to think about what it means that we spend so much energy for transportation compared to household necessities.) And gasoline has plenty of problems too, like its extremely short shelf-life.
I have two Tesla Powerwalls in my garage, and they (among other things) do just this.
I look at the size of those, and the size of a random Tesla, and I can easily see two of these shoved into the baseboard of a Tesla, much less the larger vehicles.
I know the Ford Lightning was advertised as a potential back up power source for the home.
The real trick is getting the power out of the battery and in to the home itself. It's one thing to run an extension cord to the refrigerator, quite another to get the battery plugged into your home circuitry. That requires more preparation, as well as an electrician.
Cheap way to power the home directly is to get a generator inlet put in ($500-1000) and connect the vehicle to that. You can then use either a vehicle or a gas generator, which is useful during extended outages when you need to top up the car.
Expensive way is to get some manufacturer specific automatic transfer switch (I got Tesla's put in). The hardware is $2500 and the labor is $4K+.
I did both and used both during a recent week long outage.
4 bedroom house in Austin, TX. A hot summer day is around 100 kWh and my Rivian battery is 135 kWh. Which means I can roughly get one full summer day out of my car's battery (assuming I still need to drive the car and usually leave it at 70% max).
So there you have it, I get about one full day. Not "days on end".
On the other hand, here in Washington, a little west of Seattle, with a 3 bedroom all-electric house, I use about 40 kWh a day in the coldest month of winter and 8-10 kWh a day in summer.
You could get something like a Span electrical panel and only enable critical loads during a blackout and set the AC a little higher than normal. Even a large house can go down to 20-30 kWh a day.
That's on the higher end of household usage. On the lower end, during a cold windstorm in the PNW I was able to get it down to 600W (15 kWh/day) because I have mostly natural gas appliances. My Cybertruck kept us going for nearly a week with just one top-up (because I don't like to go above 80% or below 20%). We deferred using the dryer and dishwasher, and relied on the fireplace for warmth instead of the HVAC.
On a hot day yeah I'd be running the A/C but ideally you'd have solar to offset much of that.
> A car can't power an entire household for days on end using a few gallons of gasoline per day, etc
Average home uses 30 kWh/day. Average EV battery size is 40 kWh. Correct, not days on end, but at least a full day at full capacity, and perhaps a few days at reduced capacity. My Ioniq 5 has an 84 kWh battery so I guess I'd get a bit further.
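(Back-of-the-envelope from those numbers: 40 kWh / 30 kWh per day ≈ 1.3 days at normal usage, and trimming to the ~15 kWh/day others in the thread describe during an outage stretches the same pack to roughly 2.5-3 days.)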
I'm fully on board with the idea that root daemons shouldn't be necessary; I just don't want systemd to become a dependency for yet another thing it shouldn't be a dependency for.
The point is that RedHat went on a tirade for years telling everyone: "Docker bad, root! Podman good, no root! Docker bad, daemon! Podman good, no daemon!".
And then here comes Quadlets and the systemd requirements. Irony at its finest! The reality is Podman is good software if you've locked yourself into a corner with Dan Walsh and RHEL. In that case, enjoy.
For everyone else, the OSS ecosystem that is Docker actually has less licensing overhead and fewer restrictions, in the long run, than dealing with IBM/RedHat. IMO, that is.
But...you don't need systemd or Quadlets to run Podman; they're just convenient. You can also use podman-compose (I personally don't, but a coworker does and it's reasonable).
But yeah, I already use a distro with systemd (most folks do, I think), so for me, using Podman with systemd doesn't add a root daemon; it reuses an existing one (again, for most Linux distros/users).
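For example, something like this runs rootless, daemonless, and with no systemd unit anywhere (image and port here are just for illustration):
$ podman run --rm -d -p 8080:80 docker.io/library/nginx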
Today I can run docker rootless and in that case can leverage compose in the same manner. Is it the default? No, you've got me there.
SystemD runs as root. It's just ironic given all the hand-waving over the years. And Docker, and all its tooling, are so ubiquitous and well thought out that Podman and friends are literally a reimplementation, which is the selling point.
I've used Podman. It's fine. But the arguments of the past aren't as sharp as they originally were. I believe Docker improved because of Podman, so there's that. But to discount the reality of the doublespeak by paid-for representatives from RedHat/IBM is, again, ironic.
> And Docker, and all its tooling, are so ubiquitous and well thought out that Podman and friends are literally a reimplementation which is the selling point
I would argue that Docker’s tooling is not well thought out, and that’s putting it mildly. I can name many things I do not like about it, and I struggle to find things I like about its tooling.
Podman copied it, which honestly makes me not love podman so much. Podman has quite poor documentation, and it doesn’t even seem to try to build actually good designs for tooling.
FROM [foo]: [foo] is a reference that is generally not namespaced (ubuntu is relative to some registry, but it doesn't say which one) and it's expected to be mutable (ubuntu:latest today is not the same as ubuntu:latest tomorrow).
There are no lockfiles to pin and commit dependency versions.
Builds are non-reproducible by default. Every default represents worst practices, not best practices. Commands can and do access the network. Everything can mutate everything.
Mostly resulting from all of the above, build layer caching is basically a YOLO situation. I've had a build end up with dependencies that were literally more than a year out of date because I built on a system that hadn't done that particular build for a while, it had a layer cached (by name!), and I forgot to specify a TTL when I ran the build. But, of course, there is no correct TTL to specify.
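To be fair, there are opt-in workarounds, they're just not the defaults: you can fully qualify and digest-pin the base image, and you can bust the cache at build time (the digest below is a placeholder, not a real one):
FROM docker.io/library/ubuntu@sha256:<digest>
$ docker build --pull --no-cache .
But none of that is what you get out of the box, which is the point.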
Every lesson that anyone in the history of computing has ever learned about declarative or pure programming has been completely forgotten by the build systems.
Why on Earth does copying in data require spinning up a container?
Moving on from builds:
Containers are read-write by default, not read-only.
Things that are logically imports and exports do not have descriptive names. So your container doesn't expose a web service called 'API'; it exposes port 8000. And you need to remember it, and if the image changes the port, you lose, and there is no good way for the tooling to help. Similarly, volumes need to be bound to paths, and there is nothing resembling an interface definition to help get it right. And, since containers are read-write by default, typoing a mount path results in an apparently working container that loses data.
The tooling around what constitutes a running container is, to me, rather unpleasant. I can't make a named group of services, restart them, possibly change some of the parts that make them up, and keep the same name in a pleasant manner. I can 'compose down' and 'compose up' them and hope I get a good state. Sometimes it works. And the compose files and quadlets are, of course, not really compatible with each other, nor are they compatible with Kubernetes without pulling teeth.
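(The closest thing to a bridge I'm aware of is round-tripping through Kubernetes YAML, e.g.
$ podman kube generate my-container > app.yaml
$ podman kube play app.yaml
which is exactly the pulling-teeth part: another format, another translation step, and the names here are mine rather than anything the image itself declares.)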
> Builds are non-reproducible by default. Every default represents worst practices, not best practices. Commands can and do access the network. Everything can mutate everything.
I think you're conflating software builds with environment builds - they are not the same and serve different use cases.
> Why on Earth does copying in data require spinning up a container?
It doesn't.
> Containers are read-write by default, not read-only.
I don't think you really understand containers since COW is the default. Containers are not "read-write" by default in the context of the underlying image. If you want to block writing to the filesystem, that is trivial.
> Things that are logically imports and exports do not have descriptive names. So your container doesn't expose a web service called 'API'; it exposes port 8000. And you need to remember it, and if the image changes the port, you lose, and there is no good way for the tooling to help. Similarly, volumes need to be bound to paths, and there is nothing resembling an interface definition to help get it right. And, since containers are read-write by default, typoing a mount path results in an apparently working container that loses data.
Almost all of this is wrong.
> And the compose files and quadlets are, of course, not really compatible with each other, nor are they compatible with Kubernetes without pulling teeth.
What? This gets wilder as you go on. Why would you expect compose files to be "compatible" with k8s? They are two different ways to orchestrate containers.
Pretty much everything you've outlined is, as I see it, a misunderstanding of what containers aim to solve and how they're operationalized. If all of these things were true, container usage in general wouldn't have been adopted to the point where it's as commonplace as it is today.
>> Builds are non-reproducible by default. Every default represents worst practices, not best practices. Commands can and do access the network. Everything can mutate everything.
> I think you're conflating software build with environment builds - they are not the same and have different use cases people are after.
They're not so different. An environment is just big software. People have come up with schemes for building large environments for decades, e.g. rpmbuild, nix, Gentoo, whatever Debian's build system is called, etc. And, as far as I know, all of these have each layer explicitly declare what it is mutating; all of them track the input dependencies for each layer; and most or all of them block network access in build steps; some of them try to make layer builds explicitly reproducible. And software build systems (make, waf, npm, etc) have rather similar properties. And then there's Docker, which does none of these.
> > Containers are read-write by default, not read-only.
> I don't think you really understand containers since COW is the default. Containers are not "read-write" by default in the context of the underlying image. If you want to block writing to the file system that is trivial.
Right. The issue is that the default is wrong. In a container:
$ echo foo >the_wrong_path
works, by default, using COW. No error. And the result is even kind of persistent -- it lasts until the "container" goes away, which can often mean "exactly until you try to update your image". And then you lose data.
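(The opt-in exists, it's just backwards: something like
$ docker run --read-only --tmpfs /tmp some-image
gives you an immutable root filesystem with a scratch area, but you have to remember to ask for it. Image name is a stand-in.)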
> > Things that are logically imports and exports do not have descriptive names. So your container doesn't expose a web service called 'API'; it exposes port 8000. And you need to remember it, and if the image changes the port, you lose, and there is no good way for the tooling to help. Similarly, volumes need to be bound to paths, and there is nothing resembling an interface definition to help get it right. And, since containers are read-write by default, typoing a mount path results in an apparently working container that loses data.
> Almost all of this is wrong.
I would really like to believe you. I would love for Docker to work better, and I tried to believe you, and I looked up best practices from the horse's mouth:
Look, in every programming language and environment I've ever used, even assembly, an interface has a name. If I write a function, it looks like this:
void do_thing();
If I write an HTTP API, it has a name, like GET /name_goes_here. If I write a class or interface or trait, its methods have names. ELF files expose symbols by name. Windows IIRC has a weird old system for exporting symbols by ordinal, but it’s problematic and largely unused. But Docker images expose their APIs (ports) by number. The welcome-to-docker container has an interface called '8080'. Thanks.
At least the docs try to remind people that the whole mechanism is "insecure by default".
I even tried asking a fancy LLM how to export a port by name, and LLM (as expected) went into full obsequious mode, told me it's possible, gave me examples that don't do it, told me that Docker Compose can do it, and finally admitted the actual answer: "However, it's important to note that the OCI image specification itself (like in a Dockerfile) doesn't have a direct mechanism for naming ports."
> > And the compose files and quadlets are, of course, not really compatible with each other, nor are they compatible with Kubernetes without pulling teeth.
> What? This gets wilder as you go on. Why would you expect compose files to be "compatible" with k8s? They are two different ways to orchestrate containers.
I'd like to have some way for a developer to declare that their software can be run with the 'app' container and a 'mysql' container and you connect them like so. Or even that it's just one container image and it needs the following volumes bound in. And you could actually wire them up with different orchestration systems, and the systems could all read that metadata and help do the right thing. But no, no such metadata exists in an orchestration-system-agnostic way.
> If all of these things were true container usage, in general, wouldn't have been adopted to the point where it's as commonplace as it is today.
Software doesn't look like this. Consider git: it has near universal adoption, but there is a very strong consensus in the community that many of the original CLI commands are really bad.
> They're not so different. An environment is just big software.
Containers are not a software development platform, but a platform that can be used in the build phase of software development. They are very different. Docker is not inherently a software development platform because it does not provide the tools required to write, compile, or debug code. Instead, Docker is a platform that enables packaging applications and their dependencies into lightweight, portable containers. These containers can be used in various stages of the software development lifecycle but are not the development environment themselves. This is not just "big software" - which makes absolutely no sense.
> Right. The issue is that the default is wrong. In a container: $ echo foo >the_wrong_path
Can you do incorrect things in software development? Yes. Can you do incorrect things in containers? Yes. You're doing it wrong. If you are writing to a part of the filesystem that is not mounted outside of the container, yes, you will lose your data. Everyone using containers knows this and there are plenty of ways around it. I guess in your case you just always need to export the root of the filesystem so you don't footgun yourself? I mean c'mon man. It sounds like you'd like to live in a software bubble to protect you from yourself at this point.
> If I write an HTTP API, it has a name, like GET /name_goes_here. If I write a class or interface or trait, its methods have names. ELF files expose symbols by name. Windows IIRC has a weird old system for exporting symbols by ordinal, but it’s problematic and largely unused. But Docker images expose their APIs (ports) by number. The welcome-to-docker container has an interface called '8080'. Thanks.
You clearly don't understand Docker networking. What you're describing is the default bridge. There are other ways to use networking in Docker outside of the default. In your case, again, maybe just run your containers in "host" networking mode because, again, you're too ignorant to read and understand the documentation of why you have to deal with a port mapping in a container that's sitting behind a bridge network. Again you're making up arguments and literally have no clue what you're talking about.
> Software doesn't look like this. Consider git: it has near universal adoption, but there is a very strong consensus in the community that many of the original CLI commands are really bad.
OK? Grab a dictionary - read the definition for the word: "subjective", enjoy!
> > They're not so different. An environment is just big software.
> Containers are not a software development platform, but a platform that can be used in the build phase of software development. They are very different. Docker is not inherently a software development platform because it does not provide the tools required to write, compile, or debug code.
You seem to be arguing about something entirely unrelated. GNU make, Portage, Nix, and rpmbuild also don’t provide tools to write, compile, or debug code.
> Can you do incorrect things in software development? Yes. Can you do incorrect things in containers? Yes. You're doing it wrong.
This is the argument by which every instance of undefined behavior in C or C++ is entirely the fault of the developer doing it wrong, and there is no need for better languages.
And yes, I understand Docker networking. I also understand TCP and UDP just fine, and I’ve worked on low level networking tools and even been paid to manage large networks. And I’ve contributed to, and helped review, Linux kernel namespace code. I know quite well what’s going on under the hood, and I know why a Docker container has, internally, a port number associated with the port it exposes.
What I do not get is why that port number is part of the way you instantiate that container. The tooling should let me wire up a container’s “http” export to some consumer or to my local port 8000. The internal number should be an implementation detail.
It’s like how a program exposes a function “foo” and not a numerical entry in a symbol table. Users calling the function type “foo” and not “17”, even though the actual low-level effect is to call a number. (In a lot of widely used systems, including every native code object file format I’m aware of, the compiler literally emits a call to a numerical address along with instructions so the loader can fix up that address at load time. This is such a solved problem that most programmers, even assembly programmers, can completely ignore the fact that function calls actually go to more or less arbitrary numerical targets. But not Docker users: if you want to stick mysql in a container, you need to type in the port number used internally in that particular container.)
There are exceptions. BIOS calls were always by number, as are syscalls. These are because BIOS was constrained to be tiny, and syscalls need to work when literally nothing in the calling process is initialized. Docker has none of these excuses. It’s just a handy technology with quite poorly designed tooling, with nifty stuff built on top despite the poor tooling.
> Why is the port number part of the way you instantiate the container?
Because that’s how networking works in literally every system ever. Containers don’t magically "export" services to the world. They have to bind to a port. That’s how TCP/IP, networking stacks, and every server-client model ever designed functions. Docker is no exception. It has an internal port (inside the container) and an external port (on the host), again, when we're dealing with the default bridge networking. Mapping these is a fundamental requirement for exposing services. Complaining about this is like whining that you have to plug in a power cable to use a computer. Clearly your "expertise" in networking is... Well. Another misunderstanding.
> The tooling should let me wire up a container’s 'http' export to some consumer or to my local port 8000.
Ummmm... It does. It's called: Docker Compose, --network, or service discovery. You can use docker run -p 8000:80 or define a Docker network where containers resolve each other by name. You already don’t have to care about internal ports inside a proper Docker setup.
But you still need to map ports when exposing to the host because… Guess what? Your host machine isn't psychic. It doesn’t magically figure out that some random container process running an HTTP server needs to be accessible on a specific port. That’s why port mapping exists. But you already know this because "you understand TCP and UDP just fine".
> The internal number should be an implementation detail.
This is hands-down the dumbest part of the argument. Ports are not just "implementation details." They're literally how services communicate. Inside the container, your app binds to a port (usually one) that it was explicitly configured to use.
If an app inside a container is listening on port 5000, but you want to access it on port 8000, you must declare that mapping (-p 8000:5000). Otherwise, how the hell is Docker (or anyone) supposed to know what port to use? According to you - the software should magically resolve this. And guess what? You don’t have to expose ports if you don’t need to. Just connect containers via a shared network which happens automagically via container name resolution within Docker networking.
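A minimal sketch of what I mean (image name is made up):
$ docker network create appnet
$ docker run -d --network appnet --name api my-api-image
$ docker run --rm --network appnet alpine wget -qO- http://api:5000/
No published ports anywhere; the containers resolve each other by name over the shared network.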
Saying ports should be an "implementation detail" is like saying street addresses should be an implementation detail when mailing a letter. You need an address so people know where to send things. I'm sure you get all sorts of riled up when you need to put an address on a blank envelope because the mail should just know... Right? o_O
I feel like we're talking right past each other or something.
Of course every TCP [0] and UDP networking system ever has port numbers. And basically every CPU calls functions with numeric addresses. And you plug in power cables to use a computer. Of course Docker containers internally use ports -- if I have a Docker image plus its associated configuration, and I instantiate it as a container, and it uses its internal port 8080 to expose HTTP, then it uses a port number.
But this whole conversation is about Docker's tooling, not about the underlying concept of containers.
And almost every system out there that has decent tooling has abstraction layers to make this nicer. In AT&T assembly language, I can type:
1:
... code goes here
and that code is called "1" in that file and is inaccessible from outside. If I want to call it from outside, I type something more like:
name_of_function:
... code goes here
with maybe a .globl to go along with it. And I call it by typing a name. And that call still calls the numeric address of that function.
If I plug in a power cable to use a computer, I do not plug it into port 3 on the back of the computer, such that accidentally plugging it into port 2 will blow a fuse. I plug it into a port that has a specific shape and possibly a label.
So, yes, I know that "If an app inside a container is listening on port 5000, but you want to access it on port 8000, you must declare that mapping (-p 8000:5000)", but that's not a good thing. Of course, if it's listening on port 5000, I need to map 8000 to 5000. But the fact that I had to type -p 8000:5000 is what's broken. The abstraction layer is missing. That should have been -p 8000:http or something similar.
And the really weird thing is that the team that designed Dockerfile seemed to have an actual inkling that something was needed here, which is why we have:
EXPOSE 8080
VOLUME ["/mnt/my_data"]
but they completely missed the variant that would have been good, something like:
EXPOSE 8080 AS http
or whatever other spelling of the same concept would have passed muster.
And yes, Docker Compose helps, but that's at the wrong layer. Docker Compose is a consumer of a container image. The mapping from logical exposed service to internal port should have been handled at an abstraction layer below Docker Compose, and Compose and Quadlet and Kubernetes and the command line could all share that abstraction layer.
> ... service discovery. You can use docker run -p 8000:80 or define a Docker network where containers resolve each other by name. You already don’t have to care about internal ports inside a proper Docker setup
Can you point me at some relevant reference? Because, both in my experience and from (re-)reading the basic docs, all of the above is about finding an IP address by which to communicate with a relevant service, not about port numbers, let alone internal port numbers (which are entirely useless to discover from inside another container, because you can't use them there anyway). Even Docker Swarm does things like:
$ docker service create ... --publish published=8080,target=80
and that's another site, external to the container image in question, where one must type in the correct internal port number.
> I'm sure you get all sorts of riled up when you need to put an address on a blank envelope because the mail should just know... Right? o_O
I will take this the most charitable way I can. Sure, it's mildly annoying that you have to use someone's numerical phone number to call them, and we all have contact lists to work around this, but that's still missing the target. I'm not complaining about how you address a docker container, and it makes quite a bit of sense that you need someone's phone number to call them. But if you had to also know that that particular phone you were calling had its microphone on port 83, and you had to tell your phone that their microphone was port 83 if you wanted to hear them, and you had to change your contact list if they changed phone models, then I think everyone would be rightly annoyed.
So I stand by my assertion: Docker's tooling is not very good.
[0] But not every networking protocol ever. Even in the space of non-obsolete protocols, IP itself has no port numbers. And the use of a tuple (name or IP, port) is actually a perennial source of annoyance, and people try to improve it periodically, for example with RFC 2782 SRV records and, much more recently, RFC 9460 SVCB and HTTPS records. This is mostly off-topic, as these are about externally visible ports, and I’m talking about internal port numbers.
I don't see your point. This is exactly how Docker works. Containers instantiated by the Docker daemon don't need to run as root. But you can... Just like your containers started from SystemD (quadlet).
I run all my containers, when using Docker, as non-root. So where is the upside other than where your trust lies?
Quadlets is systemd. Red Hat declared it to be the recommended/blessed way of running containers. podman compose is treated like the bastard stepchild (presumably because it doesn't have systemd as a dependency).
Please try to understand the podman ecosystem before lashing out.
yeah, it runs fine without systemd, until you need a docker compose substitute and then you get told to use quadlets (systemd), podman compose (neglected and broken as fuck) or docker compose (with a daemon! also not totally compatible) or even kubernetes...
> Anybody who moves from engineering to finance doesn't have their heart in engineering - which is fine, but its not like they had no choice.
Ahhh, the classic no-true-engineer/Scotsman argument... I couldn't possibly be an Engineer because I like hard software projects with smart people, good budgets, and tight deadlines.
Gandi went sour when the original French company was forced to open a separate company for the US, several years before being sold. IIRC it was related to EU privacy, but they publicly stated it was about credit card processing.