The thinking is that they will offer it at a high enough price that they don't go broke. The problem is that a lot of consumers, if not most, would be fine reading street signs or using MapQuest.
It's liquid propellant being vented; the fuel is under extreme pressure, so when it's released it immediately expands into a gas. I don't know that Honda has said what their propellant is, but it's probably liquid hydrogen and liquid oxygen.
Waymo is firmly at SAE Level 4 ("automated driving"), versus Tesla at Level 2 ("driver support").
You're correct that Waymo has operators who can jump in if something unusual happens, but the vast majority of the time the vehicle operates autonomously within its defined area. All of that is covered by SAE Level 4, where no human is driving while the automated driving system is engaged.
The 2025 Hyundai Ioniq 5 has NACS and is my favorite alternative in terms of looks, features, and price. Rivian has been in a different price category, so I have a hard time believing the $45k starting price.
The author wanted to use languages that were new to them; if the author has enough familiarity with Rust to have a vendetta, then it probably isn't new to them.
It really isn't that challenging to get going with JWT auth in AWS. GitLab has pretty good documentation on using GitLab ID tokens to assume roles, which covers everything except how to generate a JWT: https://docs.gitlab.com/ee/ci/cloud_services/aws/
And of course generating OIDC PKI JWTs is pretty easy and well documented elsewhere.
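For concreteness, here's a minimal sketch of that flow in Python with boto3, assuming the CI job's ID token is exposed in an environment variable named GITLAB_OIDC_TOKEN (the name GitLab's docs use in their examples) and a hypothetical role ARN:

```python
# Sketch: exchange a CI ID token (a JWT) for temporary AWS credentials.
# The role ARN and env var name here are assumptions, not anything official.
import os
import boto3

sts = boto3.client("sts")
creds = sts.assume_role_with_web_identity(
    RoleArn="arn:aws:iam::123456789012:role/gitlab-ci",  # hypothetical role
    RoleSessionName="gitlab-ci-job",
    WebIdentityToken=os.environ["GITLAB_OIDC_TOKEN"],  # the CI ID token
)["Credentials"]

# Use the temporary credentials for subsequent AWS calls.
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
```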
The harder parts in my mind are:
- Updating this OSS project to serve a JWK set from the OIDC .well-known endpoints (see the sketch after this list)
- Convincing people that this method of authn is safe and that those keys are securely stored
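On the first point, here's a rough sketch of what serving the OIDC discovery document and a JWK set could look like, written as a minimal Flask app; the issuer URL, key ID, and key values are all placeholders I made up, not anything from the project:

```python
# Hypothetical sketch of the two discovery endpoints an OIDC relying party
# (e.g. AWS IAM) fetches when validating ID tokens from an issuer.
from flask import Flask, jsonify

app = Flask(__name__)
ISSUER = "https://ci.example.com"  # placeholder issuer URL

@app.get("/.well-known/openid-configuration")
def openid_configuration():
    # Discovery document pointing at the JWKS endpoint.
    return jsonify({
        "issuer": ISSUER,
        "jwks_uri": f"{ISSUER}/.well-known/jwks.json",
        "id_token_signing_alg_values_supported": ["RS256"],
        "response_types_supported": ["id_token"],
        "subject_types_supported": ["public"],
    })

@app.get("/.well-known/jwks.json")
def jwks():
    # Public half of the token-signing key, as a JWK set. Values are placeholders.
    return jsonify({
        "keys": [{
            "kty": "RSA",
            "kid": "ci-signing-key-1",
            "use": "sig",
            "alg": "RS256",
            "n": "<base64url RSA modulus>",
            "e": "AQAB",
        }]
    })
```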
The US Army, at least, uses Azure and AWS GovCloud rather than its own infrastructure. I don't think this takes away from your points, though; the infrastructure is very locked down and meticulously managed and approved.
An LLM isn't providing its "best" prediction; it's providing "a" prediction. If it were always providing the "best" (highest-probability) token, the output would be deterministic.
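A toy illustration of that distinction, using a made-up next-token distribution: greedy decoding (always take the argmax) returns the same token every time, while the sampling typically used in LLM inference does not:

```python
# Toy example: deterministic greedy decoding vs. non-deterministic sampling.
# The vocabulary and probabilities are made up for illustration.
import random

vocab = ["cat", "dog", "fish"]
probs = [0.5, 0.3, 0.2]  # model's next-token distribution (hypothetical)

def greedy(vocab, probs):
    # "Best" token: argmax. Same input -> same output, every time.
    return vocab[max(range(len(probs)), key=probs.__getitem__)]

def sample(vocab, probs):
    # What LLM inference typically does: draw from the distribution.
    return random.choices(vocab, weights=probs, k=1)[0]

print(greedy(vocab, probs))   # always "cat"
print(sample(vocab, probs))   # usually "cat", sometimes "dog" or "fish"
```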
In my mind the issue is more accountability than concerns about quality. If a person acts in a bizarre way, they can be fired, or helped, in ways that an LLM can never be. When Gemini tells a student to kill themselves, we have no recourse beyond trying to implement output filtering or completely replacing the model with something that likely has the same unpredictable, unaccountable behavior.
Are you sure that always providing the best guess would make output deterministic? Isn't the fundamental point of learning, whether done by machine or human, that our best gets better and is hence non-deterministic? Doesn't what is best depend on context?
There is no other way to use an LLM than to give it context and have it give its best guess; that's how LLMs fundamentally work. You can give it different context, but it's still just guessing at tokens.
I'm fairly sure the only solution here is 2 down to 3 right to 1 to the goal. You can then use this to generate a couple more by changing all the numbers that are impossible to reach.