
Maybe it's not a coincidence that public services got worse when everything moved online.

In the process lots of things have been captured by private companies that consolidated and got richer and more powerful than many countries.

Now we have a bunch of rent seekers who don't support our local economies, but pour profits through Bermuda-Luxembourg money funnels into their Scrooge McDuck vaults.


For those who TL;DR'd: it's for the factory, not the cars!

Old EV batteries are great for energy storage. A worse weight-to-capacity ratio doesn't matter for batteries sitting on the ground. A battery that holds only 70% of its original capacity is considered worn-out for EVs (and even replaced under warranty), but grid storage isn't driving anywhere, so any capacity left is still useful.


Battery banks are worse than degraded raid arrays in some important respects. The bad cells tend to try to bring the rest of the pack with them. It’s one of the reasons people keep toying with partitioning cells and putting controllers onto individual cells or small groups of them.

Parting out two or three dead battery packs to cull the best of the survivors can improve things quite a bit. And as you say, on a stationary pack you can afford to overdo telemetry, cooling, and safety circuitry because it doesn’t have to move, let alone accelerate.

I don't know what the half-life is like for the reused cells though. Do the cells that lasted twice as long as their neighbors continue to outperform, or do they revert to the mean over time? I could see either being true. The days when a line accidentally produces cells several standard deviations better than the target quality should yield cells that last longer, unless they're sold to a lead-footed driver.


A degraded battery bank does not mean a bank with outright "bad" cells. The cells will probably be far more out of spec than they used to be, but there can still be plenty of effective capacity in the bank. Heck, if space isn't an issue, it's productive as long as it isn't self-discharging too fast.

You can still have a working battery even with some "bad" (i.e., way out of spec) cells, depending on the BMS. All the thresholds are configurable, just that a regular EV setup would lean towards safety.


Plus, if you aren't making your packs unrepairable on purpose with foamed construction (like Tesla), you can part out the modules in the packs into new configurations fairly easily for the amount of work needed.

There's usually a pretty big gap in time between 'worn' at say 70-80% of original capacity, and a pack that has actual failed cells in it too.

>The bad cells tend to try to bring the rest of the pack with them.

This is true (and in some cases potentially dangerous) when you have several cells of varying voltages in parallel, but it's fairly trivial (by EE standards at least) to overcome this with something similar to a charge pump.


You are talking about cells going bad, and they are talking about cells that have less capacity from being cycled; not the same thing at all.

No, I'm talking about what warrants putting a pack on the secondary market, and they're being unreasonable. Merely being at 80% of initial capacity isn't going to prompt someone to swap the batteries out. A damaged pack will.

There are lots of car battery packs out there on the secondary market for many different reasons. Not every single battery pack is taken out from a good car because it has a bad cell.

they’re being unreasonable

No one is being 'unreasonable'; you just started talking about something different.


Ehhh it really depends. There are plenty of vehicles that are getting traded in where manufacturers/dealers would rather just replace the batteries.

I'd wager the bulk of them are hybrids where the batteries see a pretty aggressive charge/discharge cycle on a relatively small capacity (and therefore being relatively cheap to replace compared to a full electric). Of course then there are also full electrics where the owners get upgraded capacity or replacements due to degradation from use.

And importantly they aren't just recycling EV batteries here. They are using lithium-ion, nickel metal-hydride, and lead-acid batteries. So they are also buying up traditional ICE automotive batteries as well.

Also worth noting this project is a collaboration between Toyota, JERA, and local universities for use at JERA's facilities. JERA is a large battery reprocessing and recycling company so they are already getting second hand batteries into their facilities on a regular basis. This project is primarily about doing the design and engineering work necessary so that JERA can set up an array of these battery containers, get notified when a unit fails, and swap out the battery with one from their stock for recycling.


> It’s one of the reasons people keep toying with partitioning cells and putting controllers onto individual cells or small groups of them.

I have been out of the battery tech game for a while now, but decades ago we were balancing individual NiCd and NiMH cells for optimal performance. Is this basically the same thing?

https://electronics.stackexchange.com/questions/463591/nicke...


Usually a group of cells are welded together using a conductor. If they are in series, you need to balance the cells using balance leads. If they are in parallel though, they are balanced ahead of time to prevent too much current between the cells and thereafter they will balance themselves once they are wired in parallel.

In a parallel bank, a single cell going bad can drag the rest down to the same voltage. Even worse, if the bank is directly connected to other banks, it can take them out as well. Also, if there is an internal short in one cell, the rest will pump current through it, very effectively lighting it on fire. Individual battery protection circuits, smart switches, and internal short detection can help with this.
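A toy model of why a failing neighbour is so dangerous in a parallel bank. All numbers here are hypothetical ballpark figures, just to show the scale: the current between two parallel cells is set by their voltage difference and their internal resistances.

```rust
// Rough model: two cells wired in parallel equalize through their
// internal resistances, so the current between them is
// I = (v1 - v2) / (r1 + r2). Voltages (V) and resistances (ohms)
// below are made-up but plausible values for li-ion cells.
fn equalization_current(v1: f64, v2: f64, r1: f64, r2: f64) -> f64 {
    (v1 - v2) / (r1 + r2)
}

fn main() {
    // Two healthy cells slightly out of balance: a small current flows.
    let mild = equalization_current(4.10, 4.05, 0.03, 0.03);

    // One cell with an internal short, its voltage collapsing: the
    // healthy neighbour pumps an enormous current into it.
    let short = equalization_current(4.10, 0.50, 0.03, 0.01);

    println!("mild imbalance: {:.2} A, internal short: {:.1} A", mild, short);
}
```

That second number is the scale of current a healthy cell will happily dump into a shorted neighbour, which is why per-cell fusing or protection matters.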


At the cell level they don’t degrade linearly. First it’s slow then it’s fast and then it’s abrupt collapse. You probably have noticed that yourself with old devices. Some do not hold charge even for a minute.

With battery packs probably you can do some smart things to make the degradation curve look more linear, but again there is only so much you can do.


There’s quite a lot you can do when you can isolate and deactivate individual cells, big battery packs like this really do not fall off a cliff in the same way you’re describing.

How do you achieve this with cells in series? Do you kill an entire row and put it on "maintenance voltage"? I know how balance charging RC batteries work but they're smaller scale and "single row".

Also, in the RC car world it's generally preferred to have one cell per voltage step; I've had way more dual-cell (per series step) packs fail than single ones. Though my experience is only 6s/~22 V, it's "the same shit on a bigger scale" as far as I can comprehend.


> How do you achieve this with cells in series?

You don't; if one cell fails, you shut off the whole array of cells that is in series. But a pack has several such arrays.


No EV that I'm aware of has the ability to bypass a single cell in a series string.

I have often wondered if it would be worth designing an EV battery that can permanently short out a bad cell in a string, perhaps by deliberately disabling balancing, letting the bad cells voltage fall to zero, and then perhaps having a single use 'bypass' that latches on.

It wouldn't be a seamless user experience, because if you discharge the cell to say 0.5 volts but then the user tries to charge their car, you can't let them, since you cannot safely charge a lithium cell which has fallen below the minimum voltage, but you also cannot bypass it till the voltage falls to zero. Could be done automatically at 3am though like system updates.


I guess you can put a relay/switch in for each cell in a series string, but then you need to account for the voltage difference when taking cells out: either by over-provisioning within the string and rotating in different cells, or by having other strings take up the slack.

Either way you need some form of overbooking / compensating capacity.


A relay would allow you to switch it out and then back in again. Which you don't need. Just a fusible link that can be blown to permanently disconnect the battery from the string might be simpler and more ideal for the application.

Nearly all EVs today would run just fine on 3.7 volts fewer.

My car's high voltage circuitry seems to work down to about half of the nominal voltage.


Is the failure mode of whatever chemistry is in an EV that the cells just conduct electricity? Li-Po usually fails in more spectacular ways than that.

There's NCA (drones), NMC (laptops, phones, many electric cars), LFP (stationary/grid), and LFMP (many new electric cars; slightly more expensive but higher current variant of LFP).

Yes, this happens sometimes.

Search this page for "PTC": https://www.electricbike.com/inside-18650-cell/

The PTC protects the rest of the battery if a single cell internally fails short.


Why can you not bypass until the voltage falls to zero? I'm with you aside from that.

If you just close the bypass switch, a large current will flow out of the (dying) cell, making a lot of heat.

You could have a two-way bypass, disconnecting the original cell, but that would cost more. Remember the bypass switch is duplicated for every series cell group (hundreds) and must carry the whole battery current.

Or you could have some kind of slow drain resistor - but then you're back to the time issue.


Okay, I understand. I was definitely imagining some sort of latching switch, that would connect to the bypass instead of the cell. Makes sense that would be more expensive.

I don't know, to be honest. I just know that you can find second-hand car batteries that have been used for 15-20 years and still hold 40% of their charge, and I don't know of any phone battery that could say the same. I could be talking out of my ass, but that's what I thought.

Sudden failure in a big battery like these is usually due to a single cell failing, which can usually be replaced and then the battery pack is back to the 70% capacity or whatever. Probably in this context of scale it's worth doing the work of replacing bad cells.

Nobody is out there opening packs and replacing single cells. A battery pack is usually composed of multiple modules, and each module can have multiple arrays of cells in series. You shut off the whole array around the cell that failed, and the battery keeps working fine at reduced capacity.

If it happens multiple times in the same module you replace a whole module of cells. The packs can usually be disassembled and parts replaced, but the modules are usually soldered down to prevent/mitigate thermal runaway.

Also you can't mix cells of different chemistry or capacity together in the same module. So really if one fails in a module you replace the whole module. Or, in their case, just keep the module there disconnected until the whole battery fails then you scrap the whole battery pack. I assume it is not worth it for them to do any kind of replacements.


You do realize these batteries you're referring to resell at a decent price because, for the most part, they still function really well, just not at their original capacity.

Balancing multiple battery packs at different wear levels is a huge nightmare. You have to run rebalancing operations all the time, and on used packs that can be quite dangerous: it can trigger thermal runaway.

If they do it with different types of batteries it is even more complicated, like you need to write some custom software to sync all that up. This is not a trivial project.
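For a sense of what a rebalancing pass even does, here is a grossly simplified sketch of passive balancing. Real BMS firmware bleeds charge through resistors gradually, with hysteresis; the 10 mV tolerance below is a made-up figure:

```rust
// Toy passive-balancing pass over one series string: any cell sitting
// more than `tolerance` volts above the weakest cell gets bled down
// toward it. Real firmware does this slowly via bleed resistors.
fn balance_pass(cells: &mut [f64], tolerance: f64) {
    let min = cells.iter().cloned().fold(f64::INFINITY, f64::min);
    for v in cells.iter_mut() {
        if *v > min + tolerance {
            *v = min + tolerance; // pretend the bleed resistor finished
        }
    }
}

fn main() {
    // A string mixing cells at different wear levels drifts apart fast.
    let mut string = vec![4.12, 4.05, 3.98, 4.11];
    balance_pass(&mut string, 0.01);
    println!("{:?}", string); // everything pulled to within 10 mV of the weakest cell
}
```

Mixing chemistries makes even this simple picture break down, since the per-cell voltage targets stop being comparable, which is the point the comment above is making.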


They reuse inverters too. So the packs are separated, no need to balance.

So? They considered it to be worthwhile for them, thus they are working on it.

I know; it seems it is a fairly large operation, though. If it weren't, I would assume it was a PR stunt.

This is definitely not worth doing for small scale operations. As far as I know there is no generic solution for doing this kind of thing (I used to work in the area, but not directly on the BMS systems).


More often than not these bad cells can be removed, remanufactured, and reinstalled. Not 100%, but almost as good as new.

Making your own cells is fun.

For Toyota, this is trivial, and the energy storage these "left over" batteries provide, given some tinkering, is sufficient.


Your own cells? Like you put together a lasagna of metal sheets and chemical goops and roll it up?

Nah, you can buy 18650 cells, nickel strips, and a spot welder, make a BMS, and build your own battery pack.

If you don't know what you're doing, don't do this. Even if you know, probably don't. It works, and is a regular industrial process, sure, but you're trying to perform controlled melting of the protective housing of one of the many, tightly packed chemical energy bombs you're sticking together. Doesn't take too much of a mistake to go very wrong.

If you're building a battery pack in this day and age, use something like LiFePo prismatic cells and bolt-on busbars instead (way less dangerous chemistry, way less spicy process), but realistically speaking just buy the premade packs. For normal sizes they're not more expensive (but don't buy the "too good to be true" ones), and it means not having to deal with entirely unprotected battery terminals eager to give you a Very Bad Time.


Generally speaking, the cells that are welded on are designed to be welded in the areas where you do the welding. Doing something other than welding on them properly is going to be more unsafe than welding.

The proper tools to do this are not that expensive anymore in the greater scheme of things. It is just a question of whether it is worth doing at your scale, or whether you should pay somebody else to do it.

Of course if you buy cells that are designed to be bolted together then bolt them together.

Of course the bolts, or whatever else provides the threads, on those cells are welded on.


> Generally speaking, the cells that are welded on are designed to be welded in the areas where you do the welding.

... by automated spot welder programmed to the specified timing and temperature control from the cell spec sheet, in a controlled environment with suitable protection and fire suppression for a battery manufacturing line. Not by a hobbyist's first try with a homemade spot welder and a safety squint.

I have made such a spot welder and done such spot welding. Sure, it's fun to do stupid things, but it remains stupid and unnecessary. For a homebrew battery bank, this is the wrong tool, the wrong cell, and the wrong chemistry.

Buy premade, or if you must, buy boltable prismatic LiFePo cells. They can dump a lot of power if you short them, but you can drill straight through them and they'll remain stable. The random 18650 li-ion cells... not so much.


Seconding the advice to just use prismatic LiFePo cells. Those have become really cheap, too: you can order brand-new 1 kWh cells for $60ish plus shipping, even if you only need single-digit quantities (they want to be squished a bit for longevity, so you might have to design a suitable enclosure).

Energy autarky has never been so affordable, progress on batteries and solar panels was awesome over the last decade.


Context: there are chopstick-shaped, ultra-cheap Chinese battery tab welders available online, as well as no-brand battery tab value packs to use with them.

One tab is placed onto each end of the cell, held down with the sticks, and instantly welded at the push of a button. This is much safer than heating up the whole battery by attempting to solder wires directly onto the cells (which are made of unsolderable materials anyway). The tabs limit heat conduction into the battery, and it is considered safe to solder onto them.

If you're going to build your own battery pack no matter what anyone says or does, this type of cigarette contains significantly less nicotine and tar than the others.


That is remarkably different from building your own CELLS. You are building your own pack in whatever series and parallel configuration you want, which I agree is fun and a good skill.

Username checks out. :D

If you take car EV batteries and use them for stationary storage when past end-of-life, the fire risk becomes fairly substantial because EV batteries often have a little water ingress, physical damage etc.

It can be solved by isolating each battery in its own steel box, but that gets fairly expensive fairly fast.


I've seen a video on YouTube where a battery recycling company does this; they leave the car battery packs in their original housing, which I presume is water-resistant enough. Each unit is also connected to a controller, which I presume monitors battery health, temperature (assuming temp sensors on car battery packs), voltage, etc. If a unit is dying they can safely dispose of it; otherwise, the units sit out in the open with several meters in between, so any fire would be unlikely to spread to anything else, and there are plenty of access opportunities.

Very space inefficient though, but there's more than enough of that in the US.


> Very space inefficient though, but there's more than enough of that in the US.

Well, you could perhaps put photovoltaic cells on top to use that space? Your battery park needs to be connected to the grid anyway.


How much distance does one pack realistically need to not cascade? Honestly, I can't imagine any more than half a meter, since air is an extremely good insulator. Just make sure the fire can't crawl across through cable insulation?

I've personally set RC lipo on fire with the wood-nail-hammer technique and while the fire out of the pack is intense I can't imagine it igniting another pack.


Don't forget about radiant heat! There's a pretty much perfect insulator between the sun and Earth conduction-wise, yet it is still pretty cozy up here.

That's fair, I don't know how to calculate this but my naive assumption is a burning pack won't radiate enough to combust something 1 meter away if there's circulating air around.

Precautionary principle. There's no good way to extinguish these fires once they start, so you kinda have to let them burn out. Maybe you could use some sort of deluge system or aggressive liquid cooling on the surrounding cells, however: overbuild the delivery system, but run the pumps at their most efficient CFM except when the smoke alarms go off.

Do we use the precautionary principle when we run nuclear, build dams and burn coal as well or is this an extra thing because it's a potentially good way to reuse EV batteries? I don't think we should build these hand-me-down EV batteries near population centers, but my understanding is that the worst case scenario would be the plant burning down and releasing bad things (hello coal & natgas) into the atmosphere?

If we could develop some basic standards for packs (which voltage steps per row and some kind of connector interface standard like for charging) I think we have a really good way to maximize the lifetime and use of EV batteries to help the environment.

To paraphrase Bill Gates: there's no one energy source that will save us; many will complement each other.


> precautionary principle when we run nuclear, build dams

Yes. Dams in particular. You calculate for various failure modes and you design around mitigating the disaster should failure occur. That's why dams are designed with emergency spillways. If there is a bunch of rain, gate failures, etc., and you suddenly have more water than you know what to do with, you have the emergency spillway as a last resort. They exist to route water in high volume out of the reservoir, often in a sacrificial manner, in an attempt to prevent the dam from failing. And if a dam does fail, it's preferable that it do so at the emergency spillway rather than elsewhere. So there is a certain amount of "in certain conditions failure can/will happen, so this is how we design the system to fail as gracefully/least destructively as possible".

Nuclear has this as well. The plans for this are called "Severe Accident Mitigation Guidelines" or SAMGs with the general practice being called SAM (same abbreviation, just drop the G). Each nuclear site has them and they are generally framed as "this shouldn't go wrong but if it does". You can try to avoid those failure modes but they can always still potentially occur and the most you can do is just try to keep the damage from spreading to the best of your ability.


"Do we use the precautionary principle when we run nuclear, build dams and burn coal as well" ayfkm?

Dams burst, nuclear blows up. It's rare, but it happens. Their worst-case scenarios are worse than a park of batteries burning down on a gravel/concrete pad?

Your interpretation seems to be "we don't use caution when building them", which is not what I meant at all; we do, but the risk is non-zero.


> There’s not good ways to extinguish these fires once they start.

If one battery pack catches fire, you can start moving the others away from it.

If you normally keep 0.5m between them, you have plenty of buffer space to eat into.

Basically it would start as . . . X . . . with X being the pack on fire "." being a battery pack not on fire, and " " being the half metre between them. Then you move them to get:

... X ...

Where the dots now have perhaps only 30cm between them, but the space to the X is increased.


Eh, the math is probably pretty different with massive quantities of packs in racks.

I'm imagining every firefighter I've ever known suddenly having the hair stand up on the back of their necks.


Maybe they can also make the boxes out of concrete for a lot cheaper?

This is like picking up unfinished cigarette butts and taking a few final puffs before burning your mouth.

worn-out batteries can swell and fail spectacularly, with fireworks


Except that the cigarette is only 30% smoked and still perfectly fine to smoke for a while longer (if you insist on the analogy).

Car battery packs are really good; even the oldest Teslas are only now getting to less than 80% capacity. They are designed not to swell/fail if they're worn, else there would be a lot more car fires.


I think you may be confusing worn-out batteries with damaged batteries; these two subclasses do not have the same properties.

Car power packs are batteries in the other sense of the word: they can be disassembled and culled. So what matters is the health of the best 1/k-th of the cells in the array, not the overall array health.

To a large extent yes, but Rust adds more dimensions to the type system: ownership, shared vs exclusive access, thread safety, mutually-exclusive fields (sum types).

Ownership/borrowing clarifies whether function arguments are given only temporarily to view during the call, or whether they're given to the function to keep and use exclusively. This ensures there won't be any surprise action at a distance when the data is mutated, because it's always clear who can do that. In large programs, and when using 3rd-party libraries, this is incredibly useful. Compare that to Go, which has types for slices, but whose type system has no opinion on whether data can be appended to a slice (what happens depends on capacity at runtime), and which can't lend a slice as a temporary read-only view (without hiding it behind an abstraction that isn't a slice type any more).

Thread safety in the type system reliably catches at compile time a class of data race errors that in other languages could be nearly impossible to find and debug, or at the very least would require catching at run time under a sanitizer.
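As a small illustration of the borrow-vs-ownership distinction described above (a toy example of my own, not from the comment): the signature alone tells you whether a function merely views the data or can grow it.

```rust
// A shared borrow: a temporary read-only lease. This function cannot
// append to the data or keep it past the call.
fn sum(view: &[i32]) -> i32 {
    view.iter().sum()
}

// An exclusive borrow: the caller is statically prevented from reading
// or writing `buf` until this call returns, so mutation can't surprise
// anyone at a distance.
fn push_twice(buf: &mut Vec<i32>, x: i32) {
    buf.push(x);
    buf.push(x);
}

fn main() {
    let mut data = vec![1, 2, 3];
    let total = sum(&data);   // lend a read-only view; `data` is still ours
    push_twice(&mut data, 4); // lend exclusively; no aliasing possible
    println!("{} -> {:?}", total, data);
}
```

This is the guarantee the slice example about Go is missing: in Rust, `&[i32]` vs `&mut Vec<i32>` makes "can this callee append?" part of the type.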


What annoys me about borrowing is that my default mode of operating is to not mutate things if I can avoid it, and I go to some lengths to avoid it, but Rust then forces me to copy or clone to be able to use things that I won't mutate anyway after passing them to another procedure. That creates a lot of mental and syntactical overhead. In an FP language you pass values, and the assumption is already that you will not mutate things you pass as arguments, so there is no extra work needed to pass things and still use them later.

Basically, I don't need ownership if I don't mutate things. It would be nice to have ownership as a concept in case I do decide to mutate things, but it sucks to have to pay attention to it when I don't mutate, and to carry that around all the time in the code.


It sounds like you may not actually know Rust then, because non-owning borrows and ownership are directly expressible within the type system:

Non-owning non mutating borrow that doesn’t require you to clone/copy:

    fn foo(v: &SomeValue)
Transfer of ownership, no clone/copy needed, non mutating:

    fn foo(v: SomeValue)
Transfer of ownership, foo can mutate:

    fn foo(mut v: SomeValue)

AFAIK Rust already supports all the expressivity you're asking for. But if you need two things to maintain ownership over a value, then you have to clone by definition, wrapping in Rc/Arc as needed if you want a single version of the underlying value. You may need to do more syntax juggling than with F# (I don't know the language, so I can't speak to it), but that's a tradeoff of being a systems engineering language targeting a completely different spot on the performance spectrum.

Can you give examples of the calls for these procedures? Because in my experience when I pass a value (not a reference), then I must borrow the value and cannot use it later in the calling procedure. Passing a reference of course is something different. That comes with its own additional syntax that is needed for when you want to do something with the thing that is referred to.

> Because in my experience when I pass a value (not a reference), then I must borrow the value and cannot use it later in the calling procedure.

Ah, you are confused on terminology. Borrowing is a thing that only happens when you make references. What you are doing when you pass a non-copy value is moving it.

Generally, anything that is not Copy that you pass to a function should be a (non-mut) reference, unless it specifically needs to be something else. This allows you to borrow it in the callee, which means the caller gets it back after the call. That's the workflow the type system works best with; thanks to auto-ref, having all your functions take borrowed values is the most convenient way to write code.

Note that when you pass a value to a function, in Rust that is always a bitwise copy. For non-Copy types, that just means move semantics, meaning you must also stop using the value at the call site. You should not deal with this in general by calling clone on everything; instead, derive Copy on the types for which it makes sense (small, value semantics), and use borrowed references for the rest.
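A minimal sketch of that advice (the types and functions here are made up for illustration): derive Copy on small value-like types, and borrow everything else.

```rust
// Small, value-semantics type: deriving Copy makes passing it by value
// cheap and leaves the original usable after the call.
#[derive(Clone, Copy, Debug, PartialEq)]
struct Point {
    x: i32,
    y: i32,
}

fn magnitude2(p: Point) -> i32 {
    // `p` arrived as an implicit copy.
    p.x * p.x + p.y * p.y
}

// Potentially large data: borrow it rather than moving or cloning.
fn longest(words: &[String]) -> usize {
    words.iter().map(|w| w.len()).max().unwrap_or(0)
}

fn main() {
    let p = Point { x: 3, y: 4 };
    let m = magnitude2(p); // p is copied, so it's still usable below
    assert_eq!(p, Point { x: 3, y: 4 });

    let v = vec!["hello".to_string(), "hi".to_string()];
    let l = longest(&v); // v is only borrowed, so it's still usable too
    println!("{} {} {:?}", m, l, v);
}
```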


It is not possible, then, to pass a value (not a reference) without implementing or deriving Copy or Clone, if I understand you correctly. That was my impression earlier. Other languages let you pass a value, and I just don't mutate it if I can help it. I usually don't want to pass a reference, as that involves syntactical "work" when using the referenced thing in the callee. In many other languages I get that at no syntactical cost: I pass the thing by its name and I can use it in the callee and in the caller after the call.

What I would prefer is, that Rust only cares about whether I use it in the caller after the call, if I pass a mutable value, because in that case, of course it could be unsafe, if the callee mutates it.

Sometimes Copy cannot be derived, and then one needs to implement it, or Clone. A few months ago I used Rust again for a short while, and I had that case. If I recall correctly it was some Serde struct, and Copy could not be derived because the struct had a String or &str inside it. That should be a fairly common case.


You can pass a value that is neither copy or clone, but then it gets moved into the callee, and is no longer available in the caller.

Note that calling by value is expensive for large types. What those other languages do is just always call by reference, which you seem to be confusing with calling by value.

Rust can certainly not do what you would prefer. In order to typecheck a function, Rust only needs the code of that function and the type definitions of everything else; the contents of the other functions don't matter. This is a very good rule, which makes code much easier to read.


Is it expensive in Rust? Normally only data on the stack gets copied; heap data is untouched.

Yes, but if you have a large value type it will be on the stack unless you manually box it. Passing by value can get quite expensive quite fast, especially if the value keeps being passed up and down the call chain.

Is this really true? What do you mean by value types? The types that implement copy or any struct types? Because I think struct types only get moved

Yes, Rust will not automatically turn a value into a reference for you. A reference is the semantic you desire. If you have a value, you’re gonna have to toss & on it. That’s the idiomatic way to do this, not to pass a value and clone it.

&str is Copy, String is not.
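To unpack that one-liner with a toy example: a &str is a Copy-able reference into the text, while a String owns its heap buffer and moves on assignment.

```rust
// Takes any string view; both &str literals and &String (via deref
// coercion) work here.
fn byte_len(s: &str) -> usize {
    s.len()
}

fn main() {
    let owned: String = String::from("hello");
    let view: &str = &owned; // borrowed, Copy-able view into the String
    let view2 = view;        // copies the reference; `view` stays usable
    assert_eq!(byte_len(view), byte_len(view2));

    let moved = owned;        // moves the String out of `owned`
    // println!("{}", owned); // would not compile: `owned` was moved
    println!("{} {}", byte_len(&moved), moved);
}
```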


> Other languages let you pass a value, and I just don't mutate that, if I can help it

How do they do that without either taking a reference or copying/cloning automatically for you? Would be helpful if you provide an example.


I did not state, that they don't automatically copy or clone things.

I might be wrong about what they actually do, though. It seems I merely dislike the need to specify & for arguments, and then having to deal with the fact that inside procedures I cannot treat them as values but need to stay aware that they are merely references.


C++ auto copies as well, it's just a feature of value semantics. References must be taken manually - versus Java or C#, where we assume reference and then have to explicitly say copy. Rust, I believe, usually moves by default - not copy, but close - for most types.

The nice thing about value semantics is that they are very safe and can be very performant. Like in PHP: if we pass an array, that's a copy. But not really, it's secretly COW under the hood. So it's actually very fast if we don't mutate, but we get the safety of value semantics anyway.


Rust will transparently copy values for types that declare the Copy trait. But the default is move, which is probably what C++ would have chosen had it had the 30+ years of language research Rust had to experiment with, plus other languages to observe for what worked and what didn't.

The pattern you're looking for is:

    fn operate_on_a(a: A) -> A {
        // do whatever, as long as this scope still owns `a`
        a
    }


If all you're doing is immutable access, you are perfectly free to immutably borrow the value as many times as you want (even across threads, provided T is Sync):

    let v = SomeValue { ... }
    foo(&v);
    foo(&v);
    eprintln!("{}", v.xyz);
You have to take a reference. I'm not sure how you'd like to represent "I pass a non-reference value to a function but still retain ownership without copying": what if foo stored the value somewhere? Without a clone/copy to give an isolated instance, you've potentially now got two owners, foo and the caller of foo, which isn't legal, as ownership is strictly unique. If F# lets you do this, it's likely only because it's generating an implicit copy for you (which Rust will do transparently when you declare your type as Copy).

But otherwise I'm not clear what ownership semantics you're trying to express - would be helpful if you could give an example.


I share the same pet peeve. It's not that it's not possible; it's that I would prefer copying (or moving) to be the default when assigning stuff, kind of like the experience you get using STL containers in C++.

Copy can’t be the default for types that aren’t trivially copyable, because there could be huge performance cliffs hiding (e.g. copying a huge vector, which is the default in C++).

But Rust always moves by default when assigning so I’m not sure what your complaint is. If the type declares it implements Copy then Rust will automatically copy it on assignment if there’s conflicting ownership.


I have been thinking about how to express it.

My complaint is that because moves are the default, member access and container element access typically involves borrowing, and I don't like dealing with borrowed stuff.

It's a personal preference thing, I would prefer that all types were copy and only types marked as such were not.

I get why the rust devs went the other way and it makes sense given their priorities. But I don't share them.

Ps: most of the time I write python where references are the default but since I don't have to worry about lifetimes, the borrow checker, or leaks. I am much happier with that default.


You're not talking about copying values. You want it to be easy to have smart references and copy them around like you do in Python and Java, but it's more complicated in Rust because it doesn't have a GC like Python and Java.

In Rust, "Copy" means that the compiler is safe to bitwise copy the value. That's not safe for something like String / Vec / Rc / Arc etc where copying the bits doesn't copy the underlying value (e.g. if you did that to String you'd get a memory safety violation with two distinct owned Strings pointing to the same underlying buffer).

It could be interesting if there were an "AutoClone" trait that acted similarly to Copy where the compiler knew to inject .clone when it needed to do so to make ownership work. That's probably unlikely because then you could have something implement AutoClone that then contains a huge Vector or huge String and take forever to clone; this would make it difficult to use Rust in a systems programming context (e.g. OS kernel) which is the primary focus for Rust.

BTW, in general Rust doesn't have memory leaks (though reference cycles of Rc can leak). If you want to not worry about lifetimes or the borrow checker, you would just wrap everything in Arc<Mutex<T>> (when you need the reference accessed by multiple threads) / Rc<RefCell<T>> (single thread). You could have your own type that does so and offers convenient Deref / DerefMut access so you don't have to borrow/lock every time (at the expense of being slower than well-written Rust), and still have Python-like thread-safety issues (the object will be internally consistent, but if you did something like r.x = 5; r.y = 6 you could observe x=5/y=old value or x=5/y=6). But you will have to explicitly clone the reference every time you need a new owning handle.
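As a hedged sketch of the Rc<RefCell<T>> route described above (names are illustrative):

```rust
use std::cell::RefCell;
use std::rc::Rc;

fn main() {
    // Shared mutable state on one thread, without fighting the borrow
    // checker, at the cost of runtime borrow tracking.
    let shared = Rc::new(RefCell::new(vec![1, 2, 3]));

    // Cloning the Rc is an explicit, cheap handle copy - not a deep copy.
    let other_owner = Rc::clone(&shared);

    other_owner.borrow_mut().push(4);
    assert_eq!(shared.borrow().len(), 4); // both handles see the mutation
}
```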


No, I fully understand the difference. I am just saying since I don't have a GC, I would rather have the system do copies instead of dealing with references.

At least as long as I can afford it performance wise. Then borrowing it is. But I would prefer the default semantics to be copying.


> At least as long as I can afford it performance wise. Then borrowing it is. But I would prefer the default semantics to be copying.

How could/would the language know when you can and can't afford it? Default semantics can't be "copying" because in Rust copying means something very explicit that in C++ would map to `is_trivially_copyable`. The default can't be that because Rust isn't trying to position itself as an alternative to scripting languages (even though in practice that does appear to be happening) - it's remarkable that people accept C++'s "copy everything by default" approach, but I suspect that's more about legacy and people learning to kind of make it work. BTW, in C++ you have references everywhere, it just doesn't force you to be explicit (i.e. void foo(const Foo&), void foo(Foo), and void foo(Foo&) all accept an instance of Foo at the call site even though very different things happen).

But basically your argument boils down to "I'd like Rust without the parts that make it Rust" and I'm not sure how to square that circle.


Borrowing isn't for mutability, but for memory management and limiting data access to a static scope. It just happens that there's an easy syntax for borrowing as shared or exclusive at the same time.

Owned objects are exclusively owned by default, but wrapping them in Rc/Arc makes them shared too.

Shared mutable state is the root of all evil. FP languages solve it by banning mutation, but Rust can flip between banning mutation or banning sharing. Mutable objects that aren't shared can't cause unexpected side effects (at least not any more than Rust's shared references).
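A minimal illustration of that flip - any number of shared borrows, or exactly one exclusive borrow, but never both at once:

```rust
fn main() {
    let mut v = vec![1, 2, 3];
    {
        // Sharing allowed, mutation banned: many &v borrows coexist.
        let a = &v;
        let b = &v;
        assert_eq!(a.len() + b.len(), 6);
    }
    // Mutation allowed, sharing banned: one exclusive &mut borrow.
    let m = &mut v;
    m.push(4);
    assert_eq!(v.len(), 4);
}
```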


> While in an FP language you are passing values

By passing values do you mean 'moving'? Like not passing reference?


Yes, I guess in Rust terms that is called moving. However, when I have some code that "moves" the value into another procedure, the code after that call can no longer use the moved value.

So I want to move a value, but also be able to use it after moving it, because I don't mutate it in that other function, where it got moved to. So it is actually more like copying, but without making a copy in memory.

It would be good, if Rust realized, that I don't have mutating calls anywhere and just lets me use the value. When I have a mutation going on, then of course the compiler should throw error, because that would be unsafe business.


I'm not sure how what you're describing is different from passing an immutable/shared reference.

If you call `foo(&value)` then `value` remains available in your calling scope after `foo` returns. If you don't mutate `value` in foo, and foo doesn't do anything other than derive a new value from `value`, then it sounds like a shared reference works for what you're describing?

Rust makes you be explicit as to whether you want to lend out the value or give the value away, which is a design decision, and Rust chooses that the bare syntax `value` is for moving and the `&value` syntax is for borrowing. Perhaps you're arguing that a shared immutable borrow should be the default syntax.
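A small sketch of that (function and names made up for illustration):

```rust
// `sum` only reads, so a shared borrow is enough: the caller lends
// the value out and keeps using it afterwards.
fn sum(values: &[i32]) -> i32 {
    values.iter().sum()
}

fn main() {
    let values = vec![1, 2, 3];
    let s = sum(&values); // borrow...
    assert_eq!(s, 6);
    assert_eq!(values.len(), 3); // ...and still own `values` afterwards
}
```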

Apologies if I'm misunderstanding!


Couldn't you just pass a reference to your value (i.e. `&T`)? If you absolutely _need_ ownership the function you call could return back the owned value or you could use one of the mechanisms for shared ownership like `Rc<T>`. In a GC'd functional language, you're effectively getting the latter (although usually a different form of GC instead of reference counting)
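For the shared-ownership route, a minimal `Rc<T>` sketch (illustrative only):

```rust
use std::rc::Rc;

fn main() {
    // Shared ownership of an immutable value - the closest analogue to
    // what a GC'd functional language gives you by default.
    let v = Rc::new(vec![1, 2, 3]);
    let also_v = Rc::clone(&v); // cheap handle copy, not a deep copy

    assert_eq!(v.len(), 3);
    assert_eq!(also_v[0], 1);
    assert_eq!(Rc::strong_count(&v), 2); // two owners, one allocation
}
```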

I think I could. But then I would need to put & in front of every argument in every procedure definition and also deal with working with references inside the procedure, with the syntax that brings along.

Fair to be annoyed by this, but not very interesting: This is just a minor syntactical pattern that exists for a very good reason.

Syntax is generally the least interesting/important part of languages.


When you pass &variable, I don't think it affects the syntax inside the called function, does it?

Correct. If you then want to subsequently re-reference or dereference that reference (this happens sometimes), you'll need to accordingly `&` or `*` it, but if you're just using it as is, the bare syntactical `name` (whatever it happens to be) already refers to a reference.

Also, Rust does implicit dereferencing, so it's not that much of an issue in practice.
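A tiny example of that coercion/auto-deref in action (names are made up):

```rust
fn shout(name: &str) -> String {
    name.to_uppercase()
}

fn main() {
    let s = String::from("rust");
    // Deref coercion turns &String into &str at the call site, and
    // method calls auto-deref, so a reference mostly behaves like
    // the value itself inside the function.
    assert_eq!(shout(&s), "RUST");
    assert_eq!(s.len(), 4); // still usable after the borrow
}
```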

Ownership serves another important purpose: it determines when a value is freed.

I guess it is then a necessary complication of the language, as it doesn't have garbage collection and as such doesn't notice when values go out of scope of all closures that reference them?

Yes, but it’s more subtle than that. What Rust does is track when the object goes out of scope, and will make sure that any closures that reference it live for a shorter time than that. Sort of backwards of what you’re asking.

The borrow checker is a compile-time garbage collector. If you think about it in that sense, you can understand a lot of the ways it restricts you.

Depends on the FP language though: they are only values in the logical sense; they can be references as well, hence the various ways to do equality.

Readers would benefit from distinguishing effects systems from type systems - error handling, async code, ownership, pointers escaping, etc. are all better understood as effects because they pertain to usage of a value/type (though the usage constraints can depend on the type properties).

Similarly, Java sidesteps many of these issues by mostly using reference types, but ends up with a different class of errors. So the C/pointer family of static analysis can be quite distinct from that for JVM languages.

Swift is roughly on par with Rust wrt exclusivity and data-race safety, and is catching up on ownership.

Rust traits and macros are really a distinguishing feature, because they enable programmer-defined constraints (instead of just compiler-defined), which makes the standard library smaller.


Swift has such a long way to go in general ergonomics of its type system, it's so far behind compared to Rust. The fact that the type checker will just churn until it times out and asks the user to refactor so that it can make progress is such a joke to me, I don't understand how they shipped that with a straight face.

There's nothing wrong with this in principle, every type system must reject some valid programs. There's no such thing as a "100%" type system that will both accept all valid code and reject all non-valid code.

I'm not questioning the principle, I'm critiquing the implementation versus the competition. Swift doesn't do a good job, it throws up its hands too often.

If you solve 80% of the problems by spending 20% of the time, is it worth spending the 80% to solve 20% of the problems? Or even if it is, is it more valuable to ship the 80% complete solution first to get it into the hands of users while you work on the thornier version?

If someone else ships a 100% solution, or a solution that doesn't have the problems your potentially half-baked "80% solution" does, then you might be in trouble.

There's a fine line here: it matters a lot whether we're talking about a "sloppy" 80% solution that later causes problems and is incredibly hard to fix, or if it's a clean minimal subset, which restricts you (by being the minimal thing everyone agrees on) but doesn't have any serious design flaws.


Sure. And I'm not sure the type checker failing on certain corner cases and asking you to alter the code to be friendlier is a huge roadblock if it rarely comes up in practice for the vast majority of developers.

Swift, even if it's catching up a bit, is probably not going to impose a strict border between safe and unsafe.

I think tagged unions with exhaustive type checking and no nulls are the two killer features for correctness

Apologies for the non sequitur

Do you think Zig is a valid challenger to Rust for this kind of programming?


Zig's trying to be a "nicer C" that's easy to learn and fast to compile. It's a great language with a lot of neat design, and definitely setting itself up to be a "valid challenger" in a lot of the systems-y, performance-focused domains Rust targets. But it's not trying to compete with Rust on the safety/program correctness front.

Almost none of the Rust features discussed in this subthread are present in Zig, such as ownership, borrowing, shared vs. exclusive access, lifetimes, traits, RAII, or statically checked thread safety.


Thank you

XSLT is merely a glorified compression/decompression mechanism.

XSLT used as a "stylesheet" in browsers isn't dynamic, so it's like browsing static HTML pages, except that the expansion of the on-the-wire data representation into repetitive HTML is done with XSLT's rules instead of just gzip/brotli (and it doesn't make much difference, because regular compression is really good at compressing the same things that XSLT is good at generating).

For XHTML the effect is almost identical to preprocessing the data server-side, except for minor quibbles about whose CPU time is spent on it and how many bytes can be saved.

The only remaining use-case for XSLT is stopping browsers from displaying RSS like garbage, but that's using one browser feature to fix another browser regression. It'd be easier and more helpful to bring back integrated RSS readers, instead of giving you very sophisticated technology for apologising for how useless the RSS URL is.


No, I don't think that's the right way to characterize it - it's not compression, it's separation of concerns. The original concept for XSLT was to separate the data from the presentation - and many successful systems were built using that technology. Some rendered server-side, some rendered in the browser.

A lot of publishers use XML formats like JATS [1] for articles and other documents. Those are rendered to HTML server-side using XSLT 2/3 and shown to users.

[1] https://jats.nlm.nih.gov/ ("Journal Article Tag Suite")


This is neither correct nor helpful, i think. There are examples of dynamic pages using XSLT, and the purpose is not compression at all.

A very simple one - https://wendellpiez.github.io/XMLjellysandwich/IChing/

You might as well say JavaScript is just a compression format for Web assembly language.


Except, authors can directly write the "compressed" representation, which can be vastly more convenient. (Like in every templating system)

For as long as mom was feeding me, I never needed to shop at a grocery store.

But here the mom is a robot taking produce for free. Not a good business for grocery stores.


This can be done with exclusively owned objects. That's how io_uring abstractions work in Rust – you give your (heap allocated) buffer to a buffer pool, and get it back when the operation is done.

&mut references are exclusive and non-copyable, so the hot potato approach can even be used within their scope.

But the problem in Rust is that threads can unwind/exit at any time, invalidating buffers living on the stack, and io_uring may use the buffer for longer than the thread lives.

The borrow checker only checks what code is doing, but doesn't have the power to alter runtime behavior (it's not a GC, after all). So it can only prevent io_uring abstractions from accepting on-stack buffers; it has no power to prevent threads from unwinding in order to make on-stack buffers safe instead.


Yes and no.

In my case, I have code that essentially looks like this:

   struct Parser {
     state: ParserState
   }
   struct Subparser {
     state: ParserState
   }
   impl Parser {
     pub fn parse_something(&mut self) -> Subparser {
       Subparser { state: self.state } // NOTE: doesn't work
     }
   }
   impl Drop for Subparser {
     fn drop(&mut self) {
       parser.state = self.state; // NOTE: really doesn't work
     }
   }
Okay, I can make the first line work by changing Parser.state to be an Option<ParserState> instead and using Option::take (or std::mem::replace on a custom enum; going from an &mut T to a T is possible in a number of ways). But how do I give Subparser the ability to give its ParserState back to the original parser? If I could make Subparser take a lifetime and just have a pointer to Parser.state, I wouldn't even bother with half of this setup because I would just reach into the Parser directly, but that's not an option in this case. (The safe Rust option I eventually reached for is a oneshot channel, which is actually a lot of overhead for this case).

It's the give-back portion of the borrow-to-give-back pattern that ends up being gnarly. I'm actually somewhat disappointed that the Rust ecosystem has in general given up on trying to build safe pointer abstractions, like doing use tracking for a pointed-to object. FWIW, a rough C++ implementation of what I would like to do is this:

  #include <cassert>
  #include <memory>

  template <typename T> class HotPotato {
    T *data;
    HotPotato<T> *borrowed_from = nullptr, *given_to = nullptr;

    public:
    T *get_data() {
      // If we've given the data out, we can't use it at the moment.
      return given_to ? nullptr : data;
    }
    std::unique_ptr<HotPotato<T>> borrow() {
      assert(given_to == nullptr);
      auto *new_holder = new HotPotato();
      new_holder->data = data;
      new_holder->borrowed_from = this;
      given_to = new_holder;
      return std::unique_ptr<HotPotato<T>>(new_holder);
    }

    ~HotPotato() {
      if (given_to) {
        given_to->borrowed_from = borrowed_from;
      }
      if (borrowed_from) {
        borrowed_from->given_to = given_to;
      } else {
        delete data;
      }
    }
  };

You can implement this in Rust.

It's an equivalent of Rc<Cell<(Option<Box<T>>, Option<Box<T>>)>>, but with the Rc replaced by a custom shared type that avoids keeping refcount by having max 2 owners.

You're going to need UnsafeCell to implement the exact solution, which needs a few lines of code that are as safe as the C++ version.
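For illustration, here's one hedged way to get the same give-back-on-drop behavior in entirely safe Rust, trading the C++ version's raw-pointer bookkeeping for a shared Rc<RefCell<Option<T>>> slot (a simplification, not the exact UnsafeCell-based equivalent; names are made up):

```rust
use std::cell::RefCell;
use std::rc::Rc;

// The value lives in a shared slot; a Loan takes it out and
// automatically hands it back in Drop.
struct Slot<T>(Rc<RefCell<Option<T>>>);

struct Loan<T> {
    value: Option<T>,
    home: Rc<RefCell<Option<T>>>,
}

impl<T> Slot<T> {
    fn new(value: T) -> Self {
        Slot(Rc::new(RefCell::new(Some(value))))
    }
    fn lend(&self) -> Loan<T> {
        Loan {
            value: self.0.borrow_mut().take(),
            home: Rc::clone(&self.0),
        }
    }
}

impl<T> Loan<T> {
    fn value_mut(&mut self) -> Option<&mut T> {
        self.value.as_mut()
    }
}

impl<T> Drop for Loan<T> {
    fn drop(&mut self) {
        // Give the value back to the slot it came from.
        *self.home.borrow_mut() = self.value.take();
    }
}

fn main() {
    let slot = Slot::new(String::from("state"));
    {
        let mut loan = slot.lend();
        loan.value_mut().unwrap().push_str("+used");
        assert!(slot.0.borrow().is_none()); // slot empty while lent out
    } // Loan dropped here: the value returns home
    assert_eq!(slot.0.borrow().as_deref(), Some("state+used"));
}
```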


In my universe, `let` wouldn’t exist… instead there would only be 3 ways to declare variables:

  1. global my_global_var: GlobalType = …
  2. heap my_heap_var: HeapType = …
  3. stack my_stack_var: StackType = …
 
Global types would need to implement a global trait to ensure mutual exclusion (waves hands).

So by having the location of allocation in the type itself, we no longer have to do boxing mental gymnastics


Doesn't Rust do this? `let` is always on the stack. If you want to allocate on the heap then you need a Box. So `let foo = Box::new(MyFoo::default())` creates a Box on the stack that points to a MyFoo on the heap. So MyFoo is a stack type and Box<MyFoo> is a heap type. Or do you think there is value in defining MyFooStack and MyFooHeap separately to support both use cases?

You may already know this, but let-bindings are not necessarily on the stack. The reference does say they are (it's important to remember that the reference is not normative), and it is often simpler to think of them that way, but in reality they don't have to be on the stack.

The compiler can perform all sorts of optimizations, and on most modern CPU architectures, it is better to shove as many values into registers as possible. If you don't take the address of a variable, you don't run out of registers, and you don't call other, non-inlined functions, then let-bindings (and function arguments/return values) need not ever spill onto the stack.

In some cases, values don't even get into registers. Small numeric constants (literals, consts, immutable lets) can simply be inlined as immediate values in the assembly/machine code. In the other direction, large constant arrays and strings don't spill onto the stack but rather the constant pool.


In particular, let bindings within async code (and coroutines, if that feature is stabilized at some point) might easily live on the heap.

The suggestion is basically C#'s class vs struct, with explicit globals, which are just classes with synchronization.

Note that items declared as `static` in Rust are already globals that require synchronization (in Rust terms, static items must implement `Sync`), although they're located in static memory rather than on the stack or heap.

But what does "heap my_heap_var" actually mean, without a garbage collector? Who owns "my_heap_var" and when does it get deallocated? What does explicitly writing out the heap-ness of a variable ultimately provide, that Rust's existing type system with its many heap-allocated types (Box, Rc, Arc, Vec, HashMap, etc.) doesn't already provide?

> What does explicitly writing out the heap-ness of a variable ultimately provide, that Rust's existing type system with its many heap-allocated types (Box, Rc, Arc, Vec, HashMap, etc.) doesn't already provide?

To be honest, I was thinking more in terms of cognitive overload, i.e. is all that Box boilerplate even needed if we were to treat all `heap my_heap = …` as Box underneath? In other words, couldn’t we elide all that away:

    let foo = Box::new(MyFoo::default());
Becomes:

    heap foo = MyFoo::default();
Much nicer!

The Nano model is 3.2B parameters at 4bit quantization. This is quite small compared to what you get from hosted chatbots, and even compared to open-weights models runnable on desktops.

It's cool to have something like this available locally anyway, but don't expect it to have reasoning capabilities. At this size it's going to be naive and prone to hallucinations. It's going to be more like a natural language regex and a word association game.


The big win for those small local models to me isn't knowledge based (I'll leave that to the large hosted models), but more so a natural language interface that can then dispatch to tool calls and summarize results. I think this is where they have the opportunity to shine. You're totally right that these are going to be awful for knowledge.

The point in these models isn't to have all the knowledge in the world available.

It's to understand enough of language to figure out which tools to call.

"What's my agenda for today" -> get more context

    cal = getCalendar()
    getWeather(user.location())
    getTraffic(user.location(), cal[0].location)

etc.

Then grab the return values from those and output:

"You've got a 9am meeting in Foobar, the traffic is normal and it looks like it's going to rain after the meeting."

Not rocket science and not something you'd want to feed to a VC-powered energy-hogging LLM when you can literally run it in your pocket.


Isn't this what Apple tried with Siri? I don't see anyone use it, and adding an LLM to the mix is going to make it less accurate.

They wrote a whole ass paper about SLMs that do specifically this - expert small language models with narrow expertise.

And then went for a massive (but private and secure) datacenter instead.


Speculation: I guess the idea is they build an enormous inventory of tool-use capabilities, then this model mostly serves to translate between language and Android's internal equivalent of MCP.

I've had Gemma 3n in edge gallery on my phone for months. It's neat that it works at all but it's not very useful.

There are two CUDAs – a hardware architecture, and a software stack for it.

The software is proprietary, and easy to ignore if you don't plan to write low-level optimizations for NVIDIA.

However, the hardware architecture is worth knowing. All GPUs work roughly the same way (especially on the compute side), and the CUDA architecture is still fundamentally the same as it was in 2007 (just with more of everything).

It dictates how shader languages and GPU abstractions work, regardless of whether you're using proprietary or open implementations. It's very helpful to understand peculiarities of thread scheduling, warps, different levels of private/shared memory, etc. There's a ridiculous amount of computing power available if you can make your algorithms fit the execution model.


Rust has safe and reliable GTK bindings. They used gir to auto-generate the error-prone parts of the FFI based on schemas and introspection: https://gtk-rs.org/gir/book/

Rust's bindings fully embrace GTK's refcounting, so there's no mismatch in memory management.


We also use gir to auto-generate our bindings. But stuff like this is not represented in gir: https://github.com/ghostty-org/ghostty/commit/7548dcfe634cd9... It could EASILY be represented in a wrapper (e.g. with a Drop trait) but that implies a well-written wrapper, which is my argument. It's not inherent in the safety Rust gives you.

EDIT:

I looked it up because I was curious, and a Drop trait is exactly what they do: https://github.com/gtk-rs/gtk-rs-core/blob/b7559d3026ce06838... and as far as I can tell this is manually written, not automatically generated from gir.

So the safety does rely on the human, not the machine.


This is a generic smart pointer. It had to be designed and verified manually, but that line of code was written once, 8 years ago, and nobody has had to remember to write this FFI glue or even call this method since. It makes the public API automatically safe for all uses of all weak refs of all GTK types.

The Zig version seems to be a fix for one crash in a destructor of a particular window type. It doesn't look like a systemic solution preventing weak refs crashes in general.
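As a hedged illustration of the "written once" point, the shape of such a Drop-based wrapper looks roughly like this (`raw_alloc`/`raw_free` are stand-ins, not gtk-rs or GTK functions):

```rust
// Stand-ins for a C library's acquire/release pair.
fn raw_alloc() -> *mut i32 {
    Box::into_raw(Box::new(42))
}
unsafe fn raw_free(p: *mut i32) {
    drop(Box::from_raw(p));
}

// The wrapper owns the raw resource; releasing it lives in Drop,
// so no caller can forget the cleanup call.
struct Handle(*mut i32);

impl Handle {
    fn new() -> Self {
        Handle(raw_alloc())
    }
    fn get(&self) -> i32 {
        unsafe { *self.0 }
    }
}

impl Drop for Handle {
    fn drop(&mut self) {
        // Written once here; every user of Handle gets it automatically.
        unsafe { raw_free(self.0) }
    }
}

fn main() {
    let h = Handle::new();
    assert_eq!(h.get(), 42);
} // `h` dropped: raw_free runs without anyone calling it
```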


Do you mean gtk-rs (https://gtk-rs.org/)? I have done a bit of programming with it. I respect the work behind it, but it is a monumental PITA - truly a mismatch of philosophies and design - and I would a thousand times rather deal with C/C++ correctness demons than attempt it again, unless I had hard requirements for soundness. Even then, if you use gtk-rs you are pulling in 100+ crate dependencies and who knows what lurks in those?


Yeah, Rust isn't OOP, which is usually fine or even an advantage, but GUIs are one case where it hurts, and there isn't an obvious alternative.

> gtk-rs you are pulling in 100+ crate dependencies and who knows what lurks in those?

gtk-rs is a GNOME project. A lot of it is equivalent to .h files, but each file is counted as a separate crate. The level of trust or verification required isn't that different, especially if pulling a bunch of .so files from the same org is uncontroversial.

Cargo keeps eliciting reactions to big numbers of "dependencies", because it gives you itemized lists of everything being used, including build deps. You just don't see as much inner detail when you have equivalent libs pre-built and pre-installed.

Crates are not the same unit as a typical "dependency" in the C ecosystem. Many "dependencies" are split into multiple crates, even when it's one codebase in one repo maintained by one person. Crates are Rust's compilation unit, so kinda like .o files, but not quite comparable either.

A Cargo Workspace would be conceptually closer to a typical small C project/dependency, but Cargo doesn't support publishing Workspaces as a unit, so every subdirectory becomes a separate crate.


Rust has all the features to do COM and CORBA like OOP.

As Windows proves, it is more than enough to write production GUI components, and the industry leading 3D games API, since the days of Visual Basic 5.

In fact that is how most new Rust components on Windows by Microsoft have been written, as COM implementations.


I remember it being bad enough for a project I was working on that the engineer working on it switched to relm: https://relm4.org

It is built on top of gtk4-rs, and fairly usable: https://github.com/hackclub/burrow/blob/main/burrow-gtk/src/...

I'm sure the gtk-rs bindings are pretty good, but I do wonder if anyone ran Valgrind on them. When it comes to C interop, Rust feels weirdly less safe just because of the complexity.


But the GTK-rs stuff has already abandoned GTK3. Wait... I guess if the GTK-rs API doesn't change and it just uses GTK4 that's a good way to go? Everyone can install both 3 and 4 on their system and the rust apps will just migrate. Is that how they did it?


You're looking at this from the perspective of what would make sense for the model to produce. Unfortunately, what really dictates the design of the models is what we can train the models with (efficiently, at scale). The output is then roughly just the reverse of the training. We don't even want AI to be an "autocomplete", but we've got tons of text, and a relatively efficient method of training on all prefixes of a sentence at the same time.

There have been experiments with preserving embedding vectors of the tokens exactly without loss caused by round-tripping through text, but the results were "meh", presumably because it wasn't the input format the model was trained on.

It's conceivable that models trained on some vector "neuralese" that is completely separate from text would work better, but it's a catch 22 for training: the internal representations don't exist in a useful sense until the model is trained, so we don't have anything to feed into the models to make them use them. The internal representations also don't stay stable when the model is trained further.


It’s indeed a very tricky problem with no clear solution yet. But if someone finds a way to bootstrap it, it may be a new qualitative jump that may reverse the current trend of innovating ways to cut inference costs rather than improve models.

