It would be cool if you could store floating point (or double precision floating point) values in some sort of pseudo-type where you declare a minimum precision, and any time you insert a new row into the db table, if any of the individual values exceed the configured max precision (as specified in the initial migration or explicit table creation), it just automatically creates a migration to bump the precision. (In Django a decimal field has 2 required arguments, max_digits and decimal_places.)
EDIT: like an expandable floating point type, where as you increase the needed max_digits, this implicit migration just occurs. Idk, this is super half/quarter-baked, just going off script a bit.
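For what it's worth, Python's own decimal module already behaves a bit like this in memory: a value carries however many digits it needs, and the context precision (which only bounds arithmetic results) can be raised at any time. It's the fixed column definition in SQL that can't follow along. A small sketch:

```python
from decimal import Decimal, getcontext

# A Decimal stores exactly the digits you give it, regardless of context.
a = Decimal("0.12345678901234567890123456789")  # 29 significant digits

# The context precision only bounds arithmetic results, and it can be
# bumped at any time without "migrating" existing values.
getcontext().prec = 50
total = a + Decimal("1")  # stays exact, since 30 digits fit within prec=50
```

The missing piece is exactly the one being asked about: a column type where the declared precision is a floor that grows on demand rather than a hard ceiling.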
Creating a migration to bump the precision would rebuild the entire table in most SQL databases. That's not something you want happening at random intervals.
It just seems silly that I'm building some PoC, and 3 years later all the assumptions I baked in initially become something I need to actively design around, or I just have to refactor my data schema. How many layers of abstraction would I need to add to create more flexible types that just know to expand if I add more significant figures, or contract (on a per-row basis) depending on the entry?
Like, I was building a Django app 2 weeks ago and pulled in django-money to deal with currencies without reinventing the wheel, and then for another field on a model (meant to simulate a crypto asset), I had to arbitrarily decide the precision and "max digits" for a whole class of potential instances of this model. I get that this might be overoptimization, but really - this is silly. By asserting a specific max length for some field, aren't you wasting space? If not, then my entire point is moot.
Not super quantitative, but just thinking out loud a bit.
Assuming you mean something like fixed-point arithmetic? Well, if you know you want to support all of 1 BTC's satoshis but also want to support Danish kroner, you could be wasting a lot of space by sizing everything for whichever currency currently requires the most precision (or whatever, depending on your use case).
Unless I'm not understanding what you mean by fixed point?
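One common way to sidestep the "one global precision" problem (a sketch with made-up names, not something from the thread) is to not store fractional amounts at all: keep each amount as an integer count of the currency's smallest unit, with the exponent tracked per currency, ISO 4217 style.

```python
from decimal import Decimal

# Hypothetical per-currency minor-unit exponents: 8 stands in for
# satoshis, 2 for Danish øre.
MINOR_UNIT_EXPONENT = {"BTC": 8, "DKK": 2}

def to_minor_units(amount: str, currency: str) -> int:
    """Convert a decimal amount string to an integer count of minor units."""
    exp = MINOR_UNIT_EXPONENT[currency]
    scaled = Decimal(amount).scaleb(exp)  # shift the decimal point right
    if scaled != scaled.to_integral_value():
        raise ValueError(f"{amount} has sub-{currency} precision")
    return int(scaled)

to_minor_units("1", "BTC")     # 100000000 (satoshis)
to_minor_units("9.95", "DKK")  # 995 (øre)
```

Each row then only pays for the integer it actually stores, and "precision" becomes per-currency metadata instead of a schema-wide column constraint.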
For accounting, you'd want fixed point - and in this case decimal rather than binary. Aside from that, there are arbitrary-precision ints; Python ints are arbitrary-precision. PHP promotes overflowing ints to floats, but that was probably a bad idea. We're also at the point where you can afford 64-bit ints.
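To illustrate the difference in Python, just as an example:

```python
# Python ints are arbitrary precision: they simply grow, no overflow.
big = 2 ** 200 + 1
assert big % 2 == 1  # still exact at 200+ bits

# A signed 64-bit int, by contrast, tops out at 2**63 - 1.
INT64_MAX = 2 ** 63 - 1
print(INT64_MAX)  # 9223372036854775807
```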