It’s an application problem that stems from choosing the library, a commonly recommended one, and from how the library and its documentation promote its usage. The tools were used the way the tools themselves recommend. Admittedly, joining the project, the wrong tools for the job had been selected, but these are all the standard React supporting libs.
The application should have been architected not to use redux-query, and instead to use the browser fetch API with server-side cache-control headers, since we own the entire stack. Today that’s controversial because “everybody uses it” and “this is what the docs recommend”.
Well, these libraries are doing something inherently more complicated than just using browser cache headers.
You're only talking about HTTP endpoint caching. These solutions provide application data caching. These are not mutually exclusive concepts.
The latter offers things like data sharing/normalization across endpoints, data persistence across browser sessions, fine-grained cache invalidation/manipulation, and a lot of things on top of that.
It's a lot harder to manage a cache, and for most apps I never found it worthwhile. But there may be cases where you do want that level of control, like PWAs or an app optimized to never hit the network unnecessarily.
But it's definitely not replaceable with browser HTTP header caching.
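To make the distinction concrete, here's a minimal TypeScript sketch of the two approaches (the endpoint URL and function names are invented for illustration, and real libraries like redux-query/react-query layer much more on top):

    // Approach 1: lean on HTTP caching. fetch() goes through the browser's
    // HTTP cache, so if the server sends Cache-Control headers, repeated
    // calls can be served without any client-side cache code at all.
    async function getUser(id: string): Promise<unknown> {
      const res = await fetch(`/api/users/${id}`); // hypothetical endpoint
      if (!res.ok) throw new Error(`HTTP ${res.status}`);
      return res.json();
    }

    // Approach 2: a tiny application-level data cache. This is the core idea
    // the libraries build on: cache parsed data by key, independent of HTTP
    // semantics, so it can be shared across views, invalidated, or mutated.
    const dataCache = new Map<string, unknown>();

    async function getUserCached(id: string): Promise<unknown> {
      const key = `user:${id}`;
      if (dataCache.has(key)) return dataCache.get(key);
      const data = await getUser(id);
      dataCache.set(key, data); // invalidate later with dataCache.delete(key)
      return data;
    }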
I'm currently wondering how likely it is I'll get into deeper LLM usage, and therefore how much Apple Silicon I need (because I'm addicted to macOS). So I'm some way closer to your steel man than you'd expect. But I'm probably a niche within a niche.
The one without quotes means "people moving across borders"; the one with quotes means "people we've chosen to label as scary and dangerous because it makes you vote for us".
I doubt this. Legislation is written by committee and passed democratically. Most of the voting public don't look up the voting records that are available to them. Most of the voting public can't name a third of the members of parliament.
If there is a conspiratorial take, the one about regulatory capture is more believable.
> So you could spend 6 months working on a project, release your product, then get inundated with bad reviews because it didn't work for half the population with iPhones.
You spent 6 months developing against an unstandardised technology on a platform with well documented compatibility complexities, and you didn’t test it on one of your larger target devices?
And so you'd purchase a new iOS device for ~$1000 and test against it.
Then you realise that you're getting bugs from some customers that you literally cannot replicate on your device.
Then you realise that the bugs depend on the type of device, so you need to purchase one of every kind of device Apple offers for ~$10,000 and test against those.
Then you realise that the bugs don't just depend on the type of device, they actually depend on a combination of OS version AND type of device.
So you spend another ~$10,000 for a second copy of each device, and set them up to never auto update.
But now you need to wait 12 months for the next iOS release so you can test both the current and the previous version, and waiting 12 months won't do.
So you want to rollback iOS versions, but Apple doesn't let you do that.
But they do let you simulate combinations of iOS devices and versions through Xcode. So you buy a macOS device and you're out another ~$5,000, and you spend time simulating. But then you realise that the simulations don't actually replicate the device bugs: they're just running sandboxed versions of desktop Safari on the host machine, scaled down and streamed into the simulated device. And so you've learnt a $5,000 lesson on the difference between simulation and emulation.
So here you are, out ~$25,000 and dealing with customer complaints and troubleshooting, when you find something unexpected... You find a customer with a combination of device type and OS version that you have, and you can't replicate the issue.
So it's not just bugs that depend on device type plus OS version. The bugs are tied to the individual devices themselves. Yes, really!
So what do you do at that point?
You have no way to reliably test whether a feature works. The only thing you can do is take Apple at their word, recommend to customers that they can still access your product through other platforms (Android, macOS, Windows), and put up with the angry complaints and reviews from iPhone customers that you can't help.
--------
The above comes from personal hands-on experience.
We have purchased multiple of the same device on the same day from the same shop with the same OS on factory settings and have witnessed different behaviours.
Reporting issues to Apple is useless, their responses are absent at best, and hostile at worst.
Is that really how it works in browsers and other rendering engines?
Intuition suggests to me that it wouldn’t start with CSS and then find all the matching DOM nodes. I would expect it started at each DOM node and then found the CSS rules which might apply.
So: “I’m adding an A to the tree; what are all the CSS rules with A or * as the rightmost token; which of that set applies to my current A; apply the rules in that subset.” Going depth-first into the DOM like this should result in skipping redundant CSS, and (as my imagination draws it) reduce DOM traversals.
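A rough sketch of that intuition in TypeScript (purely illustrative, not how any particular engine is written):

    // Index rules by the tag name of their rightmost compound selector, so
    // inserting an <a> only considers rules bucketed under "a" or "*"
    // instead of scanning the whole stylesheet.
    type Rule = { selector: string; declarations: string };

    const rulesByRightmostTag = new Map<string, Rule[]>();

    function indexRule(rule: Rule): void {
      // Naive rightmost-token extraction ("div > a:hover" -> "a");
      // a real engine would parse the selector properly.
      const last = rule.selector.trim().split(/[\s>+~]+/).pop() ?? "*";
      const tag = (last.match(/^[a-zA-Z]+/)?.[0] ?? "*").toLowerCase();
      const bucket = rulesByRightmostTag.get(tag) ?? [];
      bucket.push(rule);
      rulesByRightmostTag.set(tag, bucket);
    }

    function candidateRulesFor(el: Element): Rule[] {
      const tag = el.tagName.toLowerCase();
      return [
        ...(rulesByRightmostTag.get(tag) ?? []),
        ...(rulesByRightmostTag.get("*") ?? []),
      ];
    }

Each candidate still has to be fully matched against the element and its ancestors; the bucketing only saves you from looking at rules that can't possibly apply.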
There are three different modes of running a selector in typical browsers:
(a) Element#matches
(b) Element#querySelector(All)
(c) By the engine for updating style and layout
The GP seems to be talking about (b), but even then browsers are checking each element one by one, not advancing through the selector state machine in parallel for every element. (There's one exception in the old Cobalt, which did advance the state machines in parallel, IIRC.)
(a) and (c) are conceptually very similar, except that when doing (c) you're checking many elements at the same time, so browsers will pay extra upfront costs like filling bloom filters for ancestors or index maps for nth-child.
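Roughly what the ancestor filter buys you, sketched with a plain Set standing in for the real structure (engines use a counting bloom filter so entries can be removed again when the walk leaves a subtree):

    // While walking the tree for a style recalc, keep a filter of
    // identifiers seen on ancestors of the current element.
    const ancestorFilter = new Set<string>();

    function pushAncestor(el: Element): void {
      ancestorFilter.add(el.tagName.toLowerCase());
      for (const cls of el.classList) ancestorFilter.add(`.${cls}`);
      if (el.id) ancestorFilter.add(`#${el.id}`);
    }

    // For a selector like ".sidebar a", if ".sidebar" isn't in the ancestor
    // filter, the expensive ancestor walk for that rule can be skipped.
    function mightMatchDescendantSelector(requiredAncestors: string[]): boolean {
      return requiredAncestors.every((key) => ancestorFilter.has(key));
    }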
In TFA they're doing .matches(), which I would expect to be slower than a hash map lookup, but for a simple selector like they're using (just a tag name) it shouldn't do much more than the following (see the rough sketch after the list):
(1) Parse the selector, hopefully cache that in an LRU
(2) Execute the selector state machine against the element
(2.1) Compare tagName in the selector
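Roughly, in TypeScript (illustrative only, with a plain Map standing in for the LRU):

    const parsedSelectorCache = new Map<string, string>(); // selector -> tag name

    function matchesTagSelector(el: Element, selector: string): boolean {
      let tag = parsedSelectorCache.get(selector);
      if (tag === undefined) {
        // (1) "parse" the selector; a real engine builds a match program /
        // state machine here, and an LRU would evict old entries.
        tag = selector.trim().toLowerCase();
        parsedSelectorCache.set(selector, tag);
      }
      // (2) / (2.1) run the (trivial) program: compare tag names.
      return el.tagName.toLowerCase() === tag;
    }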
Apparently Nokogiri implements CSS in a very inefficient way, though, by collecting ancestors and then converting the CSS into XPath and matching that:
In browsers, DOM parsing starts before (all) CSS is loaded and parsed. Also, the sizes of elements in the flow are (by default) dictated by the text content, so it really does not make sense to try to paint a page in a root-to-leaf order.
I suspect the only “clean” alternative is open source and self-hosting. Buy a couple of GPUs, build a server rig/home lab/etc., own the means of production.