RP2040 is a pretty fitting MCU for this use case thanks to its PIOs coupled with DMA. It got me some impressive refresh rates on a 64x32 HUB75 display – over 2 kHz in 24-bit color mode. The lack of networking capabilities out of the box is a bit of a shame, though.
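For anyone curious, the heart of the setup is just pointing a DMA channel at the PIO TX FIFO and letting the state machine's DREQ pace the transfers. Here is a rough sketch using the pico-sdk; hub75_program and hub75_program_init stand in for a pioasm-generated program, and a real driver would also need row addressing and the bit-plane modulation that gets you the 24-bit depth, both omitted here.

    // Sketch: DMA feeding a PIO state machine on the RP2040 (pico-sdk).
    // hub75_program / hub75_program_init are hypothetical pioasm output,
    // not part of the SDK; pin mapping and pixel layout are placeholders.
    #include "pico/stdlib.h"
    #include "hardware/pio.h"
    #include "hardware/dma.h"
    #include "hub75.pio.h"

    #define WIDTH  64
    #define HEIGHT 32

    // One word per pixel for simplicity; a real driver packs bit-planes.
    static uint32_t framebuffer[WIDTH * HEIGHT];

    int main(void) {
        PIO pio = pio0;
        uint sm = pio_claim_unused_sm(pio, true);
        uint offset = pio_add_program(pio, &hub75_program);
        hub75_program_init(pio, sm, offset); // assumed helper from the .pio file

        // DMA writes into the TX FIFO, paced by the state machine's data
        // request signal, so the CPU never touches individual pixels.
        int chan = dma_claim_unused_channel(true);
        dma_channel_config cfg = dma_channel_get_default_config(chan);
        channel_config_set_transfer_data_size(&cfg, DMA_SIZE_32);
        channel_config_set_read_increment(&cfg, true);
        channel_config_set_write_increment(&cfg, false);
        channel_config_set_dreq(&cfg, pio_get_dreq(pio, sm, true));

        dma_channel_configure(chan, &cfg,
                              &pio->txf[sm],          // write: PIO TX FIFO
                              framebuffer,            // read: pixel data
                              count_of(framebuffer),  // transfers per frame
                              true);                  // start immediately

        while (true) {
            dma_channel_wait_for_finish_blocking(chan);
            dma_channel_set_read_addr(chan, framebuffer, true); // next frame
        }
    }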
Isn’t Apple going to only scan the images that are about to be uploaded to iCloud? If so, I have a hard time seeing how it differs from what Google is doing. Google scans what you’ve uploaded; Apple scans what you are about to upload.
People give Apple a lot of flak for the fact that Apple holds encryption keys for iCloud Photos (because obviously they do CSAM scanning server-side), but now that Apple is taking steps to ensure that they don’t hold these keys, they take flak again.
If you don’t use iCloud Photos, this change doesn’t affect you. If you already use iCloud Photos, nothing changes for you except where in the process the scanning takes place. Your phone already scans all your photos to classify them while it’s asleep and charging, so this isn’t much more than it already does.
Sweden has adopted something similar, Swish[1]. It is co-owned by the banks and so far has no fees for private individuals. The adoption rate has been really incredible: 75% of the population has already signed up for it and, in 2020, made over 600 million transactions.
For sequence diagrams in text format, you can take diagwiz for a spin. There is also an online version of the tool. Disclaimer: it works, but it's still just an experiment and might change a lot.
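For reference, the input is plain text along these lines (going from memory of the README, so treat the exact grammar as an assumption and check the repo for the current syntax):

    alias a = "Alice"
    alias b = "Bob"

    a -> b: "Request token"
    b -> a: "Token granted"

The tool then renders that as an ASCII-art sequence diagram you can paste anywhere monospaced text survives.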
What sucks about tags on GitHub is that there is no way of controlling who can create them. Ideally I would love to see a flow similar to the one for PRs: you could then require approval from other team members and also run GitHub Actions that, for example, verify the tag name and ensure the tag message is correctly formatted.
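With what exists today you can at least get the validation half: a workflow that triggers on tag pushes. The catch is that it runs after the tag already exists, so it can flag a bad tag but not block its creation. A minimal sketch, where the vX.Y.Z pattern is just an example convention:

    name: validate-tag
    on:
      push:
        tags: ['**']
    jobs:
      check:
        runs-on: ubuntu-latest
        steps:
          - name: Check tag name against vX.Y.Z
            run: |
              if [[ ! "$GITHUB_REF_NAME" =~ ^v[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
                echo "Tag '$GITHUB_REF_NAME' does not match vX.Y.Z" >&2
                exit 1
              fi

Validating the tag message as well would take a checkout step first, so the job can read the annotation with git.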