Honestly I think Google needs to be broken up. It's not a novel idea but the more I think about it the more I like it.
So, Google becomes two orgs: Google indexing and Google search. Google indexing must offer its services to all search providers equally without preference to Google search. Now we can have competition in results ranking and monetisation, while 'google indexing' must compete on providing the most valuable signals for separating out spam.
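Roughly the shape I have in mind (all the names and interfaces below are hypothetical, a sketch of the split rather than anything that exists): the indexing org serves postings plus spam signals through one neutral API, and every search provider layers its own ranking on top.

    from dataclasses import dataclass

    @dataclass
    class IndexEntry:
        url: str
        term_frequency: float
        spam_score: float  # the anti-spam signal the indexing org competes on

    class IndexService:
        """The spun-off indexing org: one neutral API for every search provider."""
        def __init__(self) -> None:
            self._postings: dict[str, list[IndexEntry]] = {}  # stand-in for the crawl pipeline

        def add(self, term: str, entry: IndexEntry) -> None:
            self._postings.setdefault(term, []).append(entry)

        def lookup(self, term: str) -> list[IndexEntry]:
            return self._postings.get(term, [])

    class SearchProvider:
        """Any search org: competes purely on ranking and monetisation."""
        def __init__(self, index: IndexService) -> None:
            self.index = index

        def search(self, query: str) -> list[str]:
            # each provider's secret sauce is its ranking function
            ranked = sorted(self.index.lookup(query),
                            key=lambda e: e.term_frequency - e.spam_score,
                            reverse=True)
            return [e.url for e in ranked]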
It doesn't solve the problem directly (as others have noted, inbound links are no longer as strong a signal as they used to be) but maybe it gives us the building blocks to do so.
Perhaps competition in the indexing space would also mean that no single SEO strategy works everywhere, disincentivising 'seo' in favour of what we actually want, which is quality content.
I’m afraid the problem is not indexing but monetization. An alternative Google search will not be profitable (especially if it has to pay a share to Google indexing), because no one will buy ads there; even for Bing that's a challenge.
The hope, though, is that splitting off indexing puts search providers on an equal footing in terms of results quality (at least initially). Advertisers go to Google because users go to Google. But users go to Google because, despite recent quality regressions, Google still gives consistently better results.
If search providers could at least match Google quality 'by default', that might help break the stranglehold wherein people like the GP are at the mercy of the whims of a single org.
How sure are you about that? I find them to be subpar compared to Bing, especially for technical search topics (mostly PHP, Go, and C related searches).
Not a bad idea, but there are lots of details that need to be filled in and, you know, the devil is in the details.
Google's index is so large that it's physically very hard to transfer out while it's being updated. Bandwidth costs are non-negligible outside Google's data centres.
In terms of data structure, I can imagine it's arranged in a way that makes Google's own search easy.
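A quick back-of-envelope on that bandwidth point (the index size is Google's old public "over 100,000,000 gigabytes" figure, and the egress price is a generic cloud number; both are assumptions for illustration only):

    # Back-of-envelope only. Index size from Google's old "How Search Works"
    # figure; egress priced at a generic ~$0.05/GB. Both are assumptions.
    index_gb = 100_000_000
    egress_usd_per_gb = 0.05
    print(f"one full copy out: ${index_gb * egress_usd_per_gb:,.0f}")
    # -> one full copy out: $5,000,000  (and the index never stops changing)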
Replication can be for HA, not just for scale. It all depends on your business requirements.
Replication can also be good for other operational reasons, such as zero-downtime major version upgrades. Again, it depends on the business needs/expectations.
Same; as it stands, you the user are legally liable for the full bill unless Netlify graciously forgives it.
Even in OP's case, they didn't (they're still charging $5k!).
If there were an option to cap billing, or at least some legally binding limit on liability, then I could countenance using Netlify.
Until then, it's neither feasible nor worth the risk.
I'm also interested to know this. I have a couple of static sites running on the free tier for friends/family and now I'm planning on moving them all to a VPS as soon as I can.
It is beyond ridiculous that serverless providers don't offer a way to cap spending. The idea that it might cause your site to go offline is a complete non-argument. That's what I _want_ to happen. I want to be able to say: sure, I'm happy to sustain 10x traffic for a few hours, and maybe 3x sustained over days, but after that, take it offline. I don't want infinitely scaling infra, precisely because of the infinitely scaling costs.
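To be concrete, the policy I want is something like this; a sketch with made-up names and thresholds, not any provider's actual API:

    from dataclasses import dataclass

    @dataclass
    class CapPolicy:
        burst_multiplier: float = 10.0     # happy to sustain 10x baseline traffic...
        burst_hours: float = 6.0           # ...for a few hours
        sustained_multiplier: float = 3.0  # happy to sustain 3x baseline...
        sustained_days: float = 3.0        # ...over a few days

        def should_take_offline(self, traffic_multiplier: float, hours_elapsed: float) -> bool:
            """Deliberately go offline rather than pay for infinite scale."""
            if traffic_multiplier > self.burst_multiplier:
                return hours_elapsed > self.burst_hours
            if traffic_multiplier > self.sustained_multiplier:
                return hours_elapsed > self.sustained_days * 24
            return False

    # e.g. 12x baseline traffic for 8 hours -> take it offline
    assert CapPolicy().should_take_offline(traffic_multiplier=12.0, hours_elapsed=8.0)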
1,656 rivers is still not a lot of rivers to account for 80% of all ocean plastic, though, considering the actual total number of rivers in the world.
Because the key bit of data there is that ocean-borne plastic is not primarily coming from beaches or city storm run-off (at least in modernized areas) in an even distribution, but is very obviously a product of local regulation (which in turn suggests that other measures, like foreign aid or imposing standards of behavior on local companies with foreign suppliers/subsidiaries, would likely solve the problem).
The issue is that no one has truly freed themselves from the "individual sacrifice" narrative of environmental remediation: the desire is to accuse people on an individual basis of ruining the world, and to require all solutions to involve individual consequences for their "sins". There's much less enthusiasm for the reality, which is that, other than some slight changes in tax allocation, we might be able to just solve the entire problem and get a slightly improved standard of living anyway (i.e. people in wealthier cities generally like their waterways and beaches not to be clogged with trash).
The worst thing I've found with Teams is that the latency in a video call is _just slightly_ too high (and noticeably higher than Google Meet and Zoom). I find this latency to be absolutely critical in keeping the flow of conversation and preventing people from accidentally talking over each other. After a forced switch to Teams while the whole company was WFH, the detrimental impact it had on every meeting was extremely obvious. It's the most basic technical aspect of the product; it doesn't matter how great the rest is if you don't nail it. That's why I hate Teams.
Just letting you know it's not you; this is a well-known, long-standing issue with Keycloak. Users typically see a significant performance cliff at around 300-400 realms. While one realm is not necessarily the same as one tenant in Keycloak, this makes supporting multi-tenancy with SSO integrations in a single realm a significantly larger headache.
I'm afraid I can't give you more details than that; we just moved on from Keycloak at that point.
A subtle advantage of CNPG is that it doesn't use StatefulSets; instead, the operator itself handles things like mapping storage volumes and maintaining stable identities. Regular Kubernetes StatefulSets have some tricky sharp edges for failure recovery.
I don't know if all of these alternatives use StatefulSets, but I remember several doing so.
I've personally found CNPG to be pretty robust, and it supports everything you'll eventually need once you're locked into a solution (e.g. robust backups, CDC, replica clusters).
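For a sense of how the operator model looks in practice, here's a minimal CNPG Cluster resource (written as the Python-dict equivalent of the usual YAML manifest; the name and sizes are illustrative, not recommendations):

    # Minimal CloudNativePG Cluster resource, as the dict equivalent of
    # the usual YAML manifest. The name and sizes are illustrative.
    cnpg_cluster = {
        "apiVersion": "postgresql.cnpg.io/v1",
        "kind": "Cluster",
        "metadata": {"name": "example-pg"},
        "spec": {
            # pods are created and replaced by the operator itself,
            # not by a StatefulSet controller
            "instances": 3,
            # the operator provisions and maps the volumes directly
            "storage": {"size": "10Gi"},
        },
    }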
I've yet to find anything of a similar standard for MySQL.
EDIT: it should also be noted that CrunchyData is a proprietary solution and requires a license to use in production. This is not particularly obvious from their docs.