But restoring 50TB of data from actual backups takes a lot of time.
I like BTRFS to a fair degree, but the fact that _any_ two drives failing in its "raid 10" configuration causes data loss is not obvious or intuitive.
If one has 50TB(!!) of mission critical data, one should not store it on one machine running btrfs. That is just silly. No matter how many mirrors you throw at it.
I think this is undercutting the announcement. A Baofeng is a tool specialized for RX/TX of analog audio on the 2m and 70cm bands. Of course it can listen to walkie talkies which are transmitting on 70cm.
I read your comment as someone posting about making toast with an iron and saying "Anyone can make toast. I can do it with my toaster. It's not that interesting"
Everything that device can do is interesting because it is so many things in one, including this new ability.
I know posting a top-level comment seems indistinguishable from wagging my finger at the project, but it was just a license rant. I hoped that including the third-party link would make it more obvious what was being commented on, but I guess not.
In the non-pinned, non-auto updated repos, I see commits from two weeks ago.
Though the main contributors are pretty active in nixpkgs, so a lack of commits here doesn't necessarily mean they aren't working on improving snowflakeos - they may be working on getting the improvements they need implemented upstream.
Wait, people are using graphql for private, not exposed, backend apis?
Who would torture themselves like that?
Isn't the whole point that your frontend can make the exact queries it needs and load the exact data it needs?
Namely, last I checked, client libraries for working with graphql are only good with JS. I tried working with graphql a few years ago in python and the only two client libraries absolutely sucked. Server libraries were great, but python clients sucked, badly. I ended up writing bare requests with a hardcoded heredoc for the query and endless square brackets to get the fields for the little data I needed.
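For anyone lucky enough to have avoided it, that workaround looks roughly like the sketch below (the endpoint, token and field names are made up, and `requests` stands in for whatever HTTP library you use):

    import requests

    # The query lives in a hardcoded heredoc-style string...
    QUERY = """
    query {
      orders(first: 5) {
        edges {
          node { id total }
        }
      }
    }
    """

    resp = requests.post(
        "https://example.com/graphql",                # hypothetical endpoint
        json={"query": QUERY},
        headers={"Authorization": "Bearer <token>"},  # hypothetical auth
        timeout=30,
    )
    resp.raise_for_status()

    # ...and the endless square brackets to dig the few needed fields out of the envelope.
    totals = [edge["node"]["total"] for edge in resp.json()["data"]["orders"]["edges"]]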
Maybe the situation improved dramatically in the last three years, but I can't imagine so dramatically.
I wouldn't pick graphql as a private backend API in a million years. Well, maybe if every single service was written in nodejs with no possibility of using other languages.
I remain unconvinced that GraphQL is the tech we should have chosen, but we’re using it privately to mesh together subgraphs from different teams’ various areas of concern and it seems to be working pretty well. There’s been talk of opening it up to clients, but the idea of trying to apply a permission/access control layer is nightmare material.
As for language support, I’ve used a couple of libraries on both the client and server side for both Go and PHP, and they seem pretty good. JavaScript/TypeScript definitely seem to be the first-class citizens though.
Comparing GraphQL to REST and not a RESTful Graph-based data access API like JSON:API is just being disingenuous.
For GraphQL, you know it will be a nightmare. For REST, you don't, because you don't know what the actual API is. It could be a stopwatch API, obviously a stopwatch REST API is not comparable to GraphQL.
That doesn't matter at all. Handling authorization at the federation gateway point is a nightmare in any case. At least with GraphQL the scope is limited.
GraphQL literally is a "federation gateway point."
It's literally meant to be a gateway to a bunch of different external systems working in federation.
It's the exact same scope if you used a RESTful graph based data access API that connected to external systems..
Outside all the GraphQL jargon and gobbledygook you'll see that GraphQL is not some groundbreaking, unique or foundational product... The concept of generating graph based data access APIs from schemas is probably older than the concept of phones that can send emails. The only "innovation" is the "query language"...
> That doesn't matter at all.
I honestly don't know why I'm taking this seriously. Yes, it matters, your entire point was a comparison.. that your comparison is foundationally invalid matters when assessing the validity of your entire argument..
The whole point of my comment is that with REST you don't have any guarantees about the API and thus it's a much harder target to support. You're limiting it to json:api which is nice but I have never even seen a company use that. What happens in reality is that I get 20 APIs from 20 teams, each built on a completely different principle - and I'm supposed to build a universal auth layer for that. That's much more painful than at the companies that decided to go for GraphQL only.
> you don't have any guarantees about the API and thus it's a much harder target to support.
You won't find a developer worth hiring who thinks OpenAPIs are more difficult to work with than GraphQL.
Every REST API can be an OpenAPI
Practically every popular general purpose language has an OpenAPI client gen implementation.
And supporting a GraphQL API is not easier than supporting a REST API. All I have to do for my stopwatch API is handle one route.
> You're limiting it to json:api which is nice
Certain constraints do come with valid comparisons, comparing like things is a requirement of a valid pros/cons comparison, not a nicety.
It makes no sense to explain the pros of using one thing over another when no one is thinking about using one as a replacement for the other.
It's like talking about the pros/cons of using your proprietary web based chat platform vs. using UDP. Not using UDP for the chat or anything. Just comparing the chat with UDP itself, the whole protocol. No one considering your chat is going to step back and go "wait why am I using a web chat when I can just use UDP?"
> I have never even seen a company use that
My point isn't that a company does/doesn't use it. I mentioned JSON:API because it's a valid basis of comparison, no other reason.
> What happens in reality is that I get 20 apis from 20 teams, each of them completely different principle - and am supposed to build an universal auth layer for that. That's much more painful than in the companies that decided to go for GraphQL only.
But you have by your own admission never seen another company use JSON:API.. therefore you don't actually know that it is more painful than it would be for all REST APIs.
For all I know you could, again, be talking about my Stopwatch REST API... hence your comparison is invalid. By the way, auth is easier for my stopwatch API than it is for GraphQL. :)
Lol... Again, the point is that unrestricted is harder than restricted to something specifically designed with this in mind. You go make your own comparison, I compare things that matter in my work. Your Stopwatch API is probably simple, the API to generate PDFs of financial projections that I have to support is not.
So then your argument isn't that GraphQL is better than REST. It's that you need something specifically designed for graph based data access to handle graph based data access.
Which a REST API can do.
This argument is quite a goalpost shift from your original point: that it is easier to implement authz/access control in GraphQL than for a REST API.
An API to generate PDFs is not comparable to GraphQL, because it's not a Graph based data access API...
You're so disingenuous and unserious it's insane. This is why people hate GraphQL and it's devs: 100% of your talking points are non-sequitur copy paste arguments. It doesn't matter how much context is provided to prove what you're saying is nonsense — you'll just bulldoze right along with said nonsense anyways. It's like I'm talking with a zombie.
For the purpose of accountability I'm pointing out that so far you've:
- made an invalid comparison
- tried to elaborate on that comparison only to make several other logical errors
- goal post shifted to a totally different point than your original point after being called out
That you've already pointed out you have zero experience with JSON:API, and that you can't come up with anything good to compare GraphQL with, sort of demonstrates that your reasoning behind using it is not based on anything other than marketing, as you have never actually used an alternative to GraphQL... Everything you've said I've only ever seen in GraphQL promo material.
So basically you got shilled and turned into a shiller.
In my experience the focus should be on avoiding incidental complexity. Essential complexity born from business problems is, counterintuitively, often better left complex.
Depends on where you're working in the stack. Human processes (accounting, HR, etc.) are complex because they evolved in a world of murky definitions, but they work because people generally understand things and fill the gaps. When trying to model those, then yes, for sure things are going to look complex and you don't have much choice.
However, for purely technical things, I've noticed that things can be made quite obvious if you really focus on the exact problem you actually have to solve and have a good understanding of its nature.
When I was younger I was eager to experiment with things, abstractions, etc.; now I want to write (in the web development context) as much plain HTML, vanilla JS and raw SQL as possible.
It's not really torture. I do. I have built a private federation API in a system where there's a "mothership" service that needs to talk to individual websites.
Those websites are WordPress at the moment but they may be Magento, PrestaShop or whatever in the future.
GraphQL means I can template the API calls used to keep them in sync with the mothership. It's awesome.
I also use it for the frontend/backend connection of a couple of admin APIs and an intranet app.
Nuxt frontends, Laravel/Lighthouse headless backends. The only pain point is the unnecessarily complex Apollo stuff in the client, but that gives you really nice smart queries in Nuxt 2; it may not be that crucial in Vue 3/Nuxt 3.
> that gives you really nice smart queries in Nuxt 2; it may not be that crucial in Vue 3/Nuxt 3.
It's even better with Vue 3 (or Vue 2 with the composition API anyway), especially when you combine it with graphql-codegen. Smart queries become just another composable, no `this.$apollo` to be seen. Got that going with API Platform on the backend myself, which has really nice integration with symfony, though it speaks only a bare-bones dialect of graphql as opposed to Lighthouse with all its crazy directives. No persisted queries either, but it's an internal app with bandwidth to burn.
Been eying wp-graphql for my one wordpress project. From my dabbling with it, the DX feels a lot nicer than the WP REST api, though I'm sure you know that's an awfully low bar to clear.
> especially when you combine it with graphql-codegen.
I did not know about this, thank you.
> Been eying wp-graphql for my one wordpress project. From my dabbling with it, the DX feels a lot nicer than the WP REST api, though I'm sure you know that's an awfully low bar to clear.
Yes on both scores. There's a reasonably useful Woocommerce binding:
Which is what I was using for the federated storefronts, with the API locked down so it could only be accessed from a back-office client I hooked up with a Laravel worker. At the time it was very early days, but wp-graphql has quite a sane interaction with the hooks/actions model, so it's not awfully difficult to add/patch/override the things you need.
You’d be amazed at how much self-inflicted difficulties developers are willing to subject themselves to in the name of doing things in the same way that a FAANG does.
Graphql is by far the most painful DX and least productive thing I've seen cargo-culted. Like most things there's a time and place for Graphql but I think it's vastly overused.
For backend services in particular literally anything else is probably a better choice - rest, grpc, soap, a socket. I honestly don't know why more people don't give twirp a shot.
Pothos + GraphQL Zeus lets us expose a Prisma-like interface to the clients, curate what fields are exposed, and set up field-level security.
Oh, and we get DB-schema-to-frontend propagation of type changes.
This is by far the best experience I have had for private APIs - where we don't need to maintain backwards compatibility.
I do get that the HN crowd is super averse to GraphQL, but I fail to see arguments other than matters of taste.
Would you consider this solution for a public (enterprise/B2B, not public-on-the-internet-for-anyone) API? I may need to build a public API that lets customers select certain fields from a variety of objects with many fields.
Hmm, this is a really good question and would definitely require a bit more information about the customer relationship to fully answer.
In a setup where you can deprecate fields and remove them after X weeks I think it could work well - my impression is just that enterprise doesn't work like that?
I would probably go the route of having fully versioned APIs in the specific setup (GraphQl or not) - which is something that would be a hassle with a tight coupling between the Prisma schema and the GraphQl schema as proposed.
Thinking about it, in all the situations where I have successfully used GraphQL, there has been a relatively tight coupling between the database schema and the GraphQL schema. Previously it was Ecto + Absinthe + Zeus (Elixir on the backend).
You’re right that there will need to be long-term support of API versions (or at least the current one and maybe the previous one).
My instinct is to give the users the full objects and let them select the fields they want on their side. It’s less work on our side. Also I’m not sure about customer comfort with GraphQL.
Where GraphQL really shines is in the synthetic fields - the ones that are just too expensive to compute for all requests, but kind of make sense to have on the entity.
Most interesting applications have some properties of entities that transcend a static object.
We use it for an application that aggregates data for consumption by several different teams that all consume different subsets of the data. When you have a pretty simple use-case it's really not that bad to get a decently functioning API off the ground, and because it's self-documenting we can spend our time on more mission critical work.
If you use OpenAPI/Swagger you can most likely generate the client code along with the types. I use the npm package below, and it just seems to work simply without any headaches.
I would recommend checking out feTS[0]; it infers the types from the endpoints without needing to generate files. I find this easier to work with because you don't have an extra step when the schema changes. I have to say I haven't used it yet; it's on my radar to try when the opportunity arises.
One thing I like about the codegen approach is that the generated code provides a snapshot of the changes through time in my Git history, which I refer to quite often.
Thanks for sharing feTS. It looks pretty awesome and I will be checking it out.
I was responding to your point specifically about manually writing code in typed languages, highlighting that there are solutions to avoid that. I am not dismissing your use of GraphQL.
Second, I don't think it's very comparable. The big drawback of "RESTful APIs" is that you cannot combine things. You call an endpoint and you add some query parameters, that's pretty much it.
In GraphQL, you can combine and even nest queries. You simply cannot (or don't want to) generate all combinations in advance, so you'll decide ad hoc in your code. Therefore you need a library that can do these combinations ad hoc for you in a typesafe way.
Yeah. But since gql is very flexible, you then also want to combine them in an ad hoc way but still get the right structure/types back. Then there is also batching, reusing inputs and so on, so ultimately a library saves a lot of time and effort when things grow bigger.
You want to execute a single query for a given page for performance. But whether a field is included in the query should depend on whether a component uses that field. If you write the raw query without fragments, you introduce implicit coupling between the query declaration and the subcomponents, which (like with REST endpoints, CSS classes, etc) means the query becomes append-only. Removing fields is dangerous and requires research, which devs on a Friday won’t do. Especially if you’re making changes to a component used across many pages.
Okay, so write fragments. However, without a framework like Relay (and soon Apollo), you still receive, at runtime, the entire network response. ie there is no masking at the fragment level. This means that there still is an implicit dependency between components, and removing fields is still dangerous.
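For readers who haven't seen the fragment approach, it looks roughly like this (component and field names are invented; shown as plain strings just to illustrate the shape of the document):

    # Each component declares exactly the fields it renders...
    USER_CARD_FRAGMENT = """
    fragment UserCard_user on User {
      name
      avatarUrl
    }
    """

    FOLLOW_BUTTON_FRAGMENT = """
    fragment FollowButton_user on User {
      id
      viewerIsFollowing
    }
    """

    # ...and the page-level query composes the fragments, so removing a component
    # removes its fields from the single query that actually hits the network.
    PROFILE_PAGE_QUERY = """
    query ProfilePage($id: ID!) {
      user(id: $id) {
        ...UserCard_user
        ...FollowButton_user
      }
    }
    """ + USER_CARD_FRAGMENT + FOLLOW_BUTTON_FRAGMENT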
I used to think like you. Until I used and understood the Relay GraphQL library. The learning curve is a bit steep. But if you are consuming a lot of GraphQL data in a web browser, this is totally worth the investment.
GraphQL really shines with data intensive applications where you need pagination, filtering, sorting, projections etc.
Normalized caching, pagination, and code generation are some features that many developers find appealing and are supported out of the box by many GraphQL clients.
As the current maintainer of graphql-python/gql, the most popular Python client, I have to say that you should definitely try again to get an accurate opinion. A lot of changes have been made since then, and we are quite feature-full and stable right now, with 100% code coverage and good documentation.
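For the curious, basic usage nowadays looks roughly like the sketch below (the endpoint and field names are placeholders):

    from gql import Client, gql
    from gql.transport.requests import RequestsHTTPTransport

    # Synchronous transport; fetching the schema lets the client validate queries locally.
    transport = RequestsHTTPTransport(url="https://example.com/graphql")
    client = Client(transport=transport, fetch_schema_from_transport=True)

    query = gql("""
    query {
      orders(first: 5) {
        id
        total
      }
    }
    """)

    result = client.execute(query)  # returns a plain dict
    print(result["orders"])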
I think there is some confusion with the term 'public' here.
The Graphql server itself is still publicly exposed to the internet, but the ability to query is not. Queries have to be whitelisted ahead of time (persisted queries).
I wrote a series of backend, internal GraphQL APIs. I think the argument of "make the exact queries it needs and load the exact data it needs" actually applies _more_ to internal APIs than to frontends, where I personally prefer a well schema'd BFF pattern. We did face some performance issues with some of the server libraries, unfortunately, but the backend development team were extremely productive in comparison to supporting multiple API shapes.
> backend APIs
Using GraphQL for service-to-service communication isn’t the sweet spot if you can deploy both services together. Rather, it’s great if the services are deployed on their own schedule. This is the case if one service is someone’s browser, where forcing a refresh whenever the backend is redeployed isn’t tenable. For service-to-service communication, where you can deploy both together, gRPC or something is a better option.
I think the public/private distinction here is more in "officially allowed for public use" vs "API your frontend uses but is not documented for public use". Obviously security is still a thing, but having good ergonomics for your frontend devs means your frontend devs can just work forward and backend people can focus on other things instead of going back and forth on perf issues downstream of REST APIs
> and backend people can focus on other things instead of going back and forth on perf issues downstream of REST APIs
Instead they now focus on perf problems of downstream federated GraphQL queries. And query complexity. And unbounded queries. And extreme overfetching of data that all clients inevitably do. And...
If anything REST APIs encourage overfetching, because you are almost never getting exactly the set of data you need.
Unbounded queries and extreme overfetching in general are problems that are ... easy-ish to solve when it's totally internal. Just measure perf, tag queries with the frontend page they're made from, and coordinate with frontend people if there's a problem.
Performance doesn't magically improve if you're using REST. And joins don't magically make things slower than "join via HTTP request" either. There might be patterns that are dangerous in GraphQL but honestly I feel like most internal APIs would benefit from easier scoping of fields and the like (rather than everyone re-inventing ad-hoc expansion and filtering)
> Performance doesn't magically improve if you're using REST. And joins don't magically make things slower than "join via HTTP request" either.
In REST you know the exact query and can optimise the hell out of it. Not so in GraphQL, where each query is potentially a new, never before seen request. Unless you use and optimise for persisted queries which just makes it REST with extra steps. And for large companies even that may not be an option since every query will end up being a persisted query once you deploy to production.
If your UI needs A and A.B and A.B.C, then you end up with three REST queries. Or you end up with some expansion logic. Meanwhile your REST query will likely also be throwing in A.D, A.E, A.F, and maybe A.G.H along for the ride because of some auto-expansion.
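To make that concrete with the abstract A/B/C entities and hypothetical endpoints:

    import requests

    # GraphQL: one round trip asking for exactly the nested slice the UI needs.
    query = "{ a(id: 1) { id b { id c { id } } } }"
    nested = requests.post("https://example.com/graphql", json={"query": query}).json()

    # REST equivalent: three dependent round trips, each returning every field of
    # the resource (A.D, A.E, ... come along whether the UI wants them or not).
    a = requests.get("https://example.com/api/a/1").json()
    b = requests.get(f"https://example.com/api/a/1/b/{a['b_id']}").json()
    c = requests.get(f"https://example.com/api/b/{b['id']}/c").json()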
> Unless you use and optimise for persisted queries which just makes it REST with extra steps.
GraphQL offering a syntax and a way to state exactly what will be used makes it possible to chip away at what is sent over the wire in persisted queries. This is not a negligible thing in environments with very wide and deep data models. These are places where their "REST" queries end up as "GraphQL with extra steps" through path inclusion/exclusion logic and expansion logic.
Obviously at the end of the day context matters most, but there's been loads of places and APIs I've used where you could feel REST causing performance issues in an end-to-end way, either by generating N+1 HTTP queries, or just shipping way too much data to show a list of names of resources.
This is nonsense. GraphQL queries are simple HTTP requests, with no more complexity than REST. You POST a query string and some JSON and it’s done. If your client makes it harder than that, don’t use it.
Here’s my workflow for creating an API with Postgraphile:
create view graphql.object as select some,columns from table;
(That’s it)
It’s trivial to query it with curl, I’d give an example but I’m afk rn.
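In lieu of that curl example, the same thing from Python (the URL is Postgraphile's default local endpoint, and the field names depend on Postgraphile's inflection of the view, so treat them as placeholders):

    import requests

    # Postgraphile serves the whole API from a single endpoint, /graphql by default.
    resp = requests.post(
        "http://localhost:5000/graphql",
        json={"query": "{ allObjects { nodes { some columns } } }"},
    )
    print(resp.json()["data"])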
I’ve been using GraphQL for about the same amount of time as in the article and it solved a bunch of problems for me.
It’s so easy to use, and saves so much time - once you spend the time to understand it.
I've been reading about GraphQL forever and never understood it. Your comment finally made it click for me. Do you happen to have any more documentation around your method of working?
Unfortunately I’m on a bus to the airport for a couple of days so I’m a bit constrained.
If you know Postgres, I would recommend taking a look at Postgraphile. It’s awesome, and comes with an explorer web UI that really helps (GraphiQL with extras). Everything happens in real time, so if you update a view, the UI updates.
There are lots of GraphQL clients but many of them do all sorts of crap you don’t need. I just use graphql-request which is super simple. But of course you can just use fetch() too.
There are also lots of “standards” for GraphQL that make it seem more complex than it is. Ignore that stuff and just start playing with a good server like Postgraphile.
> This is nonsense. GraphQL queries are simple HTTP requests, with no more complexity than REST. You POST a query string and some JSON
The complexity of GraphQL in fact begins there, and also sort of explains a lot of why GraphQL is anything but simple:
Why am I using a query language instead of just passing an AST via JSON, a data format every general purpose language supports very well these days?
The answer to the above question, and to most of GraphQL's other complexities: some arbitrary design decision.
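Just to illustrate what that alternative could look like (this is not a real wire format anything supports, purely a sketch of the idea):

    # GraphQL's bespoke query language:
    gql_query = "{ user(id: 1) { name friends { name } } }"

    # The same selection carried as plain JSON-friendly data instead:
    json_query = {
        "user": {
            "args": {"id": 1},
            "fields": {
                "name": {},
                "friends": {"fields": {"name": {}}},
            },
        }
    }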
Another example:
GraphQL could've easily been expressed as a REST API, even an OpenAPI. From what I have seen, with the help of VC backing and FAANG endorsement, GraphQL mostly rose to replace JSON:API, which accomplishes pretty much all of the same goals in just JSON (and is RESTful).
One big issue of GraphQL is also that API clients tend to suck. That's not a problem for OpenAPIs.
And again, why is this the case? Some arbitrary design decision.
I feel like in general, someone creating a new DSL where it's not needed (and is obviously designed to look cool rather than actually be expressive), is a good sign they're just writing the software to stroke their ego rather than reach a meaningful end.
That's why in all the promo material for GraphQL you only see the query language, and not all of the actual setup required to send a request or even what an actual GraphQL HTTP request looks like. GraphQL, the framework, is not actually as simple and elegant as GraphQL the query language attempts to portray it as.
It's almost like someone came up with a query language for fun then came up with all the details of how a web server would utilize it afterwards.
Even today, GraphQL markets itself only as a query language (A query language for your API). When, as you have already mentioned, it is more than that.
That's why most developers know vaguely what GraphQL is ("Oh, that one language") but not how it actually works in practice. And when they actually encounter it, it feels almost like a betrayal, because it's nowhere near as simple, sleek or elegant as all the marketing they saw suggested.
At least, this was my experience when having to deal with a third-party GraphQL API (funny enough, they migrated from REST; see ShipHero).
Why do you mention GraphQL and JSON:API in the same sentence? The latter is at least 10x more difficult to understand with all its edge cases around entity relations and filtering.
These are just assertions with little to back them up. As TFA says, you can make all the same claims for REST. And GraphQL works the same as REST. But instead of a complex mishmash of positional and named parameters, it has a super simple query structure.
When you create a stack of REST APIs, you’re creating a DSL. But it’s a DSL with arbitrary and frequently undocumented relationships between objects, no type safety, and masses of hidden complexity.
GraphQL is simple. If you don’t think it’s simple, you don’t understand it.
> One big issue of GraphQL is also that API clients tend to suck. That's not a problem for OpenAPIs.
The clients are unnecessary. You can get ridiculously complex clients for REST, too. But you can also use GraphQL just using fetch().
The only material difference between the two from a client perspective is:
* REST gives you everything, even if you don’t want it
* GraphQL requires you to request what you want using a minimal query language.
GraphQL also lets you perform multiple queries in parallel, on the server, which REST can’t easily do.
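For instance, a single GraphQL document can carry several root fields in one POST, and the server is free to resolve them side by side (field names invented):

    import requests

    # Two unrelated root fields in one request, one response.
    QUERY = """
    query Dashboard {
      currentUser { id name }
      openTickets(first: 10) { id title }
    }
    """
    data = requests.post("https://example.com/graphql", json={"query": QUERY}).json()["data"]
    user, tickets = data["currentUser"], data["openTickets"]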
REST is a PITA for any data model that’s more complex than CRUD.
> REST is a PITA for any data model that’s more complex than CRUD.
Also I'd just like to point out that for everything but queries themselves, GraphQL uses JSON. JSON is used in the C and U of GraphQL's CRUD... Explain to me why this couldn't have just been JSON and therefore a REST API again..?
Almost the only thing stopping it from being one is its "query language". I guess technically usage of one endpoint would make OpenAPI doc'ing difficult but I think possible depending on what JSON Schema/OpenAPI version you're using.
But it would also be trivial to just have separate endpoints for each schema.
You could even use all POST requests and only use the request body if you want.
> Also I'd just like to point out that for everything but queries themselves, GraphQL uses JSON.
Most clients send queries as json too, actually, not gql source. The funny thing is that GraphQL doesn't even specify JSON, or any wire format at all. It just has to be able to encode the gql type system, and JSON "just happens" to work (I'm pretty sure they did have json in mind when they wrote it, but it still isn't baked into the spec). The spec is also silent on transport, I think the whole tunneling-everything-through-POST thing came from the reference implementation.
You really seem attached to OpenAPI. I can't speak for everyone, but I for one would much rather write SDL than JSON Schema in YAML.
That's why I mentioned OpenAPIs, which you certainly can't make the same claims about. I have never had a problem with an OpenAPI; how simple it is to use is almost a litmus test of web developer competence.
> But instead of a complex mishmash if positional and named parameters, it has a super simple query structure.
Have you ever used an OpenAPI or an OpenAPI client? I seriously implore you to look at ShipHero's GraphQL documentation and look at any OpenAPI docs and pretend like GraphQL is simpler.
But if you've ever used an OpenAPI client, you know this is just a problem in theory, not in practice. I have never accidentally passed a parameter to the body, path, query or headers when it should've been elsewhere.
The problem is non-existent, especially since most people don't write OpenAPI requests from scratch.. you can just throw the document anywhere and you will have some easier way to make requests.
And even if that's really a problem, almost every OpenAPI doc UI I've seen has an option to show you the cURL and substitutes in parameters from the UI. You could just fill out the UI and copy the cURL request.
> When you create a stack of REST APIs, you’re creating a DSL. But it’s a DSL with arbitrary and frequently undocumented relationships between objects, no type safety, and masses of hidden complexity.
I'm beginning to think this entire comment is a strawman: my argument is that a graph-based OpenAPI would have been better than GraphQL. Any REST API can be an OpenAPI. You are comparing apples to oranges.
I am comparing a RESTful Graph based data access API (like JSON:API) to GraphQL.
You are taking what sounds like the kind of REST API you'd find in a w3schools tutorial and comparing it to GraphQL.
> GraphQL is simple. If you don’t think it’s simple, you don’t understand it.
That many developers find GraphQL difficult to understand is literally a testament to its complexity. This statement contradicts itself unless you believe I'm the only developer on the internet with this opinion, which I have a hard time believing given this entire comment is filled with strawman arguments you clearly use against others who dislike GraphQL.
> The clients are unnecessary. You can get ridiculously complex clients for REST, too. But you can also use GraphQL just using fetch().
Types are unnecessary. You can get ridiculously complex with strictly typed languages, too. But you can also just type everything dynamically and go crazy if the type system in my wacky language annoys you.
> REST gives you everything, even if you don’t want it
That's completely up to the REST API. I've written several that disagree with you. I've literally written RESTful Graph based data access APIs that don't do this. Multiple. At tech startups.
> GraphQL requires you to request what you want using a minimal query language.
So it requires complexity to enforce low bandwidth, got it. There is definitely no other way to lower bandwidth than enforcing complexity.
Usually I just use ABAC. You get the fields you can see. You can select a minimal list if you want. You don't have to type out 32 field names if you need them all.
I sometimes wish GraphQL just took a chapter from SQL's book. What you described here is not a feature.
> GraphQL also lets you perform multiple queries in parallel, on the server, which REST can’t easily do.
Ironically, that is up to the GraphQL implementation. ShipHero's does not support this, and requires N+1 queries in many common cases. I suspect that's not uncommon with GraphQL APIs, but I have no evidence for this.
But my suspicion is based on the fact that rigging up a GraphQL API implementation is a daunting enough task on its own, and multiple queries feel like they would be an afterthought to someone doing a quick-and-dirty setup. Which is how most people set up bleeding-edge infrastructure (which is what GraphQL was when it first became popular), because there's not much knowledge, documentation or expertise readily available.
> REST is a PITA for any data model that’s more complex than CRUD.
Again, you're strawmanning. You're arguing against REST, not a RESTful Graph based data access API, like JSON:API.
REST is not an API. GraphQL is. It is basically an API proxy with a query language disguised as a graph based data access API.
”I've literally written RESTful Graph based data access APIs that don't do this. Multiple. At tech startups.”
Okay mr startup man, fact is you don’t know what you’re talking about and your head is so far up your startup ass you no longer know what’s good engineering.
Why are you writing your own restful graph based data access APIs?
And in addition, and TFA alludes to this, each time a person receives a transfusion there is a chance that their immune system will generate antibodies to it. That might not impact the transfusion at the time, but can make the recipient much harder to match for in the future.
My partner has had severe anemia, and because of a prior transfusion went from being an easy match (A+) to being a 1-in-a-thousand match. The last time she needed blood it took 12 hours to locate and transport suitable blood.
If you read the literature on survival rates for people with very low hemoglobin levels, it's all based on the postoperative recovery (or otherwise) of Jehovah's Witnesses.
There are too many reasons to list. But the primary reason is that DNS is a hierarchy of DNS services hosted by many distinct groups. And DNS records are queryable publicly, but also considered private. Almost all servers in the world disable AXFR for a variety of reasons.
Beyond that, DNS servers are not just a "database" of records; they are services that can return different results depending on who is querying. And this is quite common with CDNs.
There are many more reasons, but let's talk about what you could do.
You could download a list of all registered domains, and then query all of them for the most common records. It would take hours, not include 99.9 percent of all records, and be out of date the second it completes. With this database, you could visit some websites, but many host services on subdomains and you have no way to dynamically get that list when you're populating the database.
I don't think DNSSEC would help in the common case of non-validating stub resolvers querying a public resolver. My understanding is that the DNS query response from a DNSSEC-validating public recursive resolver doesn't contain the information required for the stub client to validate it, only a single AD bit.