How did REST come to mean the opposite of REST? (htmx.org)
398 points by edofic on July 18, 2022 | 383 comments



The client knows nothing about the API endpoints associated with this data, except via URLs and hypermedia controls (links and forms) discoverable within the HTML itself. If the state of the resource changes such that the allowable actions available on that resource change (for example, if the account goes into overdraft), then the HTML response would change to show the new set of actions available.

If the client knows nothing about the meaning of the responses, it cannot do anything with them but show them to a human for interpretation. This would suggest a RESTful API is not made for system-to-system communication, but requires human mediation at every step of the way, which is precisely not how APIs are used. In short, this definition of RESTful APIs can't be right, because it suggests RESTful interfaces aren't APIs at all. Once a programmer adds code to parse the responses and extract meaningful data, treating the RESTful interface as an API, the tight coupling between client and server reappears.


> "This would suggest a restful api is not made for system-to-system communication, but requires human mediation at every step of the way"

Which is exactly what REST was originally designed to do: provide an architecture for the Internet (not your app or service) that allows humans using software clients to interact with services developed by programmers other than those who developed the clients. It was about an interoperable, diverse Internet.

If the distributed application is not developed across distributed organizations, particularly independent unassociated organizations, then the architectural style of REST is overkill for what you intend and you could have just kept using RPC the whole time.

The point of the later RESTful API movement was to create distributed applications that leveraged the underlying architecture principles of the internet within their smaller distributed application. The theory being that this made the application more friendly and native to the broader internet, which I do agree is true, but was never the original point of REST.

That said, xcamvber [1] is right: this is me being an old person fighting an old person battle.

[1] https://news.ycombinator.com/item?id=32143382


People took the useful ideas and tossed the rest.

The whole idea of embedding links into the data that describe available operations was not seen as useful, because most web pages already do that. That was not a problem that needed to be solved.

But the concept of resource-oriented architectures which leveraged HTTP verbs to act on data with descriptive URIs was extremely useful in an era when interactions with web servers would look something like POST /common/backend/doActMgt.pl
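To make the contrast concrete, a hypothetical sketch (paths invented for illustration):

    POST /common/backend/doActMgt.pl    (RPC style: one opaque endpoint, the action hidden in the body)

    GET    /accounts/42                 (resource-oriented: descriptive URI, the verb carries the intent)
    PUT    /accounts/42
    DELETE /accounts/42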

Books like RESTful Web Services came out in 2007 and focused almost entirely on resource-oriented architecture. There's not really much mention of hypermedia in it. It's mostly about building API endpoints.

It also referenced AWS S3 (wow, S3 is old) a lot and treated it as a reference implementation / proof of concept that the idea works and is scalable.


Did you mean: People took the useful ideas and tossed the REST ? ;)


Brilliant!


I know you're not supposed to complain about downvotes, but seriously?! A compliment about a witty comment gets me downvoted. I give up.


Did you mean, "...gets me downvoted. I give UP." ;)


That's what the upvote button is for.


There is more than one way to give positive feedback. Think of it like metric buckets. Imagine you could only rate movies out of one.

Sometimes you really like something and you want to tell the person you really liked it.

This obsession people develop about the "right" way to use upvotes and comments is inane.


As someone who has implemented S3 (TL for Cloudflare R2) I’ll choose to disagree that the RESTfulness of the S3 API is a resounding success. Just go ahead and try to write the code to route requests. So many features are likely excluded just because Amazon couldn’t figure out how to jam it into HTTP verbiage.

So sure. S3 is implemented on top of REST but I’d much rather pick a proper RPC protocol. The only reason to stick with REST is that there’s an entire ecosystem around intercepting it as the lowest common denominator (proxies, reverse proxies, caching, browsers etc). If all of those spoke something more modern (gRPC, capnproto, etc) we might be better off. Certainly it would be simpler to maintain and evolve these code bases.


Does R2 also expose an R2-specific interface, or does it only offer the S3-compatible interface?


It supports APIv4 (JSON REST API consistent with how all other Cloudflare APIs are managed) and Worker bindings (JavaScript API). The former is how the UI is implemented (not currently documented beyond creating and deleting a bucket because we haven’t taken time to review/stabilize the API for proper REST semantics). The latter doesn’t use REST but instead has a hacked up JSONRPC-like mechanism (which we’ll rip out at some point once the runtime makes certain things easier).


I’m not sure what the infatuation with HTTP verbs is. RPC allows modeling objects and arbitrary verbs. REST gives you a handful of verbs and punts on data modeling. It always seemed like a step back from an API design perspective. Definitely a battle I lost but never really understood the other side to begin with.


Most people are doing RPC-on-HTTP and call it REST, probably for the better in many ways.

I personally believe that the Verb/Noun/Resource part of REST is perfectly avoidable, while I believe that including URLs/URL fragments/URL-like objects in responses is good.

HTTP is a very complex message-passing transport layer. APIs should separately use the platform (HTTP caches/proxies/verbs/headers) and its main messaging feature (URL + body); so just set the right headers and verbs (GET/POST are enough) and pipe JSON/whatever around.
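A minimal sketch of what I mean (endpoint and payload invented for illustration):

    GET /items?filter=red HTTP/1.1
    Accept: application/json

    POST /items HTTP/1.1
    Content-Type: application/json

    {"name": "widget", "color": "red"}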

(obviously this is for APIs, websites need to use the WWW/HTTP/HTML platform, not just the HTTP platform)


HTTP verbs are best understood as describing the style of interaction with the resource:

- GET/HEAD: read-only, cacheable

- POST: side-effects, can be repeated, but not idempotent

- DELETE: destructive side-effects, idempotent, no real response

- PUT: creative side-effects, idempotent, can return the new resource
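A hypothetical pair of endpoints to illustrate the idempotency split:

    PUT /accounts/42/nickname      (idempotent: repeating it still leaves one nickname)
    POST /accounts/42/transfers    (not idempotent: repeating it creates a second transfer)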


Standardization and a common language.

Limits.

Well-developed error codes.

RPC is a huge footgun.

What's annoying about REST is that it is a religion treated as a universal truth.

But really, REST is basically CRUD.


I disagree about well-developed error codes. Examples:

* There is no useful distinction between different kinds of bad requests (syntactically wrong, structurally wrong, not applicable to the data found, etc.)

* Special handling is needed to test if a resource exists -- you cannot just GET it and check for a 404 code because browsers log all 404 as errors, even if it's the "happy path"

* The error codes confuse authentication and authorization

These error codes are useful when there are middle-men (e.g. proxies) that can understand them without having any application-level logic. But for most REST APIs you don't want those in the way, so that point is moot.

My (to be evaluated) opinion is that RPC is underrated because there have been horrible RPC monstrosities in the past, though I'm aware that without an example of well-done RPC this opinion is useless.


The way I think of the status codes is as instructions for how a generic HTTP client or proxy should or may behave in response. Any status code which, in practice, does not affect generic client behavior, does not need to be distinguished in the status code (and can instead be distinguished in the body of the response).

For example, status 412 can be thought of as "try the transaction over again, your information about the resource is out of date". Maybe your client has generic support for atomic operations on resources, and 412 controls that. 429 can be thought of as "slow down". If your client has built-in throttling, 429 controls that.
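A sketch of the exchange driving that generic behavior (resource and ETag invented for illustration):

    PUT /accounts/42 HTTP/1.1
    If-Match: "etag-v7"
    Content-Type: application/json

    {"nickname": "savings"}

    HTTP/1.1 412 Precondition Failed

    (a generic client can now GET /accounts/42 again, pick up the fresh
    ETag, re-apply its change, and retry the PUT, with no app-specific
    knowledge needed)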

> There is no useful distinction between different kinds of bad requests (syntactically wrong, structurally wrong, not applicable to the data found, etc.)

In general, a bad request is not something that client software would be expected to handle, other than "don't expect a different result if you try again". Obviously this is not a hard and fast rule because some errors are truly transient and some servers give the wrong error code due to bugs or whatever.

You could go all WebDAV with status 422 for non-syntactic errors, but I am skeptical of its utility. If your action is not applicable to the resource, you have 405.

When there's a 400 error, the expected client response is "some programmer will come by and look at the logs if it turns out to be a problem".

> Special handling is needed to test if a resource exists -- you cannot just GET it and check for a 404 code because browsers log all 404 as errors, even if it's the "happy path"

This is more of a UX issue with the browser console than any problem with HTTP.

> The error codes confuse authentication and authorization

Eh, it could be worse. We also have misspelled headers like "referer". As long as clients and servers use the correct semantics, I don't care.


> You could go all WebDAV with status 422 for non-syntactic errors, but I am skeptical of its utility.

FastAPI uses 422 for "your data is structured wrong for this schema" and it's better than sliced bread.

I think the worst part about the 4xx/5xx distinction is that it doesn't really tell you whether the request is failing in a transient or permanent way (or can't tell). It would have been nice to have more leading digits (6xx, 7xx) and make some distinctions:

- the client did something wrong but might be able to succeed later if state changes (403, 404, 429)

- the client did something wrong and the result is unlikely to change without changing the request (405, 422)

Same with the 5xxs, split into "server" (endpoint) failed, vs middleware (load balancers, etc) failed or is overloaded.


Yes, maybe the distinction would be nice, but I think the existing 4xx and 5xx codes provide a reasonable baseline for building smart clients.

403, 404—log error, maybe retry later. I don’t think there’s a reasonable way for the server to distinguish transient vs permanent failures here.

429—client should back off, throttle requests.

5xx—client should back off, throttle requests.

The big problem here is that from the server side, if you try to figure out whether an error is transient or permanent, you often get the wrong answer. It’s a diabolical problem. The distinction between “failed” and “overloaded” is something that you might figure out in a post-mortem once humans take a look, but while it is happening, I would not expect the two to be distinguished.

What I do want to transmit from server to client are things like:

- Try again at a different URL (307).

- Try again at a different URL, and update your config to point at this other URL (301, 308).

- The request should be authenticated (401). Try again if you can authenticate.

- I understood the request and had the resources to process it, but the request failed (403, 404, 405, 412, etc) due to the system state. Retry if the user asks for it.

- There is something wrong with the request itself (400, 422, etc). This is a bug, get a programmer to look at it.

- Slow down (429). Retry automatically, but not too quickly.

- As above, but take a look at the server or proxy to see if it’s working correctly. (503, 504)

- Go look at the server logs. (500)

As a rule, I would say that any error can be transitory, and I would tend to write clients with the ability to retry any request. Not as a hard rule, just as a tendency. A “permanent” status code isn’t necessarily a statement of fact, but just a belief.


> There is no useful distinction between different kinds of bad requests (syntactically wrong, structurally wrong, not applicable to the data found, etc.)

That's false. The distinction is cromulently expressed by obeying the relevant RFC 9110 §§ 15.5.1 and 15.5.21. Further detail can be described as shown in RFC 7807 § 4.
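For instance, RFC 7807's application/problem+json lets a 4xx carry structured detail (the problem type here is adapted from the RFC's own example):

    HTTP/1.1 403 Forbidden
    Content-Type: application/problem+json

    {
      "type": "https://example.com/probs/out-of-credit",
      "title": "You do not have enough credit.",
      "detail": "Your current balance is 30, but that costs 50.",
      "status": 403
    }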

> The error codes confuse authentication and authorization

It's not only the status codes. Like the misspelling of "referrer", this is legacy crap that cannot be changed due to backward compatibility.

> Special handling is needed to test if a resource exists -- you cannot just GET it and check for a 404 code because browsers log all 404 as errors, even if it's the "happy path"

That's analogous to the "all the world's a VAX" chestnut. Does the existence of RFC-ignorant software mean that we must penalise the software that implements things correctly? I think that would be a bad state of affairs. I refuse to make myself complicit in this.

> RPC is underrated because there have been horrible RPC monstrosities in the past

That's a misunderstanding of the argument in TFA. RPC on the Web is bad because the design is fundamentally worse, not a particular implementation.


However, anyone who has used HTTP error codes in REST knows to avoid that – receiving a 404 Not Found causes hours of debugging and blind poking compared to receiving a 200 OK with {“error”: ”Entity not found”}…


Why can't you do a 404 + message?


HTTP is just a transport for your RPC. It's an implementation detail. At the HTTP layer, the transport was successful, so a 200 is appropriate.

A 404 would indicate 'not found' at the transport layer e.g. a bad proxy configuration or you didn't hit the right server at all. You definitely wouldn't confuse it with '${my_widget} not found'.


If you serve static content with nginx or any other webserver, you'll get 404 for any file that doesn't exist.

Why should '${my_widget} not found' be different?


The webserver serves a file or an endpoint. This file or endpoint has indeed been found. On this layer the transaction was successful.

So if you return 404 on this layer, you are saying that whatever file or endpoint you asked for is not there.

If you ask for a resource via a parameter in the URL and return 404, it is not clear whether the endpoint does not exist (e.g. the endpoint.php is missing) or the resource you are looking for does not. This leads to questions like: is the endpoint down, is my internet wonky, did I misspell the name, and business-logic questions: is there no such widget, is the widget out of stock, should I call another endpoint?

404 does not indicate non-existence but an inability to be found by the server. It's a nuance and it only matters when it does, and then it bites...


You're talking about implementation details on the server side. Why should the client care about that?

Why should the error code for /product/foo/thumbnails/123.jpg be different if served with nginx serving a static file or an application server that dynamically generates it based on the product id?


I make a distinction between the file and the endpoint. While 123.jpg can be found or not, the thumbnails/123/ API endpoint has two parts, the call and the argument: 123. To be honest... my argument unraveled in my head while writing – yes, you have a point there. So I'll just say that 404 is unhelpful because it's unspecific.


I think it's about the route. For a `thumbnails/123/` endpoint, the parameters are specified in the route. You're encoding more information in the HTTP level, so encoding the error in the HTTP level as well is reasonable, you can definitely have a 404 response if there's no 123 image present.

But if the route is ID-agnostic, like `/cgi-bin/generate_thumbnails.pl` or `/api/json_rpc.php` then you could justify a 200 OK that the endpoint itself was found, regardless of the parameters. In this design the RPC is definitely above the HTTP level.


This is absolutely counter to everything HTTP is supposed to be used for. I'm not quite sure how to even respond.


It's a common discussion with HTTP status codes: application vs transport layer. HTTP statuses are a mix of both and that can be misleading.

A couple of weeks ago there was an interesting post about this topic here on HN.


This is about REST, not RPC.


Yeah, that seems the obvious thing. Our 404s say what wasn’t found (a resource: the endpoint, or some associated data in the request).

Our 403s describe what permissions precisely the user is missing.

Our 400s describe what is malformed about the request.


HTTP verbs allow you to leverage existing HTTP mechanisms like caching proxies and content-type negotiation.


Yes, people who use PUT and DELETE (and any verb other than POST and GET) come off as insufferable bores to me. Heck, you can just use GET and it will work fine 99% of the time.


And that 1% is a killer. Ask the people who had content deleted because of a GET API and bad interactions with pre-fetching and caching.


"GET" should not be used for anything that changes state (beyond simple logging of requests).

I'm an AppSec engineer and if I were doing a penetration test and found that an endpoint was using GET requests for state changes, I'd consider it a serious enough bug to put it on the report, even if it wasn't a security issue.
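A hypothetical example of the kind of finding I mean:

    GET /users/42/delete     (state change on a GET: goes on the report)
    DELETE /users/42         (the appropriate verb for the same action)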


Having GET commonly understood as idempotent is ergonomic.


Everything that relies on a "common understanding" from humans, or ridiculous things like non-compiler-enforced idioms, programming styles, etc., is bound to fail at some point. The only thing that matters is what the PL / system accepts.


Do you think that browsers pre-emptively executing GET requests, or proxies caching GET responses, is a common understanding from humans?


Those are just bugs from engineers who assumed that such a common understanding existed. Bug reports should be filed, that's all.


> People took the useful ideas and tossed the rest.

Dylan Beattie actually had a great presentation about REST: "The Rest of ReST" https://youtu.be/g8E1B7rTZBI?t=250

I feel like some of the points in that video are really nice, and sadly some of the nicer possibilities of REST have been left underexplored: HATEOAS and resource expansion sound great, but I've seen them used very little in the real world.

Nowadays people reach for GraphQL more often and also sometimes shoot themselves in the foot when they need to deal with the more dynamic nature of querying data with it and the added complexity of an entire query language.

That said, it's nice that we get more and more stateless APIs (with JWTs for example), and sometimes we get the ability to add middleware (like additional auth, logging/auditing or caching layers) without altering the apps themselves too much, and honestly working with JSON and HTTP is wonderfully easy, even if not all of the APIs are actually "truly" RESTful.

I still find myself kind of sad that WADL never got big, or that there weren't many machine-oriented API spec solutions implementing a healthy dose of codegen, a bit more than OpenAPI seems to have built around it. Ideally, you could query a remote API and it would tell you everything that you need to build an API client:

  api-client-codegen --spec rest --input https://api.some-app.com/v3/api-description.json --implementation apache-http-client --output some-app-client-v3.jar
But alas, that's not the world we live in, and even though we could programmatically generate clients for APIs that change often, a lot of wisdom was lost from SOAP (and something like SoapUI, where you could feed it a WSDL and get a working client, despite SOAP itself being pretty bad).


In that case "RESTful API" is an oxymoron, because if it is REST it isn't an API.


I agree with everything you said. As a fellow old person, I just wish they'd call them HTTP+JSON, as calling them ReSTful obscures one of the core principles of ReST, HATEOAS.

It may not matter for a ton of "APIs", but there are a number of places within applications that would benefit from this form of decoupling vs the static client knowing what to do with endpoints, so conflating the two makes actual ReST hard for engineers to understand and utilize.


Sounds very similar to (as far as I understand it) GOPHER.


My recollection is that using Gopher just felt more or less like browsing the early web with lynx. Simple text resources with hyperlinks.


By my read of this "REST API" is a near oxymoron. It was never supposed to be an "API" in the sense that a program consumes it. It was originally described as "Representational State Transfer (REST) architectural style for distributed hypermedia systems" with a focus on describing resources in generic ways for consumption by hypermedia systems (not arbitrary programs!).

I think this is most clearly described by two things Fielding wrote (and the original article links to):

https://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arc...

https://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypert...


> describing resources in generic ways

The trouble is that HTML is anything but generic. If you've ever tried to write a web scraper that can be used on _any_ webpage, you quickly discover that it's near impossible. I used to believe that there should be one way to use HTML to describe the page content and the rest should be CSS, but I gave up: it's completely inflexible, and that approach has been abandoned in favor of simply describing presentation, which is actually practical. You'd need an HTML structural standard, but that would get mostly ignored (as have most of the W3C's recommendations on that subject).

As you said "REST" and "RESTful API" are different beast, i guess the "-ful" should be more of an "-ish".


That was the idea behind microformats and semantic HTML. The only people who ever used it thought the same as you, but there were never enough for any critical mass.


Similarly a client that just "does" HATEOAS and could be usefully used on any API is pretty much unattainable. At best you're going to be operating like a crawler.

Knowing what to actually do and what's useful and what the resources and their components actually mean, that's all out-of-band, much like knowing the structure of a site you want to scrape.


Well pound for pound most websites want to sell ads to human eyes and categorically block any robot that doesn't have a human piloting it.

Your best bet at finding standardized public data is projects that _want_ to be read and stored and don't make money, like Wikipedia


> If the client knows nothing about the meaning of the responses, it cannot do anything with them but show them to a human for interpretation.

This is the best counterpoint in this discussion and it deserves a lot of reflection.

But that reflection should include the realization that this is what the browser does all the time. Browsers don't have any particular semantic model in their head of what any particular form action or hyperlink "means" in terms of domain logic... but they do have some semantics for broad classes of verbs and other interactions associated with markup-represented potential actions, and so they serve as an agent presenting these to the user who does whatever wetware does in processing domain semantics and logic and makes a choice.

This has worked so well over the last 30 years it's been the biggest platform ever.

We're now in territory where we're creating software that's almost alarmingly good at doing some semantic processing out of no particular structure. The idea that we're facing a dead end in terms of potential for clients to automatically get anything out of a hypertext API seems pretty tenuous to me.


The original requirement was something in the direction of generalizing HTTP into a protocol for transferring all kinds of data, with a human representation, interlinked.

That would be very interesting, had HTML not also evolved with the goal of containing all kinds of data with a human representation. But now it's mostly redundant, and people just prefer encoding things in HTML.


I would say "semantic web" is the key technology in an attempt to make that kind of API that doesn't need human intervention.

My understanding of the vision is that when all your responses are described using (Fielding's original) REST APIs via RDF, using URI identifiers everywhere -- then a client that has never seen a particular server can still automatically figure out useful things to do with it (per the end-user's commands, expressed to the software in configuration or execution by some UI), solely by understanding enough of the identifiers in use.

You wouldn't need to write new software for each new API or server, even novel servers doing novel things would re-use a lot of the same identifiers, just mixing and matching them in different ways, and the client could still "automatically" make use of them.

I... don't think it's worked out very well. As far as actually getting to that envisioned scenario. I don't think "semantic web" technology is likely to. I am not a fan.

But I think "semantic web" and RDF is where you get when you try to take HATEOS/REST and deal with what you're describing: What do we need to standardize/formalize so the client can know something about the meaning of the response and be able to do something with it other than show it to a human, even for a novel interface? RDF, ostensibly.

The Fielding/HATEOS REST and the original vision of RDF/semantic web... are deeply aligned technologies, part of the same ideology or vision.


It doesn't need the full SemWeb treatment to work. All it needs is a reasonable descriptive language such as SIREN [1] along with the Web Link Relations list the IETF already maintains.

1. https://github.com/kevinswiber/siren


You have a typo, it's HATEOAS (Hypermedia as the Engine of Application State).


I wish I could upvote this 100 times.

REST, in its most strict form, feels like it was designed for humans to directly interact with. But this is exceptionally rare. Access will nearly always be done programmatically, at which point a lot of the cruft of REST is unnecessary.


> REST, in its most strict form, feels like it was designed for humans to directly interact with.

It was literally extracted from the browser’s interaction model so… kinda?


But browsers already have HTML for this. Links are just <a> tags. POST endpoints are exposed with <form>, etc. Webpages.

Why do we need a separate concept for this thing called REST if it just reduces to hypermedia in the end?


> Why do we need a separate concept for this thing called REST if it just reduces to hypermedia in the end?

Because REST is the formalisation of the interaction model. It was defined in a dissertation written about it. The very first section of the chapter is called “deriving REST”.

> But browsers already have HTML for this. Links are just <a> tags. POST endpoints are exposed with <form>, etc. Webpages.

HTML is not an interaction model, it’s a content-type (and one which assumes and requires human mediation).

REST was about formalising the underlying interaction model for machine contexts.


It seems like the article is basically calling REST an architecture or set of conventions that well-designed websites should adhere to, leaving APIs in the literal sense of the term basically out of it entirely. I mean, you could write code to parse any structured data, but if it can change anytime (because it's self-documenting?) then I'm not sure why you would bother.


I don't think that's what the article was saying. It was mainly pointing out the way people incorrectly use the term REST. I don't think the author was saying that well-designed websites should adhere to it, but rather sites that say they're REST but have APIs that are not RESTful.

An HTML response is able to be parsed and interpreted by the browser, and doesn't need any special client code to interpret its meaning. It understands what's a link, and the actions it can perform on it just using the response itself.

I'd argue that it's still an API, since it's still parsed, interfaced with by the system. The major difference is that the interface is uniform.


> I'd argue that it's still an API, since it's still parsed, interfaced with by the system.

Yes, HTML is an API that browsers can interface with. But that interface is on an entirely different level of abstraction from the web application itself. When a web developer writes HTML to be sent to a client in response to some request, they are not making any changes to HTML itself, as a language, and so they are not working on that particular API at all. Rather, they are using that API to develop their own application whose purpose is entirely different from the purpose of HTML. The value they add is not related to solving the problem of exchanging and displaying hypertext documents.


Browsers are merely one sort of "user agent"; it was envisaged that all sorts of agents, mostly automated, would be crawling the Web, e.g. fetching news, finding the cheapest price for some widget, etc.

Unfortunately, only browsers and search indexers seem to have caught on (and many sites are actively hostile to anything else)


Fielding isn't saying the client should know nothing about the meaning of the responses. He's saying that what the client knows about the meaning of the responses is derived by interpreting them according to the media type, which doesn't have to be HTML, rather than, for example, by looking at the URL. Quoting from what Fielding wrote in his post and comments:

> A REST API should be entered with no prior knowledge beyond the initial URI (bookmark) and set of standardized media types that are appropriate for the intended audience (i.e., expected to be understood by any client that might use the API). ...

> When I say hypertext, I mean the simultaneous presentation of information and controls such that the information becomes the affordance through which the user (or automaton) obtains choices and selects actions. Hypermedia is just an expansion on what text means to include temporal anchors within a media stream; most researchers have dropped the distinction.

> Hypertext does not need to be HTML on a browser. Machines can follow links when they understand the data format and relationship types.

That is, htmx.org or intercoolerjs.org might argue that "HATEOAS is [exclusively] for humans", but Roy Fielding doesn't agree, or didn't in 02008.

— ⁂ —

While I'm arguing, I'd also like to take exception to the claim that this discussion is irrelevant to anything people are doing today. It's an "old person battle," as some say, in the sense that old people are the people who have enough perspective to know what matters and what doesn't. REST matters, because it is an architectural style that enables the construction of applications that can endure for decades, permitting the construction of applications that evolve without a central administrator, remaining backwards-compatible with decades-old software, and can include billions of simultaneous users.

This is an important problem to solve, the WWW isn't the last such application anyone ever needs to build, and JSON RPC interfaces can't build one.

The trouble with redefining "REST" to mean "not REST" is that the first step in learning known techniques to solve a problem is learning the terminology that people use to explain the techniques. If you think you know the terminology, but you have the wrong definition in your mind, you will not be able to understand the explanations, and you will not be able to figure out why you can't understand them, until you finally figure out that the definition you learned was wrong.


2008 Fielding definitely wouldn't agree with me that HATEOAS is for humans.

That's a conclusion I came to after watching it never catch on in the JSON API space and then trying to come up with an explanation as to why. I'd love to hear what he thinks of the idea.

Thank you for the thoughtful comment!


Roy's suggestion in that comment thread is that it hasn't caught on because applying it takes more effort than people are willing to apply in most cases, in order to get benefits they don't care about, such as an architecture that can endure for decades (rather than, say, their OKRs for this quarter). I don't know if that's true, but it could be. It's also possible he's wrong about its benefits, or more precisely that they depend on a set of poorly characterized circumstances that are not present in other cases.

Thanks for your thoughtful essays!


I see how one can argue that's why Haskell hasn't caught on, with the same line of reasoning. So I feel very comfortable dismissing it.


I think that's a productive analogy that can be read in many different ways!

For example, you could argue that Haskell makes tradeoffs to provide benefits that most developers don't care about most of the time, such as code terse enough to include in papers with a page count limit; and that this is a valid reason for people to prefer languages that prioritize other things, which is why Haskell hasn't caught on (thus the Haskell motto "avoid success at all costs").

Or you could read it as a clueless argument ascribing disadvantages to Haskell that it in fact does not have. This would be more difficult if the person making the argument were Simon Peyton Jones, since he presumably isn't mistaken about Haskell as such, but he might still have erroneous beliefs about the world that affect Haskell's use in practice.

To take the discussion up a level, Rogers's factors in the diffusion of innovations are (perceived) "relative advantage", "compatibility", "complexity", "trialability", "reinvention potential", and "observed effects". If we accept this model, the failed diffusion of an innovation such as REST or Haskell doesn't necessarily imply that it has little relative advantage, or even little perceived relative advantage; it might be that they're incompatible with other established practices, are difficult to learn ("complex"), require heavy up-front commitment ("trialability"), are hard to repurpose for unintended uses ("reinvention"), or are hard to observe the use of.

In fact, the diffusion literature consists almost entirely of research on diffusing innovations that had great difficulty diffusing despite having dramatic relative advantages, at least according to the authors.

That still doesn't mean we should comfortably dismiss assertions that one or another innovation doesn't actually confer a relative advantage.


This.

I read quite a lot about REST and HATEOAS, and it didn't make any sense to me.

Somehow the "magic sauce" was missing. How should a client that doesn't know anything about an API interpret its meaning?

I felt like an idiot. Like there was some high-end algorithm or architecture that completely eluded me.

But in the end, it probably just meant: HATEOAS is for humans.


I feel almost like there’s an implicit dependency on GAI for interpreting HATEOS properly.


Yes.

I thought so too!

I just had the impression, that this was too much and I'm simply missing something obvious.


I am the author and I agree with most of what you are saying here, REST and HATEOAS are for humans:

https://intercoolerjs.org/2016/05/08/hatoeas-is-for-humans.h...

I disagree that it isn't an API, but that's a definition quibble. It is probably more profitable to talk about RESTful systems rather than RESTful APIs, since people think API == machines talking.


> I disagree that it isn't an API, but that's a definition quibble.

I don't understand: An API is an application programming interface, i.e. it is meant to be consumed by other programs. How does that go together with

> REST and HATEOAS are for humans

?

And how does that go together with the requirements of "no interpretation needed" and therefore "little coupling" between client and server that were mentioned in the article? Any API must be interpreted by the calling application – i.e. the caller must know what they are calling and what responses they might get. Otherwise they cannot do anything useful with it – at least not in an automatic (programmatic) fashion.

I really don't understand how something can be a REST API on the one hand (clear, well-documented interface; used for programming), and on the other hand is supposed to be "for humans" and devoid of "interpretation" on the client's part. (Leaving aside that, even if this were possible, the interpretation would simply be done by the very final client of the API: The human.)

All in all, I simply fail to see how ideas like "REST is for humans", HATEOAS etc. are supposed to be actionable in the real world.


A concrete action I would suggest is splitting your data API out from your hypermedia API:

https://htmx.org/essays/hypermedia-apis-vs-data-apis/

Use hypermedia (and, who would have guessed I'd recommend this? something like htmx) to build your web application.

Use GraphQL or whatever other nonRESTful technology fits best to build your data API.


Isn't this just server-side rendering, just doing sections of the page rather than the full page?

Looks like it's encouraging coming almost full circle back to web 1.0 where all HTML was generated by the server.


Yes.


An API is also the interface used by humans to create programs. When you use a library, you're using its API. This sense of the term API is often lost.


True, but in this sense every code, every library and every API would be "for humans", which renders this distinction rather useless.


Not sure what you mean. Sometimes the word API is used in one sense, sometimes in another. It's a useful distinction insofar as it allows you to talk about APIs as things used by programmers. I find many developers have a hard time understanding this sense of the word API and as a result fail to apply good API design principles such as SOLID. In fact I think this is often what separates mediocre programmers from good ones.


That feels like a pretty significant quibble. API stands for “application programming interface”. If you cannot write an application to programmatically interface with it, why would you call it an API?

What you and the parent see REST as, should be called an HPAI: “human-poking-around interface”.


REST is an application programming interface for RESTful systems, interpreted by RESTful clients (browsers).

Generic Data APIs, and the clients that use them, are different and have different needs.


>REST is an application programming interface for RESTful systems, interpreted by RESTful clients (browsers).

No, the interpretation is done by the human using the browser, which is what makes it not programmable against, violating the P in the acronym.


The browser, a hypermedia client, sees the links, for example, in the API responses and renders them for the human to interact with. The browser is a hypermedia client, working against a hypermedia API. It not understanding the content of the responses beyond the hypermedia layer is by design: that's the uniform interface of REST.

I mean, this is quibbling over definitions.

I agree entirely with your general point that REST hasn't worked out well as a network architecture for non-hypermedia clients.


You can see the HTML website as an API; just because it’s hard and annoying to interface with it from another application doesn’t mean that you can’t.


Right. But the issue wasn’t whether you can interface with one via other applications, but whether you can do so while adhering to the REST principles Fielding gave, which require that you not have any context to interpret a response other than the response itself. So no out-of-band communication like documentation or your previous use of the site.

At that point, you do need a human at each point, and you can’t program against it. Programmatically interacting involves precisely the things REST principles prohibit, hence this point in the discussion.


You don’t mention Content-Type anywhere I could find in your post.

I don’t think hypermedia is only for humans.

You can totally do REST for computers. You’re just supposed to divide knowledge along Content-Type boundaries.

It’s true people mostly don’t do this, but it works great when people bother to describe rich Content-Types.


I mention Content-Type in that I think that discussions around it have largely been a distraction from what I consider the core innovation of REST: the uniform interface.

I recognize that many of the few people who still talk about REST would disagree with me on that.


You cannot have the uniform interface without the content type, unless you intend to stay strictly theoretical.

Most programmers have zero interest in staying theoretical, and since everyone had shit to do, and nobody felt like making a whole new content type for their applications, we ended up with only the parts that were immediately useful to people being adopted.


> It is probably more profitable to talk about RESTful systems rather than RESTful APIs, since people think API == machines talking.

If it's your stance that an interface designed to be interpreted by a program cannot be RESTful, then you could just shorten your rant to 'REST APIs cannot exist by definition'. It would save time. It's fair enough to be annoyed by words changing meaning I suppose.

It also seems like your RESTful system definition would include any server serving up straight html without client side scripting.


This is my criticism of the architecture as well. But to try to take it seriously for a moment: I think the idea is supposed to be that a client can be programmed to understand the data types, but not the layout. So a programmer can teach a client that if it gets a content type of `application/json+balances`, and it sees something like `'links': [{ 'type': 'withdraw', 'method': 'post', 'url': ..., 'params': { 'amount': 'int' } }]`, then it can know to send that method to that URL with that param, and that it semantically means a withdrawal. That's all encoded into the documentation of the data type, rather than the API. I personally consider this all overly clever and not very useful, but I think that's the idea.
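Fleshed out, a response of that hypothetical media type might look something like this (URL and field layout invented for illustration):

    HTTP/1.1 200 OK
    Content-Type: application/json+balances

    {
      "balance": 100,
      "links": [
        {
          "type": "withdraw",
          "method": "post",
          "url": "https://bank.example/accounts/42/withdrawals",
          "params": {"amount": "int"}
        }
      ]
    }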


> If the client knows nothing about the meaning of the responses, it cannot do anything with them but show them to a human for interpretation.

The client "knows nothing about the meaning of the responses" only inasmuch as it intentionally abstracts away from that meaning to the extent practicable for the intended application. Of course, the requirements of a human looking at some data are not the same as those of a machine using that same data in some automated workflow.

Linked Data standards (usable even within JSON via JSON-LD) have been developed to enable a "hyperlink"-focused approach to these concerns, so there's plenty of overlap with REST principles.


It is not "If the client knows nothing about the meaning of the responses" but "The client knows nothing about the API end points associated with this data"

This means that if "/api/item/search?filter=xxxxx" returns an array of ids then you don't have to guess that item prices can be fetched by "/api/item/price?id=nnn" but this url (maybe in template form) needs to be provided by either the "/api/item/search?filter=xxxxx" query or another call you have previously executed.

So, very similarly to how you click on links on a website: you often have a priori knowledge of the semantics of the website, but you visit the settings page by clicking on the settings link, not by manually going to "website.example/settings".

PS: these links could be provided by a separate endpoint, but this structure is often useful for things like pagination: instead of manually incrementing offsets, each paginated reply can include links for the next/previous pages and other relevant links. These need not be full URLs: relative URLs, plain URL query fragments, or a JSON description of the query would work too (together with a template URL from somewhere else).
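For example, a paginated reply along those lines might look like this (shape invented for illustration):

    {
      "items": [101, 102, 103],
      "links": {
        "self": "/api/item/search?filter=xxxxx&page=2",
        "prev": "/api/item/search?filter=xxxxx&page=1",
        "next": "/api/item/search?filter=xxxxx&page=3"
      }
    }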


This is right on the money in identifying the continual battle with using a hypermedia API for system-to-system communication. There comes a time where you realize that what you're building is great for discoverability, but not for instructing a computer to do something on your behalf.

That said, the situation isn't entirely dire. With some standard linking conventions (e.g. RFC 8288 [1]), you can largely make an API that is pleasant to interact with in code as well. That the links/actions are enumerated is good for humans learning how to manipulate the resource. That they have persistent rels is good for telling a computer how to do the same.

Think <link rel="stylesheet" href="foo"> as an example. A human reading the HTML source will see that there's a related stylesheet available at "foo". But a program wanting to render HTML will check for the existence of links with rel="stylesheet".

1: https://datatracker.ietf.org/doc/html/rfc8288#section-2.1.2


I don’t understand how anyone would ever expect a REST API to work for sending information into a system. So far everything I’ve seen in the article is for reading.

You’d still need some generalized format for the client to get some form of input schema, and if you send the input schema for every action every time you retrieve a resource, things quickly become very data intensive.


Have you ever filled out an HTML form that was handled server-side? If so, you've sent data into a REST system.


Why is the front end coder parsing anything? Just display the HTML.


EXACTLY. Regurgitators of the HATEOAS mantra never address this. Instead you get statements like one in this article: "the HTML response "carries along" all the API information necessary to continue interacting with the system directly within itself."

No, it doesn't. It's a list of URLs, which doesn't even indicate what operations they accept. The only thing REST supports, according to this philosophy, is a user manually traipsing through the "API" by clicking on stuff.

Thanks for summing it up so succinctly; I thought maybe I was missing something.


I'm the author, and I agree with you: a list of URLs jammed in a JSON response isn't much of a useful hypermedia affordance and, even if it was, what would some code do with it besides passing it on to a human to deal with?

Old web 1.0 applications, however, let you do a lot more than traipse through an API by clicking on stuff: you can send emails, update documents, and so on, all through links and forms. The HTML, in this system, does "carry along" all the API information necessary for the client (the browser) to interact with the resources it is representing. It relies on the human user to select exactly which actions to take, which is why I say "HATEOAS is for Humans", and wrote an article on exactly that.


Actually, in those cases the client is a person; the browser is only the transport mechanism.

But indeed we agree on the "for humans" problem. I'm inclined to accept widely-repeated guidelines on technology that I'm just learning, but this is another instance where the fallacy became obvious when it was time to implement something. Kinda like any attempt to use a lot of inheritance in OOP, which is now recognized as silly.

I assume that a lot of problems are "solved," but continue to be surprised that they aren't. I defined an API using OpenAPI and am now trying to generate code for it and do some prototyping, but the whole ecosystem is a mess and the standard itself is only now shaking off some truly dumb gaffes.


Would you say that, in this paper, when Fielding says "client", he means "a person"?

https://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arc...

Or would the browser be the client?


After hastily searching for "client" and reading how it was used in several places, I would NOT say that he means a person.

But I also don't think he's talking about APIs, which is what the semantic debate seems to be about in this thread. I'm not trying to undermine your point; just to clarify that I was still talking about UI vs. API.


I agree: a lot of folks don't think a hypermedia response is an API response, which maybe deserves its own essay.

A hypermedia response is an API response for a hypermedia client, and additional possible API calls to the server are then discovered within that hypermedia response via hypermedia controls (e.g. links and forms). At least, that's my position, but I understand that many people would not call this an API (maybe just a RESTful system?). The language here is pretty darned academic, and Fielding didn't apologize for that:

https://roy.gbiv.com/untangled/2008/specialization

Unfortunately I think that unwillingness to come down a few pegs in abstraction ended up costing a lot in terms of confusion.


Indeed. Anyway, nice essay and thanks for the thoughtful responses.


> It's a list of URLs, which doesn't even indicate what operations they accept.

HTTP does not require a particular media type's representation to indicate the allowed methods. What made you think this should be the case? All popular media types I know contain at most a partial indication (e.g. forms in HTML); otherwise a fallback to GET/HEAD as the default is implied or specified.

If a client is unsure, there's always the OPTIONS method and the Allow header.


I never said HTTP required that. The point of this discussion is that a remote COMPUTER can't derive the means of using a URL simply because it's listed. What query parameters does it support for a GET operation, for example.

This is about APIs, not bog-standard HTTP.


Oh, I see. The response carries along all the information necessary to continue interacting with the system directly within itself. In HTML there are forms for the purpose you just mentioned; I'll give you two examples:

    <form method="get" action="search">
        <input name="q" type="text">

    <form method="get" action="filter">
        <select name="items_per_page">
            <option>10</option>
            <option>20</option>
            <option>50</option>
        </select>
The server instructs what query parameters are supported via the media type and its applicable semantics, which are hopefully well defined, as is the case for HTML. Since the client (here: the "remote COMPUTER", or rather its user agent) understands the media type, it can successfully uphold its half of the contract and generate the correct resource identifiers and accompanying HTTP requests that are delectable to the server.
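For instance, submitting the first form (served from the site root, say) with "rest" typed into the text field makes the user agent construct:

    GET /search?q=rest HTTP/1.1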

The same idea applies to other media types.


This is incorrect. There are plenty of machines that read the web: web scrapers, and they've fostered a diverse ecosystem of tools. Search engines, archival tools, ML dataset collection, browser extensions and more are able to work with hypertext because it's self-describing. A new site can pop up and Google can index it without knowing if it's a blog, forum, storefront, or some new genre of website that may be invented in 30 years.


>Search engines, archival tools, ML dataset collection, browser extensions and more are able to work with hypertext because it's self-describing

They only read state but never modify it. So it misses the whole point of interaction with a web resource.


They can: my password manager auto-logs-in for me. Various tools will automatically find and click the "unsubscribe" link on a mailing list website. Scalper bots buy and resell all kinds of products by navigating web stores (unfortunately). etc.

Yes, generally it's more dangerous to make destructive actions automatically on a site you don't know the structure of than against an API a human has considered and designed against. But think about it: the design of HTTP / REST makes that possible. If you tried to write a bot that downloaded random Windows apps and clicked random buttons to "scrape" them, you could easily click a "delete all my files" button in a file management app (not even considering malware). In a REST API, that's not really possible by just issuing GET requests. Google can safely GET any random URL on your site with no harm, unless you've made a serious programming error.


> my password manager auto-logs-in for me. Various tools will automatically find and click the "unsubscribe" link on a mailing list website.

Those tools work by making assumptions. For example, your password manager looks for text input fields with the name attribute set to "user", "username", "login", "email", "account", etc. Create a page with a login form, but name the user field some gibberish, and I bet your password manager won't be able to auto-login.

> Scalper bots buy and resell all kinds of products by navigating web stores (unfortunately). etc.

And each of those bots will need to be written for each store. ie, a bot written to scalp products from Amazon won't work on BestBuy.com.


Web-scraping is not an example of published-API usage. Invalid comparison.


This is a wrong understanding of that point.

The HATEOAS model is designed for one thing: clients and servers that are developed independently of each other. This matches how the Web is designed (browsers and servers are not developed together), but does not match how most Web Apps are developed (there is almost universally a single entity controlling both the server and the client(s) for that app).

The point of HATEOAS-style REST APIs is that the client should decide what it can do based entirely on the responses it receives from the server and its understanding of the data model - not in any way on its own knowledge of what the server may be doing. This allows both to evolve separately, while still being able to communicate.

To contrast the two approaches, let's say we are building a clone of HN. In the more common web app approach, our client may work like this:

  1. Send a POST to https://auth.hnclone.com/login with {"username": "user", "password": "pass"}; wait for a 200 OK and a cookie

  2. Send a GET request to https://hnclone.com/threads?id=user (including the cookie) and display the results to the user
In the REST approach, our client and server could work like this:

  1. Send a GET to https://hnclone.com; expect a body that contains {"links": [{"rel": "threads", "href": "https://hnclone.com/threads", "query_params": [{"name":"id", "role": "username"}]}]}

  2. Send a GET request to the URL for rel="threads", populating the query param with role="username" with the stored username -> get a 401 Unauthorized response with a body like {"links": [{"rel": "auth_page", "href": "https://auth.hnclone.com"}]}

  3. Send a GET request to the URL for auth_page and expect a response that contains {"links": [{"rel": "login", "href": "https://auth.hnclone.com/login"}]}

  4. Send a POST request to the link with rel == "login" with a body of {"username": "user", "password": "pass"}, expecting a 200 OK response and a cookie

  5. Re-send the request to the URL for rel="threads" with the extra cookies, and now get the threads you want to show
More complicated, but the client and server can now evolve more independently - they only have to agree on the meanings of the documents they exchange. The server could move its authentication to https://hnclone.com/api/auth and the client from 2 years ago would still work. The server could add (optional) support for OAUTH without breaking old clients.

You could even go further and define custom media formats and implement media format negotiation between client and server - the JSON format I described could be an explicit media format, and your API could evolve by adding new versions, relying on client Accept headers to decide which version to send.

Now, is this extra complexity (and slowness!) worthwhile to your use case? This depends greatly. For a great many apps, it's probably not. For some, it probably is. It has definitely proven to be extremely useful for building web browsers that can automatically talk to any HTTP server out there without custom Google/Facebook/Reddit/etc. plugins to work properly.


How do you know that the specific login page is where you send the username/password? What if the server changes the login flow to be a 2-part process? What if the clone wants to add an extra challenge like TOTP?

The kinds of changes people are interested in are usually related to changing URLs, and those don't seem very valuable to me for the amount of complexity this flow adds. And ironically enough, the "URL changing" part we already have covered in the old system fine, with HTTP Redirect messages.


I feel old for I have witnessed many of these battles. But I feel that I have seen history.

There's nothing wrong in this article, in the sense that everything's correct and right. But it is an old person's battle (figuratively, no offense to the author intended, I'm that old person sometimes).

It would be like your grandparents correcting today's authors on their grammar. You may be right historically and normatively, but everyone does and says differently and, in real life, usage prevails over norms.

Same goes for REST.


This is me, 100%. I've seen enough to realize that progress is largely each generation re-thinking and re-mixing the previous generation's problems. Sometimes the remix makes things better, but plenty of times, the older generation looks at what's being done and says, "wow, you really don't understand what this was originally intended for" and there's a pain in that misunderstanding and inefficiency, watching work get redone, wars being refought, and hard-won triumphs being forgotten.

That goes for technology, words, political concepts, music...


Since you have decided to expand beyond tech, I'd like to share a possibly optimistic perspective.

We are taught to think that progress is a straight line. Unsurprisingly, it is not. But I like to picture it with a direction nonetheless... upwards. Since it's not a straight line, I see it as a pulse, with its ups and downs, the ups usually being mild and the lows potentially dramatically sharp.

But still somehow going up.

---

On the current and future challenges: I am mortified about what we did and are doing to the planet and horrified to witness the political tensions in Europe (not only the war, because Europe has seen Crimea, the Balkans, and war never stopped after 1945, but now, there are severe risks of expansion). Also, I do not believe in tech as a means to solve our problems, never did, never will.

So maybe my tiny optimistic pulse trending upwards is too candid and naive but at the moment, maybe blind with despair, I hold on to this idea.


Time is a flat circle.

But not really: time is an upward spiral, which just looks like a flat circle from the wrong perspective. Sometimes the distance between coils is so small as to vanish. Our job is to shove the coil up as much as we can while we are here.


The number of people who think that [insert musician here] had this great original song is one of the fallacies I encounter most, both in daily life and online.

Not only that, people tend to frame the comparison, in word and concept, from the newer song to the older one! "Hey wow, this song from 19xx sounds just like this new song I love"

No you fool, your new song sounds like the previous one :P. Causality matters!


Like the thousands of songs using Pachelbel's Canon.


Yuuup. Us fossils had a saying - "complexity kills". We learned it the HARD way.

Now the new generation of devs is re-learning the same lesson, all over again. It's all fun when you are in your 20s and time and energy seem unlimited. Then you get a life, and you really start valuing a simple and working design, where everything seems under control, and your codebase is not on fire all the time.

This is why we got into distributed systems as a last resort. It was a difficult solution for a difficult problem. For those that don't know, "distributed systems" is what you call "microservices" now, and 97% of the companies don't need to go anywhere near them.


This reminds me of when I was learning programming. I was talking about it to my granddad (who pioneered a few computer-related things), and he asked me "okay great you program in C, but do you write computer programs or applications?". I'm pretty sure this nuance mattered in the mainframe era, but 15-year-old me did not get it at all, and frankly, I'm still not sure what he meant, despite doing it for a living. But nowadays the distinction has disappeared (did the two merge? did one fade?), and the REST API nuances also disappeared, keeping only the parts that matter: those that made this model successful.


In Javascript world what you describe happens every two years.

At least to me it feels like it.


The descriptivist approach has a lot of merits when it comes to language -- to no small extent, words do mean what people think they mean; this is part of what it means for words to have any meaning at all, and when entire cultures let the meaning of a word drift it's hard to figure out what ground to stand on in order to say they're wrong.

And yet... right or wrong, something substantial is lost when "literally" fades into a figurative intensifier.

Same goes for REST.

There's little immediate problem in the misappropriation of the term. The problem is how few engineers spend anything more than literally zero time actually thinking about the potential merits of the original idea (and quite possibly less). Not even talking about the people who spend time thinking about it and reject it (whatever points of contention I might raise on that front), just the sheer volume of devs who sleepwalk through API engagement or design unaware of the considerations. That's the casualty to be concerned about.


This would be a better point if actual REST was something widely used. Then we would have lost a useful word.

But personally, I think I have never seen any actual REST that wasn't just browser-oriented HTML. So using the word for the API pattern is quite ok.


On the other hand, there's nothing fundamental about HTML that is encoded in the REST acronym itself. Is an HTTP JSON API representational? Yes. Does it transfer state? Yes. If anything, the cruel bit is that we use JSON over HTTP, which is "supposed to be" for hypertext.


> And yet... right or wrong, something substantial is lost when "literally" fades into a figurative intensifier.

Yes, but wisdom is to avoid conflating "lost" with "degraded".


Articles like this are good to show how fads and buzzwords in tech take on a life of their own. Hopefully there are some younger devs in this thread who can learn from it.

Us older guys have to do the opposite. As we see these things come and go, we get jaded and start to dismiss new techniques as fads. I shudder to think of how much wasted effort I put into "Enterprise JavaBeans" and "XSL Transforms". Years later, I took a look at React when it first launched, dismissed it as crap because of the mess it made in the DOM back then, and then ignored it. It took me a few years until I realized I was wrong and it was going to stick around.


I was thinking about this the other day. One element that makes a successful senior is distinguishing fads from trends early enough, rather than just good from bad.

Trends and fads can look pretty similar in the early days, and trends often look bad early on too as they often take longer to mature than a fad. The trick is in spotting things that appear to be a bad fad, but will eventually be a good trend.


You were right and wrong. Just because something is crap doesn't mean it won't stick around...


I was there, at the Tannhauser Gate....

Like you I saw the wranglings over this meaning. And today I look at the documentation and see HTTP+JSON RPC and I still FEEL "that's not REST" but whatever.


The thing that confuses me with semantic drift is that nobody stops at any point to make a word for what the other word used to mean. It's very hard to refer, at this point, to "the thing that people meant when they said REST ~15 years ago." Can't we just come up with another jargon term for that, one that isn't being squatted on?


That's usually a sign that the thing that didn't get a new word is not actually all that useful or interesting.


Highly debatable. A (too quick?) Darwinist perspective would tell us that the original ReST was not fit. And the "new" ReST (aka, JSON-RPC) is more fit.

Fit in its current ecosystem, of course.


That's exactly what I meant. The "new REST" appropriated the term because it's what's actually being used. The "old REST" didn't get a new term because it's not actually being used.

There are still theoretical discussions around "old REST", but they all have "new REST isn't REST at all" as their core point, so the lack of new terminology is deliberate there.


We use the old REST a lot, and we call what you name "new REST" JSON-RPC. What's so difficult about naming things what they are? I mean, devs do have some brains. That's why we are able to write cool software. But then why are some of us too stupid to get that REST thing right?


There is already a standard called JSON-RPC and it is not RESTful at all. RESTful means you are using HTTP verbs to convey some information. This is a “mostly stupid” idea, but it has caught on. RESTful is now an industry buzzword. So this is what the pragmatists implement.

We may as well have a debate on what object-oriented really means before implementing something, but again, the pragmatist will just create a structure that is broadly recognizable as object-oriented.


People do try to come up with new terms. I've seen such suggestions in blog posts and on comment boards.

The issue is getting everyone to agree with your new word or to even recognize the problems of semantics.

Many people also deliberately misuse the existing terms to get advantages. For example, DevOps in your title gets you higher pay despite often being materially the same as a pure operations or sysadmin role.


Good point. My only idea would be to say it in extenso: representational state transfer...


People often mean CRUD when they say REST.


Already done: "hypermedia"

Does it delight you?


I see in your eyes the same fear that would take the heart of me

A day may come when the courage of old web developers fails,

when we forsake our old, RESTful network architecture

and break all bonds of HATEOAS

but it is not this day


    javascript fatigue:
    longing for a hypertext
    already in hand

    // haiku FTA


It's just, why are we calling it "REST"?

Just call it ad hoc RPC with JSON over HTTP.


That's what REST has come to mean.


REST has come to mean non-REST the same way "literally" has come to mean "figuratively".


REST is not a word. It's an acronym. It's also a very technical term.


That's simply wrong!


I feel like the author has conflated hypertext with html.

The REST interface should be self-describing, but that can be done in JSON.

If you go to Roy Fielding's post... there is a comment where someone asks for clarification, and he responds:

> When I say hypertext, I mean the simultaneous presentation of information and controls such that the information becomes the affordance through which the user (or automaton) obtains choices and selects actions. Hypermedia is just an expansion on what text means to include temporal anchors within a media stream; most researchers have dropped the distinction.

> Hypertext does not need to be HTML on a browser. Machines can follow links when they understand the data format and relationship types.

So, to me, a proper format is something like...

    id: 1234
    url: http://.../1234
    name: foo
    department: http://.../department/5555
    projects: http://.../projects/?user_id=1234

This is hypertext in the sense that I can jump around (even in a browser that renders the urls clickable) to other resources related to this resource.
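
As a rough sketch of what a machine consumer of that format looks like (the host and field names are the hypothetical ones above, using Python's `requests`), the client never constructs a URL itself - it only follows the ones embedded in the representation:

    import requests

    user = requests.get("https://example.com/users/1234").json()

    # Follow the embedded URLs instead of deriving them from the id:
    department = requests.get(user["department"]).json()
    projects = requests.get(user["projects"]).json()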


I found that JSON-LD, JSON-HAL and other "describe your JSON" standards were needed to make JSON human-readable-ish. I hate that there are many competing standards, and the link syntax feels clumsy. JSON5, with its "add comments, allow ES6 features", was perfect for private and small-team use for a while.

No one seems to listen to the JSON inventor, who said he regrets creating a misnomer of a name and that no successor should reuse the same naming parts, since JSON is neither dependent on nor compatible with JavaScript, nor is it only useful for storing and/or describing objects. (I am paraphrasing his reasoning on both points from memory.)

OpenAPI 3 solved that problem for me, transforming JSON-RPC into a documented API.


JSON is syntactically valid JavaScript, why do you say it's not compatible with JavaScript?


Ask Douglas Crockford. I could not find my source on his misgivings about the name.

What I did find: he states that he did plan for JSON to be a JavaScript subset, which contradicts his later finding, for which I have only my memory as a source, that JSON is not in some sense JavaScript.

More on topic, I did find a source saying it was never meant to be human-readable in the sense of usable for documentation, but a machine-communication language; comments were therefore removed from the initial draft to avoid having non-standard meanings put in the freedom comments allow (think preprocessor directives etc.).


JSON is not a strict subset of JS in that `true` is valid JSON, but not a valid JS program. Might be considered a nitpick but in a thread about semantics it's worth noting


Not all JSON is syntactically valid javascript.


Since ES2019[1], all JSON is valid JavaScript with U+2028 and U+2029 being accepted as unescaped characters in strings.

1: https://tc39.es/proposal-json-superset/


No, I haven't. JSON can be used as a hypermedia, and people have tried this, but it hasn't really caught on, and the industry trend is against it, towards a more RPC-like approach.

I'm using HTML as an example to demonstrate the uniform interface constraint of REST, and showing how this particular JSON API (and, I claim with confidence, most extant JSON APIs) does not have a uniform interface. Which is fine: the uniform interface hasn't turned out to be as useful when it is consumed by code rather than humans.

There are good examples of successful non-HTML hypermedias. One of my favorites is hyperview, a mobile hypermedia:

https://hyperview.org/


I'm old enough to remember WML, which was a mobile non-HTML hypermedia standard. I enjoyed using it, but it was obvious putting an HTML browser on a phone was always going to win out.

Hyperview isn't that interesting because it's a non-standard proprietary technology that isn't really "on the web". So you either have something like that, an actual full-featured HTML browser, or have something consuming a fully defined JSON API. It doesn't feel like there's anything interesting about non-HTML user agents on the web. HTML automatically makes them all irrelevant.


hyperview is a hypermedia that exposes native mobile features directly within the hypermedia

you wouldn't create an internet out of hyperview-apps, rather it is a way for you, a developer of a mobile application, to get the benefits of a hypermedia approach (e.g. versioning becoming much less of an issue, vs. app-store updates) while not giving up on native mobile behavior

it is open source, and I think it's one of the most innovative uses of hypermedia I've seen in the last decade:

https://github.com/instawork/hyperview


It's an app builder, not a hypermedia client. It's a pretty slight semantic difference, but when we're talking about REST APIs for 3rd-party clients, hyperview isn't really relevant to the discussion.


I don't see how you can say it isn't a hypermedia client: it is a client that interprets hypermedia, just like the canonical hypermedia client, a web browser.

It's not an internet hypermedia client like a browser, with general access to a broad network of different sites/applications, but the core mechanic satisfies REST and the format being transferred is a hypermedia.


Because it doesn't have general access to a broad network of different sites/applications, it's not a hypermedia client. It's a client for a single service where hypermedia is an implementation detail.

An HTML client that can only view Facebook is not a browser, it's a Facebook client.

The point of the article is about how REST should contain all the information needed to traverse a service. But hyperview is for building a single client/server application where the developers always know the entire interface. So that is the way in which it's irrelevant to the discussion.


I dunno man, I don't see anything about "a broad network of different sites" requirement here:

https://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arc...

A hyperview-based application has the exact same defining characteristic of a RESTful web application: there is a single URL that is entered, and beyond that all actions are afforded through hypermedia controls. All the information needed to traverse the service is encoded in responses, and no knowledge of the entire interface is needed by the client. Seems relevant to me.

I certainly agree it isn't a general purpose browser, but web browsers are only an instance of a hypermedia client, not the definition of one.


If, instead of text markup, the server just sent an image of the entire UI remote-desktop style how would it be any different? You could still have a single URL that is entered, all information needed to traverse the service can be encoded in the request/responses, and no knowledge of the entire interface is needed by the client.

But we have diverged pretty far from the point. Software clients using REST APIs cannot use the self-describing nature of REST, so claiming they're all using it incorrectly is not, I believe, a valid criticism. Browser-like clients, and you can include hyperview in that, can use the self-describing nature because they're pushing all the understanding to the user. But that is such a niche experience (outside of HTML browsers) that it isn't even worth discussing. For hyperview, it's not consuming generic REST APIs -- it's just acting as a mobile-specific browser for a single service. That's why I said it's not really interesting.


I don't understand why something being "niche" or "not really interesting" has any relevance when asking if it is a hypermedia technology. You assert that "it's not consuming generic REST APIs" but that's exactly what it is doing as far as I can tell, unless you define "generic" to mean "HTML".

I dunno, I'm probably not smart enough to understand what you mean. In my mind, if it satisfies the constraints outlined by Fielding in Chapter 5 of his dissertation, it's a RESTful network architecture, and, as far as I can see, hyperview satisfies those constraints.

Maybe you can simplify it for me and point to the specific constraint in there that hyperview doesn't satisfy?

Again, I'm sorry to be so dense.


> You assert that "it's not consuming generic REST APIs" but that's exactly what it is doing as far as I can tell, unless you define "generic" to mean "HTML".

It's not consuming a weather API or a storage API or a query API. It's consuming an API designed exactly for the singular application that is being designed. It's not distributed except in the sense that it is a client/server application but it's tightly coupled to a singular implementation. A completely different environment to what the entire paper is about.

You've responded to literally the least interesting part of my comment, as even accepting hyperview as a hypermedia technology doesn't negate any of the criticism of "by the book" REST. I may be guilty of caring more about whether REST as described is practical and possible than appealing to the authority of Fielding's definitions.


But it isn't coupled, that's the whole point: hyperview provides a general, mobile-centric hypermedia that embeds hypermedia controls in exactly the manner that Fielding describes. (Sorry, I'm going to keep referring to that, since that's where the definition comes from.)

Until you can show me a REST constraint that it is violating, rather than vague assertions like "an API designed for the singular application", I'm not going to grant you that the technology isn't RESTful. Any web app can have "an API designed for that singular application." It can link out to other sites, but it doesn't have to, and the fact that it does or doesn't isn't material to whether or not it is RESTful. If it has identification of resources, manipulation of resources, self-descriptive messages and uses Hypermedia As The Engine of Application State, it's RESTful.

Here you go:

https://en.wikipedia.org/wiki/Representational_state_transfe...

Pretty simple, just show me the constraint violated by hyperview and I'll agree with you. I don't know if HyperView provides Code On Demand, but that's an optional constraint of REST, if I understand correctly.


You make a good argument but it's unfortunately not in response to anything I'm saying.

I never said hyperview isn't RESTful. In fact, it's probably one of the few examples of a technology that follows the REST principles to the letter you're advocating for and thus why you used it as example.

Yet it's also an example of the lack of necessity of the uniform interface constraint. As the developer of everything in the application, I don't need metadata to describe the resources or self-descriptive messages. They describe everything I, as the developer, already know. It's pointless.

For regular software clients that aren't "browsers" like hyperview, the self-description metadata doesn't help either.

So I guess I'm saying the whole uniform interface constraint is completely misguided for any software that isn't some kind of browser. And then only if that browser is actually meant to be a browser.


OK, then that's the core disagreement: I think the uniform interface and the benefits of a RESTful network architecture are useful even if the software you are using isn't a "browser". In the case of hyperview, a major benefit is that you can update your mobile application without redeploying the client, exactly because of that uniform interface. That's why the creator went with the RESTful approach over a more traditional mobile app network architectures.


There's nothing new about that model; having a client interpret an application from a server is as old as computing itself.

> I think the uniform interface and the benefits of a RESTful network architecture are useful even if the software you are using isn't a "browser".

Except it can't be used for anything other than a "browser" - either a real browser or something like hyperview. That's fine, but it hardly suggests that it should be necessary.


In practice, this turns out to be a painful way for a client to access the data; instead of a single fetch to get the relevant information, we're now doing multiple fetches and collating the results client-side. For this kind of data access, I'd recommend either

a) just denormalizing the field contents if you know what the client needs them for

b) supporting GraphQL if you want to support a general-purpose query endpoint
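
For (b), the win is that one round trip replaces the chain of link-following fetches; a sketch against a hypothetical /graphql endpoint and schema, using Python's `requests`:

    import requests

    # One request fetches the user plus the related resources that
    # would otherwise each need their own GET.
    query = """
    {
      user(id: 1234) {
        name
        department { name }
        projects { name }
      }
    }
    """
    resp = requests.post("https://example.com/graphql", json={"query": query})
    print(resp.json()["data"]["user"])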


REST says nothing about normalization. In fact the opposite — a resource representation in REST is the very same thing as a graph in GraphQL, or a view in SQL: an arbitrary remix of underlying data. REST is meant to be a projected view-model layer for web clients to interact with underlying data through; it's not meant to inform or be strongly bound to the shape of that underlying data.


Because HATEOAS is stupid for client-server communication.

It mandates discoverability of resources, but no sane client will go around and request random server-provided urls to discover what is available.

On the other hand, it does not provide means to describe the semantics of the resource properties, nor their data types or structure. So the client must have knowledge of the resource structure beforehand.

Under HATEOAS the client would need to associate the knowledge of resource structure with a particular resource received. A promising identifier for this association would be the resource collection path, i.e. the URL.

If the client needs to know the URLs, why have them in the response?

Other problems include creating a new resource - how is the client supposed to know the structure of a to-be-created resource, if there is none yet? The client has nothing to request to discover the resource structure and associations.

Also, hypertext does not map well to JSON. In JSON you cannot differentiate between data and metadata (i.e. links to other resources). To accommodate it, you need to wrap or nest the real data to make side-room for metadata. Then you get ugly, hard-to-work-with JSON responses. It maps pretty well to XML (i.e. metadata attributes or a metadata namespace), but understandably nobody wants to work with XML.

And the list goes on and on.


Cont.: Because using HTTP verbs and status codes is stupid too:

- the API is then tied to a single "transport" protocol (it is an application-layer protocol in ISO/OSI, but if you are not building a web browser, your application should reside one layer upwards)

- it crosses ISO/OSI layer boundaries (exposes URLs in data, uses status codes for application error reporting, uses HTTP headers for pagination, etc.)

I think the second issue is vastly underrated. Protocols that cross layer boundaries are a source of trouble and require tons of workarounds. Do you remember how FTP does not work well with NATs? It's because it exposes IPs and ports - transport-layer concepts - on the application layer. SIP? The same thing.

With true REST you can build only HTTP APIs: no websockets, no CLIs, no native libraries.


> exposes URLs in data

That's, uh, the point. Without that, it's not "the web." (And yes, properly-structured APIs are part of "the web" — e.g. the first page of your paginated resource can be on one server, while successive pages are on some other archival server.) This is the whole reason for doing HATEOAS: there's no longer a static assumption that you're working against some specific server with a specific schema structure in place. Rather, your API client is surfing the server, just like you're surfing HN right now.

> no websockets

Correct in a nominal sense, but not in any sense 99% of developers would care about. Instead of RPC over websockets, do REST. Instead of subscriptions over websockets, do REST against a subscription resource for the control-plane, and then GET a Server-Sent Events stream for the data-plane. Perfectly HATEOAS compliant — nobody ever said resource-representations have to be finite.
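
A sketch of that split, with hypothetical URLs and response shape, in Python with `requests`: the subscription is an ordinary resource on the control-plane, and the data-plane is just a GET whose representation never ends:

    import requests

    # Control-plane: create the subscription like any other resource.
    sub = requests.post("https://example.com/subscriptions",
                        json={"topic": "orders"}).json()

    # Data-plane: GET the unbounded Server-Sent Events representation
    # that the subscription's own representation links to.
    with requests.get(sub["links"]["events"], stream=True) as events:
        for line in events.iter_lines():
            if line.startswith(b"data:"):
                print(line[len(b"data:"):].strip())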


> This is the whole reason for doing HATEOAS.

Seems like a premature optimization to me. Building all applications so that they potentially could be distributed over multiple servers. YAGNI.


We're not talking about centrally-developed distributed applications; we're talking about services that tell you to talk to other services. Services that could very well be developed by other people/organizations, years earlier or later. In fact, services that might very well be "plugins", in the sense that those URLs are user-supplied content from the perspective of the API. (Think: permalink URLs from user-supplied RSS feeds, in the responses of an RSS feed-reader API. Or image URLs, hotlinked in an oldschool phpBB forum. Or RDF vCards.)


Hello, this is the internet, have you heard of us?


You know what else crosses ISO/OSI layer boundaries? Switches. But I don't see anyone saying hubbed networks are better.

And realistically, how often are you going to change your "transport"? And if you added an abstraction layer, would that actually make it any easier? Stuff like SOAP ends up being the inner-platform effect, where you reimplement all of HTTP on top of HTTP, and actually implementing a new SOAP transport is just as hard as porting your protocol to use a second "transport" if you actually needed to (which you probably won't).


> You know what else crosses ISO/OSI layer boundaries? Switches.

No, they do not. Switches work at L2 and are only interested in L2 concepts (MAC addresses). They work transparently for any application that is not crossing the L1/L2 boundary. Routers are L3.


they might be talking about switches that also have routers, which is common enough these days to think they are one and the same (they are not, as you noted)


> On the other hand, it does not provide means to describe semantics of the resource properties

Yup, wouldn't it be nice to have some sort of standardized framework to describe those resources? You could perhaps call it a Resource Description Framework, or RDF if you like acronyms.


Aaaand we are back to XML. I don't think XML or RDF is particularly bad for APIs, but they are overly complex and that makes them unpopular.


RDF does not depend on XML. You can use JSON-LD or text-based syntaxes (N3, Turtle or extensions thereof), and those are preferred in modern applications.


FWIW, JSON-LD allows one to avoid XML but still gain the benefits of RDF, but I hear you about the "complex" part so ultimately it's like the other commenters have said: is it solving a problem your API and userbase has or not?


How is RDF (sans XML) overly "complex"? Much of it falls out quite naturally from the choice of building data interchange on hyperlinks to resources, similar to human-readable hypertext.


Sorry, I didn't mean that I personally find it complex, merely that I've worked in enough "wwwaaah, software is hard, right-click-y, push" organizations to appreciate why GP would use the word complex and would find friction in rolling out a sane data structure

The only leverage I've ever found for using good data structures is when I can demonstrate they solve a "today" pain point, because money/management DGAF about tomorrow anything, and only the tiniest fuck about customer pain so long as the checks clear.


> no sane client will go around and request random server-provided urls to discover what is available.

"Random" isn't what's supposed to happen. You hit a top level endpoint, and then at that point other endpoints are made manifest, and then the UA/client and the user decide together what the next relevant endpoint is.

And this is what happens all the time with the most common client (the browser). Seems to have worked more or less for 30 years.

As for what semantics the UA/client is capable of exploring and providing assistance with: who knows what's possible with additional heuristics + machine learning techniques?

> Also hypertext does not map well to JSON... It maps pretty good to XML... understandably nobody wants to work with XML.

I don't understand that, actually. Markup is underrated for data exchange potential these days. JSON is frequently (though not always) somewhat lighter on bandwidth and often briefer to type for the same underlying reasons, but beyond that there's no inherent advantage. It just became the easiest serialization/deserialization story, half out of riding the way that JS won, half out of what it didn't bother to try doing (a lot of the meta/semantics) and so gave devs permission to stop thinking about.


> You hit a top level endpoint, and then at that point other endpoints are made manifest, and then the UA/client and the user decide together what the next relevant endpoint is.

That's not how APIs are used. APIs consume and provide data. Raw data is unsuitable to be presented to the user. That's why HTML has so many formatting options. Formatting information is completely missing from APIs.

> Seems to have worked more or less for 30 years.

Yes, it worked for the good old web. In this sense, true REST is nothing new and even seems backwards. If we try to do REST while keeping data and presentation separate, we will come to something very similar to XML for data + XSLT for formatting. Or XForms. Old ideas all over again.

> I don't understand that, actually. Markup is underrated for data exchange potential these days.

XML/markup does not map well to basic data types in current programming languages. These work with strings, ints, floats and arrays/dictionaries thereof. Not Nodes, Elements and attributes of unknown/variant data types.


> XML/markup does not map well to basic data types in current programming languages. These work with strings, ints, floats and arrays/dictionaries thereof. Not Nodes, Elements and attributes of unknown/variant data types

Exactly.

Arrays are pretty much the most primitive computer data structure (aside from primitives), and the contortions XML must go through to express them are remarkable. So much ceremony, and it can't do a simple list. XML has this core implicit assumption that everything ought to be an object, and I think this was the ultimate death knell for it. You can add complexity to JSON (e.g. JSON-LD) but you can't take the complexity out of XML.


> And this is what happens all the time with the most common client (the browser).

Right. Any technology that works this way is basically a "browser". You could create a new markup language or data format and a new user agent to consume it. But you'd be re-inventing the wheel.

There may be some use case for that, as opposed to software clients consuming a well-defined API, but I haven't seen it yet. The HTML web browser basically deprecated all other browser-like Internet technologies when it came out (remember Gopher?) and is even replacing actual desktop software clients. There's no market for alternative hypermedia clients, so why are we giving this so much thought?


> but no sane client will go around and request random server-provided urls to discover what is available

Compare and contrast: what SQL admin GUI clients do to discover the DBMS schema. They essentially spider it.

> Under HATEOAS the client would need to associate the knowledge of resource structure with a particular resource received. A promising identifier for this association would be the resource collection path, i.e. the URL. If the client needs to know the URLs, why have them in the response?

The client does not need to know the URL; the client needs to know how to get to the URL.

Have you ever written a web scraper, for a site that doesn't really enjoy being scraped, and so uses stuff like CSRF protection "synchronizer tokens"? To do/get anything on such a site, you can't just directly go to the thing; you have to start where a real browser would start, and click links / submit forms like a browser would do, to get there.

HATEOAS is the idea that APIs should also work that way: "clicking links" and "submitting forms" from some well-known root resource to get to the API you know about.

As with a scraper, it's the "link text" or the "form field names" — the delivered API — that you're assuming to be stable; not the URL structure.

> Other problems include creating new resource - how the client is supposed to know the structure of to-be created resource, if there is none yet? The client has nothing to request to discover the resource structure and associations.

What do you think HTML forms are, if not descriptions of resource templates? When you GET /foos, why do you think the browser result is a form? It's a description of how to make a foo (and, through the form's action attribute, a place to send it to get it made.)
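
As a sketch of consuming such a template programmatically (the /foos URL and field handling are made up; it uses `requests` plus BeautifulSoup), a client reads the form's fields and action instead of hardcoding either:

    import requests
    from bs4 import BeautifulSoup
    from urllib.parse import urljoin

    # Fetch the form that describes how to make a foo...
    page = requests.get("https://example.com/foos")
    form = BeautifulSoup(page.text, "html.parser").find("form")

    # ...then fill in its advertised fields and submit to its action.
    fields = {i["name"]: "example"
              for i in form.find_all("input") if i.get("name")}
    requests.post(urljoin(page.url, form["action"]), data=fields)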

Alternately, compare/contrast what Kubernetes does — exposes its resource schemas as documents, living in the same resource-hierarchy as the documents themselves live.

> Also, hypertext does not map well to JSON. In JSON you cannot differentiate between data and metadata (i.e. links to other resources)

It's right in the name: in HATEOAS, hypertext is the engine of application state. Hypertext as in, say, HTML. JSON is not hypertext, because it's not text — it's not a markup language like SGML/XML/HTML are.

You realize that HTML is machine-readable, right? That you can respond to an XHR with HTML, and then parse it using the browser (or API client's) HTML parsing capabilities, just like you can receive and parse JSON? That there's nothing stopping you from using a <table> to send tabular data, etc.? And that if you do so, then debugging your API becomes as simple as just visiting the same link the API client is visiting, and seeing the data rendered as HTML in your browser?


I'm not sure if you are agreeing with me or disagreeing. I was answering the original question - why has REST become non-REST: it is not suitable for client-server apps.

>Compare and contrast: what SQL admin GUI clients do to discover the DBMS schema. They essentially spider it.

Not really, there is information_schema where they get everything they need to know about structure separately from data.

> Have you ever written a web scraper, for a site that doesn't really enjoy being scraped, and so uses stuff like CSRF protection "synchronizer tokens"?

Yes. Awful. Do we want all APIs to be like that? Why?

> It's right in the name: in HATEOAS, hypertext is the engine of application state. Hypertext as in, say, HTML.

Fully agree on this one. Just that HTML is unsuitable for machine-to-machine communication, so it is not used for APIs.


> Not really, there is information_schema where they get everything they need to know about structure separately from data.

But these tables of metadata are accessed as a graph (usually with plenty of cycles, and iteratively by following relations from specific objects of interest) and the result is a "hypertext" (usually presented as tables or as diagrams of objects) of tables, columns, indices, users, grants etc.


> Not really, there is information_schema where they get everything they need to know about structure separately from data.

information_schema only includes standardized information about standard SQL features. It doesn't expose DBMS-specific features like PG's comments; let alone give you not-standardized info on things that are part of the standard, e.g. any DDL-equivalent machine-accessible representation for views or procedures or types or domains. Gathering all of that stuff requires carefully joining five or ten different DBMS-specific tables together. Some of it even requires feature negotiation + graceful degradation.

> Yes. Awful. Do we want all APIs to be like that? Why?

Because then everything other than the well-known root can be changed freely without breaking the client.

Compare-and-contrast: remote object brokers in dynamic languages, e.g. Ruby's https://github.com/ruby/drb. There are module functions that serve as well-known roots to the APIs you're remoting against; but as soon as you get an object handle, everything from that point forward is the result of sending a proxy-object (opaque API) a message, and getting a handle to a new object passed back in return. Where that new object might actually be an object-handle to an object on a different system than the one you first connected to.

If you write a client to talk to such a remote system the way it's intended (i.e. by making OOP-style call-chains, and holding onto object-handles you want to reuse), then there's very little that should be able to break your client library, besides a fundamental change in the semantics of what the remote service delivers. Your client library isn't making assumptions — it's being told what the possibilities are at each step, like a browser is/does.

Also, the other thing you get is: anyone with a web browser can use your API, because your API is, in a certain sense, "a website." Like how anyone can interact with S3 by just visiting the URL of an object.

> HTML is unsuitable for machine-to-machine communication

Who said? Someone back in 2005, when most languages in common use didn't have HTML-parsing libraries?

HTML needs conventional microformats to specify the abstract types of otherwise-text data... but so does JSON. JSON gives you text and numbers, but it doesn't give you ADTs like you'd get from e.g. Avro / Protobufs / etc. You still need a schema on top of JSON to get anywhere. So what are you really getting from JSON that you don't get from having your XHR responses be HTML webpages that were designed to be highly machine-legible?


Excellent explanation. But I think plenty of technology has been misunderstood as of late.

I've been in this industry two decades, but it's only in the past 5 years that I've noticed entire teams of absolute morons entering the field and being given six figure jobs without understanding what their job even is, much less how to do it properly. (And by "properly" I mean "know what a REST API is")

The industry is awash in quants who took a four week course in Python and are now called "data scientists"; backend software engineers who don't know what the fuck a 500 error is; senior developers who graduated a year ago; engineering managers who "hire DevOps"; product managers and DMs who don't know how to use Jira or run a stand-up. It's like all that's left is people who think they have "impostor syndrome" when they actually are impostors of professionals.

I tried to find a new job recently, and I couldn't find a single org with at least 50% of the staff properly understanding how to do their jobs. Of course half of them were bullshit VC-funded money-bleeding terrible businesses, and the other half were fat cash cows that through their industry dominance became lazy and stupid. Maybe we just hit peak tech, and all the good teams were formed by boring companies long ago and don't have new positions open. Or maybe all the good people cashed out and retired.


This has always been the case, you just weren't experienced enough to realize it


The short answer is... the web moved in a different way than expected, and the useful portions of REST were preserved while other portions were jettisoned (the biggest one IMO isn't the hypertext portion (JSON's fine, it's fine) but the self-discoverable portion - I haven't seen a self-discoverable REST API ever in the wild).

Unfortunately the name REST was too awesome sounding and short - so we've never had a fork with a different name that has proclaimed itself as something more accurate.

I don't think this is awful, FYI - it's sort of the evolution of tech... the OG REST wouldn't have ever gotten popular due to how stringent it was and I can use "That it's RESTful enough." to reject bad code practices without anyone being able to call me out on it because nobody actually remembers what REST was supposed to be.

I'd also add - what precisely is self-descriptive HTML? All descriptions need reference points and nothing will be universally comprehensible - let alone to a machine... expecting a machine to understand what "Insert tendies for stonks." means on an endpoint is unreasonable.


> a self-discoverable REST API

I’ve been writing web services for over a decade, and this just seems like a cute idea that is almost never useful in the real world.


Swagger/OpenAPI is probably the closest thing to "true REST" with self-discoverable APIs. Without some sort of common schema of where/how to look for resources, there's no real automated discovery to be had without AGI.

Turns out, a lot of places don't even want self-discovery. We shut off our openapi.json in prod because a) security, b) we don't want bots messing around, c) Hyrum's law: as soon as you expose an API endpoint, folks try to build against it and grumble if it changes/goes away.

Wikipedia (along with any true wiki) is, and likely forever will be, the one true REST/HATEOAS application.


Yep, there's a big gap between what's useful in Ph.D dissertations and what's useful in the business of software.


GitHub’s REST API is pretty self-discoverable fwiw. It kinda sucks honestly - every request yields pages of response with field after field of totally irrelevant links. Totally unparseable without JSON formatting and collapsing support, so curl/wget/etc. via the terminal are painful. See for example the example response for getting an issue: https://docs.github.com/en/rest/issues/issues#get-an-issue

And that even omits some fields you’d get back querying manually.


> GitHub’s REST API is pretty self-discoverable fwiw.

I had a look at the example response to see whether that is true. I'll give you that there are hyperlinks in resources of media type `application/vnd.github+json`, but there is no uniform interface to discover a link. Fielding would disapprove. It's an indication of bad design that a client must hardcode e.g. `labels_url` instead of having a generic, reusable way to access a link that works across a multitude of JSON-derived media types.


Wouldn't any interface to discover links need to be hardcoded? IIRC, REST didn't actually define a unified way to describe where the links existed in the returned document.


That's because you're looking in the wrong place; that's not the responsibility of REST. It can't be, because REST is not a specification but a description of an architectural style.

Instead, it's the responsibility of the media type. HTML offers hypermedia controls: hyperlinks are denoted with the <a>, <area>, <base>, <link> elements and the "href" attribute; forms are denoted with the <form> element and its "action" attribute. I assume you are familiar with HTML; you can see that this is a very generalised way to find links (or controls in general), applicable to all HTML documents. XML has close analogues: XLink and XForms. JSON-done-right offers hypermedia controls: a moral equivalent of links <https://datatracker.ietf.org/doc/html/draft-kelly-json-hal#s...> and a moral equivalent of forms <https://rwcbook.github.io/hal-forms/>. It is easy to reimagine `application/vnd.github+json` in the guise of this media type so that the problem I was describing earlier disappears. If a media type cannot express hypermedia controls or link relations inline (e.g. WebP images), then the HTTP Link header can be used instead; see RFCs 8288 and 6570.
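
For the HAL case, a client-side sketch (the resource URL and relations are hypothetical; the `_links` structure follows the HAL draft linked above, read here with Python's `requests`) - because every HAL document keeps its links under the reserved "_links" key, keyed by relation, one generic lookup works regardless of the resource's own fields:

    import requests

    doc = requests.get("https://example.com/orders/123",
                       headers={"Accept": "application/hal+json"}).json()

    # A registered IANA relation and an extension relation (a URI),
    # both found the same generalised way.
    next_page = doc["_links"]["next"]["href"]
    payments = doc["_links"]["https://example.com/rels/payments"]["href"]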


I can run a regex to find <a> tags as well as the next guy[1], but what do those links mean? Is one of those links a link to delete the object I am viewing and one of them a link to update it? What about the link to get a list of all the foos associated with this particular bar? How am I supposed to parse this blob of HTML programmatically to figure out which link is mapped to which action?

I think this is where the big failing of original REST comes into play - these self-discoverable actions are sort of impossible to actually leverage... either you're hardcoding some information about the format (e.g. look for the <a class='delete'>) or you're deriving the link automatically anyways... there isn't really a reasonable way for REST to work where these links are self-descriptive enough for a machine to figure out on its own so... in the end... why not just separate that information out into a nicely formatted API document that the devs can have on hand while building their interoperable tool?

1. And I personally prefer to always parse my HTML with regexes, of course...


> what do those links mean? […] figure out which link is mapped to which action

The meaning is expressed by the link relation, see <https://news.ycombinator.com/item?id=32155744>.

> Is one of those links a link to delete the object I am viewing and one of them a link to update it?

Sure, see RFC 5023 § 11.1.

> What about the link to get a list of all the foos associated with this particular bar?

Okay, custom link relations since the existing predefined ones are insufficient. Because `foo` and `bar` are vague, I have to make up some details and can show one way to do it:

    GET /bar/this-particular HTTP/1.1
    Host: munk-a.example

    HTTP/1.1 200 OK
    Link: </ns-rel/bar>; rel="type"
    Link: </collection/foo?filter=/bar/this-particular>; rel="/ns-rel/foo"; title="list of all the foos associated with this particular bar"
    Link: </bar/this-particular>; rel="self"
    Link: </collection/bar>; rel="collection /ns-rel/bar"; title="list of all bars"
    Content-Type: application/octet-stream
    …

    GET /collection/foo?filter=/bar/this-particular HTTP/1.1
    Host: munk-a.example

    HTTP/1.1 200 OK
    Link: </ns-rel/foo>; rel="type"
    Link: </foo/some>; rel="item /ns-rel/foo"; title="some foo"
    Link: </foo/another>; rel="item /ns-rel/foo"; title="another foo"
    Link: </collection/foo?filter=/bar/this-particular>; rel="self start"
    Link: </collection/foo?filter=/bar/this-particular;page=2>; rel="next"
    Link: </collection/foo?filter=/bar/this-particular;page=11>; rel="last"
    Content-Type: multipart/mixed
    …
> impossible […] there isn't really a reasonable way

I'm skipping over the rant in the second paragraph, which is clearly based on a misunderstanding of the subject matter. Instead of baseless assertions, stick to asking questions if you don't want to make HN readers think you're an ignorant fool.


> I haven't seen a self-discoverable REST API ever in the wild

I'm super cognizant this entire discussion hinges upon semantics, nuance, etc., but the Okta API isn't terrible about that

https://developer.okta.com/docs/reference/core-okta-api/#hyp...

I haven't personally tried `curl https://${yourOktaDomain}/api/v1` to see what the bootstrapping process looks like, but I can attest they do return _links in a bunch of their resource responses, and I believe their own SDK uses them, so they're not just lip service


> the useful portions of rest were preserved while other portions were jettisoned

Not even remotely. All portions of REST were jettisoned, and the nice branding got slapped on familiar RPC.

Fielding’s REST was never about HTTP-specific concepts: not the verbs, not the status codes, and not cute-looking URLs.


I don't think that's true, unless I just don't understand REST at all. Most "RESTful APIs" in the wild I encounter / implement are

- stateless
- cacheable
- follow client-server architecture
- support resource identification in requests
- loosely support resource manipulation through representation (with some caveats)

I don't see how it's RPC by any but the broadest interpretation (a function executes in another address space).


The entire point of Fielding's REST was loose coupling - that you don't hard-code interactions/URLs into clients but rather discover actions/links/interactions from representations dynamically, such that the only thing a client must know a priori is a URL. "REST API" practices are criticized because they follow all the rules and brag about e.g. HTTP response status code semantics and whatnot, but without following HATEOAS, which is the entire point of it all. I can see why teams implement "REST APIs" - to end pointless architectural meta-discussions and get stuff done; and there's practical value in that. But, harsh as it sounds (and I apologize in advance if this offends anyone), dealing with colleagues who mindlessly follow REST casuistry feels to me like working with hopeless rote learners, psychopathic pretenders, or unstable personalities projecting naive, irrational, and premature ideas of craftsmanship onto our profession.


I don't think that was the entire point. But who cares? Loose coupling is of little benefit to the client in a client-server pair implemented by the same company (which presumably Fielding wasn't picturing). There are other benefits people get from REST: semantically understandable APIs, handlers that do one thing, predictable interfaces (this is close to as valuable as discoverable, imo). I don't really get your point re: casuistry. The implementation of REST as practiced by the industry is largely pragmatic. The article appears to be arguing for REST purity.


> I don't think that was the entire point. But who cares?

People who like language to have meaning?

> which presumably Fielding wasn't picturing

Fielding was describing an architectural style. Although there may have been an element of advocacy, at the end of the day whether you used it or not was not his concern.

> The implementation of REST as practiced by the industry is largely pragmatic.

In the same sense that the implementation of airships in Boeing’s 747 is largely pragmatic, except for the part where Boeing doesn’t miscall their plane an airship.

> The article appears to be arguing for REST purity.

Much like saying a serval is not a dog is arguing for dog purity, yes.


The benefit is then in opening things up to other clients.

Like how Reddit used to have tons of apps before they started locking down all new functionality.


Twitter had a ton of clients despite a REST API that was exactly the kind the OP was saying is not REST.


>(JSON's fine, it's fine)

I read this as you convincing yourself rather than the reader.


If you are designing a REST API, please don't follow this author's advice and return HTML instead of JSON. HTML takes much longer to parse and makes the API more fragile.

History did its job: it preserved the most useful features of the original idea (expressing RPCs as URLs in GET and POST requests) and dropped the unnecessarily complicated bits.

What this article is about is a pedantic terminology battle of whether to call the current practice REST or not.


> If you are designing a REST API, please don't follow this author's advice and return HTML instead of JSON. HTML takes much longer to parse and makes the API more fragile.

you're describing what a browser does.

> has dropped the unnecessarily complicated bits.

you're viewing this content from a browser.


What's your point? If something works for the purposes of UI, it doesn't mean it is good for an API.


Don’t parse the HTML, just display it. That’s what the browser does best.


What if a mobile application wants to use the API?


This to me is the real question. I don't see how you could easily create multiple clients if you're practicing what is preached in the article.

You're really tying your server to the browser at that point, so things like CLI clients and mobile clients become painful to write (not impossible, but ain't nobody got time for parsing HTML when browsers do a perfectly good job of it, not to mention CSS).


REST systems are designed for use with "generic" clients. The generic client for REST systems on the WWW is a browser. There is probably one available for your mobile platform.


What if a native application (mobile or desktop) wants to consume the API?


Not the parent commenter, but I believe the idea is that if the webpage uses a framework like React or Angular, then the server doesn't need to send HTML; it can just send the bare minimum data necessary (in JSON) and the client turns it into HTML and displays it.

In other words, HTML is just the standard format that the browser uses. But if you use frameworks or define your own application specific formats, you can send less information over the wire and have it properly decoded and displayed by your client-side javascript.


This article really is about getting people to use their overloaded HTML libraries.


Software Development is like water filling a container. It's always water but its form takes the shape of its container.

After doing this for 15+ years, I tell my junior developers to take it easy on the "proper way" of doing things. It will change, people will argue, and money talks.


I'd like to see a lot more humility about approach than the absolutism we often see. There are often a handful or more ways to do anything; why is there so much certainty, in all contexts, about the correct way? The correct way is often best defined by the unique context a given piece of software is developed within. Tradeoffs take into account more than technology alone.


DHH has a talk about this general idea.

Writing software is such a _human_ thing. It has so much more in common with writing than it does with other kinds of engineering.

Most of what we're doing has to do with how to lay things out so that it's clear and easy for other humans (including the humans that write code) to understand, interact with, and modify. Any time you're dealing with human brains, there's going to be a lot of complex subtlety in terms of what the "best" approach is.

But because it's software "engineering", people think we need to have fairly hard-and-fast rules about the right way to do everything.


Same applies to the project management aspects of it too.

So many leaders have an arrogance about how things must be done. They are all 100% correct and they all disagree with each other. Reminds me of diet/fitness gurus.


Great article. Calling APIs RESTful because they return JSON has always been a peeve of mine. But here's the question: why do APIs need to be RESTful? What is the need for a client to have no knowledge of the server, if the server can also provide logic that can run on the client? In some sense, one could argue that a service that provides both raw data and client logic to transform raw data into hypermedia is still very much in the spirit of REST. Webapps by definition must satisfy this requirement, so it is moot to ask if a webapp follows RESTful principles. Of course it does, it runs on the web! Native apps, on the other hand, have sure ruined everything.


IMHO, they've wildly missed the mark. APIs, as colloquially known, can't be RESTful as they define it because the client systems aren't AI-based and can't follow links. To use their example:

  <html>
      <body>
          <div>Account number: 12345</div>
          <div>Balance: $100.00 USD</div>
          <div>Links:
              <a href="/accounts/12345/deposits">deposits</a>
              <a href="/accounts/12345/withdrawals">withdrawals</a>
              <a href="/accounts/12345/transfers">transfers</a>
              <a href="/accounts/12345/close-requests">close-requests</a>
          </div>
      </body>
  </html>
I can navigate to that page and, because I know English, follow links to my withdrawals and deposits. A computer can't. The client program needs to have an understanding of withdrawal and deposit in order to function. The only way to do that involves coupling the client to the server.


> The client program needs to have an understanding of withdrawal and deposit in order to function. The only way to do that involves coupling the client to the server.

REST never denied that coupling; it defined that coupling at the content-type level.


Can you expand? I don't think I understand.


In Fielding’s REST, the exchange is defined in terms of the document contents (the content types). These are what define the interaction both ways, both in terms of data and in terms of navigating the set and interacting with it (think links and forms).

The client necessarily has to know about that. That’s where the coupling happens. Where the coupling doesn’t happen is in things like resource names and locations, as far as Fielding’s thesis was concerned there’s only one location the client needs to know and it’s the root location, everything else can be obtained by navigating the service.


>Where the coupling doesn’t happen is in things like resource names and locations, as far as Fielding’s thesis was concerned there’s only one location the client needs to know and it’s the root location, everything else can be obtained by navigating the service

Yeah, and my point is that that relies on the client (i.e. a human) being able to understand what the different links mean. A programmatic client can't do that without in some way hard-coding the structure of the API.


> A programmatic client can't do that without in some way hard-coding the structure of the API.

That's false. Here is the HTML again, augmented with a link relation which by its nature is perfectly understandable to a program.

    <a href="/accounts/12345/deposits" rel="https://schema.org/DepositAccount">deposits</a>
Anyone can coin a link relation on the Web, as long as its identifier is a URI. Tokens are reserved and need to be registered at IANA: https://www.iana.org/assignments/link-relations
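
For example, here's a minimal client sketch (the rel URI is the one above; the rest is generic DOM code, and `html` is assumed to hold the fetched response):

    // Find the deposits link by its relation, not by its URL or its position.
    const doc = new DOMParser().parseFromString(html, "text/html");
    const link = doc.querySelector('a[rel="https://schema.org/DepositAccount"]');
    if (link) await fetch(link.getAttribute("href"));
The client hard-codes the meaning of the relation, never the shape of the URL.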


It's understandable if you write the code for the program to understand. If it changes from deposits to credits tomorrow then your program breaks. Hence it's coupled to the server.


There is always coupling. The question is whether a certain architectural design is coupled loosely or tightly. If you read Fielding's blog post referenced in TFA, you can see he is an advocate of loose coupling.

I have demonstrated this design in my post above.


And that's even before we get into:

* Localization
* Accessibility
* Different clients
* Different presentations for different contexts

The whole premise of RESTful hyper-media driven APIs described in this article is predicated on "The One" client talking to the server. Our modern world is not this.


Why should a server limit itself to a single client? At some point, you might want to make a mobile app companion to your site, or you might white-label your services in partnership with another company who will need to use your APIs, or any number of other common scenarios.

A hobbyist / small company doesn't need to have RESTful APIs. The whole point is to design them so that they play well with others, and when you get to that point, you (or more likely the people who depend on you) will wish you had.


> you might white-label your services in partnership with another company who will need to use your APIs,

As soon as you have a third party using your API, things gain another layer of complexity: do you charge them? Do you rate-limit them? If you have several partners, how do you authenticate them? Etc.

API gateways solve some of that, and sometimes you don't care, but generally it's not as simple as giving your internal API to people and telling them to go wild.


Of course it isn't that simple; most of the additional challenges you mentioned are business problems, though, not technical, and are generally orthogonal to the actual design of the API.

The biggest hurdle to opening up your API is usually needing to move from a single tenant to a multi-tenant architecture in your database.

Some tenants will have regulatory burdens you need to meet, and your early adopters will likely have a slew of requests that you'll need to decide on- do you risk tailoring your application to their needs with features future clients won't want?

To these last points, I think RESTful architecture helps, rather than hinders, but YMMV.


> What is the need for a client to have no knowledge of the server, if the server can also provide logic that can run on the client

It's a measure of decoupling I think. If your client started out with no knowledge of the server and still managed to work, then it will still work even when the server is upgraded, restructured, etc etc.

Of course, having every client just start at the root URL and then navigate its way to what it needs by effectively spidering the server just ain't practical in any meaningful sense. But in small ways and critical places it is still possible to follow this pattern, and to the extent you do, in return you get a level of decoupling that is a useful property to have.
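
In code, the pattern might look something like this (rel names and the root URL are made up):

    // Start at the root and follow link relations to the target resource.
    async function follow(start, rels) {
      let url = start;
      for (const rel of rels) {
        const html = await (await fetch(url)).text();
        const doc = new DOMParser().parseFromString(html, "text/html");
        const href = doc.querySelector(`a[rel="${rel}"]`).getAttribute("href");
        url = new URL(href, url).toString(); // resolve relative links
      }
      return url;
    }
    // await follow("https://example.com/", ["accounts", "deposits"]);
Only the root URL and the relation names are hard-coded; everything else is free to move.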


There are two fundamental reasons HATEOAS just doesn't work in practice. The first is that most services can't easily or reliably know their own absolute URLs, and HATEOAS (and the behavior of many HTTP libraries) is inconsistent around relative URLs, so hacky and unmaintainable response rewriting gets applied at various levels. The second is that if you are diligently following a convention for how paths are constructed, it's utterly redundant information--you can simply synthesize any path you need more easily than you can grok it out of HATEOAS. The reasonable bits of REST that are left are not just HTTP verbs and JSON, but significantly the use of entity type names and ids in the path portion of the URL rather than in query parameters.


I never understood this and still don’t.

The P in API is programming. Specifically, it is a programmatic call.

REST says you get hyperlinks, which are effectively documentation in the response.

Which is nice.

But a program isn't a person; it doesn't need docs in the response.

And URL links are not sufficient documentation to use the interface.

So I don't get the REST use case outside of some university AI project where your program might try to "make sense" of the API.

And therefore I have never tried to use REST and I have never seen anyone else either at anywhere I have worked.

It is a nonsense concept to me.

REST API is a contradiction in terms.


I don't know if you read the article.

REST is a post-hoc description of how the web worked (at the time it was made).

You had web pages with hypertext content, and that included forms. The forms had fields with names and an "action" destination.

The client (the browser) knows nothing about the server or even its APIs. It just knows how to send an HTTP request with parameters. In the case of forms, those parameters were encoded in the body of a POST request. That's it.
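
For example (fields made up), the hypermedia itself tells the browser everything it needs to construct that POST:

    <form action="/accounts/12345/transfers" method="post">
      <input name="to">
      <input name="amount">
      <button>Transfer</button>
    </form>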

There was no "client side code" that talked to the server.

The "client side" is literally just the browser. Talking to the server is done by the user clicking links and filling forms.

I don't think the article is particularly encouraging you to program this way in 2022. It's just telling you that if you are not programming in this way, do not call what you are doing "REST", because it is not.


Then they can't call it a REST API: it's not programmatic, so it's not an API

Aka somebody pulled some ancient obscure definition out of nowhere that just means _everybody_ is wrong.


It's only obscure for people who arrived at the scene long after the distortion of terms already took place, and never studied where the terminology comes from.


this whole discussion seems pedantic.

whether it is a local function or a remote function, both caller and callee need to agree on the parameters (input), and returns (output).

I send you X. You send me back Y. That's it - this is the contract we both agree to.

OP is saying - the caller should NEVER do anything with Y other than display it on screen, for it to be called REST. Well - why even display it, why not just discard it? Calling print(Y) is as good as calling businesslogic(Y). Whatever further logic a human plans to do after print(Y), a machine can do the same.

In other words, REST is just step 1 of returning data from a remote function. The moment you code any additional logic on the returned data (which is 99% of use cases), it's not REST anymore? Sounds like an extremely limited definition/use case of REST.


This article does a disservice to the benefits of “Richardson Maturity Level 2” i.e. “Use REST verbs”.

A standard set of methods—with agreed upon semantics—is a huge architectural advantage over arbitrary “kingdom of nouns” RPC.

I’d argue that by the time your API is consistently using URLs for all resources and HTTP verbs correctly for all methods to manipulate those resources, you’ve achieved tremendous gains over an RPC model even without HATEOAS.
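
For example (all paths made up), compare the RPC style on the left with the uniform interface on the right:

    POST /api/getAccount      ->  GET    /accounts/12345
    POST /api/updateAccount   ->  PUT    /accounts/12345
    POST /api/deleteAccount   ->  DELETE /accounts/12345
Because every resource answers to the same small set of methods, caches, proxies, and retry logic can reason about the API generically (GET is safe, PUT and DELETE are idempotent).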


Man I really disagree. I think the set of verbs on the write side in REST is weird and creates so much bikeshedding. I'd much rather write an RPC API with semantically meaningful verbs taking and returning semantically meaningful nouns. I don't design my library interfaces around the verbs that HTTP happens to define and I don't see why I should design my network application interfaces that way either.


I'm on team RPC. I'll stay away from the term REST since my beef is with HTTP APIs that mix transport and application concerns.

- The Kingdom of Nouns comparison is forced. Yegge's complaint is that nouns own the verbs, meaning Java doesn't have first-class functions. The closest remote analog might be promise pipelining, which doesn't have much headway other than a single implementation for Cap'n Proto.

- RPC APIs are more consistent than an HTTP API. With HTTP, identifying a unique operation requires both a path and a method, and if you're really unlucky, the method is polymorphic based on the contents of the request body.

- HTTP API requests can transport data in different ways: request body, query parameters, HTTP path, and if you're really unlucky, headers.

Tremendous gains doesn't match my experience. The first step in using an HTTP API is to wrap it with an OpenAPI generator to build a consistent way to invoke the API, reinventing RPC client stubs in the process.


I wholeheartedly agree, which is why hardly anyone ever builds "REST" APIs. It doesn't really add that much to a data API


An earnest answer? Because few people have taken the couple of hours(?) to read Roy Fielding's dissertation from start to finish. The biggest likely reason for not doing so is that frankly a bunch of people simply don't care, and why should they. There's very little incentive to do so. In fact, the fewer people that do, the less of an incentive there is - there is no one to call them out on it, and they can reap the rewards of calling an API RESTful regardless of the accuracy of the statement.

Having worked in an organization where people were very familiar with the academic definition of REST, the biggest benefit of being a backend developer was that when client-side folks depended on non-RESTful behavior, we had some authority to back that claim. It gave us leeway in making some optimizations we couldn't have made otherwise, and we got to stick to RFCs to resolve many disputes rather than use organizational power to force someone to break compliance with standards. I suppose it meant that we were often free to bikeshed other aspects of design instead.


The dissertation is neither a specification nor a standard, which has led to decades-long bickering over what 'REST' really is.

Edit: See how this post has zoomed to hundreds of comments in just minutes by people arguing the 'one true REST'. The situation is insufferable.


For sure. It's not even focused on APIs. REST is an architectural style. I don't think I've ever heard the term RESTful architecture in design meetings or online discourse. That should say that something is off.


I tend to find it helpful to ask, "is this proper REST as in HATEOAS, or is just 'REST-ish'?" It's usually just REST-ish: predefined URL patterns, roughly 1:1 with specific resource-APIs; they care about your HTTP verb and return JSON (but usually not with a lot of URLs in that JSON).


It's funny because HATEOAS violations were far and away the most common violation made by the client team during my time there. They loved hard-coding URLs and willfully ignoring hypermedia.


See, to me, this sounds like a failure to listen to your customer. They were telling you that well known URLs were useful to them, and you were forcing them to make more compromises so that you could make fewer.


We answered to the architecture team on matters of design. If the client team had had beef with the design, the client team took it up with them.


What do they do if they don’t hard code it?

   bookHotelUri = listHotelResponse.data.urls[4];
Or

    listHotelResponse.data.urls.find(u => u.action === 'book').url

?


    listHotelResponse.data.urls.find(u => u.rel === 'booking').url
Web linking is defined in https://www.rfc-editor.org/rfc/rfc8288.html and that's the basis for using rel as well as a host of other attributes.
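
For example, the same relations can travel in an HTTP Link header (values here are illustrative):

    Link: </accounts/12345/deposits>; rel="deposits",
          </accounts/12345/withdrawals>; rel="withdrawals"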


Sort of. Verbs are still eschewed, so rather than finding a 'book' link, it would be a 'booking' link to which a POST could be made to create a new booking. As a note, "booking" is a confusing example because while a hotel booking is a noun, at first glance it reads as a progressive-tense verb.


So instead of a hard-coded URL it's a hard-coded reference to a URL? How does it deal with a breaking change? Say the order of the links changes? It doesn't seem like this works without some out-of-band information.


They are keyed using rels. You're right, they aren't relying on order; that would be terrible. And yes, JSON doesn't "have" rels. This is worked around by specifying a mimetype which does. Is it turtles all the way down? Yes, but not all turtles are equivalent. You should already be specifying a "Content-Type" of application/json; it's not too much to change that to application/vnd.api+json and write a spec somewhere.
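
HAL (application/hal+json) is one published mimetype that does exactly this; a sketch of such a response (fields borrowed from the bank example upthread):

    {
      "account": "12345",
      "balance": "100.00 USD",
      "_links": {
        "deposits":    { "href": "/accounts/12345/deposits" },
        "withdrawals": { "href": "/accounts/12345/withdrawals" }
      }
    }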


Counterpoint: This all got a lot worse and more confusing for me when I read the dissertation. What people seemed to call RESTful struck me as a bit convoluted but fine and workable. But after I read the dissertation, it was clear that all that was "wrong", but I was left without any real concept of what was both right and workable. It took me years to realize I could just ignore the whole thing and focus on designing useful interfaces. (And that, indeed, this is what most people had been doing already.)


I think you're missing a little historical context here, but maybe you were already more experienced back then than I was, for example (which means your bubble was in the know and the unwashed masses weren't). I started making websites in 1998 and earning money with it in 2001ish (that's the year Wikipedia launched), and IMHO back then it was a lot harder to find the 'correct' way when teaching yourself. I saw REST/RESTful and best practices for the web mostly develop after 2005-2008 (yeah, around the time there was the great Rails exodus as well), and people started to standardize more. What I'm trying to say is that it was all more the Wild West in web development, and by the time people had more insight, the naming/switch to JSON had already begun. I also don't remember when I first read the REST dissertation, probably only much later than 2001.


That aligns roughly with my experience. What I mean is that even post 2010 most people haven't read the material and continue to promote weird frankenapis which they call RESTful based on scrap anecdotes or secondhand messageboard info, or hermeneutical readings of other API designs.


At this point I’ve stopped caring about REST. It’s like agile and scrum where everyone says they are doing it but everyone has their own opinion of what’s correct.

As long as there's an OpenAPI spec, sane API routes, and it uses a format that's easily consumable within a given ecosystem (so pretty much always JSON anyway), and it doesn't do anything dumb like return 200 OK { error: true }, then I'm happy with it. Too much bikeshedding.

Bonus points if the API has a client library also.


Why do so many APIs do that, i.e. 200 OK - {"errorCode": 45634, "errorMessage": "you messed up"}?

Is there a reason that I'm just not aware of? A throwback to SOAP?


200 OK but actually you have an error from a pretend-REST API is my number one "old man yells at clouds" thing that drives me nuts. It is fundamentally disrespectful to the users of the API.

Especially if you have metrics/alerts that are tracking status codes.


>Especially if you have metrics/alerts that are tracking status codes.

This is the point though. As a client I don't want to trigger 400 errors. As a server I don't want to return 500 errors, nor some 400 error.

As an example, if either client or server sees a spike of 404 errors they want to investigate. When the result of that investigation is "some crawler went haywire" or "a user is trying to access resources that don't exist" it's annoying. So the 200 OK with an error is an attempt to stop those sorts of scenarios. Of course like anything people take it too far but there's decent logic behind it.


I as a client care about 400s because that means I fucked up.

As a server I don't care about 400s because (within constraints) I don't care if my clients have fucked up.

As a server I care about 500s, as that means I've fucked up. As a client I care about 500s only so much as to know if I should give the server a break and try somewhere else.

There is rich, important semantic meaning in the status codes.


>I as a client care about 400s because that means I fucked up.

Not necessarily. If the user enters the wrong CC number and gets a 402 or 422 back, then you don't really care.

>As a server I don't care about 400s because (within constraints) I don't care if my clients have fucked up.

Not necessarily. 404s can be caused by your application routing or a bad link you're generating.

>Ad a server I care about 500s as that means I've fucked up.

Not necessarily, as 501 and 505 can be expected behavior.

>As a client I care about 500s only so much as to know if I should give the server a break and try somewhere else.

This one is pretty safe


After many years, and after reading an insightful post by Ned Batchelder, I realized it is folly to swallow errors in low-level code.

Keep the low-level code simple: always throw errors and let the high-level code decide what to do. It has the context and, hopefully, the smarts to make the best decision.


The lead for a project I wasn't working on casually informed me they were doing this a few years ago; the reason for it was nothing complicated: they just hated HTTP status codes and believed it was easier to pretend they didn't exist.

I mentioned that it was frustrating to work with APIs like that, since a bunch of tooling relies on status codes, including the browser network tab, but they just told me they didn't care. They also made a bunch of other questionable design decisions before leaving, so now I just take it for what it is: a red flag.


That's very much a red flag. Worked with a couple of similar troglodytes that inspired my original "I don't care about REST anymore" comment - most people are doing it wrong.


Different layers: the call to the API was successful at the transport layer, thus 200. You messed up something in the business logic, or you asked for a resource that's not there. While often you will get a 404, this is wrong: the HTTP call is successful. The endpoint did not vanish. You just asked for something the business end could not deliver. The protocol is fine with your call.


> While often you will get a 404, this is wrong: the http call is successful. The endpoint did not vanish

According to RFC 7231, status code 404 means that the specified resource wasn't found, not that the endpoint wasn't found.

"The 404 (Not Found) status code indicates that the origin server did not find a current representation for the target resource or is not willing to disclose that one exists. A 404 status code does not indicate whether this lack of representation is temporary or permanent; the 410 (Gone) status code is preferred over 404 if the origin server knows, presumably through some configurable means, that the condition is likely to be permanent."

So replying 404 is the correct response.

https://datatracker.ietf.org/doc/html/rfc7231#section-6.5.4


The problem is you can't differentiate between "resource not found" and "we never even got a chance to process the request for a resource" purely by status code. Maybe your upstream reverse proxy got mis-configured. Maybe DNS is broken.


But 404 is not the representation for not existing but for not found: if you return 404, you leave the user of the API wondering if they mistyped something, or the DNS broke, or some part of the routing went down.

Maybe 204 would be a middle ground


There's... there is a whole set of status codes for exactly those things!


> There's no HTTP result code for "your request was successful but your Smart Washing Machine is out of detergent", for example.

That comes down to your definition of success. Yeah, the client successfully connected and the server read the request, but it was unable to process said request.

To my mind, that's a 500, as in the server was not able to handle the request due to circumstances beyond the client's control.


When used in an API, HTTP acts mostly as a transport layer. There's no HTTP result code for "your request was successful but your Smart Washing Machine is out of detergent", for example.


That's a 5xx (assuming the washing machine is the server in this scenario): a problem on the remote end that may be resolved in the future, after which a successful request will be possible.

You are allowed to attach info to error-code responses; there is a body you can put the details in.


And yet your example is perilously close to 418


@number6 is right. There are all kinds of problems when you use HTTP status codes to represent something that was correct for HTTP but failed business logic. You don't want a tonne of errors logged because someone has violated business logic but otherwise called the API correctly; otherwise, good luck finding actual errors where you screwed up by deleting an endpoint (actual 404) or where you changed the request model (400), etc.

I'm sure people might disagree with that approach but it is very common and very reasonable.


Yes, because error handling is hard.

The standard reply is that people ("the industry") never clearly defined whether HTTP is a transport protocol, responsible only for delivering business messages, or an application protocol, responsible for defining business messages and processes. (Even in trying to make the issue clear, I can't find good terms.)

The simplest/tired-dev way to do it is to make HTTP just a transport protocol, which means HTTP status codes only signal transport success/errors: failed to send the request, HTTP 4xx; failed to receive a response, HTTP 5xx; application server catastrophic failure, HTTP 5xx.

"RESTfull way" - Transport errors can be generated by client library, HTTP status codes. Business erros can be generated by client library, HTTP status and response body JSON content.

It's a mess, and made worse by those pretending it's easy, not a mess, and a standard.


Lots of legacy APIs, and bad practices. In the front end, for instance, responding 200 was preferred because of easier handling (back in jQuery and Angular times). These days GraphQL still returns 200 on query errors.
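
For example, a failed GraphQL query typically comes back like this (body shape per the GraphQL spec; the message text is made up):

    HTTP/1.1 200 OK
    Content-Type: application/json

    {"data": null, "errors": [{"message": "Unknown field 'balance'"}]}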


I’ve always thought it might be PHP legacy of returning 200 even on critical errors. AFAIK Slack and FB APIs work like that, and they are, or have been, PHP based…


Because you made an error, but you shouldn't worry: everything will be OK.


The usefulness of metadata in REST responses ended up not mattering as much as people thought in most cases. Pagination is the best counterexample, but many REST APIs do return next/prev links in the payload or in the headers. It's still REST, but the parts that mattered (HTTP verbs for semantic info, etc.) stayed around.
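
GitHub's API is a well-known example of the header variant; the shape (URL made up here) is:

    Link: <https://api.example.com/items?page=3>; rel="next",
          <https://api.example.com/items?page=1>; rel="prev"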


Seems strange to me that people fight over REST since it is very clearly communicated in the mentioned PhD dissertation.

I fully understand that complexity which historically belonged on the server has shifted just as much onto the client, but that's not a question of being RESTful - it's not RESTful to have the client determine application state, so to speak; the server does that for you.

Catering to multiple clients via an API is of tremendous value, but you still moved the complexity onto the client - you cannot argue against that.

I find it fascinating to have at least blobs of state/representation being served without having to fiddle with the inner workings of an API, simply relying on the API to give me what I must show the user.

I am in the HATEOAS camp; it sits well with me. But that's just me.


I regularly interview devs and ask, "What makes a RESTful API RESTful?" and have never heard anyone mention hypertext or hypermedia. A typical answer is: stateless, uses HTTP and HTTP verbs, and (if the person is over 40) easier to read than SOAP.

Related, it seems like "API" is quickly becoming synonymous with "web service API". In my experience, the thing that goes between two classes is almost always referred to as an "interface" only.


It used to be one of my favorite pet peeves when folks would put "expert at RESTful APIs" on their resumes.

In interviews, I would ask, "What makes an API RESTful?" and wait gleefully for them to stutter and stumble towards an answer.

I would accept any kind of answer, and it was only really a mark against you if you couldn't dredge up something about "resources" or "HTTP verbs," or even just express some kind of awareness that there were other kinds of API.

It wasn't unusual for someone to just have no clue.

Maybe that makes me a grammar nazi or a*, and maybe adding that was just a way for kids with one internship under their belt to pad their experience, but I always felt like you should know the words on your resume.

I guess now that I know about this "hypermedia" requirement, I should be a little more forgiving?


But but but they mentioned hypertext in HTTP..


I'm walking away from this article unable to find a really good reason to implement HATEOAS in an API meant to be consumed by a program (as Application Programming Interfaces typically are).

The best I can come up with (and this is me trying to like it) is that I guess the API is somewhat self documenting?

I see benefits to resource orientation and statelessness, but why do people get so upset about these APIs not following HATEOAS? Is it just a form of pedantry, that it's not really a REST API, it's a sparkling JSON-RPC?


Because no one knows what the hell the acronym means, except that it sounds good.

Everyone wants to be RESTful. RESTful is chill. It's resting - good programmers are lazy! But RESTful is resting while using an acronym, which is technical and sophisticated. To be RESTful is to be one of the smart lazy ones.

Now if you're one of the few who cares what your acronyms mean, you look it up and ... "representational state transfer". How do you transfer state without representing it? I guess everything that transfers state is RESTful. And everything is state, so everything that transfers is RESTful. So every API is RESTful! Great, I guess if we make an API we're one of those cool smart and lazy people. And let's make sure to call it RESTful so that everyone knows how cool, smart, and lazy-yet-technical we are.

Roy Fielding made a meaningless but cool-sounding acronym popular and has reaped the predictable consequences.


I found the discussion flat-out silly. It basically says something is only "self-describing" if it's an HTML blob, but in the example given, the JSON blob is actually easier to understand.

Sure, you can run the HTML through a browser and it looks nicer, but who cares? You can also render a JSON blob with a slightly different set of rules. And if we don't bother with the rendering on either side, it remains just as obvious what the JSON means.

If these people wanted their ideas to stand the test of time, rather than merely the acronyms, they should have had better ideas.


This is hilariously inaccurate. There’s a whole dissertation on what REST is and it’s easily graspable, if you care to read it. REST is about hypermedia, if there is not hypermedia, it is not REST.

It must be said that the confidence with which you present your inaccurate assertions is only going to make others as confused as you are.


If you found it hilarious, well, that was the point: the words in the acronym are very un-denotative, so it encourages deconstruction to mean whatever the reader wants it to mean.

If I truly want people to understand what I'm talking about, rather than just make something popular, I don't give them evocative, fluffy slogans and then direct them to a dissertation to find out what the fluff really means. I use carefully chosen, denotative words in my slogan.

HATEOAS is a good example of prizing clarity over vague evocativeness. It's all about hypermedia, so hypermedia is in the acronym.

If I don't do this, I can't be too upset at what predictably ensues, as people take my cool-sounding slogans and repurpose them to legitimize their own ideas.


REST, noun, acronym.

1. A term to indicate APIs that use HTTP as the transport protocol, and typically JSON as representation.

2. (archaic) A term coined in a paper from 2000, indicating a model that describes how the internet works.


Roy Fielding has no one to blame but himself for creating such a catchy name that it would inevitably be reappropriated to refer to another thing.

Same goes for OOP. /s


IoC too!


It's down to utility.

It turns out that using JSON is easy, has good support and is relatively compact on the wire.

It also turns out that using HTTP verbs and transferring the entire state of an object makes development easier.

And equally, for 99% of use cases, it turns out that HATEOAS is nice but not necessary.


I remember wasting a huge amount of time on this debate over a decade ago. Frankly, I think the issue is that HATEOAS is not useful enough. It's a really nice idea, but in practice nobody actually wants to write an API client that way. So they don't. So API creators don't optimize for it. I consider all of RPC, GraphQL, and bastardized-REST systems to be less elegant but more pragmatic than "true REST". I can still palpably feel how refreshing it was when I finally went to work somewhere that didn't bother with this whole debate and just focused on building good RPC interfaces. More people should embrace that.


If an API is truly RESTful does that automatically mean it's a high quality API? Does that mean it is functional, performant, reliable, secure, and generally well designed? No. An API can be RESTful and also trash. An API can also be non-REST and be amazing.

Is the author correct in that APIs are inaccurately calling themselves RESTful? Yes, yes they are very correct. Congratulations. Here's a trophy for being correct. Now let's focus on what matters, and that is building software that works and works well, REST or not.

Please dump the pedantry and focus on practicality.


Why not both? You can build a "real" RESTful API by returning HTML or JSON based on the request's suffix or HTTP Accept header. Hit it with a web browser and get a nice "home page" that says, "This is my very cool API. Here are links to various endpoints. Make your requests with an 'Accept: application/json' header, or end the request path in '.json' to get a JSON response". Delightfully discoverable and self-documenting.
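
A sketch of that in Express (route and fields are made up; res.format does the Accept-header matching):

    const express = require("express");
    const app = express();

    app.get("/accounts/:id", (req, res) => {
      const account = { id: req.params.id, balance: "100.00 USD" };
      res.format({
        "text/html": () =>
          res.send(`<div>Account ${account.id}: ${account.balance}</div>`),
        "application/json": () => res.json(account),
      });
    });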


It may be time to revisit this approach now that HTML5 has fully matured.

HTML5 was only released in 2014, and it took many years to be fully supported by major browsers.

At the time of JSON vs. HTML, HTML was not yet in a standard place (XML API implementations were extremely inconsistent).

Fast-forward to 2022: fetching <div> and <a> is an elegant pattern, and probably the way to go for self-documenting APIs in the future!


This post tries to clarify things, but it's extremely confused and kinda wrong. First, REST is a type of (distributed) architecture and really has nothing to do with how data is sent over the wire (as long as it's "hypermedia"). The claim that RESTful state has to be transferred via HTML is plain wrong[1].

Second, a JSON response is simply that: a bunch of data in JSON format. It is not JSON-RPC. JSON-RPC, unlike REST, is a protocol -- a way a client talks to a server -- and it usually looks like this:

    --> {"jsonrpc": "2.0", "method": "subtract", "params": [42, 23], "id": 1}
    <-- {"jsonrpc": "2.0", "result": 19, "id": 1}
    --> {"jsonrpc": "2.0", "method": "subtract", "params": [23, 42], "id": 2}
    <-- {"jsonrpc": "2.0", "result": -19, "id": 2}
XML-RPC is the same thing, but done with XML instead of JSON.

> The entire networking model of the application moved over to the JSON RPC style.

No, it didn't. Well, actually, I don't know exactly what "networking model" means in this case. Pretty sure we're still using TCP/IP. But I think it means the data-layer protocol; this is, however, still wrong. We're actually using HTTP methods (also known as RESTful verbs[2]) along with a JSON data format. This is still quite screwed up in the grand scheme of things, but in different ways than the article argues.

[1] According to the man himself, in fact: https://restfulapi.net/

[2] A terribly confusing term


Two simple factors: 1) We needed a word for using JSON and HTTP statuses instead of treating HTTP as a "transport" for opaque SOAP payloads. 2) Actual "REST" (i.e. HATEOAS) is utterly useless and has never been successfully implemented (except to the extent that the web itself qualifies), so the word was going free.


Anarchy, it always has been. Everyone has their own ideas about what REST means or what Hypermedia means.


There is an authoritative source of truth here, though, whether or not anyone chooses to listen to him—Roy Fielding defined REST, so his ideas on this matter are, I'd argue, definitionally correct (which is not automatically the same as most useful).


I think the plot that Roy set for us was lost a long time ago. Right or wrong, we can let things evolve.


Semantic Diffusion comes for everything. See also everyone calling the thing they use to avoid integration "CI".


Even for the term Semantic Diffusion.


Is there actually an API out there that qualifies as RESTful, other than the WWW itself? When you really take the time to consider the spirit of RESTfulness, and the designs and constraints of hypermedia in general, I can't really think of anything other than just... websites on the internet. Or some WWW-like that might not be transferred over HTTP nor use HTML but would be functionally the same regardless (and far less popular).

It seems that trying to build a hypermedia API in the spirit of hypermedia precludes someone actually designing an app with any particularity (themes, layouts, pages, certain requests/responses being valid/invalid, etc.), since it must be so general that the only client application that could qualify is the web browser itself - able to render any HTML document without actually knowing about the document semantically, because having prior semantic knowledge of the API violates REST. Assuming that, are the 'apps' and APIs one would design then just hypermedia documents? Sounds like web 1.0. Not necessarily a bad thing, really, but REST seems too specific to be meaningful outside of websites; either that or I'm not imaginative enough.

I'm not sure what a 'hypermedia API' looks like that isn't just a web page or a functional equivalent thereof. It seems that either the WWW is the REST reference implementation or REST is simply the architecture of the WWW codified.


The author says that the JSON is not self-describing but the HTML example is... That's true if "self-describing" means "describes HTML structure".

Perhaps modern use of the term REST is officially incorrect, but I think most REST clients really need to understand what they are receiving. How many REST clients merely show the result (as-is) to the human user? No, most clients are themselves programs which need to consume the response and make further decisions.

Imagine having to parse the official REST HTML response to get the balance of the account. I hope the source is only in one language, because I would hate to have to build my own reverse-localization system just to make sense of the REST response I just consumed.

I was really trying to grasp why someone would build such a tall soapbox to complain about the incorrect use of a term, when the correct use would mean building arguably near-useless APIs. But then I took a look at what the htmx site is all about. It's about everything-as-html. "Note that when you are using htmx, on the server side you typically respond with HTML, not JSON. This keeps you firmly within the original web programming model, using Hypertext As The Engine Of Application State without even needing to really understand that concept."

Looking at the rest of their site, I'm finding it very difficult to see the value proposition over the current approach of JSON APIs.


> That's true if "self describing" means "describes html structure".

> Imaging having to parse out the official REST HTML response to get the balance of the account.

It's not the HTML that matters, it could be any self describing format containing hypermedia controls, for example: https://jasonette.com or https://hyperview.org


The article's "good" (HTML) example above this statement is bone stock HTML. The only self-description given is that there are 4 anchors and 3 divs in that response. Any other description would have to be assumed based on text between the tags.


I see this argument made assuming that REST service creators are not aware of L3 REST, but the fact that it never caught on is at least some proof of its ill fitness for the general problem. Nowadays we have many more options that do what L3 REST tries to do (OData, GraphQL, etc.), and most APIs at least conform to L2 REST. This is a classic case of friction between design intention and actual user experience: there was a push-door with a handle, and users aren't using the handle, they're just pushing it.


The term is used wrong most of the time now; if you try to be more precise, you're likely to confuse people not aware of the original meaning of REST. And there's not a really nice term for a JSON-based web API that is as short as REST.

Many web APIs are not REST, but still take at least a tiny bit of inspiration from it. Mostly the resource-based structure, not so much any of the other stuff like HATEOAS. In practice the self-describing nature simply isn't useful enough, so most people don't bother.


Several commenters take the position that the distinction doesn't matter. This is "an old person's battle." What matters is getting things done.

I'm not so sure. For one thing, it's of both theoretical and practical interest to trace the path of how a technical term comes to mean its opposite over time. If you're in the business of creating technical terms (everyone building technologies is), you might learn something by studying the REST story.

To start, Fielding's writing is not exactly approachable. REST is described in a PhD dissertation that is dense, packed with jargon and footnotes, and almost devoid of graphics or examples. His scarce later writings on REST were not much better.

Others who thought they understood Fielding, but who could write/speak better than him, came along with different ideas. Their ideas stuck and Fielding's didn't because he wrote like an academic and they did not.

The other thing that happened is that the technological ground shifted. To even begin to understand Fielding requires forgetting much or all of what one knows about modern web technologies. Part of that shift is that Fielding's rediscovery coincided with deep frustration over XML-RPC.


I wrote that it is an old person's battle.

And I'd like to clarify that I never meant that the knowledge and history fueling this so-called battle were meant for the trash.

Quite the opposite actually. As a self-described old person, I much appreciate the historical perspective and the subtleties and the changes the term has seen.


REST, Scrum, Agile, etc. - these are just cargo cults built from temerity (in the blindness sense).


The hidden cost of using hypermedia in REST has always been that hypermedia is machine-hostile. It has been for years.

So a format intended for machine-to-machine communication takes on a huge cost by adopting a full hypermedia format for its output. Ignoring the initial question of "what version of hypermedia" (i.e. are we doing full modern HTML? Can I embed JavaScript in this response and expect the client to interpret it?), that's just overkill when 99% of the time the client and the server both understand the format of the data and don't need the supporting infrastructure a full hypermedia REST implementation provides.

For the same reasons XML-RPC more-or-less lost the fight, HTML (as a not-very-lightweight subset of XML) was going to lose the fight.

That having been said, there are some great ideas from the REST approach that can make a lot of sense with a JSON payload (such as standardizing on URLs as the way foreign resources are accessed, so your client doesn't have to interpret different references to other resources in different ways). But using HTML-on-the-wire isn't generally one of them; it's a solution looking for a problem that brings a full flotilla of problems with it.


Bottom line: REST+HATEOAS is great for distributed hypermedia consumed and navigated by humans.

It is just not a practical architecture for API’s.


I am not quite sold on the dichotomy between "REST" and "RPC" as suggested by the article: if sending over a structured response that has to be interpreted by a client program is considered RPC, then why is sending over a structured response that has to be interpreted by the browser not RPC? REST as described by the article is by definition a subset of RPC - you invoke a remote procedure, you get a response. Besides, it stands to reason that the stateless properties of REST are much more interesting than the "returns a hypermedia response" bit.

As to the rest (pun) of the article… I have no problem accepting that REST was originally proposed as a way to navigate the web using hypermedia responses. But I also have no problem in accepting that the term has since moved on to describe the API design principles which ultimately what makes it useful for the modern web.

Funnily enough, recent interest in SSR almost makes it a full circle.


There are only two hard things in Computer Science: cache invalidation and naming things -- Phil Karlton

A new name is needed for Classic REST. HATEOAS is ugly because it has HATE in the name. Hypermedia Constraint REST is better. Stateless REST. Pure REST. Classic REST. Separation of Concerns REST. REST 1.0. Hypermedia REST vs JSON REST.


... only two ... "and off-by-ones" :)


The details of how computers talk to each other are, or really should be, largely irrelevant. All the little micromanagement of interfaces and data structures is silly busywork.

It's plumbing.

Some time in the future there will be another level of the software revolution, in which a lot of those details can be left to the computers themselves to work out.


Probably, but right now, it's definitely not irrelevant.

In fact, your plumbing analogy is more correct than you think. Most devs these days are connecting existing pieces together to make a system flow. We don't get paid to make the pieces, we get paid because we know how they should fit together.


Answering the headline question: developers love buzzwords and stick to ideas instead of doing the job in a way that fits the situation.

What bugs me most in this is the ceremonial part. "Pass parameters in urlencoded form, except when using GET; then put them into the query string". Wtf? We are clearly doing RPC, not clerical work. Some APIs may look like documents, e.g. a stock market personal order book looks like a collection of signed legal documents, but others do not, e.g. a ticker stream, weather info, etc. Can we please stop stretching buzzwords and just settle on RPC, which could be abstracted away into `result = await resource.foo(bar, baz)`, instead of processing numerous structured outcomes, from network failures to operational errors, which have no corresponding HTTP status codes unless you stretch to one that sounds similar.
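
That shape is easy enough to sketch (endpoint layout made up): a Proxy that turns method calls into POSTs and HTTP failures into plain exceptions:

    function rpcClient(base) {
      return new Proxy({}, {
        get: (_target, method) => async (...args) => {
          const res = await fetch(`${base}/${method}`, {
            method: "POST",
            headers: { "Content-Type": "application/json" },
            body: JSON.stringify(args),
          });
          if (!res.ok) throw new Error(`${method} failed: ${res.status}`);
          return res.json();
        },
      });
    }
    // const resource = rpcClient("/api");
    // const result = await resource.foo(bar, baz);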


Talk about a getting-lost-in-semantics rant. If you want to make your API "RESTful" like this person describes, there is JSON-LD. Most engineers are going to look at all the extra JSON as getting in their way. What was the point of this rant again?


Agree.

> Today, when someone uses the term REST, they are nearly always discussing a JSON-based API using HTTP.

Yup, this is exactly what I do. So what? Maybe that's incorrect naming, but most people only care about being able to easily use the API, not whether it is true REST. And not being 100% REST-spirit-compliant does not prevent you from using tools like OpenAPI to document it.


The argument seems to be that modern-day REST is not self-descriptive, but... making it so is a lot of extra work for little benefit; people have optimized for efficiency and focus on what they need.

The links in a JSON response are only applicable if you need a client to be able to explore onward from the response, but in practice it's not necessary and you're better off saving the response overhead.

A high-level design like an OpenAPI spec is better in all the cases I've seen. And of course there are alternatives like GraphQL or gRPC, depending on the use case. I'd still prefer REST for public APIs though.


I once attempted to implement HATEOAS many years ago after one of our senior devs got a bug up their ass about it. It was a disaster. Our clients weren't going to suddenly sprout new functionality to consume the discoverable endpoints if we changed the API; there were always going to be client-side changes to be made as well, so it was an enormous exercise in futility. Ever since then, HATEOAS has always felt to me like a way to design and implement APIs thought up in the ivory towers of academia, not by anyone on the ground, in the trenches at your everyday software company.


I hate when people change meanings too, but they do. What can you do about it? Complaining is like whining about all the words that have changed meanings over the decades.

https://www.google.com/search?q=top+words+that+have+changed+...

One I hate is "rougelike". It doesn't mean "like the game Rouge" (which might include Diablo and certainly includes Larn). Instead it now means any game with randomly generated levels but requires no other similarities to Rouge.


*rogue :-)


Have I understood the idea of REST described in the article correctly? I still have some documented "route" to a resource, but instead of mapping it trivially to a URL (as done in "pseudo-REST"), the client requests the entry point, then goes through labelled hyperlinks and forms according to the "route" to reach the resource.

It makes sense, as it allows implementation of a seamless API across multiple servers and removes the need to make a consistent URL structure. But won't it add too much overhead then?


> I still have some documented "route" to a resource, but instead of mapping it trivially to a URL (as done in "pseudo-REST"), the client requests the entry point, then goes through labelled hyperlinks and forms according to the "route" to reach the resource.

The basic idea is correct, the wording is not. Resources have identifiers, not routes. Hyperlinks have link relations, not labels.

> won't it add too much overhead

What overhead?


Because REST as designed, while it might be some academic ideal, doesn't make sense from a distributed-computing performance point of view.

It takes needless network requests to understand what a REST API is supposed to be able to do, and then to navigate through its description, until the actual call can finally be made.

And then, even if it isn't pure REST, we have all those global-warming contributions out of needless parsing.

Thankfully with the uptake of gRPC we are getting back to protocols where network performance is taken into consideration.


I feel like the biggest reason REST failed is that it's just not useful from a business point of view. Like, how will I put my logo and brand colors over all of this?


I love this kind of article. I’ve been involved in web development since the mid-1990s, so I lived through all the things the author is talking about, but I never got down into the weeds and did things like read dissertations. Reading this article makes so many things make sense about where web development has gone and why. I could never quite put my finger on why I was so uncomfortable with where it had gone until now.


This article answers its own question: "The situation is hopeless, but not serious."

REST vs HATEOAS vs HTTP API vs Web Service

It's not serious and it doesn't matter that much. If I'm writing an API, I probably want to give people or systems access to my data and services. Missing links will be a mild inconvenience when compared to things like bad naming, inconsistent data structures, confusing error codes or domain complexity.


Technology took a turn unexpected by the creator. Developers utilize REST not for the "REpresentational" part, but for the constraints it enforces: client-server, uniform interface, stateless, and others[1].

[1]: https://en.wikipedia.org/wiki/Representational_state_transfe...


I do love JSON-HTML-RPC ("REST") APIs that embed the "next page" and "previous page" links in their JSON. That is super useful.


Here's the crux of the argument but it falls flat IMO:

>A proper hypermedia client that receives this response does not know what a bank account is, what a balance is, etc. It simply knows how to render a hypermedia, HTML.

No true client would know how to display JSON or not know how to display HTML. So if you have a browser plugin that pretty-prints JSON, it's RESTful? Seems pretty specious.


> a browser plugin that pretty prints json, it's RESTful?

Only if it's a hypermedia rendering (for example, if it renders hypertext so the end user can interact with the system).


I love how our industry likes to use official sounding names like the "Richardson Maturity Model" and the "Liskov Substitution Principle", etc, as if these informal ideas are on the same level as general relativity and quantum electrodynamics. Thanks Uncle Bob and Martin Fowler for your important contributions to science.


I did my last web development in the early noughties, so I have little skin in the game. My mental summary of this debate is that if you need your browser to be a universal Turing machine instead of a renderer, you are probably not doing it the RESTful way. Whether that is a bad thing is a matter of opinion.


It's because nobody cares what it meant and what the original concept was, and they don't have a use for it (the original concept).

People want and find useful what they do use in practice: the "opposite of REST" REST.

It's just that purists and/or Fielding overestimate the importance of the original REST.


I have no doubt the original vision of REST will eventually come to fruition. It will probably be "discovered" by some energetic entrepreneur in the future and be used to incredible effect. I believe the rest of us are just going along with the flow and earning paychecks.


We all decided to keep the important principles of REST and drop the HTML responses because HTML is a terrible format for data exchange between automated systems. JSON is much better for this because it has a simpler schema.

This post is very pedantic. Being pedantic is not helpful.


It starts by describing how we got here, while if one reads to the end, it says, "it's fine."


I think it was to distance these APIs from the complexities of SOAP. I think modern JSON-based APIs are somewhat RESTful--mostly in URL structure and using the basic HTTP verbs and response codes, but that's where it basically ends.


And sometimes you end up with the worst of both worlds:

The salesman says: the documentation is the API, it's RESTful!!!!!!!!

The developer hears: don't care about user documentation, it's RESTful!!!!!!!

The client gets: shitty documentation and a JSON API.


This guy... I do whatever the fuck I want. Why does this person feel the need to tell me how to write my API and what I should call it? This guy's obviously living in a cocoon. Who even uses the term hypermedia?


See also: Hungarian notation (apps vs system)

https://en.wikipedia.org/wiki/Hungarian_notation


This reads exactly like someone going to great lengths eloquently bemoaning how racetracks are meant for horses and anything else is a travesty, and here's a pedantic list of reasons as to why!


No mention of Ruby on Rails - they made REST massively popular by drawing resource routes, nesting, and going out of their way on client-side Ajax to support HTTP verbs, error codes, etc.


>You can tap the sign as much as you want, that battle was lost a long time ago.

Hill in the battle that I intend to die on: "crypto" means "cryptography" dammit!


I want the author of htmx to get together with the guy from pandastrike and rant about the misuse of REST for an hour a week. It would be my new favorite podcast.



This is undated, but I could swear I read it years ago! I can't find an earlier publication, though, so I guess I read a similar thing!


The author writes on the subject somewhat often and includes ideas from earlier posts. This must be the latest version, because it had a link to a few-day-old HN thread.


A better question is how ppl can write an API like this and call it REST:

POST /api/findAllImagesMatchingMyPredicate

Body: jsonObject
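
A more resource-shaped spelling of the same operation might be (parameter names made up):

    GET /api/images?createdAfter=2022-01-01&tag=cat
so the verb describes the access and the query describes the filter.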

And I work with those ppl. Time to look around..


"REST must be the most broadly misused technical term in computer programming history."

I think OOP and REST are neck-and-neck on this one.


Interesting, but the cat is out of the bag.

Now REST means JSON (or another data format) over HTTP, respecting the HTTP methods.


Beyond the HTML web, are there any practical examples of "actual REST" that we can look at?


I need to make a SPA and its requisite backend. Which highfalutin theory of API should I use?


"the situation is hopeless but not serious" is a wonderful way to describe it.


Responding to the topic: Similar to how Agile isn't very Agile, I suspect.


The best correct usage of REST is WebDAV; let it die in peace.


You either die a hero or live long enough to become the villain.


How did Agile come to mean the opposite of Agile?


One sprint at a time.


that made me smile.


sounds like we just need a new acronym, or can we just call it REST2 and move on?


As an "elder" in our industry, I encounter "old people arguments" on a regular basis. A few observations that seem to apply here:

1. Naming things is hard. Sometimes a thing gets a name for a reason that made sense a long time ago, but things evolve, and the original name no longer makes sense.

This isn't necessarily a problem. Nobody cares that we no longer typeset words and print images on paper, physically cut them out, and then physically paste them onto a board, which we take a picture of and use the picture to run the phototypesetter (https://en.wikipedia.org/wiki/Phototypesetting).

Yes, I am old enough to have worked on a hybrid publishing system that used laser printers to create text that was physically copied and pasted in the manner described above. No, I don't argue that "cut" and "paste" are the wrong words to describe what happens in editing software.

So if we use the term "REST" today in a manner that doesn't agree with how the coiner of the term meant it when discussing the architecture of a distributed hypermedia system... Sure, why not, that's ok. We also don't use terms like "OOP" or "FP" precisely the way the terms were used when they were first coined, and for that matter, we probably don't all agree on exactly what they mean, but we agree enough that they're useful terms.

What else matters? Well...

2. Sometimes arguing about what the words used to mean is a proxy for arguing about the fact that what we consider good design has changed, and some people feel it may not be for the better.

That's always a valuable conversation to have. We sometimes do "throw out the baby with the bathwater," and drop ideas that had merit. We footgun ourselves for a while, and then somebody rediscovers the old idea.

The original OOP term was about hiding internal state, yes, and about co-locating state with the operations upon that state, yes, but it was also about message-passing, and for a long time we practiced OOP with method calling and not message-passing, and sure enough, we had to rediscover that idea in Erlang and then Elixir.

Forget whether the things we do with JSON should or shouldn't be called "REST-ful" because they aren't the same as what the word was intended to describe way back when. Good questions to ask are: "What did that original definition include that isn't present now?" "Would our designs be better if we behaved more like that original definition?" "What problems did the original definition address?" "Do we still have those problems, and if so, are they unsolved by our current practices?"

If there's something good that we've dropped, maybe we will get a lot of value out of figuring out what it is. And if we want to bring it back, it probably won't be by exactly replicating the original design, maybe we will develop entirely new ways to solve the old problems that match the technology we have today.

TL;DR

The question of whether an old term still applies or not can generate a lot of debate, but little of it is productive.

The question of whether an old design addressed problems that we no longer solve, but could solve if we recognized the problems and set about solving them with our current technology is always interesting.


Don't care about the semantics or grammar-- I just need a way for the UI to run a function on a server.


JSON actually delivers the representational state transfer that the original author wanted; that's what made JSON come to mean the same thing as a RESTful API!



