Comment by dwaltrip
3 days ago
I'll never understand why the HATEOAS meme hasn't died.
Is anyone using it? Anywhere?
What kind of magical client can make use of an auto-discoverable API? And why does this client have no prior knowledge of the server they are talking to?
> I'll never understand why the HATEOAS meme hasn't died.
> Is anyone using it? Anywhere?
As I recall ACME (the protocol used by Let’s Encrypt) is a HATEOAS protocol. If so (a cursory glance at RFC 8555 indicates that it may be), then it’s used by almost everyone who serves HTTPS.
Arguably HTTP, when used as it was intended, is itself a HATEOAS protocol.
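To make the ACME point concrete, here's a minimal sketch of the discovery step (Python, standard library only; the directory URL is Let's Encrypt's real endpoint and the field names come from RFC 8555, but all surrounding client logic is elided):

```python
import json
import urllib.request

# An ACME client hardcodes exactly one URL: the directory.
# Every other endpoint is discovered from the server's response
# (RFC 8555, section 7.1.1).
DIRECTORY_URL = "https://acme-v02.api.letsencrypt.org/directory"

with urllib.request.urlopen(DIRECTORY_URL) as resp:
    directory = json.load(resp)

# The server tells the client where each operation lives;
# none of these URLs are known to the client in advance.
print(directory["newNonce"])    # where to fetch anti-replay nonces
print(directory["newAccount"])  # where to register an account
print(directory["newOrder"])    # where to request a certificate
```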
> What kind of magical client can make use of an auto-discoverable API? And why does this client have no prior knowledge of the server they are talking to?
LLMs seem to do well at this.
And remember that ‘auto-discovery’ means different things. A link typed ‘next’ enables auto-discovery of the next resource (whatever that means); it assumes some pre-existing knowledge in the client of what ‘next’ actually means.
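For example, here's roughly what a client that only knows the meaning of ‘next’ looks like (a sketch assuming the third-party requests library and a server that sends RFC 8288 Link headers; the API itself is hypothetical):

```python
import requests  # third-party; pip install requests

def fetch_all_pages(start_url):
    """Follow rel="next" links until the server stops offering one."""
    url = start_url
    while url:
        resp = requests.get(url)
        resp.raise_for_status()
        yield resp.json()
        # requests parses the RFC 8288 Link header into resp.links.
        # The client's only pre-existing knowledge is what 'next' means;
        # the URLs themselves come entirely from the server.
        url = resp.links.get("next", {}).get("url")
```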
> As I recall ACME (the protocol used by Let’s Encrypt) is a HATEOAS protocol.
In this case specifically, everybody's lives are worse because of it.
I'm not super familiar with ACME, but why is that? I usually dislike the HATEOAS approach but I've never really seen it used seriously, so I'm curious!
Yes. You used it to enter this comment.
I am using it to enter this reply.
The magical client that can make use of an auto-discoverable API is called a "web browser", which you are using right this moment, as we speak.
This is true, but isn’t this quite far away from the normal understanding of API, which is an interface consumed by a program? Isn’t this the P in Application Programming Interface? If it’s a human at the helm, it’s called a User Interface.
I agree that's a common understanding of things, but I don't think that it's 100% accurate. I think that a web browser is a client program, consuming a RESTful application programming interface in the manner that RESTful APIs are designed to be consumed, and presenting the result to a human to choose actions.
I think if you restrict the notion of client to "automated programs that do not have a human driving them" then REST becomes much less useful:
https://htmx.org/essays/hypermedia-clients/
https://intercoolerjs.org/2016/05/08/hatoeas-is-for-humans.h...
AI may change this at some point.
So, given a HATEOAS API, and stock Firefox (or Chrome, or Safari, or whatever), it will generate client views with CRUD functionality?
Let alone ux affordances, branding, etc.
Yes. You used such an API to post your reply. And I am using it as well, via the affordances presented by the mobile Safari hypermedia client program. Quite an amazing system!
The web browser is just following direct commands. The auto-discovery and logic are implemented by my human brain.
Yes.
https://intercoolerjs.org/2016/05/08/hatoeas-is-for-humans.h...
I also use Google Maps, YouTube, Spotify, and Figma in the same web browser. But surely most of the functionality of those would not be considered HATEOAS.
Yes, very strongly agree. Browsers, through the code-on-demand "optional" constraint on REST, have become so powerful that people have started to build RPC-style applications in them.
Ironic that Fielding's dissertation contained the seed of REST's destruction!
Wait what? So everything is already HATEOAS?
I thought the “problem” was that no one was building proper restful / HATEOAS APIs.
It can’t go both ways.
The web, in traditional HTML-based responses, uses HATEOAS, almost by definition. JSON APIs rarely do, and when they do it's largely pointless.
https://htmx.org/essays/how-did-rest-come-to-mean-the-opposi...
https://intercoolerjs.org/2016/05/08/hatoeas-is-for-humans.h...
I used it on an enterprise-grade video surveillance system. It was great - basically solved the versioning and permissions problem at the API level. We leveraged other RFCs where applicable.
The biggest issue was that people wanted to subvert the model to "make things easier" in ways that actually made things harder. The second biggest issue is that JSON is not, out of the box, a hypertext format. This makes application/json not suitable for HATEOAS, and forcing some hypertext semantics onto it always felt like a kludge.
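For illustration, here is roughly what that kludge tends to look like in a HAL-style (application/hal+json) response, sketched as a Python literal; the resource fields and URLs are hypothetical:

```python
# Hypertext semantics grafted onto JSON via the reserved "_links"
# member, as in HAL (application/hal+json). Fields are hypothetical.
order = {
    "status": "processing",
    "total": 42.50,
    "_links": {
        "self":    {"href": "/orders/123"},
        "payment": {"href": "/orders/123/payment"},
        "cancel":  {"href": "/orders/123/cancel"},
    },
}

# A generic client can enumerate the available transitions...
print(list(order["_links"]))  # ['self', 'payment', 'cancel']

# ...but unlike an <a> or <form> in HTML, nothing here tells the
# client how to present these actions or what invoking them entails.
# That out-of-band knowledge is exactly what makes it feel bolted on.
```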
https://htmx.org/ might be the closest attempt?
https://data-star.dev takes things a bit further in terms of simplicity, performance, and hypermedia concepts. Worth a look.
I think OData is barely used, and that's a proper standard and a lower bar to clear. HATEOAS doesn't even benefit from a popular standard, which is both a cause and a result of its obscurity.
You realize that anyone using a browser to view HTML is using HATEOAS, right? You could probably argue whether SPAs fit the bill, but for sure any server-rendered or static site is using HATEOAS.
The point isn't that clients must have absolutely no prior knowledge of the server, it's that clients shouldn't have to have complete knowledge of the server.
We've grown used to that approach because most of us have been building tightly coupled apps where the frontend knows exactly how the backend works, but that isn't the only way to build a website or web app.
HATEOAS is anything that serves the talking point now apparently
For a traditional web application, HATEOAS is that. HTML as the engine of application state: the application state is whatever the server returns, and we can assess the application state at any time by using our eyeballs to view the HTML. For these applications, HTML is not just a presentation layer, it is the data.
The application is then auto-discoverable. We have links to new endpoints, URLs, that progress or modify the application state. Humans can navigate these, yes, but other programs, like crawlers, can as well.
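As a sketch of that last point: a crawler needs nothing but HTML semantics to enumerate every transition the server offers from the current state (Python, standard library only; the entry URL is whatever page you start from):

```python
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkExtractor(HTMLParser):
    """Collect every <a href>: the state transitions the server offers."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

def discover(url):
    # The client knows only the media type (HTML); the response itself
    # enumerates everything reachable from this application state.
    html = urlopen(url).read().decode("utf-8", errors="replace")
    parser = LinkExtractor()
    parser.feed(html)
    return [urljoin(url, href) for href in parser.links]
```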
What do you mean? Both HATEOAS and REST have clear definitions.
Can you be more specific? What exactly is the partial knowledge? And how is that different from non-conforming APIs?
Not totally sure I understand your question, sorry if I don't quite answer it here.
With REST you need to know a few things like how to find and parse the initial content. I need a browser that can go from a URL to rendered HTML, for example. I don't need to know anything about what content is available beyond that though, the HTML defines what actions I can take and what other pages I can visit.
RPC APIs are the opposite. I still need to know how to find and parse the response, but I need to deeply understand how those APIs are structured and what I can do. I need to know schemas for the API responses, I need to know what other APIs are available, I need to know how those APIs relate and how to handle errors, etc.
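To put the contrast in code (a sketch only; the URLs and the "_links" convention are hypothetical, and the third-party requests library is assumed):

```python
import requests  # third-party; pip install requests

# RPC style: the client hardcodes the server's URL structure and
# schema. Renaming or moving an endpoint breaks this code.
user = requests.get("https://api.example.com/v2/users/42").json()
orders = requests.get(
    f"https://api.example.com/v2/users/{user['id']}/orders"
).json()

# REST/HATEOAS style: the client knows one entry point and how to
# parse the media type; every further URL comes from the response.
user = requests.get("https://api.example.com/users/42").json()
orders = requests.get(user["_links"]["orders"]["href"]).json()
```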