Comment by dwaltrip

3 days ago

I'll never understand why the HATEOAS meme hasn't died.

Is anyone using it? Anywhere?

What kind of magical client can make use of an auto-discoverable API? And why does this client have no prior knowledge of the server they are talking to?

> I'll never understand why the HATEOAS meme hasn't died.

> Is anyone using it? Anywhere?

As I recall ACME (the protocol used by Let’s Encrypt) is a HATEOAS protocol. If so (a cursory glance at RFC 8555 indicates that it may be), then it’s used by almost everyone who serves HTTPS.
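The discoverable part of ACME is its directory object (RFC 8555 §7.1.1): a client fetches one well-known URL and learns every other endpoint from the response, rather than hardcoding URLs. A minimal sketch, with a made-up example.com directory payload standing in for a real HTTP fetch:

```python
import json

# Sample directory object, modeled on RFC 8555 section 7.1.1.
# An ACME client fetches this from a single well-known URL and
# discovers every other endpoint from it -- nothing else is hardcoded.
directory_json = """
{
  "newNonce": "https://example.com/acme/new-nonce",
  "newAccount": "https://example.com/acme/new-account",
  "newOrder": "https://example.com/acme/new-order",
  "revokeCert": "https://example.com/acme/revoke-cert",
  "keyChange": "https://example.com/acme/key-change"
}
"""

directory = json.loads(directory_json)

def endpoint(name: str) -> str:
    """Look up an endpoint by its directory key instead of a fixed URL."""
    return directory[name]

print(endpoint("newOrder"))  # https://example.com/acme/new-order
```

The server is free to move or version its endpoints; only the directory URL is a fixed contract.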

Arguably HTTP, when used as it was intended, is itself a HATEOAS protocol.

> What kind of magical client can make use of an auto-discoverable API? And why does this client have no prior knowledge of the server they are talking to?

LLMs seem to do well at this.

And remember that ‘auto-discovery’ means different things. A link with the relation type ‘next’ enables auto-discovery of the next resource (whatever that means); it assumes some pre-existing knowledge in the client of what ‘next’ actually means.
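To make that concrete: a client that understands only the generic ‘next’ relation (RFC 8288) can walk an entire paged collection without knowing anything about the API's URL layout. A sketch, with an in-memory dict standing in for HTTP GETs and made-up page URLs:

```python
# Fake "server": each response carries its items plus the links
# the client may follow. Only the meaning of "next" is shared knowledge.
pages = {
    "/items?page=1": {"items": [1, 2], "links": {"next": "/items?page=2"}},
    "/items?page=2": {"items": [3, 4], "links": {"next": "/items?page=3"}},
    "/items?page=3": {"items": [5], "links": {}},
}

def collect_all(start: str) -> list:
    """Follow 'next' links until the server stops providing one."""
    url, items = start, []
    while url:
        page = pages[url]           # stand-in for an HTTP GET
        items.extend(page["items"])
        url = page["links"].get("next")
    return items

print(collect_all("/items?page=1"))  # [1, 2, 3, 4, 5]
```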

  • > As I recall ACME (the protocol used by Let’s Encrypt) is a HATEOAS protocol.

    In this case specifically, everybody's lives are worse because of that.

    • I'm not super familiar with ACME, but why is that? I usually dislike the HATEOAS approach, but I've never really seen it used seriously, so I'm curious!

Yes. You used it to enter this comment.

I am using it to enter this reply.

The magical client that can make use of an auto-discoverable API is called a "web browser", which you are using right this moment, as we speak.
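That generic client needs no prior map of a site: every page it fetches advertises where it can go next. The same property is what makes crawlers possible. A minimal sketch using the standard library's html.parser, with a made-up page:

```python
from html.parser import HTMLParser

# A crawler (or browser) discovers where it can go by reading the
# hypertext itself, not from out-of-band knowledge of the site.
class LinkExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

page = '<html><body><a href="/next">Next</a> <a href="/prev">Prev</a></body></html>'
parser = LinkExtractor()
parser.feed(page)
print(parser.links)  # ['/next', '/prev']
```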

I used it on an enterprise-grade video surveillance system. It was great - basically solved the versioning and permissions problem at the API level. We leveraged other RFCs where applicable.

The biggest issue was that people wanted to subvert the model to "make things easier" in ways that actually made things harder. The second biggest issue is that JSON is not, out of the box, a hypertext format. This makes application/json not suitable for HATEOAS, and forcing some hypertext semantics onto it always felt like a kludge.
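One common attempt at bolting hypertext semantics onto JSON is HAL (application/hal+json), which reserves a `_links` member for the controls; it also illustrates how permissions can live at the API level, since the server simply omits links for actions the caller isn't allowed to take. A hedged sketch with hypothetical camera resources (not the actual system described above):

```python
# HAL-style response: hypermedia controls live in a reserved "_links"
# member bolted onto plain JSON -- the "kludge" described above.
# The server only advertises actions the caller is permitted to take.
def camera_resource(can_delete: bool) -> dict:
    links = {
        "self": {"href": "/cameras/42"},
        "stream": {"href": "/cameras/42/stream"},
    }
    if can_delete:
        links["delete"] = {"href": "/cameras/42"}
    return {"_links": links, "name": "Lobby camera"}

admin_view = camera_resource(can_delete=True)
viewer_view = camera_resource(can_delete=False)
print("delete" in admin_view["_links"])   # True
print("delete" in viewer_view["_links"])  # False
```

The client never asks "am I allowed to delete?"; it just checks whether the link is there.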

Consider OData: it's a proper standard and a lower bar to clear, and even it sees little use. HATEOAS doesn't even have a popular standard behind it, which is both a cause and a consequence of its obscurity.

You realize that anyone using a browser to view HTML is using HATEOAS, right? You could probably argue whether SPAs fit the bill, but for sure any server-rendered or static site is using HATEOAS.

The point isn't that clients must have absolutely no prior knowledge of the server, it's that clients shouldn't have to have complete knowledge of the server.

We've grown used to that approach because most of us have been building tightly coupled apps where the frontend knows exactly how the backend works, but that isn't the only way to build a website or web app.

  • HATEOAS is anything that serves the talking point now apparently

    • For a traditional web application, HATEOAS is that. HTML as the engine of application state: the application state is whatever the server returns, and we can assess the application state at any time by using our eyeballs to view the HTML. For these applications, HTML is not just a presentation layer, it is the data.

      The application is then auto-discoverable. We have links to new endpoints, URLs, that progress or modify the application state. Humans can navigate these, yes, but other programs, like crawlers, can as well.

  • Can you be more specific? What exactly is the partial knowledge? And how is that different from non-conforming APIs?

    • Not totally sure I understand your question, sorry if I don't quite answer it here.

      With REST you need to know a few things like how to find and parse the initial content. I need a browser that can go from a URL to rendered HTML, for example. I don't need to know anything about what content is available beyond that though, the HTML defines what actions I can take and what other pages I can visit.

      RPC APIs are the opposite. I still need to know how to find and parse the response, but I need to deeply understand how those APIs are structured and what I can do. I need to know schemas for the API responses, I need to know what other APIs are available, I need to know how those APIs relate and how to handle errors, etc.
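The contrast above can be sketched in a few lines: a hypermedia client only knows the meaning of link relations and acts on whatever the response advertises, while an RPC-style client bakes the URL structure and available operations into its own code. The `submit` relation and `/orders/...` paths below are made up for illustration:

```python
# A hypothetical order resource as the server might return it.
response = {
    "status": "draft",
    "_links": {
        "self": {"href": "/orders/7"},
        "submit": {"href": "/orders/7/submission"},
    },
}

def hypermedia_submit(resp: dict):
    """Act only if the server advertises the 'submit' relation."""
    link = resp["_links"].get("submit")
    return ("POST", link["href"]) if link else None

def rpc_submit(order_id: int):
    """RPC style: the URL structure is baked into the client."""
    return ("POST", f"/orders/{order_id}/submission")

print(hypermedia_submit(response))  # ('POST', '/orders/7/submission')
```

If the server stops offering `submit` (say, the order was already placed), the hypermedia client degrades gracefully; the RPC client issues the request and has to interpret the error.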