Comment by motorest

3 days ago

> I sympathize with the pedantry here and found Fielding's paper to be interesting, but this is a lost battle.

Why do people feel compelled to even consider it to be a battle?

As I see it, the REST concept is useful, but the HATEOAS detail ends up having no practical value and creates more problems than the ones it solves. This is in line with the Richardson maturity model[1], where the apex of REST includes all the HATEOAS bells and whistles.

Should REST without HATEOAS classify as REST? Why not? I mean, what is the strong argument to differentiate an architectural style that meets all but one requirement? And is there a point to this nitpicking if HATEOAS is practically irrelevant and the bulk of RESTful APIs do not implement it? What's the value in this nitpicking? Is there any value in citing theses as if they were Monty Python skits?

[1] https://en.wikipedia.org/wiki/Richardson_Maturity_Model

For me the battle is with people who want to waste time bikeshedding over the definition of "REST" and whether an API is "RESTful", with no practical advantages, and then with having to steer the conversation (and their motivation) towards more useful things without alienating them. It's tiresome.

  • It was buried towards the bottom of the article, but the reason, to me, is this:

    Clients can be almost automatic with a HATEOAS implementation, because it is a self-describing protocol.

    Of course, OpenAPI (and now perhaps, to some extent, AI) also means that clients don't need to be written; they are just generated.

    However, it is perhaps important to remember the context here: SOAP is and was terrible, but for enterprises that needed a complex and robust RPC system it was beginning to gain traction. HATEOAS is a much more general yet simpler and more comprehensive system in comparison.

    Of course, you don't need any of this. So people built the APIs they did need, which were not RESTful but had an acronym their bosses thought sounded better than SOAP, and the rest is history.
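    To make the "almost automatic" claim concrete, here is a minimal sketch of a client that hardcodes only the API entry point and otherwise follows link relations returned by the server. The `_links` response shape and the example URL are assumptions for illustration, not something the comment above specifies.

    ```typescript
    // Hypothetical hypermedia-following client: only the entry point is hardcoded.
    // Assumed response convention: { _links: { <rel>: { href } }, ...fields }.
    type Links = { [rel: string]: { href: string } };
    type Resource = { _links?: Links; [field: string]: unknown };

    async function getResource(url: string): Promise<Resource> {
      const res = await fetch(url, { headers: { Accept: "application/json" } });
      if (!res.ok) throw new Error(`GET ${url} failed with ${res.status}`);
      return (await res.json()) as Resource;
    }

    // Walk a chain of link relations from the API root, e.g. ["orders", "latest"].
    async function follow(entryPoint: string, rels: string[]): Promise<Resource> {
      let current = await getResource(entryPoint);
      for (const rel of rels) {
        const link = current._links?.[rel];
        if (!link) throw new Error(`server no longer advertises rel "${rel}"`);
        current = await getResource(link.href);
      }
      return current;
    }

    // Usage: the client never builds "/orders/123" itself; the server's links drive it.
    // follow("https://api.example.com/", ["orders", "latest"]).then(console.log);
    ```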

    • > Clients can be almost automatic with a HATEOAS implementation, because it is a self-describing protocol.

      That was the theory, but it was never true in practice.

      The oft-made comparisons to the browser really missed the mark. The browser was driven by advanced AI wetware.

      Given the advancements in LLMs, it's not even clear that RESTish interfaces would be easier for them to consume (say vs. gRPC, etc.)

  • Then let developer-Darwin win and fire those people. Let the natural selection of the hiring process win against pedantic assholes. The days are too short to argue over issues that are not really issues.

Defining media types seems right to me, but what ends up happening is that you use Swagger instead to define APIs, and out the window goes HATEOAS, and part of the reason for this is just that defining media types is not something people do (though they should).

Basically: define a schema for your JSON, use an obvious CRUD mapping to HTTP verbs for all actions, use URI local-parts embedded in the JSON, use standard HTTP status codes, and embed more error detail in the JSON.
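For what it's worth, a rough sketch of that recipe; Express is standing in for whatever framework you use, and the /orders resource, its fields, and the error shape are invented for the example:

```typescript
// Sketch of the "pragmatic REST" recipe above: schema'd JSON, CRUD-to-verb mapping,
// URI local-parts (ids) in the JSON, standard status codes, error detail in the body.
import express from "express";

interface Order {
  id: string; // URI local-part: clients append it to the known /orders prefix
  status: "open" | "shipped";
  totalCents: number;
}

const orders = new Map<string, Order>();
const app = express();
app.use(express.json());

app.post("/orders", (req, res) => {
  const order: Order = { ...req.body, id: String(orders.size + 1) };
  orders.set(order.id, order);
  res.status(201).json(order);
});

app.get("/orders/:id", (req, res) => {
  const order = orders.get(req.params.id);
  if (!order) {
    res.status(404).json({ error: "order_not_found", detail: `No order ${req.params.id}` });
    return;
  }
  res.status(200).json(order);
});

app.delete("/orders/:id", (req, res) => {
  orders.delete(req.params.id);
  res.status(204).end();
});

app.listen(3000);
```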

  • > (...) and part of the reason for this is just that defining media types is not something people do (...)

    People do not define media types because it's useless and serves no purpose. They define endpoints that return specific resource types, and clients send requests to those endpoints expecting those resource types. When a breaking change is introduced, backend developers simply provide a new version of the API where a new endpoint is added to serve the new resource.

    In theory, media types would allow the same endpoint to support multiple resource types. Services would send specific resource types to clients if they asked for them by passing the media type in the Accept header. That is all fine and dandy, except this forces endpoints to support an ever more complex content negotiation scheme that no backend framework comes close to supporting, and it brings absolutely no improvement in the way clients are developed (the sketch after this comment contrasts the two approaches).

    So why bother?
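    To make the contrast concrete, here is a hedged sketch of the two approaches: a new endpoint per breaking change versus content negotiation on a single endpoint keyed off the Accept header. The vendor media type names and the order payloads are invented for the example.

    ```typescript
    // Express sketch contrasting endpoint-per-version with Accept-header negotiation.
    import express from "express";

    const app = express();

    // Approach 1: what most teams actually do. A breaking change gets a new endpoint.
    app.get("/v1/orders/:id", (req, res) => {
      res.json({ id: req.params.id, total: "12.50" }); // old shape
    });
    app.get("/v2/orders/:id", (req, res) => {
      res.json({ id: req.params.id, totalCents: 1250, currency: "EUR" }); // new shape
    });

    // Approach 2: one endpoint, multiple representations, chosen via the Accept header.
    // Frameworks give little help here, so the branching ends up in your own code.
    app.get("/orders/:id", (req, res) => {
      const accept = req.get("Accept") ?? "*/*";
      if (accept.includes("application/vnd.example.order.v2+json")) {
        res.type("application/vnd.example.order.v2+json")
           .json({ id: req.params.id, totalCents: 1250, currency: "EUR" });
      } else if (accept.includes("application/vnd.example.order.v1+json") || accept.includes("*/*")) {
        res.type("application/vnd.example.order.v1+json")
           .json({ id: req.params.id, total: "12.50" });
      } else {
        res.status(406).json({ error: "not_acceptable" });
      }
    });

    app.listen(3000);
    ```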

> the HATEOAS detail ends up having no practical value and creates more problems than the ones it solves.

Many server-rendered websites support REST by design: a web page with links and forms is the state transferred to the client. Even in SPAs, HATEOAS APIs are great for shifting business logic and security to the server, where they belong. I have built plenty of them; it does require a certain mindset, but it does make many things easier. What problems are you talking about?

We should probably stop calling the thing that we call REST "REST" and be done with it; it's only tangentially related to what Fielding tried to define.

  • > We should probably stop calling the thing that we call REST (...)

    That solves no problem at all. We have the Richardson maturity model, which provides a crisp definition, and it's ignored. We have the concept of RESTful, which is also ignored. We have RESTless, to contrast with RESTful. Etc etc etc.

    None of this discourages nitpickers. They are pedantic in one direction, and so lax in another direction.

    Ultimately it's all about nitpicking.

> Why do people feel compelled to even consider it to be a battle?

Because words have specific meanings. There’s a specific expectation when using them. It’s like if someone said “I can’t install this app on my iPhone” but then they have an Android phone. They are similar in that they’re both smartphones and overall behave and look similar, but they’re still different.

If you are told an API is RESTful, there’s an expectation of how it will behave.

  • > If you are told an API is RESTful, there’s an expectation of how it will behave.

    And today, for most people in most situations, that expectation doesn’t include anything to do with HATEOAS.

  • Words derive their meaning from the context in which they are (not) used, which is not fixed and often changes over time.

    Few people actually use the word RESTful anymore; they talk about REST APIs, and what they mean is almost certainly very far from what Roy had in mind decades ago.

    People generally do not refer to all smartphones as iPhones, but if they did, that would literally change the meaning of the word. Examples: Zipper, cellophane, escalator… all specific brands that became ordinary words.

> but the HATEOAS detail ends up having no practical value and creates more problems than the ones it solves.

Only because we never had the tools and resources that, say, GraphQL has.

And now everyone keeps re-inventing half of HTTP anyway. See this diagram https://raw.githubusercontent.com/for-GET/http-decision-diag... (docs https://github.com/for-GET/http-decision-diagram/tree/master...) and this: https://github.com/for-GET/know-your-http-well

  • > Only because we never had the tools and resources that, say, GraphQL has.

    GraphQL promised to solve real-world problems.

    What real-world problems does HATEOAS address? None.

    • GraphQL was "promising" something because it was a product of a single company.

      HATEOAS didn't need to "promise" anything since it was just describing already existing protocols and capabilities that you can see in the links I posted.

      And that's how you got POST-only GraphQL, which for years has been busily reinventing half of HTTP.

I’m with you. HATEOAS is great when you have two (or more) independent enterprise teams with PMs fighting for budget.

When it’s just you and your two-pizza team, contract-first design is totally fine. Just make sure you can version your endpoints or feature-flag new APIs so changes don’t break your older clients.

> Why do people feel compelled to even consider it to be a battle?

Because September isn't just for users.

HATEOAS adds lots of practical value if you care about discoverability and longevity.

  • Discoverability by whom, exactly? Like if it's for developer humans, then good docs are better. If it's for robots, then _maybe_ there's some value... But in reality, it's not for robots.

    HATEOAS solves a problem that doesn't exist in practice. Can you imagine an API provider being like, "hey, we can go ahead and change our interface...should be fine as long as our users are using proper clients that automatically discover endpoints and programmatically adapt accordingly"? Or can you imagine an API consumer going, "well, this HTTP request delivers the data we need, but let's make sure not to hit it directly -- instead, let's recursively traverse a graph of requests each time to make sure this is still the way to do it!"

    • You have got it wrong. Let's say I build some API with different user roles. Some users can delete an object, others can only read it. The UI knows the semantics and logical names of the operations, so when the UI gets the object from the server it can simply check whether certain operations are available, instead of encoding the permission checking on the client side. This is the discoverability. It does not imply generated interfaces; the UI may know something about the data in advance.
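      A small sketch of what that can look like on the wire; the `_links` field, the order resource, and the roles are assumptions for illustration. The server includes a link only for operations the caller may perform, so the UI checks for the link instead of duplicating the permission rules.

      ```typescript
      // Hypothetical response builder: link presence encodes what this caller may do.
      type Role = "reader" | "admin";

      interface OrderResource {
        id: string;
        status: string;
        _links: { [rel: string]: { href: string; method: string } };
      }

      function renderOrder(id: string, status: string, role: Role): OrderResource {
        const links: OrderResource["_links"] = {
          self: { href: `/orders/${id}`, method: "GET" },
        };
        if (role === "admin") {
          // Only admins see the delete affordance; readers never need the rule itself.
          links.delete = { href: `/orders/${id}`, method: "DELETE" };
        }
        return { id, status, _links: links };
      }

      // UI side: no permission logic, just "is the operation offered?"
      const order = renderOrder("42", "open", "reader");
      const canDelete = "delete" in order._links; // false for readers, true for admins
      console.log(canDelete);
      ```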


    • > If it's for robots, then _maybe_ there's some value...

      Nah, machine-readable docs beat HATEOAS in basically any application.

      The person who created HATEOAS was really not designing an API protocol. It describes a general-purpose content delivery platform and is not very useful for software development.

  • For most APIs that doesn’t deliver any value that can’t be gained from API docs, so it’s hard to justify. However, these days it could be very useful if you want an AI to be able to navigate your API. But MCP has the spotlight now.

  • LLMs also appear to have an easier time consuming it (not surprisingly).