Comment by cjpearson
3 days ago
I sympathize with the pedantry here and found Fielding's paper to be interesting, but this is a lost battle. When I see "REST API" I can safely assume the following:
- The API returns JSON
- CRUD actions are mapped to POST/GET/PUT/DELETE
- The team constantly bikesheds over correct status codes and at least a few are used contrary to the HTTP spec
- There's a decent chance listing endpoints were changed to POST to support complex filters
Like Agile, CI or DevOps you can insist on the original definition or submit to the semantic diffusion and use the terms as they are commonly understood.
Fielding won the war precisely because he was intellectually incoherent and mostly wrong. It's the "worse is better" of the 21st century.
RPC systems were notoriously unergonomic and at best marginally successful. See Sun RPC, RMI, DCOM, CORBA, XML-RPC, SOAP, Protocol Buffers, etc.
People say it is not RPC, but all the time we write some function in JavaScript that does a GET against some URL, and on the backend we have a function with some annotation that explains how to map the URL to an item call. So it is RPC, but instead of a highly complex system that is intellectually coherent but awkward and makes developers puke, we have a system that's more manual than it could be but has a lot of slack and leaves developers feeling like they're in control. 80% of what's wrong with it is that people won't just use ISO 8601 dates.
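A minimal sketch of that shape in Python (all names hypothetical; real stacks would use `fetch` on the client and something like Spring or Flask on the server): the "annotation" is just a decorator mapping a URL template onto a handler, and the client-side stub is an ordinary function call that happens to travel over HTTP.

```python
# Hypothetical sketch: "REST" endpoints as remote function calls in disguise.
ROUTES = {}

def get(path_template):
    """The 'annotation' that maps a URL template onto a handler function."""
    def register(fn):
        ROUTES[path_template] = fn
        return fn
    return register

@get("/item/{item_id}")
def get_item(item_id):
    # A real service would hit a database here; this just echoes.
    return {"id": item_id, "name": f"item-{item_id}"}

def call(path_template, **args):
    """Client-side stub: 'GET /item/123' is getItem(123) in disguise."""
    return ROUTES[path_template](**args)

print(call("/item/{item_id}", item_id="123"))  # {'id': '123', 'name': 'item-123'}
```

The URL template plays the role of the function signature; the HTTP verb plays the role of the calling convention.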
When I realized that I was calling openapi-generator to create client side call stubs on non-small service oriented project, I started missing J2EE EJB. And it takes a lot to miss EJB.
I'd like to ask seasoned devs and engineers here: is it the normal industry-wide blind spot where people still crave, and are happy creating, 12 different descriptions of the same things across remote, client, unit tests, e2e tests, ORM, API schemas, all the while feeling much more productive than <insert monolith here>?
I've seen some systems with a lot of pieces where teams have attempted to avoid repetition and arranged to use a single source of schema truth to generate various other parts automatically, and it was generally more brittle and harder to maintain due to different parts of the pipeline owned by different teams, and operated on different schedules. Furthermore it became hard to onboard to these environments and figure out how to make changes and deploy them safely. Sometimes the repetition is really the lesser evil.
4 replies →
I keep pining for a stripped-down gRPC. I like the *.proto file format, and at least in principle I like the idea of using code-generation that follows a well-defined spec to build the client library. And I like making the API responsible for defining its own error codes instead of trying to reuse and overload the transport protocol's error codes and semantics. And I like eliminating the guesswork and analysis paralysis around whether parameters belong in the URL, in query parameters, or in some sort of blob payload. And I like having a well-defined spec for querying an API for its endpoints and message formats. And I like the well-defined forward and backward compatibility rules. And I like the explicit support for reusing common, standardized message formats across different specs.
But I don't like the micromanagement of field encoding formats, and I don't like the HTTP/2 trailer-based streaming stuff that makes it impossible to directly consume gRPC APIs from JavaScript running in the browser, and I don't like the code generators that produce unidiomatic client libraries that follow Google's awkward and idiosyncratic coding standards. It's not that I don't see their value, per se. It's more that these kinds of features create major barriers to entry for both users and implementers. And they are there to solve problems that, as the continuing predominance of ad-hoc JSON slinging demonstrates, the vast majority of people just don't have.
2 replies →
Brb, I'm off to invent another language independent IDL for API definitions that is only implemented by 2 of the 5 languages you need to work with.
I'm joking, but I did actually implement essentially that internally. We start with TypeScript files as its type system is good at describing JSON. We go from there to JSON Schema for validation, and from there to the other languages we need.
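A hedged illustration of that pipeline's intermediate artifact (the TypeScript source and the generator itself are assumed, not shown): a TS interface like `interface User { id: string; age: number }` might come out the other end as the JSON Schema below, which any of the other languages can then validate against. The tiny validator here covers only a toy subset of JSON Schema.

```python
# Hypothetical output of a TypeScript -> JSON Schema pipeline.
user_schema = {
    "type": "object",
    "required": ["id", "age"],
    "properties": {"id": {"type": "string"}, "age": {"type": "number"}},
}

def validate(doc, schema):
    """Tiny subset of JSON Schema validation: required keys and scalar types."""
    type_map = {"string": str, "number": (int, float), "object": dict}
    if not isinstance(doc, type_map[schema["type"]]):
        return False
    if any(k not in doc for k in schema.get("required", [])):
        return False
    return all(
        isinstance(doc[k], type_map[sub["type"]])
        for k, sub in schema.get("properties", {}).items() if k in doc
    )

print(validate({"id": "u1", "age": 42}, user_schema))  # True
print(validate({"id": 1, "age": 42}, user_schema))     # False (id not a string)
```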
8 replies →
It's not that we like it, it's just that most other solutions are so complex and difficult to maintain that repetition is really not that bad a thing.
I was however impressed with FastAPI, a Python framework which brought together API implementation, data types and generating Swagger specs in a very nice package. I still had to take care of integration tests by myself, but with pytest that's easy.
So there are some solutions that help avoid schema duplication.
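The idea FastAPI popularized can be sketched without the framework itself (this is not FastAPI's implementation, just the shape of it): the same type annotations drive both validation and the generated spec, so there is one source of truth instead of two.

```python
from dataclasses import dataclass, fields

# Sketch of "types as the single schema source": one dataclass yields
# both the runtime shape and an OpenAPI-style schema fragment.
@dataclass
class Item:
    name: str
    price: float

def schema_for(cls):
    """Derive a minimal JSON-Schema-ish dict from a dataclass's annotations."""
    py_to_json = {str: "string", float: "number", int: "integer", bool: "boolean"}
    return {
        "type": "object",
        "required": [f.name for f in fields(cls)],
        "properties": {f.name: {"type": py_to_json[f.type]} for f in fields(cls)},
    }

print(schema_for(Item))
```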
1 reply →
My experience is that all of these layers have identical data models when a project begins, and it seems like you have a lot of boilerplate to repeat every time to describe "the same thing" in each layer.
But then, as the project evolves, you actually discover that these models have specific differences in different layers, even though they are mostly the same, and it becomes much harder to maintain them as {common model} + {differences}, than it is to just admit that they are just different related models.
For some examples of very common differences:
- different base types required for different languages (particularly SQL vs MDW vs JavaScript)
- different framework or language-specific annotations needed at different layers (public/UNIQUE/needs to start with a capital letter/@Property)
- extra attached data required at various layers (computed properties, display styles)
- object-relational mismatches
The reality is that your MDW data model is different from your Database schema and different from your UI data model (and there may be multiple layers as well in any of these). Any attempt to force them to conform to be kept automatically in sync will fail, unless you add to it all of the logic of those differences.
1 reply →
Having 12 different independent copies means nobody on your 30 people multi-region team is blocked.
I remember getting my hands on a CORBA specification back as a wide-eyed teen thinking there is this magical world of programming purity somewhere: all 1200 pages of it, IIRC (not sure what version).
And then you don't really need most of it, and one thing you need is so utterly complicated, that it is stupid (no RoI) to even bother being compliant.
And truly, less is more.
I'm not super familiar with SOAP and CORBA, but how is SOAP any more coherent than a "RESTful" API? It's basically just a bag of messages. I guess it involves a schema, but that's not more coherent imo, since you just end up with specifics for every endpoint anyways.
CORBA is less "incoherent", but I'm not sure that's actually helpful, since it's still a huge mess. You can most likely become a lot more proficient with RESTful APIs and be more productive with them, much faster than you could with CORBA. Even if CORBA is extremely well specified, and "RESTful" is based more on vibes than anything specific.
Though to be clear I'm talking about the current definition of REST APIs, not the original, which I think wasn't super useful.
SOAP, CORBA and such have a theory for everything (say, authentication). It's hard to learn that theory, you have to learn a lot of it to be able to accomplish anything at all, you have to deal with build and tooling issues, and if you look closely there will be all sorts of WTFs. Developers of standards like that are always implementing things like distributed garbage collection and distributed transactions, which are invariably problematic.
Circa 2006 I was working on a site that needed to calculate sales tax and we were looking for an API that could help with that. One vendor uses SOAP which would have worked if we were running ASP.NET but we were running PHP. In two days I figured out enough to reverse engineer the authentication system (docs weren't quite enough to make something that worked) but then I had more problems to debug. A competitive vendor used a much simpler system and we had it working in 45 min -- auth is always a chokepoint because if you can't get it working 100% you get 0% of the functionality.
HTTP never had an official authentication story that made sense. According to the docs there are basic, digest, etc. Have you ever seen a site that uses them? The world quietly adopted cookie-based auth that was an ad-hoc version of JSON Web Tokens; once we got an intellectually coherent spec, snake oil vendors could spam HN with posts about how bad JWT is because... it had a name and numerous specifics to complain about.
Look at various modern HTTP APIs and you see auth is all across the board. There was the time I did a "shootout" of roughly 10 visual recognition APIs, I got all of them working in 20-30 mins except for Google where I had to install a lot of software on my machine, trashed my Python, and struggled mightily because... they had a complex theory of authentication which was a barrier to doing anything at all.
Worse is better.
4 replies →
What RPC mechanisms, in your opinion, are the most ergonomic and why?
(I have been offering REST’ish and gRPC in software I write for many years now. With the REST’ish api generated from the gRPC APIs. I’m leaning towards dropping REST and only offering gRPC. Mostly because the generated clients are so ugly)
Just use gRPC or ConnectRPC (which is basically gRPC but over regular HTTP). It's simple and rigid.
REST is just too "floppy", there are too many ways to do things. You can transfer data as a part of the path, as query parameters, as POST fields (in multiple encodings!), as multipart forms, as streaming data, etc.
31 replies →
Amen. Particularly ISO8601.
Always thought that a standard like ISO 8601 which always stores the date and time in UTC but appends the local time zone would be beneficial.
6 replies →
ISO8601 is really broad with loads of edge cases and differing versions. RFC 3339 is closer, but still with a few quirks. Not sure why we can't have one of these that actually has just one way of representing each instant.
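The "one way of representing each instant" ask can be met by convention today: always serialize in UTC with an explicit offset, which is valid under both RFC 3339 and ISO 8601. A small sketch using only the standard library:

```python
from datetime import datetime, timezone, timedelta

def to_rfc3339_utc(dt):
    """Normalize any aware datetime to a single canonical form:
    UTC, second precision, explicit +00:00 offset."""
    return dt.astimezone(timezone.utc).isoformat(timespec="seconds")

# 09:30 at UTC-5 and 14:30 at UTC are the same instant; both normalize
# to the same string, so string equality means instant equality.
local = datetime(2024, 3, 1, 9, 30, 0, tzinfo=timezone(timedelta(hours=-5)))
s = to_rfc3339_utc(local)
print(s)  # 2024-03-01T14:30:00+00:00
back = datetime.fromisoformat(s)  # round-trips losslessly as an instant
```

What this deliberately drops is the original local offset; the comment above wishing for "UTC plus the local time zone appended" would need a second field, since one timestamp string can't carry both canonically.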
Related: https://ijmacd.github.io/rfc3339-iso8601/
1 reply →
That would be solved if JSON had a native date type in ISO format.
6 replies →
> Fielding won the war
It’s a bit odd to say Fielding “won the war” when for years he had a blog pointing out all the APIs doing RPC over HTTP and calling it REST.
He formalised a concept and gave it a snappy name, and then the concept got left behind and the name stolen away from the purpose he created it for.
If that’s what you call victory, I guess Marx can rest easy.
> He formalised a concept and gave it a snappy name, and then the concept got left behind and the name stolen away from the purpose he created it for.
I'm not sure the "name was stolen"; it's more that the zealot concept never got any traction in production environments, due to all the problems it creates.
1 reply →
I mean, HTTP is an RPC protocol. It has methods and arguments and return types.
What I object to about eg xml-rpc is that it layers a second RPC protocol over HTTP so now I have two of them...
> I sympathize with the pedantry here and found Fielding's paper to be interesting, but this is a lost battle.
Why do people feel compelled to even consider it to be a battle?
As I see it, the REST concept is useful, but the HATEOAS detail ends up having no practical value and creates more problems than the ones it solves. This is in line with the Richardson maturity model[1], where the apex of REST includes all the HATEOAS bells and whistles.
Should REST without HATEOAS classify as REST? Why not? I mean, what is the strong argument to differentiate an architectural style that meets all but one requirement? And is there a point to this nitpicking if HATEOAS is practically irrelevant and the bulk of RESTful APIs do not implement it? What's the value in this nitpicking? Is there any value in citing the thesis as if it were a Monty Python skit?
[1] https://en.wikipedia.org/wiki/Richardson_Maturity_Model
For me the battle is with people who want to waste time bikeshedding over the definition of "REST" and whether the APIs are "RESTful", with no practical advantages, and then having to steer the conversation--and their motivation--towards more useful things without alienating them. It's tiresome.
It was buried towards the bottom of the article, but the reason, to me:
Clients can be almost automatic with a HATEOAS implementation, because it is a self-describing protocol.
Of course, Open API (and perhaps to some extent now AI) also mean that clients don't need to be written they are just generated.
However it is important perhaps to remember the context here: SOAP is and was terrible, but for enterprises that needed a complex and robust RPC system, it was beginning to gain traction. HATEOAS is a much more general yet simple and comprehensive system in comparison.
Of course, you don't need any of this. So people built the APIs they did need, which were not RESTful but had an acronym that their bosses thought sounded better than SOAP, and the rest is history.
1 reply →
Then let developer-Darwin win and fire those people. Let the natural selection of the hiring process win against pedantic assholes. The days are too short to argue over issues that are not really issues.
Can we just call them HTTP APIs?
Defining media types seems right to me, but what ends up happening is that you use swagger instead to define APIs and out the window goes HATEOAS, and part of the reason for this is just that defining media types is not something people do (though they should).
Basically: define a schema for your JSON, use an obvious CRUD mapping to HTTP verbs for all actions, use URI local-parts embedded in the JSON, use standard HTTP status codes, and embed more error detail in the JSON.
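One plausible reading of that recipe, sketched in Python (the resource name and error envelope are illustrative, not a standard):

```python
# The "obvious CRUD mapping": each action gets a verb and a URI pattern.
CRUD_TO_HTTP = {
    "create": ("POST",   "/widgets"),
    "read":   ("GET",    "/widgets/{id}"),
    "update": ("PUT",    "/widgets/{id}"),
    "delete": ("DELETE", "/widgets/{id}"),
}

def error_body(status, detail):
    """Standard status code on the wire, richer error detail in the JSON."""
    return {"status": status, "detail": detail}

print(CRUD_TO_HTTP["read"])            # ('GET', '/widgets/{id}')
print(error_body(404, "no such widget"))
```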
> (...) and part of the reason for this is just that defining media types is not something people do (...)
People do not define media types because it's useless and serves no purpose. They define endpoints that return specific resource types, and clients send requests to those endpoints expecting those resource types. When a breaking change is introduced, backend developers simply provide a new version of the API where a new endpoint is added to serve the new resource.
In theory, media types would allow the same endpoint to support multiple resource types. Services would send specific resource types to clients if they asked for them by passing the media type in the Accept header. That is all fine and dandy, except this forces endpoints to support an ever more complex content negotiation scheme that no backend framework comes close to supporting, and it brings absolutely no improvement in the way clients are developed.
So why bother?
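The content-negotiation burden described above looks roughly like this even in a deliberately naive sketch (q-values, wildcarded subtypes, and parameter matching are all ignored here, and real Accept handling requires them):

```python
def negotiate(accept_header, available):
    """Naive Accept-header matching: return the first requested media type
    the server can produce, or None (which should become a 406)."""
    for wanted in (part.split(";")[0].strip() for part in accept_header.split(",")):
        if wanted == "*/*" and available:
            return available[0]
        if wanted in available:
            return wanted
    return None

print(negotiate("application/vnd.acme.user+json, application/json",
                ["application/json"]))  # application/json
print(negotiate("text/html", ["application/json"]))  # None -> 406
```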
>the HATEOAS detail ends up having no practical value and creates more problems than the ones it solves.
Many server-rendered websites support REST by design: a web page with links and forms is the state transferred to client. Even in SPAs, HATEOAS APIs are great for shifting business logic and security to server, where it belongs. I have built plenty of them, it does require certain mindset, but it does make many things easier. What problems are you talking about?
complexity
1 reply →
We should probably stop calling the thing that we call REST, REST and be done with it - it's only tangentially related to what Fielding tried to define.
> We should probably stop calling the thing that we call REST (...)
That solves no problem at all. We have Richardson maturity model that provides a crisp definition, and it's ignored. We have the concept of RESTful, which is also ignored. We have RESTless, to contrast with RESTful. Etc etc etc.
None of this discourages nitpickers. They are pedantic in one direction, and so lax in another direction.
Ultimately it's all about nitpicking.
> Why do people feel compelled to even consider it to be a battle?
Because words have specific meanings. There’s a specific expectation when using them. It’s like if someone said “I can’t install this app on my iPhone” but then they have an android phone. They are similar in that they’re both smartphones and overall behave and look similar, but they’re still different.
If you are told an api is restful there’s an expectation of how it will behave.
> If you are told an api is restful there’s an expectation of how it will behave.
And today, for most people in most situations, that expectation doesn’t include anything to do with HATEOAS.
Words derive their meaning from the context in which they are (not) used, which is not fixed and often changes over time.
Few people actually use the word RESTful anymore, they talk about REST APIs, and what they mean is almost certainly very far from what Roy had in mind decades ago.
People generally do not refer to all smartphones as iPhones, but if they did, that would literally change the meaning of the word. Examples: Zipper, cellophane, escalator… all specific brands that became ordinary words.
> but the HATEOAS detail ends up having no practical value and creates more problems than the ones it solves.
Only because we never had the tools and resources that, say, GraphQL has.
And now everyone keeps re-inventing half of HTTP anyway. See this diagram https://raw.githubusercontent.com/for-GET/http-decision-diag... (docs https://github.com/for-GET/http-decision-diagram/tree/master...) and this: https://github.com/for-GET/know-your-http-well
> Only because we never had the tools and resources that, say, GraphQL has.
GraphQL promised to solve real-world problems.
What real-world problems does HATEOAS address? None.
1 reply →
I’m with you. HATEOAS is great when you have two independent (or more) enterprise teams with PMs fighting for budget.
When it’s just yours and your two pizza team, contract-first-design is totally fine. Just make sure you can version your endpoints or feature-flag new API’s so it doesn’t break your older clients.
> Should REST without HATEOAS classify as REST? Why not?
Because what got backnamed HATEOAS is the very core of what Fielding called REST: https://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypert...
Everything else is window dressing.
> Why do people feel compelled to even consider it to be a battle?
Because September isn't just for users.
HATEOAS adds lots of practical value if you care about discoverability and longevity.
Discoverability by whom, exactly? Like if it's for developer humans, then good docs are better. If it's for robots, then _maybe_ there's some value... But in reality, it's not for robots.
HATEOAS solves a problem that doesn't exist in practice. Can you imagine an API provider being like, "hey, we can go ahead and change our interface...should be fine as long as our users are using proper clients that automatically discover endpoints and programmatically adapt accordingly"? Or can you imagine an API consumer going, "well, this HTTP request delivers the data we need, but let's make sure not to hit it directly -- instead, let's recursively traverse a graph of requests each time to make sure this is still the way to do it!"
24 replies →
For most APIs that doesn’t deliver any value which can’t be gained from API docs, so it’s hard to justify. However, these days it could be very useful if you want an AI to be able to navigate your API. But MCP has the spotlight now.
5 replies →
LLMs also appear to have an easier time consuming it (not surprisingly.)
To me, the most important nuance really is that just like "hypermedia links" (encoded as different link types, either with the Link HTTP header or within the returned results) are "generic" (think of that "activate" link), so is REST as done today: if you messed up and the proper action should not be "activate" but "enable", you are in no better position than having to change from /api/v1/account/ID/activate to /api/v2/account/ID/enable.
You still have to "hard code" somewhere what action anything needs to do over an API (and there is more missing metadata, like icons, translations for action description...).
Mostly to say that any thought of this approach being more general is only marginal, and really an illusion!
While I ask people whether they actually mean REST according to the paper or not, I am one of the people who refuse to just move on. The reason being that the mainstream use of the term doesn’t actually mean anything, it is not useful, and therefore not pragmatic at all. I basically say “so you actually just mean some web API, ok” and move on with that. The important difference being that I need to figure out the peculiarities of each such web API.
>> The important difference being that I need to figure out the peculiarities of each such web API
So if they say it is Roy Fielding certified, you would not have to figure out any "peculiarities"? I'd argue that creating a typical OpenAPI style spec which sticks to standard conventions is more professional than creating a pedantically HATEOAS API. Users of your API will be confused and confusion leads to bugs.
op's article could've been plucked from 2012 - this is one of my favorite rest rants from 2012: https://mikehadlow.blogspot.com/2012/08/rest-epic-semantic-f...
..that was written before swagger/openAPI was a thing. now there's a real spec with real adoption and real tools and folks can let the whole rest-epic-semantic-fail be an early chapter of web devs doing what they do (like pointing at remotely relevant academic paper to justify what they're doing at work)
So you enjoy being pedantic for the sake of being pedantic? I see no useful benefit either from a professional or social setting to act like this.
I don’t find this method of discovery very productive and often regardless of meeting some standard in the API the real peculiarities are in the logic of the endpoints and not the surface.
I can see a value in pedantry in a professional setting from a signaling point of view. It's a cheap way to tell people "Hey! I'm not like those other girls, I care about quality," without necessarily actually needing to do the hard work of building that quality in somewhere where the discerning public can actually see your work.
(This is not a claim that the original commenter doesn't do that work, of course, they probably do. Pedants are many things but usually not hypocrites. It's just a qualifier.)
You'd still probably rather work with that guy than with me, where my preferred approach is the opposite of pedantry. I slap it all together and rush it out the door as fast as possible.
1 reply →
What some people call pedantic, others may call precision. I normally just call the not-quite-REST API styles as simply "HTTP APIs" or even "RPC-style" APIs if they use POST to retrieve data or name their routes in terms of actions (like some AWS APIs).
1 reply →
REST is pretty much impossible to adhere to for any sufficiently complex API and we should just toss it in the garbage
100%. The needs of the client rule, and REST rarely meets the challenge. When I read the title, I was like "pfff", REST is crap to start with, why do I care?
REST means, generally, HTTP requests with json as a result.
It also means they made some effort to use appropriate http verbs instead of GET/POST for everything, and they made an effort to organize their urls into patterns like `/things/:id/child/:child_id`.
It was probably an organic response to the complexity of SOAP/WSDL at the time, so people harping on how it's not HATEOAS kinda miss the historical context; people didn't want another WSDL.
9 replies →
I also view it as inevitable.
I can count on one hand the number of times I've worked on a service that can accurately be modeled as just representational state transfer. The rest have at least some features that are inherently, inescapably some form of remote procedure call. Which the original REST model eschews.
This creates a lot of impedance mismatch, because the HTTP protocol's semantics just weren't designed to model that kind of thing. So yeah, it is hard to figure out how to shoehorn that into POST/GET/PUT/DELETE and HTTP status codes. And folks who say it's easy tend to get there by hyper-focusing on that one time they were lucky enough to be working on a project where it wasn't so hard, and dismissing as rare exceptions the 80% of cases where it did turn out to be a difficult quagmire that forced a bunch of unsatisfying compromises.
Alternatively you can pick a protocol that explicitly supports RPC. But that's not necessarily any better because all the well-known options with good language support are over-engineered monstrosities like GRPC, SOAP, and (shudder) CORBA. It might reduce your domain modeling headaches, but at the cost of increased engineering and operations hassle. I really can't blame anyone for deciding that an ad-hoc, ill-specified, janky application of not-actually-REST is the more pragmatic option. Because, frankly, it probably is.
xml-rpc (before it transmogrified into SOAP) was pretty simple and flexible. Still exists, and there is a JSON variant now too. It's effectively what a lot of web APIs are: a way to invoke a method or function remotely.
HTTP/JSON API works too, but you can assume it's what they mean by REST.
It makes me wish we'd stuck with XML-based stuff; it had proper standards, strictly enforced by libraries that get confused by things not following the standards. HTTP/JSON APIs are often hand-made and hand-read, NIH syndrome running rampant because it's perceived to be so simple and straightforward. To the point of "we don't need a spec, you can just see the response yourself, right?". At least that was the state ~2012; nowadays they use an OpenAPI spec, but it's often incomplete, regardless of whether it's handmade (in which case people don't know everything they have to fill in) or generated (in which case the generators will often have limitations and MAYBE support for some custom comments that can fill in the gaps).
> HTTP/JSON API works too, but you can assume it's what they mean by REST.
This is the kind of slippery slope where pedantic nitpickers thrive. They start to complain that if you accept any media type other than JSON then it's not "REST-adjacent" anymore, because JSON is in the name and some bloke wrote down somewhere that JSON was a trait of this architectural style.
In this sense, the term "RESTful" is useful to shut down these pedantic nitpickers. It's "REST-adjacent" still, but the right answer to nitpicking is "who cares".
> The start to complain that if you accept any media type other than JSON then it's not "REST-adjacent" anymore because JSON is in the name and some bloke wrote down somewhere that JSON was a trait of this architectural style.
wat?
Nowhere is JSON in the name of REpresentational State Transfer. Moreover, sending other representations than JSON (and/or different presentations in JSON) is not only acceptable, but is really a part of REST
3 replies →
This. Or maybe we should call it "Rest API" in lowercase, meaning not the state transfer, but the state of mind, where developer reached satisfaction with API design and is no longer bothered with hypermedia controls, schemas etc.
Assuming the / was meant to describe it as both an HTTP API and a JSON API (rather than HTTP API / JSON API) it should be JSON/HTTP, as it is JSON over HTTP, like TCP/IP or GNU/Linux :)
I recall having to maintain an integration to some obscure SOAP API that ate and spit out XML with strict schemas and while I can't remember much about it, I think the integration broke quite easily if the other end changed their API somehow.
> it had proper standards
Lol. Have you read them?
SOAP in particular can really not be described as "proper".
It had the advantage that the API docs were always generated, and thus correct, but the most common outcome was one software stack being unable to use a service built with another stack.
> The team constantly bikesheds over correct status codes and at least a few are used contrary to the HTTP spec
I had to chuckle here. So true!
This is very true. Over my 15 years of engineering, I have never suffered _that_ much with integrating with an API (assuming it exists). So the lack of HATEOAS hasn't even been noticeable for me. As long as they get the basic status codes right (specifically 200, 401, 403, 429) I usually have no issues integrating, and I don't even notice that they don't have some "discoverable API". As long as I can get the data I need or can make the update I need, I am fine.
I think good rest api design is more a service for the engineer than the client.
> As long as they get most of the 400 status codes right (specifically 200, 401, 403, 429)
A client had built an API that would return 200 on broken requests. We pointed it out and asked if maybe it could return 500, to make monitoring easier. Sure thing, next version: "HTTP 200 - 500", they just wrote 500 in the message body; the return code remained 200.
Some developers just do not understand http.
I just consumed an API where errors were marked with a "success": false field.
The "success" is never true. If it's successful, it's not there. Also, a few endpoints return 500 instead, because of course they do. Oh, and one returns nothing on error and data on success, because, again, of course it does.
Anyway, if you want a clearer symptom that your development stack is shit and has way too much accidental complexity, there isn't any.
1 reply →
Ive seen this a few times in the past but for a different reason. What would happen in these cases was that internally there’d be some cascade of calls to microservices that all get collected. In the most egregious examples it’s just some proxy call wrapping the “real” response.
So it becomes entirely possible to get a 200 from the thing responding to you, but it may be wrapping an upstream error that gave it a 500.
1 reply →
I've had frontend devs ask for this, because it was "easier" to handle everything in the same then callback. They wanted me to put ANY error stuff as a payload in the response.
{ "statusCode": 200, "error" : "internal server error" }
Nice.
> So the lack of "HATEOaS" hasn't even been noticable for me.
I think HATEOAS tackles problems such as API versioning, service discovery, and state management in thin clients. API versioning is trivial to manage with sound API Management policies, and the remaining problems aren't really experienced by anyone. So you end up having to go way out of your way to benefit from HATEOAS, and you require more complexity both on clients and services.
In the end it's a solution searching for problems, and no one has those problems.
It isn't clear that HATEOAS would be better. For instance:
>>Clients shouldn’t assume or hardcode paths like /users/123/posts
Is it really a net improvement to return something like the following just so you can change the URL structure?
"_links": { "posts": { "href": "/users/123/posts" } }
I mean, so what? We've created some indirection so that the URL can change (e.g. "/u/123/posts").
Yes, so the link doesn't have to be relative to the current host. If you move user posts to another server, the href changes, nothing else does.
If suddenly a bug is found that lets people iterate through users that aren't them, you can encrypt the url, but nothing else changes.
The bane of the life of backend developers is frontend developers that do dumb "URL construction" which assumes that the URL format never changes.
It's brittle and will break some time in the future.
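The argument above reduces to a one-function client convention (sketch; the `_links` shape follows the HAL style shown in the parent comment):

```python
def follow(resource, rel):
    """Read the link the server sent instead of hard-coding the URL format."""
    return resource["_links"][rel]["href"]

# The server is free to rename /users/123/posts to /u/123/posts (or move it
# to another host, or sign the URL); this client code never changes.
user = {"id": "123", "_links": {"posts": {"href": "/u/123/posts"}}}
print(follow(user, "posts"))  # /u/123/posts
```

The counterpoint from upthread still applies: the relation name ("posts") is itself hard-coded, so the indirection only buys you freedom over the URL shape, not over the vocabulary.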
1 reply →
I use the term "HTTP API"; it's more general. Context, in light of your definition: in many cases labeled "REST", there will only be POST, or POST and GET, and HTTP 200 status with an error in JSON is used instead of HTTP status codes. Your definition makes sense as a weaker form of the original, but it is still too strict compared to how the term is used. "REST" = "HTTP with JSON bodies" is the most practical definition I have.
>HTTP 200 status with an error in JSON is used instead of HTTP status codes
This is a bad approach. It prevents your frontend proxies from handling certain errors better. Such as: caching, rate limiting, or throttling abuse.
On the other hand, functional app returning http errors clouds your observability and can hide real errors. It's not always ideal for the client either. 404 specifically is bad. Do I have a wrong id, wrong address, is it actually 401/403, or is it just returned by something along the way? Code alone tells you nothing, might as well return 200 for a valid request that was correctly processed.
(devil's advocate, I use http codes :))
> HTTP 200 status with an error in JSON is used instead of HTTP status codes
I've seen some APIs that not only always return a 200 code, but will include a response in the JSON that itself indicates whether the HTTP request was successfully received, not whether the operation was successfully completed.
Building usable error handling with that kind of response is a real pain: there's no single identifier that indicates success/failure status, so we had to build our own lookup table of granular responses specific to each operation.
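The lookup-table workaround might look like this rough sketch (operation names and envelope fields are all made up):

```python
# Hypothetical sketch: the API always returns HTTP 200, and each
# operation signals success differently, so we need a per-operation
# predicate to normalize responses into a uniform ok/error result.

SUCCESS_MARKERS = {
    "create_user": lambda body: body.get("result") == "created",
    "get_balance": lambda body: "balance" in body,
}

def normalize(operation, body):
    """Map an operation-specific envelope to {"ok": bool, "body": ...}."""
    is_ok = SUCCESS_MARKERS[operation](body)
    return {"ok": is_ok, "body": body}

assert normalize("create_user", {"result": "created"})["ok"] is True
assert normalize("get_balance", {"error": "no such account"})["ok"] is False
```

Every new endpoint means another entry in the table, which is exactly the pain being described.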
> I can safely assume [...] CRUD actions are mapped to POST/GET/PUT/DELETE
Not totally sure about that - I think you need to check what they decided about PUT vs PATCH.
Isn't that fairly straightforward? PUT for full updates and PATCH for partial ones. Does anybody do anything different?
PUT for partial updates, yes, constantly. What i worked with last week: https://docs.gitlab.com/api/projects/#edit-a-project
That's straightforwardly 'correct' and Fielding's thesis, yes. Yes people do things differently!
Lots of people make PUTs that work like PATCHes and it drives me crazy. Same with people who use POST to retrieve information.
5 replies →
You sweet summer child.
It's always better to use GET/POST exclusively. The verb mapping was theoretical from someone who didn't have to implement. I've long ago caved to the reality of the web's limited support for most of the other verbs.
Agreed... in most large, non-trivial systems, REST ends up looking/devolving closer and closer to RPC, and you end up just using GET and POST for most things, so you end up with a REST-ish RPC system in practice.
REST purists will not be happy, but that's reality.
What is the limited support for CONNECT/HEAD/OPTIONS/PUT/DELETE ?
3 replies →
> - The team constantly bikesheds over correct status codes and at least a few are used contrary to the HTTP spec
I really wish people just used 200 status code and put encoded errors in the payloads themselves instead of trying to fuse the transport layer's (which HTTP serves as, in this case) concerns with the application's concerns. Seriously, HTTP does not mandate that e.g. "HTTP/1.1 503 Ooops\r\n\r\n" should be stuffed into the TCP's RST packet, or into whatever TLS uses to signal severe errors, for bloody obvious reasons: it doesn't belong there.
Like, when you get a 403/404 error, it's very bloody difficult to tell apart the "the reverse proxy before the server is misconfigured and somebody forgot to expose the endpoint" and "the server executed your request to look up an item perfectly fine: the DB is functional, and the item you asked for is not in there" scenarios. And yeah, of course I could (and should) look at and try to parse the response's body but why? This "let's split off the 'error code' part of the message from the message and stuff it somewhere into the metadata, that'll be fine, those never get messed up or used for anything else, so no chance of confusion" approach just complicates things for everyone for no benefit whatsoever.
The point of status codes is to have a standard that any client can understand. If you have a load balancer, the load balancer can take unhealthy backends out of rotation based on the status code. Similarly if you have some job scheduler or workflow engine that's calling your API, they can execute an appropriate retry strategy based on the status code. The client in most cases does not care about why something failed, only whether it has failed. Being able to tell apart if the failure was due to reverse proxy or database or whatever is the server's concern and the server can always do that with its own custom error codes.
> The client in most cases does not care about why something failed, only whether it has failed.
"...and therefore using different status codes in the responses is mostly pointless. Therefore, use 200 and put "s":"error" in the response".
> Being able to tell apart if the failure was due to reverse proxy or database or whatever is the server's concern.
One of the very common failures is for the request to simply never reach "the server". In my experience, one of the very first steps in improving the error handling quality (on the client's side) is to start distinguishing between the low-level errors of "the user has literally no Internet connection" and "the user has connected somewhere, but that thing didn't really speak the server protocol", and the high-level errors of "the client has talked with the application server (using the custom application protocol and everything), and there was an error on the application server's side". Using HTTP status codes for both low- and high-level errors makes such distinctions harder to figure out.
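A rough sketch of that three-way split on the client side (the response shape and the "s" field are hypothetical, echoing the `"s":"error"` envelope mentioned upthread):

```python
# Hypothetical client-side classification: transport vs. protocol
# vs. application errors. The response object shape is made up.

class TransportError(Exception): pass      # no connection at all
class ProtocolError(Exception): pass       # reached *something*, not our app
class ApplicationError(Exception): pass    # our server answered, with an error

def classify(response):
    """response: None if the connection failed entirely, else a dict
    with "status" and "body"."""
    if response is None:
        raise TransportError("no Internet connection at all")
    body = response.get("body")
    if not isinstance(body, dict) or "s" not in body:
        # Reached something, but it didn't speak our application protocol:
        # a misconfigured proxy, a WAF error page, a captive portal...
        raise ProtocolError("unexpected response, status %s" % response["status"])
    if body["s"] == "error":
        raise ApplicationError(body.get("msg", "unknown application error"))
    return body  # the high-level, application-level payload
```

Each exception type then gets its own handling strategy (retry, report connectivity, show the app's error message).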
2 replies →
What is an unhealthy request? Is searching for a user which was _not found_ by the server unhealthy? Was the request successful? That's where different opinions exist.
1 reply →
Eh, if you're doing RPC where the whole request/response are already in another layer on top of HTTP, then sure, 200 everything.
But to me, "REST" means "use the HTTP verbs to talk about resources". The whole point is that for resource-oriented APIs, you don't need another layer. In which case serving 404s for things that don't exist, or 409s when you try to put things into a weird state makes perfect sense.
Hell yeah. IMO we should collectively get over ourselves and just agree that what you describe is the true, proper, present-day meaning of "REST API".
> The team constantly bikesheds over correct status codes and at least a few are used contrary to the HTTP spec
401 Unauthorized. When the user is unauthenticated.
403 Forbidden. When the user is unauthorized.
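In code form, roughly: 401 means "who are you?", 403 means "I know who you are, and no" (the helper below is hypothetical):

```python
# Sketch of the 401-vs-403 rule: unauthenticated vs. unauthorized.

def status_for(user, resource_owner):
    if user is None:
        return 401  # no/invalid credentials: authenticate first
    if user != resource_owner:
        return 403  # authenticated, but not permitted to do this
    return 200

assert status_for(None, "alice") == 401
assert status_for("bob", "alice") == 403
assert status_for("alice", "alice") == 200
```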
Yeah
I can assure you very few people care
And why would they? They're getting value out of this and it fits their head and model view
Sweating over this takes you nowhere
I really hate my conclusions here, but from a limited freedom point of view, if all of that is going to happen...
> The team constantly bikesheds over correct status codes and at least a few are used contrary to the HTTP spec
So we'd better start with a standard scaffolding for the replies so we can encode the errors and forget about status codes. So the only thing generating an error status is an unhandled exception mapped to 500. That's the one design that survives people disagreeing.
> There's a decent chance listing endpoints were changed to POST to support complex filters
So we'd better just standardize that lists support both GET and POST from the beginning. While you are there, also accept queries on both the url and body parameters.
The world would be lovely if we could have standard error, listing responses, and a common query syntax.
I haven't done REST apis in a while, but I came across this recently for standardizing the error response: https://www.rfc-editor.org/rfc/rfc9457.html
I really like the idea of a type URL.
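For reference, a minimal problem-details body per that RFC looks like this (served as application/problem+json; the values below echo the RFC's own "out of credit" example, and the type URL is illustrative):

```python
import json

# A minimal RFC 9457 "problem details" response body. The "type" URL
# identifies the problem class; "instance" identifies this occurrence.
problem = {
    "type": "https://example.com/probs/out-of-credit",
    "title": "You do not have enough credit.",
    "status": 403,
    "detail": "Your current balance is 30, but that costs 50.",
    "instance": "/account/12345/msgs/abc",
}
print(json.dumps(problem, indent=2))
```

The type URL is what makes errors machine-distinguishable without bikeshedding over which of the ~40 status codes fits best.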
> - CRUD actions are mapped to POST/GET/PUT/DELETE
Agree on your other three but I've seen far too many "REST APIs" with update, delete & even sometimes read operations behind a POST. "SOAP-style REST" I like to call it.
Do you care? From my point of view, post, put, delete, update, and patch all do the same. I would argue that if there is a difference, making the distinction in the url instead of the request method makes it easier to search code and log. And what's the correct verb anyway?
So that's an argument that there may be too many request methods, but you could also argue there aren't enough. But then standardization becomes an absolute mess.
So I say: GET or POST.
> From my point of view, post, put, delete, update, and patch all do the same.
That's how we got POST-only GraphQL.
In HTTP (and hence REST) these verbs have well-defined behaviour, including the very important things like idempotence and caching: https://github.com/for-GET/know-your-http-well/blob/master/m...
7 replies →
I agree. From what I have seen in corporate settings, using anything more than GET/POST takes the time to deploy the API to a different level. Using PUT, PATCH etc. typically involves firewall changes that may take weeks or months to get approved and deployed, followed by a never-ending audit/re-justification process.
> Do you care?
I don't. I could deliver a diatribe on how even the common arguments for differentiating GET & POST don't hold water. HEAD is the only verb with any mild use in the base spec.
On the other hand:
> correct status codes and at least a few are used contrary to the HTTP spec
This is a bigger problem than verb choice & something I very much care about.
I actually had to change an API recently TO this. The request payload was getting too big, so we needed to send it via POST as a body.
> even sometimes read operations behind a POST
Even worse than that, when an API like the Pinboard API (v1) uses GET for write operations!
I work with an API that uses GET for delete :)
Sounds about right. I've been calling this REST-ish for years and generally everyone I say that to gets what I mean without much (any) explanation.
> The team constantly bikesheds over correct status codes and at least a few are used contrary to the HTTP spec
I've done this enough times that now I don't really bother engaging. I don't believe anyone gets it 100% correct ever. As long as there is nothing egregiously incorrect, I'll accept whatever.
> I sympathize with the pedantry here and found Fielding's paper to be interesting, but this is a lost battle.
True. Losing hacking/hacker was sad but I can live with it - crypto becoming associated with scam coins instead of cryptography makes me want to fight.
Exactly. What you describe is how I see REST being used today and I wish people accepted the semantic shift and stopped with their well-ackshually. It serves nothing.
I have seen monstrosities claiming to be rest that use HTTP but actually have a separate set of action verbs, nestled inside of HTTP's.
In a server holding a "deck of cards," there might be a "HTTP GET <blah-de-blah>/shuffle.html" call with the side-effect of performing a server-side randomization operation.
I just made that up because I don't want to impugn anyone. But I've seen API sets full of nonsense just like that.
> Like Agile, CI or DevOps you can insist on the original definition or submit to the semantic diffusion and use the terms as they are commonly understood
This is an insightful observation. It happens with pretty much everything
As it has been happening recently with the term vibecoding. It started with some definition, and now it’s morphed into more or less just meaning ai-assisted coding. Some people don’t like it[1]
1: https://simonwillison.net/2025/Mar/19/vibe-coding/
As long as it's not SOAP, it's great.
If I never have to use SOAP again in my life, I will die a happy man.
100% agreed, “language evolves”
This article also tries to make the distinction of not focusing on the verbs themselves. That the RESTful dissertation doesn’t focus on them.
The other side of this is that the IETF RESTful proposals from 1999 that talk about the protocol for implementation are just incomplete. The obscure verbs have no consensus on their implementation, and libraries across platforms may do PUT, PATCH, DELETE incompatibly. This is enough reason to just stick with GET and POST and not try to be a strict REST adherent, since you'll hit a wall.
Haha, our API still returns XML. At least, most of the endpoints do. Not the ones written by that guy who thinks predictability in an API is lower priority than modern code, those ones return JSON.
I present to you this monstrosity: https://stackoverflow.com/q/39110233
Presumably they had an existing API, and then REST became all the rage, so they remapped the endpoints and simply converted the XML to JSON. What do you do with the <tag>value</tag> construct? Map it to the name `$`!
Congratulations, we're REST now, the world is a better place for it. Off to the pub to celebrate, gents. Ugh.
I think people tend to forget these things are tools, not shackles
RESTful has gone far beyond the http world. It's the new RPC with JSON payload for whatever. I use it on embedded systems that have no database at all; POST/GET/PUT/DELETE etc. are perfectly simple to map onto WRITE/READ/MODIFY/REMOVE commands. As long as the API is documented, I don't really care about its http origins.
Importantly for the discussion, this also doesn't mean the push for REST APIs was a failure. Sure, we didn't end up with what was precisely envisioned from that paper, but we still got a whole lot better than CORBA and SOAP.
The lowest common denominator in the REST world is a lot better than the lowest common denominator in SOAP world, but you have to convince the technically literate and ideological bunch first.
We still have gRPC though...
the last point got me.
How can you idiomatically do a read only request with complex filters? For me both PUT and POST are "writable" operations, while "GET" are assumed to be read only. However, if you need to encode the state of the UI (filters or whatnot), it's preferred to use JSON rather than query params (which have length limitations).
So ... how does one do it?
One uses POST and recognizes that REST doesn't have to be so prescriptive.
The part of REST to focus on here is that the response from earlier well-formed requests will include all the forms (and possibly scripts) that allow for the client to make additional well-formed requests. If the complex filters are able to be made with a resource representation or from the root index, regardless of HTTP methods used, I think it should still count as REST (granted, HATEOAS is only part of REST but I think it should be a deciding part here).
When you factor in the effects of caching by intermediate proxy servers, you may find yourself adapting any search-like method to POST regardless, or at least GET with params, but you don't always want to, or can't, put the entire formdata in params.
Plus, with the vagaries of CSRF protections, per-user rate-limiting, access restrictions, etc., your GET is likely to turn into a POST for anything non-trivial. I wouldn't advise trying for pure REST-ful on the merits of its purity.
POST the filter; the server responds with a link to a newly created query resource. And then you can make GET request calls against that resource.
It adds in some data expiration problems to be solved, but its reasonably RESTful.
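Sketched with an in-memory stand-in for the server (all names and routes here are hypothetical):

```python
import uuid

# Hypothetical sketch of the POST-then-GET pattern: POST a filter,
# get back a Location pointing at a saved query resource, then GET it.

queries = {}  # server-side store of saved filter definitions

def post_filter(filter_body):
    """POST /queries -> 201 Created with Location: /queries/{id}"""
    qid = str(uuid.uuid4())
    queries[qid] = filter_body
    return {"status": 201, "headers": {"Location": "/queries/%s" % qid}}

def get_results(location):
    """GET /queries/{id} -> run the saved filter (cacheable, shareable)."""
    qid = location.rsplit("/", 1)[-1]
    return {"status": 200, "filter": queries[qid], "items": []}

resp = post_filter({"status": "active", "created_after": "2025-01-01"})
results = get_results(resp["headers"]["Location"])
assert results["filter"]["status"] == "active"
```

The expiration problem mentioned above is the `queries` store: something has to decide when saved filters can be garbage-collected.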
This has RESTful aesthetics but it is a bit impractical if a read-only query changes state on the server, as in creating the uuid-referenced resource.
1 reply →
Isn't this twice as slow? If your server is far away, it would double load times.
1 reply →
There was a proposal[1] a while back to define a new SEARCH verb that was basically just a GET with a body for this exact purpose.
[1]: https://www.ietf.org/archive/id/draft-ietf-httpbis-safe-meth...
Similarly, a more recent proposal for a new QUERY verb: https://httpwg.org/http-extensions/draft-ietf-httpbis-safe-m...
If you really want this idiomatically correct, put the data in JSON or another suitable format, zip it, and encode it in Base64 to pass via GET as a single parameter. To hit the browser limits you would need such a big query that you may hit UX constraints earlier in many cases (2048 bytes is 50+ UUIDs or 100+ polygon points etc).
Pros: the search query is a link that can be shared, the result can be cached. Cons: harder to debug, may not work in some cases due to URI length limits.
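A sketch of that encode/decode round trip, using only the standard library (the parameter name and filter shape are made up):

```python
import base64
import json
import zlib

# Compress a JSON filter and make it URL-safe, e.g. GET /items?q=<param>.

def encode_filter(filter_obj):
    raw = json.dumps(filter_obj, separators=(",", ":")).encode()
    return base64.urlsafe_b64encode(zlib.compress(raw)).decode()

def decode_filter(param):
    return json.loads(zlib.decompress(base64.urlsafe_b64decode(param)))

f = {"ids": list(range(50)), "sort": "created_at"}
param = encode_filter(f)          # safe to embed in a query string
assert decode_filter(param) == f  # round-trips losslessly
```

Compression helps a lot here because JSON filters are highly repetitive, which is what makes 50+ UUIDs fit under typical URL limits.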
"Filters" suggests that you are trying to query. So, QUERY, perhaps? https://httpwg.org/http-extensions/draft-ietf-httpbis-safe-m...
Or stop worrying and just use POST. The computer isn't going to care.
HTML FORMs are limited to www-form-encoded or multipart. The length of the queries on a GET with a FORM is limited by intermediaries that shouldn't be limiting it. But that's reality.
Do a POST of a query document/media type that returns a "Location" that contains the query resource that the server created as well as the data (or some of it) with appropriate link elements to drive the client to receive the remainder of the query.
In this case, the POST is "writing" a query resource to the server and the server is dealing with that query resource and returning the resulting information.
Soon, hopefully, QUERY will save us all. In the meantime, simply using POST is fine.
I've also seen solutions where you POST the filter config, then reference the returned filter ID in the GET request, but that often seems like overkill even if it adds some benefits.
Haha yes! Is it even a dev team if they haven't had an overly heated argument about which 4xx code to return for an error state?
I describe mine as a JSON-Based Representational State SOAP API to other internal teams. When their eyes cross I get to work sifting through the contents of their pockets for linting errors and JIRA tickets.
I've been doing web development for more than a decade and I still can't figure out what REST actually means, it's more of a vibe.
When I think about some of the RESTy things we do like return part of the response as different HTTP codes, they don't really add that much value vs. keeping things on the same layer. So maybe the biggest value add so far is JSON, which thanks to its limited nature prevents complication, and OpenAPI ecosystem which grew kinda organically to provide pretty nice codegen and clients.
More complexity lessons here: look at oneOf support in OpenAPI implementations, and you will find half of them flat out don't have it, and the other half are buggy even in YOTL 2025.
> I've been doing web development for more than a decade and I still can't figure out what REST actually means, it's more of a vibe.
While I generally agree that REST isn't really useful outside of academic thought experiments: I've been in this about as long as you have, and it really isn't hard. Try reading Fielding's paper once; the ideas are sound and easy to understand, it's just a different vision of the internet than the one we ended up creating.
You can also read Fielding’s old blog posts. He used to write about it a lot before before he stopped blogging.
>- There's a decent chance listing endpoints were changed to POST to support complex filters
Please. Everyone knows they tried to make the complex filter work as a GET, then realized the filtering query is so long that it breaks whatever WAF or framework is being used because they block queries longer than 4k chars.
this is most probably a 90% hit
[flagged]
I disagree. It's a perfectly fine approach to many kinds of APIs, and people aren't "mediocre" just for using widely accepted words to describe this approach to designing HTTP APIs.
> and people aren't "mediocre" just for using widely accepted words
If you work off "widely accepted words" when there is disagreeing primary literature, you are probably mediocre.
3 replies →
The point is lost on you though. There are REST APIs (almost none), and there are "REST APIs" - a battle cry of mediocre developers. Now go tell them their restful has nothing to do with rest. And I am now just repeating stuff said in article and in comments here.
8 replies →
I met a DevOps guy who didn't know what "dotfiles" are.
However I'd argue people who use the term the same way as everyone else are the smart ones; if you want to refer to the "real" one, just add "strict" or "real" in front of it.
I don't think we should dismiss people over drifting definitions and lack of "foundational knowledge".
This is more like people arguing over "proper" English, the point of language is to communicate ideas. I work for a German company and my German is not great but if I can make myself understood, that's all that's needed. Likewise, the point of an API is to allow programs, systems, and people to interoperate. If it accomplishes that goal, it's fine and not worth fighting over.
If my API is supposed to rely on content-type, how many different representations do I need? JSON is a given these days, and maybe XML, but why not plain text, why not PDF? My job isn't an academic paper; good enough to get the job done is going to have to be good enough.
I agree, though it would be really really nice if an http method like GET would not modify things. :)
> This is more like people arguing over "proper" English, the point of language is to communicate ideas.
ur s0 rait, eye d0nt nnno wy ne1 b0dderz tu b3 "proppr"!!!!1!!
</sarcasm>
You are correct that communication is the point. Words do communicate a message. So too does disrespect for propriety: it communicates the message that the person who is ignorant or disrespectful of proper language is either uneducated or immature, and that in turn implies that such a person’s statements and opinions should be discounted if not ignored entirely.
Words and terms mean things. The term ‘REST’ was coined to mean something. I contend that the thing ‘REST’ originally denoted is a valuable thing to discuss, and a valuable thing to employ (I could be wrong, but how easy will it be for us to debate that if we can’t even agree on a term for the thing?).
It’s similar to the ironic use of the word ‘literally.’ The word has a useful meaning, there is already the word ‘figuratively’ which can be used to mean ‘not literally’ and a good replacement for the proper meaning of ‘literally’ doesn’t spring to mind: misusing it just decreases clarity and hinders communication.
> If my API is supposed to rely on content-type, how many different representations do I need? JSON is a given anymore, and maybe XML, but why not plain text, why not PDF?
Whether something is JSON or XML is independent of the representation — they are serialisations (or encodings) of a representation. E.g. {"type": "foo","id":1}, <foo id="1"/>, <foo><id>1</id></foo> and (foo (id 1)) all encode the same representation.
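Concretely, as a toy sketch (the XML string is built by hand just for illustration):

```python
import json

# One representation, several serialisations of it.
representation = {"type": "foo", "id": 1}

as_json = json.dumps(representation)
as_xml = '<%s id="%d"/>' % (representation["type"], representation["id"])

assert as_json == '{"type": "foo", "id": 1}'
assert as_xml == '<foo id="1"/>'
```

Content negotiation picks among such encodings; the underlying representation is the same either way.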
1 reply →
> I work for a German company and my German is not great but if I can make myself understood, that's all that's needed.
Really? What if somebody else wants to get some information to you? How do you know what to work on?
1 reply →
What an incredibly bad take.