
Comment by motoboi

3 months ago

JSON REST APIs are just hypermedia REST APIs, but with the data compressed. The compression format is JSON, and the dictionary is the set of hypermedia relations previously passed to the client.

It’s 2025; the client doesn’t need to be generic and able to surf and discover the internet like it’s 2005.

The database is consumed via APIs distributed in two parts: first the client (a lib), second the data: JSON.

No, they aren’t.

https://htmx.org/essays/how-did-rest-come-to-mean-the-opposi...

Your client is already generic; you just aren't using that functionality:

https://htmx.org/essays/hypermedia-clients/

  • Maybe just give up the ghost and use a new, unambiguous term instead of REST. Devs aren't going to let go of their JSON APIs, as browsers often are not the only, or even the main, consumers of said APIs.

    Creating frameworks and standards to support "true" RESTful APIs is a noble goal, but for most people it's a matter of semantics as they aren't going to change how they are doing things.

    A list of words that have changed meaning, sometimes even to the opposite of their original meaning:

    https://ideas.ted.com/20-words-that-once-meant-something-ver...

    It seems these two discussions should not be conflated: 1. What RESTful originally meant, and 2. The value of RESTful APIs and when they should and shouldn't be used.

    • I think "both sides" right now are a bit wrong about each other. Content Negotiation is also an ancient part of REST. User Agent A prefers HTML, User Agent B prefers XML, and User Agent C prefers JSON; all of those are still valid representations of a resource, and a good REST API can deliver some or all three if it has the capability to do so. It's very much in the spirit of REST to provide HTML for your Browsers and JSON for your CLI apps and machine-to-machine calls.

      This shouldn't be a "war" between "HTML is the original REST" and "JSON is what everyone means today by REST"; it should be a shared celebration that, if these proposals pass, we can do both better together. Let User Agents negotiate their content better again. It's good for JSON APIs if the Browser User Agents "catch up" to more features of REST. The JSON APIs can sometimes better specialize in the things their User Agents need/prefer if they aren't also doing all the heavy lifting for Browsers, too. It's good for the HTML APIs if they can do more of what they were originally intended to do and rely on JS less. Servers get a little more complicated again if they are doing more Content Negotiation, but they were always that complicated before, too.

      REST says "resources"; it doesn't say what language those resources are described in, and it never has. REST means both HTML APIs and JSON APIs. (Also, XML APIs and ASN.1 APIs and Protobuf APIs and… There is no "one true" REST.)
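
      A minimal sketch of what that negotiation can look like server-side, assuming an Express-style app; the route, the Order shape, and getOrder are invented for illustration:

        // Sketch of Accept-based content negotiation (Express-style).
        // Route, data shape, and getOrder are hypothetical.
        import express from "express";

        interface Order { id: string; status: string; }

        // Stand-in for whatever lookup the server actually does.
        function getOrder(id: string): Order {
          return { id, status: "shipped" };
        }

        const app = express();

        app.get("/orders/:id", (req, res) => {
          const order = getOrder(req.params.id);

          // res.format dispatches on the request's Accept header:
          // one resource, three representations.
          res.format({
            "text/html": () =>
              res.send(`<article><h1>Order ${order.id}</h1><p>${order.status}</p>
                <a href="/orders/${order.id}/cancel" rel="cancel">Cancel</a></article>`),
            "application/json": () => res.json(order),
            "application/xml": () =>
              res
                .type("application/xml")
                .send(`<order id="${order.id}"><status>${order.status}</status></order>`),
            default: () => res.status(406).send("Not Acceptable"),
          });
        });

        app.listen(3000);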

    • In the near future I'll write a blog post about this, but the short answer is that even though more developers use REST incorrectly than not, it's still the term that best communicates our intent to the audience we are trying to reach.

      Eventually, I would like that audience to be "everyone," but for the time being, the simplest and clearest way to build on the intellectual heritage that we're referencing is to use the term the same way they did. I benefited from Carson's refusal to let REST mean the opposite of REST, just as he benefited from Martin Fowler's usage of the term, who benefited from Leonard Richardson's, who benefited from Roy Fielding's.

    • It's weird that you would argue "people won't change" at the same time as you point out how word meanings change.

      Have you forgotten how XML was all the rage not that long ago?

      Also, specific people might not change, but they do retire/die, and new generations might have different opinions...

      2 replies →

    • RESTful is an oddly-specific term, so I don't see the point of changing the meaning.

      Feel free to change the meaning of 'agile' to mean 'whatever' (which is how it's interpreted by 99.99% of the population), but leave things like RESTful alone.

      Signed, CEO of htmx

It sounds like the client you're describing is less capable than the client of 2005, and I'd be curious to hear why you think that's a good thing.

  • The problem with RESTful requiring hypermedia is that if you want to automate use of the API then you need to... define something like a schema -- a commitment to having specific links so that you don't have to scrape, or require a human, to automate use of such an API. Hypermedia is completely self-describing when you have human users involved, but not when you don't. If you insist on HATEOAS as the substrate of the API then you need to give us a schema language that we can use to automate things. Then you can have your hypermedia as part of the UI and the API.

    The alternative is to have hypermedia for the UI on the one hand, and separately JSON/whatever for the API on the other. But now you have all this code duplication. You can cure that code duplication by just using the API from JavaScript on the user-agent to render the UI from data, and now you're essentially using something like a schema, just with hand-compiled codecs doing that rendering.

    Even if you go with hypermedia, using that as your API is terribly inefficient in terms of bandwidth for bulk data, so devs invariably don't use HTML or XML or any hypermedia for bulk data. If you have a schema then you could "compress" (dehydrate) that data using something not too unlike FastInfoSet by essentially throwing away most of the hypermedia, and you can re-hydrate the hypermedia where you need it.

    So I think GP is not too far off. If we defined schemas for "pages" and used codecs generated or interpreted from those schemas, then we could get something close to ideal (a sketch of the idea follows at the end of this comment):

      - compression (though the data might still be highly compressible with zlib/zstd/brotli/whatever, naturally)

      - hypermedia

      - structured data with programmatic access methods (think XPath, JSONPath, etc.)

    The cost of this is: a) having to define a schema for every page, b) the user-agent having to GET the schema in order to "hydrate" or interpret the data. (a) is not a new cost, though a schema language understood by the user-agent is required, so we'd have to define such a language and start using it -- (a) is a migration cost. (b) is just part of implementing in the user-agent.

    This is not really all that crazy. After all, XML namespaces and Schemas are already only referenced in each document, not inlined.

    The insistence on purity (HTML, XHTML, XML) is not winning. Falling back on dehydration/hydration might be your best bet if you insist.

    Me, I'm pragmatic. I don't mind the hydration codec being written in JS and SPAs. I mean, I agree that it would be better if we didn't need that -- after all I use NoScript still, every day. But in the absence of a suitable schema language I don't really see how to avoid JS and SPAs. Users want speed and responsiveness, and devs want structured data instead of hypermedia -- they want structure, which hypermedia doesn't really give them.

    But I'd be ecstatic if we had such a schema language and lost all that JS. Then we could still have JS-less pages that are effectively SPAs if the pages wanted to incorporate re-hydrated content sent in response to a button that did a GET, say.
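
    For what it's worth, here is a minimal sketch of that dehydrate/re-hydrate idea. The schema shape, the link templates, and the field names are all invented; no existing schema language is implied:

      // Hypothetical per-page "schema" that lets a generic user-agent
      // re-hydrate compact JSON back into hypermedia. Everything here is
      // invented for illustration.

      interface PageSchema {
        idField: string;                  // which field identifies a record
        links: Record<string, string>;    // link relation -> URI template
      }

      interface Hydrated<T> {
        data: T;
        links: Record<string, string>;    // link relation -> concrete URI
      }

      // Fetched once and cached, like an XML Schema or a stylesheet.
      const orderSchema: PageSchema = {
        idField: "id",
        links: {
          self:   "/orders/{id}",
          cancel: "/orders/{id}/cancel",
          items:  "/orders/{id}/items",
        },
      };

      // "Dehydrated" payload: just data, no hypermedia -- cheap on the wire.
      const payload = [
        { id: "42", status: "shipped" },
        { id: "43", status: "pending" },
      ];

      // Re-hydration: schema + data recovers the links that a hand-written
      // hypermedia response would have carried inline.
      function rehydrate<T extends Record<string, unknown>>(
        schema: PageSchema,
        records: T[],
      ): Hydrated<T>[] {
        return records.map((record) => {
          const id = String(record[schema.idField]);
          const links: Record<string, string> = {};
          for (const [rel, template] of Object.entries(schema.links)) {
            links[rel] = template.replace("{id}", id);
          }
          return { data: record, links };
        });
      }

      console.log(rehydrate(orderSchema, payload));
      // each order now carries self/cancel/items URIs, as hypermedia would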

    • > The problem with RESTful requiring hypermedia is that if you want to automate use of the API then you need to... define something like a schema -- a commitment to having specific links so that you don't have to scrape or require a human to automate use of such an API

      We already have the schema language; it’s HTML. Stylesheets and favicons are two examples of specific links that are automatically interpreted by user-agents. Applications are free to use their own link rels. If your point is that changing the name of those rels could break automation that used them, in a way that wouldn’t break humans…then the same is true of JSON APIs as well.

      Like, the flaws you point out are legit—but they are properties of how devs are ab/using HTML, not the technology itself.
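
      To illustrate that, a small sketch of automation driven by rels, assuming a browser-style DOM (fetch/DOMParser) and a hypothetical "next" rel; a server-side script would swap in an HTML parser library:

        // Sketch: automating against an HTML API by following declared
        // link rels, the way a browser follows rel="stylesheet" or
        // rel="icon". The rel name and start URL are hypothetical.

        async function findRel(pageUrl: string, rel: string): Promise<string | null> {
          const res = await fetch(pageUrl, { headers: { Accept: "text/html" } });
          const doc = new DOMParser().parseFromString(await res.text(), "text/html");

          // The rels are the "schema": rename one and automation breaks,
          // exactly as renaming a field breaks a JSON client.
          const el = doc.querySelector(`link[rel~="${rel}"], a[rel~="${rel}"]`);
          const href = el?.getAttribute("href");
          return href ? new URL(href, pageUrl).href : null;
        }

        // Walk "next" links until the server stops advertising one.
        async function crawl(start: string): Promise<void> {
          for (let url: string | null = start; url; url = await findRel(url, "next")) {
            console.log("visiting", url);
          }
        }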

      1 reply →

    • > The alternative is to have hypermedia for the UI on the one hand, and separately JSON/whatever for the API on the other. But now you have all this code duplication.

      What code duplication? If both these APIs use the same data fetching layer, there's no code duplication; if they don't, then it's because the JSON API and the Hypermedia UI have different requirements, and can be more efficiently implemented if they don't reuse each other's querying logic (usually the case).

      What you want is some universal way to write them both, and my general stance is that usually they have different requirements, and you'll end up writing so much on top of that universal layer that you might as well have just skipped it in the first place.
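
      A minimal sketch of that point, with a hypothetical orderService shared by an HTML route and a JSON route (Express-style, names invented):

        // One data-access layer, two representations. Whether this counts
        // as "duplication" depends on how much lives above the service.
        import express from "express";

        interface Order { id: string; status: string; }

        // Shared querying logic, written once.
        const orderService = {
          async get(id: string): Promise<Order> {
            return { id, status: "shipped" };   // stand-in for a real query
          },
        };

        const app = express();

        // Hypermedia UI route.
        app.get("/orders/:id", async (req, res) => {
          const order = await orderService.get(req.params.id);
          res.send(`<h1>Order ${order.id}</h1><p>${order.status}</p>`);
        });

        // JSON API route.
        app.get("/api/orders/:id", async (req, res) => {
          res.json(await orderService.get(req.params.id));
        });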

      1 reply →

> JSON REST APIs are just hypermedia REST APIs, but with the data compressed. The compression format is JSON, and the dictionary is the set of hypermedia relations previously passed to the client.

Yes.
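
As a hedged sketch of one way to read that claim (URIs and field names invented): the hypermedia representation carries its controls inline, while the JSON representation relies on the client already knowing them.

    // The same resource twice. In the hypermedia version the "cancel"
    // relation travels with the data; in the JSON version it has been
    // "compressed out" and lives in the client library instead.

    const hypermedia = `
      <div class="order" data-id="42">
        <span class="status">shipped</span>
        <a rel="cancel" href="/orders/42/cancel">Cancel</a>
      </div>`;

    const json = { id: "42", status: "shipped" };

    // The "dictionary": link knowledge baked into the client ahead of time.
    function cancelUrl(order: { id: string }): string {
      return `/orders/${order.id}/cancel`;
    }

    console.log(cancelUrl(json)); // "/orders/42/cancel"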

> It’s 2025; the client doesn’t need to be generic and able to surf and discover the internet like it’s 2005.

No. Where the client is a user-agent (a browser sort of application), it has to be generic.

> The database is consumed via APIs distributed in two parts: first the client (a lib), second the data: JSON.

Yes-ish. If instead of a hand-coded "re-hydrator" library you had a schema whose metaschema is supported by the user-agent, then everything would be better because

a) you'd have less code,

b) you'd need a lot less dev labor (because of (a), so I repeat myself),

c) you get to have structured data APIs that also satisfy the HATEOAS concept.

Idk if TFA will like or hate that, but hey.