Co-author here! I'll let the proposal mostly speak for itself but one recurring question it doesn't answer is: "how likely is any of this to happen?"
My answer is: I'm pretty optimistic! The people on WHATWG have been responsive and offered great feedback. These things take a long time but we're making steady progress so far, and the webpage linked here will have all the status updates. So, stay tuned.
Thank you for the work. It is tedious and takes a long time. I know we are getting some traction with WHATWG.
But do we know if Google or Apple have shown any interest? In the end, it could still land in the WHATWG spec with Chrome / Safari not supporting it.
Is it possible to see their feedback? Is it published somewhere public?
How much would HTMX internals change if these proposals were accepted? Is this a big simplification or a small amount of what HTMX covers?
Similarly, any interesting ways you could see other libraries adopting these new options?
i don't think it would change htmx at all, we'd probably keep the attribute namespaces separate just to avoid accidentally stomping on behavior
i do think it would reduce and/or eliminate the need for htmx in many cases, which is a good thing: the big idea w/htmx is to push the idea of hypermedia and hypermedia controls further, and if those ideas make it into the web platform so much the better
This covers a lot of the common stuff.
This is native HTMX, or at least a good chunk of the basics.
When I was reading “The future of htmx” blog post which is also being discussed on HN [1], the “htmx is the new jQuery” idea jumped out at me. Given that jQuery has been gradually replaced by native JavaScript [2], I wondered what web development could look like if htmx is gradually replaced by native HTML.
Triptych could be it, and it’s particularly interesting that it’s being championed by the htmx developers.
[2]: https://youmightnotneedjquery.com/
> I wondered what web development could look like if htmx is gradually replaced by native HTML
This perspective seems to align closely with how the creator of htmx views the relationship between htmx and browser capabilities.
1. https://www.youtube.com/watch?v=WuipZMUch18&t=1036s
2. https://www.youtube.com/watch?v=WuipZMUch18&t=4995s
this is a set of proposals by Alex Petros, in the htmx team, to move some of the ideas of htmx into the HTML spec. He has begun work on the first proposal, allowing HTML to access PUT, DELETE, etc.
https://alexanderpetros.com/triptych/form-http-methods
This is going to be a long term effort, but Alex has the stubbornness to see it through.
Congrats, you seem to be a co-author of the proposal as well, right?
i help alex out a bit, but he's the main author
In the meanwhile, I found that enabling page transitions is a progressive enhancement tweak that can go a long way in making HTML replacement unnecessary in a lot of cases.
1) Add this to your css:

  @view-transition { navigation: auto; }

2) Profit.
Well, not so fast haha. There are a few details that you should know [1].
* Firefox has not implemented this yet but it seems likely they are working on it.
* All your static assets need to be properly cached to make the best use of the browser cache.
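For example, fingerprinted static assets are commonly served with a long-lived cache header along these lines (the values here are illustrative, not from the linked docs):

  Cache-Control: public, max-age=31536000, immutable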
Also, prefetching some links on hover, like those on a navbar, is helpful.
Add a css class "prefetch" to the links you want to prefetch, then use something like this:
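A minimal sketch of the idea, assuming links marked with a "prefetch" class and injecting a <link rel="prefetch"> on first hover (class name and details are placeholders):

  <script>
    // On first hover, prefetch the link target by injecting <link rel="prefetch">
    document.querySelectorAll('a.prefetch').forEach((a) => {
      a.addEventListener('mouseenter', () => {
        const link = document.createElement('link');
        link.rel = 'prefetch';
        link.href = a.href;
        document.head.appendChild(link);
      }, { once: true });
    });
  </script>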
There's more work on prefetching/prerendering going on but it is a lil green (experimental) at the moment [2].
--
1: https://developer.mozilla.org/en-US/docs/Web/CSS/@view-trans...
2: https://developer.mozilla.org/en-US/docs/Web/API/Speculation...
In many cases, browsers will also automatically perform a "smooth" transition between pages if your caching settings are done well, as described above. It's called paint holding. [0]
One of the driving ideas behind Triptych is that, while HTML is insufficient in a couple key ways, it's a way better foundation for your website than JavaScript, and it gets better without any effort from you all the time. In the long run, that really matters. [1]
[0] https://developer.chrome.com/blog/paint-holding
[1] https://unplannedobsolescence.com/blog/hard-page-load/
It's genuinely incredible that we are more than 20 years past the takeover of HTML from the W3C and there still isn't anything in the browser approaching even one tenth of the capability of XForms.
I wish the people behind this initiative luck and hope they succeed but I don't think it'll go anywhere; the browser devs gave up on HTML years ago, JavaScript is the primary language of the web.
It looks wonderful, but adoption will be a thoroughly uphill battle, whether from browsers or from existing designs and implementations on the web.
I'll be first in line to try it out if it ever materializes, though!
Looks really pragmatic and I'd be glad to see this succeed.
Is anyone able to credibly comment on the likelihood that these make it into the standard, and what the timeline might look like?
Alex is working on it now and we have contacts in the browser teams. I’m optimistic but it will be a long term (decades) project.
Good luck!
The partial page replacement in particular sounds like it might be really interesting and useful to have as a feature of HTML, though ofc more details will emerge with time.
Unless it ended up like PrimeFaces/JSF where more often than not you have to finagle some reference to a particular table row in a larger component tree, inside of an update attribute for some AJAX action and still spend an hour or two debugging why nothing works.
No, please, just no.
The idea of using PUT, DELETE, or PATCH here is entirely misguided. Maybe it was a good idea, but history has gone in a different direction so now it's irrelevant. About 20 years ago, Firefox attempted to add PUT and DELETE support to the <form> element, only to roll it back. Why? Because the semantics of PUT and DELETE are not consistently implemented across all layers of the HTTP infrastructure—proxies, caches, and intermediary systems. This inconsistency led to unpredictable failures, varying by website, network, and the specific proxy or caching software in use.
The reality we live in, shaped by decades of organic evolution, is that only GET and POST are universally supported across all layers of internet infrastructure.
Take a cue from the WHATWG HTML5 approach: create your RFC based on what is already the de facto standard: GET is for reading, and POST is for writing. The entire internet infrastructure operates on these semantics, with little to no consideration for other HTTP verbs. Trying to push a theoretically "correct" standard ignores this reality and, as people jump into the hype train, will consume significant time and resources across the industry without delivering proportional value. It's going to be XHTML all over again, it's going to be IPv6 all over again.
Please let's just use what already works. GET for reading, POST for writing. That’s all we need to define transport behavior. Any further differentiation—like what kind of read or write—is application-specific and should be decided by the endpoints themselves.
Even the <form> element’s "action" attribute is built for this simplicity. For example, if your resource is /tea/genmaicha/, you could use <form method="post" action="brew">. Voilà, relative URLs in action! This approach is powerful, practical, and aligned with the infrastructure we already rely on.
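Concretely, a minimal sketch of that example: served at /tea/genmaicha/, the relative action resolves to /tea/genmaicha/brew.

  <!-- Page served at /tea/genmaicha/ ; "brew" resolves to /tea/genmaicha/brew -->
  <form method="post" action="brew">
    <button>Brew</button>
  </form>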
Let’s not overcomplicate things for the sake of theoretical perfection. KISS.
> About 20 years ago, Firefox attempted to add PUT and DELETE support to the <form> element, only to roll it back. Why? Because the semantics of PUT and DELETE are not consistently implemented across all layers of the HTTP infrastructure—proxies, caches, and intermediary systems.
This is incorrect, according to this comment from the Firefox implementer who delayed the feature. He intended the rollback to be temporary. [0]
> The reality we live in, shaped by decades of organic evolution, is that only GET and POST are universally supported across all layers of internet infrastructure.
This is also incorrect. The organic evolution we actually have is that servers widely support the standardized method semantics in spite of the incomplete browser support. [1] When provided with the opportunity to take advantage of additional methods in the client (via libraries), developers use them, because they are useful. [2][3]
> Take a cue from the WHATWG HTML5 approach: create your RFC based on what is already the de facto standard: GET is for reading, and POST is for writing.
What you're describing isn't the de facto standard, it is the actual standard. GET is for reading and POST is for writing. The actual standard also includes additional methods, namely PUT, PATCH, and DELETE, which describe useful subsets of writing, and our proposal adds them to the hypertext.
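Roughly, the idea is that markup along these lines becomes expressible in plain HTML (a sketch; see the linked proposal for the exact attribute semantics):

  <!-- Sketch: a form issuing DELETE to the resource itself -->
  <form method="DELETE" action="/users/354">
    <button>Delete account</button>
  </form>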
> Trying to push a theoretically "correct" standard ignores this reality and, as people jump into the hype train, will consume significant time and resources across the industry without delivering proportional value. It's going to be XHTML all over again, it's going to be IPv6 all over again.
You're not making an actual argument here, just asserting that it takes time (I agree) and that it has no value (I disagree, and wrote a really long document about why).
[0] https://alexanderpetros.com/triptych/form-http-methods#ref-6
[1] https://alexanderpetros.com/triptych/form-http-methods#rest-...
[2] https://alexanderpetros.com/triptych/form-http-methods#usage...
[3] https://alexanderpetros.com/triptych/form-http-methods#appli...
> This is incorrect, according to this comment from the Firefox implementer who delayed the feature. He intended the roll back to be temporary. [0]
I see no such thing in the link you have there. #ref-6 starts with:
> [6] On 01/12/2011, at 9:57 PM, Julian Reschke wrote: "One thing I forgot earlier, and which was the reason
But the link you have there [1] does not contain any such comment. Wrong link?
[1] https://lists.w3.org/Archives/Public/public-html-comments/20...
(will reply to other points as time allows, but I wanted to point out this first)
What is the upside of a button that sends `DELETE /users/354` over one that hits an endpoint like `/users/delete?id=354`?
A GET request to `/users/delete?id=354` is dangerous. In particular, it is more vulnerable to a CSRF attack, since a form on another domain can just make a request to that endpoint, using the user's cookies.
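For example (hypothetical domains), any page the victim visits can fire that request without any interaction:

  <!-- On attacker.example: the browser sends the victim's cookies for app.example
       with this GET, absent SameSite cookie protections -->
  <img src="https://app.example/users/delete?id=354">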
It's possible to protect against this using various techniques, but they all add some complexity.
Also, the former is more semantically correct in terms of HTTP and REST.
An important consideration is also that browsers may prefetch GET requests.
Everybody has already pointed out the problem with GETTING a deletable resource, but I figured I would add this (and maybe someone will remember extra specifics).
Around 2007 or so there was a case where a site was using GET to delete user accounts. Of course, you had to be logged in to the site to do it, so what was the harm, the devs thought. However, a popular extension made by Google for Chrome started prefetching GET requests for users, so just coming into the account page where you could theoretically delete your account ended up deleting the account.
It was pretty funny, because I wasn't involved in either side of the fight that ensued.
I would provide more detail than that, but I'm finding it difficult to search for it, I guess Google has screwed up a lot of other stuff since then.
On edit: my memory must be playing tricks on me; I think it was more like 2010 or 2011 that this happened. At first I thought it happened before I started working at Thomson Reuters, but now I think it must have been within my first couple of years there.
JimDabell earlier recalled 37Signals and the Google Web Accelerator that sounds like what you mean: https://news.ycombinator.com/item?id=42619712
Hey there, good question! Probably worth reading both sections 6 and 7 for context, but I answer this question specifically in section 7.2: https://alexanderpetros.com/triptych/form-http-methods#ad-ho...
HTTP/1.1 spec, section 9.1.1 Safe Methods:
> Implementors should be aware that the software represents the user in their interactions over the Internet, and should be careful to allow the user to be aware of any actions they might take which may have an unexpected significance to themselves or others.
> In particular, the convention has been established that the GET and HEAD methods SHOULD NOT have the significance of taking an action other than retrieval.
See the "GET scenario" section of https://owasp.org/www-community/attacks/csrf to learn why ignoring the HTTP spec can be dangerous.
Or this blog post: https://knasmueller.net/why-using-http-get-for-delete-action...
What HTTP method would you expect the second example to use? `GET /users/delete?id=354`?
The first has the advantage of being a little clearer at the HTTP level with `DELETE /users/354`.
GET because that is also the default for all other elements I think. form, a, img, iframe, video...
Ok, but what is the advantage of being "clear at the HTTP level"?
implied idempotence
I'd say deleting a user is pretty idempotent: deleting twice is the same as deleting once, as long as you aren't reusing IDs or something silly like that. It's more that GET requests shouldn't have major side effects in the first place.
> giving buttons to ability
Might want to fix that. :)
I haven't seen the proposal, but buttons can already set the form method (and action, and more). So I guess the "Button HTTP Requests" will just save the need to nest one tag?
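For reference, that existing mechanism looks roughly like this (hypothetical URLs):

  <form action="/users/354">
    <!-- formmethod/formaction override the parent form's method and action -->
    <button formmethod="post" formaction="/users/354/archive">Archive</button>
  </form>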
To be clear, I was referring to the minor typo.
This proposal also includes the ability to update a target DOM element with the response from that delete action.
I love these. It's the things we've been doing (or attempting to do) with our web pages for decades. I've written tons of jQuery to do these exact things, lots of React code as well.
I think it's an uphill battle, but I am hopeful.
JSON REST APIs are just hypermedia REST APIs but with the data compressed. The compression format is JSON and the dictionary is the hypermedia relations previously passed to the client.
It's 2025; the client doesn't need to be generic and able to surf and discover the internet like it's 2005.
The database is consumed via APIs distributed in two parts: first the client (a lib), second the data: JSON.
No, they aren’t.
https://htmx.org/essays/how-did-rest-come-to-mean-the-opposi...
Your client is already generic you just aren’t using that functionality:
https://htmx.org/essays/hypermedia-clients/
Maybe just give up the ghost and use a new unambiguous term instead of REST. Devs aren't going to let go of their JSON apis as browsers often are not the only, or even main, consumers of said APIs.
Creating frameworks and standards to support "true" RESTful APIs is a noble goal, but for most people it's a matter of semantics as they aren't going to change how they are doing things.
A list of words that have changed meaning, sometimes even to the opposite of their original meaning:
https://ideas.ted.com/20-words-that-once-meant-something-ver...
It seems these two discussions should not be conflated: 1. What RESTful originally meant, and 2. The value of RESTful APIs and when they should and shouldn't be used.
It sounds like the client you're describing is less capable than the client of 2005, and I'd be curious to hear why you think that's a good thing.
The problem with RESTful requiring hypermedia is that if you want to automate use of the API then you need to... define something like a schema -- a commitment to having specific links so that you don't have to scrape or require a human to automate use of such an API. Hypermedia is completely self-describing when you have human users involved but not when you don't have human users involved. If you insist on HATEOAS as the substrate of the API then you need to give us a schema language that we can use to automate things. Then you can have your hypermedia as part of the UI and the API.
The alternative is to have hypermedia for the UI on the one hand, and separately JSON/whatever for the API on the other. But now you have all this code duplication. You can cure that code duplication by just using the API from JavaScript on the user-agent to render the UI from data, and now you're essentially using something like a schema but with hand-compiled codecs to render the UI from data.
Even if you go with hypermedia, using that as your API is terribly inefficient in terms of bandwidth for bulk data, so devs invariably don't use HTML or XML or any hypermedia for bulk data. If you have a schema then you could "compress" (dehydrate) that data using something not too unlike FastInfoSet by essentially throwing away most of the hypermedia, and you can re-hydrate the hypermedia where you need it.
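As a purely illustrative sketch (made-up data-* attributes), the "dehydrated" payload could be compact JSON like {"id": 354, "name": "genmaicha"}, re-hydrated by the user-agent through a schema-referenced template:

  <!-- Hypothetical template the schema points at; the user-agent fills the
       data-field slots from the JSON payload and expands the {id} link templates -->
  <template id="tea-page">
    <h1 data-field="name"></h1>
    <a data-href-template="/tea/{id}/reviews">Reviews</a>
    <form method="POST" data-action-template="/tea/{id}/brew"><button>Brew</button></form>
  </template>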
So I think GP is not too far off. If we defined schemas for "pages" and used codecs generated or interpreted from those schemas then we could get something close to ideal:

- compression (though the data might still be highly compressible with zlib/zstd/brotli/whatever, naturally)
- hypermedia
- structured data with programmatic access methods (think XPath, JSONPath, etc.)
The cost of this is: a) having to define a schema for every page, b) the user-agent having to GET the schema in order to "hydrate" or interpret the data. (a) is not a new cost, though a schema language understood by the user-agent is required, so we'd have to define such a language and start using it -- (a) is a migration cost. (b) is just part of implementing in the user-agent.
This is not really all that crazy. After all XML namespaces and Schemas are already only referenced in each document, not in-lined.
The insistence on purity (HTML, XHTML, XML) is not winning. Falling back on dehydration/hydration might be your best bet if you insist.
Me, I'm pragmatic. I don't mind the hydration codec being written in JS and SPAs. I mean, I agree that it would be better if we didn't need that -- after all I use NoScript still, every day. But in the absence of a suitable schema language I don't really see how to avoid JS and SPAs. Users want speed and responsiveness, and devs want structured data instead of hypermedia -- they want structure, which hypermedia doesn't really give them.
But I'd be ecstatic if we had such a schema language and lost all that JS. Then we could still have JS-less pages that are effectively SPAs if the pages wanted to incorporate re-hydrated content sent in response to a button that did a GET, say.
> JSON REST APIs are just hypermedia REST APIs but with the data compressed. The compression format is JSON and the dictionary is the hypermedia relations previously passed to the client.
Yes.
> It's 2025; the client doesn't need to be generic and able to surf and discover the internet like it's 2005.
No. Where the client is a user-agent, browser sort of application, it has to be generic.
> The database is consumed via APIs distributed in two parts: first the client (a lib), second the data: JSON.
Yes-ish. If instead of a hand-coded "re-hydrator" library you had a schema whose metaschema is supported by the user-agent, then everything would be better because
a) you'd have less code,
b) need a lot less dev labor (because of (a), so I repeat myself),
c) you get to have structured data APIs that also satisfy the HATEOAS concept.
Idk if TFA will like or hate that, but hey.