Comment by ethbr1

2 days ago

You can't think about alternate web evolution without considering (1) the early browser wars (specifically Netscape vs. IE) and (2) the need to decouple data transfer from re-rendering that led to AJAX (for network and compute performance reasons).

Folks forget that before JS was front-end frameworks and libraries, it was enabling (as in, making the impossible possible) async data requests and page updates without requiring a full round-trip and repaint.
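
Concretely, that pattern looked something like this (a minimal sketch; the endpoint and element ID here are hypothetical):

```ts
// The pre-framework AJAX pattern: fetch a fragment asynchronously and
// patch a single element, with no full-page round-trip or repaint.
const xhr = new XMLHttpRequest();
xhr.open('GET', '/inbox/unread-count'); // hypothetical endpoint
xhr.onload = () => {
  const badge = document.getElementById('unread-badge');
  if (badge) badge.textContent = xhr.responseText; // update just this node
};
xhr.send();
```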

It's difficult to conceptualize a present where that need was instead fully served by HTML+CSS, sans executable code sandbox.

What if, ~2000, IE instead pushes for some direct linking of HTML with an async, updatable data model behind the scenes? How are the two linked together? And how do developers control that?

You're correct that the main thing enabled by JS is partial updates, but the fact that it relies on JS is IMO itself in large part due to path-dependent evolution (i.e. JS was there and could be used for that, so that's what we standardized on).

Imagine instead if HTML had evolved more in the direction of enabling this exact scenario out of the box, so that e.g. you could declaratively mark a button or a link as triggering a request that updates only part of the page, with DOM diffs implemented directly by the browser, etc.
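
Something in the spirit of the following, perhaps. This is only a sketch of the idea, expressed as a polyfill; the "data-update-target" attribute is invented here, not a real standard:

```ts
// Hypothetical markup:
//   <a href="/inbox?page=2" data-update-target="message-list">Next</a>
// A browser supporting this natively would need no script; this polyfill
// just illustrates the intended semantics.
document.addEventListener('click', async (ev) => {
  const link = (ev.target as Element).closest('a[data-update-target]');
  if (!(link instanceof HTMLAnchorElement)) return;
  ev.preventDefault(); // suppress the full-page navigation

  const res = await fetch(link.href);
  const target = document.getElementById(link.dataset.updateTarget!);
  if (target) target.innerHTML = await res.text(); // swap only the named element
});
```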

  • When that data streams in, though, how does a developer define what it changes?

    Or in this hypothetical is the remote server always directly sending HTML?

    • I was thinking more along the latter lines - i.e. the link/button would specify the ID of the element to update, and it would be replaced with the received HTML.

      If we're unwinding back to the early 00s, though, it could also be fetching XML from the server and then running it through the XSLT stylesheet associated with the current document to convert it to HTML, to reduce the amount of data on the wire (see the sketch below).

      The specifics could be debated here. But I'm pretty sure that a generic mechanism could be devised that'd adequately cover many use cases that require JS today.
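
      A sketch of that XSLT variant (URLs and the element ID are hypothetical; DOMParser and XSLTProcessor are the standard browser APIs of that era and still work today):

      ```ts
      // Fetch compact XML, transform it client-side with an XSLT stylesheet,
      // and insert the resulting HTML into the target element.
      async function updateViaXslt(xmlUrl: string, xslUrl: string, targetId: string) {
        const parse = (text: string) =>
          new DOMParser().parseFromString(text, 'application/xml');

        const [xml, xsl] = await Promise.all([
          fetch(xmlUrl).then(r => r.text()).then(parse),
          fetch(xslUrl).then(r => r.text()).then(parse),
        ]);

        const processor = new XSLTProcessor();
        processor.importStylesheet(xsl);
        const fragment = processor.transformToFragment(xml, document);

        const target = document.getElementById(targetId);
        if (target) target.replaceChildren(fragment); // only this subtree changes
      }

      // Hypothetical usage: refresh one region from a compact XML payload.
      updateViaXslt('/inbox.xml', '/inbox.xsl', 'message-list');
      ```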

I wrote JavaScript before libraries. I remember when prototype.js came out and was a cool new thing, actually useful after the "client-side validation and snowflakes chasing the mouse cursor" era. I think there was a short period when it was a positive development.

It seemed so at the time, but I think it didn't work out... why is interesting to speculate about. My pet theory is that convenient frameworks lowering the barriers were part of the problem.

I think that if, in its time, JavaScript had gone the way of Java applets and ActiveX controls (and yes, I understand part of the reason those could be driven out is the availability of JavaScript), the web would be in much better shape right now. 90% of the apps (banking, email, forums, travel, etc.) and 100% of the pages would just be plain better. For the remainder you'd just install an app, something they nag you about anyway.