Comment by sershe
1 day ago
I dunno, I think it was a net negative by a large margin. 1) HTML-only Gmail shows that pretty advanced, well-made apps are possible without scripting; 2) there are very few web apps that, without JavaScript, wouldn't just be implemented as native apps with no loss of convenience; 3) OTOH, for simple apps and sites JavaScript adds inconvenience (non-standard links breaking browser features, etc.), security risks, compatibility issues, massive bloat, and tracking.
Nothing like 3 paragraphs of text that requires downloading 2 megabytes of crap, runs code from 20 sketchy-looking domains, takes 15 seconds to load, cannot be linked to, and demands you upgrade your browser. As a consolation you can have slightly slower maps in the browser instead of downloading an app, once.
I think web scripting is probably THE worst technology ever invented in the IT field. "If I ruled the world", a full ban would be better than its current state; or some AMA on steroids (plus a Jones Act) making JavaScript developers extremely rare and well paid, so that it would be limited to the best uses (as determined by the market), with better quality.
You can't think about alternate web evolution without considering (1) the early browser wars (specifically Netscape vs. IE) and (2) the need to decouple data transfer from re-rendering that led to AJAX (for network and compute performance reasons).
Folks forget that before JS was about front-end frameworks and libraries, it was enabling (as in, making the impossible possible) async data requests and page updates without requiring a full round-trip and repaint.
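A minimal sketch of that era's pattern: raw XMLHttpRequest fetching data and patching a single DOM node, no framework, no full reload. The `/inbox/count` endpoint and `#unread` element are made up for illustration, and the document and request objects are passed in rather than taken from browser globals, purely to keep the snippet self-contained:

```javascript
// Pre-framework AJAX sketch: fetch a value asynchronously and patch one
// DOM node in place instead of reloading the whole page.
// doc  -- a document-like object exposing getElementById
// xhr  -- an XMLHttpRequest-like object (injected for illustration)
function updateUnreadCount(doc, xhr) {
  xhr.open("GET", "/inbox/count", true); // true = asynchronous
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
      // Patch only the affected element -- no full round-trip and repaint.
      doc.getElementById("unread").textContent = xhr.responseText;
    }
  };
  xhr.send(null);
}
```

In a real ~2005 page this would be `new XMLHttpRequest()` (or the `ActiveXObject` fallback for old IE) against the live `document`; libraries like prototype.js mostly existed to paper over exactly those differences.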
It's difficult to conceptualize a present where that need was instead fully served by HTML+CSS, sans executable code sandbox.
What, circa-2000 IE instead pushes for some direct linking of HTML with an async, updatable data model behind the scenes? How are the two linked together? And how do developers control that?
You're correct that the main thing enabled by JS is partial updates, but the fact that it relies on JS is IMO itself in large part due to path-dependent evolution (i.e., JS was there and could be used for that, so that's what we standardized on).
Imagine instead if HTML had evolved more in the direction of enabling this exact scenario out of the box, so that e.g. you could declaratively mark a button or a link as triggering a request that updates only part of the page, with DOM diffs implemented directly by the browser, etc.
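A sketch of what that markup might look like. This is entirely hypothetical syntax (no browser implements these attributes; they roughly resemble what libraries like htmx later built in userland): the element itself declares the request URL, which fragment of the page to update, and how to swap it in.

```html
<!-- Hypothetical declarative partial update: fetch-src names the request,
     fetch-target names the node to patch, fetch-swap names the strategy.
     The browser performs the request and the DOM diff natively. -->
<button fetch-src="/inbox?page=2" fetch-target="#inbox" fetch-swap="replace">
  Next page
</button>

<div id="inbox">
  <!-- server-rendered fragment swapped in place on click -->
</div>
```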
When that data streams in though, how is a developer defining what it changes?
Or in this hypothetical is the remote server always directly sending HTML?
I wrote JavaScript before libraries; I remember when prototype.js came out and was a cool new thing, actually useful after the "client-side validation and snowflakes chasing the mouse cursor" era. I think there was a short period when it was a positive development.
It seemed so at the time, but I think it didn't work out... Why is interesting to speculate about... My pet theory is that convenient frameworks lowering the barriers were part of the problem.
I think if, in its time, JavaScript had gone the way of Java applets and ActiveX controls (and yes, I understand part of the reason those could be driven out was the availability of JavaScript), the web would be in much better shape right now. 90% of the apps (banking, email, forums, travel, etc.) and 100% of the pages would just be plainly better. For the remainder you'd just install an app, something they nag you about anyway.