Comment by golemotron
2 years ago
> Summary: AI is introducing the third user-interface paradigm in computing history, shifting to a new interaction mechanism where users tell the computer what they want, not how to do it — thus reversing the locus of control.
Like every query language ever.
I'm not sure the distinction between things we're searching for and things we're actively making is as sharp as the author thinks.
But this is basically the absence of a query syntax: a way to query via natural language and not just get back a list of results, but have it almost synthesize an answer.
To everyone who isn't a software developer, this is a new paradigm with computers. Hell even for me as a software dev it's pretty different.
Like I'm not asking Google to find me info that I can then read and grok, I'm asking something like ChatGPT for an answer directly.
It's the difference between querying for "documentation for eslint" and asking "how do you configure eslint errors to warnings" or even "convert eslint errors to warnings for me in this file".
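(To make that concrete: the direct answer I'm after is basically a one-line severity change in the ESLint config. A rough sketch, assuming the newer flat-config format, with "no-unused-vars" standing in as an example rule:)

    // eslint.config.js (flat config): downgrade a rule from "error" to
    // "warn" by setting its severity explicitly.
    export default [
      {
        rules: {
          "no-unused-vars": "warn", // example rule that was previously reporting as an error
        },
      },
    ];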
It's a very different mental approach to many problems for me.
For years I've just typed questions, in English, into browser search bars. It works great. Maybe that is why it doesn't seem like a new paradigm to me.
Search engines like Google + countless websites outshine LLMs, and they've been around for a good 20 years. What's the added value of an LLM that you can't get with Google coupled with the internet?
Oh, yes, websites like HN, Reddit & forums create spaces where you can ask experts for targeted advice. People >> GPT; we could already ask people for help before we met GPT-4. You can always find someone online who's available to answer you, and it's free.
It is interesting to note that after 20 years of these "better than LLM" resources being available for free, there was no job crash.
It's not the same. You can try the very query above "how do you configure eslint errors to warnings".
Using Google with Bard, the regular Google search results for me are:
1) Is it possible to show warnings instead of errors on ALL of eslint rules?
2) Configure Rules - ESLint - Pluggable JavaScript Linter
3) ESLint Warnings Are an Anti-Pattern
None of them answers the question directly. Bard on the other hand returns with:
To configure ESLint errors to warnings, you can either:
- Set the severity of the rule to "warn" in your ESLint configuration file.
- Use the eslint-disable-next-line comment to disable the rule for a single line of code.
For example, to set the severity of the "no-unused-vars" rule to "warn", you...
I'm not familiar with eslint and have no idea if the answer is correct, but it's definitely more concise and to the point, and an upgrade over the regular search results.
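For what it's worth, if Bard's second suggestion is right, the per-line version would look roughly like this; I haven't verified it, and the rule name is just an example:

    // Disables one rule for just the following line (Bard's second option):
    // eslint-disable-next-line no-unused-vars
    const unused = 42;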
In your view, then, is AI best described as an incremental improvement over (say) SQL in terms of the tasks it enables users to complete?
Incremental improvement over Google search. And it's not about the tasks it enables users to complete; it's about the UI paradigm, as per the article.
Sorry for the confusion; I just view UI as being basically synonymous with task completion: in the end, the user interface is the set of tools the system gives users to complete tasks.
Since the Google search interface is meant to look like you're talking to an AI, and probably has a lot of what we'd call AI under the hood, to turn natural language prompts into a query, I'm not surprised you view it as an incremental improvement at best.
Or constraint-based programming, where some specification is given for the end result and the computer figures out how to make it happen. But that's usually a programming thing, and UIs built around it are rare.
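(A toy sketch of that idea: you state the constraints and let the machine search for values that satisfy them. Real constraint solvers are far smarter than this brute force, and the numbers are made up purely for illustration.)

    // Declare what must be true; let the program find values that satisfy it.
    type Constraint = (x: number, y: number) => boolean;

    const constraints: Constraint[] = [
      (x, y) => x + y === 10, // the two values must total 10
      (x, y) => x * y === 21, // and multiply to 21
    ];

    // Brute-force "solver": try candidates until every constraint holds.
    function solve(): [number, number] | null {
      for (let x = 0; x <= 10; x++) {
        for (let y = 0; y <= 10; y++) {
          if (constraints.every((c) => c(x, y))) return [x, y];
        }
      }
      return null;
    }

    console.log(solve()); // [3, 7]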
But I wouldn't say they were nonexistent for 60 years.