googler no longer works either. I went to use it and got no results. Since they just scrape the results page, a layout change is probably going to break everything.
This issue was opened recently for it.
https://github.com/jarun/googler/issues/306
It got updated and is working again.
Also ddgr: https://github.com/jarun/ddgr
Same author, no tracking, works over Tor.
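For instance (a hedged example, not from the ddgr docs: torsocks should be able to wrap ddgr's Python networking, though I haven't verified this against the current release):

    torsocks ddgr "hacker news"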
I like the idea behind DDGR. However, I was surprised that it doesn't offer a pager for search results. Whenever I search something, I need to use the scrollback buffer to actually see the first results... That is not very user-friendly, even for a CLI tool.
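A workaround that should help, assuming ddgr still supports the --np (no-prompt) option to print results and exit, is to pipe that non-interactive output through a pager:

    # --np is assumed here: print one batch of results and exit,
    # so the output can be paged instead of scrolled back through
    ddgr --np "search terms" | less -R

(less -R keeps any ANSI colouring intact.)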
A fundamental principle of the Web is that site-specific clients or apps shouldn't be necessary.
That they are necessary only underscores Google's massive blunder here.
Yes, there've been CLI wrappers around web queries before. Until the past few years, these simply addressed search format, URI arguments, and namespace. They launched in the user's choice of browser, text or graphical. Surfraw is the classic; I've written a few very brief bash functions for equivalent capabilities, again launching any arbitrary browser (though usually w3m, my personal preference).
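As a sketch of the sort of function I mean (the function name and the endpoint are my choices here; DuckDuckGo's HTML endpoint is one that still renders without JavaScript):

    # minimal search wrapper: crudely URL-encode the arguments,
    # then hand the results page to w3m (or any browser you prefer)
    ddg() {
        local q
        q=$(printf '%s' "$*" | sed 's/ /+/g')   # naive: only encodes spaces
        w3m "https://duckduckgo.com/html/?q=${q}"
    }

Invoked as, e.g., ddg surfraw successor.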
Now what's needed, and what you're recommending, is a content-side wrapper as well. This story ends poorly.
> A fundamental principle of the Web is that site-specific clients or apps shouldn't be necessary.
I think this ship, if it hasn't sailed already, is at least starting its engines and getting ready to leave the harbour.
User agents and servers are increasingly trending towards an adversarial relationship, where the user doesn't want to do much of what the server is asking of it. This has been true from the first pop-up blockers, through to modern adblocking and anti-tracking measures (the situation now being so bad that tracking protection is a built-in default feature in some browsers).
Eventually, a "filter out the crap" strategy becomes too onerous, an "extract what looks good" strategy starts to look better, and you end up with tools like Reader Mode. Custom clients are a natural next step - when someone gets desperate enough to write an article-dl to match youtube-dl, we'll be there.
Oh, I agree that the ship is now barely visible over the horizon.
That doesn't diminish the fact that the original intent was to have a common, freely-available mechanism for accessing, viewing, and presenting content.
(I'll probably be asked for citations. TBL has probably written on this, and Tim O'Reilly had an essay on his early response to the WWW as opposed to alternative, proprietary systems, when O'Reilly & Associates were plotting their early course.)
https://weboob.org/ will become mainstream.
> A fundamental principle of the Web is that site-specific clients or apps shouldn't be necessary.
Perhaps, but the Web now effectively requires Chrome, or things which look enough like it.