Hi Mario,
I am an engineer on Google Search frontend. Thank you for posting this, and I'm really sorry and saddened to see this broken; this is certainly not intended behavior.
I've reproduced the issue you described in the blog (Lynx does not allow clicking on the search results page). Even though Google serves valid HTML to Lynx, it's probably HTML that Lynx cannot parse, and since it used to work before, this is a regression. I filed a bug for our team to look further into it and address the root cause.
Interestingly enough, pressing <L> shows a list of links in the current document, and it does show all the available result links, so Lynx does see and parse them, it's just not rendering the links inline with the search results, so that's something we have to investigate as well.
In the meantime, as a temporary workaround, if you're open to using Chrome with a simple UI that would be amenable to a screenreader and keyboard navigation, you can use a User Agent switcher extension by Google [1] to set the user agent header to Lynx [2] and Google will serve the same HTML that it would have served to Lynx. You can then use the <Tab> key to iterate over the search results, and <Enter> to select a particular result.
I look forward to seeing this bug resolved, and will be personally following up on the status of this bug.
Again, I'm really sorry to see this, and I hope we'll be able to restore your ability to use Google with Lynx shortly!
[1] https://chrome.google.com/webstore/detail/user-agent-switche...
[2] https://developers.whatismybrowser.com/useragents/explore/so...
Thanks a lot for this positive reply! I am thrilled to read that this might be counted a regression and actually fixed. I really hope that can happen.
Regarding 'L', Lynx sometimes "hides" links if the anchor is around a div. Maybe it is just that simple. IIRC, <a href=...><div>...</div></a> will trigger a similar behaviour.
Regarding your Chrome suggestion, that really doesn't help me much since I spend 99% of my workday on a plain virtual console. The context switch of moving to another computer that runs Windows for simple searches is really not practical.
Again, thanks for spotting this and acting on it!
Hi Mario,
Your analysis is correct; the issue was due to <div> tags appearing inside <a> tags. This should be fixed now; I've verified that I can follow result links using Lynx.
Once again, my apologies for you running into this issue! Thank you for reporting & debugging it and thank you for your patience as well.
I hope this is resolved for you now; please try it out and let me know whether or not it works for you, or if you run into any other issues.
Will bet good money this is due to your user-agent-based content serving. I have similar issues with non-standard browsers I use. I don't know why Google and Google-based sites (including reCAPTCHA) are the only ones having this issue. It really is bad for the web to have this sort of user agent discrimination.
> It really is bad for the web to have this sort of user agent discrimination.
Eh, if the issue really was (as figured out below) that lynx doesn't support divs inside anchor tags, that seems like the best possible solution if you aren't going to drop lynx support altogether. Even IE6 allows that.
It just isn't worth trying to do progressive enhancement by tying everything into knots trying to keep the page strict html 4.
I feel like many of the text-mode browsers have failed to keep up with changing web standards. We were all mad when IE was holding back the Internet, and I'm not sure we should give lynx and w3m a pass because they're geek tools. (Accessibility is an important concern, but web browsers running under a GUI system support screen readers.)
https://www.brow.sh/ is a console-based browser that claims to support modern standards. Perhaps that is what we should be using.
(I am now prepared for 6 comments replying to me saying that anything that can be implemented with HTML from 1999 should be, and a list of search results can be. I guess. If all that stuff works for everyone, why did we invent new stuff? Just because? Or perhaps it wasn't really as amazing as we all remember.)
> If all that stuff works for everyone, why did we invent new stuff? Just because? Or perhaps it wasn't really as amazing as well all remember.
To better track people and push ads. It's really mostly just that. The modern web has very little to do with providing value to the end-user; any utility that's provided is mostly a side effect, and/or a vector to lure people into situations where they can be monetized.
Text browsers aren't holding the web down, they're anchoring it in the port of productivity, even as the winds of commerce desperately try to blow it onto the open seas of exploitation.
Come on, you can't be serious about this. Creating sophisticated web pages is massively easier than 10 or 20 years ago. Yes, the HTML of plain, simple text-only pages is still pretty much the same, but most users actually prefer visually fancier content with pictures and colors.
Yes, companies presenting themselves online profit from more capabilities. And yes, presenting ads is probably easier too. But if you think those changes were made just out of monetary greed, you could say the same about almost any technological advancement, like color photography or electric cars, because all of these had a commercial side to them too.
To say that text browsers are "anchoring" the web to those text only standards would imply that developers are making design decisions based on testing and feedback from text only browsers.
There is no way that the percentage of developers doing that isn't vanishingly small. Like 0.1% or less. I always chuckle when 1 person chimes in on a show hn post to complain that the site doesn't work well in lynx... Ya, I'll get right on that, top priority!
> Regarding 'L', Lynx sometimes "hides" links if the anchor is around a div. Maybe it is just that simple. IIRC, <a href=...><div>...</div></a> will trigger a similar behaviour.
I'm generally against unnecessary web complexity, but I don't understand how anyone can paint Lynx a hero for randomly ignoring anchor tags.
I embrace progressive enhancement where possible, all of my blogs/sites will load and function without Javascript. I'm not going to serve alternative HTML in a scenario like this. There has to be a give and take towards Lynx supporting objectively valid pure HTML content.
It wouldn't violate any of Lynx's pure-text principles to parse modern HTML correctly.
>Modern web has very little to do with providing value to the end-user
I disagree strongly with this. The web has moved a lot in the direction of developer experience (ES6, modules) and new capabilities (WebSockets, WebRTC, WebAudio, SVG, canvas...). Yes, most of this happened as a side effect of big surveillance-capitalism companies wanting to make that sweet, sweet digital pollen even sweeter, but that doesn't make it any less sweet just because it was made in bad faith.
> I am now prepared for 6 comments replying to me saying that anything that can be implemented with HTML from 1999 should be, and a list of search results can be.
They would be correct replies!
In addition there is this concept called "graceful degradation", where if the browser has more advanced features, you support them, otherwise you work anyway. It's not like supporting Lynx means you can't have a map in the search results when using Chrome. Certainly not for a company with the resources of Google.
Also they should probably send something like a Lynx-version of Google down to people with a poor internet connection.
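To make the graceful-degradation idea concrete, here is a minimal sketch (my own illustration, not Google's actual markup) of a results page that works in any browser and only enhances where the capability exists:

    <!DOCTYPE html>
    <html>
      <head><meta charset="utf-8"><title>Search results</title></head>
      <body>
        <!-- Plain HTML result list: renders in Lynx, w3m, or anything else. -->
        <ol>
          <li>
            <a href="https://example.com/page">Example result title</a>
            <p>Snippet text for the result.</p>
          </li>
        </ol>
        <!-- Optional enhancement: only runs where the feature exists;
             browsers without it simply keep the plain list. -->
        <script>
          if ('geolocation' in navigator) {
            // e.g. fetch and embed a map next to local results here
          }
        </script>
      </body>
    </html>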
They probably are hiring only the best machine learning, cloud-native engineers fresh out of university who have never heard of lynx and now the institution doesn’t even realize it broke support for it.
> We were all mad when IE was holding back the Internet
As I recall it, we weren't mad about IE "holding back the Internet", we were mad about IE encouraging web designers to stick a bunch of dynamic clutter such as ActiveX controls into their webpages. Largely because they created this lock-in where sites only worked well on one browser.
It turns out that JavaScript has been co-opted into being the new ActiveX, and Chrome is the new IE. But since JavaScript is nominally an open standard, and Chrome runs on the big 3 OSes, nobody seems to get mad that alternative browser projects are dying because they can't keep up with all the stuff that needs to be implemented in order to work well with sites that were only tested on Chrome and WebKit.
Amen, amen! The ignominious "best viewed in" which we fought in the Second Browser War is creeping back in - except now it says "this hour's current Google Chrome" instead of "MSIE 6".
One does not need to time travel to 1999 to accommodate text browsers, current HTML works just fine.
OTOH I don't worry too much, accessibility-enforcing laws will provide plenty of job opportunities for future developers... So yeah, good move, I guess.
> I am now prepared for 6 comments
Not sure that's the case; if so, I'd expect a comment that shows a little deeper understanding of the issues involved.
"HTML from 1999" isn't the issue. Lynx did fine on that... and HTML from 5 years ago, just like a whole host of non-visual or semi-visual user agents.
brow.sh is... OK, I guess, nice to have around in a pinch, but like most other schemes that rely on a headless full-fledged browser to work with applications that have become dependent on JS to merely render content, it introduces a glorified screen-scraping layer to something that can easily be much simpler, assuming web application developers can be bothered to think about it.
We don't need to freeze the web at 1999, and some applications fit poorly in a non-visual context. But a little reflection on the merits of progressive enhancement, and on moving forward without losing the benefits we had at that stage, would be nice. The stage where people cared about such things was pretty amazing in terms of the breadth of devices web applications would in fact work on pretty well.
Also, if you're not thinking about a pretty plain HTML version of your app, chances are half decent that you're missing an opportunity to engineer your application better, whether or not you care about UA interop.
But, you know, if you're pretty sure the browser should be "thought" about as nothing more than The VM That Lived™, by all means, carry on.
w3m gets a pass because javascript is bad. The FSF is right about that. wasm will be even worse. Not technically worse, but for removing another layer of control. DNS over HTTPS is just as bad in that sense.
If you can make something without JS you should. cryptomarketplot.com has an accessible mode FOR CRYPTO!! If crypto sites can, everybody can.
It's funny because "javascript is bad" was common geek sentiment at the turn of the century. Now it is flamebait, or being caught up in the past [itself possibly a bit of a code for ageism]. Seeing this attitude change is one of the most interesting things I've seen in tech nerd circles in the last decade or so.
The redesigned version of Google Search that is being A/B tested no longer shows result URLs. Despite being a developer, I'm anxious about clicking search results, especially because when results are filtered to be from the last day or week, they are full of phishing sites and pages with scraped content that immediately redirect to malware.
This change can't possibly be beneficial to users. It makes people even more ignorant about the technologies they depend on, and exposes them to further risk of being exploited.
UPDATE: This is the new design I've seen, the domains are missing: https://i.imgur.com/5RTdXI1.png
I'm in this A/B test too, and like you, it makes me anxious. I've become so accustomed to looking at the full URL (in green) of what I'm about to click on, that without having it there, I trust Google search results less as a whole.
The one that got me was when a search result pointed me to a site that was something like:
example.com/?ipaddr=10.3.4.3
And Google, in an attempt to be helpful, showed me this:
example.com > ...
> It makes people even more ignorant about the technologies they depend on, and exposes them to further risk of being exploited.
This is the point. If you've ever viewed an AMP site using Mobile Safari, you'll still see "google.com" in the Location Bar, instead of the site's own domain name. Google's fix for this is to try to kill the URL.
They took URLs away and then added this: https://i.imgur.com/RI4xxgs.png
For example, search "hierarchy" and it'll show "en.wikipedia.org > wiki > hierarchy" above the Wikipedia search result.
> This change can't possibly be beneficial to users.
You're right if the URL/domain isn't shown at all. But I can think of a few benefits of showing the domain as it currently does, such as helping to avoid phishing. It also basically parses the URL and interprets it for less-technical users, which is something that more-technical users are already doing when they read the URL.
I don't think it's so bad as a default if there's a config option for displaying the full url for more technical users, or the necessary data available to at least write a browser extension.
I got this crap on my work PC, and it was the last straw for me. I've switched to duckduckgo (to train myself, what I did was actually change my dynamic bookmark, so that when I type "google X" it goes to the duckduckgo search page for X instead of google, as it used to do). This morning I tried entering google and it's showing the domains again, but I don't care any more, they've lost me.
After seeing these, I switched my phone and browser to search with DDG by default. Most of the time, I don't notice, although Google definitely catches news and blogs much quicker and has a bigger shopping portfolio. Other than those two, DDG has been good enough for me.
The URL is right there, above the search result title. Just the / has been replaced by a > and it's been made more human readable. To the average person it's even more prominent now.
I'm looking forward to this change. It will incentivise websites to make their URL paths more human readable because now their
example.com > cgi > html > static > actually_human_readable_part.html
noise is seen by everyone not just weirdos that look at the URL bar like me.
You should take another look at the image—there's no domain suffix, and they've also removed any arguments in the URL, both of which are _incredibly_ important.
Speaking of recent Google changes - Has anyone else noticed Google has removed the 'sign out' link? Before, you could click the upper right corner icon and "sign out", but that is now gone and I cannot find any way to sign out of Google, anywhere!
When I click on my profile icon at the top-right, I have a sign out button at the bottom of the menu (well, technically it's "Sign out of all accounts" since I'm logged into multiple), fwiw.
This happened to me after I had formatted (new PC), and since I had just started using Firefox as my main browser and did not see this change on Chrome, I believed that Firefox had been gimped by Google in this specific manner.
Needless to say, this was the straw that made me switch to DuckDuckGo instantly, and I've been happy with it, especially with the ability to use the Google bang (!g) in the infrequent case it's required.
This has taught me a valuable thing about A/B testing though—don't make an experiment any longer than it needs to be, and a refresh should bring them back to the old behaviour, just in case it's bad enough to make them switch completely.
I wonder if Google does proper X-testing (X as in exodus), but I guess they don't care about a couple of users leaving, as they're still busy spreading through the rest of the world. I still hope that just means the clock's ticking for the next dot-com bubble to burst, so that a new generation of websites can blow up big.
The problem is that the search query does not go in the URL in the Lite version (https://duckduckgo.com/lite). That really sucks. It makes it useless in the browser history as well. In fact, because all the URLs are the same on each query, they're not even added as separate entries in the history.
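If the Lite page's search form submits via POST, that would explain it: only a GET form puts the query into the URL (and therefore into history as a distinct entry). A minimal sketch of the difference, with made-up action and field names rather than DuckDuckGo's actual form:

    <!-- GET: the query ends up in the URL, e.g. /search?q=lynx,
         so every search is its own history entry. -->
    <form action="/search" method="get">
      <input type="text" name="q">
      <button type="submit">Search</button>
    </form>

    <!-- POST: the query travels in the request body;
         the URL stays /search for every query. -->
    <form action="/search" method="post">
      <input type="text" name="q">
      <button type="submit">Search</button>
    </form>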
I've fully ditched Chrome desktop for Firefox, however.
Other than the landing page for search itself, the distinction doesn't much matter. If you set web search as a homepage or bookmark, you can definitely do it there, however.
Another person notes that ddg is inferior. Maybe people get used to a certain way of searching with Google that doesn't translate to ddg. Haven't noticed a drop in quality myself and I think I might have been retrained to use different patterns and techniques in structuring my queries.
DDG results are so much worse for me, especially anything longer tail or in Spanish, that I switch to Google when I'm actually getting work done. I find myself adding "!g" to an important search just to check for any results that DDG doesn't know about and it's almost always an upgrade to see Google's results.
Search is hard.
I don't like to chime in to say something negative about an underdog like DDG, but I see this "people probably just don't know how to use DDG" suggestion a lot and it's quite the opposite: Google feels like it can practically read my mind with minimal context, like knowing I also may be talking about a recent event that shares the name with a generic search term. And I'm not talking about personalized search.
I know you're probably annoyed that I'm telling you that you're using ddg wrong. That's not exactly what I'm saying. It's more like: we're trained to expect certain things from the search engine, and so it's hard to switch.
I assume it's the same because ddg is claiming not to affect search results by anything except time and user configuration.
Google's results (for me) are a full page of references to every different version of elm's documentation for dict. Not exactly a wide net, and frankly pretty redundant. To see anything else, I have to click at the bottom of the page. It doesn't show me the source code. I went through the first ten pages and didn't see any link to it.
For ddg, I just use the arrow key to scroll down, and I can press enter to follow the link I want, changing the meaning of "first search page" for me quite a bit.
> DDG results are so much worse for me, especially anything longer tail or in Spanish, that I switch to Google when I'm actually getting work done. I find myself adding "!g" to an important search just to check for any results that DDG doesn't know about and it's almost always an upgrade to see Google's results.
I have a completely different experience in Italian. The results are actually pretty good, which is surprising given the small audience.
For work, usually I directly search for documentation in reference systems (e.g. en.cppreference.com). Neither ddg or google will consistently direct me to the "best" documentation. YMMV.
Sure, but 90% of the time, I and most people I know don't do very sophisticated searches. I'm actually mostly using Google as a billion-dollar search engine for wikipedia / stackoverflow / arch wiki / bbc / nyt / ft / whatever big site there is in a given domain, because these sites happen to have 90% of what I'm looking for. For the rest, we all have our own little forums we follow: fb, hn, email, etc.
So instead of trying to beat Google on full web searches, the trick might actually be to index the 100 best-ranking websites according to some metric (Alexa rank, for instance) and do it better than Google. Then, maybe you can grab over 50% of the search traffic. For broader queries (in the search knowledge-graph sense), in this scenario, people would fall back to Google.
> Google feels like it can practically read my mind with minimal context, like knowing I also may be talking about a recent event that shares the name with a generic search term. And I'm not talking about personalized search.
I wish there was some compromise, because Google regularly seems to read another mind than my own, automatically "correcting" search terms to terms with similar spelling that are totally irrelevant to my search or including what is superficially synonymous but for my purposes irrelevant in the results. I frequently feel like I have to convince Google to stop second-guessing me and actually consider what I wrote rather than what it assumes I meant.
A few more knobs and switches to adjust that behavior would be helpful at least for power users.
I'll agree with this too. Thing is, 99.XX% of the time DDG works fine. 1% of the time if I can't find a thing, I try with google and probably 50% of the time I can then find what I wanted. E.g. DDG has a 99.5% success rate, Google has a 99.75% success rate. Not too bad by DDG, as I know that last .25% is REALLY hard.
Either way, google is seeing only a tiny % of my search queries, so I'm happy.
Of note, I switched to DDG earlier this year. I've tried to do it in the past and found that the DDG/Google ratios were like 80%/99+%, which is WAY too much of a tradeoff to make. DDG has MASSIVELY improved; I'm using Google search <1/day now.
Same here: 99% of the time. If I want to dig more specifically than DDG wants to go in some cases, I just add a !b in front (for Bing). I do a lot of research; haven't used Gargle for 10 years.
What many people don't realise (especially on HN) is that DDG is not as good as Google, in my experience, at finding non-English content. One of the things that stopped me was the lack of good Dutch results, which Google can pick up easily with its internal translations and whatnot.
I use DuckDuckGo when I know what the first result is likely to be. If I'm actually _searching_ for something (i.e. the majority of the time) I add !g. I can feel myself flinch every time I submit a DuckDuckGo query. It's just much, much worse.
During my first 2 weeks I definitely noticed a difference in quality, but I guess over time you get an intuition for how to combine search terms when it comes to less common queries.
Now I prefer DDG over Google, even without all the nice shortcuts.
I think Google's results have become dramatically worse, and I've become better at DDG. I now often find it difficult or impossible to find what I want on Google, and straightforward on DDG. I also use the bangs all the time now, they're great.
For my purposes its results aren't as good. But also overall it provides much better privacy and isn't intrusive.
Then - for folks who equally value these three things - ddg isn't inferior. When I can't find something on DDG, I use Google. Or if I'm pretty sure I won't find it, I start with Google. But I normally start - and end - my searches with DDG. For me DDG is not inferior to GSE.
Defaulting to simple HTML allows one to support every incredibly niche browser. That is the beauty of protocols and standards. This is particularly relevant when it comes to accessibility. Google search results are literally lists of web links, so this is absolutely doable.
They don't do it because they are more preoccupied with extracting data about their users than they are with accessibility, and yes, this includes blind people. There is no way around it.
It is perfectly legal to be selfish, but let's not bullshit ourselves about what is really going on...
Yep. Following standards has great side-effects all around. The same things that break sites for the blind also break them for UX-enhancement extensions like Tridactyl, which lets you click elements from the keyboard, so long as sites don't go out of their way to make clickable buttons undiscoverable.
(Extreme apologies for any implied equivalence between myself and the blind.)
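A rough illustration of the markup difference that matters here (my own example, not taken from any particular site): a real link or button is focusable and exposes a role, so keyboard tools and screen readers can both find it, while a bare div wired up in script is effectively invisible to them:

    <!-- Discoverable: keyboard-focusable, announced with a role,
         and hintable by Tridactyl-style extensions. -->
    <a href="/settings">Settings</a>
    <button type="submit">Save</button>

    <!-- Undiscoverable: no role, not focusable, only reachable
         with a mouse (or by scripting). -->
    <div class="btn" onclick="save()">Save</div>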
I don't feel that's fair -- yes, Lynx is not really updated much anymore and is at this point very niche. But it must work for their workflow, and I feel like something as critical as search should have a fallback that works with very 'primitive' browsers and older W3C standards.
(FWIW I work at Google, but not on the search team. I might go looking at internal discussions to see if this is being looked at at all)
Recently I read announcements that Google was making it harder to view internal projects/discussions on other teams [0] (suspiciously, this came after the leak that they were still working on a search engine for China [1]). Have you, as a Google employee, found it hard to audit or observe previously visible projects?
Perhaps you could contribute better by citing examples where Google does care about blind people instead of calling an actual blind person's argument ridiculous.
It doesn't matter if he's blind; I'm calling the connections ridiculous. Lynx isn't a browser for blind people, it's a browser for the terminal. The terminal isn't some accessibility tool either, and was never made to be one. Drawing a connection between Lynx and blindness accessibility is tenuous, saying that this change means Google is attacking Lynx is even more tenuous, and drawing some sort of transitive connection between all of them to say that the change is somehow against accessibility because it doesn't work on Lynx is doubly so, bordering on...
It should also be noted that it seems that this is a bug with Lynx, not Google.
Perhaps we could contribute ourselves instead of asking if others can.
Google seems to be a heavy user of https://developer.mozilla.org/en-US/docs/Web/Accessibility/A... which offers a lot more power and flexibility for accessibility than plain text, by making full-featured web interfaces accessible instead of serving basic versions.
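Assuming that link points to MDN's ARIA documentation, the idea is roughly markup like the following (an illustrative sketch, not Google's actual code), where a scripted widget carries explicit role and state information for assistive technology instead of relying on a stripped-down alternate page:

    <!-- To a screen reader, this is just a div with some text in it. -->
    <div class="tab">Images</div>

    <!-- With ARIA attributes, the same widget exposes a role, state,
         and keyboard focusability. -->
    <div role="tab" aria-selected="true" tabindex="0">Images</div>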
It seems that the problem is caused by the fact that Lynx isn't standards compliant anymore, and fails to interpret valid HTML5 structures correctly because they aren't valid under older standards.
What does that even mean? Which standards does it comply with? Is there a compliance test suite or report somewhere? Because there definitely are a bunch of standards it does not comply with.
Not really a bug so much as an outdated browser not being updated for HTML 5.
In HTML 4, <a href="..."><div>...</div></a> is an error and Lynx deals with this by implicitly closing the <a>, turning it into a hidden link (which can still be followed by pressing 'l').
In HTML 5, <a href="..."><div>...</div></a> is valid.
I think it is because in html4 the content of an <a> element is restricted to inline elements, whereas in html5 <a> is transparent so its content can be block elements if its parent allows them.
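Concretely, the pattern under discussion is something like this (simplified; the class names are made up):

    <!-- Valid in HTML 5, where <a> is "transparent" and may wrap block content.
         Invalid in HTML 4, where <a> may only contain inline elements. -->
    <a href="https://example.com/result">
      <div class="result">
        <h3>Result title</h3>
        <div class="snippet">Snippet text...</div>
      </div>
    </a>

An HTML 4-era parser that implicitly closes the <a> before the <div> is left with an anchor that has no visible content, which matches the "hidden link" behaviour described above.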
That would explain why my blog is terribly broken in lynx. The links from my category pages to article pages are usually done with <a href...><figure>...</figure></a>, which breaks the <a> in lynx.
> Luckily, there is duckduckgo. However, I have to admit, the search results of duckduckgo are by far inferior to what Google used to give.
For cases where DuckDuckGo isn't quite enough, I usually rely on StartPage [1]. It uses results from Google, but like DuckDuckGo it doesn't track its users.
Startpage, like DuckDuckGo, also works well in Lynx (I've just tested in Lynx 2.8.9 on Ubuntu 16.04).
Additionally, you can get StartPage results right from within DuckDuckGo by just appending the !sp shortcut to your search query [2]
Edit: you may want to keep in mind, however, that StartPage is now owned by an advertising company. Some users took issue with that. Personally, I'm OK with that as long as users are not being tracked. Relevant HN thread: https://duckduckgo.com/bang?c=Online+Services&sc=Search
DDG's results are still very bad, and they will likely always be. It's not hard to find a query where its results are objectively much worse than Google's. DuckDuckGo will never be able to catch up to Google's search quality. It doesn't have Google's data; Google relies on the search behavior of most of the people on the internet to guide its results, along with its vast human and hardware resources, to create results that even Microsoft can't match.
Did they also remove link wrapping with this? The HREF goes straight to the destination for me now on Chrome, where previously it went to some Google domain redirect. It's there on the first HTML load too, it's not a JS thing after the fact. Is there a different response for Lynx or are they formatting it in such a way that Lynx doesn't pick it up?
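For reference, the wrapped links looked roughly like this (from memory, with the tracking parameters omitted), versus the direct form being described:

    <!-- Old style: href points at a Google redirector that forwards to the destination. -->
    <a href="https://www.google.com/url?q=https://example.com/page">Example result</a>

    <!-- Direct style: href goes straight to the destination. -->
    <a href="https://example.com/page">Example result</a>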
googler no longer works either. I went to use it and got no results. Since they just scrape the results page, a layout change is probably going to break everything.
I like the idea behind ddgr. However, I was surprised that it doesn't offer a pager for search results. Whenever I search something, I need to use the scrollback buffer to actually see the first results... That is not very user-friendly, even for a CLI tool.
A fundamental principle of the Web is that site-specific clients or apps shouldn't be necessary.
That they are only underscores Google's massive blunder here.
Yes, there've been CLI wrappers around web queries before. Until the past few years, these simply addressed search format, URI arguments, and namespace. They launched in the user's choice of browser, text or graphical. Surfraw is the classic, I've written a few very brief bash functions for equivalent capabilities, again, launching any arbitrary browser (though usually w3m by my personal preference).
Now what's needed, and you're recommending, is a content-side wrapper as well. This story ends poorly.
> A fundamental principle of the Web is that site-specific clients or apps shouldn't be necessary.
I think this ship, if it hasn't sailed already, is at least starting its engines and getting ready to leave the harbour.
User agents and servers are increasingly trending towards an adversarial relationship, where the user doesn't want to do much of what the server is asking of it. This has been true from the first pop-up blockers, through to modern adblocking and anti-tracking measures (the situation now being so bad that tracking protection is a built-in default feature in some browsers).
Eventually, a "filter out the crap" strategy becomes too onerous, an "extract what looks good" strategy starts to look better, and you end up with tools like Reader Mode. Custom clients are a natural next step - when someone gets desperate enough to write an article-dl to match youtube-dl, we'll be there.
When it comes to searches on technical topics - dev-related, sci, even gov searches - duckduckgo blows google out of the water. Junk random searches, trivial things, yeah... it's weak. My experience, I should say.
Serious question, why search Google in your terminal with Lynx as opposed to `googler`? The actual result pages can still open in Lynx but the experience of navigating the results is very nice.
Not pretending to be able to answer for OP, but one answer would be that it's because lynx is a browser, and Google is a web site. Traditionally you read web sites with browsers, rather than requiring special tools for every particular site.
> However, being a blind person, I guess I have to accept that Google doesn't care anymore.
This is not a blindness issue. It would be more accurate to say that Google doesn't care about geeks who cling to old ways of doing things long after there's any good reason to do so. As others on the thread have pointed out, blind people can use graphical web browsers with a screen reader, even under GNU/Linux. I know you know this; I'm pointing it out for the benefit of everyone watching the thread.
On the other hand I don't feel like I'm in a position to blame a blind user for clinging to something they already know instead of learning how to use a graphical browser. A lot of people in general have good reasons for sticking to what they know. Especially if what they stuck to worked fine until recently, and doesn't now only for some trivial reason that's easy to fix.
> It would be more accurate to say that Google doesn't care about geeks who cling to old ways of doing things long after there's a good reason to do so.
Is it that you can't think of a good reason to use text-based browsers at all or is lynx itself the issue? command line web tools like lynx and curl are pretty handy to have available.
> However, I have to admit, the search results of duckduckgo are by far inferior to what Google used to give.
I've switched to DuckDuckGo since I read its CEO's book "Super Thinking", and I'm not feeling that it's inferior. Sure, it doesn't have rich cards and other goodies, but I've come to realize that these are nice-to-have, but not essential.
On the other hand, reducing the confirmation bias by getting out of the filter bubble is, I believe, essential.
It's been probably years since I've used Google search. It's entirely possible that I'm not getting the best results, but as far as I've been able to tell, I'm not missing much. Because I don't end up on quick answers, I end up (most of the time) going directly to reading source code and documentation. This seems to have helped create something of a better understanding of whatever tech I'm using at the time.
It hasn't worked for me for quite a while now (up to a year), even with recent versions from git [0]. My preliminary guess is that it's because Google renders each result with a <div> (block element) inside an <a> tag. I didn't have any extra spare time to test that further (and report it) though, so I simply ditched Google and just went with duck.com from that point.
Lynx is not a screen reader. (In fact, it's considerably less accessible to blind users than a typical desktop browser -- as a console application, it has no way to provide accessibility data to a screen reader.)
A screen reader is a tool like JAWS or VoiceOver which interacts with desktop software (including web browsers like Chrome or Safari) to provide information about what the user is interacting with.
This is a trend I've noticed also: behemoths breaking standards without permission or apology, crushing the canary in the coal mine (w3m/lynx, though the emacs eww browser works). As a member of an IT development team creating various web applications, we have found that different approaches are needed based on client needs and workflow. We develop many complex, data-backed query/display single-page web apps, usually with zero JS. Where it is not needed, JS is little more than a security hole begging for trouble. Where no 'live' user experience is required, it equals resource waste (extra payload, extra client CPU cycles if there's no JS blocker), especially in many work environments.
As a user/engineer I am annoyed when any site sends me their worthless JS to execute unnecessarily, wasting my device's CPU cycles, battery, plus my lifejuice - and for what??? In most cases nothing I want/desire/need, therefore just making a bigger tool of me than before, and not for the better. That being said, live/simulated data visualizations over web communications benefit greatly from JS.
Separation of concerns is what is missing. JS was created to benefit and enhance the user experience; the ne'er-do-wells have hacked it into a tool mindlessly used, and often enough it screws the user over without permission or apology...
Interestingly, the creators of what became Google received their start with NSF funding. Good way to finish off giving everyone the finger, Google; continue to 'do only evil'...
Greetings, Lynx users. There is a reason this page doesn't use ALT tags on the images.
The reason is that the bozos responsible for both MSIE and Netscape Confusicator 4.0 decided that they would display the ALT tags of images every time you move the mouse over them -- even if the images are loaded, and even if they are not links. The ALT attribute to the IMG tag is supposed to be used instead of the image, not in addition to the image.
This looks absolutely terrible, so I don't use ALT tags any more in self-defense.
If they wanted to implement tooltips, they should have used the TITLE attribute to the A tag. That's in the HTML 1.2 spec and everything.
I had to decide between making this page look good for the vast majority of viewers, or making it be readable by the minuscule minority of you stuck in the 70s. Those of you in the retro contingent lost. Sorry.
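For anyone skimming, the distinction being complained about is roughly this (my own minimal example):

    <!-- alt is replacement text: used when the image is not shown at all
         (Lynx, screen readers, broken image). -->
    <img src="logo.png" alt="Acme Corp logo">

    <!-- title is supplementary text: browsers typically surface it as a tooltip. -->
    <a href="/about" title="About Acme Corp"><img src="logo.png" alt="Acme Corp logo"></a>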
I'm Google's public liaison for search. Thank you for bringing this to our attention, and our apologies for the inconvenience caused. This was indeed a bug that our engineers have explored, and it should be fixed now.
I am willing to bet that attorneys who specialize in extorting money from ecommerce sites with fluke accessibility lawsuits will not take notice of this problem, which presents an actual hardship.
I think you may be confusing console (the terminal emulator, command line interface) with console (video game console like Xbox or Playstation or Stadia).
Screen readers are tools like JAWS or VoiceOver which interact with desktop applications, including desktop web browsers such as Chrome or Safari. Google works fine with these.
Thanks for making me aware of brow.sh. What an amazing project.
I use text browsers because they are efficient and fast. Also Static content shouldn't ideally need js because it is a security concern.
AFAIK most of us don't complain when a dynamic website doesn't work. We just use a modern browser.
> I feel like many of the text-mode browsers have failed to keep up with changing web standards
Yeah, web standards such as images and video and audio. Not keeping up with those standards is kind of the point.
How about tables, or frames? Lynx doesn't even support those.
> brow.sh
Does headless Firefox (which is what brow.sh is at its core) even launch if there's no X11 available? Is it that headless?
I haven't used it, but it works in a Docker container without any device mounting or access to the host X server, so I'd assume that it does indeed work without any X available.
I've played with it and it works over a headless server instance through an ssh session, so I think it does not require X11
It does work without an external X11 all right (trick question, actually, there's an Xvfb underneath all the turtles - so X11 actually is required, even though it's all hidden inside the container ;))
Multiple versions are being tested. The one I've seen does not contain the domain, not even in a tokenized form.
It looked like this: https://www.searchenginejournal.com/google-is-testing-search...
What happens when you click the down arrow next to the domain? Google cache/similar links?
Edit: I can see it on all my searches on https://www.google.co.uk/ it is cache/similar
I got this crap on my work PC, and it was the last straw for me. I've switched to duckduckgo (to train myself, what I did was actually change my dynamic bookmark, so that when I type "google X" it goes to the duckduckgo search page for X instead of google, as it used to do). This morning I tried entering google and it's showing the domains again, but I don't care any more, they've lost me.
After seeing these, I switched my phone and browser to search with DDG by default. Most of the time, I don't notice, although Google definitely catches news and blogs much quicker and has a bigger shopping portfolio. Other than those two, DDG has been good enough for me.
The URL is right there, above the search result title. Just the / has been replaced by a > and it's been made a more human readable. To the average person it's even more prominent now.
I'm looking forward to this change. It will incentivise websites to make their URL paths more human readable because now their
example.com > cgi > html > static > actually_human_readble_part.html
noise is seen by everyone not just weirdos that look at the URL bar like me.
You should take another look at the image—there's no domain suffix, and they've also removed any arguments in the URL, both of which are _incredibly_ important.
Speaking of recent Google changes - Has anyone else noticed Google has removed the 'sign out' link? Before, you could click the upper right corner icon and "sign out", but that is now gone and I cannot find any way to sign out of Google, anywhere!
When I click on my profile icon at the top-right, I have a sign out button at the bottom of the menu (well, technically it's "Sign out of all accounts" since I'm logged into multiple), fwiw.
This happened to me after I had formatted (new PC), and since I had just started using Firefox as my main browser and did not see this change on Chrome, I believed that Firefox had been gimped by Google in this specific manner.
Needless to say, this was the straw that made me switch to DuckDuckGo instantly, and I've been happy with it, especially with the ability to use the Google bang (!g) in the infrequent case it's required.
This has taught me a valuable thing about A/B testing though—don't make an experiment any longer than it needs to be, and a refresh should bring them back to the old behaviour, just in case it's bad enough to make them switch completely.
I wonder if Google does proper X-testing (X as in exodus), but I guess they don't care about a couple of users leaving, as they're still busy spreading through the rest of the world. I still hope that just means the clock's ticking for the next dot-com bubble to burst, so that a new generation of websites can blow up big.
At least in chrome, mousing over a hyperlink pulls up the destination address in the bottom left corner (for now).
Is there a feedback button near the bottom? Maybe you can voice your concerns.
Surely these are just all ads? If not, that's a shame.
There used to be a saying in a11y circles, "Google is a blind user".[1]
Now Google are manifestly anti-blind-user.
Shouldn't be anything a major ADA lawsuit couldn't fix.
Meantime, DDG is actually pretty damned good. For console users: https://duckduckgo.com/lite
________________________________
Notes:
1. https://www.w3.org/2006/Talks/06-08-steven-web40/
Blind users do not, as a rule, use Lynx. This is a common misconception. The Lynx interface is, in fact, very poorly suited for visually impaired users, as it relies heavily on visual layout, color, and cursor positioning to convey information.
The majority of blind users use standard desktop web browsers with screen reader addons like JAWS or VoiceOver.
The blind user in TFA might be surprised at your assertion.
Blind users typically rely on screen readers, including tools such as Emacspeak (which relies on Emacs's built-in eww browser, w3m (on which I believe eww is based), lynx, etc.).
Relying on console-based tools with text-to-speech output and typed input is fairly widespread.
The requirement that interactive content be rendered directly to speech is key.
You are falling for a pretty common misunderstanding. While many blind people use text-to-speech in combination with a classical "screen reader", that is not the end of the story. There is another major technology called Braille. And in countries where there is a good social system, blind people actually own so-called Braille displays. That applies to me, for instance. I am pretty much a pure braille user. Sometimes, when I use a Windows machine, speech will rumble along, but I really primarily rely on what I can feel beneath my fingers. And for braille display users, Lynx is really a nice option.
Are you saying the OP is lying about being blind or is a fake?
Back when I did web design stuff and I couldn't get anyone to put any effort into accessibility, I would "sell" the concept as SEO: search engines see what we put in for text, not images. Accessibility for humans is accessibility for Google.
It was pitiful that I had to do this but there you have it.
a11y = accessibility; I had to look it up so I'll save everyone else the trouble.
And it reads "ally" ?
Thanks. I should have spelled that one out.
LOL. Do you know how goddamn strict Google is, internally, regarding a11y/i18n/l10n?
The idea that your precious text browser == blind people is wildly presumptuous. Don’t use people with disabilities as your human shield.
Strict or no, Google actually fails at accessibility a lot. The basically broken, unergonomic keybindings in Docs for starters, and the entire fiasco that is Android, come to mind immediately. For a really "fun" recent fail, YouTube recommendations are now a live region. This means that if you leave that tab focused, your screen reader just starts reading things while you're trying to play music and are across the room and can't stop it. That seems like a minor example, except the whole point of YouTube is hearing things, so that's kind of a big ball for someone to drop, thinking that live regions are a good plan. Things have improved some (I no longer hate the GCP console), but my point is that being strict about accessibility doesn't mean being good at accessibility, and Google has a deservedly bad reputation in blindness circles, earned over a very long time.
Is there a way to make DDG Lite chrome's default search engine?
Chrome seems to let me create non-default search engines, set the full DDG as the default, but not set lite as the default.
The problem is that the search query does not go in the URL in the Lite version. That really sucks. It makes it useless in the browser history as well. In fact, because all the URLs are the same on each query, they're not even added as separate entries in the history.
For desktop, I believe so. Not for Android.
I've fully ditched Chrome desktop for Firefox, however.
Other than the landing page for search itself, the distinction doesn't much matter. If you set web search as a homepage or bookmark, you can definitely do it there, however.
How would I get into such accessibility circles?
A good starting point:
https://a11yproject.com/resources/
Another person notes that ddg is inferior. Maybe people get used to a certain way of searching with Google that doesn't translate to ddg. Haven't noticed a drop in quality myself and I think I might have been retrained to use different patterns and techniques in structuring my queries.
DDG results are so much worse for me, especially anything longer tail or in Spanish, that I switch to Google when I'm actually getting work done. I find myself adding "!g" to an important search just to check for any results that DDG doesn't know about and it's almost always an upgrade to see Google's results.
Search is hard.
I don't like to chime in to say something negative about an underdog like DDG, but I see this "people probably just don't know how to use DDG" suggestion a lot and it's quite the opposite: Google feels like it can practically read my mind with minimal context, like knowing I also may be talking about a recent event that shares the name with a generic search term. And I'm not talking about personalized search.
Or consider how "elm dict" in Google takes me to https://package.elm-lang.org/packages/elm/core/latest/Dict (#1 result), but https://duckduckgo.com/?q=elm+dict&t=h_&ia=web in DDG doesn't (nowhere on page 1).
Run into this enough and it becomes hard to willfully use DDG when you know you're likely missing out on good results when trying to do real work.
I know you're probably annoyed that I'm telling you that you're using ddg wrong. That's not exactly what I'm saying. It's more like: we're trained to expect certain things from the search engine, and so it's hard to switch.
> Or consider how "elm dict" in Google takes me to https://package.elm-lang.org/packages/elm/core/latest/Dict (#1 result), but https://duckduckgo.com/?q=elm+dict&t=h_&ia=web in DDG doesn't (nowhere on page 1).
ddg gives me the source code to elm Dict (8th hit, so it's on the first page): https://github.com/ivanov/Elm/blob/master/libraries/Dict.elm
I assume you see the same thing, because ddg claims its results aren't affected by anything except time and user configuration.
Google's results (for me) are a full page of references to every different version of elm's documentation for dict. Not exactly a wide net, and frankly pretty redundant. To see anything else, I have to click at the bottom of the page. It doesn't show me the source code. I went through the first ten pages and didn't see any link to it.
For ddg, I just use the arrow keys to scroll down and press enter to follow the link I want, which changes the meaning of "first search page" for me quite a bit.
> DDG results are so much worse for me, especially anything longer tail or in Spanish, that I switch to Google when I'm actually getting work done. I find myself adding "!g" to an important search just to check for any results that DDG doesn't know about and it's almost always an upgrade to see Google's results.
I have a completely different experience in Italian. The results are actually pretty good, which is surprising given the small audience.
For work, I usually search directly for documentation in reference systems (e.g. en.cppreference.com). Neither ddg nor google will consistently direct me to the "best" documentation. YMMV.
Sure, but 90% of the time, I and most people I know don't do very sophisticated searches. I'm actually mostly using google as a billion-dollar search engine for wikipedia / stackoverflow / arch wiki / bbc / nyt / ft / whatever big site there is in a given domain, because these sites happen to have 90% of what I'm looking for. For the rest, we all have our own little forums we follow: fb, hn, email, etc.
So instead of trying to beat google on full web searches, the trick might actually be to index the 100 best-ranking websites according to some metric (alexa rank, for instance) and do it better than google. Then maybe you can grab over 50% of the search traffic. For broader queries (in the search knowledge graph sense), people in this scenario would fall back to google.
> Google feels like it can practically read my mind with minimal context, like knowing I also may be talking about a recent event that shares the name with a generic search term. And I'm not talking about personalized search.
I wish there were some compromise, because Google regularly seems to read a mind other than my own, automatically "correcting" search terms to similarly spelled terms that are totally irrelevant to my search, or including results that are superficially synonymous but irrelevant for my purposes. I frequently feel like I have to convince Google to stop second-guessing me and actually consider what I wrote rather than what it assumes I meant.
A few more knobs and switches to adjust that behavior would be helpful at least for power users.
> Another person notes that ddg is inferior
I'll agree with this too. Thing is, 99.XX% of the time DDG works fine. The 1% of the time I can't find a thing, I try google, and probably 50% of the time I can then find what I wanted. E.g. DDG has a 99.5% success rate, Google has a 99.75% success rate. Not too bad by DDG, as I know that last 0.25% is REALLY hard.
Either way, google is seeing only a tiny % of my search queries, so I'm happy.
Of note, I switched to DDG earlier this year. I've tried to do it in the past and found that the DDG/Google ratios were more like 80%/99+%, which is WAY too much of a tradeoff to make. DDG has MASSIVELY improved; I'm using google search less than once a day now.
Same here: 99% of the time. If in some cases I want to dig more specifically than DDG wants to go, I just add a !b in front (for Bing). I do a lot of research; haven't used Gargle for 10 years.
What many people don't realise (especially on HN) is that DDG is not as good as Google, in my experience, at finding non-English content. One of the things that stopped me was the lack of good Dutch results, which Google can pick up easily with its internal translations and whatnot.
I use DuckDuckGo when I know what the first result is likely to be. If I'm actually _searching_ for something (i.e. the majority of the time) I add !g. I can feel myself flinch every time I submit a DuckDuckGo query. It's just much, much worse.
During my first 2 weeks I definitely noticed a difference in quality, but I guess over time you get an intuition for how to combine search terms when it comes to less common queries. Now I prefer DDG over Google, even without all the nice shortcuts.
What's really weird about DDG is that the results from the no-JavaScript version seem far inferior to the results from the JS-enabled version of DDG.
I think Google's results have become dramatically worse, and I've become better at DDG. I now often find it difficult or impossible to find what I want on Google, and straightforward on DDG. I also use the bangs all the time now, they're great.
Inferior how?
For my purposes its results aren't as good. But also overall it provides much better privacy and isn't intrusive.
Then - for folks who equally value these three things - ddg isn't inferior. When I can't find something on DDG, I use Google. Or if I'm pretty sure I won't find it, I start with Google. But I normally start - and end - my searches with DDG. For me DDG is not inferior to GSE.
> However, being a blind person, I guess I have to accept that Google doesn't care anymore.
Jumping from "this isn't working on my incredibly niche browser" to "Google don't care about blind people" is completely ridiculous.
Defaulting to simple HTML allows one to support every incredibly niche browser. That is the beauty of protocols and standards. This is particularly relevant when it comes to accessibility. Google search results are literally lists of web links, so this is absolutely doable.
They don't do it because they are more preoccupied with extracting data about their users than they are with accessibility, and yes, this includes blind people. There is no way around it.
It is perfectly legal to be selfish, but let's not bullshit ourselves about what is really going on...
Yep. Following standards has great side effects all around. The same things that break sites for the blind also break them for UX-enhancement extensions like Tridactyl, which lets you click elements from the keyboard, so long as sites don't go out of their way to make clickable buttons undiscoverable.
(Extreme apologies for any implied equivalence between myself and the blind.)
I don't feel that's fair -- yes, Lynx is not really updated much anymore and is at this point very niche. But it must work for the author's workflow, and I feel like something as critical as search should have a fallback that works with very 'primitive' browsers and older W3C standards.
(FWIW I work at Google, but not on the search team. I might go looking at internal discussions to see if this is being looked at at all)
Recently I read announcements that Google was making it harder to view internal projects/discussions on other teams [0] (suspiciously, this came after the leak that they were still working on a search browser for China [1]). Have you, as a Google employee, found it hard to audit or observe previously visible projects?
[0]: Couldn't find a quick source on this
[1]: https://theintercept.com/2019/03/04/google-ongoing-project-d...
Perhaps you could contribute better by citing examples where Google does care about blind people instead of calling an actual blind person's argument ridiculous.
It doesn't matter if he's blind; I'm calling the connections ridiculous. Lynx isn't a browser for blind people, it's a browser for the terminal. The terminal isn't some accessibility tool either, and was never made to be one. Drawing a connection between Lynx and blindness accessibility is tenuous, saying that with this change Google is attacking Lynx is even more tenuous, and drawing some sort of transitive connection between all of them to say that the change is somehow against accessibility because it doesn't work on lynx is doubly so, bordering on...
It should also be noted that it seems that this is a bug with Lynx, not Google.
There's this if you care to read it: https://www.google.com/accessibility/
But that's not the point.
Perhaps we could contribute ourselves instead of asking if others can.
Google seems to be a heavy user of ARIA (https://developer.mozilla.org/en-US/docs/Web/Accessibility/A...), which offers a lot more power and flexibility for accessibility than plain text by making full-featured web interfaces accessible rather than only basic versions.
See other methods: https://www.chromium.org/developers/design-documents/accessi...
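To make that concrete, here is a minimal ARIA sketch (the class name, handler, and label are invented for illustration): a styled <div> carries no semantics for a screen reader, while ARIA role, state, and focus attributes expose it as a real, labelled control.

    <!-- Without ARIA: a screen reader sees only anonymous text. -->
    <div class="save-btn" onclick="save()">Save</div>

    <!-- With ARIA: announced as a focusable button with a label
         (a real widget would also handle Enter/Space keypresses). -->
    <div class="save-btn" role="button" tabindex="0"
         aria-label="Save document" onclick="save()">Save</div>

    <!-- A native <button> gives all of this for free; ARIA is what lets
         rich custom widgets stay accessible when native elements won't do. -->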
How about the built-in screen reader in ChromeOS?
Lynx is niche now? Low user count, yeah, but it’s a standards-compliant browser.
The definition of niche is literally "appeals to a small, specialized section of the population" so, er, yes.
It seems that the problem is caused by the fact that Lynx isn't standards compliant anymore, and fails to interpret valid HTML5 structures correctly because they aren't valid under older standards.
Lynx is the definition of niche...
According to some other comments, the problem might indeed be that it is not a browser compliant with the current HTML standard: https://news.ycombinator.com/item?id=21629207
> it’s a standards-compliant browser.
What does that even mean? Which standards does it comply with? Is there a compliance test suite or report somewhere? Because there are definitely a bunch of standards it does not comply with.
Curious, I installed lynx just to check this out.
I find that I physically cannot navigate to the links in the page except the first few at the top.
But.
On the pages I get, the <a ... href="..." ...>...</a> structure is still 100% intact. It's buried in a table and div soup, but it's there.
So, I'd argue it's a Lynx parsing bug!
The author of this article would have done well to save and diff the working/not-working HTML they received. :(
Not really a bug so much as an outdated browser that hasn't been updated for HTML 5.
In HTML 4, <a href="..."><div>...</div></a> is an error and Lynx deals with this by implicitly closing the <a>, turning it into a hidden link (which can still be followed by pressing 'l').
In HTML 5, <a href="..."><div>...</div></a> is valid.
Google actually detects the Lynx user agent and sends an HTML 4 page, but apparently this new code wasn't written with that in mind.
I think it is because in html4 the content of an <a> element is restricted to inline elements, whereas in html5 <a> is transparent so its content can be block elements if its parent allows them.
That would explain why my blog is terribly broken in lynx. The links from my category pages to article pages are usually done with <a href...><figure>...</figure></a>, which breaks the <a> in lynx.
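A minimal sketch of the markup pattern being discussed (the URL, headings, and text are placeholders, not the actual markup Google or the blog serves):

    <!-- Valid in HTML 5: <a> is "transparent", so block children are allowed. -->
    <a href="https://example.com/page">
      <div>
        <h3>Result title</h3>
        <p>Result snippet...</p>
      </div>
    </a>

    <!-- Under HTML 4 rules, <a> may contain only inline content, so an
         HTML 4 parser like Lynx's implicitly closes the <a> at the <div>,
         leaving an empty link with nothing visible to click on (it still
         shows up in the 'l' link list). -->
    <a href="https://example.com/page"></a>
    <div>
      <h3>Result title</h3>
      <p>Result snippet...</p>
    </div>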
Why should anyone have to do that? If the site stops working for the user, then it no longer works.
Should they be expected to make their own ‘re-Googler’ to fix the page so they can use it again?
>Why should anyone have to do that?
When there's a browser bug, someone needs to debug it and fix the browser. Otherwise the browser bug will remain forever.
> Luckily, there is duckduckgo. However, I have to admit, the search results of duckduckgo are by far inferior to what Google used to give.
For cases where DuckDuckGo isn't quite enough, I usually rely on StartPage [1]. It uses results from Google, but like DuckDuckGo it doesn't track its users.
Startpage, like DuckDuckGo, also works well in Lynx (I've just tested in Lynx 2.8.9 on Ubuntu 16.04).
Additionally, you can get StartPage results right from within DuckDuckGo by just appending the !sp shortcut to your search query [2]
Edit: you may want to keep in mind, however, that StartPage is now owned by an advertising company. Some users took issue with that. Personally, I'm OK with that as long as users are not being tracked. Relevant HN thread: https://duckduckgo.com/bang?c=Online+Services&sc=Search
DDG has been getting better. I've switched my browsers search to it and only occasionally go back to Google for something specific.
DDG's results are still very bad, and they will likely always be terrible. It's not hard to find a query where DDG's results are objectively much worse than Google's. DuckDuckGo will never be able to catch up to Google's result quality. It doesn't have Google's data: Google relies on the search behavior of most of the people on the internet to guide its results, along with its vast human and hardware resources, to create results that even Microsoft can't match.
https://www.bloomberg.com/news/articles/2019-07-15/to-break-...
Or you could use searx.
Did they also remove link wrapping with this? The HREF goes straight to the destination for me now on Chrome, where previously it went to some Google domain redirect. It's there on the first HTML load too, it's not a JS thing after the fact. Is there a different response for Lynx or are they formatting it in such a way that Lynx doesn't pick it up?
They're using the ping attribute of the <a> element now, the only good thing to come out of this.
On a more substantial note, that's documented here:
https://www.w3.org/TR/2008/WD-html5-20080122/#hyperlink0
It's a little odd to see that a browser "must parse", but also "may either ignore the ping attribute altogether, or selectively ignore URIs".
It strikes me as a bit clumsy compared to the typical MUST/SHOULD/MAY wording.
Anyone (other than Google) using a-pings?
Ah yeah that's definitely going to be different HTML on Lynx, then, since I bet they don't support that and Google's not missing out on tracking.
Yup - curling with a Lynx user agent gets targets of href="/url?q=<whatever>" rather than href="<whatever>" ping="tracking"
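Putting the two observations together, a hedged sketch of the difference (all URLs here are placeholders; the real parameters aren't reproduced):

    <!-- Served to a modern browser: href points straight at the destination,
         and the browser POSTs to each URL listed in ping when the link is
         followed. -->
    <a href="https://example.com/page"
       ping="https://tracking.example/ping">Result title</a>

    <!-- Served to a Lynx user agent: no ping support assumed, so the click
         goes through an old-style redirect instead. -->
    <a href="/url?q=https://example.com/page">Result title</a>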
Hm, so now I can go straight to the Google hosted AMP version without bumping through Google an extra time first? /s
Try googler https://github.com/jarun/googler
googler no longer works either. I went to use it and got no results. Since they just scrape the results page, a layout change is probably going to break everything.
This issue was opened recently for it.
https://github.com/jarun/googler/issues/306
It got updated and is working again.
Also ddgr: https://github.com/jarun/ddgr
Same author, no tracking, works over Tor.
I like the idea behind ddgr. However, I was surprised that it doesn't offer a pager for search results. Whenever I search something, I need to use the scrollback buffer to actually see the first results... That is not very user-friendly, even for a CLI tool.
A fundamental principle of the Web is that site-specific clients or apps shouldn't be necessary.
That they have become necessary only underscores Google's massive blunder here.
Yes, there've been CLI wrappers around web queries before. Until the past few years, these simply addressed search format, URI arguments, and namespace. They launched in the user's choice of browser, text or graphical. Surfraw is the classic; I've written a few very brief bash functions for equivalent capabilities, again launching any arbitrary browser (though usually w3m, by my personal preference).
Now what's needed, and what you're recommending, is a content-side wrapper as well. This story ends poorly.
> A fundamental principle of the Web is that site-specific clients or apps shouldn't be necessary.
I think this ship, if it hasn't sailed already, is at least starting its engines and getting ready to leave the harbour.
User agents and servers are increasingly trending towards an adversarial relationship, where the user doesn't want to do much of what the server is asking of it. This has been true from the first pop-up blockers, through to modern adblocking and anti-tracking measures (the situation now being so bad that tracking protection is a built-in default feature in some browsers).
Eventually, a "filter out the crap" strategy becomes too onerous, an "extract what looks good" strategy starts to look better, and you end up with tools like Reader Mode. Custom clients are a natural next step - when someone gets desperate enough to write an article-dl to match youtube-dl, we'll be there.
> A fundamental principle of the Web is that site-specific clients or apps shouldn't be necessary.
Perhaps, but the Web now effectively requires Chrome, or things which look enough like it.
When it comes to technical searches (dev-related, scientific, even government searches), duckduckgo blows google out of the water. Junk random searches, trivial things, yeah... it's weak. My experience, I should say.
Serious question, why search Google in your terminal with Lynx as opposed to `googler`? The actual result pages can still open in Lynx but the experience of navigating the results is very nice.
Not pretending to be able to answer for OP, but one answer would be that it's because lynx is a browser, and Google is a web site. Traditionally you read web sites with browsers, rather than requiring special tools for every particular site.
Author of the post is blind so I'd imagine the usage of Lynx is for screen reading/accessibility reasons.
Spivak is asking why Mario isn't using https://github.com/jarun/googler .
If Mario is able to use Lynx, the expectation is that Mario can probably also use googler.
I would imagine that most are not aware of googler and/or do not want to use one tool to search and another tool to browse.
There's a patch over v3.9 that works with the new layout: https://github.com/jarun/googler/issues/306
Hopefully we'll have a PR merged soon.
> However, being a blind person, I guess I have to accept that Google doesn't care anymore.
This is not a blindness issue. It would be more accurate to say that Google doesn't care about geeks who cling to old ways of doing things long after there's any good reason to do so. As others on the thread have pointed out, blind people can use graphical web browsers with a screen reader, even under GNU/Linux. I know you know this; I'm pointing it out for the benefit of everyone watching the thread.
On the other hand I don't feel like I'm in a position to blame a blind user for clinging to something they already know instead of learning how to use a graphical browser. A lot of people in general have good reasons for sticking to what they know. Especially if what they stuck to worked fine until recently, and doesn't now only for some trivial reason that's easy to fix.
> It would be more accurate to say that Google doesn't care about geeks who cling to old ways of doing things long after there's a good reason to do so.
Is it that you can't think of a good reason to use text-based browsers at all, or is lynx itself the issue? Command-line web tools like lynx and curl are pretty handy to have available.
> However, I have to admit, the search results of duckduckgo are by far inferior to what Google used to give.
I've switched to DuckDuckGo since I read its CEO's book "Super Thinking", and I'm not feeling that it's inferior. Sure, it doesn't have rich cards and other goodies, but I've come to realize that these are nice-to-have, but not essential. On the other hand, reducing the confirmation bias by getting out of the filter bubble is, I believe, essential.
It's been probably years since I've used Google search. It's entirely possible that I'm not getting the results I would want, but as far as I've been able to tell, I'm not missing much. Because I don't end up on quick answers, I end up going directly to source code and documentation more often than not. This seems to have helped create something of a better understanding of whatever tech I'm using at the time.
Startpage.com, a Google proxy, still works fine in lynx.
Unfortunately, they got bought: https://news.ycombinator.com/item?id=21371577
Wow, first Private Internet Access, now startpage? What's next, the next Waterfox version will be based on Chrome!?
Starting to feel like Luke Skywalker and Princess Leia in the trash compactor; the walls are closing in.
True, and it's a shame. But it might not matter that much in this accessibility context as Google is also an advertising company.
How does Qwant perform? It's supposed to be privacy-focused like DDG and SP, and it's European, so covered by GDPR too:
https://lite.qwant.com
Hey! We will be happy to give you unlimited access to our search API: https://serpapi.com
It will be slower than regular Google but at least you are not going to be blocked.
Edit: Create an account without credit card details, send an email to julien _at_ serpapi.com with it, and I’ll make sure you have an active account.
w3m works fine for me.
Any reason people prefer lynx over w3m or eww?
I'm surprised elinks doesn't get more mention whenever text browsers come up, and I wonder why. I prefer it over the others.
It hasn't worked for me for quite a while now (close to a year), even with recent versions from git [0]. My preliminary guess is that it's because Google renders each result with a <div> (block element) inside an <a> tag. I didn't have any spare time to test that further (and report it) though, so I simply ditched Google and went with duck.com from that point.
[0] https://github.com/tats/w3m
Also works with elinks.
Indeed.
What? w3m gives me the same thing. No actual links to the results that come up. Sometimes links to some videos.
Compatibility with screen readers that blind people use.
Moreover, fuck Google and non standard practices in general.
Lynx is not a screen reader. (In fact, it's considerably less accessible to blind users than a typical desktop browser -- as a console application, it has no way to provide accessibility data to a screen reader.)
A screen reader is a tool like JAWS or VoiceOver which interacts with desktop software (including web browsers like Chrome or Safari) to provide information about what the user is interacting with.
This is a trend I've noticed also: behemoths breaking standards without permission or apology, crushing the canary in the coal mine (w3m/lynx, though the emacs eww browser works). As an IT development team creating various web applications, we have found that different approaches are needed based on client needs and workflow. We develop many complex, database-backed query/display single-page web apps, usually with zero js. Where it is not needed, js is little more than a security hole begging for trouble. Where no 'live' user experience is required, it equals resource waste (extra payload, extra client cpu cycles if there's no js blocker), especially in many work environments.
As a user/engineer I am annoyed when any site sends me its worthless js to execute unnecessarily, wasting my device's cpu cycles, battery, plus my lifejuice, and for what??? In most cases nothing I want/desire/need; it just makes a bigger tool of me than before, and not for the better. That being said, for web communications, live/simulated data visualizations benefit greatly from js.
Separation of concerns is what is missing. js was created to benefit and enhance the user experience; the ne'er-do-wells have hacked it into a tool that is mindlessly used and often enough screws the user over, without permission or apology...
Interestingly, the creators of what became Google got their start with NSF funding. Good way to finish off giving everyone the finger, Google; continue to 'do only evil'...
From jwz.org:
Greetings, Lynx users. There is a reason this page doesn't use ALT tags on the images. The reason is that the bozos responsible for both MSIE and Netscape Confusicator 4.0 decided that they would display the ALT tags of images every time you move the mouse over them -- even if the images are loaded, and even if they are not links. The ALT attribute to the IMG tag is supposed to be used instead of the image, not in addition to the image.
This looks absolutely terrible, so I don't use ALT tags any more in self-defense.
If they wanted to implement tooltips, they should have used the TITLE attribute to the A tag. That's in the HTML 1.2 spec and everything.
I had to decide between making this page look good for the vast majority of viewers, or making it be readable by the miniscule minority of you stuck in the 70s. Those of you in the retro contingent lost. Sorry.
from view-source:https://web.archive.org/web/20000304020552/http://www.jwz.or...
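For anyone who hasn't run into the distinction jwz is complaining about, a small sketch (filenames and text invented for illustration): alt is replacement text for when the image itself can't be shown, while title is advisory text that graphical browsers typically render as a tooltip.

    <!-- alt stands in for the image (text browsers, broken images,
         screen readers); it is not meant to be a hover tooltip. -->
    <img src="photo.jpg" alt="A cat asleep on a keyboard">

    <!-- title is the advisory text a tooltip should be built from. -->
    <a href="/gallery" title="Open the photo gallery">gallery</a>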
Someone got really worked up over text appearing when they moused over an image, and broke accessibility because of it?
jwz has a... strong personality.
Yeah, jwz.
Google locked me out of Lynx nearly a year ago. I'm surprised it took this long to affect other people.
I'm Google's public liaison for search. Thank you for bringing this to our attention, and our apologies for the inconvenience caused. This was indeed a bug that our engineers have explored, and it should be fixed now.
I am willing to bet that attorneys who specialize in extracting money from ecommerce sites through fluke accessibility lawsuits will not take notice of this problem, which presents an actual hardship.
Sounds like a bug in lynx.
Solution: use an alternative HTML parser.
The program will now display search result URLs as visible links.
"Bye bye mainstream, hello ghetto."
Yes, because your use of deprecated software no longer being supported is TOTALLY the same thing as being relocated into the Ghetto in Warsaw.
Strange, with elinks I can still click on the links.
With lynx I can't.
This appears to have been fixed since the posting. Links in the search results work again in lynx.
I am able to Google using Links (another excellent text mode web browser).
duck.com works well with lynx.
keep your worthless js out of my cpu cycles; don't need to read o
Sounds like an ADA lawsuit is in the works...
That's what happens when Google releases their own console killer.
I think you may be confusing console (the terminal emulator, command line interface) with console (video game console like Xbox or Playstation or Stadia).
Google search no longer working with text readers should be an ADA violation. https://www.ada.gov/complaint/
Lynx is not a screen reader.
Screen readers are tools like JAWS or VoiceOver which interact with desktop applications, including desktop web browsers such as Chrome or Safari. Google works fine with these.
And YASR under Linux with lynx/edbrowse works 200,000 times better than JAWS, starting with a logical layout for the blind.