Comment by ryandrake
3 years ago
To me, computers used to be fun when you commanded them what to do, they did it, then they prompted you for another command.
Now, more and more, computers are trying to tell us what to do. Notifications, unwanted ads, spam, recommendations, pop ups, accept this, subscribe to that, dark patterns trying to get me to do something… I never commanded my computer to do these things. Some product manager at some company 1000 miles away simply decided my computer should do these things, without even my input. Even my operating system! After booting up, it’s running hundreds of programs simultaneously. I did not tell it to run these things! It’s doing it all by itself out of the box. I feel less and less in control of my computer and more and more a bystander.
We (the software industry) have royally screwed up computers. Users used to be in control and now they are the ones being controlled or at least “influenced.”
Here's one: someone wrote a battery status "middleware" which reports battery status on DBus. Fine.
Then they coupled in "shutdown on low battery" and refuse to allow it to be disabled. https://gitlab.freedesktop.org/upower/upower/-/issues/64
So, since the Xfce and GNOME battery applets use this library, I either have no battery status applet, or my computer intermittently shuts down after resume because the battery falsely appears dead for a few seconds.
(The kicker: it doesn't log why it decided to initiate the shutdown. Took years to find the bloody cause.)
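For anyone wanting to see what UPower thinks is going on (handy for catching those brief "battery is dead" readings after resume), a rough sketch; the config key names below are from memory, so double-check your local /etc/UPower/UPower.conf:

    # Watch battery readings as UPower sees them:
    upower --monitor-detail

    # Inspect a specific battery device:
    upower -i $(upower -e | grep -i BAT)

    # /etc/UPower/UPower.conf lets you tune *when* the action fires and whether
    # it powers off or hibernates, but (per the linked issue) not disable it:
    #   PercentageAction=2
    #   CriticalPowerAction=Hibernate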
> and refuse to allow it to be disabled
Why don't people fork these terribly managed projects? Even just working on the feature and then submitting the .patch file to be merged in by downstream distros would be a very meaningful signal.
The big problem is that just forking it isn't enough: you also have to convince every distro out there to switch to your fork to make any real difference. It becomes a truckload of boring work for what might be a trivial one-line fix in the code.
I blame the package managers we use for this, as most of them make it incredibly difficult to change anything and make you a slave to whatever the distribution ships, which in turn is a slave to whatever upstream ships. The effort required to fix an issue just far outweighs the trouble the issue causes in the first place.
NixOS is one of the few that seems to get this right by making code changes pretty much trivial. Just fork the upstream repository, point the package to the new fork and you are done. With a Nix flake you can even make the repository itself installable directly. No need to try to convince any maintainer to include your patch. And having package names be hashes means you can switch between old, new and patched versions of a piece of software easily, with no need to worry about conflicts or having to un/reinstall all the time. The downside is the lack of binary compatibility in NixOS, meaning that if you change a package deep down the dependency tree you might end up recompiling half your distribution and can no longer use the precompiled binaries (there is `patchelf` to work around this, but I haven't used that myself yet).
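As a rough illustration of that flake point (the repository name here is hypothetical, and it assumes the fork actually ships a flake and that flakes are enabled on your machine):

    # Install the forked package into your user profile:
    nix profile install github:example-user/some-package-fork

    # Or build and run it ad hoc, without installing anything:
    nix run github:example-user/some-package-fork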
13 replies →
> because the battery falsely appears dead for a few seconds.
Uuughh, I have this with Windows on my Dell XPS as well. Basically every time it comes out of sleep/hibernate, it will briefly think the battery is at 0% and try to shut itself down, and if you boot it up again without it being plugged in, it won't start up at all.
But when plugged in (either coming out of sleep or for the follow-on boot), after a few seconds it'll go "lol yeah no you are actually at 100%, no further charging is required, hooray!"
Would love to know how to disable the critical battery shutdown altogether in order to get around this. It's a bizarre and terrible bug to have in what is supposed to be a flagship developer machine.
> Then they coupled in "shutdown on low battery" and refuse to allow it to be disabled.
Isn't that to protect the user's data? There have been numerous reports that modern-day high-performance SSDs don't actually neatly write data to physical storage after a flush command; I wouldn't want to lose data if my system unexpectedly shuts down due to power/voltage issues.
Or are there additional low-power protections at the hardware level?
That could be the reason why it's done, and maybe it's helpful in such cases.
But what if I have some "pro-grade" SSD that doesn't need this? What if I have some huge battery that lasts for 2 hours when there's only 2% left?
Point is that the user should be able to have the final word, even if the default looks to be helpful.
We have journalled filesystems these days; even without hardware protections, that does not mean the machine should forcefully make decisions for us.
Depending on the brand of flash storage, there are power-loss protections (the same as hard disks, actually) where a little capacitor stores enough charge to park the head or flush the DRAM component of the flash to SLC solid flash storage. If you have a huge write in progress (enough to blow the SLC cache and the NAND cache) then you might lose that write.
> Then they coupled in "shutdown on low battery" and refuse to allow it to be disabled
Would you rather have your computer crash when it runs out of battery, or shut down gracefully at 2 percent battery?
> Would you rather have your computer crash when it runs out of battery, or shut down gracefully at 2 percent battery?
If I want my computer to crash then that is a policy I should get to make on my own. It is not the job of the software vendor/developer to dictate policy to me.
If they have to have a default then they can pick whatever, but not being able to change a particular setting is a bug.
Would you rather have to pay a bit of attention or have a computer that just randomly shuts down when it gets confused about battery state, since that's the scenario parent is describing?
It's funny that they assume that any of my computers have ever had a battery other than the CMOS one.
1 reply →
Xfce power manager will handle that on its own. I tell it to sleep instead of powering off. Then, worst case, the laptop just falls back asleep and I wake it a second time.
Why would there be a software fault (crash) at any battery percentage?
What function of battery charge causes an error on a modern operating system?
I think it's important to always remember it's not the computers that are trying to tell us what to do, it's the people who build those computers and the software running on them that are trying to tell us what to do.
Computers, in their fundamental nature, are exactly as you describe them. Such devices will always be available (if nothing else, in the form of electronic components). We just have to refuse to use the machines that want to control us.
Also relevant: https://www.gnu.org/philosophy/free-sw.html
> https://www.gnu.org/philosophy/free-sw.html
Little side rant here: I think one area where Free Software has failed so far is build systems. You get the source alright, and the GPL even requires you to include build instructions ("all the source code needed to generate, install, ..."). But in practical terms the amount of effort it takes to actually build software yourself is often insane, far from being automatic and often requiring a lot of manual work and workarounds (especially when you leave plain Linux and start cross-compiling, etc.). Now with GitHub we even have a lot of the build infrastructure being proprietary, and while it runs automatically on GitHub CI, there is no way to run GitHub CI locally.
There is effort being put towards reproducible builds now, and some distros like NixOS seem to be on the right path. But I think we lost a lot of ground here by having the build process be filled with so much patchwork and manual human intervention, instead of being 100% automated right from the start. We really need a "freedom -1" that requires software to be buildable fully automatically.
> I think one area where Free Software has failed so far is build systems
GNU Autotools (also known as GNU Build System) is fairly standardized in the GNU world. The build process for most GNU packages is simple:
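    ./configure
    make
    make install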
The FSF-endorsed GuixSD (a GNU system inspired by Nix) is also putting a lot of effort towards system bootstrapping[0] and deterministic builds[1].
Of course, there is still a lot of work to be done. But honestly, I'm very optimistic - the GNU system has gone a long way since its inception, and it's only getting better by the day.
[0] https://guix.gnu.org/en/blog/2020/guix-further-reduces-boots...
[1] https://guix.gnu.org/en/blog/2020/reproducible-computations-...
5 replies →
People who think this is a sensible goal generally, in my experience, do not understand what goes into building complex applications. And while it might indeed be a laudable goal, the users who would benefit from not having to know and/or do as much to build an application are heavily outnumbered by those who will never build it themselves but would rather have bug fixes than build-system architecture changes.
> computers used to be fun when you commanded them what to do, they did it
> Users used to be in control and now they are the ones being controlled
2013 was the watershed for me. You can read about why here [1]
There's a world of difference between using a tool and being a tool.
That transformation from "It's more fun to compute" to "If you've nothing to fear you've nothing to hide" took place almost silently in the first 20 years of this century.
The problem is that as "hackers" we don't understand computers. Retaking tech, by fully understanding and helping to culturally redefine computing is both the duty and prerogative of any real hackers left out there.
As for the fun. It never went away for me. I am more passionate about technology, coding, networks and electronics than at any time in my life - precisely because the stakes are now so high.
[1] https://digitalvegan.net
> Retaking tech, by fully understanding and helping to culturally redefine computing is both the duty and prerogative of any real hackers left out there.
A powerful statement right there. As someone who grew up with computers from before the web, I feel that "cyberspace" has been colonized by business and political interests. It was supposed to be "our" space, I mean, by the people and for the people. Right now it's more useful as a tool for the dark empire.
A Declaration of the Independence of Cyberspace - https://www.eff.org/cyberspace-independence
I agree that the problem and the solution is cultural. It's about having fun, being weird and creative with how we use technology, to reclaim the magic and make it ours. Things like Tor, uBlock Origin, and dare I say some of the cryptocurrency and blockchain stuff, they feel like part of a larger decentralized underground-ish movement that has no name (and probably should remain so).
> and dare I say some of the cryptocurrency and blockchain stuff
I've noticed a large split between the quiet people who like cryptocurrency for its security and privacy (who now use Monero) and the loud investment-focused cryptobros obsessed with Ethereum, Doge, NFTs, Musk, etc.
I just bought a copy, and look forward to reading it. (Unfortunately) I relate to what is described. I get this sense that the infinite possibilities given to us with computers have shifted to a focus solely on consumption.
I know this isn't the case for everyone or everything--everything that makes computers special is still out there in one form or another, and arguably YouTube and the like have made creating and sharing new things possible. It still seems you have to stray away from the path you're guided down in order to find them (and know they exist!). I'm thinking of things like microcontrollers, electronics, programming in general.
Hey man, haven't read that book, but I love your other one. Thanks for writing it.
Thanks Jim. I enjoyed writing the "other one" too, cos Pure Data is just such awesome fun as a sound and music making language.
I would pay twice the going rate for a Macbook that ran a version of OSX that always immediately responded to my commands. When I hit Cmd-Q I want the app to close. Not when it's ready. Not after showing a dialog. Not after asking me if I'm sure. Not after cleaning up. Not after preparing to close. Not after doing some background processing.
Just close. I want to issue a command and I want it carried out immediately.
If the software can't do that gracefully, then it's bad software.
Same problem in Windows.
Also, there is no reason for the operating system UI to ever be stuck. If a program fails to redraw, show it blank, but don't prevent me from moving/resizing it. Even if the CPU is at 100%, I want my commands to get first priority. Too many times I've had to spam Ctrl+Alt+Del just waiting for something to respond.
Or a computer you can actually power off.
Used to be: you threw a physical switch on the computer, it interrupted the power circuit from the power supply, and the computer went off instantly.
Today: you have to do it in some software menu, and even then you're not actually powering it off. You're requesting your computer to "pretty please, with sugar on top, turn off my computer when you're not too busy." Then the OS goes and does god knows what, maybe it decides to flush some caches, maybe it decides to do an entire operating system update, who knows--you're not the driver, you're just along for the ride.
Even the physical power button on the computer doesn't break a circuit anymore. It... you guessed it... sends a software signal begging the almighty operating system to please shut down.
And when you point this stuff out, all the excuses start coming out: "Bbbbut the OS has to flush its caches! Bbbbut the filesystem needs to write things to disk that are in-flight! Bbbbut applications need to gracefully shut down and free their memory (which is about to have the power cut). Bbbbbut there may be a critical background task still running that needs to...." I don't fucking care! I didn't command my computer to do any of these things.
It's gotten to the point where if you want to shut your computer down instantly, you need to pull the power plug or remove the battery. I expect we software engineers will fix that one some time soon, too.
16 replies →
Software capable of doing some of the tasks people want done is sufficiently complex that it allows you to issue multiple commands for which immediate action would be contradictory. Telling the application to save state but then also to respond to a quit command "immediately" would be one trivial example. Telling an application to quit right after you launch any operation that requires on-disk state to be modified would be the more general case. The software is not bad, it's just got enough power to allow you to make your intent ambiguous.
You can make the app window disappear while keeping the disk saving thread. It is perfectly doable.
4 replies →
A "you have spent 3 hours working on this document without saving, are you sure you want to close?" windows used to be a good thing.
Nowadays the software should just auto-save it for you. But not all software is that well written, and the OS can't tell it apart.
So why is the OS supplementing that functionality?
I think there are legitimate use cases for delayed close, especially in gaming. Some games are played in ironman mode, where closing the game could provide a cheat. To avoid this they need to be able to save on exit. Likewise, if the game is in the process of saving, an overriding close could corrupt a save file, which is unlikely to be the behaviour the user wants. Unfortunately, many companies abuse this feature and give pop-up dialogs akin to "Are you sure you want to close us?". I think the OS is in a difficult place finding a balance, as the feature itself is necessary for some developers.
The majority of games out there have a warning before the title menu and before continuing the game, and the warning is always this: "If you see this icon, please do not try to quit or shut down the game while it is saving." If they have that warning, then they should respect the user's command. I had a few games that locked out Alt-F4, refusing to listen to that command. The only way I could quit was through their menu, then go to the main menu, then go to the title menu to confirm the quit. And the game has the gall to ask "Are you sure you want to quit?". Then what is the point of having that exit option there if they make us go through all that hassle to quit the game?
Thankfully there's SuperF4, which will force-exit the game or any app that I have focused. I had one game that outright shamed me for doing that and I never went back to that game again. Sorry, I don't have time for games that choose to lock out the exit and three-finger salute commands. SuperF4 is the boss now and there's nothing those games can do.
In another HN thread just the other day (paraphrasing, but the tone is accurate): "macOS is shit because cmd+Q kills my programs and it's too easy to hit accidentally while trying to strike cmd+A"
> In another HN thread just the other day (paraphrasing, but the tone is accurate): "macOS is shit because cmd+Q kills my programs and it's too easy to hit accidentally while trying to strike cmd+A"
On the other hand, macOS is wonderful because, if you don't want ⌘Q to do that, then you can re-bind the Quit command, and programs respect that. (I don't remember if programs have to opt in to respecting it, or if the OS enforces it.) I know that various flavours of Linux can do this easily too, but I think that Windows doesn't have such a native feature.
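If I remember right, you can do the re-binding either from System Preferences (Keyboard → Shortcuts → App Shortcuts) or from the terminal via the NSUserKeyEquivalents defaults key. A sketch, not gospel; the menu title has to match the app's actual Quit item exactly:

    # Re-bind "Quit Google Chrome" to Cmd+Option+Q for Chrome only
    # (@ = Command, ~ = Option, ^ = Control, $ = Shift); restart the app afterwards:
    defaults write com.google.Chrome NSUserKeyEquivalents -dict-add "Quit Google Chrome" "@~q"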
1 reply →
It's hard to please everyone, and data loss is kind of a trump concern, but macOS actually does let you do this, because it has a force-quit feature. The keyboard shortcut is cmd+option+shift+escape, but you can probably remap that.
I force quit apps all the time, sometimes even because I want them to restore their exact state. In a lot of apps, the macOS state restoration works great, possibly better than actually quitting them.
> If the software can't do that gracefully, then it's bad software.
What I want is that the software itself doesn't even get a chance to interfere with demands like ⌘Q. There's no reason Chrome should get to decide that it doesn't close until I hold ⌘Q; that way lies all sorts of dark patterns. (Adobe's attempt to seize control of basic OS functions is my bugbear here.)
I don't mind the OS having a system-wide setting whereby I can decide how much I want it to protect me from the consequences of my actions, but that should be a decision between me and the OS, enforced by the OS, not something I have to negotiate individually with every app.
(Same with the menu bar. I choose what goes in the menu bar, and macOS should enforce that choice, not tell me that the software has decided what goes there and I have to lump it.)
That’s a very hard constraint for a non-real-time system to guarantee. It’s not necessarily a bad thing that your OS has a flexible scheduler. I think you’d find that running a real-time OS as your daily development machine would have some quirks of its own that you would find distasteful.
It doesn't have to be a truly real-time system. The difference here is more philosophical than technical. If the software were designed to make my demands a priority, then it could do a much better job responding to them, but nobody working on the software is thinking that way.
BeOS managed it, and Haiku continues to manage.
This is why I love Debian linux. It’s not nearly as polished as MacOS but dammit it’s quick and does what I tell it to do.
I think the reason a lot of software works that way is that most software does not carry out tasks asynchronously, and even does blocking or time-consuming stuff from the foreground or GUI thread instead of handing it off to a background thread. The reason for that is that doing anything async makes software exponentially more complex to engineer… hence we have lots of apps that cannot just be stopped easily.
kill -9
Ads in Windows. Just... the insane amount of ads.
There's probably a Linux distro offering that should be made whose promise is just "I'm not contacting the internet unless you ask me to".
> "I'm not contacting the internet unless you ask me to"
I think there kind of is? Like, most Linux distros actually?
Yes, Ubuntu has snaps which try to talk to the internet and autoupdate, and yes this is absolutely terrible. Yes, sometimes a distro might notify you there are updates available. Yes, sometimes a distro talks to a ntp server to sync the time. But, generally, I don't feel internet usage is inflicted on me, I inflict it on myself.
In what ways which bother you does your Linux distro contact the internet without asking?
Ubuntu also has services like fwupd and unattended-upgrades that are configured by default to run periodically without asking the user. I do not know if there are any others.
Fortunately, as long as the user knows about them, they are fairly straightforward to mask with systemctl.
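For example (unit and timer names can vary a bit between releases, so check with systemctl list-unit-files first):

    # See which related units exist on this particular install:
    systemctl list-unit-files | grep -E 'fwupd|unattended|apt-daily'

    # Mask (and stop) the ones you don't want running on their own:
    sudo systemctl mask --now unattended-upgrades.service
    sudo systemctl mask --now fwupd-refresh.timer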
For people like me, who like having complete control and knowledge of everything their machine is doing, Arch GNU/Linux is a good choice.
And then you run Firefox and it starts calling home all by itself. That one still bugs me.
I miss when I could put Wireshark on an interface and nothing would show up until I pressed a button somewhere.
This.
How is it that I paid for Windows (begrudgingly) and yet it's constantly undermining my efforts to control it? Microsoft has at least a dozen shitty apps that no matter how many times you disable or uninstall them, always seem to come slinking back into RAM. What the actual fuck?!
Every distro has privacy issues; here are just the ones that we know about in Debian:
https://wiki.debian.org/PrivacyIssues
Linux From Scratch doesn't ;)
KISS, Gentoo, and Alpine (not sure if it's desktop-usable?) are probably the most private.
1 reply →
Speaking of computers commanding us ... a few years ago, I was visiting a friend's parents (practicing dentists). They told me in horror how Microsoft forcefully upgraded their work machines to Windows 10 overnight. And one of their applications stopped working due to the upgrade.
That's the first time I learnt that Windows was using that approach (I'm blissfully away from Windows; I dwell in the Linux world.) It was painful to watch them ask "how come this machine does such a big task [upgrade] forcefully?"
> To me, computers used to be fun when you commanded them what to do, they did it, then they prompted you for another command.
This is a good portion of why I switched to Linux full time over 10 years ago.
My computer does what I tell it to do, when I tell it to do it, as I tell it to do it. I fuck up? It's my fault and I know it and I have to fix it myself.
No more "What the fuck just happened?" bullshit.
Have you tried using a Linux desktop? I personally find that it doesn't have all the negative things you mention.
It has fewer, but unfortunately the mentality of developers believing they should be able to tell the user what to do is so pervasive that even FOSS developers do it. From what I can tell, GNOME is the most prevalent Linux desktop environment, and it is notorious for this. Ubuntu is, if not the most common distro, the most commonly recommended one, and Canonical is also notorious for forcing things on users that they don't want, like Snaps and auto-updating.
Avoid desktop environments like GNOME. Stray from the guided path and use bare window managers and other alternative software. The additional effort required is overblown to a ridiculous degree.
The software does what I say, when I say. If it doesn't, it gets wiped from my system.
Another little picky example: gnome-terminal is my favorite of the various spiffed-up xterms. Start up a KDE session, and gnome-terminal does not appear in the menu even though the full GNOME is installed on the system. I have to run it from the menu's search function.
At least, this is the case on Ubuntu Focal. Anyone else using KDE and gnome-terminal?
ETA: I can't even make a quicklaunch for it in KDE.
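One workaround that usually helps in cases like this: drop a local desktop entry into ~/.local/share/applications so KDE's menu (and quicklaunch) pick it up. The file below is just a sketch from memory, so adjust names and paths as needed:

    cat > ~/.local/share/applications/gnome-terminal-kde.desktop <<'EOF'
    [Desktop Entry]
    Type=Application
    Name=GNOME Terminal
    Exec=gnome-terminal
    Icon=utilities-terminal
    Categories=System;TerminalEmulator;
    EOF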
I use Fedora+Gnome, and I haven't ever felt that I am being told what to do. Plus you can totally just install Fedora with a different desktop if that is the case.
It needs to be a library and it's turned into a framework.
Yes, and now everything is on the cloud so you don't even own your files anymore.
> We (the software industry) have royally screwed up computers. Users used to be in control and now they are the ones being controlled or at least “influenced.”
I think we naively believed that an increase of human technical capability would lead to such abundance that we'd achieve post-scarcity (at which point, whether we call it socialism or pretend it is liberalized capitalism, it doesn't matter) and permanent democracy... not the corporatized nightmare we actually got. The capitalists are right about very little, but they got this right: human nature can be shit. Most people are decent or want to be, but the ones who gain power in human organizations, especially organizations without purpose such as private corporations, are cancerous. We thought that problem would magically solve itself if we just made the world (in aggregate terms) richer, and we were wrong.
It's like a martial arts instructor who earnestly but unaccountably believes he's teaching good kids how to fight back against bullies. He may be. Or, he may be teaching the bullies. In our case, though, we weren't training... we were arming... and we didn't always know we were building weapons, but that's absolutely what we were doing... all of our "data science" got turned into decisions that hurt workers and enriched executives.
I'm speaking in past tense because we, as technologists, are no longer relevant. We've sold our souls. Capitalist hogs and their managerial thugs have won. Our moral credibility is deep in the negative territory. Power will either stay with those who currently have it, who have evil intentions, or move toward the set of people who work up the courage to overthrow the current system, who may or may not--it's impossible to know, as it hasn't happened yet--have ugly intentions.
"We" (meaning technologist culture) have never worked to promote centralized, monopoly services. The users did it to themselves, largely in pursuit of short-term convenience. "We" are now building the federated, interoperable platforms that users will hopefully come around to when the obvious problems of centralization (including widespread censorship, non-existent customer service and non-transparent AI's/bots run amok) start seriously biting them in the ass. In many ways, it's already happening.
There is also the situation that is created not out of malice or greed, but out of a lack of restraint and a divergence of priorities between the authors and the users: compilation speed. When a piece of software makes me wait for it, I feel subservient to it. It feels like sitting in a state office, waiting until a civil servant decides to grant you an audience. Except there is no civil servant, there is only the computer. How long do I have to wait? It depends on the alignment of the stars, the air humidity and the will of the gods.
When I used to work in an office, there was this one time when I had an urgent ticket from a customer to resolve, so I made the necessary fix and started building it. Normally builds would take something like 15 minutes, provided they were incremental and this wasn't the first (clean) one. But they could also take hours. It was at the end of the working day, I was working hard on this fix, didn't even take a lunch break. I started building it and waiting, because I wanted to share the fix with the customer as soon as possible. 15 minutes passed... 30 minutes passed... 1 hour passed... 90 minutes passed... 2 hours passed... And it was still compiling. At that point I gave up, went out and started walking home. But due to not eating the whole day and then staying late waiting for the build to finish, I was now so hungry that my hands started trembling on my way home and I felt generally weak, barely able to reach home. The next day I found out this build took ~4h 30min.
The reason for this state of affairs: of course, I shouldn't care that much about my work and should just make the customer wait instead, putting my health and time above that. But another important reason is that the build times were so unpredictable: when I hit "compile", it could take anywhere from 3 minutes to 4.5 hours. There is no planning you can do around that. If it were always a fixed 3 hours, it would actually be better, because then I could plan my day around it. But it being so unstable destroys everything. Of course, if every build took 3 hours, the people making decisions would wake up and see that we've got a pathological situation and there is something seriously wrong with the project. But when you often hit 15 minutes, it's going to be brushed off. And for the C++ committee, even though compilation speed may be an issue, it is never a priority. There are always going to be other issues which eclipse it.
Personally, I think a build for even an OS-sized project like that shouldn't take more than 1 minute at most. Even the incremental times in this one are a travesty.
What I like about what was planned for Jai (still not released) is that it's designed to always do a clean build. And the clean builds need to be fast. There should be no reason to make incremental builds. They are hacks that make the situation worse while making it look better. Suddenly you're dealing with weird bugs because the build system did not detect a necessary recompile and used stale cache entries (this has happened multiple times to me), and the compile times are unpredictable (see the above story for why that matters).
I saw some Delphi jobs in my country a year ago, I think. Maybe I should switch there...
That's a great explanation and one I'm going to steal.