Comment by travisgriggs
6 months ago
> As a technologist I want to solve problems effectively (by bringing about the desired, correct result), efficiently (with minimal waste) and without harm (to people or the environment).
Me too. But I worry this “want” may not be realistic or scalable.
Yesterday, I was trying to get some Bluetooth/BLE working on a Raspberry Pi CM4. I had dabbled with this 9 months ago, and things were progressing just fine then. Suddenly, with a new trixie build and who knows what else changed, I just could not get my little client to open the HCI socket. In about 10 minutes of prompt dueling between GPT and Claude, I was able to learn all about rfkill and get to the bottom of things. I’ve worked with Linux for 20+ years, and somehow had missed learning about rfkill in the mix.
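For the curious, the failing call looked roughly like this (a minimal sketch, not my exact code; the device index and error handling are assumptions):

    import socket

    # Open a raw HCI socket on Linux; AF_BLUETOOTH and BTPROTO_HCI
    # are available in CPython on Linux builds.
    try:
        sock = socket.socket(socket.AF_BLUETOOTH, socket.SOCK_RAW,
                             socket.BTPROTO_HCI)
        sock.bind((0,))  # device index 0, i.e. hci0 (an assumption)
    except OSError as err:
        # An rfkill-blocked (or down) adapter tends to surface here;
        # `rfkill list bluetooth` turned out to be the thing to check.
        print(f"HCI socket failed: {err}")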
I was happy and saddened. I would not have known where to turn. SO doesn’t get near the traffic it used to and is so bifurcated and policed I don’t even try anymore. I never know whether to look for a mailing list, a forum, a Discord, a channel; the newsgroups have all long since died away. There is no solidly written chapter in a canonically accepted manual, written by tech writers, on all things Bluetooth for the Linux kernel packaged with Raspbian. And to pile on, my attention span, driven by a constant diet of engagement, makes it harder to have the patience.
It’s as if we’ve made technology so complex that the only way forward is to double down and try harder with these LLMs and the associated AGI fantasy.
In the short term, it may be unrealistic (as you illustrate in your story) to try to successfully navigate the increasingly fragmented, fragile, and overly complex technological world we have created without genAI's assistance. But in the medium to long term, I have a hard time seeing how a world that's so complex that we can't navigate it without genAI can survive. Someday our cars will once again have to be simple enough that people of average intelligence can understand and fix them. I believe that a society that relies so much on expertise (even for everyday things) that even the experts can't manage without genAI is too fragile to last long. It can't withstand shocks.
While I generally agree with this, I have mixed feelings. On the one hand, AI could be smart enough to reach the "enlightened master engineer" level and achieve super-human levels of...simplification. In some ways, complexity can result from improper layering and abstraction inversion. It takes a holistic view to realize that the lines (i.e. interfaces) between layers were drawn improperly, and to redesign everything together, achieving an overall simplification.
A good example is the web platform. It's just enormous...to the point that no human can really understand how it all even works. And I say that as someone who worked for a long time on a narrow part of that stack (V8). Despite being only a little over a million lines of code, it is incredibly intricate and subtle, because it implements a pretty weird language, has lots of optimizations, advanced GC, multiple compilers, etc. And that's just the JS engine. Add in the layout engine, rendering engine, multi-process architecture...it's beyond the comprehension of a single mind.
We're not yet at the level where an AI can understand code really deeply, but maybe we will reach the point where an AI understands enough of it, and can code competently enough, to start over from scratch and build something that we can both understand and that does the things we actually want it to do.
> our systems cannot withstand shocks
We've seen a disturbing preview of this recently.
It's a natural law that what is not exercised dies away.
When we make our systems too stable and predictable, the ability to operate effectively in the absence of stability also dies away.
I do agree with the fragility argument. Though if/when the shock comes, I doubt we’ll be anywhere near being able to build cars, especially taking into account that all the easily accessible ore has long been mined and oxidized away.
Distros do have manuals; they just usually come in the form of user-curated wikis these days. ArchWiki is usually my first stop when I run into a Linux issue, even as a fellow Debian user.
Both https://wiki.archlinux.org/title/Bluetooth and https://wiki.debian.org/BluetoothUser mention rfkill and show you how to troubleshoot.
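For reference, the troubleshooting step both pages walk you through boils down to something like this (sketched here in Python around the rfkill CLI; the plain shell commands from the wikis work just as well):

    import subprocess

    # Show the kill-switch state of Bluetooth radios; look for
    # "Soft blocked: yes" or "Hard blocked: yes" in the output.
    state = subprocess.run(["rfkill", "list", "bluetooth"],
                           capture_output=True, text=True).stdout
    print(state)
    if "Soft blocked: yes" in state:
        # Lift the software block (a hard block needs a physical switch).
        subprocess.run(["rfkill", "unblock", "bluetooth"], check=True)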
> It’s as if we’ve made technology so complex that the only way forward is to double down and try harder with these LLMs and the associated AGI fantasy.
This is the real AI risk we should be worried about IMO, at least short term. Information technology has made things vastly more complicated. AI will make it even more incomprehensible. Tax code, engineering, car design, whatever.
It's already happening at my work. I work in big tech, and we already have a vast array of overly complicated tools and technical debt no one wants to clean up. There are several initiatives to use AI to prompt an agent, which in turn will find the right tool to use and run the commands.
It's not inconceivable that 10 or 20 years down the road no human will bother trying to understand what's actually going on. Our brains will become weaker and the logic will become vastly more complicated.
Has anyone tried putting the AIs to work cleaning up the technical debt?
Yes, I'm already doing it. But the problem is there's not a lot of incentive from management to do it.
Long term investment in something that can't easily be quantified is a non-starter to management. People will say "thank you for doing that" but those who create new features that drive metrics get promoted.
I think LLMs as a replacement for Google, Stack Overflow, etc. are a no-brainer, as long as you can get to the source documents when you need them and train yourself to sniff out hallucinations.
(We already do this constantly when sorting human-generated bullshit from useful information. So learning to do something similar with LLM output is not necessarily worse, just different.)
What's silly at this point is replacing a human entirely with an LLM. LLMs are still fundamentally unsuited for those tasks, although they may be in the future with some significant breakthroughs.
Yeah, using LLMs makes me reconsider the complexity of the software I'm producing and relying on. In a sense, LLMs can be a test for complexity, and the fast iteration cycles could yield better solutions than the existing ones.
The LLMs we have today aren't a fantasy: They're a concrete thing that works.
Just because the people who make them live in a fantasy world, doesn't mean we can't reap the fruits of their labor!
That being said, I suspect a lot of the energy spent on AI training is resulting in unusable slop.
The only reason search might not be as good as an AI is enshittification. I expect these LLMs will be the same after the open web has withered.
LLMs are quite good at "semantic search", where traditional search engines struggle.
No one said search engines need to be "traditional" or only PageRank or whatever.
But you also don't need to filter the information through a hallucinating UX just so you can use vectorization to match actual results.
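For what it's worth, the vectorization both comments are gesturing at is easy to sketch. A toy illustration, not anyone's production setup; the model name and corpus are illustrative assumptions:

    from sentence_transformers import SentenceTransformer, util

    # Embed a tiny corpus and a query, then match by meaning rather
    # than by keyword overlap ("semantic search").
    model = SentenceTransformer("all-MiniLM-L6-v2")
    docs = ["unblock bluetooth with rfkill",
            "configure wifi on debian",
            "tune the garbage collector in V8"]
    doc_vecs = model.encode(docs)
    query_vec = model.encode("my BLE adapter won't turn on")
    scores = util.cos_sim(query_vec, doc_vecs)  # cosine similarity
    print(docs[int(scores.argmax())])  # should surface the rfkill doc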