Comment by Culonavirus
4 days ago
The computer science field is going to be an absolute shitshow within 5 years (it already kinda is). On one side you'll have ADHD dog attention span zoomers trying out all these nth party model apis and tools every 5 seconds (switching them like socks, insisting the latest one is better, but ultimately producing the same slop) and on the other side you'll have all these applied math gurus squeezing out the last bits of usable AI compute on the planet... and nothing else.
We used to joke that "The internet was a mistake.", making fun of the bad parts... but LLMs take the fucking cake. No intelligent beings, no sentient robots, just unlimited amounts of slop.
The tech basically stopped evolving right around the point of being good enough for spam and slop, but it's not going any further. There are no cures, no new laws of physics or math, nothing else being discovered by these things. All AI use in science that I can see is based on finding patterns in data, not intelligent thought (as in novel ideas). What a bust.
Completely disagree. What I see agentic coding tools do in combination with LLMs is seriously mind-blowing. I don't care how much knowledge is compressed into an LLM. What is way more interesting is what it does when it's missing some knowledge. I see it come up with a plan to create the knowledge by running an experiment (running a script, sometimes asking me to run a script or try something), evaluating the output, and then replanning based on the output. Full Plan-Do-Check-Act. Finding answers systematically to things you don't know is way more impressive than remembering lots of stuff.
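The Plan-Do-Check-Act loop described above can be sketched in a few lines. This is a minimal, hedged sketch, not any real agent framework's API: the `plan`, `do`, and `check` callables are placeholders for "ask the model for a plan", "run the script/experiment", and "evaluate the output".

```python
def pdca(goal, plan, do, check, max_rounds=5):
    """Plan-Do-Check-Act: plan, run the experiment, evaluate,
    then replan using the feedback until the goal is met."""
    feedback = None
    for _ in range(max_rounds):
        steps = plan(goal, feedback)        # Plan: propose next attempt
        result = do(steps)                  # Do: run the script/experiment
        ok, feedback = check(goal, result)  # Check: evaluate the output
        if ok:
            return result                   # Act: accept and stop
    return None                             # gave up within the budget

# Toy demo: "plan" just increments a guess until "check" accepts it.
print(pdca(3,
           plan=lambda g, f: (f or 0) + 1,
           do=lambda steps: steps,
           check=lambda g, r: (r == g, r)))
```

The point of the structure, as the comment argues, is that the feedback from `check` flows back into `plan`, so the loop converges on answers it didn't have stored.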
I don't see a big difference from humans; we say many unreasonable things too, so validation is necessary. Whether you use the internet, books, or AI, it is your job to test their validity. Anything can be bullshit, written by human or AI.
In fact I fear that humans optimize for attention and cater too much to the feed-ranking algorithm, while AI is at least trying to do a decent job. But with AI it is the responsibility of the user to guide it; what AI does depends on what the user does.
There are some major differences though. Without these tools, individuals are pretty limited in how much bullshit they can output, for many reasons, including that they are not mere digital puppets with no need to survive in society.
It’s clear pro-slavery-minded elitists are happy to sell the line that people should become a "good complement to AI", which would make them even more disposable than these puppets. But unlike these mindless entities, people have the will to survive deeply engraved as a primary behavior.
Humans can output serious amounts of unproven bullshit, e.g., 3000 incompatible gods and all the religions that come with them...
The worst part is when the AI spits out dogshit results: people show up at lightspeed in the comments to say "you're not using it right" / "try this other model, it's better".
Anecdotally, the people I see most excited about AI are the people who don't do any fucking work. I can create a lot of value with plain ol' for-loop-style automation in my niche. We're still nowhere near the limit of what we can do with automation, so I don't give a fuck what AI can do. Bruh, in Windows 10 copy and fuckin' paste doesn't work for me anymore, but instead of fixing that they're adding AI.
LLMs help a lot of users with making for loops and things like that. At least that's been the case for me: I'd never tried to use PowerShell before, but with a bit of LLM guidance I was able to cobble together some useful (for me) one-liner commands to do things like "use this CSV of file names and pixel locations, and make cropped PNG thumbnails of these locations from these images".
Stuff like that, which regular users often do by hand, they can now ask an LLM for the command (usually just a few lines of a scripting language, if only they know the magic words to use).
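For what it's worth, the task described above is a few lines of scripting in any language. Here's a minimal Python sketch of the bookkeeping half: the CSV column layout (`filename, x, y`) and the thumbnail size are assumptions, and the actual pixel work would be one extra call to Pillow's `Image.crop`, noted in a comment since Pillow isn't in the standard library.

```python
import csv
import io

# Assumed CSV layout: filename, x, y (pixel location to thumbnail around).
SAMPLE = """photo1.png,100,250
photo2.png,40,60
"""

THUMB = 64  # assumed thumbnail edge length in pixels

def crop_boxes(csv_text, size=THUMB):
    """Turn each (filename, x, y) row into a (left, upper, right, lower)
    crop box centered on that pixel -- the tuple Pillow's Image.crop takes."""
    boxes = {}
    for name, x, y in csv.reader(io.StringIO(csv_text)):
        x, y, half = int(x), int(y), size // 2
        boxes[name] = (x - half, y - half, x + half, y + half)
    return boxes

# With Pillow installed, the cropping itself would be roughly:
#   Image.open(name).crop(box).save(f"thumb_{name}")
print(crop_boxes(SAMPLE))
```

The "magic words" the LLM supplies are mostly names like `csv.reader` and `Image.crop`; the logic itself is the kind of arithmetic anyone could write by hand.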
The only people I see complaining about AI are those that have the most to lose.
Using it isn't optional though; it's forced through corporate policy. If my boss would shut up about it, that would be enough for me.
My wife and I are both paid to work on AI products, and we both think the whole thing’s only sorta useful, in fact. Not nothing, but… not that much, either.
I’m not worried about AI taking our jobs. I’m worried about the market crash when the reality of the various failed (… to actually reduce payroll) or would’ve-been-cheaper-and-better-without-AI initiatives the two of us have been working on non-stop since this shit started breaks through the investment hype and the music stops.
The LLM only reflects what it's fed. If the results are unintelligent, then so is the input.
It's been three years of amazing use cases and discoveries, and in those same years we got things like Ozempic. You can be skeptical of the hyped claims that may be exaggerated without negating the good side.
The patent for Ozempic was filed nearly 20 years ago: https://patents.google.com/patent/US8129343B2/en?oq=US812934...
Ozempic’s FDA approval was in 2017, the same year transformers were invented.
Whatever wins you can credit to LLMs, GLP-1s aren’t one of them.
Ozempic has nothing to do with LLMs, so I'm a bit confused about the point you're making here?
My chatbot told me that chatbots invented drugs.