Measurable responses to the environment lag behind it; Moore's law has been slowing down (edit: and demand has been speeding up, a lot).
From a sustainability standpoint alone, I really hope the parent post's quote is true. I've personally seen LLMs used over and over to complete the same task, when they could have been used once to generate a script, and I'd really like to still be able to afford my own hardware at home.
10 years from now: "The next big thing: HENG - Human Engineers! These make mistakes, but when they do, they can just learn from it and move on and never make it again! It's like magic! Almost as smart as GPT-63.3-Fast-Xtra-Ultra-Google23-v2-Mem-Quantum"
I would love to live in a world where my coworkers learn from their mistakes
is this Human 2.0? I only have 1.0a beta in the office.
I get the joke, but it really does highlight how flimsy the argument is for humans. IME humans frequently make simple errors they don't learn from, and they rarely get things right the first time. Damn. Sounds like LLMs. And those are only getting better. Humans aren't.
I am kind of already at that point. For all the complaining about context windows being stuffed with MCPs, I am curious what they are up to and how many MCPs they have that this is a problem.
More likely: "Can you believe they were actually trying to use LLMs for this?"
OSes and software engineers did not end up using less RAM.
> Did you know if you ask <X> a question and it doesn't know the answer, sometimes it just makes something up?!
I think maybe a lot of us live in a bubble where the above statement is less frequently true of our peers than average.
Imagine believing humans don’t make the same mistakes. You live in a different universe than me buddy.
Sometimes we repeat mistakes. But humans are capable of occasionally learning. I've seen it!
I mean, that is not what they are writing, buddy.
10 years from now: “what’s a context window?”
10 years from now: “come with me if you want to live”
Terminator 2 Clip: https://youtu.be/XTzTkRU6mRY?t=72&si=dmfLNDqpDZosSP4M
“640K ought to be enough for anybody”
I dunno why you're getting downvoted. This is funny.
Very!
"That was back when models were so slow and weighty they had to use cloud based versions. Now the same LLM power is available in my microwave"