Comment by KoolKat23
5 months ago
Not necessarily. Workflows just need to be adapted to work with it, rather than expecting it to slot into existing workflows. The same thing happens during every industrial revolution.
Originally, electric motors merely replaced steam engines and produced no additional productivity gains; that only changed once factories redesigned the rest of the processes around them.
I don't get this. What workflow can tolerate occasional catastrophic lapses in reasoning, non-factuality, no memory, hallucinations, etc.? Even in something like customer support this is a no-go, IMO. Until these very major problems improve (by a lot), the tools will remain very limited.
We are at the precipice of a new era. LLMs are only part of the story. Neural-net architecture and tooling have matured to the point where building things like LLMs is possible. LLMs are important and will forever change "the interface" for both developers and users, but this is only the beginning. The Internet changed everything slowly, then quickly, then slowly. I expect that to repeat.
So you're just doing Delphic-oracle prophecy. Mysticism is not actually helpful or useful in most discussions, even if some mystical prediction accidentally ends up correct.
1 reply →
> What workflow can have occasional catastrophic lapses of reasoning, non factuality, no memory and hallucinations etc?
LLMs might enable some completely new things to be automated that made no sense to automate before, even if it’s necessary to error correct with humans / computers.
There are a lot of productivity gains in areas like customer support: the model drafts a response and a human merely validates it. Hallucination rates are falling, and even minor savings add up at scale in settings with productivity targets and strict SLAs, such as call centres. It's not a reach to say it could already handle a lot of business-process-outsourcing work.
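The draft-then-validate pattern above can be sketched in a few lines. This is a minimal illustration, not anyone's production system: `generate_draft`, `handle_ticket`, and the `approve` callback are all hypothetical names, and the LLM call is stubbed out so the example is self-contained.

```python
def generate_draft(ticket: str) -> str:
    """Stub standing in for an LLM call that drafts a reply to a ticket."""
    return f"Thanks for reaching out about: {ticket}. Here is what we suggest."

def handle_ticket(ticket, approve):
    """A human decides before anything is sent: `approve` returns the
    (possibly edited) text and a yes/no decision. Rejected drafts never
    reach the customer, which contains hallucination risk."""
    draft = generate_draft(ticket)
    final, approved = approve(draft)
    return final if approved else None

# Example: an agent who tweaks the wording, then approves.
reply = handle_ticket(
    "my invoice is wrong",
    approve=lambda d: (d.replace("suggest", "recommend"), True),
)
```

The point is that the human stays the gate: the model only saves typing time, so occasional bad drafts cost a rejection rather than a bad customer interaction.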
Source on hallucination rates falling?
I use LLMs 20-30 times a day, and while they feel invaluable for personal use, where I can interpret the responses at my own discretion, they still hallucinate often enough, and have enough lapses in logic, that I would never feel confident incorporating them into a critical system.
1 reply →