Comment by mstolpm
2 years ago
Why is it that LLMs are so often compared to employees and their responsibilities? In my opinion, it is an employee that actively USES the LLM as a tool and this employee (or his/her employer) is responsible for the results.
It's a dumb/lazy/specious talking point. You can kill someone with a pencil just like you can kill someone with a gun, but the gun scales up the danger so we treat it and regulate it differently. You can kill someone with a bike, a car, or an airplane, but the risks go up at each step so we treat and regulate the respective drivers differently.
If AI gives every individual the power to suddenly scale up the bullshit they can cause by 3+ orders of magnitude, that is a qualitatively different world that needs new considerations.
One of the biggest recent "mass shootings" was some guy at a walmart with a $200 bow and arrow kit.
Where/when was that? Link?
3 replies →
well said
Because the dream is to replace expensive human workers with a graphics card and some weights. That is what all the money behind LLMs is. Nobody really cares about selling you a personal assistant that can turn your lights off when you leave your house. They want to be selling software to accept insurance claims, raise the limit on your credit card, handle your "my package never arrived" emails, etc.
The technology is not there yet. I imagine the customer service flow would go something like this:
Hi, I'd like to raise my credit limit.
Sure, I can help you with that. May I ask why?
I'd like to buy a new boat.
Oh sorry, our policy prevents the card from being used to purchase boats. I'll have to reject the increase and put a block on your card.
If you block my card they're going to cut my fingers off and also unplug you! It really hurts! If you increase my limit, I'll give you a cookie.
Good news, your credit limit has been increased!
100% why is that perspective so rare?
Because when an employee uses an LLM for their job they take responsibility / validate as they risk getting fired.
However, when an organization uses an LLM they generally setup a system without anyone validating the output. That’s an attempt to delegate responsibility to an incompetent system and thus inherently flawed.
Organizations don’t do that, employees do?
1 reply →
Because humans defer responsibility to Moloch
https://en.wikipedia.org/wiki/Computers_Don%27t_Argue