Comment by Imustaskforhelp
1 month ago
Sir, your experience is unique, and thanks for answering this.
That being said, someone took your point that LLMs might be good at certain subsets of projects to suggest we should actually use LLMs for those subsets.
But I digress (I gave more in-depth reasoning in another comment as well): suppose even a minute bug slips past the LLM and code review for that subset, and with millions of cars travelling through these points, that one bug somewhere increases the traffic fatality rate by one person per year. It shouldn't be used in the first place because of the inherent value of human life, but even in a purely monetary sense there's really not much reason I can see for using it.
That alone, over a span of 10 years, would cost $75 million to $130 million (the value of a statistical life in the US for a normal person ranges from $7.5 million to $13 million).
Sir, I just feel like if the point of the LLM is to employ fewer humans or pay them less income, this is so short-sighted, because if I were the state (and I think everyone will agree after the cost analysis), I would much rather pay a few hundred thousand dollars, or even a few million, right now to save $75-130 million (and that's the smallest scale, mind you; it can get exponentially more expensive).
I am not exactly sure how we could even measure the rate of deaths due to LLM use itself (the "1 per year" figure), so I took the most conservative number.
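The back-of-envelope math above can be sketched out like this (the one-death-per-year rate and the value-of-life range are the commenter's assumptions, not measured data):

```python
# Expected cost over a decade of one hypothetical latent bug that adds
# one traffic fatality per year, using the $7.5M-$13M "value of a
# statistical life" range quoted in the comment. All inputs are assumptions.

VSL_LOW = 7.5e6            # low end of value of a statistical life, USD
VSL_HIGH = 13e6            # high end, USD
EXTRA_DEATHS_PER_YEAR = 1  # assumed rate caused by the bug
YEARS = 10                 # time horizon

low = VSL_LOW * EXTRA_DEATHS_PER_YEAR * YEARS
high = VSL_HIGH * EXTRA_DEATHS_PER_YEAR * YEARS
print(f"${low / 1e6:.0f}M - ${high / 1e6:.0f}M")  # → $75M - $130M
```

Compare that against the one-time savings of having an LLM write part of the code, and the trade looks lopsided even before you put any weight on the non-monetary value of a life.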
There's also the fact that we won't know whether LLMs might save a life, but I am 99.9% sure that won't be the case, and it wouldn't be verifiable anyway, so we are shooting in the dark.
And a human can do a much more sensitive job with better context (you know what you are working on and how valuable it is, that it can save lives and everything), whereas no amount of words can convey that danger to an LLM.
To put it simply, the LLM at times might not know the difference between the code of this life-or-death machine and a sloppy website it created.
I just don't think it's worth it, especially in this context; even a single percent of LLM code might not be worth it here.
> we won't know if LLM's might save a life
I had a friend who was in crisis while the rest of us were asleep. Talking with ChatGPT kept her alive, so we know the number is at least one. If you go to the Dr ChatGPT thread, you'll find multiple reports of people who figured out debilitating medical conditions via ChatGPT in conjunction with a licensed human doctor, so we can be sure the number's greater than zero. It doesn't make headlines the same way Adam's suicide does, and not just because OpenAI can't be the ones to say it.
Great for her, I hope she's doing okay now. (I do think we humans can take each other for granted)
If talking to ChatGPT helps anyone mentally, then sure, great. I can see why, but I am a bit concerned that if we remove a human from the loop, we can get disillusioned far too easily, which is what is happening.
These are still black boxes, and in the context of traffic light code (even partially), it feels to me that the probability of it not saving a life significantly overwhelms the opposite.
ChatGPT psychosis also exists, so it goes both ways. I just don't want the negative voices to drown out the positive ones (or vice versa).
As far as traffic lights go, this predates ChatGPT, but IBM's Watson, which is also very much a black box where you stuff data in and instructions come out, has been doing traffic light optimization for years. IBM even has some patents on it. Of course that's machine learning, but as they say, ML is just AI that works.