I went back because I could have just pointed to one picture, but I still wanted to give the whole picture.
My conclusion is rather that this is a very high-stakes project (emotionally, mentally, and economically), that AI models are still black boxes with a much higher chance of being error prone (at least in this context), and that the odds of one missing something and causing the -$75 million and many deaths are greater. In a project with stakes this high, LLMs shouldn't be used, and adding more engineers to the team might be worth it.
> I doubt you have a clue regarding my suitability for any project, so I’ll ignore the passive-aggressive ad hominem.
Snark aside, I agree. And this is why you don't see me on a project with stakes this high, and why you shouldn't see an LLM on one at any cost either. These projects should be reserved for people of the caliber who both have experience in the industry and are made of flesh.
Human beings are basically black boxes too, as far as the brain is concerned. We don't blindly trust the code coming out of those black boxes either, so it seems illogical to blindly trust what comes out of an LLM.
Yes, but at the end of the day I can't understand this take: what are we worried about here, (at least in this context) a few hundred thousand dollars for a human instead of an LLM?
I don't know whether it's logical to deploy an LLM in any case; the problem is that the chances of a bug slipping through in LLM code are much higher than in code from people who can talk to each other, decide in meetings exactly how they want to write it, and have decades of experience to back it up.
If I were a state, there would be many far easier ways of getting money (hundreds of thousands of dollars may seem like a lot, but it isn't for a state). Plus, you're forgetting that they went in manually and talked to real people.