Comment by rvz
2 years ago
> Their deepest flaw is the absence of the most critical capacity of any intelligence: to say not only what is the case, what was the case and what will be the case — that’s description and prediction — but also what is not the case and what could and could not be the case. Those are the ingredients of explanation, the mark of true intelligence.
Absolutely. The lack of transparent reasoning and deep explanation is indeed where LLMs and black-box AIs always fall short, and it makes them totally untrustworthy for high-risk industries such as finance, medicine, transportation, and law, where the financial risk and impact run into the trillions of dollars.
This is why ChatGPT, for example, has such limited use-cases (summarization is the only one other than bullshit generation), and why the hype train is attempting to push this snake-oil onto the masses to dump their VC money before regulations catch up.
LLMs have become the crypto hype of AI. Just as crypto's only use-case is cheap, instant, worldwide money transfer into wallets, ChatGPT and LLMs are only useful for summarizing existing text.
Apart from that, there are no other use-cases. Even if there are, the customer base is close to no-one. Both have trust issues, and the simple reason is regulation.
> summarization is the only one
Hmm... Yeah, if you go and make sure the AI didn't invert the meaning of anything (or if you use it in a way where the difference between "it's daytime" and "it's not daytime" is moot), the resulting summaries are good.