Comment by samus
8 hours ago
> it's complex to generate correct machine code, but we trust compilers to do it all the time.
Generating correct machine code is actually pretty simple. It gets complicated if you want efficient machine code.
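To illustrate that point with a minimal sketch (all names and the instruction selection here are my own illustration, not from the comment): a naive code generator that just walks an expression tree and pushes every intermediate value through the stack is short and obviously correct, but nowhere near what an optimizing compiler would emit.

    # Naive, obviously-correct codegen for a tiny expression language,
    # targeting x86-64-ish instructions on a stack discipline.
    def gen(expr, out):
        """Post-order walk; every intermediate value is spilled to the stack."""
        op, *args = expr
        if op == "const":
            out.append(f"push {args[0]}")
        elif op in ("+", "*"):
            gen(args[0], out)
            gen(args[1], out)
            out.append("pop rbx")
            out.append("pop rax")
            out.append("add rax, rbx" if op == "+" else "imul rax, rbx")
            out.append("push rax")

    code = []
    gen(("*", ("+", ("const", 2), ("const", 3)), ("const", 4)), code)
    print("\n".join(code))
    # Correct, but an optimizing compiler would fold this to the constant 20
    # and keep values in registers instead of pushing/popping each step.

The hard part is everything that makes the output fast: constant folding, register allocation, instruction selection, scheduling.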
> So, then - why don't people embrace AI with thinking mode as an acceptable form of automation? Can't the C-suite in this case follow its thought process and step in when it messes up?
> I think people still find AI repugnant in that case. There's still a sense of "I don't know why you did this and it scares me", despite the debuggability, and it comes from the autonomy without guardrails. People want to be able to stop bad things before they happen, but with AI you often only seem to do so after the fact.
> Narrow AI, AI with guardrails, AI with multiple safety redundancies - these don't elicit the same reaction. They seem to be valid, acceptable forms of automation. Perhaps that's what the ecosystem will eventually tend to, hopefully.
We have not reached AGI yet; by definition, its results cannot be trusted unless it's a domain where it has already gotten pretty good (classification, OCR, speech, text mining). For more advanced use cases, if I still have to validate what the AI does because its "thinking" process cannot be trusted in any way, what's the point? The AI doesn't think; we just choose to interpret it as such, and we should rightly be concerned about people who turn their brains off and blindly trust AI.