Comment by lumost

11 hours ago

The big problem with AI in back-office automation is that it will randomly decide to do something different from what it had been doing. It could be happily crunching numbers accurately through development and launch, then utterly drop the ball after a month in production.

While humans have the same risk factors, human-oriented back-office processes involve multiple rounds of automated and manual checks, which are extremely laborious. Human errors in spreadsheets have particular flavors: a forgotten cell, a mistyped number, or reading from the wrong file or column. Humans are pretty good at catching these errors because they produce either completely wrong results when the columns don't line up, or a typo'd number that is completely out of distribution.
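
To make that concrete, the checks that catch human-flavored errors look roughly like this (a toy Python sketch; the schema and the "amount" column are invented for illustration). Notice that a plausible hallucinated value passes both checks:

    import statistics

    def sanity_check(rows, expected_columns):
        # Flags the classic human-flavored errors: forgotten cells,
        # misaligned columns, and out-of-distribution values.
        # The "amount" column and schema are made up for illustration.
        issues = []
        for i, row in enumerate(rows):
            # A forgotten cell or the wrong source file shows up as
            # columns that don't line up with the expected schema.
            missing = expected_columns - row.keys()
            if missing:
                issues.append((i, f"missing columns: {sorted(missing)}"))
        amounts = [r["amount"] for r in rows if "amount" in r]
        if len(amounts) > 2:
            mean = statistics.mean(amounts)
            stdev = statistics.stdev(amounts) or 1.0
            for i, r in enumerate(rows):
                # A mistyped number is usually wildly out of distribution.
                if "amount" in r and abs(r["amount"] - mean) > 5 * stdev:
                    issues.append((i, f"suspicious amount: {r['amount']}"))
        return issues

A hallucinated value that sits comfortably inside the distribution, in a row with all the right columns, trips neither check.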

An AI may simply decide to hallucinate realistic column values rather than extracting its assigned input. Or hallucinate a fraction of column values. How do you QA this? You can't guarantee that two invocations of the AI won't hallucinate the same values, and you can't guarantee that cross-checking with a different LLM helps, since it may just hallucinate values of its own. To get a real human check, you'd need to redo the task as a human. In theory you can have the LLM perform some symbolic manipulation to improve accuracy... but it can still hallucinate the reasoning traces, etc.
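
The obvious mitigation is to re-run the extraction and diff the outputs, roughly like this (a sketch; `extract` is a stand-in for whatever callable wraps the LLM call):

    def cross_check(extract, document, runs=3):
        # Re-run the extraction and diff the outputs. An empty diff
        # means the runs agreed with each other -- not that any of them
        # read the document correctly, since nothing stops the model
        # from hallucinating the same plausible value every time.
        results = [extract(document) for _ in range(runs)]
        keys = set().union(*(r.keys() for r in results))
        disagreements = {}
        for key in keys:
            values = {r.get(key) for r in results}
            if len(values) > 1:
                disagreements[key] = values
        return disagreements  # empty != verified

And that's exactly the problem: an empty diff only proves the runs agreed, not that any of them is right.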

If a human decided to make up accounting numbers on one out of every 10,000 accounting requests, they would likely be charged with fraud. Good luck finding AI hallucinations at the equivalent rate before some disaster occurs. Likewise, how do you ensure the human Excel operator doesn't get pressured into certifying the AI's numbers when the "don't get fired this week" button is sitting right there in their Excel app? How do you avoid the race to the bottom where the "star" employee is the one certifying the AI's results without thorough review?

I'm bullish on AI in the back office, but ignoring the real difficulties of deployment doesn't help us get there.