
Comment by sandworm101

8 hours ago

This is not an LLM problem. It was solved years ago via OCR. Worldwide, postal services long ago deployed OCR to read handwritten addresses. And there was an entire industry of OCR-based data entry services, much of it translating the chicken scratch of doctors' handwriting on medical forms, long before LLMs were a thing.

It was never “solved” unless you can point me to OCR software that is 100% accurate. You can take 5 seconds to google “ocr with llm” and find tons of articles explaining how LLMs can enhance OCR. Here’s an example:

https://trustdecision.com/resources/blog/revolutionizing-ocr...

  • By that standard, no problem has ever been solved by anyone. I prefer to believe that a great many everyday tech issues were in fact tackled and solved in the past by people who had never even heard of LLMs. So too were many things done in finance long before blockchains solved everything for us.

    • OCR is very bad.

      As an example, look at subtitle rips from DVD and Blu-ray. The discs store subtitles as images of rendered computer text, while a popular format for rippers, SRT, stores the text as UTF-8 to be rendered by the player. So when you rip subtitles, there's an OCR step.

      This is computer-rendered text in a small handful of fonts, and decent OCR still chokes on it often.

    • In my experience the chatbots have bumped transcription accuracy quite a bit. (Of course, it's possible I just don't have access to the best-in-class OCR software I should be comparing against).

      (I always go over the transcript by hand, but I'd have to do that with OCR anyway).

    • From the article I linked:

      “Our internal tests reveal a leap in accuracy from 98.97% to 99.56%, while customer test sets have shown an increase from 95.61% to 98.02%. In some cases where the document photos are unclear or poorly formatted, the accuracy could be improved by over 20% to 30%.”

LLMs improve significantly on state-of-the-art OCR. LLMs can do contextual analysis. If I were transcribing these by hand, I would probably feed them through OCR + an LLM, then ask an LLM to compare my transcription to its transcription and comment on any discrepancies. I wouldn't be surprised if I offered minimal improvement over just having the LLM do it, though.
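
Roughly the pipeline I mean, as an untested sketch: pytesseract here is just one assumed OCR engine, and ask_llm is a placeholder for whatever LLM API you actually have access to.

  from PIL import Image
  import pytesseract  # assumed OCR engine; any other would do

  def ask_llm(prompt: str) -> str:
      """Placeholder for whichever LLM API you actually use."""
      raise NotImplementedError

  def review_transcription(image_path: str, my_transcription: str) -> str:
      # Plain OCR pass over the scanned page.
      ocr_text = pytesseract.image_to_string(Image.open(image_path))

      # LLM pass: clean up the noisy OCR output using context.
      llm_text = ask_llm(
          "Here is noisy OCR output from a handwritten document. "
          "Give your best-guess corrected transcription:\n\n" + ocr_text
      )

      # Have the LLM compare its transcription to mine and flag discrepancies.
      return ask_llm(
          "Compare these two transcriptions of the same document and list "
          "every place they disagree:\n\n"
          f"A (manual): {my_transcription}\n\nB (OCR+LLM): {llm_text}"
      )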

  • Are you guessing, or are there results somewhere that demonstrate how LLMs improve OCR in practical applications?

    • Someone linked this above

      https://trustdecision.com/resources/blog/revolutionizing-ocr...

      > Our internal tests reveal a leap in accuracy from 98.97% to 99.56%, while customer test sets have shown an increase from 95.61% to 98.02%. In some cases where the document photos are unclear or poorly formatted, the accuracy could be improved by over 20% to 30%.

      While a small percentage-point increase, it roughly halves the error rate (from about 1.0% to 0.4% on the internal tests, and from about 4.4% to 2.0% on the customer sets), which is a big deal when applied to massive amounts of text.

  • Why assume that OCR does not involve context? OCR systems regularly use context. It doesn't require an LLM for a machine reading medical forms to generate and use a list of the hundred most common drugs appearing in a particular place on a specific form (a toy sketch of that idea follows below). And an OCR system reading envelopes can be directed to prefer numbers or letters depending on what it expects.

    Even if LLMs can push 99.9% accuracy to 99.99%, at least an OCR-based system can be audited. Ask an OCR vendor why the machine confused "Vancouver WA" and "Vancouver CA" and you can get a solid answer based on repeated testing. Ask an LLM vendor why and, at best, you'll get a shrug and some line about how much better it did in all the other situations.
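
    A toy sketch of that drug-list idea (the drug names and the threshold are made up): plain standard-library edit-distance matching, no LLM involved.

      import difflib

      # Snap a noisy OCR reading of a known field to the closest entry in a
      # vocabulary list instead of trusting the raw character output.
      COMMON_DRUGS = ["amoxicillin", "atorvastatin", "lisinopril",
                      "metformin", "omeprazole", "warfarin"]

      def correct_drug_field(ocr_reading: str) -> str:
          matches = difflib.get_close_matches(
              ocr_reading.lower(), COMMON_DRUGS, n=1, cutoff=0.6
          )
          # Keep the raw reading (for human review) if nothing is close enough.
          return matches[0] if matches else ocr_reading

      print(correct_drug_field("rnetformin"))  # -> "metformin" (handwritten "m" read as "rn")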

For the addresses it might be a bit easier because they are a lot more structured and, in theory, the vocabulary is a lot more limited. I'm less sure about medical notes, although I'd suspect there are fairly common things they are likely to say.
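
As a rough illustration of what that structure buys you (the regex and the little ZIP-prefix table are invented for the example, not real reference data):

  import re

  # A US-style last line should fit "CITY STATE ZIP", and the ZIP narrows down
  # which state readings are even plausible, so the OCR output can be sanity-checked.
  ZIP_PREFIX_TO_STATE = {"986": "WA", "971": "OR", "945": "CA"}  # toy lookup table

  LAST_LINE = re.compile(r"^(?P<city>[A-Z ]+)\s+(?P<state>[A-Z]{2})\s+(?P<zip>\d{5})$")

  def check_last_line(ocr_line: str) -> str:
      m = LAST_LINE.match(ocr_line.strip().upper())
      if not m:
          return "reject: does not fit CITY STATE ZIP"
      expected = ZIP_PREFIX_TO_STATE.get(m.group("zip")[:3])
      if expected and expected != m.group("state"):
          return f"flag: state {m.group('state')} conflicts with ZIP ({expected} expected)"
      return "ok"

  print(check_last_line("Vancouver WA 98660"))  # ok
  print(check_last_line("Vancouver CA 98660"))  # flag: CA conflicts with the ZIP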

Looking at the (admittedly single) example from the National Archives, it seems a bit more open-ended than the other two examples. It's not impossible that LLMs could help with this.

Yes, but there was usually a fallback mechanism: an unrecognized address would be shown on a screen to an employee, who would type it in so the envelope could then be inkjet-printed with a barcode.

Fun fact: convolutional neural networks developed by Yann LeCun were instrumental in that rollout!