
Comment by aragonite

6 days ago

I did this very recently for a 19th century book in German with occasional Greek. The method that produces the highest accuracy I've found is to use ImageMagick to extract each page as an image, then send each image file to Claude Sonnet (encoded as base64) with a simple user prompt like "Transcribe the complete text from this image verbatim with no additional commentary or explanations". The whole thing completes in under an hour, and the result is near perfect and certainly much better than the output of standard OCR software.
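
For anyone who wants to try the same pipeline, here's a minimal sketch in Python. It assumes ImageMagick and the official anthropic Python SDK are installed and that ANTHROPIC_API_KEY is set in the environment; the PDF name, DPI, and model string are placeholders to swap for your own:

    import base64
    import glob
    import subprocess

    import anthropic  # pip install anthropic

    # 1. Rasterize the scanned PDF into one PNG per page with ImageMagick.
    #    ("magick" is the ImageMagick 7 binary; use "convert" on version 6.)
    subprocess.run(["magick", "-density", "300", "book.pdf", "page-%03d.png"], check=True)

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    PROMPT = ("Transcribe the complete text from this image verbatim "
              "with no additional commentary or explanations.")

    # 2. Send each page image to Claude as base64 and collect the transcriptions.
    pages = []
    for path in sorted(glob.glob("page-*.png")):
        with open(path, "rb") as f:
            image_b64 = base64.standard_b64encode(f.read()).decode("utf-8")

        message = client.messages.create(
            model="claude-3-5-sonnet-latest",  # placeholder: use whichever Sonnet you have
            max_tokens=4096,
            messages=[{
                "role": "user",
                "content": [
                    {"type": "image",
                     "source": {"type": "base64",
                                "media_type": "image/png",
                                "data": image_b64}},
                    {"type": "text", "text": PROMPT},
                ],
            }],
        )
        pages.append(message.content[0].text)

    # 3. Stitch the per-page transcriptions back together.
    with open("transcription.txt", "w", encoding="utf-8") as out:
        out.write("\n\n".join(pages))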

> a 19th century book

If you're dealing with public domain material, you can just upload to archive.org. They'll OCR the whole thing and make it available to you and everyone else. (If you got it from archive.org, check the sidebar for the existing OCR files.)

  • I did try the full text OCR from archive.org, but unfortunately the error rate is too high. Here are some screenshots to show what I mean:

    - Original book image: https://imgur.com/a8KxGpY

    - OCR from archive.org: https://imgur.com/VUtjiON

    - Output from Claude: https://imgur.com/keUyhjR

    • Ah, yeah, that's not uncommon. I was assuming, based on experience with the mistakes language models make, that the two approaches would land within an acceptable range of each other for your texts, plus the idea that it's better to share the work than not.

      Note, though, that if you're dealing with a work (or edition) that can't otherwise be found on archive.org and you do upload it, you are permitted, as the owner of that item, to open up the OCRed version and edit it. So an alternative workflow might be better stated:

      1. upload to archive.org (a scripted sketch of this step follows the list)

      2. check the OCR results

      3. correct a local copy by hand or use a language model to assist if the OCR error rate is too high

      4. overwrite the autogenerated OCR results with the copy from step 3 in order to share with others
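
      If you'd rather script step 1 than use the web uploader, the internetarchive Python package can handle it. A rough sketch, where the identifier, filename, and metadata are made-up placeholders (it assumes you've already run "ia configure" to store your archive.org credentials):

          # Sketch of step 1 with the internetarchive package (pip install internetarchive).
          # Identifier, filename, and metadata values are illustrative placeholders.
          from internetarchive import upload

          upload(
              "example-19th-century-book-scan",  # must be a globally unique item identifier
              files=["book.pdf"],
              metadata={
                  "mediatype": "texts",
                  "title": "Title of the scanned book",
                  "language": "ger",
              },
          )

      archive.org then derives the item, including its own OCR, which you can check (step 2) and, as the owner, later replace with your corrected copy (step 4).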

      (For those unaware and wanting to go the collaborative route, there is also the Wikipedia-adjacent WMF project called Wikisource. It has the upside of being more open (at least in theory) than, say, a GitHub repo, since PRs are not required for others to get their changes integrated. One might find, however, that it's less open in practice, since it is inhabited by a fair few wikiassholes of the sort folks will probably be familiar with from Wikipedia.)

  • Maybe I've just had bad luck, but their OCR butchered some of the books I've tried to get.

Is it really necessary to split it into pages? Not so bad if you automate it I suppose, but aren't there models that will accept a large PDF directly (I know Sonnet has a 32MB limit)?

  • They're limited in how much they can output, and there's generally an inverse relationship between the number of tokens you send and the output quality after the first 20-30 thousand tokens.

  • Necessary? No. Better? Probably. Despite larger context windows, attention lapses and hallucinations aren't completely a thing of the past even within today's expanded context windows. Splitting into individual pages helps ensure you stay well within a context window size that seems to avoid most of these issues, and asking an LLM to maintain attention over a single page is much more achievable than over an entire book.

    Also, PDF file size isn't a relevant measure of token count, since a PDF can be anything from a collection of high-quality JPEG images to thousand(s) of pages of text.

  • They all accept large PDFs (or any kind of input) but the quality of the output will suffer for various reasons.

I recently did some OCRing with OpenAI. I found o3-mini-high to be imagining and changing text, whereas the older (?) 4o was more accurate. It's a bit worrying that some of the models screw around with the text.

  • There's GPT-4, then GPT-4o ("o" for omni, as in multimodal), then o1 (chain of thought / internal reasoning), then o3 (because "o2" is a stadium in London that I guess is very litigious about its trademark?). o3-mini is the latest, but yes, it's optimized to be faster and cheaper.

Do you have a rough estimate of what the price per page was for this?

  • It must have been under $3 for the 150 or so API calls, possibly even under $2, though I'm less sure about that.
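
    (That works out to roughly $3 / 150 ≈ 2 cents per page at the upper end.)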

What about preserving styling like titles and subtitles?

  • You can request Markdown output, which takes care of text styling like italics and bold. For sections and subsections, in my own case they already have numerical labels (like "3.1.4"), so I didn't feel the need to add extra formatting to make them stand out. Incidentally, even if you don't specify Markdown output, Claude (at least in my case) automatically uses proper Unicode superscript numbers (like ¹, ², ³) for footnotes, which I find very neat.
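
    In case it helps, the only change needed is to the prompt; something like "Transcribe the complete text from this image verbatim as Markdown, preserving headings, italics, and bold, with no additional commentary or explanations" should do it, though that exact wording is just an illustration.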