Comment by siva7
17 days ago
You're describing yesterday's world. With the advancement of AI, there is no need for all those steps and stages of OCR anymore. There is no need for XML in your pipeline, because Markdown is now equally suited for machine consumption by AI models.
The results we got 18 months ago are still better than the current Gemini benchmarks, at a fraction of the cost.
As for Markdown: great. Now how do you encode the metadata about the model's confidence that the text actually says what it thinks it says? Because XML has this lovely thing called attributes, which lets you keep a provenance record without a second database, and it's still readable by the LLM.
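To make the point concrete, here is a minimal sketch (not from the thread, just an illustration) of what the commenter describes: attaching a per-line `confidence` attribute to OCR output, so the provenance travels inline with the text instead of in a separate store. The element names, file name, and confidence values are all made up for the example.

```python
import xml.etree.ElementTree as ET

# Hypothetical OCR output: (extracted text, model confidence) pairs
spans = [("Invoice No. 4711", 0.98), ("Total: $1,337.00", 0.62)]

# Build a <page> element whose <line> children carry confidence as an attribute
page = ET.Element("page", src="scan_001.png")
for text, conf in spans:
    line = ET.SubElement(page, "line", confidence=f"{conf:.2f}")
    line.text = text

xml = ET.tostring(page, encoding="unicode")
print(xml)
# Each <line> now carries its own confidence score inline;
# a downstream consumer (or an LLM) can filter on it without a join.
```

In plain Markdown there is no standard slot for this kind of per-span metadata; you would have to invent an out-of-band convention (HTML comments, a sidecar file, etc.), which is exactly the "second database" the comment objects to.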
Just commenting here so that I can find my way back to this comment later. You perfectly captured the AI hype in one small paragraph.
Hey, why settle for yesteryear's world, with its better accuracy, lower costs, and local deployment, when you can use today's shiny new tool, reinvent the wheel, put everything in the cloud, and get hallucinations for free?
What are the tools from yesterday's world you are referring to? I've had issues with the standard Python libraries for PDF parsing; only some state-of-the-art tools were able to parse the information correctly.
Just commenting here to say the GP is spot on.
If you already have a highly optimized pipeline built yesterday, then sure, keep using it.
But if you're starting to deal with PDFs today, just use Gemini. Use the most human-readable formats you can find, because we know AI will be optimized for understanding those. Don't even think about "stitching XML files" and the like.
Except it's more expensive, it hallucinates, and you're vendor-locked.
For future reference: if you click on the timestamp of a comment, that will bring you to a screen with a "favorite" link. Click that to add the comment to your favorite comments list, which you can find on your profile page.