Comment by kbyatnal

17 days ago

This is spot on, any legacy vendor focusing on a specific type of PDF is going to get obliterated by LLMs. The problem with using an off-the-shelf provider like this is that you get stuck with their data schema. With an LLM, you have full control over the schema, meaning you can parse and extract far more tailored data.

The problem then shifts from "can we extract this data from the PDF" to "how do we teach an LLM to extract the data we need, validate its performance, and deploy it with confidence into prod?"

You could improve your accuracy further by adding some chain-of-thought to your prompt, btw. E.g., make each field in your JSON schema have a `reasoning` field beforehand so the model can reason through how it got to its answer. If you want to take it to the next level, adding `citations` in our experience also improves performance (and when combined with bounding boxes, is powerful for human-in-the-loop tooling).
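A minimal sketch of what that schema shape could look like (the field names and the invoice example are illustrative, not from any particular product). The key trick is ordering: the `reasoning` and `citations` keys come before the value key, so a model generating the JSON left-to-right writes its chain of thought first.

```python
import json

# Hypothetical invoice-extraction schema. Each answer field is preceded by a
# "reasoning" field (the model's chain of thought) and a "citations" field
# (verbatim snippets from the document), so the model commits to its
# reasoning and evidence before emitting the value itself.
INVOICE_SCHEMA = {
    "type": "object",
    "properties": {
        "total_amount_reasoning": {
            "type": "string",
            "description": "Step-by-step explanation of how the total was located",
        },
        "total_amount_citations": {
            "type": "array",
            "items": {"type": "string"},
            "description": "Verbatim snippets from the document supporting the answer",
        },
        "total_amount": {"type": "number"},
    },
    "required": ["total_amount_reasoning", "total_amount_citations", "total_amount"],
}


def validate_response(raw_json: str) -> dict:
    """Parse the model's JSON output and check the CoT fields are present."""
    data = json.loads(raw_json)
    missing = [k for k in INVOICE_SCHEMA["required"] if k not in data]
    if missing:
        raise ValueError(f"model response missing fields: {missing}")
    return data
```

The `reasoning` and `citations` fields can be dropped before storing the result; they exist to steer generation and to power review tooling, not as part of the final record.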

Disclaimer: I started an LLM doc processing infra company (https://extend.app/)

> The problem then shifts from "can we extract this data from the PDF" to "how do we teach an LLM to extract the data we need, validate its performance, and deploy it with confidence into prod?"

A smart vendor will shift into that space - they'll use that LLM themselves, and figure out some combination of finetunes, multiple LLMs, classical methods and human verification of random samples, that lets them not only "validate its performance, and deploy it with confidence into prod", but also sell that confidence with an SLA on top of it.
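As a rough illustration of the "human verification of random samples" piece, the QA gate backing such an SLA might look something like this (function names and the 5% sampling rate are made up for the sketch):

```python
import random

# Hypothetical QA gate: route a reproducible random sample of processed
# documents to human review, and use the review verdicts to estimate the
# accuracy figure a vendor could put behind an SLA.


def sample_for_review(doc_ids: list[str], rate: float = 0.05, seed: int = 0) -> list[str]:
    """Pick a reproducible random sample of documents for human verification."""
    rng = random.Random(seed)
    k = max(1, int(len(doc_ids) * rate))
    return rng.sample(doc_ids, k)


def estimate_accuracy(review_results: list[bool]) -> float:
    """Fraction of human-reviewed extractions judged correct."""
    return sum(review_results) / len(review_results)
```

A fixed seed keeps the sample auditable; a real pipeline would also stratify by document type and track the estimate over time.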

  • That's what we did with our web scraping SaaS - with Extraction API¹ we shifted web-scraped data parsing to support both predefined models for common objects like products, reviews, etc., and direct LLM prompts that we further optimize for flexible extraction.

    There's definitely space here to help the customer realize their extraction vision because it's still hard to scale this effectively on your own!

    1 - https://scrapfly.io/extraction-api

  • What's the value for a customer in paying a vendor that is only a wrapper around an LLM when they can leverage LLMs directly? I can imagine such tools being useful for certain types of users, but for customers like those described here, you're better off replacing any OCR vendor with your own LLM integration.

  • Software is dead; if it isn't a prompt now, it will be a prompt in 6 months.

    Most of what we think of as software today will just be a UI. But UIs are also dead.

    • I wonder about these takes. Have you never worked in a complex system in a large org before?

      OK, sure, we can parse a PDF reliably now, but now we need to act on that data. We need to store it and make sure it ends up with the right people, who need to be notified that the data is available for their review. They then need to make decisions upon that data, possibly requiring input from multiple stakeholders.

      All that back and forth needs to be recorded and stored, along with the eventual decision and all the supporting documents, and that whole bundle needs to be made available across multiple systems, which requires a bunch of ETLs and governance.

      An LLM with a prompt doesn't replace all that.

      3 replies →

    • Software without data moats, vendor lock-in, etc. sure will. All the low-hanging-fruit SaaS is going to get totally obliterated by LLM-built software.

      5 replies →

  • > A smart vendor will shift into that space - they'll use that LLM themselves

    It's a bit late to start shifting now since it takes time. Ideally they should already have a product on the market.

    • There's still time. The situation in which you can effectively replace your OCR vendor by hitting LLM APIs via a half-assed Python script ChatGPT wrote for you has existed for maybe a few months. People are only beginning to realize LLMs have gotten good enough that this is an option. An OCR vendor that starts working on the shift today should easily be able to develop, tune, test, and productize an LLM-based OCR pipeline way before most of their customers realize what's been happening.

      But it is a good opportunity for a fast-moving OCR service to steal some customers from their competition. If I were working in this space, I'd be worried about that, and also about the possibility some of the LLM companies realize they could actually break into this market themselves right now, and secure some additional income.

      EDIT:

      I get the feeling that the main LLM suppliers are purposefully sticking to general-purpose APIs and refraining from competing with anyone on specific services, and that this goes beyond just staying focused. Some of the potential applications, like OCR, could turn into money printers if they moved on them now, and they could all use some more cash to offset what they burn on compute. Is it because they're trying to avoid starting an "us vs. them" war until after they've made everyone else dependent on them?

      2 replies →

    • Never underestimate the power of the second mover. Since the development is happening in the open, someone can quickly pull together the information and cut directly to 90% of the work.

      Then your secret sauce will be your fine tunes, etc.

      Like it or not, AI/LLMs will be a commodity, and this bubble will burst. Moats are hard to build when there's at least one open-source copy of what you just did.

      1 reply →

I have some out-of-print books that I want to convert into nice PDFs/EPUBs (like, reference-quality).

1) I don't mind destroying the binding to get the best quality. Any idea how I do so?

2) I have a multipage double-sided scanner (Fujitsu ScanSnap). Would this be sufficient for the scan portion?

3) Is there anything that determines the font of the book text and reproduces it somehow, and that deals with things like bold and italic, applying them as markdown output or what have you?

4) how do you de-paginate the raw text to reflow into (say) an epub or pdf format that will paginate based on the output device (page size/layout) specification?

Great, I landed on the reasoning and citations bit through trial and error, and the outputs improved for sure.

How did you add bounding boxes, especially across a variety of files?

  • In my open source tool http://docrouter.ai I run both OCR and an LLM (Gemini), using litellm to support multiple LLMs. The user can configure extraction schemas & prompts, and use tags to select which prompt/LLM combination runs on which uploaded PDF.

    LLM extractions are searched for in the OCR output, and if matched, the bounding box is displayed based on the OCR output.

    Demo: app.github.ai (just register an account and try it)
    GitHub: https://github.com/analytiq-hub/doc-router

    Reach out to me at andrei@analytiqhub.com for questions. Am looking for feedback and collaborators.
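For anyone curious, the "search LLM extractions in OCR output" step described above can be approximated with a sliding-window fuzzy match over the OCR words. This is a simplified sketch, not docrouter's actual implementation; the word/box record format and the 0.85 threshold are assumptions for illustration.

```python
from difflib import SequenceMatcher

# Assumed OCR output shape: one record per word, each with its text and a
# bounding box (x0, y0, x1, y1). To locate an LLM-extracted value on the
# page, slide a window of OCR words over the page text, fuzzy-match each
# window against the extraction, and return the boxes of the best match.


def find_bounding_boxes(extracted: str, ocr_words: list[dict], threshold: float = 0.85):
    """Return the bounding boxes covering `extracted`, or None if no good match."""
    target = " ".join(extracted.lower().split())
    n = len(target.split())
    best_score, best_boxes = 0.0, None
    for i in range(len(ocr_words) - n + 1):
        window = ocr_words[i : i + n]
        candidate = " ".join(w["text"].lower() for w in window)
        score = SequenceMatcher(None, target, candidate).ratio()
        if score > best_score:
            best_score, best_boxes = score, [w["box"] for w in window]
    return best_boxes if best_score >= threshold else None
```

The fuzzy ratio absorbs small OCR errors (e.g. `$12.5O` vs `$12.50`); a production version would also handle extractions that span line breaks or hyphenation.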

How do you handle the privacy of the scanned documents?

  • docrouter.ai can be installed on-prem. If using the SaaS version, users can collaborate in separate workspaces, modeled on how Databricks supports workspaces. The back-end DB is Mongo, which keeps things simple.

    One level of privacy is the workspace-level separation in Mongo. But if there is customer interest, other setups are possible - e.g. the way Databricks handles privacy is by actually giving each account its own back-end services, and scoping workspaces within an account.

    That is a good possible model.

  • We work with Fortune 500s in sensitive industries (healthcare, fintech, etc.). Our policies are:

    - data is never shared between customers

    - data never gets used for training

    - we also configure data retention policies to auto-purge after a time period