Has anyone run a search to see how many other files have base64 in the scans?
https://web.archive.org/web/20260206040716/https://what2wear...
Nerdsnipe confirmed :)
Claude Opus came up with this script:
https://pastebin.com/ntE50PkZ
It produces a somewhat-readable PDF (first page at least) with this text output:
https://pastebin.com/SADsJZHd
(I used the cleaned output at https://pastebin.com/UXRAJdKJ mentioned in a comment by Joe on the blog page)
> It produces a somewhat-readable PDF (first page at least) with this text output
Any chance you could share a screenshot / re-export it as a (normalized) PDF? I’m curious about what’s in there, but all of my readers refuse to open it.
So it was a public event attended by 450 people:
https://www.mountsinai.org/about/newsroom/2012/dubin-breast-...
https://www.businessinsider.com/dubin-breast-center-benefit-...
Even names match up, but oddly the date is different.
Your links are for the inaugural (first) ball in December 2011; OP's text referred to a second annual ball in December 2012.
> it’s safe to say that Pam Bondi’s DoJ did not put its best and brightest on this
Or worse. She did.
There are a few messaging conversations between FBI agents early on that are kind of interesting. It would be very interesting to see similar conversations about the releases themselves. I sometimes wonder if some of it was malicious compliance... i.e., do a shitty job so the info gets out before it gets re-redacted... we can hope...
I mean, the internet is finding all her mistakes for her. She is actually doing alright with this. Crowdsource everything, fix the mistakes. lol.
This would be funnier if it wasn’t child porn being unredacted by our government
I wonder if this could be intentional. If the datasets are contaminated with CSAM, anybody with a copy is liable to be arrested for possession.
More likely it's just an oversight, but it could also be CYA for dragging their feet, like "you rushed us, and look at these victims you've retraumatized". There are software solutions to find nudity and they're quite effective.
The issue is that these mistakes can't really be fixed: once they're discovered, it doesn't matter if the documents are eventually re-redacted.
Let's see her sued for leaking PII. Here in Europe, she'd be mincemeat.
Yeah - they'll take these lessons learned for future batches of releases.
Tesseract supports being trained for specific fonts; that would probably be a good starting point.
https://pretius.com/blog/ocr-tesseract-training-data
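Before doing full font training, a cheaper first pass (sketched below; pytesseract plus a local Tesseract install is assumed, and the filename is made up) is to constrain the recognizer to the base64 alphabet with a uniform-block page segmentation mode. It won't settle 1-vs-l on its own, but it stops misreads that can't appear at all:

    from PIL import Image            # pillow
    import pytesseract               # thin wrapper around the tesseract CLI

    # Hypothetical first pass before any font-specific training:
    # --psm 6 treats the page as one uniform block of text, and the whitelist
    # keeps Tesseract from emitting characters outside the base64 alphabet.
    # Note: this does NOT resolve 1-vs-l (both are legal base64); it only
    # removes impossible characters like '|' or stray punctuation.
    # (Recent Tesseract builds honor the whitelist with the default LSTM
    # engine; some 4.0-era builds ignored it.)
    B64_CHARS = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/="
    config = f"--psm 6 -c tessedit_char_whitelist={B64_CHARS}"

    text = pytesseract.image_to_string(Image.open("page-001.png"), config=config)
    print(text)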
It decodes to a binary PDF, and there are only so many valid encodings. So this is how I would solve it:
1. Get an open source pdf decoder
2. Decode bytes up to first ambiguous char
3. See if the next bits are valid with a 1; if not, it's an l
4. Might need to backtrack if both 1 and l were valid
By being able to quickly try each candidate character in the middle of the decoding process, you avoid restarting the decode from scratch each time. That makes it feasible to test the permutations automatically and more or less linearly.
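A rough Python sketch of that loop (everything here is illustrative: '?' marks the OCR-ambiguous 1/l positions, and the plausibility check is a stand-in for a real hook into a PDF parser):

    import base64

    AMBIGUOUS = "?"  # placeholder for each position OCR couldn't call as 1 or l

    def decoded_prefix(b64_text: str) -> bytes:
        """Decode the longest 4-character-aligned prefix of the resolved text."""
        usable = b64_text[: len(b64_text) - len(b64_text) % 4]
        try:
            return base64.b64decode(usable)
        except Exception:
            return b""

    def plausible(raw: bytes) -> bool:
        """Stand-in sanity check on the decoded bytes so far.

        Near the start of the file this can be as crude as matching the
        '%PDF-' header; deeper in, both '1' and 'l' are legal base64, so the
        real check has to come from the PDF/flate structure itself.
        """
        return b"%PDF-".startswith(raw[:5])

    def solve(b64_text: str, start: int = 0):
        """Resolve ambiguous characters left to right, backtracking on dead ends."""
        nxt = b64_text.find(AMBIGUOUS, start)
        if nxt == -1:
            return b64_text if plausible(decoded_prefix(b64_text)) else None
        for guess in ("1", "l"):
            candidate = b64_text[:nxt] + guess + b64_text[nxt + 1:]
            if plausible(decoded_prefix(candidate[: nxt + 1])):
                resolved = solve(candidate, nxt + 1)
                if resolved is not None:
                    return resolved
        return None  # both guesses eventually fail: caller backtracks

With thousands of ambiguous characters you'd want an explicit stack rather than recursion, and the pruning only really pays off once the plausibility check looks inside the PDF objects and streams rather than just the header.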
Sounds like a job for afl
This proves my paranoia right: you should print and rescan redactions. That, or take screenshots of the redacted PDF and convert them back to a PDF.
You can use the justice.gov search box to find several different copies of that same email.
The copy linked in the post:
https://www.justice.gov/epstein/files/DataSet%209/EFTA004004...
Three more copies:
https://www.justice.gov/epstein/files/DataSet%2010/EFTA02153...
https://www.justice.gov/epstein/files/DataSet%2010/EFTA02154...
https://www.justice.gov/epstein/files/DataSet%2010/EFTA02154...
Perhaps having several different versions might make it easier.
This is one of those things that seems like a nerd snipe but would be more easily accomplished through brute forcing it. Just get 76 people to manually type out one page each, you'd be done before the blog post was written.
Or one person types 76 pages. This is a thing people used to do, not all that infrequently. Or maybe you have one friend who will help–cool, you just cut the time in half.
Typing 76 pages is easy when it's words in a language you understand. WPM is going to be incredibly slow when you actually have to read every character. On top of that, no spaces and no spellcheck so hopefully you didn't miss a character.
You think compelling 76 people to honestly and accurately transcribe files is something that's easy and quick to accomplish.
> Just get 76 people
I consider myself fairly normal in this regard, but I don't have 76 friends to ask to do this, so I don't know how I'd go about doing this. Post an ad on craigslist? Fiverr? Seems like a lot to manage.
First, build a fanbase by streaming on Twitch.
Amazon Mechanical Turk?
Why not just try every permutation of (1,l)? Let’s see, 76 pages, approx 69 lines per page, say there’s one instance of [1l] per line, that’s only… uh… 2^5244 possibilities…
Hmm. Anyone got some spare CPU time?
It should be much easier than that. You should be able to serially test whether each edit decodes to a sane PDF structure, reducing the cost much like cracking passwords against a server that doesn't use a constant-time memcmp. Are PDFs typically compressed by default? If so, that makes it even easier given the built-in checksums. But it's just not something you can do by throwing data at existing tools. You'll need to build a testing harness with instrumentation deep in the bowels of the decoders. This kind of work is the polar opposite of what AI code generators or naive scripting can accomplish.
I wonder if you could leverage some of the fuzzing frameworks that tools like Jepsen rely on. I'm sure there's got to be one for PDF generation.
On the contrary, that kind of one-off tooling seems a great fit for AI. Just specify the desired inputs, outputs and behavior as accurately as possible.
pdftoppm and Ghostscript (invoked via ImageMagick) re-rasterize full pages to generate their output. That's why it was slow. Even worse with a Q16 build of ImageMagick. Better to extract the scanned page images directly with pdfimages or mutool.
Followup: pdfimages is 13x faster than pdftoppm
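For reference, a minimal sketch of that extraction step (assumes poppler-utils' pdfimages is on PATH; the filenames are placeholders):

    import subprocess
    from pathlib import Path

    # Pull the embedded scan images straight out of the PDF instead of
    # re-rasterizing whole pages; roughly equivalent to running:
    #   pdfimages -png input.pdf out/page
    out = Path("out")
    out.mkdir(exist_ok=True)
    subprocess.run(["pdfimages", "-png", "input.pdf", str(out / "page")], check=True)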
Given how much of a hot mess PDFs are in general, it seems like it would behoove the government to just develop a new, actually safe format to standardize around for government releases and make it open source.
Unlike everyone else who has attempted a PDF replacement, the federal government doesn't have to worry about adoption.
You’re thinking about this as a nerd.
It’s not a tools problem, it’s a problem of malicious compliance and contempt for the law.
Even the previous justice departments struggled with PDFs. The way they handled it was scrubbing all possible metadata and uploading it as images.
For example, when the Mueller report was released with redactions, it had no searchable text or metadata, because they were worried about exactly this kind of data leak.
However, vast troves of unsearchable text are not a huge win for transparency.
PDFs are just a garbage format and even good administrations struggle.
JPEG?
That's not really comparable - It needs to be editable and searchable.
Lossy
Bummer that it's not December - the https://www.reddit.com/r/adventofcode/ crowd would love this puzzle
If only Base64 had used a checksum.
"had used"? Base64 is still in very common use, specifically embedded within JSON and in "data URLs" on the Web.
"had" in the sense of when it was designed and introduced as a standard
Wait would this give us the unredacted PDFs?
That's the idea yeah. There are other people actively working on this. You can follow vx-underground on twitter. They're tracking it.
I think it's the PDF files that were attached to the emails, since they're base64 encoded.
I took a stab at training Tesseract and holy jeebus is their CLI awful. Just an insanely complicated configuration procedure.
Love this, absolutely looking forward to some results.
I'm only here to shout out fish shell, a shell finally designed for the modern world of the 90s
> …but good luck getting that to work once you get to the flate-compressed sections of the PDF.
A dynamic programming type approach might still be helpful. One version or other of the character might produce invalid flate data while the other is valid, or might give an implausible result.
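A minimal sketch of that signal, assuming you've already located a FlateDecode stream's bytes inside the candidate PDF: zlib's incremental decompressor will usually error out shortly after a wrong 1/l guess corrupts the bitstream, and the stream's trailing Adler-32 checksum gives a final verdict once the whole stream is resolved.

    import zlib

    def flate_prefix_plausible(stream_bytes: bytes) -> bool:
        """Return False if a (possibly truncated) FlateDecode stream is already broken.

        A clean partial decompress only means "still plausible"; corruption is
        usually, though not always, detected within a few bytes of the bad guess.
        """
        d = zlib.decompressobj()
        try:
            d.decompress(stream_bytes)
            return True
        except zlib.error:
            return False

Something like flate_prefix_plausible() would slot in as the pruning check in a backtracking search like the one sketched earlier in the thread.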
Time to flex those Leetcode skills.
Honestly, this is something that should've been kept private, until each and every single one of the files is out in the open. Sure, mistakes are being made, but if you blast them onto the internet, they WILL eventually get fixed.
Cool article, however.
On one hand, the DOJ gets shit because it was taking too long to produce the documents; on the other, they get shit because, across 3 million pages of documents, there are mistakes in the redacting.
What they are redacting is pretty questionable though. Entire pages being suspiciously redacted with no explanation (which they are supposed to provide). This is just my opinion, but I think it's pretty hard to defend them as making an honest and best effort here. Remember they all lied about and changed their story on the Epstein "files" several times now (by all I mean Bondi, Patel, Bongino, and Trump).
It's really really hard to give them the benefit of the doubt at this point.
Considering the justice-to-document ratio, that's kind of on them regardless.
This one is irresistible to play with. Indeed a nerd snipe.
I doubt the PDF would be very interesting. There are enough clues in the human-readable parts: it's an invite to a benefit event in New York (filename calls it DBC12) that's scheduled on December 10, 2012, 8pm... Good old-fashioned searching could probably uncover what DBC12 was, although maybe not, it probably wasn't a public event.
The recipient is also named in there...
There's potentially a lot of files attached and printed out in this fashion.
The search on the DOJ website (which we shouldn't trust), given the query: "Content-Type: application/pdf; name=", yields maybe a half dozen or so similarly printed BASE64 attachments.
There's probably lots of images as well attached in the same way (probably mostly junk). I deleted all my archived copies recently once I learned about how not-quite-redacted they were. I will leave that exercise to someone else.