Comment by ctoth

1 month ago

I didn't use Claude Code. I just pasted it directly into the web interface and said "I can't read this, can you help?" and then excerpted the result so you sighted folks didn't have to reread; you could just verify the content matched.

So basically this person has put up a big "fuck you" sign to people like me... while at the same time not protecting their content from actual AI (if this technique actually caught on, it would be trivial to reverse in your data ingestion pipeline).
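Just to show how trivial: here's a minimal sketch of what that reversal step could look like. It assumes the obfuscating webfont keeps standard glyph names ("a".."z", "A".."Z") and only shuffles its character map; the filenames and variable names are hypothetical.

```python
# Sketch only: recover the substitution table from the obfuscating webfont
# itself, assuming glyphs keep their standard names while the cmap is shuffled.
import string

from fontTools.ttLib import TTFont  # pip install fonttools

LETTER_GLYPHS = set(string.ascii_letters)

def build_decode_table(font_path: str) -> dict[int, str]:
    """Map each scrambled codepoint to the letter its glyph actually draws."""
    cmap = TTFont(font_path)["cmap"].getBestCmap()  # codepoint -> glyph name
    return {cp: name for cp, name in cmap.items() if name in LETTER_GLYPHS}

# table = build_decode_table("obfuscated.ttf")  # hypothetical font file
# readable = scraped_text.translate(table)
```

If the glyph names were scrambled too, you'd match glyph outlines against a reference font instead; either way it's a one-time cost per font, not per page.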

But it's "made with ♥" (the footer says so).

(He's broken mainstream browsers, too: Ctrl+F doesn't work on the page.)

GPT 5.2 extracted the correct text, but it definitely struggled: it took 3m36s, had to write a script to do it, and messed up some of the formatting. It actually found this thread, but rejected that as a solution in its CoT: "The search result gives a decoded excerpt, which seems correct, but I’d rather decode it myself using a font mapping."

I doubt decoding would be economical unless significant numbers of people were doing this, but it is possible.

  • This is the point I was making downthread: no scraper will spend 3m36s of frontier LLM time to get <100 KB of data, which is why his method would technically achieve what he asked for. Someone alluded to this further down the thread, but I wonder whether one-to-one letter substitution specifically would still expose some extractable information to the LLM, even without decoding (see the sketch below).
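To make that concrete, here is a toy illustration (a random permutation over made-up text, nothing from the actual post) of how a one-to-one substitution preserves every bit of frequency structure:

```python
# Toy demo: a monoalphabetic substitution relabels letters but preserves all
# statistical regularities, so the "encrypted" text still leaks structure.
import random
import string
from collections import Counter

def random_substitution(text: str, seed: int = 0) -> str:
    rng = random.Random(seed)
    scrambled = "".join(rng.sample(string.ascii_lowercase, 26))
    return text.lower().translate(str.maketrans(string.ascii_lowercase, scrambled))

plain = "the quick brown fox jumps over the lazy dog " * 50
cipher = random_substitution(plain)

# Same frequency ranking, just with relabeled letters: exactly the handle
# classical frequency analysis (and plausibly an LLM) can latch onto.
print(Counter(c for c in plain if c.isalpha()).most_common(3))
print(Counter(c for c in cipher if c.isalpha()).most_common(3))
```

Word lengths, spacing, and n-gram statistics survive the substitution the same way, so I'd expect at least topic-level signal to remain extractable.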

Yes, it's worse for screen readers; I listed that alongside the other drawbacks I acknowledged. Because of those drawbacks, I don't intend to apply this method anywhere else; accessibility matters.

It's a proof of concept, and maybe a starting point for somebody else who wants to tackle this problem.

Can LLMs detect and decode the text? Yes, but I'd wager that data cleaning doesn't go so far as to actually decode the text after scraping.

I didn’t think you did use Claude Code! I was just saying that with AI agents these days, even more thoroughly obfuscated text can probably be de-obfuscated without much effort.

I suppose I don’t know data ingestion that well. Is de-obfuscating really something they do? If I were maintaining such a pipeline and found the associated garbage data, I doubt I’d bother adding a step for the edge case of finding the right Caesar cipher to make the text coherent. Unless I were fine-tuning a model on a particular topic and a critical resource/expert had obfuscated their content, I’d probably just drop it and move on.
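(For scale, the step I’d be declining to add really is small. A throwaway sketch, assuming the obfuscation genuinely were just a Caesar shift; the frequency table is approximate:)

```python
# Throwaway sketch: brute-force all 26 Caesar shifts and keep the candidate
# whose letters best match typical English frequencies.
import string

ENGLISH_FREQ = {  # approximate relative frequencies of English letters
    'e': .127, 't': .091, 'a': .082, 'o': .075, 'i': .070, 'n': .067,
    's': .063, 'h': .061, 'r': .060, 'd': .043, 'l': .040, 'c': .028,
    'u': .028, 'm': .024, 'w': .024, 'f': .022, 'g': .020, 'y': .020,
    'p': .019, 'b': .015, 'v': .010, 'k': .008, 'j': .002, 'x': .002,
    'q': .001, 'z': .001,
}

def shift(text: str, k: int) -> str:
    lower = string.ascii_lowercase
    return text.lower().translate(str.maketrans(lower, lower[k:] + lower[:k]))

def english_score(text: str) -> float:
    letters = [c for c in text if c.isalpha()]
    return sum(ENGLISH_FREQ.get(c, 0.0) for c in letters) / max(len(letters), 1)

def crack_caesar(ciphertext: str) -> str:
    return max((shift(ciphertext, k) for k in range(26)), key=english_score)

print(crack_caesar("wkh vfudshu zloo ilqg d zdb"))  # -> the scraper will find a way
```

Cheap to write, sure, but somebody still has to notice the garbage, diagnose it, and decide it’s worth a pipeline stage.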

That said, after watching my father struggle deeply with the complex computer usage his job requires when he developed cataracts, I don’t see any such method as tenable. The proverbial “fuck you” to the disabled folks who interact with one’s content is deeply unacceptable. Accessible web content should be mandatory in the same way ramps and handicap parking are, if not more so. For that matter, it shouldn’t take seeing a loved one slowly and painfully lose their able body to give a shit about accessibility. Point being, you’re right to be pissed, and I’m glad this post drew a response from somebody with direct personal experience of needing accessible content so quickly after it went up.