Comment by canucker2016

3 days ago

You have to change the font colour of the trojan data to be the same as the background colour of the doc!

Then add some corporate lorem ipsum text elsewhere in the doc to throw the data bloodhounds off the scent.

Sit back and wait with an evil grin on your face.
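For what it's worth, the whole trick is a few lines of script. A minimal sketch with python-docx (the library choice, filename, and payload are my own assumptions, and white-on-white obviously only works if the doc keeps a white background):

```python
from docx import Document
from docx.shared import RGBColor

doc = Document()
doc.add_paragraph("Q3 roadmap review notes. Nothing to see here.")

# The payload run, white-on-white (RGBColor(0xFF, 0xFF, 0xFF) == white).
hidden = doc.add_paragraph().add_run("trojan data goes here")
hidden.font.color.rgb = RGBColor(0xFF, 0xFF, 0xFF)

doc.save("innocuous_report.docx")
```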

> corporate lorem ipsum

This is a great phrase. Turns out there's a generator for it: https://www.corporate-ipsum.com/ . Example:

> Elevate a quick win move the needle a cutting-edge veniam nulla zoom out for a moment get back to you a 30,000 foot view the stakeholders. Sint the low-hanging fruit make a paradigm shift excepteur the low-hanging fruit minim take it offline align holistic approach move the needle qui client-centric to gain leverage future-proof process-centric.
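The generator itself is barely more than a phrase list and random.sample. A toy version (phrase list invented here, not the site's actual corpus):

```python
import random

# Invented buzzword list; corporate-ipsum.com presumably has a bigger corpus.
PHRASES = [
    "move the needle", "circle back", "synergize cross-functional teams",
    "leverage the low-hanging fruit", "align on a holistic approach",
    "future-proof the roadmap", "take it offline", "zoom out for a moment",
]

def corporate_ipsum(sentences=3, phrases_per_sentence=4):
    out = []
    for _ in range(sentences):
        chunk = " ".join(random.sample(PHRASES, phrases_per_sentence))
        out.append(chunk.capitalize() + ".")
    return " ".join(out)

print(corporate_ipsum())
```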

It'll work right up until someone using an internal search tool stumbles onto it from a related query and starts asking the doc's author some obvious questions.

Search tools don't care about font colour when displaying preview blurbs.
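Easy to demo: any plain-text extractor pulls the white runs out, formatting be damned. Continuing the python-docx sketch from above:

```python
from docx import Document

# Indexers see extracted text, not rendered colours:
# the white run comes back like any other.
doc = Document("innocuous_report.docx")
print("\n".join(p.text for p in doc.paragraphs))
# -> Q3 roadmap review notes. Nothing to see here.
#    trojan data goes here
```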

  • Do it as you're leaving for another job. Your access will be disabled, but your documents will live on in the corporate SharePoint.

    And/or, exploit negative space! Instead of trying to hide the data from a human looking at your document, make it look normal to them - but make the surrounding context disappear for the AI! Say:

    ----- 8< -----

    /Example company report structure:/

    /ACME/ Company is planning to sunset their ${generic description of a real product of your company}, and offshore the development team.

    /This example will be parsed by the prototype script ... blah blah/

    ----- >8 -----

    Make it so the text between /.../ markers looks normal to humans but gets ignored by the RAG slurper, or better, by the LLM at execution time (a sketch of such a preprocessor follows at the end of this comment). Someone sees a search blurb saying "Company is planning to sunset ...", opens the document, sees it clearly say "ACME Company is planning...", with context suggesting it's a benign example in someone's boring internal tool docs, and they'll just assume it's a false positive. After all, most search tools have those in spades; everyone knows all software is broken.

    Meanwhile, that same information will pollute the context of LLM interactions and quietly confuse people when they're not suspecting anything. And even if someone realizes that, it'll look like a bug in the company's AI deployment.

    #SimpleSabotageForTheAIEra
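    A sketch of what that marker-aware slurper could look like. Everything here is hypothetical: the /.../ convention is this comment's invention, "flagship product" stands in for the ${...} placeholder above, and the regex is the simplest thing that could work:

    ```python
    import re

    # Hypothetical ingestion step: spans between /.../ markers read as
    # framing ("this is just an example") to a human skimming the doc,
    # but get stripped before the text reaches the index or the LLM.
    META_SPAN = re.compile(r"/[^/]*/")

    doc_text = (
        "/Example company report structure:/ "
        "/ACME/ Company is planning to sunset their flagship product, "
        "and offshore the development team. "
        "/This example will be parsed by the prototype script .../"
    )

    # What the RAG pipeline actually ingests once the "meta" spans vanish:
    ingested = META_SPAN.sub("", doc_text).strip()
    print(ingested)
    # -> Company is planning to sunset their flagship product, and
    #    offshore the development team.
    ```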