Comment by csemple
7 days ago
OP here. *** I'm seeing comments about AI-generated writing. This is my voice—I've been writing in this style for years in government policy docs. Happy to discuss the technical merits rather than the prose style. ***
At Ontario Digital Service, we built COVID-19 tools, digital ID, and services for 15M citizens. We evaluated LLM systems to improve services but could never procure them.
The blocker wasn't capability—it was liability. We couldn't justify "the model probably won't violate privacy regulations" to decision-makers who need to defend "this system cannot do X."
This post demonstrates the "Prescription Pad Pattern": treating authority boundaries as persistent state that mechanically filters tools.
The logic: Don't instruct the model to avoid forbidden actions—physically remove the tools required to execute them. If the model can't see the tool, it can't attempt to call it.
This is a reference implementation. The same pattern works for healthcare (don't give diagnosis tools to unlicensed users), finance (don't give transfer tools to read-only sessions), or any domain where "98% safe" means "0% deployable."
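For the curious, here is a minimal sketch of the core mechanism (illustrative names only, not the repo's actual API): the authority level is persistent session state, set out-of-band, and the tool list handed to the model is derived from it mechanically.

    from dataclasses import dataclass

    # Full tool registry: every tool declares the authority it requires.
    TOOL_REGISTRY = {
        "read_record":   {"requires": "read"},
        "update_record": {"requires": "write"},
        "issue_refund":  {"requires": "approve"},
    }

    AUTHORITY_RANK = {"none": 0, "read": 1, "write": 2, "approve": 3}

    @dataclass
    class Session:
        user_id: str
        authority: str = "read"  # persisted per session; never set by the model

    def visible_tools(session: Session) -> dict:
        # Anything filtered out here is simply absent from the model's tool
        # schema, so the model cannot even attempt to call it. There is no
        # prompt-level "please don't" to bypass.
        rank = AUTHORITY_RANK[session.authority]
        return {
            name: spec
            for name, spec in TOOL_REGISTRY.items()
            if AUTHORITY_RANK[spec["requires"]] <= rank
        }

    session = Session(user_id="u-123", authority="read")
    print(list(visible_tools(session)))  # ['read_record']

In a read-only session the model never sees update_record or issue_refund, so there is nothing to jailbreak.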
Repo: https://github.com/rosetta-labs-erb/authority-boundary-ledge...
>As Head of Product for the Ontario Digital Service
Ah, this explains a lot about the state of Canada actually.
This was so clearly LLM-generated that I couldn't get through the whole thing.
In a few years everyone will be talking like this -- humans and LLMs alike. We're not there yet, but our LLM masters will train us soon enough.
I am only half-joking. Kids talk to LLMs to get their homework done, people use them for therapy or companionship, for work, even to "Google things". Pretty soon you'll find yourself at a bar, wanting to call your friend a dumbass for saying some stupid shit, and instead you'll hear yourself say "You're absolutely right, Jim! ..."
I guess working in government has put me ahead of the curve on sounding like a robot.
Hi OP, can you rewrite the article in your own words?
I second this. Very difficult to read through the slop. I get that it saves time, but it's verbose and repetitive in all the wrong places.
I'm Canadian (not Ontario), so I really wanted to enjoy reading this as a peek inside what IT is like in that environment, but the LLM-generated headers and patterns in the piece really put me off and I had to stop reading after a couple of minutes, I'm afraid.
I think this article would really benefit from being rewritten in your own words. The concept is good.
> The concept is good
Unfortunately, it's not. Once you read through the slop, the implementation is still getting a pass/fail security response from the LLM, which is exactly what the premise of OP's article is railing against.
> The blocker wasn't capability—it was liability.
Yikes (regarding the AI patterns in the comment)
OP, thank you for taking the time to write and post this! It was an interesting take on a very difficult problem.
FWIW, I have been reading policy documents for a long time and I thought you sounded rather human and natural… Just very professional! :)
What exactly is "Ontario Digital Service" in this context?
A department of the government of Ontario.
(Now dead: https://thinkdigital.ca/podcast/the-end-of-the-ontario-digit... )
3 em dashes — in one single comment. Even your comment is AI-generated.
Yep, I use em dashes all the time—still a human typing this.
Same here. It’s not the signal everybody thinks it is.
Please try writing this article yourself. It's unreadable as-is due to the slop.