Comment by funnybeam
3 days ago
I really think we should stop using the term ‘guard rails’ as it implies a level of control that really doesn’t exist.
These things are polite suggestions at best and it’s very misleading to people that do not understand the technology - I’ve got business people saying that using LLMs to process sensitive data is fine because there are “guardrails” in place - we need to make it clear that these kinds of vulnerabilities are inherent in the way gen AI works and you can’t get round that by asking them nicely
It's interesting that companies don't provide concrete definitions or examples of what their AI guardrails are. IBM's definition suggests to me they see it as imperative to continue moving fast (and breaking things) no matter what:
Think of AI guardrails like the barriers along a highway: they don’t slow the car down, but they do help keep it from veering off course.
https://www.ibm.com/think/topics/ai-guardrails
I think you’re absolutely right. These companies know full well that their “guardrails” are ineffective but they just don’t care because they’ve sunk so much money into AI that they are desperate to pretend that everything’s fine and their investments were worthwhile.
I was on a call with Microsoft the other day when (after being pushed) they said they had guardrails in place “to block prompt injection” and linked to an article which said “_help_ block prompt injection”. The careful wording is deliberate I’m sure.