Comment by maglite77
6 hours ago
Something I'm surprised this article didn't touch on, and which is driving many organizations to be conservative in "how much" AI they release for a given product: prompt-jacking and data privacy.
I, like many others in the tech world, am working with companies to build out similar features. 99% of the time, data protection teams and legal are looking for ways to _remove_ areas where users can supply prompts / define open-ended behavior. Why? Because there is no 100% guarantee that the LLM won't behave in a manner that undermines your product, leaks data, or makes your product look terrible - and that lack of a guarantee makes both of the aforementioned offices very, very nervous (coupled with a limited understanding of the technical aspects involved).
The example of reading emails from the article is another type of behavior that usually gets an immediate "nope", as it involves sending customer data to the LLM service - and that requires all kinds of gymnastics with data protection agreements and GDPR considerations. It may be fine for smaller startups, but the larger companies / enterprises are not down with it for the initial delivery of AI features.