Comment by pjc50
5 hours ago
New variant on "I followed my satnav blindly and now I'm stuck in the river", except less reliable.
It is however fraud on the part of the travel company to advertise something that doesn't exist. Another form of externalized cost of AI.
> It is however fraud on the part of the travel company to advertise something that doesn't exist
Just here to point out that from a legal perspective, fraud is deliberate deception.
In this case a tourist agency outsourced the creation of its marketing material to a company that used AI to produce it, hallucinations included. From the article it doesn't look like either company advertised the details knowing they were wrong, or had any intent to deceive.
Posting wrong details on a blog out of carelessness, without deliberate ill intent, is no more fraud than using the wrong definition of fraud is.
The standard is to add disclaimers like "AI responses may include mistakes." The chatbot they used to generate that text would have mentioned that.
Everybody knows AI makes stuff up. It's common knowledge.
To omit that disclaimer, the author needs to take responsibility for fact checking anything they post.
Skipping that step, or leaving out the disclaimer, is not carelessness, it is willful misrepresentation.
> To omit that disclaimer, the author needs to take responsibility for fact checking anything they post.
> Skipping that step, or leaving out the disclaimer, is not carelessness, it is willful misrepresentation.
Couldn't help but notice you gave some very convincing legal advice without any disclaimer that you are not a lawyer, a judge, or an expert on Australian law. Your own litmus test characterizes you as a fraudster. The other mandatory components of fraud (knowledge, intent, damages) don't even apply, as you said yourself.
Australian law isn't at all weird about this. Its definition (simplified) hinges on intentional deception to obtain a gain or cause loss to others, knowing the outcome.
There has to be a clause for "willful disregard for the truth", no? Having your lying machine come up with plausible lies for you and publishing them without verification is no better than coming up with the lies yourself. What really protects them from a fraud accusation is that these blog posts were just content marketing; they weren't making money off them directly.
Even in civil law, where the evidentiary bar is lower, it's hard to make the case that someone who posted wrong details on a free blog, and didn't make money off it, should cover the damages you incurred by traveling on that advice alone. Not making any reasonable effort to fact-check cuts both ways.
This is a matter of contract law between the two companies, but people who happened to read an internet blog, took everything at face value, and, more importantly, didn't use that travel agency's services can't really claim fraud.
Just being wrong or making mistakes isn't fraud. Otherwise 99% of people saying something on the internet would be on the hook for damages again and again.
And using autocomplete to write travel advertisements has to fall under this category?
Seems closer to fraud on the part of the marketing company they outsourced to.
I doubt they commissioned articles on things that don't exist. If you use AI to perform a task someone has asked you to do, it is your responsibility to ensure it has actually done that thing properly.
The consequences for publishing wrong AI output need to be a lot higher if we want to limit slop. There is of course room for LLMs and their hallucinations to contribute meaningful things, but we need at least a screaming all-caps disclaimer on content that looks human-generated but wasn't (and absent that disclaimer, or if the disclaimer is insufficiently prominent, false statements should be treated as deliberate fraud).