Comment by mikert89
11 hours ago
There's another thing happening that people haven't really heard much about: ChatGPT Pro is really good at making legal arguments. People who previously would never have filed something like a discrimination lawsuit can now use ChatGPT to understand how to respond to managers' emails, and to proactively send emails that point out discrimination in a non-threatening manner, in ways that effectively set a legal trap. I think people are drastically underestimating what's going to happen over the next 10 years, and how bad the discrimination is in a lot of workplaces.
> ChatGPT Pro is really good at making legal arguments
It’s good at initiating them. I’ve started to see folks use LLM output directly in legal complaints, and it’s frankly a godsend to the other side, since blatantly making shit up is usually enough to swing a regulator, judge, or arbitrator toward dismissal with prejudice.
Posted my response below; you have no idea how impactful this is going to be.
That's all well and good, but anyone who does this will likely just be terminated ASAP without cause, possibly as part of a multi-person layoff that makes it appear innocuous.
First call should be to an employment attorney and the EEOC, no matter what, before you sign anything.
https://www.eeoc.gov/how-file-charge-employment-discriminati...
That’s not quite right. To win a discrimination case, you typically need to document a pattern of behavior over time—often a year. Most people can’t afford a lawyer to manage that. But if you’re a regular employee, you can use ChatGPT to draft calm, non-threatening Slack messages that note discriminatory incidents, and keep doing that consistently. With diligent, organized evidence you absolutely can build a case; the hard part is assembling the proof, and ChatGPT is great at helping you gather and frame it.
> To win a discrimination case, you typically need to document a pattern of behavior over time—often a year
Where did you hear this?
> use ChatGPT to draft calm, non-threatening Slack messages that note discriminatory incidents and keep doing that consistently
This is terrible advice. It not only risks making those messages inadmissible, it casts doubt on everything else you say.
Using an LLM to take the emotion out of your breadcrumbs is fine. Having it draft generic boilerplate, or worse, letting it hallucinate, may actually flip liability onto you, particularly if you weren't authorised to disclose the contents of those messages to an outside LLM.