Comment by mikert89
1 day ago
With respect, it seems you haven’t kept up with how people actually use ChatGPT. In discrimination cases—especially disparate treatment—the key is comparing your performance, opportunities, and outcomes against peers: projects assigned, promotions, credit for work, meeting invites, inclusion, and so on. For engineers, that often means concrete signals like PR assignments, review comments, approval times, who gets merges fast, and who’s blocked.
Most employees don’t know what data matters or how to collect it. ChatGPT Pro (GPT-5 Pro) can walk someone through exactly what to track and how to frame it: drafting precise, non-threatening documentation, escalating via well-written emails, and organizing evidence. I first saw this when a seed-stage startup I know lost a wage claim after an employee used ChatGPT to craft highly effective legal emails.
This is the shift: people won’t hire a lawyer to explore “maybe” claims on a $100K tech job—but they will ask an AI to outline relevant doctrines, show how their facts map to prior cases, and suggest the right records to pull. On its own, ChatGPT isn’t a lawyer. In the hands of a thoughtful user, though, it’s close to lawyer-level support for spotting issues, building a record, and pushing for a fair outcome. The legal system will feel that impact.
> they will ask an AI to outline relevant doctrines, show how their facts map to prior cases, and suggest the right records to pull
This is correct usage. Letting it draft notes and letters is not. (Procedural emails, why not.) Essentially, ChatGPT Pro lets one do e-discovery and preliminary drafting to a degree that’s good enough for anything less than a few million dollars.
I’ve worked with startups in San Francisco, where lawyers readily take these cases on contingency because they’re so easy to win. The only times I’ve urged companies to fight back have been recent, because the emails and notes the employee sent were clearly LLM-generated—and, in one instance, materially false. In the one case the company insisted on pursuing, that let the entire corpus of claims be cast into doubt and dismissed. Again, this was in San Francisco, a notoriously employee-friendly jurisdiction.
I’ve invested in legal AI efforts. I’d be thrilled if their current crop of AIs were my adversary in any case. (I’d also sooner take the bet on ignoring an LLM-drafted complaint than a human-written one, lawyer or not.)
No, I think the big unlock is that a bunch of people who would never file lawsuits can at least approach it. You obviously can’t copy-paste its email output, but you can definitely verify which terms are legal terms of art and how to position certain phrases.
> the big unlock is a bunch of people that would never file lawsuits can at least approach it
Totally agree again. LLMs are great at collating evidence and helping you decide whether you have a case and, if so, at convincing either a lawyer to take it or your adversary to settle.
Where they backfire is when people use them to send chats or demand letters. You suggested this, and this is the part where I’m pointing out that I am personally familiar with multiple instances where doing so turned a case the person could have won on contingency into one they couldn’t win, irrespective of which lawyers they retained.
The legal system is extremely biased in favor of those who can afford an attorney. Moreover, the more expensive the attorney, the more biased it is in their favor.
It is in effect not a legal system, but a system to keep lawyers and judges in business with intentionally vaguely worded laws and variable interpretations.