Comment by OutOfHere

9 hours ago

That's all well and good, but anyone who does this will likely just be terminated ASAP without cause, possibly as part of a multi-person layoff that makes it appear innocuous.

That’s not quite right. To win a discrimination case, you typically need to document a pattern of behavior over time, often a year or more. Most people can’t afford a lawyer to manage that. But a regular employee can use ChatGPT to draft calm, non-threatening Slack messages that note discriminatory incidents, and keep doing so consistently. The hard part of such a case is proving it; with diligent, organized evidence you absolutely can build one, and ChatGPT is good at helping you gather and frame that evidence.

  • > To win a discrimination case, you typically need to document a pattern of behavior over time—often a year

    Where did you hear this?

    > use ChatGPT to draft calm, non-threatening Slack messages that note discriminatory incidents and keep doing that consistently

    This is terrible advice. Not only does it risk making those messages inadmissible, it casts doubt on everything else you say.

    Using an LLM to take the emotion out of your breadcrumbs is fine. Having it draft generic material, or worse, hallucinate details, may actually flip liability onto you, particularly if you weren't authorised to disclose the contents of those messages to an outside LLM.

    • With respect, it seems you haven’t kept up with how people actually use ChatGPT. In discrimination cases—especially disparate treatment—the key is comparing your performance, opportunities, and outcomes against peers: projects assigned, promotions, credit for work, meeting invites, inclusion, and so on. For engineers, that often means concrete signals like PR assignments, review comments, approval times, who gets merges fast, and who’s blocked.

      Most employees don’t know what data matters or how to collect it. ChatGPT Pro (GPT-5 Pro) can walk someone through exactly what to track and how to frame it: drafting precise, non-threatening documentation, escalating via well-written emails, and organizing evidence. I first saw this when a seed-stage startup I know lost a wage claim after an employee used ChatGPT to craft highly effective legal emails.

      This is the shift: people won’t hire a lawyer to explore “maybe” claims on a $100K tech job—but they will ask an AI to outline relevant doctrines, show how their facts map to prior cases, and suggest the right records to pull. On its own, ChatGPT isn’t a lawyer. In the hands of a thoughtful user, though, it’s close to lawyer-level support for spotting issues, building a record, and pushing for a fair outcome. The legal system will feel that impact.
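To make the "concrete signals" point above tangible: a minimal sketch of computing per-author time-to-first-approval from exported PR records. The field names (`author`, `created_at`, `first_approved_at`) are hypothetical; adapt them to whatever export your forge actually gives you.

```python
from datetime import datetime

def approval_latency_hours(prs):
    """Return mean hours from PR creation to first approval, per author.

    Each record is a dict with hypothetical keys: 'author', plus ISO-8601
    'created_at' and 'first_approved_at' timestamps.
    """
    per_author = {}
    for pr in prs:
        created = datetime.fromisoformat(pr["created_at"])
        approved = datetime.fromisoformat(pr["first_approved_at"])
        hours = (approved - created).total_seconds() / 3600
        per_author.setdefault(pr["author"], []).append(hours)
    # Average the latencies for each author.
    return {a: sum(h) / len(h) for a, h in per_author.items()}

prs = [
    {"author": "you",  "created_at": "2024-05-01T09:00:00",
     "first_approved_at": "2024-05-03T09:00:00"},
    {"author": "peer", "created_at": "2024-05-01T09:00:00",
     "first_approved_at": "2024-05-01T12:00:00"},
]
print(approval_latency_hours(prs))  # {'you': 48.0, 'peer': 3.0}
```

A persistent gap like 48 hours versus 3 hours is exactly the kind of organized, comparative record the comment describes; the same approach works for review-comment counts or project assignments.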
