
Comment by simianwords

1 day ago

Can you explain the exact way in which this is possible? It's not legal to deny someone a job based on health, nor to deny them insurance.

And how would you know what they base their hiring upon? You would just get a generic automated response.

You would not be privy to their internal processes, and thus unable to prove wrongdoing. You would just have to hope for a new Snowden, and that the wrongdoing uncovered would actually be punished this time.

  • I don't get it: if you're medically unfit for a job, why would you want the job?

    For instance, if your job is to be on your feet all day and you can barely stand, then that job is not for you. I have never met employers so flush with candidates that they just randomly choose to exclude certain people.

    And if it's insurance, there's a group rate. The only variables are which of your selected plans the employee chooses (why make a plan available if you don't want people to pick it?) and family size. It's illegal to discriminate based on family size, and that does add up to $10k extra on the employer side. But there are downsides to hiring young single people too, so things may balance out.

    • Usually it's one or two job responsibilities among many that you can do, but not the way everyone else does them. The ADA requires employers to make reasonable accommodations, and some employers don't want to.

      So it's less "the job requires you to stand all day" and more "once a week or so they ask you to make a binder of materials, and the hole puncher they want you to use dislocates your hands" (true story). Or, it's a desk job, but you can't get from your desk to the bathroom in your wheelchair unless they widen the aisles between desks (hypothetical).

    • Very large employers don't have a group rate. The insurance company administers the plan on behalf of the company according to pre-agreed rules, and the company covers all costs based on its employees' actual health claims.

      Read your policy!

    • I believe existing laws carve out exceptions for medical fitness for certain positions for this very reason. If I may, stepping back for a second: the reason privacy laws exist is to protect people from bad behavior by employers, health insurers, etc.

      If we circumvent those privacy laws through user licenses or new technology, we remove those protections from ordinary citizens. The bad behavior we as a society already decided to ban can then be perpetrated again, perhaps under a fresh new name to dodge the old laws.

      If I understand your comment, you are essentially wondering why those old laws existed in the first place. I would suggest that racism, other systemic issues, and differences in insurance premiums are more than enough to justify privacy laws. Take a normal office job, as opposed to a manual-labor-intensive one: there is no reason at all that health conditions should affect hiring for it. The idea of not being hired because I have a young child, or a health condition, that would raise the group rate (with the insurer passing the cost to my employer, as would be in its best interest) is a terrible thought. It happened before, and we banned the practice (or did our best to).

      All this to say, I believe HIPAA helps people, and if ChatGPT is being used to partially or fully facilitate medical decision-making, its operators should be bound by strict laws preventing the release of that data, regardless of their existing user agreements.


    • > And if it's insurance, there's a group rate.

      Insurers derive each employer's rates from that employer's costs where laws allow it. And many employers self-fund medical insurance.

  • Do corporations use my Google searches as data when deciding whether to hire me?

    • Do you have any proof they don't? Do you have any proof the "AI system" they use to filter out candidates doesn't "accidentally" access that data? Are you willing to bet that Google, OpenAI, Anthropic, or Meta won't sell access to that information?

      Also, in some cases they absolutely do. Try to get hired at Palantir and see how much they know about your browsing history. Anything related to national security or requiring clearances gets you investigated.


    • Probably not directly; that would leave them too legally exposed. But they could hire a background check company, which could pay a data aggregator to check whether you searched for certain forbidden words, and then feed the results into a threat model...

    • No, they do not.

      Anyone who has worked in hiring at any big company knows how much goes into ensuring hiring processes don't touch anything that could be construed as illegal discrimination. Employees are trained, policies and procedures are documented, and anyone who even accidentally says or does anything that comes too close to running afoul of hiring law will find themselves involved with HR.

      The idea that these same companies also have a group of people buying private search information or ChatGPT conversations for individual applicants from somewhere (which nobody can link to) and then secretly making hiring decisions based on what they find is silly.

      The arguments come with the usual array of conspiracy-theory defenses, like "how can you prove it's not happening?" or claims that the practice is well documented, even though nobody can link to that documentation.


    • I'm kind of amazed that so many people in this comment section believe their Google searches and ChatGPT conversations are being sold and used.

      Under this conspiracy theory the data would have to be available for sale somewhere, right? Yet no journalist has ever picked up the story? Nobody has ever blown the whistle on their company for buying Google searches and denying applicants for searching for naughty words?


  • This fails the classic conspiracy-theory test: any company practicing this would have to be large enough to orchestrate a chain of illegal transactions to get the data, develop a process for using it in hiring, and routinely act upon it.

    The continued secrecy of the conspiracy would then depend on every person involved in this privacy violation and illegal hiring scheme keeping it secret forever, both during and after their employment with the company: nobody ever leaking it to the press, no disgruntled employees e-mailing their congresspeople, no concerned citizens slipping a screenshot to journalists.

    To make this profitable at all, the data would have to be secretly sold to a lot of companies for this use, and continuously updated to stay relevant. Giant databases of your secret ChatGPT queries being sold continuously, in volume, with all employees at the sellers, the buyers, and the users of this information keeping it perfectly quiet, never leaking anything.

    • It doesn't require all that, though. As an aside, I have been using a competitor to ChatGPT health (Nori) for a while now, and I have been getting an extreme number of targeted ads about HRV and other metrics the app consumes. I have been collecting health metrics through wearables for years, so there has been no change in my own search patterns or beliefs about my health. I just thought AI + health data was cool.

  • > And how would you know what they base their hiring upon?

    A GDPR request. Ah wait, regulation bad.

> It’s not legal to be denied jobs based on health.

There is a vast gap between what is illegal and what is actually actionable in a court of law, and that gap is well known to those with power.

> It’s not legal to be denied jobs based on health. Not to deny insurance

The US has been pretty much a free-for-all for surveillance and the abuse of all sorts of information, even when it is illegal. On the rare occasions that companies get caught, the penalty is almost always a slap on the wrist, and they know it.

How are you ever going to prove this?

You just get an automated denial from the ATS based on the output of an AI inference engine.

The ADA made it illegal to discriminate against job seekers over health conditions, and ObamaCare made it illegal to base coverage and rates on pre-existing conditions.

What are the chances those laws survive long under the current administration and Supreme Court?

  • And yet, if you want life insurance, you can't get it with a bunch of pre-existing conditions. And you can be discriminated against as a job seeker as long as they don't make it obvious.