Comment by Flatterer3544
1 day ago
And how would you know what they base their hiring upon? You would just get a generic automated response.
You would not be privy to their internal processes, and thus not be able to prove wrongdoing. You would just have to hope for a new Snowden, and that any wrongdoing uncovered would actually be punished this time.
I don't get it: if you're medically unfit for a job, why would you want the job?
For instance, if your job is to be on your feet all day and you can barely stand, then that job is not for you. I have never met employers so flush with candidates that they just randomly choose to exclude certain people.
And if it's insurance, there's a group rate. The only variables are which of your selected plans the employee chooses (why make a plan available if you don't want people to pick it?) and family size. It's illegal to discriminate based on family size, and that can add up to $10k extra on the employer side. But there are downsides to hiring young single people, so things may balance out.
Usually there are one or two job responsibilities among many that you can do, but not the way everyone else does them. The ADA requires employers to make reasonable accommodations, and some employers don't want to.
So it's less "the job requires you to stand all day" and more "once a week or so they ask you to make a binder of materials, and the hole puncher they want you to use dislocates your hands" (true story). Or it's a desk job, but you can't get from your desk to the bathroom in your wheelchair unless they widen the aisles between desks (hypothetical).
Very large employers don't have a group rate. The insurance company administers the plan on behalf of the company according to pre-agreed rules, and the company itself covers the costs based on its employees' actual claims.
Read your policy!
I believe existing laws carve out exceptions for medical fitness for certain positions for this very reason. If I may, stepping back for a second: the reason privacy laws exist is to protect people from bad behavior by employers, health insurers, etc.
If we circumvent those privacy laws through user licenses or new technology, we remove the protections normal citizens rely on. The bad behavior we already decided as a society to ban can then be perpetrated again, perhaps under a fresh new name to dodge the old laws.
If I understand your comment, you are essentially wondering why those old laws existed in the first place. I would suggest that racism and other systemic issues, plus differences in insurance premiums, are more than enough to justify the existence of privacy laws. Take a normal office job, as opposed to a manual-labor-intensive one: there is no reason at all that health conditions should affect hiring for it. The idea of not being hired because I have a young child, or a health condition, that would raise the group rate, with the insurer passing the cost on to my employer (which it would be in their best interest to do), is a terrible thought. It happened before, and we banned that practice (or did our best to do so).
All this to say, I believe HIPAA helps people, and if ChatGPT is being used to partially or fully facilitate medical decision-making, it should be bound by strict laws preventing the release of that data, regardless of its existing user agreements.
> I believe existing laws carve out exceptions for medical fitness for certain positions for this very reason.
It’s not just medical; there's a broad carve-out called “bona fide occupational qualifications”. If there's a good reason for a requirement, anti-discrimination hiring laws allow the exception.
> And if it's insurance, there's a group rate.
Insurers derive each employer's rates from that employer's own costs where laws allow it. And many employers self-fund their medical insurance.
Do corporations use my Google searches as data when deciding whether to hire me?
Do you have any proof they don't? Do you have any proof the "AI system" they use to filter out candidates doesn't "accidentally" access that data? Are you willing to bet that Google, OpenAI, Anthropic, and Meta won't sell access to that information?
Also, in some cases they absolutely do. Try to get hired at Palantir and see how much they know about your browsing history. Anything related to national security or requiring a clearance gets you investigated.
The last time I went through the Palantir hiring process, the effort on their end went almost exclusively into technical and cultural-fit interviews. My references told me they had not been contacted.
Calibrating your threat model against this attack is unlikely to give you any alpha in 2026. Hiring at tech companies and in government is much less deliberate than your mental model supposes.
The current extent of background checks is an API call to Checkr. This is simply to control hiring costs.
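For a concrete sense of how thin that integration typically is, here is a rough sketch in Python; the endpoints, fields, and package name are assumptions based on Checkr's publicly documented REST API rather than any particular employer's setup:

    # Rough sketch of a background-check call (assumed details: Checkr's
    # /v1/candidates and /v1/invitations endpoints, HTTP Basic auth with the
    # API key as the username, and a hypothetical screening package slug).
    import requests

    API_KEY = "sk_live_..."  # placeholder credential
    BASE = "https://api.checkr.com/v1"
    auth = (API_KEY, "")  # key as Basic-auth username, empty password

    # 1. Register the applicant as a candidate.
    candidate = requests.post(
        f"{BASE}/candidates",
        auth=auth,
        data={"first_name": "Jane", "last_name": "Doe", "email": "jane@example.com"},
    ).json()

    # 2. Invite the candidate to complete a screening package.
    invitation = requests.post(
        f"{BASE}/invitations",
        auth=auth,
        data={"candidate_id": candidate["id"], "package": "basic_plus"},  # hypothetical slug
    ).json()

    # The hiring system then polls for, or gets a webhook about, the finished report.
    print(invitation.get("status"))

The point being that this is commodity plumbing for verifying identity and criminal or employment history, not a channel for buying someone's search or chat history.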
As a heuristic, building a threat model out of speculated information is unlikely to yield a helpful framework.
7 replies →
As if any company that did that is a company I would want to work for.
For instance, back when I was interviewing at startups and other companies where I was going to be a strategic hire, I would casually mention how much I enjoyed spending weekends on my hobbies and with my family, so that companies wouldn’t even extend an offer if they wanted someone “passionate” who would work 60 hours a week and be on call.
7 replies →
[flagged]
Probably not directly; that would be too vulnerable. But they could hire a background-check company, which could pay a data aggregator to check whether you searched for certain forbidden words, and then feed the results into a threat model...
No they do not.
Anyone who has worked in hiring for any big company knows how much goes into ensuring hiring processes don't accidentally touch anything that could be construed as illegal discrimination. Employees are trained, policies and procedures are documented, and anyone who even accidentally says or does anything that comes too close to possibly running afoul of hiring laws will find themselves involved with HR.
The idea that these same companies also have a group of people buying private search information or ChatGPT conversations for individual applicants from somewhere (which nobody can link to) and then secretly making hiring decisions based on what they find is silly.
The arguments come with the usual array of conspiracy-theory defenses, like “How can you prove it’s not happening?” or claims that it’s well documented that it’s happening, even though nobody can link to that documentation.
[dead]
I'm kind of amazed that so many people in this comment section believe their Google searches and ChatGPT conversations are being sold and used.
Under this conspiracy theory, the data would have to be available for sale somewhere, right? Yet no journalist has ever picked up the story? Nobody has ever blown the whistle on their company buying Google searches and denying applicants for searching for naughty words?
Google "doesn't sell your data" but RTB leaks that info, and the reason no one is called out for "buying Google searches and denying applicants for searching for naughty words" is because it is trivial to make legal.
It is well documented in many many places, people just don't care.
Google can claim that it doesn’t sell your data, but if you think data about your searches isn't being sold, here is just a small selection of real sources. And it isn't paranoia: consumer surveillance is a very real problem, and one of the few paths to profitability for OpenAI.
https://www.iccl.ie/wp-content/uploads/2022/05/Mass-data-bre...
https://techpolicy.sanford.duke.edu/data-brokers-and-the-sal...
https://stratcomcoe.org/cuploads/pfiles/data_brokers_and_sec...
https://www.ftc.gov/system/files/ftc_gov/pdf/26AmendedCompla...
https://epic.org/a-health-privacy-check-up-how-unfair-modern...
4 replies →
Not yet. But Google itself would ask you for your resume if you happened to search for a lot of things related to programming.
Yes, I remember a friend who interned there a couple of times showed me that. One of the searches was “list comprehensive python”, and the Google page would split in two and give you some really fun coding challenges. I did a few, and if you got 4(?) right, I think you got a guaranteed interview. I intended to come back and spend a lot of time on an additional one, but I never did. Oops.
1 reply →
"Ask you for your resume" is a funny way of saying "Show an advertisement to invite people to apply for a job"
This fails the classic conspiracy-theory test: any company practicing this would have to be large enough to afford to orchestrate a chain of illegal transactions to get the data, develop a process for using it in hiring, and routinely act upon it.
The continued secrecy of the conspiracy would then depend on every person involved in orchestrating this privacy violation and illegal hiring scheme keeping it secret forever: nobody ever leaking it to the press, no disgruntled employees emailing their representatives in Congress, no concerned citizens slipping a screenshot to journalists, both during and after their employment with the company.
To make this profitable at all, the data would have to be secretly sold to a lot of companies for this use, and continuously updated to stay relevant. Giant databases of your secret ChatGPT queries would be sold continuously in volume, with employees at the sellers, the buyers, and the users of this information all keeping it perfectly quiet, never leaking anything.
It doesn't, though. As an aside, I have been using a competitor to ChatGPT health (Nori) for a while now, and I have been getting an extreme number of targeted ads about HRV and other metrics the app consumes. I have been collecting health metrics through wearables for years, so there has been no change in my own search patterns or beliefs about my health; I just thought AI + health data was cool.
> And how would you know what they base their hiring upon?
GDPR Request. Ah wait, regulation bad.