Comment by 34679
6 days ago
I'd like to offer a cautionary tale that involves my experience after seeing this post.
First, I tried enabling o3 via OpenRouter since I have credits with them already. I was met with the following:
"OpenAI requires bringing your own API key to use o3 over the API. Set up here: https://openrouter.ai/settings/integrations"
So I decided I would buy some API credits with my OpenAI account. I ponied up $20 and started Aider with my new API key set and o3 as the model. I get the following after sending a request:
"litellm.NotFoundError: OpenAIException - Your organization must be verified to use the model `o3`. Please go to: https://platform.openai.com/settings/organization/general and click on Verify Organization. If you just verified, it can take up to 15 minutes for access to propagate."
At that point, the frustration was beginning to creep in. I returned to OpenAI and clicked on "Verify Organization". It turns out, "Verify Organization" actually means "Verify Personal Identity With Third Party" because I was given the following:
"To verify this organization, you’ll need to complete an identity check using our partner Persona."
Sigh. I click "Start ID Check" and it opens a new tab for their "partner", Persona. The initial fine print says:
"By filling the checkbox below, you consent to Persona, OpenAI’s vendor, collecting, using, and utilizing its service providers to process your biometric information to verify your identity, identify fraud, and conduct quality assurance for Persona’s platform in accordance with its Privacy Policy and OpenAI’s privacy policy. Your biometric information will be stored for no more than 1 year."
OK, so now, we've gone from "I guess I'll give OpenAI a few bucks for API access" to "I need to verify my organization" to "There's no way in hell I'm agreeing to provide biometric data to a 3rd party I've never heard of that's a 'partner' of the largest AI company and Worldcoin founder. How do I get my $20 back?"
I actually contacted the California AG to get a refund from another AI company after they failed to refund me.
The AG's office followed up and I got my refund. It was worth my time to file, because we should stop letting companies get away with this stuff where they pile on new requirements after you've already paid.
Separately, they also do not need my phone number when they already have my name, address, and credit card.
Has anyone got info on why they are taking everyone’s phone number?
(having no insider info:) Because it can be used as a primary key ID across aggregated marketing databases including your voting history / party affiliation, income levels, personality and risk profiles etc etc etc. If a company wants to, and your data hygiene hasn't been tip top, your phone number is a pointer to a ton of intimate if not confidential data. Twitter was fined $150 million for asking for phone numbers under pretense of "protecting your account" or whatever but they actually used it for ad targeting.
>> Wednesday's 9th Circuit decision grew out of revelations that between 2013 and 2019, X mistakenly incorporated users' email addresses and phone numbers into an ad platform that allows companies to use their own marketing lists to target ads on the social platform.
>> In 2022, the Federal Trade Commission fined X $150 million over the privacy gaffe.
>> That same year, Washington resident Glen Morgan brought a class-action complaint against the company. He alleged that the ad-targeting glitch violated a Washington law prohibiting anyone from using “fraudulent, deceptive, or false means” to obtain telephone records of state residents.
>> X urged Dimke to dismiss Morgan's complaint for several reasons. Among other arguments, the company argued merely obtaining a user's phone number from him or her doesn't violate the state pretexting law, which refers to telephone “records.”
>> “If the legislature meant for 'telephone record' to include something as basic as the user’s own number, it surely would have said as much,” X argued in a written motion.
https://www.mediapost.com/publications/article/405501/None
Tangential: please do not use a phone number as a PK. Aside from the nightmare of normalizing them, there is zero guarantee that someone will keep the same number.
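To make the tangent concrete, here is a minimal sqlite sketch (mine, not the commenter's; names and numbers are made up) of the alternative: a surrogate key as the PK, with the phone number stored as an ordinary mutable attribute, so a ported number touches one column and nothing else.

```python
# Illustrative sketch: why a phone number makes a poor primary key.
# Use a surrogate key; treat the phone as a mutable attribute.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE users (
        id    INTEGER PRIMARY KEY,   -- surrogate key: stable forever
        name  TEXT NOT NULL,
        phone TEXT                   -- just an attribute; may change or be NULL
    )
""")
conn.execute("INSERT INTO users (name, phone) VALUES (?, ?)",
             ("alice", "+14155550100"))

# When the user ports or changes their number, nothing else has to move:
# no foreign keys to rewrite, no join keys to re-normalize.
conn.execute("UPDATE users SET phone = ? WHERE name = ?",
             ("+14155550199", "alice"))

row = conn.execute("SELECT id, phone FROM users WHERE name = 'alice'").fetchone()
print(row)  # the id is unchanged even though the phone number moved
```

Had the phone been the PK, that UPDATE would cascade into every referencing table.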
2 replies →
OpenAI doesn’t (currently) sell ads. I really cannot see a world where they’re wanting to sell ads to their API users only? It’s not like you need a phone number to use ChatGPT.
To me the obvious example is fraud/abuse protection.
21 replies →
Thank you for this comment… a relative of mine spent a ton of money on an AI product license that never arrived and that he cannot use. I told him to contact his state's AG just in case.
Source: have dealt with fraud at scale before.
Phone number is the only way to reliably stop MOST abuse on a freemium product that doesn't require payment/identity verification upfront. You can easily block VOIP numbers and ensure the person connected to this number is paying for an actual phone plan, which cuts down dramatically on bogus accounts.
Hence why even Facebook requires a unique, non-VOIP phone number to create an account these days.
I'm sure this comment will get downvoted in favor of some other conspiratorial "because they're going to secretly sell my data!" tinfoil post (this is HN of course). But my explanation is the actual reason.
I would love if I could just use email to signup for free accounts everywhere still, but it's just too easily gamed at scale.
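The gate described above can be sketched roughly as follows. This is a hedged illustration only: a real deployment would call a carrier line-type lookup service, whereas `LINE_TYPES` here is a hypothetical stand-in table (and, as other commenters argue below, the classification itself is unreliable in practice).

```python
# Sketch of a phone-based signup gate: reject numbers classified as VOIP
# and enforce one account per number. LINE_TYPES is hypothetical data
# standing in for a paid carrier line-type lookup.
LINE_TYPES = {
    "+14155550100": "mobile",    # paid carrier plan
    "+18885550101": "voip",      # virtual number
    "+15105550102": "landline",
}

def allow_signup(phone: str, seen_numbers: set) -> bool:
    """Return True only for a non-VOIP number not already tied to an account."""
    line_type = LINE_TYPES.get(phone, "unknown")
    if line_type in ("voip", "unknown"):
        return False
    if phone in seen_numbers:
        return False             # one account per number
    seen_numbers.add(phone)
    return True

seen = set()
print(allow_signup("+14155550100", seen))  # True: mobile, first use
print(allow_signup("+14155550100", seen))  # False: number already used
print(allow_signup("+18885550101", seen))  # False: classified as VOIP
```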
On the flip side, it makes a company seem strikingly inept when they use VOIP detection as a method to filter valid users. I haven't done business with companies like Netflix or Uber because I don't feel like paying AT&T a cut for identity verification. There are plenty of other methods, like digital licenses, which are both more secure and have better privacy protections.
1 reply →
Maybe they should look into a non-freemium business model. But that won't happen because they want to have as much personal data as possible.
- Parent talks about a paid product. If they want to burn tokens, they are going to pay for it.
- Those phone requirements do not stop professional abusers, organized crime, or state-sponsored groups. Case in point: Twitter is overrun by bots, scammers, and foreign info-op swarms.
- Phone requirements might hinder non-professional abusers at best, but we are sidestepping the issue of whether those corporations deserve enough trust to compel regular users to sell themselves. Maybe the business model just sucks.
2 replies →
The discussion wasn't about freemium products though. Someone mentioned that they paid 20 bucks for OpenAI's API already and then they were asked for more verification.
Personally I found that rejecting disposable/temporary emails and flagging requests behind VPNs filtered out 99% of abuse on my sites.
No need to ask for a phone or card -- or worse, biometric data! -- which also removes friction.
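A sketch of that lighter-weight filter, with made-up example domains and IP ranges: reject known disposable-email domains outright, and merely flag (not block) requests arriving from VPN ranges.

```python
# Low-friction abuse filter: disposable-email blocklist + VPN-range flagging.
# DISPOSABLE_DOMAINS and VPN_RANGES are sample data, not real curated lists.
import ipaddress

DISPOSABLE_DOMAINS = {"mailinator.com", "tempmail.example"}
VPN_RANGES = [ipaddress.ip_network("203.0.113.0/24")]

def screen_signup(email: str, ip: str) -> dict:
    domain = email.rsplit("@", 1)[-1].lower()
    addr = ipaddress.ip_address(ip)
    return {
        "rejected": domain in DISPOSABLE_DOMAINS,      # hard reject
        "flagged_vpn": any(addr in net for net in VPN_RANGES),  # soft flag
    }

print(screen_signup("a@mailinator.com", "198.51.100.7"))  # rejected
print(screen_signup("a@example.org", "203.0.113.9"))      # flagged, not rejected
```

The point of flagging rather than blocking VPN traffic is that it preserves access for privacy-conscious legitimate users while still surfacing suspicious accounts for review.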
> I'm sure this comment will get downvoted in favor of some other conspiratorial "because they're going to secretly sell my data!" tinfoil post (this is HN of course). But my explanation is the actual reason.
Your explanation is inconsistent with the link in these comments showing Twitter getting fined for doing the opposite.
> Hence why even Facebook requires a unique, non-VOIP phone number to create an account these days.
Facebook is the company most known for disingenuous tracking schemes. They just got caught with their app running a service on localhost to provide tracking IDs to random shady third party websites.
> You can easily block VOIP numbers and ensure the person connected to this number is paying for an actual phone plan, which cuts down dramatically on bogus accounts.
There isn't any such thing as a "VOIP number"; all phone numbers are phone numbers. There are only profiteers claiming they can tell you otherwise in exchange for money. Between MVNOs, small carriers, forwarding services, number portability, data inaccuracy, and foreign users, those databases are practically random number generators with massive false-positive rates.
Meanwhile, major carriers are more than happy to hand phone numbers in their ranges to spammers in bulk. That has turned into a profit center for the spammers and lets them expand their operations: they obtain large blocks of numbers these services claim aren't "VOIP numbers", use them to spam the services they want to spam, and then sell cheap or ad-supported SMS service at a profit to other spammers, or to privacy-conscious people who want to sign up for a service they haven't used that number on yet.
[dead]
Doesn’t Sam Altman own a crypto currency company [1] that specifically collects biometric data to identify people?
Seems familiar…
[1] https://www.forbes.com/advisor/investing/cryptocurrency/what...
GP did mention this :)
> I've never heard of that's a 'partner' of the largest AI company and Worldcoin founder
The core tech and premise don't collect biometric data; biometric data is collected for training purposes with consent and compensation. There is endless misinformation (willful and ignorant) around Worldcoin, but it is not, at its core, a biometric-collection company.
Collecting biometrics for training purposes is still collecting biometrics.
1 reply →
I also am using OpenRouter because OpenAI isn't a great fit for me. I also stopped using OpenAI because they expire your API credits even if you don't use them. Yeah, it's only $10, but I'm not spending another dime with them.
Hi - I'm the COO of OpenRouter. In practice we don't expire the credits, but we have to reserve the right to, or else we carry an uncapped liability literally forever. Can't operate that way :) Everyone who issues credits on a platform has to have some way of expiring them. It's not a profit center for us, or part of our P&L; it's just a protection we have to have.
Pretty sure OP was talking about OpenAI expiring their credits (just had mine expire).
Btw, unsurprisingly, the time for expiry appears to be in UTC in case anyone else is in the situation of trying to spend down their credits before they disappear.
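A quick sketch of why the UTC detail matters (dates here are made-up examples, not OpenAI's actual schedule): a "365 days after purchase" deadline computed in UTC can fall hours earlier than the buyer's local clock suggests.

```python
# "365 days after purchase", computed in UTC, vs. the buyer's local view.
from datetime import datetime, timedelta, timezone

purchase = datetime(2024, 6, 1, 23, 30, tzinfo=timezone.utc)  # example purchase
expiry = purchase + timedelta(days=365)

# In UTC the credits expire at this instant...
print(expiry.isoformat())
# ...but for a buyer at UTC-8 that same instant is mid-afternoon local time,
# so "spend them by the expiry date" really means checking the UTC timestamp.
local = expiry.astimezone(timezone(timedelta(hours=-8)))
print(local.isoformat())
```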
If you're worried about the unlimited liability, how about you refund the credits instead of expiring them?
5 replies →
Out of curiosity, what makes you different from a retailer or restaurant that has the same problem?
Why only 365 days? Would be way fairer and still ok for you (if it's such a big issue) to expire them after 5 years.
I wonder if they do this everywhere, in certain jurisdictions this is illegal.
Then you shouldn't use OpenRouter. From the ToS, section 4.2 (Credit Expiration; Auto Recharge): "OpenRouter reserves the right to expire unused credits three hundred sixty-five (365) days after purchase."
That is so sleazy.
After how long do they expire?
IIRC, 1 year
1 reply →
I suspect their data collection might not be legal in the EU.
https://withpersona.com/legal/privacy-policy
To me it looks like an extremely aggressive data pump.
There are stories about e.g. Hetzner requiring all sorts of data from people who want to open/verify accounts so perhaps not. Might just be an anti “money laundering” thing. Especially if the credit card company ends up refunding everything..
Hi there,

During our KYC process, we do sometimes ask customers to submit additional information, including IDs, so that we can verify their identity. However, we follow the General Data Protection Regulation in the EU, amongst other regulations. So we only keep that data for the account verification process. We also have a data protection officer and team who can answer questions potential customers have about data protection measures that we take. Their contact information is here: https://www.hetzner.com/legal/privacy-policy/

--Katie, Hetzner Online
What stories? Can you back up that claim with some sources please?
5 replies →
As someone not in the US, I do a straight nope out whenever I see a Persona request. I advise everyone else to do the same. Afaik, it's used by LinkedIn and Doordash too.
Oh I also recently got locked out of my linkedin account until I supply data to Persona.
(So I’m remaining locked out of my linkedin account.)
> How do I get my $20 back?
Contact support and ask for a refund. Then a charge back.
KYC requirement + OpenAI preserving all logs in the same week?
OpenAI introduced this with the public availability of o3, so no.
It's also the only LLM provider which has this.
What OpenAI has that the others don't is SamA's insatiable thirst for everyone's biometric data.
I think KYC has already been beaten by AI agents. According to RepliBench [0], obtaining compute requires KYC, and the agents pass that step with a high success rate in the graphic.
[0] https://www.aisi.gov.uk/work/replibench-measuring-autonomous...
KYC has been around for a few months I believe. Whenever they released some of the additional thought logs you had to be verified.
Meanwhile the FSB and Mossad happily generate fake identities on demand.
The whole point of identity verification is for the same Mossad to gather your complete profile and everything else they can from OpenAI.
Since Mossad and CIA is essentially one organization they already do it, 100%.
You are even lucky to be able to verify. Mine has given me a "Session expired" error for months!! Support does not reply.
I was more excited by the process, like, there exists a model out there so powerful it requires KYC
which, after using it, fair! It found a zero day
I think they're probably more concerned about fake accounts and people finding ways to get free stuff.
China is training their AI models using ChatGPT. They want to stop or slow that down.
3 replies →
I actually think they’re worried about foreign actors using it for…
- generating synthetic data to train their own models
- hacking and exploitation research
etc
What free stuff? It requires a paid API.
1 reply →
> which, after using it, fair! It found a zero day
Source?
Recently, Sean Heelan wrote a post "How I used o3 to find CVE-2025-37899, a remote zeroday vulnerability in the Linux kernel’s SMB implementation". It might be what they are referring to.
Link: https://sean.heelan.io/2025/05/22/how-i-used-o3-to-find-cve-...
1 reply →
[flagged]
Bit strange to see such new accounts already priming everyone to think this is acceptable and normal, which it isn't: OA started doing this months ago, and still not a single other provider requires it, despite offering just as powerful models. It's not like Google and Anthropic have launched their own Worldcoin either.
2 replies →
There is an ISO standard for digital ID, the same one used by the TSA, and it’s coming to the web soon! Human verification is going to continue to grow in importance.
2 replies →
I was excited about trying o3 for my apps but I'm not doing this validation.. thanks for the heads up.
> OK, so now, we've gone from "I guess I'll give OpenAI a few bucks for API access" to "I need to verify my organization" to "There's no way in hell I'm agreeing to provide biometric data to a 3rd party I've never heard of that's a 'partner' of the largest AI company and Worldcoin founder. How do I get my $20 back?"
This should be illegal. How many are going to do the same as you, but then think that the effort/time/hassle they would waste to try to get their money back would not be worth it? At which point you've effectively donated money to a corp that implements anti-consumer anti-patterns.
Yeah, same. I am a paying API customer but I am not doing biometric KYC to talk to a bot.
This is in part "abuse prevention"[1] and in part marketing. Making customers feel like they're signing up to access state secrets makes the models seem more "special". Sama is well known to use these SV marketing tricks, like invite-only access, waiting lists, etc to psychologically manipulate users into thinking they're begging for entry to an exclusive club instead of just swiping a credit card to access an API.
Google tried this with Google Plus and Google Wave, failed spectacularly, and have ironically stopped with this idiotic "marketing by blocking potential users". I can access Gemini Pro 2.5 without providing a blood sample or signing parchment in triplicate.
[1] Not really though, because a significant percentage of OpenAI's revenue is from spammers and the bulk generation of SEO-optimised garbage. Those are valued customers!
Gemini doesn't give you reasoning via API though, at least as far as I'm aware.
If by reasoning you mean showing CoT, Gemini and OA are the same in this regard - neither provides it, not through the UI nor through the API. The "summaries" both provide have zero value and should be treated as non-existent.
Anthropic exposes reasoning, which has become a big reason to use them for reasoning tasks over the other two despite their pricing. Rather ironic when the other two have been pushing reasoning much harder.
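For reference, a sketch of what "exposes reasoning" looks like on the wire. This assumes the request shape of Anthropic's extended-thinking feature as publicly documented; the model id and token budgets below are placeholders, not recommendations, and no network call is made.

```python
# Request-body sketch: the Messages API accepts a "thinking" parameter, and
# responses then include "thinking" content blocks alongside "text" blocks.
# Model id and budgets are placeholder assumptions.
payload = {
    "model": "claude-sonnet-placeholder",
    "max_tokens": 2048,
    "thinking": {"type": "enabled", "budget_tokens": 1024},
    "messages": [
        {"role": "user", "content": "Prove there are infinitely many primes."}
    ],
}

# A response then looks roughly like:
# {"content": [{"type": "thinking", "thinking": "..."},  # exposed reasoning
#              {"type": "text", "text": "..."}]}
print(payload["thinking"]["type"])  # enabled
```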
2 replies →
Works for me?
Maybe you’re thinking of deep research mode which is web UI only for now.
HN Don’t Hate Marketing Challenge
Difficulty: Impossible
This feels eerily similar to a post I've read within the last month. Either I'm having deja vu, it's a coincidence that the same exact story is mentioned, or there's something else going on.
What should be going on? A regular Google search for "openai persona verify organization" shows withpersona.com in the second search result.
Yeah ok guess I misremembered it a bit but I was curious too and found the previous one I've thought of: https://news.ycombinator.com/item?id=43795406
1 reply →
This is OpenAI’s fairly dystopian process, so the exact same thing happens to lots of people.
It's a concerted attempt to de-anonymise the internet. Corporate entities are jostling for position as id authorities.
This is just the process for OpenAI. It's the same process I went through as well.
this reminds me of how broadcom maintains the “free” tier of vmware.
Can you explain? Is it not actually free?
there are so many non-functional websites and signups required to get to the end of the rainbow that any sane person quits well before getting to any freely distributed software, if, in fact, there still is some.
Interesting, it works for me through OpenRouter, without a configured OpenAI integration. Although I have an OpenAI account and did verification with OpenAI before. A conspiracy theory would say that they are exchanging PII, so OpenRouter knows who I am :)
With all this, plus the saving of all chats, they can't operate in the EU. But they do ...
I don't recall Persona being in the mix last year when I signed up. Interesting development.
I think it was a recent thing [1], but I thought they were only considering it
[1] https://techcrunch.com/2025/04/13/access-to-future-ai-models...
Thank you for reminding me why I’ll never touch these LLM services.
>ID Check
Just send them a random passport photo from the Internet, what's the deal? Probably they are just vibe-verifying the photo with "Is it legit passport?" prompt anyways.
I think modern face verification has moved on, it's been video in all my encounters.
Still, no real human is involved; as they mention, their verification is automated and probabilistic, which is especially funny to hear in the context of verification. I'm pretty sure even a kid could get around it, e.g. by showing on the video a photo of a person holding his passport, which you can find online.
7 replies →
It requires video and an app. They are collecting facial biometrics.
App? So you cannot verify without a mobile phone?
2 replies →
Crazy, I already gave up registering for chatgpt because they asked for my phone number. I'm not giving that to any random startup.
Welcome to tech dystopia.
Hello Human Resource, we have all your data, please upload your bio-metric identity, as well as your personal thoughts.
Building the next phase of a corporate totalitarian state, thank you for your cooperation.
Follow-up. I pasted the above into ChatGPT because I genuinely wanted to know how to get my 20 bucks back. That request is in and they say I'll get a response in 2-3 days. I then asked it "What are the potential risks of agreeing to their terms?" Here's the response:
Agreeing to Persona's terms, especially for biometric identity verification, involves both privacy and long-term data security risks. Here's a clear breakdown of the main risks you should be aware of:

1. Biometric Data Collection
- Risk: Biometric identifiers (like facial recognition, voiceprints, etc.) are extremely sensitive and irreplaceable if compromised.
- Risk: Persona says biometric data is kept for up to one year, but:
- Risk: Your biometric and ID data may be shared with subprocessors (partners/vendors) that you haven't explicitly vetted.
- Risk: The fine print often includes vague consent for "quality assurance", "model improvement", or "fraud detection".

Should You Agree?
Only if:
If you’re uneasy about this, you’re not alone. Many developers and privacy advocates refuse to verify with biometrics for non-critical services, and companies like OpenAI are increasingly facing criticism for requiring this.
Huh, I never considered using ChatGPT to read terms of service for me. Nice idea.
[dead]
o3 is really powerful. I understand it tbh. They don't want scammers and abusers easily accessing it