Comment by chlodwig
1 year ago
Skimming the regulations, this does not seem right. All IaaS providers (which means everyone who allows customers to run custom code, including any web host like Dreamhost) are required to verify the identity of foreigners who open an account. This would seemingly entail the service provider needing to verify everyone's identity, in order to figure out who is a foreigner and who is not.
In other words, if you want to run your own WordPress, Mastodon node, custom CMS site, group chat, IRC server, or bitcoin node, you would need to reveal your identity to the hosting service you use. This seems quite bad and could obviously be used to identify political dissidents.
On top of that, IaaS providers must report to the US Commerce Department about foreigners who are using their services to train large AI models.
Aren't you basically revealing yourself anyway because you need to pay them?
AWS has my name and my credit card number. But they have never asked for a photocopy of my passport, my history of international travel, which nationalities I have and so on. Something tells me that for the goal of this law to be achieved, all those details would need to enter the database.
Amazon is certainly supposed to ensure that you are not a sanctioned person or a citizen of a sanctioned country. This was already a concern decades ago when I was in shared web hosting... I don't know why it would have changed.
3 replies →
Not necessarily (although that doesn't necessarily mean I think this is OK). Payment-card-based verification is a longstanding method of doing prima-facie verification like this. When you give your credit card, you give your billing address and typically your phone number -- if the postal code is a US address and the phone number is a US area code and everything else is consistent with that, that might be all the KYC required. If you appear to be a foreign national operating outside the US, they can flag that and require additional paperwork only then.
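The consistency check described above can be sketched in a few lines. This is purely a toy illustration of the idea, not any provider's actual logic; the function name, field names, and the NANP/ZIP heuristics are all my own assumptions.

```python
import re

# Toy sketch of the prima-facie check described above: treat an account as
# presumptively domestic if the billing details are mutually consistent with
# a US location, and flag it for additional KYC paperwork otherwise.
US_ZIP = re.compile(r"^\d{5}(-\d{4})?$")

def needs_extra_kyc(billing_country: str, postal_code: str, phone: str) -> bool:
    """Return True if the signup should be escalated for document checks."""
    digits = re.sub(r"\D", "", phone)
    us_phone = digits.startswith("1") and len(digits) == 11  # +1 NANP number
    us_zip = bool(US_ZIP.match(postal_code))
    looks_domestic = billing_country.upper() == "US" and us_zip and us_phone
    return not looks_domestic

# A self-consistent US profile passes; anything inconsistent gets flagged.
print(needs_extra_kyc("US", "94105", "+1 415 555 0100"))  # False
print(needs_extra_kyc("DE", "10115", "+49 30 123456"))    # True
```

Real checks would of course be fuzzier (NANP numbers also cover Canada, billing addresses can be mail drops, etc.), which is presumably why the proposed rule leaves the verification plan up to each provider.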
This proposed rule looks to me like it basically requires providers to come up with their own verification plans, which may then differ from provider to provider, so as to be "flexible and minimally burdensome to their business operations".
[note for the following: I am not a lawyer. The following is not legal advice. Do not fold, spindle or mutilate. Do not taunt Happy Fun Ball.]
The real danger with things like this, I think, is that the executive order only sets the process in motion: it directs that a rulemaking be conducted to determine the actual regulations that define compliance, and the link in the title is to that proposed rule. Nothing guarantees that prior public input will influence the details of the final rule, or that the rule can't change later through another rulemaking. If it does, the only way to challenge it is to sue the agency on the grounds that it exceeded its discretion (e.g., by making rules that require unconstitutional things), or to argue that the enabling executive order is itself unconstitutional -- but these kinds of federal cases have a pretty high bar for what's called "standing" (the legal grounds to bring a particular lawsuit): you pretty much have to suffer concrete harm, or be in obvious and imminent danger of suffering it to a grievous degree. (This is one reason you hear about "test cases" -- often somebody will agree to be the goat who is denied something, fined, or even arrested and convicted of a crime, so that standing to sue to overturn the law can be established. Other times, if a lot of potential defendants already have standing, a particularly sympathetic defendant is selected for the actual challenge.) The US federal courts are also deferential to "agency discretion" by default, as a matter of doctrine.
What happens all too often with these things is, the initial rulemaking is pretty reasonable, and the public outrage (if there was any) dissipates. Then three years (or however long) on, the next rulemaking imposes onerous restrictions and strict criteria, and people suddenly (relatively speaking) wake up and find they're now in violation of federal regulations that they were in compliance with last week. (This is one reason public-interest groups are so critical -- they have the motivation and sustained attention to comb the Federal Register for announcements about upcoming rounds of rulemaking on various topics.)
1 reply →
[flagged]
If you rent a VPS in supposedly privacy-conscious Germany, they need photo ID too :(
Luckily there are other cheap options in Europe, like in France.
2 replies →
There are IaaS services out there that accept bitcoin, monero, or anonymous prepaid charge cards. Mullvad isn't an IaaS, but they even accept cash mailed to them in an envelope.
Is it fair to assume that one can engage in a business relationship with these services from outside the US? I'm not sure I see the effect you're implying. AWS, GCP, and Azure don't accept crypto, and Mullvad, as you point out, isn't an IaaS provider.
2 replies →
Some hosts accept alternate payment systems, like gift cards or cryptocurrency. You can also have someone else pay for it with a credit card or bank transfer without giving your name, which can be quite important in some cases. The new rules would presumably make that a crime.
“Say you host spammers and scammers without saying you host them.”
Tbh this is fine by me. It's about time the US stop being the center of the world for internet infrastructure.
i’m reading through the contrarian takes here and thinking, “yeah i’m kind of ok with that?”
this would make it much trickier for bad actors to get away with everything from online ai scams to swatting. i could live with that.
It would not. They're financially motivated and will find a way around it -- e.g., scamming the elderly into signing up for cloud services and proxying the KYC requirements through them.
There are already scammers who walk seniors through signing up on Coinbase, KYC requirements and all, to buy bitcoin.
It's fine to make me, a blind person, have to upload a government ID. Cool, dude.
I think you need to re-read my comment.
Post a comment to the federal register.
Good. It’s not 1999.
There are so many malicious actors putting human life at risk that, in some scenarios at least, it should be possible to figure out who owns what.
Now, I would start with corporate ownership and focus on anonymous entities controlling things like Delaware and Nevada corporations. But that’s me.
You guys are stupid. That's exactly what they want to use it for: to train AI.