If you're going to host user content on subdomains, then you should probably have your site on the Public Suffix List https://publicsuffix.org/list/ .
That should eventually make its way into various services so they know that a tainted subdomain doesn't taint the entire site....
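In practice, the check the list enables boils down to a longest-suffix match. A self-contained sketch (the tiny suffix set here is illustrative, not the real list, which has thousands of entries):

```javascript
// Illustrative mini suffix list; the real PSL is much larger.
const SUFFIXES = new Set(["com", "org", "uk", "co.uk", "github.io"]);

// Longest matching suffix wins, exactly because "co.uk" must beat "uk".
function publicSuffix(hostname) {
  const labels = hostname.split(".");
  for (let i = 0; i < labels.length; i++) {
    const candidate = labels.slice(i).join(".");
    if (SUFFIXES.has(candidate)) return candidate;
  }
  return labels[labels.length - 1]; // fall back to the bare TLD
}

// The registrable domain is one label below the public suffix: the
// widest scope at which a browser should allow a Domain= cookie.
function registrableDomain(hostname) {
  const suffix = publicSuffix(hostname);
  const suffixLen = suffix.split(".").length;
  return hostname.split(".").slice(-(suffixLen + 1)).join(".");
}
```

Real implementations (e.g. the npm `psl` package) do the same thing against the full list.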
In the past, browsers used an algorithm which only denied setting wide-ranging cookies for top-level domains with no dots (e.g. com or org). However, this did not work for top-level domains where only third-level registrations are allowed (e.g. co.uk). In these cases, websites could set a cookie for .co.uk which would be passed onto every website registered under co.uk.
Since there was and remains no algorithmic method of finding the highest level at which a domain may be registered for a particular top-level domain (the policies differ with each registry), the only method is to create a list. This is the aim of the Public Suffix List.
(https://publicsuffix.org/learn/)
So, once they realized web browsers are all inherently flawed, their solution was to maintain a static list of websites.
God I hate the web. The engineering equivalent of a car made of duct tape.
> Since there was and remains no algorithmic method of finding the highest level at which a domain may be registered for a particular top-level domain
A centralized list like this, not just for domains as a whole (e.g. co.uk) but also for specific sites (e.g. s3-object-lambda.eu-west-1.amazonaws.com), is kind of crazy: the list will bloat a lot over the years, and it's a security risk for any platform that needs this functionality but would prefer not to leak any details publicly.
We already have the concept of a .well-known directory that you can use when talking to a specific site. Similarly, we know that subdomains can be nested, like c.b.a.x, and it's more or less certain that you can't create a subdomain b without the involvement of a, so it should be possible to walk the chain.
Example:
c --> https://b.a.x/.well-known/public-suffix
b --> https://a.x/.well-known/public-suffix
a --> https://x/.well-known/public-suffix
Maybe ship the domains with the browsers and such and leave generic sites like AWS or whatever to describe things themselves. Hell, maybe that could also have been a TXT record in DNS as well.
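A rough sketch of that hypothetical walk. To be clear, nothing like `/.well-known/public-suffix` actually exists; the injected `isDeclaredSuffix` predicate stands in for fetching that imagined endpoint (or a DNS TXT record) from each parent:

```javascript
// Hypothetical scheme: walk the label chain from the TLD downward,
// asking each candidate whether it declares the level below it as
// leased out to independent entities.
function findPublicSuffix(hostname, isDeclaredSuffix) {
  const labels = hostname.split(".");
  let suffix = labels[labels.length - 1]; // the bare TLD is always a suffix
  // Try progressively deeper candidates: uk -> co.uk -> b.co.uk -> ...
  for (let depth = 2; depth < labels.length; depth++) {
    const candidate = labels.slice(labels.length - depth).join(".");
    if (isDeclaredSuffix(candidate)) suffix = candidate;
    else break; // chain of authority ends here
  }
  return suffix;
}
```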
> God I hate the web. The engineering equivalent of a car made of duct tape.
Most of the complex things I have seen being made (or contributed to) needed duct tape sooner or later. Engineering is the art of trade-offs, of adapting to changing requirements (which can appear due to uncontrollable events external to the project), technology, and costs.
I think it's somewhat tribal webdev knowledge that if you host user generated content you need to be on the PSL otherwise you'll eventually end up where Immich is now.
I'm not sure how people who haven't already hit this very issue are supposed to know about it beforehand, though. It's one of those things that you don't really come across until you're hit by it.
Besides user uploaded content it's pretty easy to accidentally destroy the reputation of your main domain with subdomains.
For example:
1. Add a subdomain to test something out
2. Complete your test and remove the subdomain from your site
3. Forget to remove the DNS entry, and now your A record points to an IP address you no longer control
At this point if someone else on that hosting provider gets that IP address assigned, your subdomain is now hosting their content.
I had this happen to me once with PDF books being served through a subdomain on my site. Of course it's my mistake for not removing the A record (I forgot) but I'll never make that mistake again.
10 years of my domain having a good history may have been tainted in an irreparable way. I don't get warnings visiting my site, but traffic has slowly gotten worse since around that time, despite me posting more and more content. The correlation isn't guaranteed, especially with AI taking away so much traffic, but it's something I do think about.
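A periodic check for this failure mode is cheap to script. Here's a sketch of the comparison step, with the DNS lookup stubbed out (node's `dns.promises.resolve4` would supply real answers; the IPs below are documentation addresses):

```javascript
// Compare what DNS says each subdomain resolves to against the set of
// addresses you still control; anything else is a takeover candidate.
function findDanglingRecords(records, ownedIps) {
  const owned = new Set(ownedIps);
  return records
    .filter(({ ip }) => !owned.has(ip)) // resolves somewhere we don't own
    .map(({ name }) => name);
}
```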
Looking through some of the links in this post, I think there are actually two separate issues here:
1. Immich hosts user content on their domain, and should thus be on the public suffix list.
2. When users host an open source self hosted project like immich, jellyfin, etc. on their own domain it gets flagged as phishing because it looks an awful lot like the publicly hosted version, but it's on a different domain, and possibly a domain that might look suspicious to someone unfamiliar with the project, because it includes the name of the software in the domain. Something like immich.example.com.
The first one is fairly straightforward to deal with, if you know about the public suffix list. I don't know of a good solution for the second though.
I don't think the Internet should be run by being on special lists (other than like, a globally run registry of domain names)...
I get that SPAM, etc., are an issue, but, like f* google-chrome, I want to browse the web, not some carefully curated list of sites some giant tech company has chosen.
A) you shouldn't be using google-chrome at all B) Firefox should definitely not be using that list either C) if you are going to have a "safe sites" list, that should definitely be a non-profit running that, not an automated robot working for a large probably-evil company...
> I don't know of a good solution for the second though.
I know the second issue can be a legitimate problem but I feel like the first issue is the primary problem here & the "solution" to the second issue is a remedy that's worse than the disease.
The public suffix list is a great system (despite getting serious backlash here in the HN comments, mainly from people who have jumped to wildly exaggerated conclusions about what it is). Beyond that, though, flagging domains for phishing over duplicate content smells like an anti-self-host policy: sure, there are phishers making clone sites, but the vast majority of sites flagged are going to be legit unless you employ a more targeted heuristic, and doing so isn't incentivised by Google's (or most companies') business model.
> When users host an open source self hosted project like immich, jellyfin, etc. on their own domain...
I was just deploying your_spotify and gave it your-spotify.<my services domain>, and there was a warning in the logs that talked about this, linking the issue:
The second is a real problem even with completely unique applications. If they have UI portions that have lookalikes, you will get flagged. At work, I created an application with a sign-in popup. Because it's for internal use only, the form in the popup is very basic, just username and password and a button. Safe Browsing continues to block this application to this day, despite multiple appeals.
Even the first one only works if there's no need to have site-wide user authentication on the domain, because you can't have a domain cookie accessible from subdomains anymore otherwise.
I thought this story would be about some malicious PR that convinced their CI to build a page featuring phishing, malware, porn, etc. It looks like Google is simply flagging their legit, self-created Preview builds as being phishing, and banning the entire domain. Getting immich.cloud on the PSL is probably the right thing to do for other reasons, and may decrease the blast radius here.
> Is that actually relevant when only images are user content?
Yes. For instance in circumstances exactly as described in the thread you are commenting in now and the article it refers to.
Services like google's bad site warning system may use it to indicate that it shouldn't consider a whole domain harmful if it considers a small number of its subdomains to be so, where otherwise they would. It is no guarantee, of course.
In another comment in this thread, it was confirmed that these PR host names are only generated from branches internal to Immich or labels applied by maintainers, and that this does not automatically happen for arbitrary PRs submitted by external parties. So this isn’t the use case for the public suffix list - it is in no way public or externally user-generated.
What would you recommend for this actual use case? Even splitting it off to a separate domain name as they’re planning merely reduces the blast radius of Google’s false positive, but does not eliminate it.
If these are dev subdomains that are actually for internal use only, then a very reliable fix is to put basic auth on them, and give internal staff the user/password. It does not have to be strong, in fact it can be super simple. But it will reliably keep out crawlers, including Google.
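A sketch of the check itself, assuming plain HTTP Basic auth (in practice a single `auth_basic` directive in nginx, or the equivalent in your reverse proxy, does this for you):

```javascript
// Reject any request that doesn't carry the shared Basic credentials.
// Even a weak password reliably keeps crawlers out of preview envs.
function isAuthorized(authHeader, user, pass) {
  if (!authHeader || !authHeader.startsWith("Basic ")) return false;
  const decoded = Buffer.from(authHeader.slice(6), "base64").toString("utf8");
  return decoded === `${user}:${pass}`;
}
```

Anything that returns false gets a 401 with a `WWW-Authenticate: Basic` header, and Safe Browsing's crawler never sees the page content.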
Browsers already do various levels of isolation based on domain / subdomains (e.g. cookies). The PSL tells them to treat each subdomain as if it were a top-level domain, because the subdomains are operated by (leased out to) different individuals / entities. With respect to blocking, it just means that if one subdomain is marked bad, it's less likely to contaminate the rest of the domain, since they know it's operated by different people.
This is not about user content, but about their own preview environments! Google decided their preview environments were impersonating... Something? And decided to block the entire domain.
I think this is only true if you host independent entities. If you simply construct deep names about yourself, with a demonstrable chain of authority back to you, I don't think the PSL wants to know. Otherwise there is no hierarchy: the dots are just convenience strings, and it's a flat namespace the size of the PSL's length.
There is no law appointing that organization as a world wide authority on tainted/non tainted sites.
The fact it's used by one or more browsers in that way is a lawsuit waiting to happen.
Because they, the browsers, are pointing a finger to someone else and accusing them of criminal behavior. That is what a normal user understands this warning as.
Turns out they are wrong. And in being wrong they may well have harmed the party they pointed at, in reputation and / or sales.
It's remarkable how short-sighted this is, given that the web is so international. It's not a defense to say some third party has a list, and you're not on it, so you're dangerous.
Never host your test environments as Subdomains of your actual production domain.
You'll also run into email reputation issues, as well as cookie hell. You can get a lot of cookies from the production env if it's not managed well.
This. I cannot believe the rest of the comments on this are seemingly completely missing the problem here & kneejerk-blaming Google for being an evil corp. This is a real issue & I don't feel like the article from the Immich team acknowledges it. Far too much passing the buck, not enough taking ownership.
It's true that putting locks on your front door will reduce the chance of your house getting robbed, but if you do get robbed, the fact that your front door wasn't locked does not in any way absolve the thief for his conduct.
Similarly, if an organization deploys a public system that engages in libel and tortious interference, the fact that jumping through technical hoops might make it less likely to be affected by that system does not in any way absolve the organization for operating it carelessly in the first place.
Just because there are steps you can take to lessen the impact of bad behavior does not mean that the behavior itself isn't bad. You shouldn't have to restrict how you use your own domains to avoid someone else publishing false information about your site. Google should be responsible for mitigating false positives, not the website owners affected by them.
.cloud is used to host the map embedded in their webapp.
In fairness, in my local testing so far, it appears to be an entirely unauthenticated / credential-less service, so there's no risk to sessions right now for this particular use case. That leaves the only risk factors being phishing & deploy environment credentials.
The one thing I never understood about these warnings is how they don't run afoul of libel laws. They are directly calling you a scammer and "attacker". The same for Microsoft with their unknown executables.
They used to be more generic, saying "We don't know if it's safe", but now they are quite assertive at stating you are indeed an attacker.
"The people living at this address might be pedophiles and sexual predators. Not saying that they are, but if your children are in the vicinity, I strongly suggest you get them back to safety."
You can’t possibly use the “they use the word ‘might’” argument without mentioning the death-red screen those words are printed over. If you are referring to strict compliance with the law, you are technically right. But only if we remove the human factor from how the warning actually reads.
Imagine if you bought a plate at Walmart and any time you put food you bought elsewhere on it, it turned red and started playing a warning about how that food will probably kill you because it wasn't Certified Walmart Fresh™
Now imagine it goes one step further, and when you go to eat the food anyway, your Walmart fork retracts into its handle for your safety, of course.
No brand or food supplier would put up with it.
That's what it's like trying to visit or run non-blessed websites and software coming from Google, Microsoft, etc on your own hardware that you "own".
This is the future. Except you don't buy anything, you rent the permission to use it. People from Walmart can brick your carrots remotely even when you don't use this plate, for your safety ofc
> The one thing I never understood about these warnings is how they don't run afoul of libel laws. They are directly calling you a scammer and "attacker"
Being wrong doesn't count as libel.
If a company has a detection tool, makes reasonable efforts to make sure it is accurate, and isn't being malicious, you'll have a hard time making a libel case
There is a truth defence to libel in the USA but there is no good faith defence. Think about it like a traffic accident, you may not have intended to drive into the other car but you still caused damage. Just because you meant well doesn't absolve you from paying for the damages.
If the false positive rate is consistently 0.0%, that is a surefire sign that the detector is not effective enough to be useful.
If a false positive is libel, then any useful malware detector would occasionally do libel. Since libel carries enormous financial consequences, nobody would make a useful malware detector.
I am skeptical that changing the wording in the warning resolves the fundamental tension here. Suppose we tone it down: "This executable has traits similar to known malware." "This website might be operated by attackers."
Would companies affected by these labels be satisfied by this verbiage? How do we balance this against users' likelihood of ignoring the warning in the face of real malware?
The problem is that it's so one sided. They do what they want with no effort to avoid collateral damage and there's nothing we can do about it.
They could at least send a warning email to the RFC2142 abuse@ or hostmaster@ address with a warning and some instructions on a process for having the mistake reviewed.
The first step in filing a libel lawsuit is demanding a retraction from the publisher. I would imagine Google's lawyers respond pretty quickly to those, which is why SafeBrowsing hasn't been similarly challenged.
Happened to me last week. One morning we woke up and the whole company website did not work.
No advance notice with time to fix any possible problem; just blocked.
It gave a very bad image to our clients and users, and we had to explain that it was a false positive from Google's detection.
The culprit, according to google search console, was a double redirect on our web email domain (/ -> inbox -> login).
After moving the webmail to another domain, removing one of the redirections just in case, and asking politely 4 times to be unblocked, it took about 12 hours. And no real recourse, feedback, or any word on when it would be resolved. And no responsibility.
The worst part is the feeling of not being in control of your own business, and of depending on a third party that is not related to us at all, and that made a huge mistake, to let our clients use our platform.
It would be glorious if everybody unjustly screwed by Google did that. Barring antitrust enforcement, this may be the only way to force them to behave.
In all US states corporations may be represented by lawyers in small claims cases. The actual difference is that in higher courts corporations usually must be represented by lawyers whereas many states allow normal employees to represent corporations when defending small claims cases, but none require it.
I've been thinking for a while that a coordinated and massive action against a specific company by people all claiming damages in small claims court would be a very effective way of bringing that company to heel.
> The culprit, according to google search console, was a double redirect on our web email domain (/ -> inbox -> login).
I find it hard to believe that the double redirect itself tripped it: multiple redirects in a row is completely normal—discouraged in general because it hurts performance, but you encounter them all the time. For example, http://foo.example → https://foo.example → https://www.foo.example (http → https, then add or remove www subdomain) is the recommended pattern. And site root to app path to login page is also pretty common. This then leads me to the conclusion that they’re not disclosing what actually tripped it. Maybe multiple redirects contributed to it, a bad learned behaviour in an inscrutable machine learning model perhaps, but it alone is utterly innocuous. There’s something else to it.
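For illustration, that recommended http → https → www chain, followed by a toy resolver with a hop limit, roughly the way a crawler would bound it (no real HTTP involved; the redirect map is made up):

```javascript
// Follow a chain of redirects expressed as a simple url -> url map,
// stopping at a hop limit the way crawlers do.
function followRedirects(url, redirects, maxHops = 10) {
  const chain = [url];
  while (redirects[url] && chain.length <= maxHops) {
    url = redirects[url];
    chain.push(url);
  }
  return chain;
}
```

Two hops for the canonical http/https/www pattern, which is exactly what any well-configured site produces; nothing about hop count alone distinguishes it from phishing.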
Want to see how often Microsoft accounts redirect you? I'd love to see Google block all of Microsoft, but of course that will never happen, because these tech giants are effectively a cartel looking out for each other. At least in comparison to users and smaller businesses.
I suspect you're right... The problem is, and I've experienced this with many big tech companies, you never really get any explanation. You report an issue, and then, magically, it's "fixed," with no further communication.
I'm permanently banned from the Play Store because 10+ years ago I made a third-party Omegle client, called it Yo-megle (neither Omegle nor Yo-megle still exist now), got a bunch of downloads and good ratings, then about 2 years later got a message from Google saying I was banned for violating trademark law. No actual legal action, just a message from Google. I suppose I'm lucky they didn't delete my entire Google account.
I'm beginning to seriously think we need a new internet: another protocol, other browsers, just to break up the insane monopolies that have formed, because the way things are going, soon all discourse will be censored and competitors will be blocked.
We need something that's good for small and medium businesses again, for local news, and for getting an actual marketplace going: you know, what the internet actually promised.
The community around NOSTR are basically building a kind of semantic web, where users identities are verified via their public key, data is routed through content agnostic relays, and trustworthiness is verified by peer recommendation.
They are currently experimenting with replicating many types of services which are currently websites as protocols with data types, with the goal being that all of these services can openly share available data with each other.
It's definitely more of a "bazaar" model than a "cathedral" model, with many open questions, and it's also tough to get a good overview of what is really going on there. But at least it's an attempt.
We have a “new internet”. We have the indie web, VPNs, websites not behind Cloudflare, other browsers. You won’t have a large audience, but a new protocol won't fix that.
Also, plenty of small and medium businesses are doing fine on the internet. You only hear about ones with problems like this. And if these problems become more frequent and public, Google will put more effort into fixing them.
I think the most practical thing we can do is support people and companies who fall through the cracks, by giving them information to understand their situation and recover, and by promoting them.
Stop trying to look for technological answers to political problems. We already have a way to avoid excessive accumulation of power by private entities, it's called "anti-trust laws" (heck, "laws" in general).
Any new protocol not only has to overcome the huge incumbent that is the web, it has to do so grassroots against the power of global capital (trillions of dollars of it). Of course, it also has to work in the first place and not be captured and centralised like another certain open and decentralised protocol has (i.e., the Web).
Is that easier than the states doing their jobs and writing a couple pages of text?
It's very, very hard to overcome the gravitational forces which encourage centralization, and doing so requires rooting the different communities that you want to exist in their own different communities of people. It's a political governance problem, not a technical one.
IPFS has been doing some great work around decentralization that actually scales (Netflix uses it internally to speed up container delivery), but a) it's only good for static content, b) things still need friendly URLs, and c) once it becomes the mainstream, bad actors will find a way to ruin it anyway.
These apply to a lot of other decentralized systems too.
It won't get anywhere unless it addresses the issue of spam, scammers, phishing etc. The whole purpose of Google Safe Browsing is to make life harder for scammers.
I own what I think are the key protocols for the future of browsers and the web, and nobody knows it yet. I'm not committed to forking the web by any means, but I do think I have a once-in-a-generation opportunity to remake the system if I were determined to and knew how to remake it into something better.
I'm afraid this can't be built on the current net topology which is owned by the Stupid Money Govporation and inherently allows for roadblocks in the flow of information. Only a mesh could solve that.
But the Stupid Money Govporation must be dethroned first, and I honestly don't see how that could happen without the help of an ELE like a good asteroid impact.
It will take the same amount of time, or less, to get back to where we are with the current Web.
What we have is the best simulation environment for seeing how this stuff shapes up. So fixing it should be the aim; avoiding it will put us on similar spirals. We'll just go in circles.
This may not be a huge issue depending on mitigating controls, but are they saying that anyone can submit a PR (containing anything) to Immich, tag the PR with `preview`, and have the contents of that PR hosted on https://pr-<num>.preview.internal.immich.cloud?
Doesn't that effectively let anyone host anything there?
I think only collaborators can add labels on github, so not quite. Does seem a bit hazardous though (you could submit a legit PR, get the label, and then commit whatever you want?).
Exposure also extends not just to the owner of the PR but anyone with write access to the branch from which it was submitted. GitHub pushes are ssh-authenticated and often automated in many workflows.
It's the result of failures across the web, really. Most browsers started using Google's phishing site index because they didn't want to maintain one themselves but wanted the phishing resistance Google Chrome has. Microsoft has SmartScreen, but that's just the same risk model but hosted on Azure.
Google's eternal vagueness is infuriating, but in this case the whole setup is a disaster waiting to happen. Google's accidental fuck-up just prevented "someone hacked my server after I clicked on pr-xxxx.immich.app", because apparently the domain's security was set up to allow for that.
You can turn off safe browsing if you don't want these warnings. Google will only stop you from visiting sites if you keep the "allow Google to stop me from visiting some sites" checkbox enabled.
I really don't know how they got nerds to think scummy advertising is cool. If you think about it, the thing they make money on is something no user actually wants: nobody wants ads or wants to see them, ever. Somehow Google has some sort of nerd cult where people think it's cool to join such an unethical company.
If you ask, the leaders in that area of Google will tell you something like "we're actually HELPING users because we're giving them targeted ads that are for the things they're looking for at the time they're looking for it, which only makes things better for the user." Then you show them a picture of YouTube ads or something and it transitions to "well, look, we gotta pay for this somehow, and at least it's free, and isn't free information for all really great?"
It's super simple. Check out all the Fediverse alternatives. How many of the people who talk a big game actually financially support those services? 2% maybe, on the high end.
Things cost money, and at a large scale, there's either capitalism, or communism.
The open internet is done. Monopolies control everything.
We have had an iOS app in the store for 3 years, and out of the blue Apple is demanding we provide new licenses that don’t exist and threatening to kick our app out. Nothing changed in 3 years.
Getting sick of these companies able to have this level of control over everything, you can’t even self host anymore apparently.
> We have an iOS app in the store for 3 years and out of the blue apple is demanding we provide new licenses that don’t exist and threaten to kick our app out.
I love Immich & greatly appreciate the amazing work the team put into maintaining it, but between the OP & this "Cursed Knowledge" page, the apparent team culture of shouting from the rooftops complaints that expose their own ignorance about technology is a little concerning to be honest.
I've now read the entire Cursed Knowledge list & - while I found some of them to be invaluable insights & absolutely love the idea of projects maintaining a public list of this nature to educate - there are quite a few red flags in this particular list.
Before mentioning them: some excellent & valuable, genuinely cursed items: Postgres NOTIFY (albeit adapter-specific), npm scripts, bcrypt string lengths & especially the horrifically cursed Cloudflare fetch: all great knowledge. But...
> Secure contexts are cursed
> GPS sharing on mobile is cursed
These are extremely sane security features. Do we think keeping users secure is cursed? It honestly seems crazy to me that they published these items in the list with a straight face.
> PostgreSQL parameters are cursed
Wherein their definition of "cursed" is that PG doesn't support running SQL queries with more than 65535 separate parameters! It seems to me that any sane engineer would expect the limit to be lower than that. The suggestion that making an SQL query with that many parameters is normal seems problematic.
> JavaScript Date objects are cursed
Javascript is zero-indexed by convention. This one's not a huge red flag but it is pretty funny for a programmer to find this problematic.
> Carriage returns in bash scripts are cursed
Non-default local git settings can break your local git repo. This isn't anything to do with bash & everyone knows git has footguns.
> JavaScript date objects are 1 indexed for years and days, but 0 indexed for months.
This mix of 0 and 1 indexing in calendar APIs goes back a long way. I first remember it coming from Java but I dimly recall Java was copying a Taligent Calendar API.
Huh. Maybe? I don't want that information available to apps to spy on me. But I do want full file contents available to some of them.
And wait. Uh oh. Does this mean my Syncthing-Fork app (which itself would never strike me as needing location services) might have my phone's images' location be stripped before making their way to my backup system?
EDIT: To answer my last question: My images transferred via Syncthing-Fork on a GrapheneOS device to another PC running Fedora Atomic have persisted the GPS data as verified by exiftool. Location permissions have not been granted to Syncthing-Fork.
Happy I didn't lose that data. But it would appear that permission to your photo files may expose your GPS locations regardless of the location permission.
I think the “cursed” part (from the developers point of view) is that some phones do that, some don’t, and if you don’t have both kinds available during testing, you might miss something?
Yep, and it's there for very good reasons. However, if you don't know about it, it can be quite surprising and challenging to debug.
Also, it's annoying when your phone's permissions optimiser runs and removes the location permission from e.g. Google Photos, and you realise a few months later that your photos no longer have their location.
It's not if it silently alters the file.
I do want GPS data for geolocation, so that when I import the images they are already placed where they should be on the map.
Every kind of permission should fail the same way, informing the user about the failure, and asking if the user wants to give the permission, deny the access, or use dummy values. If there's more than one permission needed for an operation, you should be able to deny them all, or use any combination of allowing or using dummy values.
As it says, bulk inserts with large datasets can fail. Inserting a few thousand rows into a table with 30 columns will hit the limit. You might run into this if you were synchronising data between systems or running big batch jobs.
Sqlite used to have a limit of 999 query parameters, which was much easier to hit. It's now a roomy 32k.
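The usual workaround is to chunk the rows so each statement stays under the ceiling. A sketch (65535 is PostgreSQL's per-statement bind-parameter limit; a multi-row INSERT consumes rows times columns of them):

```javascript
// PostgreSQL's wire protocol caps a single statement at 65535 bind
// parameters; a multi-row INSERT uses (rows x columns) of them.
const PG_MAX_PARAMS = 65535;

// Split `rows` into chunks small enough that each INSERT stays legal.
function chunkRows(rows, columnsPerRow, maxParams = PG_MAX_PARAMS) {
  const rowsPerChunk = Math.floor(maxParams / columnsPerRow);
  const chunks = [];
  for (let i = 0; i < rows.length; i += rowsPerChunk) {
    chunks.push(rows.slice(i, i + rowsPerChunk));
  }
  return chunks;
}
```

With 30 columns that's 2184 rows per statement, so the 5000-row sync job from the example above becomes 3 inserts instead of 1 failing one.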
> PostgreSQL USER is cursed
> The USER keyword in PostgreSQL is cursed because you can select from it like a table, which leads to confusion if you have a table name user as well.
> JavaScript date objects are 1 indexed for years and days, but 0 indexed for months.
I don't disagree that months should be 1-indexed, but I would not make that assumption solely based on days/years being 1-indexed, since 0-indexing those would be psychotic.
The only reason I can think of to 0-index months is so you can do monthName[date.getMonth()] instead of monthName[date.getMonth() - 1].
I don't think adding counterintuitive behavior to your data to save a "- 1" here and there is a good idea, but I guess this is just legacy from the ancient times.
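The quirk, in one snippet:

```javascript
// Months are 0-indexed; days and years are not.
const d = new Date(2024, 0, 15); // this is January 15, 2024
const monthName = ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
                   "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"];
// d.getMonth() is 0, d.getDate() is 15, d.getFullYear() is 2024
const label = monthName[d.getMonth()]; // direct lookup, no "- 1" needed
```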
Why so? Months in written form also start with 1, same as days/years, so it would make sense to match all of them.
For example, the first day of the first month of the first year is 1.1.1 AD (at least for Gregorian calendar), so we could just go with 0-indexed 0.0.0 AD.
Dark-grey text on black is cursed. (Their light theme is readable.)
Also, you can do bulk inserts in postgres using arrays. Take a look at unnest. Standard bulk inserts are cursed in every database, I'm with the devs here that it's not worth fixing them in postgres just for compatibility.
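A sketch of the unnest approach, using node-postgres-style placeholders: one parameter per column, each an array, so the bind-parameter count stays constant no matter how many rows you insert (table and column names here are made up):

```javascript
// Each $n placeholder is a whole column passed as one array, so this
// statement always has exactly 2 bind parameters regardless of row count.
const sql = `
  INSERT INTO assets (id, path)
  SELECT * FROM unnest($1::int[], $2::text[])
`;
const params = [
  [1, 2, 3],                   // ids
  ["a.jpg", "b.jpg", "c.jpg"], // paths, must align with ids
];
```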
I'm fighting this right now on my own domain. Google marked my family Immich instance as dangerous, essentially blocking access from Chrome to all services hosted on the same domain.
I know that I can bypass the warning, but the photo album I sent to my mother-in-law is now effectively inaccessible.
Unless I missed something in the article, this seems like a different issue. The article is specifically about the domain "immich.cloud". If you're using your own domain, I'd check to ensure it hasn't actually been compromised by a botnet or similar in some way you haven't noticed.
It may well be a false positive of Google's heuristics, but home server security can be challenging; I would look at ruling out the possibility of it being real first.
It certainly sounds like a separate root issue to this article, even if the end result looks the same.
Just in case you're not sure how to deal with it, you need to request a review via the Google Search Console. You'll need a Google account and you have to verify ownership of the domain via DNS (if you want to appeal the whole domain). After that, you can log into the Google Search Console and you can find "Security Issues" under the "Security & Manual Actions" section.
That area will show you the exact URLs that got you put on the block list. You can request a review from there. They'll send you an email after they review the block.
Hopefully that'll save you from trying to hunt down non-existent malware on a half dozen self-hosted services like I ended up doing.
It's a bit ironic that a user installing Immich to escape Google's grip ends up having to create a Google account again to get Google's block removed.
Reviews via Google Search Console are pointless because they won't stop the same automated process from flagging the domain again. Save your time and get your lawyer to draft a friendly letter instead.
Add a custom "welcome message" in Server Settings (https://my.immich.app/admin/system-settings?isOpen=server) to make your login page look different compared to all other default Immich login pages.
This is probably the easiest non-intrusive tweak to work around the repeated flagging by Safe Browsing, though there's still no 100% guarantee.
I agree that strict access blocking (with extra auth or an IP ACL) can work better, though I've seen in this thread https://github.com/jellyfin/jellyfin-web/issues/4076#issueco... that it works with varying success.
And go through your domain registration/re-review in G Search Console of course.
Immich is a great software package, and I recommend it. Sadly, Google can still flag sites based on domain name patterns, blocking content behind auth or even on your LAN.
That probably wouldn't work, I get hit with Chrome's red screen of annoyance regularly with stuff only reachable on my LAN. I suspect the trigger is that the URLs are like [product name].home.[mydomain.com].
A friend / client of mine used some kind of WordPress type of hosting service with a simple redirect. The host got on the bad sites list.
This also polluted their own domain, even when the redirect was removed, and had the odd side effect that Google would no longer accept email from them. We requested a review and passed it, but the email blacklist appears to be permanent. (I already checked and there are no spam problems with the domain.)
We registered a new domain. Google’s behaviour here incidentally just incentivises bulk registering throwaway domains, which doesn’t make anything any better.
My general policy now is to confine important email to a domain with a very, very basic website whose hosting you rigidly control, keeping only static sites on it.
We nerds *really* need to come together to create a publicly owned browser (non-Chromium).
Surely we devs, as we realize app stores are increasingly hostile, can agree that the open web is worth fighting for, and that we have the numbers to build solutions?
Firefox should be on that list. It's clearly a lot closer in functionality to Chrome/Chromium than Servo or Ladybird, so it's easier to switch to it. I like that Servo and Ladybird exist and are developing well, but there's no need to pretend that they're the only available alternatives.
This has been #1 on HN for a while now, and I suspect it's because many of us are nervous about it happening to us (or have already had our own homelab domains flagged!).
So is there someone from Google around who can send this along to the right team to ensure whatever heuristic has gone wrong here is fixed for good?
I doubt Google the corporation cares one bit, and any individual employees who do care would likely struggle against the system to cause significant change.
The best we all can do is to stop using Google products and encourage our friends and family to do likewise. Make sure in our own work that we don't force others to rely on Google either.
We really need an internet Bill of Rights. Google has too much power to delete your company from existence with no due process or recourse.
If any company controls some (high) percentage of a particular market, say web browsers, search, or e-commerce, or social media, the public's equal access should start to look more like a right and less like an at-will contract.
30 years ago, if a shop had a falling out with the landlord, it could move to the next building over and resume business. Now if you annoy eBay, Amazon or Walmart, you're locked out nationwide. If you're an Uber, Lyft, or Doordash (etc) gig worker and their bots decide they don't like you anymore, then sayonara sucker! Your account has been disabled, have a nice day and don't reapply.
Our regulatory structure and economies of scale encourage consolidation and scale and grant access to this market to these businesses, but we aren't protecting the now powerless individuals and small businesses who are randomly and needlessly tossed out with nobody to answer their pleas of desperation, no explanation of rules broken, and no opportunity to appeal with transparency.
I know someone with a small business who applied for a Venmo Business account (Venmo being the main payment method in their community's industry). Venmo refused to open the account and didn't provide any reason why, saying they have the right to refuse service - which they do. But all of that business's competitors in the area do have Venmo and take payment that way, so it's basically a revenue loss for that person.
It's a bit frustrating when a company becomes a major player in an industry and can have a life and death sentence on other businesses.
There are alternative payment methods, but people are used to paying a certain way in that industry/area; similarly, there are other browsers, but people are used to Chrome.
Same thing with Paypal - I opened a business account, was able to do one transaction and was shut down for fraud. I tested a donation to myself. Under $10. Lifetime ban.
In 2025 you can use Beeper (or run your own local Matrix server with the opensource bridges) and get the same result with WhatsApp, Signal, Telegram, Discord, Google Messages, etc. etc.
But seriously; the internet is now overrun with AI Slop, Spam, and automated traffic. To try to do something about it requires curation, somebody needs to decide what is junk, which is completely antithetical to open protocols. This problem is structurally unsolvable, there is no solution, there's either a useless open internet or a useful closed one. The internet is voting with Cloudflare, Discord, Facebook, to be useful, not open. The alternative is trying to figure out how to run a decentralized dictatorship that only allows good things to happen; a delusion.
The only other solution is accountability, a presence tied to your physical identity; so that an attacker cannot just create 100,000 identities from 25,000 IP addresses and smash your small forum with them. That's an even less popular idea, even though it would make open systems actually possible. Building your own search engine or video platform would be super easy, barely an inconvenience. No need for Cloudflare if the police know who every visitor is. No need for a spam filter, if the government can enforce laws perfectly.
Take a look at email, the mother of all open protocols (older than HTTP). What happened? Radical recentralization to companies that had effective spam management, and now we on HN complain we can't break through, someone needs to do something about that centralization, so that we can go back to square one where people get spammed to death again, which will inevitably repeat the discretion required -> who has the best discretion -> flee there cycle. Go figure.
FWIW in some jurisdictions you might be able to sue them for tortious interference, which basically means they went out of their way to hurt your business.
I see a lot of comments here about using some browser that will allow ME to see the sites I want to see, but not a lot about how to protect my site, or my clients' sites, from being subjected to this. Is there anything proactive that can be done? A set of checks, almost like regression testing? I understand it can be a bit like virus builders using antivirus to test their next virus. But is there a set of best practices that could give you a higher probability of not being blocked?
> how do I protect my site or sites of clients from being subjected to this. Is there anything proactive that can be done?
Some steps to prevent this happening to you:
1. Host only code you own & control on your own domain. Unless...
2. If you have a use-case for allowing arbitrary users to publish & host arbitrary code on a domain you own (or its subdomains), then ensure that domain is a separate, dedicated one that can't be confused with the domains hosting your own owned code.
3. If you're allowing arbitrary members of the public to publish arbitrary code for preview/testing purposes on a domain you own - have the same separation in place for that domain as mentioned above.
4. If you have either of the above two use-cases, publish that separated domain on the Mozilla Public Suffix list https://publicsuffix.org/
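For intuition on what PSL membership changes, here's a toy version of the longest-suffix match that cookie and reputation logic performs. This is a sketch with a tiny hardcoded sample of the real list; production code should use a maintained PSL library:

```python
# A tiny sample of the real Public Suffix List, for illustration only.
PSL_SAMPLE = {"com", "uk", "co.uk", "io", "github.io"}

def public_suffix(domain: str) -> str:
    """Return the longest PSL entry that is a suffix of `domain`."""
    labels = domain.lower().split(".")
    for i in range(len(labels)):
        candidate = ".".join(labels[i:])
        if candidate in PSL_SAMPLE:
            return candidate
    return labels[-1]

def registrable_domain(domain: str) -> str:
    """Public suffix plus one label: the unit reputation should attach to."""
    labels = domain.lower().split(".")
    suffix_len = len(public_suffix(domain).split("."))
    return ".".join(labels[-(suffix_len + 1):])

# Without a PSL entry, everything under example.com shares one reputation:
print(registrable_domain("alice.example.com"))   # example.com
# github.io is on the list, so each user subdomain is its own unit:
print(registrable_domain("alice.github.io"))     # alice.github.io
```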
That would protect your domains from being poisoned by arbitrary publishing, but wouldn't it risk all your users being affected by one user publishing?
A good takeaway is to separate different domains for different purposes.
I had previously been weighing up the pros/cons of this (such as teaching users to accept millions of arbitrary TLDs as official), but I think this article (and other considerations) have solidified it for me.
The biggest con of this is that to a user it will seem much more like phishing.
It happened to me a while ago that I suddenly got emails from "githubnext.com". Well, I know Github and I know that it's hosted at "github.com". So, to me, that was quite obviously phishing/spam.
This is such a difficult problem. You should be able to buy a “season pass” for $500/year or something that stops anyone from registering adjacent TLDs.
And new TLDs are coming out every day which means that I could probably go buy microsoft.anime if I wanted it.
This is what trademarks are supposed to do, but it’s reactive and not proactive.
PayPal is a real star when it comes to vague, fake-sounding, official domains.
Real users don't care much about phishing as long as you got redirected from the main domain, though. github.io has been accepted for a long time, and githubusercontent.com is invisible 99% of the time. Plus, if your regular users are not developers and still end up on your dev/staging domains, they're bound to be confused regardless.
Maybe a dumb question but what constitutes user-hosted-content?
Is a notion page, github repo, or google doc that has user submitted content that can be publicly shared also user-hosted?
IMO Google should not be able to use definitive language "Dangerous website" if its automated process is not definitive/accurate. A false flag can erode customer trust.
The definition of "active code" is broad & sometimes debatable - e.g. do old MySpace websites count - but broadly speaking the best way of thinking about it is in terms of threat model, & the main two there are:
- credential leakage
- phishing
The first is fairly narrow & pertains to uploading server-side code or client JavaScript. If Alice hosts a login page on alice.immich.cloud that contains some session-handling bugs in her code, Mallory can add some code to mallory.immich.cloud to read cookies set on *.immich.cloud and compromise Alice's logins.
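The Python stdlib exposes the RFC cookie domain-matching rule directly, which makes this attack surface easy to check (a sketch; the hostnames just mirror the example above):

```python
from http.cookiejar import domain_match

# A cookie set with Domain=.immich.cloud is sent to every sibling
# subdomain, which is exactly what lets mallory.immich.cloud read
# session cookies scoped that widely:
print(domain_match("mallory.immich.cloud", ".immich.cloud"))  # True
print(domain_match("alice.immich.cloud", ".immich.cloud"))    # True

# An unrelated host that merely embeds the name does not match:
print(domain_match("immich.cloud.attacker.com", ".immich.cloud"))  # False
```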
The second is much broader, as it's mostly about plausible visual impersonation, so it also covers cases where users can only upload CSS or HTML.
Specifically in this case what Immich is doing here is extremely dangerous & this post from them - while I'll give them the benefit of the doubt on being ignorant - is misinformation.
It may be dangerous but it is an established pattern. There are many cases (like Cloudflare Pages) of others doing the same, hosting strangers' sites on subdomains of a dedicated domain (pages.dev for Cloudflare, immich.cloud for Immich).
By preventing newcomers from using this pattern, Google's system is flawed, severely stifling competition.
> what Immich is doing here is extremely dangerous
You fully misunderstand what content is hosted on these sites. It's only builds from internal branches by the core team, there is no path for "external user" content to land on this domain.
Looking forward to Louis Rossmann's reaction. Wouldn't be surprised if this leads to a lawsuit over monopolistic behavior - this is clearly abusing their dominant position in the browser space to eliminate competitors in photos sharing.
He's a right-to-repair activist Youtuber who is quite involved in GrayJay, another app made by this company, which is a video player client for other platforms like YouTube.
I'm not sure why his reaction would be relevant, though. It'll just be another rant about how Google has too much control like he's done in the past. He may be right, but there's nothing new to say.
>> Unfortunately, Google seems to have the ability to arbitrarily flag any domain and make it immediately unaccessible to users. I'm not sure what, if anything, can be done when this happens, except constantly request another review from the all mighty Google.
Perhaps a complaint to the FTC for abusing a monopoly and lacking due process, harming legitimate business? Or to DG COMP (in the EU).
Gathering evidence of harm and seeking alliances with other open-source projects could build momentum.
I write a couple of libraries for creating GOV.UK services and Google has flagged one of them as dangerous. I've appealed the decision several times but it's like screaming into a void.
I use Google Workspace for my company email, so that's the only way for me to get in contact with a human, but they refuse to go off script and won't help me contact the actual department responsible in any way.
It's now on a proper domain, https://govuk-components.x-govuk.org/ - but other than moving, there's still not much anyone can do if they're incorrectly targeted.
Google is not the only one marking subdomains under netlify.app dangerous. For a good reason though, there's a lot of garbage hosted there. Netlify also doesn't do a good enough job of taking down garbage.
Given the scale of Google, and the nerdiness required to run Immich, I bet it's just an accident. Nevertheless, I'm very curious how senior Google staff look at Immich: are they actually registering signals that people use immich-go to empty their Google Photos accounts? Do they see this as something potentially dangerous to their business in the long term?
The nerdsphere has been buzzing with Immich for some time now (I started using it a month back and it lives up to its reputation!), and I assume a lot of Googlers are in that sphere (but not necessarily pro-Google/anti-Immich, of course). So I bet they at least know of it. But do they talk about it?
I love Immich but the entire design and interface is so clearly straight up copied from Google photos. It makes me a bit nervous about their exposure, legally.
I think the other very interesting thing in the reddit thread[0] for this is that if you do well-known-domain.yourdomain.tld then you're likely to get whacked by this too. It makes sense I guess. Lots of people are probably clicking gmail.shady.info and getting phished.
Can I use this space to comment on how amazing Immich is? I self host lots of stuff, and there’s this one tier above everything else that’s currently, and exclusively, held by Home Assistant and Immich. It is actually _better_ than Google photos (if you keep your db and thumbs on ssd, and run the top model for image search). You give up nothing, and own all your data.
I think I found the model because it was recommended by Immich as the best, and it still only took a day or two to run against my 5 thousand assets. I've tested it against whatever Google is using (I keep a part of my library on Google Photos), and it's far better.
I've heard anecdotes of people using an entirely internal domain like "plex.example.com" and, even though it's never exposed to the public internet, Google might flag it as impersonating Plex. Google will sometimes block a site based only on its name, if they think the name is impersonating another service.
It's unclear exactly what conditions cause a site to get blocked by Safe Browsing. My nextcloud.something.tld domain has never been flagged, but I've seen support threads of other people having issues, and the domain name is the best guess.
I'm almost positive GMail scanning messages is one cause. My domain got put on the list for a URL that would have been unknowable to anyone but GMail and my sister who I invited to a shared Immich album. It was a URL like this that got emailed directly to 1 person:
Then suddenly the domain is banned even though there was never a way to discover that URL besides GMail scanning messages. In my case, the server is public so my siblings can access it, but there's nothing stopping Google from banning domains for internal sites that show up in emails they wrongly classify as phishing.
Think of how Google and Microsoft destroyed self hosted email with their spam filters. Now imagine that happening to all self hosted services via abuse of the safe browsing block lists.
I'm kind of curious, do you have your own domain for immich or is this part of a malware-flagged subdomain issue? It's kind of wild to me that Google would flag all instances of a particular piece of self-hosted software as malicious.
Tangential to the flagging issue, but is there any documentation on how Immich is doing the PR site generation feature? That seems pretty cool, and I'd be curious to learn more.
Pretty sure Immich is on github, so I assume they have a workflow for it, but in case you're interested in this concept in general, gitlab has first-class support for this which I've been using for years: https://docs.gitlab.com/ci/review_apps/ . Very cool and handy stuff.
I'm also self-hosting Gitea and Portainer, and I'm fighting this issue every few weeks. I appeal, they remove the warning, and after a week it's back. This has been ongoing for at least 4 years. I have more than 20 appeals, all successfully removing the warning. Ridiculous. I heard legal action is the best option now; any other ideas?
This happened to one of our documentation sites. My co-workers all saw it before I did, because Brave (my daily driver) wasn't showing it. I'm not sure if Brave is more relaxed in determining when a site is "dangerous" but I was glad not to be seeing it, because it was a false positive.
Safe Browsing collects a lot of data, such as hashes of URLs (which can easily be matched back to the original URLs by comparison against hashes of known URLs), and probably other web interactions, like downloads.
But how effective is it in malware detection?
The benefits seem dubious to me. It looks like a feature offered to collect browsing data, useful to maybe 1% of users in special situations.
It's the only thing that has reasonable coverage to effectively block a phishing attack or malware distribution. It can certainly do other things like collecting browsing data, but it does get rid of long-lasting persistent garbage hosted at some bulletproof hosts.
Not sure if this is exactly the scenario from the discussed article but it's interesting to understand it nonetheless.
TL;DR the browser regularly downloads a dump of color profile fingerprints of known bad websites. Then when you load whatever website, it calculates the color profile fingerprint of it as well, and looks for matches.
(This could be outdated and there are probably many other signals.)
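For reference, the documented part of the mechanism (the Safe Browsing v4 Update API) is hash-prefix based: the browser holds a local set of truncated SHA-256 hashes of bad URL expressions and only asks Google for full hashes on a local hit. A simplified sketch (real clients also canonicalize URLs and try many host/path prefix combinations):

```python
import hashlib

def url_prefix(url: str, n: int = 4) -> bytes:
    """First n bytes of the SHA-256 of a (pre-canonicalized) URL expression."""
    return hashlib.sha256(url.encode("utf-8")).digest()[:n]

# Local database: truncated hashes only, so the list stays small and the
# download itself doesn't enumerate the blocked URLs in the clear.
local_prefixes = {url_prefix("evil.example/phish.html")}

def needs_full_hash_check(url: str) -> bool:
    """A local prefix hit is only suspicion; the client then requests the
    matching full hashes from the server to confirm before warning."""
    return url_prefix(url) in local_prefixes

print(needs_full_hash_check("evil.example/phish.html"))  # True
print(needs_full_hash_check("immich.cloud/"))            # False
```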
> There is a user in the JavaScript community who goes around adding "backwards compatibility" to projects. They do this by adding 50 extra package dependencies to your project, which are maintained by them.
I had this same problem with my self-hosted Home Assistant deployment, where Google marked the entire domain as phishing because it contains a login page that looks like other self-hosted Home Assistant deployments.
Fortunately, I expose it to the internet on its own domain despite running through the same reverse proxy as other projects. It would have sucked if this had happened to a domain used for anything else, since the appeal process is completely opaque.
Google often marks my homelab domains as dangerous which all point to an A record that is in the private IP space, completely inaccessible to the internet.
This is crazy; it happened to the SOGo webmail client as well, standalone or bundled with the mailcow: dockerized stack. They implemented a slight workaround where URLs are encrypted to avoid pattern detection flagging the pages as "deceiving".
There has been no response from Google about this. I had my instance flagged 3 times on 2 different domains, including all subdomains, displaying a nice red banner on a representative business website. Cool stuff!
This can happen to everyone. It happened to Amazon.de's Cloudfront endpoint a week ago. Most people didn't notice because Chrome doesn't look at the intermediate bits in the resolver chain, but DNS providers using Safe Browsing blocked it.
The .internal.immich.cloud sites do not have matching certs!
Navigating to https://main.preview.internal.immich.cloud, I'm right away informed by the browser that the connection is not secure due to an issue with the certificate. The problem is that it has the following CN (common name): main.preview.internal.immich.build. The list of alternative names also contains that same domain name. It does not match the site: the certificate's TLD .build is different from the site's .cloud!
I don't see the same problem on external sites like tiles.immich.cloud. That has CN=immich.cloud, with tiles.immich.cloud as an alternative name.
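The mismatch is mechanical: a hostname verifies only if it equals a SAN entry or falls under a single-label wildcard. A simplified matcher, ignoring IP SANs and other RFC 6125 corner cases (the SAN values mirror the ones observed above):

```python
def hostname_matches_san(hostname: str, san_dns_names: list) -> bool:
    """Return True if hostname is covered by one of the cert's DNS SANs."""
    host = hostname.lower().rstrip(".")
    for name in san_dns_names:
        name = name.lower()
        if name.startswith("*."):
            # A wildcard covers exactly one left-most label.
            if "." in host and host.split(".", 1)[1] == name[2:]:
                return True
        elif host == name:
            return True
    return False

# The preview cert carries only the .build name, so the .cloud site fails:
print(hostname_matches_san("main.preview.internal.immich.cloud",
                           ["main.preview.internal.immich.build"]))  # False

# The tiles cert lists the exact name, so it verifies fine:
print(hostname_matches_san("tiles.immich.cloud",
                           ["immich.cloud", "tiles.immich.cloud"]))  # True
```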
The first thing I do when I set up a browser for the first time is make sure the 'Google Safe Browsing' feature is disabled. I don't need yet another annoyance while I browse the web, especially when it's from Google.
This happened to me, I hosted a Wordpress site and it got 0'day'd (this was probably 8 years ago). Google spotted the list of insane pornographic URLs and banned it. You might want to verify nothing is compromised.
Yes, this is not a new problem: web browsers have taken on the role of internet police, but they only care about their own judgement and don't afford website operators any due process or recourse. And by web browsers I mean Google, because of course everyone just defers to them. "File a complaint with /dev/null" might be how Google operates their own properties, but it should not be acceptable for the web as a whole. Google and those integrating their "solutions" need to be held accountable for the damage they cause.
Honestly, where do people live that the DMV (or equivalent - in some states it is split or otherwise named) is a pain? Every time I've ever been it has been "show up, take a number, wait 5 minutes, get served" - and that's assuming website self-service doesn't suffice.
> The most alarming thing was realizing that a single flagged subdomain would apparently invalidate the entire domain.
Correct. It works this way because in general the domain has the rights over routing all the subdomains. Which means if you were a spammer, and doing something untoward on a subdomain only invalidated the subdomain, it would be the easiest game in the world to play.
I'd say this is a clear slight from Google, using their Chrome browser because something or someone is inconveniencing another part of their business: Google Cloud / Google Photos.
They did a similar thing with the uBlock Origin extension, flagging it with "this extension might be slowing down your browser" in a big red banner during the last few months of Manifest V2 on Chrome - and that was after you already had to sideload the extension yourself, because they took it off the extension store for inhibiting their ad business.
Google is a massive monopolistic company who will pull strings on one side of their business to help another.
With Firefox being the only browser not based on Chromium that still supports Manifest V2, the future (5 to 10 years from now) looks bleak. With only one such browser, web devs can slowly phase it out by not taking it into consideration when coding, or Firefox could enshittify to such an extent, thanks to its Manifest V2 monopoly, that even that won't be worth it anymore.
Oh, and for the ones not in the know: the manifest is a JSON file (manifest.json) that declares what browser extensions can and can't modify, and the "upgrade" from Manifest V2 to V3 has made it near impossible for ad blockers to block ads.
A class action lawsuit, charging anticompetitive behavior, on behalf of all Immich site operators could be a good idea here.
Of course Google will claim it's just a mistake, but large tech companies - including e.g. Microsoft - have indulged in such behavior before. A lawsuit will allow for discovery which can help settle the matter, and may also encourage Google to behave like a good citizen.
We still don't know what caused it because it happened to the Cloudflare R2 subdomain, and none of the Search Console verification methods work with R2. It also means it's impossible to request verification.
I've had it work for me several times. Most of the time following links/redirects from search engines, ironically a few times from Google itself. Not that I was going to enter anything (the phishing attempts themselves were quite amateurish) but they do help in some rare cases.
When I worked customer service, these phishing blocks worked wonders preventing people from logging in to your-secure-webmail.jobz. People would be filling in phishing forms days after sending out warnings on all official channels. Once Google's algorithm kicked in, the attackers finally needed to switch domains and re-do their phishing attempts.
This has been a known thing for quite some time, and the only solution is to use a separate domain. The problem has existed for so long that at this point we as users adapt to it rather than still expecting Google to fix it.
From their perspective, a few false positives over the total number of actual malicious websites blocked is fractional.
I had my personal domain I use for self-hosting flagged. I've had the domain for 25 years and it's never had a hint of spam, phishing, or even unintentional issues like compromised sites / services.
It's impossible to know what Google's black box is doing, but, in my case, I suspect my flagging was the result of failing to use a large email provider. I use MXRoute for locally hosted services and network devices because they do a better job of giving me simple, hard limits for sending accounts. That way if anything I have ever gets compromised, the damage in terms of spam will be limited to (ex) 10 messages every 24h.
I invited my sister to a shared Immich album a couple days ago, so I'm guessing that GMail scanned the email notifying her, used the contents + some kind of not-google-or-microsoft sender penalty, and flagged the message as potential spam or phishing. From there, I'd assume the linked domain gets pushed into another system that eventually decides they should blacklist the whole domain.
The thing that really pisses me off is that I just received an email in reply to my request for review, and the whole thing is a gaslighting extravaganza: "Google systems indicate your domain no longer contains harmful links or downloads. Keep yourself safe in the future by blah blah blah blah."
Umm. No! It's actually Google's crappy, non-deterministic, careless detection that's flagging my legitimate resources as malicious. Then I have to spend my time running it down and double checking everything before submitting a request to have the false positive mistake on Google's end fixed.
Convince me that Google won't abuse this to make self hosting unbearable.
> I suspect my flagging was the result of failing to use a large email provider.
This seems like the flagging was a result of the same login page detection that the Immich blog post is referencing? What makes you think it's tied to self-hosted email?
I'm not using self hosted email. My theory is that Google treats smaller mail providers as less trustworthy and that increases the odds of having messages flagged for phishing.
In my case, the Google Search Console explicitly listed the exact URL for a newly created shared album as the cause.
I wish I would have taken a screenshot. That URL is not going to be guessed randomly and the URL was only transmitted once to one person via e-mail. The sending was done via MXRoute and the recipient was using GMail (legacy Workspace).
The only possible way for Google to have gotten that URL to start the process would have been by scanning the recipient's e-mail. What I was trying to say is that the only way it makes sense to me is if Google via GMail categorized that email as phishing and that kicked off the process to add my domain to the block list.
So, if email categorization / filtering is being used as a heuristic for discovering URLs for the block list, it's possible Google's discriminating against domains that use smaller email hosts that Google doesn't trust as much as themselves, Microsoft, etc..
All around it sucks and Google shouldn't be allowed to use non-deterministic guesswork to put domains on a block list that has a significant negative impact. If they want to operate a clown show like that, they should at least be liable for the outcomes IMO.
I'm in a similar boat. Google's false flag is causing issues for my family members who use Chrome, even for internal services that aren't publicly exposed, just because they're on related subdomains.
It's scary how much control Google has over which content people can access on the web - or even on their local network!
This is another case where it's highly important to "plant your flag" [1] and set up all those services like Search Console, even if you don't plan to use them. Not only can this sort of thing happen, but bad-guys can find crafty ways of hijacking your search console account if you're not super vigilant.
Google Postmaster Console [2] is another one everybody should set up on every domain, even if you don't use gmail. And Google Ads, even if you don't run ads.
I also recommend that people set up Bing search console [3] and some service to monitor DMARC reports.
It's unfortunate that so much of the internet has coalesced around a few private companies, but it's undeniably important to "keep them happy" to make sure your domain's reputation isn't randomly ruined.
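DMARC aggregate reports are just XML mailed to the address in your DNS record, so even a minimal monitor is a few lines. A sketch against a hand-made sample (real reports arrive zipped, one per sending provider; the field names follow the RFC 7489 aggregate schema):

```python
import xml.etree.ElementTree as ET

# A trimmed, hand-made aggregate report in the RFC 7489 shape.
SAMPLE_REPORT = """<feedback>
  <record>
    <row>
      <source_ip>203.0.113.7</source_ip>
      <count>12</count>
      <policy_evaluated><disposition>none</disposition></policy_evaluated>
    </row>
  </record>
  <record>
    <row>
      <source_ip>198.51.100.9</source_ip>
      <count>3</count>
      <policy_evaluated><disposition>reject</disposition></policy_evaluated>
    </row>
  </record>
</feedback>"""

def summarize(report_xml: str) -> dict:
    """Tally message counts per disposition -- a spike in 'reject' means
    either spoofing attempts or a misconfigured legitimate sender."""
    totals = {}
    for row in ET.fromstring(report_xml).iter("row"):
        disp = row.findtext("policy_evaluated/disposition")
        totals[disp] = totals.get(disp, 0) + int(row.findtext("count"))
    return totals

print(summarize(SAMPLE_REPORT))  # {'none': 12, 'reject': 3}
```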
It does seem kind of stupid to (apparently) not have google search console, or even a google account according to them, for your business. I don't like Google being in control of so much of the internet - but they are, and it won't do us any good to shout into the void about it when our domain and livelihood is on the line.
Simply opening a case saying that this is our website not impersonating anyone else is unlikely to get anything resolved.
Just because it's your website, and you're not a bad agent doesn't prove that no part of the site is under the control of a bad agent, and that your site isn't accidentally hosting something malicious somewhere, or have some UI that is exploitable for cross-site scripting or whatever.
Sure, but why does Google approve our review over and over again without us making any changes or modifications to the flagged sites/urls? It's a vanilla Immich deployment with docker containers from GitHub pushed there by the core team.
I believe that Jellyfin, Immich, and NextCloud login pages are automatically flagged as dangerous by Google. What's more, I suspect that Google is somehow collecting data from its browser, Chrome.
Google flagged my domain as dangerous once. I do host Jellyfin, Immich, and NextCloud. I run an IP whitelist on the router. All packets from IPs that are not whitelisted are dropped. There are no links to my domain on the internet. At any time, there are 2-3 IPs belonging to me and my family that can load the website. I never whitelisted Google IPs.
How on earth did Google manage to determine that my domain is dangerous?
F you, Google!
Thank goodness I severed that relationship years ago. With so many other great (and ethically superior) products out there to choose from, you'd have to be a true masochist to intentionally throw yourself into their pool of shit.
I realize now that Gigglebet is purposely fucking up the Internet for everyone, and is paying unsuspecting chumps princely sums to do so. To kill the thing they say they love.
Chrome is to Web what Teams is to Chat. Bad job guys.
I've rarely seen a HN comment section this overwhelmingly wrong on a technical topic. This community is usually better than this.
Google is an evil company I want the web to be free of, I resent that even Firefox & Safari use this safe browsing service. Immich is a phenomenal piece of software - I've hosted it myself & sung its praises on HN in the past.
Putting aside David vs Goliath biases here, Google is 100% correct & what Immich are doing is extremely dangerous. The fact they don't acknowledge that in the blog post shows a security knowledge gap that I'm really hoping is closed over the course of remediating this.
I don't think the Immich team mean any harm but as it currently stands the OP constitutes misinformation.
They're auto-deploying PRs to a subdomain of a domain that they also use for production traffic. This allows any member of the public with a GitHub account to deploy any arbitrary code to that subdomain without any review or approval from the Immich team. That's bad for two reasons:
1. PR deploys on public repos are inherently tricky as code gains access to the server environment, so you need to be diligent about segregating secrets for pr deployments from production secret management. That diligence is a complex & continuous undertaking, especially for an open source project.
2. Anyone with a GitHub account can use your domain for phishing scams or impersonation.
The second issue is why they're flagged by Google (the first issue may be higher risk to the Immich project but it's out of scope for Google's safe browsing service).
To be clear: this isn't about people running their own immich instance. This is about members of the public having the ability to deploy arbitrary code without review.
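For what it's worth, the secret-segregation concern above usually comes down to which CI trigger the preview workflow uses. A hedged GitHub Actions sketch (workflow and script names are hypothetical):

```yaml
# Fork PRs triggered via `pull_request` run WITHOUT repository secrets and
# with a read-only token. `pull_request_target` would expose secrets and a
# write token to a job that often checks out untrusted PR code.
name: pr-preview
on: pull_request
permissions:
  contents: read
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./scripts/build-preview.sh  # hypothetical; must not need secrets
```

Actually deploying with production credentials is better left to a separate workflow that only maintainers can trigger.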
---
The article from the Immich team does mention they're switching to using a non-production domain (immich.build) for their PR builds which does indicate to me they somewhat understand the issue (though they've explained it badly in the article), but they don't seem to understand the significance or scope.
If there are any googlers here, I'd like to report an even more dangerous website. As much as 30-50% of the traffic to it relates to malware or scams, and it has gone unpunished for a very long time.
What I really don't understand: at least here in Europe, the advertising partner (AdSense) must investigate at least minimally whether an ad is illegal or fraudulent. I understand that sites.google.com etc. fall under "safe harbor", but that's not the point with AdSense, since people from Google "click" the publish button and also get money to publish that ad.
I have reported over a dozen ads to AdSense (Europe) because of them being outright scams (e.g. on weather apps, an AdSense banner claiming "There is a new upgrade to this program, click here to download it"). Google has invariably closed my reports, claiming that they do not find any violation of the AdSense policies.
The law is only for plebs like you and me. Companies get a pass.
I'm still amazed how deploying spyware would've rightfully landed you in jail a couple decades back, but do the same thing on the web under the justification of advertising/marketing and suddenly it's ok.
The same outfit is running a domain called Blogger.
Reminds me of MS blocking a website of mine for dangerous script. The offending thing I did was use document.write to put "copyright 2025" (with the current year) at the end of static pages.
sites.google.com is widely abused, but so is practically any site which allows users to host content of their choice and make it publicly available. Where Google can be different is that they famously refuse to do work which they cannot automate, and probably they cannot (or don't want to) automate detection/blocking of spam/phishing hosted on sites.google.com and the processing of abuse reports.
Apparently the "best practice" is using Manifest V3 versus V2.
Reading a bit online (not having any personal/deep knowledge), it seems the original extension also downloaded updates from a private (the developer's) server, while that is no longer allowed - they now need to update via the Chrome Web Store, which also means waiting for code review/approval from Google.
I can see the security angle there; it is just awkward how much of a vested interest Google has in the whole topic. Ad-blocking is already a grey area (legally), and there is a cat-and-mouse game between blockers and advertisers; it's hard to believe there is only security best practice going on here.
You know what? I don't even mind them killing it, because of course there are a whole pile of items under the anti-trust label that google is doing so why not one more. But what I do take issue with is the gaslighting, their attempt to make the users believe that this is in the users interests, rather than in google's interests.
If we had functional anti-trust laws then this company would have been broken up long ago, Alphabet or not. But they keep doing these things because we - collectively - let them.
As someone who doesn't like Google and absolutely thinks they need to be broken up, no probably not. Google's algorithms around security are so incompetent and useless that stupidity is far more likely than malice here.
Callous disregard for the wellbeing of others is not stupidity, especially when demonstrated by a company ostensibly full of very intelligent people. This behavior - in particular, implementing an overly eager mechanism for damaging the reputation of other people - is simply malicious.
Incompetently or "coincidentally" abusing your monopoly in a way that "happens" to suppress competitors (while whitelisting your own sites) probably won't fly in court. Unless you buy the judge of course.
Intent does not always matter to the law ... and if a C&D is sent, doesn't that imply that intent is subsequently present?
Defamation laws could also apply independently of monopoly laws.
I don't see how this isn't an issue. To me, this seems at least confusing, and possibly dangerous.
If you have internal auth testing domains at the same place as user-generated content, what's to stop somebody thinking a user-generated page is a legit page when it asks you to log in or something?
If you're going to host user content on subdomains, then you should probably have your site on the Public Suffix List https://publicsuffix.org/list/ . That should eventually make its way into various services so they know that a tainted subdomain doesn't taint the entire site....
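Mechanically, PSL-aware code computes the "registrable domain" (eTLD+1) by finding the longest matching suffix rule. A toy sketch with a tiny hardcoded excerpt of the list; the real PSL is far larger and also has wildcard (`*.`) and exception (`!`) rules that this ignores:

```python
# Tiny excerpt of the real list; the actual file has thousands of rules.
PUBLIC_SUFFIXES = {"com", "org", "uk", "co.uk", "github.io"}

def registrable_domain(hostname: str) -> str:
    """Return the eTLD+1: the matched public suffix plus one extra label."""
    labels = hostname.lower().rstrip(".").split(".")
    # Checking from the most-specific candidate downward finds the longest
    # matching public suffix first.
    for i in range(len(labels)):
        candidate = ".".join(labels[i:])
        if candidate in PUBLIC_SUFFIXES:
            if i == 0:
                raise ValueError(f"{hostname} is itself a public suffix")
            return ".".join(labels[i - 1:])
    # No rule matched: fall back to the last two labels.
    return ".".join(labels[-2:])

print(registrable_domain("foo.example.co.uk"))  # example.co.uk
print(registrable_domain("myuser.github.io"))   # myuser.github.io
```

This is why listing a site's user-content parent domain on the PSL matters: anything under a listed suffix gets treated as its own registrable domain, so one bad subdomain is isolated from its siblings.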
So, once they realized web browsers are all inherently flawed, their solution was to maintain a static list of websites.
God I hate the web. The engineering equivalent of a car made of duct tape.
> Since there was and remains no algorithmic method of finding the highest level at which a domain may be registered for a particular top-level domain
A centralized list like this, not just for domains as a whole (e.g. co.uk) but also for specific sites (e.g. s3-object-lambda.eu-west-1.amazonaws.com), is both kind of crazy in that the list will bloat a lot over the years, and a security risk for any platform that needs this functionality but would prefer not to leak any details publicly.
We already have the concept of a .well-known directory that you can use when talking to a specific site. Similarly, we know how you can nest subdomains, like c.b.a.x, and it's more or less certain that you can't create a subdomain b without the involvement of a, so it should be possible to walk the chain.
Example: c --> https://b.a.x/.well-known/public-suffix
Maybe ship the domains with the browsers and such, and leave generic sites like AWS or whatever to describe things themselves. Hell, maybe that could also have been a TXT record in DNS.
> God I hate the web
This is mostly a browser security mistake but also partly a product of ICANN policy & the design of the domain system, so it's not just the web.
Also, the list isn't really that long, compared to, say, certificate transparency logs; now that's a truly mad solution.
Show me a platform not made out of duct tape and I'll show you a platform nobody uses.
"The engineering equivalent of a car made of duct tape"
Kind of. But do you have a better proposition?
I think we lost the web somewhere between PageRank and JavaScript. Up to there it was just linked documents and it was mostly fine.
Why is it a centrally maintained list of domains, when there is a whole extensible system for attaching metadata to domain names?
I love the web. It's the corporate capitalistic ad fueled and govt censorship web that is the problem.
> God I hate the web. The engineering equivalent of a car made of duct tape.
Most of the complex things I have seen being made (or contributed to) needed duct tape sooner or later. Engineering is the art of trade-offs, of adapting to changing requirements (which can appear due to uncontrollable events external to the project), technology and costs.
Related, this is how the first long distance automobile trip was done: https://en.wikipedia.org/wiki/Bertha_Benz#First_cross-countr... . Seems to me it had quite some duct tape.
That's the nature of decentralised control. It's not just DNS, phone numbers work in the same way.
All web encryption is backed by static list of root certs each browser maintains.
Idk any other way to solve it for the general public (ideally each user would probably pick what root certs they trust), but it does seem crazy.
What we need is a web made in a similar way to the wicker-bodied cars of yesteryear
I'm not sure I'm following: what inherent flaw are you suggesting browsers had that the public suffix list originators knew about?
Wait until you learn about the HSTS preload list.
I think it's somewhat tribal webdev knowledge that if you host user generated content you need to be on the PSL otherwise you'll eventually end up where Immich is now.
I'm not sure how people who haven't already hit this very issue are supposed to know about it beforehand, though; it's one of those things that you don't really come across until you're hit by it.
This is the first time I hear about https://publicsuffix.org
I’ve been doing this for at least 15 years and it’s the first I heard of this.
Fun learning new things so often but I never once heard of the public suffix list.
That said, I do know the other best practices mentioned elsewhere
Besides user uploaded content it's pretty easy to accidentally destroy the reputation of your main domain with subdomains.
For example: you point an A record for a subdomain at a hosting provider's IP, later cancel the service, and forget to remove the record. At this point, if someone else on that hosting provider gets that IP address assigned, your subdomain is now hosting their content.
I had this happen to me once with PDF books being served through a subdomain on my site. Of course it's my mistake for not removing the A record (I forgot) but I'll never make that mistake again.
10 years of my domain having a good history may have been tainted in an irreparable way. I don't get warnings visiting my site, but traffic has slowly gotten worse since around that time, despite me posting more and more content. The correlation isn't guaranteed, especially with AI taking away so much traffic, but it's something I do think about.
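One cheap mitigation is a periodic audit comparing what each subdomain resolves to against the IPs you actually control. A sketch (hostnames and addresses are made up; the resolver is injectable so the logic can run without network access):

```python
import socket

def dangling_subdomains(subdomains, owned_ips, resolve=socket.gethostbyname):
    """Flag records that resolve to an address outside the owned-IP allowlist."""
    flagged = []
    for name in subdomains:
        try:
            ip = resolve(name)
        except OSError:
            continue  # NXDOMAIN / resolution failure: record already gone
        if ip not in owned_ips:
            flagged.append((name, ip))
    return flagged

# Example with a fake resolver (hypothetical names and addresses):
fake_dns = {"blog.example.com": "203.0.113.7", "cdn.example.com": "198.51.100.9"}
print(dangling_subdomains(fake_dns, {"203.0.113.7"}, resolve=fake_dns.__getitem__))
# -> [('cdn.example.com', '198.51.100.9')]
```

Run from cron against your zone file, a check like this would have caught the forgotten A record before anyone else's content appeared under the subdomain.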
The Immich domains that are hit by this issue are -not- user generated content.
Clearly they are not reading HN enough. It hasn’t even been two weeks since this issue last hit the front page.
I wish this comment were top ranked so it would be clear immediately from the comments what the root issue was.
So it's a skill issue??? Or just Google being bad????
Looking through some of the links in this post, I think there are actually two separate issues here:
1. Immich hosts user content on their domain, and should thus be on the public suffix list.
2. When users host an open source self hosted project like immich, jellyfin, etc. on their own domain it gets flagged as phishing because it looks an awful lot like the publicly hosted version, but it's on a different domain, and possibly a domain that might look suspicious to someone unfamiliar with the project, because it includes the name of the software in the domain. Something like immich.example.com.
The first one is fairly straightforward to deal with, if you know about the public suffix list. I don't know of a good solution for the second though.
I don't think the Internet should be run by being on special lists (other than like, a globally run registry of domain names)...
I get that spam, etc., is an issue, but, like, f* google-chrome. I want to browse the web, not some carefully curated list of sites some giant tech company has chosen.
A) you shouldn't be using google-chrome at all. B) Firefox should definitely not be using that list either. C) if you are going to have a "safe sites" list, that should definitely be run by a non-profit, not an automated robot working for a large, probably-evil company...
> I don't know of a good solution for the second though.
I know the second issue can be a legitimate problem but I feel like the first issue is the primary problem here & the "solution" to the second issue is a remedy that's worse than the disease.
The public suffix list is a great system (despite getting serious backlash here in HN comments, mainly from people who have jumped to wildly exaggerated conclusions about what it is). Beyond that though, flagging domains for phishing for having duplicate content smells like an anti-self-host policy: sure, there are phishers making clone sites, but the vast majority of sites flagged are going to be legit unless you employ a more targeted heuristic, and doing so isn't incentivised by Google's (or most companies') business model.
> When users host an open source self hosted project like immich, jellyfin, etc. on their own domain...
I was just deploying your_spotify and gave it your-spotify.<my services domain>, and there was a warning in the logs that talked about this, linking the issue:
https://github.com/Yooooomi/your_spotify/issues/271
That means the Safe Browsing abuse could be weaponized against self-hosted services, oh my...
The second is a real problem even with completely unique applications. If they have UI portions that have lookalikes, you will get flagged. At work, I created an application with a sign-in popup. Because it's for internal use only, the form in the popup is very basic, just username and password and a button. Safe Browsing continues to block this application to this day, despite multiple appeals.
Even the first one only works if there's no need to have site-wide user authentication on the domain, because you can't have a domain cookie accessible from subdomains anymore otherwise.
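The difference hinges on whether the `Set-Cookie` header carries a `Domain` attribute, which Python's stdlib can illustrate:

```python
from http.cookies import SimpleCookie

# A cookie with a Domain attribute is sent to the named domain AND all of
# its subdomains; omitting Domain makes it host-only.
wide = SimpleCookie()
wide["session"] = "abc123"
wide["session"]["domain"] = ".example.com"  # visible on every *.example.com

narrow = SimpleCookie()
narrow["session"] = "abc123"                # host-only cookie

print(wide.output())    # Set-Cookie: session=abc123; Domain=.example.com
print(narrow.output())  # Set-Cookie: session=abc123
```

Browsers refuse a `Domain` value that is a public suffix, which is exactly why a PSL entry cuts off domain-wide cookies for everything underneath it.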
The issue isn't the user-hosted content - I'm running a release build of Immich on my own server and Google flagged my entire domain.
Is it on your own domain?
Is the subdomain named immich or something more general?
They aren't hosting user content; it was their pull request preview domains that were triggering it.
This is very clearly just bad code from Google.
Or anticompetitive behavior.
I thought this story would be about some malicious PR that convinced their CI to build a page featuring phishing, malware, porn, etc. It looks like Google is simply flagging their legit, self-created Preview builds as being phishing, and banning the entire domain. Getting immich.cloud on the PSL is probably the right thing to do for other reasons, and may decrease the blast radius here.
The root cause is bad behaviour by google. This is merely a workaround.
Is that actually relevant when only images are user content?
Normally I see the PSL in context of e.g. cookies or user-supplied forms.
> Is that actually relevant when only images are user content?
Yes. For instance in circumstances exactly as described in the thread you are commenting in now and the article it refers to.
Services like google's bad site warning system may use it to indicate that it shouldn't consider a whole domain harmful if it considers a small number of its subdomains to be so, where otherwise they would. It is no guarantee, of course.
In another comment in this thread, it was confirmed that these PR host names are only generated from branches internal to Immich or labels applied by maintainers, and that this does not automatically happen for arbitrary PRs submitted by external parties. So this isn’t the use case for the public suffix list - it is in no way public or externally user-generated.
What would you recommend for this actual use case? Even splitting it off to a separate domain name as they’re planning merely reduces the blast radius of Google’s false positive, but does not eliminate it.
If these are dev subdomains that are actually for internal use only, then a very reliable fix is to put basic auth on them, and give internal staff the user/password. It does not have to be strong, in fact it can be super simple. But it will reliably keep out crawlers, including Google.
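As a sketch, in nginx that's a couple of lines per preview vhost (all names here are placeholders; the htpasswd file is created separately with a tool like `htpasswd`):

```nginx
server {
    listen 443 ssl;
    server_name pr-1234.preview.example.com;

    auth_basic           "internal previews";
    auth_basic_user_file /etc/nginx/preview.htpasswd;

    location / {
        proxy_pass http://127.0.0.1:8080;  # the PR build
    }
}
```

A crawler that can't get past the 401 has no page content to classify, which sidesteps the flagging entirely.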
How does the PSL make any sense? What stops an attacker from offering free static hosting and then making use of their own service?
I appreciate the issue it tries to solve but it doesn't seem like a sane solution to me.
PSL isn't a list of dangerous sites per se.
Browsers already do various levels of isolation based on domains / subdomains (e.g. cookies). The PSL tells them to treat each subdomain as if it were a top-level domain, because the subdomains are operated by (leased out to) different individuals/entities. With respect to blocking, it just means that if one subdomain is marked bad, it's less likely to contaminate the rest of the domain, since they know it's operated by different people.
This is not about user content, but about their own preview environments! Google decided their preview environments were impersonating... Something? And decided to block the entire domain.
I think this is only true if you host independent entities. If you simply construct deep names about yourself, with a demonstrable chain of authority back, I don't think the PSL wants to know. Otherwise there is no hierarchy: the dots are just convenience strings, and it's a flat namespace the size of the PSL's length.
Aw. I saw Jothan Frakes and briefly thought my favorite Starfleet first officer's actor had gotten into writing software later in life.
Does Google use this for Safe Browsing though?
Looks like it? https://developers.google.com/safe-browsing/reference/URLs.a...
Oh - of course this is where I find the answer to why there's a giant domain list bloating my web bundles (tough-cookie/tldts).
There is no law appointing that organization as a world wide authority on tainted/non tainted sites.
The fact it's used by one or more browsers in that way is a lawsuit waiting to happen.
Because they, the browsers, are pointing a finger at someone else and accusing them of criminal behavior. That is what a normal user understands this warning as.
Turns out they are wrong. And in being wrong they may well have harmed the party they pointed at, in reputation and / or sales.
It's remarkable how short-sighted this is, given that the web is so international. It's not a defense to say some third party has a list, and you're not on it, so you're dangerous.
Incredible
I love all the theoretical objections to something that has been in use for nearly 20 years.
As far as I know there is currently no international alternative authority for this. So definitely not ideal, but better than not having the warnings.
Never host your test environments as subdomains of your actual production domain. You'll also run into email reputation issues as well as cookie hell: test environments can end up receiving a lot of cookies from the production environment if they're not managed well.
This. I cannot believe the rest of the comments on this are seemingly completely missing the problem here & kneejerk-blaming Google for being an evil corp. This is a real issue & I don't feel like the article from the Immich team acknowledges it. Far too much passing the buck, not enough taking ownership.
It's true that putting locks on your front door will reduce the chance of your house getting robbed, but if you do get robbed, the fact that your front door wasn't locked does not in any way absolve the thief for his conduct.
Similarly, if an organization deploys a public system that engages in libel and tortious interference, the fact that jumping through technical hoops might make it less likely to be affected by that system does not in any way absolve the organization for operating it carelessly in the first place.
Just because there are steps you can take to lessen the impact of bad behavior does not mean that the behavior itself isn't bad. You shouldn't have to restrict how you use your own domains to avoid someone else publishing false information about your site. Google should be responsible for mitigating false positives, not the website owners affected by them.
Both things can be problems.
1. You should host dev stuff on separate domains.
2. Google shouldn't be blocking your preview environments.
Yes they could do better, but who appointed Google "chief of web security"? Google can eff right off.
Yep. Still I feel bad for them.
There's quite a few comments of people having this happen to them when they self-host Immich, the issue you point out seems minor in comparison.
I think immich.app is the production domain, not cloud?
.cloud is used to host the map embedded in their webapp.
In fairness, in my local testing so far, it appears to be an entirely unauthenticated/credential-less service, so there's no risk to sessions right now for this particular use case. That leaves the only risk factors being phishing & deploy environment credentials.
The one thing I never understood about these warnings is how they don't run afoul of libel laws. They are directly calling you a scammer and "attacker". The same for Microsoft with their unknown executables.
They used to be more generic, saying "We don't know if it's safe", but now they are quite assertive at stating you are indeed an attacker.
> They are directly calling you a scammer and "attacker".
No they're not. The word "scammer" does not appear. They talk about attackers on the site, and they use the word "might".
This includes third-party hackers who have compromised the site.
They never say the owner of the site is the attacker.
I'm quite sure their lawyers have vetted the language very carefully.
"The people living at this address might be pedophiles and sexual predators. Not saying that they are, but if your children are in the vicinity, I strongly suggest you get them back to safety."
I think that might count as libel.
You can't possibly use the "they use the word 'might'" argument and not mention the giant red warning screen those words are printed over. If you are referring to compliance with the letter of the law, you are technically right - but only technically, if we remove the human factor.
> The one thing I never understood about these warnings is how they don't run afoul of libel laws.
I’m not a lawyer, but this hasn’t ever been taken to court, has it? It might qualify as libel.
I know of no such cases, and would love to know if someone finds one.
you only sue somebody poorer than you
Imagine if you bought a plate at Walmart and any time you put food you bought elsewhere on it, it turned red and started playing a warning about how that food will probably kill you because it wasn't Certified Walmart Fresh™
Now imagine it goes one step further, and when you go to eat the food anyway, your Walmart fork retracts into its handle for your safety, of course.
No brand or food supplier would put up with it.
That's what it's like trying to visit or run non-blessed websites and software coming from Google, Microsoft, etc on your own hardware that you "own".
This is the future. Except you don't buy anything, you rent the permission to use it. People from Walmart can brick your carrots remotely even when you don't use this plate, for your safety of course.
> The one thing I never understood about these warnings is how they don't run afoul of libel laws. They are directly calling you a scammer and "attacker"
Being wrong doesn't count as libel.
If a company has a detection tool, makes reasonable efforts to make sure it is accurate, and isn't being malicious, you'll have a hard time making a libel case
There is a truth defence to libel in the USA, but there is no good-faith defence. Think about it like a traffic accident: you may not have intended to drive into the other car, but you still caused damage. Just because you meant well doesn't absolve you from paying for the damages.
This is tricky to get right.
If the false positive rate is consistently 0.0%, that is a surefire sign that the detector is not effective enough to be useful.
If a false positive is libel, then any useful malware detector would occasionally do libel. Since libel carries enormous financial consequences, nobody would make a useful malware detector.
I am skeptical that changing the wording in the warning resolves the fundamental tension here. Suppose we tone it down: "This executable has traits similar to known malware." "This website might be operated by attackers."
Would companies affected by these labels be satisfied by this verbiage? How do we balance this against users' likelihood of ignoring the warning in the face of real malware?
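The tension is just base rates. A back-of-the-envelope Bayes calculation (all three rates are invented for illustration):

```python
# Why even a very good detector produces many false flags when malicious
# sites are rare. All rates below are made-up assumptions.
malicious_rate = 0.001  # assume 0.1% of crawled sites are actually malicious
tpr = 0.99              # detector catches 99% of bad sites
fpr = 0.005             # and wrongly flags 0.5% of good sites

flagged_bad = malicious_rate * tpr
flagged_good = (1 - malicious_rate) * fpr
false_flag_share = flagged_good / (flagged_bad + flagged_good)
print(f"{false_flag_share:.0%} of flagged sites are innocent")  # 83%
```

Even with these generous numbers, most flagged sites are innocent, so the wording and the appeal process carry real weight.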
The problem is that it's so one sided. They do what they want with no effort to avoid collateral damage and there's nothing we can do about it.
They could at least send a warning email to the RFC2142 abuse@ or hostmaster@ address with a warning and some instructions on a process for having the mistake reviewed.
Spamhaus has been sued—multiple times, I believe—for publishing DNS-based lists used to block email from known spammers.
For instance: https://reason.com/volokh/2020/07/27/injunction-in-libel-cas... (That was a default judgment, though, which means Spamhaus didn't show up, probably due to jurisdictional questions.)
The first step in filing a libel lawsuit is demanding a retraction from the publisher. I would imagine Google's lawyers respond pretty quickly to those, which is why SafeBrowsing hasn't been similarly challenged.
Happened to me last week. One morning we woke up and the whole company website did not work.
No advance notice with some time to fix any possible problem - just blocked.
It gave a very bad image to our clients and users, and we had to explain that it was a false positive from Google's detection.
The culprit, according to google search console, was a double redirect on our web email domain (/ -> inbox -> login).
After just moving the webmail to another domain, removing one of the redirections just in case, and asking politely 4 times to be unblocked, it took about 12 hours. And no real recourse, feedback or anything about when it was going to be solved. And no responsibility.
The worst part is the feeling of not being in control of your own business, and of depending on a third party - one not related to us at all, and which made a huge mistake - to let our clients use our platform.
File a small claim for damages of up to 10,000 to 20,000 USD, depending on your local statutes.
It’s actually pretty quick and easy. They cannot defend themselves with lawyers, so a director usually has to show up.
It would be glorious if everybody unjustly screwed by Google did that. Barring antitrust enforcement, this may be the only way to force them to behave.
In all US states corporations may be represented by lawyers in small claims cases. The actual difference is that in higher courts corporations usually must be represented by lawyers whereas many states allow normal employees to represent corporations when defending small claims cases, but none require it.
I've been thinking for a while that a coordinated and massive action against a specific company by people all claiming damages in small claims court would be a very effective way of bringing that company to heel.
And now your Gmail account has been deleted as well as any other accounts you had with Google
Do small claims apply to things like this where damages are indirect?
> The culprit, according to google search console, was a double redirect on our web email domain (/ -> inbox -> login).
I find it hard to believe that the double redirect itself tripped it: multiple redirects in a row is completely normal—discouraged in general because it hurts performance, but you encounter them all the time. For example, http://foo.example → https://foo.example → https://www.foo.example (http → https, then add or remove www subdomain) is the recommended pattern. And site root to app path to login page is also pretty common. This then leads me to the conclusion that they’re not disclosing what actually tripped it. Maybe multiple redirects contributed to it, a bad learned behaviour in an inscrutable machine learning model perhaps, but it alone is utterly innocuous. There’s something else to it.
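To make that concrete, here's a sketch of the recommended two-hop canonicalisation (hostnames are placeholders): upgrade to https, then normalise the host, each as its own redirect:

```python
from urllib.parse import urlsplit, urlunsplit

# Each request gets at most one redirect; the full chain is two hops,
# which is completely ordinary and not inherently suspicious.
CANONICAL_HOST = "www.foo.example"  # placeholder

def next_hop(url):
    parts = urlsplit(url)
    if parts.scheme == "http":                      # hop 1: https upgrade
        return urlunsplit(("https",) + parts[1:])
    if parts.netloc != CANONICAL_HOST:              # hop 2: host normalisation
        return urlunsplit((parts.scheme, CANONICAL_HOST) + parts[2:])
    return None                                     # already canonical

url, hops = "http://foo.example/", []
while (nxt := next_hop(url)) is not None:
    url = nxt
    hops.append(url)
print(hops)  # ['https://foo.example/', 'https://www.foo.example/']
```

A chain like this is the textbook pattern, which is why a bare "double redirect" explanation from Search Console is hard to take at face value.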
Want to see how often Microsoft accounts redirect you? I'd love to see Google block all of Microsoft, but of course that will never happen, because these tech giants are effectively a cartel looking out for each other. At least in comparison to users and smaller businesses.
I suspect you're right... The problem is, and I've experienced this with many big tech companies, you never really get any explanation. You report an issue, and then, magically, it's "fixed," with no further communication.
This looks like the same suicide-inducing type of crap from Google that previously only Android devs on the Play Store were subject to.
I'm permanently banned from the Play Store because 10+ years ago I made a third-party Omegle client, called it Yo-megle (neither Omegle nor Yo-megle still exist now), got a bunch of downloads and good ratings, then about 2 years later got a message from Google saying I was banned for violating trademark law. No actual legal action, just a message from Google. I suppose I'm lucky they didn't delete my entire Google account.
I'm beginning to seriously think we need a new internet - another protocol, other browsers - just to break up the insane monopolies that have formed, because the way things are going, soon all discourse will be censored and competitors will be blocked.
We need something that's good for small and medium businesses again, local news and get an actual marketplace going - you know what the internet actually promised.
Anyone working on something like this?
The community around NOSTR are basically building a kind of semantic web, where users identities are verified via their public key, data is routed through content agnostic relays, and trustworthiness is verified by peer recommendation.
They are currently experimenting with replicating many types of services which are currently websites as protocols with data types, with the goal being that all of these services can share available data with each other openly.
It's definitely more of a "bazaar" model than a "cathedral" model, with many open questions, and it's also tough to get a good overview of what is really going on there. But at least it's an attempt.
We have a “new internet”. We have the indie web, VPNs, websites not behind Cloudflare, other browsers. You won’t have a large audience, but a new protocol won't fix that.
Also, plenty of small and medium businesses are doing fine on the internet. You only hear about ones with problems like this. And if these problems become more frequent and public, Google will put more effort into fixing them.
I think the most practical thing we can do is support people and companies who fall through the cracks, by giving them information to understand their situation and recover, and by promoting them.
Stop trying to look for technological answers to political problems. We already have a way to avoid excessive accumulation of power by private entities, it's called "anti-trust laws" (heck, "laws" in general).
Any new protocol not only has to overcome the huge incumbent that is the web, it has to do so grassroots against the power of global capital (trillions of dollars of it). Of course, it also has to work in the first place and not be captured and centralised like another certain open and decentralised protocol has (i.e., the Web).
Is that easier than the states doing their jobs and writing a couple pages of text?
It's very, very hard to overcome the gravitational forces that encourage centralization, and doing so requires rooting each community you want to exist in its own community of people. It's a political governance problem, not a technical one.
You make it seem like the problem is of technical nature (instead of regulatory or other). Would you mind explaining why?
Technical alternatives already exist, see for example GNUnet.
How about the Invisible Internet Project, https://geti2p.net?
IPFS has been doing some great work around decentralization that actually scales (Netflix uses it internally to speed up container delivery), but a) it's only good for static content, b) things still need friendly URLs, and c) once it becomes the mainstream, bad actors will find a way to ruin it anyway.
These apply to a lot of other decentralized systems too.
It won't get anywhere unless it addresses the issue of spam, scammers, phishing etc. The whole purpose of Google Safe Browsing is to make life harder for scammers.
I'm not sure, but it's on my mind.
I own what I think are the key protocols for the future of browsers and the web, and nobody knows it yet. I'm not committed to forking the web by any means, but I do think I have a once-in-a-generation opportunity to remake the system if I were determined to and knew how to remake it into something better.
If you want to talk more, reach out!
This is not a technical problem. You will not solve it with purely technical solutions.
I'm afraid this can't be built on the current net topology which is owned by the Stupid Money Govporation and inherently allows for roadblocks in the flow of information. Only a mesh could solve that.
But the Stupid Money Govporation must be dethroned first, and I honestly don't see how that could happen without the help of an ELE like a good asteroid impact.
It will take the same amount of time or less to get back to where we are with the current web.
What we have is the best simulation environment to see how these things shape up. So fixing it should be the aim; avoiding it will put us on similar spirals. We'll just go in circles.
Have you talked to your lawyer? Making Google pay for their carelessness is the ONLY way to get them to care.
This may not be a huge issue depending on mitigating controls, but are they saying that anyone can submit a PR (containing anything) to Immich, tag the PR with `preview`, and have the contents of that PR hosted on https://pr-<num>.preview.internal.immich.cloud?
Doesn't that effectively let anyone host anything there?
I think only collaborators can add labels on github, so not quite. Does seem a bit hazardous though (you could submit a legit PR, get the label, and then commit whatever you want?).
Exposure also extends not just to the owner of the PR but anyone with write access to the branch from which it was submitted. GitHub pushes are ssh-authenticated and often automated in many workflows.
So basically like https://docs.google.com/ ?
Yes, except on Google Docs you can't make the document steal credentials or download malware by simply clicking on the link.
It's more like sites.google.com.
No, it doesn't work at all for PRs from forks.
That was my first thought - have the preview URLs possibly actually been abused through GitHub?
Excellent idea for cost-free phishing.
Insane that one company can dictate what websites you're allowed to visit. Telling you what apps you can run wasn't far enough.
The US Congress not functioning for over a decade causes a few problems.
It's the result of failures across the web, really. Most browsers started using Google's phishing site index because they didn't want to maintain one themselves but wanted the phishing resistance Google Chrome has. Microsoft has SmartScreen, but that's just the same risk model but hosted on Azure.
Google's eternal vagueness is infuriating, but in this case the whole setup is a disaster waiting to happen. Google's accidental fuck-up just prevented "someone hacked my server after I clicked on pr-xxxx.immich.app" because apparently the domain's security was set up to allow for that.
You can turn off safe browsing if you don't want these warnings. Google will only stop you from visiting sites if you keep the "allow Google to stop me from visiting some sites" checkbox enabled.
I really don't know how they got nerds to think scummy advertising is cool. If you think about it, the thing they make money on is something no user actually wants or ever wants to see: ads. Somehow Google has built some sort of nerd cult where people think it's cool to join such an unethical company.
Turns out it's cool to make lots of money
If you ask, the leaders in that area of Google will tell you something like "we're actually HELPING users because we're giving them targeted ads for the things they're looking for at the time they're looking for them, which only makes things better for the user." Then you show them a picture of YouTube ads or something and it transitions to "well, look, we gotta pay for this somehow, and at least it's free, and isn't free information for all really great?"
Unfortunately nobody wants to sacrifice anything nowadays, so everyone will keep using Google, and Microsoft, and TikTok and Meta and blah blah.
It's super simple. Check out all the Fediverse alternatives. How many people that talk a big game actually financially support those services? 2% maybe, on the high end.
Things cost money, and at a large scale, there's either capitalism, or communism.
Absolutely fuck Google
[flagged]
The open internet is done. Monopolies control everything.
We have an iOS app in the store for 3 years and out of the blue apple is demanding we provide new licenses that don’t exist and threaten to kick our app out. Nothing changed in 3 years.
Getting sick of these companies being able to have this level of control over everything; you can't even self-host anymore, apparently.
> We have an iOS app in the store for 3 years and out of the blue apple is demanding we provide new licenses that don’t exist and threaten to kick our app out.
Crazy! If you can elaborate here, please do.
[dead]
Story of when it happened to my company: https://news.ycombinator.com/item?id=25802366
Be sure to see the team's whole list of Cursed Knowledge. https://immich.app/cursed-knowledge
I love Immich & greatly appreciate the amazing work the team put into maintaining it, but between the OP & this "Cursed Knowledge" page, the apparent team culture of shouting from the rooftops complaints that expose their own ignorance about technology is a little concerning to be honest.
I've now read the entire Cursed Knowledge list & - while I found some of them to be invaluable insights & absolutely love the idea of projects maintaining a public list of this nature to educate - there are quite a few red flags in this particular list.
Before mentioning them: some excellent & valuable, genuinely cursed items: Postgres NOTIFY (albeit adapter-specific), npm scripts, bcrypt string lengths & especially the horrifically cursed Cloudflare fetch: all great knowledge. But...
> Secure contexts are cursed
> GPS sharing on mobile is cursed
These are extremely sane security features. Do we think keeping users secure is cursed? It honestly seems crazy to me that they published these items in the list with a straight face.
> PostgreSQL parameters are cursed
Wherein their definition of "cursed" is that PG doesn't support running SQL queries with more than 65535 separate parameters! It seems to me that any sane engineer would expect the limit to be lower than that. The suggestion that making an SQL query with that many parameters is normal seems problematic.
> JavaScript Date objects are cursed
JavaScript is zero-indexed by convention. This one's not a huge red flag, but it is pretty funny for a programmer to find this problematic.
> Carriage returns in bash scripts are cursed
Non-default local git settings can break your local git repo. This isn't anything to do with bash & everyone knows git has footguns.
> Carriage returns in bash scripts are cursed
Also the full story here seemed to be
1. Person installs git on Windows with autocrlf enabled, automatically converting all LF to CRLF (very cursed in itself in my opinion).
2. Does their thing with git on the Windows side (clone, checkout, whatever).
3. Then runs the checked out (and now broken due to autocrlf) code on Linux instead of Windows via WSL.
The biggest footgun here is autocrlf, but I don't see how this whole situation is the problem of any Linux tooling.
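For what it's worth, a `.gitattributes` file committed to the repo can neutralize this class of problem regardless of each contributor's local autocrlf setting. A minimal sketch (the patterns are illustrative):

```
# Normalize line endings in the repo; force LF on checkout for shell
# scripts so they still run under WSL/Linux even when cloned on Windows.
* text=auto
*.sh text eol=lf
```

Committed attributes take precedence over a contributor's `core.autocrlf`, so the project doesn't have to rely on everyone's local config being sane.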
The Date complaint is
> JavaScript date objects are 1 indexed for years and days, but 0 indexed for months.
This mix of 0 and 1 indexing in calendar APIs goes back a long way. I first remember it coming from Java but I dimly recall Java was copying a Taligent Calendar API.
You're taking the word cursed way too seriously
This is just a list of things that can catch devs off guard
Some of these seem less cursed, and more just security design?
>Some phones will silently strip GPS data from images when apps without location permission try to access them.
That strikes me as the right thing to do?
Huh. Maybe? I don't want that information available to apps to spy on me. But I do want full file contents available to some of them.
And wait. Uh oh. Does this mean my Syncthing-Fork app (which itself would never strike me as needing location services) might have my phone's images' location be stripped before making their way to my backup system?
EDIT: To answer my last question: My images transferred via Syncthing-Fork on a GrapheneOS device to another PC running Fedora Atomic have persisted the GPS data as verified by exiftool. Location permissions have not been granted to Syncthing-Fork.
Happy I didn't lose that data. But it would appear that permission to your photo files may expose your GPS locations regardless of the location permission.
I think the “cursed” part (from the developers point of view) is that some phones do that, some don’t, and if you don’t have both kinds available during testing, you might miss something?
> That strikes me as the right thing to do
Yep, and it's there for very good reasons. However, if you don't know about it, it can be quite surprising and challenging to debug.
Also it's annoying when your phones permissions optimiser runs and removes the location permissions from e.g. Google Photos, and you realise a few months later that your photos no longer have their location.
It's not if it silently alters the file. I do want GPS data for geolocation, so that when I import the images they are already placed where they should be on the map.
IMO, the problem is that it fails silently.
Every kind of permission should fail the same way, informing the user about the failure, and asking if the user wants to give the permission, deny the access, or use dummy values. If there's more than one permission needed for an operation, you should be able to deny them all, or use any combination of allowing or using dummy values.
I think the bad part is that the users are often unaware. Stripping the data by default makes sense but there should be an easy option not to.
Try to get an iPhone user to send you an original copy of a photo with all metadata. Even if they want to do it most of them don't know how.
How does it make sense?
This kind of makes me wish CURSED.md were a standard file in projects. So much hard-earned knowledge could be shared.
You know you can just start doing that in your projects. That's how practice often becomes standard.
The Postgres query parameters one is funny. 65k parameters is not enough for you?!
As it says, bulk inserts with large datasets can fail. Inserting a few thousand rows into a table with 30 columns will hit the limit. You might run into this if you were synchronising data between systems or running big batch jobs.
Sqlite used to have a limit of 999 query parameters, which was much easier to hit. It's now a roomy 32k.
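For anyone hitting these limits, the usual workaround is to chunk the rows so that each statement stays under the parameter cap, since each inserted row consumes one bind parameter per column. A rough sketch (the function name is illustrative):

```javascript
// PostgreSQL allows at most 65535 bind parameters per statement.
const PG_MAX_PARAMS = 65535;

function chunkRows(rows, columnCount, maxParams = PG_MAX_PARAMS) {
  const rowsPerChunk = Math.floor(maxParams / columnCount);
  const chunks = [];
  for (let i = 0; i < rows.length; i += rowsPerChunk) {
    chunks.push(rows.slice(i, i + rowsPerChunk));
  }
  return chunks;
}

// 5000 rows x 30 columns = 150000 parameters: too many for one
// statement, but fine as 3 statements of at most 2184 rows each.
```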
> PostgreSQL USER is cursed
>
> The USER keyword in PostgreSQL is cursed because you can select from it like a table, which leads to confusion if you have a table named user as well.
is even funnier :D
> JavaScript date objects are 1 indexed for years and days, but 0 indexed for months.
I don't disagree that months should be 1-indexed, but I would not make that assumption solely based on days/years being 1-indexed, since 0-indexing those would be psychotic.
The only reason I can think of to 0-index months is so you can do monthName[date.getMonth()] instead of monthName[date.getMonth() - 1].
I don't think adding counterintuitive behavior to your data to save a "- 1" here and there is a good idea, but I guess this is just legacy from the ancient times.
Why so? Months in written form also start with 1, same as days/years, so it would make sense to match all of them.
For example, the first day of the first month of the first year is 1.1.1 AD (at least for Gregorian calendar), so we could just go with 0-indexed 0.0.0 AD.
Hum...
Dark-grey text on black is cursed. (Their light theme is readable.)
Also, you can do bulk inserts in postgres using arrays. Take a look at unnest. Standard bulk inserts are cursed in every database, I'm with the devs here that it's not worth fixing them in postgres just for compatibility.
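The unnest trick turns N-rows-times-M-columns parameters into just M array parameters, one per column. A hedged sketch of building such a statement (names are illustrative; in practice you may also need explicit casts such as `$1::int[]` so Postgres can infer the array types):

```javascript
// Build a single-statement bulk insert: one array parameter per
// column instead of one scalar parameter per cell.
function buildUnnestInsert(table, columns, rows) {
  // Transpose row-major data into one array per column.
  const values = columns.map((_, i) => rows.map(row => row[i]));
  const params = columns.map((_, i) => `$${i + 1}`).join(', ');
  const text =
    `INSERT INTO ${table} (${columns.join(', ')}) ` +
    `SELECT * FROM unnest(${params})`;
  return { text, values };
}
```

With a node-postgres-style client this yields, say, `INSERT INTO assets (id, name) SELECT * FROM unnest($1, $2)` with two array values, regardless of how many rows you insert.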
Saw the long passwords are cursed one. Reminded me of ancient DES unix passwords only reading the first eight characters. What's old is new again...
I'm fighting this right now on my own domain. Google marked my family Immich instance as dangerous, essentially blocking access from Chrome to all services hosted on the same domain.
I know that I can bypass the warning, but the photo album I sent to my mother-in-law is now effectively inaccessible.
Unless I missed something in the article this seems like a different issue. The article is specifically about the domain "immich.cloud". If you're using your own domain, I'd check to ensure it hasn't been actually compromised by a bonnet or similar in some way you haven't noticed.
It may well be a false positive of Google's heuristics but home server security can be challenging - I would look at ruling out the possibility of it being real first.
It certainly sounds like a separate root issue to this article, even if the end result looks the same.
*botnet
Just in case you're not sure how to deal with it, you need to request a review via the Google Search Console. You'll need a Google account and you have to verify ownership of the domain via DNS (if you want to appeal the whole domain). After that, you can log into the Google Search Console and you can find "Security Issues" under the "Security & Manual Actions" section.
That area will show you the exact URLs that got you put on the block list. You can request a review from there. They'll send you an email after they review the block.
Hopefully that'll save you from trying to hunt down non-existent malware on a half dozen self-hosted services like I ended up doing.
It's a bit ironic that a user installing Immich to escape Google's grip ends up having to create a Google account again just to get Google's block removed.
Reviews via the Google Search Console are pointless because they won't stop the same automated process from flagging the domain again. Save your time and get your lawyer to draft a friendly letter instead.
Since other browsers, like Firefox, also use the Google Safe Browsing list, they are affected as well.
Just last weekend I was contemplating migrating my family pictures to a self-hosted Immich instance...
I guess a workaround for Google's crap would be to put htpasswd/basic auth in front of Immich, blocking Google from getting to the content and flagging it.
Add a custom "welcome message" in Server Settings (https://my.immich.app/admin/system-settings?isOpen=server) to make your login page look different from all the other default Immich login pages. This is probably the easiest non-intrusive tweak to work around the repeated flagging by Safe Browsing, though still no 100% guarantee. I agree that strict access blocking (with extra auth or an IP ACL) can work better, though I've seen in this thread https://github.com/jellyfin/jellyfin-web/issues/4076#issueco... that such measures work with varying success.
And go through your domain registration/re-review in G Search Console of course.
Immich is a great software package, and I recommend it. Sadly, Google can still flag sites based on domain name patterns, blocking content behind auth or even on your LAN.
That probably wouldn't work, I get hit with Chrome's red screen of annoyance regularly with stuff only reachable on my LAN. I suspect the trigger is that the URLs are like [product name].home.[mydomain.com].
Out of curiosity, is your Immich instance published as https://immich.example.com ?
Yes, it's on the "immich" subdomain. This has crossed my mind as a potential triggering cause, as has the default login page.
Update: my appeal of the false positive has been accepted by Google and my domain is now unblocked.
A friend / client of mine used some kind of WordPress type of hosting service with a simple redirect. The host got on the bad sites list.
This also polluted their own domain, even when the redirect was removed, and had the odd side effect that Google would no longer accept email from them. We requested a review and passed it, but the email blacklist appears to be permanent. (I already checked and there are no spam problems with the domain.)
We registered a new domain. Google’s behaviour here incidentally just incentivises bulk registering throwaway domains, which doesn’t make anything any better.
Wow. That scares me. I've been using my own domain for 25 years; it got (wrongly) blacklisted this week, and I can't imagine having email impacted.
My general policy now is to confine important email to a domain with a very, very basic website, where you rigidly control the hosting and keep only static content.
And avoid using subdomains.
Us nerds *really* need to come together to create a publicly owned browser (non-Chromium).
Surely us devs realize that app stores are increasingly hostile, that the open web is worth fighting for, and that we have the numbers to build solutions?
Uh… we are. Servo and Ladybird. It’s a shit tonne of work.
Firefox should be on that list. It's clearly a lot closer in functionality to Chrome/Chromium than Servo or Ladybird, so it's easier to switch to it. I like that Servo and Ladybird exist and are developing well, but there's no need to pretend that they're the only available alternatives.
> It’s a shit tonne of work.
[Sam didn't like that.]
This is #1 on HN for a while now and I suspect it's because many of us are nervous about it happening to us (or have already had our own homelab domains flagged!).
So is there someone from Google around who can send this along to the right team to ensure whatever heuristic has gone wrong here is fixed for good?
I doubt Google the corporation cares one bit, and any individual employees who do care would likely struggle against the system to cause significant change.
The best we all can do is to stop using Google products and encourage our friends and family to do likewise. Make sure in our own work that we don't force others to rely on Google either.
We really need an internet Bill of Rights. Google has too much power to delete your company from existence with no due process or recourse.
If any company controls some (high) percentage of a particular market, say web browsers, search, or e-commerce, or social media, the public's equal access should start to look more like a right and less like an at-will contract.
30 years ago, if a shop had a falling out with the landlord, it could move to the next building over and resume business. Now if you annoy eBay, Amazon or Walmart, you're locked out nationwide. If you're an Uber, Lyft, or Doordash (etc) gig worker and their bots decide they don't like you anymore, then sayonara sucker! Your account has been disabled, have a nice day and don't reapply.
Our regulatory structure and economies of scale encourage consolidation and scale and grant access to this market to these businesses, but we aren't protecting the now powerless individuals and small businesses who are randomly and needlessly tossed out with nobody to answer their pleas of desperation, no explanation of rules broken, and no opportunity to appeal with transparency.
It's a sorry state of affairs at the moment.
I know someone with a small business who applied for a Venmo Business account (Venmo being the main payment method in their community's industry), and Venmo refused to open the account without giving any reason, saying they have the right to refuse to provide the service, which they do. But all of that business's competitors in the area do have a Venmo account and take payment this way, so it's basically a revenue loss for that person.
It's a bit frustrating when a company becomes a major player in an industry and can pass a life-or-death sentence on other businesses.
There are alternative payment methods, but people are used to paying a certain way in that industry/area; similarly, there are other browsers, but people are used to Chrome.
Same thing with Paypal - I opened a business account, was able to do one transaction and was shut down for fraud. I tested a donation to myself. Under $10. Lifetime ban.
fuck paypal
Force interoperability. In 2009 I could run Pidgin and load messages from AIM, FB Messages, Yahoo... Where did that go?
I suspect the EU will be the first region to push the big tech companies on this.
Or enforce antitrust.
As firearm enthusiasts like to say, "Enforce the laws we already have".
In 2025 you can use Beeper (or run your own local Matrix server with the opensource bridges) and get the same result with WhatsApp, Signal, Telegram, Discord, Google Messages, etc. etc.
The project is still alive and we're trying to finish our next major version to be able to better support modern protocols and features.
We do monthly updates on the status of the project that we call State of the Bird and they can be found here https://discourse.imfreedom.org/tag/state-of-the-bird.
> I suspect the EU will be the first region to push the big tech companies on this.
Supposedly, DMA should enforce this already.
https://www.socialmediatoday.com/news/meta-announces-next-st...
Haven't heard much about it lately though.
Your Pidgin example isn't even real interoperability - you still needed real AIM, FB and Yahoo accounts for that.
> 2009 I could run Pidgin and load messages from AIM, FB Messages, Yahoo... Where did that go?
https://www.youtube.com/watch?v=mBcY3W5WgNU
But seriously; the internet is now overrun with AI Slop, Spam, and automated traffic. To try to do something about it requires curation, somebody needs to decide what is junk, which is completely antithetical to open protocols. This problem is structurally unsolvable, there is no solution, there's either a useless open internet or a useful closed one. The internet is voting with Cloudflare, Discord, Facebook, to be useful, not open. The alternative is trying to figure out how to run a decentralized dictatorship that only allows good things to happen; a delusion.
The only other solution is accountability, a presence tied to your physical identity; so that an attacker cannot just create 100,000 identities from 25,000 IP addresses and smash your small forum with them. That's an even less popular idea, even though it would make open systems actually possible. Building your own search engine or video platform would be super easy, barely an inconvenience. No need for Cloudflare if the police know who every visitor is. No need for a spam filter, if the government can enforce laws perfectly.
Take a look at email, the mother of all open protocols (older than HTTP). What happened? Radical recentralization to companies that had effective spam management, and now we on HN complain we can't break through, someone needs to do something about that centralization, so that we can go back to square one where people get spammed to death again, which will inevitably repeat the discretion required -> who has the best discretion -> flee there cycle. Go figure.
They're too busy trying to strip encryption to do anything
It's almost as if those companies have country-like powers.
Maybe they should be subject to the same limitations, like the First Amendment etc.
The solution is just to enforce the anti-trust act as it is written.
FWIW in some jurisdictions you might be able to sue them for tortious interference, which basically means they went out of their way to hurt your business.
I see a lot of comments here about using some browser that will allow ME to see the sites I want to see, but not a lot about how to protect my site, or my clients' sites, from being subjected to this. Is there anything proactive that can be done? A set of checks, almost like regression testing? I understand it can be a bit like virus authors using antivirus software to test their next virus, but is there a set of best practices that could give you a higher probability of not being blocked?
> how do I protect my site or sites of clients from being subjected to this. Is there anything proactive that can be done?
Some steps to prevent this happening to you:
1. Host only code you own & control on your own domain. Unless...
2. If you have a use-case for allowing arbitrary users to publish & host arbitrary code on a domain you own (or subdomains of it), then ensure that domain is a separate, dedicated one, distinct from the domains you use for your own code, so it can't be confused with your own hosted content.
3. If you're allowing arbitrary members of the public to publish arbitrary code for preview/testing purposes on a domain you own - have the same separation in place for that domain as mentioned above.
4. If you have either of the above two use-cases, publish that separated domain on the Mozilla Public Suffix list https://publicsuffix.org/
That would protect your domains from being poisoned by arbitrary publishing, but wouldn't it risk all your users being affected by one user publishing?
> Is there anything proactive that can be done?
Befriend a lawyer that will agree to send a letter to Google on your behalf in case it happens.
A good takeaway is to separate different domains for different purposes.
I had prior been tossing up the pros/cons of this (such as teaching the user to accept millions of arbitrary TLDs as official), but I think this article (and other considerations) have solidified it for me.
For example
www.contoso.com (public)
www.contoso.blog (public with user comments)
contoso.net (internal)
staging.contoso.dev (dev/zero trust endpoints)
raging-lemur-a012afb4.contoso.build (snapshots)
The biggest con of this is that to a user it will seem much more like phishing.
It happened to me a while ago that I suddenly got emails from "githubnext.com". Well, I know Github and I know that it's hosted at "github.com". So, to me, that was quite obviously phishing/spam.
Turns out it was real...
This is such a difficult problem. You should be able to buy a “season pass” for $500/year or something that stops anyone from registering adjacent TLDs.
And new TLDs are coming out every day which means that I could probably go buy microsoft.anime if I wanted it.
This is what trademarks are supposed to do, but it’s reactive and not proactive.
PayPal is a real star when it comes to vague, fake-sounding, official domains.
Real users don't care much about phishing as long as you got redirected from the main domain, though. github.io has been accepted for a long time, and githubusercontent.com is invisible 99% of the time. Plus, if your regular users are not developers and still end up on your dev/staging domains, they're bound to be confused regardless.
Good
The same thing happened to me earlier this year with a self-hosted instance of Umami Analytics.
https://news.ycombinator.com/item?id=42779544#42783321
Unironically, including a threat of legal action in my appeal on the Google Search Console was what stopped our instance getting flagged in the end.
Could you provide your text? I've been having the same issue for years: https://news.ycombinator.com/item?id=45678095
Maybe a dumb question but what constitutes user-hosted-content?
Is a notion page, github repo, or google doc that has user submitted content that can be publicly shared also user-hosted?
IMO Google should not be able to use definitive language like "Dangerous website" if its automated process is not definitive/accurate. A false positive can erode customer trust.
A website where a user can upload "active code".
The definition of "active code" is broad & sometimes debatable - e.g. do old MySpace websites count - but broadly speaking the best way of thinking about it is in terms of threat model, & the main two there are:
- credential leakage
- phishing
The first is fairly narrow & pertains to uploading server-side code or client JavaScript. If Alice hosts a login page on alice.immich.cloud that contains some session-handling bugs in her code, Mallory can add some code to mallory.immich.cloud that reads cookies set on *.immich.cloud to compromise Alice's logins.
The second is much broader, as it's mostly about plausible visual impersonation, so it also covers cases where users can only upload CSS or HTML.
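The credential-leakage scenario hinges on how cookie Domain attributes match hosts. A simplified sketch of the matching rule (this ignores the Public Suffix List checks real browsers layer on top):

```javascript
// Simplified RFC 6265-style domain matching: a cookie set with
// Domain=.immich.cloud is sent to immich.cloud and every subdomain.
function cookieDomainMatches(cookieDomain, host) {
  const d = cookieDomain.replace(/^\./, '');
  return host === d || host.endsWith('.' + d);
}

// mallory.immich.cloud can therefore set a cookie scoped to
// .immich.cloud that alice.immich.cloud will happily receive,
// unless immich.cloud is on the Public Suffix List.
```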
Specifically in this case, what Immich is doing here is extremely dangerous, & this post from them - while I'll give them the benefit of the doubt and assume ignorance - is misinformation.
It may be dangerous but it is an established pattern. There are many cases (like Cloudflare Pages) of others doing the same, hosting strangers' sites on subdomains of a dedicated domain (pages.dev for Cloudflare, immich.cloud for Immich).
By preventing newcomers from using this pattern, Google's flawed system severely stifles competition.
Of course, this is perfectly fine for Google.
> what Immich is doing here is extremely dangerous
You fully misunderstand what content is hosted on these sites. It's only builds from internal branches by the core team, there is no path for "external user" content to land on this domain.
Looking forward to Louis Rossmann's reaction. Wouldn't be surprised if this leads to a lawsuit over monopolistic behavior - this is clearly abusing their dominant position in the browser space to eliminate competitors in photos sharing.
Who is that and why is his reaction relevant?
He's a right-to-repair activist Youtuber who is quite involved in GrayJay, another app made by this company, which is a video player client for other platforms like YouTube.
I'm not sure why his reaction would be relevant, though. It'll just be another rant about how Google has too much control like he's done in the past. He may be right, but there's nothing new to say.
Seems that Rossmann left FUTO in February and started his own foundation in March.
>> Unfortunately, Google seems to have the ability to arbitrarily flag any domain and make it immediately unaccessible to users. I'm not sure what, if anything, can be done when this happens, except constantly request another review from the all mighty Google.
Perhaps a complaint to the FTC (or DG COMP in the EU) for abusing a monopoly and harming a legitimate business with no due process?
Gathering evidence of harm and seeking alliances with other open-source projects could build momentum.
I write a couple of libraries for creating GOV.UK services and Google has flagged one of them as dangerous. I've appealed the decision several times but it's like screaming into a void.
https://govuk-components.netlify.app/
I use Google Workspace for my company email, so that's the only way for me to get in contact with a human, but they refuse to go off script and won't help me contact the actual department responsible in any way.
It's now on a proper domain, https://govuk-components.x-govuk.org/ - but other than moving, there's still not much anyone can do if they're incorrectly targeted.
Google is not the only one marking subdomains under netlify.app dangerous. For a good reason though, there's a lot of garbage hosted there. Netlify also doesn't do a good enough job of taking down garbage.
Given the scale of Google, and the nerdiness required to run Immich, I bet it's just an accident. Nevertheless, I'm very curious how senior Google staff look at Immich: are they actually registering signals that people use immich-go to empty their Google Photos accounts? Do they see this as something potentially dangerous to their business in the long term?
The nerdsphere has been buzzing with Immich for some time now (I started using it a month back and it lives up to its reputation!), and I assume a lot of Googlers are in that sphere (but not necessarily pro-Google/anti-Immich of course). So I bet they at least know of it. But do they talk about it?
I love Immich but the entire design and interface is so clearly straight up copied from Google photos. It makes me a bit nervous about their exposure, legally.
This seems related to another hosting site that got caught out by this recently:
https://news.ycombinator.com/item?id=45538760
Not quite the same (other than being an abuse of the same monopoly) since this one is explicitly pointing to first-party content, not user content.
I think the other very interesting thing in the reddit thread[0] for this is that if you do well-known-domain.yourdomain.tld then you're likely to get whacked by this too. It makes sense I guess. Lots of people are probably clicking gmail.shady.info and getting phished.
0: https://old.reddit.com/r/immich/comments/1oby8fq/immich_is_a...
So we can't use photos or immich or images or pics as a sub-domain, but anything nondescript will be considered obfuscated and malicious. Awesome!
Can I use this space to comment on how amazing Immich is? I self host lots of stuff, and there’s this one tier above everything else that’s currently, and exclusively, held by Home Assistant and Immich. It is actually _better_ than Google photos (if you keep your db and thumbs on ssd, and run the top model for image search). You give up nothing, and own all your data.
I migrated over from google photos 2 years ago. It has been nothing but amazing. No wonder google has it in its crosshairs.
Don't they block NextCloud sync in the Play Store, for similar reasons?
yeah same, I'm in the process of migrating so I have both google photo and immich, and honestly immich is just as good.
I actually find the semantic search of immich slightly better.
What model do you recommend for image search?
Not OP, but CLIP from OpenAI (2021) seems pretty standard and gives great results, at least in English (not so good in rarer languages).
https://opencv.org/blog/clip/
Essentially, CLIP lets you encode both text and images in the same vector space.
It is really easy and pretty fast to generate embeddings. Took less than an hour on Google Colab.
I made a quick and dirty Flask app that lets me query my own collection of pictures and returns the most relevant ones via cosine similarity.
You can query pretty much anything with CLIP (metaphors, lightning, objects, time, location, etc.).
From what I understand, many photo apps offer CLIP embedding search these days, including Immich - https://meichthys.github.io/foss_photo_libraries/
Alternatives could be something like BLIP.
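As a rough illustration of the retrieval step described above, here's the cosine-similarity ranking on its own, with made-up 3-dimensional vectors standing in for real CLIP embeddings (which are typically 512+ dimensions):

```python
# Toy sketch: once text and images live in the same vector space,
# search is just "rank images by cosine similarity to the query vector".
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Pretend embeddings for three photos (not real CLIP output).
image_embeddings = {
    "beach.jpg":  [0.9, 0.1, 0.0],
    "forest.jpg": [0.1, 0.9, 0.1],
    "city.jpg":   [0.0, 0.2, 0.9],
}
query = [0.8, 0.2, 0.1]  # pretend embedding of "sunny day at the sea"

ranked = sorted(image_embeddings,
                key=lambda k: cosine(query, image_embeddings[k]),
                reverse=True)
print(ranked[0])  # beach.jpg is the closest match
```

In practice you'd precompute and store the image vectors (Immich keeps them in its database) and only embed the text query at search time.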
This is what I use:
ViT-SO400M-16-SigLIP2-384__webli
I think I found it because it was recommended by Immich as the best, but it still only took a day or two to run against my 5 thousand assets. I’ve tested it against whatever Google is using (I keep a part of my library on Google Photos), and it’s far better.
If you block those internal subdomains from search with robots.txt, does Google still whine?
I’ve heard anecdotes of people using an entirely internal domain like “plex.example.com”: even if it's never exposed to the public internet, Google might flag it as impersonating Plex. Google will sometimes block a site based only on its name, if they think the name is impersonating another service.
It's unclear exactly what conditions cause a site to get blocked by Safe Browsing. My nextcloud.something.tld domain has never been flagged, but I've seen support threads of other people having issues, and the domain name is the best guess.
I'm almost positive GMail scanning messages is one cause. My domain got put on the list for a URL that would have been unknowable to anyone but GMail and my sister who I invited to a shared Immich album. It was a URL like this that got emailed directly to 1 person:
https://photos.example.com/albums/xxxxxxxx-xxxx-xxxx-xxxx-xx...
Then suddenly the domain is banned even though there was never a way to discover that URL besides GMail scanning messages. In my case, the server is public so my siblings can access it, but there's nothing stopping Google from banning domains for internal sites that show up in emails they wrongly classify as phishing.
Think of how Google and Microsoft destroyed self hosted email with their spam filters. Now imagine that happening to all self hosted services via abuse of the safe browsing block lists.
Yes, my family Immich instance is blocked from indexing both via headers and robots.txt, yet it's still flagged by Google as dangerous.
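For reference, "blocked from indexing via robots.txt" means something like the following (hypothetical hostname, checked here with Python's stdlib parser). Safe Browsing is a separate system from the search index, which is consistent with the flag persisting despite this:

```python
# A blanket robots.txt disallow, as a well-behaved crawler would parse it.
# This only governs crawling/indexing; it does not opt a site out of
# Safe Browsing classification.
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: *
Disallow: /
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

print(rp.can_fetch("Googlebot", "https://photos.example.com/albums/abc"))  # False
```

The same goes for `X-Robots-Tag: noindex` headers: they keep pages out of search results, not off block lists.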
I'm kind of curious, do you have your own domain for immich or is this part of a malware-flagged subdomain issue? It's kind of wild to me that Google would flag all instances of a particular piece of self-hosted software as malicious.
Tangential to the flagging issue, but is there any documentation on how Immich is doing the PR site generation feature? That seems pretty cool, and I'd be curious to learn more.
It's open source, you can find this trivially yourself in less than a minute.
https://github.com/immich-app/devtools/tree/a9257b33b5fb2d30...
If anyone's got questions about this setup I'd be happy to chat about it!
Wow. What a rude way to answer.
Pretty sure Immich is on github, so I assume they have a workflow for it, but in case you're interested in this concept in general, gitlab has first-class support for this which I've been using for years: https://docs.gitlab.com/ci/review_apps/ . Very cool and handy stuff.
I’m also self-hosting Gitea and Portainer and I’m hitting this issue every few weeks. I appeal, they remove the warning, and after a week it's back. This has been ongoing for at least 4 years. I have more than 20 appeals, all successfully removing the warning. Ridiculous. I heard legal action is the best option now; any other ideas?
This happened to one of our documentation sites. My co-workers all saw it before I did, because Brave (my daily driver) wasn't showing it. I'm not sure if Brave is more relaxed in determining when a site is "dangerous" but I was glad not to be seeing it, because it was a false positive.
Safe Browsing collects a lot of data, such as hashes of URLs (URLs can be easily decoded by comparison) and probably other interactions with web like downloads.
But how effective is it in malware detection?
The benefits seem to me dubious. It looks like a feature offered to collect browsing data, useful to maybe 1% in special situations.
It's the only thing that has reasonable coverage to effectively block a phishing attack or malware distribution. It can certainly do other things like collecting browsing data, but it does get rid of long-lasting persistent garbage hosted at some bulletproof hosts.
100% agreed. Adblock does this better and doesn’t randomly block image sharing websites
Ran a clickbait site, and got flagged for using a bunch of 302 redirects instead of 301s. Went from almost 500k uniques a month to 1k.
During the appeal it was reviewed from India, and I had been using geoblocking. This caused my appeal to be denied.
I ended up deploying to a new domain and starting over.
Never caught back up.
Congrats on this great choice of business endeavor
I hear this a lot. I'm self-taught and had just been let go from my first dev job ever. I was broke and desperate so I built something.
I have been ashamed of it multiple times in my career, but tbh I built something that fed, sheltered, and clothed me. It was worth it.
I also learned a lot.
I wonder when google.com will be flagged with all the phishing happening on sites.google.com.
Not to mention the phishing in the sponsored results on google.com proper.
Regarding how Google safe browsing actually works under the hood, here is a good writeup from Chromium team:
https://blog.chromium.org/2021/07/m92-faster-and-more-effici...
Not sure if this is exactly the scenario from the discussed article but it's interesting to understand it nonetheless.
TL;DR the browser regularly downloads a dump of color profile fingerprints of known bad websites. Then when you load whatever website, it calculates the color profile fingerprint of it as well, and looks for matches.
(This could be outdated and there are probably many other signals.)
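Alongside the visual fingerprinting described in that post, the classic Safe Browsing mechanism is hash-prefix matching, which can be sketched roughly like this (simplified; the real protocol canonicalizes URLs into multiple expressions and uses server-defined list formats):

```python
# Rough sketch of the Safe Browsing "Update API" idea: the browser keeps
# only short SHA-256 prefixes of known-bad URL expressions and does a
# cheap local check. A prefix hit means "ask the server for full hashes",
# not "block immediately" - that keeps browsing history private-ish.
import hashlib

def url_prefix(url_expression: str, n: int = 4) -> bytes:
    """First n bytes of the SHA-256 of a (pre-canonicalized) URL expression."""
    return hashlib.sha256(url_expression.encode()).digest()[:n]

# Pretend this prefix set was downloaded as part of the regular list update.
bad_prefixes = {url_prefix("evil.example/phish")}

def locally_suspicious(url_expression: str) -> bool:
    return url_prefix(url_expression) in bad_prefixes

print(locally_suspicious("evil.example/phish"))  # True
print(locally_suspicious("immich.app/"))         # False
```

The privacy benefit is that the server only ever sees 4-byte prefixes for the rare candidate hits, never the full browsing stream.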
I can't imagine that lasted more than 30 seconds after they made a public blog post about how they were doing it.
I'm sure it was a simple mistake. The fact that Immich competes with Google Photos has nothing to do with it.
Them maintaining a page of gotchas is a really cool idea - https://immich.app/cursed-knowledge
> There is a user in the JavaScript community who goes around adding "backwards compatibility" to projects. They do this by adding 50 extra package dependencies to your project, which are maintained by them.
This is a spicy one, would love to know more.
It links to a commit; the removed deps are by GitHub user ljharb.
I had this same problem with my self-hosted Home Assistant deployment, where Google marked the entire domain as phishing because it contains a login page that looks like other self-hosted Home Assistant deployments.
Fortunately, I expose it to the internet on its own domain despite running through the same reverse proxy as other projects. It would have sucked if this had happened to a domain used for anything else, since the appeal process is completely opaque.
Google often marks my homelab domains as dangerous, even though they all point to an A record in the private IP space, completely inaccessible from the internet.
Makes precisely zero sense.
This is crazy; it happened to the SOGo webmailer as well, standalone or bundled with the mailcow: dockerized stack. They implemented a slight workaround where URLs are encrypted to avoid pattern detection flagging it as "deceiving".
There has been no response from Google about this. I had my instance flagged 3 times on 2 different domains, including all subdomains, displaying a nice red banner on a representative business website. Cool stuff!
This can happen to everyone. It happened to Amazon.de's Cloudfront endpoint a week ago. Most people didn't notice because Chrome doesn't look at the intermediate bits in the resolver chain, but DNS providers using Safe Browsing blocked it.
https://github.com/nextdns/metadata/issues/1425
They have to fix their SSL certs. "Kubernetes Ingress Controller Fake Certificate" ain't gonna cut it.
Sounds like you're hitting an address that isn't backed by any service, not sure what the issue is.
The .internal.immich.cloud sites do not have matching certs!
Navigating to https://main.preview.internal.immich.cloud, I'm right away informed by the browser that the connection is not secure due to an issue with the certificate. The problem is that it has the following CN (common name): main.preview.internal.immich.build. The list of alternative names also contains that same domain name. It does not match the site: the certificate's TLD .build is different from the site's .cloud!
I don't see the same problem on external sites like tiles.immich.cloud. That has a CN=immich.cloud with tiles.immich.cloud as an alternative.
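The hostname check a browser performs against a certificate can be sketched as below (a toy version; the real rules are in RFC 6125 and handle IDNs, IP addresses, and wildcard restrictions this ignores). The mismatch described above falls out immediately:

```python
# Toy hostname-vs-SAN matching: a cert issued for *.immich.build (or a
# specific .build name) can never validate a .cloud hostname.
def hostname_matches(hostname, san_names):
    hostname = hostname.lower()
    for name in san_names:
        name = name.lower()
        if name.startswith("*."):
            # A wildcard covers exactly one left-most label.
            if "." in hostname and hostname.split(".", 1)[1] == name[2:]:
                return True
        elif hostname == name:
            return True
    return False

san = ["main.preview.internal.immich.build"]
print(hostname_matches("main.preview.internal.immich.cloud", san))  # False: TLD differs
print(hostname_matches("main.preview.internal.immich.build", san))  # True

# The working external site: CN=immich.cloud with a SAN for the subdomain.
print(hostname_matches("tiles.immich.cloud", ["immich.cloud", "tiles.immich.cloud"]))  # True
```

Note that modern browsers ignore the CN entirely and match only against the SAN list.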
We've already moved them to immich.build
"might trick you into installing unsafe software"
Something Google actively facilitates with the ads they serve.
When power is concentrated in one pair of hands, those hands will always become the hands of a dictator.
> YAML whitespace is cursed
YAML itself is cursed: https://ruudvanasseldonk.com/2023/01/11/the-yaml-document-fr...
First thing I do when I start to use a browser for the first time is making sure 'Google Safe Browsing' feature is disabled. I don't need yet another annoyance while I browse the web, especially when it's from Google.
This happened to me, I hosted a Wordpress site and it got 0'day'd (this was probably 8 years ago). Google spotted the list of insane pornographic URLs and banned it. You might want to verify nothing is compromised.
Yes, this is not a new problem: web browsers have taken on the role of internet police, but they only care about their own judgement and don't afford website operators any due process or recourse. And by web browsers I mean Google, because of course everyone just defers to them. "File a complaint with /dev/null" might be how Google operates their own properties, but this should not be acceptable for the web as a whole. Google and those integrating their "solutions" need to be held accountable for the damage they cause.
google: we make going to the DMV look delightful by comparison!
They are not the government and should not have this vast monopoly power with no accountability and no customer service.
the government probably shouldn't either?
Honestly, where do people live that the DMV (or equivalent - in some states it is split or otherwise named) is a pain? Every time I've ever been it has been "show up, take a number, wait 5 minutes, get served" - and that's assuming website self-service doesn't suffice.
> The most alarming thing was realizing that a single flagged subdomain would apparently invalidate the entire domain.
Correct. It works this way because, in general, the domain has rights over routing for all of its subdomains. Which means if you were a spammer, and doing something untoward on a subdomain only invalidated that subdomain, it would be the easiest game in the world to play.
malware1.malicious.com
malware2.malicious.com
... Etc.
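This is why reputation systems collapse hostnames to the registrable domain (eTLD+1) before scoring them. A toy sketch, using a tiny hand-written suffix set as a stand-in for the real Public Suffix List:

```python
# Collapse any hostname to its registrable domain: all of the spammer's
# rotating subdomains map to one reputation entry. SUFFIXES is a tiny
# stand-in for the actual Public Suffix List (https://publicsuffix.org/).
SUFFIXES = {"com", "org", "co.uk"}

def registrable_domain(hostname: str) -> str:
    labels = hostname.lower().split(".")
    # Find the shortest tail that is a public suffix, then keep one more label.
    for i in range(len(labels)):
        if ".".join(labels[i:]) in SUFFIXES:
            return ".".join(labels[max(i - 1, 0):])
    return hostname

print(registrable_domain("malware1.malicious.com"))  # malicious.com
print(registrable_domain("malware2.malicious.com"))  # malicious.com
print(registrable_domain("shop.example.co.uk"))      # example.co.uk
```

This is also why being on the Public Suffix List matters for hosts of user content: it moves the reputation boundary down one level, so alice.pages.dev and mallory.pages.dev are scored separately.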
I’d say this is a clear slight from Google, using their Chrome browser because something or someone is inconveniencing another part of their business, Google Cloud / Google Photos.
They did a similar thing with the uBlock Origin extension, flagging it with “this extension might be slowing down your browser” in a big red banner during the last few months of Manifest V2 on Chrome - after you already had to sideload the extension into Chrome yourself, because they took it off the extension store for inhibiting their ad business.
Google is a massive monopolistic company who will pull strings on one side of their business to help another.
With Firefox being the only browser not based on Chromium and still supporting Manifest V2, the future (5 to 10 years from now) looks bleak. With only 1 browser like this, web devs can phase it out slowly by not taking it into consideration when coding, or Firefox could enshittify to such an extent because of its Manifest V2 monopoly that even that won't make it worth it anymore.
Oh, and for the ones not in the know: the manifest (a manifest.json file) declares what browser extensions can and can't do, and the “upgrade” from Manifest V2 to V3 has made it near impossible for ad blockers to block ads effectively.
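For the curious, the practical difference can be sketched in one (abbreviated, hypothetical) fragment: under Manifest V2 an extension could register a blocking `webRequest` listener and decide in arbitrary JavaScript whether to cancel each request, whereas Manifest V3 requires ad-blocking rules to be declared up front via `declarativeNetRequest`, e.g.:

```json
{
  "id": 1,
  "priority": 1,
  "action": { "type": "block" },
  "condition": {
    "urlFilter": "||ads.example.com^",
    "resourceTypes": ["script", "image"]
  }
}
```

Declared rules are capped in number and can't run custom logic per request, which is why blockers built on huge, frequently updated filter lists lose power under V3.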
A class action lawsuit, charging anticompetitive behavior, on behalf of all Immich site operators could be a good idea here.
Of course Google will claim it's just a mistake, but large tech companies - including e.g. Microsoft - have indulged in such behavior before. A lawsuit will allow for discovery which can help settle the matter, and may also encourage Google to behave like a good citizen.
A similar issue happened to us at APKMirror last week. https://x.com/ArtemR/status/1979428936267501626.
We still don't know what caused it because it happened to the Cloudflare R2 subdomain, and none of the Search Console verification methods work with R2. It also means it's impossible to request verification.
There's a reason GitHub use github.io for user content.
They're using a different TLD (.cloud / .app). But IIRC, GitHub moved to github.io to avoid cookies leaking to user-created JS running on their main domain.
I am confused whether the term "self-hosted" means the same thing to them as it does to me; not sure I'm following.
Curious if anyone had an instance where this blocking mechanism saved them. I can't remember a single one in the last 10 years.
I've had it work for me several times. Most of the time following links/redirects from search engines, ironically a few times from Google itself. Not that I was going to enter anything (the phishing attempts themselves were quite amateurish) but they do help in some rare cases.
When I worked customer service, these phishing blocks worked wonders preventing people from logging in to your-secure-webmail.jobz. People would be filling in phishing forms days after sending out warnings on all official channels. Once Google's algorithm kicked in, the attackers finally needed to switch domains and re-do their phishing attempts.
Your parents probably have
This has been a known thing for quite some time, and the only solution is to use a separate domain. This problem has existed for so long that at this point we as users adapt to it rather than still expecting Google to fix it.
From their perspective, a few false positives over the total number of actual malicious websites blocked is fractional.
I tried to submit this, but the direct link here is probably better than the Reddit thread I linked to:
https://old.reddit.com/r/immich/comments/1oby8fq/immich_is_a...
I had my personal domain I use for self-hosting flagged. I've had the domain for 25 years and it's never had a hint of spam, phishing, or even unintentional issues like compromised sites / services.
It's impossible to know what Google's black box is doing, but, in my case, I suspect my flagging was the result of failing to use a large email provider. I use MXRoute for locally hosted services and network devices because they do a better job of giving me simple, hard limits for sending accounts. That way if anything I have ever gets compromised, the damage in terms of spam will be limited to (ex) 10 messages every 24h.
I invited my sister to a shared Immich album a couple days ago, so I'm guessing that GMail scanned the email notifying her, used the contents + some kind of not-google-or-microsoft sender penalty, and flagged the message as potential spam or phishing. From there, I'd assume the linked domain gets pushed into another system that eventually decides they should blacklist the whole domain.
The thing that really pisses me off is that I just received an email in reply to my request for review, and the whole thing is a gaslighting extravaganza: "Google systems indicate your domain no longer contains harmful links or downloads. Keep yourself safe in the future by blah blah blah blah."
Umm. No! It's actually Google's crappy, non-deterministic, careless detection that's flagging my legitimate resources as malicious. Then I have to spend my time running it down and double checking everything before submitting a request to have the false positive mistake on Google's end fixed.
Convince me that Google won't abuse this to make self hosting unbearable.
> I suspect my flagging was the result of failing to use a large email provider.
This seems like the flagging was a result of the same login page detection that the Immich blog post is referencing? What makes you think it's tied to self-hosted email?
I'm not using self hosted email. My theory is that Google treats smaller mail providers as less trustworthy and that increases the odds of having messages flagged for phishing.
In my case, the Google Search Console explicitly listed the exact URL for a newly created shared album as the cause.
https://photos.example.com/albums/xxxxxxxx-xxxx-xxxx-xxxx-xx...
I wish I would have taken a screenshot. That URL is not going to be guessed randomly and the URL was only transmitted once to one person via e-mail. The sending was done via MXRoute and the recipient was using GMail (legacy Workspace).
The only possible way for Google to have gotten that URL to start the process would have been by scanning the recipient's e-mail. What I was trying to say is that the only way it makes sense to me is if Google via GMail categorized that email as phishing and that kicked off the process to add my domain to the block list.
So, if email categorization / filtering is being used as a heuristic for discovering URLs for the block list, it's possible Google's discriminating against domains that use smaller email hosts that Google doesn't trust as much as themselves, Microsoft, etc..
All around it sucks and Google shouldn't be allowed to use non-deterministic guesswork to put domains on a block list that has a significant negative impact. If they want to operate a clown show like that, they should at least be liable for the outcomes IMO.
I'm in a similar boat. Google's false flag is causing issues for my family members who use Chrome, even for internal services that aren't publicly exposed, just because they're on related subdomains.
It's scary how much control Google has over which content people can access on the web - or even on their local network!
It's a good opportunity to recommend Firefox when you can show a clear abuse of position
Wonder if there would be any way to redress this in small claims court.
This is another case where it's highly important to "plant your flag" [1] and set up all those services like Search Console, even if you don't plan to use them. Not only can this sort of thing happen, but bad-guys can find crafty ways of hijacking your search console account if you're not super vigilant.
Google Postmaster Console [2] is another one everybody should set up on every domain, even if you don't use gmail. And Google Ads, even if you don't run ads.
I also recommend that people set up Bing search console [3] and some service to monitor DMARC reports.
It's unfortunate that so much of the internet has coalesced around a few private companies, but it's undeniably important to "keep them happy" to make sure your domain's reputation isn't randomly ruined.
[1] https://krebsonsecurity.com/2020/08/why-where-you-should-you...
[2] https://postmaster.google.com/
[3] https://www.bing.com/webmasters/
It does seem kind of stupid to (apparently) not have google search console, or even a google account according to them, for your business. I don't like Google being in control of so much of the internet - but they are, and it won't do us any good to shout into the void about it when our domain and livelihood is on the line.
I have no idea what immich is or what this post says, but I LOVE that this company has a collection of posts called, “Cursed Knowledge.”
There is no reason why a browser should __be__ a content filter.
Instead, you should be able to install a preferred content filter into your browser.
Is there any linkage to the semifactoid that the Immich web GUI looks very like Google Photos, or is that just one of those coincidences?
Not a coincidence, Immich was started as a personal replacement for Google Photos.
The coincidence here would be google flagging it as malware, not the origin story of the look and feel.
Simply opening a case saying that this is our website not impersonating anyone else is unlikely to get anything resolved.
Just because it's your website and you're not a bad agent doesn't prove that no part of the site is under the control of a bad agent, that your site isn't accidentally hosting something malicious somewhere, or that it doesn't have some UI that is exploitable for cross-site scripting or whatever.
Sure, but why does Google approve our review over and over again without us making any changes or modifications to the flagged sites/urls? It's a vanilla Immich deployment with docker containers from GitHub pushed there by the core team.
I don't think I ever saw a legitimate warning, EVER. I push past SSL warnings EVERY DAY to manage infra.
This happened to amazon.de last week. It was resolved quickly.
Google shouldn’t be a single chokepoint for web censorship.
My local SABNZBD instance (not even accessible from the internet) was marked as a malicious site too.
I believe that Jellyfin, Immich, and NextCloud login pages are automatically flagged as dangerous by Google. What's more, I suspect that Google is somehow collecting data from its browser, Chrome.
Google flagged my domain as dangerous once. I do host Jellyfin, Immich, and NextCloud. I run an IP whitelist on the router; all packets from IPs that are not whitelisted are dropped. There are no links to my domain on the internet. At any time, there are 2-3 IPs belonging to me and my family that can load the website. I never whitelisted Google IPs.
How on earth did Google manage to determine that my domain is dangerous?
I’m launching a web version of an online game. What can I do to prevent this from happening?
Install your non-self generated SSL certificate correctly, and make sure users can't upload arbitrary content to your domain.
F you, Google! Thank goodness I severed that relationship years ago. With so many other great (and ethically superior) products out there to choose from, you'd have to be a true masochist to intentionally throw yourself into their pool of shit.
Either they have an open redirect being misused, or their domains are being used to host phish content.
This is the way of things.
I don't want Google to abuse the world wide web. It is time for real change - a world without Google. A world with less Evil.
I realize now that Gigglebet is purposely fucking up Internet for everyone, and is paying unsuspecting chumps princely sums to do so. To kill the thing they say they love.
Chrome is to Web what Teams is to Chat. Bad job guys.
I've rarely seen a HN comment section this overwhelmingly wrong on a technical topic. This community is usually better than this.
Google is an evil company I want the web to be free of, I resent that even Firefox & Safari use this safe browsing service. Immich is a phenomenal piece of software - I've hosted it myself & sung its praises on HN in the past.
But putting aside David vs Goliath biases here, Google is 100% correct here & what Immich is doing is extremely dangerous. The fact that they don't acknowledge that in the blog post shows a security knowledge gap that I'm really hoping is closed over the course of remediating this.
I don't think the Immich team mean any harm but as it currently stands the OP constitutes misinformation.
> what Immich are doing is extremely dangerous
I've read the article and don't see anything dangerous, much less extremely so. Care to explain?
They're auto-deploying PRs to a subdomain of a domain that they also use for production traffic. This allows any member of the public with a GitHub account to deploy any arbitrary code to that subdomain without any review or approval from the Immich team. That's bad for two reasons:
1. PR deploys on public repos are inherently tricky as code gains access to the server environment, so you need to be diligent about segregating secrets for pr deployments from production secret management. That diligence is a complex & continuous undertaking, especially for an open source project.
2. Anyone with a GitHub account can use your domain for phishing scams or impersonation.
The second issue is why they're flagged by Google (the first issue may be a higher risk to the Immich project, but it's out of scope for Google's Safe Browsing service).
To be clear: this isn't about people running their own immich instance. This is about members of the public having the ability to deploy arbitrary code without review.
---
The article from the Immich team does mention they're switching to using a non-production domain (immich.build) for their PR builds which does indicate to me they somewhat understand the issue (though they've explained it badly in the article), but they don't seem to understand the significance or scope.
This just makes me feel more loyalty towards Immich and disgust towards Google Photos.
At this point I would rather use an analog camera with photo albums than Google Photos.
What scumbags they are, honestly.
And yet if you start typing 192 in chrome, first suggested url is 192.168.l00.1
If there are any googlers here, I'd like to report an even more dangerous website. As much as 30-50% of the traffic to it relates to malware or scams, and it has gone unpunished for a very long time.
The address appears to be adsense.google.com.
Also YouTube.com serves a lot of scam advertisements. They should block that too.
I think Google is crumbling under the weight of its own size. They are no longer able to review submitted ads with due diligence.
What I really don't understand, at least here in Europe: the advertising partner (AdSense) must investigate at least minimally whether the advertising is illegal or fraudulent. I understand that sites.google.com etc. are under "safe harbor", but that's not the point with AdSense, since people from Google "click" the publish button and also get money for publishing that ad.
I have reported over a dozen ads to AdSense (Europe) because they were outright scams (e.g. on weather apps, an AdSense banner claiming "There is a new upgrade to this program, click here to download it"). Google has invariably closed my reports, claiming they do not find any violation of the AdSense policies.
The law is only for plebs like you and me. Companies get a pass.
I'm still amazed how deploying spyware would've rightfully landed you in jail a couple decades back, but do the same thing on the web under the justification of advertising/marketing and suddenly it's ok.
sites.google.com
The same outfit is running a domain called Blogger.
Reminds me of MS blocking a website of mine for a dangerous script. The offending thing I did was use document.write to put "copyright 2025" (with the current year) at the end of static pages.
sites.google.com is widely abused, but so is practically any site which allows users to host content of their choice and make it publicly available. Where Google is different is that they famously refuse to do work which they cannot automate, and apparently they cannot (or don't want to) automate the detection/blocking of spam/phishing hosted on sites.google.com and the processing of abuse reports.
The nerve of letting everyone run a phishing campaign on sites.google.com but marking a perfectly safe website as malicious.
Enshitification ensues.
Yeah - that website keeps on spamming me down with useless stuff.
I was able to block most of this via uBlock Origin, but Google disabled it - it can no longer be downloaded from here:
https://chromewebstore.google.com/detail/ublock-origin/cjpal...
Funniest nonsense "explanation":
"This extension is no longer available because it doesn't follow best practices for Chrome extensions."
In reality Google killed it because it threatens their ad revenue. Ads, ads and more ads.
Apparently the "best practice" is using Manifest V3 instead of V2.
From reading a bit online (I have no personal/deep knowledge), it seems the original extension also downloaded updates from a private server (the developer's), which is no longer allowed - extensions now have to update through the Chrome Web Store, which also means waiting for code review/approval from Google.
I can see the security angle there; it's just awkward how much of a vested interest Google has in the whole topic. Ad blocking is already a legal grey area, and there is a cat-and-mouse game between blockers and advertisers; it's hard to believe only security best practice is at work here.
You know what? I don't even mind them killing it - of course there's a whole pile of items under the antitrust label that Google is doing, so why not one more. What I do take issue with is the gaslighting: their attempt to make users believe this is in the users' interest rather than Google's.
If we had functional antitrust laws, this company would have been broken up long ago, Alphabet or not. But they keep doing these things because we - collectively - let them.
2 replies →
DNS-level blockers like NextDNS are much easier to use and work for the entire device.
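The core of a DNS-level blocker is simple: the resolver checks each queried hostname, and every parent domain of it, against a blocklist, and sinkholes matches instead of resolving them. A minimal sketch of that matching logic (the blocklist entries and hostnames here are made up for illustration):

```javascript
// Hypothetical blocklist, as a DNS sinkhole such as NextDNS or
// Pi-hole might maintain. Entries are registrable domains.
const blocklist = new Set(["doubleclick.net", "ads.example"]);

// A query for any subdomain of a listed domain is also blocked,
// so walk the label chain from the full name up toward the TLD.
function isBlocked(hostname) {
  const labels = hostname.toLowerCase().split(".");
  // Stop before the bare TLD: never match "net" or "com" alone.
  for (let i = 0; i < labels.length - 1; i++) {
    if (blocklist.has(labels.slice(i).join("."))) return true;
  }
  return false;
}

console.log(isBlocked("stats.doubleclick.net")); // true: parent is listed
console.log(isBlocked("example.org"));           // false: not on the list
```

Because the decision happens at name resolution, it covers every app on the device, not just one browser - which is why it survives browser-extension policy changes like Manifest V3.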
[flagged]
9 replies →
Yes, the irony of Google flagging other sites as malware is not lost on me.
[dead]
[dead]
[flagged]
[flagged]
I heard the CEO has a Hitler bedspread and Mussolini tattoo on the far-right of his right buttock.
[flagged]
As someone who doesn't like Google and absolutely thinks they need to be broken up, no probably not. Google's algorithms around security are so incompetent and useless that stupidity is far more likely than malice here.
Callous disregard for the wellbeing of others is not stupidity, especially when demonstrated by a company ostensibly full of very intelligent people. This behavior - in particular, implementing an overly eager mechanism for damaging the reputation of other people - is simply malicious.
Incompetently or "coincidentally" abusing your monopoly in a way that "happens" to suppress competitors (while whitelisting your own sites) probably won't fly in court. Unless you buy the judge of course.
Intent does not always matter to the law ... and if a C&D is sent, doesn't that imply that intent is subsequently present?
Defamation laws could also apply independently of monopoly laws.
I don't see the issue here. To me, this setup does seem at least confusing, and possibly dangerous.
If you have internal auth testing domains in the same place as user-generated content, what's to stop somebody from mistaking a user-generated page for a legit one when it asks them to log in?
To me this seems like a reasonable flag.
There is no user generated content involved here.