Google blocks Immich site

It’s a scenario that every website owner, developer, and administrator dreads: your site, running perfectly one minute, is suddenly inaccessible to the world. Users are met not with your content, but with a terrifying, full-page warning. The natural assumption is a server crash, a DDoS attack, or a malicious hack. But what if the culprit isn’t an outside attacker, but a silent, automated decision from one of the internet’s most powerful gatekeepers?
This is the exact situation the team behind Immich, a self-hosted photo and video backup solution, recently faced. Their entire domain was suddenly flagged as “dangerous,” effectively wiping them off the map for most users. Their frustrating journey to uncover the cause reveals a dangerous blind spot in the automated systems that act as the web’s self-appointed sheriffs—a blind spot that could affect anyone.
Takeaway: An Opaque Gatekeeper Can Erase You from the Web
A Single Flag Can Make You Disappear
Google Safe Browsing is a free service integrated directly into major browsers like Chrome and Firefox. Its goal is to protect users by identifying and blocking sites that host malware or unwanted software, or that engage in “social engineering.” When a site is flagged, visitors are confronted with a bright red warning screen. For the Immich team, this frustrating experience was an unwelcome addition to their “list of Cursed Knowledge.”
The impact is immediate and devastating. As they discovered, “your site essentially becomes unavailable for all users.” Only a small fraction of tech-savvy users will dare to click through the multiple warnings to reach the “unsafe site.” For the rest of your audience, your website simply ceases to exist. Complicating matters further, the process by which a site is deemed “dangerous” is not transparent, making the system a powerful but opaque arbiter of a site’s accessibility.
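For operators who want to confirm a flag directly rather than infer it from the warning screen, Google does expose a Safe Browsing Lookup API that reports whether a URL currently appears on its threat lists. The Python sketch below shows a minimal check against the v4 threatMatches:find endpoint; the SAFE_BROWSING_API_KEY environment variable and the client identifier are placeholders of my choosing, and nothing here is something the Immich team describes using.

```python
import os
import requests

API_KEY = os.environ["SAFE_BROWSING_API_KEY"]  # assumes a Google API key with Safe Browsing enabled
LOOKUP_URL = "https://safebrowsing.googleapis.com/v4/threatMatches:find"

def check_url(url: str) -> list:
    """Return any Safe Browsing threat matches for a URL (empty list = not currently listed)."""
    body = {
        "client": {"clientId": "example-monitor", "clientVersion": "1.0"},  # arbitrary client name
        "threatInfo": {
            "threatTypes": [
                "MALWARE",
                "SOCIAL_ENGINEERING",
                "UNWANTED_SOFTWARE",
                "POTENTIALLY_HARMFUL_APPLICATION",
            ],
            "platformTypes": ["ANY_PLATFORM"],
            "threatEntryTypes": ["URL"],
            "threatEntries": [{"url": url}],
        },
    }
    resp = requests.post(LOOKUP_URL, params={"key": API_KEY}, json=body, timeout=10)
    resp.raise_for_status()
    # The API returns an empty JSON object when nothing is listed.
    return resp.json().get("matches", [])

if __name__ == "__main__":
    for match in check_url("https://immich.cloud/"):
        print(match["threatType"], match["threat"]["url"])
```

Note that an empty result only means the URL is not listed right now; the API says nothing about why a listing appeared or when one might.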
Takeaway: The Alarming Domino Effect of a Single Subdomain
One “Bad” Internal Link Can Take Down Your Entire Domain
After digging into Google Search Console, the Immich team found the source of the flag: their own internal, non-public preview environments. These were temporary sites automatically generated for development purposes, with URLs like main.preview.internal.immich.cloud. Google’s automated systems had apparently crawled these environments and concluded they were “deceptive.”
But the most critical and counter-intuitive discovery was the collateral damage. A flag on a single, internal-facing subdomain was not isolated. Instead, Google’s system applied the “dangerous” label to the entire immich.cloud domain, taking down their production services and informational pages. The flag even extended to their production tile server at tiles.immich.cloud; luckily, as the team noted, requests to that server are made via JavaScript rather than through user-facing pages, so it appeared to keep working as expected.
The most alarming thing was realizing that a single flagged subdomain would apparently invalidate the entire domain.
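The Immich post doesn’t say what on those preview hosts triggered the “deceptive” verdict, but one common way to shrink the crawl surface of non-public environments is to mark them as off-limits to indexing. The Flask sketch below is purely illustrative: the hostname convention is lifted from the URLs above, the mitigation is a general one rather than something Immich reports doing, and there is no guarantee Safe Browsing’s classifiers respect robots directives.

```python
# Sketch: refuse indexing on preview hostnames so crawlers have less to evaluate.
# Assumes previews live under *.preview.internal.immich.cloud and are served (or fronted)
# by something that can inspect the Host header; Flask is used here purely for illustration.
from flask import Flask, Response, request

app = Flask(__name__)
PREVIEW_SUFFIX = ".preview.internal.immich.cloud"  # hostname convention from the incident

def is_preview_host(host: str) -> bool:
    return host.split(":")[0].endswith(PREVIEW_SUFFIX)

@app.route("/robots.txt")
def robots():
    # Previews tell well-behaved crawlers to stay out entirely; production allows crawling.
    body = "User-agent: *\nDisallow: /\n" if is_preview_host(request.host) else "User-agent: *\nAllow: /\n"
    return Response(body, mimetype="text/plain")

@app.after_request
def add_noindex_header(resp):
    # X-Robots-Tag covers crawlers that fetch pages even though robots.txt is only advisory.
    if is_preview_host(request.host):
        resp.headers["X-Robots-Tag"] = "noindex, nofollow"
    return resp

@app.route("/")
def index():
    return "preview or production content"
```

Since robots.txt is advisory, putting authentication in front of preview environments would be a stronger form of quarantine than either directive.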
Takeaway: The “Fix” Is a Frustrating, Endless Loop
Getting Un-Flagged Can Be a Sisyphean Task
The official process for recourse is to create a Google account, register your site with Google Search Console, and request a review to plead your case. The Immich team did just that, explaining that the flagged sites were their own deployments. A day or two later, the review was accepted and the domain was clean again! 🎉
The victory, however, was fleeting. The team soon discovered they were trapped in an algorithmic purgatory. Their development process involves creating new preview environments for pull requests on GitHub. As soon as a new preview URL was posted in a comment, Google’s crawlers would find it, crawl the site, and immediately re-flag the entire immich.cloud domain as dangerous. The whole process would begin anew, turning the “fix” into a futile game of digital whack-a-mole.
Faced with this endless cycle, the team devised a workaround: minimize the blast radius by moving all preview environments to their own dedicated domain, immich.build, effectively quarantining their development process from their production services and avoiding future domain-wide takedowns.
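Given how quickly the flag kept coming back, a team in this situation might also prefer to learn about a re-listing from monitoring rather than from user reports. The sketch below polls the same v4 Lookup API for a small watchlist spanning the production domain and the new preview domain; the watchlist, environment variables, webhook, and polling interval are all hypothetical choices for illustration.

```python
import os
import time
import requests

API_KEY = os.environ["SAFE_BROWSING_API_KEY"]    # hypothetical: Google API key with Safe Browsing enabled
ALERT_WEBHOOK = os.environ["ALERT_WEBHOOK_URL"]  # hypothetical: Slack/Discord-style incoming webhook
WATCHED_URLS = [
    "https://immich.cloud/",        # production domain that was taken down by the flag
    "https://tiles.immich.cloud/",  # production tile server mentioned in the post
    "https://immich.build/",        # new, quarantined preview domain
]

def threat_matches(urls):
    """Ask the Safe Browsing Lookup API (v4) whether any watched URL is currently listed."""
    body = {
        "client": {"clientId": "flag-watch", "clientVersion": "1.0"},
        "threatInfo": {
            "threatTypes": ["MALWARE", "SOCIAL_ENGINEERING", "UNWANTED_SOFTWARE"],
            "platformTypes": ["ANY_PLATFORM"],
            "threatEntryTypes": ["URL"],
            "threatEntries": [{"url": u} for u in urls],
        },
    }
    resp = requests.post(
        "https://safebrowsing.googleapis.com/v4/threatMatches:find",
        params={"key": API_KEY}, json=body, timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("matches", [])

if __name__ == "__main__":
    while True:
        for match in threat_matches(WATCHED_URLS):
            # Post a simple alert so a re-flag is noticed before users start reporting it.
            requests.post(ALERT_WEBHOOK, json={
                "text": f"Safe Browsing flagged {match['threat']['url']} as {match['threatType']}"
            }, timeout=10)
        time.sleep(30 * 60)  # poll gently; the Lookup API is rate-limited
```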
Takeaway: The System Seems Blind to Open-Source Development
This Isn’t Just One Project’s Problem; It’s a Threat to Open-Source
This experience is not unique to Immich. It highlights a wider, systemic issue that poses a significant threat to open-source and self-hosted projects. Many other popular projects, including Jellyfin, YunoHost, n8n, and Nextcloud, have run into the exact same problem. The issue even spread to Immich’s user base, as “a few users started complaining about their own Immich deployments being flagged.”
The very workflow that defines modern, transparent, and collaborative open-source development—creating public preview environments for community review—is being misinterpreted by Google’s automated systems as deceptive activity. This is not a bug; it’s a fundamental design blind spot.
Google Safe Browsing looks to have been built without consideration for open-source or self-hosted software.
When a standard, transparent development workflow is flagged as a threat, it punishes the very communities that build the free and open tools so many of us rely on.
Conclusion: A Centralized Web’s Dilemma
The Immich story is a stark reminder that centralized gatekeepers hold immense, and often arbitrary, power to make entire sections of the web inaccessible. While services like Safe Browsing are built with good intentions, their automated, one-size-fits-all approach can cause significant, unintentional harm to legitimate projects. It forces us to confront a difficult question.
As we hand over more control to opaque, automated gatekeepers, are we building a safer internet, or are we inadvertently paving over the vibrant, collaborative spaces where the next generation of open-source software is born?