A follow-on to my "Nazi Sucker-punch Problem" post, to address the most common argument I get, which boils down to:
"""
Moderated registration won't stop Nazis, because they'll just pretend to be human to fool moderators, but it will stop normal people, who won't spend the effort to answer the application question or want to wait for approval.
"""
Okay, I'm going to try to use points that I hope are pretty acceptable to anyone arguing in good faith, and I'm going to expand the definition of Nazis to "attackers" and lump in bigots, trolls, scammers, spammers, etc. who use similar tactics.
Attackers: we can group attackers into two main types: dedicated and opportunistic. Dedicated attackers have a target picked and a personal motive—they hunt. Opportunistic attackers have an inclination and will attack if a target presents itself—they're scavengers. In my years of experience as an admin on multiple Fedi servers, most attackers are opportunistic.
Victims: when someone is attacked, they (and people like them) will be less likely to return to the place they were attacked.
In general: without a motive to expend more effort, humans will typically make decisions that offer the best perceived effort-to-reward ratio in the short-term (the same is true of risk-to-reward).
Why does any of this matter?
Because it all comes down to a fairly simple inequality for the attackers: effort > reward. If that holds, then the opportunistic attackers will go elsewhere. If it doesn't, then their victims will go elsewhere.
How can we tip that scale out of the attackers' favor?
By making sure moderation efforts scale faster against attackers' behaviors than against normal users' behaviors.
- A normal user only has to register once, while an attacker has to re-register every time they get suspended.
- A normal user proves their normality with each action they take, while every action an attacker takes risks exposing them to moderation.
- A new user / attacker likely spends a minute or two signing up, while a moderator can review most applications in a matter of seconds. Yes, attackers can automate signups to reduce that effort (and some do, and we have tools to address some of that, but again, most attackers aren't dedicated).
- Reviewing an application is lower effort than trying to fix the damage from an attack. As someone who gets targeted regularly by attackers from open-registration servers, I'd personally rather skim and reject a page-long AI-generated application than spend another therapy session exploring the trauma of being sent execution videos.
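The scaling asymmetry in the list above can be sketched as a toy model. All the numbers here are illustrative assumptions (not measurements), and the function names are made up for this sketch: the point is only that the attacker's cost grows with every suspension cycle, while the moderators' cost per application stays small.

```python
# Toy model (illustrative assumptions only): cumulative effort for an
# opportunistic attacker vs. the mod team under moderated signups.

REVIEW_SECONDS = 15    # assumed: time for a moderator to review one application
SIGNUP_SECONDS = 120   # assumed: time for an attacker to fill out one application

def moderator_effort(applications: int) -> int:
    """Total moderator seconds spent reviewing signup applications."""
    return applications * REVIEW_SECONDS

def attacker_effort(suspensions: int) -> int:
    """Total attacker seconds: the initial signup plus one re-signup
    per suspension, since each ban forces a fresh application."""
    return (suspensions + 1) * SIGNUP_SECONDS

# After 5 suspensions the attacker has spent 12 minutes re-registering,
# while reviewing all 6 of their applications cost the mod team 90 seconds.
print(attacker_effort(5))   # 720
print(moderator_effort(6))  # 90
```

Under these (assumed) numbers the attacker's effort outpaces the moderators' by roughly 8x, and the gap widens with every suspension; a normal user pays the signup cost exactly once.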
I believe this points to moderated registration being the lowest effort remedy for the problem of the Nazi Sucker-punch. So before we "engineer a new solution" that doesn't yet exist, we should exhaust the tools that are already available on the platform today. Yes, we could implement rate limits, or shadow bans, or trust networks, or quarantine servers, but we don't have those today, and even if we did, there's no evidence that those would be a better solution for Fedi than moderated signups.
Will it stop *all* the attackers? No. But it will stop most opportunistic attackers.
Will it deter *some* potential new users? Yes. But communities are defined by who stays, not by how many come through the door.
lgbtqia.space/@alice/115499829…
🅰🅻🅸🅲🅴 (🌈🦄) (@alice@lgbtqia.space)
Why reactive moderation isn't going to cut it, aka, "The Sucker-punch Problem". Imagine you invite your friend—let's call him Mark—to a club with you. It's open-door, which is cool, because you like when a lot of folx show up.
🅰🅻🅸🅲🅴 (🌈🦄) (LGBTQIA.Space)

Dźwiedziu
in reply to 🅰🅻🅸🅲🅴 (🌈🦄)

The Orange Theme
in reply to 🅰🅻🅸🅲🅴 (🌈🦄)
I used to get my hair cut at a place that was just far enough away, and with enough traffic jams on the way each time, that I stopped going. It's not "far", by any means, but it was just on the cusp of being annoying. Once it became juuuust too much, I went somewhere closer.
I think people underestimate how low the bar can be to prevent bad actors. Even the guy scripting his nonsense will hit an application form and immediately leave to find an open instance, most of the time.
Kirtai 🏳️⚧️
in reply to 🅰🅻🅸🅲🅴 (🌈🦄)
Totally agree.
Also, I know it wasn't meant that way, but this had me in stitches.
Marianne
in reply to 🅰🅻🅸🅲🅴 (🌈🦄)
The 'punch a Nazi' meme: what are the ethics of punching Nazis? | Tauriq Moosa
Tauriq Moosa (the Guardian)

Nicole Parsons reshared this.
kauer
in reply to 🅰🅻🅸🅲🅴 (🌈🦄)
Tightly argued. Nice.
To the concern that it might deter some new users, I would add "yes, but if the alternative is lots more evil arseholes, it's a minor downside - especially as it is really only a downside for lazy new users".
We already have moderated follows at the user level; having moderated signups at the server level seems like a no-brainer.
🅰🅻🅸🅲🅴 (🌈🦄) reshared this.
Androcat
in reply to kauer
Sensitive content
@kauer
Yeah. Having a place infested with malicious dicks is also going to deter people.
I am sure there is a sweet spot in optimizing between cumbersome defenses and being a trash pit.
gkrnours
in reply to 🅰🅻🅸🅲🅴 (🌈🦄)

Dr Neenah Estrella-Luna
in reply to 🅰🅻🅸🅲🅴 (🌈🦄)
"... communities are defined by who stays, not by how many come through the door."
This is a beautiful line, and apropos of many situations. I will be adding that to my book of really useful ideas.
Thank you.
Shannon Prickett and 🅰🅻🅸🅲🅴 (🌈🦄) reshared this.
Kim Possible
in reply to 🅰🅻🅸🅲🅴 (🌈🦄)

🅰🅻🅸🅲🅴 (🌈🦄) reshared this.
Just Tom... 🐁
in reply to Kim Possible

Wolf in PDX
in reply to Just Tom... 🐁
@tompearce49 @kimlockhartga@beige.party @alice
I dunno man. You ever stab a kid with a Bic pen in the hand for grabbing you and shoving you down into a chair? Because I did once. And the bullies never fucked with me again.
🅰🅻🅸🅲🅴 (🌈🦄)
in reply to Wolf in PDX
@wolfinpdx so the pen *is* mightier 😋
@tompearce49
Kim Possible
in reply to Just Tom... 🐁

🅰🅻🅸🅲🅴 (🌈🦄)
in reply to Kim Possible
@kimlockhartga I've been tempted to start collecting the attacks I get and publishing them (with content warnings!) because a thing I hear over and over is:
"Really? I never see stuff like that here."
And these (mostly) white (mostly) guys say the same thing when #BlackMastodon talks about #Racism.
Or when #FemmeFedi talks about #Sexism.
It's like, dude, you don't see it because you're not the target. 😮💨
jz.tusk
in reply to 🅰🅻🅸🅲🅴 (🌈🦄) • • •@kimlockhartga
I'm a guy, and once, for about 45 seconds, I was mistaken for a woman, and the difference in the attitude towards me was insane, and that clarified things for me in an incredibly direct way.
So, I'm guessing you're saying that mostly in exasperation, and I'm most definitely not saying it's your job/responsibility to do so, but my thought is that yes, that might actually be a useful thing to do.
sanpan
in reply to 🅰🅻🅸🅲🅴 (🌈🦄)
@kimlockhartga I usually don't see stuff like that bc a) as you say, I'm not the target, and b) bc of the work all you mods and admins put in every day.
My wife just liked and commented on an anti-Nazi post on Facebook the other day, and the messages she got ranged from "kill yourself" to actual threats.
Is it just me, or is this actually getting worse by the day?
Anyway, I'm sorry for everyone who has to deal with this shit and immensely grateful for everyone moderating it.
🅰🅻🅸🅲🅴 (🌈🦄)
in reply to sanpan • • •@sanpan at least in the US, these assholes have been emboldened of late. And with most big tech platforms scrapping DEI and anti-harassment language, the (measly) repercussions they may once have had are even less of a concern.
@kimlockhartga
sanpan
in reply to 🅰🅻🅸🅲🅴 (🌈🦄)

Negative12DollarBill
in reply to 🅰🅻🅸🅲🅴 (🌈🦄)
@kimlockhartga Do it, for sure. I would never say the "Really?" part but no, I never see these posts. For technical reasons, possibly?
But yes, show us the replies. Let us report these people.
Insecurity Princess 🌈💖🔥
in reply to Kim Possible
@kimlockhartga
^ that is what makes this accurate:
"Reviewing an application is lower effort than trying to fix the damage from an attack."
Moderators have to review hundreds of applications to prevent a single attack. But because the damage of accumulated attacks is both long-lasting and affects audiences beyond a single direct target, reviewing remains the lower-effort investment.
Preventing attacks is also ethically worthwhile, adding ethos to logos and pathos! Never let meritocracy trolls reduce us to only using logos and rational arguments as persuasive tools.
Leto Fregar
in reply to Kim Possible

Androcat
in reply to 🅰🅻🅸🅲🅴 (🌈🦄)
Sensitive content
This is also well-known from hacker circles.
The overwhelming majority of malicious hackers look only for low-hanging fruit.
Dedicated hackers looking to penetrate a well-composed org? Very rare. And completely different from the bulk. This is what red team sessions are for.
Bill
in reply to 🅰🅻🅸🅲🅴 (🌈🦄)

🅰🅻🅸🅲🅴 (🌈🦄) reshared this.
🅰🅻🅸🅲🅴 (🌈🦄)
in reply to Bill

DamonHD
in reply to Bill

Dave Alvarado
in reply to 🅰🅻🅸🅲🅴 (🌈🦄)
for the readers (I know Alice knows all this):
Don't let the perfect be the enemy of the good. Any attempt to keep out the attackers is better than no attempt to keep out the attackers.
There's such a thing as defense-in-depth. You don't need your registration process to stop every attacker trying to get in the door. You have moderation tools. You have blacklists. You have defederation. You have TBS. Every attacker-stopper you add makes your instance safer.
Don't give up. Fight back.
MayaMayaMaya
in reply to 🅰🅻🅸🅲🅴 (🌈🦄)

MaryMarasKittenBakery
in reply to 🅰🅻🅸🅲🅴 (🌈🦄)
🥰🥰
🅰🅻🅸🅲🅴 (🌈🦄) reshared this.
🅰🅻🅸🅲🅴 (🌈🦄)
in reply to MaryMarasKittenBakery

Zumbador
in reply to 🅰🅻🅸🅲🅴 (🌈🦄)

Twotired
in reply to 🅰🅻🅸🅲🅴 (🌈🦄)

🅰🅻🅸🅲🅴 (🌈🦄)
in reply to Twotired

solo
in reply to 🅰🅻🅸🅲🅴 (🌈🦄)
on its face this is just an awful argument, like ???
99.9% of nazis won't even bother doing that... so it weeds out the vast majority of them
and that's what you have other moderation practices for!!
Raccoon🇺🇸🏳️🌈
in reply to 🅰🅻🅸🅲🅴 (🌈🦄)
Suddenly wondering about a system, which I've seen used effectively on forums, where a new user's posts are held back and they are effectively silenced until a moderator reads their posts and approves them to interact, perhaps with some sort of time limit in case moderation doesn't get to it in a timely manner. This would not only mean they don't need to give a reason when signing up, but that they could partially engage without having to wait, and potentially that moderation would be seeing their posts right away.
It might take a new layer of systems to implement, but do you think that would be a good idea?
🅰🅻🅸🅲🅴 (🌈🦄)
in reply to Raccoon🇺🇸🏳️🌈
@Raccoon see lgbtqia.space/@alice/116130539… (towards the end)
"""
I believe this points to moderated registration being the lowest effort remedy for the problem of the Nazi Sucker-punch. So before we "engineer a new solution" that doesn't yet exist, we should exhaust the tools that are already available on the platform today. Yes, we could implement rate limits, or shadow bans, or trust networks, or quarantine servers, but we don't have those today, and even if we did, there's no evidence that those would be a better solution for Fedi than moderated signups.
"""
🅰🅻🅸🅲🅴 (🌈🦄)
2026-02-25 09:23:20
Funky Captain 𓆏
in reply to Raccoon🇺🇸🏳️🌈
Closed registration is incredibly stressful to me, for example, because you have to consider so many factors: did you write enough? did you write too much? are your writing skills on par? do you sound sane enough to be accepted? WHAT do you even write in the first place? what reason for wanting to join do you give beyond the basic "i need an account to interact with the site", which will likely not be enough and will get you rejected? what if you end up wanting to change instances, and will therefore have wasted a moderator's time by making them read all that?
The reasons for closed reg brought up in OP are valid and are probably more immediate than having anxiety, but my issues still exist and I'd like them to be kept in mind.
🅰🅻🅸🅲🅴 (🌈🦄)
in reply to Funky Captain 𓆏
@ItsFunkyCaptain you just wrote *way* more in an unsolicited response to a stranger than you'd ever need to write to be accepted into our moderated-registration community at LGBTQIA.space.
In fact, the vast majority of people who *do* get rejected, get rejected because they *obviously* didn't read the server rules.
Specifically:
2. The main language on this instance is English.
11. We need to confirm you’re not an AI, so please write a few sentences explaining why you want to join this server. Without that, we can’t approve your account. (max 400 characters)
And their application looks like: "ich bin schwul", or "community", or "i'm lgb".
Whereas something as simple as "I'm a lesbian looking for a place that isn't toxic like Twitter." would easily be accepted.
@Raccoon
Funky Captain 𓆏
in reply to 🅰🅻🅸🅲🅴 (🌈🦄)

Todd Knarr
in reply to 🅰🅻🅸🅲🅴 (🌈🦄)

Matilda Love
in reply to 🅰🅻🅸🅲🅴 (🌈🦄)
it occurs to me that moderated registration has another benefit: reducing moderation load.
yes; reducing.
if your mod team is at capacity, the last thing you need is the doors wide open for any number of new users. if you're having trouble keeping up with new account requests, good! let them lie fallow, because you clearly don't need more users to moderate right now!
J. "Henry" Waugh
in reply to 🅰🅻🅸🅲🅴 (🌈🦄)
an excellent analysis
I'd offer one addendum for those in the audience who want a low-effort policy that's more aggressive.
There is another, much more heavy-handed option, one that hits "innocent" and "guilty" alike, and one common to servers including mine:
By referral, after referrer has been registered X months
The number who accidentally invite someone who doesn't share the culture and values of the place is very low.
And if fedi shows anything imo, it's that this scales better than many think
🅰🅻🅸🅲🅴 (🌈🦄) reshared this.