A follow-on to my "Nazi Sucker-punch Problem" post, to address the most common argument I get, which boils down to:
"""
Moderated registration won't stop Nazis, because they'll just pretend to be human to fool moderators, but it will stop normal people, who won't spend the effort to answer the application question or want to wait for approval.
"""
Okay, I'm going to try to use points that I hope are pretty acceptable to anyone arguing in good faith, and I'm going to expand the definition of Nazis to "attackers" and lump in bigots, trolls, scammers, spammers, etc. who use similar tactics.
Attackers: we can group attackers into two main types: dedicated and opportunistic. Dedicated attackers have a target picked and a personal motive: they hunt. Opportunistic attackers have an inclination and will attack if a target presents itself: they're scavengers. In my years of experience as an admin on multiple Fedi servers, most attackers are opportunistic.
Victims: when someone is attacked, they (and people like them) will be less likely to return to the place they were attacked.
In general: without a motive to expend more effort, humans will typically make decisions that offer the best perceived effort-to-reward ratio in the short-term (the same is true of risk-to-reward).
Why does any of this matter?
Because it all comes down to a fairly simple inequality for the attackers: effort > reward. If that holds, the opportunistic attackers will go elsewhere. If it doesn't, their victims will go elsewhere.
How can we tip that scale out of the attackers' favor?
By making sure moderation efforts scale faster against attackers' behaviors than against normal users' behaviors.
- A normal user only has to register once, while an attacker has to re-register every time they get suspended.
- A normal user proves their normality with each action they take, while every action an attacker takes risks exposing them to moderation.
- A new user / attacker likely spends a minute or two signing up, while a moderator can review most applications in a matter of seconds. Yes, attackers can automate signups to reduce that effort (and some do, and we have tools to address some of that, but again, most attackers aren't dedicated).
- Reviewing an application is lower effort than trying to fix the damage from an attack. As someone who gets targeted regularly by attackers from open-registration servers, I'd personally rather skim and reject a page-long AI-generated application than spend another therapy session exploring the trauma of being sent execution videos.
I believe this points to moderated registration being the lowest effort remedy for the problem of the Nazi Sucker-punch. So before we "engineer a new solution" that doesn't yet exist, we should exhaust the tools that are already available on the platform today. Yes, we could implement rate limits, or shadow bans, or trust networks, or quarantine servers, but we don't have those today, and even if we did, there's no evidence that those would be a better solution for Fedi than moderated signups.
Will it stop *all* the attackers? No. But it will stop most opportunistic attackers.
Will it deter *some* potential new users? Yes. But communities are defined by who stays, not by how many come through the door.
lgbtqia.space/@alice/115499829…
Why reactive moderation isn't going to cut it, aka, "The Sucker-punch Problem".
Imagine you invite your friend (let's call him Mark) to a club with you. It's open-door, which is cool, because you like when a lot of folx show up.
(LGBTQIA.Space)
Boyd Stephen Smith Jr.
in reply to Jeff Johnson
I feel compelled to mention there are models you can self-host. There are even models where the architecture is available under a permissive license, so you can tweak / tune / retrain / distill or whatever beyond mere prompting.
in reply to Jeff Johnson • • •I feel compelled to mention there are models you can self-host. There are even models where the architecture is available under a permissive license, so you can tweak / tune / retrain / distill or whatever beyond mere prompting.
I don't recommend or defend that approach. I think there are still problems, ethical and other.
But, it could be a way to prevent "vendor lock-in" with your LLM usage.
Jonathan Lamothe
in reply to Boyd Stephen Smith Jr.
Boyd Stephen Smith Jr.
in reply to Jonathan Lamothe
@me Generating test data, as a complement to QuickCheck/SmallCheck generators. I think LLMs might "explore the probability space" in different ways than manually written generators. But, I haven't validated this in practice.
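That complement might look something like the sketch below (in Python rather than Haskell, for brevity). The idea is just to mix a small list of LLM-suggested edge cases into a randomly generated corpus before checking a property; `slug`, the property, and the suggested cases are all invented here for illustration, not taken from any real workflow.

```python
import random

def slug(s: str) -> str:
    """Toy function under test: trim, lowercase, spaces to hyphens."""
    return s.strip().lower().replace(" ", "-")

def random_strings(n: int, seed: int = 0) -> list:
    """Manually written generator, analogous to a QuickCheck Gen."""
    rng = random.Random(seed)
    alphabet = "abc XYZ"
    return ["".join(rng.choice(alphabet) for _ in range(rng.randint(0, 12)))
            for _ in range(n)]

# Hypothetical LLM-suggested cases: inputs a hand-written generator
# might never produce (empty string, odd whitespace, non-ASCII).
llm_suggested = ["", "  leading space", "Ünïcödé", "a  b", "\ttab\t"]

def prop_no_spaces(s: str) -> bool:
    """Property: the slug never contains a space."""
    return " " not in slug(s)

# Check the property over both corpora.
for case in random_strings(100) + llm_suggested:
    assert prop_no_spaces(case), f"property failed for {case!r}"
print("all cases passed")
```

The random generator plays the role of QuickCheck's `Gen`; the fixed list stands in for LLM output, the same way QuickCheck users sometimes append hand-picked regression cases to generated ones.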
I've been fairly disappointed with LLMs' output all the times I've tried them. Too many hallucinations around factual data. Too little... variety(?) when doing fiction. The image generators seem better than me, but I have declined to use them (much) because I assume the image generators are "stealing" from the recognition/attribution of artists who make their art publicly visible. I know the code generators "steal" copyleft code, most likely including mine.
I don't like saying LLMs' capabilities are bad, because, for ethical reasons, I don't use them enough to really know what their current capabilities are.