A follow-on to my "Nazi Sucker-punch Problem" post, to address the most common argument I get, which boils down to:
"""
Moderated registration won't stop Nazis, because they'll just pretend to be human to fool moderators, but it will stop normal people, who won't spend the effort to answer the application question or want to wait for approval.
"""
Okay, I'm going to try to use points that I hope are acceptable to anyone arguing in good faith, and I'm going to expand the definition of Nazis to "attackers," lumping in bigots, trolls, scammers, spammers, and others who use similar tactics.
Attackers: we can group attackers into two main types: dedicated and opportunistic. Dedicated attackers have a target picked and a personal motive: they hunt. Opportunistic attackers have an inclination and will attack if a target presents itself: they're scavengers. In my years of experience as an admin on multiple Fedi servers, most attackers are opportunistic.
Victims: when someone is attacked, they (and people like them) will be less likely to return to the place they were attacked.
In general: without a motive to expend more effort, humans will typically make decisions that offer the best perceived effort-to-reward ratio in the short-term (the same is true of risk-to-reward).
Why does any of this matter?
Because it all comes down to a fairly simple inequality for the attackers: effort > reward. If the effort exceeds the reward, the opportunistic attackers will go elsewhere. If it doesn't, their victims will go elsewhere.
How can we tip that scale out of the attackers' favor?
By making sure moderation efforts scale faster against attackers' behaviors than against normal users' behaviors.
- A normal user only has to register once, while an attacker has to re-register every time they get suspended.
- A normal user proves their normality with each action they take, while every action an attacker takes risks exposing them to moderation.
- A new user / attacker likely spends a minute or two signing up, while a moderator can review most applications in a matter of seconds. Yes, attackers can automate signups to reduce that effort, and some do; we have tools to address some of that, but again, most attackers aren't dedicated.
- Reviewing an application is lower effort than trying to fix the damage from an attack. As someone who gets targeted regularly by attackers from open-registration servers, I'd personally rather skim and reject a page-long AI-generated application than spend another therapy session exploring the trauma of being sent execution videos.
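The asymmetry in that list can be sketched as a toy cost model. All the function names and effort numbers below are hypothetical illustrations of the argument, not measurements of any real server:

```python
# Toy cost model of the moderation asymmetry described above.
# Every number here is a made-up illustration, not a measurement.

SIGNUP_EFFORT = 120   # seconds: a human fills out an application
REVIEW_EFFORT = 15    # seconds: a moderator skims one application

def normal_user_cost() -> int:
    """A normal user pays the signup cost exactly once."""
    return SIGNUP_EFFORT

def attacker_cost(suspensions: int) -> int:
    """An attacker must re-register after every suspension."""
    return SIGNUP_EFFORT * (suspensions + 1)

def moderator_cost(applications: int) -> int:
    """Moderator effort grows linearly, with a much smaller constant."""
    return REVIEW_EFFORT * applications

# After ten suspensions the attacker has paid for eleven signups,
# while reviewing all eleven applications cost the moderators far less.
print(attacker_cost(10))   # 1320 seconds of attacker effort
print(moderator_cost(11))  # 165 seconds of moderator effort
print(normal_user_cost())  # 120 seconds, paid once
```

The point of the sketch is only the shape of the curves: the attacker's cost grows with every suspension, the moderator's cost grows with a much smaller constant, and the normal user's cost is flat.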
I believe this points to moderated registration as the lowest-effort remedy for the problem of the Nazi Sucker-punch. So before we "engineer a new solution" that doesn't yet exist, we should exhaust the tools that are already available on the platform today. Yes, we could implement rate limits, or shadow bans, or trust networks, or quarantine servers, but we don't have those today, and even if we did, there's no evidence that any of them would be a better solution for Fedi than moderated signups.
Will it stop *all* the attackers? No. But it will stop most opportunistic attackers.
Will it deter *some* potential new users? Yes. But communities are defined by who stays, not by how many come through the door.
lgbtqia.space/@alice/115499829…
Why reactive moderation isn't going to cut it, aka, "The Sucker-punch Problem".
Imagine you invite your friend, let's call him Mark, to a club with you. It's open-door, which is cool, because you like when a lot of folx show up.
(LGBTQIA.Space)
Baldur Bjarnason
in reply to Baldur Bjarnason

Best part? It's always somebody with years of experience. Exactly the demographic that is supposedly able to use this shit safely, but my impression is they're just as bad as the novices.
This is happening IMO because of one of the fundamental issues with software dev (and this predates "AI" and was one of the themes of my first book):
Most software projects fail and most of what gets shipped doesn't work. The way the industry is set up means there is little downside to shipping broken software
Baldur Bjarnason
in reply to Baldur Bjarnason

Few devs have a reference point for genuinely working software. Usability labs were disbanded over 20 years ago. Very few companies do actual user research, so their designs are based on fiction. Bugs are the norm.
Alienation is also the norm for devs, both socially and organisationally. Whether it works for the end user doesn't cross their mind. Whether the design fulfils business needs is not their problem. Bugs are a future problem. Ship insecure software and patch it as user data gets stolen
Baldur Bjarnason
in reply to Baldur Bjarnason

Devs are so disconnected from the output of their work that many of the norms of the industry are outright illegal: there's a good chance that if you follow popular practices for a React project, for example, you'll end up with a site or product that violates accessibility law in several countries.
Few devs would even know where to begin to look to answer the question "does my software work for the people forced to use it?"
Kale
in reply to Baldur Bjarnason

Whoever came up with 'Yes/Not now' needs to be dragged into the streets and shot.
No wonder some folks don't understand consent - our software doesn't allow for it.
Alexandre Oliva
in reply to Kale

Or the "You must allow our JavaScript programs to run on your browser, otherwise we won't allow you to get to the information that we're legally required to provide you with."
CC: @baldur@toot.cafe
Kale
in reply to Baldur Bjarnason

I still remember talking to a twitter dev who had an utterly, ridiculously foolish take on XYZ issue go viral.
They told me 'Uhhh, I've never had this much attention on me, my tweets never go beyond my social circle. I had to turn off my phone. It kept buzzing.'
... this was a person who worked on the UI. No shit they had no idea how to deal with high-volume 'oh, you just got 200k likes' kinda shit; they never experienced it themselves.
Raymond Neilson
in reply to Baldur Bjarnason

Honestly, I think a big part of it is more than our industry still being deeply immature; I think the most important throughline of the research on LLMs' effects on cognition is a consistent attack on metacognition, which seemingly doesn't abate with experience. The same corrosion happens to juniors and seniors alike, but the seniors have more rationalizations at hand to pretend it doesn't.
(Speaking of, that "cognitive surrender" paper is the latest in that theme: papers.ssrn.com/sol3/papers.cf…)
David Beazley
in reply to Baldur Bjarnason

Apropos of nothing, the absolute worst implementation of Raft I've ever seen in my Raft course was by a pair of senior devs with a combined 60+ years of experience who decided to pair program together and announced ahead of time to the group that they were going to "win" Raft. They did not.
An undergraduate who'd never coded with sockets before did reasonably okay.
Nicole Parsons
in reply to Kraftwerk-Das Model Collapse

@dngrs
Most people ignore that it's a fossil fuel funded cult, intentionally designed to keep a dependency on oil.
wired.com/story/trump-energy-i…
bloomberg.com/news/articles/20…
nytimes.com/2025/10/27/technol…
cnbc.com/2025/11/20/us-approve…
Saudi Arabia aspires to be the next Russian Internet Research Agency, selling hack-for-hire election meddling.
npr.org/2020/08/18/903512647/s…
newyorker.com/news/news-desk/w…
With Larry Ellison's help.
independent.co.uk/news/world/a…
sfchronicle.com/tech/article/p…
intelligentcio.com/me/2023/12/…
Billionaire Larry Ellison plotted with Trump aides on call about overturning election
John Bowden (The Independent)
Orjan
in reply to Baldur Bjarnason

I feel like having spent most of my career building embedded systems aimed at industry rather than consumers, where a customer support issue can mean sending a technician out with a USB stick on a ten-hour road trip, has insulated me from the worst madness.
If your sloppy coding breaks a manufacturing line or distribution network, bugs become expensive fast.
Though having said that, $CURRENT_EMPLOYER is pushing for greater use of LLMs in our workflow...
Niels Abildgaard
in reply to Baldur Bjarnason

This has been on my mind the last few days, too: mas.to/@nielsa/116171030173125…
I see so many people falling into LLM delusion, who I thought would know better, with no seeming pattern in *why* they fall for it.
Yes, the lack of negative incentives is certainly a factor.
My best explanation so far is that LLMs are kind of "acting" like 17 cons (some new, some old) in a trench coat, and different combinations of these trick different people who'd be able to resist most of them on their own.
Niels Abildgaard
2026-03-04 13:00:44
Elias Mårtenson
in reply to Baldur Bjarnason

I always enjoyed Universe Today, but once you get LLM psychosis, everything becomes possible.
youtube.com/watch?v=vkhZHR_hs4…
At the 47-minute mark it really goes off the rails.
SpaceX's AI Data Centres Might Actually Be A Good Idea. Here's Why
Fraser Cain (YouTube)