A follow-on to my "Nazi Sucker-punch Problem" post, to address the most common argument I get, which boils down to:
"""
Moderated registration won't stop Nazis, because they'll just pretend to be human to fool moderators, but it will stop normal people, who won't spend the effort to answer the application question or want to wait for approval.
"""
Okay, I'm going to try to use points that I hope are pretty acceptable to anyone arguing in good faith, and I'm going to expand the definition of Nazis to "attackers" and lump in bigots, trolls, scammers, spammers, etc. who use similar tactics.
Attackers: we can group attackers into two main types: dedicated and opportunistic. Dedicated attackers have a target picked and a personal motive—they hunt. Opportunistic attackers have an inclination and will attack if a target presents itself—they're scavengers. In my years of experience as an admin on multiple Fedi servers, most attackers are opportunistic.
Victims: when someone is attacked, they (and people like them) will be less likely to return to the place they were attacked.
In general: without a motive to expend more effort, humans will typically make decisions that offer the best perceived effort-to-reward ratio in the short-term (the same is true of risk-to-reward).
Why does any of this matter?
Because it all comes down to a fairly simple equation for the attackers: effort > reward. If this is true, then the opportunistic attackers will go elsewhere. If it isn't true, then their victims will go elsewhere.
How can we tip that scale out of the attackers' favor?
By making sure moderation efforts scale faster against attackers' behaviors than against normal users' behaviors.
- A normal user only has to register once, while an attacker has to re-register every time they get suspended.
- A normal user proves their normality with each action they take, while every action an attacker takes risks exposing them to moderation.
- A new user / attacker likely spends a minute or two signing up, while a moderator can review most applications in a matter of seconds. Yes, attackers can automate signups to reduce that effort (and some do, and we have tools to address some of that, but again, most attackers aren't dedicated).
- Reviewing an application is lower effort than trying to fix the damage from an attack. As someone who gets targeted regularly by attackers from open-registration servers, I'd personally rather skim and reject a page-long AI-generated application, than spend another therapy session exploring the trauma of being sent execution videos.
I believe this points to moderated registration being the lowest effort remedy for the problem of the Nazi Sucker-punch. So before we "engineer a new solution" that doesn't yet exist, we should exhaust the tools that are already available on the platform today. Yes, we could implement rate limits, or shadow bans, or trust networks, or quarantine servers, but we don't have those today, and even if we did, there's no evidence that those would be a better solution for Fedi than moderated signups.
Will it stop *all* the attackers? No. But it will stop most opportunistic attackers.
Will it deter *some* potential new users? Yes. But communities are defined by who stays, not by how many come through the door.
lgbtqia.space/@alice/115499829…
Why reactive moderation isn't going to cut it, aka, "The Sucker-punch Problem".
Imagine you invite your friend—let's call him Mark—to a club with you. It's open-door, which is cool, because you like when a lot of folx show up.
🅰🅻🅸🅲🅴 (🌈🦄) (LGBTQIA.Space)
Kent Pitman
in reply to Kent Pitman
At the end of @screwlisp's show, in the discussion of @cdegroot's book, @ramin_hal9001 was talking about continuations. I wanted to make a random point that isn't often made about Lisp that I think is important.
I often do binary partitions of languages (like the static/dynamic split, but more exotic), and one of them is whether they are leading or following, let's say. There are some aspects in which Scheme is a follower, not a leader, in the sense that it tends to eschew some things that Common Lisp does for a variety of reasons, one of them being "we don't know how to compile this well". There is a preference for a formal semantics that is very tight, where everything is well understood. It is perhaps fortunate that Scheme came along after garbage collection was well worked out and did not seem to fear that it would be a problem, but I would say that Lisp had already led on garbage collection.
The basic issue is this: should a language incorporate things that maybe are not really well understood, just because people need to do them, on the assumption that it might as well standardize the 'gesture' (to use the CLIM terminology) or 'notation' (to use the more familiar term) for saying you want to do that thing?
Scheme did not like Lisp macros, for example, and only adopted macros when hygienic macros were worked out. Lisp, on the other hand, started with the idea that macros were just necessary and worried about the details of making them sound later.
Scheme people (and I'm generalizing to make a point here, with apologies for painting an entire group with a broad brush that is probably unfair) think Common Lisp macros are more unhygienic than they actually are, because they don't give enough credit to things like the package system, which Scheme does not have, and which protects CL users from collisions a lot more than they acknowledge. They also don't fairly appreciate the degree to which Lisp2 protects against the most common capture scenarios that would happen all the time in Scheme if it had a symbol-based macro system. So CL isn't really as much at risk these days, though it was a bigger issue before packages. The point is that Lisp decided it would figure out how to tighten things later, because the feature was too important to leave out, whereas Scheme held back its design until it knew.
But, and this is where I wanted to get to, Scheme led on continuations. That's a hard problem and while it's possible, it's still difficult. I don't quite remember if the original language feature had fully worked through all the tail call situations in the way that ultimately it did. But it was brave to say that full continuations could be made adequately efficient.
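As a rough illustration of what a continuation captures, here is a sketch in Python (Scheme's call/cc has no direct Python equivalent, so this is continuation-passing style, the idea underneath, not call/cc itself): "the rest of the computation" becomes an explicit argument that can be stored and invoked like any other value.

```python
# Continuation-passing style (CPS): instead of returning, each function
# hands its result to an explicit "rest of the computation" callback.
# This sketches the idea that call/cc reifies automatically.

def add_cps(a, b, k):
    # k is the continuation: what to do with the sum
    return k(a + b)

def square_cps(x, k):
    return k(x * x)

def pythagoras_cps(a, b, k):
    # compute a^2 + b^2, threading the continuation through each step
    return square_cps(a, lambda a2:
           square_cps(b, lambda b2:
           add_cps(a2, b2, k)))

result = pythagoras_cps(3, 4, lambda v: v)  # identity continuation
print(result)  # 25

# Because the continuation is just a value, it can be stashed and
# invoked later, rather than run immediately:
saved = []
pythagoras_cps(3, 4, lambda v: saved.append(v))
print(saved)  # [25]
```

The compiler-writer's difficulty Kent alludes to is visible even here: once "the rest of the program" is a first-class value, the simple call-stack discipline no longer suffices.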
And the Lisp community in general (and here I will include Scheme in that, though on other days I think these communities are sufficiently different that I would not) has collectively been much more brave and leading than many languages, which only grudgingly allow functionality that they know how to compile.
In the early days of Lisp, the choice to do dynamic memory management was very brave. It took a long time to make GCs efficient, and generational GC was what I think finally made people believe this could be done well in large address spaces. (In small address spaces, it was possible because touching all the memory to do a GC did not introduce thrashing, since data wasn't "paged out". And in modern hardware, memory is cheap, so the size is not always a per se issue.)
But there was an intermediate time in which lots of memory was addressable but not fully realized as RAM, only virtualized, and GC was a mess in that space.
The Lisp Machines had 3 different unrelated but co-resident and mutually usable garbage collection strategies that could be separately enabled, 2 of them using hardware support (typed pointers) and one of them requiring that computation cease for a while because the virtual machine would be temporarily inconsistent for the last-ditch thing that particular GC could do to save the day when otherwise things were going to fail badly.
For a while, dynamic memory management would not be used in real time applications, but ultimately the bet Lisp had made on it proved that it could be done, and it drove the doing of it in a way that holding back would not have.
My (possibly faulty) understanding is that the Java GC was made to work by at least some displaced Lisp GC experts, for example. But certainly the choice to make Java be garbage collected probably derives from the Lispers on its design team feeling it was by then a solved problem.
This aspect of languages' designs, whether they lead or follow, whether they are brave or timid, is not often talked about. But I wanted to give the idea some air. It's cool to have languages that can use existing tech well, but cooler, I personally think, to see designers consciously driving the creation of such tech.
screwlisp reshared this.
Judy Anderson
in reply to Kent Pitman
Kent Pitman
in reply to Judy Anderson
@nosrednayduj
First, thanks for raising that example. It's interesting and contains info I hadn't heard.
In a way, it underscores my point: that for a while, it was an open question whether we could implement GC, but a bet was made that we could.
You could view that as saying they only implemented part of Lisp, and that the malloc stuff was a stepping out of paradigm, an admission the bet was failing for them in that moment. Or you could view it as a success, saying that even though some limping was required of Lisps while we refined the points, it was done.
As I recall, there was some discussion of adding a GC function. At the time, the LispM people probably said "which GC would it invoke" and the Gensym people probably said "we don't have one". That was the kind of complexity that the ANSI process turned up and it's probably why there is no GC function. (There was one in Maclisp that invoked the Mark/Sweep GC, but the situation had become more complicated.)
Also, as an aside, a personal observation about the process: With GC, as with other things like buffered streams, one of the hardest things to get agreement on was something where one party wanted a feature and another said "we don't have that, I'd have to make it a no-op". Making it a no-op was not a lot of implementation work. Just seeing and discarding an arg. But it complicated the story that was told, and vendors didn't like it, so they pushed back even though of all the implementations they had the easiest path (if you didn't count "explaining" as part of the path).
Elias Mårtenson
in reply to Kent Pitman
Your notes on continuations are interesting. I do a lot of Kotlin programming these days, and one of the features it adds on top of Java is continuations (they call them suspend functions). However, unlike Scheme, you can only call suspend functions from other suspend functions, leading to two different worlds: the continuation-supported one and the regular one.
I measured a 30% performance hit when changing code to use suspend functions instead of regular functions. Nevertheless, this has not stopped people from using them for everything.
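The same "two worlds" split shows up in Python's async functions, which makes for a convenient sketch of the restriction being described (an analogy, not Kotlin itself): an `await` is only legal inside an `async def`, so suspendable code and regular code form separate layers that can only be bridged explicitly.

```python
import asyncio

# Regular function: lives in the "regular world".
def double(x):
    return x * 2

# Suspendable function: analogous to a Kotlin suspend function.
# It may pause at each await and resume later.
async def fetch_value():
    await asyncio.sleep(0)   # a suspension point
    return 21

async def main():
    # A suspendable function can call regular functions freely...
    v = await fetch_value()
    return double(v)

# ...but a regular function cannot `await fetch_value()`; that is a
# SyntaxError outside `async def`. The boundary must be crossed
# explicitly, e.g. via an event-loop entry point:
print(asyncio.run(main()))  # 42
```

This "function coloring" is exactly the two-worlds cost being traded against the implementation efficiency of not making every call frame suspendable.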
Ramin Honary
in reply to Elias Mårtenson
@loke ooh, that is interesting, thanks! I did not know that Kotlin also had that feature (in a limited way).
Yes, the performance hit probably comes from copying or restoring the stack. For small stacks this is trivial, but oftentimes continuations are useful when computing recursive functions over very large data structures, and you usually have very large stacks for those kinds of computations.
Delimited continuations (DCs) can help with that problem, apparently. And the API for DCs also happens to make them more composable with each other, since you can kind-of unfreeze a computation inside of another frozen computation.
That might be why Kotlin has those restrictions on continuations.
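A rough way to see what freezing part of a computation buys you is Python's generators, which behave like a limited form of delimited continuation: the captured "rest of the computation" extends only to the end of the generator body, not the whole program (an analogy, not a full delimited-continuation implementation).

```python
# A generator suspends mid-computation; the suspended state is a
# first-class object that can be stored, passed around, and resumed
# later -- like a continuation delimited at the generator's boundary.

def counter(start):
    total = start
    while True:
        step = yield total       # suspension point: freeze here
        total += step

c = counter(10)
print(next(c))      # 10 -- run up to the first yield
print(c.send(5))    # 15 -- resume the frozen computation with a value
print(c.send(2))    # 17

# The frozen computation is just a value, so one suspended computation
# can be handed to, and driven from inside, other code:
def driver(gen, steps):
    next(gen)
    for s in steps:
        last = gen.send(s)
    return last

print(driver(counter(0), [1, 2, 3]))  # 6
```

Because only the generator's own frame is saved, resuming is cheap regardless of how deep the caller's stack is, which is the efficiency argument made for delimited continuations above.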
@kentpitman @screwlisp @cdegroot
Kent Pitman
in reply to Ramin Honary
I'm glad you mentioned delimited continuations. They go overlooked a lot.
Ramin Honary
in reply to Kent Pitman
Yes, Scheme led on continuations before it was a well-established idea, and I think there is some regret about that because of the difficulties you alluded to, especially in compiling efficient code. Nowadays the common wisdom is that delimited continuations, which I believe are implemented by copying only part of the stack, are better in every way. I have no strong opinions on the issue; I just thought it was interesting how Scheme solved the problems of optimizing tail recursion and "creating actors", i.e. capturing closures, and both of these things involve stack manipulation, which naturally leads to the idea of continuations.
As a Haskeller I definitely appreciate the study of programming language theory, and how much of Haskell is built on the work of Lisp. The Haskell team's many innovations include asking questions like, "what if everything was lazy by default?" Or, "what if we abolish mutating variables and force the programmer to pop the old value and push the new value on the stack every time?" Or, "what if tail recursion was the only way to loop?" As it turns out, this gives an optimizing compiler the freedom to optimize code very aggressively, and can result in very efficient binaries. Oftentimes, both programmers and language implementors can do a lot more when constrained to use fewer features.
But which features to use and which to remove requires a lot of wisdom and experience. So the Haskell people could have only felt comfortable asking those questions after garbage collection and closures had become a well-established practice, and we can thank the work of the Lisp team for those contributions.
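The "tail recursion as the only loop" idea can be sketched even in Python (which does not perform tail-call optimization, so this is purely illustrative): a loop becomes a function whose last act is to call itself with updated state, which is exactly the shape a compiler can turn back into a jump.

```python
# A while-loop written as tail recursion: all loop state travels
# through the arguments, and the recursive call is the last thing
# the function does. A tail-call-optimizing compiler (as in Haskell
# or Scheme) turns this into a plain jump with no stack growth.
# Python does NOT do that optimization, so this is a sketch only.

def sum_to(n, acc=0):
    if n == 0:
        return acc                 # base case: loop finished
    return sum_to(n - 1, acc + n)  # tail call: the "jump" back to the top

# The equivalent iterative loop such a compiler would effectively produce:
def sum_to_loop(n):
    acc = 0
    while n != 0:
        acc += n
        n -= 1
    return acc

print(sum_to(100))       # 5050
print(sum_to_loop(100))  # 5050
```

The constraint pays off because the recursive form has no hidden mutation: every piece of loop state is visible in the call, which is part of what lets an optimizer be aggressive.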
@screwlisp @cdegroot
Cees de Groot
in reply to Kent Pitman
LisPi
in reply to Kent Pitman