A follow-on to my "Nazi Sucker-punch Problem" post, to address the most common argument I get, which boils down to:
"""
Moderated registration won't stop Nazis, because they'll just pretend to be human to fool moderators, but it will stop normal people, who won't spend the effort to answer the application question or want to wait for approval.
"""
Okay, I'm going to try to use points that I hope are pretty acceptable to anyone arguing in good faith, and I'm going to expand the definition of Nazis to "attackers" and lump in bigots, trolls, scammers, spammers, etc. who use similar tactics.
Attackers: we can group attackers into two main types: dedicated and opportunistic. Dedicated attackers have a target picked and a personal motive—they hunt. Opportunistic attackers have an inclination and will attack if a target presents itself—they're scavengers. In my years of experience as an admin on multiple Fedi servers, most attackers are opportunistic.
Victims: when someone is attacked, they (and people like them) will be less likely to return to the place they were attacked.
In general: without a motive to expend more effort, humans will typically make decisions that offer the best perceived effort-to-reward ratio in the short term (the same is true of risk-to-reward).
Why does any of this matter?
Because it all comes down to a fairly simple equation for the attackers: effort > reward. If this is true, then the opportunistic attackers will go elsewhere. If it isn't true, then their victims will go elsewhere.
How can we tip that scale out of the attackers' favor?
By making sure moderation efforts scale faster against attackers' behaviors than against normal users' behaviors.
- A normal user only has to register once, while an attacker has to re-register every time they get suspended.
- A normal user proves their normality with each action they take, while every action an attacker takes risks exposing them to moderation.
- A new user / attacker likely spends a minute or two signing up, while a moderator can review most applications in a matter of seconds. Yes, attackers can automate signups to reduce that effort (and some do, and we have tools to address some of that, but again, most attackers aren't dedicated).
- Reviewing an application is lower effort than trying to fix the damage from an attack. As someone who gets targeted regularly by attackers from open-registration servers, I'd personally rather skim and reject a page-long AI-generated application than spend another therapy session exploring the trauma of being sent execution videos.
I believe this points to moderated registration being the lowest effort remedy for the problem of the Nazi Sucker-punch. So before we "engineer a new solution" that doesn't yet exist, we should exhaust the tools that are already available on the platform today. Yes, we could implement rate limits, or shadow bans, or trust networks, or quarantine servers, but we don't have those today, and even if we did, there's no evidence that those would be a better solution for Fedi than moderated signups.
Will it stop *all* the attackers? No. But it will stop most opportunistic attackers.
Will it deter *some* potential new users? Yes. But communities are defined by who stays, not by how many come through the door.
lgbtqia.space/@alice/115499829…
Why reactive moderation isn't going to cut it, aka, "The Sucker-punch Problem".
Kent Pitman
in reply to Kent Pitman
At the end of @screwlisp's show, in the discussion of @cdegroot's book, @ramin_hal9001 was talking about continuations. I wanted to make a random point that isn't often made about Lisp that I think is important.
I often do binary partitions of languages (like the static/dynamic split, but more exotic), and one of them is whether they are leading or following, let's say. There are some aspects in which Scheme is a follower, not a leader, in the sense that it tends to eschew some things that Common Lisp does for a variety of reasons, one of them being "we don't know how to compile this well". There is a preference for a formal semantics that is very tight, where everything is well understood. It is perhaps fortunate that Scheme came along after garbage collection was well worked out and did not seem to fear that it would be a problem, but I would say that Lisp had already basically led on garbage collection.
The basic issue is this: should a language incorporate things that maybe are not really well understood, just because people need to do them, on the assumption that it might as well standardize the 'gesture' (to use the CLIM terminology) or 'notation' (to use the more familiar term) for saying you want to do that thing?
Scheme did not like Lisp macros, for example, and only adopted macros when hygienic macros were worked out. Lisp, on the other hand, started with the idea that macros were just necessary and worried about the details of making them sound later.
Scheme people (and I'm generalizing to make a point here, with apologies for casting an entire group with a broad brush that is probably unfair) think Common Lisp macros are more unhygienic than they actually are, because they don't give enough credit to things like the package system, which Scheme does not have, and which protects CL users from collisions a lot more than they give credit for. They also don't fairly understand the degree to which Lisp2 protects against the most common capture scenarios that would happen all the time in Scheme if it had a symbol-based macro system. So CL isn't really as much at risk these days; it was a bigger issue before packages. The point is that Lisp decided it would figure out how to tighten things later, because the feature was too important to leave out, where Scheme held back the design until it knew.
But, and this is where I wanted to get to, Scheme led on continuations. That's a hard problem and while it's possible, it's still difficult. I don't quite remember if the original language feature had fully worked through all the tail call situations in the way that ultimately it did. But it was brave to say that full continuations could be made adequately efficient.
And the Lisp community in general (and here I will include Scheme, though on other days I think these communities are sufficiently different that I would not) has collectively been much more brave and leading than many languages, which only grudgingly allow functionality that they know how to compile.
In the early days of Lisp, the choice to do dynamic memory management was very brave. It took a long time to make GCs efficient, and generational GC was, I think, what finally made people believe this could be done well in large address spaces. (In small address spaces it was possible because touching all the memory to do a GC did not introduce thrashing, since data was not "paged out". And in modern hardware, memory is cheap, so the size is not always a per se issue.)
But there was an intermediate time in which lots of memory was addressable but not fully realized as RAM, only virtualized, and GC was a mess in that space.
The Lisp Machines had three different, unrelated but co-resident and mutually usable garbage collection strategies that could be separately enabled. Two of them used hardware support (typed pointers), and one required that computation cease for a while, because the virtual machine would be temporarily inconsistent during the last-ditch thing that particular GC could do to save the day when otherwise things were going to fail badly.
For a while, dynamic memory management would not be used in real time applications, but ultimately the bet Lisp had made on it proved that it could be done, and it drove the doing of it in a way that holding back would not have.
My (possibly faulty) understanding is that the Java GC was made to work by at least some displaced Lisp GC experts, for example. But certainly the choice to make Java be garbage collected probably derives from the Lispers on its design team feeling it was by then a solved problem.
This aspect of languages' designs, whether they lead or follow, whether they are brave or timid, is not often talked about. But I wanted to give the idea some air. It's cool to have languages that can use existing tech well, but cooler, I personally think, to see designers consciously driving the creation of such tech.
screwlisp reshared this.
Judy Anderson
in reply to Kent Pitman
screwlisp reshared this.
Kent Pitman
in reply to Judy Anderson
@nosrednayduj
First, thanks for raising that example. It's interesting and contains info I hadn't heard.
In a way, it underscores my point: that for a while, it was an open question whether we could implement GC, but a bet was made that we could.
You could view that as saying they only implemented part of Lisp, and that the malloc stuff was a stepping out of paradigm, an admission the bet was failing for them in that moment. Or you could view it as a success, saying that even though some limping was required of Lisps while we refined the points, it was done.
As I recall, there was some discussion of adding a GC function. At the time, the LispM people probably said "which GC would it invoke" and the Gensym people probably said "we don't have one". That was the kind of complexity that the ANSI process turned up and it's probably why there is no GC function. (There was one in Maclisp that invoked the Mark/Sweep GC, but the situation had become more complicated.)
Also, as an aside, a personal observation about the process: With GC, as with other things like buffered streams, one of the hardest things to get agreement on was something where one party wanted a feature and another said "we don't have that, I'd have to make it a no-op". Making it a no-op was not a lot of implementation work. Just seeing and discarding an arg. But it complicated the story that was told, and vendors didn't like it, so they pushed back even though of all the implementations they had the easiest path (if you didn't count "explaining" as part of the path).
Kent Pitman
in reply to Kent Pitman
@nosrednayduj
And, unrelated, another reference I made in the show, to Clyde Prestowitz and his book The Betrayal of American Prosperity.
goodreads.com/book/show/810439…
Also an essay I wrote that summarizes a key point from it, though not really related to the topic of the show. I mention it just because that point will also be interesting maybe to this audience on the issue of capitalism if not on the specific economic issue we were talking about tonight:
netsettlement.blogspot.com/201…
screwlisp reshared this.
Elias Mårtenson
in reply to Kent Pitman
Your notes on continuations are interesting. I do a lot of Kotlin programming these days, and one of the features it adds on top of Java is continuations (they call them suspend functions). However, unlike Scheme, you can only call suspend functions from other suspend functions, leading to two different worlds: the continuation-supported one and the regular one.
I measured a 30% performance hit when changing code to use suspend functions instead of regular functions. Nevertheless, this has not stopped people from using them for everything.
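The "two worlds" restriction described above has a close analogue in Python's async/await, which makes for a compact sketch (illustrative only; Kotlin's suspend machinery differs in detail):

```python
import asyncio

# A suspendable function: it lives in the "continuation-supported" world.
async def fetch_value():
    await asyncio.sleep(0)   # a suspension point
    return 42

# Only another suspendable function may await it...
async def compute():
    return await fetch_value() + 1

# ...while ordinary code cannot use `await` at all; it has to cross the
# boundary explicitly by starting an event loop.
def ordinary():
    return asyncio.run(compute())

result = ordinary()
```

The explicit `asyncio.run` crossing is the same coloring boundary: suspendable and ordinary code form two layers that do not freely mix.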
Ramin Honary
in reply to Elias Mårtenson
@loke ooh, that is interesting, thanks! I did not know that Kotlin also had that feature (in a limited way).
Yes, the performance hit probably comes from copying the stack or restoring the stack. For small stacks this is trivial, but often continuations are useful when computing recursive functions over very large data structures, and you usually have very large stacks for those kinds of computations.
Delimited continuations (DCs) can help with that problem, apparently. And the API for DCs also happens to make them more composable with each other, since you can kind-of unfreeze a computation inside of another frozen computation.
That might be why Kotlin has those restrictions on continuations.
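The "unfreeze a computation inside of another frozen computation" idea above can be loosely pictured with Python generators, which behave like one-shot delimited suspensions (a sketch of the shape only, not real first-class continuations):

```python
# inner() suspends at each yield; everything after the yield is the
# not-yet-run "rest of the computation".
def inner():
    total = 0
    for n in (1, 2, 3):
        total += n
        yield total

# outer() is itself suspendable, and it resumes the frozen inner()
# step by step from inside its own frozen state.
def outer(sub):
    yield "outer started"
    for partial in sub:
        yield f"inner so far: {partial}"
    yield "outer done"

steps = list(outer(inner()))
```

Each generator only captures its own frame up to the next yield, which is roughly why such delimited suspensions compose more easily than capturing the whole stack.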
@kentpitman @screwlisp @cdegroot
Kent Pitman
in reply to Ramin Honary
I'm glad you mentioned delimited continuations. They go overlooked a lot.
Ramin Honary
in reply to Kent Pitman
Yes, Scheme led on continuations before it was a well-established idea, and I think there is some regret about that because of the difficulties involved, to which you had alluded, especially in compiling efficient code. Nowadays the common wisdom is that delimited continuations, which I believe are implemented by copying only part of the stack, are better in every way. I have no strong opinions on the issue; I just thought it was interesting how Scheme solved the problems of optimizing tail recursion and “creating actors”, i.e. capturing closures, and both of these things involve stack manipulation, which naturally leads into the idea of continuations.
As a Haskeller I definitely appreciate the study of programming language theory, and how much of Haskell is built on the work of Lisp. The Haskell team’s many innovations include asking questions like, “what if everything was lazy by default?” Or, “what if we abolish mutating variables and force the programmer to pop the old value and push the new value on the stack every time?” Or “what if tail recursion was the only way to loop?” As it turns out, this gives an optimizing compiler the freedom to very aggressively optimize code, and can result in very efficient binaries. Often, both programmers and language implementors can do a lot more when constrained to use fewer features.
But which features to use and which to remove requires a lot of wisdom and experience. So the Haskell people could have only felt comfortable asking those questions after garbage collection and closures had become a well-established practice, and we can thank the work of the Lisp team for those contributions.
@screwlisp @cdegroot
Cees de Groot
in reply to Kent Pitman
LisPi
in reply to Kent Pitman
Roger Crew✅❌☑🗸❎✖✓✔
in reply to Kent Pitman
Generational GC changes the way you program and it's not *just* that it's efficient.
We used MIT-Scheme (which, by the early '90s, was showing its age). Did all manner of weird optimizing to use memory efficiently. Lots of set! to re-use structure where possible. Or (map! f list) -- same as (map ...) but with set-car! to modify in-place -- because it made a HUGE difference not recreating all of those cons cells => bumps memory use => next GC round is that much sooner (and then everything STOPS, because Mark & Sweep). Also stupid (fluid-let ...) tricks to save space in closures.
We were writing Scheme as if it were C because that was how you got speed in that particular world.
1/3
Roger Crew✅❌☑🗸❎✖✓✔
in reply to Roger Crew✅❌☑🗸❎✖✓✔
And then Bruce Duba joined the group (had just come from Indiana).
"Guys, you're doing this ALL WRONG",
"Yeah, we know already. It's ugly, impure, and sucks. But it's faster, unfortunately",
"No, you need a better Scheme; you should try Chez".
...and, to be sure, just that much *was* a significant improvement. Chez was much more actively maintained, had a better repertoire of optimizations, etc...
... but the real eye-opener was what happened when we ripped out all of the set! and fluid-let code. That's when we got the multiple-orders-of-magnitude speed improvement.
2/3
Digital Mark λ ☕️ 🕹 👽 reshared this.
Roger Crew✅❌☑🗸❎✖✓✔
in reply to Roger Crew✅❌☑🗸❎✖✓✔
See, setq/set! is a total disaster for generational GC. It bashes old-space cells to point to new-space; the premise of generational GC being that this mostly shouldn't happen. The super-often new-generation-only pass is now doing a whole lot of old-space traversal because of all of those cells added to the root set by the set! calls, ... which then loses most of the benefit of generational GC.
(fluid-let and dynamic-wind also became way LESS cheap, mainly due to missing multiple optimization opportunities)
In short, with generational GC, straightforward side-effect-free code wins. It took a while for me to recalibrate my intuitions re what sorts of things were fast/cheap vs not.
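The write-barrier cost described above can be sketched with a toy Python model; the `gen` tags, `write_barrier` helper, and remembered set here are illustrative assumptions, not any real collector's API:

```python
class Cell:
    def __init__(self, gen):
        self.gen = gen        # "old" or "young"
        self.ref = None

remembered = set()            # extra roots the minor (young-only) GC must scan

def write_barrier(holder, value):
    # Every mutation goes through the barrier; an old->young pointer
    # forces `holder` into the remembered set.
    holder.ref = value
    if holder.gen == "old" and value is not None and value.gen == "young":
        remembered.add(holder)

# set!-style re-use: bash 1000 old cells to point at fresh young data.
old_cells = [Cell("old") for _ in range(1000)]
for cell in old_cells:
    write_barrier(cell, Cell("young"))

extra_roots = len(remembered)  # the whole old list must now be rescanned
```

Fresh-allocation style never trips the barrier, which is the sense in which straightforward side-effect-free code wins.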
3/3
screwlisp reshared this.
screwlisp
in reply to Roger Crew✅❌☑🗸❎✖✓✔
> you should try chez
@wrog @kentpitman @cdegroot @ramin_hal9001
Roger Crew✅❌☑🗸❎✖✓✔
in reply to Roger Crew✅❌☑🗸❎✖✓✔
There were other weirdnesses as well.
Even if GC saves you the horror of referencing freed storage, or freeing stuff twice, you still have to worry about memory leaks.
With generational GC, leaks in some sense end up being *more* costly: it's useless shit that has to be copied -- yes, it eventually ends up in an old generation, but until then it's getting copied -- and copying is what's taking most of the time with a generational GC.
And so, tracking down leaks and finding places to put in weak pointers started mattering more...
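Python's weakref module gives a quick way to see the "put in weak pointers" fix described above (a loose analogy, since CPython's reclamation details differ from a copying collector, but the leak shape is the same):

```python
import gc
import weakref

class Record:
    def __init__(self, payload):
        self.payload = payload

strong_cache = {}                           # pins its values alive forever
weak_cache = weakref.WeakValueDictionary()  # lets the collector reclaim them

a, b = Record("a"), Record("b")
strong_cache["a"] = a
weak_cache["b"] = b

del a, b        # drop the program's own references
gc.collect()

leaked = "a" in strong_cache       # still there: the cache itself is the leak
reclaimed = "b" not in weak_cache  # gone: the weak entry didn't keep it alive
```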
4/3
screwlisp
in reply to Roger Crew✅❌☑🗸❎✖✓✔
@kentpitman @cdegroot @ramin_hal9001
Roger Crew✅❌☑🗸❎✖✓✔
in reply to screwlisp
@dougmerritt
5? maybe for mark&sweep
but I can't see how more than 2 would ever be necessary for a copying GC. Once you have enough space to copy everything *to* (on the off-chance that absolutely everything actually *needs* to be copied), you're basically done...
... and if you're following the usual pattern where 90% of what you create becomes garbage almost immediately, you can get by with far less.
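The "two spaces are enough" point above can be sketched as a minimal Cheney-style semispace copy in Python, with list indices standing in for pointers (an illustration of the idea, not MIT-Scheme's or Chez's actual collector):

```python
def collect(heap, roots):
    """Copy everything reachable from `roots` into a fresh to-space.

    `heap` is a list of objects, each object a list of indices into `heap`.
    Returns (to_space, new_roots) with all references forwarded."""
    to_space = []
    forwarding = {}                       # old index -> new index

    def copy(i):
        if i not in forwarding:           # first visit: evacuate the object
            forwarding[i] = len(to_space)
            to_space.append(list(heap[i]))
        return forwarding[i]

    new_roots = [copy(r) for r in roots]
    scan = 0
    while scan < len(to_space):           # Cheney scan: forward children
        to_space[scan] = [copy(child) for child in to_space[scan]]
        scan += 1
    return to_space, new_roots

# Four objects; only 0 and its child 1 are reachable, so far less than a
# full space's worth actually gets copied.
heap = [[1], [], [0], []]
to_space, new_roots = collect(heap, [0])
```

Since the to-space only ever needs room for what survives, two equal spaces bound even the worst case where everything is live.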
screwlisp
in reply to Kent Pitman
@kentpitman @cdegroot @ramin_hal9001
Cees de Groot, Kent Pitman, Ramin Honary, screwlisp #commonLisp #lisp user interfaces and the ages, #climate
Lispy Gopher Climate extras (toobnix)
LdBeth reshared this.