I'm getting closer and closer to passing the entrance exam for this job. I also learned a little tidbit about why they're always hiring: apparently, "AI-generated" transcripts are inadmissible in US courts.* As much as they might like to, they legally can't replace this job with AI.
Combine that with the very small overlap between people capable of passing this exam and people actually willing to jump through those hurdles, and you have a glut of available work.
* At least for now. Give the techbro billionaire class time to keep eroding the US legal system, and who knows?
(Following thread was prompted by people pointing out that the Bluesky dev team seems heavily into vibe-coding now and originally posted on said vibe-coded Bluesky platform that is now constantly failing.)
Over the past year, every single time one of the apps or services I use has suddenly become less reliable and more buggy, I never have to look far to find the "Claude is amazing and now writes most of my code" post from the devs involved.
@screwlisp is having some site connectivity problems so asked me to remind everyone that we'll be on the anonradio forum at the top of the hour (a bit less than ten minutes hence) for those who like that kind of thing:
He'll also be monitoring LambdaMOO at "telnet lambda.moo.mud.org 8888" for those who do that kind of thing. There are also Emacs clients you should get if you're REALLY using telnet.
Topic for today, I'm told, may include the climate, the war, the oil price hikes, some rambles I've recently posted on CLIM, and the book by @cdegroot called The Genius of Lisp, which we'll also revisit again next week.
cc @ramin_hal9001
At the end of @screwlisp's show, in the discussion of @cdegroot's book, @ramin_hal9001 was talking about continuations. I wanted to make a random point that isn't often made about Lisp that I think is important.
I often do binary partitions of languages (like the static/dynamic split, but more exotic), and one of them is whether they are leading or following, let's say. There are some aspects in which Scheme is a follower, not a leader, in the sense that it tends to eschew some things that Common Lisp does for a variety of reasons, one of them being "we don't know how to compile this well". There is a preference for a formal semantics that is very tight and in which everything is well understood. It is perhaps fortunate that Scheme came along after garbage collection was well worked out and did not seem to fear that it would be a problem, but I would say that Lisp had already led on garbage collection.
The basic issue is this: should a language incorporate things that are maybe not really well understood, just because people need to do them, on the assumption that it might as well standardize the 'gesture' (to use the CLIM terminology) or 'notation' (to use the more familiar term) for saying you want to do that thing?
Scheme did not like Lisp macros, for example, and only adopted macros when hygienic macros were worked out. Lisp, on the other hand, started with the idea that macros were just necessary and worried about the details of making them sound later.
Scheme people (and I'm generalizing to make a point here, with apologies for casting an entire group with a broad brush that is probably unfair) think Common Lisp macros more unhygienic than they actually are, because they don't give enough credit to things like the package system, which Scheme does not have, and which protects CL users from collisions a lot more than they acknowledge. They also don't fairly account for the degree to which Lisp-2 protects against the most common scenarios, ones that would happen all the time in Scheme if it had a symbol-based macro system. So CL isn't really as much at risk these days, though it was a bigger issue before packages. The point is that Lisp decided it would figure out how to tighten things later, because macros were too important to leave out, whereas Scheme held back the design until it knew.
But, and this is where I wanted to get to, Scheme led on continuations. That's a hard problem, and while it's possible, it's still difficult. I don't quite remember whether the original language feature had fully worked through all the tail-call situations in the way it ultimately did. But it was brave to say that full continuations could be made adequately efficient.
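For concreteness, the "rest of the computation" that a continuation captures can be made visible by writing code in continuation-passing style, which is the standard way compilers model the feature. A minimal sketch (in Python for neutrality; the function names are invented and not from any Scheme implementation):

```python
# Continuation-passing style (CPS): each function takes an extra
# argument k, the continuation, and calls it instead of returning.
# This makes "the rest of the computation" a first-class value;
# Scheme's call/cc is the operator that hands that value to you.

def add_cps(a, b, k):
    return k(a + b)

def square_cps(x, k):
    return k(x * x)

# (a + b)^2 in CPS: the continuation of the addition is "square it,
# then do whatever the caller wanted".
def sum_then_square(a, b, k):
    return add_cps(a, b, lambda s: square_cps(s, k))

result = sum_then_square(3, 4, lambda v: v)  # identity continuation
```

Making every call site pass its continuation explicitly is what gives the implementation the freedom (and the burden) of keeping those captured computations alive indefinitely.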
And the Lisp community in general (and here I will include Scheme, though on other days I think these communities are sufficiently different that I would not) has collectively been much braver and more leading than many languages, which only grudgingly allow functionality that they know how to compile.
In the early days of Lisp, the choice to do dynamic memory management was very brave. It took a long time to make GCs efficient, and generational GC was what I think finally made people believe this could be done well in large address spaces. (In small address spaces, it was possible because touching all the memory to do a GC did not introduce thrashing if data was "paged out". And on modern hardware, memory is cheap, so size is not always a per se issue.)
But there was an intermediate time in which lots of memory was addressable but not fully realized as RAM, only virtualized, and GC was a mess in that space.
The Lisp Machines had 3 different unrelated but co-resident and mutually usable garbage collection strategies that could be separately enabled, 2 of them using hardware support (typed pointers) and one of them requiring that computation cease for a while because the virtual machine would be temporarily inconsistent for the last-ditch thing that particular GC could do to save the day when otherwise things were going to fail badly.
For a while, dynamic memory management would not be used in real time applications, but ultimately the bet Lisp had made on it proved that it could be done, and it drove the doing of it in a way that holding back would not have.
My (possibly faulty) understanding is that the Java GC was made to work by at least some displaced Lisp GC experts, for example. But certainly the choice to make Java be garbage collected probably derives from the Lispers on its design team feeling it was by then a solved problem.
This aspect of languages' designs, whether they lead or follow, whether they are brave or timid, is not often talked about. But I wanted to give the idea some air. It's cool to have languages that can use existing tech well, but cooler, I personally think, to see designers consciously driving the creation of such tech.
@nosrednayduj
First, thanks for raising that example. It's interesting and contains info I hadn't heard.
In a way, it underscores my point: that for a while, it was an open question whether we could implement GC, but a bet was made that we could.
You could view that as saying they only implemented part of Lisp, and that the malloc stuff was a stepping out of paradigm, an admission the bet was failing for them in that moment. Or you could view it as a success, saying that even though some limping was required of Lisps while we refined the points, it was done.
As I recall, there was some discussion of adding a GC function. At the time, the LispM people probably said "which GC would it invoke" and the Gensym people probably said "we don't have one". That was the kind of complexity that the ANSI process turned up and it's probably why there is no GC function. (There was one in Maclisp that invoked the Mark/Sweep GC, but the situation had become more complicated.)
Also, as an aside, a personal observation about the process: With GC, as with other things like buffered streams, one of the hardest things to get agreement on was something where one party wanted a feature and another said "we don't have that, I'd have to make it a no-op". Making it a no-op was not a lot of implementation work. Just seeing and discarding an arg. But it complicated the story that was told, and vendors didn't like it, so they pushed back even though of all the implementations they had the easiest path (if you didn't count "explaining" as part of the path).
@nosrednayduj
And, unrelated, another reference I made in the show was to Clyde Prestowitz and his book The Betrayal of American Prosperity.
goodreads.com/book/show/810439…
Also an essay I wrote that summarizes a key point from it, though not really related to the topic of the show. I mention it just because that point will also be interesting maybe to this audience on the issue of capitalism if not on the specific economic issue we were talking about tonight:
netsettlement.blogspot.com/201…
@nosrednayduj
Also Naomi Klein's book The Shock Doctrine, very politically relevant this week, traces a lot of political ills to Milton Friedman and his ideas.
goodreads.com/book/show/123730…
your notes on continuations are interesting. I do a lot of Kotlin programming these days, and one of the features it adds on top of Java is continuations (they call them suspend functions). However, unlike Scheme, you can only call suspend functions from other suspend functions, leading to two different worlds: the continuation-supported one and the regular one.
I measured a 30% performance hit when changing code to use suspend functions instead of regular functions. Nevertheless, this has not stopped people from using them for everything.
@loke ooh, that is interesting, thanks! I did not know that Kotlin also had that feature (in a limited way).
Yes, the performance hit probably comes from copying or restoring the stack. For small stacks this is trivial, but continuations are often useful when computing recursive functions over very large data structures, and you usually have very large stacks for those kinds of computations.
Delimited continuations (DCs) can help with that problem, apparently. And the API for DCs also happens to make them more composable with each other, since you can kind-of unfreeze a computation inside of another frozen computation.
That might be why Kotlin has those restrictions on continuations.
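As a loose illustration of the delimited idea: Python generators behave like one-shot delimited continuations, freezing only the frame up to the generator boundary rather than the whole stack. This is an analogy, not Scheme's reset/shift, and the names below are invented:

```python
# A generator's `yield` freezes the computation up to a delimiter
# (the generator frame itself); send() resumes it with a value.
# Only that one frame is saved, not the entire call stack, which is
# the essential cost difference from full continuations.

def ask_twice():
    x = yield "first?"   # suspend point 1
    y = yield "second?"  # suspend point 2
    return x + y         # delivered via StopIteration.value

gen = ask_twice()
prompt1 = next(gen)       # run up to the first yield
prompt2 = gen.send(10)    # resume with an answer, freeze again
try:
    gen.send(32)          # final resume
except StopIteration as stop:
    total = stop.value    # the generator's return value
```

The one-shot restriction (a generator cannot be resumed from the same point twice) is also roughly the restriction that makes such continuations cheap to implement.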
I'm glad you mentioned delimited continuations. They go overlooked a lot.
I didn't research it too much, but I think the reason is that when you have a function marked as suspend, it will always pass along an implicit extra argument which is the continuation. I also believe there is a dispatch block at the beginning of a function that can suspend that looks at the continuation to jump to the right part of the code. This is because code running on the JVM cannot directly manipulate the stack.
I don't know how it's implemented when you compile Kotlin to other targets. The semantics are the same, but the underlying implementation may be different.
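The dispatch-block lowering described above can be sketched by hand. This is a guess at the shape of the transformation, not actual Kotlin compiler output, and all names are invented:

```python
# Sketch of lowering a suspending function to a state machine: the
# "continuation" is just an object recording which resume label we're
# at, plus any locals that must survive suspension. No stack is ever
# manipulated, which is why this works on the JVM.

class Continuation:
    def __init__(self):
        self.label = 0       # which resume point to jump to
        self.saved_a = None  # a local that survives across suspension

SUSPENDED = object()  # sentinel meaning "we suspended; call me again"

def fetch_and_add(cont, resumed_value=None):
    # The dispatch block: jump to the right part of the code.
    if cont.label == 0:
        cont.label = 1
        return SUSPENDED           # pretend we kicked off async work
    if cont.label == 1:
        cont.saved_a = resumed_value
        cont.label = 2
        return SUSPENDED           # second suspension point
    if cont.label == 2:
        return cont.saved_a + resumed_value

cont = Continuation()
fetch_and_add(cont)            # suspends
fetch_and_add(cont, 40)        # resumes, suspends again
result = fetch_and_add(cont, 2)
```

The implicit extra argument and the per-call dispatch are plausibly where the measured overhead comes from: every suspend-capable function pays for them even on the fast path.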
yes, Scheme led on continuations before it was a well-established idea, and I think there is some regret about that because of the difficulties you alluded to, especially in compiling efficient code. Nowadays the common wisdom is that delimited continuations, which I believe are implemented by copying only part of the stack, are better in every way. I have no strong opinions on the issue; I just thought it was interesting how Scheme solved the problems of optimizing tail recursion and "creating actors", i.e. capturing closures, and both of these things involve stack manipulation, which naturally leads into the idea of continuations.
As a Haskeller I definitely appreciate the study of programming language theory, and how much of Haskell is built on the work of Lisp. The Haskell team's many innovations include asking questions like, "what if everything was lazy by default?" Or, "what if we abolish mutating variables and force the programmer to pop the old value and push the new value on the stack every time?" Or "what if tail recursion was the only way to loop?" As it turns out, this gives an optimizing compiler the freedom to optimize code very aggressively, and can result in very efficient binaries. Oftentimes, both programmers and language implementors can do a lot more when constrained to use fewer features.
But deciding which features to keep and which to remove requires a lot of wisdom and experience. So the Haskell people could only have felt comfortable asking those questions after garbage collection and closures had become well-established practice, and we can thank the work of the Lisp community for those contributions.
Generational GC changes the way you program and it's not *just* that it's efficient.
We used MIT-Scheme (which, by the early 90s was showing its age). We did all manner of weird optimizing to use memory efficiently. Lots of set! to re-use structure where possible. Or (map! f list) -- same as (map...) but with set-car! to modify in-place -- because it made a HUGE difference not recreating all of those cons cells => bumps memory use => next GC round is that much sooner (and then everything STOPS, because Mark & Sweep). Also stupid (fluid-let ...) tricks to save space in closures.
We were writing Scheme as if it were C because that was how you got speed in that particular world.
1/3
And then Bruce Duba joined the group (had just come from Indiana).
"Guys, you're doing this ALL WRONG",
"Yeah, we know already. It's ugly, impure, and sucks. But it's faster, unfortunately",
"No, you need a better Scheme; you should try Chez".
...and, to be sure, just that much *was* a significant improvement. Chez was much more actively maintained, had a better repertoire of optimizations, etc...
... but the real eye-opener was what happened when we ripped out all of the set! and fluid-let code. That's when we got the multiple-orders-of-magnitude speed improvement.
2/3
See, setq/set! is a total disaster for generational GC. It bashes old-space cells to point to new-space; the premise of generational GC being that this mostly shouldn't happen. The super-often new-generation-only pass is now doing a whole lot of old-space traversal because of all of those cells added to the root set by the set! calls, ... which then loses most of the benefit of generational GC.
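The cost being described can be sketched with a toy write barrier and remembered set. The names are invented, and real collectors use card tables and other refinements, but the shape is this:

```python
# Toy write barrier for a generational GC: every mutation that makes
# an old-generation cell point into new space must record that cell
# in a "remembered set", which the minor (new-generation-only) pass
# then scans as extra roots. Lots of set! on old data => a big
# remembered set => the cheap minor pass quietly turns back into an
# old-space traversal.

class Cell:
    def __init__(self, value, generation):
        self.value = value
        self.generation = generation  # 0 = new space, 1 = old space

remembered_set = set()

def write_barrier(target, value):
    """The check a generational collector wraps around every mutation."""
    target.value = value
    if isinstance(value, Cell) and target.generation > value.generation:
        # old -> new pointer created: the minor GC must now visit this.
        remembered_set.add(target)

old = Cell(None, generation=1)
new = Cell("fresh", generation=0)
write_barrier(old, new)   # mutating old space: gets remembered
write_barrier(new, "x")   # new-space mutation: costs nothing extra
```

Side-effect-free code simply never triggers the expensive branch, which is the mechanical reason it wins under this design.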
(fluid-let and dynamic-wind also became way LESS cheap, mainly due to missing multiple optimization opportunities)
In short, with generational GC, straightforward side-effect-free code wins. It took a while for me to recalibrate my intuitions re what sorts of things were fast/cheap vs not.
3/3
There were other weirdnesses as well.
Even if GC saves you the horror of referencing freed storage, or freeing stuff twice, you still have to worry about memory leaks; moreover, dropping references as fast as you can matters.
With copying GC, leaks are useless shit that has to be copied -- yes it eventually ends up in an old generation but until then it's getting copied -- and copying is where generational GC is doing work, and it's stuff unnecessarily surviving to the medium term that hurts you the most (generational GC *relies* on stuff becoming garbage as quickly as possible)
And so, tracking down leaks and finding places to put in weak pointers started mattering more...
4/3
@kentpitman @cdegroot @ramin_hal9001
5? maybe for mark&sweep
but I can't see how more than 2 would ever be necessary for a copying GC. Once you have enough space to copy everything *to* (on the off-chance that absolutely everything actually *needs* to be copied), you're basically done...
... and if you're following the usual pattern where 90% of what you create becomes garbage almost immediately, you can get by with far less.
@wrog
> but I can't see how more than 2 would ever be necessary for a copying GC
It's not "necessary", it's "to make GC performance a negligible percentage of overall CPU".
It was about a theoretical worst case as I recall, certainly not about one particular algorithm.
And IIRC it was actually a factor of 7 -- 5 is merely a good mnemonic which may be close enough. (e.g. perhaps 5-fold keeps overhead down to 10-20% rather than 7's 1%, although I'm making it up to give the flavor -- I haven't read the book for 10-20 years)
But see the book (may as well use the second edition) if and when you care; it's excellent. Mandatory I would say, for anyone who wants to really really understand all aspects of garbage collection, including performance issues.
@dougmerritt
Is this the book you're talking about?
(sorry this was after my time in Scheme Land, hadn't heard of it before)
@wrog
Yes, that's it. (And every time I've checked, it [in 1st & 2nd edition] is the only book ever dedicated to purely Garbage Collection)
@wrog Haskell was first invented in 1990 or '91-ish, and at that time they had already started to ask questions like "what if we just ban set! entirely": abolish mutable variables, make everything lazily evaluated by default. If you have been programming in C/C++ for a while, the idea that abolishing mutable variables would lead to a performance increase seems very counter-intuitive.
But for all the reasons you mentioned about not forcing a search for updated pointers in old-generation GC heaps, and also because this forces the programmer to write source code that is essentially already in static single assignment (SSA) form, which is nowadays a pass most compilers perform prior to register allocation anyway, it allows more aggressive optimization and results in more efficient code.
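A tiny illustration of the SSA point: the two functions below compute the same thing, but the second never rebinds a variable, so every value has exactly one definition. This is a sketch only (Python rather than Haskell), with invented names:

```python
# The SSA idea in miniature. Compilers rewrite the first form into
# something like the second; a language without mutable variables
# hands them the second form for free.

def running_total_mutable(xs):
    total = 0
    for x in xs:
        total = total + x   # the same name is redefined every pass
    return total

def running_total_ssa(xs):
    # Single assignment: the loop-carried value becomes an explicit
    # parameter of a helper, playing the role phi-nodes play inside
    # a compiler. Nothing is ever reassigned.
    def go(rest, acc):
        if not rest:
            return acc
        return go(rest[1:], acc + rest[0])
    return go(list(xs), 0)

assert running_total_mutable([1, 2, 3]) == running_total_ssa([1, 2, 3])
```

Because each name has one definition, the optimizer can reorder, duplicate, or eliminate computations without ever asking "did something overwrite this in between?"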
@wrog @dougmerritt
The LispM did a nice thing (at some tremendous cost in hardware, I guess, but useful in the early days) by having various kinds of forwarding pointers for this. At least you knew you were going to incur overhead, though, and pricing it properly at least said there was a premium for not side-effecting and tended to cause people to not do it. And the copying GC could fix the problem eventually, so you didn't pay the price forever, though you did pay for having such specific hardware or for cycles in systems trying to emulate that which couldn't hide the overhead cost. I tend to prefer the pricing model over the prohibition model, but I see both sides of that.
If my memory is correct (so yduJ or wrog please fix me if I goof this): MOO, as a language, is in an interesting space in that actual objects are mutable but list structure is not. This observes that it's very unlikely that you allocated an actual object (what CL would call standard class, but the uses are different in MOO because all of those objects are persistent and less likely to be allocated casually, so less likely to be garbage the GC would want to be involved in anyway).
I always say "good" or "bad" is true in a context. It's not true that side effect is good or bad in the abstract, it's a property of how it engages the ecology of other operations and processes.
And, Ramin, the abolishing of mutable variables has other intangible expressional costs, so it's not a simple no-brainer. But yes, if people are locked into a mindset that says such changes couldn't improve performance, they'd be surprised. Ultimately, I prefer to design languages around how people want to express things, and I like occasionally doing mutation even if it's not common, so I like languages that allow it and don't mind if there's a bit of a penalty for it or if one says "don't do this a lot because it's not aesthetic or not efficient or whatever".
To make a really crude analogy, one has free speech in a society not in order to say the ordinary things one needs to say. Those things are favored speech regardless, because people want a society where they can do ordinary things. Free speech is all about preserving the right to say things that are not popular. So it is not accidental that there are controversies about it. But it's still nice to have it in those situations where you're outside of norms for reasonable reasons. :)
> Ultimately, I prefer to design languages around how people want to express things, and I like occasionally doing mutation even if it's not common, so I like languages that allow it and don't mind if there's a bit of a penalty for it or if one says "don't do this a lot because it's not aesthetic or not efficient or whatever".
Me too -- although I remain open to possibilities. Usually such want me to switch paradigms, though, not just add to my toolbox.
"the abolishing of mutable variables has other intangible expressional costs, so it's not a simple no-brainer."
@kentpitman I prefer the term "constraint" to "expressional cost," because constraints are the difference between a haiku and a long-form essay. For example, I am very curious what the code for the machine learning algorithm that trains an LLM would look like expressed as an APL program. I don't know, but I get the sense it would be a very beautiful two or three lines of code, as opposed to the same algorithm expressed in C++, which would probably be a hundred or a thousand lines of code.
Not that I disagree with you, on the contrary, that is why I was convinced to switch to Scheme as a more expressive language than Haskell. I like the idea of starting with Scheme as the untyped lambda calculus, and then using it to define more rigorous forms of expression, working your way up to languages like ML or Haskell, as macro systems of Scheme.
I'm not 100% positive I understand your use of constraint here, but I think it is more substantive than that. If you want to use the metaphor you've chosen, a haiku reaches close to theoretical minimum of what can be compressed into a statement, while a long-form essay does not. This metaphor is not perfect, though, and will lead astray if looked at too closely, causing an excess focus on differential size, which is not actually the key issue to me.
I won't do it here, but as I've alluded to more than once I think on the LispyGopher show, I believe that it is possible to rigorously assign cost to the loss of expression between languages.
That is, that a transformation of expressional form is not, claims of Turing equivalence notwithstanding, cost-free both in terms of efficiency and in terms of expressional equivalence of the language. It has implications (positive or negative) any time you make such changes.
Put another way, I no longer believe in Turing Equivalence as a practical truth, even if it has theoretical basis.
And I am pretty sure the substantive loss can be expressed rigorously, if someone cared to do it, but because I'm not a formalist, I'm lazy about sketching how to do that in writing, though I think I did so verbally in one of those episodes.
It's in my queue to write about. For now I'll just rest on bold claims. :) Hey, it got Fermat quite a ways, right?
But also, I had a conversation with ChatGPT recently where I convinced it of my position and it says I should write it up... for whatever that's worth. :)
cc @screwlisp @wrog @dougmerritt @cdegroot
> That is, that a transformation of expressional form is not, claims of Turing equivalence notwithstanding, cost-free both in terms of efficiency and in terms of expressional equivalence of the language. It has implications (positive or negative) any time you make such changes.
I hope everyone here is already clear that "expressiveness" is something that comes along on *top* of a language's Turing equivalence.
Indeed Turing Machines (and pure typed and untyped lambda calculus and SKI combinatory calculus and so on) are all *dreadful* in terms of expressiveness.
And for that matter, expressiveness can be on top of Turing incomplete languages. Like chess notation; people argue that the algebraic notation is more expressive than the old descriptive notation. (People used to argue in the other direction)
[..it's possible I'm missing the point, but I'm going to launch anyway...]
I believe trying to define/formalize "expressiveness" is roughly as doomed as trying to define/formalize "intelligence". w.r.t. the latter, there's been nearly a century of bashing on this since Church and Turing and we're still no further along than "we know it when we see it"
(and I STILL think that was Turing's intended point in proposing his Test, i.e., if you can fool a human into thinking it's intelligent, you're done; that this is the only real test we've ever had is a testament to how ill-defined the concept is...)
1/11
The point of Turing equivalence is that even though we have different forms for expressing algorithms, and there are apparently vast differences in comprehensibility, they all inter-translate, so any differences in what can ultimately be achieved by the various forms of expression is an illusion. We have, thus far, only one notion of computability.
(which is not to say there can't be others out there, but nobody's found them yet)
2/11
I believe expressiveness is a cognition issue, i.e., having to do with how the human brain works and how we learn. If you train yourself to recognize certain kinds of patterns, then certain kinds of problems become easier to solve.
... and right there I've just summarized every mathematics, science, and programming curriculum on the planet.
What's "easy" depends on the patterns you've learned. The more patterns you know, the more problems you can solve. Every time you can express a set of patterns as sub-patterns of one big super-pattern small enough to keep in your head, that's a win.
I'm not actually sure there's anything more to "intelligence" than this.
3/11
I still remember trying to teach my dad about recursion.
He was a research chemist. At some point he needed to do some hairy statistical computations that were a bit too much for the programmable calculators he had in his lab. Warner-Lambert research had just gotten some IBM mainframe -- this was early 1970s, and so he decided to learn FORTRAN -- and he became one of their local power-users.
Roughly in the same time-frame, 11-year-old me found a DEC-10 manual one of my brothers had brought home from college. It did languages.
Part 1 was FORTRAN.
Part 2 was Basic.
But it was last section of the book that was the acid trip.
Part 3 was about Algol.
4/11
This was post-Algol-68, but evidently the DEC folks were not happy with Algol-68 (I found out later *nobody* was happy with Algol-68), so ... various footnotes about where they deviated from the spec; not that I had any reason to care at that point.
I encountered the recursive definition of factorial and I was like,
"That can't possibly work."
(the FORTRAN and Basic manuals were super clear about how each subprogram has its dedicated storage; calling one while it was still active is every bit an error like dividing by zero. You're just doing it wrong...)
5/11
Then there was the section on call-by-name (the default parameter passing convention for Algol)
... including a half page on Jensen's Device that, I should note, was presented COMPLETELY UN-IRONICALLY because this was still 1972,
as in, "Here's this neat trick that you'll want to know about."
And my reaction was, "WTFF, why???"
and also, "That can't possibly work, either."
Not having any actual computers to play with yet, that was that for a while.
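For readers who haven't met it: Jensen's Device can be reconstructed with closures standing in for Algol's call-by-name thunks. A sketch with invented names, not the historical notation:

```python
# Jensen's Device: Algol's call-by-name passes unevaluated
# expressions that are re-evaluated on every use. Passing both the
# loop variable and the term expression "by name" lets one SUM
# routine compute any series -- the term sees each new value of i.

class ByName:
    """A mutable box standing in for an Algol name parameter."""
    def __init__(self, value=0):
        self.value = value

def algol_sum(i, lo, hi, term):
    # `term` is a thunk closed over `i`; assigning i.value and then
    # re-calling term() is exactly the trick Jensen exploited.
    total = 0
    for n in range(lo, hi + 1):
        i.value = n
        total += term()
    return total

i = ByName()
squares = algol_sum(i, 1, 4, lambda: i.value * i.value)  # 1+4+9+16
```

Which is to say: the "neat trick" is the caller and callee deliberately sharing a variable through the parameter-passing mechanism itself, and "that can't possibly work" is a perfectly reasonable first reaction.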
Some years later, I got to college and had my first actual programming course...
6/11
... in Pascal. And there I finally learned about recursion and was able to get used to using it.
Although I'd say I didn't *really* get it until the following semester taking the assembler course and learning about *stacks*.
It was like recursion was sufficiently weird that I didn't really want to trust it until/unless I had a sense of what was actually happening under the hood,
And THEN it was cool.
7/11
To the point where, the following summer as an intern, I was needing to write a tree walk, and I wrote it in FORTRAN -- because that's what was available at AT&T Basking Ridge (long story) -- using fake recursion (local vars get dimensioned as arrays, every call/return becomes a computed goto, you get the idea...) because I wanted to see if this *could* actually be done in FORTRAN, and it could, and it worked, and there was much rejoicing; I think my supervisor (who, to be fair, was not really a programmer) blue-screened on that one.
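The fake-recursion trick generalizes: any recursive tree walk can be mechanically flattened into a loop over an explicit stack. A sketch in Python rather than FORTRAN (no computed gotos required here, since we get real data structures):

```python
# A tree is (value, left, right) or None. The first walk uses the
# language's call stack; the second manages its own stack array,
# which is essentially what the FORTRAN version had to do by hand.

def sum_tree_recursive(node):
    if node is None:
        return 0
    value, left, right = node
    return value + sum_tree_recursive(left) + sum_tree_recursive(right)

def sum_tree_explicit(node):
    stack = [node]       # our hand-managed stand-in for the call stack
    total = 0
    while stack:
        n = stack.pop()
        if n is None:
            continue
        value, left, right = n
        total += value
        stack.append(left)   # "pending calls" become pushed frames
        stack.append(right)
    return total

tree = (1, (2, None, None), (3, (4, None, None), None))
assert sum_tree_recursive(tree) == sum_tree_explicit(tree)
```

In FORTRAN the stack would be a dimensioned array plus an index, and each "return" a computed goto back to the saved label, but the control structure is the same.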
And *then* I tried to explain it all to my dad...
8/11
@dougmerritt
And, to be fair, by then, he had changed jobs/companies, moved up to the bottom tier of management, wasn't using The Computer anymore, so maybe the interest had waned.
But it struck me that I was never able to get past showing him the factorial function and,
"That can't possibly work."
He had basically accepted the FORTRAN model of things and that was that.
Later, when he retired he got one of the early PC clones and then spent vast amounts of time messing with spreadsheets.
9/11
@dougmerritt
You may say that untyped lambda calculus and SKI combinatory calculus and so on are all *dreadful* in terms of expressiveness, and I will probably agree,
... but it also seems to me that Barendregt got pretty good at it.
I'm also guessing TECO wouldn't have existed without there being people who managed to wrap their brains around it and found it to be expressive and concise. I myself never got there (also never really tried TBH),
... but at the same time, it's *still* the case that if I need to write a one-liner to do something, chances are, I'll be doing it in Perl, and I've heard people complain about *that* language being essentially write-only line-noise.
10/11
@dougmerritt
To be sure, my Perl tends to be more structured.
On the other hand, I also hate Moose (Perl's attempt at CLOS) and have thus far succeeded in keeping that out of my life.
I also remember there being a time in my life when I could read and understand APL.
But if you do think it's possible to come up with some kind of useful formal definition/criterion for "expressiveness", go for it.
I'll believe it when I see it.
11/11
@wrog
Yes, thanks, I liked reading that.
... and, crap, I messed up the threading (it seems 9 and 10 are siblings, so you'll miss 9 if you're reading from here. 9 is kind of the point. Go back to 8.)
(I hate this UI. If anybody's written an emacs fediverse-protocol thing for doing long threaded posts please point me to it, otherwise it looks like I'm going to have to write one ...)
π/11
It's a bit low tech, but if you notice in time that other people haven't attached a ton of other stuff to it, just save the text, delete the old post, and attach the new. Someone could make that a single operation in a client, and even have it send mail to the people who attached replies, saying here's your text if you want to attach it to the new post. Or you could attach your own post with their text in it. Low-tech as it is, existing tools offer us a lot more options than people sometimes see. I'm sure you could have figured this out, and are more fussing at the tedium, but just for fun I'm going to cross-reference a related but different scenario...
@wrog
> (I hate this UI. If anybody's written an emacs fediverse-protocol thing for doing long threaded posts please point me to it, otherwise it looks like I'm going to have to write one ...)
There are *so* many programmers using variants of this UI that you would think someone would have addressed it by now.
But you never know, maybe not. Certainly everyone who does multi-posts seems to be struggling with doing it by hand, from my point of view, so that would seem to cry out for the need for some fancier textpost-splitting auto-sequence-number thingie, in emacs or command line or something.
Conceivably a web search would find the thing if it exists. I personally almost never do long posts, so I just grin and bear it when it comes up.
I think for protocol reasons it is necessary to try and connect up the thread using quote-posts, however any particular client understands that.
If you visit the topmost toot of the thread, you at least get the whole (cons) tree.
@dougmerritt @wrog @kentpitman @ramin_hal9001 @cdegroot
Y'all are misunderstanding. Due to the error-prone nature of labelling a series of posts, in one view he appeared to skip post 9, with 8 linking to 10.
Another view simply showed the correct sequence.
Regardless, anyone who has written e.g. "3/n" on a post is already implicitly indicating a desire for automation.
@wrog @kentpitman @ramin_hal9001 @cdegroot
Well there you go. So wrog just needs to find a list of such clients to choose the most suitable one -- if any.
@dougmerritt
what I *currently* do is compose inside Emacs (the *only* non-painful alternative for long posts),
then manually decide how I'm going to break it up -- which actually has some literary content to it, because in some cases, you *do* want to arrange the breaks for maximal dramatic effect
(generalized How to Use Paragraphs)
Problem 1 being that emacs doesn't count characters the same way as mastodon does, and I don't find out until I've cut&pasted part n, which doesn't happen until I've already posted parts 1..n-1
Problem 2 being having to cut&paste in the first place when I should just be able to hit SEND (which then has to be from within emacs).
given that I once-upon-a-time wrote a MAPI client for the sake of being able to post to Microsoft Exchange forums in rich text using courier font, in theory, I should be able to do this.
... but that would mean I'd have to Learn Fediverse. crap.
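A side note on Problem 1: Mastodon counts every link as a flat 23 characters (and discounts the domain part of @user@domain mentions), which is one reason a raw Emacs character count disagrees. A rough sketch of an approximation -- the function name and regexp are mine, and mention handling is omitted:

```elisp
;; Approximate Mastodon's character counting: every URL costs a flat
;; 23 characters regardless of its real length.  (The domain part of
;; @user@domain mentions, also discounted by Mastodon, is ignored here.)
(defun my/toot-length (str)
  (let ((count (length str))
        (start 0))
    (save-match-data
      (while (string-match "https?://[^ \t\n]+" str start)
        ;; swap the URL's real length for the flat cost of 23
        (setq count (+ count (- 23 (- (match-end 0) (match-beginning 0)))))
        (setq start (match-end 0))))
    count))
```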
hmm. Anyone have experience with
codeberg.org/martianh/mastodon…
i.e., is this the best one, or is this just the Guy Who Grabbed the Name first and did the best SEO twigging? (I hate that Google search has gotten so enshittified)
(also, thanks, LazyWeb!)
unforch mastodon.el hasn't yet implemented chaining of new toots. if someone wants to add it though, by all means. (the issue has been raised before, but as usual no one was willing to get their hands dirty.)
edit: codeberg.org/martianh/mastodon…
Splitting a new toot up into threads
Hi there, thanks to all involved for mastodon.el - it's really nice! I waffle too much when I write. Inevitably this means I need to restructure toots into threads. It'd be really nice if this could be automated, since it's a common usecase. (Codeberg.org)
Seems like the universe is calling on you to fix it!
With some apologies to legends:
(require 'cl-lib)
(defun chained-toot (lim str)
  ;; Split STR into LIM-sized pieces, reserving 8 characters of each
  ;; toot for the "\nN/M" numbering suffix.
  (let* ((space (- lim 8))
         (end (length str))
         ;; ceiling division: elisp's (/ x y) truncates on integers,
         ;; so the original (+ 1 (ceiling (/ ...))) produced an extra
         ;; empty piece whenever STR split evenly.
         (span (ceiling end space)))
    (cl-loop
     for idx from 1 to span
     for start from 0 by space
     for piece = (cl-subseq str start (min (+ start space) end))
     collect (format "%s\n%d/%d" piece idx span))))
@dougmerritt @mousebot @wrog @cdegroot @ramin_hal9001 @kentpitman
#elisp
screwlisp reshared this.
You forgot to change 'space' in a complex inscrutable way at each step.
I thought about it, but what if the chain of toots are all non-whitespace-characters anyway. So I decided not to try. Now, cooking in heuristically "proper" justification anyway, you say... But that way madness lies.
@mousebot @wrog @cdegroot @ramin_hal9001 @kentpitman
@dougmerritt @mousebot
figuring out how to split up a toot is solving the wrong problem. In my cases I *know* how I want to split it up.
what I want is the ability to create a sequence of posts, edit them all in place, shuffle text around + attach media and polls wherever I want, get them all looking right,
and then send them all in one fell swoop.
I think the key concept is being able to compose a reply to a draft.
i.e., In-Reply-To is a buffer rather than a URL
Posting the reply automatically posts the In-Reply-To **first**. And likewise for longer chains.
Make that work in a reasonable way, and everything else follows.
(I'm up to 5000 chars in my draft reply on codeberg...)
screwlisp reshared this.
What I do for that is craft one single long post. Because it's not like there would be a character limit, right? That would be silly.
CC: @screwlisp@gamerplus.org @dougmerritt@mathstodon.xyz @mousebot@todon.nl @cdegroot@mstdn.ca @ramin_hal9001@fe.disroot.org @kentpitman@climatejustice.social
@cy
Presumably you're joking. But different servers impose different character limits on us. My server, Mathstodon.xyz, has a limit of 1729 characters -- but for most servers it's significantly less.
And some may be larger. Yours, perhaps. But that doesn't help others.
@screwlisp @mousebot @cdegroot @ramin_hal9001 @kentpitman @wrog
I'm joking, yes. And also criticizing those servers. But honestly, I don't like writing super long posts. I feel like I'm ignoring people and not letting them get in a word edgewise. When I consider ranting about something in long form, I try to write a little bit at a time, and give people a chance to respond before writing more. Make it a conversation instead of an essay.
Or sometimes if I just don't care I'll splurge it all out and hope that nobody bothers reading that shit.
CC: @screwlisp@gamerplus.org @mousebot@todon.nl @cdegroot@mstdn.ca @ramin_hal9001@fe.disroot.org @kentpitman@climatejustice.social @wrog@mastodon.murkworks.net
It's Twitter Culture. We're all supposed to speak in sound bites. Dorsey or whoever decided if you can't fit it in 140 chars, it's not worth saying. Then at some point they doubled it and thought that was generous enough.
And now short posts are what people expect.
LJ never had a limit.
Hell, **Usenet** never had a limit and we were suffering under far worse resource constraints back then.
I miss Usenet.
@wrog
I miss Usenet, too. But it suffered a shortage of cat videos, so it had to go.
@cy @screwlisp @mousebot @cdegroot @ramin_hal9001 @kentpitman
@wrog @cy @mousebot @dougmerritt
I did not like Twitter's extension from 140 to 280. But, unrelated to that, I'm pretty sure they made a decision that URLs and @ references to people's handles should have a fixed, small cost, so as not to bias things in favor of short-named people or xrefs. I think that was very important. I was surprised that BlueSky did not copy it.
I don't think it had anything to do with SMS. Twitter was an internet service from the start and Dorsey's decision was a matter of taste/branding/marketing; the notion of a service that *only* allowed short posts was Something New.
Receiving a twitter feed as SMS texts on a cell phone would have been insane (and probably also expensive back then).
@wrog
> I don't think it had anything to do with SMS.
But you would be wrong. Don't mess with the bull, you'll get the horns. I was not only there, I worked in that space at that time.
(I did more than languages, compilers, and operating systems because I got bored periodically. I've also done OCR algorithms, to name another thing that doesn't seem to fit with the rest.)
> The idea was initially pitched as an βSMS for the webβ,...
> Why 140 characters? The limit was inspired by SMS text messaging, which capped messages at 160 characters. Twitter reserved 20 characters for the username, leaving 140 for the message itself.
blog.easybie.com/twitters-orig…
So it was at *least* inspired by SMS. But more than that, it gatewayed to and from SMS, so it retained the SMS limit of necessity to continue gatewaying -- for a while.
en.wikipedia.org/wiki/X_(socia…
Wikipedia stops just short of having an adequate history by itself.
Twitterβs Origin Story: How 140 (Later 280) Characters Changed Global Discourse - Easybie Blog
Twitter, now known as X, has profoundly reshaped the way people communicate, share news, and engage with global conversations. What began as a microblogging... Björn Ironside (Easybie Blog)
@dougmerritt @wrog @cy @mousebot
Yeah, maybe that's why I didn't win. I didn't think the story/poem was really that bad. It's a lot of information to pack into a short space, and SMS has no way to flag content warnings.
They also had a competition for stories of 150 words. I wrote an entry for that which I thought was really cool, and of a different nature. It didn't win either, though I was proud of it and think it at least reasonably could have. I've never published that one, though one day I suppose I should. It's still looking for a proper forum. 🙂
In high school English, we were required to write poetry, so I did a piece about sunshine and rainbows. The teacher took me aside and said, "look, you're trying too hard to be super positive, and the result is awful. Try again. This time, make it personally meaningful."
So I did, and being a troubled teenager, turned in a poem about flaming death or thereabouts. The teacher took me aside again, gave me an A on the assignment, and recommended I see a therapist.
π
If it's not one thing, it's another.
screwlisp reshared this.
@cy
> I feel like I'm ignoring people and not letting them get in a word edgewise
The world could use more people with that perception! Too many people do that and obviously don't notice they're doing that.
@screwlisp @mousebot @cdegroot @ramin_hal9001 @kentpitman @wrog
> But that way madness lies.
That's never stopped you before!
But ok.
No I began working on it when you said using :from-end and :test-not with search, but I have been frustrated by elisp not actually being common lisp. Also I did not have ielm installed, and it seems like ielm is not in melpa.
@mousebot @wrog @cdegroot @ramin_hal9001 @kentpitman
> frustrated by elisp not actually being common lisp
Oh right.
Well, I'm just kibitzing to give you a hard time, so just ignore me, carry on.
Also I was just confused, of course M-x ielm enters ielm
@wrog @dougmerritt
I stick to posting longer-than-two-posts content to my blog, which auto-toots a link, initial text, & any tags.
Or write to my phlog and then tell people to look there, but that's for devnotes/commentary on my Cyberhole.
Third option is to get onto an instance with a huge character limit, post giant walls of text.
> I stick to posting longer-than-two-posts content to my blog, ...
yeah, that's clearly the Right Thing, but has the disadvantage of not inflicting my text on people directly π
Also my blog hasn't gotten a whole lot of readership since the Russians killed Livejournal
hmm... is there a way to do a reply that is *also* a quote-post? I should try this.
mastodon.murkworks.net/@wrog/1…
(π+1)/11
To the point where, the following summer as an intern, I needed to write a tree walk, and I wrote it in FORTRAN -- because that's what was available at AT&T Basking Ridge (long story) -- using fake recursion (local vars get dimensioned as arrays, every call/return becomes a computed goto, you get the idea...) because I wanted to see if this *could* actually be done in FORTRAN, and it could, and it worked, and there was much rejoicing; I think my supervisor (who, to be fair, was not really a programmer) blue-screened on that one. And *then* I tried to explain it all to my dad...
8/11
@dougmerritt
(I'm guessing a mastodon UI that actually respects the use of surreal numbers to number multipost components and rearranges threads accordingly will be implemented approximately never.
… though I suppose it could turn out to be one of the more creative ways to get kicked off of the Fediverse …)
en.wikipedia.org/wiki/Surreal_…
(π/2)/11
@wrog
I support your right to free expression. π
@wrog your story about learning recursion in Algol reminded me of a story that was told about how Edsger Dijkstra influenced the Algol spec (through a personal conversation with, I think, John Backus) to include what we now understand as a "function call," by specifying the calling convention for how the stack must be modified before the subroutine call and after the return. I first heard the story in a YouTube video called "How the stack got stacked".
Regarding "expressiveness," you do make a good point about it (possibly) being fundamentally a subjective thing, like "intelligence." Personally, I never felt the restrictions in Haskell made it any less expressive as a language.
It is interesting how you can express some incredibly complex algorithms with very few characters in APL. Reducing function names to individual symbols applied as operators does make the language much more concise, but is "concise" a necessary condition for "expressive"?
@dougmerritt @kentpitman @screwlisp @cdegroot
How the stack got stacked
Kay Lack (YouTube)
Some clearly prefer concise, but nonetheless it is orthogonal to expressive.
'Expressive' != 'my favorite approach' -- ideally expressiveness can be determined objectively by human factors studies.
Failing that, sure, it's then subjective and subject to unbounded argument. π
@wrog
> I'm also guessing TECO wouldn't have existed without there being people who managed to wrap their brains around it and found it to be expressive and concise. I myself never got there (also never really tried TBH),
I'm one of those people, BTW. My proof is that I wrote a closed-loop stick figure ASCII animation juggling three balls.
As with any complex TECO thing, the resulting code was write-only -- and that was always the problem with even mildly powerful TECO macros.
Perl at its worst can be described as write-only line noise, yes, but in my experience is *STILL* better than TECO!
I am indeed fortunate to be able to stick with Emacs and Vi.
screwlisp reshared this.
As usual, Kent, an excellent description -- and I had forgotten some of those details, but yes, those were very real advantages.
Incidentally, I did *not* hate TECO at the time. I'm just remarking on some fairly objective issues with it.
But at the time, I really appreciated its power (even though for me this was after using vi and emacs).
Also, if one reads about its history in the literature, about how it originally worked in 8 KB with a sliding window on files, and then later versions added more and more commands and power, it all makes sense as an organic 4D creation.
Which is true of most software that one is sympathetic to.
@dougmerritt
it's not so much the editor itself, which, from your description doesn't seem that much worse than, say, what you had to do in IBM XEDIT to get stuff done,
but the macro system, specifically, which, as I understand it, (1) was an add-on, and (2) would have needed utility commands that one didn't use in the normal course of editing (e.g., for rearranging arguments + building control constructs) and that therefore were put on obscure characters, and *this* is where things went nuts…
I recall briefly viewing the TOPS-20 Emacs sources … it *did* look like somebody had whacked a cable out in the hall (time to hit refresh-screen)
… granted, I may be misremembering; this *was* 40 years ago…
@dougmerritt
I also recall '~' being an important character that showed up a lot in TECO for some reason,
and *normally* the only time you'd see sequences of ~'s in large numbers was when your modem was dying and your line was about to be dropped
and this may, at least partially, be where TECO's "line noise" reputation came from.
@wrog @dougmerritt
Funny, I couldn't recall "~" being important at all so had to go check. See codeberg.org/PDP-10/its/src/br… and while I do see a few uses of it, they seem very minor.
I read this into an Emacs editor buffer and did "M-x occur" looking for [~] and got these, all of which seem highly obscure. I think it is probably because in the early days there may have been a desire not to have case matter, so the upper and lower case versions of these special characters (see line 2672 below) may have once been equivalent, or there might have been some reason to reserve space for them to be equivalent in some cases. Remember that, for example, on a VT52, the CTRL key did not add a control bit but masked out all the bits beyond the 5th, so that CTRL+@ and CTRL+Space were the same (null) character. And sometimes tools masked out the 7th bit in order to uppercase something, which means that certain characters like these might have in some cases gotten blurred.
10 matches for "[~]" in buffer: tecord.1132
1270: use a F~ to compare the error message string against a
2017: case special character" (one of "`{|}~<rubout>").
2235: the expected ones, with F~.
2370: kept in increasing order, as F~ would say, or FO's binary
2672: also ("@[\]^_" = "`{|}~<rubout>").
4192:F~ compares strings, ignoring case difference. It is just
4446: this option include F^A, F^E, F=, FQ, F~, G and M.
4942: string storage space, but begins with a "~" (ASCII 176)
4977: character should be the rubout beginning a string or the "~"
4980: "~" or rubout, then it is not a pointer - just a plain number.
If I recall correctly, this also meant in some tools it was possible, if you were using a control-prefix like CTRL-^, to have CTRL-^ CTRL-@ be different than CTRL-^ @, because one of them might set the control bit on @ and the other on null, so there was a lot of aliasing. It even happened for regular characters: CTRL-^ CTRL-A would get you a control bit set on #o1, while CTRL-^ A would get you the control bit set on 65. Some of these worked very differently on the Knight TV, which used SAIL characters, I think, and which thought a code like 1 was an uparrow, not a control-A. There were a lot of blurry areas, and it was hell on people who wanted to make a Dvorak mode, because it was the VT52 (and probably VT100 and AAA) hardware doing this translation, so there was no place to intercept all this in software and make it different. That's probably why something as important as TECO trod lightly on making some case distinctions.
But if someone remembers better, please let me know. It's been 4+ decades since I used this stuff a lot, and details slip away. It's just that these things linger, I think, because they were so important to recognize as live rails not to tread upon. And because I did, for a while, live and breathe this stuff, since I wrote a few TECO libraries (like ZBABYL and the original TeX mode), so I guess practice drills it in, too.
@dougmerritt
Yeah, I don't know. Maybe '~' was prevalent in Emacs source, or I'm conflating TECO with Something Else.
By my era VT-52s were gone, you'd occasionally see a VT100 in a server room for not wanting to waste $$ there, the terminal of choice at Stanford CS was the Heathkit-19 + if you were in one of the well-financed research groups, you got a Sun-1 or a Sun-2. At DEC(WSL) where I interned, it was all personal VAXstations.
I do recall Emacs ^S and ^Q being problematic due to terminal mode occasionally getting set badly (and then the underlying hardware would wake up, "Oh, flow control! I know how to do that!", ^S would freeze everything and you had to Just Know to do ^Q...)
@wrog
This seems familiar, but I'm not wholly sure why.
It is binary 01111110 and as such really did show up in some line noise contexts that favored such a thing (it's similar to 11111111).
It's also used by vi to mark nonexistent lines at the end of the file; Bill wanted it to be something other than just nothing on that screen line, for specificity of feedback to the user.
> I also recall '~' being an important character
ok, I seem to be out-to-lunch on this
(or at least, remembering Something Else; but I can't imagine what...):
ibiblio.org/pub/academic/compu…
(admittedly, this is VAX/PDP-11 TECO source for Emacs and maybe Fred had to do a complete rewrite of some sort and the actual TOPS20/PDP-10 source is completely different -- given that there *is* significant dependence on wordsize and other architectural issues, it would have to be *somewhat* different -- but I'd still expect a lot of common code [unless there were copyright issues]).
It *does* definitely look like line noise, though.
> I do recall Emacs ^S and ^Q being problematic due to terminal mode occasionally getting set badly (and then the underlying hardware would wake up, "Oh, flow control! I know how to do that!", ^S would freeze everything and you had to Just Know to do ^Q...)
@wrog this is still a problem in modern terminal emulators. On Linux, nearly all terminal emulator software emulates the DEC VT-220 hardware pretty closely, so it does actually send the ASCII DC1 and DC3 characters for C-q and C-s, and the virtual TTY device responds accordingly by blocking all further characters except for DC1 and DC3. You have to execute the command stty -ixon to disable soft flow control for a given TTY device after it has been initialized by the operating system.
I think there is a way to configure the pseudoterminal subsystem to create virtual TTY devices that ignore DC1 and DC3 characters, but I don't know how, and for whatever reason (probably backward compatibility with older Unix systems) Debian-based Linux doesn't configure it this way by default. I think most people just put stty -ixon in their ~/.profile file.
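The usual ~/.profile line can be guarded so it doesn't error in non-interactive shells; a minimal sketch (the tty check is my addition, not from the post):

```shell
# Disable XON/XOFF software flow control so C-s reaches Emacs
# (isearch) instead of freezing the terminal with DC3.
# Guard with a tty check: stty fails when stdin is not a terminal,
# e.g. when a script or scp session sources ~/.profile.
if [ -t 0 ]; then
  stty -ixon
fi
```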
I was using ^S/^Q last night to pause some fast-moving `make` output to check to see if the thing that should have gotten built got built.
@ewhac
It comes up far less often in today's windowing environments, but we've got those reflexes as a must from the bad old days, eh?
@dougmerritt @wrog
Yes, right. To all that. One minor point is that the PDP-6/10 had a byte-addressing instruction that was pretty weird (overkill in flexibility, like every PDP-6/10 instruction). So that data packing wasn't all that unreasonable.
I showed up to the TECO world in Jan. 1973 with a gofer programming gig in the Macsyma group. The Datapoint terminals were already there, so I missed the pre-(almost)WYSIWYG days.
@djl
Lucky you; I went through teletypes, and then glass terminals lacking cursor control, before finally being in an environment with cursor control terminals capable of WYSIWYG -- and at that, it was pretty random back then who had heard the pro-WYSIWYG arguments and who had not, so...
@dougmerritt @djl @wrog
For those looking on who might not know these terms, teletypes had paper feeding through and mostly did only output that was left-to-right and then fed that line and then did not back up ever to a previous line. They were also loud and clunky, mostly, and had keyboards that had keys you had to press way down in order to get them to take.
Glass terminals were displays that could only do output to the bottom line of the screen, kind of like a paper terminal but without the paper. Once it scrolled up, you couldn't generally scroll back down. But that's why it might sound like it would have cursor control but did not yet.
screwlisp reshared this.
Yes, and to clarify your final two sentences, the *display* scrolled up with each additional line emitted -- the *cursor* could never scroll up.
In my environment at Berkeley, these were Lear Siegler ADM 3 terminals. The slightly later ADM 3a terminals finally allowed the cursor to be moved around at will (although they didn't have any fancier abilities, unlike still later devices).
Thanks for thinking to explain what I did not.
@dougmerritt @wrog
The datapoint terminals were _almost_ wysiwyg: they didn't have a cursor, so the TECO of the time inserted "/\" in the text displayed, and you could insert text there, delete the next character and the like.
But TECO allowed you to change the "/\" to whatever you liked, so if you left your terminal, someone would change that to "/\Foo is loser" and Foo wouldn't be able to delete that text from Foo's file...
screwlisp reshared this.
@dougmerritt @wrog
I've been through 17 or so environments, and I was always able to find an editor that could be persuaded to act the way I wanted: CCA, NEC, AT&T and even Word for MS-DOS.
Hilariously, Word for Windows defeated me. There was no way to persuade it to act as a civilized text editor, so I acquired the source code to WordPad and implemented my usual TECO macros in C++, and used that for 20 years or so.
@djl
Hey, you want what you want.
Also: spoken like a true hacker. "I will bend the universe (of computing) to my will!"
@dougmerritt @wrog
Yes. I missed the teletype round. Sort of. Father was site engineer for one of the early LINC 8 installations, and later a PDP-7 installation, and they had teletypes. They.Were.Horrid.
Peter Belmont (later an Ada developer) tried to persuade me to do programming, but I was busy doing other things. The IBM card punches had really sweet keyboards, though.
Yeah I had <1 year stuck on DEC-20s at Stanford before Unix boxes became generally available (originally had to be an RA on a grant with its own VAX, and incoming students on NSFs typically weren't). Seeing Gosling Emacs that first spring, it was clear that was The Future...
βΉ less reason to do TECO
... though ironically, I *did* learn the SAIL editor (SAIL/WAITS -- TOPS-10 derivative -- was, by 1985, a completely dead software ecosystem, *only* continued to exist because Knuth and McCarthy had decades of crap + sufficient grant $$ for the (by then) fantastic expense to keep it going; the only other people who used it were the 3 of us maintaining the Pony (vending machine))
screwlisp reshared this.
@dougmerritt yes, I am maybe a little unclear in what I wrote. I tend to take shortcuts when I write about Scheme that make it seem I am equating it to the untyped lambda calculus.
I have heard of the Turing Tarpit. And I have inspected the intermediate representation of Haskell before (the Haskell Core), so I have seen first-hand that expressing things in lambda calculus makes the code almost impossible to understand.
Just as a BTW, LLMs have layers of necessary algorithms, rather than just one single algorithm.
That said, someone no doubt *has* reduced that core to one line of APL. π
P.S. arguments about whether "expressiveness" is the right description may end up being about differences without distinctions.
Thanks for this detailed reply. Lotta good stuff there. Also thanks especially for indulging the improper fraction. I mostly do not use the fractional labeling for posts for fear of that scenario. Sometimes you promised to stop and then realize you want to keep going and feel impeded. I'm glad you kept on.
@wrog it's a good chunk of the reason why Erlang shines here. Per-process GC can be kept simple (a process is more like an object than a thread, so you have lots of them) and no equivalent of setq - all data is immutable.
(there is a shared heap, but that also is just immutable data).
yes, the BEAM virtual machine is pretty amazing technology; there are very good reasons why it is used in telecom, or other scenarios where zero downtime is a priority. I think .NET and Graal have been slowly incorporating more of BEAM's features into their own runtimes. Since about 3 years ago .NET can do "hot code reloading," for example.
I have used Erlang before but not Elixir. I think I would like Elixir better because of its slightly-more-Haskell-like type system.
@wrog not just zero downtime, the more important aspect is how it does concurrency, how it manages to scale that, and how well it fits the modern requirements of "webapps" (like a glove).
It changed my thinking about objects, just like Smalltalk did before. I'm fully on board with Joe Armstrong's quip that Erlang is "the most OO language" (or something to that effect); having objects with effectively their own address space, their own processor scheduling, etc., completely changes how you think about building scalable concurrent systems (and _then_ you get clustering for free, and sometimes hot reloading is a production thing, although 99% of the time it is good to have it in the REPL)
@wrog
'setq' and friends have been criticized forever, but avoiding mutation is easier said than done. Parsing arbitrarily large sexpr's requires mutation behind the scenes -- which ideally is where it should stay.
Any language we use that helps avoid mutation is a good thing. 100% avoidance is a matter of opinion -- some people claim it was proven to be fully avoidable decades ago, others say the jury is still out on the 100% part.
I don't know enough to have an opinion on whether 100% has been completely proven, but it's attractive.
I respect you, and your contributions to Lisp and the community. So I dislike nitpicking you. But:
> Common Lisp macros more unhygienic than they actually are
This is a biased phrasing. There are hygienic macro systems and unhygienic macro systems. One cannot assign a degree of "hygienic-ness" without simultaneously defining what metric you are introducing.
We all can agree that one can produce great code in Common Lisp. It's not like Scheme is *necessary* for that.
But de gustibus non est disputandum. There are objective qualities of various macro systems -- and then there's people's preferences about those qualities.
Bottom line: it seems you are saying that Lisp macros aren't so bad if their use is constrained to safe uses, and I would agree with *that*.
@dougmerritt
> it seems you are saying that Lisp macros aren't so bad if their use is constrained to safe uses
Well, what I'm saying isn't formal, and that in itself bugs some people. But the usual criticism of the CL system isn't that "people have to be careful", it's that "ordinary use is not safe". But there's safe and then there's safe.
There is a sense in which C is objectively less safe than, say, Python or Lisp. And there is a sense in which people who write languages that aspire to more proofs think those languages still are not safe. So there's a bit of a continuum here that makes terminology tricky, and I have to make some assumptions that are fragile, because after-the-fact dodging is possible: critics can decline to acknowledge the incremental strengths and just keep pointing out other problems as if that's what they meant all along.
In Scheme, and ignoring that you could do this functionally, writing a macro foo that takes an argument and yields the list of that argument can't look like `(list ,thing) because if used in some situation like (define (bar list) (foo list)) you would fall victim to namespace clashes. And so Scheme people dislike this paradigm. But even without careful planning, the same problem is FAR LESS likely to happen in CL because:
Parameters that might get captured are usually in the variable namespace. You CAN bind functions, but it's rare, and it's super-rare for the names chosen to be things that would be the name of a pre-defined function. You'd have to be in some context where someone had done (flet ((list ...)) ...) for the list function to be bound to something unexpected, and even then you're not supposed to bind list to something unexpected, for other reasons -- mainly that the symbol list is shared.
I allege that in the natural course of things, it's FAR more rare for the expansion of a macro to ever contain something that would get unexpectedly captured, for reasons that do not exist in the Scheme world. Formally, yes, there is still a risk, but what makes this such an urgency in the Scheme world is Scheme's choice to be a Lisp-1 and its choice to have no package system; CL's contrary choices each create an insulation. In practice, the functional part of the CL world does not vary, as uses of FLET are very rare. And it's equally rare for a macro to expand into free references that are not functional references.
Also, the CL world has gensyms easily available, and CL systems often have other mechanisms that package up their use to be easy. In the Scheme world, there is no gensym, and the language semantics is defined not on objects but on the notation itself. This makes things hard to compare, but it's not hard to see how package separation also eliminates a broad class of the surprise: usually you know what's in your own package and aren't affected by what's in someone else's, whereas in Scheme symbols are just symbols and it's far more dangerous to rely on lexical context to sort everything out.
So yes, CL is less dangerous if you limit yourself, but it's also less dangerous because a lot of the time you don't have to think hard about limiting yourself. The language's features create naturally-more-safe situations. Note I am making a relative, not an absolute, measurement of safety. I'm saying that if CL were full of the conflict opportunities that Scheme is, we'd have rushed to use hygiene, too. But mostly it wasn't, so no one felt the urge.
screwlisp reshared this.
On the one hand, that is all well said.
On the other hand, I always have some nitpicky reply. π
(On the gripping hand -- no, I'll stop there)
You're talking about what is common and what is rare, and I can see why such was your overriding concern.
But I feel like I'm always the guy who ends up needing to fix the rare cases that then happen in real life.
For instance, when implementing a language that is wildly different than the implementation language -- "rare" seems to come up a lot there.
And also many times when I am bending heaven and earth to serve my will despite the obstinacy of the existing software infrastructure. "Just don't do that", people say.
It is indeed a lot like the needs of the formal-verification-by-proof community, which is looking for actual mathematical proofs, versus mundane everyday user needs.
Humpty Dumpty said "The question is, which is to be the master -- that's all" ("Through The Looking Glass", by Lewis Carroll).
Here, perhaps the master is which community you aim to serve.
@dougmerritt
Well, I'm just trying to explain why hygiene seems more like a crisis to the Scheme community than it did to the CL community, who mostly asked "why is this a big deal?". It is a big deal in Scheme. And it's not because of the mindset, it's because different designs favor different outcomes.
The CL community would have been outraged if we overcomplicated macros, while the Scheme community was grateful for safety they actually perceived a need for, in other words.
So yes, "the master is which community you aim to serve". We agree on that. π
I just want to say, I never had much of an opinion on hygienic macros, other than they seemed like a very good idea. But your explanation of why it isn't a big deal in Common Lisp, because namespaces and libraries prevent nearly all name collisions, was very convincing. And when you consider how complicated the Racket macro expander is, you start to wonder whether it is really worth all of that complexity to ensure a very particular coding problem never happens.
Well, people used to take advantage of the freedom of the original unhygienic Lisp macros to do all manner of unholy coding.
> I don't quite remember if the original language feature had fully worked through all the tail call situations in the way that ultimately it did.
My memory is that the Scheme interface for continuations was completely worked out when Scheme was born, but implementation issues were not -- beyond existence proof that is.
> But it was brave to say that full continuations could be made adequately efficient.
Yes it was!
> the Lisp community in general, and here I will include Scheme in that
Planner, for instance, went in a quite different direction. Micro-Planner (and its SHRDLU) inspired Prolog. Robert Kowalski said that "Prolog is what Planner should have been" (it included unification but excluded pattern-directed invocation, for example); see Kowalski, R. (1988). "Logic Programming." Communications of the ACM, 31(9) -- although the precise phrasing I think is from interviews.
Anyway, Prolog was not a Lisp, but sure, definitely Scheme is. The history of Lisp spinoffs created quite a bit of CS history.
I did professional development in Scheme (at Autodesk, before that division was axed) -- it's certainly a workable language in the real world.
But we know that Common Lisp is too, obviously.
screwlisp reshared this.
> 2 of them using hardware support (typed pointers)
I learned about typed pointers from Keith Sklower, from my brief involvement in the earliest days (1978?) of Berkeley's Franz Lisp (implemented in order to support porting the Macsyma computer algebra system to the VAX, as Vaxima), and it blew my mind. Horizons extended hugely.
A few years later everyone seemed to just take the idea in stride. Yet no one seems to comment on the impact of big-endian versus little-endian architectures on typed pointers; everyone seems to regard it as a matter of taste. It's not always; it impacts low-level implementations.
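A hedged sketch of where endianness bites, in C (the tag layout and names here are illustrative, not Franz Lisp's actual scheme): word-sized tag arithmetic is endian-neutral, but reading the tag with a single byte load is not.

```c
#include <stdint.h>

/* A common low-tag scheme: keep a small type code in the low bits of a
   word-aligned pointer (illustrative layout, not any particular Lisp's). */
enum { TAG_FIXNUM = 0, TAG_CONS = 1, TAG_SYMBOL = 2, TAG_MASK = 7 };

static uintptr_t tag_ptr(const void *p, unsigned tag) {
    return (uintptr_t)p | tag;        /* assumes p is 8-byte aligned */
}

static unsigned get_tag(uintptr_t w) {
    return (unsigned)(w & TAG_MASK);
}

static void *untag(uintptr_t w) {
    return (void *)(w & ~(uintptr_t)TAG_MASK);
}

/* The tag lives in the low-order BITS of the word, so the BYTE holding
   it sits at offset 0 on a little-endian machine but at offset
   sizeof(uintptr_t)-1 on a big-endian one: the in-memory layout differs
   even though the word-sized arithmetic above is identical. */
static unsigned get_tag_byte(const uintptr_t *w) {
    const unsigned char *b = (const unsigned char *)w;
    unsigned probe = 1;
    unsigned little_endian = (*(const unsigned char *)&probe == 1);
    return b[little_endian ? 0 : sizeof(uintptr_t) - 1] & TAG_MASK;
}
```

A Lisp runtime that wants the one-byte fast path has to pick the offset per architecture, which is the kind of low-level impact the post is pointing at.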
>My (possibly faulty) understanding is that the Java GC was made to work by at least some displaced Lisp GC experts
I used to regularly talk to the technical lead for that group at Sun for unimportant reasons, and I have every reason to think that the entire team was absolutely brilliant.
I don't recall whether some of them were displaced Lisp GC experts, but I do recall that I had plenty of criticisms about Java the language, but tended to find few, if any, about Java the implementation. And they kept improving it.
screwlisp reshared this.
Your understanding is mostly faulty. The original GC was written by me, and I'm no Lisp GC expert. I was (and still am) an admirer of Lisp. I wrote the code for my whole PhD thesis in Lisp. My admiration for garbage collection started earlier, when I was a big user of Simula in the 70s. But the motivation for GC in Java was different: the motivation was all about reliability and security. A leading cause of security vulnerabilities has always been buggy code. And one of the leading root causes of many long-standing, hard-to-diagnose-and-fix bugs has been flaky storage management. Garbage collection goes a long way toward increasing system reliability, and hence security. I had always wanted to make GC more mainstream.
When you described garbage collection to senior management back in the day, their reflexive judgement was: "bullshit! Lazy engineers just don't want to clean up their mess". But when they see measurable improvements in system robustness, and corresponding decreases in failures, they Notice.
@kentpitman @cdegroot @ramin_hal9001
Cees de Groot, Kent Pitman, Ramin Honary, screwlisp #commonLisp #lisp user interfaces and the ages, #climate
Kent: https://nhplace.com/ https://climatejustice.social/@kentpitman https://en.wikipedia.org/wiki/Kent_M._Pitman https://netsettlement.blogspot.com/ Cees de Groot('s book, The Genius of Lisp) : ht...Lispy Gopher Climate extras (toobnix)
Geeks under a certain age are impressed by the idea one was messing about in massively multiplayer worlds in the 1980s. It was early!
I ran into TinyMUD first, and via TinyMUCK, their Forth-based MUD language, MUF. Something about programming in MUDs lent itself to thinking in objects, though, and in thinking about the things I wished I could do, I (later realized I) had started reverse-engineering object-oriented coding.
(I'd had earlier encounters w LISP, so at some point I realized what I was doing.)
A follow-on to my "Nazi Sucker-punch Problem" post, to address the most common argument I get, which boils down to:
"""
Moderated registration won't stop Nazis, because they'll just pretend to be human to fool moderators, but it will stop normal people, who won't spend the effort to answer the application question or want to wait for approval.
"""
Okay, I'm going to try to use points that I hope are pretty acceptable to anyone arguing in good faith, and I'm going to expand the definition of Nazis to "attackers" and lump in bigots, trolls, scammers, spammers, etc. who use similar tactics.
Attackers: we can group attackers into two main types: dedicated and opportunistic. Dedicated attackers have a target picked and a personal motiveβthey hunt. Opportunistic attackers have an inclination and will attack if a target presents itselfβthey're scavengers. In my years of experience as an admin on multiple Fedi servers, most attackers are opportunistic.
Victims: when someone is attacked, they (and people like them) will be less likely to return to the place they were attacked.
In general: without a motive to expend more effort, humans will typically make decisions that offer the best perceived effort-to-reward ratio in the short-term (the same is true of risk-to-reward).
Why does any of this matter?
Because it all comes down to a fairly simple equation for the attackers: effort > reward. If this is true, then the opportunistic attackers will go elsewhere. If it isn't true, then their victims will go elsewhere.
How can we tip that scale out of the attackers' favor?
By making sure moderation efforts scale faster against attackers' behaviors than against normal users' behaviors.
- A normal user only has to register once, while an attacker has to re-register every time they get suspended.
- A normal user proves their normality with each action they take, while every action an attacker takes risks exposing them to moderation.
- A new user / attacker likely spends a minute or two signing up, while a moderator can review most applications in a matter of seconds. Yes, attackers can automate signups to reduce that effort (and some do, and we have tools to address some of that, but again, most attackers aren't dedicated).
- Reviewing an application is lower effort than trying to fix the damage from an attack. As someone who gets targeted regularly by attackers from open-registration servers, I'd personally rather skim and reject a page-long AI-generated application, than spend another therapy session exploring the trauma of being sent execution videos.
I believe this points to moderated registration being the lowest effort remedy for the problem of the Nazi Sucker-punch. So before we "engineer a new solution" that doesn't yet exist, we should exhaust the tools that are already available on the platform today. Yes, we could implement rate limits, or shadow bans, or trust networks, or quarantine servers, but we don't have those today, and even if we did, there's no evidence that those would be a better solution for Fedi than moderated signups.
Will it stop *all* the attackers? No. But it will stop most opportunistic attackers.
Will it deter *some* potential new users? Yes. But communities are defined by who stays, not by how many come through the door.
lgbtqia.space/@alice/115499829…
π °π »π Έπ ²π ΄ (ππ¦) (@alice@lgbtqia.space)
Why reactive moderation isn't going to cut it, aka, "The Sucker-punch Problem". Imagine you invite your friendβlet's call him Markβto a club with you. It's open-door, which is cool, because you like when a lot of folx show up.π °π »π Έπ ²π ΄ (ππ¦) (LGBTQIA.Space)
reshared this
Nicole Parsons, Shannon Prickett, πΊπ¦ haxadecimal π«π, Jonathan Lamothe and π °π »π Έπ ²π ΄ (ππ¦) reshared this.
I used to get my hair cut at a place that was just far enough away, and with enough traffic jams on the way each time, that I stopped going. It's not "far", by any means, but it was just on the cusp of being annoying. Once it became juuuust too much, I went somewhere closer.
I think people underestimate how low the bar can be to prevent bad actors. Even the guy scripting his nonsense will hit an application form and immediately leave to find an open instance, most of the time.
Totally agree.
Also, I know it wasn't meant that way, but this had me in stitches:
they'll just pretend to be human
The 'punch a Nazi' meme: what are the ethics of punching Nazis? | Tauriq Moosa
An assault on βalt-rightβ figure Richard Spencer sparked the βpunch a Naziβ meme. Violence is bad, but so is racism β so where do we stand ethically?Tauriq Moosa (the Guardian)
Nicole Parsons reshared this.
Tightly argued. Nice.
To the concern that it might deter some new users, I would add "yes, but if the alternative is lots more evil arseholes, it's a minor downside - especially as it is really only a downside for lazy new users".
We already have moderated follows at the user level; having moderated signups at the server level seems like a no-brainer.
Sensitive content
@kauer
Yeah. Having a place infested with malicious dicks is also going to deter people.
I am sure there is a sweet spot in optimizing between cumbersome defenses and being a trash pit.
"... communities are defined by who stays, not by how many come through the door."
This is a beautiful line and apropos of many situations. I will be adding that to my book of really useful ideas.
Thank you.
Shannon Prickett reshared this.
@tompearce49 @kimlockhartga@beige.party @alice
I dunno man. You ever stab a kid with a Bic pen in the hand for grabbing you and shoving you down into a chair? Because I did once. And the bullies never fucked with me again.
@kimlockhartga
^ that is what makes this accurate:
"Reviewing an application is lower effort than trying to fix the damage from an attack."
Moderators have to review hundreds of applications to prevent a single attack. But because the damage of accumulated attacks is both long-lasting and affects audiences beyond a single direct target, reviewing remains a lower effort investment
Preventing attacks is also ethically worthwhile, adding ethos to logos and pathos! Never let meritocracy trolls reduce us to only using logos and rational arguments as persuasive tools
Sensitive content
This is also well-known from hacker circles.
The absolute largest lump of malicious hackers only look for low-hanging fruit.
Dedicated hackers looking to penetrate a well-composed org? Very rare. And completely different from the bulk. This is what red team sessions are for.
for the readers (I know Alice knows all this):
Perfection is not the enemy of the good. Any attempts to keep out the attackers are better than no attempts to keep out the attackers.
There's such a thing as defense-in-depth. You don't need your registration process to stop every attacker to get in the door. You have moderation tools. You have blacklists. You have defederation. You have TBS. Every attacker-stopper you add makes your instance safer.
Don't give up. Fight back.
π₯°π₯°
Moderated registration won't stop Nazis, because they'll just pretend to be human to fool moderators, but it will stop normal people, who won't spend the effort to answer the application question or want to wait for approval.
on its face this is just an awful argument, like ???
99.9% of nazis won't even bother doing that... so it weeds out the vast majority of them
and that's what you have other moderation practices for!!
Suddenly wondering about a system, which I've seen effectively used on forums, where a new user's posts are held back, and they are effectively silenced until a moderator reads their posts and approves them to interact, perhaps with some sort of time-limit in case moderation doesn't get to it in a timely manner. This would not only mean they don't need to give a reason when signing up, but that they could partially engage without having to wait, and potentially that moderation would be seeing their posts right away.
It might take a new layer of systems to implement, but do you think that would be a good idea?
Closed registration is incredibly stressful to me, for example, because you have to consider just so many factors: did you write enough? did you write too much? are your writing skills on par? do you sound sane enough to be accepted? WHAT do you even write in the first place? what reason for wanting to join do you give beyond the basic "i need an account to interact with the site", which will likely not be enough and get you rejected? what if I end up wanting to change instances and therefore would've wasted the moderators' time on making them read that?
The reasons for closed reg brought up in OP are valid and are probably more immediate than having anxiety, but my issues still exist and I'd like them to be kept in mind.
I'm a guy, and once, for about 45 seconds, I was mistaken for a woman, and the difference in the attitude towards me was insane, and that clarified things for me in an incredibly direct way.
So, I'm guessing you're saying that mostly in exasperation, and I'm most definitely not saying it's your job/responsibility to do so, but my thought is that yes, that might actually be a useful thing to do.
@kimlockhartga I usually don't see stuff like that bc a) as you say, I'm not the target and b) bc the work all you mods and adms put in every day.
My wife just liked and commented an anti-nazi post on facebook the other day and the messages she got ranged from kill yourself to actual threats.
Is it just me or is this actually getting worse by the day?
Anyway, I'm sorry for everyone who has to deal with this shit and immensely grateful for everyone moderating it.
@kimlockhartga Do it, for sure. I would never say the "Really?" part but no, I never see these posts. For technical reasons, possibly?
But yes, show us the replies. Let us report these people.
it occurs to me that moderated registration has another benefit: reducing moderation load.
yes; reducing.
if your mod team is at capacity, the last thing you need is the doors wide open for any number of new users. if you're having trouble keeping up with new account requests, good! let them lie fallow, because you clearly don't need more users to moderate right now!
an excellent analysis
I'd add one addendum for those in the audience who want a low effort policy that's more aggressive
There is another option much more heavy-handed -- toward "innocent" and "guilty" alike. One common to servers including mine:
By referral, after referrer has been registered X months
The number who accidentally invite someone who doesn't share culture and values of the place is very low
And if fedi shows anything imo, it's that this scales better than many think
One of those "yeah one possible answer is literally in front of you" moments isn't it? hahaha.
Noted. Account migration is certainly one of these things that seem incredibly cumbersome so we'll work towards that in the near future
What about hellbanning AKA segregating the ab-user into their own mirror chamber where they can see and "interact" as normal, but only fellow hellbanned can actually interact?
That would be in lieu of a banned account.
I don't know. I don't visit Shitter.
I was thinking of what Hacker News does to people who post socialist/communist thought. Hellbanning.
@crankylinuxuser I'd rather we screen for abusers at the door, and make bans explicit, so users know *why* they're not allowed to play in our pool.
They can always go make their own Nazi instances if they want to hang out together.
True, but the fash will play along and be nicey-nicey. Bouncer says yes. Wash, rinse, repeat, a la nazi bar.
Whereas if they're identified as fash, their account isn't deleted.
It's just reduced engagement to other fash. Most don't catch it. They get bored cause nobody interacts, and they go away for real, without spamming account after account.
@crankylinuxuser did you read the post where I covered this argument above?
Also, there's a cost to having a shadowban mechanic (and we already have account and server limiting). When you have a shadowban mechanic, users have no way of knowing whether or not they're in "The Good Place", which leads to anxiety and users feeling unsure of themselves when posting. If you don't know when you've been punished, then the most insecure/depressed/anxious among us will assume they have been. Meanwhile, the trolls will make "The Bad Place" bleed over via out-of-network posting (i.e. sites picking up screenshots of shadowbanned filth-posts and dragging good Fedi's reputation down), as well as extra server/storage costs to maintain a safe space for trolls.
lgbtqia.space/@alice/116130539…
Nope, cause I never saw it until I opened your post on your homeserver.
Mastodon sync bugs :/
@crankylinuxuser yay federation issues π
π«
So, there was a post on the fedi about a project Johnny Harris was working on. Some people in that thread seemed to think that he was untrustworthy, even going so far as to posit that he might be a CIA asset. I had no idea why they believed this, but it was echoed by more than one person.
I am familiar with Johnny's work. He always seems to do a good job of citing his sources (at least to my casual inspection). I asked about this distrust but received no response. Perhaps they thought I was sealioning?
So, I'm asking here: Is there an actual valid reason to distrust him that I'm simply not aware of, or is it just stemming from the fact that he likes to shine light on things that some would rather not have light shined on?
If a Klein bottle could wear pants, would it be like this or like this?
#mathstodon #math #maths #shitpost
like this
calvin ποΈ, mike, pat and Jonathan Lamothe like this.
reshared this
aburka π«£, Bernie Luckily Does It, Eugen Rochko, Cat ππ₯ (D.Burch) β , Mr. Funk E. Dude, EmpathicQubit, Mx. Luna Corbden πΈ, Jonathan Lamothe, Shannon Prickett, πΊπ¦ haxadecimal π«π, Boyd Stephen Smith Jr. and pizzapal reshared this.
@owlyph @rocketsoup
The man with 1,000 Klein Bottles UNDER his house - Numberphile
Numberphile (YouTube)
@rocketsoup
Goddamn neti pot......
According to this important research;
"He killed the noble Mudjokivis.
Of the skin he made him mittens,
Made them with the fur side inside,
Made them with the skin side outside.
He, to get the warm side inside,
Put the inside skin side outside;
He to get the cold side outside
Put the warm side fur side inside.
That's why he put the fur side inside,
Why he put the skin side outside,
Why he turned them inside outside."
Which means exactly what you imagine it means!
that's an easy question
all you have to do is define the inside and the outside
...
...
π± π± π±
pizzapal reshared this.
Pentatonix - The Sound of Silence (Live)
Pentatonix (YouTube)
I bet Matt Parker would be into this, and/or Sarah-Marie Belcastro, but definitely have a look at this (Matt and Katie Steckles) from a long time ago
youtube.com/watch?v=GGlmppx-2M…
Top N Facts About The Klein Bottle
The Aperiodical (YouTube)
Or maybe Steve Mould and Matt Parker did some stuff here.
Matt is totally into Klein Bottles and topology in general. He may or may not be good at responding to direct questions, however.
Mathematics is not ready for such questions.
I know what hat they would wear, though!
Sensitive content
AndβICYMIβthereβs also this:
mapstodon.space/@pinakographos…
A warning for Adobe #InDesign users. I recently updated my copy, and after preparing a document, the recipient of that document sent me the alt text that popped up on one of their figures. Apparently, by default, InDesign uses AI to generate (often hilariously wrong) alt text for images.
this reminds me of my favorite Math Stack Exchange question: "why can you turn a shirt inside out", which required a graduate-level topology explanation
Why can you turn clothing right-side-out?
My nephew was folding laundry, and turning the occasional shirt right-side-out. I showed him a "trick" where I turned it right-side-out by pulling the whole thing through a sleeve instea...Mathematics Stack Exchange
@rocketsoup
@bleeptrack maybe ask Adobe mapstodon.space/@pinakographos…
I don't think it matters, as long as it's covering up the genitals
mapstodon.space/@pinakographos…
@pinakographos@mapstodon.space
how do you ping an entire instance
jorts.horse needs to see this
reshared this
#today and Digital Mark λ βοΈ πΉ π½ reshared this.
@Digital Mark λ βοΈ πΉ π½ Well, there's an entrance exam for a job I need to study for so that I can continue to pay for internet (and by extension, access to the VR).
Apparently the one I did earlier was only the prequalification exam.
I just saw a Wal-Mart greeter shrink wrapped to a chair, holding a jar, trying to raise $50 for... something I couldn't read.
Dear Wal-Mart,
You're a billion dollar multinational corporation paying your employees the least you can possibly get away with. Why is this necessary?
Chris Ford reshared this.
reshared this
Shannon Prickett and Dennis reshared this.
like this
juliadream likes this.
like this
god damn gremlin and Todd Sundsted like this.
reshared this
Judy Anderson, god damn gremlin and Daphne Preston-Kendal reshared this.
I don't think programmers and sysadmins get how much there is to learn and how intimidating it is for normal people to host their own software.
For one, most of us don't have a computer that is running 24/7, which means we need to rent a server, which we have no idea how to go about doing.
And then there's an entire arcane art to running software that can speak to the internet without your server being taken over and used to send spam to half the planet
reshared this
Shannon Prickett and Jonathan Lamothe reshared this.
Yep. Speaking as someone who only started scratching the networking side pretty recently, it's just as hard as learning to code, and that includes the underlying logic.
Once you *know* it seems obvious, but if you don't? Whoooooooof.
thanks, this is uplifting. I have been coding and setting up all sorts of servers for decades. One forgets that it is something one had to learn, and that one learned it just out of curiosity and for the love of it.
There probably are quite a few people trying to make a business in that niche though. I suspect they are hard to find with margins too small to afford much ad-spend.
So for me it would be a question of how to find customers.
@hc I think a community-maintained index would be excellent for that sort of thing, or something similar to what TeamSpeak has, where they've got a map of all the TeamSpeak hosting companies and you can just click on the city and find who has a server in that city.
TeamSpeak's map is very out of date, but I was still able to find and pay for a hosting provider with it
@Canageek I think the service that comes the closest to this is yunohost, though I've had a look at them and I don't know that they're quite to the level you describe.
Also, even if someone does pull this off, they need to have competitors and the ability to easily transfer from one to another, otherwise the enshittification process is almost guaranteed to set in eventually.
I'd love to set something like this up myself, but while I have the time and expertise, I don't have the necessary capital.
Canageek likes this.
@Canageek There are tools for doing that sort of thing... but there's no standard way of doing it. Vagrant and Nix come to mind.
But I don't think anyone's currently actually doing this. Nobody wants to make it easy for their users to leave their service, which is understandable, I guess, but...
@me weirdly, TeamSpeak does this; they've made it incredibly easy to move from one hosting provider to another and maintain all your settings.
I don't necessarily like the way they've done this, which is that all of your settings sync to TeamSpeak's servers, not your hosting provider's, but apparently it works quite well
Jonathan Lamothe likes this.
I've set up self hosted services in the past based on my own software (using things like nginx and various databases as components) but I'd hesitate to do it now because of the increased prevalence of scrapers and deliberate attackers.
I would really hesitate to recommend self hosting for normal people.
I do think that self-hosting should be done by your home internet router.
I mean:
- computer with 32 bit CPU and Linux, check
- 100Mbps uplink, check
- IPv6 address, check
@glent I'm skeptical that most of them have IPv6 addresses. They really should, but whenever I check, especially with budget ISPs, they don't...
Also, that's a much faster upload speed than just about anywhere I've ever lived
This seems very specific to where you are. ISPs don't seem to provide IPv6 where I live, and power failures are common enough that potential drive damage or corruption feels possible too.
As a sysadmin and someone who self hosts a lot, you'll get really tired of being your own sysadmin and want things to just work. It's not all it's cracked up to be.
That's why there are companies out there trying to solve it, like Umbrel.
Hey Fedi,
For those who don't know, my mother had a major #stroke a little over a month ago. We're very fortunate to live in a country (Canada) where we have free #healthcare, but as her discharge from the #hospital looms closer, we're having to raise funds to make #accessibility modifications to my parents' home so that she can return. Boosts are welcome (and appreciated).
reshared this
N. E. Felibata π½, Kevin Davy, André Polykanine, GreenSkyOverMe (Monika), Allison Meloy, Jonathan Lamothe, silverwizard, cs, xmanmonk, Barbara Monaco, Words are Wind πΊπ¦ π³οΈπ and Daniele10 reshared this.
Him: well what's your big brain idea to eliminate food assistance fraud?
Me: universal food assistance
Him:
Me: because there wouldn't be a system to cheat, everyone would just get a check
Him: and who does that benefit?
Me: family farmers and human beings who rely on food for nutrition
Him: what about rich people?
Me: what about them?
Him: you would give them food assistance too?
Me: that's what universal means
Him: you can't do that
Me: yesterday you said I couldn't tax them and now you won't let me feed them either?
Him: they can afford food
Me: then it should be fine to tax them
Him: but if you tax them they won't have as much money
Me: I'm willing to offer universal food assistance
Jonathan Lamothe likes this.
reshared this
Urban Hermit, Looking for explanations…, Darcy Casselman, Bernie Luckily Does It, Jonathan Lamothe, Ryan Robinson and Boyd Stephen Smith Jr. reshared this.
I like the conclusion. "Aw, the poor billionaire would have less money? Don't worry, free food assistance!"
The government says an adult needs $297 to eat for 1 month, and I have found this to be 100% adequate in my state (yes I am on SNAP right now).
I am pretty sure a billionaire sometimes tips an amount bigger than that just to impress someone.
Urban Hermit reshared this.
reshared this
Urban Hermit and Bernie Luckily Does It reshared this.
I started my young adult life without a car. There is bus service in the city I lived in, but carrying groceries on the bus usually meant 2 bags of groceries at most, so a lot of canned and dried foods.
Now I have a car and can buy 5 bags for 2 weeks, and fresh produce is 80% of my shopping trip and is not only healthier, but cheaper.
Buying a 5lb bag of potatoes and 3 lb bag of onions is a cost savings, but one you have to have a little privilege for.
reshared this
Urban Hermit and Bernie Luckily Does It reshared this.
@Extra_Special_Carbon I might have had some of the milder food allergies, like gluten, but at that age and minimum wage I just had chronic inflammation, obesity, swelling, and I spent my whole life not knowing I wasn't absorbing Magnesium and Iron because of it.
Caring about my health was a luxury I couldn't afford, and other people judging me thinking I was lazy was a thing I couldn't help.
It will probably shorten my life by 20 years.
reshared this
Urban Hermit and Bernie Luckily Does It reshared this.
@MaierAmsden @Extra_Special_Carbon I was in a similar way. Putting a can of tuna in ramen was fancy and a way to get some protein in when I could afford it.
The number of times I tried to substitute something, like mayo, for expensive milk and butter in generic mac and cheese. Yeach.
The barely meat $1 bag of hot dogs - 1 hot dog, removed and cut up, to "spice up" a pot of mac n cheese.
So much macaroni, which I am probably allergic to, and so much cheese powder.
Not good times.
Urban Hermit reshared this.
@MaierAmsden @Extra_Special_Carbon yeah, 5 years in and a few raises, plus getting a car which caused my employer to jump my pay by $1/hour (suddenly they knew I could drive to some better job) and I could afford to hit those super cheap Chinese buffets.
So much sweet and sour "meat".
Eating healthy was a thing I had to be able to afford more than a decade later.
Urban Hermit reshared this.
I lived on hard-tack, a couple onions and half a dozen potatoes for nearly three weeks. Luckily I was well supplied with spices.
@Jake_Shelton @BernieDoesIt @MaierAmsden @Extra_Special_Carbon
Now that I have good transportation, a 5 lb bag of potatoes and 3 lb bag of onions is the base most of my cheap meals are built on.
I felt so much better eating potatoes that I recently confirmed I have been gluten intolerant for decades, and am now learning how to eat all over again.
Urban Hermit reshared this.
@Urban_Hermit 300 is like 10% of my income. Yeah, I'm doing well. 300 is like 0.001% of the income of a billionaire.
if it means no one is left behind, I don't see an issue with giving to those who don't need it and won't notice it.
Urban Hermit reshared this.
That's how the system was designed: the rich must become richer and the poor must become poorer. It's hardly surprising that the 300 richest people in the world own over 40% of the global wealth while millions of people can't even afford two meals a day. What brings about such inequality?
@Urban_Hermit
It's ridiculous to me that we have the ability to feed everyone and we won't because a bunch of people that aren't rich are so worried about the few who are.
If capitalism requires hungry kids, I don't need it.
reshared this
Bernie Luckily Does It reshared this.
Impenetrable.
Wait until these skeptics hear about the tax breaks that all these wealthy folks also don't need
@wendynather Means Testing is always *always* a bad approach. It comes from the same reflex "what if someone undeserving benefits from this" that led to our whole "money=virtue" problem in the first place, and it is the ruination of many good ideas.
Repeat this until you see it behind your own eyelids: The *most efficient* way to help everyone who deserves it, is to help *everyone*.
True ... also, less bureaucracy.
But it's on the LPC™ entrance exam!
@Alsy @ferrix @wendynather I do
Sales tax is regressive
Consumption taxes are the only taxes the rich pay, so they're not all bad
Balancing consumption taxes with universal dividends is my favorite transfers policy
(Him, obviously!)
Hopefully someday the rich will realize that at some point the increasing wealth gap creates an unstable society, endangering their position. Just ask Marie Antoinette.
At least here in the US I am not going to hold my breath waiting for that.
Sensitive content
yep, the same rich people keep yelling about overpopulation, yet somehow have 21 kids to spread their wealth between while dodging inheritance taxes. if there was overpopulation, that would just be a symptom of us not actively trying to feed everyone.
If the tax curve actually was exponential to some degree (there's probably a better curve here, idk maths that well) we could likely easily feed, house and insure every single lifeform on this planet to at least a basic degree that is comfortable to live under for everyone. Anyone who wants more can work for it, anyone who wants to be excessively rich can deal with diminishing returns.
That is a great idea. The problem is that this has been brought up for decades if not centuries. It is a myth that you can implement progressive change in a regressive system. The only really progressive changes, like the 8-hour workday, the five-day workweek, women's voting rights, and abortion, came about because the Soviets made them law (most of those they did first; they also had the first female minister on the planet).
You are being lied to.
If you tax anyone, they won't have as much money. This isn't the argument Him thinks it is.
The usual trope, "But then the rich won't create more jobs." Wake the hell up they're cutting jobs as you read this, AND they just got a huge tax break.
The rich build their mega yachts, compounds, and bunkers, not because they can but because they are afraid what might happen if they don't.
like this
N. E. Felibata π½ likes this.
Are there any #Lisp programmers out there who use a #ScreenReader? Given how messy Lisp can be to read without proper indentation (which I imagine wouldn't translate well to a screen reader), I can't see it being an easy language to work in without being able to see it.
I've been thinking about a way to make an editor that lets you explore a Lisp program by walking through the forms in the program in a manner similar to the way one might navigate in a MUD. Is this a crazy idea, or one with some merit?
I can imagine making a screen reader for lisp would be doable. Instead of close parenthesis it could say close defun, and instead of open parenthesis it could say open, then the function name. The rest of the language would be decently adapted.
It might still be hard to understand complex expressions though.
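That "open defun ... close defun" rendering could be sketched in a few lines of Emacs Lisp. This is a hypothetical toy, not a real screen-reader integration; the name `speakable-form` is made up for illustration:

```lisp
(require 'subr-x)  ; for `string-join'

(defun speakable-form (form)
  "Render FORM as spoken-style text, e.g. \"open <head> ... close <head>\"."
  (if (not (consp form))
      (format "%s" form)
    (let ((head (car form)))
      (string-join
       (append (list (format "open %s" head))
               (mapcar #'speakable-form (cdr form))
               (list (format "close %s" head)))
       " "))))

;; (speakable-form '(defun f (x) (+ x 1)))
;; => "open defun f open x close x open + x 1 close + close defun"
```

A real version would need to handle strings, vectors, and dotted pairs, and decide which heads get named closes, but the shape of the idea fits in one recursive function.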
Not to discourage new ways to solve old problems, I just want to point out that your lisp journey will be much easier if you use an editor designed for the task, which includes learning a set of editor operations that function at the "form"/s-exp level.
In emacs these operations include traversal functions like `forward-sexp`, and the very useful `indent-sexp` function. They're part of basic emacs behavior; you don't even need to change your emacs configuration to enable them, and they're useful on data other than lisp source code.
Once you can easily navigate up, down, and around forms, checking to see where a form starts and ends is easy without any sensitivity to how it's indented.
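Concretely, the default bindings for those form-level motions (standard Emacs, no configuration needed; the C-M-q detail varies slightly between Lisp modes):

```lisp
;; C-M-f  runs `forward-sexp'      (skip over the next balanced form)
;; C-M-b  runs `backward-sexp'     (back over one form)
;; C-M-u  runs `backward-up-list'  (out to the enclosing form)
;; C-M-q  re-indents the form after point in Lisp modes
;;        (`indent-sexp' / `indent-pp-sexp')

;; For example, moving over one whole top-level form:
(with-temp-buffer
  (emacs-lisp-mode)
  (insert "(defun f (x) (+ x 1))")
  (goto-char (point-min))
  (forward-sexp)  ; point ends up just after the closing paren
  (point))
;; => 22
```

Because these commands work on balanced expressions rather than lines, they behave the same regardless of how the code is indented.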
like this
((( David "Kahomono" Frier ))) likes this.
Justin To #ΠΠ΅ΡΠΠΎΠΉΠ½Π΅ reshared this.
Can anyone recommend a FOSS solution for converting an .azw file to .epub on Debian that isn't Calibre?
I thought maybe pandoc could do it, but apparently not.
reshared this
Justin To #ΠΠ΅ΡΠΠΎΠΉΠ½Π΅ and ((( David "Kahomono" Frier ))) reshared this.
Oh, this bottle of coconut water has "no added sugar"; I wonder if it's okay for my diabetes?
*checks the macros*
(It's roughly equivalent to a can of coke.)
*drinks it anyway*
A question for the #lisp folx:
What, if anything, is the difference between #'(lambda ...) and just plain (lambda ...)?
They seem functionally equivalent to me.
(lambda ...) is just a macro call that expands into the other form.
From the common lisp spec :
(lambda lambda-list [[declaration* | documentation]] form*)
== (function (lambda lambda-list [[declaration* | documentation]] form*))
== #'(lambda lambda-list [[declaration* | documentation]] form*)
Seemingly plain (lambda () ...) is a macro that expands to (function (lambda () ...)). #'(lambda () ...) uses a reader macro to expand to the same (function (lambda () ...)).
clhs.lisp.se/Body/m_lambda.htm
Function is a special operator that returns a function. It takes either a function name or a lambda expression. The second case is what is happening here.
clhs.lisp.se/Body/s_fn.htm#funβ¦
A lambda expression is a list of the symbol lambda, a lambda list, and a body.
like this
silverwizard likes this.
I've been taking a bunch of tests to qualify for a transcription job. They're not easy and I need a perfect score to pass. I finally failed one of the tests but managed to pass it on the retry.
They're really picky about their style guide. Fortunately, it basically amounts to syntax rules and I've been dealing with compilers that are equally picky about syntax for decades.
It also helps that all throughout my schooling my mother worked at the local university proofreading research scientists' papers, and she insisted on proofreading all my essays too.
I never thought I'd end up being happy about that.
So the cooldown period for me to re-take the exam is over. They want me to redo The. Whole. Damn. Thing.
Fine, whatever. It'll take me a few days to get through it all, but I don't expect to fail again.
Given how their style guide is now permanently burned into my brain, there's a tiny vindictive part of me that toyed with the idea of spinning up a competing service. That's probably more trouble than it's worth, though.
I see now why they're always hiring. I wonder how many people get to the prequalification exam and then just decide it's not worth it.
After a closer review of the style guide, I have come to realize that the question I was repeatedly getting wrong was one that wasn't even on my radar as a candidate. I misremembered a stupid trivial detail from the guide.* I was rabbit holing on the wrong two questions, one of which I still maintain can be argued to have two correct answers, but I now know which one they're looking for.
Not looking forward to sinking another ~5 hours on this thing.
* That would have had virtually zero impact on my ability to successfully do this job.
I passed!
Now I have to wait for them to determine whether or not they think I cheated. I didn't, but if they think I did, that's what matters. I can't see there being a reasonable appeals process for that.
@π¨π¦ CleoQc ππ¦π§Άππ It's a legal transcription job. They do regular transcription too, but they have AI doing much of it, so they're not looking for new people there.
They're smart enough to realize that AI isn't currently sophisticated enough to properly follow the various style guides required by their legal clients. I guess they realize that if the quality of the work drops, they'll lose the contracts.
That said, I'm sure all the work I do is going to be used to try to train an AI to replace me, but that's probably true of any job at this point.
Judy Anderson reshared this.
#LambdaMOO
like this
Joseph Teller and Guy Geens like this.
Digital Mark λ βοΈ πΉ π½ reshared this.
Got my hands on a #shortwave radio, but the fact that I live in a giant concrete box doesn't seem to be helping my reception. Seeing what I can do about that.
Are there any broadcasts that are worth catching that I'd be able to get in Southern Ontario?
Kevin Davy reshared this.
reshared this
Jonathan Lamothe reshared this.
;; Assumes buffer-local state defined elsewhere in the mode, e.g.:
;;   (defvar-local lambdamoo--search-text nil)
;;   (defvar-local lambdamoo--found-point (make-marker))
(defun lambdamoo-tab-complete ()
"Complete user input using text from the buffer"
(interactive)
(when (memq (char-before) '(?\s ?\r ?\n ?\t ?\v))
(user-error "Point must follow non-whitespace character"))
(let (replace-start
(replace-end (point))
replace-text found-pos found-text)
(save-excursion
(backward-word)
(setq replace-start (point)
replace-text (buffer-substring replace-start replace-end))
(when (or (null lambdamoo--search-text)
(not (string-prefix-p lambdamoo--search-text replace-text t)))
(setq-local lambdamoo--search-text replace-text)
(set-marker lambdamoo--found-point (point)))
(goto-char lambdamoo--found-point)
(unless
(setq found-pos
(re-search-backward
(concat "\\b" (regexp-quote lambdamoo--search-text))
(point-min) t))
(setq-local lambdamoo--found-point (make-marker))
(user-error "No match found"))
(set-marker lambdamoo--found-point found-pos)
(forward-word)
(setq found-text (buffer-substring found-pos (point))))
(delete-region replace-start replace-end)
(insert found-text)))

#emacs #lisp #moo #mud #LambdaMOO
reshared this
Lens and Sacha Chua reshared this.
@Omar AntolΓn Actually, looking more closely at it, it might just do the trick.
I love it when I spend hours re-writing code that essentially already exists. ;)
In the end, I wound up just binding tab to dabbrev-expand. π
It might seem like I wasted a bunch of time writing that, but at least I learned a bunch along the way.
Me realizing that festival uses a Lisp dialect:
Oh cool, I can add accessibility features to my Emacs stuff by procedurally generating the code in elisp.
Me realizing that festival's symbols are case sensitive:
Welp, I guess I can just do
(defun festival-saytext (text) (format "(SayText %S)" text))
and do the rest of the processing in elisp directly. That's probably all I wanted anyway.
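For reference, that one-liner and what it produces: `%S` prints the string readably, so the result is a quoted, escaped form the Festival Scheme reader can consume.

```lisp
(defun festival-saytext (text)
  "Return a Festival SayText command for TEXT as a string."
  (format "(SayText %S)" text))

;; (festival-saytext "Hello, world")
;; => "(SayText \"Hello, world\")"
```

Since case doesn't matter for the string contents, only for Festival's symbols, keeping `SayText` as a literal in the format string sidesteps the case-sensitivity problem entirely.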
Isaac Kuo likes this.
In 1959, a cement mixer truck with a full load of cement wrecked near Winganon, Oklahoma 🇺🇸.
By the time a tow truck came to haul it away, all of the cement had hardened inside the mixer. The tow truck couldn't remove all of the wreckage at once because of the weight, so it hauled only the cab and frame, intending to come back for the detached mixer later, which never happened.
Today, 67 years later, it still sits where it fell. Locals have painted it and added "rocket thrusters" to make it look like a space capsule.
like this
Hypolite Petovan and Fabio like this.
reshared this
Isaac Ji Kuo, Hypolite Petovan, Jonathan Lamothe, Boyd Stephen Smith Jr., oldguycrusty, aburka π«£, Joshua Byrd, Aral Balkan, πΊπ¦ haxadecimal π«π and I am Jack's Found 404 reshared this.
Honestly, this is what a space capsule should look like.
@LanceJZ @isaackuo
That's a piece of Art, and congratulations to the locals for maintaining it.
(Actually the capsule would have had thrusters: there would be Capsule:Flotation Bag:Heat Shield:Thruster Pack, with the thruster pack held on by straps so it could be jettisoned after deceleration but before hitting atmosphere. On one mission they re-entered with the thruster pack attached because the flotation bag light had come on and they were concerned about the heat shield.)
@Cadbury_Moose @LanceJZ While this is true of the Mercury, Gemini, and Apollo capsules (including the Apollo service module), a reusable capsule could enter nose first rather than tail first.
Nuclear missile reentry heat shields are blunt cones entering nose first.
That said, Dragon does do tail first reentry, placing the thrusters on the sides rather than the tail. I just think it "looks" wrong.
@isaackuo @Cadbury_Moose @LanceJZ That is only true for modern ballistic missile RVs; initially they were launched blunt end forward, since the materials of the time didn't allow a more accurate sharp-end-forward reentry, which causes higher temperatures. (That is also why the Space Shuttle got a rather blunt nose.)
Also, there are far more kinds of capsule than just one. Imagine this as a biconic lifting body, and it isn't that far-fetched for it to retain its aft thrusters.
@Cadbury_Moose @isaackuo there has never been a capsule with thrusters on it, from Apollo on.
@LanceJZ @Cadbury_Moose This is what people think of when they think of the Apollo "capsule". It has a big main thruster in the tail, and lots of thruster clusters all over the place.
That's the reason why the artists modifying the cement mixer tank felt the need to add thrusters. It didn't look right without them, because the overall shape looks like a capsule plus its service module.
@LanceJZ @Cadbury_Moose I know what you mean, but that's what people think of.
One reason they think of the Apollo "capsule" as the Command Module and Service Module is that there isn't any footage of the Command Module by itself in space. No one left on the Service Module to shoot the Command Module after separation.
(The Command Module is just the return capsule.)
Bob Jonkman reshared this.
@Cadbury_Moose @isaackuo @archaeohistories
Cute, but a big hazard if a vehicle has to leave the road. I would move this thing off.
Or at least further away from the road. A crane could do this in less than four hours. Much cheaper than having a vehicle plow into it.
@davevolek That would likely require someone to pay for it. Given the little bits I've gleaned about local governance in the U.S. I can easily see no one having any spare budget for it.
The photo looks like a rural highway to me. This means fairly high speeds. If a car "hits the ditch," a bumpy ride turns into a fatal accident.
I suspect the jurisdiction belongs to whoever owns the highway. It could be the state or it could be the county.
A couple of heavy tow wreckers could move this machine. Less than $5000.
But there may be political pressure to keep the machine in place. It does look cute.
There may indeed be more to the story.
I come from a rural background. Many people drive 80 kph (50 mph) on these roads. And they hit the ditch more often.
There might be some weight restrictions that prohibit big trucks on this road. The pavement in the photo (or oily gravel) looks a little on the weak side to me.
Anyways, we need more info to know why this thing has remained in the ditch for 67 years.
Hey all,
I have a friend who's been trying to get on Mastodon but tells me that it doesn't seem to play well with screen readers. I know there are plenty of people on the fedi who do use screen readers, but I have no experience with them myself, so I can't really direct him.
Can someone who does use a #ScreenReader point me in the direction of some resources that might be useful?
#AskFedi #a11y
like this
lain, author of the quixote likes this.
reshared this
deananorth, Kevin Davy, fedops ππ, Amin, minor deity of the legume realm, Sam D, Childless Bambino, Jules she/her, Luminex, Yvan, Emsquared, MCDuncanLab, Aubrieta, woe2you, kabel42, BashStKid, luce, The Crafty Miss, Hunderoute, Quixoticgeek, Lyle Solla-Yates, Tony Hoyle, Alex Haist, Granny Art (Shrimp) (Joni), David Cantrell π, Hesperalis, Malwen, Cassana π», trending_bot, Play Ball and Fight Fascists, nebulos, Psil, M. E. Garber, Aslak Raanes, Cozy Trends, yelling jackal, littlemiao, π³οΈπππ§π·Luanaπ§π·ππ³οΈπ, hype, Starryeyesswitched, IndyHermit, Fieryzard βοΈ, π, royal, hype, hypebot, Carsten, Trends der PnP-Nachbarschaft, Jay Grant π³οΈβ§οΈ, HypeBot π₯, epicdemiologist, Ashe Dryden ππΌβοΈππβ¬, DecaturNature, Stomata, Eve Ventually, Bruno Girin, Hypebot π€, SlightlyCyberpunk, Jayne πͺπΊπ³οΈπ, DieMadColonizer, Jonizulo, UnCoveredMyths, cms, clear, Daniel Johnson, some kinda cat, hypebot, ts π, A Flock of Beagles, *|FNAME|*, MostlyBlindGamer, not ch1c, avatastic, Robot-Queb, hypebot and 349 other people reshared this.
@The Witchy Bitches @π©βπ¦―The Blind Fraggle @Matt Campbell @ADHDeanASL @Panamanianβ€οΈβπ₯ @Superdave! @Lanie I believe he's recently switched to a Linux distribution* (which I understand doesn't play well with screen readers to begin with). I can ask him for more details. Unfortunately he's in the UK, so I can't assist in person.
* I don't know which.
Furbland's Very Cool Mastodon™ reshared this.
Home Β· Enafore
A fediverse client with better support for Akkoma, glitch-soc, and Iceshrimp instances.
enafore.social
Edit: Just had a look and, of course, @FediTips has something on it: fedi.tips/how-do-i-use-mastodoβ¦
How do I use Mastodon through a screen reader? | Fedi.Tips β An Unofficial Guide to Mastodon and the Fediverse
An unofficial guide to using Mastodon and the Fediverse
fedi.tips
I wonder if an add-on for mastodon.el could make that easier. I've never tried the reader in emacs, but have always heard good things.
Ping @JonathanMosen
Jonathan: @me
is asking about Mastodon clients and screen readers in the toot above.
I would add a couple of things.
First, any blind person is very welcome to join us here at CaneAndAble.social, a great community for blind Mastodon users. There are plenty of helpful people here.
Second, if the person uses iOS, I produced an audio tutorial on Mona for Mastodon. This went out as part of the Living Blindfully podcast I used to run, which can still be found in any podcast app.
Mona has been updated since then, but itβs still very relevant, and many blind people appreciate an audio walkthrough.
The URL for the audio is: LivingBlindfully.com/227, and the transcript is at LivingBlindfully.com/lb0227traβ¦
Episode 227:A tutorial on Mona for Mastodon, the most powerful, accessible way to do Mastodon on your iPhone, iPad and Mac
This special episode is devoted to Mona, a Mastodon app for iOS, iPadOS and MacOS. It has exceptional VoiceOver accessibility, and during its development, the developer has consulted extensively with the blind community.
Living Blindfully
Fediverse for writers/readers., Sightless Scribbles
A fabulously gay blind author.
sightlessscribbles.com
Jonathan Lamothe likes this.
I've tried using some of the fedi web apps with a screen reader and yeah they tend to be horrible
There may be some native apps that do better in that regard
How do I use Mastodon through a screen reader? | Fedi.Tips β An Unofficial Guide to Mastodon and the Fediverse
An unofficial guide to using Mastodon and the Fediverse
fedi.tips
Episode 206: Mastering Mastodon, how do you get started with ham radio, and a fix for the Eset issues plaguing screen reader users
Kia ora Mosen At Largers. A reminder that this podcast is indexed by chapter. If you listen with a podcast client that offers chapter support, you can easily skip between segments.
Living Blindfully
@MonaApp for #IOS and @pachli for #Android are 2 great #Accessible Mastodon clients.
I'm certain I have reinvented a wheel here, but for the life of me I can't find it. Have I?
(defmacro jrl-extract-list (vars list &rest body)
"Split a list into individual variables"
(let ((list* (gensym)))
(append
`(let ,(cons (list list* list) vars))
(seq-map (lambda (var)
`(setq ,var (car ,list*)
,list* (cdr ,list*)))
vars)
body)))

#emacs #lisp #elisp
Edit: Of course it was pcase.
(let ((my-list '(1 2 3)))
(jrl-extract-list (foo bar baz) my-list
(format "foo: %d, bar: %d, baz: %d" foo bar baz)))
would yield:
"foo: 1, bar: 2, baz: 3"
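The built-ins mentioned in the replies do the same job; a sketch of both (Emacs Lisp, `seq` ships with Emacs):

```lisp
(require 'seq)

;; seq-let destructures positionally, like jrl-extract-list:
(seq-let (foo bar baz) '(1 2 3)
  (format "foo: %d, bar: %d, baz: %d" foo bar baz))
;; => "foo: 1, bar: 2, baz: 3"

;; pcase-let does the same with a backquote pattern:
(pcase-let ((`(,foo ,bar ,baz) '(1 2 3)))
  (format "foo: %d, bar: %d, baz: %d" foo bar baz))
;; => "foo: 1, bar: 2, baz: 3"
```

Both bind the variables lexically rather than `setq`-ing them, which is usually what you want anyway.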
*** Welcome to IELM *** Type (describe-mode) or press C-h m for help.
ELISP> (cl-multiple-value-bind (foo bar baz)
(apply #'cl-values (list 1 2 3))
(list baz bar foo))
(3 2 1)
.... or:
ELISP> (cl-destructuring-bind (a s &key (foo 42) bar)
(list 1 2 :bar 666)
(list bar foo s a))
(666 42 2 1)
My guess is that maybe pcase can do something similar?
pcase confuses me.
Doesn't destructuring-bind do something along these lines, if not exactly this?
(Assuming you meant Common Lisp. If it's Emacs lisp, then I dunno.)
pcase.
I like seq-let a lot better.
reshared this
Jonathan Lamothe, Tengu and Luke Trevorrow reshared this.
Baldur Bjarnason
in reply to Baldur Bjarnason • • •
Best part? It's always somebody with years of experience. Exactly the demographic that is supposedly able to use this shit safely, but my impression is they're just as bad as the novices
This is happening IMO because of one of the fundamental issues with software dev (and this predates "AI" and was one of the themes of my first book):
Most software projects fail and most of what gets shipped doesn't work. The way the industry is set up means there is little downside to shipping broken software
Nicole Parsons reshared this.
Baldur Bjarnason
in reply to Baldur Bjarnason • • •
Few devs have a reference point for genuinely working software. Usability labs were disbanded over 20 years ago. Very few companies do actual user research, so their designs are based on fiction. Bugs are the norm
Alienation is also the norm for devs, both socially and organisationally. Whether it works for the end user doesn't cross their mind. Whether the design fulfils business needs is not their problem. Bugs are a future problem. Ship insecure software and patch it as user data gets stolen
Nicole Parsons reshared this.
Baldur Bjarnason
in reply to Baldur Bjarnason • • •
Devs are so disconnected from the output of their work that many of the norms of the industry are outright illegal: there's a good chance that if you follow popular practices for a React project, for example, you'll end up with a site or product that violates accessibility law in several countries
Few devs would even know where to begin to look to answer the question "does my software work for the people forced to use it?"
reshared this
Nicole Parsons and hamish campbell reshared this.
Baldur Bjarnason
in reply to Baldur Bjarnason • • •
Nicole Parsons reshared this.
Kale
in reply to Baldur Bjarnason • • •
Whoever came up with 'Yes/Not now' needs to be dragged into the streets and shot.
No wonder some folks don't understand consent - our software doesn't allow for it.
Alexandre Oliva
in reply to Kale • • •
or the "You must allow our JavaScript programs to run on your browser, otherwise we won't allow you to get to the information that we're legally required to provide you with"
CC: @baldur@toot.cafe
Chip Butty
in reply to Baldur Bjarnason • • •
Nicole Parsons reshared this.
Thomas - NBA
in reply to Baldur Bjarnason • • •
Bruno Nicoletti
in reply to Baldur Bjarnason • • •
Kale
in reply to Baldur Bjarnason • • •
I still remember talking to a Twitter dev whose utterly ridiculous take on XYZ issue went viral.
They told me 'Uhhh, I've never had this much attention on me, my tweets never go beyond my social circle. I had to turn off my phone. It kept buzzing.'
... this was a person who worked on the UI. No shit they had no idea how to deal with high volume 'oh, you just got 200k likes' kinda shit, they never experienced it themselves.
LillyLyle/Count Melancholia
in reply to Baldur Bjarnason • • •
Raymond Neilson
in reply to Baldur Bjarnason • • •
Honestly I think a big part of it is more than our industry being deeply immature still; I think the most important throughline of the research on LLMs' effects on cognition is a consistent attack on metacognition, which seemingly doesn't abate with experience. The same corrosion happens to juniors and seniors alike, but the seniors have more rationalizations at hand to pretend it doesn't.
(Speaking of, that "cognitive surrender" paper is the latest in that theme: papers.ssrn.com/sol3/papers.cfβ¦)
Kraftwerk-Das Model Collapse
in reply to Raymond Neilson • • •
David Beazley
in reply to Baldur Bjarnason • • •
Apropos of nothing, the absolute worst implementation of Raft I've ever seen in my Raft course was by a pair of senior devs with a combined 60+ years of experience who decided to pair program together and announced ahead of time to the group that they were going to "win" Raft. They did not.
An undergraduate who'd never coded with sockets before did reasonably okay.
Kraftwerk-Das Model Collapse
in reply to Baldur Bjarnason • • •
Nicole Parsons
in reply to Kraftwerk-Das Model Collapse • • •
@dngrs
Most people ignore that it's a fossil fuel funded cult, intentionally designed to keep a dependency on oil.
wired.com/story/trump-energy-iβ¦
bloomberg.com/news/articles/20β¦
nytimes.com/2025/10/27/technolβ¦
cnbc.com/2025/11/20/us-approveβ¦
Saudi Arabia aspires to be the next Russian Internet Research Agency, selling hack-for-hire election meddling.
npr.org/2020/08/18/903512647/sβ¦
newyorker.com/news/news-desk/wβ¦
With Larry Ellison's help.
independent.co.uk/news/world/aβ¦
sfchronicle.com/tech/article/pβ¦
intelligentcio.com/me/2023/12/β¦
Billionaire Larry Ellison plotted with Trump aides on call about overturning election
John Bowden (The Independent)
Nicole Parsons reshared this.
adingbatponder πΎ
in reply to Baldur Bjarnason • • •
Lucas
in reply to Baldur Bjarnason • • •
Orjan
in reply to Baldur Bjarnason • • •
I feel like having spent most of my career building embedded systems aimed at industry rather than consumers, where customer support issues can mean sending a technician out with a USB stick on a ten-hour road trip, has insulated me from the worst madness.
If your sloppy coding breaks a manufacturing line or distribution network, bugs become expensive fast.
Though having said that, $CURRENT_EMPLOYER is pushing for greater use of LLMs in our workflow...
Niels Abildgaard
in reply to Baldur Bjarnason • • •
This has been on my mind the last few days, too: mas.to/@nielsa/116171030173125…
I see so many people falling into LLM delusion, who I thought would know better, with no seeming pattern in *why* they fall for it.
Yes, the lack of negative incentives is certainly a factor.
My best explanation so far is that LLMs are kind of "acting" like 17 cons (some new, some old) in a trench coat, and different combinations of these trick different people who'd be able to resist most of these on their own.
Niels Abildgaard
2026-03-04 13:00:44
Martin Hamilton
in reply to Baldur Bjarnason • • •
Paul Dot X
in reply to Baldur Bjarnason • • •
choomba
in reply to Baldur Bjarnason • • •
Elias Mårtenson
in reply to Baldur Bjarnason • • •
I always enjoyed Universe Today, but once you get llm psychosis, everything becomes possible.
youtube.com/watch?v=vkhZHR_hs4β¦
at the 47 minute mark it really goes off the rails.
SpaceX's AI Data Centres Might Actually Be A Good Idea. Here's Why
Fraser Cain (YouTube)
spidey
in reply to Baldur Bjarnason • • •