(Following thread was prompted by people pointing out that the Bluesky dev team seems heavily into vibe-coding now and originally posted on said vibe-coded Bluesky platform that is now constantly failing.)
Over the past year, every single time one of the apps or services I use has suddenly become less reliable and more buggy, I've never had to look far for the "Claude is amazing and now writes most of my code" post from the devs involved.
reshared this
@screwlisp is having some site connectivity problems so asked me to remind everyone that we'll be on the anonradio forum at the top of the hour (a bit less than ten minutes hence) for those who like that kind of thing:
He'll also be monitoring LambdaMOO at "telnet lambda.moo.mud.org 8888" for those who do that kind of thing. There are also Emacs clients worth getting if you're REALLY going to connect over telnet.
Topic for today, I'm told, may include the climate, the war, the oil price hikes, some rambles I've recently posted on CLIM, and the book by @cdegroot called The Genius of Lisp, which we'll also revisit again next week.
cc @ramin_hal9001
reshared this
At the end of @screwlisp's show, in the discussion of @cdegroot's book, @ramin_hal9001 was talking about continuations. I wanted to make a random point that isn't often made about Lisp that I think is important.
I often do binary partitions of languages (like the static/dynamic split, but more exotic), and one of them is whether they are leading or following, let's say. There are some aspects in which Scheme is a follower, not a leader, in the sense that it tends to eschew some things that Common Lisp does for a variety of reasons, one of them being "we don't know how to compile this well". There is a preference for a formal semantics that is very tight, where everything is well-understood. It is perhaps fortunate that Scheme came along after garbage collection was well worked out and so did not seem to fear it would be a problem, but I would say that Lisp had already basically led on garbage collection.
The basic issue is this: should a language incorporate things that maybe are not really well-understood, simply because people need to do them, on the assumption that it might as well standardize the 'gesture' (to use the CLIM terminology) or 'notation' (to use the more familiar term) for saying you want to do that thing?
Scheme did not like Lisp macros, for example, and only adopted macros when hygienic macros were worked out. Lisp, on the other hand, started with the idea that macros were just necessary and worried about the details of making them sound later.
Scheme people (and I'm generalizing to make a point here, with apologies for casting an entire group with a broad brush that is probably unfair) think Common Lisp macros more unhygienic than they actually are, because they don't give enough credit to things like the package system, which Scheme does not have, and which protects CL users from collisions a lot more than is acknowledged. They also don't fairly understand the degree to which Lisp2 protects against the most common capture scenarios that would happen all the time in Scheme if it had a symbol-based macro system. So CL isn't really as much at risk these days, though it was a bigger issue before packages. The point is that Lisp decided it would figure out how to tighten things later, because macros were too important to leave out, where Scheme held back the design until it knew.
But, and this is where I wanted to get to, Scheme led on continuations. That's a hard problem and while it's possible, it's still difficult. I don't quite remember if the original language feature had fully worked through all the tail call situations in the way that ultimately it did. But it was brave to say that full continuations could be made adequately efficient.
And the Lisp community in general, and here I will include Scheme in that, though on other days I think these communities sufficiently different that I would not, have collectively been much more brave and leading than many languages, which only grudgingly allow functionality that they know how to compile.
In the early days of Lisp, the choice to do dynamic memory management was very brave. It took a long time to make GCs efficient, and generational GC was what finally, I think, made people believe this could be done well in large address spaces. (In small address spaces it was workable because touching all the memory to do a GC did not introduce thrashing, since little or no data was "paged out". And on modern hardware, memory is cheap, so size is not always a per se issue.)
But there was an intermediate time in which lots of memory was addressable but not fully realized as RAM, only virtualized, and GC was a mess in that space.
The Lisp Machines had 3 different unrelated but co-resident and mutually usable garbage collection strategies that could be separately enabled, 2 of them using hardware support (typed pointers) and one of them requiring that computation cease for a while because the virtual machine would be temporarily inconsistent for the last-ditch thing that particular GC could do to save the day when otherwise things were going to fail badly.
For a while, dynamic memory management would not be used in real time applications, but ultimately the bet Lisp had made on it proved that it could be done, and it drove the doing of it in a way that holding back would not have.
My (possibly faulty) understanding is that the Java GC was made to work by at least some displaced Lisp GC experts, for example. But certainly the choice to make Java be garbage collected probably derives from the Lispers on its design team feeling it was by then a solved problem.
This aspect of languages' designs, whether they lead or follow, whether they are brave or timid, is not often talked about. But I wanted to give the idea some air. It's cool to have languages that can use existing tech well, but cooler, I personally think, to see designers consciously driving the creation of such tech.
screwlisp reshared this.
screwlisp reshared this.
@nosrednayduj
First, thanks for raising that example. It's interesting and contains info I hadn't heard.
In a way, it underscores my point: that for a while, it was an open question whether we could implement GC, but a bet was made that we could.
You could view that as saying they only implemented part of Lisp, and that the malloc stuff was a stepping out of paradigm, an admission the bet was failing for them in that moment. Or you could view it as a success, saying that even though some limping was required of Lisps while we refined the points, it was done.
As I recall, there was some discussion of adding a GC function. At the time, the LispM people probably said "which GC would it invoke" and the Gensym people probably said "we don't have one". That was the kind of complexity that the ANSI process turned up and it's probably why there is no GC function. (There was one in Maclisp that invoked the Mark/Sweep GC, but the situation had become more complicated.)
Also, as an aside, a personal observation about the process: With GC, as with other things like buffered streams, one of the hardest things to get agreement on was something where one party wanted a feature and another said "we don't have that, I'd have to make it a no-op". Making it a no-op was not a lot of implementation work. Just seeing and discarding an arg. But it complicated the story that was told, and vendors didn't like it, so they pushed back even though of all the implementations they had the easiest path (if you didn't count "explaining" as part of the path).
@nosrednayduj
And, unrelated, another reference I made in the show was to Clyde Prestowitz and his book The Betrayal of American Prosperity.
goodreads.com/book/show/810439…
Also an essay I wrote that summarizes a key point from it, though not really related to the topic of the show. I mention it just because that point will also be interesting maybe to this audience on the issue of capitalism if not on the specific economic issue we were talking about tonight:
netsettlement.blogspot.com/201…
screwlisp reshared this.
@nosrednayduj
Also Naomi Klein's book The Shock Doctrine, very politically relevant this week, traces a lot of political ills to Milton Friedman and his ideas.
goodreads.com/book/show/123730β¦
your notes on continuations are interesting. I do a lot of Kotlin programming these days, and one of the features it adds on top of Java is continuations (they call them suspend functions). However, unlike Scheme, you can only call suspend functions from other suspend functions, leading to two different worlds: the continuation-supported one and the regular one.
I measured a 30% performance hit when changing code to use suspend functions instead of regular functions. Nevertheless, this has not stopped people from using them for everything.
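Python's async/await has the same two-world split that Kotlin's suspend functions do, so here's a minimal sketch of the phenomenon in Python rather than Kotlin (the function names are invented for illustration): a coroutine can only be awaited from another coroutine, and crossing back from the regular world requires an explicit bridge like asyncio.run.

```python
import asyncio

async def fetch_value():
    """A 'colored' function: it may suspend, so it can only be
    awaited from inside another async function."""
    await asyncio.sleep(0)  # a suspension point
    return 42

async def caller():
    # Fine: async code can await async code.
    return await fetch_value()

def plain_caller():
    # A plain function cannot await; it must start an event loop
    # to cross from the regular world into the async one.
    return asyncio.run(caller())

print(plain_caller())  # 42
```

The bridge at the boundary is where the "two different worlds" friction shows up, exactly as described above.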
@loke ooh, that is interesting, thanks! I did not know that Kotlin also had that feature (in a limited way).
Yes, the performance hit probably comes from copying the stack or restoring the stack. For small stacks this is trivial, but oftentimes continuations are most useful when computing recursive functions over very large data structures, and you usually have very large stacks for those kinds of computations.
Delimited continuations (DCs) can help with that problem, apparently. And the API for DCs also happens to make them more composable with each other, since you can kind-of unfreeze a computation inside of another frozen computation.
That might be why Kotlin has those restrictions on continuations.
yes, Scheme led on continuations before it was a well-established idea, and I think there is some regret about that because of the difficulties involved to which you had alluded, especially in compiling efficient code. Nowadays the common wisdom is that delimited continuations, which I believe are implemented by copying only part of the stack, are better in every way. I have no strong opinions on the issue, I just thought it was interesting how Scheme solved the problems of optimizing tail recursion and 'creating actors', i.e. capturing closures, and both of these things involve stack manipulation, which naturally leads to the idea of continuations.
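For flavor, a rough analogy in Python (not Scheme, and only an approximation): a generator behaves like a one-shot delimited continuation, in that suspending it captures only its own frame rather than the whole call stack, which is part of why delimited continuations are cheaper to implement than full call/cc.

```python
def counter(limit):
    """A generator frame acts like a one-shot delimited
    continuation: only this frame is suspended and resumed,
    not the entire call stack beneath it."""
    total = 0
    for i in range(limit):
        total += i
        yield total  # suspend here; the caller holds the continuation

gen = counter(4)
# Each next() resumes the captured frame exactly where it left off.
results = [next(gen) for _ in range(4)]
print(results)  # [0, 1, 3, 6]
```

Capturing one frame is constant-size work; capturing a full continuation means dealing with an arbitrarily deep stack, which is the cost being discussed above.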
As a Haskeller I definitely appreciate the study of programming language theory, and how much of Haskell is built on the work of Lisp. The Haskell team's many innovations include asking questions like, "what if everything was lazy by default?" Or, "what if we abolish mutating variables and force the programmer to pop the old value and push the new value on the stack every time?" Or "what if tail recursion was the only way to loop?" As it turns out, this gives an optimizing compiler the freedom to very aggressively optimize code, and can result in very efficient binaries. Oftentimes, both programmers and language implementors can do a lot more when constrained to use fewer features.
But which features to use and which to remove requires a lot of wisdom and experience. So the Haskell people could have only felt comfortable asking those questions after garbage collection and closures had become a well-established practice, and we can thank the work of the Lisp team for those contributions.
Generational GC changes the way you program and it's not *just* that it's efficient.
We used MIT-Scheme (which, by the early 90s, was showing its age). We did all manner of weird optimizing to use memory efficiently. Lots of set! to re-use structure where possible. Or (map! f list) -- same as (map ...) but with set-car! to modify in-place -- because it made a HUGE difference: recreating all of those cons cells bumps memory use => next GC round is that much sooner (and then everything STOPS, because Mark & Sweep). Also stupid (fluid-let ...) tricks to save space in closures.
We were writing Scheme as if it were C because that was how you got speed in that particular world.
1/3
screwlisp reshared this.
And then Bruce Duba joined the group (had just come from Indiana).
"Guys, you're doing this ALL WRONG",
"Yeah, we know already. It's ugly, impure, and sucks. But it's faster, unfortunately",
"No, you need a better Scheme; you should try Chez".
...and, to be sure, just that much *was* a significant improvement. Chez was much more actively maintained, had a better repertoire of optimizations, etc...
... but the real eye-opener was what happened when we ripped out all of the set! and fluid-let code. That's when we got the multiple-orders-of-magnitude speed improvement.
2/3
Digital Mark λ βοΈ πΉ π½ reshared this.
See, setq/set! is a total disaster for generational GC. It bashes old-space cells to point to new-space; the premise of generational GC being that this mostly shouldn't happen. The super-often new-generation-only pass is now doing a whole lot of old-space traversal because of all of those cells added to the root set by the set! calls, ... which then loses most of the benefit of generational GC.
(fluid-let and dynamic-wind also became way LESS cheap, mainly due to missing multiple optimization opportunities)
In short, with generational GC, straightforward side-effect-free code wins. It took a while for me to recalibrate my intuitions re what sorts of things were fast/cheap vs not.
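A toy sketch of why, assuming the usual write-barrier/remembered-set implementation of generational GC (all names here are invented for illustration): every set!-style store of a new-space pointer into an old-space cell adds work for the next minor collection.

```python
# Toy model of the write barrier that makes set! expensive under
# generational GC: every store of a new-space pointer into an
# old-space cell must be recorded so minor collections can find it.

class Cell:
    def __init__(self, value, generation):
        self.value = value
        self.generation = generation  # 0 = new space, 1 = old space

remembered_set = []  # extra roots for the next minor GC

def write_barrier(target, value):
    """Store `value` into `target`, logging old->new pointers."""
    target.value = value
    if target.generation > 0 and isinstance(value, Cell) and value.generation == 0:
        remembered_set.append(target)

old = Cell(None, generation=1)
young = Cell("fresh", generation=0)
write_barrier(old, young)      # old space now points at new space: logged
write_barrier(young, "plain")  # new-space store: no logging needed

print(len(remembered_set))  # 1: only the old->new store was logged
```

Side-effect-free code never grows the remembered set, which is the mechanical reason "straightforward side-effect-free code wins" here.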
3/3
screwlisp reshared this.
There were other weirdnesses as well.
Even if GC saves you the horror of referencing freed storage, or freeing stuff twice, you still have to worry about memory leaks; moreover, dropping references as fast as you can matters
With copying GC, leaks are useless shit that has to be copied -- yes it eventually ends up in an old generation but until then it's getting copied -- and copying is where generational GC is doing work, and it's stuff unnecessarily surviving to the medium term that hurts you the most (generational GC *relies* on stuff becoming garbage as quickly as possible)
And so, tracking down leaks and finding places to put in weak pointers started mattering more...
4/3
@kentpitman @cdegroot @ramin_hal9001
5? maybe for mark&sweep
but I can't see how more than 2 would ever be necessary for a copying GC. Once you have enough space to copy everything *to* (on the off-chance that absolutely everything actually *needs* to be copied), you're basically done...
... and if you're following the usual pattern where 90% of what you create becomes garbage almost immediately, you can get by with far less.
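A toy sketch of that semispace idea in Python (recursive for brevity, whereas a real Cheney-style collector scans to-space iteratively; all names are invented): copying only what's reachable means garbage costs nothing at collection time.

```python
# Toy semispace ("copying") collection: everything reachable from
# the roots is copied to to-space; garbage is simply left behind.

def collect(roots, heap):
    """heap maps object id -> list of child ids; returns new roots and heap."""
    to_space = {}
    forwarding = {}  # old id -> new id, so shared objects are copied once

    def copy(obj):
        if obj in forwarding:
            return forwarding[obj]
        new_id = len(to_space)
        forwarding[obj] = new_id
        to_space[new_id] = []  # reserve the slot before copying children
        to_space[new_id] = [copy(child) for child in heap[obj]]
        return new_id

    return {copy(r) for r in roots}, to_space

heap = {0: [1], 1: [], 2: [3], 3: []}   # objects 2 and 3 are unreachable
roots, new_heap = collect({0}, heap)
print(len(new_heap))  # 2: only the live objects were copied
```

This is the "you're basically done with two spaces" observation: to-space only needs room for the worst case where everything survives, and in the usual case where 90% dies young, the copy touches very little.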
@wrog
> but I can't see how more than 2 would ever be necessary for a copying GC
It's not "necessary", it's "to make GC performance a negligeable percentage of overall CPU".
It was about a theoretical worst case as I recall, certainly not about one particular algorithm.
And IIRC it was actually a factor of 7 -- 5 is merely a good mnemonic which may be close enough. (e.g. perhaps 5-fold keeps overhead down to 10-20% rather than 7's 1%, although I'm making it up to give the flavor -- I haven't read the book for 10-20 years)
But see the book (may as well use the second edition) if and when you care; it's excellent. Mandatory I would say, for anyone who wants to really really understand all aspects of garbage collection, including performance issues.
screwlisp reshared this.
@wrog
'setq' and friends have been criticized forever, but avoiding mutation is easier said than done. Parsing arbitrarily large sexpr's requires mutation behind the scenes -- which ideally is where it should stay.
Any language we use that helps avoid mutation is a good thing. 100% avoidance is a matter of opinion -- some people claim it was proven to be fully avoidable decades ago, others say the jury is still out on the 100% part.
I don't know enough to have an opinion on whether 100% has been completely proven, but it's attractive.
I respect you, and your contributions to Lisp and the community. So I dislike nitpicking you. But:
> Common Lisp macros more unhygienic than they actually are
This is a biased phrasing. There are hygienic macro systems, and unhygienic macro systems. One cannot assign a degree of "hygienic-ness" without simultaneously defining what metric you are introducing.
We all can agree that one can produce great code in Common Lisp. It's not like Scheme is *necessary* for that.
But de gustibus non est disputandum. There are objective qualities of various macro systems -- and then there's people's preferences about those qualities.
Bottom line: it seems you are saying that Lisp macros aren't so bad if their use is constrained to safe uses, and I would agree with *that*.
> I don't quite remember if the original language feature had fully worked through all the tail call situations in the way that ultimately it did.
My memory is that the Scheme interface for continuations was completely worked out when Scheme was born, but implementation issues were not -- beyond existence proof that is.
> But it was brave to say that full continuations could be made adequately efficient.
Yes it was!
> the Lisp community in general, and here I will include Scheme in that
Planner, for instance, went in a quite different direction. Micro-Planner (and its SHRDLU) inspired Prolog. Robert Kowalski said that "Prolog is what Planner should have been" (it included unification but excluded pattern-directed invocation for example), see Kowalski, R. (1988). "Logic Programming." Communications of the ACM, 31(9) -- although the precise phrasing I think is from interviews.
Anyway, Prolog was not a Lisp, but sure, definitely Scheme is. The history of Lisp spinoffs created quite a bit of CS history.
I did professional development in Scheme (at Autodesk, before that division was axed) -- it's certainly a workable language in the real world.
But we know that Common Lisp is too, obviously.
screwlisp reshared this.
> 2 of them using hardware support (typed pointers)
I learned about typed pointers from Keith Sklower, from my brief involvement in the earliest days (1978?) of Berkeley's Franz Lisp (implemented in order to support the computer algebra Macsyma port to Vaxima), and it blew my mind. Horizons extended hugely.
A few years later everyone seemed to just take the idea in stride. Yet no one seems to comment on the impact of big-endian versus little-endian architectures on typed pointers; everyone seems to regard it as a matter of taste. It's not always; it impacts low-level implementations.
>My (possibly faulty) understanding is that the Java GC was made to work by at least some displaced Lisp GC experts
I used to regularly talk to the technical lead for that group at Sun for unimportant reasons, and I have every reason to think that the entire team was absolutely brilliant.
I don't recall whether some of them were displaced Lisp GC experts, but I do recall that I had plenty of criticisms about Java the language, but tended to find few, if any, about Java the implementation. And they kept improving it.
screwlisp reshared this.
Your understanding is mostly faulty. The original GC was written by me, and I'm no Lisp GC expert. I was (and still am) an admirer of Lisp. I wrote the code for my whole PhD thesis in Lisp. My admiration for garbage collection started earlier, when I was a big user of Simula in the 70s. But the motivation for GC in Java was different: the motivation was all about reliability and security. A leading cause of security vulnerabilities has always been buggy code. And one of the leading root causes of many long-standing, hard-to-diagnose-and-fix bugs has been flaky storage management. Garbage collection goes a long way to increasing system reliability, and hence security. I had always wanted to make GC more mainstream.
When you described garbage collection to senior management back in the day, their reflexive judgement was: "bullshit! Lazy engineers just don't want to clean up their mess". But when they see measurable improvements in system robustness, and corresponding decreases in failures, they Notice.
@kentpitman @cdegroot @ramin_hal9001
Cees de Groot, Kent Pitman, Ramin Honary, screwlisp #commonLisp #lisp user interfaces and the ages, #climate
Kent: https://nhplace.com/ https://climatejustice.social/@kentpitman https://en.wikipedia.org/wiki/Kent_M._Pitman https://netsettlement.blogspot.com/ Cees de Groot('s book, The Genius of Lisp) : ht...Lispy Gopher Climate extras (toobnix)
reshared this
A follow-on to my "Nazi Sucker-punch Problem" post, to address the most common argument I get, which boils down to:
"""
Moderated registration won't stop Nazis, because they'll just pretend to be human to fool moderators, but it will stop normal people, who won't spend the effort to answer the application question or want to wait for approval.
"""
Okay, I'm going to try to use points that I hope are pretty acceptable to anyone arguing in good faith, and I'm going to expand the definition of Nazis to "attackers" and lump in bigots, trolls, scammers, spammers, etc. who use similar tactics.
Attackers: we can group attackers into two main types: dedicated and opportunistic. Dedicated attackers have a target picked and a personal motive -- they hunt. Opportunistic attackers have an inclination and will attack if a target presents itself -- they're scavengers. In my years of experience as an admin on multiple Fedi servers, most attackers are opportunistic.
Victims: when someone is attacked, they (and people like them) will be less likely to return to the place they were attacked.
In general: without a motive to expend more effort, humans will typically make decisions that offer the best perceived effort-to-reward ratio in the short-term (the same is true of risk-to-reward).
Why does any of this matter?
Because it all comes down to a fairly simple equation for the attackers: effort > reward. If this is true, then the opportunistic attackers will go elsewhere. If it isn't true, then their victims will go elsewhere.
How can we tip that scale out of the attackers' favor?
By making sure moderation efforts scale faster against attackers' behaviors than against normal users' behaviors.
- A normal user only has to register once, while an attacker has to re-register every time they get suspended.
- A normal user proves their normality with each action they take, while every action an attacker takes risks exposing them to moderation.
- A new user / attacker likely spends a minute or two signing up, while a moderator can review most applications in a matter of seconds. Yes, attackers can automate signups to reduce that effort (and some do, and we have tools to address some of that, but again, most attackers aren't dedicated).
- Reviewing an application is lower effort than trying to fix the damage from an attack. As someone who gets targeted regularly by attackers from open-registration servers, I'd personally rather skim and reject a page-long AI-generated application, than spend another therapy session exploring the trauma of being sent execution videos.
I believe this points to moderated registration being the lowest effort remedy for the problem of the Nazi Sucker-punch. So before we "engineer a new solution" that doesn't yet exist, we should exhaust the tools that are already available on the platform today. Yes, we could implement rate limits, or shadow bans, or trust networks, or quarantine servers, but we don't have those today, and even if we did, there's no evidence that those would be a better solution for Fedi than moderated signups.
Will it stop *all* the attackers? No. But it will stop most opportunistic attackers.
Will it deter *some* potential new users? Yes. But communities are defined by who stays, not by how many come through the door.
lgbtqia.space/@alice/115499829…
reshared this
Nope, cause I never saw it until I opened your post on your homeserver.
Mastodon sync bugs :/
So, there was a post on the fedi about a project Johnny Harris was working on. Some people in that thread seemed to think that he was untrustworthy, even going so far as to posit that he might be a CIA asset. I had no idea why they believed this, but it was echoed by more than one person.
I am familiar with Johnny's work. He always seems to do a good job of citing his sources (at least to my casual inspection). I asked about this distrust but received no response. Perhaps they thought I was sealioning?
So, I'm asking here: is there an actual valid reason to distrust him that I'm simply not aware of, or is it just stemming from the fact that he likes to shine light on things that some would rather not have light shined on?
If a Klein bottle could wear pants, would it be like this or like this?
#mathstodon #math #maths #shitpost
like this
reshared this
reshared this
@Digital Mark λ Well, there's an entrance exam for a job I need to study for so that I can continue to pay for internet (and by extension, access to the VR).
Apparently the one I did earlier was only the prequalification exam.
I just saw a Wal-Mart greeter shrink wrapped to a chair, holding a jar, trying to raise $50 for... something I couldn't read.
Dear Wal-Mart,
You're a billion dollar multinational corporation paying your employees the least you can possibly get away with. Why is this necessary?
Chris Ford reshared this.
like this
reshared this
I don't think programmers and sysadmins get how much there is to learn and how intimidating it is for normal people to host their own software.
For one, most of us don't have a computer that is running 24/7, which means we need to rent a server, which we have no idea how to go about doing.
And then there's an entire arcane art to running software that can speak to the internet without your server being taken over and used to send spam to half the planet
reshared this
Hey Fedi,
For those who don't know, my mother had a major #stroke a little over a month ago. We're very fortunate to live in a country (Canada) where we have free #healthcare, but as her discharge from the #hospital looms closer, we're having to raise funds to make #accessibility modifications to my parents' home so that she can return. Boosts are welcome (and appreciated).
reshared this
Him: well what's your big brain idea to eliminate food assistance fraud?
Me: universal food assistance
Him:
Me: because there wouldn't be a system to cheat, everyone would just get a check
Him: and who does that benefit?
Me: family farmers and human beings who rely on food for nutrition
Him: what about rich people?
Me: what about them?
Him: you would give them food assistance too?
Me: that's what universal means
Him: you can't do that
Me: yesterday you said I couldn't tax them and now you won't let me feed them either?
Him: they can afford food
Me: then it should be fine to tax them
Him: but if you tax them they won't have as much money
Me: I'm willing to offer universal food assistance
Jonathan Lamothe likes this.
reshared this
like this
Are there any #Lisp programmers out there who use a #ScreenReader? Given how messy Lisp can be to read without proper indentation (which I imagine wouldn't translate well on a screen reader), I can't see it being an easy language to work in without being able to see it.
I've been thinking about a way to make an editor that lets you explore a Lisp program by walking through the forms in the program in a manner similar to the way one might navigate in a MUD. Is this a crazy idea, or one with some merit?
Not to discourage new ways to solve old problems, I just want to point out that your lisp journey will be much easier if you use an editor designed for the task, which includes learning a set of editor operations that function at the "form"/s-exp level.
In Emacs these operations include traversal functions like `forward-sexp`, and the very useful `indent-sexp` function. They're part of basic Emacs behavior, you don't even need to change your Emacs configuration to enable them, and they're useful on data other than Lisp source code.
Once you can easily navigate up, down, and around forms, checking to see where a form starts and ends is easy without any sensitivity to how it's indented.
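As a rough illustration of why indentation doesn't matter to form-based navigation, here's a toy forward-sexp in Python (unlike the real Emacs command, it ignores strings, comments, and other syntax; the helper name is made up):

```python
# Toy version of forward-sexp: find where the form starting at
# `pos` ends by matching parentheses, ignoring indentation entirely.

def forward_sexp(text, pos):
    depth = 0
    for i in range(pos, len(text)):
        if text[i] == "(":
            depth += 1
        elif text[i] == ")":
            depth -= 1
            if depth == 0:
                return i + 1  # position just past the closing paren
    raise ValueError("unbalanced form")

src = "(defun f (x)\n(let ((y 1))\n(+ x y)))"  # badly indented on purpose
end = forward_sexp(src, 0)
print(src[:end] == src)  # True: the whole top-level form was found
```

The structure comes entirely from the parens, which is why a screen-reader workflow built on form traversal doesn't depend on layout at all.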
like this
Justin To #ΠΠ΅ΡΠΠΎΠΉΠ½Π΅ reshared this.
Can anyone recommend a FOSS solution for converting an .azw file to .epub on Debian that isn't Calibre?
I thought maybe pandoc could do it, but apparently not.
reshared this
A question for the #lisp folx:
What, if anything, is the difference between #'(lambda ...) and just plain (lambda ...)?
They seem functionally equivalent to me.
Plain (lambda () ...) is a macro that expands to (function (lambda () ...)); #'(lambda () ...) uses the #' reader macro, which reads as the same (function (lambda () ...)).
clhs.lisp.se/Body/m_lambda.htm
Function is a special operator that returns a function. It takes either a function name or a lambda expression. The second case is what is happening here.
clhs.lisp.se/Body/s_fn.htm#fun…
A lambda expression is a list of the symbol lambda, a lambda list, and a body.
I've been taking a bunch of tests to qualify for a transcription job. They're not easy and I need a perfect score to pass. I finally failed one of the tests but managed to pass it on the retry.
They're really picky about their style guide. Fortunately, it basically amounts to syntax rules and I've been dealing with compilers that are equally picky about syntax for decades.
It also helps that all throughout my schooling my mother worked at the local university proofreading research scientists' papers and she insisted on proofreading all my essays too.
I never thought I'd end up being happy about that.
@π¨π¦ CleoQc ππ¦π§Άππ It's a legal transcription job. They do regular transcription too, but they have AI doing much of it, so they're not looking for new people there.
They're smart enough to realize that AI isn't currently sophisticated enough to properly follow the various style guides required by their legal clients. I guess they realize that if the quality of the work drops, they'll lose the contracts.
That said, I'm sure all the work I do is going to be used to try to train an AI to replace me, but that's probably true of any job at this point.
Judy Anderson reshared this.
#LambdaMOO
like this
Digital Mark λ βοΈ πΉ π½ reshared this.
Got my hands on a #shortwave radio, but the fact that I live in a giant concrete box doesn't seem to be helping my reception. Seeing what I can do about that.
Are there any broadcasts that are worth catching that I'd be able to get in Southern Ontario?
Kevin Davy reshared this.
reshared this
(defun lambdamoo-tab-complete ()
"Complete user input using text from the buffer"
(interactive)
(when (memq (char-before) '(? ?\r ?\n ?\t ?\v))
(user-error "Point must follow non-whitespace character"))
(let (replace-start
(replace-end (point))
replace-text found-pos found-text)
(save-excursion
(backward-word)
(setq replace-start (point)
replace-text (buffer-substring replace-start replace-end))
(when (or (null lambdamoo--search-text)
(not (string-prefix-p lambdamoo--search-text replace-text t)))
(setq-local lambdamoo--search-text replace-text)
(set-marker lambdamoo--found-point (point)))
(goto-char lambdamoo--found-point)
(unless
(setq found-pos
(re-search-backward
(concat "\\b" (regexp-quote lambdamoo--search-text))
(point-min) t))
(setq-local lambdamoo--found-point (make-marker))
(user-error "No match found"))
(set-marker lambdamoo--found-point found-pos)
(forward-word)
(setq found-text (buffer-substring found-pos (point))))
(delete-region replace-start replace-end)
    (insert found-text)))

#emacs #lisp #moo #mud #LambdaMOO
reshared this
@Omar Antolín Actually, looking more closely at it, it might just do the trick.
I love it when I spend hours re-writing code that essentially already exists. ;)
In the end, I wound up just binding tab to dabbrev-expand.
It might seem like I wasted a bunch of time writing that, but at least I learned a bunch along the way.
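For anyone wanting to do the same, a minimal sketch of that binding; `lambdamoo-mode-map` is a hypothetical keymap name here, so substitute whatever mode map your MOO buffer actually uses:

```lisp
;; Hypothetical sketch: bind TAB to dabbrev-expand in the MOO buffer.
;; `lambdamoo-mode-map' is an assumed name -- use your real mode map.
(with-eval-after-load 'lambdamoo
  (define-key lambdamoo-mode-map (kbd "TAB") #'dabbrev-expand))
```

dabbrev-expand searches the current buffer (and then other buffers) for words starting with the text before point, which is essentially what the hand-rolled function above does.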
Me realizing that festival uses a Lisp dialect:
Oh cool, I can add accessibility features to my Emacs stuff by procedurally generating the code in elisp.
Me realizing that festival's symbols are case sensitive:
Welp, I guess I can just do

(defun festival-saytext (text)
  (format "(SayText %S)" text))

and do the rest of the processing in elisp directly. That's probably all I wanted anyway.
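A sketch of actually piping those forms to festival from Emacs, not the author's setup: it assumes a `festival` binary on PATH, and the `--pipe` flag is an assumption worth checking against `festival --help` on your system:

```lisp
;; Sketch: send SayText forms to a festival subprocess.
;; Assumptions: `festival' binary on PATH; "--pipe" mode available.
(defun my-festival-command (text)
  "Build a SayText form for TEXT, escaping it as a Lisp string."
  (format "(SayText %S)\n" text))

(defun my-festival-say (text)
  "Send TEXT to a running festival process, starting one if needed."
  (process-send-string
   (or (get-process "festival")
       (start-process "festival" nil "festival" "--pipe"))
   (my-festival-command text)))
```

Using %S keeps quoting correct even when the text itself contains double quotes.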
Isaac Kuo likes this.
In 1959, a cement mixer with a full load of cement wrecked near Winganon, Oklahoma.
By the time a tow truck came to haul it away, all of the cement had hardened inside the mixer. The tow truck couldn't remove all of the wreckage at once because of the weight, so the crew hauled only the cab and frame, intending to come back for the detached mixer later. That never happened.
Today, 67 years later, it still sits where it fell. Locals have painted it and added "rocket thrusters" to make it look like a space capsule.
like this
reshared this
Honestly, this is what a space capsule should look like.
@LanceJZ @isaackuo
That's a piece of Art, and congratulations to the locals for maintaining it.
(Actually the capsule would have had thrusters: there would be Capsule:Flotation Bag:Heat Shield:Thruster Pack, with the thruster pack held on by straps so it could be jettisoned after deceleration but before hitting the atmosphere. On one mission they re-entered with the thruster pack attached because the flotation bag light had come on and they were concerned about the heat shield.)
@Cadbury_Moose @LanceJZ While this is true of the Mercury, Gemini, and Apollo capsules (including the Apollo service module), a reusable capsule could enter nose first rather than tail first.
Nuclear missile reentry heat shields are blunt cones entering nose first.
That said, Dragon does do tail first reentry, placing the thrusters on the sides rather than the tail. I just think it "looks" wrong.
@isaackuo @Cadbury_Moose @LanceJZ That is only true for modern ballistic missile RVs. Initially they were launched blunt end forward, since the materials of that time couldn't handle a more accurate sharp-end-forward reentry, which causes higher temperatures. (That is also why the Space Shuttle got a rather blunt nose.)
Also, there are far more than just one kind of capsule. Imagine this as a biconic lifting body, and it isn't that far-fetched for it to retain its aft thrusters.
@Cadbury_Moose @isaackuo there has never been a capsule with thrusters on it from Apollo on.
@LanceJZ @Cadbury_Moose This is what people think of when they think of the Apollo "capsule". It has a big main thruster in the tail, and lots of thruster clusters all over the place.
That's the reason why the artists modifying the cement mixer tank felt the need to add thrusters. It didn't look right without them, because the overall shape looks like a capsule plus its service module.
@LanceJZ @Cadbury_Moose I know what you mean, but that's what people think of.
One reason they think of the Apollo "capsule" as the Command Module and Service Module is that there isn't any footage of the Command Module by itself in space. No one stayed behind on the Service Module to film the Command Module after separation.
(The Command Module is just the return capsule.)
Bob Jonkman reshared this.
@Cadbury_Moose @isaackuo @archaeohistories
Cute, but a big hazard if a vehicle has to leave the road. I would move this thing off.
Or at least further away from the road. A crane could do this in less than four hours. Much cheaper than having a vehicle plow into it.
@davevolek That would likely require someone to pay for it. Given the little bits I've gleaned about local governance in the U.S. I can easily see no one having any spare budget for it.
The photo looks like a rural highway to me. This means fairly high speeds. If a car "hits the ditch," a bumpy ride turns into a fatal accident.
I suspect the jurisdiction belongs to whoever owns the highway. It could be the state or it could be the county.
A couple of heavy tow wreckers could move this machine. Less than $5000.
But there may be political pressure to keep the machine in place. It does look cute.
There may indeed be more to the story.
I come from a rural background. Many people drive 80 kph (50 mph) on these roads. And they hit the ditch more often.
There might be some weight restrictions that prohibit big trucks on this road. The pavement in the photo (or oily gravel) looks a little on the weak side to me.
Anyways, we need more info to know why this thing has remained in the ditch for 67 years.
Hey all,
I have a friend who's been trying to get on Mastodon but tells me that it doesn't seem to play well with screen readers. I know there are plenty of people on the fedi who do use screen readers, but I have no experience with them myself, so I can't really direct him.
Can someone who does use a #ScreenReader point me in the direction of some resources that might be useful?
#AskFedi #a11y
like this
reshared this
I'm certain I have reinvented a wheel here, but for the life of me I can't find it. Have I?
(defmacro jrl-extract-list (vars list &rest body)
  "Split LIST into the individual variables VARS, then run BODY."
  (let ((list* (gensym)))
    (append
     `(let ,(cons (list list* list) vars))
     (seq-map (lambda (var)
                `(setq ,var (car ,list*)
                       ,list* (cdr ,list*)))
              vars)
     body)))

#emacs #lisp #elisp
Edit: Of course it was pcase.
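For anyone else hunting for this wheel: `pcase-let` (built in) and `seq-let` (from seq.el) both already destructure a list into variables:

```lisp
(require 'seq)

;; pcase-let destructures with a backquote pattern:
(pcase-let ((`(,a ,b ,c) '(1 2 3)))
  (+ a b c))  ; → 6

;; seq-let does the same with a flat variable list:
(seq-let (a b c) '(1 2 3)
  (+ a b c))  ; → 6
```

Both leave extra variables bound to nil when the list is shorter than the pattern expects, much like the macro above.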
reshared this
Cats' big contribution to science
#Cats #CatsOfMastodon #Humor
Jonathan Lamothe reshared this.
Was loading stuff onto my Jellyfin server for my mom to watch in the hospital. She liked Star Trek and I thought Strange New Worlds might be a good idea because it's a more fun show than a lot of the other recent Trek shows.
I started watching the first episode to be sure it was working, and realized I'd forgotten the whole thing about what happens with Captain Pike.
...maybe this show isn't the vibe after all...

Baldur Bjarnason
in reply to Baldur Bjarnason
Best part? It's always somebody with years of experience. Exactly the demographic that is supposedly able to use this shit safely, but my impression is they're just as bad as the novices.
This is happening IMO because of one of the fundamental issues with software dev (and this predates "AI" and was one of the themes of my first book):
Most software projects fail and most of what gets shipped doesn't work. The way the industry is set up means there is little downside to shipping broken software
Nicole Parsons reshared this.
Baldur Bjarnason
in reply to Baldur Bjarnason
Few devs have a reference point for genuinely working software. Usability labs were disbanded over 20 years ago. Very few companies do actual user research, so their designs are based on fiction. Bugs are the norm.
Alienation is also the norm for devs, both socially and organisationally. Whether it works for the end user doesn't cross their mind. Whether the design fulfils business needs is not their problem. Bugs are a future problem. Ship insecure software and patch it as user data gets stolen
Nicole Parsons reshared this.
Baldur Bjarnason
in reply to Baldur Bjarnason
Devs are so disconnected from the output of their work that many of the norms of the industry are outright illegal: there's a good chance that if you follow popular practices for a React project, for example, you'll end up with a site or product that violates accessibility law in several countries.
Few devs would even know where to begin to look to answer the question "does my software work for the people forced to use it?"
Nicole Parsons reshared this.
Baldur Bjarnason
in reply to Baldur Bjarnason
Nicole Parsons reshared this.
Kale
in reply to Baldur Bjarnason
Whoever came up with 'Yes/Not now' needs to be dragged into the streets and shot.
No wonder some folks don't understand consent - our software doesn't allow for it.
Alexandre Oliva
in reply to Kale
or the "You must allow our JavaScript programs to run on your browser, otherwise we won't allow you to get to the information that we're legally required to provide you with"
CC: @baldur@toot.cafe
Chip Butty
in reply to Baldur Bjarnason
Nicole Parsons reshared this.
Thomas - NBA
in reply to Baldur Bjarnason
Bruno Nicoletti
in reply to Baldur Bjarnason
Kale
in reply to Baldur Bjarnason
I still remember talking to a twitter dev whose utterly ridiculous take on XYZ issue went viral.
They told me 'Uhhh, I've never had this much attention on me, my tweets never go beyond my social circle. I had to turn off my phone. It kept buzzing.'
... this was a person who worked on the UI. No shit they had no idea how to deal with high volume 'oh, you just got 200k likes' kinda shit, they never experienced it themselves.
LillyLyle/Count Melancholia
in reply to Baldur Bjarnason • • •Raymond Neilson
in reply to Baldur Bjarnason
Honestly, I think a big part of it is more than our industry still being deeply immature; I think the most important throughline of the research on LLMs' effects on cognition is a consistent attack on metacognition, which seemingly doesn't abate with experience. The same corrosion happens to juniors and seniors alike, but the seniors have more rationalizations at hand to pretend it doesn't.
(Speaking of, that "cognitive surrender" paper is the latest in that theme: papers.ssrn.com/sol3/papers.cf…)
Kraftwerk-Das Model Collapse
in reply to Raymond Neilson
David Beazley
in reply to Baldur Bjarnason
Apropos of nothing, the absolute worst implementation of Raft I've ever seen in my Raft course was by a pair of senior devs with a combined 60+ years of experience who decided to pair program together and announced ahead of time to the group that they were going to "win" Raft. They did not.
An undergraduate who'd never coded with sockets before did reasonably okay.
Kraftwerk-Das Model Collapse
in reply to Baldur Bjarnason
Nicole Parsons
in reply to Kraftwerk-Das Model Collapse
@dngrs
Most people ignore that it's a fossil fuel funded cult, intentionally designed to keep a dependency on oil.
wired.com/story/trump-energy-i…
bloomberg.com/news/articles/20…
nytimes.com/2025/10/27/technol…
cnbc.com/2025/11/20/us-approve…
Saudi Arabia aspires to be the next Russian Internet Research Agency, selling hack-for-hire election meddling.
npr.org/2020/08/18/903512647/s…
newyorker.com/news/news-desk/w…
With Larry Ellison's help.
independent.co.uk/news/world/a…
sfchronicle.com/tech/article/p…
intelligentcio.com/me/2023/12/…
Billionaire Larry Ellison plotted with Trump aides on call about overturning election
John Bowden (The Independent)
adingbatponder
in reply to Baldur Bjarnason
Lucas
in reply to Baldur Bjarnason
Orjan
in reply to Baldur Bjarnason
I feel like having spent most of my career building embedded systems aimed at industry rather than consumers, where customer support issues can mean sending a technician out with a USB stick on a ten-hour road trip, has insulated me from the worst madness.
If your sloppy coding breaks a manufacturing line or distribution network, bugs become expensive fast.
Though having said that, $CURRENT_EMPLOYER is pushing for greater use of LLMs in our workflow...
Niels Abildgaard
in reply to Baldur Bjarnason
This has been on my mind the last few days, too: mas.to/@nielsa/116171030173125…
I see so many people falling into LLM delusion, who I thought would know better, with no seeming pattern in *why* they fall for it.
Yes, the lack of negative incentives is certainly a factor.
My best explanation so far is that LLMs are kind of "acting" like 17 cons (some new, some old) in a trench coat, and different combinations of these trick different people who'd be able to resist most of these on their own.
Niels Abildgaard
2026-03-04 13:00:44
Martin Hamilton
in reply to Baldur Bjarnason
Paul Dot X
in reply to Baldur Bjarnason
choomba
in reply to Baldur Bjarnason
Elias Mårtenson
in reply to Baldur Bjarnason
I always enjoyed Universe Today, but once you get LLM psychosis, everything becomes possible.
youtube.com/watch?v=vkhZHR_hs4…
At the 47-minute mark it really goes off the rails.
SpaceX's AI Data Centres Might Actually Be A Good Idea. Here's Why
Fraser Cain (YouTube)
spidey
in reply to Baldur Bjarnason