Jonathan Lamothe reshared this.

@screwlisp is having some site connectivity problems, so he asked me to remind everyone that we'll be on the anonradio forum at the top of the hour (a bit less than ten minutes hence) for those who like that kind of thing:

anonradio.net:8443/anonradio

He'll also be monitoring LambdaMOO at "telnet lambda.moo.mud.org 8888" for those who do that kind of thing. There are also Emacs clients you should get if you're REALLY using telnet.

Topics for today, I'm told, may include the climate, the war, the oil price hikes, some rambles I've recently posted on CLIM, and the book by @cdegroot called The Genius of Lisp, which we'll also revisit next week.

cc @ramin_hal9001

#LispyGopher #Gopher #Lisp #CommonLisp



in reply to Kent Pitman

At the end of @screwlisp's show, in the discussion of @cdegroot's book, @ramin_hal9001 was talking about continuations. I wanted to make a random point that isn't often made about Lisp that I think is important.

I often do binary partitions of languages (like the static/dynamic split, but more exotic), and one of them is whether they are leading or following, let's say. There are some aspects in which Scheme is a follower, not a leader, in the sense that it tends to eschew some things that Common Lisp does for a variety of reasons, one of them being "we don't know how to compile this well". There is a preference for a tight formal semantics in which everything is well understood. It is perhaps fortunate that Scheme came along after garbage collection was well worked out and did not seem to fear that it would be a problem, but I would say that Lisp had already basically led on garbage collection.

The basic issue is this: Should a language incorporate things that maybe are not really well understood, just because people need to do them, on the assumption that it might as well standardize the 'gesture' (to use the CLIM terminology) or 'notation' (to use the more familiar term) for saying you want to do that thing?

Scheme did not like Lisp macros, for example, and only adopted macros once hygienic macros were worked out. Lisp, on the other hand, started with the idea that macros were simply necessary, and worried about the details of making them sound later.

Scheme people (and I'm generalizing to make a point here, with apologies for painting an entire group with a broad brush that is probably unfair) think Common Lisp macros more unhygienic than they actually are, because they don't give enough credit to things like the package system, which Scheme does not have, and which protects CL users from collisions much more than they acknowledge. They also don't fairly appreciate the degree to which Lisp-2 semantics protect against the most common capture scenarios that would happen all the time in Scheme if it had a symbol-based macro system. So CL isn't really as much at risk these days, though it was a bigger issue before packages. The point is that Lisp decided it would figure out how to tighten things later but that macros were too important to leave out, whereas Scheme held back the design until it knew.
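For illustration, a minimal R7RS-style sketch of the hygiene guarantee Scheme waited for (the `swap!` macro and variable names here are hypothetical, not from the discussion):

```scheme
;; A naive textual expansion of (swap! a tmp) would rebind the
;; user's `tmp`; syntax-rules renames the macro's own `tmp`
;; automatically, so no capture occurs.
(define-syntax swap!
  (syntax-rules ()
    ((_ x y)
     (let ((tmp x))
       (set! x y)
       (set! y tmp)))))

(define a 1)
(define tmp 2)
(swap! a tmp)  ; hygiene keeps the macro's tmp distinct from ours
;; a => 2, tmp => 1
```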

But, and this is where I wanted to get to, Scheme led on continuations. That's a hard problem, and while it's solvable, it's still difficult. I don't quite remember whether the original language feature had fully worked through all the tail-call situations in the way it ultimately did. But it was brave to say that full continuations could be made adequately efficient.
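For readers meeting them for the first time, a minimal sketch of what first-class continuations buy you (the `find-first` helper is illustrative, not from the discussion):

```scheme
;; call/cc reifies "the rest of the computation" as a function.
;; Calling that function aborts back to the capture point with
;; a new value -- here, an early exit from a traversal.
(define (find-first pred lst)
  (call-with-current-continuation
    (lambda (return)
      (for-each (lambda (x)
                  (when (pred x) (return x)))  ; jump out immediately
                lst)
      #f)))                                    ; fell off the end

(find-first even? '(1 3 4 5))  ; => 4
```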

And the Lisp community in general (and here I will include Scheme, though on other days I think these communities sufficiently different that I would not) has collectively been much more brave and leading than many language communities, which only grudgingly allow functionality they already know how to compile.

In the early days of Lisp, the choice to do dynamic memory management was very brave. It took a long time to make GCs efficient, and generational GC was, I think, what finally made people believe this could be done well in large address spaces. (In small address spaces, it was possible because touching all the memory to do a GC did not introduce thrashing, since no data was "paged out". And in modern hardware, memory is cheap, so size is not always a per se issue.)

But there was an intermediate time in which lots of memory was addressable but not fully realized as RAM, only virtualized, and GC was a mess in that space.

The Lisp Machines had 3 different unrelated but co-resident and mutually usable garbage collection strategies that could be separately enabled, 2 of them using hardware support (typed pointers) and one of them requiring that computation cease for a while because the virtual machine would be temporarily inconsistent for the last-ditch thing that particular GC could do to save the day when otherwise things were going to fail badly.

For a while, dynamic memory management would not be used in real time applications, but ultimately the bet Lisp had made on it proved that it could be done, and it drove the doing of it in a way that holding back would not have.

My (possibly faulty) understanding is that the Java GC was made to work by at least some displaced Lisp GC experts, for example. But certainly the choice to make Java be garbage collected probably derives from the Lispers on its design team feeling it was by then a solved problem.

This aspect of languages' designs, whether they lead or follow, whether they are brave or timid, is not often talked about. But I wanted to give the idea some air. It's cool to have languages that can use existing tech well, but cooler, I personally think, to see designers consciously driving the creation of such tech.

screwlisp reshared this.

in reply to Kent Pitman

Don't forget about Gensym's real-time Lisp, which was Common Lisp except for consing. It had its own memory manager (written in Lisp) which was more like C's malloc/free (only I think with more complex objects than just blocks of memory, but I don't really remember), but we got to use macros and all sorts of other stuff. (Exactly what is lost to two decades of doing other stuff.)

screwlisp reshared this.

in reply to Judy Anderson

@nosrednayduj
First, thanks for raising that example. It's interesting and contains info I hadn't heard.

In a way, it underscores my point: that for a while, it was an open question whether we could implement GC, but a bet was made that we could.

You could view that as saying they only implemented part of Lisp, and that the malloc stuff was a stepping out of paradigm, an admission the bet was failing for them in that moment. Or you could view it as a success, saying that even though some limping was required of Lisps while we refined the points, it was done.

As I recall, there was some discussion of adding a GC function. At the time, the LispM people probably said "which GC would it invoke" and the Gensym people probably said "we don't have one". That was the kind of complexity that the ANSI process turned up and it's probably why there is no GC function. (There was one in Maclisp that invoked the Mark/Sweep GC, but the situation had become more complicated.)

Also, as an aside, a personal observation about the process: With GC, as with other things like buffered streams, one of the hardest things to get agreement on was something where one party wanted a feature and another said "we don't have that, I'd have to make it a no-op". Making it a no-op was not a lot of implementation work. Just seeing and discarding an arg. But it complicated the story that was told, and vendors didn't like it, so they pushed back even though of all the implementations they had the easiest path (if you didn't count "explaining" as part of the path).

in reply to Kent Pitman

@nosrednayduj
And, unrelated, another reference I made in the show, to Clyde Prestowitz and his book The Betrayal of American Prosperity.
goodreads.com/book/show/810439…

Also an essay I wrote that summarizes a key point from it, though not really related to the topic of the show. I mention it just because that point may also be interesting to this audience on the issue of capitalism, if not on the specific economic issue we were talking about tonight:
netsettlement.blogspot.com/201…

screwlisp reshared this.

in reply to Kent Pitman

Your notes on continuations are interesting. I do a lot of Kotlin programming these days, and one of the features it adds on top of Java is continuations (they call them suspend functions). However, unlike Scheme, you can only call suspend functions from other suspend functions, leading to two different worlds: the continuation-supported one and the regular one.

I measured a 30% performance hit when changing code to use suspend functions instead of regular functions. Nevertheless, this has not stopped people from using them for everything.

in reply to Elias Mårtenson

@loke ooh, that is interesting, thanks! I did not know that Kotlin also had that feature (in a limited way).

Yes, the performance hit probably comes from copying and restoring the stack. For small stacks this is trivial, but oftentimes continuations are useful when computing recursive functions over very large data structures, and you usually have very large stacks for these kinds of computations.

Delimited continuations (DCs) can help with that problem, apparently. And the API for DCs also happens to make them more composable with each other, since you can kind-of unfreeze a computation inside of another frozen computation.

That might be why Kotlin has those restrictions on continuations.
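A tiny sketch of what delimited capture looks like, assuming an implementation that provides `reset`/`shift` (e.g. Racket's `racket/control`; this is not portable R7RS):

```scheme
;; reset marks the boundary of capture; shift captures only the
;; stack between itself and the nearest enclosing reset, so the
;; saved continuation is small -- and composable, since it can be
;; called like an ordinary function, even inside another capture.
(reset (+ 1 (shift k (k (k 10)))))
;; k is roughly (lambda (v) (+ 1 v)), so this is (+ 1 (+ 1 10)) => 12
```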

@kentpitman @screwlisp @cdegroot

in reply to Kent Pitman

Yes, Scheme led on continuations before it was a well-established idea, and I think there is some regret about that because of the difficulties you alluded to, especially in compiling efficient code. Nowadays the common wisdom is that delimited continuations, which I believe are implemented by copying only part of the stack, are better in every way. I have no strong opinions on the issue; I just thought it was interesting how Scheme solved the problems of optimizing tail recursion and "creating actors", i.e. capturing closures, and how both of these things involve stack manipulation, which naturally leads to the idea of continuations.

As a Haskeller I definitely appreciate the study of programming language theory, and how much of Haskell is built on the work of Lisp. The Haskell team's many innovations include asking questions like, "What if everything was lazy by default?" Or, "What if we abolish mutating variables and force the programmer to pop the old value and push the new value on the stack every time?" Or, "What if tail recursion was the only way to loop?" As it turns out, this gives an optimizing compiler the freedom to very aggressively optimize code, and can result in very efficient binaries. Oftentimes, both programmers and language implementors can do a lot more when constrained to use fewer features.

But which features to use and which to remove requires a lot of wisdom and experience. So the Haskell people could have only felt comfortable asking those questions after garbage collection and closures had become a well-established practice, and we can thank the work of the Lisp team for those contributions.

@screwlisp @cdegroot

in reply to Kent Pitman

it'd be interesting to make a family tree of programming language implementations by where they got their GC design from. Java started out real bad (well, fine for the sort of embedded systems it was initially targeted at, not so fine for the stuff I tried to do with the 1.0 version) but picked up good (generational) GC from Self and Strongtalk, in other words more directly from Smalltalk than Lisp. But, well, a lot of history shared between Lisp and Smalltalk makes them more joined at the hip than most people realize 😀.
in reply to Kent Pitman

Generational GC changes the way you program and it's not *just* that it's efficient.

We used MIT Scheme (which by the early 90s was showing its age). Did all manner of weird optimizing to use memory efficiently. Lots of set! to re-use structure where possible. Or (map! f list) -- same as (map ...) but with set-car! to modify in place -- because it made a HUGE difference: recreating all of those cons cells bumps memory use => next GC round is that much sooner (and then everything STOPS, because Mark & Sweep). Also stupid (fluid-let ...) tricks to save space in closures.

We were writing Scheme as if it were C because that was how you got speed in that particular world.

1/3

in reply to Roger Crew✅❌☑🗸❎✖✓✔

And then Bruce Duba joined the group (had just come from Indiana).

"Guys, you're doing this ALL WRONG",

"Yeah, we know already. It's ugly, impure, and sucks. But it's faster, unfortunately",

"No, you need a better Scheme; you should try Chez".

...and, to be sure, just that much *was* a significant improvement. Chez was much more actively maintained, had a better repertoire of optimizations, etc...

... but the real eye-opener was what happened when we ripped out all of the set! and fluid-let code. That's when we got the multiple-orders-of-magnitude speed improvement.

2/3

in reply to Roger Crew✅❌☑🗸❎✖✓✔

See, setq/set! is a total disaster for generational GC. It bashes old-space cells to point to new space; the premise of generational GC is that this mostly shouldn't happen. The super-frequent new-generation-only pass now does a whole lot of old-space traversal because of all of those cells added to the root set by the set! calls, which loses most of the benefit of generational GC.

(fluid-let and dynamic-wind also became much less cheap, mainly due to missed optimization opportunities)

In short, with generational GC, straightforward side-effect-free code wins. It took a while for me to recalibrate my intuitions re what sorts of things were fast/cheap vs not.
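A sketch of the two styles being contrasted (the `map!` here mirrors the idiom described upthread; the names are illustrative):

```scheme
;; Destructive: overwrites (presumably old-generation) cons cells
;; in place. Each set-car! that stores a freshly allocated value
;; into an old cell creates an old-to-new pointer the write
;; barrier must remember and the minor GC must scan.
(define (map! f lst)
  (let loop ((l lst))
    (unless (null? l)
      (set-car! l (f (car l)))
      (loop (cdr l))))
  lst)

;; Pure: allocates fresh cells in the nursery, which a
;; generational GC reclaims almost for free when they die young.
(define (map-pure f lst)
  (if (null? lst)
      '()
      (cons (f (car lst)) (map-pure f (cdr lst)))))

(map! (lambda (x) (* x x)) (list 1 2 3))      ; => (1 4 9)
(map-pure (lambda (x) (* x x)) (list 1 2 3))  ; => (1 4 9)
```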

3/3

screwlisp reshared this.

in reply to Roger Crew✅❌☑🗸❎✖✓✔

There were other weirdnesses as well.

Even if GC saves you the horror of referencing freed storage, or freeing stuff twice, you still have to worry about memory leaks.

With generational GC, leaks, in some sense, end up being *more* costly: it's useless shit that has to be copied -- yes, it eventually ends up in an old generation, but until then it's getting copied -- and copying is what takes most of the time with a generational GC.

And so, tracking down leaks and finding places to put in weak pointers started mattering more...

4/3

in reply to screwlisp

@dougmerritt

5? maybe for mark&sweep

but I can't see how more than 2 would ever be necessary for a copying GC. Once you have enough space to copy everything *to* (on the off-chance that absolutely everything actually *needs* to be copied), you're basically done...

... and if you're following the usual pattern where 90% of what you create becomes garbage almost immediately, you can get by with far less.

Jonathan Lamothe reshared this.

A follow-on to my "Nazi Sucker-punch Problem" post, to address the most common argument I get, which boils down to:

"""
Moderated registration won't stop Nazis, because they'll just pretend to be human to fool moderators, but it will stop normal people, who won't spend the effort to answer the application question or want to wait for approval.
"""

Okay, I'm going to try to use points that I hope are pretty acceptable to anyone arguing in good faith, and I'm going to expand the definition of Nazis to "attackers" and lump in bigots, trolls, scammers, spammers, etc. who use similar tactics.

Attackers: we can group attackers into two main types: dedicated and opportunistic. Dedicated attackers have a target picked and a personal motive—they hunt. Opportunistic attackers have an inclination and will attack if a target presents itself—they're scavengers. In my years of experience as an admin on multiple Fedi servers, most attackers are opportunistic.

Victims: when someone is attacked, they (and people like them) will be less likely to return to the place they were attacked.

In general: without a motive to expend more effort, humans will typically make decisions that offer the best perceived effort-to-reward ratio in the short-term (the same is true of risk-to-reward).

Why does any of this matter?

Because it all comes down to a fairly simple equation for the attackers: effort > reward. If this is true, then the opportunistic attackers will go elsewhere. If it isn't true, then their victims will go elsewhere.

How can we tip that scale out of the attackers' favor?

By making sure moderation efforts scale faster against attackers' behaviors than against normal users' behaviors.

- A normal user only has to register once, while an attacker has to re-register every time they get suspended.

- A normal user proves their normality with each action they take, while every action an attacker takes risks exposing them to moderation.

- A new user / attacker likely spends a minute or two signing up, while a moderator can review most applications in a matter of seconds. Yes, attackers can automate signups to reduce that effort (and some do, and we have tools to address some of that, but again, most attackers aren't dedicated).

- Reviewing an application is lower effort than trying to fix the damage from an attack. As someone who gets targeted regularly by attackers from open-registration servers, I'd personally rather skim and reject a page-long AI-generated application, than spend another therapy session exploring the trauma of being sent execution videos.

I believe this points to moderated registration being the lowest effort remedy for the problem of the Nazi Sucker-punch. So before we "engineer a new solution" that doesn't yet exist, we should exhaust the tools that are already available on the platform today. Yes, we could implement rate limits, or shadow bans, or trust networks, or quarantine servers, but we don't have those today, and even if we did, there's no evidence that those would be a better solution for Fedi than moderated signups.

Will it stop *all* the attackers? No. But it will stop most opportunistic attackers.

Will it deter *some* potential new users? Yes. But communities are defined by who stays, not by how many come through the door.

lgbtqia.space/@alice/115499829…

So, there was a post on the fedi about a project Johnny Harris was working on. Some people in that thread seemed to think that he was untrustworthy, even going so far as to posit that he might be a CIA asset. I had no idea why they believed this, but it was echoed by more than one person.

I am familiar with Johnny's work. He always seems to do a good job of citing his sources (at least to my casual inspection). I asked about this distrust but received no response. Perhaps they thought I was sealioning?

So, I'm asking here: Is there an actual valid reason to distrust him that I'm simply not aware of, or is it just stemming from the fact that he likes to shine light on things that some would rather not have light shined on?

Jonathan Lamothe reshared this.

The media in this post is not displayed to visitors. To view it, please go to the original post.

If a Klein bottle could wear pants, would it be like this or like this?

#mathstodon #math #maths #shitpost

Jonathan Lamothe reshared this.

I don't think programmers and sysadmins get how much there is to learn and how intimidating it is for normal people to host their own software.

For one, most of us don't have a computer that is running 24/7, which means we need to rent a server, which we have no idea how to go about doing.

And then there's an entire arcane art to running software that can speak to the internet without your server being taken over and used to send spam to half the planet



Jonathan Lamothe reshared this.

Hey Fedi,

For those who don't know, my mother had a major #stroke a little over a month ago. We're very fortunate to live in a country (Canada) where we have free #healthcare, but as her discharge from the #hospital looms closer, we're having to raise funds to make #accessibility modifications to my parents' home so that she can return. Boosts are welcome (and appreciated).

gofund.me/a69e0cdc4

#a11y #MutualAid

Jonathan Lamothe reshared this.

Him: well what's your big brain idea to eliminate food assistance fraud?

Me: universal food assistance

Him:

Me: because there wouldn't be a system to cheat, everyone would just get a check

Him: and who does that benefit?

Me: family farmers and human beings who rely on food for nutrition

Him: what about rich people?

Me: what about them?

Him: you would give them food assistance too?

Me: that's what universal means

Him: you can't do that

Me: yesterday you said I couldn't tax them and now you won't let me feed them either?

Him: they can afford food

Me: then it should be fine to tax them

Him: but if you tax them they won't have as much money

Me: I'm willing to offer universal food assistance

Are there any #Lisp programmers out there who use a #ScreenReader? Given how messy Lisp can be to read without proper indentation (which I imagine wouldn't translate well on a screen reader), I can't see it being an easy language to work in without being able to see it.

I've been thinking about a way to make an editor that lets you explore a Lisp program by walking through the forms in the program in a manner similar to the way one might navigate in a MUD. Is this a crazy idea, or one with some merit?

in reply to Jonathan Lamothe

Not to discourage new ways to solve old problems, I just want to point out that your lisp journey will be much easier if you use an editor designed for the task, which includes learning a set of editor operations that function at the "form"/s-exp level.

In Emacs these operations include traversal functions like `forward-sexp`, and the very useful `indent-sexp` function. They're part of basic Emacs behavior; you don't even need to change your Emacs configuration to enable them, and they're useful on data other than Lisp source code.

Once you can easily navigate up, down, and around forms, checking to see where a form starts and ends is easy without any sensitivity to how it's indented.

in reply to Jonathan Lamothe

Seemingly plain (lambda () ...) is a macro that expands to (function (lambda () ...)). #'(lambda () ...) uses a reader macro to expand to the same (function (lambda () ...)).

clhs.lisp.se/Body/m_lambda.htm

Function is a special operator that returns a function. It takes either a function name or a lambda expression. The second case is what is happening here.

clhs.lisp.se/Body/s_fn.htm#fun…

A lambda expression is a list of the symbol lambda, a lambda list, and a body.

clhs.lisp.se/Body/26_glo_l.htm…
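A tiny illustrative sketch of the three equivalent spellings (the squaring lambda is just an example):

```lisp
;; All three forms below evaluate to the same kind of function
;; object; the latter two expand into the first.
(function (lambda (x) (* x x)))  ; the special operator, spelled out
#'(lambda (x) (* x x))           ; reader macro for FUNCTION
(lambda (x) (* x x))             ; the LAMBDA macro, expands to the first
```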

I've been taking a bunch of tests to qualify for a transcription job. They're not easy and I need a perfect score to pass. I finally failed one of the tests but managed to pass it on the retry.

They're really picky about their style guide. Fortunately, it basically amounts to syntax rules and I've been dealing with compilers that are equally picky about syntax for decades.

It also helps that all throughout my schooling my mother worked at the local university proofreading research scientists' papers, and she insisted on proofreading all my essays too.

I never thought I'd end up being happy about that.

in reply to 🇨🇦 CleoQc 🍁🦜🧶🚐🌈

@🇨🇦 CleoQc 🍁🦜🧶🚐🌈 It's a legal transcription job. They do regular transcription too, but they have AI doing much of it, so they're not looking for new people there.

They're smart enough to realize that AI isn't currently sophisticated enough to properly follow the various style guides required by their legal clients. I guess they realize that if the quality of the work drops, they'll lose the contracts.

That said, I'm sure all the work I do is going to be used to try to train an AI to replace me, but that's probably true of any job at this point.

Jonathan Lamothe reshared this.

Don't do AI. Look for people selling electronics they don't need anymore and use them to make the machine. Make a god out of discarded parts, and don't let it die. Use it for whatever the hell you feel like. Give it the ability to sense its reality and start programming it not to speak, or interact with anyone, but to show people echoes of the past.


elisp
God, my tab completion function is a hacky mess:
(defun lambdamoo-tab-complete ()
  "Complete user input using text from the buffer."
  (interactive)
  (when (memq (char-before) '(?  ?\r ?\n ?\t ?\v))
    (user-error "Point must follow non-whitespace character"))
  (let (replace-start
        (replace-end (point))
        replace-text found-pos found-text)
    (save-excursion
      (backward-word)
      (setq replace-start (point)
            replace-text (buffer-substring replace-start replace-end))
      (when (or (null lambdamoo--search-text)
                (not (string-prefix-p lambdamoo--search-text replace-text t)))
        (setq-local lambdamoo--search-text replace-text)
        (set-marker lambdamoo--found-point (point)))
      (goto-char lambdamoo--found-point)
      (unless
          (setq found-pos
                (re-search-backward
                 (concat "\\b" (regexp-quote lambdamoo--search-text))
                 (point-min) t))
        (setq-local lambdamoo--found-point (make-marker))
        (user-error "No match found"))
      (set-marker lambdamoo--found-point found-pos)
      (forward-word)
      (setq found-text (buffer-substring found-pos (point))))
    (delete-region replace-start replace-end)
    (insert found-text)))

#emacs #lisp #moo #mud #LambdaMOO


elisp

Me realizing that festival uses a Lisp dialect:

Oh cool, I can add accessibility features to my Emacs stuff by procedurally generating the code in elisp.


Me realizing that festival's symbols are case sensitive:

Welp, I guess I can just do
(defun festival-saytext (text)
  "Return a Festival command string that speaks TEXT."
  (format "(SayText %S)" text))
and do the rest of the processing in elisp directly. That's probably all I wanted anyway.
Jonathan Lamothe reshared this.

The media in this post is not displayed to visitors. To view it, please go to the original post.

In 1959, a cement mixer with a full load of cement wrecked near Winganon, Oklahoma 🇺🇸

By the time a tow truck came to haul it away, all of the cement had hardened inside the mixer. The tow truck was not able to remove all of the wreckage at the same time because of the weight, so it hauled only the cab and frame, intending to come back for the detached mixer later, which never happened.

Today, 67 years later, it still sits where it fell. Locals have painted it and added "rocket thrusters" to make it look like a space capsule.

in reply to Knightmare

@LanceJZ @isaackuo
That's a piece of Art, and congratulations to the locals for maintaining it.

(Actually the capsule would have had thrusters: there would be Capsule:Flotation Bag:Heat Shield:Thruster Pack, with the thruster pack held on by straps so it could be jettisoned after deceleration but before hitting atmosphere. On one mission they re-entered with the thruster pack attached because the flotation bag light had come on and they were concerned about the heat shield.)

in reply to Cadbury Moose

@Cadbury_Moose @LanceJZ While this is true of the Mercury, Gemini, and Apollo capsules (including the Apollo service module), a reusable capsule could enter nose first rather than tail first.

Nuclear missile reentry heat shields are blunt cones entering nose first.

That said, Dragon does do tail first reentry, placing the thrusters on the sides rather than the tail. I just think it "looks" wrong.

in reply to Isaac Ji Kuo

@isaackuo @Cadbury_Moose @LanceJZ That is only true for modern ballistic missile RVs; initially they were launched blunt end forward, since the materials of that time didn't allow a more accurate sharp-end-forward reentry, because those cause higher temperatures. (That is also why the Space Shuttle got a rather blunt nose.)

Also, there is far more than just one kind of capsule. Imagine this one as a biconic lifting body, and it isn't that far-fetched for it to retain its aft thrusters.

in reply to Knightmare

The media in this post is not displayed to visitors. To view it, please go to the original post.

@LanceJZ @Cadbury_Moose This is what people think of when they think of the Apollo "capsule". It has a big main thruster in the tail, and lots of thruster clusters all over the place.

That's the reason why the artists modifying the cement mixer tank felt the need to add thrusters. It didn't look right without them, because the overall shape looks like a capsule plus its service module.

in reply to Knightmare

@LanceJZ @Cadbury_Moose I know what you mean, but that's what people think of.

One reason they think of the Apollo "capsule" as the Command Module plus Service Module is that there isn't any footage of the Command Module by itself in space. No one was left on the Service Module to shoot the Command Module after separation.

(The Command Module is just the return capsule.)

in reply to skryking

@skryking

The photo looks like a rural highway to me. This means fairly high speeds. If a car "hits the ditch," a bumpy ride turns into a fatal accident.

I suspect the jurisdiction belongs to whoever owns the highway. It could be the state or it could be the county.

A couple of heavy tow wreckers could move this machine. Less than $5000.

But there may be political pressure to keep the machine in place. It does look cute.

in reply to skryking

@skryking

There may indeed be more to the story.

I come from a rural background. Many people drive 80 kph (50 mph) on these roads. And they hit the ditch more often.

There might be some weight restrictions that prohibit big trucks on this road. The pavement in the photo (or oily gravel) looks a little on the weak side to me.

Anyways, we need more info to know why this thing has remained in the ditch for 67 years.

Hey all,

I have a friend who's been trying to get on Mastodon but tells me that it doesn't seem to play well with screen readers. I know there are plenty of people on the fedi who do use screen readers, but I have no experience with them myself, so I can't really direct him.

Can someone who does use a #ScreenReader point me in the direction of some resources that might be useful?
#AskFedi #a11y

in reply to Fanny Bui

@Brailly615 In another comment it was said that it's about Linux; there I unfortunately can't help. I use web clients; unfortunately they don't get any updates anymore, or I'd have recommended something. I only use them because I haven't found anything better yet that I don't need to install. @Clio09 @C3nC3 @me @MonaApp @pachli

elisp question

I'm certain I have reinvented a wheel here, but for the life of me I can't find it. Have I?

(defmacro jrl-extract-list (vars list &rest body)
  "Split a list into indiviual variables"
  (let ((list* (gensym)))
    (append
     `(let ,(cons (list list* list) vars))
     (seq-map (lambda (var)
                `(setq ,var (car ,list*)
                       ,list* (cdr ,list*)))
              vars)
     body)))

#emacs #lisp #elisp

Edit: Of course it was pcase.
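For anyone landing here later, a sketch of the built-in equivalents (assuming Emacs 25+ for `pcase-let`, and seq.el for `seq-let`):

```elisp
;; Destructure a list into individual variables with pcase-let:
(pcase-let ((`(,a ,b ,c) '(1 2 3)))
  (list a b c))  ; => (1 2 3)

;; Or with seq-let, which also works on vectors and other sequences:
(seq-let (a b c) '(1 2 3)
  (+ a b c))     ; => 6
```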

medical, vague ST:SNW spoiler

Was loading stuff onto my Jellyfin server for my mom to watch in the hospital. She liked Star Trek and I thought Strange New Worlds might be a good idea because it's a more fun show than a lot of the other recent Trek shows.

I started watching the first episode to be sure it was working, and realized I'd forgotten the whole thing about what happens with Captain Pike.

...maybe this show isn't the vibe after all...

Jonathan Lamothe reshared this.

The media in this post is not displayed to visitors. To view it, please go to the original post.

Evergreen

#OpenBSD #RunBSD #Linux #Jokes

in reply to sotolf

[size=small]This post uses ruby annotations and likely will display only in degraded mode in TUI clients like tut.[/size]

@sotolf @rl_dane @stefano @kabel42 no, no reference. Not いつみMario (マリオ).

Literally just イツミマリオ (マリオです). Katakana syllabic writing for what is supposed to be English speech, then read by a Japanese speaker-or-TTS.
