@screwlisp is having some site connectivity problems so asked me to remind everyone that we'll be on the anonradio forum at the top of the hour (a bit less than ten minutes hence) for those who like that kind of thing:

anonradio.net:8443/anonradio

He'll also be monitoring LambdaMOO at "telnet lambda.moo.mud.org 8888" for those who do that kind of thing. There are also Emacs clients you should get if you're REALLY using telnet.

Topic for today, I'm told, may include the climate, the war, the oil price hikes, some rambles I've recently posted on CLIM, and the book by @cdegroot called The Genius of Lisp, which we'll revisit next week.

cc @ramin_hal9001

#LispyGopher #Gopher #Lisp #CommonLisp


in reply to Kent Pitman

At the end of @screwlisp's show, in the discussion of @cdegroot's book, @ramin_hal9001 was talking about continuations. I wanted to make a random point that isn't often made about Lisp that I think is important.

I often do binary partitions of languages (like the static/dynamic split, but more exotic), and one of them is whether they are leading or following, let's say. There are some aspects in which Scheme is a follower, not a leader, in the sense that it tends to eschew some things that Common Lisp does for a variety of reasons, one of them being "we don't know how to compile this well". There is a preference for a formal semantics that is very tight, where everything is well-understood. It is perhaps fortunate that Scheme came along after garbage collection was well worked out and did not seem to fear that it would be a problem, but I would say that Lisp had already led on garbage collection.

The basic issue is this: Should a language incorporate things that maybe are not really well-understood, just because people need to do them, on the assumption that it might as well standardize the 'gesture' (to use the CLIM terminology) or 'notation' (to use the more familiar term) for saying you want to do that thing?

Scheme did not like Lisp macros, for example, and only adopted macros when hygienic macros were worked out. Lisp, on the other hand, started with the idea that macros were just necessary and worried about the details of making them sound later.

Scheme people (and I'm generalizing to make a point here, with apologies for casting an entire group with a broad brush that is probably unfair) think Common Lisp macros more unhygienic than they actually are because they don't give enough credit to things like the package system, which Scheme does not have, and which protects CL users from collisions a lot more than they give credit for. They also don't fairly understand the degree to which Lisp2 protects from the most common scenarios that would happen all the time in Scheme if there were a symbol-based macro system. So CL isn't really as much at risk these days, though it was a bigger issue before packages, and the point is that Lisp decided it would figure out how to tighten later, because the feature was too important to leave out, whereas Scheme held back the design until it knew.
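
To make the capture worry concrete, here is a minimal Common Lisp sketch (REPEAT is an invented macro name, nothing standard): the naive expansion binds a variable the user's body might also be using, and the conventional fix is a GENSYM.

(defmacro repeat-badly (n &body body)
  ;; binds I literally; a BODY that also uses I gets captured
  `(let ((i 0))
     (loop while (< i ,n)
           do (progn ,@body)
              (incf i))))

(defmacro repeat (n &body body)
  (let ((counter (gensym "COUNTER")))  ; fresh, uninterned symbol
    `(let ((,counter 0))
       (loop while (< ,counter ,n)
             do (progn ,@body)
                (incf ,counter)))))

;; (repeat 3 (print 'hi)) works even if the body mentions its own I;
;; and in practice the package system keeps interned helper symbols
;; from colliding across package boundaries.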

But, and this is where I wanted to get to, Scheme led on continuations. That's a hard problem, and while it's possible, it's still difficult to do efficiently. I don't quite remember if the original language feature had fully worked through all the tail call situations in the way that it ultimately did. But it was brave to say that full continuations could be made adequately efficient.
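
For contrast, what CL standardized is only the escape kind of continuation (BLOCK/RETURN-FROM and CATCH/THROW): you can jump out, but never back in. A minimal sketch (FIND-FIRST and the tree are just for illustration); Scheme's call/cc is the strictly more powerful thing, a continuation you can also call later, even more than once.

(defun find-first (pred tree)
  ;; escape continuation: RETURN-FROM abandons the rest of the walk
  ;; the moment PRED succeeds; there is no way to resume the walk
  (block found
    (labels ((walk (node)
               (cond ((consp node)
                      (walk (car node))
                      (walk (cdr node)))
                     ((and node (funcall pred node))
                      (return-from found node)))))
      (walk tree)
      nil)))

;; (find-first #'evenp '(1 (3 (4 5)) 7)) => 4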

And the Lisp community in general, and here I will include Scheme in that, though on other days I think these communities are sufficiently different that I would not, has collectively been much more brave and leading than many languages, which only grudgingly allow functionality that they know how to compile.

In the early days of Lisp, the choice to do dynamic memory management was very brave. It took a long time to make GCs efficient, and generational GC was what I think finally made people believe this could be done well in large address spaces. (In small address spaces, it was possible because touching all the memory to do a GC did not introduce thrashing, there being no data "paged out". And in modern hardware, memory is cheap, so the size is not always a per se issue.)

But there was an intermediate time in which lots of memory was addressable but not fully realized as RAM, only virtualized, and GC was a mess in that space.

The Lisp Machines had 3 different unrelated but co-resident and mutually usable garbage collection strategies that could be separately enabled, 2 of them using hardware support (typed pointers) and one of them requiring that computation cease for a while because the virtual machine would be temporarily inconsistent for the last-ditch thing that particular GC could do to save the day when otherwise things were going to fail badly.

For a while, dynamic memory management would not be used in real time applications, but ultimately the bet Lisp had made on it proved that it could be done, and it drove the doing of it in a way that holding back would not have.

My (possibly faulty) understanding is that the Java GC was made to work by at least some displaced Lisp GC experts, for example. But certainly the choice to make Java be garbage collected probably derives from the Lispers on its design team feeling it was by then a solved problem.

This aspect of languages' designs, whether they lead or follow, whether they are brave or timid, is not often talked about. But I wanted to give the idea some air. It's cool to have languages that can use existing tech well, but cooler, I personally think, to see designers consciously driving the creation of such tech.


in reply to Kent Pitman

Don't forget about Gensym's real-time Lisp, which was Common Lisp except for consing. It had its own memory manager (written in Lisp) which was more like C's malloc/free (only I think with more complex objects than just blocks of memory, but I don't really remember), but we got to use macros and all sorts of other stuff. (Exactly what is lost to two decades of doing other stuff.)


in reply to Judy Anderson

@nosrednayduj
First, thanks for raising that example. It's interesting and contains info I hadn't heard.

In a way, it underscores my point: that for a while, it was an open question whether we could implement GC, but a bet was made that we could.

You could view that as saying they only implemented part of Lisp, and that the malloc stuff was a stepping out of paradigm, an admission the bet was failing for them in that moment. Or you could view it as a success, saying that even though some limping was required of Lisps while we refined the points, it was done.

As I recall, there was some discussion of adding a GC function. At the time, the LispM people probably said "which GC would it invoke" and the Gensym people probably said "we don't have one". That was the kind of complexity that the ANSI process turned up and it's probably why there is no GC function. (There was one in Maclisp that invoked the Mark/Sweep GC, but the situation had become more complicated.)

Also, as an aside, a personal observation about the process: With GC, as with other things like buffered streams, one of the hardest things to get agreement on was something where one party wanted a feature and another said "we don't have that, I'd have to make it a no-op". Making it a no-op was not a lot of implementation work. Just seeing and discarding an arg. But it complicated the story that was told, and vendors didn't like it, so they pushed back even though of all the implementations they had the easiest path (if you didn't count "explaining" as part of the path).

in reply to Kent Pitman

@nosrednayduj
And, unrelated, another reference I made in the show was to Clyde Prestowitz and his book The Betrayal of American Prosperity.
goodreads.com/book/show/810439…

Also an essay I wrote that summarizes a key point from it, though not really related to the topic of the show. I mention it just because that point may also be interesting to this audience on the issue of capitalism, if not on the specific economic issue we were talking about tonight:
netsettlement.blogspot.com/201…


in reply to Kent Pitman

your notes on continuations are interesting. I do a lot of Kotlin programming these days, and one of the features it adds on top of Java is continuations (they call them suspend functions). However, unlike Scheme, you can only call suspend functions from other suspend functions, leading to two different worlds, the continuation-supported one and the regular one.

I measured a 30% performance hit when changing code to use suspend functions instead of regular functions. Nevertheless, this has not stopped people from using them for everything.

in reply to Elias Mårtenson

@loke ooh, that is interesting, thanks! I did not know that Kotlin also had that feature (in a limited way).

Yes, the performance hit probably comes from copying the stack or restoring the stack. For small stacks this is trivial, but oftentimes continuations are useful when computing recursive functions over very large data structures, and you usually have very large stacks for these kinds of computations.

Delimited continuations (DCs) can help with that problem, apparently. And the API for DCs also happens to make them more composable with each other, since you can kind-of unfreeze a computation inside of another frozen computation.

That might be why Kotlin has those restrictions on continuations.

@kentpitman @screwlisp @cdegroot

in reply to Ramin Honary

I didn't research it too much, but I think the reason is that when you have a function marked as suspend, it will always pass along an implicit extra argument which is the continuation. I also believe there is a dispatch block at the beginning of a function that can suspend that looks at the continuation to jump to the right part of the code. This is because code running on the JVM cannot directly manipulate the stack.

I don't know how it's implemented when you compile Kotlin to other targets. The semantics are the same, but the underlying implementation may be different.
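
If it helps to see the shape of that transform, here is a hand-rolled sketch in Lisp terms (invented names; not Kotlin's actual generated code): the suspending function becomes a state machine that carries its continuation state explicitly, with a dispatch at the top to jump to the right resumption point.

;; Conceptually: x = 40, then suspend awaiting a value y, then x + y.
(defstruct ctx
  (label 0)      ; where to resume
  (partial 0))   ; intermediate state saved across the suspension

(defun demo-step (ctx resume-value)
  (ecase (ctx-label ctx)                     ; the dispatch block
    (0 (setf (ctx-partial ctx) 40            ; work before the suspension point
             (ctx-label ctx) 1)
       :suspended)                           ; hand control back to the caller
    (1 (+ (ctx-partial ctx) resume-value)))) ; resumed with the awaited value

;; (let ((c (make-ctx)))
;;   (demo-step c nil)    ; => :SUSPENDED
;;   (demo-step c 2))     ; => 42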

in reply to Kent Pitman

yes, Scheme led on continuations before it was a well-established idea, and I think there is some regret about that because of the difficulties involved to which you had alluded, especially in compiling efficient code. Nowadays the common wisdom is that delimited continuations, which I believe are implemented by copying only part of the stack, are better in every way. I have no strong opinions on the issue, I just thought it was interesting how Scheme solved problems of optimizing tail recursion and “creating actors” i.e. capturing closures, and both of these things involve stack manipulation which naturally leads into the idea of continuations.

As a Haskeller I definitely appreciate the study of programming language theory, and how much of Haskell is built on the work of Lisp. The Haskell team's many innovations include asking questions like, “what if everything was lazy by default?” Or, “what if we abolish mutating variables and force the programmer to pop the old value and push the new value on the stack every time?” Or “what if tail recursion was the only way to loop?” As it turns out, this gives an optimizing compiler the freedom to very aggressively optimize code, and can result in very efficient binaries. Oftentimes, both programmers and language implementors can do a lot more when constrained to use fewer features.

But which features to use and which to remove requires a lot of wisdom and experience. So the Haskell people could have only felt comfortable asking those questions after garbage collection and closures had become a well-established practice, and we can thank the work of the Lisp team for those contributions.

@screwlisp @cdegroot

in reply to Kent Pitman

it'd be interesting to make a family tree of programming language implementations by where they got their GC design from. Java started out real bad (well, fine for the sort of embedded systems it was initially targeted at, not so fine for the stuff I tried to do with the 1.0 version) but picked up good (generational) GC from Self and Strongtalk, in other words more directly from Smalltalk than Lisp. But, well, a lot of history shared between Lisp and Smalltalk makes them more joined at the hip than most people realize 😀.
in reply to Kent Pitman

Generational GC changes the way you program and it's not *just* that it's efficient.

We used MIT-Scheme (which, by the early 90s, was showing its age). We did all manner of weird optimizing to use memory efficiently. Lots of set! to re-use structure where possible. Or (map! f list) -- same as (map ...) but with set-car! to modify in-place -- because it made a HUGE difference: recreating all of those cons cells bumps memory use => next GC round is that much sooner (and then everything STOPS, because Mark & Sweep). Also stupid (fluid-let ...) tricks to save space in closures.
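
For anyone who hasn't seen the trick, a rough Common Lisp rendering of that map! (NMAP is just an illustrative name):

(defun nmap (f list)
  ;; destructive MAPCAR: overwrite each car in place instead of
  ;; consing a fresh result list
  (loop for tail on list
        do (setf (car tail) (funcall f (car tail))))
  list)

;; (nmap #'1+ (list 1 2 3)) => (2 3 4), reusing the original conses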

We were writing Scheme as if it were C because that was how you got speed in that particular world.

1/3


in reply to Roger Crew✅❌☑🗸❎✖✓✔

And then Bruce Duba joined the group (had just come from Indiana).

"Guys, you're doing this ALL WRONG",

"Yeah, we know already. It's ugly, impure, and sucks. But it's faster, unfortunately",

"No, you need a better Scheme; you should try Chez".

...and, to be sure, just that much *was* a significant improvement. Chez was much more actively maintained, had a better repertoire of optimizations, etc...

... but the real eye-opener was what happened when we ripped out all of the set! and fluid-let code. That's when we got the multiple-orders-of-magnitude speed improvement.

2/3

in reply to Roger Crew✅❌☑🗸❎✖✓✔

See, setq/set! is a total disaster for generational GC. It bashes old-space cells to point to new-space; the premise of generational GC being that this mostly shouldn't happen. The super-often new-generation-only pass is now doing a whole lot of old-space traversal because of all of those cells added to the root set by the set! calls, ... which then loses most of the benefit of generational GC.
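
The mechanism behind that cost is the write barrier. A toy model (not any particular collector; the generation tags and the remembered set here are invented for illustration):

;; every store into a possibly-old cell goes through something like this
(defvar *remembered-set* '())   ; old cells that point into new space

(defstruct cell gen value)      ; GEN is 'OLD or 'NEW in this toy

(defun barrier-set (cell new-value)
  ;; if an old cell is made to point at a new object, remember the cell;
  ;; every minor (new-generation-only) GC must now scan it as a root
  (when (and (eq (cell-gen cell) 'old)
             (cell-p new-value)
             (eq (cell-gen new-value) 'new))
    (push cell *remembered-set*))
  (setf (cell-value cell) new-value))

;; lots of set!s on old data => a big remembered set => the "cheap"
;; minor collections are no longer cheap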

(fluid-let and dynamic-wind also became way LESS cheap, mainly due to missing multiple optimization opportunities)

In short, with generational GC, straightforward side-effect-free code wins. It took a while for me to recalibrate my intuitions re what sorts of things were fast/cheap vs not.

3/3


in reply to Roger Crew✅❌☑🗸❎✖✓✔

There were other weirdnesses as well.

Even if GC saves you the horror of referencing freed storage, or freeing stuff twice, you still have to worry about memory leaks, and moreover, dropping references as fast as you can matters.

With copying GC, leaks are useless shit that has to be copied -- yes it eventually ends up in an old generation but until then it's getting copied -- and copying is where generational GC is doing work, and it's stuff unnecessarily surviving to the medium term that hurts you the most (generational GC *relies* on stuff becoming garbage as quickly as possible)

And so, tracking down leaks and finding places to put in weak pointers started mattering more...

4/3

in reply to screwlisp

@dougmerritt

5? maybe for mark&sweep

but I can't see how more than 2 would ever be necessary for a copying GC. Once you have enough space to copy everything *to* (on the off-chance that absolutely everything actually *needs* to be copied), you're basically done...

... and if you're following the usual pattern where 90% of what you create becomes garbage almost immediately, you can get by with far less.

in reply to Roger Crew✅❌☑🗸❎✖✓✔

@wrog
> but I can't see how more than 2 would ever be necessary for a copying GC

It's not "necessary", it's "to make GC performance a negligeable percentage of overall CPU".

It was about a theoretical worst case as I recall, certainly not about one particular algorithm.

And IIRC it was actually a factor of 7 -- 5 is merely a good mnemonic which may be close enough. (e.g. perhaps 5-fold keeps overhead down to 10-20% rather than 7's 1%, although I'm making it up to give the flavor -- I haven't read the book for 10-20 years)

But see the book (may as well use the second edition) if and when you care; it's excellent. Mandatory I would say, for anyone who wants to really really understand all aspects of garbage collection, including performance issues.

@screwlisp @kentpitman @cdegroot @ramin_hal9001


in reply to Roger Crew✅❌☑🗸❎✖✓✔

@wrog Haskell was first invented in 1990 or 91ish, and at that time they had already started to ask questions like, “what if we just ban set! entirely,” abolish mutable variables, make everything lazily evaluated by default. If you have been programming in C/C++ for a while, the idea that abolishing mutable variables could lead to a performance increase seems very counter-intuitive.

But for all the reasons you mentioned about not forcing a search for updated pointers in old-generation GC heaps, and also because it forces the programmer to write source code that is essentially already in Static Single Assignment (SSA) form (nowadays a transformation most compilers perform prior to register allocation), this allows for more aggressive optimization and results in more efficient code.

@screwlisp @kentpitman @cdegroot @dougmerritt
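
A tiny illustration of that already-SSA-shaped style (in Lisp rather than Haskell; the function names are made up): the same sum written with repeated assignment, and with every binding assigned exactly once.

(defun sum-mutating (xs)
  (let ((acc 0))
    (dolist (x xs acc)
      (incf acc x))))              ; ACC is re-assigned on every step

(defun sum-bind-once (xs)
  (labels ((iter (rest acc)        ; each binding is assigned exactly
             (if (null rest)       ; once, fresh per call, as in SSA
                 acc
                 (iter (cdr rest) (+ acc (car rest))))))
    (iter xs 0)))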

in reply to Ramin Honary

@wrog @dougmerritt
The LispM did a nice thing (at some tremendous cost in hardware, I guess, but useful in the early days) by having various kinds of forwarding pointers for this. At least you knew you were going to incur overhead, though, and pricing it properly said there was a premium on side-effecting, which tended to cause people not to do it. And the copying GC could fix the problem eventually, so you didn't pay the price forever, though you did pay for having such specific hardware, or for cycles in systems trying to emulate it that couldn't hide the overhead cost. I tend to prefer the pricing model over the prohibition model, but I see both sides of that.

If my memory is correct (so yduJ or wrog please fix me if I goof this): MOO, as a language, is in an interesting space in that actual objects are mutable but list structure is not. This observes that an actual object (what CL would call a standard class, but the uses are different in MOO because all of those objects are persistent and less likely to be allocated casually) is unlikely to be garbage the GC would want to be involved in anyway.

I always say "good" or "bad" is true in a context. It's not true that side effect is good or bad in the abstract, it's a property of how it engages the ecology of other operations and processes.

And, Ramin, the abolishing of mutable variables has other intangible expressional costs, so it's not a simple no-brainer. But yes, if people are locked into a mindset that says such changes couldn't improve performance, they'd be surprised. Ultimately, I prefer to design languages around how people want to express things, and I like occasionally doing mutation even if it's not common, so I like languages that allow it and don't mind if there's a bit of a penalty for it or if one says "don't do this a lot because it's not aesthetic or not efficient or whatever".

To make a really crude analogy, one has free speech in a society not to say the ordinary things one needs to say. Those things are favored speech regardless because people want a society where they can do ordinary things. Free speech is everything about preserving the right to say things that are not popular. So it is not accidental that there are controversies about it. But it's still nice to have it in those situations where you're outside of norms for reasonable reasons. 😀

in reply to Kent Pitman

> Ultimately, I prefer to design languages around how people want to express things, and I like occasionally doing mutation even if it's not common, so I like languages that allow it and don't mind if there's a bit of a penalty for it or if one says "don't do this a lot because it's not aesthetic or not efficient or whatever".

Me too -- although I remain open to possibilities. Usually such possibilities want me to switch paradigms, though, not just add to my toolbox.

@ramin_hal9001 @screwlisp @wrog @cdegroot

in reply to DougMerritt (log😅 = 💧log😄)

“the abolishing of mutable variables has other intangible expressional costs, so it’s not a simple no-brainer.”


@kentpitman I prefer the term “constraint” to “expressional cost,” because constraints are the difference between a haiku and a long-form essay. For example, I am very curious what the code for the machine learning algorithm that trains an LLM would look like expressed as an APL program. I don’t know, but I get the sense it would be a very beautiful two or three lines of code, as opposed to the same algorithm expressed in C++ which would probably be a hundred or a thousand lines of code.

Not that I disagree with you, on the contrary, that is why I was convinced to switch to Scheme as a more expressive language than Haskell. I like the idea of starting with Scheme as the untyped lambda calculus, and then using it to define more rigorous forms of expression, working your way up to languages like ML or Haskell, as macro systems of Scheme.

@dougmerritt @screwlisp @wrog @cdegroot

in reply to Ramin Honary

I'm not 100% positive I understand your use of constraint here, but I think it is more substantive than that. If you want to use the metaphor you've chosen, a haiku reaches close to the theoretical minimum of what can be compressed into a statement, while a long-form essay does not. This metaphor is not perfect, though, and will lead you astray if looked at too closely, causing an excess focus on differential size, which is not actually the key issue to me.
I won't do it here, but as I've alluded to more than once, I think, on the LispyGopher show, I believe that it is possible to rigorously assign cost to the loss of expression between languages.

That is, that a transformation of expressional form is not, claims of Turing equivalence notwithstanding, cost-free both in terms of efficiency and in terms of expressional equivalence of the language. It has implications (positive or negative) any time you make such changes.

Put another way, I no longer believe in Turing Equivalence as a practical truth, even if it has theoretical basis.

And I am pretty sure the substantive loss can be expressed rigorously, if someone cared to do it, but because I'm not a formalist, I'm lazy about sketching how to do that in writing, though I think I did so verbally in one of those episodes.

It's in my queue to write about. For now I'll just rest on bold claims. 😀 Hey, it got Fermat quite a ways, right?

But also, I had a conversation with ChatGPT recently where I convinced it of my position and it says I should write it up... for whatever that's worth. 😀

cc @screwlisp @wrog @dougmerritt @cdegroot

in reply to Kent Pitman

> That is, that a transformation of expressional form is not, claims of Turing equivalence notwithstanding, cost-free both in terms of efficiency and in terms of expressional equivalence of the language. It has implications (positive or negative) any time you make such changes.

I hope everyone here is already clear that "expressiveness" is something that comes along on *top* of a language's Turing equivalence.

Indeed Turing Machines (and pure typed and untyped lambda calculus and SKI combinatory calculus and so on) are all *dreadful* in terms of expressiveness.

And for that matter, expressiveness can be on top of Turing incomplete languages. Like chess notation; people argue that the algebraic notation is more expressive than the old descriptive notation. (People used to argue in the other direction)

@ramin_hal9001 @screwlisp @wrog @cdegroot

in reply to DougMerritt (log😅 = 💧log😄)

[..it's possible I'm missing the point, but I'm going to launch anyway...]

I believe trying to define/formalize "expressiveness" is roughly as doomed as trying to define/formalize "intelligence". w.r.t. the latter, there's been nearly a century of bashing on this since Church and Turing and we're still no further along than "we know it when we see it"

(and I STILL think that was Turing's intended point in proposing his Test, i.e., if you can fool a human into thinking it's intelligent, you're done; that this is the only real test we've ever had is a testament to how ill-defined the concept is...)

1/11

in reply to Roger Crew✅❌☑🗸❎✖✓✔

The point of Turing equivalence is that even though we have different forms for expressing algorithms and there are apparently vast differences in comprehensibility, they all inter-translate, so any differences in what can ultimately be achieved by the various forms of expression are an illusion. We have, thus far, only one notion of computability.

(which is not to say there can't be others out there, but nobody's found them yet)

2/11


in reply to Roger Crew✅❌☑🗸❎✖✓✔

I believe expressiveness is a cognition issue, i.e., having to do with how the human brain works and how we learn. If you train yourself to recognize certain kinds of patterns, then certain kinds of problems become easier to solve.
... and right there I've just summarized every mathematics, science, and programming curriculum on the planet.

What's "easy" depends on the patterns you've learned. The more patterns you know, the more problems you can solve. Every time you can express a set of patterns as sub-patterns of one big super-pattern small enough to keep in your head, that's a win.

I'm not actually sure there's anything more to "intelligence" than this.

3/11


in reply to Roger Crew✅❌☑🗸❎✖✓✔

I still remember trying to teach my dad about recursion.

He was a research chemist. At some point he needed to do some hairy statistical computations that were a bit too much for the programmable calculators he had in his lab. Warner-Lambert research had just gotten some IBM mainframe -- this was early 1970s, and so he decided to learn FORTRAN -- and he became one of their local power-users.

Roughly in the same time-frame, 11-year-old me found a DEC-10 manual one of my brothers had brought home from college. It did languages.

Part 1 was FORTRAN.
Part 2 was Basic.

But it was the last section of the book that was the acid trip.

Part 3 was about Algol.

4/11


in reply to Roger Crew✅❌☑🗸❎✖✓✔

This was post-Algol-68, but evidently the DEC folks were not happy with Algol-68 (I found out later *nobody* was happy with Algol-68), so ... various footnotes about where they deviated from the spec; not that I had any reason to care at that point.

I encountered the recursive definition of factorial and I was like,

"That can't possibly work."

(the FORTRAN and Basic manuals were super clear about how each subprogram has its dedicated storage; calling one while it was still active is every bit as much an error as dividing by zero. You're just doing it wrong...)

5/11

in reply to Roger Crew✅❌☑🗸❎✖✓✔

Then there was the section on call-by-name (the default parameter passing convention for Algol)

... including a half page on Jensen's Device, that, I should note, was presented COMPLETELY UN-IRONICALLY because this was still 1972,

as in, "Here's this neat trick that you'll want to know about."

And my reaction was, "WTFF, why???"

and also, "That can't possibly work, either."

Not having any actual computers to play with yet, that was that for a while.

Some years later, I got to college and had my first actual programming course...

6/11


in reply to Roger Crew✅❌☑🗸❎✖✓✔

... in Pascal. And there I finally learned about and was able to get used to using recursion.

Although I'd say I didn't *really* get it until the following semester taking the assembler course and learning about *stacks*.

It was like recursion was sufficiently weird that I didn't really want to trust it until/unless I had a sense of what was actually happening under the hood,

And THEN it was cool.

7/11

in reply to Roger Crew✅❌☑🗸❎✖✓✔

To the point where, the following summer as an intern, I was needing to write a tree walk, and I wrote it in FORTRAN — because that's what was available at AT&T Basking Ridge (long story) — using fake recursion (local vars get dimensions as arrays, every call/return becomes a computed goto, you get the idea…) because I wanted to see if this *could* actually be done in FORTRAN, and it could, and it worked, and there was much rejoicing; I think my supervisor (who, to be fair, was not really a programmer) blue-screened on that one.
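
For anyone who never had to do this, the trick looks roughly like the following (a Lisp sketch of the idea; the FORTRAN original used arrays for the locals and a computed GOTO for the returns): walk a tree while managing the stack yourself.

(defun walk-leaves (tree visit)
  ;; visit every non-NIL atom of a cons tree, left to right,
  ;; using an explicit stack instead of the call stack
  (let ((stack (list tree)))
    (loop while stack do
      (let ((node (pop stack)))
        (cond ((consp node)
               (push (cdr node) stack)    ; right side, visited later
               (push (car node) stack))   ; left side, visited next
              (node (funcall visit node)))))))

;; (walk-leaves '((1 2) (3 (4 5))) #'print) prints 1 2 3 4 5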

And *then* I tried to explain it all to my dad...

8/11

in reply to Roger Crew✅❌☑🗸❎✖✓✔

@dougmerritt
And, to be fair, by then, he had changed jobs/companies, moved up to the bottom tier of management, wasn't using The Computer anymore, so maybe the interest had waned.

But it struck me that I was never able to get past showing him the factorial function and,

"That can't possibly work."

He had basically accepted the FORTRAN model of things and that was that.

Later, when he retired he got one of the early PC clones and then spent vast amounts of time messing with spreadsheets.

9/11

in reply to Roger Crew✅❌☑🗸❎✖✓✔

@dougmerritt
You may say that untyped lambda calculus and SKI combinatory calculus and so on are all *dreadful* in terms of expressiveness, and I will probably agree,

... but it also seems to me that Barendregt got pretty good at it.

I'm also guessing TECO wouldn't have existed without there being people who managed to wrap their brains around it and found it to be expressive and concise. I myself never got there (also never really tried TBH),

... but at the same time, it's *still* the case that if I need to write a one-liner to do something, chances are, I'll be doing it in Perl, and I've heard people complain about *that* language being essentially write-only line-noise.

10/11

in reply to Roger Crew✅❌☑🗸❎✖✓✔

@dougmerritt
To be sure, my Perl tends to be more structured.

On the other hand, I also hate Moose (Perl's attempt at CLOS) and have thus far succeeded in keeping that out of my life.

I also remember there being a time in my life when I could read and understand APL.

But if you do think it's possible to come up with some kind of useful formal definition/criterion for "expressiveness", go for it.

I'll believe it when I see it.

11/11

in reply to Roger Crew✅❌☑🗸❎✖✓✔

... and, crap, I messed up the threading (it seems 9 and 10 are siblings, so you'll miss 9 if you're reading from here. 9 is kind of the point. Go back to 8.)

(I hate this UI. If anybody's written an emacs fediverse-protocol thing for doing long threaded posts please point me to it, otherwise it looks like I'm going to have to write one ...)

𝜔/11


in reply to Roger Crew✅❌☑🗸❎✖✓✔

It's a bit low tech, but if you noticed it in time, before other people have a ton of other stuff attached to it, just save the text, delete the old post, attach the new. Someone could make that be a single operation in a client and even have it send mail to the people who attached replies saying here's your text if you want to attach it to the new post. Or you could attach your own post with their text in it. Low-tech as it is, existing tools offer us a lot more options than sometimes people see. I'm sure you could have figured this out, and are more fussing at the tedium, but just for fun I'm going to cross-reference a related but different scenario...

web.archive.org/web/2010092105…

in reply to Roger Crew✅❌☑🗸❎✖✓✔

@wrog
> (I hate this UI. If anybody's written an emacs fediverse-protocol thing for doing long threaded posts please point me to it, otherwise it looks like I'm going to have to write one ...)

There are *so* many programmers using variants of this UI that you would think someone would have addressed it by now.

But you never know, maybe not. Certainly everyone who does multi-posts seems to be struggling with doing it by hand, from my point of view, so that would seem to cry out for the need for some fancier textpost-splitting auto-sequence-number thingie, in emacs or command line or something.

Conceivably a web search would find the thing if it exists. I personally almost never do long posts, so I just grin and bear it when it comes up.

@kentpitman @ramin_hal9001 @screwlisp @cdegroot

in reply to DougMerritt (log😅 = 💧log😄)

I think for protocol reasons it is necessary to try and connect up the thread using quote-posts, however any particular client understands that.

If you visit the topmost toot of the thread, you at least get the whole (cons) tree.
@dougmerritt @wrog @kentpitman @ramin_hal9001 @cdegroot

in reply to screwlisp

Y'all are misunderstanding. Due to the error-prone nature of labelling a series of posts, from one way of viewing, he skipped post 9, and 8 linked to 10.

Another view showed simply the correct sequence.

Regardless, anyone who has written e.g. "3/n" on a post is already implicitly indicating a desire for automation.

@wrog @kentpitman @ramin_hal9001 @cdegroot

in reply to DougMerritt (log😅 = 💧log😄)

Lots of clients are available that will daisychain toots within the constraints of your instance if you send a large one. I think mastodon.el is like this. I think that the 𝜑/ζ post counting is often some interesting but quasi-ironic metadata about how the author is feeling about local restarts available to them at the time of writing.
@wrog @kentpitman @ramin_hal9001 @cdegroot
in reply to DougMerritt (log😅 = 💧log😄)

@dougmerritt
what I *currently* do is compose inside Emacs (the *only* non-painful alternative for long posts),

then manually decide how I'm going to break it up -- which actually has some literary content to it, because in some cases, you *do* want to arrange the breaks for maximal dramatic effect
(generalized How to Use Paragraphs)

Problem 1 being that emacs doesn't count characters the same way as mastodon does, and I don't find out until I've cut&pasted part n, which doesn't happen until I've already posted parts 1..n−1

Problem 2 being having to cut&paste in the first place when I should just be able to hit SEND (which then has to be from within emacs).

in reply to Roger Crew✅❌☑🗸❎✖✓✔

given that I once-upon-a-time wrote a MAPI client for the sake of being able to post to Microsoft Exchange forums in rich text using courier font, in theory, I should be able to do this.

... but that would mean I'd have to Learn Fediverse. crap.

hmm. Anyone have experience with

codeberg.org/martianh/mastodon…

i.e., is this the best one, or just the Guy Who Grabbed the Name first and did the best SEO twigging? (I hate that google search has gotten so enshittified)

(also, thanks, LazyWeb!)

in reply to Roger Crew✅❌☑🗸❎✖✓✔

unforch mastodon.el hasn't yet implemented chaining of new toots. if someone wants to add it though, by all means. (the issue has been raised before, but as usual no one was willing to get their hands dirty.)

edit: codeberg.org/martianh/mastodon…

in reply to DougMerritt (log😅 = 💧log😄)

With some apologies to legends:

(require 'cl-lib)

(defun chained-toot (lim str)
  ;; Split STR into pieces of at most LIM characters each, reserving
  ;; 8 characters for the "\nIDX/SPAN" suffix appended to every piece.
  (let* ((space (- lim 8))
         (end (length str))
         (span (ceiling end space)))  ; CEILING with a divisor
    (cl-loop
     for idx from 1 to span
     for start from 0 by space
     for stop from space by space
     for piece = (cl-subseq str start (min stop end))
     collect (format "%s\n%d/%d" piece idx span))))

@dougmerritt @mousebot @wrog @cdegroot @ramin_hal9001 @kentpitman
#elisp


in reply to DougMerritt (log😅 = 💧log😄)

@dougmerritt
I thought about it, but what if the chain of toots is all non-whitespace characters anyway? So I decided not to try. Now, cooking in heuristically "proper" justification anyway, you say... But that way madness lies.
@mousebot @wrog @cdegroot @ramin_hal9001 @kentpitman
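
For reference, the usual heuristic is small; a Common Lisp sketch (SPLIT-POINT is an invented name): break after the last whitespace inside the window, falling back to a hard break when the window has none.

(defun split-point (str start limit)
  ;; index to cut at: just after the last whitespace in the window,
  ;; or a hard cut at the limit when the window has no whitespace
  (let* ((hard (min (+ start limit) (length str)))
         (ws (position-if (lambda (c)
                            (member c '(#\Space #\Newline #\Tab)))
                          str :from-end t :start start :end hard)))
    (if (and ws (> ws start) (< hard (length str)))
        (1+ ws)
        hard)))

;; (split-point "hello cruel world" 0 9) => 6, cutting after "hello "
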
in reply to screwlisp

@dougmerritt @mousebot
figuring out how to split up a toot is solving the wrong problem. In my cases I *know* how I want to split it up.

what I want is the ability to create a sequence of posts, edit them all in place, shuffle text around + attach media and polls wherever I want, get them all looking right,

and then send them all in one fell swoop.

I think the key concept is being able to compose a reply to a draft.

i.e., In-Reply-To is a buffer rather than a URL

Posting the reply automatically posts the In-Reply-To **first**. And likewise for longer chains.

Make that work in a reasonable way, and everything else follows.

(I'm up to 5000 chars in my draft reply on codeberg...)


in reply to Cy

@cy
Presumably you're joking. But different ones of us suffer different character limits. My server, Mathstodon.xyz, has a limit of 1729 characters -- but for most servers it's significantly less.

And some may be larger. Yours, perhaps. But that doesn't help others.

@screwlisp @mousebot @cdegroot @ramin_hal9001 @kentpitman @wrog

in reply to DougMerritt (log😅 = 💧log😄)

I'm joking, yes. And also criticizing those servers. But honestly, I don't like writing super long posts. I feel like I'm ignoring people and not letting them get a word in edgewise. When I consider ranting about something in long form, I try to write a little bit at a time, and give people a chance to respond before writing more. Make it a conversation instead of an essay.

Or sometimes if I just don't care I'll splurg it all out and hope that nobody bothers reading that shit.

CC: @screwlisp@gamerplus.org @mousebot@todon.nl @cdegroot@mstdn.ca @ramin_hal9001@fe.disroot.org @kentpitman@climatejustice.social @wrog@mastodon.murkworks.net

in reply to Cy

@cy @mousebot @dougmerritt

It's Twitter Culture. We're all supposed to speak in sound bites. Dorsey or whoever decided if you can't fit it in 140 chars, it's not worth saying. Then at some point they doubled it and thought that was generous enough.

And now short posts are what people expect.

LJ never had a limit.

Hell, **Usenet** never had a limit and we were suffering under far worse resource constraints back then.

I miss Usenet.

in reply to Roger Crew✅❌☑🗸❎✖✓✔

@wrog @cy @mousebot @dougmerritt

I did not like it when Twitter extended from 140 to 280. But, unrelated to that, I'm pretty sure they made a decision that urls and @ references to people's handles should have a fixed small cost, so as not to bias things in favor of short-named people or xrefs. I think that was very important. I was surprised that BlueSky did not copy it.

in reply to Kent Pitman

Things have mutated so much over the years that messages like yours, that harken back to the original 140 limit that was due to the actual SMS protocol being used in cell phones, bring me back to reality with a palpable start.

@wrog @cy @screwlisp @mousebot @cdegroot @ramin_hal9001

in reply to DougMerritt (log😅 = 💧log😄)

@dougmerritt

I don't think it had anything to do with SMS. Twitter was an internet service from the start and Dorsey's decision was a matter of taste/branding/marketing; the notion of a service that *only* allowed short posts was Something New.

Receiving a twitter feed as SMS texts on a cell phone would have been insane (and probably also expensive back then).

in reply to Roger Crew✅❌☑🗸❎✖✓✔

@wrog
> I don't think it had anything to do with SMS.

But you would be wrong. Don't mess with the bull, you'll get the horns. I was not only there, I worked in that space at that time.

(I did more than languages, compilers, and operating systems because I got bored periodically. I've also done OCR algorithms, to name another thing that doesn't seem to fit with the rest.)

> The idea was initially pitched as an “SMS for the web”,...
> Why 140 characters? The limit was inspired by SMS text messaging, which capped messages at 160 characters. Twitter reserved 20 characters for the username, leaving 140 for the message itself.
blog.easybie.com/twitters-orig…

So it was at *least* inspired by SMS. But more than that, it gatewayed to and from SMS, so it retained the SMS limit of necessity to continue gatewaying -- for a while.
en.wikipedia.org/wiki/X_(socia…

Wikipedia stops just short of having an adequate history by itself.


in reply to DougMerritt (log😅 = 💧log😄)

Even that limit was totally arbitrary. They were like "We got a few extra bits in the packets, so sure why not? And they'll pay 10 cents a pop! 🍭" The character limit was always about draining our attention, keeping us too busy to organize, and rewarding us for shitposting. Can't be typing on your brand new Portable Smart Phone, can you? You're far too busy and important to take your time! Hurry up!

CC: @kentpitman@climatejustice.social @wrog@mastodon.murkworks.net @screwlisp@gamerplus.org @mousebot@todon.nl @cdegroot@mstdn.ca @ramin_hal9001@fe.disroot.org

in reply to Cy

@cy
> Even that limit was totally arbitrary. They were like "We got a few extra bits in the packets, so sure why not? And they'll pay 10 cents a pop! 🍭" The character limit was always about draining our attention, keeping us too busy to organize, and rewarding us for shitposting. Can't be typing on your brand new Portable Smart Phone, can you? You're far too busy and important to take your time! Hurry up!

Cynicism about human nature is rarely off-target, but that doesn't extend to protocols being arbitrary:

> Messages are sent with the MAP MO- and MT-ForwardSM operations, whose payload length is limited by the constraints of the signaling protocol to precisely 140 bytes (140 bytes × 8 bits / byte = 1120 bits).
> Short messages can be encoded using a variety of alphabets: the default GSM 7-bit alphabet, the 8-bit data alphabet, and the 16-bit UCS-2 or UTF-16 alphabets.[81][82] Depending on which alphabet the subscriber has configured in the handset, this leads to the maximum individual short message sizes of 160 7-bit characters, 140 8-bit characters, or 70 16-bit characters. GSM 7-bit alphabet support is mandatory for GSM handsets and network elements.
[82]en.wikipedia.org/wiki/SMS

@kentpitman @wrog @screwlisp @mousebot @cdegroot @ramin_hal9001

in reply to DougMerritt (log😅 = 💧log😄)

Just because the protocol was written in the 80's doesn't mean it couldn't have used a different number than 140... seems unlikely they were that forward thinking, but BBSes had been around a while, and I imagine they had their eye on that.

CC: @kentpitman@climatejustice.social @wrog@mastodon.murkworks.net @screwlisp@gamerplus.org @mousebot@todon.nl @cdegroot@mstdn.ca @ramin_hal9001@fe.disroot.org

in reply to Cy

@cy
You seem to keep overlooking that it was originally interoperable with SMS; it wasn't just that they were inspired by SMS. It had to be a limit of 140 to maintain two-way compatibility without message truncation.

They didn't feel comfortable increasing it until SMS was well on its way out.

Now, when they initially doubled it from 140 to 280, *that* may have been them just being conservative for no functional reason. I don't recall.

@kentpitman @wrog @screwlisp @mousebot @cdegroot @ramin_hal9001

in reply to DougMerritt (log😅 = 💧log😄)

dm> Slow deliberate composition certainly has major virtues, doesn't it?

As does getting an ephemeral thought out before it vanishes never to be spoken of again.

Not even to mention that some of us are old enough that if we keeled over dead, people would shrug and say "well, that's sadly normal for him". But they still might be sad something didn't get said.

We are left no choice but to navigate an overconstrained life, so we just do the best we can. 😀

in reply to DougMerritt (log😅 = 💧log😄)

@dougmerritt @cy @wrog @mousebot
Also, given that most user interfaces have message length abbreviation, with the user having to affirmatively press "show more" to see the rest of it, I don't really understand why there are small limits.

Making people break stuff up into smaller pieces just increases the number of interruptions due to individual message notifications, and makes that length abbreviation less useful, because it can't span multiple messages.

in reply to Kent Pitman

@dougmerritt @cy @wrog @mousebot
Random trivia though: In 2001, I was interviewed by slashdot about lisp. Anyone wanting to read that interview can find links (plural) on my homepage (nhplace.com/kent). They asked a bunch of questions, but my responses were the first they had ever had that overran some fixed length that such articles were allowed to be. I offered to edit down the size of the responses, but they liked what they saw and said they would instead just run the interview in two parts. I know I am verbose, and my haikus are often intended as an apology for that, but I was touched that they thought the extra words there worthwhile.


in reply to Kent Pitman

Don't get me started! The mastodon UI barely works (it varies a lot, but I haven't heard of a *standard* [whatever that means in this context] mastodon UI that truly helps the user, as opposed to merely being easy to implement)

@cy @wrog @screwlisp @mousebot @cdegroot @ramin_hal9001

in reply to Roger Crew✅❌☑🗸❎✖✓✔

@wrog my instance has a 5000 character limit, but I rarely type that much. Most of my posts are three or four paragraphs. I could probably make do with 2500 characters, but it is nice to have a little extra space so I don’t feel like I have to watch the character counter so closely. And I have space to write in complete sentences, to include example code, paste long URLs, and quote parts of other posts inline.

@cy @screwlisp @mousebot @cdegroot @kentpitman @dougmerritt

in reply to Cy

@cy
> I feel like I'm ignoring people and not letting them get a word in edgewise

The world could use more people with that perception! Too many people do that and obviously don't notice they're doing that.

@screwlisp @mousebot @cdegroot @ramin_hal9001 @kentpitman @wrog

in reply to Roger Crew✅❌☑🗸❎✖✓✔

@mousebot @dougmerritt
ok, finally finished my codeberg post

codeberg.org/martianh/mastodon…


in reply to Roger Crew✅❌☑🗸❎✖✓✔

… and a different model which perhaps maps closer to how people think about this.

codeberg.org/martianh/mastodon…

Please note:

Proposal 2
. reply-to-id can be a draft ⟹ creating chains of drafts
. status quo = nobody's doing that yet

Proposal 3
. drafts have split-points (= subdrafts)
. many attributes (CW, NSFW, attachments, visibility) live on the split point
. status quo = only one subdraft

are **implementation** choices, not UIs in and of themselves.
That is, both should support the same UIs
(for chains, anyway; P3 can't really do trees).

I still much prefer P2.
P3 looks to be more work/complexity to achieve less IMHO.

(N.B. I've not deeply looked at the code, yet).

in reply to Roger Crew✅❌☑🗸❎✖✓✔

@mousebot @dougmerritt
> (N.B. I've not really looked at the code, yet).

meaning I may yet turn out to be wrong about what's difficult,
meaning don't hold your breath on the pull-request (*)

(Also, elisp seems to have changed a bit.

I remain dismayed at how many Common-Lisp-isms crept into the language. I've mostly been in RMS's camp on this, but I'll admit that was based on how horribly elisp's Common Lisp compatibility was implemented 20 years ago. Evidently this ship has sailed. And now that the modern elisp compiler is way better and does native code, it probably doesn't matter anymore).

(*) Ideally, my architecture wanking will goad someone else into beating me to it.

in reply to DougMerritt (log😅 = 💧log😄)

@dougmerritt
No, I began working on it when you said using :from-end and :test-not with search, but I have been frustrated by elisp not actually being Common Lisp. Also I did not have ielm installed, and it seems like ielm is not in melpa.
@mousebot @wrog @cdegroot @ramin_hal9001 @kentpitman
in reply to Roger Crew✅❌☑🗸❎✖✓✔

@wrog @dougmerritt
I stick to posting longer-than-two-posts content to my blog, which auto-toots a link, initial text, & any tags.

Or write to my phlog and then tell people to look there, but that's for devnotes/commentary on my Cyberhole.

Third option is to get onto an instance with a huge character limit, post giant walls of text.

in reply to Digital Mark λ ☕️ 🕹 👽

> I stick to posting longer-than-two-posts content to my blog, ...

yeah, that's clearly the Right Thing, but has the disadvantage of not inflicting my text on people directly 🙂

Also my blog hasn't gotten a whole lot of readership since the Russians killed Livejournal

in reply to Roger Crew✅❌☑🗸❎✖✓✔

hmm... is there a way to do a reply that is *also* a quote-post? I should try this.

mastodon.murkworks.net/@wrog/1…

(𝜔+1)/11


To the point where, the following summer as an intern, I was needing to write a tree walk, and I wrote it in FORTRAN — because that's what was available at AT&T Basking Ridge (long story) — using fake recursion (local vars get dimensions as arrays, every call/return becomes a computed goto, you get the idea…) because I wanted to see if this *could* actually be done in FORTRAN, and it could, and it worked, and there was much rejoicing; I think my supervisor (who, to be fair, was not really a programmer) blue-screened on that one.

And *then* I tried to explain it all to my dad...

8/11


in reply to Roger Crew✅❌☑🗸❎✖✓✔

@dougmerritt
(I'm guessing a mastodon UI that actually respects the use of surreal numbers to number multipost components and rearranges threads accordingly will be implemented approximately never.

… though I suppose it could turn out to be one of the more creative ways to get kicked off of the Fediverse … )

en.wikipedia.org/wiki/Surreal_…

(𝜔/2)/11

in reply to Roger Crew✅❌☑🗸❎✖✓✔

@wrog your story about learning recursion in Algol reminded me of a story that was told about how Edsger Dijkstra influenced the Algol spec (through a personal conversation with, I think, John Backus) to include what we now understand as a “function call,” by specifying the calling convention for how the stack must be modified before the subroutine call and after the return. I first heard the story in a YouTube video called “How the stack got stacked”.

Regarding “expressiveness,” you do make a good point about it (possibly) being fundamentally a subjective thing, like “intelligence.” Personally, I never felt the restrictions in Haskell made it any less expressive as a language.

It is interesting how you can express some incredibly complex algorithms with very few characters in APL. Reducing function names to individual symbols applied as operators does make the language much more concise, but is “concise” a necessary condition for “expressive”?

@dougmerritt @kentpitman @screwlisp @cdegroot

in reply to Ramin Honary

Some clearly prefer concise, but nonetheless it is orthogonal to expressive.

'Expressive' != 'my favorite approach' -- ideally expressiveness can be determined objectively by human factors studies.

Failing that, sure, it's then subjective and subject to unbounded argument. 😀

@kentpitman @screwlisp @wrog @cdegroot

in reply to Roger Crew✅❌☑🗸❎✖✓✔

@wrog
> I'm also guessing TECO wouldn't have existed without there being people who managed to wrap their brains around it and found it to be expressive and concise. I myself never got there (also never really tried TBH),

I'm one of those people, BTW. My proof is that I wrote a closed-loop stick figure ASCII animation juggling three balls.

As with any complex TECO thing, the resulting code was write-only -- and that was always the problem with even mildly powerful TECO macros.

Perl at its worst can be described as write-only line noise, yes, but in my experience is *STILL* better than TECO!

I am indeed fortunate to be able to stick with Emacs and Vi.

@kentpitman @ramin_hal9001 @screwlisp @cdegroot


in reply to DougMerritt (log😅 = 💧log😄)

TECO was a necessary innovation under word-addressed memory. With 36 bits per word, you couldn't afford that much space for an instruction. 5 7-bit bytes (with a bit left over) in one word was a lot more compact than an assembly instruction. With only 256 KW (kilowords) total addressable in 18 bits, you had to get all the power packed in you could. And we didn't have WYSIWYG yet, and most computer people couldn't type. So it would make a lot more sense to you if you were doing hunt and peck with almost no visibility into what you're changing. Typing -3cifoo$$ to mean go back three characters and insert foo and show me what the few characters around my cursor look like was extremely natural in context. That it became a programming language was a natural extension of that so that you didn't have to keep typing the same things over and over again.


in reply to Kent Pitman

In effect, a Q register, what passed for storage in TECO, was something you could name in one byte. So 1,2mA meaning call what's in A with args 1 and 2 was a high-level language function call with two arguments that fit into a single machine word. Even the PDP-10 pushj instruction, which was pretty sophisticated as a way of calling a function, couldn't pass arguments with that degree of compactness.


in reply to DougMerritt (log😅 = 💧log😄)

Incidentally, I did *not* hate TECO at the time. I'm just remarking on some fairly objective issues with it.

But at the time, I really appreciated its power (even though for me this was after using vi and emacs).

Also, if one reads about its history in the literature, about how it originally worked in 8 KB with a sliding window on files, and then later versions added more and more commands and power, it all makes sense as an organic 4D creation.

Which is true of most software that one is sympathetic to.

@wrog @ramin_hal9001 @screwlisp @cdegroot

in reply to Kent Pitman

@dougmerritt
it's not so much the editor itself, which, from your description, doesn't seem that much worse than, say, what you had to do in IBM XEDIT to get stuff done,

but the macro system, specifically, which, as I understand it, (1) was an add-on, (2) would have needed utility commands that one didn't use in the normal course of editing (e.g., for rearranging arguments + building control constructs) and that therefore were put on obscure characters, and *this* is where things went nuts…

I recall briefly viewing the TOPS-20 Emacs sources … they *did* look like somebody had whacked a cable out in the hall (time to hit refresh-screen)
… granted, I may be misremembering; this *was* 40 years ago…

in reply to Roger Crew✅❌☑🗸❎✖✓✔

@dougmerritt
I also recall '~' being an important character that showed up a lot in TECO for some reason,

and *normally* the only time you'd see sequences of ~'s in large numbers was when your modem was dying and your line was about to be dropped

and this may, at least partially, be where TECO's "line noise" reputation came from.

in reply to Roger Crew✅❌☑🗸❎✖✓✔

@wrog @dougmerritt
Funny, I couldn't recall "~" being important at all so had to go check. See codeberg.org/PDP-10/its/src/br… and while I do see a few uses of it, they seem very minor.

I read this into an Emacs editor buffer and did "M-x occur" looking for [~] and got these, all of which seem highly obscure. I think it is probably because in the early days there may have been a desire not to have case matter, so the upper and lower case versions of these special characters (see line 2672 below) may have once been equivalent or might have some reason to want to reserve space to be equivalent in some cases. Remember that, for example, on a VT52, the CTRL key did not add a control bit but masked out all the bits beyond the 5th, so that CTRL+@ and CTRL+Space were the same (null) character. And sometimes tools masked out the 7th bit in order to uppercase something, which means that certain characters like these might have in some cases gotten blurred.

10 matches for "[~]" in buffer: tecord.1132
1270: use a F~ to compare the error message string against a
2017: case special character" (one of "`{|}~<rubout>").
2235: the expected ones, with F~.
2370: kept in increasing order, as F~ would say, or FO's binary
2672: also ("@[\]^_" = "`{|}~<rubout>").
4192:F~ compares strings, ignoring case difference. It is just
4446: this option include F^A, F^E, F=, FQ, F~, G and M.
4942: string storage space, but begins with a "~" (ASCII 176)
4977: character should be the rubout beginning a string or the "~"
4980: "~" or rubout, then it is not a pointer - just a plain number.

If I recall correctly, this also meant that in some tools using a control-prefix convention with CTRL-^, it was possible for CTRL-^ CTRL-@ to be different from CTRL-^ @, because one of them might set the control bit on @ and the other on null, so there was a lot of aliasing. It even happened for regular characters: CTRL-^ CTRL-A would get you a control bit set on #o1, while CTRL-^ A would get you the control bit set on 65. Some of these worked very differently on the Knight TV, which used SAIL characters, I think, and which thought a code like 1 was an uparrow, not a control-A. There were a lot of blurry areas, and it was hell on people who wanted to make a Dvorak mode, because it was the VT52 (and probably VT100 and AAA) hardware that was doing this translation, so there was no place in software to intercept all this and make it different. That's probably why something as important as TECO trod lightly on making some case distinctions.
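
To make the masking concrete, here's a minimal sketch of the arithmetic in Common Lisp (the function names are made up for illustration; the real masking happened in terminal hardware, not software):

;; What a VT52-style CTRL key did: keep only the low 5 bits.
(defun vt52-control (char)
  (code-char (logand (char-code char) #o37)))

;; The "mask out the 32s bit to uppercase" trick.
(defun mask-to-upper (char)
  (code-char (logand (char-code char) (lognot 32))))

;; (vt52-control #\@)     => #\Nul
;; (vt52-control #\Space) => #\Nul  ; the same character, as described
;; (mask-to-upper #\a)    => #\A
;; (mask-to-upper #\~)    => #\^    ; why `{|}~<rubout> blur into @[\]^_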

But if someone remembers better, please let me know. It's been 4+ decades since I used this stuff a lot, and details slip away. It's just that these things linger, I think, because it was so important to realize they were live rails not to tread upon. And because I did, for a while, live and breathe this stuff, since I wrote a few TECO libraries (like ZBABYL and the original TeX mode), so I guess practice drills it in, too.

in reply to Kent Pitman

@dougmerritt
Yeah, I don't know. Maybe '~' was prevalent in Emacs source, or I'm conflating TECO with Something Else.

By my era VT-52s were gone; you'd occasionally see a VT100 in a server room where nobody wanted to waste $$; the terminal of choice at Stanford CS was the Heathkit H19, and if you were in one of the well-financed research groups, you got a Sun-1 or a Sun-2. At DEC(WSL), where I interned, it was all personal VAXstations.

I do recall Emacs ^S and ^Q being problematic due to terminal mode occasionally getting set badly (and then the underlying hardware would wake up, "Oh, flow control! I know how to do that!", ^S would freeze everything and you had to Just Know to do ^Q...)

in reply to Roger Crew✅❌☑🗸❎✖✓✔

@wrog
This seems familiar, but I'm not wholly sure why.

It is binary 01111110 and as such really did show up in some line noise contexts that favored such a thing (it's similar to 11111111).

It's also used by vi to mark nonexistent lines at the end of the file; Bill Joy wanted it to be something other than just nothing on that screen line, for specificity of feedback to the user.

@kentpitman @ramin_hal9001 @screwlisp @cdegroot

in reply to Roger Crew✅❌☑🗸❎✖✓✔

> I also recall '~' being an important character

ok, I seem to be out-to-lunch on this
(or at least, remembering Something Else; but I can't imagine what...):

ibiblio.org/pub/academic/compu…

(admittedly, this is VAX/PDP-11 TECO source for Emacs and maybe Fred had to do a complete rewrite of some sort and the actual TOPS20/PDP-10 source is completely different -- given that there *is* significant dependence on wordsize and other architectural issues, it would have to be *somewhat* different -- but I'd still expect a lot of common code [unless there were copyright issues]).

It *does* definitely look like line noise, though.

This entry was edited (3 days ago)
in reply to Roger Crew✅❌☑🗸❎✖✓✔

“I do recall Emacs ^S and ^Q being problematic due to terminal mode occasionally getting set badly (and then the underlying hardware would wake up, “Oh, flow control! I know how to do that!”, ^S would freeze everything and you had to Just Know to do ^Q…)”


@wrog this is still a problem in modern terminal emulators. On Linux, nearly all terminal emulator software emulates the DEC VT-220 hardware pretty closely, so it does actually send the ASCII DC1 and DC3 characters for C-q and C-s, and the virtual TTY device responds accordingly: DC3 suspends output and DC1 resumes it. You have to execute the command stty -ixon to disable soft flow control for a given TTY device after it has been initialized by the operating system.

I think there is a way to configure the pseudoterminal subsystem to create virtual TTY devices that ignore DC1 and DC3 characters, but I don't know how, and for whatever reason (probably backward compatibility with older Unix systems) Debian-based Linux doesn't configure it this way by default. I think most people just put stty -ixon in their ~/.profile file.

@kentpitman @dougmerritt @screwlisp @cdegroot

#tech #software #Linux #TerminalEmulator #CLI #LinuxTips

in reply to DougMerritt (log😅 = 💧log😄)

@dougmerritt @ewhac that is a good use for ^S/^Q, although I tend to use ^Z for that instead.

And since switching to Emacs and controlling my remote terminals using the TRAMP system, I haven’t actually had to think too much about sending control characters directly to a TTY device. I just let Emacs buffer all the output and C-r search through the buffer.

@kentpitman @screwlisp @wrog @cdegroot

in reply to Kent Pitman

@dougmerritt @wrog
Yes, right. To all that. One minor point is that the PDP-6/10 had a byte-addressing instruction that was pretty weird (overkill in flexibility, like every PDP-6/10 instruction). So that data packing wasn't all that unreasonable.

I showed up to the TECO world in Jan. 1973 with a gofer programming gig in the Macsyma group. The Datapoint terminals were already there, so I missed the pre-(almost)WYSIWYG days.

in reply to David in Tokyo

@djl
Lucky you; I went through teletypes, and then glass terminals lacking cursor control, before finally being in an environment with cursor control terminals capable of WYSIWYG -- and at that, it was pretty random back then who had heard the pro-WYSIWYG arguments and who had not, so...

@kentpitman @wrog @ramin_hal9001 @screwlisp @cdegroot

in reply to DougMerritt (log😅 = 💧log😄)

@dougmerritt @djl @wrog
For those looking on who might not know these terms: teletypes had paper feeding through and mostly did only left-to-right output; they fed the paper a line at a time and never backed up to a previous line. They were also loud and clunky, mostly, and had keyboards whose keys you had to press way down to get them to register.

Glass terminals were displays that could only do output to the bottom line of the screen, kind of like a paper terminal but without the paper. Once a line scrolled up, you generally couldn't scroll back down. That's why such a terminal might sound like it had cursor control, but it did not yet.

screwlisp reshared this.

in reply to Kent Pitman

Yes, and to clarify your final two sentences, the *display* scrolled up with each additional line emitted -- the *cursor* could never scroll up.

In my environment at Berkeley, these were Lear Siegler ADM 3 terminals. The slightly later ADM 3a terminals finally allowed the cursor to be moved around at will (although they didn't have any fancier abilities, unlike still later devices).

Thanks for thinking to explain what I did not.

@djl @wrog @ramin_hal9001 @screwlisp @cdegroot

in reply to Kent Pitman

@dougmerritt @wrog
The datapoint terminals were _almost_ wysiwyg: they didn't have a cursor, so the TECO of the time inserted "/\" in the text displayed, and you could insert text there, delete the next character and the like.

But TECO allowed you to change the "/\" to whatever you liked, so if you left your terminal, someone would change that to "/\Foo is loser" and Foo wouldn't be able to delete that text from Foo's file...

screwlisp reshared this.

in reply to DougMerritt (log😅 = 💧log😄)

@dougmerritt @wrog
I've been through 17 or so environments, and I was always able to find an editor that could be persuaded to act the way I wanted: CCA, NEC, AT&T and even Word for MS-DOS.

Hilariously, Word for Windows defeated me. There was no way to persuade it to act as a civilized text editor, so I acquired the source code to WordPad and implemented my usual TECO macros in C++, and used that for 20 years or so.

in reply to DougMerritt (log😅 = 💧log😄)

@dougmerritt @wrog
Yes. I missed the teletype round. Sort of. My father was site engineer for one of the early LINC-8 installations, and later a PDP-7 installation, and they had teletypes. They.Were.Horrid.

Peter Belmont (later an Ada developer) tried to persuade me to take up programming, but I was busy doing other things. The IBM card punches had really sweet keyboards, though.

in reply to DougMerritt (log😅 = 💧log😄)

Yeah I had <1 year stuck on DEC-20s at Stanford before Unix boxes became generally available (originally had to be an RA on a grant with its own VAX, and incoming students on NSFs typically weren't). Seeing Gosling Emacs that first spring, it was clear that was The Future...
⟹ less reason to do TECO

... though ironically, I *did* learn the SAIL editor (SAIL/WAITS -- TOPS-10 derivative -- was, by 1985, a completely dead software ecosystem, *only* continued to exist because Knuth and McCarthy had decades of crap + sufficient grant $$ for the (by then) fantastic expense to keep it going; the only other people who used it were the 3 of us maintaining the Pony (vending machine))

This entry was edited (3 days ago)

screwlisp reshared this.

in reply to DougMerritt (log😅 = 💧log😄)

@dougmerritt yes, I am maybe a little unclear in what I wrote. I tend to take shortcuts when I write about Scheme that make it seem I am equating it to the untyped lambda calculus.

I have heard of the Turing Tarpit. And I have inspected the intermediate representation of Haskell before (the Haskell Core), so I have seen first-hand that expressing things in lambda calculus makes the code almost impossible to understand.

@kentpitman @screwlisp @wrog @cdegroot

in reply to Ramin Honary

Just as a BTW, LLMs have layers of necessary algorithms, rather than just one single algorithm.

That said, someone no doubt *has* reduced that core to one line of APL. 🙄

P.S. arguments about whether "expressiveness" is the right description may end up being about differences without distinctions.

@kentpitman @screwlisp @wrog @cdegroot

in reply to Roger Crew✅❌☑🗸❎✖✓✔

@wrog
Thanks for this detailed reply. Lotta good stuff there. Also thanks especially for indulging the improper fraction. I mostly do not use the fractional labeling for posts for fear of that scenario. Sometimes you promise to stop and then realize you want to keep going and feel impeded. I'm glad you kept on.
in reply to Roger Crew✅❌☑🗸❎✖✓✔

@JamesGosling

I should note that seeing this
universeodon.com/@JamesGosling…

reminded me that I didn't actually totally understand this
mastodon.murkworks.net/@wrog/1…

until it came time for me to implement my own garbage collector for ASTLOG, and I'm definitely **not** any kind of GC expert. I used a dirt-simple 2-space copying collector that Appel published in the 80s that somebody pointed me to.

it ran every bit as fast as promised.

(And debugging a GC is such an acid trip. Stuff just **disappears**; you have to figure out how. Who'd've guessed this would happen?)

I now think that writing a GC, like dissecting frogs in 9th grade, is something that **everybody** should do at least once.
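
For anyone curious what "dirt-simple 2-space copying" means, here is a toy sketch of the Cheney-style idea in Common Lisp. This is emphatically not the ASTLOG collector or Appel's published code; all the names are made up, cells are two words, a heap "pointer" is just an index tagged as (:ptr . index), and a real collector works on raw memory rather than vectors:

(defstruct (heap (:conc-name hp-))
  (from (make-array 1024 :initial-element nil))  ; live space
  (to   (make-array 1024 :initial-element nil))  ; empty space
  (free 0))   ; next unused word of to-space during a collection

(defun forward (hp ptr)
  ;; Copy the two-word cell at PTR into to-space exactly once,
  ;; leaving a forwarding pointer behind so sharing is preserved.
  (let ((old (aref (hp-from hp) ptr)))
    (if (and (consp old) (eq (car old) :forwarded))
        (cdr old)                                 ; already moved
        (let ((new (hp-free hp)))
          (setf (aref (hp-to hp) new)      old
                (aref (hp-to hp) (1+ new)) (aref (hp-from hp) (1+ ptr))
                (aref (hp-from hp) ptr)    (cons :forwarded new)
                (hp-free hp)               (+ new 2))
          new))))

(defun collect (hp roots)
  ;; Cheney scan: copy the roots, then chase a scan pointer through
  ;; to-space, copying whatever the copied cells still point at.
  ;; Anything never reached is never copied -- that's the whole trick.
  (setf (hp-free hp) 0)
  (let ((new-roots (mapcar (lambda (r) (forward hp r)) roots))
        (scan 0))
    (loop while (< scan (hp-free hp))
          do (let ((slot (aref (hp-to hp) scan)))
               (when (and (consp slot) (eq (car slot) :ptr))
                 (setf (aref (hp-to hp) scan)
                       (cons :ptr (forward hp (cdr slot))))))
             (incf scan))
    (rotatef (hp-from hp) (hp-to hp))             ; flip the spaces
    new-roots))

The "stuff just disappears" debugging experience follows directly: an object you forgot to register as a root simply never gets forwarded.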



This entry was edited (3 hours ago)

screwlisp reshared this.

in reply to Roger Crew✅❌☑🗸❎✖✓✔

@wrog it's a good chunk of the reason why Erlang shines here. Per-process GC can be kept simple (a process is more like an object than a thread, so you have lots of them) and no equivalent of setq - all data is immutable.

(there is a shared heap, but that also is just immutable data).

in reply to Cees de Groot

yes, the BEAM virtual machine is pretty amazing technology; there are very good reasons why it is used in telecom and other scenarios where zero downtime is a priority. I think .NET and Graal have been slowly incorporating more of BEAM's features into their own runtimes. For about three years now, .NET has been able to do "hot code reloading," for example.

I have used Erlang before but not Elixir. I think I would like Elixir better because of its slightly-more-Haskell-like type system.

@wrog @kentpitman @screwlisp

This entry was edited (6 days ago)
in reply to Ramin Honary

@wrog not just zero downtime, the more important aspect is how it does concurrency, how it manages to scale that, and how well it fits the modern requirements of "webapps" (like a glove).

It changed my thinking about objects, just like Smalltalk did before. I'm fully on board with Joe Armstrong's quip that Erlang is "the most OO language" (or something to that effect); having objects with effectively their own address space, their own processor scheduling, etc., completely changes how you think about building scalable concurrent systems (and _then_ you get clustering for free, and sometimes hot reloading is a production thing, although 99% of the time it is good to have it in the REPL)

in reply to Roger Crew✅❌☑🗸❎✖✓✔

@wrog
'setq' and friends have been criticized forever, but avoiding mutation is easier said than done. Parsing arbitrarily large sexpr's requires mutation behind the scenes -- which ideally is where it should stay.

Any language we use that helps avoid mutation is a good thing. 100% avoidance is a matter of opinion -- some people claim it was proven to be fully avoidable decades ago, others say the jury is still out on the 100% part.

I don't know enough to have an opinion on whether 100% has been completely proven, but it's attractive.
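
A small Common Lisp illustration of mutation staying behind the scenes (read-all is a made-up name): the classic push-then-nreverse idiom, destructive inside, pure at the interface:

(defun read-all (stream)
  ;; Read every form from STREAM into a fresh list.
  (let ((forms '()))
    (loop for form = (read stream nil stream)  ; EOF returns the sentinel
          until (eq form stream)
          do (push form forms))                ; mutation local to FORMS
    (nreverse forms)))                         ; destructive, but on a fresh list

;; (with-input-from-string (s "(a b) 42 foo")
;;   (read-all s))
;; => ((A B) 42 FOO)

Callers never observe the setq-like activity; the function is observationally pure.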

@kentpitman @screwlisp @cdegroot @ramin_hal9001

in reply to Kent Pitman

I respect you, and your contributions to Lisp and the community. So I dislike nitpicking you. But:

> Common Lisp macros more unhygienic than they actually are

This is a biased phrasing. There are hygienic macro systems and unhygienic macro systems. One cannot assign a degree of "hygienic-ness" without simultaneously defining what metric you are introducing.

We all can agree that one can produce great code in Common Lisp. It's not like Scheme is *necessary* for that.

But de gustibus non est disputandum. There are objective qualities of various macro systems -- and then there's people's preferences about those qualities.

Bottom line: it seems you are saying that Lisp macros aren't so bad if their use is constrained to safe uses, and I would agree with *that*.

@screwlisp @cdegroot @ramin_hal9001

in reply to DougMerritt (log😅 = 💧log😄)

@dougmerritt
> it seems you are saying that Lisp macros aren't so bad if their use is constrained to safe uses

Well, what I'm saying isn't formal, and that in itself bugs some people. But the usual criticism of the CL system isn't that "people have to be careful", it's that "ordinary use is not safe". But there's safe and then there's safe.

There is a sense in which C is objectively less safe than, say, Python or Lisp. And there is a sense in which people who write languages that aspire to more proofs think those languages still are not safe. So there's a bit of a continuum here that makes terminology tricky, and I have to make some assumptions that are fragile, because some after-the-fact dodging can be done: critics may not acknowledge the incremental strengths, and just keep pointing out other problems as if that's what they meant all along.

In Scheme, and ignoring that you could do this functionally, writing a macro foo that takes an argument and yields the list of that argument can't look like `(list ,thing), because if it's used in some situation like (define (bar list) (foo list)) you fall victim to namespace clashes. And so Scheme people dislike this paradigm. But even without careful planning, the same problem is FAR LESS likely to happen in CL because:

Parameters that might get captured are usually in the variable namespace. You CAN bind functions, but it's rare, and it's super-rare for the names chosen to be the name of a pre-defined function. You'd have to be in some context where someone had done (flet ((list ...)) ....) for the list function to be bound to something unexpected, and even then you're not supposed to bind list to something unexpected, for other reasons, mainly that the symbol list is shared.

I allege that in the natural course of things, it's FAR more rare for the expansion of a macro to ever contain something that would get unexpectedly captured, for reasons that do not exist in the Scheme world. Formally, yes, there is still a risk, but what makes this such an urgency in the Scheme world are the choice to have a Lisp1 and the choice to have no package system; each of the opposite choices in CL creates an insulation. In practice, the functional part of the CL world does not vary, as uses of FLET are very rare. And it's equally rare for a macro to expand into free references that are not functional references.

Also, the CL world has gensyms easily available, and CL systems often have other mechanisms that package up their use to be easy. In the Scheme world, there is no gensym, and the language semantics is defined not on objects but on the notation itself. That makes things hard to compare, and it obscures how package separation also eliminates a broad class of the surprise: usually you know what's in your own package and aren't affected by what's in someone else's, whereas in Scheme symbols are just symbols, and it's far more dangerous to rely on lexical context to sort everything out.

So yes, CL is less dangerous if you limit yourself, but it's also less dangerous because a lot of the time you don't have to think hard about limiting yourself. The language features it has create naturally safer situations. Note I am making a relative, not an absolute, measurement of safety. I'm saying that if CL were full of the conflict opportunities that Scheme is, we'd have rushed to use hygiene, too. But mostly it wasn't, so no one felt the urge.
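
To pin down the example for onlookers, a sketch in Common Lisp (foo and bar are just illustrative names):

(defmacro foo (thing)
  `(list ,thing))            ; expands into a free reference to LIST

(defun bar (list)            ; parameter named LIST, as in the Scheme case
  (foo list))
;; (bar 3) => (3)

In a Lisp1 with no packages, the parameter would capture the expansion's list. In CL, bar's parameter lives in the variable namespace while the expansion's list is a function reference, so they never meet; and CL's conformance rules, if I recall them correctly, make it undefined for user code to lexically rebind the function of a standard symbol like cl:list, which is more of the insulation described above. For variables the macro itself introduces, gensym removes the remaining risk:

(defmacro swap (a b)         ; the classic textbook example
  (let ((tmp (gensym "TMP")))
    `(let ((,tmp ,a))        ; ,tmp is a fresh uninterned symbol
       (setf ,a ,b)
       (setf ,b ,tmp))))

No user variable can collide with the #:TMP symbol that gensym produces.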

screwlisp reshared this.

in reply to Kent Pitman

On the one hand, that is all well said.

On the other hand, I always have some nitpicky reply. 😀

(On the gripping hand -- no, I'll stop there)

You're talking about what is common and what is rare, and I can see why such was your overriding concern.

But I feel like I'm always the guy who ends up needing to fix the rare cases that then happen in real life.

For instance, when implementing a language that is wildly different than the implementation language -- "rare" seems to come up a lot there.

And also many times when I am bending heaven and earth to serve my will despite the obstinacy of the existing software infrastructure. "Just don't do that", people say.

It is indeed a lot like the needs of the formal verification by proof community, that is looking for actual math proofs, versus mundane everyday user needs.

Humpty Dumpty said "The question is, which is to be the master -- that's all" ("Through The Looking Glass", by Lewis Carroll).

Here, perhaps the master is which community you aim to serve.

@screwlisp @cdegroot @ramin_hal9001

in reply to DougMerritt (log😅 = 💧log😄)

@dougmerritt
Well, I'm just trying to explain why hygiene seems more like a crisis to the Scheme community than it did to the CL community, who mostly asked "why is this a big deal?". It is a big deal in Scheme. And it's not because of the mindset, it's because different designs favor different outcomes.

The CL community would have been outraged if we overcomplicated macros, while the Scheme community was grateful for safety they actually perceived a need for, in other words.

So yes, "the master is which community you aim to serve". We agree on that. 😀

in reply to Kent Pitman

I just want to say, I never had much of an opinion on hygienic macros, other than they seemed like a very good idea. But your explanation of why it isn’t a big deal in Common Lisp, because namespaces and libraries prevent nearly all name collisions, was very convincing. And when you consider how complicated the Racket macro expander is, you start to wonder whether it is really worth all of that complexity to ensure a very particular coding problem never happens.

@dougmerritt @screwlisp @cdegroot

in reply to Kent Pitman

> I don't quite remember if the original language feature had fully worked through all the tail call situations in the way that ultimately it did.

My memory is that the Scheme interface for continuations was completely worked out when Scheme was born, but implementation issues were not -- beyond an existence proof, that is.

> But it was brave to say that full continuations could be made adequately efficient.

Yes it was!
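
(For onlookers, a sketch of the gap in Common Lisp terms -- illustrative only. CL gives you escape-only continuations via BLOCK/RETURN-FROM; they can jump out, but only while their block's dynamic extent is still live:

(defun first-negative (numbers)
  (block found
    (dolist (x numbers)
      (when (minusp x)
        (return-from found x)))   ; a one-shot, outward-only escape
    nil))
;; (first-negative '(3 1 -4 1 -5)) => -4

Scheme's call/cc removes both restrictions -- the captured continuation is reentrant and survives its creating frame -- and making *that* adequately efficient was the brave part.)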

> the Lisp community in general, and here I will include Scheme in that

Planner, for instance, went in a quite different direction. Micro-Planner (and SHRDLU, written in it) inspired Prolog. Robert Kowalski said that "Prolog is what Planner should have been" (it included unification but excluded pattern-directed invocation, for example); see Kowalski, R. (1988), "Logic Programming," Communications of the ACM, 31(9) -- although the precise phrasing, I think, is from interviews.

Anyway, Prolog was not a Lisp, but sure, definitely Scheme is. The history of Lisp spinoffs created quite a bit of CS history.

I did professional development in Scheme (at Autodesk, before that division was axed 🙁) -- it's certainly a workable language in the real world.

But we know that Common Lisp is too, obviously.

@screwlisp @cdegroot @ramin_hal9001

screwlisp reshared this.

in reply to Kent Pitman

> 2 of them using hardware support (typed pointers)

I learned about typed pointers from Keith Sklower, from my brief involvement in the earliest days (1978?) of Berkeley's Franz Lisp (implemented in order to support the port of the Macsyma computer algebra system to the VAX -- "Vaxima"), and it blew my mind. Horizons extended hugely.

A few years later everyone seemed to just take the idea in stride. Yet no one seems to comment on the impact of big-endian versus little-endian architectures on typed pointers; everyone seems to regard endianness as a matter of taste. It isn't always; it impacts low-level implementations.
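
For onlookers, a toy sketch of the tag-bit flavor of typed pointers in Common Lisp (purely illustrative names and tags; real systems vary, and word layout -- hence endianness -- is exactly where the low-level impact shows up, e.g. whether a one-byte tag fetch overlaps the low or high end of the word):

(defconstant +tag-mask+   #b111)   ; low 3 bits of every "pointer"
(defconstant +fixnum-tag+ #b000)
(defconstant +cons-tag+   #b001)

(defun tag-of (word)
  (logand word +tag-mask+))

(defun cons-pointer-p (word)
  ;; The type test is a mask on the pointer itself; no memory touch.
  (= (tag-of word) +cons-tag+))

;; A cons cell at 8-aligned address #x1000 is represented as #x1001:
;; (cons-pointer-p #x1001)      => T
;; (logandc2 #x1001 +tag-mask+) => #x1000  ; recover the raw address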

@screwlisp @cdegroot @ramin_hal9001

in reply to Kent Pitman

>My (possibly faulty) understanding is that the Java GC was made to work by at least some displaced Lisp GC experts

I used to regularly talk to the technical lead for that group at Sun for unimportant reasons, and I have every reason to think that the entire team was absolutely brilliant.

I don't recall whether some of them were displaced Lisp GC experts, but I do recall that I had plenty of criticisms about Java the language, but tended to find few, if any, about Java the implementation. And they kept improving it.

@screwlisp @cdegroot @ramin_hal9001

screwlisp reshared this.

in reply to Kent Pitman

Your understanding is mostly faulty. The original GC was written by me, and I'm no Lisp GC expert. I was (and still am) an admirer of Lisp. I wrote the code for my whole PhD thesis in Lisp. My admiration for garbage collection started earlier, when I was a big user of Simula in the 70s. But the motivation for GC in Java was different: the motivation was all about reliability and security. A leading cause of security vulnerabilities has always been buggy code. And one of the leading root causes of many long-standing, hard-to-diagnose-and-fix bugs has been flaky storage management. Garbage collection goes a long way to increasing system reliability, and hence security. I had always wanted to make GC more mainstream.

When you described garbage collection to senior management back in the day, their reflexive judgement was: "bullshit! Lazy engineers just don't want to clean up their mess". But when they see measurable improvements in system robustness, and corresponding decreases in failures, they Notice.

in reply to Michael T Babcock

Geeks under a certain age are impressed by the idea one was messing about in massively multiplayer worlds in the 1980s. It was early!

I ran into TinyMUD first, and via TinyMUCK its Forth-based MUD language, MUF. Something about programming in MUDs lent itself to thinking in objects, though, and in thinking about the things I wished I could do, I (later realized I) started reverse-engineering object-oriented coding.

(I'd had earlier encounters w LISP, so at some point I realized what I was doing.)
