LLMs are not intelligent, but they ARE good at spouting extruded text that mimics the zeitgeist of their training data. And if that training data includes the overflowing sewer that is the unfiltered internet, you're going to get gaslighting, lies, conspiracy theories, and malice. Guardrails only prevent some of it from spilling over.

Humans base their norms on their peers' opinions, so LLMs potentially normalize all sorts of horrors in discourse—probably including ones we haven't glimpsed yet.

in reply to Charlie Stross

To be clear, that point about LLMs is a criticism of the training inputs. Which as far as I can see are promiscuously hoovered off the public internet by bots (like the ones perpetually DoS'ing the server my blog runs on this year) with zero thought given to curating the data for accuracy or appropriateness.

They generate perfect "answer shaped objects" … from a foul-minded misogynist white supremacist bigot oozing malice against everyone who's not like him.

in reply to Charlie Stross

LLMs are the shitbrained version of the ML models that had been accelerating (and transforming entire research areas and industries for the better) until LLMs took up all the oxygen.

To your point, those ML models were built slowly and deliberately using vetted information, which is why they're truly miraculous.

Only this era's humanity would throw away actually revolutionary technology to invest in the image of revolutionary technology instead.

But it's not a pipe, never will be

in reply to John

@johnzajac
> "Only this era's humanity would throw away actually revolutionary technology to invest in the image of revolutionary technology instead."

I disagree. The greater mass of humanity has *always* made stupid choices and the greater number of leaders have *always* been lesser people.

And this has been true for all of human history: a few leaders who are capable and decent human beings, and orders of magnitude more whose only real talent is remorseless backstabbing.

@John
in reply to Jack William Bell

@johnzajac
Instead of the 'Great Man' theory of history, let me introduce you to the 'Unintended Side Effect' theory of history.

ETA: On a re-read I realized I failed to properly tie this in with what you are saying. Sorry about that. By the above I mean humans have historically made EXACTLY the kinds of mistakes you describe (along with many other kinds of mistakes), both en masse and through the individual choices of our leaders.

But then history has taught me only anthroskepticism.

@John
in reply to Jack William Bell

@jackwilliambell
Generally I'd agree, except for one caveat: it's only in the modern era that humanity has created technology that not only does nothing but also consumes everything, while sitting on so much prosperity (near to post-scarcity levels of it), so poorly distributed, that people even have the *choice* to do that.

I simply think all the evidence points to what's happened over the last 40 years as unique in human history, in both the scale and the kind of its depravity.

in reply to Charlie Stross

@Charlie Stross A million times this.

Are there very narrow areas where AI can be useful? Probably, but easily 99.9% of the stuff it's being shoehorned into, AI ought never to be trusted with.
