LLMs are not intelligent, but they ARE good at spouting extruded text that mimics the zeitgeist of their training data. If that training data includes the overflowing sewer that is the unfiltered internet, you're going to get gaslighting, lies, conspiracy theories, and malice. Guardrails only prevent some of it from spilling over.
Humans base their norms on their peers' opinions, so LLMs potentially normalize all sorts of horrors in discourse—probably including ones we haven't glimpsed yet.
Charlie Stross
in reply to Charlie Stross
To be clear, that point about LLMs is a criticism of the training inputs, which as far as I can see are promiscuously hoovered off the public internet by bots (like the ones perpetually DoS'ing the server my blog runs on this year) with zero thought given to curating the data for accuracy or appropriateness.
They generate perfect "answer shaped objects" … from a foul-minded misogynist white supremacist bigot oozing malice against everyone who's not like him.
John
in reply to Charlie Stross
LLMs are the shitbrained version of the ML models that had been accelerating (and transforming, for the better) entire research areas and industries until LLMs took up all the oxygen.
To your point, those ML models were built slowly and deliberately using vetted information, which is why they're truly miraculous.
Only this era's humanity would throw away actually revolutionary technology to invest in the image of revolutionary technology instead.
But it's not a pipe, never will be
Jack William Bell
in reply to John
@johnzajac
> "Only this era's humanity would throw away actually revolutionary technology to invest in the image of revolutionary technology instead."
I disagree. The greater mass of humanity has *always* made stupid choices and the greater number of leaders have *always* been lesser people.
And this has been true for all of human history: a few leaders who are capable and decent human beings, and orders of magnitude more whose only real talent is remorseless backstabbing.
Jack William Bell
in reply to Jack William Bell
@johnzajac
Instead of the 'Great Man' theory of history, let me introduce you to the 'Unintended Side Effect' theory of history.
ETA: On a re-read I realized I failed to properly tie this in with what you are saying. Sorry about that. By the above I mean humans have historically made EXACTLY the kinds of mistakes you describe (along with many other kinds of mistakes), both en masse and through the individual choices of our leaders.
But then history has taught me only anthroskepticism.
John
in reply to Jack William Bell
@jackwilliambell
Generally I'd agree, except for one caveat: it's only in the modern era that humanity has created technology that not only does nothing but also consumes everything, and is sitting on so much prosperity (near to post-scarcity levels of it), so poorly distributed, that they even have the *choice* to do that.
I simply think that all the evidence points to what's been happening over the last 40 years as unique in human history in both the scale and the kind of its depravity.
Nicole Parsons
in reply to Charlie Stross
Some AI lobbying takes the form of cheating its way to wins in competitions.
fortune.com/2025/01/21/eye-on-…
decrypt.co/302691/did-openai-c…
the-decoder.com/openai-claims-…
theatlantic.com/technology/arc…
techrepublic.com/article/news-…
We’re Entering Uncharted Territory for Math
Matteo Wong (The Atlantic)
Jonathan Lamothe
in reply to Charlie Stross
@Charlie Stross A million times this.
Are there very narrow areas where AI can be useful? Probably, but easily 99.9% of the stuff it's being shoehorned into is stuff AI ought never to be trusted with.