LLM access is relatively cheap right now because the LLM vendors are selling it at a massive loss, subsidized by VC money, in order to get you addicted and to drive as much skilled human labor as possible out of the workforce permanently.
The goal is monopolization, and if they’re successful, you’ll see monopolistic pricing in the future.
Boyd Stephen Smith Jr.
in reply to Jeff Johnson:
I feel compelled to mention there are models you can self-host. There are even models whose architecture is available under a permissive license, so you can tweak, tune, retrain, or distill them beyond mere prompting.
I don't recommend or defend that approach. I think there are still problems, ethical and other.
But, it could be a way to prevent "vendor lock-in" with your LLM usage.
Boyd Stephen Smith Jr.
in reply to Jonathan Lamothe:
@me Generating test data, as a complement to QuickCheck/SmallCheck generators. I think LLMs might "explore the probability space" in different ways than manually written generators. But, I haven't validated this in practice.
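The idea above could be sketched roughly like this: check a property both against systematically enumerated small inputs (SmallCheck-style) and against a hand-curated list standing in for LLM-suggested edge cases. This is plain Haskell with no QuickCheck dependency, and the `llmSuggestedCases` list is hand-written here, not actual model output; all names are illustrative.

```haskell
-- Property under test: reversing a list twice is the identity.
prop_reverseTwice :: [Int] -> Bool
prop_reverseTwice xs = reverse (reverse xs) == xs

-- SmallCheck-style exhaustive enumeration of small inputs:
-- every list of length 0..3 over a tiny value domain.
enumeratedCases :: [[Int]]
enumeratedCases = [ xs | n <- [0 .. 3], xs <- listsOfLength n [-1, 0, 1] ]
  where
    listsOfLength :: Int -> [Int] -> [[Int]]
    listsOfLength 0 _  = [[]]
    listsOfLength k vs = [ v : rest | v <- vs, rest <- listsOfLength (k - 1) vs ]

-- Cases an LLM might propose (hypothetical, hand-written for this sketch):
-- the hope is that they land in corners a generator's distribution
-- rarely visits, like extreme values or unusually long inputs.
llmSuggestedCases :: [[Int]]
llmSuggestedCases =
  [ [minBound, maxBound]
  , replicate 1000 7
  , [0, minBound]
  ]

main :: IO ()
main = do
  let failures = filter (not . prop_reverseTwice)
                        (enumeratedCases ++ llmSuggestedCases)
  putStrLn $ if null failures then "all cases passed" else "FAILED"
```

The two sources of inputs are complementary: the enumeration gives systematic coverage near the origin, while the curated (or model-suggested) list can probe boundary values the enumeration never reaches.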
I've been fairly disappointed with LLMs' output all the times I've tried them. Too many hallucinations around factual data. Too little... variety(?) when doing fiction. The image generators seem better than me, but I have declined to use them (much) because I assume the image generators are "stealing" from the recognition/attribution of artists who make their art publicly visible. I know the code generators "steal" copyleft code, most likely including mine.
I don't like saying LLMs' capabilities are bad, because I don't use them, for ethical reasons, enough to really know what their current capabilities are.