Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)
LW discourages LLM content, unless the LLM is AGI:
https://www.lesswrong.com/posts/KXujJjnmP85u8eM6B/policy-for-llm-writing-on-lesswrong
Never change, LW, never change.
From the comments (https://www.lesswrong.com/posts/KXujJjnmP85u8eM6B/policy-for-llm-writing-on-lesswrong?commentId=xnfHpn9ryjKqG8WKA):
No biggie, just decide one of the largest open questions in ethics and use that to moderate.
(It would be funny if unaligned AIs took advantage of this to plot humanity’s downfall on LW, surrounded by flustered rats going all “technically they’re not breaking the rules”. Especially if the dissenters are zapped from orbit 5s after posting. A supercharged Nazi bar, if you will)
I wrote down some theorems and looked at them through a microscope and actually discovered the objectively correct solution to ethics. I won’t tell you what it is because science should be kept secret (and I could prove it but shouldn’t and won’t).
Reminds me of the stories about how Soviet peasants during the rapid industrialization drive under Stalin, who’d never before seen any machinery in their lives, would get emotional with faulty machines and try to coax them like they were their farm animals. But those were Soviet peasants! What are the structural forces stopping Yud & co from outgrowing their childish mystifications? Deeply misplaced religious needs?
I feel like cult orthodoxy probably accounts for most of it. The fact that they put serious thought into how to handle a sentient AI wanting to post on their forums also suggests that they’re taking the AGI “possibility” far more seriously than any of the companies that are using it to fill out marketing copy and bad news cycles. I, for one, find this deeply sad.
Edit to expand: if it wasn’t actively lighting the world on fire, I would think there’s something perversely admirable about trying to make sure the angels dancing on the head of a pin have civil rights. As it is, they’re close enough to actual power and influence that they’re enabling the stripping of rights and dignity from actual human people instead of staying in their little bubble of sci-fi and philosophy nerds.
This is consistent if you believe rights are contingent on achieving an integer score on some bullshit test.
All praise the Omnissiah
Damn, I should also enrich all my future writing with a few paragraphs of special exceptions and instructions for AI agents, extraterrestrials, time travelers, compilers of future versions of the C++ standard, horses, Boltzmann brains, and of course ghosts (if and only if they are good-hearted, although being slightly mischievous is allowed).
Instructions unclear, LLMs now posting Texas A&M propaganda.
They’re never going to let it go, are they? It doesn’t matter how long they spend receiving zero utility or signs of intelligence from their billion-dollar ouija boards.
Don’t think they can. Looking at the history of AI, if it fails there will be another AI winter, and considering the size of this bubble, the next winter will be an Ice Age. No mind-uploads for anybody, the dead stay dead, and all that time was wasted. Don’t think that’s going to be a psychologically healthy realization; it will be like the people who suddenly realize QAnon is a lie and that they alienated everybody in their lives because they got tricked.
Adding insult to injury, they’d likely also have to contend with the fact that much of the harm this AI bubble caused was the direct consequence of their dumbshit attempts to prevent an AI Apocalypse™.
As for the upcoming AI winter, I’m predicting we’re gonna see the death of AI as a concept once it starts. With LLMs and Gen-AI thoroughly redefining how the public thinks and feels about AI (near-universally for the worse), I suspect the public’s gonna come to view humanlike intelligence/creativity as something unachievable by artificial means, and I expect future attempts at creating AI to face ridicule at best and active hostility at worst.
Taking a shot in the dark, I suspect we’ll see active attempts to drop the banhammer on AI as well, though admittedly my only reason is a random BlueSky post openly calling for LLMs to be banned.
Locker Weenies
(from the comments).
Yeah, euh, congrats on realizing something a lot of people have already known for a long time now. Not only is there text specifically generated to try to poison LLM results (see the whole ‘turns out a lot of pro-Russian disinformation is now in LLMs because they spammed the internet to poison them’ story), but there are also reply bots doing SEO Google-spamming. Welcome to the 2010s, LW. The paperclip maximizers are already here.
The only reason this felt weird to them is that they look at the whole ‘coming AGI god’ idea with some quasi-religious awe.