• 1 Post
  • 62 Comments
Joined 2 years ago
Cake day: August 29th, 2023

  • The sequence of links hopefully lays things out well enough for normies? I think it does, but I’ve been aware of the scene since the mid-2010s, so I’m not the audience that needs it. I can almost feel sympathy for Sam dealing with all the doomers, except he uses the doom and hype to market OpenAI and he lied a bunch, so not really. And I can almost feel sympathy for the board, getting lied to and outmaneuvered by a sociopathic CEO, but they are a bunch of doomers from the sound of it, so, eh. I would say they deserve each other; it’s the rest of the world that doesn’t deserve them (from the teacher dealing with the LLM slop plugged into homework, to the website admin fending off scrapers, to legitimate ML researchers getting the attention sucked away while another AI winter starts to loom, to the machine cultist not saving a retirement fund and having panic attacks over the upcoming salvation or doom).


  • As to cryonics… neither the LLM doomers nor the accelerationists have any need for a frozen purgatory when the techno-rapture is just a few years away.

    As for the rest of the shiny futuristic dreams, they have given way to ugly practical realities:

    • no magic nootropics, just Scott telling people to take Adderall and other rationalists telling people to microdose LSD

    • no low-hanging fruit in terms of gene editing (as epistaxis pointed out over on Reddit), so they’re left with eugenics and GeneSmith’s insanity

    • no Drexler nanotech, so they are left hoping (or fearing) the god-AI can figure it out (which is also a problem for ever reviving cryonically frozen people)

    • no exocortex, just overpriced Google Glasses and a hallucinating LLM “assistant”

    • no neural jacks (or neural lace, or whatever the cyberpunk term for them is), just Elon murdering a bunch of lab animals and offering (temporary) hope to paralyzed people

    The future is here, and it’s subpar compared to the early 2000s fantasies. But hey, you can rip off Ghibli’s style for your shitty fanfic projects, so there are a few upsides.





  • He made some predictions about AI back in 2021 that, if you squint hard enough and totally believe the current hype about how useful LLMs are, you could claim are relatively accurate.

    His predictions here: https://www.lesswrong.com/posts/6Xgy6CAf2jqHhynHL/what-2026-looks-like

    And someone scoring them very very generously: https://www.lesswrong.com/posts/u9Kr97di29CkMvjaj/evaluating-what-2026-looks-like-so-far

    My own scoring:

    “The first prompt programming libraries start to develop, along with the first bureaucracies.”

    I don’t think any sane programmer or scientist would credit the current “prompt engineering” “skill set” as comparable to programming libraries, and AI agents still aren’t what he was predicting for 2022.

    “Thanks to the multimodal pre-training and the fine-tuning, the models of 2022 make GPT-3 look like GPT-1.”

    There was a jump from GPT-2 to GPT-3, but the subsequent releases in 2022-2025 were not as qualitatively big.

    “Revenue is high enough to recoup training costs within a year or so.”

    Hahahaha, no… they are still losing money per customer, much less recouping training costs.

    “Instead, the AIs just make dumb mistakes, and occasionally ‘pursue unaligned goals’ but in an obvious and straightforward way that quickly and easily gets corrected once people notice”

    The safety researchers have made this one “true” by teeing up prompts specifically to get the AI to do stuff that sounds scary to people who don’t read their actual methods, so I can see how the doomers are claiming success for this prediction in 2024.

    “The alignment community now starts another research agenda, to interrogate AIs about AI-safety-related topics.”

    “They also try to contrive scenarios”

    Emphasis on the word “contrive”.

    “The age of the AI assistant has finally dawned.”

    So this prediction is for 2026, but earlier predictions claimed we would have lots of actually useful, if narrow, use-case apps by 2022–2024, so we are already off target for this prediction.

    I can see how they are trying to anoint him as a prophet, but I don’t think anyone not already drinking the Kool-Aid will buy it.











  • Is this water running over the land or water running over the barricade?

    To engage with his metaphor: this water is dripping slowly through a purpose-dug canal by people who claim they are trying to show the danger of the dikes collapsing, but who are actually serving as the hype arm for people who claim they can turn a small pond into a hydroelectric power source for an entire nation.

    Looking at the details of these “safety evaluations”, it always comes down to directly prompting the LLM and baby-stepping it through the desired outcome, with lots of interpretation applied to find even the faintest rudiments of anything that looks like deception or manipulation or escaping the box. Of course, the doomers will take anything that confirms their existing ideas, so it gets treated as alarming evidence of deception or whatever property they want to anthropomorphize into the LLM to make it seem more threatening.


  • My understanding is that it is possible to reliably (by the standard of reliability required for lab animals) insert genes for individual proteins. E.g., if you want a transgenic mouse line with neurons that fluoresce under laser light when they fire, you can insert a gene sequence for GCaMP without too much hassle. You can even put the inserted gene under the control of certain promoters so that it will only activate in certain types of neurons and not others. Some really ambitious work has inserted multiple sequences for different colors of optogenetic indicators into a single mouse line.

    If you want something more complicated that isn’t just a sequence for a single protein, or at most a few proteins, never mind something as conceptually nebulous as “intelligence”, then yeah, the technology, and even the basic scientific understanding, is lacking.

    Also, the gene insertion techniques that are reliable enough for experimenting on mice and rats aren’t nearly reliable enough to use on humans (not that they even know what genes to insert in the first place for anything but the most straightforward of genetic disorders).




  • Soyweiser has likely accurately identified that you’re JAQing in bad faith, but on the off chance you actually want to educate yourself, the RationalWiki page on Biological Determinism and Eugenics is a decent place to start to see the standard flaws and fallacies used to argue for pro-eugenic positions. RationalWiki has a scathing and sarcastic tone, but that tone is well deserved in this case.

    To provide a brief summary: in general, the pro-eugenicists confuse correlation with causation, misunderstand the direction of causation, overestimate what little correlation there actually is, fail to account for environmental factors (especially systemic inequalities that might require leftist solutions to have any real chance of being fixed), and refuse to acknowledge the context of genetics research (i.e. all the neo-Nazis and alt-righters who will jump on anything they can get).

    The LessWrongers and SSCers sometimes whine that they don’t get fair consideration, but considering they take Charles Murray the slightest bit seriously, they can keep whining.