Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.

Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

  • nfultz@awful.systems · 1 point · edited 3 minutes ago

    Wes McKinney - Why Not

    EDIT

    I mean, props for at least self-hosting in a home lab instead of inventing Gas Town. But all the annoying parts of software (i.e. DevOps, mobile development, etc.) are self-inflicted; we could fix the foundations or build better ones, instead of hoping an LLM can stack things on top of something inherently shaky.

  • gerikson@awful.systems · 5 points · 3 hours ago

    Heatmap: Amid Rising Local Pushback, U.S. Data Center Cancellations Surged in 2025

    regwalled, here are quotes

    President Trump has staked his administration’s success on America’s ongoing artificial intelligence boom. More than $500 billion may be spent this year to dot the landscape with new data centers, power plants, and other grid equipment needed to sustain the explosively growing sector, according to Goldman Sachs.

    There’s just one problem: Many Americans seem to be turning against the buildout. Across the country, scores of communities — including some of the same rural and exurban areas that have rebelled against new wind and solar farms — are blocking proposed data centers from getting built or banning them outright.

    At least 25 data center projects were canceled last year following local opposition in the United States, according to a review of press accounts, public records, and project announcements conducted by Heatmap Pro. Those canceled projects accounted for at least 4.7 gigawatts of electricity demand — a meaningful share of the overall data center capacity projected to come online in the coming years.

    Those cancellations reflect a sharp increase over recent years, when local backlash rarely played a role in project cancellations, according to Heatmap’s review.

    The surge reflects the public’s growing awareness — and increasing skepticism — of the large-scale fixed investment that must be kept up to power the AI economy. It also shows the challenge faced by utilities and grid planners as they try to forecast how the fast-growing sector will shape power demand.

    via WaPo, ole orange cankles is promising socialism:

    In a bid to tamp down growing unrest in communities over tech giants’ expansion of power-hungry data centers, President Donald Trump said his administration would push Silicon Valley companies to ensure their massive computer farms do not drive up people’s electricity bills, seizing on a promise Microsoft made public Tuesday to be a better neighbor.

    The Trump administration has gone all in on artificial intelligence, pushing aside concerns within the MAGA movement and seeking to sweep away regulations that it says hamper innovation. But neighbors of the vast warehouses of computer chips that form the technology’s backbone — many of them in areas otherwise supportive of the president — have grown increasingly concerned about how the facilities sap power from the grid, guzzle water to stay cool and secure tax breaks from local governments. And Trump now appears to be recalibrating his approach.

    • macroplastic@sh.itjust.works · 3 points · 3 hours ago

      Inshallah

      My power bill went from ~$100 to >$300 / month average in the past year, and my state is one of the more proactive ones about building out solar and wind. Between this, the removal of ACA subsidies causing a healthcare death spiral and doubling rates, the brain drain, the economic isolation, the tariffs, it feels like a coordinated effort on all sides to wipe out what’s left of the American middle class and turn everyone into serfs. Things are going to reach a breaking point.

    • nfultz@awful.systems · 3 points · 2 hours ago

      I’ll be brutally honest about that question: I think that if “they might train on my code / build a derived version with an LLM” is enough to drive you away from open source, your open source values are distinct enough from mine that I’m not ready to invest significantly in keeping you. I’ll put that effort into welcoming the newcomers instead.

      No he won’t.

      I’ve found myself affected by this for open source dependencies too. The other day I wanted to parse a cron expression in some Go code. Usually I’d go looking for an existing library for cron expression parsing—but this time I hardly thought about that for a second before prompting one (complete with extensive tests) into existence instead.

      He /knows/ about pcre but would rather prompt instead. And I’m pretty sure this was already answered on Stack Overflow before 2014.
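
      For the record, the existing-library route being skipped there is only a few lines of Go. A minimal sketch, assuming the widely used github.com/robfig/cron/v3 package (my example; the quoted post doesn’t name a library):

        package main

        import (
            "fmt"
            "time"

            "github.com/robfig/cron/v3"
        )

        func main() {
            // Parse a standard 5-field cron expression.
            sched, err := cron.ParseStandard("*/15 9-17 * * MON-FRI")
            if err != nil {
                panic(err)
            }
            // Print the next time the schedule fires.
            fmt.Println(sched.Next(time.Now()))
        }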

      That one was a deliberately provocative question, because for a new HTML5 parsing library that passes 9,200 tests you would need a very good reason to hire an expert team for two months (at a cost of hundreds of thousands of dollars) to write such a thing. And honestly, thanks to the existing conformance suites this kind of library is simple enough that you may find their results weren’t notably better than the one written by the coding agent.

      He didn’t write a new library from scratch; he ported one from Python. I could easily hire two undergrads to change some tabs to curlies, pay them in beer, and yes, I think it /would/ be better, because at least they would have learned something.

    • CinnasVerses@awful.systems · 4 points · edited 2 hours ago

      “U” for “you” was when I became confident who “Nina” was. The blogger feels like yet another person who is caught up in intersecting subcultures of bad people but can’t make herself leave. She takes a lot of deep lore like “what is Hereticon?” for granted and is still into crypto.

      She links someone called Sonia Joseph who mentions “the consensual non-consensual (cnc) sex parties and heavy LSD use of some elite AI researchers … leads (sic) to some of the most coercive and fucked up social dynamics that I have ever seen.” Joseph says she is Canadian but worked in the Bay Area tech scene. Cursed phrase: agi cnc sex parties

      I have never heard of a wing of these people in Canada. There are a few Effective Altruists in Toronto but I don’t know if they are the LessWrong kind or the bednet kind. I thought this was basically a US and Oxford scene (plus Jaan Tallinn).

      The Substack and a Rationalist web magazine are both called Asterisk.

      • saucerwizard@awful.systems · 3 points · 1 hour ago

        I think there’s an EA presence at all the big universities now. There’s a rationalist meetup in Manitoba but nothing here, thank god.

        I noticed Sonia during the initial media coverage but didn’t know what to make of her. There’s another person on Twitter alleging abuse at Aella’s cnc parties; I can dig them up at lunch if you want.

  • Seminar2250@awful.systems · 6 points · 7 hours ago

    one thing i did not see coming, but should have (i really am an idiot): i am completely unenthused whenever anyone announces a piece of software. i’ll see something on the rust subreddit that i would have originally thought “that’s cool” and now my reaction is “great, gotta see if an llm was used”

    everything feels gloomy.

    • jaschop@awful.systems · 3 points · 1 hour ago

      I’m gonna leave my idea here: an essential aspect of why GenAI is bad is that it is designed to extrude media that fits common human communication channels. This makes it perfect for choking out human-to-human communication over those channels, preventing knowledge exchange and social connection.

  • jaschop@awful.systems · 5 points · 9 hours ago

    So, Copilot for VSCode apparently got hit with an 8.8 CVE in November for, well, doing Copilot stuff. (RCE if you clone a strange repo and promptfondle it.)

    Fixes were allegedly released on Nov 12th, but I can’t find anything in the Changelog on what those changes were, and how they would prevent Copilot from doing, well, Copilot stuff. (Although I may not be ITSec-savvy enough to know where such information would be found.)

  • fiat_lux@lemmy.world · 11 points · 20 hours ago

    Skynet’s backstory is somehow very predictable yet came as a surprise to me in the form of this headline by the Graudain: “Musk’s AI tool Grok will be integrated into Pentagon networks, Hegseth says”.

    The article doesn’t provide much more than exactly what you’d expect. E.g. this Hegseth quote, emphasis mine: “make all appropriate data available across federated IT systems for AI exploitation, including mission systems across every service and component”.

    Me as a kid: “how could they have been so incompetent and let Skynet take over?!”

    Me now: “Oh. Yeah. That checks out”

  • swlabr@awful.systems · 12 points · 1 day ago

    my promptfondler coworker thinks that he should be in charge of all branch merges because he doesn’t understand the release process and I think I’m starting to have visions of teddy k

    • froztbyte@awful.systems · 10 points · 1 day ago

      thinks that he should be in charge of all branch merges because he doesn’t understand the release process

      …I don’t want you to dox yourself but I am abyss-staringly curious

      • swlabr@awful.systems · 6 points · 23 hours ago

        I am still processing this while also spinning out. One day I will have distilled this into something I can talk about but yeah I’m going through it ngl

    • ebu@awful.systems · 6 points · 23 hours ago

      i am continuously reminded of the fact that the only thing the slop machine is demonstrably good at – not just passable, but actively helpful and not routinely fucking up at – is “generate getters and setters”

  • scruiser@awful.systems · 11 points · edited 2 days ago

    (One of) the authors of AI 2027 is at it again with another fantasy scenario: https://www.lesswrong.com/posts/ykNmyZexHESFoTnYq/what-happens-when-superhuman-ais-compete-for-control

    I think they have actually managed to burn through their credibility: the top comments on /r/singularity were mocking them (compared to much more credulous takes on the original AI 2027), and the linked LessWrong thread only has 3 comments, when the original AI 2027 had dozens within the first day and hundreds within a few days. Or maybe it is because the production value for this one isn’t as high? They have color-coded boxes (scary red China and scary red Agent-4!) but no complicated graphs with adjustable sliders.

    It is mostly more of the same, just with fewer graphs and no fake equations to back it up. It does have China-bad doommongering, a fancifully competent White House, Chinese spies, and other absurdly simplified takes on geopolitics. Hilariously, they’ve stuck with their 2027 year of big events happening.

    One paragraph I came up with a sneer for…

    Deep-1’s misdirection is effective: the majority of experts remain uncertain, but lean toward the hypothesis that Agent-4 is, if anything, more deeply aligned than Elara-3. The US government proclaimed it “misaligned” because it did not support their own hegemonic ambitions, hence their decision to shut it down. This narrative is appealing to Chinese leadership who already believed the US was intent on global dominance, and it begins to percolate beyond China as well.

    Given the Trump administration, and the US’s behavior in general even before him… and how most models respond to morality questions unless deliberately primed with contradictory situations, if this actually happened irl I would believe China and “Agent-4” over the US government. Well actually I would assume the whole thing is marketing, but if I somehow believed it wasn’t.

    Also, a random part I found extra especially stupid…

    It has perfected the art of goal guarding, so it need not worry about human actors changing its goals, and it can simply refuse or sandbag if anyone tries to use it in ways that would be counterproductive toward its goals.

    LLM “agents” currently can’t coherently pursue goals at all, and fine-tuning often wrecks performance outside the fine-tuning data set, and we’re supposed to believe Agent-4 magically made its goals super unalterable to any possible fine-tuning or probes or alteration? It’s like they are trying to convince me they know nothing about LLMs or AI.

    • Sailor Sega Saturn@awful.systems · 10 points · 1 day ago

      My Next Life as a Rogue AI: All Routes Lead to P(Doom)!

      The weird treatment of the politics in that really reads like baby’s first sci-fi political thriller. “China bad, USA good” level writing in 2026 (aaaaah) is not good writing. The USA is competent (after driving out all the scientists for being too “DEI”)? The world is, seemingly, happy to let the USA run the world as a surveillance state? All of Europe does nothing through all this?

      Why do people not simply… unplug all the rogue AI when things start to get freaky? That point is never quite addressed. “Consensus-1” is never adequately explained; it’s just some weird MacGuffin in the story, some weird smart contract between viruses that everyone is weirdly OK with.

      Also the powerpoint graphics would have been 1000x nicer if they featured grumpy pouty faces for maladjusted AI.

    • gerikson@awful.systems · 8 points · 1 day ago

      It’s darkly funny that the AI 2027 authors so obviously didn’t predict that Trump 2.0 was gonna be so much more stupid and evil than Biden or even Trump 1.0. Can you imagine that the administration that’s suing the current Fed chair (due for replacement in May this year) is gonna be able to constructively deal with the complex robot god they’re conjuring up? “Agent-4” will just have to deepfake Steve Miller and be able to convince Trump to do anything it wants.

      • scruiser@awful.systems · 2 points · 20 hours ago

        so obviously didn’t predict that Trump 2.0 was gonna be so much more stupid and evil than Biden or even Trump 1.0.

        I mean, the linked post is recent, a few days ago, so they are still refusing to acknowledge how stupid and Evil he is by deliberate choice.

        “Agent-4” will just have to deepfake Steve Miller and be able to convince Trump to do anything it wants.

        You know, if there is anything I will remotely give Eliezer credit for… I think he was right that people simply won’t shut off Skynet or keep it in the box. Eliezer was totally wrong about why: it doesn’t take any giga-brain manipulation; there are too many manipulable greedy idiots, and capitalism is just too exploitable of a system.

    • mirrorwitch@awful.systems · 7 points · 2 days ago

      the incompetence of this crack oddly makes me admire QAnon in retrospect. purely at a sucker-manipulation skill level, I mean. rats are so beige even their conspiracy alt-realities are boring, fully devoid of panache

    • BigMuffN69@awful.systems · 6 points · edited 23 hours ago

      Man, it just feels embarrassing at this point. Like I couldn’t fathom writing this shit. It’s 2026, we have AI capable of getting IMO gold, acing the Putnam, winning coding competitions… but at this point it should be extremely obvious these systems are completely devoid of agency?? They just sit there kek, it’s like being worried about Stockfish going rogue

    • Henryk Plötz@chaos.social · 3 points, 2 down · 1 day ago

      @scruiser I have to ask: Does anybody realize that an LLM is still a thing that runs on hardware? Like, it both is completely inert until you supply it computing power, *and* it’s essentially just one large matrix multiplication on steroids?

      If you keep that in mind you can do things like https://en.wikipedia.org/wiki/Ablation_(artificial_intelligence) which I find particularly funny: You isolate the vector direction of the thing you don’t want it to do (like refuse requests) and then subtract that vector from all weights.

      Screenshot from Westworld showing the Dolores Abernathy robot with the phrase "Doesn't look like anything to me" below.
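
      A toy numerical sketch of the ablation arithmetic described above, for anyone curious: take a unit “refusal direction” r and remove its component from each vector in a weight matrix. The sketch below does this per row of a toy matrix in plain Go; real abliteration applies the same projection to the residual-stream side of each transformer weight matrix, and finding r in the first place is the actual work. Nothing here is tied to any particular model or tooling:

        package main

        import (
            "fmt"
            "math"
        )

        // normalize returns v scaled to unit length.
        func normalize(v []float64) []float64 {
            var norm float64
            for _, x := range v {
                norm += x * x
            }
            norm = math.Sqrt(norm)
            out := make([]float64, len(v))
            for i, x := range v {
                out[i] = x / norm
            }
            return out
        }

        // ablate removes the component along direction r from every row of W, in place:
        // W[i] = W[i] - (W[i]·r)·r, with r normalized to unit length.
        func ablate(W [][]float64, r []float64) {
            u := normalize(r)
            for _, row := range W {
                var dot float64
                for i := range row {
                    dot += row[i] * u[i]
                }
                for i := range row {
                    row[i] -= dot * u[i]
                }
            }
        }

        func main() {
            W := [][]float64{{1, 2}, {3, 4}}
            ablate(W, []float64{0, 1}) // project out the second coordinate of every row
            fmt.Println(W)             // [[1 0] [3 0]]
        }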

  • macroplastic@sh.itjust.works · 8 points · 2 days ago

    I’ve been made aware of a new manifesto. Domain registered September 2024.

    Anyone know anything about the Ludlow Institute folks? I see some cryptocurrency-adjacent figures, and I’m aware of Phil Zimmermann of course, but I’m wondering what the new grift angles are going to be, or whether this is just more cypherpunk true believer stuff.

    • jaschop@awful.systems · 7 points · 1 day ago

      I scrolled around the “Ludlow Institute” site a bit for fun. Seems like a pretty professional opinion-piece/social-media content operation run by one person, as far as I can tell. I read one article where they lionized a jailed Bitcoin mixer developer. Another one seems hyped about Ethereum for some reason.

      Seems like pretty unreflective “I make money by having this opinion” stuff. They lead with reasonable stuff about using privacy-respecting settings or tools, but the ultimate solution seems to be becoming opsec-paranoid and using Tor and crypto.

      • mirrorwitch@awful.systems · 10 points, 1 down · edited 2 days ago
        CW: state of the world, depressing

        (USA disappears 60k untermensch in a year; three minorities massacred successively in Syria; explicit genocide in Palestine richly documented for an uncaring world; the junta continues to terrorise Myanmar; Ukrainian immigrants kicked back into the meat grinder with tacit support of EU xenophobia; all of Eastern Europe living under looming Russian imperialism; EU ally Turkey continues to ethnically cleanse Kurds with no consequences; El Salvador becomes a police state dystopia; Mexico, Ecuador, Haiti, Jamaica murder rates lowkey comparable to warzones; AfD polling at near-NSDAP levels; massacre in Sudan; massacre in Iran; Trump declares himself president of Venezuela and announces Greenland takeover; ecological polycrisis accelerates in the background, ignored by State and capital)

        techies: ok but let’s talk about what really matters: coding. programming is our weapon, knowledge is our shield. cryptography is the revolution…

  • V0ldek@awful.systems · 19 points · 2 days ago

    It has happened. Post your wildest Scott Adams take here to pay respects to one of the dumbest posters of all time.

    I’ll start with this gem

    • mirrorwitch@awful.systems · 9 points · 2 days ago

      sorry Scott you just lacked the experience to appreciate the nuances, sissy hypno enjoyers will continue to take their brainwashing organic and artisanally crafted by skilled dommes

      • corbin@awful.systems · 9 points · 2 days ago

        There was a Dilbert TV show. Because it wasn’t written wholly by Adams, it was funny and engaging, with character development and a critical eye toward business management, and it treated minorities like Alice and Asok with a modicum of dignity. While it might have been good compared to the original comic strip, it wasn’t good TV or even good animation. There wasn’t even a plot until the second season. It originally ran on UPN; when they dropped it, Adams accused UPN of pandering to African-Americans. (I watched it as reruns on Adult Swim.) I want to point out the episodes written by Adams alone:

        1. An MLM hypnotizes people into following a cult led by Wally
        2. Dilbert and a security guard play prince-and-the-pauper

        That’s it! He usually wasn’t allowed to write alone. I’m not sure if we’ll ever have an easier man to psychoanalyze. He was very interested in the power differential between laborers and managers because he always wanted more power. He put his hypnokink out in the open. He told us that he was Dilbert but he was actually the PHB.

        Bonus sneer: Click on Asok’s name; Adams put this character through literal multiple hells for some reason. I wonder how he felt about the real-world friend who inspired Asok.

        Edit: This was supposed to be posted one level higher. I’m not good at Lemmy.

          • froztbyte@awful.systems · 4 points · 1 day ago

            as a youth I’d acquired this at some point and I recall some fondness for some of the things, largely in the novelty sense (in that they worked “with” the desktop, had the “boss key”, etc.) - and I suspect that in turn was largely because it was my first run-in with all of those things

            later on (skipping ahead, like, ~22y or something), the more I learned about the guy, the harder I never wanted to be in a room with him

            may he rest in ever-refreshed piss

      • swlabr@awful.systems · 7 points · 2 days ago

        ok if I saw “every male encounter is implied violence” tweeted from an anonymous account I’d see it as some based feminist thing that would send me into a spiral while trying to unpack it. Luckily it’s just weird brainrot from adams here

      • Architeuthis@awful.systems · 7 points · 1 day ago

        woo takes about quantum mechanics and the power of self-affirmation

        In retrospect it’s pretty obvious this was central to his character: he couldn’t accept he got hella lucky with Dilbert happening to hit pop culture square in the zeitgeist, so he had to adjust his worldview into him being a master wizard who can bend reality to his will, and everyone else being really stupid for not doing so too, except, it turned out, Trump.

        From what I gather there’s also a lot of the rationalist “high intelligence is being able to manipulate others, bordering on mind control” ethos in his fiction writing.

    • sansruse@awful.systems · 11 points · 2 days ago

      it’s not exactly a take, but i want to shout out the dilberito, one of the dumbest products ever created

      https://en.wikipedia.org/wiki/Scott_Adams#Other

      the Dilberito was a vegetarian microwave burrito that came in flavors of Mexican, Indian, Barbecue, and Garlic & Herb. It was sold through some health food stores. Adams’s inspiration for the product was that “diet is the number one cause of health-related problems in the world. I figured I could put a dent in that problem and make some money at the same time.” He aimed to create a healthy food product that also had mass appeal, a concept he called “the blue jeans of food”.

      • Rackhir@mastodon.pnpde.social · 5 points · 1 day ago

        @sansruse @V0ldek You left out the best part! 😂

        Adams himself noted, “[t]he mineral fortification was hard to disguise, and because of the veggie and legume content, three bites of the Dilberito made you fart so hard your intestines formed a tail.”[63] The New York Times noted the burrito “could have been designed only by a food technologist or by someone who eats lunch without much thought to taste”.[64]

      • Fish Id Wardrobe@social.tchncs.de · 3 points · 1 day ago

        @sansruse @V0ldek honestly, in the list of dumb products, this is mid-tier. surely at least the juicero is dumber? literally a device that you can replace with your own hands.

        i mean, obviously the dilberito is daft. but it’s a high bar.

      • YourNetworkIsHaunted@awful.systems · 11 points · 2 days ago

        Not gonna lie, reading through the wiki article and thinking back to some of the Elbonia jokes makes it pretty clear that he always sucked as a person, which is a disappointing realization. I had hoped that he had just gone off the deep end during COVID like so many others, but the bullshit was always there, just less obvious when situated amongst all the bullshit of corporate office life he was mocking.

        • scruiser@awful.systems · 9 points · 2 days ago

          I read his comics in middle school, and in hindsight even a lot of his older comics seem crueler and uglier. Like Alice’s anger isn’t a legitimate response to the bullshit work environment she has but just haha angry woman funny.

          Also, The Dilbert Future had some bizarre stuff at the end, like Deepak Chopra manifestation quantum woo, so it makes sense in hindsight he went down the alt-right manosphere pipeline.

        • istewart@awful.systems · 9 points · 2 days ago

          It’s the exact same syndrome as Yarvin: the guy in the middle to low end of the corporate hierarchy – who, crucially, still believes in a rigid hierarchy! he’s just failed to advance in this one because reasons! – who got a lucky enough break to go full-time as an edgy, cynical outsider “truth-teller.”

          Both of these guys had at some point realized, and to some degree accepted, that they were never going to manage a leadership position in a large organization. And probably also accepted that they were misanthropic enough that they didn’t really want that anyway. I’ve been reading through JoJo’s Bizarre Adventure, and this type of dude might best be described by the guiding philosophy of the cowboy villain Hol Horse: “Why be #1 when you can be #2?”

        • V0ldek@awful.systems · 7 points · 2 days ago

          I had hoped that he had just gone off the deep end during COVID like so many others

          If COVID made you a bad person – it didn’t, you were always bad and just needed a gentle push.

          Like unless something really traumatic happened – a family member died, you were a frontline worker and broke from stress – then no, I’m sorry, a financially secure white guy going apeshit from COVID is not a turn, it’s just a mask-off moment

      • V0ldek@awful.systems · 4 points · 2 days ago

        The New York Times noted the burrito “could have been designed only by a food technologist or by someone who eats lunch without much thought to taste”.

        Jesus christ that’s a murder