Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

Last week’s thread

(Semi-obligatory thanks to @dgerard for starting this)

    • YourNetworkIsHaunted@awful.systems · 5 days ago

      You know, when Samuel L. Jackson’s character in Kingsman decided that the best approach to climate change was to kill billions of poor people rather than ask the rich to give up any privileges, it was more blatantly evil but appreciably less dumb than this. Very similar wavelength, though.

    • fnix@awful.systems · 6 days ago

      Aren’t you supposed to try to hide your psychopathic instincts? I wonder if he’s knowingly bullshitting or if he’s truly gotten high on his own supply.

    • YourNetworkIsHaunted@awful.systems · 5 days ago

      I’ve definitely seen this kind of meme format, to be fair. But generally speaking, I think we should make a rule that in order to be considered satire or a joke, something should actually need to be funny.

      Viral internet celebrity podcaster gives an in-depth Marxist economic analysis? Funny

      Viral internet celebrity podcaster breaks down historical context of Game of Thrones? Funny

      Viral internet celebrity podcaster says the same VPN marketing spiel as every other podcaster? Not. Funny.

    • froztbyte@awful.systems · 6 days ago

      the various full-tilt levels of corporate insanity from the last 10y or so are going to make remarkable case studies in the coming years

  • froztbyte@awful.systems · 6 days ago

    so, I’ve always thought that blind’s “we’ll verify your presence by sending you shit on your corp mail” (which, y’know, mail logs etc…) is kinda a fucking awful idea. but!

    this is remarkably fucking unhinged:

  • o7___o7@awful.systems · 5 days ago

    Folks, I need some expert advice. Thanks in advance!

    Our NSF grant reviews came in (on Saturday), and two of the four reviews (an Excellent AND a Fair, lol) have confabulations and placeholder brackets like [insert text here] that indicate they were LLM-generated by lazy people. Just absolutely gutted. It’s like an alien reviewed a version of our grant application from a parallel dimension.

    Who do I need to contact to get eyes on the situation, other than the program director? We get to simmer all day today since it was released on the weekend, so at least I have an excuse to slow down and be thoughtful.

  • YourNetworkIsHaunted@awful.systems · 7 days ago

    Today in “Promptfondler fucks around and finds out.”

    So I’m guessing what happened here is that the statistically average terminal session doesn’t end after opening an SSH connection, and the LLM doesn’t actually understand what it’s doing or when to stop, especially when it’s being prompted with the output of whatever it last commanded.
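
    Roughly speaking, the failure mode looks like the loop below (a minimal sketch, not Shlegeris’s actual agent; ask_llm stands in for whatever completion API he calls):

        # Minimal sketch of the failure mode (hypothetical, not the real agent code):
        # each model reply is executed as a shell command, the output is appended
        # to the prompt for the next turn, and nothing ever tells the loop to stop.
        import subprocess
        from typing import Callable

        def run_agent(ask_llm: Callable[[str], str], task: str) -> None:
            history = task
            while True:  # no termination condition anywhere
                command = ask_llm(history)  # model returns the next shell command
                result = subprocess.run(command, shell=True, capture_output=True, text=True)
                history += f"\n$ {command}\n{result.stdout}{result.stderr}"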

    Shlegeris said he uses his AI agent all the time for basic system administration tasks that he doesn’t remember how to do on his own, such as installing certain bits of software and configuring security settings.

    Emphasis added.

    • khalid_salad@awful.systems · 7 days ago

      “I only had this problem because I was very reckless,” he continued, “partially because I think it’s interesting to explore the potential downsides of this type of automation. If I had given better instructions to my agent, e.g. telling it ‘when you’ve finished the task you were assigned, stop taking actions,’ I wouldn’t have had this problem.”

      just instruct it “be sentient” and you’re good, why don’t these tech CEOs understand the full potential of this limitless technology?

    • froztbyte@awful.systems · 7 days ago

      so I snipped the prompt from the log, and:

      ❯ pbpaste | wc -c
      2063

      wow, so efficient! I’m so glad that we have this wonderful new technology where you can write 2kb of text to send to an api to spend massive amounts of compute to get back an operation for doing the irredeemably difficult systems task of initiating an ssh connection

      these fucking people

      • froztbyte@awful.systems · 7 days ago

        Assistant: I apologize for the confusion. It seems that the 192.168.1.0/24 subnet is not the correct one for your network. Let’s try to determine your network configuration. We can do this by checking your IP address and subnet mask:

        there are multiple really bad and dumb things in that log, but this really made me lol (the IPs in question are definitely in that subnet)
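
        For anyone who wants to check at home, subnet membership is a one-liner with Python’s ipaddress module (the addresses below are stand-ins; the log’s actual IPs aren’t reproduced here):

            # hypothetical addresses, purely for illustration
            import ipaddress
            print(ipaddress.ip_address("192.168.1.20") in ipaddress.ip_network("192.168.1.0/24"))  # prints: True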

        if it were me, I’d be fucking embarrassed to publish something like this as anything but a talk in the spirit of wat. but the promptfondlers don’t seem to have that awareness

          • froztbyte@awful.systems · 6 days ago

            it’s a classic

            similarly, Mickens talks. if you haven’t ever seen ‘em, that’s your next todo

        • Sailor Sega Saturn@awful.systems · 7 days ago

          But playing spicy mad-libs with your personal computers for lols is critical AI safety research! This advances the state of the art of copy pasting terminal commands without understanding them!

          I also appreciated The Register throwing shade at their linux sysadmin skills:

          Yes, we recommend focusing on fixing the Grub bootloader configuration rather than a reinstall.

    • db0@lemmy.dbzer0.com · 5 days ago

      OMG. This is borderline unhinged behaviour. Yeah, let’s just give root permission to an LLM and let it go nuts in prod. What could possibly go wrong?

  • V0ldek@awful.systems · 6 days ago

    I got this AMAZING OPPORTUNITY in my inbox, because once your email appears on a single published paper you’re forever doomed to garbage like this (transcript at the end):

    Highlights:

    • Addresses me as Dr. I’m not a doctor. I checked, and apparently Dr. Muhammad Imran Qureshi indeed has a PhD and is a lecturer at Teesside University International Business School (link to profile). His recent papers include a bunch of blockchain bullshit. Teesside University appears to be a legit UK university, although I’m not sure how legit the Business School is (or how legit any Business School can be, really).
    • Tells us their research is so shit that using wisdom woodchippers actually increases their accuracy.
    • One of the features is “publication support”, so this might be one of those scams where you pay an exorbitant fee to get “published” in some sketchy non-peer-reviewed journal.
    • One of the covered AI tools is Microsoft Excel. If you were wondering if “AI” had any meaning.
    • Also, by god, are there so many different ChatGPT clones now? I haven’t heard of most of those names. I kinda hope they’re as AI as Excel is.

    I’m not sure which would be worse, this being a scam, or them legit thinking this brings value to the world and believing they’re helping anyone.

    transcript

    Email titled Revolutionize Your Research: AI-Powered Systematic Literature Review Master Class

    Online course on writing AI-Powered Systematic Literature Review

    Register Now:

    Dear Dr. [REDACTED],

    we’re reaching out because we believe our AI-Powered Systematic Review Masterclass could be a game-changer for your research. As someone who’s passionate about research writing, we know the challenges of conducting thorough and efficient systematic reviews.

    Key takeaways:

    • AI-powered prompt engineering for targeted literature searches
    • Crafting optimal research questions for AI analysis
    • Intelligent data curation to streamline your workflow
    • Leveraging AI for literature synthesis and theory development

    Join our Batch 4 and discover how AI can help you:

    • Save time by automating repetitive tasks
    • Improve accuracy with AI-driven analysis
    • Gain a competitive edge with innovative research methods

    Enrollment is now open! Don’t miss this opportunity to take your systematic review skills to the next level.

    Key Course Details:

    • Course Title: AI-Powered Systematic Literature Reviews Master Class
    • Live interaction + recording = Learning that fits your life
    • Dates: October 13, 2024, to November 3, 2024
    • Live Session Schedule: Every Sunday at 2 PM UK time (session recordings will be accessible).
    • Duration: Four Weeks
    • Platform: Zoom
    • Course Fee: GBP 100
    • Certification: Yes
    • Trainer: Dr. Muhammad Imran Qureshi

    Key features

    • Asynchronous learning
    • Video tutorials
    • Live sessions with access to recordings
    • Research paper Templates
    • Premade Prompts for Systematic Literature Review
    • Exercise Files
    • Publication support

    The teaching methodology will offer a dynamic learning experience, featuring live sessions every Saturday via Zoom for a duration of four weeks. These sessions will provide an interactive platform for engaging discussions, personalised feedback, and the opportunity to connect with both the course instructor and fellow participants. Moreover, our diverse instructional approach encompasses video tutorials, interactive engagements, and comprehensive feedback loops, ensuring a well-rounded and immersive learning experience.

    Certification

    Upon successful completion of the course, participants will receive certification from the Association of Professional Researchers and Academicians UK, validating their mastery of AI-enabled methodologies for conducting comprehensive and insightful literature reviews.

    AI tools included

    • Microsoft Excel
    • ChatGPT
    • Elicit
    • Powerdrill
    • Sciespace
    • Jenni
    • Gemni
    • Copilot
    • SCOPUS
    • Scholarcy and many more

    Register Now

  • YourNetworkIsHaunted@awful.systems · 5 days ago

    So the ongoing discourse about AI energy requirements and their impact on the world reminded me about the situation in Texas. It set me thinking about what happens when the bubble pops. In the telecom bubble of the 90s or the British rail bubble of the 1840s, a lot of actual physical infrastructure was created that outlived the unprofitable and unsustainable companies that had built it. After the bubble, this surplus infrastructure helped make the associated goods and services cheaper and more accessible as the market corrected. Investors (and there were a lot of investors) lost their shirts, but ultimately there was some actual value created once we were out of the bezzle.

    Obviously the crypto bubble will have no such benefits. It’s not like energy demand was particularly constrained outside of crypto, so any surplus electrical infrastructure will probably be shut back down (and good riddance to dirty energy). The mining hardware itself is all purpose-built ASICs that can’t actually do anything apart from mining, so it’s basically turning directly into scrap as far as I can tell.

    But the high-performance GPUs that these AI operations rely on are more general-purpose even if they’re optimized for AI workloads. The bubble is still active enough that there doesn’t appear to be much talk about it, but what kind of use might we see some of these chips and datacenters put to as the bubble burns down?

    • o7___o7@awful.systems · 11 days ago

      Actually, I wrote a microstate in a weekend using Rust.

      I’m dead. At least the Rust Evangelism Strike Force finally got to have their theocracy

  • froztbyte@awful.systems · 9 days ago

    from this post (archive)

    App developers think that’s a bogus argument. Mr. Bier told me that data he had seen from start-ups he advised suggested that contact sharing had dropped significantly since the iOS 18 changes went into effect, and that for some apps, the number of users sharing 10 or fewer contacts had increased as much as 25 percent.

    aww, does the widdle app’s business model collapse completely once it can’t harvest data? how sad

    this reinforces a suspicion that I’ve had for a while: the only reason most people put up with any of this shit is because it’s an all or nothing choice and they don’t know the full impact (because it’s intentionally obscured). the moment you give them an overt choice that makes them think about it, turns out most are actually not fine with the state of affairs

    • Ruby Jones@smutlandia.com · 10 days ago

      @blakestacey Super depressed that people were using the rubbish plagiarism machines to edit Wikipedia anyway. I don’t understand the point of contributing if you don’t think *you* have anything to contribute without that garbage.

      • Soyweiser@awful.systems · 10 days ago

        There are the weirdest people who make ‘content’ out there. For example, I saw a ‘how to start the game’ joke guide on steam, so I went to their page to block them (to see if this also blocks the guides from popping up, doesn’t seem so) and they had made hundreds of these guides, all just copy pasted shit. And there were more people doing the exact same thing. Bizarre shit. (Prob related to the thing where you can give people stickers, gamification was a mistake).

    • Resuna@ohai.social · 10 days ago

      @blakestacey

      I am disappoint.

      “The purpose of this project is not to restrict or ban the use of AI in articles, but to verify that its output is acceptable and constructive, and to fix or remove it otherwise.”

  • swlabr@awful.systems · 12 days ago

    I’m just thinking about all the reply guys that come here defending autoplag, specifically with this idea:

    “GPT is great when I want to turn a list of bullet points into an eloquent email”

    Hey, you butts, just send the bullet points! What are you, a high schooler? Nobody has time for essays, much less autoplagged slop.

    • zogwarg@awful.systems · 12 days ago

      No no no it’s fine! You get the word shuffler to deshuffle the—eloquently—shuffled paragraphs back into nice and tidy bullet points. And I have an idea! You could get an LLM to add metadata to the email to preserve the original bullet points, so the recipient LLM has extra interpolation room to choose to ignore the original list, but keep the—much more correct and eloquent, and with much better emphasis—hallucinated ones.