most of the time you’ll be talking to a bot there without even realizing. they’re gonna feed you products and ads interwoven into conversations, and the AI can be controlled so its output reflects corporate interests. advertisers are gonna be able to buy access and run campaigns. based on their input, the AI can generate thousands of comments and posts, all to support their corporate agenda.

for example you can set it to hate a public figure and force negative commentary into conversations all over the site. you can set it to praise and recommend your latest product. like when a pharma company has a new pill out, they’ll be able to target self-help subs and flood them with fake anecdotes and user testimony that the new pill solves all your problems and you should check it out.

the only real humans you’ll find there are the shills that run the place, and the poor suckers that fall for the scam.

it’s gonna be a shithole.

  • EnglishMobster@kbin.social · 100 points · 1 year ago

    This is already happening.

    Bots are being used to astroturf the protests on Reddit. You can see at the bottom how this so-called “user” responds with “as an AI language program…”

        • Empyreal@kbin.social · 1 point · 1 year ago

          Or it’s another form of a human-monitored bot account. Those have existed for years.

          Or it’s just another bot response. I’ve had arguments with bots that I banned from my subreddit before. Some of their response mechanisms are quite creative.

    • genoxidedev1@kbin.social · 33 points · 1 year ago

      I never fully trust users with automated usernames, and this just validates my paranoia.

      Then again someone who calls subreddits “subReddit” is automagically a bot in my eyes anyways.

      • JunkMilesDavis@kbin.social · 21 points · 1 year ago

        Glad it wasn’t just me. It wasn’t often I paid attention to usernames on the big subs, but it seemed like at some point they were absolutely flooded with “Adjective_Noun_1234” users, and I couldn’t stop seeing it once I noticed. Those and the comment-reposting bots (which probably won’t be called out by other bots anymore without a usable API) made me wonder how many actual humans I was interacting with.

        • Anomander@kbin.social · 14 points · 1 year ago

          There were also some very good and valid reasons why real people wound up with those usernames - mainly, that the signup process (from the app, I think? maybe also in new Reddit?) both downplayed, and obstructed changing, the default username - and instead led the user to believe that only the “display name” selected later would appear to other users on the site.

          Completely omitting the fact that anyone on old reddit or accessing through an app would only see the username, as “display names” don’t seem to have ever been served via the API.

          Many of those users had no clue that what people were seeing attached to their comments or submissions was “extravagant_mustard_924” and not “Cool Dude Brian” or whatever they’d put in as their display name. They were led to believe that the latter was all that would display, and that the default account name would only determine what they typed into the top box while logging in.

        • genoxidedev1@kbin.social · 8 points · 1 year ago

          It would have majorly helped Reddit, or at least the user experience on Reddit, if they had just disabled API access for all but a select few bots (like AutoModerator, for example).

          Also, on the NSFW side of Reddit, those automated-username “users” are the ones spamming their (or someone’s) OF on every NSFW subreddit, even ones unrelated to the content they’re posting. Or so a friend told me, of course.

    • Arotrios@kbin.social · 9 points · 1 year ago

      Holy fucking shit I’m dying. That’s fucking hilarious.

      I now want to make a bot that detects bots, grades their responses as 0% - 100% bot, posts the bottage score, and if it determines bottage, engages the other bot in endless conversation until it melts down from confusion.

      We can live stream the battles. We’ll call the show Babblebots.

      Any devs interested?

    • riktor@kbin.social · 2 points · 1 year ago

      Yeah, I’ve replied to a post here too about bots taking over.
      I used ChatGPT to “reply to the post as if you were a robot”.
      It made for a pretty funny response, and then people were asking if I was a bot.
      Who knows, maybe I am.

    • desudesudesu@kbin.social · 2 points · 1 year ago

      anyone else remember how historically youtube comments were always pure garbage? i wonder if that was just a very primitive a.i. spamming posts on popular videos?

      • Jarfil@kbin.social · 1 point · 1 year ago

        They still are. That’s just “average and below” humans commenting.
        Or as a park ranger once put it: “there is a large overlap between the smartest bears and the dumbest humans”

      • pollodiabolo@kbin.social · 27 points · 1 year ago

        It’s feasible. Highly profitable. Only a matter of time until someone does it. The only reason not to do it is if your morals stop you, and u/spez has no morals.

        What’s happening right now is that the smart users are leaving the platform. Makes perfect sense: they aren’t needed anymore; in fact, they’d be in the way of the scam running smoothly, so you want them gone. Reddit is acting exactly like it no longer needs contributors. And for some reason, that doesn’t bother them? There’s a reason it doesn’t bother them, and it’s the same reason people can’t delete their history.

    • dismalnow@kbin.social · 16 points · 1 year ago

      And it’s not really a hot take.

      If I could have this thought independently, it’s probably already a common view.

      Reddit’s dying… slowly and painfully. The decline will go on for years, into an endgame of mostly automoderated, bot-driven content.

      Forcing those who remain to use a substandard app inhibits human interaction with the platform even further.

      All you’re left with is content addicts, trolls, ads, dregs from the darkest corners, and bots that feed them.

      • TheRazorX@kbin.social · 8 points · 1 year ago

        Another stealth benefit to Reddit of all this API crap is that it’ll be much harder to tell, since most of the tools people used to analyze accounts won’t work anymore. Keep in mind that Reddit started out by inflating its own user numbers.

  • hardypart@feddit.de · 38 points · 1 year ago

    I actually think this is the fate of the entire corporate-driven part of the internet (so basically 95% of it nowadays, lol). Non-corporate, federated platforms are the future and will remain the bastions of actual human interaction while the rest of the internet is FUBARed by large language model bots.

    • mrbubblesort@kbin.social · 19 points · 1 year ago

      Seriously asking: what makes you think the fediverse is immune to that? Eventually bots will get good enough that they’re almost indistinguishable from normal users, so how can we keep them out?

      • rastilin@kbin.social · 15 points · 1 year ago

        There are a number of options, including a chain of trust where you only see comments from someone who’s been verified by someone who’s been verified by someone (and so on) who’s been verified by an actual real human that you’ve met in person. We could also charge per post, which would rapidly drive up the cost of a botnet (as well as trim down the number of two-word derails).
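
        Rough sketch of the chain-of-trust idea, for the curious: every account records who it has vouched for, and you only trust accounts reachable from people you’ve verified yourself. The names, data structure, and depth limit here are all made up for illustration:

        ```python
        # Chain-of-trust sketch: show a comment only if its author can be
        # traced back, through vouches, to someone you've verified in person.
        from collections import deque

        # hypothetical data: account -> accounts it has vouched for
        vouches = {
            "alice": {"bob"},    # alice verified bob in person
            "bob": {"carol"},
            "carol": {"dave"},
        }

        def is_trusted(account, roots, max_depth=4):
            """BFS outward from the accounts you trust directly; True if
            `account` is reachable within max_depth hops of vouching."""
            queue = deque((root, 0) for root in roots)
            seen = set(roots)
            while queue:
                current, depth = queue.popleft()
                if current == account:
                    return True
                if depth < max_depth:
                    for nxt in vouches.get(current, ()):
                        if nxt not in seen:
                            seen.add(nxt)
                            queue.append((nxt, depth + 1))
            return False

        print(is_trusted("dave", roots={"alice"}))     # True: alice -> bob -> carol -> dave
        print(is_trusted("mallory", roots={"alice"}))  # False: nobody vouched for mallory
        ```

        The depth limit matters: the longer the chain, the cheaper it is for one bad vouch to let a whole botnet in.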

        • BraveSirZaphod@kbin.social · 2 points · 1 year ago

          I’m not sure how reliable chains of trust would be. There’s a pretty obvious financial incentive for someone to simply lie and vouch for a bot, etc. But in general, I think some kind of network of trustworthiness or verification as a real human will eventually be necessary. I could see PGP and the like being useful.

        • archomrade [he/him]@midwest.social · 0 points · 1 year ago

          “charge per post”

          That part kind of worries me. Are you proposing charging users to participate in the fediverse? It seems like it would exclude a lot of people who can’t afford to spend money on social media…

          • riskable@kbin.social · 2 points · 1 year ago

            Listen here, you! I paid good money for this here comment so you’re gonna read it, alright‽

            <Brought to you by FUBAR, a corporation with huge pockets that can afford to sway opinion with lots of carefully placed bot comments>

          • rastilin@kbin.social · 1 point · 1 year ago

            The obvious question is then “how are they helping pay for the servers they’re using?”.

            It’s not that I don’t see your point; everyone should be able to take part in a community without having to spend money. But I do find it annoying that whenever the topic of money comes up, we end up debating the hypothetical of someone with zero cents spare in their budget.

            Charging for membership worked well for Something Awful, and they only charge something like $20 for lifetime membership anyway, plus an additional fee for extra functionality. But you don’t get the money back if you get banned. Corporations would still be able to spend their way into the conversation, but it would be harder to create massive networks that just flood the real users.

            • archomrade [he/him]@midwest.social · 1 point · 1 year ago

              The nice thing about federated media is that there doesn’t need to be one instance that carries most of the traffic. The cost gets distributed among many servers and instances, and they can choose how to fund the server independently (many instance owners spend their own money to a point, then bridge the gap with donations from users).

              I’m just not sure that’s the best way to cut down on bots, IMHO.

      • apemint@kbin.social · 8 points · 1 year ago

        It’s not immune, but until the fediverse reaches critical mass, we’re safe… probably.
        After that, it will be the same whac-a-mole game we’re used to, and somehow I don’t think we’ll win.

      • CynAq@kbin.social · 4 points · 1 year ago

        Right now, we can already recognize lower-quality bots within a conversation. AI-generated “art” is already so distinctive that almost nobody misses it.

        Language is a human instinct. Our minds create it; we can use it in all sorts of ways and bend it to our will however we want.

        By the time bots become good enough to be indistinguishable online, they’ll either be actually worth talking to, or they’ll simply be more corporate shills.

        • MrsEaves@kbin.social · 3 points · 1 year ago

          I was wondering about this myself. If a bot presents a good argument that promotes discussion, is the presence of a bot automatically bad?

          I don’t love that the focus right now is on eliminating or silencing the voices of bots, because as you point out, they’re going to be indistinguishable from human voices soon, if they aren’t already. In the education space, we’re already dealing with plagiarism platforms incorrectly claiming real student work is written by ChatGPT. Reading a viewpoint you disagree with and immediately jumping to “bot!” only serves to create echo chambers.

          I think it’s better and safer long term to educate people to think critically, assume good intent, know their boundaries online (i.e., don’t keep arguing once you can’t be coherent about it and devolve into name-calling), and focus on the content and argument of the post, not who created it - unless it’s very clear from a look at their profile that they’re arguing in bad faith or astroturfing. A shitty argument won’t hold up to scrutiny, and you don’t run the risk of silencing good conversation from a human with an opposing viewpoint. Common agreement on community rules such as “no hate speech”, or limiting self-promotion/reviews/ads to certain spaces and times, is still the best and safest way to combat this, and from there it’s a matter of mods enforcing the boundaries on content, not on who they think you are.

          • Aesthesiaphilia@kbin.social · 13 points · 1 year ago

            I was wondering about this myself. If a bot presents a good argument that promotes discussion, is the presence of a bot automatically bad?

            Because bots don’t think. They exist solely to push an agenda on behalf of someone.

          • BraveSirZaphod@kbin.social · 5 points · 1 year ago

            If a bot presents a good argument that promotes discussion, is the presence of a bot automatically bad?

            If the people involved in the conversation are there because they are intending to have a conversation with people, yes, it’s automatically bad. If I want to have a conversation with a chatbot, I can happily and intentionally head over to ChatGPT etc.

            Bots are not inherently bad, but I think it’s imperative that our interactions with them are transparent and consensual.

          • Umbrias@beehaw.org · 2 points · 1 year ago

            Part of the problem is that bots unfairly empower the speech of those with the resources to dominate and dictate the conversation space; even when used in good faith, this disempowers everyone else. Even the act of seeing the same ideas over and over can sway whole zeitgeists. Now imagine what bots can do by dictating the bulk of what’s even talked about at all.

      • TheRazorX@kbin.social · 2 points · 1 year ago

        Nothing is immune, but at least on the fediverse it’s unlikely that API access will be revoked for the tools used to detect said bots.

    • taurentipper@kbin.social · 5 points · 1 year ago

      I agree with you 100%. If their motive is to make a profit for shareholders or themselves, they’re IMO inevitably going to do this.

    • livus@kbin.social · 38 points · 1 year ago

      Let’s face it, they already had it on some of the big default subs as well.

      I went through a phase of bot hunting, and it was not unusual to find comment chains of 3 bots replying to each other near the top of big threads, sometimes with a hapless human or two in the mix.

      They use snippets of comments from downthread (and usually downvote their “donor” comments to lower their visibility) so it seems kind of organic. Sometimes they use a thesaurus or something and reword it somewhat.

      What was really sad was when you’d see a human writing screeds of long arguments in reply to them.

      • HotDogFingies@kbin.social · 13 points · 1 year ago

        Excuse my ignorance, but how were you able to recognize the bots?

        The repost bots were fairly easy to spot, but I sadly never found a situation like the one you’re describing. I don’t use reddit anymore, but the information may be useful elsewhere.

        • mrbubblesort@kbin.social · 13 points · 1 year ago

          Not the guy you were asking, but the ones I found were blatantly obvious because they would copy and reword info specific to the user they stole it from. Like “as a conservative, I wholly support what Ron DeSantis is doing in Florida” changed to “as an unwed teen-aged mother …”, that kind of thing. Eventually, though, the bots are gonna get too good to spot, I bet.

        • livus@kbin.social · 12 points · 1 year ago

          It’s a bit like finding a single thread and unravelling it.
          I used to get dozens of these things banned a day; there were a lot of us bot hunters reporting bots.

          They sometimes sound “off”, stop mid-sentence, reply to people as if they think those people are the OP, reply as if they themselves are the OP, or post 💯 by itself. Or they have a username that fits a recent bot pattern (e.g. appending “rp” to existing usernames).

          If you see one slip up once, then looking at its other comments will often lead you to new bots, simply because they are all attracted to the same positions (prominent, but a few comments deep).

          Certain subs like AITA and r/memes are more prone to them, so I would go there for easy leads.

          Also, if you check its actual submissions, a karma-laden bot will often repost hobby content, then a second bot will come and claim to have bought a t-shirt or mug with that content and post a malicious link, and a third bot will pose as another redditor telling the second bot “thanks, I just ordered one”. Following those bots leads you to even more bots, and so on.
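
          Most of these tells are mechanical enough that you could script a crude first pass to rank leads for a human to check. A toy sketch, nothing more; the patterns and weights are invented, not a real detector:

          ```python
          # Toy first-pass bot heuristics based on the tells above.
          import re

          DEFAULT_NAME = re.compile(r"^[A-Za-z]+_[A-Za-z]+_?\d{2,4}$")  # Adjective_Noun_1234

          def bottage_score(username, comments):
              """Rough 0.0-1.0 guess at how bot-like an account looks."""
              score = 0.0
              if DEFAULT_NAME.match(username):
                  score += 0.25                 # auto-generated default username
              if username.endswith("rp"):
                  score += 0.25                 # "rp" appended to an existing username
              for c in comments:
                  text = c.strip()
                  if text == "💯":
                      score += 0.2              # emoji-only filler comment
                  elif text and text[-1] not in ".!?\"'":
                      score += 0.05             # trails off mid-sentence
              return min(score, 1.0)

          print(bottage_score("Extravagant_Mustard_924", ["💯", "so true"]))  # 0.5
          ```

          It would misfire constantly on its own; it’s only good for surfacing leads for a human to eyeball.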

          @XiELEd copying you in here.

          • rastilin@kbin.social · 2 points · 1 year ago

            It makes you wonder whether a ChatGPT-style bot could be automated to flag all these accounts. I’m sure that Reddit could have tagged and deleted the lot of them if they wanted to.

            • livus@kbin.social · 2 points · 1 year ago

              There must be such a lot of them. Accounts get sold on third-party websites.

        • YouveCatToBeKittenMe@kbin.social · 9 points · 1 year ago

          To add to what other people said: As a casual user who didn’t go deliberately looking for bots, I mostly caught them when they posted a comment that was a complete non sequitur to the comment they replied to, like it was posted in the wrong thread. Which, well, is because it was: they were copied from elsewhere in the comment section and randomly posted as replies to a more prominent thread. Ctrl+F came in very handy there. (They do sometimes reword things, but generally only a couple of words, so searching for bits and pieces of their comment still usually turns up results.)

          Also, the bot comments I caught were usually just a line or two, not entire paragraphs, even if they were copied from a longer comment.
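
          For what it’s worth, that ctrl+F trick is easy to automate with nothing but the standard library. A rough sketch (the 0.85 similarity cutoff is an arbitrary guess):

          ```python
          # Flag comments that are near-copies of an earlier comment
          # in the same thread, rewording included.
          from difflib import SequenceMatcher

          def find_copied_comments(comments, cutoff=0.85):
              """Yield (i, j, ratio) where comment j reads like a reworded copy of comment i."""
              for j in range(len(comments)):
                  for i in range(j):
                      ratio = SequenceMatcher(None, comments[i].lower(), comments[j].lower()).ratio()
                      if ratio >= cutoff:
                          yield i, j, ratio

          thread = [
              "I can't believe they actually shipped this update.",
              "Totally unrelated hot take about sandwiches.",
              "I can not believe they actually shipped this update!",  # reworded copy of 0
          ]
          for i, j, ratio in find_copied_comments(thread):
              print(f"comment {j} looks like a copy of comment {i} (similarity {ratio:.2f})")
          ```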

        • Aesthesiaphilia@kbin.social · 4 points · 1 year ago

          For the past year or so, they’ve been in every single thread with more than 50 comments. If you expand the comments and do a little ctrl+f searching, you’ll see how they copy comments from users, repost them, and have their fellow bots upvote them for visibility. Look at the timestamps on the posts.

      • XiELEd@kbin.social · 10 points · 1 year ago

        Considering most of reddit’s comments are either one-liners or jokes, it’s not that hard to get upvotes as a bot… but how’d you spot one? I’m afraid I never learned the ability to distinguish between bot and human.

          • livus@kbin.social · 4 points · 1 year ago

            No, it was nothing like that; I’m talking about obvious bots.

            Not sure what it’s been like since the blackout, but it used to be that if you reported a bot, its account was normally deleted by an admin within about 20 minutes.

        • Aesthesiaphilia@kbin.social · 2 points · 1 year ago

          Expand the comments and do a little ctrl+f searching. The bots tend to copy a comment, post it as their own (verbatim or slightly reworded), then have their fellow bots downvote the original and upvote the bot comment. In a fresh repost of an old thread, you can also see that the top few comments are exactly the same as in the previous repost.

      • DarienGS@kbin.social · 5 points · 1 year ago

        What was really sad was when you’d see a human writing screeds of long arguments in reply to them.

        To be fair, when you’re debating online your audience is vastly bigger than just the person you’re directly responding to.

        • livus@kbin.social · 2 points · 1 year ago

          That is a good point.

          I guess the times I had in mind were when the human had obviously become upset, often because a bot had repurposed a reply in a way that made it seem to be contributing a horrible point of view, so it wasn’t a real debate.

    • umbraroze@kbin.social · 11 points · 1 year ago

      SubredditSimulator was based on older generative algorithms (simple Markov chains), so everyone could fairly easily tell that its output was rubbish. When SubredditSimulator got shut down, someone made a new one based on GPT-2 (I think), and everyone was like “OK, this is getting harder to distinguish from real people”.

      I’m betting someone has made even more advanced bots by now. I’m betting someone’s also not concerned about telling other users upfront that they’re bots, and isn’t confining them to specific subs. Now, the only reason I’m not accusing Reddit Inc themselves of building these bots is that they aren’t exactly a bastion of software engineering excellence; the site barely works as it is.

  • Andreas@feddit.dk · 21 points · 1 year ago

    Reddit has been that way for a long time, after it lost the reputation of “niche forum for tech-obsessed weirdos” and became the internet’s general hub for discussion. The default subreddits are severely astroturfed by marketing and political campaigning groups, and Reddit turns a blind eye to it as long as it’s a paid partnership. There was one obvious case where bots in /r/politics accidentally targeted an AutoModerator thread instead of a candidate’s promotion thread and filled it with praise for that candidate.

    • Maxcoffee@kbin.social · 2 points · 1 year ago

      I see something similar in a lot of tech-related threads too.

      Just check out posts and comments about Corsair and AMD in particular. There is often no room for logic, facts or debate around their products on Reddit. Rather, threads feel like you’re stuck in a marketing promo event where everyone feels the products are great and fantastic and can do no wrong. It’s eerily like you’re seeing a bunch of bots or paid shill accounts all talking to each other.

      • PabloDiscobar@kbin.social · 4 points · 1 year ago

        I’ve discussed things in the AMD sub, and it’s completely filled with consumers. They have no clue about electronics or development. It could be malevolence, but it’s becoming harder and harder to tell it apart from ignorance.

    • PabloDiscobar@kbin.social · 1 point · 1 year ago

      There was one obvious case where bots in /r/politics accidentally targeted an AutoModerator thread instead of a candidate’s promotion thread and filled it with praise for that candidate.

      Any source for this? I’d like to have a look.

      • Andreas@feddit.dk · 2 points · 1 year ago

        Nope, sorry. Just a memory of a Reddit thread with very out-of-context comments. Ironically, when I tried to search for documentation of the thread, DuckDuckGo returned a lot of research papers analyzing bot content on Reddit going back to 2015, so there’s at least proof that botting on Reddit goes way back.

    • JasSmith@kbin.social · 13 points · 1 year ago

      We control the experience here to a greater degree. If an instance decides to lean into AI content, we can leave for another, and others can defederate from it (if desired). Further, bots will be far more transparent. Reddit can (and likely does) offer its preferred bots exemptions from automatic filtering, probably promoting their content using some opaque algorithm. Such bots will receive no preferential treatment across the Fediverse.

      • eleitl@lemmy.world · 4 points · 1 year ago

        The most thorough option is running your own instance. Most won’t do this, but you can.

        When Reddit was open source you could set up your own instance, but the code was unmaintained and there was no federation.

  • Hypx@kbin.social · 11 points · 1 year ago

    Ever heard of the Dead Internet Theory? It’s the idea that bots have taken over the Internet and there are few real humans left. For the whole of the Internet, this is a conspiracy theory. But for any individual platform, it is a totally plausible outcome. Reddit could become one of those bot networks that just pretends to be a social media platform. Twitter is on track for that too.

    • cazzodicristo@kbin.social (OP) · 6 points · 1 year ago

      it’s bleak. can I say… what they want is for you to be half-asleep, hooked on drugs, forever hating each other. they want this. it’s the ideal state for anyone that wields power in this world.

    • DarkenLM@kbin.social · 4 points · 1 year ago

      Like all others before it. Tay met the same fate, and the only reason ChatGPT hasn’t is that its filters have a bit more quality than the rest.

  • style99@kbin.social · 6 points · 1 year ago

    The larger subs are already becoming a war between different groups of spammers. The smaller subs can get by for now, but when the war in the larger subs escalates to the point that spammers need to branch out, they’ll likely invade the smaller subs as well.

  • esc27@kbin.social · 4 points · 1 year ago

    We need better solutions for proving identity online. Email, CAPTCHAs, etc. are insufficient. I imagine a system similar to the certificate authority system, where you prove your identity to one of many trusted identity providers, and that provider then vouches for you when you sign up for other services (while also protecting your anonymity).
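
    A minimal sketch of what that vouching could look like, assuming the provider signs a random pseudonym instead of your real name. Illustrative only (it uses the Python `cryptography` package; a real system would also need expiry, revocation, and unlinkability between services):

    ```python
    # Identity-provider sketch: sign a pseudonymous attestation once,
    # let any service verify it without learning who the person is.
    import os
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # -- at the identity provider, after verifying a person offline --
    provider_key = Ed25519PrivateKey.generate()
    provider_pub = provider_key.public_key()   # published, like a CA root cert

    pseudonym = os.urandom(16).hex()           # random handle, not a real name
    attestation = f"verified-human:{pseudonym}".encode()
    signature = provider_key.sign(attestation)

    # -- at the service the person signs up for --
    try:
        provider_pub.verify(signature, attestation)
        print(f"welcome, {pseudonym}: a provider we trust says you're human")
    except InvalidSignature:
        print("attestation rejected")
    ```

    The service only ever sees the pseudonym and the provider’s signature, never the underlying identity.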

    • fiah@discuss.tchncs.de · 2 points · 1 year ago

      the protecting-your-anonymity part would be very hard though; such a system has a high risk of eventually enabling a dystopian future where your every online move is monitored by big brother

      I was thinking that a mandatory donation to a charity could work. Like a simple $5 donation per account to any of a (carefully curated) list of charities. It would dramatically throttle new account creation / app adoption, of course, which is bad, but if a potential user wants in badly enough, they’d be OK with donating $5 to their favorite charity. It would reduce the number of bots / trolls / Sybils, and it could work in a decentralized manner (imagine a lemmy instance doing this)

      • BraveSirZaphod@kbin.social · 1 point · 1 year ago

        There will always be a trade-off between anonymity and authenticity. I could see a future where some web services will only interact with users that present a verified certificate that establishes them as a real person, even if it’s not necessarily tied to your real-world identity. Some could require a cert that is tied to your actual identity. Some others could allow general anonymous accounts, though they would struggle with spam and AI bots. But ultimately, I think people are going to come to value some amount of guarantee that they’re interacting with actual people.

    • The Cuuuuube@beehaw.org · 1 point · 1 year ago

      In a seedy back alley bar, an identity broker checks his bank accounts as a man enters the front door. In his pocket, the man entering the bar carries a uSD card. He sits down across from the broker and sets the card on the vinyl table-top.

      “PGP or minisign,” asks the broker, without looking up from his data pad.

      “PGP,” responds the man, looking over his shoulder, back at the door, nervously.

      The broker looks up, assesses the man, and says, “These older protocols cost extra, you know, you don’t look like you have the credits.”

      “Look, I just need to prove I’m human by the end of tonight, or else The Outlaws are going to put a tire iron between my eyes for not being able to get them the goods they’ve asked for.”

      “The problem,” the broker says, before taking a long pull from his tobacco nebulizer, “is that the AI bots are getting harder and harder to tell from the humans in this city. Technology has come a long way since Greenville became a coastal town.”

      The man looks back at the broker, realization dawning on him about what’s about to happen. The gun that usually lived its days taped under the booth is now pointed at the man. “Typically, I wouldn’t do this, but I don’t like The Outlaws. I’m not going to lose business over that, though. But I work for The Bastards mostly. I know you don’t work for them directly. You got mixed up in all this, didn’t you? Nevertheless. In this one case, the cruelty is the point.”

      Most of the inhabitants of the bar jump as the pistol cracks, but make a point not to look over at the booth in the corner.

      “Hmm… Yes… Blood. I should have your identity confirmed within the hour. I would wish you luck on your purchase, but frankly I wouldn’t mind if you failed,” says the broker, sliding the uSD card into a slot just to the side of his right eye.

    • mac12m99 · 1 point · 1 year ago

      This means only one account per service? Even if that’s the case, nothing stops spammers from paying people to post what they want (AI-generated or not). Or big corporations can force their employees to do it for free (or hire people for this exact purpose). Making it illegal won’t stop anything; if it’s easy, a lot of people will do it.

  • AnonymousLlama@kbin.social · 3 points · 1 year ago

    Never underestimate the power of negative energy. Plenty of people also flock to dump on things they don’t like; it’s a great way to drive engagement (albeit shitty engagement)