In many ways, Mastodon feels like rewinding the clock on social media back to the early days of Twitter and Facebook. On the consumption side, that means your home feed has no ranking algorithm (which can be disorienting at first).

Practically, it means you see only what you chose to see, and you see it in the order it was posted. You never wonder “why am I seeing this, and how do I make it go away?” Content can enter your home feed only through the handles and tags you follow, and the feed is strictly chronological.
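To make that concrete, here is a minimal sketch of what an algorithm-free home feed amounts to. It is written in Python with made-up data structures (Post, home_feed) purely for illustration; it is not Mastodon’s actual implementation. The point is that the only selection criteria are the handles and tags you follow, and the only ordering is by timestamp.

    from dataclasses import dataclass, field
    from datetime import datetime

    @dataclass
    class Post:
        author: str                      # e.g. "alice@example.social"
        created_at: datetime
        text: str
        tags: set[str] = field(default_factory=set)

    def home_feed(posts: list[Post],
                  followed_handles: set[str],
                  followed_tags: set[str]) -> list[Post]:
        """Keep only posts from followed handles or carrying followed tags,
        newest first -- no ranking, no recommendations, no injected content."""
        visible = [
            p for p in posts
            if p.author in followed_handles or p.tags & followed_tags
        ]
        # The only "algorithm" is a sort by timestamp, newest first.
        return sorted(visible, key=lambda p: p.created_at, reverse=True)

If a post is not from someone you follow and does not carry a tag you follow, there is simply no code path by which it reaches your feed.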

  • Hexagon · 1 year ago

    Wrong. The next terrible thing is mass-AI-generated propaganda and disinformation, like in the “dead internet” theory.

      • Hexagon · 1 year ago

        My bad. But I think we haven’t seen the full extent of it yet.

        • OpenStars@kbin.social · 1 year ago

          Tbf, it seems like the current “mass-AI-generated propaganda and disinformation” has actual humans behind it, i.e. state-sponsored disinformation as part of modern warfare, as opposed to sheer random BS pooped out of an algorithm designed to maximize short-term profits for someone trying to use enough buzzwords to get their algorithm bought out by a buyer dumb enough to fall for the pitch and too short-sighted to realize the wider implications… or worse yet, someone who realizes and simply does not care.

          It reminds me of the story of the USA tax-preparation software companies that intentionally went on a campaign to confuse military veterans and students (seriously!? what kind of evil mfers…!?). They got caught and were even punished and fined, but that came something like a decade later, and of course the original CEO, and the next one, etc., had long since received their fat bonus checks, leaving the company holding the bag (the liability). Thus it was “a smart move”, so long as you entirely disregard ethics. What was presented as a “free gift” to generate good PR for the company was in reality preying upon people they deemed highly trusting, or at least minimally likely to sue them… and they were correct. Now, watching interviews with these tech bros, I get the same vibe: who cares, so long as I get mine.

    • sparr@lemmy.world · 1 year ago

      Web of trust solves this problem, until people start intentionally trusting AIs as much as they do other humans, at which point it’s no longer a problem.
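For readers unfamiliar with the term, “web of trust” in that last comment refers to the general idea of deriving trust in strangers from the accounts you trust directly, with the score weakening at each hop. The sketch below illustrates that general idea in Python; the function name, decay factor, and hop limit are illustrative assumptions, not any existing fediverse feature.

    def trust_scores(trusts: dict[str, set[str]], me: str,
                     decay: float = 0.5, max_hops: int = 3) -> dict[str, float]:
        """Walk outward from `me` through directly trusted accounts,
        attenuating the score at each hop; unreached accounts get no score."""
        scores = {me: 1.0}
        frontier = {me}
        for hop in range(1, max_hops + 1):
            nxt = set()
            for account in frontier:
                for other in trusts.get(account, set()):
                    if other not in scores:
                        scores[other] = decay ** hop
                        nxt.add(other)
            frontier = nxt
        return scores

    # Example: I trust alice directly; alice trusts bob; bob trusts a bot farm.
    graph = {
        "me": {"alice@one.social"},
        "alice@one.social": {"bob@two.social"},
        "bob@two.social": {"botfarm@spam.example"},
    }
    print(trust_scores(graph, "me"))
    # {'me': 1.0, 'alice@one.social': 0.5, 'bob@two.social': 0.25, 'botfarm@spam.example': 0.125}

The commenter’s caveat is the weak link in any scheme like this: the derived scores are only as good as the direct edges, so if people hand out those edges to AI accounts as readily as to humans, the filter stops filtering.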