This might be a really stupid question, but why is Discover updating to a lower version? Is there anywhere I can read up on why this is the case?

P.S.: Yes, I absolutely could have googled this, but Lemmy is about more than just shitposts and memes, imo. Besides, asking some rather noobish questions will make us show up on Google, btw.

  • isgleas@lemmy.ml · 4 months ago

    It is called “downgrading”, and it is not uncommon for some packages to be downgraded when updating/upgrading a system, for several reasons.

    • leopold@lemmy.kde.social · 4 months ago

      No. This is just a thing Discover does. Unless nearly every update I’ve done for every Flatpak I have installed on my Steam Deck has actually been a downgrade.

      • Zamundaaa@discuss.tchncs.de · 4 months ago

        No, it’s just a (long fixed!) bug. In the case of the Deck, the next version of SteamOS will ship the fix soon… In the case of Debian, they don’t ship our bugfix releases, so it’ll be stuck with this until Debian 13 :/

    • catloaf@lemm.ee · 4 months ago

      Under what circumstances? I don’t think I’ve ever seen a package downgraded during an upgrade.

      • isgleas@lemmy.ml · 4 months ago

        I somehow missed that this was a Flatpak updated via Discover. Granted, this may not be usual in distros with a traditional update model, but downgraded packages can show up in rolling distros, in distros with overlapping minor versions, or when 3rd-party repos provide packages that conflict with the distro’s own.

        I offer my system as an example:

        The following product is going to be upgraded:
          openSUSE Tumbleweed  20240211-0 -> 20240313-0
        
        The following 14 packages are going to be downgraded:
          ghc-binary ghc-containers ghc-deepseq ghc-directory ghc-exceptions ghc-mtl ghc-parsec ghc-pretty ghc-process ghc-stm ghc-template-haskell ghc-text ghc-time ghc-transformers
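
        If you want to dig into why zypper picked an older build, you can check which repositories provide a package and at which versions. A minimal sketch, assuming an openSUSE/zypper system like the one above; ghc-binary is just one package taken from the list:

          # list every available version of the package and the repository it comes from
          zypper search --details ghc-binary

          # explicitly pin an older version (a deliberate downgrade), should you ever need to
          zypper install --oldpackage ghc-binary=<version>

        The repository column usually makes it obvious when a downgrade comes from a repo change or an overlapping 3rd-party repo rather than from the distro itself.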
        
  • themoonisacheese@sh.itjust.works · 4 months ago

    Pretty sure this is a bug in either Discover or Flatpak. My guess is that Flatpak has the two versions it feeds Discover swapped, so they appear reversed, but in reality the update will be fine.
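
    One way to sanity-check that from the command line is to compare the installed version with what the remote actually offers. A rough sketch, assuming the usual flathub remote and using org.videolan.VLC purely as an example app ID:

      # version of the app as currently installed
      flatpak info org.videolan.VLC

      # version the remote currently offers
      flatpak remote-info flathub org.videolan.VLC

    If the remote version is the newer of the two, the numbers in Discover are just being shown the wrong way around.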

  • silly goose meekah@lemmy.world · 4 months ago

    Edit: turns out I’m wrong. Google does index Lemmy pages, it’s just not easy to find them through Google.

    I don’t think it’s possible to find any Lemmy posts through Google.

      • Deebster@infosec.pub · 4 months ago

        I just tried searching “element lemmy” and got the article “Lemmy: Fans call for periodic table element to be named after Motörhead frontman”.

        Whereas “element reddit” gives /r/elementchat/

        Lemmy is indexed on Google, as using the site: operator will show; e.g. “rust site:programming.dev” gives sensible results. But there’s no way to search across all of Lemmy. Well, not with Google anyway (Kagi has a Fediverse lens that works fairly well).

        • gila@lemm.ee · 4 months ago

          I couldn’t find it in my comment history, but months ago I saw a thread where someone was lamenting that, after migrating from Reddit, where they used to just google “episode ### discussion” for the show they were watching and find a corresponding Reddit thread, the same approach wasn’t working for them on Lemmy. Someone else pointed out that it might be because Google personalises some of the search results now, so I tried their example query and the top link was the post I was commenting on. It had already been indexed as the most relevant result about an hour after the original post.

    • haui@lemmy.giftedmc.com (OP) · 4 months ago

      Nah, we already are on Google, but it depends on a lot of factors. A Lemmy frontend is just another webpage, so Google will crawl it if you allow it. If an instance specifically disallows crawling, or has an incorrect/unfamiliar sitemap, it might not work.
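
      You can check that from the outside, too. A quick sketch, with lemmy.world used purely as an example instance (the sitemap path is an assumption, not every instance necessarily serves one there):

        # see whether the instance tells crawlers to stay away
        curl -s https://lemmy.world/robots.txt

        # check whether a sitemap is served at the common path (assumed location)
        curl -sI https://lemmy.world/sitemap.xml

      A Disallow rule covering the post/comment paths, or a missing sitemap, would explain why a particular instance doesn’t show up in results.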

      • silly goose meekah@lemmy.world · 4 months ago

        Hmm, maybe that’s the case for lemmy.world. I wasn’t able to find anything from my comment or post history, even though I copied the text exactly into the search.

        Edit: I just had the idea to put quotation marks around the text I copied, to search for that exact string, and now I found it. Yeah, it really just seems like Lemmy pages are not very popular results, so Google pushes them all the way to the bottom.

        • haui@lemmy.giftedmc.com (OP) · 4 months ago

          Exactly. The reason is that Google favors pages that game the algorithm over actual content. That, and popularity.

        • Chewy@discuss.tchncs.de · 4 months ago

          The major difference between Lemmy and Reddit is that there are many instances for search engines to crawl, compared to a single reddit.com. They likely treat each instance separately, which leads to a lot of duplicate content, and most of Lemmy isn’t search-engine optimized.

          Sadly, I don’t see a better way to handle it than for search engines to be optimized for this kind of federated platform. It’s not obvious from the outside which instance is the preferred one to show to a user.

          I’ve had some luck finding content on Lemmy by forcing a specific instance using site:lemmy.instance.domain, but whether the operator is respected depends on the search engine.