• just_another_person@lemmy.world · ↑218 · 5 months ago

    He’s being misquoted by the headline. He FEARS that it will make the same mistakes. Let’s be clear about why RISC-V is here in the first place: it’s an open-source hardware architecture. Anyone with enough money and willpower to fork it for their needs will do so. It’s anyone’s game still. He’s simply saying that the same type of people who took over ARM and x86 are doomed to make the same mistakes. Not that RISC-V is bad.

    • bitfucker@programming.dev · ↑2 · 5 months ago

      I’m being pedantic here, but RISC-V is not a hardware architecture in the sense of something you can send to a manufacturer and get made. It is an ISA. How you implement that ISA is up to you. Yes, there are open implementations, but I think it is important to make the distinction.

        • bitfucker@programming.dev · ↑2 · edited · 5 months ago

          So does x86. The difference is the license. Just as Intel and AMD have VERY different designs (implementations) of x86 today, so do RISC-V vendors. Any vendor can implement it however they want, but they won’t have to pay anyone for using the RISC-V ISA.

  • shaked_coffee · ↑77 · 5 months ago

    Anyone willing to summarize those mistakes here, for those who can’t watch the video rn?

    • Transient Punk@sh.itjust.works · ↑134 · edited · 5 months ago

      He doesn’t list what the mistakes will be. He said he fears that, because hardware people aren’t software people, they will make the same mistakes that x86 made, and which Arm then repeated.

      He did mention that fixing those mistakes was faster for Arm than for x86, which brings hope that fixing the mistakes in RISC-V will take even less time.

      • MonkderDritte@feddit.de · ↑28 ↓1 · edited · 5 months ago

        I think it was something to do with instruction sets? Pretty sure I read something about this months ago.

        • Hotzilla@sopuli.xyz · ↑6 · 5 months ago

          No, it was about the prediction engines that contain security vulnerabilities. The problem is that software has no control over them, because the hardware speculates ahead for performance optimization.

            • wewbull@feddit.uk · ↑1 · 5 months ago

              Prediction is a hard problem when coupled with caches. It’s relatively easy to say that no speculative instruction has any effect until it’s confirmed taken if you ignore caches. However, caches need to fetch information from memory to allow an instruction to evaluate, and rewinding a cache to its previous state on a mispredict is almost impossible. Especially when you consider that the amount of time you’re executing non-speculative code on a modern processor is very low.

            Not having predictions is consigning yourself to 1990s performance, with faster clocks.
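
              For illustration, a minimal C sketch of the kind of gadget being described (array and function names are hypothetical): the bounds check is respected architecturally, but a mispredicted branch can still execute the loads speculatively, and the cache line they pull in is exactly the state that can’t be rewound.

              ```c
              #include <stddef.h>
              #include <stdint.h>

              uint8_t small_array[16];        /* hypothetical victim data  */
              uint8_t probe_table[256 * 64];  /* hypothetical probe array  */

              uint8_t victim(size_t index) {
                  if (index < sizeof(small_array)) {       /* check can be bypassed speculatively  */
                      uint8_t value = small_array[index];  /* speculative out-of-bounds read       */
                      return probe_table[value * 64];      /* load leaves a cache footprint behind */
                  }
                  return 0;
              }
              ```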

    • SpikesOtherDog@ani.social · ↑26 ↓3 · edited · 5 months ago

      Basically, his concern is that if they don’t cooperate with software engineers, the product won’t be able to run AAA games.

      It’s more of a warning than a prediction.

            • sugar_in_your_tea@sh.itjust.works · ↑10 · 5 months ago

              Not OP, but consider using FUTO Keyboard. It’s made by the group Louis Rossmann works with, and it has offline speech to text (no sending data to Google), swipe keyboard, and completions. It’s also source-available, which isn’t as good as open source, but you could examine the code and verify their claims if you wanted to.

              I’m using it and, while it’s not perfect, it’s way better than the open source Android keyboards with swiping that I’ve tried.

              • Victor@lemmy.world · ↑8 · edited · 5 months ago

                Thanks, will try it out! I need an emoji picker though. Does it have that?

                Edit: typing with it now. It has an emoji picker. 👍

                1. I like the picker’s grouping, actually. Body parts (hands) are closer to faces.
                2. The recent emoji section doesn’t work.
                3. It doesn’t have the latest emoji set, as far as I can tell.
                4. The swiping is much more sensitive than Gboard. I’m not a fan as of yet. Maybe it’s still learning. Seems like it can’t handle the speed as well as Gboard can.
                5. Prediction suggestions are terrible so far.
                6. I don’t like that swipe delete doesn’t delete whole words.

                All in all, I don’t think I can recommend it in its current state.

                But, if you type by pressing buttons, the predictions are actually pretty good. Maybe that saves a bit of time if you’re very stationary and not on the move.

                • sugar_in_your_tea@sh.itjust.works · ↑5 · 5 months ago

                  Yeah, it’s very much alpha software, but it works surprisingly well for being in such an early state. I’m using it as my keyboard now, and it works well enough, but certainly not perfect.

                  Then again, I’m willing to deal with a lot of nonsense to avoid Google, so YMMV.

                  I hear the speech-to-text is pretty good. I haven’t tried it (I hate dictation), but maybe you could give it a whirl before you give up on it; it’s supposed to be its killer feature.

    • _NoName_@lemmy.ml · ↑13 ↓2 · 5 months ago

      Instruction creep maybe? Pretty sure I’ve also seen stuff suggesting that Torvalds is anti-speculative-execution due to its vulnerabilities, so he could also be referring to that.

      • Traister101@lemmy.today · ↑4 · 5 months ago

        Counterintuitively, more instructions are usually better. They enable you (but, let’s be honest, the compiler) to be much more specific, which usually has positive performance implications for minimal, if any, increase in binary size. Take SIMD, for example: hyper-specific math operations on large chunks of data. These instructions are extremely specialized, but when properly utilized they bring huge performance improvements.
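
        To make that concrete, here’s a minimal sketch in C using x86 SSE intrinsics (the function name and arrays are hypothetical): the vector loop retires four additions per instruction where the scalar tail does one.

        ```c
        #include <stddef.h>
        #include <immintrin.h>   /* x86 SSE intrinsics */

        /* Add two float arrays element-wise, four lanes at a time. */
        void add_arrays(float *dst, const float *a, const float *b, size_t n) {
            size_t i = 0;
            for (; i + 4 <= n; i += 4) {
                __m128 va = _mm_loadu_ps(a + i);            /* load 4 floats (unaligned ok) */
                __m128 vb = _mm_loadu_ps(b + i);
                _mm_storeu_ps(dst + i, _mm_add_ps(va, vb)); /* 4 adds in one instruction    */
            }
            for (; i < n; i++)                              /* scalar tail for leftovers    */
                dst[i] = a[i] + b[i];
        }
        ```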

        • _NoName_@lemmy.ml · ↑2 ↓1 · 5 months ago

          I understand that some instruction extensions today are used to good effect in x86, but also that there are a sizeable number of instructions that are rarely utilized by compilers and mostly continue to exist for backwards compatibility. That does not really make me think “more instructions are usually better”. It makes me think “CISC ISAs are usually bloated with unused instructions”.

          My whole understanding is that while more specific instruction options do provide benefits, the use cases for these instructions make up a small amount of code and often sacrifice single-cycle completion. The most commonly cited benefit of RISC is that it can complete more work in a shorter cycle count (execution time being “clock cycles per program” over “clock rate”), and it’s often argued that it does so at a lower energy cost.
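
          For reference, a LaTeX rendering of the textbook decomposition that argument leans on (the usual “iron law” of processor performance):

          ```latex
          % Execution time per program: RISC bets on lowering cycles-per-instruction
          % and cycle time, even if the instruction count per program goes up.
          T_{\text{program}}
            = \frac{\text{instructions}}{\text{program}}
              \times \frac{\text{cycles}}{\text{instruction}}
              \times \frac{\text{seconds}}{\text{cycle}}
            = \frac{\text{cycles per program}}{\text{clock rate}}
          ```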

          I imagine that RISC-V will introduce other standard extensions in the future (hopefully after it has finalized the ones already waiting), ideally with thoroughly thought-out instructions that will actually find regular use.

          I do see RISC-V proponents running simulated benchmarks showing RISC-V is more effective. I have not seen anything similar from x86 proponents, who usually either make general arguments or, worse, just point at modern x86 chips that have decades of research, funding, and design behind them.

          Overall, I see a lot of doubt that ISAs even matter to performance in any significant fashion, and I believe that for performance at the GHz level of speed.

  • KillingTimeItself@lemmy.dbzer0.com · ↑67 ↓1 · 5 months ago

    Smells like Linus thinks there is going to be ever-increasing tech debt, and honestly, I think I agree with him on that one.

    RISC-V is likely going to eventually overstep its role in some places, and bits and pieces of it will become archaic over time.

    The gap between hardware- and software-level abstraction is huge, and that’s really hard to fill properly. You just need strict design criteria to get around that one.

    I’m personally excited to see where RISC-V goes, but maybe what we truly need is a universal software level architecture that can be used on various different CPU architectures providing maximum flexibility.

    • Cocodapuf@lemmy.world · ↑40 · 5 months ago

      but maybe what we truly need is a universal software level architecture that can be used on various different CPU architectures providing maximum flexibility.

      I think that’s called Java.

    • arality@programming.dev · ↑15 · 5 months ago

      software level architecture that can be used on various different CPU architectures providing maximum flexibility.

      I’ve only done a little bare metal programming, but I really don’t see how this is possible. Everything I’ve used is so vastly different, I think it would be impossible to create something like that, and have it work well.

      • KillingTimeItself@lemmy.dbzer0.com · ↑2 · edited · 5 months ago

        Theoretically you could do it by defining an architecture operations standard and then adhering to that somewhat when designing a CPU, while still providing hardware flexibility, since you could simply not implement certain features, or implement certain other features. Might be an interesting idea.

        That or something that would require minimal “instruction translation” between different architectures.

        It’s like x86, except most of the features would be optional.

        • sugar_in_your_tea@sh.itjust.works · ↑2 · 5 months ago

          It sounds like you’re just reinventing either the JVM (runtime instruction translation), compilers (LLVM IR), or something in between (JIT interpreters).

          The problem is that it’s a hard problem to solve generally without expensive tradeoffs:

          • interpreter like JVM - will always have performance overhead and can’t easily target arch-specific optimizations like SIMD
          • compiler - need a separate binary per arch, or have large binaries that can do multiple
          • JIT - runtime cost to compiling optimizations

          Each is fine and has a use case, but I really don’t think we need a hardware-agnostic layer; we just need languages that help alleviate issues with different architectures. For example, Rust’s ownership model may help prevent bugs that out-of-order execution may expose. It could also allow programmers to specify stricter limits on types (e.g. non-zero numbers), which could aid arch-specific optimizations.
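
          As a rough C sketch of the “compiler” option from the list above (function name and loop bodies hypothetical; only the preprocessor guards are standard predefined macros), the arch-specific path is chosen at build time, which is exactly why plain ahead-of-time compilation means one binary per arch, or a fat binary carrying several:

          ```c
          #include <stddef.h>

          /* One portable entry point; which implementation gets compiled in
           * depends on the target the binary was built for. */
          void scale(float *x, size_t n, float k) {
          #if defined(__AVX2__)
              /* x86 build with AVX2 enabled: a real implementation would use
               * 256-bit vector intrinsics here. */
              for (size_t i = 0; i < n; i++) x[i] *= k;
          #elif defined(__ARM_NEON)
              /* AArch64/ARM build: NEON intrinsics would go here. */
              for (size_t i = 0; i < n; i++) x[i] *= k;
          #else
              /* Portable scalar fallback, e.g. for a RISC-V target today. */
              for (size_t i = 0; i < n; i++) x[i] *= k;
          #endif
          }
          ```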

    • nixcamic@lemmy.world · ↑7 · 5 months ago

      universal software level architecture that can be used on various different CPU

      Oh we already have dozens of those haha

  • lps@lemmy.ml · ↑32 ↓2 · 5 months ago

    Well, regardless, the world needs alternatives that are outside of restrictive US patent law and large monopolistic control. Thank god for pioneers :)

    • Richard@lemmy.world · ↑37 ↓1 · 5 months ago

      Sorry, but that is completely wrong. RISC-V is an ISA, nothing less, nothing more, and it is completely, 100% open-source. The licensing of the hardware implementations is a different matter, but that’s outside of the scope of RISC-V. As I said, it is just an ISA.

    • barsoap@lemm.ee · ↑9 · 5 months ago

      There’s plenty of designs out there that you can load onto an FPGA or, funds permitting, send off to a fab to burn into silicon.

  • BobGnarley@lemm.ee · ↑13 · 5 months ago

    RISC-V is the only shot we have at usable open source hardware. I really, really hope it takes off.

    • wewbull@feddit.uk · ↑4 · 5 months ago

      Whilst some open-source implementations exist, RISC-V itself is not open source. It’s an open standard, i.e. there’s no license fee to implement it.

      • BobGnarley@lemm.ee · ↑2 · 5 months ago

        I didn’t know that; I thought all RISC-V was open source :( I’m not as familiar with it as I’d like to be. I might just have to dive into it more and change that soon.

        • woelkchen@lemmy.world · ↑1 · 5 months ago

          I didn’t know that; I thought all RISC-V was open source :(

          If RISC-V were under some copyleft license where chip designs would have to be made open source, nobody from the chip industry would support RISC-V. They want “kinda like ARM but without licensing fees”.

  • SeattleRain@lemmy.world · ↑18 ↓12 · edited · 5 months ago

    Its open-source nature protects against that. People mistake Linus as being in the same boat as Stallman, but Linus was only open source by circumstance; he kind of infamously doesn’t seem to appreciate the role open source played in his own success.

    It already directly addresses the mistakes of x86 and ARM. I don’t know what he is so worried about.

    • exu@feditown.com · ↑16 · 5 months ago

      Only the core part of the ISA is open source. Vendors are free to add whatever proprietary extensions they want and sell the resulting CPU.

      You might get such a CPU to boot, but getting all of its functionality working might be the same fight it currently is with ARM CPUs.
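
      For what it’s worth, the base spec does give software a starting point for discovering what a core implements: the machine-mode misa CSR has one bit per standard base-extension letter. A minimal sketch in C (machine-mode/firmware context assumed; function names hypothetical), noting that vendor-specific extensions live outside this and need their own discovery:

      ```c
      #include <stdint.h>

      /* Read the misa CSR (machine mode only): bits 0-25 map to extensions A-Z,
       * e.g. bit 12 ('M' - 'A') says whether the M (mul/div) extension exists. */
      static inline uint64_t read_misa(void) {
          uint64_t misa;
          __asm__ volatile ("csrr %0, misa" : "=r"(misa));
          return misa;
      }

      static inline int has_extension(char letter) {
          return (read_misa() >> (letter - 'A')) & 1;
      }
      ```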

      • Richard@lemmy.world · ↑1 ↓2 · 5 months ago

        I’ll say to you what I said to the other commenter: RISC-V is an ISA, nothing less, nothing more, and it is 100% open-source. It is not trying to be anything else. Yes, hardware implementations from processor vendors can have different licensing and be proprietary, but that is not the fault of RISC-V, nor does it have anything to do with it. RISC-V, as an ISA, and only an ISA, is completely open-source and not liable for the BS of OEMs.

    • cmhe@lemmy.world · ↑12 · edited · 5 months ago

      Protects against what?

      What I read here is just a vague critique from him of the relationship between hardware and software developers, which will not change just because the ISA is open source. It will take some iterations until this is figured out; that is inevitable.

      Software and hardware developers are experts in their individual fields; there are not many people with enough know-how in both to be effective.

      Linus also points out that, because ARM came before, RISC-V might have an easier time on the software side, but mistakes will still happen.

      IMO, this article doesn’t go into enough depth on RISC-V-specific issues to warrant RISC-V in the title; it would apply to any up-and-coming ISA.

  • magnolia_mayhem@lemmy.world · ↑3 ↓1 · 5 months ago

    Maybe, but the point is that it’s open. There’s a much higher chance that one of the companies that builds parts will make good decisions.