• Heavybell@lemmy.world · 8 months ago

    Why is everything RISC-V some low-power device? I want a workstation with PCIe 5.0 powered by RISC-V.

        • oo1@kbin.social · 8 months ago

          I’d guess they’d need to figure out whatever Apple did with its ARM chips: efficient use of many cores and probably some fancy caching arrangement.

          It may also be a matter of financing: being able to afford (competing with Intel, Apple, AMD, Nvidia) to book the most advanced manufacturing for decent-sized batches of more complex chips.

          Once they have proven reliable core/chip designs, supporting more products and a growing market share, I imagine more financing doors will open.

          I’d guess RISC-V is mostly financed by industry consortia, maybe involving some governments, so it might not be about investor finance; but these funders will want to see progress towards their goals. If most of them want replacements for embedded low-power ARM chips, that’s what they’re going to prioritise over consumer or powerful standalone workstations.

        • duncesplayed@lemmy.one · 8 months ago (edited)

          At a minimum they’ve got to design a wider issue width. Current high-performance superscalar RISC-V chips like the XuanTie C910 (what this laptop’s SoC is built around) are only triple-issue (3-wide superscalar), which gives a theoretical maximum of 3 IPC per core. (And even by RISC standards, RISC-V has pretty “small” instructions, so 3 IPC on RISC-V isn’t much compared to 3 IPC on ARM. E.g., RISC-V’s comparison support is minimal, so many comparisons need to be composed of a few more elementary instructions.) As you widen the issue, that complicates the pipelining (and detecting pipeline hazards).
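          To illustrate the “small instructions” point: RISC-V has no direct less-than-or-equal comparison, so a set-if-less-or-equal has to be composed from `slt` plus a bit flip. A minimal Python sketch of that decomposition (the instruction names in the comments are real RISC-V; the Python just models the dataflow):

```python
def set_less_equal(a: int, b: int) -> int:
    """Model of computing (a <= b) RISC-V-style, where no single
    'sle' instruction exists and two elementary ops are needed."""
    t0 = 1 if b < a else 0  # slt  t0, b, a   -> t0 = (b < a)
    return t0 ^ 1           # xori t0, t0, 1  -> invert the bit
```

          So even a simple `<=` costs two issue slots, which is part of why 3-wide on RISC-V buys less than 3-wide on a “fatter” ISA.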

          There’s also some speculation that people are going to have to move to macro-op fusion, instead of implementing the ISA directly. I don’t think anyone’s actually done that in production yet (the macro-op fusion paper everyone links to was just one research project at a university and I haven’t seen it done for real yet). If that happens, that’s going to complicate the core design quite a lot.
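          As a rough sketch of what macro-op fusion means in practice: the decoder watches for known adjacent instruction pairs (e.g. `lui`+`addi` building a 32-bit constant) and emits one internal micro-op instead of two. The pattern table and tuple format below are invented for illustration only:

```python
# Toy decoder pass that fuses known adjacent instruction pairs into one
# internal micro-op. Instructions are modeled as (opcode, dest-register)
# tuples; a real implementation also checks source registers/immediates.
FUSIBLE = {("lui", "addi")}  # lui+addi: materializes a 32-bit constant

def fuse(stream):
    out, i = [], 0
    while i < len(stream):
        pair = (stream[i][0], stream[i + 1][0]) if i + 1 < len(stream) else None
        if pair in FUSIBLE and stream[i][1] == stream[i + 1][1]:
            # emit one fused micro-op in place of the two instructions
            out.append(("fused_" + "_".join(pair), stream[i][1]))
            i += 2
        else:
            out.append(stream[i])
            i += 1
    return out
```

          Doing this in hardware means extra pattern-matching logic in an already timing-critical decode stage, which is why it complicates the core design.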

          None of these things are insurmountable. They just take people and time.

          I suspect manufacturing is probably a big obstacle, too, but I know quite a bit less about that side of things. I mean a lot of companies are already fabbing RISC-V using modern transistor technologies.

    • oo1@kbin.social · 8 months ago

      I think that’s the whole point of all RISC - it saves power over CISC but may take longer to compute some tasks.

      That’d be why things like phones with limited batteries often prefer RISC.

      • jdaxe@infosec.pub · 8 months ago

        That’s true for small and simple designs, but larger and more complicated ones can theoretically implement macro-op fusion in hardware to get benefits similar to CISC architectures.

      • duncesplayed@lemmy.one · 8 months ago

        It definitely could scale up. The question is who is willing to scale it up? It takes a lot less manpower, a lot less investment, and a lot less time to design a low-power core, which is why those have come to market first. Eventually someone’s going to make a beast of a RISC-V core, though.

          • merthyr1831@lemmy.world · 8 months ago

            Excluding nation-states, which have their own strategic reasons: Nvidia, Google, Amazon, IBM, and almost every single big cloud player are going to begin investing in RISC-V as it matures.

            ARM charges a lot for its licensing, and that’s only going up in the near future. x86 is simply too expensive to compete in unless you’re AMD or Intel.

            At some point the cloud CPU players are gonna jump on RISC-V for the cost savings, and for the prospect of building their own platforms without licensing fees or a lack of input on the direction of the ISA.

          • intrepid@lemmy.ca · 8 months ago

            China is the main driver of growth in RISC-V currently, but we need to see how the trade wars will affect that. There was recent news about RISC-V specifically in this regard.

            We might also see more activity from Intel, Qualcomm and Nvidia.

    • wiki_me@lemmy.ml · 8 months ago

      Milk-V is going to release a pretty powerful system; IIRC I read it will be released in about 10 months. Ventana also reportedly will release a server CPU in 2024.

    • skilltheamps@feddit.de · 8 months ago

      It takes time, as it’s all under heavy development. Only very recently have RISC-V SBCs become available that can run Linux - before that it was pretty much microcontrollers only. Be patient :)

    • suoko · 8 months ago

      RISC-V is still 50% slower than a Unisoc SoC.

    • qaz@lemmy.world · 8 months ago (edited)

      There is the 64-core, 32-128 GB DDR4 Milk-V Pioneer, but it uses PCIe 4.0.

    • mindbleach@sh.itjust.works · 8 months ago

      Even once the kinks are worked out, the primary market for RISC-V will be low-end. It’s a FOSS (FOSH?) upgrade path from 8-bit and 16-bit ISAs.

      There will be no reason for embedded systems to use ARM.

      • nickwitha_k (he/him)@lemmy.sdf.org · 8 months ago

        Initial market, absolutely. It’s already there at this point. Low-power 32-bit ARM SoC MCUs have largely replaced the 8-bit and 16-bit AVR MCUs, as well as MIPS, in new designs. They’re just priced so well for the performance, with relative cost savings on the software/firmware dev side (ex. Rust can run with its std library on Espressif chips, making development much quicker and easier).

        With ARM licensing looking less and less tenable, more companies are also moving to RISC-V from it, especially if they have in-house chip architects. So, I also suspect that it will supplant ARM in such use cases - we’re already seeing such in hobbyist-oriented boards, including some that use a RISC-V processor as an ultra-low-power co-processor for beefier ARM multi-core SoCs.

        That said, unless there’s government intervention to kill RISC-V, under the guise of chip-war (but really likely because of ARM “campaign contributions”), I suspect that we’ll have desktop-class machines sooner rather than later (before the end of the decade).

        • mindbleach@sh.itjust.works · 8 months ago

          I would’ve had my doubts, until Apple somehow made ARM competitive with x86. A trick they couldn’t pull off with PowerPC.

          I guess linear speed barely ought to matter, these days, since parallelism is an order-of-magnitude improvement, and scales.

          • nickwitha_k (he/him)@lemmy.sdf.org · 8 months ago

            I would’ve had my doubts, until Apple somehow made ARM competitive with x86. A trick they couldn’t pull off with PowerPC.

            Yeah. From what I’ve pieced together, Apple’s dropping PowerPC ultimately came down to perf/watt and delays in delivery from IBM of a suitable chip that could be used in a laptop and support 64-bit instructions. x86 beat them to the punch and was MUCH more suitable for laptops.

            Interestingly, the mix of a desire for greater vertical integration and chasing perf/watt is likely why they went ARM. With their license, they have a huge amount of flexibility and are able to significantly customize the designs from ARM, letting them optimize in ways that Intel and AMD just wouldn’t allow.

            I guess linear speed barely ought to matter, these days, since parallelism is an order-of-magnitude improvement, and scales.

            It is definitely a complicated picture when figuring out performance. Lots of potential factors come together to make the whole picture: ops per clock cycle per core, physical size of a core (RISC designs generally have fewer transistors per core, making them smaller), integrated memory, on-die co-processors, etc. The more that the angry little pixies can do in a smaller area, the less heat is generated and the faster they can reach their destinations.

            ARM, being a mature and customizable RISC arch, really should be able to chomp into x86 market share. RISC-V, while younger, has been able to grow and advance at a pace not seen before, to my knowledge, thanks to its open nature. More companies are able to experiment and try novel architectures than under x86 or ARM. The ISA is what’s gotten me excited again about hardware and learning how it’s made.

            • mindbleach@sh.itjust.works · 8 months ago

              With their license, they have a huge amount of flexibility and are able to significantly customize the designs from ARM, letting them optimize in ways that Intel and AMD just wouldn’t allow.

              An opportunity RISC-V will offer to anyone with a billion dollars lying around.

              ARM, being a mature and customizable RISC arch, really should be able to chomp into x86 market share.

              x86 market share is 99.999% driven by published software. Microsoft already tried expanding Windows, and being Microsoft, made half a dozen of the worst decisions simultaneously. Linux dorks (hi) have the freedom to shift over to whatever, give or take some Wine holdovers. Apple just dictated what would change, because you can do that when you’re a petit monopoly.

              What’s really going to threaten x86 are user-mode emulators like box86, fex-emu, and qemu-user. That witchcraft turns Windows/x86 binaries into something like Java: it will run poorly, but it will run. Right now those projects mostly target ARM, obviously. But there’s no reason they have to. Just melting things down to LLVM or Mono would let any native back-end run up-to-date software on esoteric hardware.
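              The core of that witchcraft is decode-and-dispatch (or JIT translation) of guest instructions to host operations, plus syscall translation. A toy interpreter sketch, with an invented three-field instruction format, just to show the dispatch idea:

```python
def run(program, regs=None):
    """Toy decode-and-dispatch loop, the basic shape of user-mode
    emulation: each guest 'instruction' is an (op, dst, src) tuple;
    src may be a register name or an immediate literal."""
    regs = {} if regs is None else regs

    def val(r, s):                      # register read, or immediate value
        return r[s] if s in r else s

    handlers = {                        # host-native handler per guest opcode
        "mov": lambda r, d, s: r.update({d: val(r, s)}),
        "add": lambda r, d, s: r.update({d: r[d] + val(r, s)}),
    }
    for op, dst, src in program:        # the dispatch loop
        handlers[op](regs, dst, src)
    return regs
```

              Real emulators like qemu-user translate blocks of guest code to a host-agnostic IR and JIT that down to native code, which is exactly why retargeting them from ARM to another back end is plausible.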

              • nickwitha_k (he/him)@lemmy.sdf.org · 8 months ago

                An opportunity RISC-V will offer to anyone with a billion dollars lying around.

                Exactly this. Nvidia and Seagate, among others, have already hopped on this. I hold out hope for more accessible custom processors that would enable hobbyists and smaller companies to join in as well, and make established companies more inclined to try novel designs.

                x86 market share is 99.999% driven by published software. Microsoft already tried expanding Windows, and being Microsoft, made half a dozen of the worst decisions simultaneously.

                Indeed. I’ve read opinions that that was historically also a significant factor in PowerPC’s failure - no one is going to want to use your architecture if there is no software for it. I’m still rather left scratching my head at a lot of MS’s decisions on their OS and device support. IIRC, they may finally be adopting an approach to drivers that’s more similar to Linux’s, but without being a bit more open with their APIs, I’m not sure how that will work.

                Linux dorks (hi)

                Hello! 0/

                What’s really going to threaten x86 are user-mode emulators like box86, fex-emu, and qemu-user. That witchcraft turns Windows/x86 binaries into something like Java: it will run poorly, but it will run.

                Hrm…I wonder if there’s some middle ground or synergy to be had with the kind of witchcraft that Apple is doing with their Rosetta translation layer (though, I think that also has hardware components).

                Right now those projects mostly target ARM, obviously. But there’s no reason they have to. Just melting things down to LLVM or Mono would let any native back-end run up-to-date software on esoteric hardware.

                That would be brilliant.

                • mindbleach@sh.itjust.works · 8 months ago

                  IIRC Apple’s ARM implementation has a lot of extensions that coincidentally work just like x86.

                  Frankly I’m gobsmacked at how many “universal binary” formats are just two native executables in a trenchcoat. Especially after MS and Apple both got deep into intermediate representation formats. Even a static machine-code-only segment would simplify the hell out of emulation.

    • nickwitha_k (he/him)@lemmy.sdf.org · 8 months ago (edited)

      Me too. Hell, I’d settle for a multi-core RV64GC processor offered as a bare chip and socket, since I’ve always wanted to give building a motherboard a try, but the dev systems available seem to have everything soldered :(