- cross-posted to:
- technology@lemmy.zip
- hardware@lemmy.world
He’s being misquoted by the headline. He FEARS that it will make the same mistakes. Let’s be clear about why RISC-V is here in the first place: an open-source hardware architecture. Anyone with enough money and willpower to fork it for their needs will do so. It’s anyone’s game still. He’s simply saying that the same type of people who took over ARM and x86 are doomed to make the same mistakes. Not that RISC-V is bad.
I’m being pedantic here, but RISC-V is not a hardware architecture in the sense of something you can send to a manufacturer and have made. It is an ISA. How you implement that ISA is up to you. Yes, there are open implementations, but I think it is important to make the distinction.
Isn’t ARM the same?
So is x86. The difference is the license. Just like how Intel and AMD have VERY different designs (implementations) as of now, so do RISC-V vendors. Any vendor can implement it however they want, but they won’t have to pay anyone for using the RISC-V ISA.
Anyone willing to summarize those mistakes here, for those who can’t watch the video rn?
He doesn’t list what the mistakes will be. He said that he fears that because hardware people aren’t software people, they will make the same mistakes that x86 made, which were then made by Arm later.
He did mention that fixing those mistakes was faster for Arm than for x86, so that brings hope that fixing the mistakes in RISC-V will take less time.
I think it was something with instruction sets? Pretty sure I read something about this months ago.
No, it was about the prediction engines that contain security vulnerabilities. The problem is that software has no control over that, because the hardware does speculative prediction for performance optimization.
Aah, right, that.
Prediction is a hard problem when coupled with caches. It’s relatively easy to say that no speculative instruction has any effect until it’s confirmed taken if you ignore caches. However, caches need to fetch information from memory to allow an instruction to evaluate, and rewinding a cache to its previous state on a mispredict is almost impossible. Especially when you consider that the amount of time you’re executing non-speculative code on a modern processor is very low.
Not having predictions is consigning yourself to 1990s performance, with faster clocks.
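To make that concrete, here’s a rough sketch (in Rust, purely illustrative, not a working exploit; the names and sizes are made up) of the classic Spectre-v1 pattern those vulnerabilities exploit. Architecturally the bounds check makes it safe, but a mispredicted branch can execute the loads speculatively with an out-of-range index, and the secret-dependent load warms a cache line that the rollback can’t undo:

```rust
// Minimal Spectre-v1-style gadget sketch (illustrative only).
fn gadget(data: &[u8], probe: &[u8; 256 * 4096], idx: usize) -> u8 {
    if idx < data.len() {
        // May run speculatively even when `idx` is out of bounds.
        let secret = data[idx];
        // Which 4 KiB line of `probe` gets cached encodes `secret`;
        // that side effect survives the pipeline flush and can later
        // be recovered by timing cache accesses.
        return probe[secret as usize * 4096];
    }
    0
}
```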
I mean, that’s all chip architectures are, so yes.
Basically, his concern is that if they don’t cooperate with software engineers, the product won’t be able to run AAA games.
It’s more of a warning than a prediction.
What are “AAA turns”?
Sorry, AAA games. I was swiping on my keyboard and didn’t see the mistake.
SwiftKey?
Not OP, but consider using FUTO Keyboard. It’s made by the group Louis Rossmann works with, and it has offline speech to text (no sending data to Google), swipe keyboard, and completions. It’s also source-available, which isn’t as good as open source, but you could examine the code and verify their claims if you wanted to.
I’m using it and, while it’s not perfect, it’s way better than the open source Android keyboards with swiping that I’ve tried.
Thanks, will try it out! I need an emoji picker though. Does it have that?
Edit: typing with it now. It had an emoji picker. 👍
- I like the picker’s grouping, actually. Body parts (hands) are closer to faces.
- The recent emoji section doesn’t work.
- It doesn’t have the latest emoji set, as far as I can tell.
- The swiping is much more sensitive than Gboard. I’m not a fan as of yet. Maybe it’s still learning. Seems like it can’t handle the speed as well as Gboard can.
- Prediction suggestions are terrible so far.
- I don’t like that swipe delete doesn’t delete whole words.
All in all, I don’t think I can recommend it in its current state.
But, if you type by pressing buttons, the predictions are actually pretty good. Maybe that saves a bit of time if you’re very stationary and not on the move.
Yeah, it’s very much alpha software, but it works surprisingly well for being in such an early state. I’m using it as my keyboard now, and it works well enough, but certainly not perfect.
Then again, I’m willing to deal with a lot of nonsense to avoid Google, so YMMV.
I hear the speech to text is pretty good. I haven’t tried it (I hate dictation), but maybe you could give it a whirl before you give up on it, it’s supposed to be its killer feature.
I’ll give it a shot. I’m using Google
Giving it a whirl right now. Thanks for the recommendation.
Google in this case. I’ll try the alternative mentioned
Instruction creep maybe? Pretty sure I’ve also seen stuff that seems to show that Torvalds is anti-speculative-execution due to its vulnerabilities, so he could also be referring to that.
Counterintuitively, more instructions are usually better. They enable you (but let’s be honest, the compiler) to be much more specific, which usually has positive performance implications for a minimal, if any, increase in binary size. Take for example SIMD, which is hyper-specific math operations on large chunks of data. These instructions are extremely specific, but when properly utilized they bring huge performance improvements.
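As a rough sketch (assuming an x86-64 target, where the SSE float instructions are baseline), each intrinsic below maps to one very specific instruction, yet together they replace a whole scalar loop by adding four f32 lanes at once:

```rust
#[cfg(target_arch = "x86_64")]
fn add4(a: [f32; 4], b: [f32; 4]) -> [f32; 4] {
    use core::arch::x86_64::{_mm_add_ps, _mm_loadu_ps, _mm_storeu_ps};
    let mut out = [0.0f32; 4];
    unsafe {
        let va = _mm_loadu_ps(a.as_ptr()); // load 4 floats
        let vb = _mm_loadu_ps(b.as_ptr()); // load 4 floats
        _mm_storeu_ps(out.as_mut_ptr(), _mm_add_ps(va, vb)); // one packed add
    }
    out
}
```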
I understand some instruction expansions today are used to good effect in x86, but that there are also a sizeable number of instructions that are rarely utilized by compilers and are mostly only continuing to exist for backwards compatibility. That does not really make me think “more instructions are usually better”. It makes me think “CISC ISAs are usually bloated with unused instructions”.
My whole understanding is that while more specific instruction options do provide benefits, the use cases of these instructions make up a small amount of code and often sacrifice single-cycle completion. The most commonly cited benefit for RISC is that it can complete more work (measured as execution time, i.e. ‘clock cycles per program’ divided by ‘clock rate’) in a shorter cycle count, and it’s often argued that it does so at a lower energy cost.
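A toy back-of-the-envelope version of that metric, using the classic iron law (time = instructions per program × cycles per instruction ÷ clock rate) with completely made-up numbers, just to show how a design that executes more instructions can still finish sooner:

```rust
// Iron law of performance with hypothetical numbers (illustrative only).
fn exec_time(instructions: f64, cpi: f64, clock_hz: f64) -> f64 {
    instructions * cpi / clock_hz
}

fn main() {
    // CISC-ish program: fewer, more complex instructions, higher CPI.
    let cisc = exec_time(1.0e9, 3.0, 3.0e9);
    // RISC-ish program: ~40% more instructions, much lower CPI.
    let risc = exec_time(1.4e9, 1.2, 3.0e9);
    println!("CISC-ish: {cisc:.2} s, RISC-ish: {risc:.2} s"); // 1.00 s vs 0.56 s
}
```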
I imagine that RISC-V will introduce other standards in the future (ideally after it’s finalized the ones already waiting), hopefully with thoroughly thought-out instructions that will actually find regular use.
I do see RISC-V proponents running simulated benchmarks showing RISC-V is more effective. I have not seen anything similar from x86 proponents, who usually either make general arguments or, worse, just point at the modern x86 chips that have decades of research, funding, and design behind them.
Overall, I see a lot of doubt that ISAs even matter to performance in any significant fashion, and I believe it for performance at the GHz level of speed.
This is probably correct.
Smells like Linus thinks there is going to be ever-increasing tech debt, and honestly, I think I agree with him on that one.
RISC-V is likely going to eventually overstep it’s role in someplaces, and bits and pieces of it will become archaic over time.
The gap between hardware- and software-level abstraction is huge, and that’s really hard to fill properly. You need strict design criteria to get around that one.
I’m personally excited to see where RISC-V goes, but maybe what we truly need is a universal software-level architecture that can be used on various different CPU architectures, providing maximum flexibility.
> but maybe what we truly need is a universal software-level architecture that can be used on various different CPU architectures, providing maximum flexibility.
I think that’s called Java.
Or Emacs
Then again, if you don’t have the JVM/JRE, Java won’t work, so first you need to write it in another language and in such a way that it works across a bunch of different ARM and x86 processors.
I don’t know, if your platform doesn’t have a JRE… Is it really a platform?
Dunno, would you consider the Xbox or Playstation platforms?
but but, minecraft in java bad and stinky??
But Java is the good version of Minecraft…
Unfortunately, you aren’t wrong.
> software-level architecture that can be used on various different CPU architectures, providing maximum flexibility.
I’ve only done a little bare metal programming, but I really don’t see how this is possible. Everything I’ve used is so vastly different, I think it would be impossible to create something like that, and have it work well.
Theoretically you could do it by defining an architecture operations standard and then adhering to that somewhat when designing a CPU, while still providing hardware flexibility, since you could simply not implement certain features, or implement certain other features. Might be an interesting idea.
That or something that would require minimal “instruction translation” between different architectures.
It’s like x86, except most of the features would be optional.
It sounds like you’re just reinventing either the JVM (runtime instruction translation), compilers (LLVM IR), or something in between (JIT interpreters).
The problem is that it’s a hard problem to solve generally without expensive tradeoffs:
- interpreter like JVM - will always have performance overhead and can’t easily target arch-specific optimizations like SIMD
- compiler - need a separate binary per arch, or have large binaries that can do multiple
- JIT - runtime cost to compiling optimizations
Each is fine and has a use case, but I really don’t think we need a hardware-agnostic layer; we just need languages that help alleviate issues with different architectures. For example, Rust’s ownership model may help prevent bugs that out-of-order execution may expose. It could also allow programmers to specify stricter limits on types (e.g. non-zero numbers), which could aid arch-specific optimizations.
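A small sketch of that last point, using the standard library’s NonZeroU32 (the function and numbers are just illustrative): the type itself guarantees the value is never zero, so the compiler can drop zero checks and even pack `Option<NonZeroU32>` into the same size as a plain u32.

```rust
use std::num::NonZeroU32;

// The type rules out division by zero; no runtime check, no panic path.
fn pages_needed(total_items: u32, per_page: NonZeroU32) -> u32 {
    total_items.div_ceil(per_page.get())
}

fn main() {
    let per_page = NonZeroU32::new(25).expect("25 is non-zero");
    println!("{}", pages_needed(103, per_page)); // prints 5
    // The all-zero bit pattern is reused for None, so no extra tag byte.
    assert_eq!(
        std::mem::size_of::<Option<NonZeroU32>>(),
        std::mem::size_of::<u32>()
    );
}
```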
yeah pretty much. The JVM but marginally less skill issued lol.
> universal software-level architecture that can be used on various different CPU
Oh we already have dozens of those haha
overstep its* role in some places
username checks out
Well regardless, the world needs alternatives that are outside of restrictive US patent law and large monopolistic control. Thank god for pioneers :)
ARM Inc is an English company owned by a Japanese company
Pretty sure it’s a plc, not an Inc.
deleted by creator
Sorry, but that is completely wrong. RISC-V is an ISA, nothing less, nothing more, and it is completely, 100% open-source. The licensing of the hardware implementations is a different matter, but that’s outside of the scope of RISC-V. As I said, it is just an ISA.
There’s plenty of designs out there that you can load onto an FPGA or, funds permitting, send off to a fab to burn into silicon.
RISC-V is the only shot we have at usable open source hardware. I really, really hope it takes off.
Whilst some open-source implementations exist, RISC-V is not open source. It’s an open standard, i.e. there’s no license fee to implement it.
I didn’t know that; I thought all RISC-V was open source :( I’m not as familiar with it as I’d like to be. I might just have to dive into it more and change that soon.
> I didn’t know that; I thought all RISC-V was open source :(
If RISC-V was under some copyleft license where chip designs would have to be made open source, nobody from the chip industry would support RISC-V. They want “kinda like ARM but without licensing fees”.
Even if that happens, still open sauce
Not really? I mean, only partially.
It’s open source nature protects against that. People mistake Linus as being in the same boat as Stallman, but Linus was only open source by circumstance; he kind of infamously doesn’t seem to appreciate the role open source played in his own success.
It already directly addresses the mistakes of x86 and ARM. I don’t know what he is so worried about.
Only the core part of the ISA is open source. Vendors are free to add whatever proprietary extensions they want and sell the resulting CPU.
You might get such a CPU to boot, but getting all the functionality working might be the same fight it is with ARM CPUs currently.
I’ll say to you what I said to the other commenter: RISC-V is an ISA, nothing less, nothing more, and it is 100% open-source. It is not trying to be anything else. Yes, hardware implementations from processor vendors can have different licensing and be proprietary, but that is not the fault of RISC-V, nor does it have anything to do with it. RISC-V, as an ISA, and only an ISA, is completely open-source and not liable for the BS of OEMs.
Protects against what?
What I read here is just a vague critique from him of the relationship between hardware and software developers. That will not change just because the ISA is open source. It will take some iterations until this is figured out; that is inevitable.
Soft- and hardware developers are experts in their individual fields, there are not many with enough know-how of both fields to be effective.
Linus also points out that, because ARM came before, RISC-V might have an easier time on the software side, but mistakes will still happen.
IMO, this article doesn’t go into enough depth on the RISC-V-specific issues to warrant RISC-V in the title; it would apply to any up-and-coming new ISA.
Its* open-source nature
Maybe, but the point is that it’s open. There’s a much higher chance that one of the companies that builds parts will make good decisions.