If you were comparing x86 vs RISC-V you might not be far off. But ARM and x86 have basically the same use cases: desktops, laptops, servers, networking equipment, game consoles, set-top boxes, and so on. x86 was even used in mobile phones and as a microcontroller. It’s not used in those applications as much now, obviously, but it’s very much possible. ARM was originally developed for the desktop too, and was designed for high performance; look up the Acorn Archimedes. When people say ARM is coming to the desktop, they really should be saying ARM is coming back to the desktop, since that’s where it started.
You’re also not correct on the clock speed and IPC front. For a long time Apple’s ARM implementation had better IPC than x86 chips. The whole point of RISC is that you can get better clock speeds and execute more instructions, whereas CISC executes fewer, more complex instructions more slowly. The only really correct part is that x86 chips are more deeply pipelined. That’s essentially a consequence of being CISC: they need more stages to hit the same clock speed. Apple’s ARM cores make up for this by having more superscalar execution units than x86 chips, allowing for greater IPC.
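To put rough numbers on that trade-off, here’s a minimal sketch in Python. The widths, utilizations, and clock speeds are illustrative assumptions, not measurements of any real chip; the point is only that a wide, lower-clocked core can deliver as many instructions per second as a narrower, higher-clocked one.

```python
# Toy throughput model: delivered instructions/sec = IPC * clock.
# IPC is capped by decode width and discounted by a utilization
# factor (stalls, dependencies, cache misses). All numbers are
# illustrative assumptions, not measurements of any real chip.

def throughput_gips(decode_width, utilization, clock_ghz):
    """Delivered instructions per second, in billions."""
    ipc = decode_width * utilization
    return ipc * clock_ghz

# A wide, lower-clocked core (the "Apple ARM" shape):
wide = throughput_gips(decode_width=8, utilization=0.5, clock_ghz=3.2)

# A narrower, higher-clocked core (the "x86" shape):
narrow = throughput_gips(decode_width=4, utilization=0.6, clock_ghz=5.0)

print(f"wide core:   {wide:.1f} G instructions/s")   # 12.8
print(f"narrow core: {narrow:.1f} G instructions/s") # 12.0
```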
Putting graphics and video compression hardware on x86 chips isn’t new either. That’s a question of system design, not of x86 vs ARM. In the server market you get ARM chips that are CPU-only. Both also come paired with FPGAs. So it’s not even fair to say ARM has more accelerators on chip. Also, any ARM chip with PCIe (such as the server ones) can take advantage of the same co-processors that x86 can, the only limitations being drivers and software.
It’s not used in those applications as much now, obviously, but it’s very much possible.
Sure, when all you have is a hammer, everything looks like a nail. Since then, CPUs have specialized. ARM targets embedded products and is pushing into servers, with Apple putting it into laptops, and advertises itself as “low-power.” x86 targets desktops and servers and advertises itself as a workhorse. Those specializations guide engineering.
The whole point of RISC is that you can get better clock speeds and execute more instructions
Sure, and that’s why RISC tends to go wide: each instruction does less work, so you need to run more of them.
Complex instructions may take multiple clock cycles to complete, especially if you count various sub-circuits. ARM is getting more and more of those, but x86 is notorious for it, and it gets really complicated to predict execution time since it depends on how the CPU reorders instructions. But generally speaking, ARM pushes for going wide, and x86 pushes for more IPC on fewer cores (pipelining, out-of-order execution, etc.).
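The standard way to frame that trade-off is the “iron law” of CPU performance: time = instruction count × average CPI ÷ clock rate. Here’s a minimal sketch with entirely made-up instruction mixes and cycle costs, showing how a CISC-ish program (fewer but slower instructions) and a RISC-ish one (more but faster instructions) can land in roughly the same place:

```python
# "Iron law" of CPU performance:
#   time = instruction_count * avg_CPI / clock_rate
# avg_CPI is a weighted average over the instruction mix.
# All counts, mixes, and cycle costs below are made up for illustration.

def avg_cpi(mix):
    """mix: list of (fraction_of_instructions, cycles) pairs."""
    return sum(frac * cycles for frac, cycles in mix)

def exec_time_s(instructions, mix, clock_hz):
    return instructions * avg_cpi(mix) / clock_hz

# CISC-ish: fewer instructions, but some take many cycles.
cisc = exec_time_s(
    instructions=1.0e9,
    mix=[(0.7, 1), (0.2, 3), (0.1, 10)],  # avg CPI = 2.3
    clock_hz=4.0e9,
)

# RISC-ish: ~1.5x the instructions, almost all single-cycle.
risc = exec_time_s(
    instructions=1.5e9,
    mix=[(0.95, 1), (0.05, 2)],  # avg CPI = 1.05
    clock_hz=3.0e9,
)

print(f"CISC-ish: {cisc * 1e3:.0f} ms")  # 575 ms
print(f"RISC-ish: {risc * 1e3:.0f} ms")  # 525 ms
```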
So that’s the idea I’m trying to get across. Basically what YouTube reviewers call “generational IPC improvements.”
So it’s not even fair to say ARM has more accelerators on chip
It was an example to get away from specifics like putting memory controllers, disk controllers, etc. on the CPU instead of the northbridge or whatever. x86 has done a lot of this recently too, but ARM is still more of an SoC than just a CPU.
But yes, the line is getting blurred the more ARM targets x86-dominant markets.
But generally speaking, ARM pushes for going wide, and x86 pushes for more IPC on fewer cores (pipelining, out-of-order execution, etc.).
Going wide also means having more superscalar units and therefore getting better IPC. You also don’t really understand what pipelining does. Pipelining increases IPC versus not pipelining, sure, but adding more stages can actually reduce IPC, as with the Pentium 4. This is because it increases the penalty for branch misprediction. Excessive pipeline stages, in a time before modern branch predictors, are what made the Pentium 4 suck. The reason to add more stages is to increase clock speed (as with the Pentium 4) or to accommodate more complicated instructions. The way you talk about this stuff tells me you don’t actually understand what’s going on or why.
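To make the Pentium 4 point concrete, here’s a minimal sketch using the textbook formula CPI = base CPI + branch fraction × mispredict rate × flush penalty. Prescott’s ~31-stage pipeline is real; the branch fraction, mispredict rate, and the simplification that the flush penalty equals the stage count are illustrative assumptions:

```python
# Effective IPC in the presence of branch mispredictions:
#   CPI = base_CPI + branch_frac * mispredict_rate * penalty
#   IPC = 1 / CPI
# The flush penalty grows roughly with pipeline depth. Branch
# fraction and mispredict rate are illustrative assumptions.

def effective_ipc(stages, base_cpi=1.0,
                  branch_frac=0.20, mispredict_rate=0.08):
    penalty = stages  # simplification: flush cost ~ pipeline depth
    return 1.0 / (base_cpi + branch_frac * mispredict_rate * penalty)

for stages in (10, 14, 20, 31):  # 31 ~ Pentium 4 "Prescott"
    print(f"{stages:>2} stages -> IPC {effective_ipc(stages):.2f}")

# 10 stages -> IPC 0.86
# 14 stages -> IPC 0.82
# 20 stages -> IPC 0.76
# 31 stages -> IPC 0.67
```

Adding stages only pays off if the clock speed gain outruns that IPC loss, which was exactly the Pentium 4’s bet, and it lost once clock scaling hit the power wall.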
Also, x86 has had memory controllers on the CPU for well over a decade now. Likewise, PCIe, USB, and various other things have been moved onto the CPU; northbridges don’t even exist anymore. Some chips even integrate the southbridge to make an SoC, much like a smartphone. None of this is actually relevant to the architecture though; it comes down to form factor, engineering decisions, and changes in technology that are specific to a given chip or product. If x86 had succeeded more in smartphones and ARM had taken the desktop (as was their original intention), you would be standing here talking about x86 chips including more functions and ARM chips having separate chipsets. So this isn’t a fair thing to use to compare x86 and ARM.
It’s also not really true that x86 has fewer cores. A modern Ryzen, even in a laptop form factor, can have up to 16. That’s more than Apple puts in their mobile chips.

I get why people think this way. Phones had 8 cores long before PCs did, and it made sense at the time. When ARM cores were smaller and narrower, with much less per-core performance and IPC, increasing their number made sense. Likewise, many smaller cores are more energy efficient than fewer bigger cores, which suits something like a smartphone. But now that big, wide, power-hungry ARM cores exist and are used in higher-power form factors than a smartphone, there isn’t really the need for so many. At the same time, x86 has efficient small cores these days that in some cases get better performance per watt than their ARM equivalents, and x86 core counts have skyrocketed. Both platforms were originally focused on per-core performance too, as multi-core consumer devices simply weren’t a thing.

All of this “ARM has more cores and x86 has more single-core performance” malarkey was only true for a certain window of time. It isn’t where this all started and it’s not where we’re going now. Instead, what we’re seeing is convergent design, where ARM and x86 are used in the same use cases with the same design concepts, and maybe eventually one will replace the other. Only time will tell.
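On the energy-efficiency point above: here’s a minimal sketch of why many small cores can beat a few big ones on performance per watt, assuming the textbook dynamic power model P ∝ C·V²·f with supply voltage scaling roughly with frequency (so per-core power scales roughly with f³). All numbers are illustrative:

```python
# Textbook dynamic power: P ~ C * V^2 * f. Under DVFS, voltage
# scales roughly with frequency, so per-core power ~ f^3 while
# per-core throughput ~ f. All numbers are illustrative units.

def config_stats(cores, freq_ghz):
    throughput = cores * freq_ghz     # work units per second
    power = cores * freq_ghz ** 3     # power units
    return throughput, power

configs = {
    "2 big cores @ 3.0 GHz": config_stats(cores=2, freq_ghz=3.0),
    "6 small cores @ 1.0 GHz": config_stats(cores=6, freq_ghz=1.0),
}

for name, (tput, power) in configs.items():
    print(f"{name}: throughput {tput:.0f}, power {power:.0f}, "
          f"perf/watt {tput / power:.2f}")

# 2 big cores @ 3.0 GHz: throughput 6, power 54, perf/watt 0.11
# 6 small cores @ 1.0 GHz: throughput 6, power 6, perf/watt 1.00
```

The catch is that this only helps when the workload actually parallelizes, which is part of why desktop x86 kept chasing per-core performance.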