tl;dr: he says “x86 took over the server market” because it was the same architecture developers had on their own machines, which made it very easy to develop applications locally and then ship them to the servers.
Now these, along with other points he made, are very good arguments for how and why it is hard for ARM to go mainstream in the datacenter. However, I also feel like he kind of lost touch with reality on this one…
He’s comparing two very different situations, or more precisely, eras. Developers aren’t tied to the underlying hardware like they used to be. The software development market evolved from C to very high-level languages such as Javascript/Typescript, and the majority of software is (or will be) written in those languages, so the CPU architecture becomes irrelevant.
Obviously, very big companies such as Google, Microsoft and Amazon are more than happy to pay the little “tax” of making sure Javascript runs fine on ARM rather than the big bucks they currently pay for x86…
What are your thoughts?
Have you used ARM servers? They’re a massive pain to work with because they just need that one little extra step every time. Oops, this Docker image doesn’t do aarch64, gotta build it yourself. Oops, this package isn’t available, gotta compile it yourself. Oops, this tool doesn’t work, gotta find an alternative or run it through the much slower qemu layer.
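For anyone who hasn’t hit this: the friction is real, but it’s scriptable. Here’s a minimal sketch of the kind of shim you end up writing (the function name and the image tag are mine, not anything standard):

```shell
#!/bin/sh
# Hypothetical helper: map a `uname -m` machine string to the
# matching Docker --platform value.
docker_platform() {
  case "$1" in
    x86_64)        echo "linux/amd64" ;;
    aarch64|arm64) echo "linux/arm64" ;;
    armv7l)        echo "linux/arm/v7" ;;
    *) echo "unsupported: $1" >&2; return 1 ;;
  esac
}

# The "one little extra step": when an upstream image ships no arm64
# variant, you rebuild it yourself, e.g. with buildx:
#   docker buildx build --platform "$(docker_platform "$(uname -m)")" -t myimage .
```

It’s only a handful of lines, but it’s a handful of lines per image, per package, per tool — which is exactly the pain being described.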
The M1 was the first usable ARM development machine for the mainstream and at launch it was plagued with tons of “how do I develop on this” problems. Apple provided x64 compatibility as a workaround for basically every piece of software you want to run being on another platform. Things are moving forward, but I haven’t heard of any companies announcing how their lives improved by switching to Graviton. Maybe if Apple released a 200 core M2 server it would start to make sense to use ARM, but knowing Apple they’d probably force you to run macOS.
Linux was released in 1991, not 1960. There were tons of programming languages out there. BASIC ran on basically anything, as did C++. Pascal and Fortran are still used to write high demand applications to this day. Nobody was stuck with C.
Also, when you actually need performance, Javascript needs to go. Java and dotnet have the same cross platform advantages with much higher speeds. When those become too slow for you (not that hard, they both have huge overhead), you get into the realm of C++ and Rust. After that, you can go one step further, and write your code in C or Fortran (Fortran is especially good at number crunching, beating C at many tasks).
For a while, developers were stuck with compiling stuff for their servers. Then Java came out. Java did what you say Javascript does: write once, run anywhere. Since the late nineties, server architecture does not strictly matter. You can take most .jar files and serve them from your server, your Power9 box, your Android phone, it’ll all just work after downloading a runtime.
Nothing changed, really. The minority of developers running on ARM will usually still deploy to amd64. Unlike in the past, ARM cores on desktop are faster than ARM cores on the server. There’s no benefit to running ARM servers. Running slow software like PHP and Javascript becomes especially problematic on slower hardware, so for those cross platform runtimes, you’re still better off running on amd64. That’s part of the reason why companies like Oracle are handing out free ARM VPS products with tons of free RAM, to convince people to try their ARM product for real.
Maybe Graviton will take off, who knows. People said the same thing about Power9 and they’re saying great stuff about RISC-V too. For now, I don’t see much change.
Yes, I’ve had that experience, and a similar one when the first ARM SBCs came to the market circa 2009 with the SheevaPlug. At that time I was trying to get stuff to work on those, so I know how things go.
After this point you’re essentially saying the same thing I was, BUT replacing the word Javascript with Java/dotnet. Once those virtual machines run well on ARM (as they mostly do), developers won’t care about the architecture anymore. I only picked Javascript/Typescript as an example because it will most likely take over everything in a few years.
And why are they trying to push developers onto ARM? It’s a medium-term strategic investment: they’re waiting on, and pushing, ARM manufacturers such as Ampere Computing to develop “bigger and better” CPUs that can take on Intel. Once those are truly competitive in performance, they’ll simply start replacing Intel with ARM, and nobody will complain, because at that point the 90% of developers using Java/dotnet/Javascript (things that run on VMs) won’t even notice the difference between running on amd64 or ARM.
It seems that Facebook, the holy grail of running PHP, doesn’t agree with you. They’ve been pushing ARM in their datacenters for years now.
They need to get competitive in performance first, and they haven’t been for a few decades now.
Even still, developers will know, because their Docker images suddenly stop working. I’d agree with you for shared hosting setups, in the way PHP hosts and a select few Python hosts allow you to upload files onto a shared server and run them.
Even in devops environments, I’m pretty sure nobody is actually juggling raw source files around the servers. Everything is getting neatly pipelined, and those pipelines need to be changed or the code will simply break.
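To make that concrete: the Docker breakage at least is easy to catch mechanically in a pipeline. A rough sketch of the kind of guard I mean — `has_arch` is a name I made up, and in a real pipeline the JSON would come from `docker manifest inspect <image>`:

```shell
#!/bin/sh
# has_arch: read a Docker manifest list (JSON) on stdin and succeed
# only if it lists a build for the given architecture.
has_arch() {
  grep -q "\"architecture\": \"$1\""
}

# Example guard before deploying to an arm64 fleet (illustrative):
#   docker manifest inspect myorg/myimage:latest | has_arch arm64 \
#     || { echo "no arm64 build, fix the pipeline" >&2; exit 1; }
```

Which is exactly the point: the pipeline has to be taught about the new architecture, it doesn’t just quietly keep working.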
I don’t know what PHP is doing in their datacenters, but Facebook is not exactly a normal software company. Their open server architecture is pretty neat, but I don’t think their push for ARM influences any company but their own.
All of that said, I agree that architecture shouldn’t be a problem in practice. If you’re a programmer and you don’t know the difference between ARM and amd64, you’re going to run into much bigger problems than “something is up with my build”. In practice, though, I expect Linus to be right, and that ARM will remain a niche product for at least the foreseeable future unless Ampere manages to pull off the stuff they’ve been promising for years.
There’s one exception, though: small, new IoT startups are moving to a very Raspberry Pi-based ecosystem, to the point of devices literally including full Raspberry Pis. Intel seems to be losing the “small computer but not microcontroller” market pretty badly.
I’ve been using Linux4Tegra since before the M1 silicon, and it’s really not that bad if you are at all used to build chain management. Granted, Nvidia does a lot of the initial heavy lifting here, but to spin up a custom environment you really only need to get the builds done right the first time, and then it’s pretty smooth sailing.