tl;dr: he says “x86 took over the server market” because it was the same architecture developers at companies had on their own machines, which made it very easy to develop applications locally and then ship them to the servers.

Now this, among other points he made, is a very good argument for how and why it is hard for ARM to go mainstream in the datacenter. However, I also feel like he kind of lost touch with reality on this one…

He’s comparing two very different situations, or more specifically, two very different eras. Developers aren’t tied to the underlying hardware like they used to be. The software development market evolved from C to very high-level languages such as JavaScript/TypeScript, and the majority of new software is or will be written in those languages, so the CPU architecture becomes irrelevant.
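
To make that concrete, here’s a minimal sketch (hypothetical, just for illustration) of a Node.js server in TypeScript. Nothing in the application code cares about the CPU; `process.arch` is printed only to show which runtime binary happens to be executing it.

```ts
// The same source runs unchanged on an x86 laptop and an ARM server;
// only the Node.js binary underneath differs.
import { createServer } from "node:http";

const server = createServer((_req, res) => {
  // process.arch reports the architecture the runtime was built for,
  // e.g. "x64" on a developer machine or "arm64" on a Graviton-style server.
  res.end(`Hello from a ${process.arch} server\n`);
});

server.listen(8080, () => {
  console.log(`Listening on :8080 (${process.arch})`);
});
```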

Obviously, very big companies such as Google, Microsoft, and Amazon are more than happy to pay the small “tax” of ensuring JavaScript runs well on ARM rather than the big bucks they pay for x86…

What are your thoughts?

  • To be fair, the people doing ARM optimisations aren’t the people working on enterprise servers. V8, SpiderMonkey, and JavaScriptCore all need to run as fast and efficiently as possible on phones and laptops, constrained by battery power. The goals of enterprise servers and mobile browsers align nicely, but I don’t think there’s much overlap between the two groups of developers.

    Of course someone still needs to write drivers for server hardware, but the JavaScript stuff mostly comes for free from Google (if you use Node) or Apple (if you use Bun).

    • Windex007@lemmy.world

      I think people underestimate the challenges involved when building software systems tightly coupled to the underlying hardware (like if you are a team tasked with building a next-gen server).

      Successful companies in the space don’t underestimate it though, the engineers who do the work don’t underestimate it, and Linus doesn’t underestimate it either.

      The domain knowledge required in your org to mitigate the business risk isn’t trivial. The value proposition always needs to be pretty juicy to overcome the inertia caused by institutional familiarity. Like, can we save a few million on silicon? Sure. Do we think we understand the challenges well enough to keep our hardware release schedules without taking shortcuts that will result in reputational impact? Do we think we have the right people in place to oversee the switch?

      Over and over again, it comes back to “is it worth it?”, and that’s a much more complex question to answer than just picking the cheaper chips.

      I imagine at this point there is probably a metric fuckton of enterprise software that strictly dictates it must be run on x86, even when it doesn’t have to be. If you stray from the vendor’s hardware requirements, bullshit or not, you’ll lose your support. There is likely friction in some consumer segments as well on the uptake.