• schizo@forum.uncomfortable.business

    Exactly: too many people confuse the monopoly aspect with the consumer gaming stuff, which isn’t even pocket change at this point.

CUDA and AI are the whales in the room, and NVIDIA has a stranglehold on that market and should be investigated.

    (Even though, IMO, this is because AMD did their usual shitty job of software, and basically gave the market away.)

    • filister@lemmy.world

Yes, AMD completely overslept here and ROCm is far inferior. But regulators could at least force NVIDIA to open their CUDA libraries and allow translation layers like ZLUDA.

Even so, I think they’ll play the same card as Microsoft: obfuscating things and making them confusing enough to hinder portability.

      • KingRandomGuy@lemmy.world

        But at least regulators can force NVIDIA to open their CUDA library and at least have some translation layers like ZLUDA.

I don’t believe there’s anything stopping AMD from re-implementing the CUDA APIs; in fact, I’m pretty sure that’s exactly what HIP is for, even though the conversion isn’t 100% automatic. AMD probably can’t link against NVIDIA’s closed libraries like cuDNN and cuBLAS, but I don’t know that doing so would be useful anyway, since I’m fairly certain those libraries contain GPU-specific optimizations. AMD ships its own replacements for them (MIOpen, rocBLAS) anyway.
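To sketch what that reimplementation looks like in practice: HIP exposes near name-for-name counterparts of the CUDA runtime API (`cudaMalloc` → `hipMalloc`, `cudaMemcpy` → `hipMemcpy`, and so on), which is why the hipify tools can port most code mechanically. A minimal SAXPY written against HIP (assumes a ROCm or CUDA toolchain with `hipcc`; not runnable without a GPU):

```cuda
// Minimal HIP example. Note how closely it mirrors CUDA: same kernel
// built-ins (blockIdx, blockDim, threadIdx), same triple-chevron launch
// syntax, and runtime calls that differ from CUDA only in prefix.
#include <hip/hip_runtime.h>
#include <cstdio>

__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // identical to CUDA
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1024;
    float hx[n], hy[n];
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    float *dx, *dy;
    hipMalloc(&dx, n * sizeof(float));   // CUDA equivalent: cudaMalloc
    hipMalloc(&dy, n * sizeof(float));
    hipMemcpy(dx, hx, n * sizeof(float), hipMemcpyHostToDevice);
    hipMemcpy(dy, hy, n * sizeof(float), hipMemcpyHostToDevice);

    // Launch syntax is the same as CUDA's kernel<<<grid, block>>>(...)
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);

    hipMemcpy(hy, dy, n * sizeof(float), hipMemcpyDeviceToHost);
    printf("y[0] = %f\n", hy[0]);  // expect 2*1 + 2 = 4
    hipFree(dx);
    hipFree(dy);
    return 0;
}
```

The “not 100% automatic” part shows up once code touches things like inline PTX, warp-size assumptions (32 on NVIDIA vs 64 on most AMD GPUs), or the closed CUDA libraries, which need hand-porting to the ROCm equivalents.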

        IMO, the biggest annoyance with ROCm is that the consumer GPU support is very poor. On CUDA you can use any reasonably modern NVIDIA GPU and it will “just work.” This means if you’re a student, you have a reasonable chance of experimenting with compute libraries or even GPU programming if you have an NVIDIA card, but less so if you have an AMD card.