What really fucking gets me about this article is the note that Moore already has a stack that can run CUDA code. That's the thing AMD hasn't been able to pull their heads out of their asses and deliver fully and completely for years, and here Moore has it ready to go at what is essentially the beginning of the Chinese homegrown GPU story. This is going to be a decades-long project, and they already have that critical part built. American computing is a fucking joke and should be burned to the ground.
I’m well aware. While I don’t personally work on the GPU-related stuff, I do work at a company that does a lot of GPU computing. My opinions on this topic are mostly informed by coworkers who write GPU code, specifically a lot of OpenCL kernels. OpenCL has plenty of shortcomings, and the kernel model is a bitch to work with. But that isn’t the point; the point is compatibility. You can have a really good GPU, but if you don’t have adversarial compatibility with your competitors, it will just die. Specifically, AMD have done a shit job of making CUDA run on AMD GPUs. ROCm is a disjointed mess: it sucked when I had to work with it in uni, and it still sucks now. CUDA is bad and proprietary, but any modern GPU should still be able to run CUDA code simply because it’s useful to be able to do so, and because there’s a lot already built on it that should remain accessible.
The fact that AMD have not been able to get a component as critical as their adversarial compatibility layer working, while Moore has already implemented it for their early generation of cards, shows: