
CUDA on RISC-V: NVIDIA's silent stab in the back of the x86/ARM cartel

$NVDA (-1.28%)
$ARM (-1.8%)
$9984 (+1.78%)

Anyone who thought NVIDIA would keep cuddling up in its wedding bed with ARM while quietly elbowing AMD's GPUs aside must have missed the clattering from the RISC-V workshop. What has been quietly announced here is nothing less than a seismic tremor in the foundations of AI infrastructure: CUDA now runs on RISC-V host processors. This is not just a technical curiosity, but a strategic broadside against the architecture duopoly of x86 and ARM - and it lands exactly where it hurts most, in the heart of the AI power centers: data centers, edge computing and AI accelerator solutions.


What happened?

RISC-V happily announces that NVIDIA's CUDA, the undisputed control center of GPU-based AI computing, now runs on RISC-V-based host processors. Anyone who thinks this is a clumsy port for hobbyists is very much mistaken: this is an official free pass from NVIDIA for the open-source architecture, coupled with a green light for integration into professional AI workloads. In technical terms, it means that RISC-V can act as the host processor in CUDA-based AI setups going forward. The GPU side remains with NVIDIA, but the host control? From now on, it can do without Intel, AMD or ARM.
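
To make the host/device split concrete, here is a minimal, generic CUDA sketch (not taken from the announcement): the __global__ kernel runs on the NVIDIA GPU, while everything in main() - allocation, copies, the kernel launch - is ordinary host code executed by the CPU. Assuming NVIDIA ships a CUDA toolkit with riscv64 host support, it is exactly this host part that would be compiled for a RISC-V CPU instead of an x86 or ARM one; the device code stays the same.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Device code: runs on the NVIDIA GPU, regardless of the host ISA.
__global__ void scale(float *data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const int n = 1 << 20;
    float *host_buf = new float[n];
    for (int i = 0; i < n; ++i) host_buf[i] = 1.0f;

    // Host code: these runtime calls execute on the host CPU.
    // With a riscv64-host CUDA toolkit (the assumption behind the
    // announcement), this part is compiled for RISC-V instead of
    // x86-64 or AArch64; the kernel above is untouched.
    float *dev_buf = nullptr;
    cudaMalloc(&dev_buf, n * sizeof(float));
    cudaMemcpy(dev_buf, host_buf, n * sizeof(float), cudaMemcpyHostToDevice);

    scale<<<(n + 255) / 256, 256>>>(dev_buf, 2.0f, n);

    cudaMemcpy(host_buf, dev_buf, n * sizeof(float), cudaMemcpyDeviceToHost);
    printf("first element after scaling: %.1f\n", host_buf[0]);

    cudaFree(dev_buf);
    delete[] host_buf;
    return 0;
}
```

Nothing in the kernel cares which ISA issued the launch; the host architecture only matters to the CUDA runtime, driver and toolchain - and that is precisely the layer that now has to exist for RISC-V hosts.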


What's behind this?


License-free ISA = more margin, more control

RISC-V is license-free - and in a world where NVIDIA once tried (and failed) to take over ARM, that is poison for the incumbents' business model: complete architectural freedom without license fees. Perfect for start-ups, for China, and for anyone fed up with Western-priced licensing models.

Scalability without ballast

The modular structure of RISC-V lets you build exactly what you need - a base integer core plus only the extensions a workload actually requires - no more and no less. No unnecessary silicon ballast, no incompatible licensing clutter.

Entry ticket to new markets

Edge AI, dedicated AI SoCs, scientific data centers - wherever ARM or x86 are used today, there could be a RISC-V variant with CUDA support tomorrow. Tenstorrent, for example, shows with the Wormhole n150/n300 what happens when you put a RISC-V kit in Jim Keller's hands.


Why this is explosive

For Intel & AMD: They lose their last bastion - control via the host CPU. The GPU already belongs to NVIDIA; now the processor share of the AI market is slipping away as well.

For ARM: The long-held narrative of the "energy-efficient future of servers" is being dented. If even NVIDIA, once almost the owner of ARM, is now embracing RISC-V, you can imagine how much trust there still is in SoftBank & Co.

For China: Jackpot. RISC-V is open, free, developer-friendly - and now also CUDA-compatible. Who still needs Western CPUs when domestic SoCs will soon be fully AI-capable?


NVIDIA: The laughing third party

You could almost applaud: NVIDIA is securing control of the software ecosystem (CUDA) while strategically emancipating itself from hardware partners. The host processor? Interchangeable. The GPU? Stays green. It's brilliantly opportunistic - and a quiet but effective form of platform cannibalization.


What remains critical?

RISC-V is open, but not finished. Toolchains, debugging, compilers, OS support - all of this still lags behind the industrialized ARM/x86 world. And as long as CUDA only runs on NVIDIA hardware, the golden cage remains - except that its bars are now made of RISC-V.


The first real break with the architectural monopoly

What we are seeing here is not a technical gimmick, but a strategic bang. With CUDA, RISC-V gains access to one of the most important AI ecosystems in the world. ARM and x86 are losing their exclusivity in the most lucrative technology field of the decade. And NVIDIA? It is seizing the opportunity to finally become the center of the AI world - regardless of which processor sits underneath.


Source: RISC-V via X


https://www.igorslab.de/cuda-auf-risc-v-nvidias-stiller-dolchstoss-gegen-das-x86-arm-kartell/
