Okay... but as someone who was at Nvidia Headquarters today I feel like I have to play Devil's Advocate here just a little 😁
The AMD Strix doesn't have CUDA support, which can be a deal breaker to A LOT of developers. And the Spark's 128GB of RAM is available to both the CPU and GPU thanks to the Grace Blackwell architecture. I believe the Spark can handle up to 200B-parameter models, where the Strix tops out around 70B-parameter FP16 models. Not to mention, if you have 2 Sparks you can link them together for even more headroom.
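The memory math behind those ceilings is easy to sketch. Here's a rough back-of-envelope calculation (weights only; ignores KV cache, activations, and runtime overhead, which add more on top):

```python
# Rough memory footprint for a dense LLM's weights.
# Ignores KV cache, activations, and runtime overhead.
def model_gb(params_billion: float, bytes_per_param: float) -> float:
    # 1B params at 1 byte/param is roughly 1 GB of weights
    return params_billion * bytes_per_param

# FP16 = 2 bytes/param, Q8 = ~1 byte/param, Q4 = ~0.5 bytes/param
print(model_gb(70, 2))    # 70B FP16 -> ~140 GB of weights
print(model_gb(120, 1))   # 120B Q8  -> ~120 GB (inside 128 GB unified memory)
print(model_gb(200, 0.5)) # 200B Q4  -> ~100 GB
```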
Now that being said, the "Founders Edition" of the Spark is $4,000 and the Strix is in the $2,000 range, I think.
But other OEMs like ASUS are selling the Spark for around $3,000, so mileage may vary.
I paid $1,800 for my Strix and I run a 120B Q8 model at 50 tokens/sec. I have run Qwen3 235B as well.
CUDA is becoming less of a deal breaker with ROCm improving rapidly.
I don’t take the Strix or the Spark seriously, in my opinion they are both toys.
Well yeah, they are toys compared to enterprise infrastructure. But both of those run on normal 120V power outlets, so they can be used in everyday homes.
A DGX runs on 240V C19/C20 cables, so not really an option for 'normal' people.
So get 2 Spark devices connected over their ConnectX-7 ports and you can run any 'consumer-grade' model.
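To make the two-Spark claim concrete, here's the same back-of-envelope math for pooled memory (assumes weights split cleanly across the two nodes and ignores interconnect overhead, KV cache, and per-node runtime costs):

```python
# Back-of-envelope: what fits when two Sparks pool memory over ConnectX-7.
# Assumes weights split evenly across nodes; ignores interconnect latency,
# KV cache, and runtime overhead.
POOL_GB = 2 * 128  # two Sparks, 128 GB unified memory each

def fits(params_billion: float, bytes_per_param: float) -> bool:
    return params_billion * bytes_per_param <= POOL_GB

print(fits(235, 1))    # Qwen3 235B at Q8 (~235 GB) -> True
print(fits(405, 0.5))  # a 405B model at Q4 (~203 GB) -> True
print(fits(405, 2))    # a 405B model at FP16 (~810 GB) -> False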
And CUDA being less of a deal breaker is true, but going from 99% market share to 94% is still 'less'; it doesn't change the fact that nearly all enterprise AI developers still utilize CUDA :)
But enough Devil's Advocate: the Spark is cool and a super efficient tool for running consumer AI models, but so is the AMD Strix.