The large API/token providers and large consumers are all investing in their own hardware. So NVIDIA is in an interesting position: the market is growing, and they're taking the lion's share of the enterprise segment, but they're shrinking on the hyperscaler side (Google is a good example as it shifts more and more compute to TPUs). So they have shrinking market share, but it's not very visible.
> The large API/token providers and large consumers are all investing in their own hardware.
Which is absolutely the right move when your latest datacenter's power draw is literally measured in gigawatts. Power-efficient training/inference hardware simply does not look like a GPU at the hardware design level (though admittedly, it looks even less like an ordinary CPU). It's more like something that runs dog slow wrt. max design frequency but more than makes up for it with extreme throughput per watt, i.e. low energy cost per elementary operation.
The whole field of "neuromorphic" hardware design has long shown the broad feasibility of this (and TPUs are already a partial step in that direction), so it seems like an obvious response to current trends in power and cooling demands for big AI workloads.
Market share can shrink, but if the TAM is growing you can still grow in absolute terms. A quick sketch of the arithmetic below.
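A back-of-the-envelope illustration (the share and TAM figures here are made up purely for illustration, not actual market data):

    # Hypothetical numbers: a vendor's share falls from 90% to 60%
    # while the total addressable market doubles.
    tam_before, share_before = 100e9, 0.90   # $100B market, 90% share
    tam_after, share_after = 200e9, 0.60     # $200B market, 60% share

    revenue_before = tam_before * share_before  # $90B
    revenue_after = tam_after * share_after     # $120B

    growth = revenue_after / revenue_before - 1
    print(f"Revenue: ${revenue_before/1e9:.0f}B -> ${revenue_after/1e9:.0f}B "
          f"({growth:+.0%})")  # Revenue: $90B -> $120B (+33%)

So a 30-point share loss against a doubling TAM still nets out to meaningful absolute growth, which is why the shrinkage "isn't super visible" in headline numbers.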
But will the whole pie grow or shrink?