The real issue is having SOTA, collapse-proof hardware for inference. Apple and Nvidia hardware both rely on proprietary drivers that can brick your server with an over-the-air update. AMD hardware has the generally more resilient open-source Mesa drivers, which could theoretically survive a hostile OEM, but offers fewer options for fine-tuning and training. Intel GPUs are a high-VRAM option, but it's unclear how long they'll keep being supported in software. Everything is a system of tradeoffs.
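For concreteness, here's a minimal sketch (assuming PyTorch is installed) of how one might probe which of these vendors' software stacks a given box can actually run inference on; the `xpu` attribute is an assumption that only holds on newer PyTorch builds or with Intel's extension installed, hence the guards.

```python
# Minimal sketch: probe which accelerator backends this PyTorch build can
# actually drive. Assumes PyTorch is installed; attribute guards cover
# builds where a backend module is absent.
import torch


def available_backends() -> dict:
    return {
        # NVIDIA (CUDA) or AMD (ROCm) -- both surface through torch.cuda;
        # torch.version.hip is non-None only on ROCm builds.
        "cuda_or_rocm": torch.cuda.is_available(),
        "rocm_build": torch.version.hip is not None,
        # Apple Silicon via Metal Performance Shaders.
        "mps": hasattr(torch.backends, "mps") and torch.backends.mps.is_available(),
        # Intel GPUs (XPU); the attribute may be missing on older PyTorch.
        "xpu": hasattr(torch, "xpu") and torch.xpu.is_available(),
    }


if __name__ == "__main__":
    for name, ok in available_backends().items():
        print(f"{name}: {ok}")
```

Running it on a given machine just prints which backends report as usable, which is a quick way to see how much of the stack survives a driver or framework update on each vendor's hardware.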