OpenAI worked with NVIDIA to team together thousands of AI GPUs for ChatGPT's recent GPT-4 model, but now the companies are reportedly working together to secure 10 million NVIDIA AI GPUs for OpenAI's next advanced AI model.
ChatGPT is already using around 20,000 of NVIDIA's latest AI-focused GPUs, which will build out new models in the coming months. But now Wang Xiaochuan, the businessman and founder of Chinese search engine Sogou, says that OpenAI has a more advanced AI computing model that uses far more advanced training methods, and that a cluster of 10 million NVIDIA AI GPUs working together is the soul of OpenAI and its plan for AI dominance.
Wang has invested in a new AI firm in China called Baichuan Intelligence, which wants to be OpenAI's main competitor in China. The company recently unleashed its Baichuan-13B language model, which runs on consumer-level hardware like an NVIDIA GeForce RTX series graphics card.
But back to the 10 million NVIDIA AI GPUs powering the future of OpenAI's language model: the thought, and the processing power, of thousands of GPUs working together on AI is already cool enough, but 10 million combined? Well, that's incredible. I'd love to walk around inside of that facility when it's finished, nerding out the entire time.
NVIDIA makes around 1 million AI GPUs per year, so OpenAI's request for 10 million AI GPUs isn't just a mountain of GPUs... it's roughly a decade of NVIDIA's entire output at current rates. Not only is it an unfathomable amount of AI computing power, but it's also an unfathomable number of GPUs for NVIDIA to manufacture. OpenAI would surely be the top player in the AI space at that point.
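For anyone who wants to sanity-check that claim, here's a quick back-of-the-envelope sketch using the article's own rough figures (the 1 million GPUs per year rate and the 10 million GPU request are reported numbers, not official NVIDIA or OpenAI production data):

```python
# Back-of-the-envelope math using the figures reported in this article.
# These are assumptions, not official NVIDIA or OpenAI numbers.
gpus_requested = 10_000_000   # OpenAI's reported AI GPU request
gpus_per_year = 1_000_000     # NVIDIA's rough annual AI GPU output

years_of_production = gpus_requested / gpus_per_year
print(f"That order would consume ~{years_of_production:.0f} years "
      f"of NVIDIA's entire AI GPU production")
# → That order would consume ~10 years of NVIDIA's entire AI GPU production
```

Of course, NVIDIA could ramp production well beyond 1 million units per year, which would shrink that timeline considerably.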