Market Desk (Tech): Microsoft built a supercomputer for OpenAI after investing US$1bn in the startup in 2019. The challenge was to train a large set of AI programs called models, which required powerful cloud computing services for long periods of time. To meet this challenge, Microsoft had to string together tens of thousands of Nvidia A100 graphics chips and change the way servers were positioned on racks to prevent power outages.
The supercomputer allowed OpenAI to release ChatGPT, a viral chatbot that attracted more than 1mn users within days of going public in November 2022. Microsoft now uses the same set of resources to train and run its own large AI models, including the new Bing search bot introduced last month, and sells the system to other customers.
Microsoft is now developing a system that can work at even larger scale. Scott Guthrie, Microsoft’s executive vice president of cloud and AI, said ChatGPT’s current model was trained on supercomputers that are several years old. Work on the next generation of supercomputers has already begun. The next version of ChatGPT will be built on GPT-4, a model Microsoft may unveil at an AI event on Thursday.
The new machines use the newest version of Nvidia’s networking technology to share data even faster. The new Bing is still in preview, with Microsoft gradually adding more users from a waitlist. The team working on it holds a daily meeting to figure out how to bring greater amounts of computing capacity online quickly and to fix problems that crop up.