Brought to you by Data Center Knowledge
Chinese cloud giant Tencent’s bid to become a force to be reckoned with in artificial intelligence will be powered by Nvidia’s most advanced GPUs inside its data centers.
Deep learning, the most widely used type of AI, is already enhancing cloud services offered by Tencent’s American and Chinese competitors, including Google, Microsoft, Baidu, and Alibaba. The company has been ramping up investment in AI in a big way; Ma Huateng, its founder and chairman, views AI innovation as a matter of life and death for the business.
In the data center, Nvidia’s Tesla GPUs, based on the chipmaker’s Pascal architecture, are currently state-of-the-art hardware for deep learning. Cloud giants pack as many as eight of these high-power units onto a server motherboard along with Intel CPUs, using them to teach computers to predict search results, identify objects in photos, and serve targeted ads.
The resources cloud giants have been pouring into AI are driving a surge in demand for hardware like Nvidia’s. “We’ve been working with a lot of the hyper-scales – really all of the hyper-scales – in the large data centers,” Rob Ober, Tesla chief platform architect at Nvidia, told Data Center Knowledge in an interview earlier this month.
These companies are also using GPUs to provide AI-enhanced cloud services to their enterprise customers, and some of them let customers rent “bare-metal” GPUs the same way they rent virtual machines in the cloud to run their own deep learning workloads.
“Tencent Cloud GPU offerings with Nvidia’s deep learning platform will help companies in China rapidly integrate AI capabilities into their products and services,” Tencent Cloud VP Sam Xie said in a statement.
Tencent’s deployment of Tesla P100 and P40 GPUs and Nvidia’s deep learning software is focused primarily on such enterprise cloud services. The Chinese company has been offering cloud services running on earlier-generation Tesla M40 GPUs since December. It expects to introduce cloud servers with up to eight GPUs each by mid-year.