NVIDIA announces new tech for training Giant Artificial Intelligence Models
CIO Bulletin
2021-04-19
At GTC 2021, NVIDIA made several announcements, including the development and deployment of a newly designed AI platform to accommodate the training of future AI models whose parameter counts begin to rival the number of synapses in a human brain. Ian Buck, NVIDIA's VP and GM of Accelerated Computing, said there has been dramatic growth in the size of AI models, particularly language (NLP) models.
For comparison, in 2018 Google published a transformer-based machine learning technique for natural language processing (NLP) pre-training. This early model, called BERT, had 340 million parameters. BERT was open-sourced and developed for a wide range of search engine-related tasks such as question answering and language inference.
Today's largest language models are far beyond BERT, having surpassed 175 billion parameters. For comparison, a human brain has roughly 150 trillion synaptic connections. As AI model sizes continue to grow, NVIDIA expects that by 2023 models will have 100 trillion or more parameters. Models of that size will exceed the technical capabilities of existing platforms.
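To put those parameter counts in perspective, here is a minimal back-of-the-envelope sketch in Python. It assumes 2 bytes per parameter (FP16 weights only) and compares against an 80 GB NVIDIA A100, the largest single-GPU memory available around GTC 2021; both assumptions are ours rather than NVIDIA's, and real training footprints are several times larger once gradients and optimizer state are included.

# Back-of-the-envelope memory estimates for the model sizes mentioned above.
# Assumes 2 bytes per parameter (FP16 weights only); training additionally
# needs gradients and optimizer state, so actual footprints are much larger.

BYTES_PER_PARAM = 2  # FP16

models = {
    "BERT (2018)": 340e6,            # 340 million parameters
    "175B-parameter model": 175e9,   # today's largest language models
    "Projected 2023 model": 100e12,  # NVIDIA's 100-trillion-parameter forecast
}

A100_MEMORY_GB = 80  # largest single-GPU memory around GTC 2021 (assumption)

for name, params in models.items():
    weights_gb = params * BYTES_PER_PARAM / 1e9
    ratio = weights_gb / A100_MEMORY_GB
    print(f"{name}: {weights_gb:,.2f} GB of FP16 weights "
          f"(~{ratio:,.2f}x an 80 GB A100)")

Even under these generous assumptions, a 100-trillion-parameter model needs around 200 TB just to hold its weights, which is why NVIDIA argues such models exceed what existing platforms can support.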
NVIDIA has not yet released detailed architecture information on the newly announced Grace CPU, which is expected to become available in 2023. The key breakthrough for NVIDIA is a CPU designed for giant-scale AI and HPC applications, tightly coupled to NVIDIA GPUs through a high-speed interconnect between the two processors. The link offers 900 gigabytes/second of bandwidth, roughly 14x faster than today's standards, and should allow trillion-parameter models to be trained and to perform inference in real time.
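As a rough illustration of what that bandwidth means at trillion-parameter scale, the sketch below estimates how long it would take to stream the FP16 weights of a 1-trillion-parameter model over the 900 GB/s link versus a conventional link at roughly 900/14 ≈ 64 GB/s (about a PCIe Gen4 x16 connection). The FP16 and single-stream assumptions are ours, not NVIDIA's, and protocol overhead is ignored.

# Time to stream the FP16 weights of a 1-trillion-parameter model
# over the 900 GB/s link described above versus a conventional link
# (900 / 14 is roughly 64 GB/s, about a PCIe Gen4 x16 connection).
# Ignores protocol overhead and assumes the weights move as one stream.

PARAMS = 1e12          # 1 trillion parameters
BYTES_PER_PARAM = 2    # FP16
model_bytes = PARAMS * BYTES_PER_PARAM  # 2 TB of weights

links = [
    ("Grace interconnect (900 GB/s)", 900),
    ("Conventional link (~64 GB/s)", 64),
]

for name, bandwidth_gbs in links:
    seconds = model_bytes / (bandwidth_gbs * 1e9)
    print(f"{name}: {seconds:.1f} s to transfer 2 TB of weights")

The gap (about 2 seconds versus about 31 seconds per full pass over the weights) suggests why NVIDIA treats interconnect bandwidth, not just compute, as the bottleneck for trillion-parameter training and real-time inference.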