This summer, Cerebras Systems unveiled its giant chip, the Cerebras Wafer Scale Engine. The world's largest chip, with 1.2 trillion transistors, sits at the heart of the company's system designed to accelerate deep learning.
At the Supercomputing 2019 conference this week, the company showed off that system, the CS-1, which measures 66 centimeters tall. The system draws 20 kW in total: the chip itself consumes 15 kW, the cooling subsystems use another 4 kW, and roughly 1 kW is lost to power-delivery inefficiency.
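As a quick sanity check, the reported figures do add up to the 20 kW total; a minimal sketch using only the numbers above:

```python
# Power budget of the CS-1 as reported: 20 kW total system draw.
total_kw = 20      # total system power
compute_kw = 15    # consumed by the wafer-scale chip itself
cooling_kw = 4     # used by the cooling subsystems

# The remainder is lost to power-delivery inefficiency.
loss_kw = total_kw - compute_kw - cooling_kw
print(loss_kw)  # 1
```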
The CS-1 is powered by the Cerebras Wafer Scale Engine, which is 56 times larger than the largest GPU ever built, has 78 times as many cores, and carries 3,000 times as much on-chip memory. In other words, it is very fast. It also works with open-source ML frameworks such as PyTorch and TensorFlow for added flexibility.
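Cerebras has not published the details of its software stack, but framework compatibility means that an ordinary PyTorch training step like the one below is the kind of workload the CS-1 is meant to run. This is a generic PyTorch sketch on dummy data, not Cerebras-specific code; the model and sizes are made up for illustration:

```python
import torch
import torch.nn as nn

# A small feed-forward model; sizes are arbitrary for this sketch.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

# Dummy batch standing in for real training data.
x = torch.randn(8, 16)
y = torch.randn(8, 1)

# One standard training step: forward, backward, update.
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
```

The pitch is that code like this runs unchanged, with the hardware shrinking the time each such step takes at scale.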
Many of the chip's hardware specifications are not yet known. The company said that details such as clock speed will be announced soon. Even without full specifications, one thing is clear: it will be quite expensive.
A company spokesman told Tom’s Hardware that it would cost “a few million dollars,” though he did not specify an exact price. That hasn’t discouraged everyone: Argonne National Laboratory, run by the University of Chicago Argonne LLC, already has one, which it uses for cancer research and basic science experiments.
The chip was developed for use in complex artificial intelligence applications. “Reducing training time eliminates a major bottleneck across the industry,” said Andrew Feldman, the company’s founder and CEO. According to the company, training runs will drop from months to minutes.