Cerebras

Pierre Lamond | Aug 19, 2019 | 2 min read

Introducing the Cerebras Wafer Scale Engine (WSE), the largest chip ever built


In the late 1970s, I sat down with the technologist and entrepreneur Gene Amdahl to discuss the prospect of building a “super-chip.” Gene, like me and others with a history of chip design, knew that using a whole wafer, via a method known as wafer scale integration, or WSI, would vastly improve performance. Ultimately, despite Gene’s best efforts, his work on WSI was unsuccessful. The hypothesis was correct, but at the time there were simply too many uncharted, fundamental technical impediments to making wafer scale integration a reality.

Three decades later, I sat down with another entrepreneur, Andrew Feldman, and to my surprise had a similar discussion. I’ve known Andrew for many years; I was an investor in his previous company, SeaMicro (which was acquired by AMD), and have always admired his passion for tackling deeply technical problems and his ability to build world-class teams. Andrew told me he wanted to build a chip. A very big chip. A chip that could meet the needs of the AI community, which was, and still is, repurposing graphics processors to meet new compute needs. You see, big chips process information more quickly and produce answers in less time. This is incredibly important for researchers crunching enormous amounts of data: it enables them to train models faster, test new ideas, and ultimately solve previously unsolvable problems. A chip like that could fuel unprecedented discovery, and the only way to build it was through wafer scale integration. It was a huge endeavor.

Andrew and the Cerebras team have built that chip. Having successfully navigated issues of yield, power delivery, cross-reticle connectivity, packaging, and more, today they unveil the Cerebras Wafer Scale Engine (WSE): the largest chip ever built. And it’s remarkable. Delivering a 1,000x performance improvement over what’s currently available, the Cerebras WSE comprises more than 1.2 trillion transistors and measures 46,225 square millimeters. It also contains 3,000 times more high-speed, on-chip memory and has 10,000 times more memory bandwidth. For comparison, the first chips I created in the 1960s featured a few hundred transistors.

As an early employee at Fairchild Semiconductor and a co-founder of National Semiconductor, I saw the unprecedented impact of the industry’s shift from the transistor to the integrated circuit. Today, we’re at a similar inflection point for compute. The world is waiting for AI to fulfill its potential, and that can only happen with a dedicated chip. A chip designed from the ground up for AI work. Every once in a while, a technology company comes along and defines a generation:

Cerebras will define the AI generation.

Follow Eclipse Ventures on LinkedIn and Twitter for the latest on the Industrial Evolution.
