Graphcore, a Bristol, U.K.-based startup developing chips and systems to accelerate AI workloads, today announced it has raised $222 million in a series E funding round led by the Ontario Teachers’ Pension Plan Board. The investment, which values the company at $2.77 billion post-money and brings its total raised to date to $710 million, will be used to support continued international expansion and to further accelerate future silicon, systems, and software development, a spokesperson told VentureBeat.
The AI accelerators Graphcore is developing — which the company calls Intelligence Processing Units (IPUs) — are a type of specialized hardware designed to speed up AI applications, particularly neural networks, deep learning, and machine learning. They’re multicore in design and focus on low-precision arithmetic or in-memory computing, both of which can boost the performance of large AI algorithms and lead to state-of-the-art results in natural language processing, computer vision, and other domains.
Graphcore, which was founded in 2016 by Simon Knowles and Nigel Toon, released its first commercial product, a 16-nanometer PCI Express card called the C2, which became available in 2018. It’s this package that launched on Microsoft Azure in November 2019 for customers “focused on pushing the boundaries of [natural language processing]” and “developing new breakthroughs in machine intelligence.” Microsoft is also using Graphcore’s products internally for various AI initiatives.
Earlier this year, Graphcore announced the availability of the DSS8440 IPU Server in partnership with Dell and launched Cirrascale IPU-Bare Metal Cloud, an IPU-based managed service offering from cloud provider Cirrascale. More recently, the company revealed some of its other early customers — among them Citadel Securities, Carmot Capital, the University of Oxford, J.P. Morgan, Lawrence Berkeley National Laboratory, and European search engine company Qwant — and open-sourced its libraries for building and running apps on IPUs on GitHub.
In July, Graphcore unveiled the second generation of its IPUs, which will soon be made available in the company’s M2000 IPU Machine. (Graphcore says its M2000 IPU products are now shipping in “production volume” to customers.) The company claims the new GC200 chip will enable the M2000 to achieve a petaflop of processing power in a 1U datacenter blade enclosure that measures the width and length of a pizza box.
The M2000 is powered by four of the new 7-nanometer GC200 chips, each of which packs 1,472 processor cores (running 8,832 threads) and 59.4 billion transistors on a single die, and it delivers more than 8 times the processing performance of Graphcore’s existing IPU products. In benchmark tests, the company claims the four-GC200 M2000 ran an image classification model — Google’s EfficientNet B4, with 88 million parameters — more than 32 times faster than an Nvidia V100-based system and more than 16 times faster than the latest 7-nanometer graphics card. A single GC200 can deliver up to 250 TFLOPS, or 250 trillion floating-point operations per second.
Beyond the M2000, Graphcore says customers will be able to connect as many as 64,000 GC200 chips for up to 16 exaflops of computing power and petabytes of memory, supporting AI models with theoretically trillions of parameters. That’s made possible by Graphcore’s IPU-POD and IPU-Fabric interconnection technologies, which support low-latency data transfers at rates of up to 2.8Tbps and connect directly with IPU-based systems (or via Ethernet switches).
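The headline compute figures hang together arithmetically. A minimal sanity check, assuming the stated 250 TFLOPS per GC200, four chips per M2000, and a maximum 64,000-chip configuration:

```python
# Sanity-check Graphcore's claimed compute figures.
# Assumptions (from the article): 250 TFLOPS per GC200 chip,
# 4 chips per M2000, up to 64,000 chips in a fully scaled-out system.

TFLOPS_PER_GC200 = 250   # trillion floating-point operations per second, per chip
CHIPS_PER_M2000 = 4
MAX_CHIPS = 64_000

# 4 chips x 250 TFLOPS = 1,000 TFLOPS = 1 petaflop per M2000 blade.
m2000_petaflops = TFLOPS_PER_GC200 * CHIPS_PER_M2000 / 1_000

# 64,000 chips x 250 TFLOPS = 16,000,000 TFLOPS = 16 exaflops.
scaled_exaflops = TFLOPS_PER_GC200 * MAX_CHIPS / 1_000_000

print(m2000_petaflops)  # 1.0
print(scaled_exaflops)  # 16.0
```

Both results match the article's claims of a petaflop per M2000 and 16 exaflops at full scale.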
The GC200 and M2000 are designed to work with Graphcore’s bespoke Poplar, a graph toolchain optimized for AI and machine learning. It integrates with Google’s TensorFlow framework and the Open Neural Network Exchange (an ecosystem for interchangeable AI models), in the latter case providing a full training runtime. Preliminary compatibility with Facebook’s PyTorch arrived in Q4 2019, with full feature support following in early 2020. The newest version of Poplar introduced exchange memory management features intended to take advantage of the GC200’s unique hardware and architectural design with respect to memory and data access.
Graphcore may have momentum on its side, but it has competitors in a market that is expected to reach $91.18 billion by 2025. In March, Hailo, a startup developing hardware designed to speed up AI inferencing at the edge, nabbed $60 million in venture capital. California-based Mythic has raised $85.2 million to develop custom in-memory architecture. Mountain View-based Flex Logix in April launched an inference coprocessor it claims delivers up to 10 times the throughput of existing silicon. And last November, Esperanto Technologies secured $58 million for its 7-nanometer AI chip technology.
Beyond the Ontario Teachers’ Pension Plan Board, Graphcore’s series E saw participation from funds managed by Fidelity International and Schroders. They joined existing backers Baillie Gifford, Draper Esprit, and others.