Today, Radium, a startup that aims to use artificial intelligence and machine learning to extract more computing power from cloud hardware, announced it was leaving stealth mode and deploying its solutions to cloud datacenters run by Cyxtera in Toronto, the New York and New Jersey metro area, and Silicon Valley.
The main product, called Launchpad, lets users start and shut down projects on bare metal machines, eliminating the extra layers of hypervisors and virtualization software. Radium offered benchmark tests on machine learning jobs that showed speed increases ranging from 30% to 140%.
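To make those percentages concrete: a "140% speed increase" means the same job finishes 2.4x faster. The conversion between runtimes and percentage speedup can be sketched as follows (a generic calculation, not Radium's benchmark methodology; the timings are hypothetical):

```python
def speedup_percent(virtualized_seconds: float, bare_metal_seconds: float) -> float:
    """Percent speed increase when moving a job from a virtualized
    environment to bare metal, given the runtime in each."""
    return (virtualized_seconds / bare_metal_seconds - 1) * 100

# Hypothetical training-job timings:
print(speedup_percent(130.0, 100.0))  # a 30% speed increase
print(speedup_percent(240.0, 100.0))  # a 140% speed increase (2.4x faster)
```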
“Our initial testing shows that bare metal servers offer a good cloud computing platform for the high-performance deep learning and inference workloads required for these types of applications,” said Srinivasa Narasimhan, a professor at Carnegie Mellon’s School of Computer Science, who has been working with the company to test its product.
Improving the performance of hypervisors
Many cloud products rely heavily on virtualization software layers, or “hypervisors,” that allow one physical machine to simulate a variety of smaller machines that appear independent to users. But implementing hypervisors comes at a cost. The simulation often adds a small delay to tasks like retrieving data from a disk or sending a packet of data on the internet.
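The effect of that per-operation delay compounds quickly for I/O-heavy workloads. A minimal illustrative model (all overhead figures here are hypothetical, not Radium benchmarks) shows how a fixed per-operation virtualization cost eats into throughput:

```python
# Illustrative model of hypervisor overhead on I/O-bound work.
# The microsecond figures are hypothetical; real overheads vary
# widely by workload, hypervisor, and hardware.

def effective_ops_per_sec(base_op_us: float, overhead_us: float) -> float:
    """Operations per second once a fixed per-operation
    virtualization delay is added to each operation's base cost."""
    return 1_000_000 / (base_op_us + overhead_us)

bare_metal = effective_ops_per_sec(base_op_us=100.0, overhead_us=0.0)
virtualized = effective_ops_per_sec(base_op_us=100.0, overhead_us=25.0)

print(f"bare metal:  {bare_metal:,.0f} ops/s")   # 10,000 ops/s
print(f"virtualized: {virtualized:,.0f} ops/s")  # 8,000 ops/s
print(f"slowdown:    {bare_metal / virtualized:.2f}x")  # 1.25x
```

The shorter each underlying operation, the larger the relative cost of the fixed overhead, which is one reason latency-sensitive training and inference workloads are a natural fit for bare metal.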
“At a high level, we felt the dilutive nature of middleware and hypervisor stacks within the leading cloud providers’ architecture could be improved upon,” said Adam Hendin, Radium CEO and cofounder.
Radium’s bare metal approach targets the market for AI in research labs and enterprise stacks. The deployment software built into Launchpad is designed to optimize performance for the AI algorithms running on it.
The product is also priced with an eye toward the data-heavy workloads common in the field. Radium charges nothing extra for data ingress and egress, an often overlooked category of fees that can produce large surprise bills on other clouds.
Listening to the ML community
“We’ve designed and purpose-built a cloud platform to exceed the needs of the most demanding AI workloads,” explained Hendin. “A public cloud with the benefits of a private cloud. We’ve also been listening to the ML community who are frustrated by non-portable software, along with heavy data egress fees within the leading cloud providers, which wall customers in.”
Other firms are experimenting with using AI algorithms to allocate resources in public clouds. Zesty, for example, offers a product that will watch over instances in public clouds like AWS and make money-saving decisions by shrinking overprovisioned machines or even shutting them down.
Cyxtera already runs 61 datacenters in 21 markets around the world. The company mainly specializes in colocation and bare metal servers for enterprise customers, but it is expanding into services such as AI/ML training on Nvidia hardware.
Deploying Radium in the datacenter
“Cyxtera’s global platform and scalable interconnection solutions provide Radium and their customers a reliable foundation to grow their businesses as their workloads scale,” said Holland Barry, senior vice president and field CTO for Cyxtera.
Cyxtera has deployed the current version of Radium Launchpad to three datacenters so far and plans to expand to more soon.
“Radium is breaking the mold of cloud automation with its Launchpad ecosystem, enabling a simple process for provisioning, configuration, and deployment,” said Barry.