Nvidia CEO Jensen Huang recently hosted another spring GTC event, one that drew more than 200,000 participants. And while he didn’t succeed in acquiring Arm for $80 billion, he had a lot to show off to those gathered at the big event.
He gave an update on Nvidia’s plans for Earth-2, a digital twin of our planet that — with enough supercomputing simulation capability within the Omniverse — could enable scientists to predict climate change for our planet. The Earth-2 simulation will require the best technology, like Nvidia’s newly announced Hopper graphics processing unit (GPU) and its upcoming Grace central processing unit (CPU).
Huang fielded questions about the ongoing semiconductor shortage, the possibility of investing in manufacturing, competition with rivals, and Nvidia’s plans in the wake of the collapse of the Arm deal. He conveyed a sense of calm that Nvidia’s business is still strong (Nvidia reported revenues of $7.64 billion for its fourth fiscal quarter ended January 30, up 53% from a year earlier). Gaming, data center, and professional visualization market platforms each achieved record revenue for the quarter and year. He also talked about Nvidia’s continuing commitment to the self-driving vehicle market, which has been slower to take off than expected.
Huang held a Q&A with the press during GTC, and I asked him about Earth-2 and the Omniverse (I also moderated a panel on the industrial metaverse at GTC). I was part of a large group of reporters asking questions.
Here’s an edited transcript of our collective interview.
Question: With the war in Ukraine and continuing worries about chip supplies and inflation in many countries, how do you feel about the timeline for all the things you’ve announced? For example, in 2026 you want to do DRIVE Hyperion. With all the things going into that, is there even a slight amount of worry?
Jensen Huang: There’s plenty to worry about. You’re absolutely right. There’s a lot of turbulence around the world. I have to observe, though, that in the last couple of years, the facts are that Nvidia has moved faster in the last couple of years than potentially its last 10 years combined. It’s possible that we’re very comfortable being a digital company. It’s possible that we’re quite comfortable working remotely and collaboratively across the planet. It’s quite possible that we work better, actually, when we allow our employees to choose when they’re most productive and let them optimize, let mature people optimize their work environment, their work time frame, their work style around what best fits for them and their families. It’s very possible that all of that is happening.
It’s also true, absolutely true, that it has forced us to put a lot more energy into the virtual work that we do. For example, the work around Omniverse went into light speed in the last couple of years because we needed it. Instead of being able to come into our labs to work on our robots, or go to the streets and test our cars, we had to test in virtual worlds, in digital twins. We found that we could iterate our software just as well in digital twins, if not better. We could have millions of digital twin cars, not just a fleet of 100.
There are a lot of things here. One, it’s possible that the world doesn’t have to get dressed and commute to work. Maybe this hybrid work approach is quite good. But it’s definitely the case that forcing ourselves to be more digital than before, more virtual than before, has been a positive.
Question: Do you see your chip supply continuing to be robust?
Huang: Chip supply question. Here’s what we did. The moment that we started to experience challenges–our demand was high, and demand remains high. We started to experience challenges in the supply chain. The first thing we did was we started to create diversity and redundancy, which are the first principles of resilience. We realized we needed more resilience going forward. Over the last couple of years we’ve built in diversity in the number of process nodes that we use. We qualified a lot more process nodes. We’re in more fabs than ever. We qualified more substrate vendors, more assembly partners, more system integration partners. We’ve second sourced and qualified a whole bunch more external components.
We’ve expanded our supply chain and supply base probably fourfold in the last two years. That’s one of the areas where we’ve dedicated ourselves. Nvidia’s growth rate wouldn’t be possible without that. This year we’ll grow even more. When you’re confronted with adversity and challenges, it’s important to go back to first principles and ask yourself, “This is not likely going to be a once in a lifetime thing. What could we do to be more resilient? What could we do to diversify and expand our supply base?”
Question: I’m curious about the progress on Earth-2 and the notion that what you build there in Omniverse could be reusable for other applications. Do you think that’s feasible, that this will be useful for more than just climate change prediction? And I don’t know if there are different kinds of pieces of this that you’re going to finish first, but could you do climate change prediction for part of the Earth? A milestone with lower detail that proves it out?
Huang: First of all, several things have happened in the last 10 years that made it possible for us to even consider doing this. The three things that came together, the compound effect gave us about a million times speed-up in computation. Not Moore’s Law, 100 times in 10 years, but a million.
The first thing we did was accelerated computing, which parallelized software. If you parallelize software, then you can scale it out beyond the GPU into multi-GPU and multi-node, to an entire data center scale. That’s one of the reasons why our partnership with Mellanox, which ended in our combination, was so important. We discovered that we could parallelize not only at the chip level, but also at the node level and the data center level. That scale-out and scale-up led to 20X times another 100X, another 1,000X if you will.
The next thing that happened, that capability led to the invention and democratization of AI. The algorithm of AI was invented, and then it came back and solved physics. Physics ML, physics-informed neural networks. Some of the important work we do in Nvidia Research that led to Fourier neural operators. Basically a partial differential equation learner, a universal function approximator. An AI that can learn physics that then comes back to predict physics.
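The Fourier neural operator Huang mentions is, at its core, a learned filter applied in frequency space. As a rough one-dimensional illustration (not Nvidia's implementation: this toy uses a naive DFT and hand-set weights, where a real FNO uses FFTs and weights learned during training), the key operation of a Fourier layer is: transform, reweight the low-frequency modes, transform back.

```python
import cmath
import math

def dft(x):
    """Naive discrete Fourier transform (real FNOs use FFTs)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Inverse DFT, returning the real part."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)).real / N
            for n in range(N)]

def fourier_layer(x, weights, kmax):
    """Keep only modes up to kmax, scale each kept mode by a weight
    (hand-set here; learned in a real FNO), and transform back."""
    N = len(x)
    X = dft(x)
    out = [0j] * N
    for k in range(N):
        freq = k if k <= N // 2 else N - k  # fold negative frequencies
        if freq <= kmax:
            out[k] = weights.get(freq, 1.0) * X[k]
    return idft(out)

# A signal with one slow mode plus high-frequency "noise"
N = 64
signal = [math.sin(2 * math.pi * n / N) + 0.5 * math.sin(2 * math.pi * 12 * n / N)
          for n in range(N)]

# With identity weights and kmax=4, the layer passes the slow mode
# and removes the fast one
smooth = fourier_layer(signal, {}, kmax=4)
```

A trained FNO stacks layers like this, with learned complex weights per mode plus a pointwise path, which is what lets it approximate the solution operator of a PDE rather than a single solution.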
We just announced this week FourCastNet, which is based on the Fourier neural operator. It learned from a numerical simulation model across about 10 years’ worth of data. Afterward, it was able to predict climate with more accuracy and five orders of magnitude faster. Let me explain why that’s important. In order for us to understand regional climate change, we have to simulate not a 10-kilometer resolution, which is where we are today, but a one-meter resolution. Most scientists will tell you that the amount of computation necessary is about a billion times more, which means that if we had to go and just use traditional methods to get there, we’d never get there until it’s too late. A billion times is a long time from now.
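The "about a billion times" figure is easy to sanity-check with back-of-envelope arithmetic (the factor of ten for timestep refinement below is my own rough assumption, not a figure from the interview):

```python
# Refining a climate grid from 10 km resolution to 1 m resolution:
cell_refinement = 10_000  # 10 km = 10,000 m, so 10,000x finer per axis

# Two horizontal dimensions alone multiply the cell count by 10^8
horizontal_cells = cell_refinement ** 2

# A finer grid also forces shorter timesteps (the CFL stability condition);
# granting just one extra order of magnitude for time stepping:
timestep_factor = 10

total_factor = horizontal_cells * timestep_factor  # 10^9
```

Even this conservative sketch lands at a billion-fold increase, which is why Huang argues that brute-force numerical simulation alone can't get there in time.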
We’re going to take this challenge and solve it in three ways. The first thing we’re going to do is make advances in physics ML, creating AI that can learn physics, that can predict physics. It doesn’t understand physics, because it’s not first-principle-based, but it can predict physics. If we can do that at five orders of magnitude, and maybe even more, and we create a supercomputer that’s designed for AI–some of the work I just announced with Hopper and future versions of it is going to take us further into those worlds. This ability to predict the future – or, if you will, do a digital twin – doesn’t understand it on first principles, because it still takes scientists to do that. But it has the ability to predict at a very large scale. It lets us take on this challenge.
That’s what Earth-2 is all about. We announced two things at this GTC that will make a real contribution to that. The first thing is the FourCastNet, which is worthwhile to take a look at, and then the second is a machine that is designed, more and more optimized for AI. These two things, and our continued innovation, will give us a chance to tackle that billion times more computation that we need.
The thing that we’ll do, to the second part of your question, is we will be able to take all of that computation and predictive capability and zoom it in on a particular area. For example, we’ll zoom it right into California, or zoom it into southeast Asia, or zoom it into Venice, or zoom it into areas around the world where ice is starting to break off. We could zoom into these parts of the world and simulate at very high resolutions across what are called ensembles, a whole lot of different iterations. Millions of ensembles, not hundreds or thousands. We can have a better prediction of what goes on 10, 30, 50, or even 100 years out.
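The ensemble idea Huang describes, running the same model many times under slightly perturbed conditions and looking at the distribution of outcomes, can be sketched in a few lines. The trend and noise numbers here are invented purely for illustration:

```python
import random
import statistics

def run_member(years, seed, trend=0.02, noise=0.05):
    """One toy 'climate run': a warming trend plus random yearly forcing."""
    rng = random.Random(seed)
    temperature_anomaly = 0.0
    for _ in range(years):
        temperature_anomaly += trend + rng.gauss(0, noise)
    return temperature_anomaly

# An ensemble: the same 50-year simulation across 1,000 perturbed members
members = [run_member(years=50, seed=s) for s in range(1000)]

mean_outcome = statistics.mean(members)  # the central prediction
spread = statistics.stdev(members)       # uncertainty across members
```

Earth-2's pitch is that an AI surrogate several orders of magnitude faster than the numerical model makes millions of such members affordable instead of hundreds.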
Question: I had a question about the ARM deal falling through. Obviously now Nvidia will be quite a different company. Can you talk in detail about how that will affect the business’s trajectory, but also how it will affect the way you think about the tech stack and the R&D side of the company? How are you looking at that in the longer term? What are the net benefits and consequences of the deal not happening?
Huang: ARM is a one-of-a-kind asset. It’s a one-of-a-kind company. You’re not going to build another ARM. It took 30 years to build. Even with 30 or 35 years, you’ll build something, but you won’t build that. Do we need it, as a company, to succeed? Absolutely not. Would it have been wonderful to own such a thing? Absolutely yes. The reason for that is because, as company owners, you want to own great assets. You want to own great platforms.
The net benefit, of course–I’m disappointed we didn’t get it through, but the result is that we built wonderful relationships with the entire management team at ARM. They understood the vision our company has for the future of high-performance computing. They’re excited about it. That naturally caused the road map of ARM to become much more aggressive in the direction of high-performance computing, where we need them to be. The net result of it is inspired leadership for the future of high-performance computing in a direction that’s important to Nvidia. It’s also great for them, because that’s where the next opportunities are.
Mobile devices will still be around. They’ll do great. However, the next big opportunities are in these AI factories and cloud AIs and edge AIs. This way of developing software is so transformative. We just see the tip of the iceberg right now. But that’s number one.
Number two relates to our internal development. We got even more excited about ARM. You could see how much we doubled down on the number of ARM chips that we have. The robotics ARM chips, we have several that are now in development. Orin is in production this month. It’s a home run for us. We’re going to build a whole lot more in that direction. The reception of Grace has been incredible. We wanted to build a CPU that’s very different from what’s available today and solves a very new type of problem that we know exists out in the world of AI. We built Grace for that and we surprised people with the idea that it’s a superchip – not a collection of chiplets, but a collection of superchips. The benefits of doing that, you’re going to see a lot more in that direction. Our technology innovation around ARM is turbocharged.
With respect to the overall technology stack, we innovate at the core technology level in basically three areas. GPU remains the largest of all, of course. Second, networking. We have networking for node-to-node computing, which we call NVLink switches; we NVLink from inside the box to outside the box. There’s InfiniBand, which is called Quantum, and for connecting InfiniBand systems into the broader enterprise network there are Spectrum switches. Together it’s the world’s first 400-gigabit-per-second networking stack, end to end. So the second pillar is networking. The third is CPUs.
In cooking, almost every culture has their holy trinity, if you will. My daughter is a trained chef. She taught me that in western cooking, it’s celery, onions, and carrots. That’s the core of just about all soups. In computing we have our three things. It’s the CPU, the GPU, and the networking. That gives us the foundation to do just about everything.
Question: To what extent do you see a need for expanding the stock of chips at Nvidia?
Huang: It’s important to remember that deep learning is not an application. What’s happening with machine learning and deep learning is not just that it’s a new application, like rasterization or texture mapping or some feature of a technology. Deep learning and machine learning is a fundamental redesign of computing. It’s a fundamentally new way of doing computing. The implications are quite important. The way that we write software, the way that we maintain software, the way that we continuously improve software has changed. Number two, the type of software we can write has changed. It’s superhuman in capabilities. Software we never could write before.
And the third thing is, the entire infrastructure of providing for the software engineers and the operations – what is called MLOps – that’s associated with developing this end to end, fundamentally transforms companies. For example, Nvidia has six supercomputers in our company. No chip company in the world has supercomputers like this. And the reason why we have them is because every one of our software engineers, we used to give them a laptop. Now we give them a laptop and a supercomputer in the back. All the software they’re writing has to be augmented by AI in the data center. We’re not unique. All of the large AI companies in the world develop software this way. Many AI startups – many of them in Israel – develop software in this way. This is a complete redesign of the world’s computer science.
Now, you know how big the computing industry is. The impact to all of these different industries beyond computing is quite important. The market is going to be gigantic. There’s going to be a lot of different places that will have AI. Our focus is on the core AI infrastructure, where the processing of the data, the training of the models, the testing of the models in a digital twin, the orchestration of the models into the fleet of devices and computers, even robots, all of the operating systems on top, that is our focus.
Beyond that, there’s going to be a trillion dollars worth of industry around it. I’m encouraged by seeing so much innovation around chips and software and applications. But the market is so big that it’s great to have a lot of people innovating within it.
Question: Could you give us a quick recap on what sounded like an update in terms of the messaging and your expectations around automotive? Over the years we’ve heard you display a huge amount of enthusiasm for various topics in various areas, and generally what happens is they either come true and exceed what you tell us, or they don’t and you’ve gone away. This one seems to be a category where Nvidia has been plugging away for quite some time. A lot of activity, a lot of engagement, a lot of technology brought to the market and offered. But we haven’t seen that quite transition over into vehicles on the road and things that everyday people are using in a mass way yet.
Huang: I am absolutely convinced of three things, more convinced than ever. It’s taken longer than I expected, by about three years I would say. However, I am absolutely convinced of this, and I think it’s going to be larger than ever.
The three things are, number one, a car is not going to be a mechanical device. It’s going to be a computing device. It will be software-defined. You will program it like a phone or a computer. It will be centralized. It will not consist of 350 embedded controllers, but it will be centralized with a few computers that do AI. They will be software-defined. This computer is not a normal type of computer, because it’s a robotics computer. It has to take sensor inputs and process them in real time. It has to understand a diversity of algorithms, a redundancy of computing. It has to be designed for safety, resilience, and reliability. It has to be designed for those things. But number one, I believe the car is going to be programmable. It’s going to be a connected device.
The second thing I believe is that cars will be highly automated. It will be the first, if not in the long term the largest, but the first large robotics market, the first large robotics application. A robotics application does three things. It perceives the environment. It reasons about what to do. It plans an action. That’s what a self-driving car does. Whether it’s level 2, level 3, level 4, level 5, I think that’s secondary to the fact that it’s highly robotic. That’s the second thing I believe, that cars will be highly robotic, and they will become more robotic over time.
The third thing I believe is that the way you develop cars will be like a machine learning pipeline. There will be four pillars to it. You have to have a data strategy for getting ground truth. It can be maps, labeling of data, teaching computer vision, teaching how to plan, recognizing lanes and signs and lights and rules, things like that. Number one, you have to provide data. The second thing is you have to train models, develop AI models. The third is you have to have a digital twin, so that you can test your new software against a virtual representation and you don’t have to put it on the street right away. And the fourth thing is you need to have a robotics computer, which is a full-stack problem.
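Stripped of the specific Nvidia products, the four-pillar loop can be sketched as plain structure. Every function body below is a placeholder, not real labeling, training, or simulation:

```python
def collect_ground_truth(raw_drives):
    """Pillar 1: turn fleet data into labeled examples (maps, lanes, signs)."""
    return [{"scene": d, "labels": f"labels for {d}"} for d in raw_drives]

def train_model(dataset, version):
    """Pillar 2: train AI models in the data center."""
    return {"version": version, "trained_on": len(dataset)}

def run_scenario(model, scenario):
    """Placeholder: a real digital twin scores perception and planning here."""
    return True

def validate_in_digital_twin(model, scenarios):
    """Pillar 3: replay the model against simulation before the street."""
    return all(run_scenario(model, s) for s in scenarios)

def deploy_to_vehicle(model):
    """Pillar 4: ship the validated stack to the in-car robotics computer."""
    return {"deployed": model["version"]}

# One turn of the loop: data -> train -> simulate -> deploy
dataset = collect_ground_truth(["drive_001", "drive_002"])
model = train_model(dataset, version=7)
if validate_in_digital_twin(model, scenarios=["cut-in", "night-rain"]):
    status = deploy_to_vehicle(model)
```

The point of the loop structure is that each release cycles through all four pillars: new fleet data continuously retrains the model, and nothing reaches the car without passing the digital twin.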
There are four pillars for us. In practical terms, there are four sets of computers. There’s a computer in the cloud for mapping and synthetic data generation. There’s a data center for doing training. There’s a data center for simulation, what we call OVX Omniverse computers, for doing digital twins. And then there’s a computer inside the car with a bunch of software and a processor we call Orin. We have four ways to benefit. If I just looked at one way, which is the chips in the car, what goes into the car, which is specifically auto: over the next six years we’ve increased our design-win pipeline, our design-win business, from $8 billion to $11 billion. In order to go from where we are to $11 billion over the next six years, we need to cross $1 billion soon. That’s why auto is going to be our next multi-billion-dollar business. I’m quite sure.
At this point the three things I believe – software-defined cars, the autonomous car, and the fundamental change in the way you build the car – these three things have come true. And it’s come true to the newer companies, if you will, the younger companies. They have less baggage to carry. They have less baggage to work through. They can design their cars this way from day one. New EV companies, just about every new EV company, is creating as I described. Centralized computers, software-defined, highly autonomous. They’re setting up their engineering teams to be able to do machine learning as I described. This is going to be the largest robotics industry in the near term, leading up to the next robotics industry, which is much smaller robots that will be everywhere.
Question: I’m very interested in how you talked about software yesterday and the terms you used. Things like digital twins and Omniverse. Those are huge opportunities. Where do you plan to take the stack longer-term as you look to platform software and applications? Are you in competition with Microsoft and so on in the longer term? And then a second quick question: Intel is adding a lot of fab capacity. The world is not getting any safer. How do you look at this? Is Intel a natural ally of yours? Are you talking to them, and would you like to be a partner of Intel’s on the fab side?
Huang: I’ll do the second one first. Our strategy is to expand our supply base with diversity and redundancy at every single layer. At the chip layer, at the substrate layer, at the assembly layer, at the system layer, at every single layer. We’ve diversified the number of nodes, the number of foundries. Intel is an excellent partner of ours. We qualify their CPUs for all of our accelerated computing platforms. When we pioneer new systems like we just did with Omniverse computers, we partnered with them to build the first generation. Our engineers work very closely together. They’re interested in us using their foundries. We’re interested in exploring that.
To be a foundry at the caliber of TSMC is not for the faint of heart. This is a change not just in process technology and investment of capital, but a change in culture, from a product-oriented company, a technology-oriented company, to a product, technology, and service-oriented company. And that’s not service as in bringing you a cup of coffee, but service as in really mimicking and dancing with your operations. TSMC dances with the operations of 300 companies worldwide. Our own operation is quite an orchestra, and yet they dance with us. And then there’s another orchestra they dance with. The ability to dance with all these different operations teams, supply chain teams, it’s not for the faint of heart. TSMC does it just beautifully. It’s management. It’s culture. It’s core values. They do that on top of technology and products.
I’m encouraged by the work that’s being done at Intel. I think that this is a direction they have to go. We’re interested in looking at their process technology. Our relationship with Intel has been quite long and we’ve worked with them across a whole lot of different areas. Every laptop, every PC, every server, every supercomputer.
As far as the software stack goes: in this new computing approach called AI and machine learning, the chips came second. What put us on the map is the architecture called CUDA and the engine on top called cuDNN, which stands for CUDA Deep Neural Networks. That engine is essentially the SQL engine of AI: the database engine that everyone uses around the world, but for AI. We’ve expanded it over the years to include the other stages of the pipeline, from data ingestion, to feature engineering with cuDF, to machine learning with XGBoost, to deep learning with cuDNN, all the way to inference.
The entire pipeline of AI, that operating system, is used all over the world. It’s integrated into companies all over the world. We’ve worked with every cloud service provider so they can put it into their cloud and optimize their workloads, and that entire body of software – we call it Nvidia AI – is now licensable to enterprises. They want to license it because they need us to support it for them. We’ll be that AI operating system, if you will, that we can provide to the world’s enterprises. They don’t have their own computer science teams, their own software teams, to do this the way the cloud service providers can. We’ll do it for them. It’s a licensable software product.
Question: You mentioned you’re in discussion with Intel already about using their foundries. How advanced are those discussions? Are you specifically talking about potentially using their capacities they announced for Germany? Second, in terms of the ARM deal again, does that affect in any way your future M&A strategy? Will you try to be less aggressive or more tentative after ARM didn’t go through?
Huang: Second question first. Nvidia is genetically, organically grown. We prefer to build everything ourselves. Nvidia has so much technology, so much technical strength, and the world’s greatest computer scientists working here. We’re organically built as a natural way of doing things. However, every so often something amazing comes out. A long time ago, the first large acquisition we made was 3dfx. That was because 3dfx was amazing. The computer graphics engineers there are still working here. Many of them built our latest generation of GPUs.
The next one that you could highlight is Mellanox. That’s a once-in-a-lifetime thing. You’re not going to build another Mellanox. The world will never have another Mellanox. It’s a company that has a combination of incredible talent, the platform they created, the ecosystem they’ve built over the years, all of that. You’re not going to re-create that. And then the next one, you’re never going to build another ARM.
These are things that you just have to–when they come along, they come along. It’s not something you can plan. It doesn’t matter how aggressive you are. Another Mellanox won’t just come along. We have great partnerships with the world’s computer industry. There are very few companies like Mellanox or ARM. The good thing is that we’re so good at organic growth. Look at all the new ideas we have every year. That’s our approach.
With respect to Intel, the foundry discussions take a long time. It’s not just about desire. We have to align technology. The business models have to be aligned. The capacity has to be aligned. The operations process and the nature of the two companies have to be aligned. It takes a fair amount of time. It takes a lot of deep discussion. We’re not buying milk here. This is about integration of supply chain and so on. Our partnerships with TSMC and Samsung in the last several years, they took years to build. We’re very open-minded to considering Intel and we’re delighted by the efforts that they’re making.
Question: With the Grace CPU superchip you’re using Neoverse, the first version of that. Can we expect to see custom ARM cores from Nvidia in the future? And additionally, the news that you’re bringing confidential computing to GPUs is pretty encouraging. Can we expect the same from your CPUs?
Huang: The second question first. The answer is yes on confidential computing for CPUs. As for the first question, our preference is to use off-the-shelf. If somebody else is willing to do something for me, I can save that money and engineering work to go do something else. On balance, we always try not to do something that can be available somewhere else. We encourage third parties and our partners to lean in the direction of building something that would be helpful to us, so we can just take it off the shelf. Over the last couple of years, ARM’s road map has steered toward higher and higher performance, which I love. It’s fantastic. I can just use it now.
What makes Grace special is the architecture of the system around Grace. Very important is the entire ecosystem above it. Grace is going to have pre-designed systems that it can go into, and Grace is going to have all the Nvidia software that it can instantly benefit from. Just as when we were working with Mellanox as they came on board–we ported all of Nvidia’s software onto Mellanox. The benefits and the value to customers, those are X factors. We’re going to do the same thing with Grace.
If we can take it off the shelf, because they have CPUs with the level of performance we need, that’s great. ARM builds excellent CPUs. The fact of the matter is that their engineering team is world class. However, anything they prefer not to do–we’re transparent with each other. If we need to, we’ll build our own. We’ll do whatever it takes to build amazing CPUs. We have a significant CPU design team, world-class CPU architects. We can build whatever we need. Our posture is to let other people do it for us and differentiate upon that.
Question: With what’s going on in AI, the advances going on, what is the potential for people to use it in ways that are detrimental to the industry or to society? We’ve seen examples like deep fake videos that can impact elections. Given the power of AI, what is the potential for misuse, and what can the industry do about it?
Huang: Deep fake, first of all–as you guys know quite well, when we’re watching a movie, Yoda isn’t real. The lightsabers aren’t real. They’re all deep fake. Just about every movie we watch these days is really quite artificial. And yet we accept that because we know it’s not true. We know, because of the medium, that the information presented to us is intended to be entertainment. If we can apply this basic principle to all information, it would easily work out. But I do recognize that, unfortunately, it crosses the line of what is information into mistruths and outright lies. That line is difficult to separate for a lot of people.
I don’t know that I have the answer for this. I don’t know if AI is necessarily going to activate and drive this further. But just as AI has the ability to create fakes, AI has the ability to detect fakes. We need to be much more rigorous in applying AI to detect fake news, detect fake facts, detect fake things. That’s an area where a lot of computer scientists are working, and I’m optimistic that the tools they come up with will be rigorous, more rigorous in helping us decrease the amount of misinformation that consumers are unfortunately consuming today with little discretion. I look forward to that.
Question: I saw the announcement of the NVLink-C2C and thought that was very interesting. What’s Nvidia’s position on chiplet-based architectures? What kind of architecture do you consider the Grace superchips to be? Are those in the realm of chiplet MCM? And what motivated Nvidia to support the UCIe standard?
Huang: UCIe is still being developed. It’s a recognition that, in the future, you want to do system integration not just at the PC board level, which is connected by PCI Express, but you have the ability to integrate even at the multi-chip level with UCIe. It’s a peripheral bus, a peripheral that connects at the chip-to-chip level, so you can assemble at that level.
NVLink was, as you know–this is now in our fourth generation. It’s six years old. We’ve been working on these high-speed chip-to-chip links now for coming up on eight years. We ship more NVLink for chip-to-chip interconnect than just about anyone. We believe in this level of integration. It’s one of the reasons why Moore’s Law stopping never stopped us. Even though Moore’s Law has largely ended, it didn’t slow us down one step. We just kept on building larger and larger systems with more transistors delivering more performance using all of the software stacks and system stacks we have. It was all made possible because of NVLink.
I’m a big believer in UCIe, just as I’m a big believer in PCIe. UCIe has to become a standard so I can take a chip right from Broadcom or Marvell or TI or Analog Devices and connect it right into my chip. I would love that. That day will come. It will take, as it did with PCI Express, about half a decade. We’ll make progress as fast as we can. As soon as the UCIe spec is stabilized, we’ll put it in our chips as fast as we can, because I love PCI Express. If not for PCI Express, Nvidia wouldn’t even be here. In the case of UCIe, it has the benefit of allowing us to connect many things to our chips, and allowing us to connect our chips to many things. I love that.
With respect to NVLink, the reason why we did–our philosophy is this. We should build the biggest chips we can. Then we connect them together. The reason for that is because it’s sensible. That’s why chips got bigger and bigger over time. They’re not getting smaller over time. They’re getting bigger. The reason for that is because larger chips benefit from the high energy efficiency of the wires that are on chip. No matter how energy-efficient a chip-to-chip SerDes is, it’s never going to be as energy-efficient as a wire on the chip. It’s just one little tiny thread of wire. We would like to make the chips as big as we can, and then connect them together. We call that superchips.
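The energy argument is concrete. With illustrative numbers (these are ballpark figures I'm assuming for the sketch, not Nvidia's: on the order of 0.1 pJ/bit for a short on-chip wire versus a few pJ/bit for a chip-to-chip SerDes link), the gap per terabyte moved is stark:

```python
# Illustrative energy-per-bit figures (assumptions, not Nvidia's numbers)
ON_CHIP_WIRE_PJ_PER_BIT = 0.1  # short wire on the same die
SERDES_PJ_PER_BIT = 2.0        # chip-to-chip serial link

def joules_to_move(terabytes, pj_per_bit):
    """Energy in joules to move a payload at a given picojoules-per-bit cost."""
    bits = terabytes * 8 * 10**12
    return bits * pj_per_bit * 1e-12  # pJ -> J

# Moving 1 TB between two compute blocks:
on_chip = joules_to_move(1, ON_CHIP_WIRE_PJ_PER_BIT)  # ~0.8 J
off_chip = joules_to_move(1, SERDES_PJ_PER_BIT)       # ~16 J
```

An order of magnitude or more per bit is why it pays to make each chip as large as possible and reserve the expensive links for the seams between superchips.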
Do I believe in chiplets? In the future there will be little tiny things you can connect directly into our chips, and as a result, a customer could do a semi-custom chip with just a little engineering effort, connect it into ours, and differentiate it in their data center in their own special way. Nobody wants to spend $100 million to differentiate. They'd love to spend $10 million to differentiate while leveraging someone else's $100 million. NVLink chip-to-chip, and eventually UCIe, are going to bring a lot of those exciting opportunities.
Question: Replicator is one of the neatest things I’ve seen. Is there an area where people are generating these virtual worlds that can be shared by developers, as opposed to trying to build up your own unique world to test your robots?
Huang: Excellent question. That's very hard to do, and let me tell you why. The Replicator is not doing computer graphics. The Replicator is doing sensor simulation, and every camera ISP is different. Every lens is different. Lidars, ultrasonics, radars, infrareds, all of these different types of sensors, different modalities of sensors — the environment is sensed, and the environment reacts depending on the materials of the environment. It reacts differently to the sensors. Some things will be completely invisible, some things will reflect, and some things will refract. We have to be able to simulate the responses of the environment, the materials in the environment, the makeup of the environment, the dynamics of the environment, the conditions of the environment. That all reacts differently to the sensors.
It turns out that it just depends on the sensor you want to simulate. If a camera company wants to simulate the world as perceived by their sensor, they would load their sensor model, a computational model, into Omniverse. Omniverse then regenerates, re-simulates from physically based approaches the response of the environment to that sensor. It does the same thing with lidar or ultrasonics. We're doing the same thing with 5G radios. That's really hard. Radio waves have refraction. They go around corners. Lidar doesn't. The question is then, how do you create such a world? It just depends on the sensor. The world as perceived by a lizard, the world as perceived by a human, the world as perceived by an owl, those are all very different. That's the reason why this is hard for us to create.
Your question also gets to the crux of why Replicator is such a big thing. It's not a game engine trying to do computer graphics that look good. It doesn't matter if it looks good. It looks exactly the way that that particular sensor sees the world. Ultrasound sees the world in a different way. The fact that we have the images come back all photographically beautiful, that's not going to help the ultrasound maker, because that's not the way it sees the world. CT reconstruction sees the world very differently. We want to model all the different modalities using physically based computation approaches. Then we send the signal into the environment and see the response. That's Replicator. Deep science stuff.
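The idea that each modality "sees" the same scene differently can be sketched with a toy model. The material table and inverse-square falloff below are simplifying assumptions for illustration only — they are not Omniverse Replicator's actual sensor or material models:

```python
# Toy material table: how strongly each material returns energy to a
# given sensor modality. All values are ASSUMED, for illustration only.
MATERIALS = {
    "glass":   {"camera": 0.9, "lidar": 0.1, "radar": 0.05},  # near-invisible to lidar/radar
    "metal":   {"camera": 0.6, "lidar": 0.8, "radar": 0.95},  # strong radar return
    "foliage": {"camera": 0.7, "lidar": 0.5, "radar": 0.2},
}

def sensor_return(material: str, modality: str, distance_m: float) -> float:
    """Toy physically inspired return: reflectivity scaled by 1/r^2 falloff."""
    reflectivity = MATERIALS[material][modality]
    return reflectivity / (distance_m ** 2)

# The same object, at the same distance, produces very different signals
# depending on which sensor is being simulated:
for modality in ("camera", "lidar", "radar"):
    print(modality, sensor_return("glass", modality, 10.0))
```

Even in this crude sketch, a pane of glass is bright to the simulated camera and nearly absent from the lidar and radar returns — the same point Huang makes about why a photographically beautiful render is useless to an ultrasound or radar maker.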
Question: Are you, to some degree, skeptical about manufacturing with Intel, given that they’re increasingly a competitor? They’re doing GPUs. You’re doing CPUs. Does that raise some concerns about sharing chip designs?
Huang: First of all, we’ve been working closely with Intel, sharing with them our road map long before we share it with the public, for years. Intel has known our secrets for years. AMD has known our secrets for years. We’re sophisticated and mature enough to realize that we have to collaborate. We work closely with Broadcom, with Marvell, with Analog Devices. TI is a great partner. We work closely with everybody and we share early road maps. Micron and Samsung. The list goes on. Of course this happens under confidentiality. We have selective channels of communications. But the industry has learned how to work that way.
On the one hand, we compete with many companies. We also partner deeply with them and rely on them. As I mentioned, if not for AMD's CPUs that are in DGX, we wouldn't be able to ship DGX. If not for Intel's CPUs and all of the hyperscalers connected to our HGX, we wouldn't be able to ship HGX. If not for Intel's CPUs in our Omniverse computers that are coming up, we wouldn't be able to do the digital twin simulations that rely so deeply on single-thread performance. We do a lot of things that work this way.
What I think makes Nvidia special is that over the years — Nvidia is 30 years in the making — we have built up a diverse, robust, and now quite expansive supply base. That allows us to continue to grow quite aggressively. The second thing is that we're a company like none that's been built before. We have core chip technologies that are world class at each of their levels. We have world-class GPUs, world-class networking technology, world-class CPU technology. That's layered on top of systems that are quite unique, and that are engineered, architected, designed, and then their blueprints shared with the industry right from inside this company, with software stacks that are engineered completely from this company. One of the most important engines in the world, Nvidia AI, is used by 25,000 enterprise companies. Every cloud in the world uses it. That stack is quite unique to us.
We're quite comfortable with our confidence in what we do. We're very comfortable working with collaborators, including Intel and others. We've overcome that. It turns out that paranoia is just paranoia. There's nothing to be paranoid about. It turns out that people want to win, but nobody is trying to get you. We try to take the not-paranoid approach in our work with partners. We try to rely on them, let them know we rely on them, trust them, let them know we trust them, and so far it's served us well.