Feature Story



CLIENT: ESPERANTO TECHNOLOGIES

Feb. 16, 2018: Embedded Systems Engineering

Q&A with Dave Ditzel, CEO, Esperanto Technologies

Gaming, artificial intelligence, superior graphics performance, and more have put RISC-V in the catbird seat; industry standard tools and languages can help keep it there.

Editor’s Note: At the time of the 7th RISC-V Workshop, Esperanto Technologies announced plans for RISC-V-based computing solutions, and EECatalog spoke with company president and CEO Dave Ditzel. Edited excerpts of our conversation follow:

EECatalog: For our readers who may not be closely familiar with RISC-V, please give us a bit of context.


Dave Ditzel, Esperanto Technologies: RISC-V started at universities with a bunch of people who were frustrated that there was no open instruction set to work with. There are new chips every year, but software lasts forever, so having some kind of common instruction set that the software could run on is very important. Some of the folks at UC Berkeley have done several generations of RISC processors. They are now on their fifth generation, which is why this new instruction set is called RISC-V.

The term “RISC microprocessor” dates to 1980 and a paper called “The Case for the Reduced Instruction Set Computer” by UC Berkeley professor David Patterson and myself. I was one of his first graduate students. And then I went off and did RISC processors for AT&T Bell Labs, Sun Microsystems, Transmeta, and Intel.

EECatalog: You’ve noted certain questions need to be fielded if RISC-V is to be taken seriously.

Dave Ditzel, Esperanto Technologies: Yes, you’ve got folks asking, “Well, if RISC-V is really going to take off, where is the high end? Where are chips at leading-edge process nodes like 7 nanometer CMOS? What do we do for graphics on the chip? Where can I get a RISC-V design based on industry standard tools and languages like Verilog?”

EECatalog: What approach is Esperanto Technologies taking to win over the folks asking the questions you mention?

Ditzel, Esperanto Technologies: We’ve put together a team of top processor designers to show a high-end RISC-V that is more compelling than some of the alternatives out there, so people don’t see RISC-V just as a low-end embedded play. Using leading-edge 7nm TSMC CMOS, our goal is to have the highest single-thread RISC-V performance as well as the best teraflops per watt. And we’ll do this using industry standard CAD tools. Esperanto will sell chips and offer IP. When we offer IP we are going to offer it in Verilog, but, more importantly, in human-readable, synthesizable Verilog so it’s easy to maintain. We are also going to do a strong physical design effort, optimizing for 7nm technology.

EECatalog: What are some of the details our readers should be aware of?

Ditzel, Esperanto Technologies: We are going to make a chip that incorporates not one, but two different kinds of RISC-V processors. One of the RISC-V processors we are doing is called the ET-MAXION, and its goal is to be the highest single-thread-performance RISC-V processor out there, with performance comparable to what you would find from any other IP vendor such as Arm.

One of the reasons the RISC-V community needs a high-end CPU is that if you only have low-end CPUs, then a customer building a chip has to put in a CPU from Arm or someone else, and you’re not going to be viewed as a loyal Arm customer if you are mixing these.

We wanted to make sure a high-end RISC-V processor was available.

We are doing a second CPU as well, our ET-MINION, which is also a full 64-bit RISC-V instruction set compatible processor. However, it also has an integrated vector floating point unit, so it can issue multiple floating point operations per cycle—we are not saying exactly how many yet. It’s also optimized for energy efficiency, which is key because we are running into a power wall with our chips.

When you have any kind of a power limit, whether it’s a 10-watt limit on an embedded part or maybe a 100-watt limit in a high-end system, it is all about how many teraflops you can cram into that particular power limit. That is what reducing the amount of energy per operation is all about.
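To put rough numbers on that point, the throughput achievable under a power cap is just the budget divided by the energy spent per operation. A minimal sketch of that arithmetic follows, using made-up energy figures rather than anything Esperanto has disclosed:

# Illustrative arithmetic only: the energy-per-operation values below are
# assumptions for the example, not Esperanto figures.

def peak_tflops(power_budget_watts, picojoules_per_flop):
    """Sustainable teraflops under a fixed power budget."""
    flops_per_second = power_budget_watts / (picojoules_per_flop * 1e-12)
    return flops_per_second / 1e12

# A 10 W embedded budget vs. a 100 W high-end budget, at an assumed
# 10 pJ and 5 pJ per floating-point operation.
for watts in (10, 100):
    for pj in (10, 5):
        print(f"{watts:>4} W at {pj} pJ/op -> {peak_tflops(watts, pj):.0f} TFLOPS")

Halving the energy per operation doubles the teraflops that fit inside the same 10 W or 100 W envelope, which is why the conversation keeps returning to energy efficiency rather than raw clock speed.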

EECatalog: You’ve mentioned applicability to machine learning.

Ditzel, Esperanto Technologies: Esperanto has enhanced the RISC-V instruction set with Tensor instructions. Tensors are important data types used in machine learning. We have also added a few special instructions that allow us to run graphics codes faster. This small processor, the Minion, has interfaces so we can add hardware accelerators.
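For readers newer to the terminology, the tensor operations Ditzel refers to boil down to large batches of multiply-accumulate arithmetic of the kind at the heart of neural-network layers. A small NumPy sketch of that operation class, purely illustrative and not tied to Esperanto's instruction extensions:

import numpy as np

# A neural-network layer is largely a matrix multiply: activations
# (batch x in_features) against weights (in_features x out_features).
batch, in_features, out_features = 8, 64, 32
activations = np.random.rand(batch, in_features).astype(np.float32)
weights = np.random.rand(in_features, out_features).astype(np.float32)

# The explicit multiply-accumulate loops that a tensor or vector unit
# would execute many at a time.
out = np.zeros((batch, out_features), dtype=np.float32)
for b in range(batch):
    for o in range(out_features):
        acc = np.float32(0.0)
        for i in range(in_features):
            acc += activations[b, i] * weights[i, o]
        out[b, o] = acc

# The same computation expressed as one tensor operation.
assert np.allclose(out, activations @ weights, atol=1e-3)

Dedicated tensor instructions aim to collapse the inner loops above into far fewer, wider operations per cycle.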

As with our Maxion processor, we are going to use the Minion processor in our product as well as making it available as a licensable core. Then what we are going to do is put a number of these processors on a single chip in seven nanometer. It is going to have 16 of our high-performance 64-bit Maxion cores, but it is going to have over 4,000 full 64-bit ET-Minion RISC-V cores, each of which will have its own vector floating point unit.

If you look at multicore chips today, it is not that uncommon to find an Intel chip with 16 to 28 cores for servers. On your desktop, you’d be lucky to find six cores, maybe. But RISC-V is so simple, we can put 4,000 microprocessors on a single chip.
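As a back-of-the-envelope illustration of what that core count could mean, peak throughput is simply cores times operations per cycle times clock rate. The per-core issue width and clock below are placeholders, since Ditzel does not disclose either figure here:

# Back-of-the-envelope peak throughput for a many-core design.
# Every number here is an assumption; the interview gives no per-core
# issue width or clock frequency.
cores = 4096                  # "over 4,000" Minion-class cores (assumed count)
flops_per_core_per_cycle = 4  # hypothetical vector unit width
clock_ghz = 1.0               # hypothetical operating frequency

peak_tflops = cores * flops_per_core_per_cycle * clock_ghz * 1e9 / 1e12
print(f"Hypothetical peak: {peak_tflops:.1f} TFLOPS")  # 16.4 TFLOPS with these numbers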

EECatalog: What machine learning trends are you keeping an eye on, and are there chicken-and-egg problems you have suggestions for solving?

Ditzel, Esperanto Technologies: I think the trend is just how to get more and more and more teraflops into a particular system and how to do it at reasonable power levels.

We’re putting a real focus on keeping power levels down. We see a lot of people doing 300-watt chips. That just seems very scary. If you are doing an embedded application you probably can’t put in a 300-watt chip; a lot of embedded applications have to be fanless, so there is probably about a 10-watt limit there.

One of the things we’re doing is making the Esperanto solution very scalable for performance but also in power. We can get down to systems using just a few watts. And our goal is to get as many teraflops into those few watts as we can.

EECatalog: How are you setting yourselves apart from companies working on similar projects?

Ditzel, Esperanto Technologies: We see a distinction between what we are doing and what others are doing out there. We see some other companies proposing special purpose hardware for machine learning using proprietary instruction sets. [They say] “Oh, we have the latest thing for machine learning, but it doesn’t use any standards; it is not x86, Arm, or RISC-V.” Esperanto thinks a better approach to building chips for artificial intelligence is to base all the processing on RISC-V, so that we can use all the software being developed for RISC-V. There is Linux for RISC-V; we will have operating systems, compilers, and other applications.

And, in order to make the machine learning performance even better, what we’ll do is have a few extensions on top of RISC-V, as mentioned earlier. Our approach to AI is to use thousands of these very energy-efficient RISC-V cores, each including vector acceleration.

Those who are experts in machine learning will recognize the benchmark here (Figure 1), called ResNet-50. It is very well known for image processing; it will recognize a picture of your cat, or a bicycle, or something similar.

Figure 1: The ResNet-50 deep neural network benchmark: RISC-V with Tensor extensions running on Esperanto’s ET-Minion Verilog RTL; inference on one batch of eight images, running all layers.

Each red dot in the bottom right shows one of the four thousand 64-bit RISC-V microprocessors; where a dot shows bright red that core is very busy, and where it is white, it is not so busy. The video from which the Figure 1 image is taken shows each of the convolution layers running; it goes through 50 different layers to try to recognize what the picture shows. We already have full Verilog RTL of our Minion cores up and working, and we already have the software compiling the benchmarks into RISC-V instructions that are running on top of that Verilog RTL.
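For context on what that workload looks like at the software level, the Figure 1 benchmark corresponds to running a standard ResNet-50 model over a batch of eight images. The sketch below shows the equivalent reference computation in generic PyTorch/torchvision (recent torchvision API assumed); it is an illustration of the workload, not Esperanto's compiler or RTL flow:

import torch
from torchvision.models import resnet50

# Reference-level ResNet-50 inference on one batch of eight images,
# mirroring the workload described for Figure 1. Plain PyTorch on a CPU,
# not Esperanto's toolchain.
model = resnet50(weights=None)  # untrained weights are fine for a throughput-style run
model.eval()

batch = torch.randn(8, 3, 224, 224)  # eight RGB images at ResNet-50's 224x224 input size

with torch.no_grad():
    logits = model(batch)  # runs all the convolution and fully connected layers

print(logits.shape)  # torch.Size([8, 1000]): class scores, one row per image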

Our chip is not done yet, but we are far enough along that we wanted to share this with the RISC-V community to get feedback from them. It shows that we have done a lot of very serious work towards making a great AI processor based on RISC-V.

EECatalog: What is the applicability to high-end graphics?

Ditzel, Esperanto Technologies: High-end graphics chips typically have thousands of shader processors, and those processors aren’t too different from RISC-V processors.

If you want to run a high-end video game you need a shader compiler, so we wrote one and also the software that can distribute all the graphics computation across our 4,000 cores, and that works pretty well.
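Spreading that graphics work across thousands of cores typically comes down to partitioning the frame into tiles and assigning tiles to cores. The sketch below shows that general pattern only; it is an assumption about how such distribution commonly works, not a description of Esperanto's shader software:

# A simplified sketch of tile-based work distribution across many cores.
# Illustrative pattern only; not Esperanto's shader software.
WIDTH, HEIGHT = 1920, 1080      # frame size
TILE = 16                       # 16x16 pixel tiles
NUM_CORES = 4096                # assumed core count

# Enumerate every tile of the frame.
tiles = [(x, y) for y in range(0, HEIGHT, TILE) for x in range(0, WIDTH, TILE)]

# Round-robin the tiles across cores; each core shades its own list of tiles.
work_per_core = [[] for _ in range(NUM_CORES)]
for i, tile in enumerate(tiles):
    work_per_core[i % NUM_CORES].append(tile)

busy = [len(w) for w in work_per_core]
print(f"{len(tiles)} tiles across {NUM_CORES} cores, "
      f"{min(busy)}-{max(busy)} tiles per core")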

Figure 2: Scenes from two high-end video game demos, Crysis and Adam, rendered in simulation on Esperanto’s RISC-V cores.

Here we have two high end video game demos, one called Crysis and another one called Adam (Figure 2). Adam has a lot of robots in it. We are rendering scenes out of those video games on our processors. Of course, this is still just in simulation. We don’t have silicon, but it shows we have a pretty good graphics solution here, and it is all based on the open RISC-V instruction set.

EECatalog: Do you feel you have a strong financial plan for supporting continued research and innovation?

Ditzel, Esperanto Technologies: The same question was asked in the early days of Linux; Red Hat is one company that comes to mind. We are like the Red Hat of hardware.

When we look at licensing, if we did not license our cores, people would still buy our chips, but they might go out and use an Arm core or some other core. And we want to make the entire RISC-V ecosystem more popular, so we think there is more to gain by sharing and licensing our cores than there is to lose by holding them tight and proprietary and having a smaller market.

Right now the number of RISC-V users is small; we need to get more adopters. The more adopters there are, the more software there is, and the more software there is, the more people want to buy chips.

I encourage everybody to take a hard look at RISC-V. It is being adopted very quickly, and a lot of people are not familiar with it yet, but it is going to have a lot of impact in the same way that Linux had impact many years ago. When everybody gets together to support an open standard, they are doing it for an important reason. RISC-V has been designed very carefully; it’s a great general-purpose RISC instruction set, and by adding special purpose extensions we can make it even better for AI and graphics.

EECatalog: Anything to add before we wrap up?

Ditzel, Esperanto Technologies: We believe that using general purpose RISC-V processors is a better solution for AI than using special purpose chips with some new custom proprietary implementation. Part of what is happening in the field of machine learning is that the algorithms are still changing rapidly.

So, if you go off and build special-purpose hardware to do one kind of machine learning algorithm, you may find out six to 12 months from now that because the algorithms are changing, you have built the wrong hardware. In the past, those of us who have designed processors have seen the world moving toward more general-purpose chips—that is why Intel has done so well with its x86 chip.

Our approach is to use a general-purpose RISC-V instruction set as a base and where necessary add extra domain specific extensions. These are things like Tensor instructions or hardware accelerators or merely using the standard RISC-V vector instruction set.

The nice thing about RISC-V is that anybody is free to innovate on it in any way they want, whereas an instruction set like Arm’s is controlled by Arm itself. Arm doesn’t allow you to go off and play with the instructions without getting a special license and incurring extra costs, so much of the innovative research that is happening today is going on with RISC-V.

So, this is a call to the community to say, “Hey, you know there are all these people running special purpose machine learning processors, but we can make something just as good or better if we build on top of RISC-V rather than making the whole thing proprietary.”

We’re going to build advanced computing chips and systems, and we think that this can be the best system for machine learning, not just the best one using RISC-V. And when we can do that on top of RISC-V, people will say, “Wow!”, and it will help accelerate the adoption of RISC-V itself.

Because we are building a number of new kinds of RISC-V cores along the way, when we have those cores done, it will be very easy to license those out to other people. We are looking at licensing our Maxion and Minion cores as well as our solution for graphics here—and again, all this is optimized for TSMC 7nm CMOS but can be used in other older technologies as well. We will provide the most highly optimized version starting in 7nm. As it turns out, in 7nm there are a lot of reasons to have the physical design be very specifically optimized for that technology to obtain best results.

In addition to the licensable IP we will probably also support some free IP. There is a RISC-V design from UC Berkeley that we have adopted, called BOOM, and since we hired the person who did it, everybody wants to be sure it stays supported and maintained, and yes, we are going to do that.

Our call to the RISC-V community is, “Hey, this is what we are doing, we want to let you know. Let’s all work together and make the piece of the pie that we get for RISC-V bigger. Then we all win.”
