The Rising Cost Of 5G
October 11, 2018
Semiconductor Engineering sat down to talk about challenges and progress in 5G with Yorgos Koutsoyannopoulos, president and CEO of Helic; Mike Fitton, senior director of strategic planning and business development at Achronix; Sarah Yost, senior product marketing manager at National Instruments; and Arvind Vel, director of product management at ANSYS. What follows are excerpts of that conversation. To view part one, click here. Part two is here.
SE: Is the goal to have everything eventually hanging off a 5G network?
Yost: Not everything needs that complexity. So the key is how we design to that capability, but also have the ability to scale back so we can deploy smartly and use just the pieces we need for different applications. You will never need that much bandwidth on your cell phone. You don't need 4K video because your eyes can't see the difference. But your car does need that higher bandwidth to communicate with the cloud. So knowing when to make those tradeoffs and using the critical parts in the right way will be important. Machine learning will come in big there, as well, with automated network slicing and things like that.
Fitton: Vertical slicing of a network is really interesting for when you need that processing at the edge.
Yost: And how we dynamically do that so a human doesn’t have to be involved in switching.
SE: Is this technology only for urban areas?
Koutsoyannopoulos: One of the biggest problems today is building new models in urban areas. It's a work in progress. In areas like Silicon Valley, it's slightly easier to get going and more predictable. If you go into the dense environment of a city, it will take longer. One reason is the drag of parallel dialogs related to health and safety. We're now talking about lighting up areas with millimeter-wave frequencies at a power level we have not yet agreed upon. We will have to come up with a standard for compliance, and we have to settle that first. Then we can talk about the power level, and then about the models we can build. So from a deployment point of view, it may be a mix of urban and rural areas.
Fitton: I agree from a radio wave propagation point of view. It’s going to be extremely complicated to build up in a true urban environment. That’s where the money is.
Yost: One of the key benefits being touted for 5G is the ability to add more users per unit of area. You don’t need that in a rural area. There won’t be 100,000 people crammed into a tiny area.
SE: However, farmers do want 5G.
Yost: That's more of an IIoT application. Farms and factories are similar in that you have hundreds of towers. But you don't need the infrastructure from one ranch to another to support that kind of density. You can have it spread out. Maybe you have just one base station and it can service all of them. But in a city, where you have millions of people, you have very different needs. You can serve farmers with one tower in a way that you couldn't serve an entire city with one tower.
Vel: Simulating these infrastructures in different environments, like an urban versus a rural environment, is completely different. In urbanized environments you have to account for so many different aspects because when you’re talking about millimeter wave it’s line-of-sight.
SE: It won’t even go through a window.
Vel: Yes, and the technologies we're going to use to build these products have to withstand all of these different conditions. One of the things we're wrestling with is how you simulate them in an ideal world and a non-ideal one. When you build the product, you're building it for an ideal situation. But the minute you install that product on a tower, you automatically have deflections, along with ambient conditions like rain, temperature and humidity, and all of these are going to change. Along with that, the temperature levels of the embedded devices are going to change. You literally have to design for all of these different conditions, and using one technology to design for all of them is not possible. You can solve Maxwell's equations directly for antenna radiation a few meters across. But when it comes to kilometers of radiation, how do you simulate that? You cannot use the same physics to simulate an entire urban environment.
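As a back-of-the-envelope illustration of why millimeter-wave links behave this way, the free-space (Friis) path-loss formula can be evaluated at a typical sub-6 GHz carrier and a 28 GHz mmWave carrier. The specific frequencies and distances below are arbitrary examples for illustration, not figures from the discussion:

```python
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space (Friis) path loss in dB: 20*log10(4*pi*d*f/c)."""
    c = 3.0e8  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

# Compare a sub-6 GHz carrier against a 28 GHz mmWave carrier (illustrative values).
for freq in (2.6e9, 28e9):
    for dist in (10, 100, 1000):
        print(f"{freq / 1e9:5.1f} GHz at {dist:4} m: {fspl_db(dist, freq):6.1f} dB")
```

At any given distance, the 28 GHz link loses roughly 21 dB more than the 2.6 GHz one before blockage, rain, or foliage attenuation is even considered, which is part of why mmWave deployments are effectively line-of-sight.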
SE: How does latency on 5G compare with 4G?
Koutsoyannopoulos: The applications we have today will exploit the higher bandwidth. But many of the applications we envision will need significantly lower latency. So we have to deal with that from a system point of view. It’s not about connectivity between a base station and a mobile device. It’s the whole system, end-to-end.
SE: This is one of the reasons we’re heading toward more computing at the edge, right?
Koutsoyannopoulos: Correct, and that might be the only way to guarantee low latency. But people want to explore how to optimize the end-to-end system latency, not just by throwing a lot of compute at the edge.
Vel: My take on edge computing is that we are looking at that because of bandwidth limitations. If we didn’t have the limitations of bandwidth and latency, we would not have moved to edge computing.
Fitton: But proponents of edge computing would say that centralizing everything is an unnatural way of doing things. Why are we moving all of this data around? Why don't we do the data processing where the data is?
Vel: It's evolutionary. When only edge computing was possible, we built everything on the edge. Everyone had their own computer, and you had to take it with you. Then the Internet came along, edge computing began going away, and we all began relying on servers. Then all of a sudden bandwidth became a limitation, so we had to be smart on the edge and lower power. This is where we are now, and it will stay like that for the next few years. But what if you removed that bandwidth limitation, so that everything you do remotely is exactly like what you can do on your own computer?
Yost: It's all about batteries. If you look at a cell phone today versus 10 years ago, you didn't have to charge the older one every day. That's a huge thing to consider, at least in mobile applications.
Fitton: Having everything decentralized doesn’t make sense. It has to be a mix. We need to centralize some things where we can because it’s more power-efficient to do it that way. But some things have to be done right at the edge. It’s a continuum.
Vel: If you look at autonomous vehicles, edge computing will be the future. You have to have important decisions made in the vehicle. You cannot have a decision made somewhere else and sent back.
Yost: But with technology as we've been rolling it out, you also need regular updates. Cars are expensive enough that you're not going to replace them every two years like a cell phone. There has to be a way to get updates out to older cars on the market without a hardware replacement. Having a balance is good, because you can take advantage of some of the newer technologies. There's also a big reliability question with 5G. We're assuming it's going to work and you're going to get your signal back every time, but we're still going to need localized processing to ensure those critical decisions get made.
Koutsoyannopoulos: The main reason we couldn't provide more compute power at the edge until now is power consumption. Power consumption in small form-factor cell phones has been a big issue. What we do as EDA companies is help people lower the power consumption. We design tools that make that possible. Now that we're talking about 5G, we're adding a lot more complexity to the system design. We have to improve the algorithms, the simulation methods, and the analysis to help designers make things ultra-low power. This is how we will be able to deploy compute power at the edge, combined with server compute power on the other end.
SE: Does this become a system architecture issue?
Fitton: Yes. It will require tremendous innovation from a lot of people for every different aspect of this. Only by considering all of the aspects are we going to get something worthwhile out of this.
SE: Let’s drill down on cost. As this moves forward, the amount of infrastructure investment 5G will require will be huge. How do we get the cost down?
Fitton: There are two different elements here. One is making it cheaper. The other is about new revenue streams. Making it cheaper is largely a deployment model. All of the OEMs are working on a common infrastructure model; everyone has a project going. You take optical cables back to a data center that's running a cloud RAN (radio access network) in the basement. So now you've changed the deployment model. The building operator will probably use a neutral host to provide network coverage. That's much cheaper. So you see a lot of people converging from the traditional DAS (distributed antenna system) space, and then you push on that for more revenue. If you save 1% of aviation fuel over 10 years, that saves you $2 billion. Achieving that should be easy with analytics at the edge. What you're doing is minimizing the infrastructure costs while tapping into all of these other optimizations you can do from an end-application point of view.
Vel: One thing that needs to be thought about holistically is total power consumption, which will go up because of the infrastructure. For 5G infrastructure as it exists today, the power consumption is at least 10 to 20 times what is consumed now. Imagine if you were to outfit an entire infrastructure end-to-end with 5G as it stands. Nobody can afford 10 times the power consumption. What we believe will happen is that over the next two years there will be aggressive reductions in power consumption. How those innovations will happen is still to be determined, but history is a good indication that they will. Today's cell phones consume less than 2 watts; the same amount of compute 5 years ago would have required 20 cell phones. The capability for technology innovation is there. We are only scratching the surface of 5G power consumption, antenna design, software-defined antennas, and algorithms. There's a lot that can be done, which is literally going to drive down the cost. As it stands today, there is a huge cost for infrastructure, which includes power.
SE: There is certainly a lot happening in this regard with edge devices. Is it happening with base stations, as well?
Vel: Yes. If a base station is going to use megawatts of power and you need 10 of those, that isn't going to happen. A city isn't going to let it happen. So all of a sudden you have this huge power crisis. We probably won't get to the point where that hits us; the economics will drive us to innovate fast enough that we never reach it. But think about autonomous driving: just within the vehicle itself you would require several hundred kilowatts of power, with racks of computers processing teraflops of data. That won't be the reality. You won't have computer racks in your trunk. All of these sensor-fusion devices will do edge computing in a power-efficient way and send the data back to efficient processors.
SE: When you do drop the power that far you get a lot of noise.
Koutsoyannopoulos: At the chip level, on one hand you have a transmitter that has to be loud enough to send a signal across. You have multiple radios, maybe more than 20 bands on the same die, and you have a lower power supply. So noise is going to be much more important because your power is lower. Dielectrics are really thin, and you have all this complexity coming together. The problem is how to optimize all of that. It appears that we will go in phases to deliver the designs and the chips. We have the tools and methods to take into account the effects that play an important role in optimizing the physics. From our perspective, the biggest problem we see is electromagnetic crosstalk between the different radios and different blocks. This will play an important role in optimizing these systems. We provide EM-aware analysis that really helps lower the power consumption. This is one way to help optimize the design.
Yost: Testing for this isn't going to be easy. We haven't done a good job yet defining what 5G tests are going to look like. Once you start looking at antennas, which are designed as chips in a package, you don't have a test point to tap into to figure out what the actual performance looks like. So we're going to start looking at over-the-air tests, but once that happens our traditional methods of calibration go out the window. For LTE, we spent 20-plus years optimizing how that test is going to look. For 5G, we have a chance to re-evaluate what we're doing and determine whether EVM (error vector magnitude) is even the right measurement if you're looking at adjacent-channel power issues. We need to step back and re-evaluate what a chipset looks like and how to make sure it's working properly. Low power is difficult, but if we're just turning on a transmitter and making sure it creates a signal, maybe the basis of that doesn't change. Interoperability testing to see how multiple radios affect each other could come up, but no one wants to increase test time or test cost, so keeping it simple is important, too.
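For readers unfamiliar with the metric Yost mentions, EVM compares received symbols against their ideal constellation points. Here is a minimal sketch of the computation; the QPSK constellation, the noise level, and the normalization to the reference's RMS power are illustrative assumptions, not any standard's exact definition:

```python
import math
import random

def evm_percent(received, reference):
    """RMS error vector magnitude, normalized to the reference's RMS power."""
    err = sum(abs(r - s) ** 2 for r, s in zip(received, reference))
    ref = sum(abs(s) ** 2 for s in reference)
    return 100 * math.sqrt(err / ref)

# Ideal unit-power QPSK constellation, repeated to form a 1,000-symbol burst.
ideal = [complex(i, q) / math.sqrt(2) for i in (1, -1) for q in (1, -1)] * 250

# Add small Gaussian noise as a stand-in for real transmitter impairments.
random.seed(0)
noisy = [s + complex(random.gauss(0, 0.02), random.gauss(0, 0.02)) for s in ideal]

print(f"EVM = {evm_percent(noisy, ideal):.2f}%")
```

For the small impairment chosen here the result comes out to a few percent; real conformance tests define the reference signal, equalization, averaging, and normalization far more precisely, which is exactly the calibration question Yost raises for over-the-air measurement.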
Fitton: There are some things we can do to reduce the power. We're going to have many more antennas at much lower power. There has always been a power limit on the kinds of things we do. From a semiconductor point of view, everyone is moving onto deep submicron technology, and we will move to 7nm and beyond. There are other options, including multi-core and multi-chip, and a variety of heuristic ways to optimize these devices. FPGAs consume more power than ASICs, but considerably less than a pure software implementation. And you have to think about your overall system. What do you need to run in software? And where do you need hardware accelerators? When you go from 2G to 3G, you increase complexity by an order of magnitude. From 3G to 4G, and from 4G to 5G, you need more processing power and more processing hardware at each step. If you can harden that, you want to put it in an ASIC. If you can't harden it because you don't know what you're doing next, then you need programmability. You need flexibility because the standards are going to change and the applications are going to change. Sometimes consuming a few more watts is the only way to compete in a new space.