How Edge Computing Will Shape the Future of Data Centers
Sunrise in Smart City
Sometime in the future, Robin wakes up to a voice reading the daily forecast for Smart City – sunny, with a high of 75 degrees. She reaches for her phone and taps open an app, and soon the smell of brewing coffee drifts in from the kitchen.
An hour later, an automated car arrives on schedule to take her to work. Outside her office, she tosses her empty coffee cup into a recycling can and it silently signals for a nearby waste truck, which is winding its way through the city on a route planned in real time by the day’s demand.
The internet of things (IoT) is layered over the fabric of Smart City, with sensors and controls acting as minimally invasive tailors that correct inefficiencies in the built environment.
Lights turn on as soon as Robin walks in a room; trash cans ask to be emptied when they are full.
IoT represents a distributed network of devices that sense their environment, communicate with each other, and act on the information they collect – and it’s already here, in a limited form.
In a whitepaper, Cisco defined the internet of things as “the point in time when more ‘things or objects’ were connected to the Internet than people.” In 2010, there were 6.8 billion people on the planet and 12.5 billion devices connected to the Internet – averaging out to 1.84 devices per person. Cisco traced this exponential growth backward to estimate that the world passed the IoT inflection point sometime between 2008 and 2009 [1].
Many people already integrate smart devices into daily tasks at home. However, emerging IoT applications involve more complicated devices, sometimes making critical decisions – for example, self-driving cars.
A fully automated vehicle will need to read the path and condition of the road, the signaling infrastructure of traffic lights and stop signs, obstacles ranging from surrounding cars to fallen tree limbs and much more. Then, it will need to make a decision based on this information, in as close to real time as possible – and continue making multiple decisions every second until it reaches its destination.
A popular estimate [2] claims that a single autonomous car will generate four terabytes of data in an hour and a half of driving – more than the amount used by the audience of a Super Bowl. Rush hour in Smart City represents a mind-boggling amount of information.
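That figure implies a formidable sustained data rate. A quick back-of-envelope check, using only the estimate's own numbers:

```python
# Back-of-envelope: sustained rate implied by the popular estimate that
# one autonomous car generates 4 TB in 1.5 hours of driving.
data_tb = 4                                   # terabytes generated
hours = 1.5                                   # driving time
rate_mb_s = data_tb * 1e6 / (hours * 3600)    # megabytes per second
print(f"{rate_mb_s:.0f} MB/s sustained")      # roughly 740 MB/s
```

Multiply that by every car on the road at rush hour and the scale of the problem becomes clear.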
Networking professionals call the collection of local areas surrounding devices “the edge,” in contrast to a central “cloud” where processing resources are concentrated. As the edge gets denser with new devices, data is becoming more distributed; there is more of it, and it needs to be processed faster.
“IoT is the internet of things, and those things need to be connected to the edge and to have instant processing and instant reaction,” said Joe Reele, Vice President, Solution Architects at APC by Schneider Electric.
The Bandwidth and Latency Blues
Transferring all this data to the cloud raises two concerns – bandwidth and latency.
Data passes through a network like cars on a highway, and bandwidth and latency are two ways of measuring the trip. Bandwidth represents the capacity of the network – how many lanes are on the highway. As the volume of data trying to pass through that network reaches capacity, its speed will decrease.
“Latency represents how long it takes for information to travel to its destination, so it is dependent on distance and congestion. Think of L.A. at two in the morning and think of L.A. during rush hour. You didn't change the bandwidth - the bandwidth is still the same because [the amount of] lanes, your capacity, is still the same. But the time it takes to get from point A to point B and back to point A is significantly different - and that's latency from congestion,” said David Eckell, a National Market Manager at Graybar who focuses on data centers.
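David's highway analogy can be captured in a toy model: total transfer time is the round trip (latency) plus the time to push the payload through the pipe (bandwidth). All numbers below are illustrative, not measurements:

```python
def transfer_time_s(payload_mb: float, bandwidth_mb_s: float, rtt_ms: float) -> float:
    """Total time to send a payload and hear back: round-trip latency
    plus transmission time determined by bandwidth."""
    return rtt_ms / 1000 + payload_mb / bandwidth_mb_s

# Same bandwidth (same number of lanes), different congestion (latency):
late_night = transfer_time_s(payload_mb=10, bandwidth_mb_s=100, rtt_ms=20)
rush_hour  = transfer_time_s(payload_mb=10, bandwidth_mb_s=100, rtt_ms=200)
print(f"2 a.m.: {late_night:.2f} s, rush hour: {rush_hour:.2f} s")
```

The capacity term never changed; only congestion did, and the trip still got slower.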
Many IoT devices don’t require much bandwidth, comparatively speaking – their main purpose is to sense and respond to a particular change in the environment. A motion-sensitive switch doesn’t need to process a lot of data to turn a light on or off.
Other devices that use augmented reality (AR), virtual reality (VR) and/or artificial intelligence (AI) to overlay or adapt to their surroundings will be more bandwidth-intensive.
All told, however, more devices coming online will mean more data entering the network, eating up bandwidth – and all of these devices are designed to react in real time.
As these applications move beyond convenience to take over critical tasks, latency will become a serious concern. A room can stay dark a second too long, but a car cannot brake a second too late.
The next generation of wireless technologies, 5G and Wi-Fi 6, aim to pave the way for new devices by easing the bandwidth and latency limitations of current networks.
According to Joe, 5G will have three main benefits: “We’re going to be able to put more things into the network. It’s going to reduce latency … so we’re going to have a much faster network. And the third thing, which is the most important thing, is 5G will enable a million IoT sensors to be connected to it in one point.”
Increasing how fast data can travel is an important first step. However, latency is also affected by a second factor – how far data has to travel to be processed.
To return to David’s metaphor of L.A. traffic, 5G will add more lanes to the freeway and raise the speed limit, but it won’t change the distance from downtown to the beach.
The Edge Vs. The Cloud
Modern computing has been dominated by a cloud model, where many users and devices access centralized, shared IT resources. Cloud data centers rely on scale to deliver greater processing and storage capacity at lower costs.
By consolidating and leasing out IT infrastructure, several large cloud providers have enabled a varied ecosystem of connected devices and applications.
However, the further a device gets from the cloud, the worse latency becomes.
As new applications become more latency-dependent, leaders in the data center field like Joe Reele have begun to discuss building up computing resources at the edge, instead of in the cloud, to overcome the last hurdle to near-real-time processing.
The idea of distributing processing and storage resources closer to users is known as edge computing. It involves installing processing capability directly into devices, and locating smaller data centers closer to users for dedicated applications [3].
By shortening the distance data has to travel, edge computing reduces latency and conserves bandwidth across the network.
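Part of that latency is simple physics: light in optical fiber covers roughly 200,000 km per second, so round-trip propagation delay scales with distance no matter how fast the network gets. A quick sketch, with hypothetical distances:

```python
C_FIBER_KM_S = 200_000   # light in fiber travels roughly 200,000 km/s

def propagation_rtt_ms(distance_km: float) -> float:
    """Round-trip propagation delay alone, ignoring congestion and
    processing time."""
    return 2 * distance_km / C_FIBER_KM_S * 1000

cloud_dc = propagation_rtt_ms(2000)   # distant cloud region: ~20 ms round trip
edge_dc  = propagation_rtt_ms(20)     # nearby edge site: ~0.2 ms round trip
```

Moving the data center a hundred times closer cuts this floor a hundredfold – a saving no amount of added bandwidth can deliver.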

Ideally, information that is relevant in the moment and needs to be acted on in real time should be processed at the edge. If the use case can tolerate some delay or the data needs to be stored, aggregated and analyzed at a higher level, it should be sent on to the cloud.
“Cloud computing is more big data and edge computing is more instant data,” said Joe.
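The rule of thumb above can be sketched as a simple routing decision: send a task to the cloud only if its latency budget can absorb the round trip. The 100 ms default round trip below is a hypothetical figure, not a measured one:

```python
def route(latency_budget_ms: float, cloud_rtt_ms: float = 100) -> str:
    """Process at the edge unless the task can tolerate a round trip to
    the cloud for storage, aggregation and analysis."""
    return "cloud" if latency_budget_ms >= cloud_rtt_ms else "edge"

print(route(latency_budget_ms=10))     # collision avoidance -> "edge"
print(route(latency_budget_ms=2000))   # rerouting around construction -> "cloud"
```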
The Future Combines Cloud and Edge
As the sun sets on Smart City, Robin’s self-driving car uses resources from both the cloud and the edge to get her home safely.
Traffic slows a few miles before her usual exit, and her car checks a cloud-based map of the city, taking a second or two to reroute itself around construction. As the new turn approaches, it begins to merge, drifting closer to the white stripes on the pavement – but there is another car in the next lane. Both cars recognize each other and communicate immediately at the edge to avoid a collision.
Later, the car will send the information it collected about the near-collision on to its manufacturer’s cloud storage, where it will be used to improve future vehicles.
Complex IoT applications will handle information with different tolerances for latency and levels of relevance, and will need to leverage a spectrum of complementary computing resources from the cloud to the edge.
“I think the data center landscape in 10 years is going to be a hybrid mesh network between cloud computing and edge computing, intertwined with wireless and 5G. We’re going to see a much different landscape – rather than a centralized data center, I think it’s going to be much more distributed and interactive,” said Joe.

Data centers have traditionally focused on achieving efficiencies through scale, but new applications will also require distributing computing resources based on the needs of different kinds of devices.
“It's more [about] the way we think and design around information at a fundamental level - what information do I have, when do I need to understand it, and what's it going to cost me to do that,” said David.
He points out that when reaction time isn’t the first priority, the cloud will be less expensive to use, and regional data centers have already shrunk average latency below 100 milliseconds in most cases – acceptable for non-critical uses. However, speed is ultimately determined by the slowest link, so across the network latency can increase considerably at peak times.
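The “slowest link” point can be sketched by modeling a path as a chain of hops: the bottleneck hop caps throughput, while every hop's delay adds to end-to-end latency. All figures below are hypothetical:

```python
# Model a network path as hops of (bandwidth in MB/s, delay in ms).
path = [(1000, 2), (100, 10), (400, 5)]          # hypothetical hops

throughput_mb_s = min(bw for bw, _ in path)      # bottleneck link caps throughput
latency_ms = sum(delay for _, delay in path)     # every hop's delay adds up

# At peak times, congestion inflates the delay of any one hop,
# raising end-to-end latency even though capacity is unchanged.
```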
Also, new devices won’t just represent new demands on data, but more data overall, much of which will be valuable for long-term analysis of the environments in which they are embedded. This type of big-picture processing will remain cloud-based work.
In fact, leasing in multi-tenant data centers doubled from 2017 to 2018, largely driven by users seeking hyperscale cloud resources [4].
A fully realized edge-to-cloud architecture will open up choices about where to send data.
But getting there will take building up infrastructure at the edge, and connecting the edge to the cloud.
Fog Connects the Cloud to the Edge
“Edge [computing] is big business today, but in reality we are just getting started,” said David.
Several industry groups have begun to design “fog” solutions to allow consistency and secure communication between various edge and cloud networks.
In 2018, the IEEE Communications Society and the OpenFog Consortium collaborated to release IEEE 1934™, a new standard that is “intended to address the need for an end-to-end, interoperable solution that is positioned along the things-to-cloud continuum.” [5]

The new standard was showcased later that year at the second annual Fog World Congress, a three-day conference in San Francisco for “fog computing leaders and edge influencers.” [6] In the exhibitor area, organizers deployed low-slung, four-wheeled robots a little smaller than a floor tile that scurried around the feet of attendees and created a map of the venue in real time. The demonstration showed that coordination between fog systems from various domains was possible using the OpenFog Reference Architecture outlined in IEEE 1934.
In addition to tech solutions, the growth of edge capabilities in the U.S. will require a significant investment in infrastructure – especially the fiber optic cabling that enables 5G networks to transport data between edge devices and back to the cloud.
“5G at scale, at full deployment, is years off in this country,” said Joe. “Out of the top 25 countries in the world with fiber, where do you think the United States is? The answer is the United States is 23rd out of 25, and the edge and 5G won’t work without fiber. … In order for us to get to 5G, we’re going to have to put a lot of fiber in the ground, and that fiber has to be supported with critical infrastructure.”
APC has already begun to work with customers to build computing solutions at the edge – including a super-cooled data center for a university health sciences facility, located on the shaded roof of a nearby parking garage [7].

Future applications will require investment on a city-wide, and even country-wide, scale. Imagine Robin taking a weekend trip out of Smart City: self-driving cars will require edge resources at regular intervals to stay connected. Just as the U.S. once had to build roads across the country, it will now have to build a new layer of infrastructure to make those roads smart-ready.
The corresponding transformation of the data center landscape will require the coordination of wide-ranging expertise in design, infrastructure, and technology.
“There’s no one provider that can do it all,” said Joe. “No one person can do all of the solution. It’s an ecosystem, like, for example, the [relationship] between APC and Graybar. That’s a [relationship] that together creates a more powerful and impactful solution than any one of us can do by ourselves.”