Computing on the Edge
Every single day, more data is produced, exchanged, processed, and transmitted around the world, and the volume is growing exponentially. Intel has estimated that by 2020 the average internet user will be consuming 1.5GB of traffic every 24 hours, while daily video traffic will have reached a staggering 1PB (1 petabyte = 1,000 terabytes). The booming expansion of IoT devices, such as smart home technology and autonomous cars, is the driving force behind this monstrous growth prediction, with more and more data being collected and processed by every device we purchase.
When combined with the prospect of 5G technology and super-fast worldwide connections, data centres are already groaning at the thought of the massive strain likely to be placed on their processing capacity. And while data centres have already begun to upgrade in preparation for the onslaught of data and analytics, a new type of processing is on everyone’s lips – Edge Computing.
The idea sounds relatively simple: edge computing is a distributed information technology architecture that allows client data to be processed at the “edge” of the network, as close to the source as possible. By utilizing intermediary and periphery servers, mobile services can get faster responses without putting stress on core network servers. Time-sensitive data is processed on locally based intermediary servers, while less time-sensitive data is sent to the cloud for big data analytics and long-term storage.
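The routing decision at the heart of this architecture can be sketched in a few lines of Python. The `Reading` type and the `"edge"`/`"cloud"` labels below are hypothetical, purely to illustrate the split between time-sensitive and deferrable data:

```python
from dataclasses import dataclass

@dataclass
class Reading:
    """A single piece of sensor data produced by a device (illustrative)."""
    source: str
    value: float
    time_sensitive: bool

def route(reading: Reading) -> str:
    """Decide where a reading is processed.

    Time-sensitive data is handled on a nearby edge node for a fast
    response; everything else goes to the cloud for big data analytics
    and long-term storage.
    """
    return "edge" if reading.time_sensitive else "cloud"

print(route(Reading("brake-sensor", 0.9, time_sensitive=True)))    # edge
print(route(Reading("odometer", 42107.0, time_sensitive=False)))   # cloud
```

In a real deployment the decision would of course weigh more than a single flag (payload size, link quality, privacy rules), but the principle is the same: only the data that can wait makes the long trip to the cloud.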
Sending massive amounts of raw data over a network puts a huge strain on network resources, so it makes sense to process data closer to its source rather than transmit it to a data centre and then back again. There have also been suggestions that smart devices could be designed to transmit only relevant data, further reducing the strain on networks as the volume of data increases over the next decade. For example, an autonomous vehicle could monitor its oil level or brake fluid and transmit that data only when the levels fall outside pre-set limits. Similarly, a Wi-Fi security camera could use edge analytics and transmit data only when a certain percentage of pixels change between frames. As IoT devices grow in processing power, each device could effectively become a mini data centre.
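To make the idea concrete, here is a minimal Python sketch of this kind of edge filtering. The fluid limits and the 5% pixel-change threshold are illustrative assumptions, not real vehicle or camera specifications:

```python
def should_transmit(level: float, low: float, high: float) -> bool:
    """Send a reading upstream only when it falls outside pre-set limits."""
    return not (low <= level <= high)

def frame_changed(prev: list, curr: list, threshold: float = 0.05) -> bool:
    """Send a camera frame only when the share of changed pixels
    between two frames exceeds a threshold (assumed 5% here)."""
    changed = sum(1 for a, b in zip(prev, curr) if a != b)
    return changed / len(prev) > threshold

# Oil level inside the normal range: nothing leaves the device.
print(should_transmit(0.72, low=0.5, high=1.0))   # False
# Brake-fluid level below the lower limit: transmit an alert.
print(should_transmit(0.31, low=0.5, high=1.0))   # True
```

Filters like these run on the device itself, so the network only ever sees the handful of readings that actually matter.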
A self-driving car could well house 200 CPUs or more – “that’s a data center on wheels,” commented venture capitalist Peter Levine at the Wall Street Journal’s CIO Network event in San Francisco earlier this month. He believes a self-driving car will simply generate too much data for it to be practical to constantly send it out to the cloud and back – especially since the car will need that data immediately. Then consider the vast field of other intelligent devices that will also be trying to use the cloud:
“You will never have enough bandwidth and speed on the network for that,” Levine declared. Each device will therefore handle its own processing and storage, sending only the relevant data to the cloud, which becomes the brain of the system – analysing all the data and sending what it learns back to every device. The devices can then learn not just from themselves, but from each other.
“I can’t imagine there will ever not be a place for cloud computing,” Diana McKenzie, the CIO of Workday, told CIO.com. She envisions a future where cloud and edge computing work in tandem, complementing one another. Edge computing is likely to relieve the cloud of the work it currently does processing data for individual devices, freeing it to focus on big data analytics across whole fleets of devices.
The concept of edge computing has become increasingly viable thanks to the sheer number of IoT devices being produced, the ever-falling cost of computer components, and the advent of mobile computing.
However, while edge computing offers numerous benefits, the technology comes with trade-offs. It can cut response times to mere milliseconds and combat network latency and bottlenecks, but there are serious concerns around network security and licensing that need to be addressed before edge computing can be considered fully viable.
By Josh Hamilton
Josh Hamilton is an aspiring journalist and writer who has written for a number of publications covering cloud computing, fintech and legaltech. Josh has a Bachelor’s Degree in Political Law from Queen’s University in Belfast. His studies included Politics of Sustainable Development, European Law, Modern Political Theory and Law of Ethics.