The Edge

What is ‘The Edge’?

For over a decade now, experts have tried to define ‘the edge’ without being able to agree on what it means and what it entails. One thing that can be agreed on is that the edge is fluid and constantly developing. It is driven by the need to bring computing and communications closer to where they are required, not only to provide regional connectivity but to make the consumption of computing and communications as much of a utility as power and water.

What exactly is driving the need to decentralise away from the large data centres running cloud infrastructure, mostly situated in metro areas and cities? Why is there a demand to move computing closer to where it is required?

These questions are driving a new outlook on a distributed architecture, one where speed and ease of use are paramount to deploying developing technologies such as IoT, self-driving cars and AI.

Speed is at the Heart of It All

Throughout history, humankind has been obsessed with speed. From the fastest man, animal, car, plane and boat to the fastest computer, internet speed and processing time, we have been on a never-ending quest to be faster.

One key driver of how the edge is developing and evolving is speed, or more precisely, latency. In computing terms, the Oxford Dictionary defines latency as ‘the delay before a transfer of data begins following an instruction for its transfer’. So, what do latency and our quest for speed have to do with the edge?

There are several examples that show how critical it is for data to begin transmitting quickly once that instruction has been received, but to illustrate it in this article we will take a closer look at AI in the form of autonomous vehicles, or self-driving cars.

There are several emerging technologies when it comes to autonomous vehicles, with many prominent companies in a race to develop a commercial version for widespread use and adoption. Critical to the mass adoption of this technology is latency.

The scientific community has published several papers on the reaction time of the human brain, particularly around driver reaction time. In many cases, the speed with which a person can respond, or reaction time, is key to assigning liability. Accident reconstruction commonly uses a standard reaction time of 1.5 seconds. A multitude of factors makes up that time: mental processing time, or how long it takes to perceive that a signal has occurred and to decide upon a response; movement time, or how long it takes to perform the required movement, such as applying the brakes or turning the steering wheel; and device response time, the time it takes for the mechanical device to engage.
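To put that 1.5-second figure in perspective, a short back-of-envelope calculation shows how far a vehicle travels before the driver has even begun to react. The vehicle speed here is an assumed, illustrative value, not a figure from any study:

```python
# Distance travelled during driver reaction time (illustrative figures only)
speed_kmh = 100.0   # assumed vehicle speed, for illustration
reaction_s = 1.5    # standard reaction time used in accident reconstruction

speed_ms = speed_kmh * 1000 / 3600   # convert km/h to m/s (~27.8 m/s)
distance_m = speed_ms * reaction_s   # distance covered before any braking begins

print(f"{distance_m:.1f} m")         # ~41.7 m
```

At 100 km/h, more than 40 metres pass before the human response even starts, which is why shaving fractions of a second off that time matters so much.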

Self-driving Cars and Saving Lives

This brings us back to speed. If there is a way to decrease reaction time, we can avoid accidents and the loss of life that follows. This is one of the key drivers behind developing autonomous vehicles, specifically for use on public roads: to save lives and make our roads safer.

Artificial intelligence (AI) used in self-driving cars relies on receiving those signals to decide on a response. It can act within milliseconds, a fraction of the time it takes the human brain to respond, reducing the time it takes to decide and act. Critical to reducing this time is the transmission of data to the device so it can process information such as the location, speed and vector of the other cars around it.

The transmission of this data, starting with other cars broadcasting their location and ending with it reaching the vehicles around them, must be as quick as possible, to reduce the time it takes for the AI to have all the information it requires to make a decision.

Having the processing power as close to the source as possible reduces latency, the time it physically takes for information to be transmitted and received. Using a centralised data centre in a metro area to receive, process and transmit data back to a source in a regional area increases latency, which increases the time it takes for the device to receive the information it requires to decide. In some cases, the consequences could be fatal.

Placing the compute as close as possible to both the source and the recipient is key to reducing latency. A couple of milliseconds could be the difference in saving lives, and this specific example is a good illustration of how the edge is developing, adapting and being defined as the need arises.

Helping You Get to The Edge

At XCircle, we understand the need to help communications and compute providers develop the supporting infrastructure to house their equipment at the edge, as close to the source as possible, in a variety of configurations and solutions.

Our wide range of products, from single-rack modular data centres, telecommunications equipment rooms and enclosures to larger modular edge data centres, allows these companies to bring their equipment closer to where it is needed and to adapt to the ever-changing requirements and redefinition of distributed communications and computing.

Our in-house design, engineering and manufacturing capabilities at our premises in Wangara allow us to quickly develop and deliver infrastructure solutions to suit your requirements and deploy your equipment as quickly as you need it.

Contact us today, and let us help you bring communications, intelligence and computing closer to where they are required.