Q&A: Open source on the edge
Between the cloud and our devices resides the edge, a decentralized layer that is expected to play a key role in the ongoing evolution of computing. Andrew Randall, VP of Business Development at Kinvolk, offers his thoughts on edge computing, touches on who is using early forms of it today, and explains why, eventually, every industry will find applications for it.
What is edge computing? How do you define it?
“Edge” is a broad term for a number of related trends and deployment architectures. In general, I see it as a decentralized computing layer that sits between end devices and centralized cloud compute and storage. It employs many of the principles of cloud computing - like automation and shared general-purpose resources - while being physically located outside the large data centers that power the cloud. One key common characteristic is that edge compute nodes may have only intermittent connectivity. Physical server attributes like size, power consumption, and heat dissipation are also frequently critical, leading to the use of efficient architectures like ARM processors.
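The intermittent-connectivity point is easiest to see with a concrete sketch. Below is a minimal, hypothetical store-and-forward buffer of the kind an edge node might use to hold telemetry while its uplink is down; all class and field names here are illustrative, not drawn from any real edge platform:

```python
import collections

class TelemetryBuffer:
    """Hypothetical store-and-forward buffer for an edge node.

    Readings are queued locally while the uplink is down and drained
    in arrival order once connectivity returns, so an outage does not
    lose data. Purely illustrative - not from any real edge product.
    """

    def __init__(self, max_readings=10_000):
        # Bound the queue so a long outage cannot exhaust node memory;
        # deque with maxlen silently drops the oldest readings first.
        self._queue = collections.deque(maxlen=max_readings)

    def record(self, reading):
        self._queue.append(reading)

    def flush(self, uplink_ok):
        """Return (and drain) buffered readings if the uplink is up."""
        if not uplink_ok:
            return []
        sent = list(self._queue)
        self._queue.clear()
        return sent

buf = TelemetryBuffer()
buf.record({"sensor": "temp", "value": 21.5})
buf.record({"sensor": "temp", "value": 21.7})
offline = buf.flush(uplink_ok=False)   # offline: nothing leaves the node
online = buf.flush(uplink_ok=True)     # back online: drain in order
```

The bounded queue reflects the physical constraints mentioned above: an edge node has finite local storage, so a policy for what to drop during a long outage is part of the design, not an afterthought.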
How is the open source community addressing the edge computing market?
Pretty much all innovation in infrastructure technology these days is happening in open source (think: Kubernetes, Linux, open networking like SONiC, etc.). Edge is no exception. There are a number of projects addressing each layer of the stack, from reduced-footprint Kubernetes distros like K3s to explicitly “edge computing” platforms like LF Edge’s Project EVE.
At Kinvolk, everything we do is 100% open source, and our flagship Flatcar Container Linux is being adopted for edge applications thanks to its minimal footprint, secure architecture, and an auto-update mechanism that enables zero-touch management.
What do you see as the drivers for this shift toward edge computing?
I’ve been around a few years and seen the pendulum swing from mainframe to personal computing, to client/server, and to cloud. These shifts are typically driven by improvements in computing capacity (for example, virtualization on commodity servers) and networking (for example, fast wide-area connections, or more recently 4G and 5G wireless).
Edge is the logical next step in that ongoing evolution of computing. It is driven by ubiquitous, connected devices, which in turn drive applications that need resilient, low-latency connections to back-end services - and which generate large amounts of telemetry and other data that need to be processed efficiently. More specifically, the rise of smart homes, manufacturing, autonomous vehicles, personal devices, and an increasingly mobile workforce is driving the consumption of cloud-based services, some of which can be centralized, while others hit the constraints of locality and need to be closer to the end device.
What are the advantages of moving more computing to the edge as opposed to the cloud?
I don’t really see it as an either/or decision. Edge is actually part of a cloud strategy - an enabler, if you will - because it allows companies to move out of legacy data center architectures onto a cloud-based platform spanning cloud and edge, with each service located according to the specific needs of the application.
The advantages of locating services at the edge are performance, reliability, and scalability. Performance, because locating compute closer to where it’s consumed reduces latency. Reliability, because edge computing can be resilient to wide-area network failures. And scalability, because for global deployments generating and processing huge quantities of data, relying on centralized compute becomes a bottleneck.
What are the killer apps for edge computing? Which seem the most promising?
Often people talk about artificial intelligence as the killer app for edge -- but really it’s anything that generates masses of data across a large number of endpoints and needs that data processed close (in time or distance) to the source. Sure, call that AI if you like.
For example, cars today are high-performance, always-connected mobile computing devices, with over a hundred sensors generating around 25 GB of data per hour - and that’s before we add self-driving functions, which will increase that data rate by a couple of orders of magnitude. The car itself can be thought of as an edge computing device; and if you’re an automotive company selling millions of cars in two hundred countries worldwide, you can’t bring all that data back to a centralized data center in the US or Germany. So you have to have a sophisticated, multi-layer approach to distributing your data processing.
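The fleet-scale arithmetic behind that point can be made concrete. The 25 GB/hour figure comes from the example above; the fleet size, daily driving hours, and reading “a couple of orders of magnitude” as ×100 are my own illustrative assumptions:

```python
# Back-of-the-envelope sizing for the automotive example above.
# Only GB_PER_CAR_PER_HOUR is from the interview; the other
# parameters are illustrative assumptions, not real fleet data.

GB_PER_CAR_PER_HOUR = 25          # stated sensor data rate today
SELF_DRIVING_MULTIPLIER = 100     # "a couple of orders of magnitude"
CARS = 10_000_000                 # hypothetical global fleet size
DRIVING_HOURS_PER_DAY = 1         # assumed average daily use per car

daily_gb = GB_PER_CAR_PER_HOUR * CARS * DRIVING_HOURS_PER_DAY
daily_pb = daily_gb / 1_000_000   # 1 PB = 1,000,000 GB (decimal units)

print(f"Today: ~{daily_pb:.0f} PB/day across the fleet")
print(f"With self-driving: ~{daily_pb * SELF_DRIVING_MULTIPLIER:.0f} PB/day")
```

Even under these conservative assumptions the fleet produces hundreds of petabytes per day, which is why hauling everything back to one or two central data centers stops being practical and processing has to be layered across the edge.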
Who is using edge computing today?
In early forms, it’s already quite widely deployed. I’ve seen adoption by enterprises with branch offices - like retailers and banks. And I mentioned the automotive example -- we are working today with a vehicle manufacturer deploying global edge infrastructure.
Which industries will be adopting edge computing in the future?
Hear me out here, but I would argue that, as edge computing becomes as easy to consume as cloud computing, every industry is going to find applications for it. Once they’ve adopted a cloud approach to deploying their applications, moving some services to the edge is an inevitable optimization that will follow.
One interesting “next adopter” is likely to be telcos. As they roll out 5G, and their core network services become containerized, they are looking to build micro-clusters closer to the edge, including in cellular towers and customer premises.
Why is there an opportunity for open source projects on the edge?
Open source tends to be most successful where many organizations benefit from collaborating on a common platform - where there is more to gain from sharing resources to create a reliable base infrastructure than from competing. We’ve seen that in operating systems, container technologies, big data, and databases, among other areas. Edge computing infrastructure has all of these characteristics.
Of course, the alternate argument is that Edge will end up an extension of the cloud vendors’ proprietary control planes - and that’s a future that AWS and Microsoft would be happy with. The reality is that there will be a mix: some applications will be best served with services like AWS Local Zones or Azure Stack, but many will be built on open infrastructure - including open hardware designs by the way, like the 2U Open Compute Project “Sled”.
There are several initiatives operating under the auspices of making edge computing open source -- some from industry groups, some from open source groups. But the market seems incredibly fragmented -- given that, how do you see it developing?
I’m not close enough to the various projects to say how each of them will evolve. I would say that edge computing is a very broad term that encompasses many different deployment models, and therefore - unlike cloud computing where value is created by deploying more or less homogeneous workloads at massive scale - there is scope for multiple approaches to thrive in parallel.