A brief history of decentralized computing

Just as tides shift according to the gravitational pull of the Moon, the distribution of computing resources shifts from centralized to decentralized and back again, according to the pull of new technologies and their use cases.

Over the past decade, we have witnessed a centralizing movement driven by the adoption of cloud storage and cloud computing. Amazon Web Services, Microsoft Azure, and Google Cloud offer not just compute and storage services but also highly scalable applications for a wide variety of tasks, ranging from support for software development lifecycles to transcribing audio to text.

But we are entering a period of rapid decentralization, as mobile computing and the Internet of Things (IoT) extend the ways we apply data collection and computation.

The combination of increasingly powerful decentralized devices, ubiquitous network access, and concentrated compute and storage resources is laying the groundwork for the next paradigm shift in computing.

Early centralized computing
Computing started as a highly centralized technology. Early mainframes like the IBM 700 weighed as much as 16 tons and included 1,700 vacuum tubes. These were designed for batch-oriented business and scientific computation. The nature of the hardware technology limited the adoption of mainframes to large corporations and government agencies that could not only cover the cost of the mainframe but also employ a team of specialists to program and maintain these systems.

When mainframes were the dominant type of computer, computation was centralized. The economies of scale during this period favored having a small number of computers constantly running jobs over having many machines that were not fully utilized.

The economics of mainframes also fit the needs of customers who had relatively simple computations that needed to be applied to a large volume of data (i.e., large by 1950s and 1960s standards). Tracking deposits and withdrawals in bank accounts and tallying census data were the kinds of workloads that were well suited to mainframes.

The advent of decentralized computing
The introduction of personal computers in the 1980s initiated the first major shift from centralized to decentralized computing. Processing moved from the mainframe to desktop devices.

New types of software, including word processors and spreadsheets, helped drive adoption of personal computing. These personal productivity applications soon shared the PC with client-server applications, which performed some computation locally on the client and some centrally on the server.

The client handled user-interface tasks, such as rendering the graphical interface, while storage and back-end processing took place on the server.

Centralizing in the cloud
Starting in the early 2000s, Amazon Web Services helped popularize a new term in the business lexicon: cloud computing. Storage and processing began shifting back to a centralized model.

Instead of a mainframe maintained by the same business that purchased it, cloud computing enabled businesses to leverage the infrastructure of a third party on an as-needed basis. This allowed organizations to avoid the upfront capital costs of acquiring hardware and, in some cases, long-term licensing of enterprise software.

Cloud computing is growing at exceptional rates, but at the same time there is rapid growth in mobile and IoT computing. The confluence of these two factors is enabling more applications to run on the periphery of the infrastructure network. This is known as edge computing.

Beyond the cloud
Consider your favorite maps application, which provides directions from your mobile phone. The phone — if we can still call it that — receives GPS signals and computes your location, which is displayed in an app. Most of the location calculations and interface updating are done locally, but data is also being pushed to the cloud.

Google Traffic, for example, uses data collected from Google Maps users to analyze traffic conditions and inform users about areas of congestion and slowdowns. This is possible because a centralized computing platform collects and analyzes data from a large number of users in real time.

Google Traffic and Google Maps employ a common pattern in edge computing: Data is collected locally and information needed in real time by the user is computed on the device. Meanwhile, data is sent to the cloud, where it is combined with other user data to derive additional useful information, which is later sent back to multiple edge devices.
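To make that pattern concrete, here is a minimal Python sketch of the edge/cloud split under the assumptions described above. The names (EdgeDevice, cloud_aggregator, handle_gps_fix) are hypothetical and invented purely for illustration; they are not part of any real maps SDK. The point is only that latency-sensitive work runs on the device, while raw observations are queued for a central service that aggregates data from many users and produces a summary to send back.

```python
import queue

# Hypothetical sketch of the edge/cloud pattern; not a real maps API.

class EdgeDevice:
    """Does the latency-sensitive work locally and queues raw data for the cloud."""

    def __init__(self, upload_queue):
        self.upload_queue = upload_queue

    def handle_gps_fix(self, lat, lon, speed):
        # 1. Compute what the user needs right now, entirely on the device.
        view = self.update_local_map(lat, lon)
        # 2. Queue the raw observation for the cloud; the UI never waits on this.
        self.upload_queue.put({"lat": lat, "lon": lon, "speed": speed})
        return view

    def update_local_map(self, lat, lon):
        # Stand-in for on-device rendering and routing.
        return f"showing position ({lat:.4f}, {lon:.4f})"


def cloud_aggregator(upload_queue):
    # Stand-in for the centralized service that combines observations from
    # many users (here, just averaging reported speeds) into a summary that
    # would be pushed back to edge devices.
    speeds = []
    while True:
        try:
            speeds.append(upload_queue.get_nowait()["speed"])
        except queue.Empty:
            break
    return {"avg_speed": sum(speeds) / len(speeds)} if speeds else {}


if __name__ == "__main__":
    uploads = queue.Queue()
    device = EdgeDevice(uploads)

    # Local, latency-sensitive work happens immediately on the device.
    print(device.handle_gps_fix(37.7749, -122.4194, speed=12.0))
    print(device.handle_gps_fix(37.7750, -122.4190, speed=8.0))

    # Aggregation happens centrally, off the device's critical path.
    print("summary sent back to devices:", cloud_aggregator(uploads))
```

In a production system the upload queue would be an asynchronous network call and the aggregator would run as a cloud service, but the division of labor is the same: the device answers the user immediately, and the cloud derives insights that no single device could compute on its own.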

There are advantages to pushing more computation to the edge. It eliminates the latency involved in sending data to the cloud for processing and receiving the results back. This is especially important when the device is interacting with a human or needs to respond immediately to changing conditions. Keeping computation at the edge also reduces the load on network infrastructure and leaves centralized compute and storage resources available for other application needs.

The emerging edge
We will see increasingly decentralized edge computing. Gartner estimates that the share of computational processing happening outside the cloud, roughly 10 percent today, will grow to 50 percent by 2022. There is no single driver for this kind of growth — a wide range of applications will contribute to the increasing importance of edge computing.

Recent advances in artificial intelligence, especially in language processing and image recognition, will power new applications and ways of interacting with computing devices. Transportation professionals can use AI-enabled applications to analyze images of traffic and respond in real time without having to communicate with a centralized system. Translation apps turn mobile phones into digital translators.

These applications are built on machine learning and language models that require significant computational resources. The development of AI chips, like Google's Tensor Processing Unit (TPU) and Nvidia's recently announced Turing graphics processing unit, will enable more AI processing on edge devices.

The term "consumer electronics" used to refer to televisions and stereos, but today it also covers smart appliances. This year's Consumer Electronics Show (CES) included a refrigerator that tracks expiration dates and helps you find the best price on items on your shopping list.

The same technology that powers smart refrigerators is making inroads into industry as well. The Industrial Internet of Things is the set of edge and cloud computing technologies that enable better monitoring of industrial equipment, collect more data to drive predictive analytics, and foster innovation in industrial processes.

The first decentralization wave changed the way business offices operated and brought personal productivity tools to consumers. The second wave of decentralization will be more ubiquitous. Mobile devices, home appliances, self-driving automobiles and trucks, and industrial equipment are all incorporating edge computing technologies.

The rise of edge computing does not come at the expense of cloud computing, which will continue to grow. But edge computing will bring digital technologies to new areas and drive innovation across a wide range of industries.
