Buzzwords. Now and then, we see or hear one pop up here and there. Investors hail it as the next big thing. Vendors aggressively push its “solutions.” And businesses, afraid of being left behind, rush to implement it.
Not all buzzwords are deceptive. Some do live up to the hype, given enough time. The Web is a good example, and blockchain may yet be another. Until then, though, the technologies are hyped to hell. If a modern electronic device does not use AI or machine learning, is it even worth buying?
Here is an addition: edge computing.
Edge computing has been doing the rounds for a long time now. But it was especially hot during the pandemic, when we functionally lived online. It ticks all the boxes listed above: it pops up now and then on Twitter, Reddit, and business and technology podcasts. Investors claim the market is already worth billions. And cloud companies are aggressively pushing edge solutions, claiming that they will make companies “more innovative, successful, and proactive for their customers.”
Is that true? Well, yes and no.
What is edge computing?
As is the case with any hype, there exists a lot of confusion over what we mean by edge computing. Here is a dangerously simple definition: edge computing is the successor of cloud computing.
If you have streamed high-definition content lately, you may think cloud computing is sufficiently fast. But fast means different things for different use-cases. It may be fast enough for watching videos on Facebook and YouTube, but perhaps not for building and iterating on applications and integrating feedback in real time, or for acting on insights in real time to make your business more agile and its decisions better informed.
Cloud computing is centralized: the central computer that handles the computation and distribution of data could sit thousands of miles away. In such an arrangement, where you, the user, sit at the edge of the network, the delay may only amount to milliseconds. But when the stakes are high, even milliseconds count.
And that is what edge computing entails: bringing not just distribution but also computation to the edge, the end user. It is the cloud in your backyard, making delay virtually non-existent.
Edge computing: why should I care?
Well, because it has some incredible benefits.
1. Low latency
Latency refers to the time it takes to access data. And when the database is right in your backyard, that time drops dramatically. Many users might be indifferent to the reduction; many may not even notice it. But for time-critical businesses, a dramatic decrease in latency can mean a dramatic increase in user engagement.
Online gamers, for example, would benefit immensely. Edge computing could even save lives, for instance by using real-time data to predict disasters more accurately.
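To see why distance matters, consider that the speed of light puts a hard floor under round-trip time, no matter how fast the servers are. The sketch below computes that floor for a far-away cloud region versus a nearby edge node; the distances and the fiber propagation factor are illustrative assumptions, not measurements of any real deployment.

```python
# Illustrative sketch: physics alone bounds round-trip latency.
# All constants below are assumptions for the sake of the example.

SPEED_OF_LIGHT_KM_S = 299_792  # speed of light in vacuum, km/s
FIBER_FACTOR = 0.67            # light travels at roughly 2/3 c in optical fiber

def min_round_trip_ms(distance_km: float) -> float:
    """Lower bound on round-trip time over fiber, ignoring routing,
    queuing, and processing delays entirely."""
    one_way_s = distance_km / (SPEED_OF_LIGHT_KM_S * FIBER_FACTOR)
    return 2 * one_way_s * 1000  # there and back, in milliseconds

# A distant cloud data center vs. a nearby edge node (assumed distances)
cloud_rtt = min_round_trip_ms(3000)  # ~30 ms floor before any work is done
edge_rtt = min_round_trip_ms(10)     # well under 1 ms
```

Real-world latency is higher than this floor because of routing hops and server-side processing, which is exactly why moving computation closer to the user is the only way to shrink the physics term.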
2. More bandwidth
By 2025, we will generate an estimated 175 zettabytes of data, and IoT devices will produce a large chunk of it. IoT devices will be everywhere. They already are: phones, laptops, watches, earphones, home devices, cars, appliances, even sunglasses.
But soon, we will also see smart traffic lights, streetlights, sewers, factories, oil rigs, and whatnot. Dedicated edge clouds will let many more devices connect to the internet.
3. Wide access
More devices will connect to the internet. More importantly, more devices from farther places, the edges, will connect to the internet. Since edge computing is decentralized, there is no single portal to the internet; instead, each node has its own.
Assuming the arrangement scales, edge computing will extend access to data far and wide.
4. Real-time analytics
In a market that evolves this rapidly, insights lose value with time. That points to perhaps the most important benefit of edge computing: real-time analytics and insights. Unlike cloud computing, edge computing does not separate data processing from distribution. Traditionally, data is first generated by the end user, say, a customer; routed to a data center; and only then accessed by developers, at which point analysis begins.
Edge computing, instead, combines distribution and processing, allowing firms, in investment or ESG research for example, to receive data, process it, extract insights, and ship solutions with little or virtually no delay. When feedback is this quick, companies can iterate and adapt to changing needs far more efficiently.
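One common way this plays out in practice is local aggregation: an edge node processes raw readings where they are generated and forwards only compact summaries upstream, so the insight is available on-site immediately. The sketch below shows the idea; the function name, the alert threshold, and the synthetic readings are all assumptions made for illustration.

```python
# Hypothetical sketch of edge-side aggregation: reduce a window of raw
# sensor readings to one small summary payload instead of shipping every
# data point to a distant data center. Threshold and data are illustrative.

from statistics import mean

def summarize_window(readings: list[float]) -> dict:
    """Turn a window of raw readings into a compact summary that an
    upstream service can act on."""
    peak = max(readings)
    return {
        "count": len(readings),
        "mean": mean(readings),
        "max": peak,
        "alert": peak > 90.0,  # assumed anomaly threshold
    }

# 1,000 raw readings collapse into a single payload; only the summary
# (and the alert decision, made locally) needs to cross the network.
window = [72.0 + (i % 50) * 0.5 for i in range(1000)]
summary = summarize_window(window)
```

The design choice here is the point: the decision (the `alert` flag) is made at the edge, with zero network round trips, while the data center still receives enough context to track trends.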
Long story short, edge computing will make businesses more effective. It will save time and, therefore, money. Engagement will increase, and so will revenue. The stunning combination of speed and efficiency will empower a wide variety of breakthroughs.
Well…maybe not.
Cutting-edge or “edge-washing?”
Okay, we get it. Edge computing is a big deal. Better still, combine it with machine learning, AI, 5G, and every other fancy technology that has captured the internet.
The prospect certainly is exciting. Think about it: we will finally be able to leverage the billions of data points we generate every day. An oil rig alone has 30,000 sensors. The resulting insights would let the industry optimize resources to an unprecedented degree.
And that is just one use-case. There are dozens more—in the automotive industry, agriculture, healthcare, material design, finance, telecommunication, and education. Essentially, edge computing can benefit any industry that generates mountains of data. Which industry does not?