At the end of a busy week of Akamai platform updates, edge computing rises as a critical focus. Akamai has been operating services at the edge for over 20 years, from content and media delivery, app and IoT optimization, to cloud and enterprise security. Now we are excited to open up the full potential of our platform to developers with EdgeWorkers, allowing companies to run their own serverless code on our edge nodes. Let’s explore why we at Akamai consider this to be so important, and why edge computing is gaining momentum in the industry.
Why Take It to the Edge?
We see edge computing as the next and natural paradigm shift in IT, ringing in a new wave of decentralization. Over the past decade, the industry has gone through two seemingly opposing trends. On one side, there is the centralization and consolidation of infrastructure in (private, public, or hybrid) cloud data centers. On the other side, among end users, we have witnessed an explosion in both the diversity and the local distribution of client devices, driven by high-capability mobile devices and widely available wireless networks. This has created its own set of challenges that edge computing can address in ways the cloud or traditional data centers cannot.
The distances between a small number of big centralized data centers and billions of mobile end users distributed around the globe are too great to allow for adequate performance — whether to support latency-critical applications of any kind, or simply to deliver highly personalized and responsive end-user experiences.
And while the demand for such personalized experiences has skyrocketed, there is at the same time a growing concern about holding and processing all user-centric data, including personal data, in big centralized data centers. The complete delegation of control to the cloud requires unilateral trust from clients in the cloud, often in direct conflict with new data protection regulations and Zero Trust principles.
While one of the key drivers for consolidating more and more services and functions in the cloud was cost reduction, the necessary interactions between end users and the cloud have led to massive network round-tripping and an inflation of traffic, storage, and compute costs on the cloud itself.
Edge computing offers significant potential to address these problems. By pushing the frontier of applications, data, and services away from centralized nodes to the periphery of the network, edge computing brings data, applications, insights, and decision-making closer to the users and “things” that act upon them. It puts control and trust decisions to the edges and allows for novel, and more human-centered, applications and experiences while minimizing the transfer of personal data. Round-tripping, storage, and compute requirements for the cloud are minimized, and so are the associated costs.
Edge Computing Is Serverless Computing
What makes edge computing even more intriguing and easier to adopt is that it also means serverless computing — at least when companies employ a solution like our EdgeWorkers, which takes care of everything that is needed to distribute and run the code. No hardware is needed, and there is no runtime environment or OS to maintain. Developers and IT organizations also don’t need to worry about scalability, availability, performance aspects such as cold start times, or the actual distribution of their code on the edge network — and, obviously but importantly, they don’t need to own an edge network.
Building and maintaining a sufficiently large and distributed network of edge nodes is an undertaking that is not feasible, and doesn’t make business sense, even for companies large enough to run their own private cloud. Akamai operates the largest and most widely distributed edge network on the planet. By opening this platform up for edge computing, we can take care of all the major aspects needed to run your code. Developers should be able to focus only on what they do best: write code, innovate, and create applications that provide value to end users and differentiation for the company.
The New Paradigm: Put Your Code Where It Runs Best
What makes edge computing stand out from so many trends of the past is that it is decisively complementary, not competitive, to the trend it seems to oppose. Private, public, and hybrid clouds will remain a crucial part of infrastructure, but edge computing will be an increasingly important additional place where code can run.
In other words, the paradigm shift is not about moving everything to this new location. The new paradigm is that it’s best to distribute workloads and data to the places where they run best. Do you need to keep distances, traffic, and latency between data and its consumers low? Is the application required to minimize distribution and centralization of sensitive data, such as personally identifiable information (PII)? Do you plan to utilize data and insights based on user context and location to make near real-time decisions for personalization? If your requirements are anywhere close to these, then the best place for your code is almost always the edge.
Companies should not think of edge computing as a technology that requires intrusive changes and a radical deviation from existing practices — challenges that led many to hesitate adopting cloud computing. Instead, they should look at it as an additional tool that doesn’t replace, but complements and improves existing systems, applications, and concepts. After all, most organizations already use some form of edge technology for caching, monitoring, or protection. And if that can be extended to execute custom code, use cases for edge computing can be identified and implemented one by one, and without the major rollout pains the move to the cloud has caused for many.
This is exactly what we are doing for companies with Akamai EdgeWorkers. Our customers use EdgeWorkers today to cut down wait times for their end users, by decreasing latency from seconds to single-digit milliseconds for geolocation-based personalization services, or by performing consent evaluation mandated by regulations locally at the edge rather than by calling back to the cloud. Others perform URL and routing transformations directly at the edge to optimize caching and minimize round-tripping, resulting in shorter load times and decreased network traffic and compute cycles at the origin servers. You can find detailed blog articles about some of these use cases in the links below.
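To make the URL-transformation use case concrete, here is a minimal sketch in plain JavaScript of the kind of query-string normalization an edge handler might apply before a request hits the cache. The function name and the parameter allow-list are illustrative assumptions, not part of the EdgeWorkers API; the point is that equivalent requests collapse onto one cache key, so fewer requests ever travel back to the origin.

```javascript
// Sketch: normalize a request URL at the edge so that equivalent
// requests map to the same cache entry. The allow-list and sorting
// rules below are illustrative assumptions, not an Akamai API.
function normalizeUrl(rawUrl) {
  const url = new URL(rawUrl);

  // Drop parameters (e.g., tracking tags) that don't affect the
  // response, so they can't fragment the cache.
  const allowed = ['page', 'lang', 'sort'];
  const kept = [...url.searchParams.entries()]
    .filter(([key]) => allowed.includes(key));

  // Sort the remaining parameters so ?a=1&b=2 and ?b=2&a=1
  // produce identical cache keys.
  kept.sort(([a], [b]) => a.localeCompare(b));

  url.search = new URLSearchParams(kept).toString();
  return url.toString();
}

// Example: tracking parameter stripped, remaining keys sorted.
// normalizeUrl('https://example.com/list?utm_source=x&sort=asc&page=2')
//   → 'https://example.com/list?page=2&sort=asc'
```

In an actual EdgeWorkers deployment, logic like this would run inside a request event handler on the edge node, with the transformed URL used for cache lookup and origin forwarding.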
This is just the beginning, and the possibilities for custom code at the edge are almost infinite. The Akamai Intelligent Edge Platform is the largest distributed edge network, and it’s now open for developers to run their own code, serverless and with zero effort.
Just bring your code. We take care of the rest.
If you want to get started on serverless computing at the edge, you can sign up for EdgeWorkers.
We’ve created a number of how-to posts and user stories that detail how Akamai customers are utilizing the new capabilities in EdgeWorkers and Image & Video Manager to provide better digital experiences to their end users. EdgeWorkers will also be a major topic at our upcoming Edge Live virtual conference.
There will be more opportunities to interact with us on this and more at Edge Live | Adapt on November 10 and 11. Sign up to see how customers are leveraging these improvements, engage in technical deep dives, and hear from our executives about how Akamai is evolving for the future.
*** This is a Security Bloggers Network syndicated blog from The Akamai Blog authored by Lelah Manz. Read the original post at: http://feedproxy.google.com/~r/TheAkamaiBlog/~3/mVLSjDPe_Zg/moving-to-the-edge-an-outlook-into-a-new-era-of-computing.html