Today, everyone wants to run to the edge, and hopefully not fall.
Last year's buzzwords involved AI, ML, and various combinations of the two. Today, the push is to drive all developers to the edge. This article explores the primary considerations when stepping to the edge. Edge deployments provide increased freedom to develop latency-sensitive applications, reduce cloud costs by offloading processing, and help reduce operational overhead. When a customer says they require development on the edge, with edge nodes, edge operations, and multiple other edge-prefixed nouns, one should ask the persistent question: what are the development and operations requirements at the edge?
This question leads to several good underlying ones. First, what makes operating at the edge different from the local environment? Next, how do I start safely and ensure my deployments are reliable and secure? Third, how do I integrate my existing operations to maintain stability? And finally, how do I get the right data back from the edge to guarantee my operations remain resilient? Examining each question shapes a structure that guides toward resilient, scalable, repeatable, and secure edge deployments. Beyond those questions, one should also consider the best platform to support multiple edge deployments alongside centralized management.
Edge computing is a paradigm in which networks and devices are characterized by their proximity to the user.
One might wonder whether all computing is then edge computing, since every function inherently starts at the user and moves outward. The distinction lies in the second half of the definition: processing data closer to where it is generated enables higher speeds and improved real-time results. The actual edge difference might be better stated this way: the location where data is collected and processed is significantly different from the location of the data's end user.
That definition throws a couple of wrenches into the standard approach.
The first is that disconnection requires more capability for things to work independently at a distance. Kubernetes and containers help reach this goal because they are designed to be distributed, immutable, and ephemeral. The edge can mean a single difficult location or the thousands of locations businesses often need to manage daily functions. Traditional practice sent engineers to those sites regularly to guarantee operations, but stacking centrally managed containers through the cloud can eliminate that expense. Rather than keeping an engineer on site, edge operations can rely on central orchestration of containers at the edge. Building multiple control planes for localized edge functions provides even more flexibility.
Each control plane can optimize for functions within its domain while remaining connected to the central plane's immutable, known-good container set. As those functions flow out to the edge, each customer site maintains consistency while maintenance is managed centrally rather than from the edge location. Each site can be rebuilt, reloaded, or repaired from a distance rather than by moving engineers on site. The next challenge becomes how to move data from those distributed locations to best conduct central management of the distributed edge.
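Before turning to data, here is a minimal sketch of what managing an edge site from a distance can look like, using the official Kubernetes Python client. The context name edge-store-042, the node label node-role/edge, and the image tag are illustrative assumptions, not a prescribed setup.

```python
# Sketch: push a known-good container image to one edge cluster's control
# plane from the center. The context, label, and image names are assumptions.
from kubernetes import client, config

def deploy_to_edge(context: str, image: str) -> None:
    """Apply a small Deployment to the edge cluster named by `context`."""
    config.load_kube_config(context=context)  # one kubeconfig entry per site
    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="inventory-agent"),
        spec=client.V1DeploymentSpec(
            replicas=1,
            selector=client.V1LabelSelector(match_labels={"app": "inventory-agent"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "inventory-agent"}),
                spec=client.V1PodSpec(
                    # Pin the workload to nodes labeled as edge hardware.
                    node_selector={"node-role/edge": "true"},
                    containers=[client.V1Container(name="agent", image=image)],
                ),
            ),
        ),
    )
    client.AppsV1Api().create_namespaced_deployment(
        namespace="default", body=deployment
    )

deploy_to_edge("edge-store-042", "registry.example.com/inventory-agent:1.4.2")
```

Because the image is immutable and versioned, rerunning the same rollout restores a damaged site to a known-good state instead of requiring a visit. (A true rebuild would patch or replace the existing Deployment rather than create it; the sketch shows only the initial push.)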
One key to implementing edge technology is processing data as close to the edge as possible and moving smaller data sets back to central management. Acting closer to the edge, as opposed to in a centralized remote data center, reduces bandwidth and latency for many functions. A common example of edge processing is retail: distributed stores record prices, inventories, and timesheets locally but aggregate data back to a central site that manages composite revenues, insurance payments, and money movement for the branches. Scaling up from that initial example, one can envision edge processing helping with smart cities, military intelligence applications, autonomous vehicle fleets, or any operations coordinated across a wide geography.
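As a hedged illustration of that process-locally, ship-summaries pattern, the sketch below reduces raw point-of-sale records to per-department totals and sends only the small aggregate upstream. The endpoint URL, store ID, and record fields are assumptions made for the store example.

```python
# Sketch of edge-side aggregation: raw rows stay local, only the small
# summary crosses the network. Endpoint and field names are hypothetical.
import json
import urllib.request
from collections import defaultdict

def summarize(transactions: list[dict]) -> dict:
    """Reduce raw point-of-sale records to per-department totals."""
    totals: dict[str, float] = defaultdict(float)
    for tx in transactions:
        totals[tx["department"]] += tx["amount"]
    return {"store_id": "store-042", "totals": dict(totals)}

def ship_to_center(summary: dict) -> None:
    """Send the aggregate to central management; raw data never leaves."""
    request = urllib.request.Request(
        "https://central.example.com/api/v1/summaries",  # hypothetical endpoint
        data=json.dumps(summary).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(request)

ship_to_center(summarize([
    {"department": "produce", "amount": 12.50},
    {"department": "bakery", "amount": 4.25},
]))
```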
In many instances, keeping raw data at the edge makes sense. In the store, one wants access to which aisles items are purchased from, where they are stocked, and perhaps a customer membership that tracks purchases. Containers at the edge frequently look similar to central ones: many of the functions remain the same, but only the consolidated data returns to the center. As mentioned, edge containers help with low latency, load balancing, and scalability, and containers developed centrally can be launched simultaneously across multiple different edge nodes, as the fan-out sketch below shows.
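Reusing the hypothetical deploy_to_edge helper from the earlier sketch, a simultaneous launch is just a fan-out over kubeconfig contexts; the context names here are again assumptions.

```python
# Fan the same centrally built, known-good image out to every edge cluster.
EDGE_CONTEXTS = ["edge-store-001", "edge-store-002", "edge-store-003"]

for context in EDGE_CONTEXTS:
    deploy_to_edge(context, "registry.example.com/inventory-agent:1.4.2")
```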
There are some negatives to using containers at the edge. Central management can be a key, but managing updates and deployments across multiple regions increases complexity, so the initial deployment must be carefully managed. Deploying to multiple edge locations also increases the attack surface, so security must always be a factor. Part of that security burden is that edge containers will require more network changes, both within the edge location and back to central areas. One should guard those containers and provide tools that signal hostile intent within the edge networks; one hedged example of such a guard rail follows.
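Assuming the edge cluster runs a CNI that enforces NetworkPolicy, a default-deny policy that permits egress only to an assumed central management range is one way to shrink that attack surface. The CIDR, context, and namespace below are illustrative assumptions.

```python
# Sketch: shrink the edge attack surface with a default-deny NetworkPolicy
# that permits egress only to the central management CIDR (an assumption).
from kubernetes import client, config

config.load_kube_config(context="edge-store-042")  # hypothetical edge site
policy = client.V1NetworkPolicy(
    metadata=client.V1ObjectMeta(name="edge-default-deny"),
    spec=client.V1NetworkPolicySpec(
        pod_selector=client.V1LabelSelector(),  # select every pod in namespace
        policy_types=["Ingress", "Egress"],     # deny both unless allowed below
        egress=[client.V1NetworkPolicyEgressRule(
            to=[client.V1NetworkPolicyPeer(
                ip_block=client.V1IPBlock(cidr="10.20.0.0/16"),  # central range
            )],
        )],
    ),
)
client.NetworkingV1Api().create_namespaced_network_policy(
    namespace="default", body=policy
)
```

Anything a compromised edge container tries to reach outside that range is dropped, and the denied flows themselves become a useful signal of hostile intent within the edge network.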
Several ways to improve data handling at the edge include analyzing the right data, inspecting data before storage, preparing models to respond, and strengthening security. One way to make sure all these things are accomplished is to choose a platform. Edge computing frequently grows out of mashing together disparate code and architectures. While this can work, one should consider a common platform: the platform where development starts provides the standard for moving forward. Implementing a Thinnest Viable Platform (TVP) mindset allows a thicker platform at central locations and a leaner, more appropriate one where less processing is available. Correlating those platforms means using a slightly thicker platform to enable more substantial development and test activities at the core while allowing only minimal test and development at the edge. Too many organizations try to shortcut this by using one design architecture centrally, then expanding into the Wild West for their edge solutions.
One suggestion for everyone seeking to maximize containers in managing edge deployments: use the same platform at every location. Multiple types of platforms exist, but in edge computing one should generally start with computing, utility, and data harvesting. As a refresher, a computing platform allows interactions between platform users and third-party developers, a utility platform is generally a free service, and a data-harvesting platform offers a useful service that generates data. In the edge scenario, the computing platform provides the initial development and deployment of cutting-edge tools, the utility platform gives edge users the ability to do more with the nodes, and data harvesting ensures information is processed and distributed. Some organizations use multiple platforms for each, but the minimal processing available at the edge returns one to the TVP solution.
This is where complexity at the center is balanced against scalability at the edge. Consolidating multiple functions into a single platform allows analyzing one set of data at the center while consolidating and managing different data at the edge. Focusing on different data while using the same platform helps reduce complexity, improve reliability, and function more effectively. A common platform also helps scale containers developed at the center when they are deployed to multiple edge locations. Perhaps platform architecture should be the first step in developing at the edge, but most customers prefer to start at the edge and move to the center.
Overall, when building an edge architecture, one should start with a common platform and expand to multiple locations. This philosophy allows using containers to deploy rapidly and repeatably to any geographic location while retaining centralized control. The second part of the architecture concerns which data should be processed at the edge and which should return to the center. Managing data and consolidating platforms creates an edge architecture, preferably built on containers, that can succeed at a wide scope of distributed tasks.