9 min read

Published on 08/29/2021
Last updated on 02/05/2024

The Reality of Edge Application Development

Running applications at the edge, when architected well, offers many advantages, including lower latency and cost effectiveness, as well as easier compliance with privacy and data regulations, especially for data-heavy workloads. By bringing the application closer to where your data is, these advantages translate into faster response times and more timely, valuable insights. However, developing, deploying, and maintaining applications at the edge remains incredibly painful today.

The edge exposes the need for a completely new development framework, replete with new data management APIs and services, novel ways of invoking, slicing, and stitching together artificial intelligence/machine learning (AI/ML) toolchains, and a new set of energy-efficient AI/ML algorithms for computer vision and natural language understanding (NLU). Operationally, working across highly distributed environments requires optimized workload scheduling over incredibly heterogeneous computing architectures, along with simplified application development, deployment, and lifecycle management.

In short, the edge is not your grandfather’s cloud. It is a whole new paradigm for application development and operational simplicity.

Challenges at the Edge

There are myriad challenges software development and site reliability engineering (SRE) teams need to address.

First, there is a need to balance numerous conflicting requirements:

  • of data-heavy applications versus the resource-constrained environments of most edge locations;
  • of simplicity of development and scheduling versus the tens, hundreds, or sometimes thousands of heterogeneous locations that differ in hardware, operating systems, and software;
  • of enabling a distributed edge computing footprint versus the need to synchronize with a common public cloud backend;
  • of leveraging the economics of scale-out computing versus the network costs of moving data around;
  • and of event-driven scaling versus the latency hit of instantiation.

Second, there are significant challenges in the operating model, which is a far cry from the centralized operating model of the cloud: securely installing, configuring, and handling life cycle management, including API, service, and tooling upgrades; event-driven scaling to save on operational costs, with load balancing across all edge locations and the cloud; continuing to operate through, and recover from, potential WAN outages; and complying with local data regulations – all on the assumption that possibly non-technical personnel will handle a large share of the edge's day-to-day operational needs.

Because they are up against so much, many enterprises end up delaying their digitization plans, losing out on valuable insights and new revenue streams.

Cloud-Out vs. Edge-In Approaches

One could approach solutions to these challenges from two diametrically opposite directions. Currently, many of the major cloud service providers (CSPs) are trying to tailor and extend their cloud platforms – for example, Google Anthos or AWS Outposts – to solve the edge development and operational challenges. The rationale is quite simple: bring to the edge the same development and operational paradigm customers are used to in their public cloud, with the promise of seamlessness, especially if they have bet on a single cloud provider. This is called Cloud-Out.

Since large-scale, cost-effective public cloud hardware is built and operated with very different optimizations in mind, it poses a host of challenges when retrofitted to an edge environment.

For starters, no one plans out the power, cooling, and space requirements of an edge site in the manner a cloud data center is planned, which is quite a formal and methodical process. Often those envelopes are simply handed down, and edge compute locations are typically highly resource constrained – think retail, health care, enterprise branches, cameras, robotic entities, etc. Designing outward from these constrained locations, rather than inward from the cloud, is what we call Edge-In.

Second, edge sites may offer very limited overlap with a CSP's supported hardware matrix: Edge-In compute locations come in endless permutations of compute, memory, network, storage, uplink, AI/ML co-processing, and security offload engines.

CSP software stacks are also typically designed for larger workloads, and availability, reliability and performance are handled architecturally via hierarchical designs that don’t translate easily to Edge-In deployments.

Operationally, CSPs' approaches to edge deployments assume uplink and WAN connectivity to deliver their maximum benefit, just as centralized SRE teams drive efficiency in a cloud environment – not a great assumption to make for Edge-In deployments.

Industry experts anticipate that only about 20 percent of applications at the edge will use these Cloud-Out solutions, while a large majority will rely on Edge-In approaches. But the cloud is clearly here to stay, so any Edge-In approach will have to work seamlessly with, though not identically to, a cloud operating model.

Driving New Applications at the Edge

To enable a new slew of edge applications, developers need help with two categories of edge services: one around new APIs for application development, and another for simplified life cycle management.

Creating a new class of highly distributed, data-heavy applications will require software services and APIs for:

  • Robust data-stream filtering, formatting, and processing libraries, with the ability to synchronize between the large number of edge nodes and the public cloud backend if and as needed, but with an eye toward untethered operability (see the sketch after this list).
  • The ability to wire up data processing workflows with a simple-to-use low-code/no-code (LCNC) engine.
  • A satellite/mothership hierarchical architecture for observability, correlation, insights, and predictive actions across all types of telemetry data, including metrics, logs, and traces.
  • Newer AI/ML algorithms to enable new computer vision and NLU applications: algorithms that are energy efficient, work over scale-out, resource-constrained nodes, and join their learning to create a holistic world view, along with energy-efficient video and audio functions.
  • Event-driven compute functions, serverless and more, with both stateless and stateful operations.
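
As a concrete illustration of the first and last items, here is a minimal Python sketch of an edge-side stream processor with untethered operability: events are filtered locally and buffered whenever the cloud backend is unreachable, then synced once connectivity returns. The endpoint, event schema, and threshold are illustrative assumptions, not any particular product's API.

    import json
    import queue
    import urllib.request

    # Hypothetical cloud backend endpoint; an assumption for this sketch.
    CLOUD_ENDPOINT = "https://backend.example.com/ingest"

    class EdgeStreamProcessor:
        """Filters events at the edge and syncs to the cloud when the WAN is up."""

        def __init__(self, threshold: float):
            self.threshold = threshold
            self.outbox = queue.Queue()  # buffers events across WAN outages

        def on_event(self, event: dict) -> None:
            # Filter locally: only readings above the threshold ever leave the
            # edge node, keeping raw data on site and uplink traffic small.
            if event.get("value", 0.0) >= self.threshold:
                self.outbox.put(event)

        def sync(self) -> None:
            # Best-effort sync; events stay queued if the uplink is down.
            while not self.outbox.empty():
                event = self.outbox.get()
                try:
                    req = urllib.request.Request(
                        CLOUD_ENDPOINT,
                        data=json.dumps(event).encode("utf-8"),
                        headers={"Content-Type": "application/json"},
                    )
                    urllib.request.urlopen(req, timeout=2)
                except OSError:
                    self.outbox.put(event)  # re-queue; retry on the next sync
                    break

    processor = EdgeStreamProcessor(threshold=0.8)
    processor.on_event({"sensor": "cam-3", "value": 0.92})  # kept and queued
    processor.on_event({"sensor": "cam-3", "value": 0.10})  # filtered out locally
    processor.sync()  # no-op until the cloud backend is reachable

The key design choice here is treating the uplink as optional rather than assumed, which is exactly where a typical cloud-native pipeline breaks down at the edge.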

To simplify life cycle management for large-scale, lightweight distributed deployments, we need to enable consistent edge-to-cloud application deployments on the edge side with the following elements:

  • An abstraction layer to allow for consistent edge-to-cloud application development, deployment, and runtime leading to a write-anywhere, deploy-anywhere paradigm.
  • Transparent networking and secure bootstrapping for application edge nodes, allowing customers to integrate their own edge hardware easily.
  • One-click packaging and deployment allowing developers to make their applications accessible from a dedicated marketplace and usable by developers or by less code-savvy end users.
  • Comprehensive application and API security instrumentation and enforcement, layered with a policy engine for geo-fencing, governance, real-time compliance, and security (see the sketch below).
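
As a rough illustration of the abstraction layer and policy engine described above, the following Python sketch models a hypothetical write-anywhere, deploy-anywhere manifest plus a geo-fencing check. All field names and rules are invented for this example; they are not an existing API.

    from dataclasses import dataclass, field

    @dataclass
    class EdgeAppManifest:
        name: str
        image: str              # one packaged artifact, deployable to any node
        min_memory_mb: int      # lets a scheduler filter heterogeneous hardware
        allowed_regions: set = field(default_factory=set)  # geo-fencing policy

    def can_deploy(manifest: EdgeAppManifest,
                   node_region: str, node_memory_mb: int) -> bool:
        """Policy-engine check: enforce geo-fencing and resource constraints."""
        if node_region not in manifest.allowed_regions:
            return False  # data-residency rule keeps the app out of this region
        return node_memory_mb >= manifest.min_memory_mb

    manifest = EdgeAppManifest(
        name="queue-vision",
        image="registry.example.com/queue-vision:1.2",
        min_memory_mb=512,
        allowed_regions={"eu-west", "eu-central"},
    )
    print(can_deploy(manifest, "eu-west", 1024))  # True: in region, enough memory
    print(can_deploy(manifest, "us-east", 2048))  # False: blocked by geo-fence

The same descriptor can drive one-click packaging for a marketplace: because the manifest, not the node, carries the constraints, the scheduler can place it on any edge hardware that satisfies them.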

So, What Will All This Enable?

The best way to showcase the true value of application development at the edge is through an example of what the future could look like when this process is optimized.

Imagine an automated retail environment where cameras sense and enable the autonomous shopping journey for a customer as they enter, shop through a few aisles picking up items, and then check out at a cashier-less register. A computer vision application would manage queues at the store's registers and, over time, learn the queue depths the store needs to handle.

The autonomous, data-heavy edge applications would also allow for inventory management, both on the aisles and in the back storage areas. They could provide insights into which brands are succeeding at which stores across the retailer's global footprint. Additional edge apps would use this data to deliver hyperlocal product advertising on event-driven displays in the store. All of this would run on low-cost, lightweight edge nodes with appropriate security permissions, adapt to ever-changing store-level network connectivity based on end-to-end observability, and allow for app life cycle management simple enough for a thin staff of store personnel to perform.
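
To ground the scenario, here is a hedged sketch of the event-driven glue such a store might run: the vision pipeline emits events, and lightweight handlers react by opening registers or updating displays. Event names, payloads, and the dispatcher itself are hypothetical.

    from collections import defaultdict

    # Minimal pub/sub dispatcher standing in for an event-driven edge runtime.
    handlers = defaultdict(list)

    def on(event_type):
        """Register a handler for an event type."""
        def register(fn):
            handlers[event_type].append(fn)
            return fn
        return register

    def emit(event_type, payload):
        """Deliver an event to every registered handler."""
        for fn in handlers[event_type]:
            fn(payload)

    @on("queue.depth")
    def manage_registers(payload):
        # A real store would learn this threshold from its own queue history.
        if payload["depth"] > 4:
            print(f"Opening another register near aisle {payload['aisle']}")

    @on("shelf.stockout")
    def update_display(payload):
        print(f"Promoting a substitute for {payload['sku']} on nearby displays")

    emit("queue.depth", {"aisle": 7, "depth": 6})
    emit("shelf.stockout", {"sku": "cereal-123"})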

Imagine the possibilities if we were able to create this future.

 
