Combining Real-Time Edge Computing and Video Streaming


New developments pushing the intersection between cloud computing and real-time video streaming to rooftops, curbsides, and factory floors are bringing the world closer than ever to the next era in digital commerce.

Already, the benefits of video-rich interactivity are driving surging demand for network connectivity that does away with the high latency, one-way delivery, and other limitations of traditional streaming technology. When real-time interactive streaming as supported by Red5 Pro’s Experience Delivery Network (XDN) platform is in play, centralized cloud computing can be employed to address the processing needs associated with the multitude of video applications that work well when end-to-end latencies register in the 200ms-400ms range.

But there are ever more compute-intensive internet-of-things (IoT) and other use cases with ultra-low latency requirements that can only be met with the extension of cloud computing all the way to network endpoints. By colocating XDN nodes with those real-time edge computing endpoints, providers of services and applications can support the growing volume of video-rich use cases that require reductions in streaming latencies to levels approaching light speed.

Our purpose here and in a subsequent discussion is to explore two major trends that will ensure edge computing and real-time networking operate in tandem to support the most latency-intolerant, high-bandwidth applications. First, we’re focusing on major advances aimed at facilitating the high-density proliferation of tightly meshed compute instances far closer to end-users than ever before. We’ll follow this with a look at developments in blockchain technology and Red5 Pro innovations that can be leveraged to accelerate the emergence of a pervasively available real-time interactive streaming infrastructure in tight alignment with the new edge-computing infrastructure.

Ever More Real-Time Video Applications Are Impacting All Types of Network Endpoints

When it comes to enabling the dispersal of edge computing, much of the discussion has centered on what this means for networked applications supported by 5G infrastructure. But it’s important to recognize that deep penetration of real-time edge computing is just as important to providers operating over other types of interactive broadband networks, including cable hybrid fiber coax (HFC), telco fiber-fed DSL, pure fiber, and versions of these infrastructures that rely on last-mile wide area connectivity over Wi-Fi.

No matter what type of physical network is used, network operators must be able to support direct connections between edge compute locations and multi-directional real-time streaming infrastructure. In addition, they need to be able to support near-zero latency data transfers between edge and core cloud resources for applications that rely on core as well as edge processing.

Use cases that require this confluence of real-time streaming and cloud computing at what some are calling the “extreme edge” abound across multiple market segments. They’re especially prevalent in the non-entertainment arena, where processing for AI and other tasks deep in the network is often coordinated across all cloud locations. With the ability to link edge and core cloud computing points in real time, it’s possible to perform highly synchronized processing to deliver insights applicable to each location at global scales.

Examples of this can be found in surveillance technology where single camera-to-observer configurations are giving way to aggregation and AI-driven analysis of multiple camera feeds to deliver the information operators need to react to unfolding events in real time. As reported in this white paper, this marriage of real-time streaming and computing is transforming the use of surveillance in law enforcement, public safety, fire control, military operations, and business and household security.

Some of the other fields where edge computing and real-time streaming are transforming how things are done include:

  • Video-based collaboration – Remote collaboration across all fields of business, institutional, and government activity has grown to new levels of participation in the wake of the Covid-19 pandemic. Operating in this environment requires spontaneous activation of video engagement at all locations, with reliance on intelligence generated through interactions between local and core cloud processing centers to support recordkeeping, graphics sharing, access to specialized software tools, and much else.
  • Maintenance and construction – Managers with visibility into locally generated video streams and analysis of raw data collected by sensors and video cameras can assign tasks and provide guidance to workers in the field.
  • Live training sessions – Instructors engaging remotely via video communications with trainees can utilize AI-assisted analysis of work skills and cloud-based 3D representations of complex machinery to expedite the learning process.
  • Manufacturing – Robotic processes, requiring persistent monitoring and analysis, are a fixture in today’s smart factories. Adding endpoints for real-time video communications colocated with intelligent processing adds new efficiencies by providing support for interactions between workers on the factory floor and centralized management.
  • Medical operations – The combination of rapid local synthesis and transmission of machine-generated patient data with treatment involving video communications between dispersed health workers adds much-needed efficiency to caregiving in an industry that is consolidating around core hospitals tied to remote health centers.
  • Applications of extended reality (XR) technology – XR, encompassing virtual, augmented, and mixed reality (VR, AR, and MR), is taking hold as a highly effective embellishment to the types of non-entertainment applications listed here. Networking XR applications for multi-user participation in these use cases sets latency and processing requirements that can only be met through tight integration of real-time streaming and computer processing at the deep edge. VR is especially challenging, requiring incessant real-time transmission of volumetric payloads to keep pace with every action impacting what’s happening in the virtual space occupied by each user.

A New Realm of Consumer Applications

Interfacing computing with real-time streaming at deep edge locations also enables significant advances in consumer services and applications. For example, providers of services in the surging online multiplayer gaming market can go far beyond the user experiences on offer today to enable simultaneous, delay-free participation in fast-action competitions for players anywhere.

Providers can leverage edge computing to instantly transcode user input, including any webcam flows that may be used to enhance social interactions as games are played. Instant interpretations of user behavior can tap core cloud databases to deliver recommendations, intelligence to improve players’ performances, and other information pertinent to immediate personalized responses to players’ behavior.

All types of live event production, including sports, esports, and concerts, can benefit in multiple ways from real-time streaming tightly attuned to edge computing. A/V input from venues, dispersed commentators, and graphics production centers can be processed locally and streamed in real time to central production studios, eliminating the need to mount dedicated production operations at every location.

More broadly, there are multiple ways consumers can participate in networked XR applications. Along with enabling the real-time latencies essential to XR participation in multiplayer gaming, social networking, interactive 360° engagement with live sports, and other live use cases, real-time streaming colocated with edge transcoding and other computing tasks allows producers to provide more photo-realistic immersive experiences involving the transfer of much greater volumes of data in instant response to every user’s action.

Miniaturized Real-Time Edge Computing Devices from Acromove

The cloud computing infrastructures taking shape to support this next generation of use cases are designed to derive maximum benefits from massive amounts of processing while ensuring all elements work together to deliver results at speeds replicating what’s accomplished when big data centers are colocated with application workflows. The challenges are immense, requiring edge processing that can handle the execution of AI and other complex tasks with just the right balance between how core and edge computing resources are meted out for each type of application.

In fact, there would be no way to achieve that balance without the densification of immense amounts of processing power in miniaturized form factors suited to installation at the extreme edge. Today even a drone can perform AI processing to identify and categorize relevant information before streaming the content to control centers.

The point was underscored with the announcement of IBM’s collaboration with Acromove, Inc., one of 20 partners in Big Blue’s Edge Application Manager initiative. IBM describes the Edge Application Manager as a container-based platform that’s designed to enable data, AI, and IoT workloads to be deployed where data is collected in order to provide analysis and insight that can be delivered to customers in real time.

Acromove supplies a set of compact, self-contained datacenter building blocks with CPU, GPU, RAM, flash memory, and other components that have been optimized for either data processing, storage, or networking. These “Edge Cloud-in-a-Box” solutions, like traditional public cloud services, are provided on an outsourced basis, allowing customers to operate a dispersed, scalable cloud infrastructure without incurring capital or maintenance costs.

IBM says it will rely on the Acromove platform to serve clients who want to deploy, in IBM’s words, “highly mobile geo-elastic edge data centers in any environment within minutes.” With reliance on Acromove’s platform, the partners say these workloads can be extended to edge points mounted virtually anywhere, including drones used to support farming operations, monitor the spread of wildfires, or perform any of myriad other surveillance tasks that require instant analysis.

Qualcomm’s Edge-Anywhere Juggernaut

No company has gone farther toward miniaturizing compute form factors for the deep edge and user devices than Qualcomm. Its latest advances, targeting 5G use cases in both domains, are designed to ensure edge servers work in concert with on-device processing capabilities to support the most latency-sensitive use cases while relying on the core cloud to perform non-real-time, processing-intensive tasks and to provide mass storage capacity for data and video content.

This wireless edge architecture can be configured to support a wide range of extremely low-latency applications. For example, time is saved in real-time IoT use cases like autonomous vehicle operations or disaster mitigation when at least some of the processing can be executed by devices embedded with smart sensor systems-on-chips (SoCs).

Other types of Qualcomm SoCs make it possible to meet latency requirements of network-connected XR experiences, camera-reliant surveillance, and other video-rich use cases that rely on direct connectivity with real-time video streaming infrastructure. Even when AI is in play, on-device chipsets can be used to perform processing that would add too much latency if the processing was left entirely to the edge or core cloud.
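The device/edge/core split described above can be made concrete with a small sketch. The round-trip figures and the selection helper below are illustrative assumptions, not measurements or any vendor API; the point is simply that each processing tier consumes a different slice of an application's latency budget.

```python
# Illustrative latency figures per processing tier (assumed round-trip
# values for the sketch only -- real numbers vary by network and workload).
TIER_LATENCY_MS = {
    "on-device": 5,     # SoC inference, no network hop
    "edge": 30,         # nearby edge server
    "core-cloud": 120,  # distant regional data center
}

def place_task(budget_ms: float) -> str:
    """Return the shallowest-acceptable tier: prefer the core cloud when
    the budget allows (most processing headroom), otherwise fall back
    toward the edge and, finally, the device itself."""
    for tier in ("core-cloud", "edge", "on-device"):
        if TIER_LATENCY_MS[tier] <= budget_ms:
            return tier
    raise ValueError(f"No tier satisfies a {budget_ms}ms budget")

print(place_task(200))  # core-cloud
print(place_task(50))   # edge
print(place_task(10))   # on-device
```

Under these assumed numbers, an application with a 10ms budget has no choice but on-device processing, which is exactly the niche the smart-sensor SoCs fill.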

Where XR is concerned, Qualcomm has taken a leading role in fostering the development of eyewear and XR functionalities aimed at radically reducing device form factors while saving time with on-device processing. For example, the Snapdragon XR2 SoC supports 4K video resolution at 120 frames per second (fps), 6K at 90 fps, and 8K at 60 fps. The chipset offers 11 times the AI processing power of the XR1 and interacts with multiple graphics APIs to enable hardware-accelerated composition, dual-display functionality, and 3D overlays.

Looking ahead, Qualcomm describes the types of “sleek and stylish XR glasses” people might one day happily wear to engage with all categories of XR experience. The company predicts such glasses will feature multi-functional, semi-transparent lenses supporting display surfaces and telescopic viewing. Rims and earpieces will be embedded with multiple dot-size devices, including tracking and recording cameras; motion, health, ambient light, and thermal imaging sensors; directional speakers and microphones; image projectors; and haptic devices conveying a sense of touch in user interactions with virtual elements.

Another big area of development for Qualcomm has to do with making drones a part of the wireless edge. The firm’s Flight RB5 reference design can be used in conjunction with specialized chipsets to develop drones with up to seven concurrently running cameras streaming video at resolutions as high as 8K to terrestrial aggregation points. On-board intelligence can support autonomous operations with algorithms designed for self-guidance, as in the case of package-delivering drones that can determine optimal routes to precisely targeted destinations.

Macrometa’s Support for Dispersed Data Processing

Much development energy is also flowing into support for high-volume data processing closer to the network endpoints. A case in point is the Global Data Network (GDN) and data processing service developed by Macrometa. The GDN service is designed to support the global availability of real-time processing that allows customers to ingest, transform, and process data instantly. As described by Macrometa, the platform’s globally distributed NoSQL database makes it easy to build “stateful, real-time, low-latency, globally distributed, edge, and multi-cloud apps that run simultaneously across 100s of edge or cloud regions.”

The highly secure, high-performance, low-latency data infrastructure allows developers to achieve consistency in database replication at a global scale, the company says. They can govern replications with a high degree of granularity to achieve different levels of isolation and consistency at all storage and processing points.

Macrometa says the GDN allows data to be read and written locally, in parallel, at all locations without requiring users to know which data should be placed in which location or to redesign the schema every time they want to add or remove a location. The GDN, which is designed to interact with stateful data in existing databases, can serve queries, reads, and writes to actively operating apps representing millions of events per second anywhere in the world with less than 50ms of total roundtrip time from the client to the edge database and back.
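The "serve reads and writes from the closest location" idea can be reduced to a minimal sketch. The region names and RTT figures below are invented for illustration, and Macrometa's actual client SDK handles this routing transparently; the sketch only shows the routing decision itself.

```python
# Hypothetical RTTs a client might observe to each edge region (ms).
OBSERVED_RTT_MS = {
    "us-west": 12.0,
    "us-east": 71.0,
    "eu-central": 140.0,
}

def nearest_region(rtt_by_region: dict[str, float]) -> str:
    """Route the request to the region with the lowest observed RTT,
    keeping the read/write path well inside a 50ms roundtrip budget."""
    return min(rtt_by_region, key=rtt_by_region.get)

print(nearest_region(OBSERVED_RTT_MS))  # us-west
```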

Snowflake’s Support for Application-Specific Access to Silo-Free Data Pools

Another facet to data management in the real-time communications environment involves ensuring expeditious data access without the encumbrances imposed by traditional approaches to data storage. Companies must have the flexibility to harness and analyze any data pertinent to their efforts to stay competitive in what has become an insights-driven digital economy.

The ability to process data, regardless of its origin or original purpose, requires recourse to automated processes that render siloed approaches to storing data for specific departments or projects obsolete. One company that has had significant success moving the enterprise market in this direction is Snowflake.

The Snowflake cross-cloud platform provides a service that utilizes logically integrated storage, compute, and service layers that can be instantiated at global scales. Its architecture allows users to launch independent data workloads that can instantly draw whatever data is needed from the aggregated pool under their control.

The ability to instantly spin any number of concurrent workloads up and down against the same single-copy storage pool marks a major step beyond reliance on cloud storage services that support some measure of fluidity within their domains but impose time-consuming barriers to operating workloads across multiple domains. The Snowflake cloud services assign compute power on an as-needed basis to manage client sessions, metadata, transactions, query planning, security, and other essential processes.

Forrester Consulting, reporting on a survey of Snowflake customers, found that the platform was generating millions of new dollars in saved product-launch costs, quicker returns on new initiatives, lower data management and infrastructure scaling costs, and faster, more effective decision making. An in-depth analysis of four customers’ results covering a three-year period found the difference between benefits and costs of using the Snowflake platform translated to an average net present value of $18.4 million.

AWS Innovations Supporting Deep Edge Computing and Low Latency Cloud Access

Some of the most important advances supporting the execution of latency-sensitive use cases have been implemented by AWS. The cloud operator’s portfolio of Snow-branded products and its Wavelength initiative are particularly noteworthy in this regard.

AWS Snowball and Snowcone comprise a family of portable edge devices that can be optimized for computing or storage to give customers the flexibility to implement functions essential to real-time applications in proximity to the points of data generation, including locations beyond AWS Regions and Outposts. These devices, owned and managed by AWS, integrate with AWS security, monitoring, storage management, and computing capabilities to support real-time intelligent responsiveness to incoming data feeds.

Snowball, with the larger form factor weighing in at just under 50 pounds and measuring 28.3×10.6×15.5 inches, provides 52 virtual CPUs of compute capacity with optional GPU support and 42 TBs of usable block or object storage capacity in the compute-optimized version and 40 vCPUs with 80 TBs of storage in the data-optimized version. Snowball supports on-device data analytics with fast-data transfer to the AWS cloud through 10-100 Gbps network interfaces. Up to ten units can be clustered for management through the AWS OpsHub user interface.

Snowcone, weighing just 4.5 pounds and measuring 9x6x3 inches, contains four vCPUs with 8 TB of HDD and 14 TB of SSD storage capacity. The battery-powered device can be taken anywhere, with support for data transfer to the AWS cloud via AWS DataSync through two 1/10 Gbps network ports. Both Snowcone and Snowball support encryption and transcoding of all ingested data.
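The Snow device figures quoted above can be collected into a small lookup to sketch how one might pick a device for a given workload. The specs are as cited in the text (Snowball weights approximated from "just under 50 pounds"); the selection helper itself is purely illustrative, not an AWS tool.

```python
# Spec table built from the figures in the text above.
# Snowball weight is assumed at 49.7 lb per "just under 50 pounds";
# Snowcone storage combines its 8 TB HDD and 14 TB SSD.
SNOW_DEVICES = {
    "snowball-compute": {"vcpus": 52, "storage_tb": 42, "weight_lb": 49.7},
    "snowball-storage": {"vcpus": 40, "storage_tb": 80, "weight_lb": 49.7},
    "snowcone":         {"vcpus": 4,  "storage_tb": 22, "weight_lb": 4.5},
}

def pick_device(min_vcpus: int, min_storage_tb: int) -> str:
    """Return the lightest device meeting both minimum requirements."""
    candidates = [
        (spec["weight_lb"], name)
        for name, spec in SNOW_DEVICES.items()
        if spec["vcpus"] >= min_vcpus and spec["storage_tb"] >= min_storage_tb
    ]
    if not candidates:
        raise ValueError("No Snow device meets the requirements")
    return min(candidates)[1]

print(pick_device(4, 10))   # a light field task fits on Snowcone
print(pick_device(48, 40))  # heavy compute needs the compute-optimized Snowball
```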

On a separate track, Wavelength is the AWS edge strategy that facilitates extremely low latency use cases over 5G networks by creating a direct path into the AWS cloud. In partnership with mobile network operators (MNOs), AWS instantiates direct access to its compute and storage services in the MNO facilities that aggregate local 5G traffic.

These Wavelength Zones eliminate delays of anywhere from tens of milliseconds to multiple seconds that are incurred by traffic that usually has to traverse cell sites, metro and regional aggregation centers, and the internet to get to and from the AWS cloud. Wavelength Zones directly connect with AWS infrastructure running in 77 Availability Zones across 24 AWS Regions worldwide.

With Microsoft Azure and Google Cloud moving to implement their own versions of the Wavelength strategy, it’s likely that colocations of 5G small-cell aggregation sites with on/off ramps to the cloud will become the foundation for many of the cloud-processing-intensive applications 5G is expected to support. But, when the types of low-latency video-rich applications discussed earlier are in play, these edge solutions won’t be sufficient to meet the requirements if the 5G payload is streamed over traditional content delivery network (CDN) infrastructure.

Here it’s important to note that the latency-reducing mechanisms introduced with 5G Radio Access Network (RAN) technology will have a significant impact on all types of latency-sensitive applications, including those that require distant streaming as well as those where intelligence colocated with small cells enables the application functions to be executed in immediate proximity to endpoints.

The widely deployed first-generation 5G Non-Standalone (NSA) RAN technology, which employs software upgrades to expand the control plane functions of existing LTE Evolved Packet Cores (EPCs), cuts the time consumed by RAN processing to 4ms-10ms, compared to tens of milliseconds or even several seconds on LTE and earlier generation mobile networks. And with the buildout of 5G Standalone (SA) facilities, MNOs will be able to implement Ultra-Reliable and Low Latency Communications (URLLC) technology, which cuts the transmission time from end devices through the RAN processing to 1ms.
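A back-of-envelope budget shows how these RAN figures combine with the rest of the path. The transport and processing numbers below are illustrative assumptions, not measurements; only the RAN figures come from the text.

```python
def end_to_end_ms(ran_ms: float, transport_ms: float, processing_ms: float) -> float:
    """Sum the latency contributions for a device -> edge -> device loop."""
    return ran_ms + transport_ms + processing_ms

# 5G NSA at the worst end of its 4ms-10ms RAN range, with edge processing
# colocated at the aggregation site (transport and processing assumed):
nsa = end_to_end_ms(ran_ms=10, transport_ms=5, processing_ms=20)

# 5G SA with URLLC's 1ms RAN transit, same assumed transport/processing:
urllc = end_to_end_ms(ran_ms=1, transport_ms=5, processing_ms=20)

print(nsa, urllc)  # both land inside a 50ms interactive budget
```

Under these assumptions either generation fits a 50ms interactive budget, but URLLC's savings leave far more headroom for processing and longer-haul streaming.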

The URLLC device-to-RAN-to-device turnaround will be fast enough to support the lowest-latency IoT applications that depend on intelligence colocated with small cells. But the URLLC impact will just be part of the latency-reducing mechanisms that must come into play with applications, including those involving video, that rely on more distant cloud processing and end-to-end streaming among distantly dispersed endpoints.

The Wavelength/XDN Connection Illustrates What’s in Store at the Deep Edge

It all comes back to the point raised earlier: when video or high-volume data payloads are involved, the ability to capitalize on the low-latency potential of all the intelligent edge solutions discussed here depends on direct connectivity between the deep edge points and real-time streaming infrastructure. One significant illustration of this fact can be found in the partnership between AWS and Red5 Pro that has made the XDN infrastructure the first real-time streaming platform to be certified for use in Wavelength Zones.

By deploying XDN infrastructure in Wavelength Zones, applications developers, service providers, and the carriers themselves can deliver interactive video streams to and from any number of 5G phones or other user equipment (UE) served by these AWS cloud on/off ramps to satisfy the most stringent latency requirements set for such applications. As long as those endpoints are in proximity to Wavelength Zones, these requirements, typically set at 50ms or lower, can be met no matter how far apart the sending and receiving UE might be.

The same principle applies to all other latency-reducing edge strategies where direct connectivity with XDN infrastructure for video streaming is in play. For example, a video-surveillance operation involving multiple camera feeds from stationary or drone-mounted devices has much to gain from AI-fueled instant analysis performed at an edge aggregation point by an Acromove appliance. The device can identify objects of interest as the aggregated video feeds are ingested onto the XDN to be streamed in real time to manned operations centers.

How XDN Architecture Brings Real-Time Streaming Together with Deep Edge Computing

The digital economy has reached a point where the full range of next-generation services and applications can be activated with the right combination of edge compute and streaming infrastructure. The flexibility of XDN architecture ensures that the alignment with edge intelligence appropriate to any application involving video can be implemented no matter how small the edge enclosure might be.

As explained in this white paper, the XDN platform is built on a cross-cloud architecture that supports tightly synchronized real-time streaming to and from any number of endpoints at any distance. With ingestion of content at XDN Nodes positioned with the types of deep edge processors discussed here, impediments to meeting video latency requirements, no matter how stringent they might be, are eliminated.

XDN infrastructure is built on automatically orchestrated hierarchies of Origin, Relay, and Edge Nodes operating in one or more cloud clusters. The platform makes use of the Real-Time Transport Protocol (RTP) as the foundation for interactive streaming via WebRTC (Web Real-Time Communication) and Real-Time Streaming Protocol (RTSP). In most cases, WebRTC is the preferred option for streaming on the XDN platform by virtue of its support by all the major browsers, which eliminates the need for device plug-ins.
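A simplified model conveys why the Origin, Relay, and Edge Node hierarchy scales. The per-node fan-out limits below are invented for illustration; the real platform sizes and orchestrates clusters automatically.

```python
def max_subscribers(relays_per_origin: int, edges_per_relay: int,
                    clients_per_edge: int) -> int:
    """Upper bound on concurrent subscribers to one Origin's stream:
    each tier multiplies the fan-out of the tier above it."""
    return relays_per_origin * edges_per_relay * clients_per_edge

# e.g. 10 Relays per Origin, 20 Edge Nodes per Relay, 200 clients per Edge:
print(max_subscribers(10, 20, 200))  # 40000
```

Because each tier multiplies capacity, adding a single layer of Relays turns a modest per-node connection limit into audience sizes in the tens of thousands, without any node streaming to more clients than it can handle in real time.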

There are also other options for receiving and transmitting video in real time when devices are not utilizing any of these browsers. RTSP, often the preferred option when mobile devices are targeted, can be activated through Red5 Pro iOS and Android SDKs. And video can be ingested onto the XDN platform in other formats as well, including Real-Time Messaging Protocol (RTMP), Secure Reliable Transport (SRT), and MPEG transport stream (MPEG-TS). The XDN retains these encapsulations while relying on RTP as the underlying real-time transport mechanism.

The XDN platform also provides full support for the multi-profile transcodes used with ABR streaming by utilizing intelligent Edge Node interactions with client devices to deliver content in the profiles appropriate to each user. And to ensure ubiquitous connectivity for every XDN use case, the platform supports content delivery in HTTP Live Streaming (HLS) mode as a fallback. In the rare instances where devices can’t be engaged via any of the other XDN-supported protocols, they will still be able to render the streamed content, albeit with the multi-second latencies that typify HTTP-based streaming.
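The delivery logic described above, preferring WebRTC, using the mobile SDKs' RTSP path where it applies, falling back to HLS as a last resort, and matching an ABR profile to the client's bandwidth, can be sketched as follows. The capability flags and the bitrate ladder are hypothetical inputs for illustration, not a Red5 Pro API.

```python
def choose_protocol(has_webrtc: bool, has_rtsp_sdk: bool) -> str:
    """Pick the lowest-latency delivery path the client supports."""
    if has_webrtc:
        return "webrtc"  # real-time, plug-in-free in major browsers
    if has_rtsp_sdk:
        return "rtsp"    # iOS/Android SDK path for mobile targets
    return "hls"         # universal fallback, multi-second latency

# Hypothetical ABR ladder (kbps), highest profile first.
RENDITIONS_KBPS = [3500, 1800, 750]

def pick_rendition(available_kbps: int) -> int:
    """Highest-bitrate profile that fits the client's measured bandwidth."""
    for bitrate in RENDITIONS_KBPS:
        if bitrate <= available_kbps:
            return bitrate
    return RENDITIONS_KBPS[-1]  # below the ladder: serve the lowest profile

print(choose_protocol(False, False))  # hls
print(pick_rendition(2000))           # 1800
```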

XDN Nodes can be deployed on multiple cloud infrastructure-as-a-service (IaaS) platforms. This can be done by leveraging pre-integrations with major suppliers like AWS, Google Cloud, Microsoft Azure, and DigitalOcean or through integrations with many other IaaS platforms enabled by Red5 Pro’s use of the Terraform multi-cloud toolset.


Having explored some of the major edge advances that can be paired with XDN infrastructure to enable a new generation of latency-sensitive, video-rich applications, we’ll focus the second part of this discussion on new developments that will help speed the implementation of such pairings at massive scales. Specifically, we’ll explore the implications of blockchain technology and other innovations spearheaded by Red5.

As shall be seen, there’s every reason to expect that access to real-time interactive streaming will become as ubiquitous as the network edge intelligence that will soon be permeating residential and industrial landscapes. Meanwhile, to learn more about XDN infrastructure, contact us or schedule a call.