Utilizing KLV Metadata in the WebRTC Data Channel



We generate vast amounts of data in our daily lives, and that data can hold a lot of value. But the data is useless unless we can get it into the hands of the right people when they need it. Video streaming, while rich with imagery and sound, doesn’t always provide a full picture. Fortunately, we are now able to create further dimension to traditional streaming video by adding a channel to provide data that is synchronized with the rest of the content. Red5 supports streaming synchronized KLV metadata in the WebRTC data channel, providing a more enriching and informative experience. 

What is the WebRTC Data Channel?

Welcome to the WebRTC data channel! It is a mechanism built into the WebRTC standard that can deliver a diverse range of data, from simple information like timecodes to richer payloads such as sports scores and telemetry. The data arrives synchronized with the audio and video, so it can also be used to update interfaces and trigger events the moment they apply.
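The receive path can be sketched in a few lines of JavaScript. Note that the channel label "metadata" and the JSON payload shape below are illustrative assumptions for this sketch, not part of the WebRTC standard:

```javascript
// Decode one metadata message arriving on the data channel.
// The JSON shape (e.g. { score: "2-1" }) is a hypothetical example payload.
function handleMetadataMessage(raw) {
  const meta = JSON.parse(raw);
  return meta; // hand off to UI updates, overlays, event triggers, etc.
}

// In the browser, attach the handler to a channel announced by the remote peer:
// pc.ondatachannel = (ev) => {
//   if (ev.channel.label === "metadata") {
//     ev.channel.onmessage = (msg) => handleMetadataMessage(msg.data);
//   }
// };
```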

What is KLV Metadata?

All this data is wonderful, but using a standardized format allows for a common interface to the metadata. Along comes the Key-Length-Value (KLV) schema, ratified by the Society of Motion Picture and Television Engineers (SMPTE) in 2007 (SMPTE 336M-2007). It defines that metadata is broken into three fields: Key, Length, and Value.

The Key field identifies the type of data that the metadata describes. Keys can be 1, 2, 4, or 16 bytes long. 16-byte keys are typically used for unique identifiers that have been registered with the Motion Imagery Standards Board (MISB), making them globally recognized.

The Length field provides information about the length of the value field so that the data can be verified as complete. Four kinds of encoding are allowed: 1, 2, and 4 bytes, as well as Basic Encoding Rules (BER). The byte encoding is simple in that it is an integer representing the length of the data. BER, part of the X.690 standard, provides a more complicated format but allows for added flexibility in defining the length.
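The two BER forms can be illustrated with a small decoder. This is a minimal JavaScript sketch of the short form (a single byte below 0x80) and the long form (0x80 plus a byte count, followed by a big-endian length), not tied to any particular library:

```javascript
// Decode a BER-encoded KLV length field starting at `offset` in `buf` (Uint8Array).
// Returns both the decoded length and how many bytes the length field consumed.
function decodeBerLength(buf, offset) {
  const first = buf[offset];
  if (first < 0x80) {
    // Short form: the byte itself is the length (0..127).
    return { length: first, bytesRead: 1 };
  }
  // Long form: the low 7 bits give the number of subsequent length bytes.
  const numBytes = first & 0x7f;
  let length = 0;
  for (let i = 0; i < numBytes; i++) {
    length = length * 256 + buf[offset + 1 + i]; // big-endian accumulation
  }
  return { length, bytesRead: 1 + numBytes };
}
```

For example, the single byte `0x05` decodes to length 5, while the long-form sequence `0x82 0x01 0x2C` decodes to length 300 using three bytes.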

The Value field contains the actual data, typically binary encoded. Values can be chained, allowing multiple sets of data to be passed in a single KLV data segment. For example, a value chain can contain both a timestamp and GPS coordinates.
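Putting the three fields together, a chained KLV segment can be walked triplet by triplet. This minimal JavaScript sketch assumes 16-byte keys and BER-encoded lengths (a common combination in MISB streams); real parsers must also handle the shorter key sizes:

```javascript
// Parse a buffer of chained KLV triplets (assumed: 16-byte keys, BER lengths).
// Returns an array of { key, value } pairs, both as Uint8Arrays.
function parseKlv(buf) {
  const items = [];
  let offset = 0;
  while (offset < buf.length) {
    const key = buf.slice(offset, offset + 16); // 16-byte key
    offset += 16;
    // BER length: short form (< 0x80) or long form (0x80 | byte count).
    let length = buf[offset];
    let lenBytes = 1;
    if (length & 0x80) {
      const n = length & 0x7f;
      length = 0;
      for (let i = 1; i <= n; i++) length = length * 256 + buf[offset + i];
      lenBytes = 1 + n;
    }
    offset += lenBytes;
    const value = buf.slice(offset, offset + length); // raw binary value
    offset += length;
    items.push({ key, value });
  }
  return items;
}
```

Because the loop simply continues until the buffer is exhausted, chained values — such as a timestamp triplet followed by a GPS triplet — fall out of the same code path.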

Uses for Synchronized KLV Metadata in the WebRTC Data Channel

Combining the WebRTC data channel with the KLV metadata standard brings us real-time video and metadata over a single connection. Because the data is synchronized with the video, applications can display time-sensitive contextual information alongside it. Automated processing and tagging of video using the synchronized metadata makes it easier to search and manage large video libraries. AI and machine learning models can consume the metadata during inference and use it to improve their accuracy. Adaptive streaming can also be improved by drawing on richer signals than the limited network statistics WebRTC relies on today.

KLV Metadata is already being used by many industries to enhance video. Some use cases include:

  • Sports – Sports scores and stats display updated information in overlays or in additional components next to the video stream. Player positions and motion can be streamed so that 2D and 3D models of the action are available.
  • Drone Surveillance – Drones send positional telemetry in addition to other helpful information such as current temperature, wind speed, roll, yaw, and camera direction. This is essential when controlling the drone or using it to discover conditions in remote areas.
  • Metaverse – User position, direction facing, and arm and leg motion are sent via metadata to provide updated and synchronized information to other users.

The MISB database includes definitions for over 90 types of KLV data, including timestamp, mission ID, longitude, latitude, and more obscure items like icing detection, relative humidity, and weapon load. New definitions can be submitted for addition to the specification, making this a flexible and future-friendly standard.

Red5 supports KLV metadata ingestion over the SRT protocol. KLV streams are automatically parsed to provide the metadata in both human-readable and standard KLV format over the WebRTC data channel. Each WebRTC data channel receives the metadata ingested from the SRT stream in sync with the video.

Support in Red5 is expanding to ingest KLV via our other ingestion protocols, including WebRTC, RTMP, RTSP, and Zixi. This will allow any encoding platform to use KLV when streaming into the Red5 server.

By supporting KLV data synchronized with the video over the WebRTC data channel, Red5 lets applications take advantage of streaming metadata without needing additional or alternate connections to deliver it. This reduces the load on both the server infrastructure and the client devices. Having the data exactly when the client app needs it opens huge new opportunities for applications to enhance the viewer experience.

If you are interested in a demo of the power of synchronized KLV data channel metadata, contact info@red5.net or set up a call.