Red5 Documentation

DVR

Live DVR

The Live DVR feature provides the ability for viewers to seek within a live broadcast stream. Enabling it requires modifications to the server configuration and specific settings in the WebRTC SDK's initialization configuration.

Server Modifications

To enable the server to provide Live DVR functionality, you will need to modify the conf/hlsconfig.xml file in the server deployment.

Within hlsconfig.xml you can specify the output format of the video files that clients will subscribe to for DVR functionality:

<property name="outputFormat" value="TS"/>

The default is TS, which generates transport streams as a standard HLS collection of files: a single .m3u8 manifest and several .ts segments. You can also specify FMP4, which generates fragmented MP4 output, or SMP4.

For the purposes of this document, we will keep the default TS format.

To enable the server to generate and retain all segments of a stream, you will need to change the dvrPlaylist property to true:

<property name="dvrPlaylist" value="true"/>

With the updated properties set in the conf/hlsconfig.xml, you will need to restart your server in order for them to take effect and allow for Live DVR on the client.
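As a minimal sketch, the dvrPlaylist change above can be scripted with sed; the install path /usr/local/red5pro and the red5pro service name are assumptions here, so adjust both for your deployment:

```shell
# Sketch: enable the DVR playlist in hlsconfig.xml, then restart the server.
# Install path and service name are assumptions; adjust for your deployment.
HLS_CONF="${HLS_CONF:-/usr/local/red5pro/conf/hlsconfig.xml}"
if [ -f "$HLS_CONF" ]; then
  sudo sed -i 's|\(name="dvrPlaylist" value="\)false|\1true|' "$HLS_CONF"
  sudo systemctl restart red5pro
fi
```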

Development Information

Developer information can be found in the DVR API Development Section.

Additional DVR Configuration for Stream Manager with NFS

The deployment consists of three parts:

1. NFS server

2. Web server (Nginx)

3. Stream Manager cluster, deployed on DigitalOcean

  1. Deploy the NFS server
    Create a Storage-Optimized droplet with 300 GB+ disk space. Droplet type: so-2vcpu-16gb

Available Storage optimized droplet types:

Slug                    Memory(MB)  VCPUs  Disk(GB)  Monthly($)       Hourly($)
so-2vcpu-16gb           16384     2        300     131.00           0.194940
so-4vcpu-32gb           32768     4        600     262.00           0.389880
so-8vcpu-64gb           65536     8        1200    524.00           0.779760
so-16vcpu-128gb         131072    16       2400    1048.00          1.559520
so-24vcpu-192gb         196608    24       3600    1572.00          2.339290
so-32vcpu-256gb         262144    32       4800    2096.00          3.119050

Connect to it via SSH

Install NFS server
sudo apt update
sudo apt install nfs-kernel-server
sudo mkdir /home/nfs
sudo chown nobody:nogroup /home/nfs
sudo chmod 777 /home/nfs
sudo nano /etc/exports

Add the following line to /etc/exports (rw allows read/write access, sync writes changes to disk before replying, and no_subtree_check disables subtree checking):

/home/nfs *(rw,sync,no_subtree_check)

sudo exportfs -a
sudo systemctl restart nfs-kernel-server

Create a test file in the NFS folder

touch /home/nfs/test.txt

Create a Floating IP and assign it to this droplet

Create a DNS record for this IP. Example: my-nfs.red5.net

2. Deploy Nginx
Create a Basic droplet with 80 GB disk space (price: $24/month)

Connect to it via SSH

Install NFS client
sudo apt update
sudo apt install nfs-common

Create a folder for the NFS mount

sudo mkdir /home/nfs

Mount NFS server

sudo mount my-nfs.red5.net:/home/nfs /home/nfs

Check that you can see the mounted disk

df -h
my-nfs.red5.net:/home/nfs   78G  1.6G   76G   2% /home/nfs

Mounting the Remote NFS Directories at Boot

Add the following line to /etc/fstab:

my-nfs.red5.net:/home/nfs /home/nfs nfs auto,nofail,noatime,nolock,intr,tcp,actimeo=1800 0 0
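As a sketch, the fstab entry can be added idempotently, so re-running the setup does not duplicate the line (the hostname and mount options follow the example in this guide; adjust for your environment):

```shell
# Append the NFS mount to /etc/fstab only if it is not already present.
FSTAB="${FSTAB:-/etc/fstab}"
LINE='my-nfs.red5.net:/home/nfs /home/nfs nfs auto,nofail,noatime,nolock,intr,tcp,actimeo=1800 0 0'
if [ -w "$FSTAB" ] && ! grep -qF "$LINE" "$FSTAB"; then
  echo "$LINE" >> "$FSTAB"
fi
```

After editing, `sudo mount -a` applies the entry without a reboot.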

Reboot the droplet

sudo reboot

Connect to the droplet via SSH
ssh -i ssh_key.pem root@IP

Check that you can see the mounted disk and the test file: test.txt
df -h
ls -lo /home/nfs

Install Nginx
sudo apt update
sudo apt install nginx

Configure Nginx to serve HLS files from /home/nfs
sudo nano /etc/nginx/sites-available/default

server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;
    location / {
        autoindex on;
        types {
            application/vnd.apple.mpegurl m3u8;
            video/mp2t ts;
        }
        root /home/nfs;
        add_header Cache-Control no-cache;
    }
}

Restart Nginx

sudo systemctl restart nginx

Create a DNS A record for your domain pointing to the droplet IP. Example: snip-4-nginx.red5.net

3. Deploy Stream Manager cluster

Deploy a regular Stream Manager cluster on DigitalOcean

Create two different node images:

The first for the Origin, Relay, and Edge nodes, with the regular configuration. Example: ci-snip-4-node-origin-edge-relay-image

The second for the Transcoder node, with extra configuration to record streams to the NFS-mounted disk. Example: ci-snip-4-node-transcoder-image

Extra steps to configure the transcoder node image:

Create a new CPU-Optimized droplet from the regular Origin, Relay, Edge snapshot/image on DigitalOcean

Connect to it via SSH

Install NFS client

sudo apt update
sudo apt install nfs-common

Mount NFS server

sudo mount my-nfs.red5.net:/home/nfs /usr/local/red5pro/webapps/live/streams

Check that you can see the mounted disk

df -h
my-nfs.red5.net:/home/nfs   78G  1.6G   76G   2% /usr/local/red5pro/webapps/live/streams

Reboot the droplet

sudo reboot

Connect to the droplet via SSH

ssh -i ssh_key.pem root@IP

Check that you can see the mounted disk and test file: test.txt

df -h
ls -lo /usr/local/red5pro/webapps/live/streams

Configure the Red5 Pro configuration files to record streams
Change two values in the configuration file /usr/local/red5pro/conf/hlsconfig.xml:

From:
<property name="forceVODRecord" value="false"/>
To:
<property name="forceVODRecord" value="true"/>

From:
<property name="dvrPlaylist" value="false"/>
To:
<property name="dvrPlaylist" value="true"/>
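The two property changes above can also be scripted, for example with sed; the install path is the one used in this guide, so treat this as a sketch and adjust as needed:

```shell
# Sketch: enable VOD recording and the DVR playlist in hlsconfig.xml.
# The install path is an assumption; adjust for your deployment.
HLS_CONF="${HLS_CONF:-/usr/local/red5pro/conf/hlsconfig.xml}"
if [ -f "$HLS_CONF" ]; then
  sudo sed -i \
    -e 's|\(name="forceVODRecord" value="\)false|\1true|' \
    -e 's|\(name="dvrPlaylist" value="\)false|\1true|' \
    "$HLS_CONF"
fi
```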

Create a new snapshot/image from this droplet (this image will be used for the Transcoder node)

Create a new Launch config using the two images: the first, with the regular configuration, for the Origin, Edge, and Relay nodes, and the second for the Transcoder node. Example:

{
    "launchconfig": {
        "name": "origin-edge-relay-transcoder-record",
        "description": "This is a sample digital ocean launch configuration with all four node types",
        "version": "0.0.3",
        "image": "ci-snip-4-node-origin-edge-relay-image",
        "targets": {
            "target": [
                {
                    "role": "origin",
                    "connectionCapacity": 30.0,
                    "instanceType": "c-4"
                },
                {
                    "role": "edge",
                    "connectionCapacity": 300.0,
                    "instanceType": "c-4"
                },
                {
                    "role": "relay",
                    "connectionCapacity": 40.0,
                    "instanceType": "c-4"
                },
                {
                    "role": "transcoder",
                    "connectionCapacity": 20.0,
                    "image": "ci-snip-4-node-transcoder-image",
                    "instanceType": "c-8"
                }
            ]
        },
        "properties": {
            "property": [
                {
                    "name": "property-name",
                    "value": "property-value"
                }
            ]
        },
        "metadata": {
            "meta": [
                {
                    "value": "meta-value",
                    "key": "meta-name"
                }
            ]
        }
    }
}

Create a new node group using the new launch config (e.g. via the Stream Manager API with Postman)

Publish a stream using the Stream Manager testbed. Example stream name: oles-test-1

Check the recorded streams on the Nginx web server: http://snip-4-nginx.red5.net/

Estimated Network Calculations:

Droplet network connection: up to 2000 Mbit/s
1 stream at 720p: ~4.5 Mbit/s
15 streams at 720p: 15 × 4.5 Mbit/s = 67.5 Mbit/s

So in this case the network should support 15 concurrent recordings without any problems.
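The headroom check above can be sketched as a small script; the stream count and per-stream bitrate are the estimates from this section, not measured values:

```shell
# Aggregate bitrate for N concurrent 720p recordings (estimates from above).
STREAMS=15
BITRATE_MBIT=4.5
TOTAL=$(awk -v n="$STREAMS" -v b="$BITRATE_MBIT" 'BEGIN { printf "%.1f", n * b }')
echo "${TOTAL} Mbit/s of the 2000 Mbit/s droplet limit"
```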
Storage-optimized droplets (see the table above) can be used for the NFS server.

For example, one could start with the so-2vcpu-16gb droplet size for NFS, with a 300 GB disk.
Additionally, the transcoder node should use a c-8 droplet so that it can support 5 published streams. Check the recordings on the NFS server, and the CPU, memory, and network load on both the transcoder node and the NFS server, to make sure they are all at a nominal load.

Documentation on how to use AWS EFS with an NFS client: AWS Elastic File System-Red5

Documentation on installing an NFS server/client on DigitalOcean: How to Set Up an NFS Mount on Ubuntu (Step-by-Step Guide) | DigitalOcean