
Complete Protocols Master Part 12: Streaming Protocols

January 31, 2026 Wasil Zafar 38 min read

Master video streaming from RTMP ingest to HLS/DASH delivery. Learn adaptive bitrate streaming, CDN integration, and build your own streaming pipeline.

Table of Contents

  1. Introduction
  2. RTMP (Ingest)
  3. HLS (Apple)
  4. DASH (MPEG)
  5. Adaptive Bitrate
  6. CDN Delivery
  7. FFmpeg Pipeline
  8. Summary

Introduction: Video Streaming Architecture

Video streaming has evolved from Flash-based RTMP to HTTP-based adaptive streaming. Modern platforms use RTMP for ingest and HLS/DASH for delivery—combining low-latency capture with scalable HTTP delivery.

Series Context: This is Part 12 of 20 in the Complete Protocols Master series. Streaming protocols operate at the Application Layer, primarily over HTTP for delivery.

Modern Streaming Pipeline

Streaming Pipeline:

ENCODER → INGEST → TRANSCODER → PACKAGER → CDN → PLAYER

1. ENCODER (OBS, Hardware)
   • Capture video/audio
   • Encode to H.264/H.265
   • Send via RTMP/SRT

2. INGEST SERVER
   • Receive RTMP stream
   • Validate stream key
   • Forward to transcoder

3. TRANSCODER
   • Create multiple bitrates
   • 1080p, 720p, 480p, 360p
   • Adaptive streaming variants

4. PACKAGER
   • Segment into chunks
   • Generate HLS/DASH manifest
   • Add encryption (DRM)

5. CDN
   • Cache at edge locations
   • Deliver to viewers globally
   • Handle millions of viewers

6. PLAYER
   • Fetch manifest
   • Select bitrate (ABR)
   • Buffer and play
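As a toy illustration, the six stages above can be modeled as a chain of functions. All names and payloads here are invented for the sketch, not a real streaming API:

```python
# Toy model of the streaming pipeline: each stage is a function that
# transforms the stream description produced by the previous stage.

def encode(frames):
    return {"codec": "h264", "payload": frames}

def ingest(stream, stream_key):
    assert stream_key, "reject streams without a valid key"
    return {**stream, "ingested": True}

def transcode(stream):
    # Produce the ABR ladder: one rendition per target resolution
    return [{**stream, "height": h} for h in (1080, 720, 480, 360)]

def package(renditions, segment_seconds=6):
    # Segment each rendition and emit a manifest listing all variants
    return {"manifest": "master.m3u8",
            "variants": [f"{r['height']}p/playlist.m3u8" for r in renditions]}

def deliver(packaged):
    # A CDN would cache segments at the edge; here we just pass through
    return packaged

# ENCODER -> INGEST -> TRANSCODER -> PACKAGER -> CDN -> PLAYER
out = deliver(package(transcode(ingest(encode(b"raw frames"), "key123"))))
print(out["manifest"], len(out["variants"]))  # master.m3u8 4
```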

Protocol Comparison

Protocol   Use Case               Latency   Transport
RTMP       Ingest                 1-3s      TCP
HLS        Delivery               10-30s    HTTP
LL-HLS     Low-latency delivery   2-5s      HTTP
DASH       Delivery               10-30s    HTTP
LL-DASH    Low-latency delivery   2-5s      HTTP
WebRTC     Ultra-low latency      <1s       UDP/TCP
SRT        Reliable ingest        1-2s      UDP

RTMP: Real-Time Messaging Protocol

RTMP was created by Adobe for Flash. While Flash is dead, RTMP lives on as the de facto standard for stream ingest—OBS, encoders, and streaming platforms all speak RTMP.

[Figure] RTMP connection lifecycle: from TCP handshake through publish for live stream ingest

RTMP Connection Flow

RTMP Connection:

1. TCP HANDSHAKE
   Client → Server: C0 + C1 (version + random)
   Server → Client: S0 + S1 + S2
   Client → Server: C2

2. CONNECT
   Client: connect('rtmp://server/app')
   Server: result (success)

3. CREATE STREAM
   Client: createStream()
   Server: result (stream_id)

4. PUBLISH (for streaming)
   Client: publish('stream_key', 'live')
   Server: onStatus('NetStream.Publish.Start')

5. SEND DATA
   Client: audio/video chunks
   (Continues until disconnect)

RTMP URL Format:
rtmp://server.com/app/stream_key
• server.com - Server address
• app - Application name
• stream_key - Unique stream identifier
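Splitting an RTMP URL into these three parts is a small exercise; this parser is a sketch using only the standard library (the URL below is a made-up example):

```python
from urllib.parse import urlparse

def parse_rtmp_url(url):
    """Split rtmp://server/app/stream_key into its three parts."""
    u = urlparse(url)
    if u.scheme != "rtmp":
        raise ValueError("not an RTMP URL")
    # First path component is the application, the rest is the stream key
    app, _, stream_key = u.path.lstrip("/").partition("/")
    return {"server": u.hostname, "app": app, "stream_key": stream_key}

print(parse_rtmp_url("rtmp://server.com/app/stream_key"))
# {'server': 'server.com', 'app': 'app', 'stream_key': 'stream_key'}
```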
# RTMP streaming examples

# OBS Settings:
# Server: rtmp://live.twitch.tv/app
# Stream Key: live_xxxxx_yyyyy

# FFmpeg RTMP stream to server
ffmpeg -i input.mp4 \
    -c:v libx264 -preset veryfast \
    -maxrate 3000k -bufsize 6000k \
    -c:a aac -b:a 128k \
    -f flv rtmp://server.com/app/stream_key

# Receive RTMP and save to file
ffmpeg -i rtmp://server.com/app/stream_key \
    -c copy output.mp4

# RTMP to HLS conversion (live)
ffmpeg -i rtmp://server.com/app/stream_key \
    -c:v copy -c:a copy \
    -f hls -hls_time 4 -hls_list_size 5 \
    /var/www/stream/playlist.m3u8

SRT: Secure Reliable Transport

SRT vs RTMP:

SRT Advantages:
• UDP-based (lower latency)
• Built-in encryption (AES)
• Error correction (ARQ)
• Better over unreliable networks
• Open source (Haivision)

When to use SRT:
• Contribution links over internet
• Remote production
• When RTMP has packet loss issues

SRT Example:
ffmpeg -i input.mp4 \
    -c:v libx264 -f mpegts \
    'srt://server.com:9000?streamid=mystream'

# Receive SRT (listen for the incoming stream)
ffmpeg -i 'srt://0.0.0.0:9000?mode=listener' \
    -c copy output.ts

HLS: HTTP Live Streaming

HLS (Apple, 2009) is the most widely supported streaming format. It segments video into small chunks delivered over HTTP, enabling CDN caching and adaptive bitrate switching.

[Figure] HLS structure: master playlist selects quality level, media playlist sequences .ts video segments
Why HTTP-based? HTTP works through firewalls, caches on CDNs, and scales massively. This is why HLS/DASH won over RTMP for delivery.

HLS File Organization

HLS Directory Structure:

stream/
├── master.m3u8          # Master playlist (quality selector)
├── 1080p/
│   ├── playlist.m3u8    # Media playlist (segment list)
│   ├── segment000.ts    # Video chunk (4-10 seconds)
│   ├── segment001.ts
│   └── segment002.ts
├── 720p/
│   ├── playlist.m3u8
│   └── *.ts
├── 480p/
│   ├── playlist.m3u8
│   └── *.ts
└── audio/
    ├── playlist.m3u8
    └── *.aac

Player Flow:
1. Fetch master.m3u8
2. Select quality based on bandwidth
3. Fetch media playlist for that quality
4. Download and play segments sequentially
5. Switch quality if bandwidth changes
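Step 2 of the player flow — picking a variant from the master playlist by bandwidth — can be sketched like this. The playlist text mirrors the example below, and the attribute parsing is deliberately minimal (real playlists can contain quoted, comma-bearing attributes like CODECS):

```python
MASTER = """#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360
360p/playlist.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2800000,RESOLUTION=1280x720
720p/playlist.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=5000000,RESOLUTION=1920x1080
1080p/playlist.m3u8
"""

def parse_master(text):
    """Return (bandwidth, uri) pairs from a master playlist."""
    variants, lines = [], text.strip().splitlines()
    for i, line in enumerate(lines):
        if line.startswith("#EXT-X-STREAM-INF:"):
            attrs = dict(kv.split("=") for kv in line.split(":", 1)[1].split(","))
            variants.append((int(attrs["BANDWIDTH"]), lines[i + 1]))
    return variants

def pick_variant(variants, measured_bps):
    """Highest-bandwidth variant that fits the measured throughput."""
    fitting = [v for v in variants if v[0] <= measured_bps]
    return max(fitting) if fitting else min(variants)

v = parse_master(MASTER)
print(pick_variant(v, 3_000_000))  # (2800000, '720p/playlist.m3u8')
```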
# Master playlist (master.m3u8)

#EXTM3U
#EXT-X-VERSION:3

#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360
360p/playlist.m3u8

#EXT-X-STREAM-INF:BANDWIDTH=1400000,RESOLUTION=842x480
480p/playlist.m3u8

#EXT-X-STREAM-INF:BANDWIDTH=2800000,RESOLUTION=1280x720
720p/playlist.m3u8

#EXT-X-STREAM-INF:BANDWIDTH=5000000,RESOLUTION=1920x1080
1080p/playlist.m3u8
# Media playlist (720p/playlist.m3u8)

#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:6
#EXT-X-MEDIA-SEQUENCE:0

#EXTINF:6.000,
segment000.ts
#EXTINF:6.000,
segment001.ts
#EXTINF:6.000,
segment002.ts
#EXTINF:4.500,
segment003.ts

# For VOD, add at end:
#EXT-X-ENDLIST

# For live, playlist updates as new segments added
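Parsing a media playlist like the one above is straightforward; this sketch collects the segment names and sums the EXTINF durations to get total playlist length:

```python
MEDIA = """#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:6
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:6.000,
segment000.ts
#EXTINF:6.000,
segment001.ts
#EXTINF:6.000,
segment002.ts
#EXTINF:4.500,
segment003.ts
#EXT-X-ENDLIST
"""

def parse_media_playlist(text):
    """Return (segments, total_duration) from a media playlist."""
    segments, duration = [], 0.0
    lines = text.strip().splitlines()
    for i, line in enumerate(lines):
        if line.startswith("#EXTINF:"):
            # "#EXTINF:6.000," -> 6.0; segment URI is on the next line
            duration += float(line[len("#EXTINF:"):].rstrip(","))
            segments.append(lines[i + 1])
    return segments, duration

segs, total = parse_media_playlist(MEDIA)
print(len(segs), total)  # 4 22.5
```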
# Create HLS from video file

ffmpeg -i input.mp4 \
    -c:v libx264 -crf 21 -preset veryfast \
    -c:a aac -b:a 128k \
    -hls_time 6 \
    -hls_list_size 0 \
    -hls_segment_filename "segment%03d.ts" \
    playlist.m3u8

# Multi-bitrate HLS (adaptive)
ffmpeg -i input.mp4 \
    -filter_complex "[0:v]split=3[v1][v2][v3];\
        [v1]scale=1280:720[v1out];\
        [v2]scale=854:480[v2out];\
        [v3]scale=640:360[v3out]" \
    -map "[v1out]" -c:v:0 libx264 -b:v:0 2800k \
    -map "[v2out]" -c:v:1 libx264 -b:v:1 1400k \
    -map "[v3out]" -c:v:2 libx264 -b:v:2 800k \
    -map 0:a -map 0:a -map 0:a -c:a aac -b:a 128k \
    -f hls -hls_time 6 \
    -master_pl_name master.m3u8 \
    -var_stream_map "v:0,a:0 v:1,a:1 v:2,a:2" \
    stream_%v/playlist.m3u8

DASH: Dynamic Adaptive Streaming over HTTP

MPEG-DASH is the international standard for adaptive streaming. Unlike HLS (Apple proprietary), DASH is codec-agnostic and widely adopted on non-Apple platforms.

[Figure] MPEG-DASH MPD manifest: Periods contain AdaptationSets with multiple Representations for adaptive quality
HLS vs DASH

Feature             HLS                DASH
Creator             Apple              MPEG (ISO standard)
Manifest            .m3u8 (text)       .mpd (XML)
Segments            .ts (MPEG-TS)      .m4s (fMP4)
iOS support         ✅ Native          ❌ Requires JS player
Android support     ✅ ExoPlayer       ✅ Native
DRM                 FairPlay           Widevine, PlayReady
Codec flexibility   Limited            Any codec
<!-- DASH MPD (Media Presentation Description) -->
<?xml version="1.0"?>
<MPD xmlns="urn:mpeg:dash:schema:mpd:2011"
     profiles="urn:mpeg:dash:profile:isoff-main:2011"
     type="static"
     mediaPresentationDuration="PT1H30M"
     minBufferTime="PT2S">
    
    <Period>
        <AdaptationSet mimeType="video/mp4" segmentAlignment="true">
            <!-- 1080p -->
            <Representation id="1080p" bandwidth="5000000" 
                            width="1920" height="1080">
                <SegmentTemplate media="1080p_$Number$.m4s"
                                 initialization="1080p_init.mp4"
                                 duration="6000" timescale="1000"/>
            </Representation>
            
            <!-- 720p -->
            <Representation id="720p" bandwidth="2800000"
                            width="1280" height="720">
                <SegmentTemplate media="720p_$Number$.m4s"
                                 initialization="720p_init.mp4"
                                 duration="6000" timescale="1000"/>
            </Representation>
        </AdaptationSet>
        
        <AdaptationSet mimeType="audio/mp4">
            <Representation id="audio" bandwidth="128000">
                <SegmentTemplate media="audio_$Number$.m4s"
                                 initialization="audio_init.mp4"
                                 duration="6000" timescale="1000"/>
            </Representation>
        </AdaptationSet>
    </Period>
</MPD>
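Given a SegmentTemplate like the one above, the player derives segment URLs itself by substituting $Number$. A sketch of that expansion (numbering starts at 1 here, the DASH default when no startNumber is given):

```python
def expand_template(media, init, duration, timescale, total_seconds):
    """Expand a DASH SegmentTemplate into an init URL plus media segment URLs."""
    seg_seconds = duration / timescale                  # 6000/1000 = 6s per segment
    count = -(-int(total_seconds) // int(seg_seconds))  # ceiling division
    urls = [media.replace("$Number$", str(n)) for n in range(1, count + 1)]
    return init, urls

# PT1H30M presentation = 5400 seconds of 6-second segments
init, urls = expand_template("720p_$Number$.m4s", "720p_init.mp4",
                             duration=6000, timescale=1000,
                             total_seconds=90 * 60)
print(init, urls[0], urls[-1], len(urls))
# 720p_init.mp4 720p_1.m4s 720p_900.m4s 900
```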
# Create DASH content with FFmpeg

# Single quality
ffmpeg -i input.mp4 \
    -c:v libx264 -c:a aac \
    -f dash \
    -seg_duration 6 \
    output.mpd

# Multi-bitrate DASH
ffmpeg -i input.mp4 \
    -map 0:v -map 0:v -map 0:v -map 0:a \
    -c:v libx264 -c:a aac \
    -b:v:0 5000k -s:v:0 1920x1080 \
    -b:v:1 2800k -s:v:1 1280x720 \
    -b:v:2 1400k -s:v:2 854x480 \
    -b:a 128k \
    -f dash \
    -adaptation_sets "id=0,streams=v id=1,streams=a" \
    output.mpd

Adaptive Bitrate Streaming (ABR)

ABR algorithms automatically switch video quality based on network conditions. This ensures smooth playback—high quality when bandwidth allows, lower quality to prevent buffering.

[Figure] ABR algorithm: monitors bandwidth and buffer level to dynamically select optimal video quality

How ABR Works

ABR Decision Factors:

1. BANDWIDTH ESTIMATION
   • Measure download speed of recent segments
   • Weighted average (recent segments matter more)

2. BUFFER LEVEL
   • How many seconds in buffer?
   • Low buffer → safer (lower quality)
   • High buffer → can try higher quality

3. QUALITY SWITCHING
   • Switch up: Conservative (need consistent bandwidth)
   • Switch down: Aggressive (prevent rebuffer)

ABR Strategies:
• Rate-based: Switch based on throughput
• Buffer-based: Switch based on buffer level
• Hybrid: Combine both signals

Example Logic:
if buffer < 5s:
    select_lowest_quality()
elif throughput > 1.5 * current_bitrate:
    try_higher_quality()
elif throughput < 0.8 * current_bitrate:
    switch_lower_quality()
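The weighted average mentioned in step 1 is commonly an exponentially weighted moving average; a minimal sketch (the alpha value is an illustrative choice, not a standard):

```python
class ThroughputEstimator:
    """EWMA of per-segment download throughput: recent samples weigh more."""

    def __init__(self, alpha=0.3):
        self.alpha = alpha        # higher alpha = react faster to change
        self.estimate = None      # bits per second

    def add_sample(self, segment_bits, download_seconds):
        sample = segment_bits / download_seconds
        if self.estimate is None:
            self.estimate = sample
        else:
            self.estimate = self.alpha * sample + (1 - self.alpha) * self.estimate
        return self.estimate

est = ThroughputEstimator()
est.add_sample(12_000_000, 3.0)  # 4 Mbps sample -> estimate starts at 4 Mbps
est.add_sample(4_000_000, 2.0)   # 2 Mbps sample -> estimate drops toward 2 Mbps
print(round(est.estimate))       # 3400000
```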
# Simple ABR algorithm simulation

def simple_abr_algorithm():
    """Demonstrate ABR quality selection"""
    
    # Available quality levels
    qualities = [
        {"name": "360p", "bitrate": 800_000},
        {"name": "480p", "bitrate": 1_400_000},
        {"name": "720p", "bitrate": 2_800_000},
        {"name": "1080p", "bitrate": 5_000_000},
    ]
    
    def select_quality(throughput_bps, buffer_seconds, current_quality_idx):
        """Select quality based on throughput and buffer"""
        
        # Safety margin (don't use 100% of bandwidth)
        safe_throughput = throughput_bps * 0.8
        
        # If buffer is critical, go to lowest
        if buffer_seconds < 3:
            print(f"  ⚠️ Critical buffer ({buffer_seconds}s) - lowest quality")
            return 0
        
        # Find highest quality we can sustain
        selected = 0
        for i, q in enumerate(qualities):
            if q["bitrate"] < safe_throughput:
                selected = i
        
        # Switching logic
        if selected > current_quality_idx:
            # Only switch up if buffer healthy
            if buffer_seconds > 10:
                print(f"  ↑ Buffer healthy ({buffer_seconds}s) - upgrading")
                return selected
            else:
                print(f"  → Buffer moderate - staying at current")
                return current_quality_idx
        elif selected < current_quality_idx:
            print(f"  ↓ Bandwidth dropped - downgrading")
            return selected
        
        return current_quality_idx
    
    print("ABR Algorithm Simulation")
    print("=" * 50)
    
    # Simulate scenarios
    scenarios = [
        (5_000_000, 15, 2),  # Good bandwidth, healthy buffer
        (1_000_000, 8, 2),   # Bandwidth dropped
        (3_000_000, 2, 1),   # Critical buffer
        (4_000_000, 20, 1),  # Bandwidth recovered
    ]
    
    for throughput, buffer, current in scenarios:
        print(f"\nThroughput: {throughput/1_000_000:.1f} Mbps, "
              f"Buffer: {buffer}s, Current: {qualities[current]['name']}")
        new_idx = select_quality(throughput, buffer, current)
        print(f"  Selected: {qualities[new_idx]['name']}")

simple_abr_algorithm()

CDN Delivery

CDNs (Content Delivery Networks) cache streaming content at edge locations worldwide. This reduces latency, handles traffic spikes, and enables global reach.

[Figure] CDN delivery chain: origin → shield → edge servers cache video segments closer to viewers worldwide

Video CDN Flow

CDN Video Delivery:

ORIGIN → SHIELD → EDGE → VIEWER

1. ORIGIN SERVER
   • Source of truth
   • Generates HLS/DASH
   • Only 1 location

2. SHIELD (Mid-tier)
   • Reduces origin load
   • First cache layer
   • Few locations (1-3)

3. EDGE SERVERS
   • Close to viewers
   • Final cache layer
   • 100+ locations globally

Cache Logic:
1. Viewer requests segment001.ts
2. Edge: Cache miss → ask Shield
3. Shield: Cache miss → ask Origin
4. Origin returns segment
5. Shield caches + returns
6. Edge caches + returns
7. Next viewer request → Edge hit!

CDN Providers for Video:
• CloudFront (AWS)
• Fastly
• Cloudflare Stream
• Akamai
• Azure CDN
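The cache logic above can be simulated with nested tiers; the tier names and request sequence here are illustrative:

```python
class CacheTier:
    """One cache layer (edge or shield) in front of a parent tier."""

    def __init__(self, name, parent):
        self.name, self.parent, self.store = name, parent, {}

    def get(self, key, log):
        if key in self.store:
            log.append(f"{self.name}: HIT")
            return self.store[key]
        log.append(f"{self.name}: MISS")
        value = self.parent.get(key, log)  # fall through to the next tier
        self.store[key] = value            # cache on the way back
        return value

class Origin:
    def get(self, key, log):
        log.append("origin: served")
        return f"<bytes of {key}>"

edge = CacheTier("edge", CacheTier("shield", Origin()))

log1, log2 = [], []
edge.get("segment001.ts", log1)  # cold: misses cascade down to origin
edge.get("segment001.ts", log2)  # warm: edge hit, origin untouched
print(log1)  # ['edge: MISS', 'shield: MISS', 'origin: served']
print(log2)  # ['edge: HIT']
```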
# CloudFront + HLS example

# 1. Upload HLS to S3
aws s3 sync ./stream/ s3://my-video-bucket/stream/

# 2. Create CloudFront distribution
# Origin: my-video-bucket.s3.amazonaws.com
# Cache Policy: CachingOptimized

# 3. Access via CDN
# https://d1234567890.cloudfront.net/stream/master.m3u8

# Cache Headers for HLS
# Manifest: Cache-Control: max-age=2 (live) or max-age=31536000 (VOD)
# Segments: Cache-Control: max-age=31536000 (immutable)

# Nginx config for HLS
location /hls/ {
    types {
        application/vnd.apple.mpegurl m3u8;
        video/mp2t ts;
    }
    root /var/www/stream;

    # Segments never change once written - cache for a year
    location ~ \.ts$ {
        add_header Cache-Control "max-age=31536000, immutable";
    }

    # Live manifests update every few seconds - keep cache short
    location ~ \.m3u8$ {
        add_header Cache-Control "max-age=2";
    }
}

FFmpeg Streaming Pipeline

FFmpeg is the Swiss Army knife of video processing. Here are practical commands for building a complete streaming pipeline.

# Complete HLS streaming pipeline

# 1. RTMP Ingest to HLS (live streaming)
ffmpeg -listen 1 -i rtmp://0.0.0.0:1935/live/stream \
    -c:v libx264 -preset veryfast -tune zerolatency \
    -c:a aac -b:a 128k \
    -f hls \
    -hls_time 4 \
    -hls_list_size 10 \
    -hls_flags delete_segments \
    /var/www/stream/live.m3u8

# 2. Multi-bitrate encoding (ABR ladder)
ffmpeg -i rtmp://localhost/live/stream \
    -filter_complex "[0:v]split=3[v1][v2][v3];\
        [v1]scale=1280:720[v720];\
        [v2]scale=854:480[v480];\
        [v3]scale=640:360[v360]" \
    -map "[v720]" -c:v:0 libx264 -b:v:0 2800k -maxrate:v:0 3000k -bufsize:v:0 6000k \
    -map "[v480]" -c:v:1 libx264 -b:v:1 1400k -maxrate:v:1 1500k -bufsize:v:1 3000k \
    -map "[v360]" -c:v:2 libx264 -b:v:2 800k -maxrate:v:2 900k -bufsize:v:2 1800k \
    -map 0:a -map 0:a -map 0:a -c:a aac -b:a 128k -ac 2 \
    -f hls -hls_time 4 -hls_list_size 10 \
    -master_pl_name master.m3u8 \
    -var_stream_map "v:0,a:0,name:720p v:1,a:1,name:480p v:2,a:2,name:360p" \
    stream_%v.m3u8
# Python streaming helper

import subprocess
import os

def create_hls_stream(input_file, output_dir, qualities=None):
    """Create multi-bitrate HLS from video file"""
    
    if qualities is None:
        qualities = [
            {"name": "720p", "scale": "1280:720", "bitrate": "2800k"},
            {"name": "480p", "scale": "854:480", "bitrate": "1400k"},
            {"name": "360p", "scale": "640:360", "bitrate": "800k"},
        ]
    
    os.makedirs(output_dir, exist_ok=True)
    
    # Build filter complex
    splits = len(qualities)
    filter_parts = [f"[0:v]split={splits}" + "".join(f"[v{i}]" for i in range(splits))]
    
    for i, q in enumerate(qualities):
        filter_parts.append(f"[v{i}]scale={q['scale']}[v{q['name']}]")
    
    filter_complex = ";".join(filter_parts)
    
    # Build command
    cmd = ["ffmpeg", "-i", input_file, "-filter_complex", filter_complex]
    
    # Add outputs: one video map and one audio map per quality level,
    # so variant i can reference v:i and a:i in -var_stream_map
    var_stream_map = []
    for i, q in enumerate(qualities):
        cmd.extend([
            "-map", f"[v{q['name']}]",
            f"-c:v:{i}", "libx264",
            f"-b:v:{i}", q["bitrate"],
            "-map", "0:a",
        ])
        var_stream_map.append(f"v:{i},a:{i},name:{q['name']}")
    
    cmd.extend([
        "-c:a", "aac", "-b:a", "128k",
        "-f", "hls",
        "-hls_time", "6",
        "-master_pl_name", "master.m3u8",
        "-var_stream_map", " ".join(var_stream_map),
        f"{output_dir}/stream_%v.m3u8"
    ])
    
    print("Generated FFmpeg command:")
    print(" ".join(cmd))
    return cmd

# Example usage
print("HLS Creation Example")
print("=" * 50)
create_hls_stream("input.mp4", "/var/www/hls")

Summary & Next Steps

Key Takeaways:
  • RTMP: Standard for stream ingest (OBS → server)
  • HLS: Apple's format, widest device support
  • DASH: Open standard, codec-agnostic
  • ABR: Automatically adapts quality to bandwidth
  • CDN: Essential for global, scalable delivery
  • FFmpeg: Swiss Army knife for encoding/packaging
Quiz

Test Your Knowledge

  1. Why RTMP for ingest but HLS for delivery? (RTMP is low-latency, HLS is cacheable/scalable)
  2. What's the HLS segment duration trade-off? (Longer = fewer requests, shorter = lower latency)
  3. HLS vs DASH: which for iOS? (HLS - native support)
  4. What does ABR optimize? (Quality vs buffering balance)
  5. CDN shield tier purpose? (Reduce origin load)