Do not try this on a live system or your home VM. Fire up a test VPS and test it there first.
Optimizing VPS for High-Performance Jellyfin Streaming: A Complete Technical Guide for Pangolin Setup
When streaming 1080p content from a home server through a VPS running Pangolin, performance optimization becomes crucial for a smooth viewing experience. This guide explains how to configure your VPS to handle high-quality video streams efficiently, focusing on network optimization, traffic prioritization, and system-level tweaks.
Understanding the Challenge
Streaming video through a VPS introduces additional complexity compared to direct streaming. The data flow looks like this:
Home Server → VPS → Client Device
This path requires careful optimization at each step to maintain video quality and minimize buffering. Our solution addresses three key areas: network stack optimization, traffic prioritization, and application-level configuration.
Network Stack Optimization
First, let’s optimize the TCP stack by creating a new configuration file at /etc/sysctl.d/99-jellyfin-stream.conf:
# TCP Buffer Optimization
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 87380 16777216
These settings increase the TCP buffer sizes to handle high-bandwidth video streams. The maximum buffer size of 16MB (16777216 bytes) allows for smoother streaming by reducing the likelihood of buffer underruns.
We also enable BBR congestion control:
net.core.default_qdisc = fq
net.ipv4.tcp_congestion_control = bbr
BBR (Bottleneck Bandwidth and Round-trip time) is particularly effective for video streaming as it maintains high throughput and low latency even on congested networks.
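These settings can be applied without a reboot. A quick way to load the file and confirm BBR is actually in use (assuming a kernel of 4.9 or newer, where the tcp_bbr module is available):
# Load the new configuration from /etc/sysctl.d/
sudo sysctl --system
# Confirm BBR is available and load the module if it is missing
sysctl net.ipv4.tcp_available_congestion_control
sudo modprobe tcp_bbr
# Verify the active congestion control and default qdisc
sysctl net.ipv4.tcp_congestion_control net.core.default_qdisc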
Traffic Prioritization
To ensure video streams get priority over other traffic, we implement a traffic control hierarchy using HTB classes and fq_codel. Here’s our traffic shaping script:
#!/bin/bash
# Find the active Docker bridge interface (UP state only)
DOCKER_IF=$(ip -br link show | grep 'br-' | grep 'UP' | head -n1 | awk '{print $1}')
if [ -z "$DOCKER_IF" ]; then
    echo "No active Docker bridge interface found!"
    exit 1
fi
echo "Found active Docker bridge interface: $DOCKER_IF"
# Clear existing qdiscs
tc qdisc del dev $DOCKER_IF root 2>/dev/null
# Add root qdisc with adjusted r2q
tc qdisc add dev $DOCKER_IF root handle 1: htb default 30 r2q 1000
# Add main class with adjusted burst
tc class add dev $DOCKER_IF parent 1: classid 1:1 htb rate 1000mbit ceil 1000mbit burst 15k cburst 15k
# Add video streaming class with adjusted burst
tc class add dev $DOCKER_IF parent 1:1 classid 1:10 htb rate 800mbit ceil 1000mbit burst 15k cburst 15k
# Add default class with adjusted burst
tc class add dev $DOCKER_IF parent 1:1 classid 1:30 htb rate 200mbit ceil 400mbit burst 15k cburst 15k
# Add fq_codel with adjusted parameters for streaming
tc qdisc add dev $DOCKER_IF parent 1:10 handle 10: fq_codel \
target 15ms interval 100ms flows 1024 quantum 1514 \
memory_limit 32Mb ecn
# Add fq_codel for default traffic
tc qdisc add dev $DOCKER_IF parent 1:30 handle 30: fq_codel \
target 5ms interval 100ms flows 1024 quantum 1514 \
memory_limit 32Mb ecn
# Add filters for Jellyfin traffic
tc filter add dev $DOCKER_IF protocol ip parent 1: prio 1 u32 \
match ip dport 8096 0xffff flowid 1:10
tc filter add dev $DOCKER_IF protocol ip parent 1: prio 1 u32 \
match ip sport 8096 0xffff flowid 1:10
# Add additional filter for HTTPS traffic (since Jellyfin might be behind reverse proxy)
tc filter add dev $DOCKER_IF protocol ip parent 1: prio 2 u32 \
match ip dport 443 0xffff flowid 1:10
tc filter add dev $DOCKER_IF protocol ip parent 1: prio 2 u32 \
match ip sport 443 0xffff flowid 1:10
echo -e "\nVerifying configuration:"
tc -s qdisc show dev $DOCKER_IF
echo -e "\nClass configuration:"
tc -s class show dev $DOCKER_IF
echo -e "\nFilter configuration:"
tc -s filter show dev $DOCKER_IF
This script creates two traffic classes: one for video streaming and another for the remaining traffic. The fq_codel algorithm helps manage latency and ensures fair queueing within each class.
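To put the script in place, save it as optimize-video-stream.sh (the same path the systemd unit in the recap below expects), make it executable, and run it as root; the verification output at the end should list the 1:10 and 1:30 classes:
# Install and run the traffic shaping script
sudo install -m 755 optimize-video-stream.sh /usr/local/bin/optimize-video-stream.sh
sudo /usr/local/bin/optimize-video-stream.sh
# Quick sanity check: the 1:10 (video) and 1:30 (default) classes should appear
DOCKER_IF=$(ip -br link show | grep 'br-' | grep 'UP' | head -n1 | awk '{print $1}')
tc class show dev "$DOCKER_IF"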
Traefik Configuration
Traefik needs special configuration to handle video streams efficiently. Update your traefik_config.yml:
entryPoints:
  websecure:
    address: ":443"
    transport:
      respondingTimeouts:
        readTimeout: "12h"
        writeTimeout: "12h"
    http:
      middlewares:
        - buffering

tls:
  options:
    default:
      minVersion: "VersionTLS12"
The long timeout values prevent connection drops during extended viewing sessions, while the buffering middleware helps manage large media chunks:
http:
  middlewares:
    buffering:
      buffering:
        maxRequestBodyBytes: 104857600   # 100MB
        memRequestBodyBytes: 52428800    # 50MB
        maxResponseBodyBytes: 104857600  # 100MB
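As a rough end-to-end check, assuming your Jellyfin instance is published through Pangolin at a hostname such as jellyfin.example.com (substitute your own domain), you can confirm the HTTPS entrypoint responds and time the request:
# Confirm the HTTPS entrypoint responds and time the request (substitute your own domain)
curl -s -o /dev/null \
    -w 'HTTP %{http_code}, connect %{time_connect}s, total %{time_total}s\n' \
    https://jellyfin.example.com/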
Docker Configuration
Optimize your Docker containers by updating docker-compose.yml:
services:
  gerbil:
    sysctls:
      - net.core.rmem_max=16777216
      - net.core.wmem_max=16777216
    deploy:
      resources:
        limits:
          cpus: '4'
          memory: 4G
        reservations:
          cpus: '2'
          memory: 2G
These settings ensure your containers have adequate resources for handling video streams and maintain optimal network buffer sizes.
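After recreating the container you can confirm the limits and sysctls actually took effect. A quick check, assuming the container is named gerbil as in the snippet above (adjust if Compose prefixes the name):
# Recreate the container with the new limits
docker compose up -d gerbil
# Confirm CPU/memory limits (NanoCpus and bytes) and the per-container sysctls
docker inspect gerbil --format 'CPUs: {{.HostConfig.NanoCpus}}  Memory: {{.HostConfig.Memory}}'
docker inspect gerbil --format '{{.HostConfig.Sysctls}}'
# Live resource usage while a stream is playing
docker stats --no-stream gerbil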
Monitoring and Maintenance
To monitor your optimized setup, use these commands:
# Monitor traffic shaping (replace br-XXXX with your Docker bridge interface name)
watch -n 1 'tc -s class show dev br-XXXX'
# Check network throughput
iftop -i br-XXXX
# View TCP buffer usage
cat /proc/net/sockstat
Regular monitoring helps identify potential bottlenecks and verify that your optimizations are working as intended.
Expected Results
After implementing these optimizations, you should notice:
- Reduced buffering during playback
- More stable video quality
- Better handling of network congestion
- Smoother streaming experience, especially during peak usage
Conclusion
By implementing these optimizations, your VPS becomes better equipped to handle 1080p video streaming through Jellyfin. The combination of network stack optimization, traffic prioritization, and application-level configuration creates a robust foundation for high-quality video streaming.
Remember to test thoroughly after implementing these changes and adjust values based on your specific needs and network conditions. While these settings provide a solid starting point, you may need to fine-tune them based on your particular use case and hardware capabilities.
Recap
Full rundown of the VPS configuration, specifically for streaming 1080p Jellyfin content from your home server.
Traefik configuration to better handle video streams. In your traefik_config.yml:
# Optimize for large media file transfers
entryPoints:
  websecure:
    address: ":443"
    transport:
      respondingTimeouts:
        readTimeout: "12h"       # Long timeout for extended viewing sessions
        writeTimeout: "12h"      # Matches read timeout
      forwardingTimeouts:
        dialTimeout: "30s"
        responseHeaderTimeout: "30s"
    http:
      middlewares:
        - buffering

tls:
  options:
    default:
      minVersion: "VersionTLS12"
      cipherSuites:
        - "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"  # Fast encryption for streaming
        - "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"    # Backup cipher

# Add a buffering middleware for smoother playback
http:
  middlewares:
    buffering:
      buffering:
        maxRequestBodyBytes: 104857600   # 100MB for large media chunks
        memRequestBodyBytes: 52428800    # 50MB in-memory buffer
        maxResponseBodyBytes: 104857600  # 100MB response size
        retryExpression: "IsNetworkError() && Attempts() < 3"
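After updating the file, restart the proxy and scan its logs for configuration errors. A minimal sketch, assuming the proxy service in your Pangolin stack is named traefik:
# Restart the proxy so it picks up the new static configuration
docker compose restart traefik
# Scan the startup logs for YAML or middleware errors
docker compose logs --tail=100 traefik | grep -iE 'error|warn'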
A system-level optimization file specifically for media streaming. Create /etc/sysctl.d/99-jellyfin-stream.conf:
# TCP optimizations for high-bandwidth video streaming
net.core.rmem_max = 16777216 # Increase max receive buffer for smoother streaming
net.core.wmem_max = 16777216 # Increase max send buffer
net.ipv4.tcp_rmem = 4096 87380 16777216 # Min, default, and max receive buffer
net.ipv4.tcp_wmem = 4096 87380 16777216 # Min, default, and max send buffer
# Enable BBR for better throughput
net.core.default_qdisc = fq
net.ipv4.tcp_congestion_control = bbr
# Optimize for streaming workload
net.ipv4.tcp_slow_start_after_idle = 0 # Maintain window size
net.ipv4.tcp_mtu_probing = 1 # Automatically detect optimal MTU
net.core.netdev_max_backlog = 65536 # Handle more packets in queue
net.ipv4.tcp_window_scaling = 1 # Enable window scaling
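Once the file is in place, reload the sysctl drop-ins and spot-check the streaming-specific values:
# Reload all sysctl drop-in files
sudo sysctl --system
# Spot-check the streaming-related settings
sysctl net.ipv4.tcp_slow_start_after_idle net.ipv4.tcp_mtu_probing net.core.netdev_max_backlog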
Optimize your Docker setup for media streaming. Modify your docker-compose.yml:
services:
  gerbil:
    # Add these network optimizations for the proxy
    sysctls:
      - net.core.rmem_max=16777216
      - net.core.wmem_max=16777216
      - net.ipv4.tcp_rmem=4096 87380 16777216
      - net.ipv4.tcp_wmem=4096 87380 16777216
    deploy:
      resources:
        limits:
          cpus: '4'       # Allocate more CPU for transcoding if needed
          memory: 4G      # More memory for buffering
        reservations:
          cpus: '2'       # Guarantee minimum CPU
          memory: 2G      # Guarantee minimum memory
    networks:
      pangolin:
        priority: 1       # Give network priority to this container

networks:
  pangolin:
    driver: bridge
    driver_opts:
      com.docker.network.driver.mtu: 1500  # Optimize MTU for internet streaming
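After editing the compose file, recreate the stack and confirm the pangolin network carries the expected MTU. A sketch, assuming the network is actually named pangolin (Compose may prefix it with your project name):
# driver_opts only apply when the network is (re)created; take the stack down first if it already exists
docker compose down
docker compose up -d
# Confirm the MTU option on the pangolin bridge network
docker network inspect pangolin --format '{{.Options}}'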
A traffic shaping script to prioritize video streams. Create /usr/local/bin/optimize-video-stream.sh:
#!/bin/bash
# Find the active Docker bridge interface (UP state only)
DOCKER_IF=$(ip -br link show | grep 'br-' | grep 'UP' | head -n1 | awk '{print $1}')
if [ -z "$DOCKER_IF" ]; then
    echo "No active Docker bridge interface found!"
    exit 1
fi
echo "Found active Docker bridge interface: $DOCKER_IF"
# Clear existing qdiscs
tc qdisc del dev $DOCKER_IF root 2>/dev/null
# Add root qdisc with adjusted r2q
tc qdisc add dev $DOCKER_IF root handle 1: htb default 30 r2q 1000
# Add main class with adjusted burst
tc class add dev $DOCKER_IF parent 1: classid 1:1 htb rate 1000mbit ceil 1000mbit burst 15k cburst 15k
# Add video streaming class with adjusted burst
tc class add dev $DOCKER_IF parent 1:1 classid 1:10 htb rate 800mbit ceil 1000mbit burst 15k cburst 15k
# Add default class with adjusted burst
tc class add dev $DOCKER_IF parent 1:1 classid 1:30 htb rate 200mbit ceil 400mbit burst 15k cburst 15k
# Add fq_codel with adjusted parameters for streaming
tc qdisc add dev $DOCKER_IF parent 1:10 handle 10: fq_codel \
target 15ms interval 100ms flows 1024 quantum 1514 \
memory_limit 32Mb ecn
# Add fq_codel for default traffic
tc qdisc add dev $DOCKER_IF parent 1:30 handle 30: fq_codel \
target 5ms interval 100ms flows 1024 quantum 1514 \
memory_limit 32Mb ecn
# Add filters for Jellyfin traffic
tc filter add dev $DOCKER_IF protocol ip parent 1: prio 1 u32 \
match ip dport 8096 0xffff flowid 1:10
tc filter add dev $DOCKER_IF protocol ip parent 1: prio 1 u32 \
match ip sport 8096 0xffff flowid 1:10
# Add additional filter for HTTPS traffic (since Jellyfin might be behind reverse proxy)
tc filter add dev $DOCKER_IF protocol ip parent 1: prio 2 u32 \
match ip dport 443 0xffff flowid 1:10
tc filter add dev $DOCKER_IF protocol ip parent 1: prio 2 u32 \
match ip sport 443 0xffff flowid 1:10
echo -e "\nVerifying configuration:"
tc -s qdisc show dev $DOCKER_IF
echo -e "\nClass configuration:"
tc -s class show dev $DOCKER_IF
echo -e "\nFilter configuration:"
tc -s filter show dev $DOCKER_IF
Make it executable and create a systemd service:
chmod +x /usr/local/bin/optimize-video-stream.sh
cat > /etc/systemd/system/video-tc.service << EOF
[Unit]
Description=Video Streaming Traffic Control
After=docker.service
Requires=docker.service
[Service]
Type=oneshot
ExecStart=/usr/local/bin/optimize-video-stream.sh
RemainAfterExit=yes
[Install]
WantedBy=multi-user.target
EOF
systemctl enable video-tc
systemctl start video-tc
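You can then confirm the unit ran cleanly and that the shaping rules come back after a reboot:
# Check that the oneshot unit ran without errors
systemctl status video-tc --no-pager
journalctl -u video-tc --no-pager -n 30
# After a reboot, confirm the HTB classes are still present
DOCKER_IF=$(ip -br link show | grep 'br-' | grep 'UP' | head -n1 | awk '{print $1}')
tc class show dev "$DOCKER_IF"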
docker network ls
This will show all your Docker networks. We’re specifically looking for “bridge” type networks that are in use.
Let’s also check your Docker network status:
- View running containers and their networks:
docker ps --format '{{.Names}}: {{.Networks}}'
- Inspect the network mode of your containers:
docker inspect -f '{{.HostConfig.NetworkMode}}' your_container_name
If you’re still getting the “No active Docker bridge interface found” error after running the modified script, please share the output of these commands:
# Show all network interfaces
ip -br link show
# List Docker networks
docker network ls
# Show Docker network details
docker network inspect bridge
This will help us understand your exact network configuration and adjust the script accordingly. Remember that Docker networking can be configured in various ways (bridge, host, overlay, etc.), and we need to ensure we’re targeting the correct interface for your setup.
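If the automatic detection keeps failing, you can target a specific Docker network directly: bridge interfaces are named br- followed by the first 12 characters of the network ID, so the interface name can be derived from the network itself (shown here for a network called pangolin; substitute your own):
# Derive the bridge interface name from a specific Docker network
NET_ID=$(docker network inspect pangolin --format '{{.Id}}')
DOCKER_IF="br-${NET_ID:0:12}"
echo "Using interface: $DOCKER_IF"
ip -br link show "$DOCKER_IF"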
After the Script Runs Successfully
The configuration is now successfully applied to the correct interface (br-1af21cab23d1, your Docker bridge). Let’s break down what we’re seeing and what it means for your video streaming performance:
- Root Qdisc Configuration
qdisc htb 1: root refcnt 2 r2q 1000 default 0x30
This shows our root queueing discipline is properly set up with the higher r2q value of 1000, which resolved our previous quantum warnings. The default class (0x30) will handle any traffic that doesn’t match our video streaming filters.
- Class Structure
We have three classes configured:
- Root class (1:1) with 1Gbit total bandwidth
- Video streaming class (1:10) with 800Mbit priority bandwidth
- Default class (1:30) with 200Mbit standard bandwidth
The burst parameters (around 15K) are now properly sized for video streaming, which helps prevent stuttering during sudden bitrate changes in your video stream.
- FQ_CoDel Configuration
qdisc fq_codel 10: ... target 15ms interval 100ms
qdisc fq_codel 30: ... target 5ms interval 100ms
The video class (10:) has a higher target time (15ms vs. 5ms), which is better for video streaming because it allows more buffering while still preventing bufferbloat.
- Filter Configuration
The filters show that we’re correctly matching both Jellyfin’s default port (8096) and HTTPS traffic (443):
match 00001fa0/0000ffff at 20 # Port 8096
match 000001bb/0000ffff at 20 # Port 443
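If your Jellyfin instance listens on a different port, you can compute the hex value for a new u32 match the same way (8096 = 0x1fa0, 443 = 0x01bb):
# Print the hex value of a TCP port for a u32 match
printf 'port %d = 0x%04x\n' 8096 8096
printf 'port %d = 0x%04x\n' 8920 8920   # Jellyfin's default HTTPS port, for example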
To verify this is working effectively with your video streaming, you can monitor the traffic in real-time while streaming:
watch -n 1 'tc -s class show dev br-1af21cab23d1'
For more detailed monitoring, create this script:
#!/bin/bash
# /usr/local/bin/monitor-video-tc.sh
INTERFACE="br-1af21cab23d1"
while true; do
    clear
    echo "=== Video Streaming Traffic Statistics ==="
    echo "Time: $(date)"
    echo
    echo "Class Statistics:"
    tc -s class show dev $INTERFACE | grep -A 2 "class htb 1:"
    echo
    echo "Queue Statistics:"
    tc -s qdisc show dev $INTERFACE | grep -A 2 "qdisc fq_codel"
    sleep 2
done
Make it executable:
chmod +x /usr/local/bin/monitor-video-tc.sh
When streaming video, you should see the byte counts and packet statistics increasing in the 1:10 class, indicating that your video traffic is being properly prioritized.
If you notice any buffering or quality issues while streaming, we can further tune these parameters (see the example after this list):
- Increase burst sizes (currently at 15K)
- Adjust the FQ_CoDel target times
- Modify the bandwidth allocation ratios
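As an example of the first adjustment, assuming the same interface and class IDs as above, the burst can be raised on the fly without rebuilding the whole hierarchy:
# Raise burst/cburst on the video class (1:10) from 15k to 30k
DOCKER_IF="br-1af21cab23d1"   # replace with your bridge interface
tc class change dev $DOCKER_IF parent 1:1 classid 1:10 htb \
    rate 800mbit ceil 1000mbit burst 30k cburst 30k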