Optimizing VPS for High-Performance Jellyfin Streaming: A Complete Technical Guide for Pangolin Setup

Do not try this on a live system or your home VM. Fire up a test VPS and experiment there first.

When streaming 1080p content from a home server through a VPS running Pangolin, performance optimization becomes crucial for a smooth viewing experience. This guide explains how to configure your VPS to handle high-quality video streams efficiently, focusing on network optimization, traffic prioritization, and system-level tweaks.

Understanding the Challenge

Streaming video through a VPS introduces additional complexity compared to direct streaming. The data flow looks like this:

Home Server → VPS → Client Device

This path requires careful optimization at each step to maintain video quality and minimize buffering. Our solution addresses three key areas: network stack optimization, traffic prioritization, and application-level configuration.

Network Stack Optimization

First, let’s optimize the TCP stack by creating a new configuration file at /etc/sysctl.d/99-jellyfin-stream.conf:

# TCP Buffer Optimization
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 87380 16777216

These settings increase the TCP buffer sizes to handle high-bandwidth video streams. The maximum buffer size of 16MB (16777216 bytes) allows for smoother streaming by reducing the likelihood of buffer underruns.
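A quick way to sanity-check the 16 MB figure is the bandwidth-delay product: a single TCP connection can sustain at most roughly buffer / RTT. A sketch with shell arithmetic, using an assumed (not measured) 100 ms round trip:

```shell
#!/bin/sh
# Bandwidth-delay product sketch; the RTT here is an illustrative
# assumption, not a measurement from this guide
BUFFER_BYTES=16777216   # 16 MB, the sysctl maximum above
RTT_MS=100              # assumed round-trip time to a remote client

# Sustainable rate in Mbit/s = bytes * 8 / (RTT_ms * 1000)
echo $(( BUFFER_BYTES * 8 / (RTT_MS * 1000) ))   # prints 1342
```

That is roughly 1.3 Gbit/s of per-connection headroom, far above what a 1080p stream needs, so the buffer cap should not become the bottleneck even on high-latency paths.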

We also enable BBR congestion control:

net.core.default_qdisc = fq
net.ipv4.tcp_congestion_control = bbr

BBR (Bottleneck Bandwidth and Round-trip time) is particularly effective for video streaming as it maintains high throughput and low latency even on congested networks.
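Before relying on these settings, it is worth confirming the running kernel actually offers BBR (on most modern distros it ships as a module). A read-only check that needs no root:

```shell
# List the congestion control algorithms the kernel currently offers,
# and the one in use
cat /proc/sys/net/ipv4/tcp_available_congestion_control
cat /proc/sys/net/ipv4/tcp_congestion_control

# If bbr is missing from the first list, try loading the module
# first (requires root): modprobe tcp_bbr
```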

Traffic Prioritization

To ensure video streams get priority over other traffic, we implement a sophisticated traffic control system. Here’s our traffic shaping script:

#!/bin/bash

# Find the active Docker bridge interface (UP state only)
DOCKER_IF=$(ip -br link show | grep 'br-' | grep 'UP' | head -n1 | awk '{print $1}')

if [ -z "$DOCKER_IF" ]; then
    echo "No active Docker bridge interface found!"
    exit 1
fi

echo "Found active Docker bridge interface: $DOCKER_IF"

# Clear existing qdiscs
tc qdisc del dev $DOCKER_IF root 2>/dev/null

# Add root qdisc with adjusted r2q
tc qdisc add dev $DOCKER_IF root handle 1: htb default 30 r2q 1000

# Add main class with adjusted burst
tc class add dev $DOCKER_IF parent 1: classid 1:1 htb rate 1000mbit ceil 1000mbit burst 15k cburst 15k

# Add video streaming class with adjusted burst
tc class add dev $DOCKER_IF parent 1:1 classid 1:10 htb rate 800mbit ceil 1000mbit burst 15k cburst 15k

# Add default class with adjusted burst
tc class add dev $DOCKER_IF parent 1:1 classid 1:30 htb rate 200mbit ceil 400mbit burst 15k cburst 15k

# Add fq_codel with adjusted parameters for streaming
tc qdisc add dev $DOCKER_IF parent 1:10 handle 10: fq_codel \
    target 15ms interval 100ms flows 1024 quantum 1514 \
    memory_limit 32Mb ecn

# Add fq_codel for default traffic
tc qdisc add dev $DOCKER_IF parent 1:30 handle 30: fq_codel \
    target 5ms interval 100ms flows 1024 quantum 1514 \
    memory_limit 32Mb ecn

# Add filters for Jellyfin traffic
tc filter add dev $DOCKER_IF protocol ip parent 1: prio 1 u32 \
    match ip dport 8096 0xffff flowid 1:10
tc filter add dev $DOCKER_IF protocol ip parent 1: prio 1 u32 \
    match ip sport 8096 0xffff flowid 1:10

# Add additional filter for HTTPS traffic (since Jellyfin might be behind reverse proxy)
tc filter add dev $DOCKER_IF protocol ip parent 1: prio 2 u32 \
    match ip dport 443 0xffff flowid 1:10
tc filter add dev $DOCKER_IF protocol ip parent 1: prio 2 u32 \
    match ip sport 443 0xffff flowid 1:10

echo -e "\nVerifying configuration:"
tc -s qdisc show dev $DOCKER_IF
echo -e "\nClass configuration:"
tc -s class show dev $DOCKER_IF
echo -e "\nFilter configuration:"
tc -s filter show dev $DOCKER_IF

This script creates two traffic classes: one for video streaming and another for remaining traffic. The fq_codel algorithm helps manage latency and ensures fair queueing within each class.
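The interface-detection one-liner at the top of the script is the piece most likely to need adjusting on your system. Fed with sample `ip -br link` output (the second bridge name below is made up for illustration), the pipeline picks the first bridge that is UP:

```shell
#!/bin/sh
# Simulate `ip -br link show` output and run the script's selection
# pipeline over it; only the first UP br- interface survives
printf '%s\n' \
    'lo               UNKNOWN  00:00:00:00:00:00' \
    'eth0             UP       52:54:00:12:34:56' \
    'br-1af21cab23d1  UP       02:42:ac:12:00:01' \
    'br-9f00aa11bb22  DOWN     02:42:ac:13:00:01' \
    | grep 'br-' | grep 'UP' | head -n1 | awk '{print $1}'
# prints: br-1af21cab23d1
```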

Traefik Configuration

Traefik needs special configuration to handle video streams efficiently. Update your traefik_config.yml:

entryPoints:
  websecure:
    address: ":443"
    transport:
      respondingTimeouts:
        readTimeout: "12h"
        writeTimeout: "12h"
    http:
      middlewares:
        - buffering@file
      tls:
        options: default

The long timeout values prevent connection drops during extended viewing sessions, while the buffering middleware helps manage large media chunks. Note that Traefik only accepts middleware (and TLS option) definitions in its dynamic configuration, so the following block belongs in dynamic_config.yml, and the entry point must reference the middleware with the provider suffix, i.e. buffering@file:

http:
  middlewares:
    buffering:
      buffering:
        maxRequestBodyBytes: 104857600  # 100MB
        memRequestBodyBytes: 52428800   # 50MB
        maxResponseBodyBytes: 104857600 # 100MB

tls:
  options:
    default:
      minVersion: "VersionTLS12"

Docker Configuration

Optimize your Docker containers by updating docker-compose.yml:

services:
  gerbil:
    sysctls:
      - net.core.rmem_max=16777216
      - net.core.wmem_max=16777216
    deploy:
      resources:
        limits:
          cpus: '4'
          memory: 4G
        reservations:
          cpus: '2'
          memory: 2G

These settings ensure your containers have adequate resources for handling video streams and maintain optimal network buffer sizes.

Monitoring and Maintenance

To monitor your optimized setup, use these commands:

# Monitor traffic shaping (replace br-* with your actual bridge
# interface name; tc does not expand wildcards)
watch -n 1 'tc -s class show dev br-*'

# Check network throughput (again, substitute the real interface name)
iftop -i br-*

# View TCP buffer usage
cat /proc/net/sockstat

Regular monitoring helps identify potential bottlenecks and verify that your optimizations are working as intended.

Expected Results

After implementing these optimizations, you should notice:

  • Reduced buffering during playback
  • More stable video quality
  • Better handling of network congestion
  • Smoother streaming experience, especially during peak usage

Conclusion

By implementing these optimizations, your VPS becomes better equipped to handle 1080p video streaming through Jellyfin. The combination of network stack optimization, traffic prioritization, and application-level configuration creates a robust foundation for high-quality video streaming.

Remember to test thoroughly after implementing these changes and adjust values based on your specific needs and network conditions. While these settings provide a solid starting point, you may need to fine-tune them based on your particular use case and hardware capabilities.

Recap

Full rundown

VPS configuration specifically for streaming 1080p Jellyfin content from your home server.

Traefik configuration to better handle video streams. In your traefik_config.yml:

# Optimize for large media file transfers
entryPoints:
  websecure:
    address: ":443"
    transport:
      respondingTimeouts:
        readTimeout: "12h"        # Long timeout for extended viewing sessions
        writeTimeout: "12h"       # Matches read timeout
      forwardingTimeouts:
        dialTimeout: "30s"
        responseHeaderTimeout: "30s"
    http:
      middlewares:
        - buffering@file
      tls:
        options: default

# Add a buffering middleware for smoother playback. This block (and the
# TLS options) must go in the dynamic configuration file,
# dynamic_config.yml -- Traefik rejects these definitions in the static
# config. Reference the middleware from the entry point as buffering@file.
http:
  middlewares:
    buffering:
      buffering:
        maxRequestBodyBytes: 104857600  # 100MB for large media chunks
        memRequestBodyBytes: 52428800   # 50MB in-memory buffer
        maxResponseBodyBytes: 104857600 # 100MB response size
        retryExpression: "IsNetworkError() && Attempts() < 3"

tls:
  options:
    default:
      minVersion: "VersionTLS12"
      cipherSuites:
        - "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" # Fast encryption for streaming
        - "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"   # Backup cipher

A system-level optimization file specifically for media streaming. Create /etc/sysctl.d/99-jellyfin-stream.conf:

# TCP optimizations for high-bandwidth video streaming
net.core.rmem_max = 16777216          # Increase max receive buffer for smoother streaming
net.core.wmem_max = 16777216          # Increase max send buffer
net.ipv4.tcp_rmem = 4096 87380 16777216  # Min, default, and max receive buffer
net.ipv4.tcp_wmem = 4096 87380 16777216  # Min, default, and max send buffer

# Enable BBR for better throughput
net.core.default_qdisc = fq
net.ipv4.tcp_congestion_control = bbr

# Optimize for streaming workload
net.ipv4.tcp_slow_start_after_idle = 0  # Maintain window size
net.ipv4.tcp_mtu_probing = 1            # Automatically detect optimal MTU
net.core.netdev_max_backlog = 65536     # Handle more packets in queue
net.ipv4.tcp_window_scaling = 1         # Enable window scaling

Optimize your Docker setup for media streaming. Modify your docker-compose.yml:

services:
  gerbil:
    # Add these network optimizations for the proxy
    sysctls:
      - net.core.rmem_max=16777216
      - net.core.wmem_max=16777216
      - net.ipv4.tcp_rmem=4096 87380 16777216
      - net.ipv4.tcp_wmem=4096 87380 16777216
    deploy:
      resources:
        limits:
          cpus: '4'           # Allocate more CPU for transcoding if needed
          memory: 4G          # More memory for buffering
        reservations:
          cpus: '2'           # Guarantee minimum CPU
          memory: 2G          # Guarantee minimum memory
    networks:
      pangolin:
        priority: 1           # Give network priority to this container

networks:
  pangolin:
    driver: bridge
    driver_opts:
      com.docker.network.driver.mtu: 1500  # Optimize MTU for internet streaming
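For reference, a 1500-byte MTU leaves the standard Ethernet-sized TCP payload once the 20-byte IP and 20-byte TCP headers are subtracted. If your streams also traverse a WireGuard tunnel (as with Pangolin's newt/gerbil path), keep in mind that wg-quick defaults to a 1420 MTU inside the tunnel (1500 minus 80 bytes of worst-case encapsulation overhead):

```shell
#!/bin/sh
# MSS = MTU - 20-byte IP header - 20-byte TCP header (no options)
echo $(( 1500 - 20 - 20 ))   # prints 1460

# Typical usable MTU inside a WireGuard tunnel (wg-quick default)
echo $(( 1500 - 80 ))        # prints 1420
```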

A traffic shaping script to prioritize video streams. Create /usr/local/bin/optimize-video-stream.sh:

#!/bin/bash

# Find the active Docker bridge interface (UP state only)
DOCKER_IF=$(ip -br link show | grep 'br-' | grep 'UP' | head -n1 | awk '{print $1}')

if [ -z "$DOCKER_IF" ]; then
    echo "No active Docker bridge interface found!"
    exit 1
fi

echo "Found active Docker bridge interface: $DOCKER_IF"

# Clear existing qdiscs
tc qdisc del dev $DOCKER_IF root 2>/dev/null

# Add root qdisc with adjusted r2q
tc qdisc add dev $DOCKER_IF root handle 1: htb default 30 r2q 1000

# Add main class with adjusted burst
tc class add dev $DOCKER_IF parent 1: classid 1:1 htb rate 1000mbit ceil 1000mbit burst 15k cburst 15k

# Add video streaming class with adjusted burst
tc class add dev $DOCKER_IF parent 1:1 classid 1:10 htb rate 800mbit ceil 1000mbit burst 15k cburst 15k

# Add default class with adjusted burst
tc class add dev $DOCKER_IF parent 1:1 classid 1:30 htb rate 200mbit ceil 400mbit burst 15k cburst 15k

# Add fq_codel with adjusted parameters for streaming
tc qdisc add dev $DOCKER_IF parent 1:10 handle 10: fq_codel \
    target 15ms interval 100ms flows 1024 quantum 1514 \
    memory_limit 32Mb ecn

# Add fq_codel for default traffic
tc qdisc add dev $DOCKER_IF parent 1:30 handle 30: fq_codel \
    target 5ms interval 100ms flows 1024 quantum 1514 \
    memory_limit 32Mb ecn

# Add filters for Jellyfin traffic
tc filter add dev $DOCKER_IF protocol ip parent 1: prio 1 u32 \
    match ip dport 8096 0xffff flowid 1:10
tc filter add dev $DOCKER_IF protocol ip parent 1: prio 1 u32 \
    match ip sport 8096 0xffff flowid 1:10

# Add additional filter for HTTPS traffic (since Jellyfin might be behind reverse proxy)
tc filter add dev $DOCKER_IF protocol ip parent 1: prio 2 u32 \
    match ip dport 443 0xffff flowid 1:10
tc filter add dev $DOCKER_IF protocol ip parent 1: prio 2 u32 \
    match ip sport 443 0xffff flowid 1:10

echo -e "\nVerifying configuration:"
tc -s qdisc show dev $DOCKER_IF
echo -e "\nClass configuration:"
tc -s class show dev $DOCKER_IF
echo -e "\nFilter configuration:"
tc -s filter show dev $DOCKER_IF

Make it executable and create a systemd service:

chmod +x /usr/local/bin/optimize-video-stream.sh

cat > /etc/systemd/system/video-tc.service << EOF
[Unit]
Description=Video Streaming Traffic Control
After=docker.service
Requires=docker.service

[Service]
Type=oneshot
ExecStart=/usr/local/bin/optimize-video-stream.sh
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
EOF

systemctl enable video-tc
systemctl start video-tc

To find the bridge interface the script targets, list your Docker networks:

docker network ls

This will show all your Docker networks. We’re specifically looking for “bridge” type networks that are in use. Let’s also check your Docker network status:

  1. View running containers and their networks:
docker ps --format '{{.Names}}: {{.Networks}}'
  2. Inspect the network mode of your containers:
docker inspect -f '{{.HostConfig.NetworkMode}}' your_container_name

If you’re still getting the “No active Docker bridge interface found” error after running the modified script, please share the output of these commands:

# Show all network interfaces
ip -br link show

# List Docker networks
docker network ls

# Show Docker network details
docker network inspect bridge

This will help us understand your exact network configuration and adjust the script accordingly. Remember that Docker networking can be configured in various ways (bridge, host, overlay, etc.), and we need to ensure we’re targeting the correct interface for your setup.

After the Script Runs Successfully

The configuration is now successfully applied to the correct interface (in this example, the Docker bridge br-1af21cab23d1). Let’s break down what we’re seeing and what it means for your video streaming performance:

  1. Root Qdisc Configuration
qdisc htb 1: root refcnt 2 r2q 1000 default 0x30

This shows our root queueing discipline is properly set up with the higher r2q value of 1000, which resolved our previous quantum warnings. The default class (0x30) will handle any traffic that doesn’t match our video streaming filters.
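The r2q value matters because HTB derives each class's quantum (the bytes a class may send per round) as rate / r2q, and the kernel warns when the computed quantum falls outside its sane range. A quick check of what the 800 Mbit class gets:

```shell
#!/bin/sh
# HTB quantum = class rate in bytes/sec / r2q
# 800 Mbit/s = 100,000,000 bytes/sec
echo $(( 100000000 / 10 ))     # default r2q=10 -> 10000000 (triggers warnings)
echo $(( 100000000 / 1000 ))   # r2q=1000      -> 100000   (accepted quietly)
```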

  2. Class Structure
    We have three classes configured:
  • Root class (1:1) with 1Gbit total bandwidth
  • Video streaming class (1:10) with 800Mbit priority bandwidth
  • Default class (1:30) with 200Mbit standard bandwidth

The burst parameters (around 15K) are now properly sized for video streaming, which helps prevent stuttering during sudden bitrate changes in your video stream.

  3. FQ_CoDel Configuration
qdisc fq_codel 10: ... target 15ms interval 100ms
qdisc fq_codel 30: ... target 5ms interval 100ms

The video class (10:) has a higher target time (15ms vs 5ms) which is better for video streaming as it allows for more buffering while still preventing bufferbloat.

  4. Filter Configuration
    The filters show that we’re correctly matching both Jellyfin’s default port (8096) and HTTPS traffic (443):
match 00001fa0/0000ffff at 20  # Port 8096
match 000001bb/0000ffff at 20  # Port 443
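Those match values are simply the port numbers in hexadecimal; you can confirm the mapping with printf:

```shell
#!/bin/sh
# Port numbers in hex, as they appear in the u32 match output
printf '%04x\n' 8096   # prints 1fa0
printf '%04x\n' 443    # prints 01bb
```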

To verify this is working effectively with your video streaming, you can monitor the traffic in real-time while streaming:

watch -n 1 'tc -s class show dev br-1af21cab23d1'

For more detailed monitoring, create this script:

#!/bin/bash
# /usr/local/bin/monitor-video-tc.sh

INTERFACE="br-1af21cab23d1"   # replace with your bridge interface name

while true; do
    clear
    echo "=== Video Streaming Traffic Statistics ==="
    echo "Time: $(date)"
    echo
    echo "Class Statistics:"
    tc -s class show dev $INTERFACE | grep -A 2 "class htb 1:"
    echo
    echo "Queue Statistics:"
    tc -s qdisc show dev $INTERFACE | grep -A 2 "qdisc fq_codel"
    sleep 2
done

Make it executable:

chmod +x /usr/local/bin/monitor-video-tc.sh

When streaming video, you should see the byte counts and packet statistics increasing in the 1:10 class, indicating that your video traffic is being properly prioritized.

If you notice any buffering or quality issues while streaming, we can further tune these parameters:

  1. Increase burst sizes (currently at 15K)
  2. Adjust the FQ_CoDel target times
  3. Modify the bandwidth allocation ratios

Great guide! Thank you!
What steps would I have to adjust to make this work for Plex instead?


Any streaming service/platform will work, but I have only tested with Jellyfin.

Wow! This is great. I’ll be testing this during the weekend.

Quick question, these performance optimizations are for the VPS, but are there any changes or considerations to be had on the home server where jellyfin is installed?


Is this guide still relevant? Getting errors from traefik:

{"level":"error","error":"command traefik error: invalid node options: string","time":"2025-04-01T10:00:16Z","message":"Command error"}

{"level":"error","error":"command traefik error: invalid node options: string","time":"2025-04-01T10:00:17Z","message":"Command error"}

{"level":"error","error":"command traefik error: invalid node options: string","time":"2025-04-01T10:00:18Z","message":"Command error"}

{"level":"error","error":"command traefik error: invalid node options: string","time":"2025-04-01T10:00:20Z","message":"Command error"}

{"level":"error","error":"command traefik error: invalid node options: string","time":"2025-04-01T10:00:24Z","message":"Command error"}

{"level":"error","error":"command traefik error: invalid node options: string","time":"2025-04-01T10:00:31Z","message":"Command error"}

{"level":"error","error":"command traefik error: invalid node options: string","time":"2025-04-01T10:00:44Z","message":"Command error"} 

Here’s the adjusted traefik_config.yaml

accessLog:
  bufferingSize: 100
  fields:
    defaultMode: drop
    headers:
      defaultMode: drop
      names:
        Authorization: redact
        Content-Type: keep
        Cookie: redact
        User-Agent: keep
        X-Forwarded-For: keep
        X-Forwarded-Proto: keep
        X-Real-Ip: keep
    names:
      ClientAddr: keep
      ClientHost: keep
      DownstreamContentSize: keep
      DownstreamStatus: keep
      Duration: keep
      RequestMethod: keep
      RequestPath: keep
      RequestProtocol: keep
      RetryAttempts: keep
      ServiceName: keep
      StartUTC: keep
      TLSCipher: keep
      TLSVersion: keep
  filePath: /var/log/traefik/access.log
  filters:
    minDuration: 100ms
    retryAttempts: true
    statusCodes:
      - 200-299
      - 400-499
      - 500-599
  format: json
api:
  dashboard: true
  insecure: true
certificatesResolvers:
  letsencrypt:
    acme:
      caServer: https://acme-v02.api.letsencrypt.org/directory
      email: redacted
      httpChallenge:
        entryPoint: web
      storage: /letsencrypt/acme.json
entryPoints:
  web:
    address: :80
  websecure:
    address: :443
    http:
      middlewares:
        - crowdsec@file
        - buffering
      tls:
        certResolver: letsencrypt
        options:
          default:
            minVersion: "VersionTLS12"
            cipherSuites:
              - "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"
              - "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"
    transport:
      respondingTimeouts:
        readTimeout: "12h"
        writeTimeout: "12h"
      forwardingTimeouts:
        dialTimeout: "30s"
        responseHeaderTimeout: "30s"
  tcp-20800:
    address: ":20800/tcp"
  tcp-19800:
    address: ":19800/tcp"
  udp-19800:
    address: ":19800/udp"
experimental:
  plugins:
    badger:
      moduleName: github.com/fosrl/badger
      version: v1.0.0
    crowdsec:
      moduleName: github.com/maxlerebourg/crowdsec-bouncer-traefik-plugin
      version: v1.3.5
log:
  format: json
  level: INFO
providers:
  file:
    filename: /etc/traefik/dynamic_config.yml
  http:
    endpoint: http://pangolin:3001/api/v1/traefik-config
    pollInterval: 5s
serversTransport:
  insecureSkipVerify: true
http:
  middlewares:
    buffering:
      buffering:
        maxRequestBodyBytes: 104857600
        memRequestBodyBytes: 52428800
        maxResponseBodyBytes: 104857600
        retryExpression: "IsNetworkError() && Attempts() < 3"

(sorry for spam, I couldn’t edit messages)


For which services and which middleware are you getting this error?
It all depends on your service and deployment. You will have to give the whole picture, or ping me on the Discord off-topic channel.

What stats would you suggest for a VPS using this method? I assume the standard 1 core, 1 GB would not suffice. I haven’t decided to run Jellyfin through my vps yet, but would like to be able to if I decide later on.


To be honest, 1 GB RAM and 1 vCPU is too little for a streaming relay. I would want a minimum of 3 GB RAM and 2 vCPUs.

Another question. I’m looking at a vps with 3 vCPU cores, 4.5GB RAM and 8500GB Monthly Transfer. I know the CPU and RAM are fine. But was curious about the bandwidth. It doesn’t seem like much if streaming Jellyfin or Plex.

What are people’s experience with bandwidth through their VPS?


For the estimate below, I have assumed 12 streams, 8 hours/day, every day.

Factors Affecting Data Usage:

  1. Bitrate: This is the most crucial factor. It’s the amount of data used per second to represent the video. Higher quality generally means a higher bitrate. Streaming services often use variable bitrates, and the actual bitrate of your files in Plex/Jellyfin can vary widely.
  2. Codec: Modern codecs like H.265 (HEVC) are more efficient than older ones like H.264, meaning they can deliver similar quality at a lower bitrate, thus using less data.
  3. Direct Play vs. Transcoding:
    • Direct Play: Streams the file as-is. Bandwidth usage equals the file’s original bitrate.
    • Transcoding: The server converts the file on the fly (e.g., to a lower resolution or bitrate compatible with the client device or bandwidth limits). This uses CPU power on the server but can reduce bandwidth consumption significantly if you set lower quality limits for remote streams.
  4. Actual Usage: Your calculation assumes constant usage (12 streams, 8 hours/day, every day). Real-world usage might fluctuate.

Data Usage Calculation:

We need to estimate the data usage per hour for a single 1080p stream.

  • Lower Estimate (typical of streaming services such as Netflix or YouTube): around 5 Mbps. This translates to roughly 2.25 GB per hour per stream.
  • Higher Estimate (higher-quality H.264): around 8 Mbps. This translates to roughly 3.6 GB per hour per stream (8 Mbps / 8 bits per byte * 3600 seconds per hour / 1000 MB per GB).
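Both per-hour figures come from the same conversion, which collapses to a handy constant in decimal units: GB/hour ≈ Mbps × 0.45 (that is, Mbps / 8 × 3600 / 1000):

```shell
#!/bin/sh
# GB per hour per stream = Mbps / 8 bits-per-byte * 3600 s / 1000 MB-per-GB
awk 'BEGIN { printf "%.2f\n", 5 * 0.45 }'   # 5 Mbps -> 2.25 GB/h
awk 'BEGIN { printf "%.2f\n", 8 * 0.45 }'   # 8 Mbps -> 3.60 GB/h
```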

Let’s calculate the total monthly usage based on your parameters (12 streams, 8 hours/day):

Scenario 1: Using Lower Estimate (2.25 GB/hour per stream)

  1. Data per hour (all streams): 12 streams * 2.25 GB/hour/stream = 27 GB/hour
  2. Data per day: 27 GB/hour * 8 hours/day = 216 GB/day
  3. Data per month (approx. 30.4 days): 216 GB/day * 30.4 days/month ≈ 6566 GB/month

Scenario 2: Using Higher Estimate (3.6 GB/hour per stream)

  1. Data per hour (all streams): 12 streams * 3.6 GB/hour/stream = 43.2 GB/hour
  2. Data per day: 43.2 GB/hour * 8 hours/day = 345.6 GB/day
  3. Data per month (approx. 30.4 days): 345.6 GB/day * 30.4 days/month ≈ 10506 GB/month
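The two monthly totals are easy to cross-check in one line each:

```shell
#!/bin/sh
# streams * GB/hour/stream * hours/day * days/month
awk 'BEGIN { printf "%.1f\n", 12 * 2.25 * 8 * 30.4 }'   # scenario 1: 6566.4
awk 'BEGIN { printf "%.1f\n", 12 * 3.6  * 8 * 30.4 }'   # scenario 2: 10506.2
```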

Conclusion:

  • Based on the lower estimate (5 Mbps / 2.25 GB/hr per stream), your projected usage is around 6566 GB/month. This is comfortably within your 8500 GB allowance. This scenario is likely if you are streaming content similar in bitrate to major streaming services or if Plex/Jellyfin often transcodes streams to a lower quality.
  • Based on the higher estimate (8 Mbps / 3.6 GB/hr per stream), your projected usage is around 10506 GB/month. This exceeds your 8500 GB allowance. This scenario is more likely if you are Direct Playing higher-quality 1080p files (e.g., Blu-ray rips with less compression) frequently.

Recommendation:

Your 8500 GB monthly transfer might be sufficient, but it’s cutting it close if your average stream bitrate is high or usage is consistently at the peak you described.

  • Monitor Usage: Check if your VPS provider offers bandwidth monitoring tools. Track your actual usage for a month.
  • Consider File Quality: If you primarily store very high-bitrate 1080p files, you are more likely to exceed the limit with Direct Play.
  • Utilize Transcoding Settings: Configure Plex/Jellyfin to limit remote stream bitrates if necessary. This will use more CPU but save bandwidth.
  • Codec Efficiency: Using H.265/HEVC encoded files where possible will significantly reduce data usage compared to H.264 for the same perceived quality.
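To make the codec point concrete: HEVC is commonly quoted as needing roughly half the H.264 bitrate for comparable quality, so preferring HEVC sources roughly halves the monthly totals above. The 50% figure is a widely cited ballpark, not a guarantee:

```shell
#!/bin/sh
# Halving the higher-estimate monthly total from scenario 2
awk 'BEGIN { printf "%.0f\n", 10506 * 0.5 }'   # prints 5253
```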

Bandwidth bash script (just an estimate):

#!/bin/bash

# ==============================================================================
# Bandwidth Usage Calculator
#
# Description: Estimates daily and monthly data usage based on streaming parameters.
# Usage: ./bandwidth_calculator.sh [streams] [hours_per_day] [bitrate_mbps]
#        If arguments are omitted, it uses default values from the example.
# Requires: bc (for floating-point arithmetic)
# ==============================================================================

# --- Configuration ---

# Default values (from the lower estimate in the previous example)
DEFAULT_STREAMS=12
DEFAULT_HOURS_PER_DAY=8
DEFAULT_BITRATE_MBPS=5 # Corresponds to ~2.25 GB/hour/stream

# Average number of days in a month
DAYS_PER_MONTH=30.4

# --- Input Handling ---

# Use command-line arguments if provided, otherwise use defaults
NUM_STREAMS=${1:-$DEFAULT_STREAMS}
HOURS_PER_DAY=${2:-$DEFAULT_HOURS_PER_DAY}
BITRATE_MBPS=${3:-$DEFAULT_BITRATE_MBPS}

# Input validation (basic check if numbers)
if ! [[ "$NUM_STREAMS" =~ ^[0-9]+$ ]] || \
   ! [[ "$HOURS_PER_DAY" =~ ^[0-9]+(\.[0-9]+)?$ ]] || \
   ! [[ "$BITRATE_MBPS" =~ ^[0-9]+(\.[0-9]+)?$ ]]; then
  echo "Error: Invalid input. Please provide numeric values for streams, hours, and bitrate."
  echo "Usage: $0 [streams] [hours_per_day] [bitrate_mbps]"
  exit 1
fi

# Check if bc is installed
if ! command -v bc &> /dev/null; then
    echo "Error: 'bc' command not found. Please install bc (e.g., 'sudo apt install bc' or 'sudo yum install bc')."
    exit 1
fi

# --- Calculations ---

# Set bc scale for floating point precision (e.g., 2 decimal places)
BC_SCALE=2

# 1. Calculate GB per hour per stream (decimal units, matching the
#    estimates above):
#    Mbps * 3600 seconds/hour / 8 bits/byte / 1000 MB/GB = GB/hour
#    Multiplying before dividing avoids bc truncating 5/8 to 0.62
#    at the default scale.
gb_per_hour_per_stream=$(echo "scale=$BC_SCALE; $BITRATE_MBPS * 3600 / 8 / 1000" | bc)

# 2. Calculate total GB per hour (all streams)
total_gb_per_hour=$(echo "scale=$BC_SCALE; $gb_per_hour_per_stream * $NUM_STREAMS" | bc)

# 3. Calculate total GB per day
total_gb_per_day=$(echo "scale=$BC_SCALE; $total_gb_per_hour * $HOURS_PER_DAY" | bc)

# 4. Calculate total GB per month
total_gb_per_month=$(echo "scale=$BC_SCALE; $total_gb_per_day * $DAYS_PER_MONTH" | bc)

# --- Output ---

echo "--- Bandwidth Usage Estimation ---"
echo "Input Parameters:"
echo "  Concurrent Streams:   $NUM_STREAMS"
echo "  Hours per Day:        $HOURS_PER_DAY"
echo "  Bitrate per Stream:   $BITRATE_MBPS Mbps"
echo ""
echo "Calculated Usage:"
echo "  GB per Hour (1 stream): $gb_per_hour_per_stream GB"
echo "  GB per Hour (All streams):$total_gb_per_hour GB"
echo "  Estimated GB per Day:   $total_gb_per_day GB"
echo "  Estimated GB per Month: $total_gb_per_month GB"
echo "----------------------------------"

exit 0

How to Use:

  1. Save: Save the code above into a file named bandwidth_calculator.sh.
  2. Make Executable: Open your terminal and run chmod +x bandwidth_calculator.sh.
  3. Run:
  • With Defaults: ./bandwidth_calculator.sh (Uses 12 streams, 8 hours/day, 5 Mbps/stream)
  • With Custom Values: ./bandwidth_calculator.sh 12 8 8 (Calculates for 12 streams, 8 hours/day, 8 Mbps/stream)
  • Another Example: ./bandwidth_calculator.sh 5 4 10 (Calculates for 5 streams, 4 hours/day, 10 Mbps/stream)

The script will output the parameters used and the estimated daily and monthly data consumption in GB. Remember that this provides an estimate, and actual usage can vary based on the factors we discussed earlier (codec, exact bitrate, transcoding, etc.).

Are there any recommendations for streaming 4K? Such as different settings, etc.? Plex at least has the option to get around tunneling all traffic through the VPS by opening port 32400, but I’m not sure if Jellyfin has the same.


To be very honest, I have never tried to optimize 4K for a VPS deployment; I am still stuck on 1080p.
But you can definitely look into buffering and how your local transcoding works.

Should be?
ExecStart=/usr/local/bin/optimize-video-stream.sh


After adding buffering to traefik_config.yml getting this errors:

{"level":"error","entryPointName":"websecure","routerName":"2-router@http","error":"middleware \"buffering@http\" does not exist","time":"2025-06-12T22:06:44Z"}
{"level":"error","entryPointName":"websecure","routerName":"next-router@file","error":"middleware \"buffering@file\" does not exist","time":"2025-06-12T22:06:44Z"}
{"level":"error","entryPointName":"websecure","routerName":"api-router@file","error":"middleware \"buffering@file\" does not exist","time":"2025-06-12T22:06:44Z"}
{"level":"error","entryPointName":"websecure","routerName":"1-router@http","error":"middleware \"buffering@http\" does not exist","time":"2025-06-12T22:06:44Z"}
{"level":"error","entryPointName":"websecure","routerName":"4-router@http","error":"middleware \"buffering@http\" does not exist","time":"2025-06-12T22:06:44Z"}
{"level":"error","entryPointName":"websecure","routerName":"3-router@http","error":"middleware \"buffering@http\" does not exist","time":"2025-06-12T22:06:44Z"}
{"level":"error","entryPointName":"websecure","routerName":"5-router@http","error":"middleware \"buffering@http\" does not exist","time":"2025-06-12T22:06:44Z"}
{"level":"error","entryPointName":"websecure","routerName":"ws-router@file","error":"middleware \"buffering@file\" does not exist","time":"2025-06-12T22:06:44Z"}

How to solve this?


I got the same. Gave up eventually…

Edit: Figured it out and now I feel dumb!

My Middleware is defined in a separate file… dynamic_config.yml

I added the Buffering under http there, then added it in the traefik_config.yml as shown in the example, but as “buffering@file” instead of just buffering.

Hope that helps!
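To recap the fix: middleware definitions belong in the dynamic configuration; the static configuration only references them, with the provider suffix. A minimal sketch (file names as in the stock Pangolin layout):

```yaml
# dynamic_config.yml -- definitions go here (served by the file provider)
http:
  middlewares:
    buffering:
      buffering:
        maxRequestBodyBytes: 104857600
        memRequestBodyBytes: 52428800
        maxResponseBodyBytes: 104857600

# traefik_config.yml -- reference the middleware with its provider suffix
# entryPoints:
#   websecure:
#     http:
#       middlewares:
#         - buffering@file
```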
