Optimizing VPS for High-Performance Jellyfin Streaming: Technical Guide for Pangolin Setup

Do not try this on a live system or your home VM. Fire up a test VPS and test it there.



Warning: Do not apply these changes to a live production system or your home VM. Spin up a dedicated test VPS (e.g., Oracle Cloud Free Tier with 4 cores, 24GB RAM, and 4Gbps bandwidth) to validate performance before deployment. Test with real 1080p/4K streams from your home Jellyfin server to the VPS proxy, monitoring for buffering, throughput, and CPU/memory usage.

The data flow remains: Home Server → VPS (Pangolin Proxy) → Client. Focus areas: network stack, traffic shaping, Traefik/Docker tweaks, and monitoring.

Understanding the Challenge

Proxying Jellyfin through a VPS adds hops, risking latency and bufferbloat. The optimizations target TCP throughput (BBR), queue management (fq_codel), and resource allocation. Expected result on the spec above: 2+ simultaneous 4K@80Mbps streams without buffering.
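As a rough sanity check on the buffer sizes used below (a sketch assuming roughly 100 ms RTT between client and VPS), the bandwidth-delay product shows what a single TCP flow can carry with a 16 MB window:

# Bandwidth-delay product: data a single flow keeps in flight (assumed 100 ms RTT, 16 MB buffer)
RTT_S=0.1
BUF_BYTES=16777216
echo "scale=0; $BUF_BYTES / $RTT_S * 8 / 1000000" | bc   # ~1342 Mbps per flow
# An 80 Mbps 4K stream at 100 ms RTT only needs ~1 MB in flight, so 16 MB is ample headroom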

1. Network Stack Optimization (Sysctl)

Create /etc/sysctl.d/99-jellyfin-stream.conf with balanced settings for streaming (16MB buffers prevent underruns without excessive RAM use; BBR maximizes throughput on variable links). These values are typical of high-load video servers.

# TCP Buffers: Scaled for 4Gbps streaming (min/default/max)
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.rmem_default = 8388608
net.core.wmem_default = 8388608
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 87380 16777216

# Congestion Control: BBR for high-throughput, low-latency video
net.core.default_qdisc = fq
net.ipv4.tcp_congestion_control = bbr

# Low-Latency TCP: Faster recovery and probing
net.ipv4.tcp_mtu_probing = 1
net.ipv4.tcp_slow_start_after_idle = 0
net.ipv4.tcp_no_metrics_save = 1
net.ipv4.tcp_fastopen = 3
net.ipv4.tcp_syn_retries = 2
net.ipv4.tcp_synack_retries = 2
net.ipv4.tcp_delack_min = 1
net.ipv4.tcp_frto = 2
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_sack = 1
net.ipv4.tcp_fack = 1
net.ipv4.tcp_dsack = 1

# Connection Management: Higher limits for concurrent streams
net.ipv4.tcp_keepalive_time = 120
net.ipv4.tcp_keepalive_intvl = 15
net.ipv4.tcp_keepalive_probes = 3
net.netfilter.nf_conntrack_max = 1048576
net.netfilter.nf_conntrack_buckets = 262144
net.netfilter.nf_conntrack_tcp_timeout_established = 7200
net.ipv4.tcp_fin_timeout = 10
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_tw_buckets = 2000000
net.ipv4.tcp_max_syn_backlog = 16384
net.core.somaxconn = 16384
net.core.netdev_max_backlog = 65536

# Memory/VM: Responsive for bursts
vm.swappiness = 1
vm.dirty_ratio = 10
vm.dirty_background_ratio = 2
vm.dirty_expire_centisecs = 500
vm.dirty_writeback_centisecs = 100
vm.vfs_cache_pressure = 10

# Polling/Scheduler: Ultra-low latency
net.core.busy_poll = 10
net.core.busy_read = 10
net.ipv4.tcp_autocorking = 0
net.ipv4.tcp_thin_linear_timeouts = 1
net.ipv4.tcp_thin_dupack = 1
net.core.netdev_tstamp_prequeue = 0
kernel.sched_min_granularity_ns = 500000
kernel.sched_wakeup_granularity_ns = 1000000

# File Descriptors: For high concurrent connections (new addition)
fs.file-max = 2097152

Apply: sudo sysctl --system. Warnings about parameters that don't exist on your kernel can be ignored. Verify: sysctl net.ipv4.tcp_congestion_control (should show "bbr").
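Before relying on BBR, confirm the kernel actually offers it (a quick check; the tcp_bbr module ships with most modern distro kernels):

# Algorithms the kernel can currently use
sysctl net.ipv4.tcp_available_congestion_control
# If bbr is missing, try loading the module and re-check
sudo modprobe tcp_bbr
lsmod | grep bbr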

Rollback: delete /etc/sysctl.d/99-jellyfin-stream.conf, then run sudo sysctl --system or reboot to return to distribution defaults.

2. Traffic Prioritization (TC Shaping)

Prioritize Jellyfin/HTTPS traffic (ports 8096/443) with HTB+fq_codel. Scaled for 4Gbps; larger bursts (125kb) handle video spikes. Script auto-detects Docker bridge.

Create /usr/local/bin/optimize-video-stream.sh:

#!/bin/bash
# Auto-detect active Docker bridge (UP state)
DOCKER_IF=$(ip -br link show type bridge | grep 'UP' | head -n1 | awk '{print $1}')
if [ -z "$DOCKER_IF" ]; then
    echo "No active Docker bridge found! Run: docker network ls && ip -br link show"
    exit 1
fi
echo "Configuring TC on: $DOCKER_IF"

# Clear existing
tc qdisc del dev $DOCKER_IF root 2>/dev/null

# Root HTB: 4Gbps total, default to class 30
tc qdisc add dev $DOCKER_IF root handle 1: htb default 30 r2q 2000

# Main class: Full bandwidth
tc class add dev $DOCKER_IF parent 1: classid 1:1 htb rate 4000mbit ceil 4000mbit burst 125kb cburst 125kb

# Video class: 80% priority (3.2Gbps bursty)
tc class add dev $DOCKER_IF parent 1:1 classid 1:10 htb rate 3200mbit ceil 4000mbit burst 125kb cburst 125kb

# Default class: 20% (800mbit)
tc class add dev $DOCKER_IF parent 1:1 classid 1:30 htb rate 800mbit ceil 1600mbit burst 125kb cburst 125kb

# fq_codel for video: Higher target for buffering tolerance
tc qdisc add dev $DOCKER_IF parent 1:10 handle 10: fq_codel target 20ms interval 100ms flows 2048 quantum 3008 memory_limit 64Mb ecn

# fq_codel for default: Tighter for responsiveness
tc qdisc add dev $DOCKER_IF parent 1:30 handle 30: fq_codel target 5ms interval 100ms flows 1024 quantum 1514 memory_limit 32Mb ecn

# Filters: Jellyfin (8096) + HTTPS (443)
tc filter add dev $DOCKER_IF protocol ip parent 1: prio 1 u32 match ip dport 8096 0xffff flowid 1:10
tc filter add dev $DOCKER_IF protocol ip parent 1: prio 1 u32 match ip sport 8096 0xffff flowid 1:10
tc filter add dev $DOCKER_IF protocol ip parent 1: prio 2 u32 match ip dport 443 0xffff flowid 1:10
tc filter add dev $DOCKER_IF protocol ip parent 1: prio 2 u32 match ip sport 443 0xffff flowid 1:10

echo "Configuration applied. Verify:"
tc -s qdisc show dev $DOCKER_IF
tc -s class show dev $DOCKER_IF
tc -s filter show dev $DOCKER_IF

Make executable: sudo chmod +x /usr/local/bin/optimize-video-stream.sh. Run: sudo /usr/local/bin/optimize-video-stream.sh.
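To confirm traffic actually lands in the video class, play a stream (or pull a large file through the proxy) and watch the 1:10 counters climb:

DOCKER_IF=$(ip -br link show type bridge | grep 'UP' | head -n1 | awk '{print $1}')
# Byte/packet counters on class 1:10 should grow while a stream is playing
watch -n 1 "tc -s class show dev $DOCKER_IF"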


Systemd Service (auto-start post-Docker):

sudo tee /etc/systemd/system/video-tc.service > /dev/null << 'EOF'
[Unit]
Description=Video Streaming Traffic Control
After=docker.service network.target
Requires=docker.service

[Service]
Type=oneshot
ExecStart=/usr/local/bin/optimize-video-stream.sh
RemainAfterExit=yes
ExecStop=/bin/sh -c 'for d in $$(ls /sys/class/net | grep "^br-"); do tc qdisc del dev $$d root 2>/dev/null || true; done'

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload && sudo systemctl enable --now video-tc.service
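After enabling, a quick check that the unit ran and the qdisc is attached:

sudo systemctl status video-tc --no-pager
sudo journalctl -u video-tc --no-pager -n 20
tc qdisc show | grep -E 'htb|fq_codel'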

Troubleshooting Docker Bridge:
If no bridge is found: docker network ls (look for "bridge" or a custom network like "pangolin"). Inspect: docker network inspect <network>. For host networking, target eth0 (or your public interface) instead.
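To map a Docker network name to its bridge device (a sketch; "pangolin" is an example network name, and with the default bridge driver the device is usually br-<first 12 characters of the network ID>):

NET_ID=$(docker network inspect -f '{{.Id}}' pangolin)
echo "Bridge device: br-${NET_ID:0:12}"
ip -br link show "br-${NET_ID:0:12}"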

Rollback: sudo systemctl stop video-tc && sudo tc qdisc del dev <interface> root.

3. Traefik Configuration

Raise timeouts and enable buffering for long streams and large chunks. Important: the buffering middleware and TLS options must live in Traefik's dynamic configuration (the file provider, e.g., dynamic_config.yml) and be referenced from the static config with the @file suffix; defining them inline in traefik.yml causes the "invalid node options" and "middleware does not exist" errors discussed later in this thread.

Update traefik.yml (the static config):

entryPoints:
  websecure:
    address: ":443"
    transport:
      respondingTimeouts:
        readTimeout: "12h"  # Long for extended playback sessions
        writeTimeout: "12h"
      forwardingTimeouts:
        dialTimeout: "30s"
        responseHeaderTimeout: "30s"
    http:
      middlewares:
        - buffering@file
      # Keep your existing tls/certResolver settings here

In dynamic_config.yml (the file provider), define the middleware and the TLS options:

http:
  middlewares:
    buffering:
      buffering:
        maxRequestBodyBytes: 104857600  # 100MB requests
        memRequestBodyBytes: 52428800   # 50MB in-memory
        maxResponseBodyBytes: 104857600 # 100MB responses
        retryExpression: "IsNetworkError() && Attempts() < 3"  # Retry on transient errors

tls:
  options:
    default:
      minVersion: "VersionTLS12"
      cipherSuites:
        - "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"  # Fast for streaming
        - "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"

Restart Traefik: docker compose restart traefik (static config changes require a restart). In Jellyfin, add the VPS IP to "Known proxies" under Networking.
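If the Traefik API/dashboard is enabled, you can confirm the middleware registered and watch for config errors (a quick check, assuming the dashboard is reachable on port 8080; adjust to your deployment):

# Look for buffering@file among the loaded middlewares
curl -s http://localhost:8080/api/http/middlewares | grep -o 'buffering@file' | head -n1
# Or watch the Traefik logs for config errors after the restart
docker compose logs -f traefik | grep -iE 'error|buffering'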

4. Docker Configuration

Resource limits sized for the 24GB/4-core VPS; the sysctls propagate larger buffers into the container. "gerbil" is assumed as the proxy service name (e.g., the Pangolin tunnel container); adjust the name to match your compose file.

Update docker-compose.yml:

services:
  gerbil:  # Or your proxy service name
    sysctls:
      - net.core.rmem_max=16777216
      - net.core.wmem_max=16777216
      - net.ipv4.tcp_rmem=4096 87380 16777216
      - net.ipv4.tcp_wmem=4096 87380 16777216
    deploy:
      resources:
        limits:
          cpus: '3.0'  # 75% for transcoding headroom
          memory: 8G   # Increased for buffering
        reservations:
          cpus: '1.0'
          memory: 2G
    networks:
      pangolin:
        priority: 10  # Preferred network for this service (connection order, not QoS)

networks:
  pangolin:
    driver: bridge
    driver_opts:
      com.docker.network.driver.mtu: 9000  # Jumbo frames if supported; fallback 1500

Apply: docker compose up -d. Verify: docker stats.
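To confirm the per-container sysctls and limits applied (a quick check; "gerbil" is the service name assumed above, and the actual container name may differ under compose, e.g., <project>-gerbil-1):

# Sysctls Docker applied to the container
docker inspect gerbil --format '{{ .HostConfig.Sysctls }}'
# One-shot snapshot of CPU/memory usage against the configured limits
docker stats --no-stream gerbil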

5. Jellyfin-Specific Tweaks

  • Dashboard > Playback: Enable hardware acceleration on the Jellyfin host if a GPU is available (transcoding happens on the home server, not on the VPS proxy); set the remote bitrate limit to 10Mbps initially.
  • Networking: Add the VPS Tailscale or public IP to "Known proxies"; disable "Secure connection mode" if TLS terminates at Traefik.
  • Client: Use direct play where possible (see the ffprobe sketch below); test with VLC for diagnostics.
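A quick way to check whether a file can direct play within the remote bitrate cap is to inspect its overall bitrate and codecs (a sketch; movie.mkv is a placeholder):

# Overall bitrate (bits/s) plus codec per stream; compare against the remote streaming limit
ffprobe -v error -show_entries format=bit_rate:stream=codec_type,codec_name \
  -of default=noprint_wrappers=1 movie.mkv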

6. Monitoring and Validation

Real-time script /usr/local/bin/monitor-tc.sh (enhanced for your interface):

#!/bin/bash
INTERFACE=$(ip -br link show type bridge | grep 'UP' | head -n1 | awk '{print $1}')
if [ -z "$INTERFACE" ]; then
    echo "No active Docker bridge found; set INTERFACE manually (e.g., eth0)." >&2
    exit 1
fi
while true; do
    clear
    echo "=== Jellyfin Streaming Monitor (Interface: $INTERFACE) ==="
    echo "Time: $(date)"
    echo "----------------------------------------"
    echo "Classes (Bytes Sent/Recv):"
    tc -s class show dev $INTERFACE | grep -E "class htb 1:(1|10|30)"
    echo "----------------------------------------"
    echo "Queues (Drops/Overlimits):"
    tc -s qdisc show dev $INTERFACE | grep -E "qdisc fq_codel (10|30)"
    echo "----------------------------------------"
    echo "Network Stats:"
    iftop -t -i $INTERFACE -s 5 -N | head -20  # Top flows
    sleep 3
done

Run: sudo chmod +x /usr/local/bin/monitor-tc.sh && sudo /usr/local/bin/monitor-tc.sh. Expect: the video class (1:10) shows high byte counts during streams, with <1% drops.

Other commands:

  • Throughput: sudo iftop -i <interface>
  • Buffers: cat /proc/net/sockstat
  • Logs: journalctl -u video-tc -f

Benchmarks:

  • iperf3 client-to-VPS: >3Gbps sustained (see the iperf3 sketch below).
  • Jellyfin stream: No buffering at 1080p/10Mbps; <2s seek time.
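A minimal iperf3 run for the throughput benchmark above (203.0.113.10 is a placeholder for the VPS IP; install iperf3 on both ends):

# On the VPS
iperf3 -s
# From the client or home server
iperf3 -c 203.0.113.10 -P 4 -t 30       # 4 parallel streams, 30 seconds, client -> VPS
iperf3 -c 203.0.113.10 -P 4 -t 30 -R    # reverse mode (VPS -> client), i.e., the streaming direction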

Conclusion

With these settings the example Oracle VPS should comfortably handle 2-4 concurrent 4K streams. Test iteratively: apply one section, stream a 1080p file, monitor. If you see issues (e.g., bufferbloat), increase the fq_codel target to 25ms or the bursts to 250kb. For further tuning, check the Jellyfin forums for 2025 hardware transcoding guides.

Full rollback: delete the custom files, disable the services, and reboot the VPS. Share your tc -s class show output for more help!

2 Likes

Great guide! Thank you!
What steps would I have to adjust to make this work for Plex instead?

2 Likes

Any streaming service/platform will work, but I have only tested with Jellyfin.

Wow! This is great. I’ll be testing this during the weekend.

Quick question, these performance optimizations are for the VPS, but are there any changes or considerations to be had on the home server where jellyfin is installed?

1 Like

Is this guide still relevant? I'm getting errors from Traefik:

{"level":"error","error":"command traefik error: invalid node options: string","time":"2025-04-01T10:00:16Z","message":"Command error"}

{"level":"error","error":"command traefik error: invalid node options: string","time":"2025-04-01T10:00:17Z","message":"Command error"}

{"level":"error","error":"command traefik error: invalid node options: string","time":"2025-04-01T10:00:18Z","message":"Command error"}

{"level":"error","error":"command traefik error: invalid node options: string","time":"2025-04-01T10:00:20Z","message":"Command error"}

{"level":"error","error":"command traefik error: invalid node options: string","time":"2025-04-01T10:00:24Z","message":"Command error"}

{"level":"error","error":"command traefik error: invalid node options: string","time":"2025-04-01T10:00:31Z","message":"Command error"}

{"level":"error","error":"command traefik error: invalid node options: string","time":"2025-04-01T10:00:44Z","message":"Command error"} 

Here’s the adjusted traefik_config.yaml

accessLog:
  bufferingSize: 100
  fields:
    defaultMode: drop
    headers:
      defaultMode: drop
      names:
        Authorization: redact
        Content-Type: keep
        Cookie: redact
        User-Agent: keep
        X-Forwarded-For: keep
        X-Forwarded-Proto: keep
        X-Real-Ip: keep
    names:
      ClientAddr: keep
      ClientHost: keep
      DownstreamContentSize: keep
      DownstreamStatus: keep
      Duration: keep
      RequestMethod: keep
      RequestPath: keep
      RequestProtocol: keep
      RetryAttempts: keep
      ServiceName: keep
      StartUTC: keep
      TLSCipher: keep
      TLSVersion: keep
  filePath: /var/log/traefik/access.log
  filters:
    minDuration: 100ms
    retryAttempts: true
    statusCodes:
      - 200-299
      - 400-499
      - 500-599
  format: json
api:
  dashboard: true
  insecure: true
certificatesResolvers:
  letsencrypt:
    acme:
      caServer: https://acme-v02.api.letsencrypt.org/directory
      email: redacted
      httpChallenge:
        entryPoint: web
      storage: /letsencrypt/acme.json
entryPoints:
  web:
    address: :80
  websecure:
    address: :443
    http:
      middlewares:
        - crowdsec@file
        - buffering
      tls:
        certResolver: letsencrypt
        options:
          default:
            minVersion: "VersionTLS12"
            cipherSuites:
              - "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"
              - "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"
    transport:
      respondingTimeouts:
        readTimeout: "12h"
        writeTimeout: "12h"
      forwardingTimeouts:
        dialTimeout: "30s"
        responseHeaderTimeout: "30s"
  tcp-20800:
    address: ":20800/tcp"
  tcp-19800:
    address: ":19800/tcp"
  udp-19800:
    address: ":19800/udp"
experimental:
  plugins:
    badger:
      moduleName: github.com/fosrl/badger
      version: v1.0.0
    crowdsec:
      moduleName: github.com/maxlerebourg/crowdsec-bouncer-traefik-plugin
      version: v1.3.5
log:
  format: json
  level: INFO
providers:
  file:
    filename: /etc/traefik/dynamic_config.yml
  http:
    endpoint: http://pangolin:3001/api/v1/traefik-config
    pollInterval: 5s
serversTransport:
  insecureSkipVerify: true
http:
  middlewares:
    buffering:
      buffering:
        maxRequestBodyBytes: 104857600
        memRequestBodyBytes: 52428800
        maxResponseBodyBytes: 104857600
        retryExpression: "IsNetworkError() && Attempts() < 3"

(sorry for spam, I couldn’t edit messages)

1 Like

For which services and which middleware are you getting this error? It all depends on your service and deployment. You will have to give the whole picture, or ping me in the Discord off-topic channel.

What specs would you suggest for a VPS using this method? I assume the standard 1 core / 1 GB would not suffice. I haven't decided to run Jellyfin through my VPS yet, but would like to be able to if I decide later on.

1 Like

To be honest, 1 GB RAM and 1 vCPU is far too little for a streaming relay. I would want a minimum of 3 GB RAM and 2 vCPUs.

Another question: I'm looking at a VPS with 3 vCPU cores, 4.5GB RAM, and 8500GB monthly transfer. I know the CPU and RAM are fine, but I was curious about the bandwidth. It doesn't seem like much for streaming Jellyfin or Plex.

What are people’s experience with bandwidth through their VPS?

1 Like

Let's take 12 streams, 8 hours/day, every day as the working parameters.

Factors Affecting Data Usage:

  1. Bitrate: This is the most crucial factor. It’s the amount of data used per second to represent the video. Higher quality generally means a higher bitrate. Streaming services often use variable bitrates, and the actual bitrate of your files in Plex/Jellyfin can vary widely.
  2. Codec: Modern codecs like H.265 (HEVC) are more efficient than older ones like H.264, meaning they can deliver similar quality at a lower bitrate, thus using less data.
  3. Direct Play vs. Transcoding:
    • Direct Play: Streams the file as-is. Bandwidth usage equals the file’s original bitrate.
    • Transcoding: The server converts the file on the fly (e.g., to a lower resolution or bitrate compatible with the client device or bandwidth limits). This uses CPU power on the server but can reduce bandwidth consumption significantly if you set lower quality limits for remote streams.
  4. Actual Usage: Your calculation assumes constant usage (12 streams, 8 hours/day, every day). Real-world usage might fluctuate.

Data Usage Calculation:

We need to estimate the data usage per hour for a single 1080p stream.

  • Lower Estimate (like typical streaming services, e.g., Netflix, YouTube): Around 5 Mbps. This translates to roughly 2.25 GB per hour per stream. (Sources 1.1, 1.2, 4.1, 4.2, 6.1, 7.1)
  • Higher Estimate (higher quality H.264): Around 8 Mbps. This translates to roughly 3.6 GB per hour per stream (8 Mbps / 8 bits per byte = 1 MB/s; 1 MB/s * 3600 sec/hr ≈ 3.6 GB). (Sources 1.1, 4.1, 4.2, 6.1, 6.2 suggest ranges encompassing this)

Let’s calculate the total monthly usage based on your parameters (12 streams, 8 hours/day):

Scenario 1: Using Lower Estimate (2.25 GB/hour per stream)

  1. Data per hour (all streams): 12 streams * 2.25 GB/hour/stream = 27 GB/hour
  2. Data per day: 27 GB/hour * 8 hours/day = 216 GB/day
  3. Data per month (approx. 30.4 days): 216 GB/day * 30.4 days/month ≈ 6566 GB/month

Scenario 2: Using Higher Estimate (3.6 GB/hour per stream)

  1. Data per hour (all streams): 12 streams * 3.6 GB/hour/stream = 43.2 GB/hour
  2. Data per day: 43.2 GB/hour * 8 hours/day = 345.6 GB/day
  3. Data per month (approx. 30.4 days): 345.6 GB/day * 30.4 days/month ≈ 10506 GB/month
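Another way to read these numbers: the break-even average per-stream bitrate that exactly consumes the 8500 GB cap at 12 streams and 8 hours/day (a rough sketch with bc, same assumptions as above):

# 8500 GB * 1000 MB/GB * 8 bits/byte, divided by total stream-seconds (12 * 8 * 30.4 * 3600)
echo "scale=2; 8500 * 1000 * 8 / (12 * 8 * 30.4 * 3600)" | bc
# -> about 6.47 Mbps; if your average stream bitrate is above this, expect to exceed the cap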

Conclusion:

  • Based on the lower estimate (5 Mbps / 2.25 GB/hr per stream), your projected usage is around 6566 GB/month. This is comfortably within your 8500 GB allowance. This scenario is likely if you are streaming content similar in bitrate to major streaming services or if Plex/Jellyfin often transcodes streams to a lower quality.
  • Based on the higher estimate (8 Mbps / 3.6 GB/hr per stream), your projected usage is around 10506 GB/month. This exceeds your 8500 GB allowance. This scenario is more likely if you are Direct Playing higher-quality 1080p files (e.g., Blu-ray rips with less compression) frequently.

Recommendation:

Your 8500 GB monthly transfer might be sufficient, but it’s cutting it close if your average stream bitrate is high or usage is consistently at the peak you described.

  • Monitor Usage: Check if your VPS provider offers bandwidth monitoring tools. Track your actual usage for a month.
  • Consider File Quality: If you primarily store very high-bitrate 1080p files, you are more likely to exceed the limit with Direct Play.
  • Utilize Transcoding Settings: Configure Plex/Jellyfin to limit remote stream bitrates if necessary. This will use more CPU but save bandwidth.
  • Codec Efficiency: Using H.265/HEVC encoded files where possible will significantly reduce data usage compared to H.264 for the same perceived quality.

Bandwidth bash script (just an estimate):

#!/bin/bash

# ==============================================================================
# Bandwidth Usage Calculator
#
# Description: Estimates daily and monthly data usage based on streaming parameters.
# Usage: ./bandwidth_calculator.sh [streams] [hours_per_day] [bitrate_mbps]
#        If arguments are omitted, it uses default values from the example.
# Requires: bc (for floating-point arithmetic)
# ==============================================================================

# --- Configuration ---

# Default values (from the lower estimate in the previous example)
DEFAULT_STREAMS=12
DEFAULT_HOURS_PER_DAY=8
DEFAULT_BITRATE_MBPS=5 # Corresponds to ~2.25 GB/hour/stream

# Average number of days in a month
DAYS_PER_MONTH=30.4

# --- Input Handling ---

# Use command-line arguments if provided, otherwise use defaults
NUM_STREAMS=${1:-$DEFAULT_STREAMS}
HOURS_PER_DAY=${2:-$DEFAULT_HOURS_PER_DAY}
BITRATE_MBPS=${3:-$DEFAULT_BITRATE_MBPS}

# Input validation (basic check if numbers)
if ! [[ "$NUM_STREAMS" =~ ^[0-9]+$ ]] || \
   ! [[ "$HOURS_PER_DAY" =~ ^[0-9]+(\.[0-9]+)?$ ]] || \
   ! [[ "$BITRATE_MBPS" =~ ^[0-9]+(\.[0-9]+)?$ ]]; then
  echo "Error: Invalid input. Please provide numeric values for streams, hours, and bitrate."
  echo "Usage: $0 [streams] [hours_per_day] [bitrate_mbps]"
  exit 1
fi

# Check if bc is installed
if ! command -v bc &> /dev/null; then
    echo "Error: 'bc' command not found. Please install bc (e.g., 'sudo apt install bc' or 'sudo yum install bc')."
    exit 1
fi

# --- Calculations ---

# Set bc scale for floating point precision (e.g., 2 decimal places)
BC_SCALE=2

# 1. Calculate GB per hour per stream
#    Bitrate (Mbps) / 8 = MB/s
#    MB/s * 3600 seconds/hour = MB/hour
#    MB/hour / 1024 MB/GB = GB/hour
gb_per_hour_per_stream=$(echo "scale=$BC_SCALE; ($BITRATE_MBPS / 8) * 3600 / 1024" | bc)

# 2. Calculate total GB per hour (all streams)
total_gb_per_hour=$(echo "scale=$BC_SCALE; $gb_per_hour_per_stream * $NUM_STREAMS" | bc)

# 3. Calculate total GB per day
total_gb_per_day=$(echo "scale=$BC_SCALE; $total_gb_per_hour * $HOURS_PER_DAY" | bc)

# 4. Calculate total GB per month
total_gb_per_month=$(echo "scale=$BC_SCALE; $total_gb_per_day * $DAYS_PER_MONTH" | bc)

# --- Output ---

echo "--- Bandwidth Usage Estimation ---"
echo "Input Parameters:"
echo "  Concurrent Streams:   $NUM_STREAMS"
echo "  Hours per Day:        $HOURS_PER_DAY"
echo "  Bitrate per Stream:   $BITRATE_MBPS Mbps"
echo ""
echo "Calculated Usage:"
echo "  GB per Hour (1 stream): $gb_per_hour_per_stream GB"
echo "  GB per Hour (All streams):$total_gb_per_hour GB"
echo "  Estimated GB per Day:   $total_gb_per_day GB"
echo "  Estimated GB per Month: $total_gb_per_month GB"
echo "----------------------------------"

exit 0

How to Use:

  1. Save: Save the code above into a file named bandwidth_calculator.sh.
  2. Make Executable: Open your terminal and run chmod +x bandwidth_calculator.sh.
  3. Run:
  • With Defaults: ./bandwidth_calculator.sh (Uses 12 streams, 8 hours/day, 5 Mbps/stream)
  • With Custom Values: ./bandwidth_calculator.sh 12 8 8 (Calculates for 12 streams, 8 hours/day, 8 Mbps/stream)
  • Another Example: ./bandwidth_calculator.sh 5 4 10 (Calculates for 5 streams, 4 hours/day, 10 Mbps/stream)

The script will output the parameters used and the estimated daily and monthly data consumption in GB. Remember that this provides an estimate, and actual usage can vary based on the factors we discussed earlier (codec, exact bitrate, transcoding, etc.).

Are there any recommendations for streaming 4K? Such as different settings, etc.? Plex at least has the option to get around tunneling all traffic through the VPS by opening port 32400, but I’m not sure if Jellyfin has the same.

1 Like

To be very honest, I have never tried to optimize 4K for a VPS deployment; I am still stuck at 1080p. But you can definitely look into buffering and how your local transcoding works.

Should it be:
ExecStart=/usr/local/bin/optimize-video-stream.sh

2 Likes

After adding buffering to traefik_config.yml I'm getting these errors:

{"level":"error","entryPointName":"websecure","routerName":"2-router@http","error":"middleware \"buffering@http\" does not exist","time":"2025-06-12T22:06:44Z"}
{"level":"error","entryPointName":"websecure","routerName":"next-router@file","error":"middleware \"buffering@file\" does not exist","time":"2025-06-12T22:06:44Z"}
{"level":"error","entryPointName":"websecure","routerName":"api-router@file","error":"middleware \"buffering@file\" does not exist","time":"2025-06-12T22:06:44Z"}
{"level":"error","entryPointName":"websecure","routerName":"1-router@http","error":"middleware \"buffering@http\" does not exist","time":"2025-06-12T22:06:44Z"}
{"level":"error","entryPointName":"websecure","routerName":"4-router@http","error":"middleware \"buffering@http\" does not exist","time":"2025-06-12T22:06:44Z"}
{"level":"error","entryPointName":"websecure","routerName":"3-router@http","error":"middleware \"buffering@http\" does not exist","time":"2025-06-12T22:06:44Z"}
{"level":"error","entryPointName":"websecure","routerName":"5-router@http","error":"middleware \"buffering@http\" does not exist","time":"2025-06-12T22:06:44Z"}
{"level":"error","entryPointName":"websecure","routerName":"ws-router@file","error":"middleware \"buffering@file\" does not exist","time":"2025-06-12T22:06:44Z"}

How to solve this?

1 Like

I got the same. Gave up eventually…

Edit: Figured it out and now I feel dumb!

My middleware is defined in a separate file: dynamic_config.yml

I added the buffering middleware under http there, then referenced it in traefik_config.yml as shown in the example, but as "buffering@file" instead of just buffering.

Hope that helps!

1 Like

Hello,

Is this still accurate? Can it get in the way of other services served by Pangolin?

1 Like

Yes, it's accurate, but it all depends on the service provider and your OS version. Please don't try it on a live system. Test it out first, tune it to your requirements, and then deploy the steps on your system. It varies from system to system; this is just a guideline.