Fix Slow or Bursty Speed in Pangolin When Using Newt Tunnels

Many users see slow speeds or “bursty” behavior (fast at first, then stalling or buffering) when using Newt sites in Pangolin. It shows up most with video streaming, large file downloads, and long-lived connections; short connections and low-speed flows are usually fine.

The main causes identified in the discussion:

  • Newt runs in user space, so it can be slower under heavy load.
  • Problems with how packets are split (fragmentation) and MTU settings cause stalls.
  • Other things like Docker overhead can make it worse.

The fix that worked for many users is to switch to a Basic WireGuard Site. This uses kernel WireGuard, which is faster and more stable for sustained, high-throughput flows.

Here is a step-by-step guide to set it up. Follow carefully.

Step 1: Create a Basic WireGuard Site in Pangolin

  • Go to your Pangolin dashboard.

  • Create a new Site and choose Basic WireGuard type (not Newt).

  • Pangolin will give you a WireGuard config file (like wg0.conf).

  • On your home network (where your services run), set up WireGuard on a gateway. This can be:

    • A Linux host machine
    • A VM
    • An LXC container
    • Or a Docker container (see special notes below if using a container as the peer)
  • Bring up the tunnel: wg-quick up wg0 (or the file name given).

  • Check if it connects: wg show

    • You should see a handshake and data transfer.
  • Note: On the Pangolin server side, the WireGuard interface (wg0) often runs inside the gerbil Docker container. You can check it with:

    docker exec -it gerbil ip -br addr
    
    • Look for wg0 with an IP like 100.89.x.x/24.

Step 2: Point Your Resource to the WireGuard Peer IP

  • In Pangolin, edit your resource (the service you want to access).
  • Set the Target/Upstream to the WireGuard IP of your home gateway peer, not your local LAN IP.
    • Example: https://100.89.x.x:PORT (use the peer IP from your config; match the scheme, http:// or https://, to what the service itself speaks).
  • Do NOT use your home LAN IP like 192.168.x.x directly.

Step 3: Set Up Routing and Rules on Your Home WireGuard Peer

  • Edit the WireGuard config file on your peer (/etc/wireguard/wg0.conf or wherever you saved it).
  • Add these lines (replace placeholders):
    • <WG_PEER_IP_CIDR>: Your WireGuard IP with subnet, like 100.89.x.x/32
    • <TARGET_IP>: The real LAN IP of your service (or Docker host IP if service is in Docker)
    • <PORT>: The port your service uses
    • eth0: Your main network interface (change if different; in Docker, it might be different – see below)
[Interface]
Address = <WG_PEER_IP_CIDR>
PrivateKey = your_private_key_here
MTU = 1280   # Important: keep this low to avoid fragmentation stalls

PostUp = sysctl -w net.ipv4.ip_forward=1
PostUp = iptables -t nat -A PREROUTING -i %i -p tcp --dport <PORT> -j DNAT --to-destination <TARGET_IP>:<PORT>
PostUp = iptables -t nat -A POSTROUTING -o eth0 -p tcp -d <TARGET_IP> --dport <PORT> -j MASQUERADE
PostUp = iptables -A FORWARD -i %i -o eth0 -p tcp -d <TARGET_IP> --dport <PORT> -j ACCEPT
PostUp = iptables -A FORWARD -i eth0 -o %i -p tcp -s <TARGET_IP> --sport <PORT> -j ACCEPT

# This fixes the stall problem
PostUp = iptables -t mangle -A FORWARD -i %i -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu
PostUp = iptables -t mangle -A FORWARD -o %i -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu

# Cleanup when WireGuard stops
PostDown = iptables -t nat -D PREROUTING -i %i -p tcp --dport <PORT> -j DNAT --to-destination <TARGET_IP>:<PORT>
PostDown = iptables -t nat -D POSTROUTING -o eth0 -p tcp -d <TARGET_IP> --dport <PORT> -j MASQUERADE
PostDown = iptables -D FORWARD -i %i -o eth0 -p tcp -d <TARGET_IP> --dport <PORT> -j ACCEPT
PostDown = iptables -D FORWARD -i eth0 -o %i -p tcp -s <TARGET_IP> --sport <PORT> -j ACCEPT
PostDown = iptables -t mangle -D FORWARD -i %i -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu
PostDown = iptables -t mangle -D FORWARD -o %i -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu
  • Save and restart: wg-quick down wg0 && wg-quick up wg0
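
For context, the MTU and MSS values above are tied together by simple header arithmetic. A quick sketch (assuming IPv4 inner traffic and standard 20-byte IP and TCP headers):

```shell
# Packet-size arithmetic behind MTU=1280 and --clamp-mss-to-pmtu.
TUNNEL_MTU=1280
IP_HDR=20
TCP_HDR=20

# The clamp advertises at most this payload size per TCP segment,
# so segments never need fragmentation inside the tunnel.
echo "clamped MSS: $(( TUNNEL_MTU - IP_HDR - TCP_HDR ))"   # prints 1240

# WireGuard's default MTU of 1420 assumes worst-case outer headers on a
# 1500-byte link: 40 (IPv6) + 8 (UDP) + 32 (WireGuard) = 80 bytes.
echo "default wg MTU: $(( 1500 - 40 - 8 - 32 ))"           # prints 1420
```

If the interface silently reverts to 1420, the headroom this calculation relies on disappears, which is why Step 5 below checks the MTU.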

If You Are Using a WireGuard Sidecar Container as the Peer

Some users run the WireGuard peer inside a Docker container (called a “sidecar” setup). This is common if your services are in Docker.

  • Run a lightweight WireGuard container with NET_ADMIN capability.
  • Example using a popular image such as linuxserver/wireguard:
    docker run -d \
      --name=wg-peer \
      --cap-add=NET_ADMIN \
      --cap-add=NET_RAW \
      -v /path/to/wg0.conf:/config/wg0.conf \
      -e PUID=1000 -e PGID=1000 \
      linuxserver/wireguard
    
  • Or use wg-quick in a custom container.
  • Important changes:
    • The outgoing interface may not be “eth0” inside the container. Check with: ip link
    • Replace “eth0” in the PostUp/PostDown rules with the correct name (often “eth0” still works, but verify).
    • For forwarding to work, the container needs access to the host network or Docker network.
    • Best option: Use --network=host for the WireGuard container so it can forward to your services easily.
    • If services are in other containers, use Docker networks or host networking.
    • iptables rules go inside the container (add them to the config as shown).

This container setup works like a VM/LXC but inside Docker. Test the rules carefully.
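
If you prefer compose over docker run, a minimal sketch of the same sidecar might look like the following (the service name, host path, and network name are illustrative placeholders to adapt):

```yaml
# Hypothetical compose equivalent of the docker run command above.
services:
  wg-peer:
    image: linuxserver/wireguard
    container_name: wg-peer
    cap_add:
      - NET_ADMIN
      - NET_RAW
    environment:
      - PUID=1000
      - PGID=1000
    volumes:
      - /path/to/wg0.conf:/config/wg0.conf
    restart: unless-stopped   # covers Step 4 (auto start)
    networks:
      - mynetwork             # attach to the same network as your services

networks:
  mynetwork:
    external: true
```

If you go the --network=host route instead, drop the networks: section and use network_mode: host.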

Step 4: Make It Start Automatically

  • If on host/VM/LXC: systemctl enable --now wg-quick@wg0
  • If in Docker: Use --restart unless-stopped in your docker run/compose.

Step 5: Check Everything Works

  • MTU should stay at 1280:
    • Run: ip -d link show wg0 | grep mtu (or inside container: docker exec …)
    • If it changes to 1420, stalls may come back.
  • Check rules:
    • iptables -t nat -S | grep <PORT>
    • iptables -t mangle -S | grep TCPMSS
  • Test your service through Pangolin. Speeds should be steady, no bursts or stalls.
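
The MTU check can be automated. This is a made-up helper (check_mtu is not part of the guide): it reads `ip link`-style output on stdin and fails if the reported MTU is not the expected value:

```shell
# Sketch of an automated MTU check for Step 5.
# Fails (non-zero exit) if the MTU in the input is not the expected one.
check_mtu() {
  expected=$1
  actual=$(sed -n 's/.* mtu \([0-9][0-9]*\).*/\1/p' | head -n 1)
  [ "$actual" = "$expected" ]
}

# Real use on the peer (or via docker exec for a container):
#   ip link show wg0 | check_mtu 1280 || echo "MTU drifted - stalls likely"
```

Running it from cron or a health check catches the 1420 drift before the stalls come back.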

Extra Tips

  • If you have many services, add more DNAT lines for each port.
  • This setup uses kernel WireGuard, so it handles high speeds better than Newt.
  • If you still have issues, check your VPS bandwidth, ISP, or try a different VPS location.
  • Newt may get better in future updates, but this fix works well now.

This should give you smooth, fast access. The sidecar container option is great if you already use Docker a lot. If you need help with a specific part, ask!


So is this also possible within UniFi? I already have a WireGuard server running there and can also connect to peers.

I saw this Reddit post; it seems like a viable option as well.


Yes, I have it running since last week with no issues.
Just make sure you have it on a separate VLAN, so if the VPS gets compromised your home network is not affected.


Okay, thanks. Is the Reddit post a good example for the firewall rules?

Create firewall and NAT rules in UniFi

I used the new firewall interface in the UniFi Policy Engine. You’ll need three rules:

Rule 1 — Firewall allow

Path: Policy Engine → Firewall → Create Policy

Name: Allow_WG_HTTP

Source Zone: External → IP → Specific → (use the IP shown in AllowedIPs — in my case 100.89.128.1/32)

Port: Any

Action: Allow

Destination Zone: Internal → Any → Port: Any

Enable Auto Allow Return Traffic.

Leave other settings default and save.

Rule 2 — DNAT (port forward into LAN host)

Path: Policy Engine → NAT → Create New

Name: WG_HTTP_

Type: Destination NAT

Interface/VPN Tunnel: select your WireGuard client

Translated IP: the LAN host you’re trying to reach (e.g. 192.168.8.19)

Translated Port: the service port on that host (e.g. 80)

Protocol: TCP (or TCP/UDP if needed)

Source: Any → Port: Any

Destination: Any → Port: (choose an unused port — this is what you’ll connect to via the WireGuard client, e.g. 8080)

Save with default settings.


Deployment Structure with Separate VLAN for All Services

All services run on hosts in isolated Exposed VLAN (DMZ style). No hosts in Main LAN except trusted devices.

1. Network Setup

  • Main LAN: VLAN 1, 192.168.1.0/24 (devices, IoT if separate)
  • Exposed VLAN: New network
    • Name: Exposed_Services
    • VLAN ID: 20 (example)
    • Subnet: 192.168.20.0/24
    • Gateway: UniFi gateway
    • DHCP: Enable
    • All self-hosted services here (Docker host, Proxmox, TrueNAS, etc.)
    • Hosts get IP e.g. 192.168.20.10-50

Block inter-VLAN by default.

2. WireGuard Client (Pangolin)

Import the WireGuard config the same way as before.
Peer IP: e.g. 100.89.128.1/32 (from AllowedIPs)

Incoming traffic from External zone.

3. DNAT Rules (Policy Engine > NAT > Destination NAT)

One rule per service that needs remote access.

Interface: Your WG tunnel

Source: Any (or peer /32)

Protocol: TCP (most cases; UDP for some)

Destination Port: High unique external (50000+ range, avoid overlap)

Translated IP: Service host in Exposed VLAN (e.g. 192.168.20.50)

Translated Port: Internal default

Recommended external ports (use high ports to obscure; change on conflict):

| Service | Internal Port | Suggested External Port | Protocol | Notes |
| --- | --- | --- | --- | --- |
| Immich | 2283 | 50283 | TCP | Web UI |
| Jellyfin | 8096 | 58096 | TCP | |
| Plex | 32400 | 52400 | TCP | Avoid if using direct sharing |
| Home Assistant | 8123 | 58123 | TCP | |
| Nextcloud | 443 | 50443 | TCP | HTTPS |
| Vaultwarden | 8080 | 58080 | TCP | |
| Proxmox | 8006 | 58006 | TCP | Web UI |
| TrueNAS | 443 | 50444 | TCP | HTTPS |
| Grafana | 3000 | 53000 | TCP | |
| Portainer | 9443 | 59443 | TCP | HTTPS |
| Pi-hole | 80 | 50080 | TCP | Web UI; DNS (UDP 53) separate if needed |
| Paperless-ngx | 8000 | 58000 | TCP | |
| Gitea | 3000 | 53001 | TCP | |
| Sonarr | 8989 | 58989 | TCP | |
| Radarr | 7878 | 57878 | TCP | |
| PostgreSQL | 5432 | Block (or 55432 if remote access is essential) | TCP | Avoid exposing databases |
| MySQL | 3306 | Block (or 53306) | TCP | Avoid |
| Redis | 6379 | Block | TCP | Internal only |
| SSH | 22 | Block (or 50022) | TCP | High risk |
| SMB | 445 | Block | TCP | High risk |
| RDP | 3389 | Block | TCP | High risk |

Important: Block databases (PostgreSQL, MySQL, Redis), SSH, SMB, RDP from remote unless critical. Use only internal or add auth.

Create only the DNAT rules you need. DNAT allows connections initiated from the tunnel; stateful tracking handles return traffic.

4. Firewall Rules (Policy Engine > Firewall)

Do not add a broad allow from External to Internal/Exposed.

Stateful inspection handles return traffic.

Isolation rules:

  • Policy: Exposed to Main LAN

    • Source Zone: Exposed
    • Dest Zone: Main/Internal
    • Action: Drop
    • High priority
  • If need Main access Exposed (management): Specific allow Main to Exposed, ports (e.g. SSH 22 from Main IPs)

No extra allow rule is needed for WG → Exposed (the DNAT rule triggers it).

5. IDS/IPS Exceptions

Enable IDS/IPS High.

Add exceptions:

  • Source: Peer IP /32
  • Dest Ports: All your external DNAT ports
  • Or Dest: Exposed subnet

6. My Recommendations

  • Put all services behind reverse proxy (Traefik/Nginx) in Exposed VLAN if possible. Expose only proxy ports.
  • No Traffic Routes on WG.
  • Test: from the Pangolin client, reach only the DNAT external ports → services; everything else should be blocked.
  • Monitor traffic on peer IP.

This keeps exposure minimal: only explicitly forwarded services are reachable via the tunnel. Adjust ports as needed.


Do we still need all the PostUp and PostDown lines in the config when using the UniFi WireGuard setup?


I don’t need them on my UniFi fiber router.

I managed to create one page using the WireGuard tunnel. This resulted in a noticeable performance difference. Following up on this success, I wanted to create another such page. Unfortunately, creating another tunnel also changed the IP address of the previous tunnel. Instead of

Address = 100.89.128.24/30
ListenPort = 51820
PrivateKey = ••••••••••••••••••••••••••••••••••••••••••••

the address became

Address = 100.89.128.28/30
ListenPort = 51820
PrivateKey = ••••••••••••••••••••••••••••••••••••••••••••

After that, both tunnels stopped working. After making a few changes, I managed to restore one of the tunnels.

Does anyone know how to run two parallel tunnels?


This might be a bug; I didn’t have this issue in previous versions. I will have to check.

Edit:
From your description, it sounds like “page” refers to a Pangolin resource (the exposed service endpoint), but you ended up creating a new site (tunnel) instead, which assigns a new WireGuard config and IP. Pangolin sites are meant for connecting distinct remote networks/peers, and each gets its own IP in the CGNAT range (like 100.89.x.x). Creating a new site doesn’t inherently change existing ones, but if you regenerated configs, deleted/recreated sites, or hit a bug (e.g., similar to known issues where restarts break connections), it could shift IPs or cause conflicts.

Recommended approach: use one Site for multiple Resources

You don’t need multiple tunnels for multiple resources. One Basic WireGuard site (tunnel) can handle many resources efficiently:

  • Stick to your restored tunnel (e.g., the working one with 100.89.128.24/30 or whichever is active).

  • For the second “page” (resource):

    1. In Pangolin dashboard, create a new resource (not a new site).
    2. Set its upstream to the same WireGuard peer IP (e.g., https://100.89.128.24:PORT), but use a unique PORT if it conflicts with the first resource.
  • This keeps everything over one stable tunnel, avoiding IP changes or port conflicts.

Since I changed it to the WireGuard tunnel, the streaming experience is much better.


This was really helpful! I have successfully set up Plex & Emby to use a basic WG tunnel instead of Newt! :grinning_face:

A question:
I also have Jellyfin as a 3rd backup. Emby & Jellyfin use the same HTTP port 8096. How should we adjust our DNAT lines in wg0.conf when that’s the case? Thanks!

EDIT: I’m using the docker setup, including the wireguard as a sidecar container.

Adjusting DNAT lines for Emby and Jellyfin on the same port (Docker sidecar setup)

Since Emby and Jellyfin both default to HTTP port 8096, you can’t forward the same incoming port on your WireGuard peer to both services without a conflict (iptables DNAT rules are port-specific). Assuming they run in separate Docker containers (possibly on the same host or different ones in your network), they likely have distinct internal IPs (e.g., via Docker networks or host IPs). The solution is to use a different incoming port on the WireGuard peer side for each service, then forward each to the respective service’s IP:8096.

This way:

  • Your WireGuard sidecar container acts as the gateway, handling forwarding for multiple services over the single tunnel.
  • In Pangolin, you create separate resources for each, targeting the same WireGuard peer IP but the different ports you assign.

Here is how to adjust your wg0.conf (inside the sidecar container). I will assume:

  • Emby target: <EMBY_IP>:8096 (e.g., 172.17.0.2:8096 or your Docker network IP).
  • Jellyfin target: <JELLYFIN_IP>:8096 (e.g., 172.17.0.3:8096).
  • Incoming ports on the peer: 8096 for Emby, 8097 for Jellyfin (you can choose any unused ports).
  • Your interface is eth0 (confirm with ip link inside the container; if using --network=host, it may match the host’s).
  • Keep MTU=1280 and the TCPMSS clamp rules as in the guide; they apply globally.

Add these to the [Interface] section (or expand if you already have similar rules). Include separate PostUp/PostDown blocks for each service:

[Interface]
Address = <WG_PEER_IP_CIDR>
PrivateKey = your_private_key_here
MTU = 1280
PostUp = sysctl -w net.ipv4.ip_forward=1

# Global TCPMSS fix for stalls (keep this once)
PostUp = iptables -t mangle -A FORWARD -i %i -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu
PostUp = iptables -t mangle -A FORWARD -o %i -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu

# Rules for Emby (incoming port 8096 -> <EMBY_IP>:8096)
PostUp = iptables -t nat -A PREROUTING -i %i -p tcp --dport 8096 -j DNAT --to-destination <EMBY_IP>:8096
PostUp = iptables -t nat -A POSTROUTING -o eth0 -p tcp -d <EMBY_IP> --dport 8096 -j MASQUERADE
PostUp = iptables -A FORWARD -i %i -o eth0 -p tcp -d <EMBY_IP> --dport 8096 -j ACCEPT
PostUp = iptables -A FORWARD -i eth0 -o %i -p tcp -s <EMBY_IP> --sport 8096 -j ACCEPT

# Rules for Jellyfin (incoming port 8097 -> <JELLYFIN_IP>:8096)
PostUp = iptables -t nat -A PREROUTING -i %i -p tcp --dport 8097 -j DNAT --to-destination <JELLYFIN_IP>:8096
PostUp = iptables -t nat -A POSTROUTING -o eth0 -p tcp -d <JELLYFIN_IP> --dport 8096 -j MASQUERADE
PostUp = iptables -A FORWARD -i %i -o eth0 -p tcp -d <JELLYFIN_IP> --dport 8096 -j ACCEPT
PostUp = iptables -A FORWARD -i eth0 -o %i -p tcp -s <JELLYFIN_IP> --sport 8096 -j ACCEPT

# PostDown cleanups
PostDown = iptables -t mangle -D FORWARD -i %i -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu
PostDown = iptables -t mangle -D FORWARD -o %i -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu

# Emby cleanups
PostDown = iptables -t nat -D PREROUTING -i %i -p tcp --dport 8096 -j DNAT --to-destination <EMBY_IP>:8096
PostDown = iptables -t nat -D POSTROUTING -o eth0 -p tcp -d <EMBY_IP> --dport 8096 -j MASQUERADE
PostDown = iptables -D FORWARD -i %i -o eth0 -p tcp -d <EMBY_IP> --dport 8096 -j ACCEPT
PostDown = iptables -D FORWARD -i eth0 -o %i -p tcp -s <EMBY_IP> --sport 8096 -j ACCEPT

# Jellyfin cleanups
PostDown = iptables -t nat -D PREROUTING -i %i -p tcp --dport 8097 -j DNAT --to-destination <JELLYFIN_IP>:8096
PostDown = iptables -t nat -D POSTROUTING -o eth0 -p tcp -d <JELLYFIN_IP> --dport 8096 -j MASQUERADE
PostDown = iptables -D FORWARD -i %i -o eth0 -p tcp -d <JELLYFIN_IP> --dport 8096 -j ACCEPT
PostDown = iptables -D FORWARD -i eth0 -o %i -p tcp -s <JELLYFIN_IP> --sport 8096 -j ACCEPT

Notes for the Docker sidecar setup:

  • Container networking: if your Emby/Jellyfin containers are on a shared Docker network with the WireGuard sidecar, use their container IPs (e.g., from docker inspect). If using --network=host for the sidecar, you can forward directly to host IPs/ports. If not, attach the sidecar to the same network as your services (e.g., via docker network connect or compose).
  • Capabilities: make sure your sidecar has NET_ADMIN and NET_RAW (as in the guide’s example). If iptables fails, check logs or run the commands manually inside the container (docker exec -it wg-peer bash).
  • Restart: after editing, restart the interface: wg-quick down wg0 && wg-quick up wg0 (inside the container), or restart the container if needed.
  • Pangolin resources: create/edit resources (use https:// only if the service itself serves TLS; both of these default to plain HTTP):
    • Emby: upstream http://<WG_PEER_IP>:8096
    • Jellyfin: upstream http://<WG_PEER_IP>:8097
  • Testing: verify the rules with iptables -t nat -S and iptables -S inside the container. Test access via Pangolin; traffic should route smoothly without port conflicts.
  • If both services are on the same IP (e.g., same container/host binding), change one service’s port in its own config (e.g., set Jellyfin to 8097 internally), then DNAT accordingly.

This scales if you have more services; just add more rule sets with unique incoming ports.
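
Since each extra service is the same rule pattern with different ports, you can generate the lines instead of hand-editing them. This is a hypothetical convenience script, not part of the guide; emit_rules and its arguments are made up for illustration:

```shell
#!/bin/sh
# Print the PostUp/PostDown rule set for one forwarded service, ready to
# paste into wg0.conf.
# Usage: emit_rules <incoming_port> <target_ip> <target_port> [out_iface]
emit_rules() {
  in_port=$1; target=$2; tport=$3; oif=${4:-eth0}
  for act in A D; do
    # -A rules go in PostUp, matching -D cleanups go in PostDown
    [ "$act" = A ] && hook=PostUp || hook=PostDown
    cat <<EOF
$hook = iptables -t nat -$act PREROUTING -i %i -p tcp --dport $in_port -j DNAT --to-destination $target:$tport
$hook = iptables -t nat -$act POSTROUTING -o $oif -p tcp -d $target --dport $tport -j MASQUERADE
$hook = iptables -$act FORWARD -i %i -o $oif -p tcp -d $target --dport $tport -j ACCEPT
$hook = iptables -$act FORWARD -i $oif -o %i -p tcp -s $target --sport $tport -j ACCEPT
EOF
  done
}

# Example: Jellyfin on a second incoming port
emit_rules 8097 172.19.0.37 8096
```

Run it once per service and append the output to the [Interface] section.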
I hope this helps.


Thank you so much for the detailed guide. It worked! :grinning_face:
All my containers are on a custom bridge network (172.19.0.0/16), so I can use each container’s IP. And I only needed to adjust the PREROUTING line to use port 8097 as you described.

PS:
For those using a similar Docker network setup, make sure to assign a dedicated IPv4 address to each container; otherwise, the IP can change when the container restarts or is recreated. To do that:

  1. Get the container’s current IP address using docker inspect, e.g. docker inspect jellyfin.
  2. Add that IP address to the docker compose file under networks.
---
services:
  jellyfin:
    image: lscr.io/linuxserver/jellyfin:latest
    container_name: jellyfin
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Etc/UTC
    volumes:
      - ./config:/config
      - /mnt/data/media:/data
#    ports:
#      - 8096:8096
#      - 8920:8920 #optional
#      - 7359:7359/udp #optional
#      - 1900:1900/udp #optional
    restart: unless-stopped
    devices:
      - /dev/dri:/dev/dri
    networks:
      mynetwork:
        ipv4_address: 172.19.0.37
        
networks:
  mynetwork:
    external: true
  3. Run docker compose up -d to recreate the container. Now you will have a persistent, dedicated IP address for the Jellyfin container to use in your wg0.conf.

You can keep adding services in a similar way.


Hi there. I set this up attempting to get this working for Jellyfin on my network. I have Jellyfin running in an LXC, and WireGuard running in a separate LXC all on Proxmox.

Here’s my WireGuard interface config:

[Interface]
Address = <WIREGUARD-IP>/30
ListenPort = 51820
PrivateKey = <PRIVATE-KEY>
MTU = 1280

PostUp = sysctl -w net.ipv4.ip_forward=1

#Rules for Jellyfin
PostUp = iptables -t nat -A PREROUTING -i %i -p tcp --dport 8096 -j DNAT --to-destination 10.30.30.25:8096
PostUp = iptables -t nat -A POSTROUTING -o eth0 -p tcp -d 10.30.30.25 --dport 8096 -j MASQUERADE
PostUp = iptables -A FORWARD -i %i -o eth0 -p tcp -d 10.30.30.25 --dport 8096 -j ACCEPT
PostUp = iptables -A FORWARD -i eth0 -o %i -p tcp -s 10.30.30.25 --sport 8096 -j ACCEPT

#This fixes the stall problem
PostUp = iptables -t mangle -A FORWARD -i %i -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu
PostUp = iptables -t mangle -A FORWARD -o %i -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu

#Cleanup rules for Jellyfin when WireGuard stops
PostDown = iptables -t nat -D PREROUTING -i %i -p tcp --dport 8096 -j DNAT --to-destination 10.30.30.25:8096
PostDown = iptables -t nat -D POSTROUTING -o eth0 -p tcp -d 10.30.30.25 --dport 8096 -j MASQUERADE
PostDown = iptables -D FORWARD -i %i -o eth0 -p tcp -d 10.30.30.25 --dport 8096 -j ACCEPT
PostDown = iptables -D FORWARD -i eth0 -o %i -p tcp -s 10.30.30.25 --sport 8096 -j ACCEPT

#PostDown cleanup
PostDown = iptables -t mangle -D FORWARD -i %i -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu
PostDown = iptables -t mangle -D FORWARD -o %i -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu

[Peer]
PublicKey = <PUBLIC-KEY>
AllowedIPs = <PANGOLIN-GERBIL-INTERFACE-IP>/32
Endpoint = pangolin.mydomain.com:51820
PersistentKeepalive = 5

After starting up the tunnel, I can see it online in the Pangolin dashboard. Unfortunately, after I change my Jellyfin resource in Pangolin to point to the WireGuard site with https://100.89.x.x:8096 I get an error of Gateway Timeout in my browser.

Any idea of what I need to change to get this working? Please let me know. Thanks!

Troubleshooting Gateway Timeout for Jellyfin via WireGuard in Pangolin @ThiccChimi

Your setup looks good overall: the WireGuard config follows the guide closely, with proper DNAT, MASQUERADE, and FORWARD rules, TCPMSS clamping for MTU issues, and cleanup in PostDown. Since the tunnel shows as online in the Pangolin dashboard, the handshake and basic connectivity are working. A Gateway Timeout (likely a 504) typically means the request reaches the Pangolin proxy but times out waiting for a response from the upstream (your WireGuard peer forwarding to Jellyfin). This points to a break in the chain: either the traffic isn’t reaching Jellyfin, isn’t returning properly, or there’s a protocol mismatch.

Here are the likely causes and step-by-step fixes, focused on your Proxmox LXC setup (WireGuard and Jellyfin in separate LXCs), starting with the quick checks.

1. Verify the protocol in the Pangolin resource upstream

  • Jellyfin defaults to HTTP on port 8096 (not HTTPS). If you set the upstream to https://100.89.x.x:8096 (where 100.89.x.x is your home WireGuard peer IP), Pangolin will attempt a TLS connection, but Jellyfin won’t respond with a valid handshake, causing a timeout.
  • Fix: change the upstream in your Jellyfin resource to http://100.89.x.x:8096. Save and test in your browser (in a private window). Pangolin handles the external TLS (via its domain), so the internal upstream can be plain HTTP.
  • If Jellyfin is configured for HTTPS (e.g., with a cert in its settings), keep it as https:// and ensure the cert is valid, or self-signed but trusted; this is rare for internal setups.

2. Check connectivity from the WireGuard LXC to Jellyfin

  • Your DNAT targets 10.30.30.25:8096, which I assume is the Jellyfin LXC’s IP on your Proxmox bridge (e.g., vmbr0). Confirm routing works outside the tunnel first.
  • Tests (run in the WireGuard LXC shell):
    • Ping Jellyfin: ping 10.30.30.25 (should succeed if on the same bridge/subnet).
    • Test Jellyfin access: curl -v http://10.30.30.25:8096 (should return something like the Jellyfin login page or an API response; a timeout or connection refused points to a Jellyfin firewall or a bind address that isn’t 0.0.0.0).
    • If curl fails: make sure Jellyfin is listening on all interfaces (check its network settings) and that no LXC/Proxmox firewall blocks it (e.g., ufw status or iptables -L in the Jellyfin LXC).
  • Proxmox-specific: LXCs share the host’s bridge. If they are on different bridges/VLANs, add routes or adjust networking. Also make sure the WireGuard LXC is privileged (or has nesting=1 and features: keyctl=1 in its config) so iptables and sysctl changes are allowed; unprivileged LXCs often block this. Edit /etc/pve/lxc/<id>.conf and restart the LXC if needed.

3. Validate the iptables rules and interface name

  • Your rules use eth0 as the outgoing interface. In a Proxmox LXC this is usually correct (the default interface name), but confirm it.
  • Checks (in the WireGuard LXC after wg-quick up):
    • Confirm the interface: ip link show (look for eth0 or similar; if it’s enpXs0 or something else, replace eth0 in all PostUp/PostDown rules).
    • List rules:
      • iptables -t nat -L -v -n (should show the PREROUTING DNAT and POSTROUTING MASQUERADE chains, with hit counters increasing if traffic is flowing).
      • iptables -L -v -n (the FORWARD chain should have the ACCEPT rules).
      • iptables -t mangle -L -v -n (TCPMSS clamps in FORWARD).
    • If the rules show no hits during access attempts, traffic isn’t reaching the peer; see the next steps.
  • Fix: if the interface name is wrong, update the config, then wg-quick down wg0 && wg-quick up wg0. For persistence, enable via systemd: systemctl enable --now wg-quick@wg0.

4. Inspect tunnel traffic and logs

  • Checks (in the WireGuard LXC):
    • Tunnel status: wg show (confirm the latest handshake is under 2 minutes old, and rx/tx bytes increase when you test access).
    • Packet capture: install tcpdump if needed (apt install tcpdump), then run tcpdump -i wg0 port 8096 while accessing via Pangolin (you should see incoming SYN packets from Pangolin’s IP).
    • Outgoing to Jellyfin: tcpdump -i eth0 port 8096 (should show forwarded traffic to 10.30.30.25).
  • Pangolin side (assuming you have SSH to the VPS):
    • Check the gerbil container: docker exec -it gerbil wg show (confirm a handshake with your home public key).
    • Logs: docker logs gerbil (look for errors related to your resource or tunnel).
    • Test from Pangolin: docker exec -it gerbil curl -v http://100.89.x.x:8096 (replace with your peer IP; if this times out, the issue is tunnel forwarding).
  • Home-side logs: check the Jellyfin logs (usually in /var/lib/jellyfin/log/) for incoming connections from the WireGuard LXC’s IP. Also check dmesg or journalctl for WireGuard errors.
  • If there is no traffic: increase PersistentKeepalive to 25 (a common value for NAT traversal). Ensure your home firewall allows UDP 51820 inbound/outbound.

5. Return Path and Routing

  • Your MASQUERADE hides the source IP, so return traffic from Jellyfin goes back to the WireGuard LXC’s eth0 IP, then gets un-NATed and sent through the tunnel.
  • Check: in the Jellyfin LXC, make sure the default route points to the Proxmox host or bridge gateway if needed (usually automatic). Run ip route in both LXCs to confirm 10.30.30.0/24 is local/reachable.
  • If Jellyfin is firewalled, allow traffic from the WireGuard LXC’s IP (not the tunnel IP).

6. Other Issues

  • MTU/fragmentation: your MTU=1280 and the clamps are good, but test by fetching a small endpoint like http://10.30.30.25:8096/system/info via curl in the WireGuard LXC.
  • Proxmox firewall: if enabled at the host/datacenter level, allow traffic between the LXCs and UDP 51820.
  • Pangolin resource config: double-check the resource is assigned to the correct WireGuard site. Test with a simple resource (e.g., another service on port 80) if you have one.
  • Bandwidth/ISP: if your VPS or home connection is slow, large Jellyfin requests (e.g., streaming) can time out; start with a small file test.
  • LXC capabilities: if iptables commands fail (check with journalctl -u wg-quick@wg0), add lxc.cgroup2.devices.allow: c 10:200 rwm and lxc.mount.entry: /dev/net/tun dev/net/tun none bind,create=file to the WireGuard LXC config.

Start with changing the upstream to http:// and the connectivity tests; that resolves most timeouts like this. If it still fails, share the output from the checks (e.g., wg show, iptables -t nat -L, curl results).


In Pangolin, is the IP you use for the Jellyfin resource the one from your [Interface] Address = line, or a different one?

Thanks for the thorough response!

I feel a bit silly because the fix was simply setting the upstream resource in Pangolin to http:// instead of https://

Thanks for the help!


I gave a detailed response for future readers. I knew HTTP was the issue, but I included and covered all the points anyway.

2 Likes