Unraid Network Segmentation with Pangolin
graph TB
subgraph "Internet"
VPS[Pangolin on VPS<br/>Public IP]
end
subgraph "Home Network - 192.168.1.0/24 - VLAN 1"
RT[Router/Firewall]
MG[Management Network<br/>192.168.1.1/24]
end
subgraph "Unraid Server"
subgraph "Docker Networks"
direction TB
NET1[Management Network<br/>172.16.1.0/24]
NET2[Media Network<br/>172.16.2.0/24]
NET3[Development Network<br/>172.16.3.0/24]
NET4[Database Network<br/>172.16.4.0/24]
NEWT[Newt Container<br/>172.16.1.10]
subgraph "Media Services - VLAN 10"
JF[Jellyfin<br/>172.16.2.10]
PL[PhotoPrism<br/>172.16.2.11]
end
subgraph "Dev Services - VLAN 20"
CL[Coolify<br/>172.16.3.10]
VS[VSCode<br/>172.16.3.11]
end
subgraph "Databases - VLAN 30"
DB1[MariaDB<br/>172.16.4.10]
DB2[PostgreSQL<br/>172.16.4.11]
end
end
end
VPS <--> RT
RT --> MG
MG --> NET1
NET1 --> NEWT
NEWT --> NET2
NEWT --> NET3
NEWT --> NET4
NET2 --> JF
NET2 --> PL
NET3 --> CL
NET3 --> VS
NET4 --> DB1
NET4 --> DB2
classDef network fill:#f9f,stroke:#333
class NET1,NET2,NET3,NET4 network
TrueNAS Network Segmentation with Pangolin
This diagram reflects the older Kubernetes-based deployment; a Docker-based setup is covered at the end of this guide.
graph TB
subgraph "Internet"
VPS[Pangolin on VPS<br/>Public IP]
end
subgraph "Home Network - 192.168.1.0/24 - VLAN 1"
RT[Router/Firewall]
MG[Management Network<br/>192.168.1.1/24]
end
subgraph "TrueNAS Scale"
subgraph "K8s Networks"
direction TB
NET1[Management Network<br/>10.10.1.0/24]
NET2[Application Network<br/>10.10.2.0/24]
NET3[Storage Network<br/>10.10.3.0/24]
NET4[Backup Network<br/>10.10.4.0/24]
NEWT[Newt Pod<br/>10.10.1.10]
subgraph "Apps - VLAN 40"
NC[Nextcloud<br/>10.10.2.10]
HA[Home Assistant<br/>10.10.2.11]
end
subgraph "Storage - VLAN 50"
NFS[NFS Share<br/>10.10.3.10]
iSCSI[iSCSI Target<br/>10.10.3.11]
end
subgraph "Backup - VLAN 60"
BK1[Backup Target<br/>10.10.4.10]
BK2[Replication<br/>10.10.4.11]
end
end
end
VPS <--> RT
RT --> MG
MG --> NET1
NET1 --> NEWT
NEWT --> NET2
NEWT --> NET3
NEWT --> NET4
NET2 --> NC
NET2 --> HA
NET3 --> NFS
NET3 --> iSCSI
NET4 --> BK1
NET4 --> BK2
classDef network fill:#f9f,stroke:#333
class NET1,NET2,NET3,NET4 network
Proxmox Network Segmentation with Pangolin
graph TB
subgraph "Internet"
VPS[Pangolin on VPS<br/>Public IP]
end
subgraph "Home Network - 192.168.1.0/24 - VLAN 1"
RT[Router/Firewall]
MG[Management Network<br/>192.168.1.1/24]
end
subgraph "Proxmox Node"
subgraph "Virtual Networks"
direction TB
NET1[Management Bridge<br/>10.0.1.0/24]
NET2[Application Bridge<br/>10.0.2.0/24]
NET3[Database Bridge<br/>10.0.3.0/24]
NET4[Backup Bridge<br/>10.0.4.0/24]
NEWT[Newt LXC<br/>10.0.1.10]
subgraph "Apps VMs - VLAN 70"
GT[Gitea VM<br/>10.0.2.10]
DZ[Dozzle VM<br/>10.0.2.11]
end
subgraph "DBs - VLAN 80"
GF[Grafana LXC<br/>10.0.3.10]
PR[Prometheus LXC<br/>10.0.3.11]
end
subgraph "Backup - VLAN 90"
BK1[Backup VM<br/>10.0.4.10]
BK2[Archive VM<br/>10.0.4.11]
end
end
end
VPS <--> RT
RT --> MG
MG --> NET1
NET1 --> NEWT
NEWT --> NET2
NEWT --> NET3
NEWT --> NET4
NET2 --> GT
NET2 --> DZ
NET3 --> GF
NET3 --> PR
NET4 --> BK1
NET4 --> BK2
classDef network fill:#f9f,stroke:#333
class NET1,NET2,NET3,NET4 network
The diagrams above show the network segmentation for each hypervisor platform. The sections below explain how to implement each setup, with an emphasis on service isolation and privacy.
Unraid Network Implementation
The Unraid setup uses Docker networks and VLANs to create isolated segments:
- Management Network (VLAN 1):
# Create management network
docker network create \
--driver=bridge \
--subnet=172.16.1.0/24 \
--gateway=172.16.1.1 \
--opt "com.docker.network.bridge.name"="management_net" \
management_net
- Media Network (VLAN 10):
# Create media services network
docker network create \
--driver=bridge \
--subnet=172.16.2.0/24 \
--gateway=172.16.2.1 \
--opt "com.docker.network.bridge.name"="media_net" \
media_net
# Configure VLAN tagging (creates a tagged subinterface on top of the bridge)
ip link add link media_net name media_net.10 type vlan id 10
ip link set media_net.10 up
- Development Network (VLAN 20):
# Create development network
docker network create \
--driver=bridge \
--subnet=172.16.3.0/24 \
--gateway=172.16.3.1 \
--opt "com.docker.network.bridge.name"="dev_net" \
dev_net
# Configure VLAN tagging (creates a tagged subinterface on top of the bridge)
ip link add link dev_net name dev_net.20 type vlan id 20
ip link set dev_net.20 up
- Firewall Rules for Unraid:
# Allow only necessary traffic between networks
iptables -A FORWARD -i media_net -o dev_net -j DROP
iptables -A FORWARD -i media_net -o management_net -j DROP
iptables -A FORWARD -i dev_net -o media_net -j DROP
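The network-creation commands above all follow the same pattern, so a small helper can generate them consistently for every segment. This is a sketch with hypothetical function names; the subnets and bridge names are taken from the diagram:

```shell
#!/bin/sh
# Emit the `docker network create` command for one isolated segment.
# Arguments: network name, /24 prefix (e.g. 172.16.2), bridge name.
make_net_cmd() {
    name="$1"; prefix="$2"; bridge="$3"
    printf 'docker network create --driver=bridge --subnet=%s.0/24 --gateway=%s.1 --opt "com.docker.network.bridge.name"="%s" %s\n' \
        "$prefix" "$prefix" "$bridge" "$name"
}

# The four Unraid segments from the diagram:
make_net_cmd management_net 172.16.1 management_net
make_net_cmd media_net      172.16.2 media_net
make_net_cmd dev_net        172.16.3 dev_net
make_net_cmd database_net   172.16.4 db_net
```

Pinning the bridge name with `com.docker.network.bridge.name` matters here: the iptables rules above match on interface names, and without that option Docker assigns a random `br-<hash>` name.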
TrueNAS Scale Network Implementation (Kubernetes)
This section covers the older Kubernetes-based deployment; a Docker-based setup is described at the end of this guide.
TrueNAS Scale uses Kubernetes networking with network policies:
- Create Network Policies:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: app-isolation
spec:
  podSelector:
    matchLabels:
      role: application
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: newt
  egress:
    - to:
        - podSelector:
            matchLabels:
              role: storage
- Configure VLANs in TrueNAS:
# Create VLAN interfaces (Debian-style /etc/network/interfaces syntax;
# the VLAN ID comes from the .40/.50 interface suffix)
auto enp1s0.40
iface enp1s0.40 inet static
    vlan-raw-device enp1s0
    address 10.10.2.1
    netmask 255.255.255.0

auto enp1s0.50
iface enp1s0.50 inet static
    vlan-raw-device enp1s0
    address 10.10.3.1
    netmask 255.255.255.0
- Storage Network Isolation:
# Create dedicated storage network configuration for the NFS service
apiVersion: v1
kind: ConfigMap
metadata:
  name: storage-config
data:
  nfs.conf: |
    [nfs]
    bind=10.10.3.10
    ports=2049
    sec=sys
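The same NetworkPolicy shape repeats for each pair of roles (newt → application, application → storage), so it can be templated. A minimal sketch, with an illustrative `gen_policy` helper; the generated YAML would be applied with `kubectl apply -f -`:

```shell
#!/bin/sh
# Generate a NetworkPolicy that only allows pods labelled role=$2
# to reach pods labelled role=$1 (names are illustrative).
gen_policy() {
    target_role="$1"; ingress_role="$2"
    cat <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: ${target_role}-isolation
spec:
  podSelector:
    matchLabels:
      role: ${target_role}
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: ${ingress_role}
EOF
}

gen_policy application newt        # apps reachable only from the Newt pod
gen_policy storage application     # storage reachable only from apps
```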
Proxmox Network Implementation
Proxmox uses Linux bridges and VLANs for network isolation:
- Create Network Bridges:
# Management Bridge
echo "auto vmbr0
iface vmbr0 inet static
    address 10.0.1.1/24
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0" >> /etc/network/interfaces
# Application Bridge
echo "auto vmbr1
iface vmbr1 inet static
    address 10.0.2.1/24
    bridge-ports none
    bridge-stp off
    bridge-fd 0
    post-up echo 1 > /proc/sys/net/ipv4/ip_forward" >> /etc/network/interfaces
- Configure VLAN Tagging:
# Create VLAN interfaces for each bridge
echo "auto vmbr0.70
iface vmbr0.70 inet manual
    vlan-raw-device vmbr0" >> /etc/network/interfaces
echo "auto vmbr1.80
iface vmbr1.80 inet manual
    vlan-raw-device vmbr1" >> /etc/network/interfaces
- LXC Container Configuration:
# Configure network isolation for LXC containers
pct set 100 -net0 name=eth0,bridge=vmbr0,ip=10.0.1.10/24,gw=10.0.1.1,firewall=1
pct set 101 -net0 name=eth0,bridge=vmbr1,ip=10.0.2.10/24,gw=10.0.2.1,firewall=1
- Firewall Rules:
# Create firewall rules for VM and container isolation
pvesh create /cluster/firewall/groups --group management
pvesh create /cluster/firewall/rules --type group --action management --enable 1
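The `pct set` lines above follow the addressing plan from the diagram: bridge `vmbrN` serves `10.0.<N+1>.0/24` with `.1` as gateway. A sketch of a helper (hypothetical name) that derives the full command from just the VMID, bridge, and host number, which avoids typos as the number of containers grows:

```shell
#!/bin/sh
# Derive the `pct set` network line for an LXC container.
# Arguments: VMID, bridge (vmbrN), host number within the subnet.
pct_net_cmd() {
    vmid="$1"; bridge="$2"; host="$3"
    n="${bridge#vmbr}"                 # vmbr1 -> 1
    subnet="10.0.$((n + 1))"           # vmbr0 -> 10.0.1, vmbr1 -> 10.0.2
    printf 'pct set %s -net0 name=eth0,bridge=%s,ip=%s.%s/24,gw=%s.1,firewall=1\n' \
        "$vmid" "$bridge" "$subnet" "$host" "$subnet"
}

pct_net_cmd 100 vmbr0 10   # Newt LXC on the management bridge
pct_net_cmd 101 vmbr1 10   # app container on the application bridge
```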
Common Security Considerations for All Platforms:
- Implement MAC address filtering:
# Example for management network
ebtables -A FORWARD -i eth0 -s ! aa:bb:cc:dd:ee:ff -j DROP
- Enable strict reverse path filtering:
# Enable reverse path filtering
sysctl -w net.ipv4.conf.all.rp_filter=1
sysctl -w net.ipv4.conf.default.rp_filter=1
- Disable unnecessary network services:
# Disable IPv6 if not needed
echo "net.ipv6.conf.all.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.default.disable_ipv6 = 1" >> /etc/sysctl.conf
- Monitor network traffic:
# Install and configure network monitoring
apt install nethogs iftop
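A mistyped MAC in an ebtables allow-list rule silently matches nothing, so it is worth validating addresses before generating the rules. A small sketch (function names are illustrative):

```shell
#!/bin/sh
# Validate a MAC address before using it in an ebtables allow-list rule.
valid_mac() {
    echo "$1" | grep -Eq '^([0-9a-fA-F]{2}:){5}[0-9a-fA-F]{2}$'
}

# Emit a rule that drops frames on eth0 not sourced from the given MAC.
allow_mac_rule() {
    if valid_mac "$1"; then
        printf 'ebtables -A FORWARD -i eth0 -s ! %s -j DROP\n' "$1"
    else
        echo "invalid MAC: $1" >&2
        return 1
    fi
}

allow_mac_rule aa:bb:cc:dd:ee:ff
```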
These network configurations ensure that:
- Each service runs in its own isolated network segment
- Inter-service communication is strictly controlled
- Management access is separated from application traffic
- Storage networks are properly isolated
- Backup traffic uses dedicated networks
- All traffic between networks must pass through Pangolin’s authentication
The key to maintaining privacy is ensuring that:
- Services can only communicate through defined channels
- Each network segment has its own VLAN
- Firewall rules enforce the isolation
- Monitoring is in place to detect any anomalies
TrueNAS Scale Network Segmentation with Pangolin (Docker)
graph TB
subgraph "Internet"
VPS[Pangolin on VPS<br/>Public IP]
end
subgraph "Home Network - 192.168.1.0/24 - VLAN 1"
RT[Router/Firewall]
MG[Management Network<br/>192.168.1.1/24]
end
subgraph "TrueNAS Scale"
subgraph "Docker Networks"
direction TB
NET1[Management Network<br/>172.20.1.0/24]
NET2[Application Network<br/>172.20.2.0/24]
NET3[Storage Network<br/>172.20.3.0/24]
NET4[Database Network<br/>172.20.4.0/24]
NEWT[Newt Container<br/>172.20.1.10]
subgraph "Apps - VLAN 40"
NC[Nextcloud<br/>172.20.2.10]
HA[Home Assistant<br/>172.20.2.11]
end
subgraph "Storage Services - VLAN 50"
SMB[SMB Shares<br/>172.20.3.10]
NFS[NFS Services<br/>172.20.3.11]
end
subgraph "Database Services - VLAN 60"
DB1[MariaDB<br/>172.20.4.10]
DB2[Redis<br/>172.20.4.11]
end
end
subgraph "ZFS Datasets"
DS1[Apps Dataset]
DS2[Media Dataset]
DS3[Backup Dataset]
end
end
VPS <--> RT
RT --> MG
MG --> NET1
NET1 --> NEWT
NEWT --> NET2
NEWT --> NET3
NEWT --> NET4
NET2 --> NC
NET2 --> HA
NET3 --> SMB
NET3 --> NFS
NET4 --> DB1
NET4 --> DB2
NC --> DS1
HA --> DS1
SMB --> DS2
NFS --> DS2
DB1 --> DS3
DB2 --> DS3
classDef network fill:#f9f,stroke:#333
class NET1,NET2,NET3,NET4 network
classDef storage fill:#ffd,stroke:#333
class DS1,DS2,DS3 storage
TrueNAS Scale Docker Network Implementation
TrueNAS Scale now uses Docker networks and VLANs for service isolation. Here’s how to set it up:
- First, create isolated Docker networks:
# Create management network for Newt
docker network create \
--driver=bridge \
--subnet=172.20.1.0/24 \
--gateway=172.20.1.1 \
--opt "com.docker.network.bridge.name"="mgmt_net" \
management_net
# Create application network
docker network create \
--driver=bridge \
--subnet=172.20.2.0/24 \
--gateway=172.20.2.1 \
--opt "com.docker.network.bridge.name"="app_net" \
application_net
# Create storage network
docker network create \
--driver=bridge \
--subnet=172.20.3.0/24 \
--gateway=172.20.3.1 \
--opt "com.docker.network.bridge.name"="storage_net" \
storage_net
# Create database network
docker network create \
--driver=bridge \
--subnet=172.20.4.0/24 \
--gateway=172.20.4.1 \
--opt "com.docker.network.bridge.name"="db_net" \
database_net
- Configure VLAN tagging for network isolation:
# TrueNAS Scale is Debian-based and manages interfaces through its own
# middleware; create VLANs in the UI (Network -> Interfaces -> Add -> VLAN)
# so they persist. Equivalent ad-hoc commands from the shell:
ip link add link igb0 name vlan40 type vlan id 40   # Application VLAN
ip addr add 172.20.2.1/24 dev vlan40
ip link set vlan40 up

ip link add link igb0 name vlan50 type vlan id 50   # Storage VLAN
ip addr add 172.20.3.1/24 dev vlan50
ip link set vlan50 up
- Set up Docker Compose for your applications with proper network isolation:
version: "3.8"
services:
  newt:
    image: fosrl/newt
    networks:
      - management_net
    environment:
      - PANGOLIN_ENDPOINT=https://yourdomain.com
      - NEWT_ID=your_id
      - NEWT_SECRET=your_secret
    restart: unless-stopped
  nextcloud:
    image: nextcloud
    networks:
      - application_net
      - database_net
    volumes:
      - nextcloud_data:/var/www/html
    depends_on:
      - db
  homeassistant:
    image: homeassistant/home-assistant
    networks:
      - application_net
    volumes:
      - hass_config:/config
    environment:
      - TZ=YOUR_TIMEZONE
  db:
    image: mariadb
    networks:
      - database_net
    volumes:
      - db_data:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=secure_password
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
      - MYSQL_PASSWORD=secure_password
networks:
  management_net:
    external: true
  application_net:
    external: true
  database_net:
    external: true
volumes:
  nextcloud_data:
    driver: local
    driver_opts:
      type: zfs
      device: tank/apps/nextcloud
  hass_config:
    driver: local
    driver_opts:
      type: zfs
      device: tank/apps/homeassistant
  db_data:
    driver: local
    driver_opts:
      type: zfs
      device: tank/apps/database
- Implement network security rules:
# Create firewall rules for network isolation. TrueNAS Scale is
# Linux-based, so use iptables (ipfw is FreeBSD/TrueNAS CORE only).
# Insert the allow rule first so it matches before the drops:
iptables -I DOCKER-USER -s 172.20.2.0/24 -d 172.20.4.0/24 -p tcp --dport 3306 -j ACCEPT
iptables -A DOCKER-USER -d 172.20.3.0/24 ! -i storage_net -j DROP
iptables -A DOCKER-USER -d 172.20.4.0/24 ! -i db_net -j DROP
- Configure ZFS datasets with proper permissions:
# Create datasets with appropriate permissions
zfs create -o mountpoint=/mnt/tank/apps tank/apps
zfs create -o mountpoint=/mnt/tank/apps/nextcloud tank/apps/nextcloud
zfs create -o mountpoint=/mnt/tank/apps/homeassistant tank/apps/homeassistant
# Set appropriate permissions
chmod 770 /mnt/tank/apps/nextcloud
chown www-data:www-data /mnt/tank/apps/nextcloud
- Enable monitoring for network security:
# Install monitoring tools
# TrueNAS Scale is Debian-based, so use apt (pkg is FreeBSD/TrueNAS CORE)
apt install -y iftop tcpdump
# Create monitoring script
cat > /root/monitor-networks.sh << EOF
#!/bin/bash
iftop -i igb0 -F 172.20.0.0/16 -t > /var/log/network-usage.log
EOF
chmod +x /root/monitor-networks.sh
- Set up automatic backups for Docker volumes:
# Create backup script for Docker volumes (quote the heredoc delimiter so
# $DATE expands when the script runs, not when it is written)
cat > /root/backup-volumes.sh << 'EOF'
#!/bin/bash
DATE=$(date +%Y%m%d)
zfs snapshot tank/apps@backup-$DATE
zfs send tank/apps@backup-$DATE | gzip > /mnt/backups/apps-$DATE.gz
EOF
chmod +x /root/backup-volumes.sh
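The backup script above creates a new `backup-YYYYMMDD` snapshot each run, so old snapshots accumulate unless something prunes them. A minimal retention sketch (function name and snapshot names are illustrative); each printed name could be fed to `zfs destroy` from a cron job:

```shell
#!/bin/sh
# Print snapshots named <dataset>@backup-YYYYMMDD whose date is older
# than the cutoff date, so they can be destroyed by a cleanup job.
prune_candidates() {
    cutoff="$1"; shift
    for snap in "$@"; do
        d="${snap##*backup-}"            # extract the YYYYMMDD suffix
        if [ "$d" -lt "$cutoff" ]; then
            echo "$snap"                 # candidate for: zfs destroy
        fi
    done
}

prune_candidates 20240101 \
    tank/apps@backup-20231201 \
    tank/apps@backup-20240115
```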
This updated TrueNAS Scale configuration provides:
- Isolated network segments for different types of services
- Direct integration with ZFS for persistent storage
- Proper backup capabilities for Docker volumes
- Network monitoring and security controls
- VLAN separation for different service types
- Efficient resource management through Docker
The key differences from the Kubernetes version are:
- Direct Docker network configuration instead of K8s networking
- ZFS integration through Docker volumes
- Simpler deployment and management through Docker Compose
- More straightforward backup procedures
- Direct access to host networking features
- Easier resource allocation and monitoring