Docker on NAS: network architecture and best practices
Why Docker on a NAS
A NAS stopped being just a file server a long time ago. It's a media server. A photo manager. A download aggregator. A home automation dashboard. Install all that natively on Debian and you've got a mess of dependency conflicts everywhere. Docker solves that. Isolation. Each service in its own container, its own dependencies, zero conflicts.
But Docker on a NAS needs real thought: where do you store data? How do you isolate networks? How do you stop Docker from bypassing the firewall? Here's the architecture I built, and why each decision matters.
Installation: Docker CE, not docker.io
Rule one: don't install docker.io from the Debian repos. That package typically lags several versions behind upstream. Use the official Docker CE repository:
# Add the Docker CE repository
curl -fsSL https://download.docker.com/linux/debian/gpg | gpg --dearmor -o /usr/share/keyrings/docker.gpg
echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker.gpg] \
https://download.docker.com/linux/debian trixie stable" \
> /etc/apt/sources.list.d/docker.list
apt update
apt install docker-ce docker-ce-cli containerd.io docker-compose-plugin
The docker-compose-plugin pulled in above is Compose v2 (not the old Python docker-compose). The command becomes docker compose, no hyphen. Faster, actively maintained.
Data root on the RAID
By default Docker stores everything in /var/lib/docker. On my NAS that's a 4 GB USB stick. Definitely not where you want Docker images. Move the data root to the Btrfs RAID:
# /etc/docker/daemon.json (excerpt)
{
  "data-root": "/mnt/data/docker"
}
All layers, volumes, images, containers now live on /mnt/data/docker. The RAID handles the rest.
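If Docker was already running before the change, the existing data has to move too. A minimal migration sketch, assuming the paths from this article; stop the daemon first so nothing writes mid-copy:

```shell
# Stop the daemon, copy the old data root, restart with the new config.
systemctl stop docker
rsync -aHX /var/lib/docker/ /mnt/data/docker/   # -a perms, -H hardlinks, -X xattrs
systemctl start docker
docker info --format '{{ .DockerRootDir }}'     # should print /mnt/data/docker
```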
Hardened daemon.json
The daemon.json file is Docker's control panel. Here's my full configuration:
{
  "data-root": "/mnt/data/docker",
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  },
  "default-ulimits": {
    "nofile": {
      "Name": "nofile",
      "Hard": 65536,
      "Soft": 65536
    }
  },
  "userns-remap": "default",
  "no-new-privileges": true,
  "storage-driver": "overlay2",
  "live-restore": true
}
What this actually does:
- log-driver + log-opts: automatic log rotation (10 MB max, 3 files, then purge). Without this a chatty container fills your disk in days. Learned that one the hard way.
- default-ulimits: raises the open file limit. Intensive services need it.
- userns-remap: containers run with shifted UID mapping. Root inside the container is never root on the host. Subtle but it's real attack mitigation.
- no-new-privileges: processes inside containers can't gain additional privileges via setuid/setgid.
- live-restore: if the Docker daemon restarts, containers keep running. Useful for updates without downtime.
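After editing, restart the daemon and confirm the change took. One caveat worth knowing: a JSON syntax error in daemon.json prevents dockerd from starting at all, so validate first (python3 ships with Debian):

```shell
# Validate the JSON, then restart only if it parses.
python3 -m json.tool /etc/docker/daemon.json > /dev/null && systemctl restart docker
docker info --format '{{ .DockerRootDir }}'   # should print /mnt/data/docker
```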
Network architecture: isolation by stack
Instead of everything on the default bridge network, create separate Docker networks by service group:
docker network create proxy_net
docker network create arr_net
docker network create photos_net
docker network create monitoring_net
docker network create auth_net
$ docker network ls
NETWORK ID     NAME             DRIVER    SCOPE
a1b2c3d4e5f6   bridge           bridge    local
f6e5d4c3b2a1   host             host      local
1a2b3c4d5e6f   none             null      local
7f8e9d0c1b2a   proxy_net        bridge    local
3c4d5e6f7a8b   arr_net          bridge    local
9a0b1c2d3e4f   photos_net       bridge    local
5e6f7a8b9c0d   monitoring_net   bridge    local
2d3e4f5a6b7c   auth_net         bridge    local
Why bother? Blast radius. If a container gets compromised, it can only talk to its neighbors. Radarr can't see Immich. Grafana can't see Jellyfin. Only the reverse proxy (Traefik or Caddy) connects to multiple networks to route traffic. It's basically VLANs for containers.
A container can connect to multiple networks when needed. Sonarr lives on arr_net to talk to Prowlarr and downloaders, but also on proxy_net for reverse proxy access.
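In compose terms, that dual-homing looks like this. A trimmed sketch showing only the networking bits; the external networks are the ones created above:

```yaml
services:
  sonarr:
    image: lscr.io/linuxserver/sonarr:latest
    networks:
      - arr_net     # talks to Prowlarr and the download client
      - proxy_net   # reachable by the reverse proxy

networks:
  arr_net:
    external: true    # created with docker network create, not by this file
  proxy_net:
    external: true
```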
The UFW + Docker gotcha
Classic trap. Everyone hits it at least once (I hit it twice). Docker manipulates iptables directly, completely bypassing UFW. You block port 8080 in UFW all you want - if a container publishes it, it's open. Infuriating.
Proper fix has two parts:
1. The DOCKER-USER chain
Docker creates a special iptables chain DOCKER-USER evaluated before its own rules. That's where we add restrictions:
# /etc/ufw/after.rules (at the end of the file)
*filter
:DOCKER-USER - [0:0]
# Let replies to container-initiated connections back in
-A DOCKER-USER -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
# Only allow LAN access to Docker ports
-A DOCKER-USER -s 192.168.1.0/24 -j ACCEPT
-A DOCKER-USER -s 172.16.0.0/12 -j ACCEPT
-A DOCKER-USER -j DROP
COMMIT
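Reload UFW and check that the chain reads the way you expect. Rules are evaluated top to bottom; the final DROP only fires for traffic nothing above accepted:

```shell
ufw reload
iptables -nL DOCKER-USER --line-numbers
```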
2. Restriction in daemon.json
Also restrict the default listening address:
{
  "ip": "192.168.1.50"
}
Published ports now listen only on the NAS's LAN IP, not all interfaces.
Combine both? Containers are accessible only from the local network, regardless of Docker's iptables shenanigans.
The PUID/PGID pattern
Most of my containers come from linuxserver.io. These images use an elegant pattern: instead of running as root, they accept PUID and PGID environment variables:
environment:
  - PUID=1000
  - PGID=1000
  - TZ=Europe/Paris
1000:1000 maps to my nasadmin user on the host. Result? Files created by containers have correct ownership. Zero permission headaches. One simple thing that saves hours of debugging.
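To find the right values, a quick check on the host. nasadmin is this series' user; run it as whichever account should own the data:

```shell
# Numeric UID/GID of the current user; these are the PUID/PGID values.
PUID=$(id -u)
PGID=$(id -g)
echo "PUID=$PUID PGID=$PGID"
```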
Directory structure
I adopted a clean layout on the RAID:
/mnt/data/
├── apps/
│   ├── traefik/config/
│   ├── authelia/config/
│   ├── jellyfin/config/
│   ├── immich/config/
│   ├── sonarr/config/
│   ├── radarr/config/
│   ├── prowlarr/config/
│   └── grafana/config/
├── media/
│   ├── movies/
│   ├── tv/
│   └── music/
├── downloads/
│   ├── complete/
│   └── incomplete/
├── photos/
└── docker/          # Docker data-root
Each service gets its own directory under /mnt/data/apps/<service>/config. Shared data (media, downloads, photos) sits in common directories bind-mounted into containers that need them. Clean. Predictable.
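The whole layout can be stood up in one shot. A sketch; the chown assumes the 1000:1000 nasadmin mapping from the PUID/PGID section, and it runs as root:

```shell
# Create the app config dirs and shared data dirs, then hand them to 1000:1000.
BASE=/mnt/data
mkdir -p "$BASE"/apps/{traefik,authelia,jellyfin,immich,sonarr,radarr,prowlarr,grafana}/config \
         "$BASE"/media/{movies,tv,music} \
         "$BASE"/downloads/{complete,incomplete} \
         "$BASE"/photos
chown -R 1000:1000 "$BASE"/apps "$BASE"/media "$BASE"/downloads "$BASE"/photos
```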
Compose file organization
One docker-compose.yml per functional stack:
compose/
├── auth/         # Authelia, LLDAP
├── media/        # Jellyfin
├── arr/          # Sonarr, Radarr, Prowlarr, qBittorrent
├── photos/       # Immich
├── files/        # Syncthing, Samba
├── monitoring/   # Grafana, Prometheus, node-exporter
└── homelab/      # Homepage, Uptime Kuma
Here's an example for the monitoring stack:
# compose/monitoring/docker-compose.yml
services:
  grafana:
    image: grafana/grafana:latest
    container_name: grafana
    restart: unless-stopped
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=${GRAFANA_ADMIN_PASSWORD}
    volumes:
      - /mnt/data/apps/grafana/config:/var/lib/grafana
    networks:
      - monitoring_net
      - proxy_net

  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    restart: unless-stopped
    volumes:
      - /mnt/data/apps/prometheus/config:/etc/prometheus
      - prometheus_data:/prometheus
    networks:
      - monitoring_net

  node-exporter:
    image: prom/node-exporter:latest
    container_name: node-exporter
    restart: unless-stopped
    pid: host
    volumes:
      - /proc:/host/proc:ro
      - /sys:/host/sys:ro
      - /:/rootfs:ro
    command:
      - '--path.procfs=/host/proc'
      - '--path.sysfs=/host/sys'
      - '--path.rootfs=/rootfs'
    networks:
      - monitoring_net

networks:
  monitoring_net:
    external: true
  proxy_net:
    external: true

volumes:
  prometheus_data:
Grafana is the only one also on proxy_net because it needs reverse proxy access. Prometheus and node-exporter stay confined to monitoring_net. That's intentional.
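Deploying the stack is then the usual two commands, plus a .env file next to the compose file to supply GRAFANA_ADMIN_PASSWORD (compose reads .env automatically):

```shell
cd compose/monitoring
echo "GRAFANA_ADMIN_PASSWORD=change-me" > .env   # or manage the secret via Ansible
docker compose up -d
docker compose ps
```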
Ansible toggles
Each stack can be enabled or disabled via Ansible:
# group_vars/nas.yml
docker_apps_auth_enabled: true
docker_apps_media_enabled: true
docker_apps_arr_enabled: true
docker_apps_photos_enabled: true
docker_apps_files_enabled: true
docker_apps_monitoring_enabled: true
docker_apps_homelab_enabled: true
The Ansible role deploys only enabled stacks. Want to temporarily disable Immich? Set docker_apps_photos_enabled: false and rerun the playbook. Containers stop, compose files cleanly removed. No mess.
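I won't reproduce the whole role here, but the shape of the toggle logic is simple. A hypothetical task sketch, using the community.docker collection; the file path and compose_dir variable are illustrative, not the actual role:

```yaml
# roles/docker_apps/tasks/main.yml (sketch)
- name: Deploy monitoring stack
  community.docker.docker_compose_v2:
    project_src: "{{ compose_dir }}/monitoring"
    state: present
  when: docker_apps_monitoring_enabled | bool
```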
The payoff
Docker on a NAS doesn't happen by accident. Network isolation, daemon hardening, storage management on RAID, firewall coexistence - all require upfront work. Once in place though, adding a new service is: create a compose file, attach to the right network, flip an Ansible toggle. The NAS becomes a real service platform, maintainable, secure, and not a mess of tangled dependencies.
Debian NAS from scratch series — This article is part of a complete series on building a Debian NAS.
Previous: Installing Debian on a NAS in fully automated mode | Next: Firewall and Fail2ban: locking down NAS network access