NAS monitoring: watching disks, services and logs at a glance
Why monitor your NAS?
A NAS running 24/7 means things will fail silently. Services crash. Disks degrade. Docker logs pile up unread. Then one day a disk starts showing bad sectors, a service dies without warning, or a container enters a restart loop nobody notices.
On my TerraMaster F4-424 running Debian 13, I deployed four tools to monitor everything properly. Each one has a specific job, no redundancy. Everything runs as Docker containers on an isolated network.
The monitoring architecture
Quick overview first:
- Scrutiny (port 9998): SMART disk health
- Uptime Kuma (port 3001): service availability
- Dozzle (port 9999): real-time Docker logs
- Homepage (port 3010): unified dashboard
All of them sit on a dedicated monitoring_net network, with configurations under /mnt/data/apps/monitoring/<service>/.
Scrutiny: disk health (don't skip this)
This is the critical tool. A disk showing degradation is an alert you absolutely cannot miss. Scrutiny collects SMART data from all drives and displays them with historical trends.
The metrics that matter:
- Temperature: overheating disk = accelerated aging
- Reallocated Sectors: reallocated sectors = physical degradation happening
- Current Pending Sectors: sectors awaiting reallocation = imminent problems
- Read/Write errors: errors climbing = disaster incoming
Scrutiny works in two parts: a collector that queries disks periodically, and a web UI to visualize results and trends.
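The numbers Scrutiny trends come from smartmontools. As a rough sketch of what the collector reads, here is how one attribute's raw value can be pulled out of a `smartctl -A` attribute table (the sample line is hard-coded so the snippet is self-contained; on the NAS you would pipe real `smartctl -A /dev/sda` output instead, and the device name is just an example):

```shell
# One line of `smartctl -A` output (illustrative values, not real data):
sample='  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       0'

# Field 2 is the attribute name, the last field is its raw value.
echo "$sample" | awk '{print $2, "raw =", $NF}'
# Reallocated_Sector_Ct raw = 0
```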
Docker config is a bit special since Scrutiny needs direct access to physical drives:
```yaml
# docker-compose.yml (monitoring excerpt)
services:
  scrutiny:
    image: ghcr.io/analogj/scrutiny:master-omnibus
    container_name: scrutiny
    restart: unless-stopped
    ports:
      - '9998:8080'
    volumes:
      - /mnt/data/apps/monitoring/scrutiny/config:/opt/scrutiny/config
      - /mnt/data/apps/monitoring/scrutiny/influxdb:/opt/scrutiny/influxdb
      - /run/udev:/run/udev:ro
    devices:
      - /dev/sda
      - /dev/sdb
      - /dev/sdc
      - /dev/sdd
    cap_add:
      - SYS_RAWIO
    networks:
      - monitoring_net
```
cap_add: SYS_RAWIO is mandatory. Without it, the collector can't query drives via SMART. Mounting /run/udev read-only ensures proper disk identification.
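A quick way to confirm the capability actually works is to run smartctl from inside the container. This sketch guards against docker or the container being absent, and /dev/sda is just an example device:

```shell
# Sanity check: can the collector actually reach a drive via SMART?
if command -v docker >/dev/null 2>&1 \
   && docker ps --format '{{.Names}}' 2>/dev/null | grep -qx scrutiny; then
  # Prints the drive's identity block if SMART access works.
  docker exec scrutiny smartctl -i /dev/sda
else
  echo "scrutiny container not running here"
fi
```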
Uptime Kuma: the alert that actually helps
Uptime Kuma is my go-to tool for monitoring service availability. HTTP, HTTPS, TCP, Ping, DNS - everything. Every NAS service gets its own monitor.
```yaml
  uptime-kuma:
    image: louislam/uptime-kuma:1
    container_name: uptime-kuma
    restart: unless-stopped
    ports:
      - '3001:3001'
    volumes:
      - /mnt/data/apps/monitoring/uptime-kuma/data:/app/data
    networks:
      - monitoring_net
```
What I actually monitor:
- HTTP checks: Jellyfin, Immich, Home Assistant, Sonarr, Radarr, etc.
- TCP checks: Samba (445), SSH (22), NFS (2049)
- Ping checks: the NAS itself, the internet router, network switches
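When Uptime Kuma flags a TCP monitor, the same check is easy to reproduce by hand with nothing but bash's /dev/tcp pseudo-device. A rough equivalent, not what Uptime Kuma runs internally (the 192.168.1.50 address matches the examples in this article; adjust to your network):

```shell
# tcp_check HOST PORT - report up if the port accepts a TCP connection.
tcp_check() {
  local host=$1 port=$2
  if timeout 2 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "${host}:${port} up"
  else
    echo "${host}:${port} down"
  fi
}

tcp_check 192.168.1.50 445   # Samba
tcp_check 192.168.1.50 22    # SSH
tcp_check 192.168.1.50 2049  # NFS
```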
Beyond just detecting outages, the response time graphs are gold. A gradual slowdown can reveal disk, network, or memory issues before the service actually crashes.
Notifications? Uptime Kuma supports email, Telegram, Discord, Slack, Gotify, ntfy... I use Telegram to get alerts straight to my phone.
You can also create status pages - public or internal - handy for sharing service status with household members.
Dozzle: debug in seconds
Container behaving weirdly? Check the logs. Dozzle provides a web interface to stream all container logs in real time, no SSH required.
```yaml
  dozzle:
    image: amir20/dozzle:latest
    container_name: dozzle
    restart: unless-stopped
    ports:
      - '9999:8080'
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    networks:
      - monitoring_net
```
Dozzle is intentionally minimal: no database, no persistence, no background collection. It reads from the Docker socket and displays logs on the fly. That's what keeps it lightweight - just a few MB of RAM.
Features I use daily:
- Search and filtering within a container's logs
- Multi-container view to correlate events across services
- Auto-scroll with pause when scrolling through history
- Regex support for fine-grained filtering
Read-only Docker socket access is sufficient. Dozzle only needs to read, not control containers.
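That read-only socket can also be queried by hand, which is roughly what Dozzle does behind its UI: the Docker Engine API exposes a logs endpoint per container. A sketch (the container name `jellyfin` is just an example, and the block prints a notice when no socket is available):

```shell
SOCK=/var/run/docker.sock
if [ -S "$SOCK" ]; then
  # Docker Engine API: last 20 stdout/stderr log lines of one container.
  curl --silent --unix-socket "$SOCK" \
    "http://localhost/containers/jellyfin/logs?stdout=1&stderr=1&tail=20" \
    --output -
else
  echo "no docker socket at $SOCK"
fi
```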
Homepage: the control center you want
Remembering twenty service ports becomes a headache. Homepage provides a YAML-configurable dashboard.
```yaml
  homepage:
    image: ghcr.io/gethomepage/homepage:latest
    container_name: homepage
    restart: unless-stopped
    ports:
      - '3010:3000'
    volumes:
      - /mnt/data/apps/monitoring/homepage/config:/app/config
      - /var/run/docker.sock:/var/run/docker.sock:ro
    networks:
      - monitoring_net
```
Configuration is done through YAML files in the config directory. Here's an excerpt from services.yaml:
```yaml
# /mnt/data/apps/monitoring/homepage/config/services.yaml
- Media:
    - Jellyfin:
        href: http://192.168.1.50:8096
        icon: jellyfin.svg
        description: Media server
        widget:
          type: jellyfin
          url: http://192.168.1.50:8096
          key: '{{HOMEPAGE_VAR_JELLYFIN_KEY}}'
    - Immich:
        href: http://192.168.1.50:2283
        icon: immich.svg
        description: Photos and videos
- Downloads:
    - qBittorrent:
        href: http://192.168.1.50:8080
        icon: qbittorrent.svg
        description: BitTorrent client
        widget:
          type: qbittorrent
          url: http://192.168.1.50:8080
          username: admin
          password: '{{HOMEPAGE_VAR_QBIT_PASSWORD}}'
    - Sonarr:
        href: http://192.168.1.50:8989
        icon: sonarr.svg
        description: TV series management
- Monitoring:
    - Scrutiny:
        href: http://192.168.1.50:9998
        icon: scrutiny.svg
        description: Disk health
    - Uptime Kuma:
        href: http://192.168.1.50:3001
        icon: uptime-kuma.svg
        description: Service availability
```
Homepage supports widgets for many services: Jellyfin displays active sessions, qBittorrent shows current downloads. The dashboard becomes a real control center.
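The `{{HOMEPAGE_VAR_*}}` placeholders above are resolved from environment variables with the HOMEPAGE_VAR_ prefix passed to the container, so API keys and passwords stay out of the config files. A compose excerpt (the values here are placeholders to replace with real credentials):

```yaml
# docker-compose.yml (homepage excerpt) - placeholder values
  homepage:
    environment:
      - HOMEPAGE_VAR_JELLYFIN_KEY=replace-with-jellyfin-api-key
      - HOMEPAGE_VAR_QBIT_PASSWORD=replace-with-qbittorrent-password
```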
The isolated Docker network
All services share a dedicated network:
```yaml
networks:
  monitoring_net:
    name: monitoring_net
    driver: bridge
```
Isolation provides two benefits: monitoring containers communicate with each other without going through published ports, and you maintain clear separation from other stacks (media, downloads, home automation).
Cockpit: the system console companion
Alongside the Docker stack, I have Cockpit (port 9090) installed natively on Debian. It's a web-based administration console providing system metrics the Docker tools don't cover: CPU, RAM, swap, disk space, systemd services, and even a web terminal.
```shell
sudo apt install cockpit
```
It pairs well: Cockpit handles the host system, the four Docker tools handle services and disks.
Final thoughts
Monitoring isn't a luxury on a NAS - it's insurance. Scrutiny caught an abnormal temperature spike before it became critical. Uptime Kuma alerts me within seconds when a service goes down. Dozzle saves me serious time debugging. Homepage gives me a single entry point to the entire ecosystem. Four lightweight tools, each with a clear purpose, deployed in minutes via Docker Compose. That's the beauty of it.
Debian NAS from scratch series — This article is part of a complete series on building a Debian NAS.
Previous: Managing photos with self-hosted Immich