I run a small fleet of Linux servers — bare metal, VPS, and a couple of container hosts — that quietly power side projects and a couple of paid products. Nothing exotic, but real customers depend on them being up.
Production rarely fails all at once. It usually creeps: a service starts swapping under load, memory ticks up by a few megabytes per hour, a noisy neighbor on a shared disk turns 5 ms reads into 500 ms. By the time alerts fire on “site is down,” you're not investigating a fire — you're explaining one to customers.
I've been running Netdata across these servers for years, and more than a few times it has caught the slow burn before it turned into a real incident. This article is how I use it: install, defaults to change, parent–child for scale, and three production stories where it saved me an outage.
What we want from monitoring
Before the tooling, the goals. Monitoring isn't about pretty dashboards — it's about answering three questions, fast:
- What changed? When the box starts misbehaving at 02:14, what was different at 02:13?
- Where is the bottleneck? CPU, RAM, swap, disk I/O, network, file descriptors, a specific process?
- How do we know before users do? What threshold should have paged us 20 minutes earlier?
Concretely, I want metrics for:
- CPU utilization, per core, with system / user / iowait split out.
- Memory pressure, swap-in / swap-out rate.
- Disk throughput, IOPS, queue length, errors.
- Network throughput, retransmits, packet drops.
- Per-service health: nginx, Postgres, MySQL, Redis, Docker containers, systemd units.
- Alerts that actually page me — not 200 dashboards I never look at.
Why Netdata, in particular
There are good reasons people pick Prometheus + Grafana — long retention, query language, ecosystem of exporters. I use Netdata because of a different set of trade-offs:
- Install is one command. A single kickstart.sh and the agent is collecting hundreds of metrics within a minute, with sane defaults already enabled.
- Tiny footprint. A typical agent runs at under 2% CPU and around 50–100 MB RAM. I'm not dedicating servers to monitoring.
- Per-second resolution, in real time. Most stacks aggregate at 10–30 seconds. With 1-second resolution, a 4-second spike is a visible notch instead of a phantom.
- Auto-discovery. Install nginx, restart Netdata, and you have nginx charts. Same for Postgres, Redis, Docker, systemd — no scrape config to write (there's a minimal nginx example after this list).
- Health alerts out of the box. Hundreds of pre-configured alarms (swap, disk fill rate, ramping load) ship enabled. You start with reasonable defaults and tune from there.
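A concrete example of what auto-discovery expects from you: the nginx collector reads nginx's stub_status endpoint, which the agent probes at http://127.0.0.1/stub_status by default. A minimal sketch, assuming a stock nginx layout (adjust go.d/nginx.conf if you serve it elsewhere):

# /etc/nginx/conf.d/netdata-status.conf: stub_status on localhost only
server {
    listen 127.0.0.1:80;
    location /stub_status {
        stub_status;
        allow 127.0.0.1;
        deny all;
    }
}

Restart nginx, restart Netdata, and the nginx charts appear.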
I think of Netdata as the day-1 tool that gets you 80% of the visibility for 5% of the effort. If you eventually need long retention, custom queries, or exotic SLOs, you can run Netdata alongside something else or migrate. But you don't need to start there.
When Netdata caught what would have become an incident
Three concrete stories. None of these would have shown up on a “service is up” healthcheck — by the time that flipped, the page would already be coming.
The silent swap storm
Symptoms. A backend API started returning random 500s in the morning. Latency dashboards showed elevated p95 but no clear cause. CPU was “fine.” The team's instinct was “restart the service and see.”
What Netdata showed. The mem.swapio chart had been quietly accumulating swap-out for the previous 36 hours. Free memory was near zero. The kernel was thrashing — CPU was “fine” because it was waiting on disk, not running anything useful.
Fix. A config rollout had bumped a pool size 5×, the box had no swap headroom for it, and once memory was tight every minor allocation pushed something into swap. We rolled the config back, added an alert on system.ram available_percent < 15 and another on mem.swapio out > 1MB/s for 5m. The next memory creep would page us a day before users noticed.
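Here's roughly what that second alert looks like as a Netdata health entity. Treat it as a sketch rather than the stock file: dimension names and units have shifted between agent versions, so check your own mem.swapio chart before copying thresholds.

# /etc/netdata/health.d/swapio.conf
 alarm: swap_out_sustained
    on: mem.swapio
lookup: average -5m unaligned of out
 units: KB/s
 every: 1m
  warn: $this > 1024
 delay: down 5m
  info: swap-out has been sustained above roughly 1 MB/s for 5 minutes
    to: sysadmin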
The memory leak that took a week
Symptoms. A Python worker drifted from ~400 MB to ~3.5 GB over six days. No crashes — the box had headroom — but every deploy restarted the worker and reset the leak, masking it from anything that only looked at point-in-time RAM.
What Netdata showed. The apps.mem chart for that process had a beautiful linear ramp. Restarts stood out as crisp drops, six of them in a week. Once the pattern was visible, the cause (a cache without an eviction policy) was 30 minutes of tracemalloc.
Fix. Bound the cache, add an alert on apps.mem for the worker breaching 1 GB, and re-run the leak hunt with the alert as a backstop. Without per-process memory history across restarts, the leak would have continued until the OOM killer made it everyone's problem.
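The per-process version is nearly identical. apps.mem is charted per application group, so the alarm watches a single dimension; "worker" below is a hypothetical group name, and yours must match the entry in apps_groups.conf:

# /etc/netdata/health.d/worker_mem.conf
 alarm: worker_mem_high
    on: apps.mem
lookup: average -2m unaligned of worker
 units: MB
 every: 1m
  warn: $this > 1024
  info: the worker process group crossed 1 GB of resident memory
    to: sysadmin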
The noisy neighbor on shared disk
Symptoms. Postgres on a VPS started getting slow at random times of day. EXPLAIN looked fine. Connections fine. CPU fine.
What Netdata showed. disk.iops and disk.await spiked together, but our own disk.io in MB/s was modest — meaning we weren't doing the I/O. The host was shared, and a neighbor had become noisy.
Fix. Migrate Postgres to a host with dedicated NVMe. Add an alert on disk.await > 50ms for 5m so the next time a neighbor wakes up, we know in minutes, not hours of customer reports.
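Since disk.await exists once per disk, this one is a template instead of an alarm: a template attaches to every chart with that context, so newly added disks are covered automatically. A sketch with the threshold from above:

# /etc/netdata/health.d/disk_await.conf
template: disk_await_high
      on: disk.await
  lookup: average -5m unaligned of reads
   units: ms
   every: 1m
    warn: $this > 50
    info: average read latency above 50 ms for 5 minutes
      to: sysadmin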
Quick start: from zero to useful graphs in 10 minutes
If you're trying this on one server right now, this is the path I take.
1. Install the agent. The official kickstart script handles distro detection, package signing, and systemd wiring.
# one-line install (Linux)
wget -O /tmp/netdata-kickstart.sh https://get.netdata.cloud/kickstart.sh
sh /tmp/netdata-kickstart.sh --stable-channel --disable-telemetry

2. Lock down the dashboard. The default port 19999 should never be open to the public internet. Bind to localhost in /etc/netdata/netdata.conf and reach the dashboard via an SSH tunnel.
[web]
bind to = 127.0.0.1

Then, from your laptop:
ssh -L 19999:localhost:19999 user@your-host
# open http://localhost:19999 in your browser

3. Tune the alerts that matter. Defaults are reasonable but noisy. The handful I always touch on day one:
- system.ram — available memory percent.
- mem.swapio — sustained swap-out rate.
- disk.space — both fill rate and absolute %.
- disk.await — read/write latency.
- Postgres / MySQL / Redis connection saturation if any of those are present.
Files live in /etc/netdata/health.d/. Reload with netdatacli reload-health — no restart needed.
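To tune a stock alarm, use the bundled edit-config helper rather than editing the shipped files, so package upgrades don't clobber your overrides (paths assume a standard install):

cd /etc/netdata
sudo ./edit-config health.d/ram.conf    # copies the stock file here, then opens it
sudo netdatacli reload-health           # apply without restarting the agent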
4. Wire notifications. Edit /etc/netdata/health_alarm_notify.conf and pick a channel. Telegram is the lightest: a bot token plus a chat ID and you're done. Slack works similarly with a webhook URL.
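For reference, the Telegram side boils down to three lines; the token and chat ID below are placeholders:

# /etc/netdata/health_alarm_notify.conf
SEND_TELEGRAM="YES"
TELEGRAM_BOT_TOKEN="123456789:AAF...your-bot-token"
DEFAULT_RECIPIENT_TELEGRAM="-1001234567890"

Fire a test notification before trusting it with a real page: run alarm-notify.sh test as the netdata user (the script lives under the plugins directory, /usr/libexec/netdata/plugins.d/ on most installs).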
Pro tip: turn off the dozens of charts you don't care about. Netdata is happy to chart the temperature of every NVMe sensor; you don't need to look at it. Mute aggressively — your dashboard becomes more trustworthy when nothing on it is noise.
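Muting happens at two levels in netdata.conf: switch off a whole collector in the [plugins] section, or disable an individual chart via its own section, named after the chart ID the dashboard shows. Section names vary with your agent version, so treat this as a sketch:

# /etc/netdata/netdata.conf
[plugins]
    idlejitter = no        # drop an entire collector

[system.idlejitter]
    enabled = no           # or mute a single chart by its ID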
Scaling: parent–child for a fleet
On a single host, the Netdata agent both collects and stores metrics. Once you have more than a handful of nodes, you don't want to log into each one to look at graphs, and you want longer retention than fits comfortably on each node.
The parent–child model fixes both:
- Children run a stripped-down agent that collects metrics and streams them to a parent.
- Parents receive streams from many children and store them centrally, with whatever retention you want.
A topology that works well for a small fleet: app/web/db nodes as children, one parent on a dedicated VM with a generous SSD. Alerts run on the parent, children only collect.
┌──────────────────────┐
│ parent-monitor │
│ 30d retention │
│ alerts → telegram │
└──────────▲───────────┘
│ stream
┌────────────────────┼────────────────────┐
│ │ │
┌────┴────┐ ┌────┴────┐ ┌────┴────┐
│ web-01 │ │ db-01 │ … │ bg-01 │
│ child │ │ child │ │ child │
└─────────┘ └─────────┘ └─────────┘
children: collect only · 1h local cache
The configuration knobs I touch:
- /etc/netdata/stream.conf on each child: an API key plus the parent's URL.
- The same file on the parent: the list of allowed children with the matching API key (a sketch of both halves follows this list).
- The [db] section in netdata.conf on the parent: bump the multi-tier retention to a few weeks for high-resolution data and several months for downsampled tiers.
- Move all alerts to the parent so you have one place to tune them.
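The two halves of stream.conf look like this. The API key is just a UUID you generate yourself (uuidgen), and parent-monitor is the hostname from the diagram above:

# child: /etc/netdata/stream.conf
[stream]
    enabled = yes
    destination = parent-monitor:19999
    api key = 11111111-2222-3333-4444-555555555555

# parent: /etc/netdata/stream.conf
[11111111-2222-3333-4444-555555555555]
    enabled = yes

The retention knobs in the parent's [db] section have been renamed across agent versions, so read the current names out of your generated netdata.conf rather than copying them from old blog posts.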
When to skip parent–child: a single node, or a small cluster where each agent's built-in dashboard is enough. The setup cost isn't huge, but it's not free either, and one parent is one more thing to keep alive.
Practical best practices
What I've learned from running this at small scale:
- Tune for signal, not coverage. A default Netdata install surfaces a lot. Mute charts you don't look at. You'll trust the dashboard more, and you'll spot the unusual one faster.
- Daily glance, weekly review. Daily: open the parent, scan RAM / swap / disk-await / error-rate on the top hosts. Weekly: walk the alarm history and adjust thresholds for anything that paged but didn't need to.
- Show non-engineers a sanitized view. The full dashboard is overwhelming. A custom page with “API latency, error rate, queue depth” is much more useful for a product person than a wall of kernel terminology.
The anti-patterns that bite hardest:
- Opening port 19999 to the public internet. Don't.
- Leaving every default alert routed to Slack until the channel becomes background noise.
- Treating every yellow as an incident. Yellow exists to be looked at, not to be paged on.
- Forgetting that the parent is now a single point of failure for visibility. Back up its config; consider a second parent if uptime of monitoring is itself a requirement.
Where Netdata fits in observability
Netdata is the metrics + realtime alerts layer. It is not a logs system, and it is not a tracing system. The model I use:
- Netdata for host and service metrics, realtime, alerts.
- Logs in a centralized store (Loki, an ELK setup, or something hosted) for after-the-fact investigation.
- Tracing only when there's a multi-service request graph that justifies it. For a small fleet, structured logs plus Netdata is enough.
Alert routing stays boring on purpose: Netdata → Telegram for me, Slack for the team. Don't over-engineer the alert pipeline before you've tuned which alerts actually matter.
Wrapping up
Three years of Netdata in production have translated into roughly one averted outage a year: incidents that would have been outages but became minor merge requests instead. The cost is one install command, a config to bind to localhost, an hour of alert tuning, and the discipline to mute charts you don't use.
If you're running Linux in production and don't have monitoring you trust, install Netdata on one server today. Bind it to localhost, tunnel in, look at the graphs for ten minutes. You'll find at least one thing you didn't know was happening.