Running a healthy Rust server requires ongoing monitoring. Problems develop gradually - entity creep, memory leaks, plugin degradation - and catching them early prevents player-facing issues.
Key Metrics
Server FPS (Framerate)
Rust servers target 30 FPS. Check with:
perf 1
This displays real-time performance data including FPS, entity count, and network stats.
- 28-30 FPS: Healthy. Everything runs smoothly.
- 22-27 FPS: Warning. Something is consuming extra tick time. Investigate.
- 15-21 FPS: Problem. Players experience lag. Fix immediately.
- Below 15 FPS: Critical. Server is barely playable.
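The bands above are easy to encode as a helper for an alerting script. This is a sketch; the band boundaries come straight from the list, but the label names are just examples:

```python
def fps_health(fps: float) -> str:
    """Map a server FPS reading to the health bands described above."""
    if fps >= 28:
        return "healthy"   # everything runs smoothly
    if fps >= 22:
        return "warning"   # something is consuming extra tick time
    if fps >= 15:
        return "problem"   # players experience lag
    return "critical"      # server is barely playable
```

Feed it whatever FPS number you scrape from perf 1 output or your panel's metrics.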
Entity Count
ent count
Track daily. Plot it on a chart. Entity count should:
- Rise during play hours (players building)
- Fall slightly during off-hours (decay removing abandoned structures)
- Reset on wipe
If entity count only rises and never falls, decay is too slow or disabled.
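The "only rises, never falls" check is mechanical once you have a daily log of ent count readings. A minimal sketch (how you collect the counts, e.g. via RCON or a cron job, is up to you):

```python
def entity_count_never_falls(daily_counts: list[int]) -> bool:
    """True if the entity count rose or held every single day -
    the pattern that suggests decay is too slow or disabled."""
    return all(b >= a for a, b in zip(daily_counts, daily_counts[1:]))
```

Run it over one wipe cycle at a time, since the count legitimately resets on wipe.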
Memory Usage
Monitor through your hosting panel or htop. Rust memory should:
- Start at 4-6GB after a fresh wipe
- Grow as the world develops (8-16GB typical for populated server)
- Stabilize after a few days (old entities decay, new ones replace them)
If memory grows continuously without stabilization, you have a leak (likely a plugin).
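"Continuous growth without stabilization" can be checked against a daily memory log. A sketch, with illustrative thresholds (the window length and tolerance are assumptions to tune, not official values):

```python
def memory_still_growing(daily_gb: list[float], window: int = 3,
                         tolerance_gb: float = 0.5) -> bool:
    """True if memory grew more than tolerance_gb over the last
    `window` days - i.e. it has not stabilized and a plugin leak
    is worth suspecting."""
    if len(daily_gb) <= window:
        return False  # not enough history to judge yet
    recent = daily_gb[-(window + 1):]
    return (recent[-1] - recent[0]) > tolerance_gb
```

Normal post-wipe growth (say 5GB climbing to 10GB then flattening) passes; a server still adding gigabytes a week into the wipe gets flagged.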
Network Out
High network-out relative to player count indicates:
- Too many entities synchronizing
- Players in high-entity areas
- Possible network abuse
Oxide Profiling
With Oxide installed:
o.profiler.start
(wait 2-5 minutes during peak hours)
o.profiler.stop
o.profiler.dump
The profiler shows:
- Milliseconds per tick consumed by each plugin
- Hook call frequency
- Memory allocated per plugin
Any single plugin consuming more than 2-3ms per tick on a 200-player server is worth investigating.
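Once you have per-plugin timings out of the profiler dump, the 2-3ms rule of thumb is easy to apply mechanically. A sketch (the plugin names and the dump-parsing step are assumed):

```python
def plugins_over_budget(ms_per_tick: dict[str, float],
                        budget_ms: float = 2.0) -> list[str]:
    """Names of plugins whose tick time exceeds the budget,
    worst offender first. 2.0ms is the low end of the
    2-3ms guideline above."""
    offenders = {name: ms for name, ms in ms_per_tick.items()
                 if ms > budget_ms}
    return sorted(offenders, key=offenders.get, reverse=True)
```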
External Monitoring
BattleMetrics
BattleMetrics.com tracks your server publicly:
- Player count over time
- Uptime history
- Peak hours
- Ranking compared to other servers
Custom Dashboards
For advanced operators, push metrics to Grafana/Prometheus:
- Server FPS over time
- Entity count history
- Player count correlation with FPS
- Memory usage trends
- Save duration tracking
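One common way to get these metrics into Prometheus is to render them in the text exposition format and POST them to a Pushgateway. A sketch; the metric names and the Pushgateway host are assumptions, not a standard schema:

```python
def prometheus_payload(metrics: dict[str, float]) -> str:
    """Render metrics in the Prometheus text exposition format,
    one `name value` pair per line."""
    return "".join(f"{name} {value}\n" for name, value in metrics.items())

payload = prometheus_payload({
    "rust_server_fps": 27.5,       # example metric names - pick your own
    "rust_entity_count": 145000,
    "rust_player_count": 180,
})
# POST the payload with stdlib urllib.request, or:
#   curl --data-binary @metrics.txt http://<pushgateway-host>:9091/metrics/job/rust_server
```

Grafana then reads these series from Prometheus for the dashboards listed above.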
Warning Signs
Watch for these patterns:
Gradual FPS decline over wipe cycle: Normal to some degree (more entities). But if FPS drops from 30 to 20 by mid-wipe, your entity management needs work.
Sudden FPS drops: A new plugin, a massive base, or a specific player's activity. Check Oxide profiler and entity count near recently active players.
Memory growth without FPS impact: A plugin is likely caching data without cleanup. Check o.profiler.dump for memory-heavy plugins.
Save lag increasing: World saves take longer as entity count grows. If saves cause noticeable lag spikes, your storage I/O is the bottleneck. NVMe SSD eliminates this on Space-Node servers.
Post-wipe performance worse than expected: Map seed creates an unusual monument/terrain configuration, or a mod update introduced a regression. Test on a staging server first.
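The gradual-decline pattern above can be caught early by fitting a trend line to your daily FPS log. A minimal least-squares slope, no external libraries (the alerting threshold you compare the slope against is yours to choose):

```python
def fps_trend_per_day(samples: list[float]) -> float:
    """Least-squares slope of daily FPS readings: FPS change per day.
    A strongly negative slope early in the wipe flags decline
    before players feel it. Needs at least two samples."""
    n = len(samples)
    mean_x = (n - 1) / 2
    mean_y = sum(samples) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(samples))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den
```

A server sliding from 30 toward 20 FPS by mid-wipe shows up as a slope around -0.5 to -1 FPS/day well before the drop is obvious.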
Consistent monitoring prevents surprises. Check these metrics daily during the first wipe on new hardware, then weekly once you know the baseline.
