How to Use the Spark Profiler to Find Lag on Your Minecraft Server

Your server lags but the hardware looks fine. Spark is the diagnostic tool that shows exactly which plugin, mod, or system call is eating your tick budget. Learn how to install it, generate a report, and read the output.

Written by Jochem Wassenaar – CEO of Space-Node – 15+ years combined experience in game server hosting, VPS infrastructure, and 24/7 streaming solutions.

You upgraded to 16 GB of RAM. The CPU sits at 30% usage. The server still drops to 15 TPS during peak hours and nobody can figure out why.

This is one of the most common situations in Minecraft server administration. The hardware metrics look fine. The real problem is invisible to standard system monitoring because it lives inside the game's tick loop, not in the operating system.

Spark is the tool that sees what /mspt and htop cannot.


What Spark Actually Measures

Every Minecraft server runs on a 50-millisecond tick cycle. Twenty ticks per second is the target. Each tick must complete in under 50ms. If any single tick takes longer, the server drops below 20 TPS and players feel lag.
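
The relationship is simple arithmetic: 1,000 ms ÷ 50 ms per tick = 20 TPS. If the average tick takes 65 ms instead, the server can only manage about 1,000 ÷ 65 ≈ 15 TPS, which is exactly the symptom described above.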

Spark attaches a CPU profiler directly to the game's execution threads. It samples the call stacks of those threads at short, regular intervals, then aggregates the samples into a tree showing exactly where the tick budget is being spent.

The output answers: which exact function, from which specific plugin or mod, is taking the most time per tick.


Installing Spark

Download Spark from spark.lucko.me.

For Paper or Spigot servers, place the .jar file in your plugins folder and restart. Spark runs passively and does not affect normal server operation.

For Fabric servers, add the Spark mod to your mods folder. It works the same way on the Fabric side.

For NeoForge and Forge modpacks, Spark has a mod version. Drop it in the mods folder.

Once loaded, verify it is running:

/spark version

Generating a Profile

Run the profiler during the period when lag is happening, not when the server is quiet. Profiling an idle server tells you nothing useful.

Start the profiler:

/spark profiler start

Wait at least 3 to 5 minutes while the lag is occurring. More time means more samples and more reliable data. For intermittent lag spikes, run it for 10 to 15 minutes.
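
If you would rather not watch the clock, the profiler also accepts a timeout flag (in seconds) and stops on its own when the time is up, for example:

/spark profiler start --timeout 600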

Stop the profiler and generate a report:

/spark profiler stop

Spark uploads the results automatically and prints a URL in the console. That URL opens an interactive flame graph in your browser.


Reading the Spark Output

The output is a tree showing the call hierarchy. At the top are the root threads. Expand "Server thread" to see everything running on the main tick loop.

The numbers next to each entry show:

  • Percentage of total samples: How much of the profiled time this code consumed
  • Self time: Time spent in this specific function (not its children)

A healthy server shows most of the tick budget inside Minecraft's core systems: entity ticking, chunk management, player network I/O. Each individual system uses a small fraction.

A problem server shows one or two items at 30%, 40%, or even 60% of total time. That is your culprit.
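
As a rough illustration (simplified, not literal Spark output), a problem tree might read like this:

Server thread (100%)
  Full tick loop (95%)
    Entity ticking (12%)
    Chunk and block ticking (10%)
    SomePlugin onTick handler (48%)   <- the outlier worth investigating

The plugin name and percentages above are invented for the example; in a real report you expand each node to see the exact classes and methods underneath.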


Real Examples of What Spark Finds

Logistics mods with complex routing algorithms: Some pipe and fluid routing systems recalculate their entire network every tick or every few ticks. On a large factory world, this can consume 30 to 50 milliseconds per tick. That is your entire lag budget for one system alone.

In one documented case with Super Factory Manager, deep profiling revealed a sorting algorithm running at O(n²) complexity across hundreds of connected machines. The fix was configuration changes to the mod's update frequency, not a hardware upgrade.

Hopper chains: A line of 50 hoppers all ticking simultaneously consumes meaningful tick time. Running /spark profiler start while standing near your hopper sorter will quickly show HopperBlockEntity.tick consuming a significant percentage.
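
If hoppers turn out to be your hot spot, Spigot-based servers expose their timing in spigot.yml. The values below are the usual defaults; raising hopper-transfer and hopper-check makes hoppers cheaper per tick at the cost of slower item movement:

ticks-per:
  hopper-transfer: 8
  hopper-check: 1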

Poorly coded plugins checking online players every tick: Some plugins loop through all online players on every single tick to do things that do not need per-tick checking. Common in older or abandoned plugins. Spark shows these immediately.

World border operations: Plugins that scan chunks near players for custom terrain modifications can spike dramatically when many players are in unexplored areas.


The Tick Health Report

Spark has a separate command that gives you a snapshot view without profiling:

/spark tps

This shows your current TPS and MSPT (milliseconds per tick). MSPT is what matters most:

  • Under 25ms: Healthy. You have headroom.
  • 25 to 45ms: Acceptable but approaching the limit.
  • 45 to 50ms: Borderline. Your server is close to missing ticks.
  • Over 50ms: Your server is missing ticks. TPS will be below 20.

For a broader snapshot, Spark can also generate a full health report:

/spark healthreport

This generates a comprehensive system health overview including garbage collection activity, CPU usage, disk I/O, and thread states. It uploads the report and prints a URL the same way the profiler does.


Interpreting Garbage Collection Spikes

Garbage collection lag is a separate issue from CPU load. If you see GC pauses in your health report causing lag spikes every few minutes, this is a Java heap management problem, not a plugin problem.

Signs of GC-caused lag in Spark output:

  • TPS drops periodically for 2 to 5 seconds, then recovers
  • The profiler shows GarbageCollector.collect consuming time during the laggy window
  • Memory usage climbs steadily then drops sharply at each lag spike
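
Spark can also summarize collector activity directly; this command prints pause counts and average pause times since the server started:

/spark gc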

The fix is generally JVM flag tuning: switching to the G1 garbage collector (G1GC) or applying Aikar's flags, and setting your minimum and maximum heap sizes (-Xms and -Xmx) to the same value.

Aikar's recommended flags for Paper servers:

-XX:+UseG1GC -XX:+ParallelRefProcEnabled -XX:MaxGCPauseMillis=200 
-XX:+UnlockExperimentalVMOptions -XX:+DisableExplicitGC 
-XX:+AlwaysPreTouch -XX:G1NewSizePercent=30 
-XX:G1MaxNewSizePercent=40 -XX:G1HeapRegionSize=8M 
-XX:G1ReservePercent=20 -XX:G1HeapWastePercent=5 
-XX:G1MixedGCCountTarget=4 -XX:InitiatingHeapOccupancyPercent=15 
-XX:G1MixedGCLiveThresholdPercent=90 -XX:G1RSetUpdatingPauseTimePercent=5 
-XX:SurvivorRatio=32 -XX:+PerfDisableSharedMem -XX:MaxTenuringThreshold=1

These flags reduce GC pause duration significantly compared to default Java settings.
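
As a sketch of how the pieces fit together (paper.jar and the 8 GB heap are placeholder values for your own setup), a typical launch line keeps the minimum and maximum heap equal and places the flags between the heap settings and the jar:

java -Xms8G -Xmx8G [Aikar's flags above] -jar paper.jar --nogui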


After Finding the Problem

Once Spark identifies the bottleneck, your options are:

  1. Update the mod or plugin: The issue may be fixed in a newer version
  2. Configure the mod: Many mods expose update frequency or tick rate settings. Reducing how often a logistics mod re-scans its network reduces the tick cost proportionally
  3. Remove it: If the performance cost is too high and the feature is not critical, removing the mod is a valid choice
  4. Report the issue: File a bug report with the Spark report URL attached. Developers take performance reports with profiler data seriously; it is direct evidence of the problem

At Space-Node you can run the profiler from the Pterodactyl console without needing SSH access. Type the Spark commands directly into the console input, copy the report URL from the output, and open it in your browser. No additional tools required.
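
One detail worth knowing: in most server consoles you type commands without the leading slash, so the sequence becomes spark profiler start, then spark profiler stop, with the report URL printed in between.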


About the Author

Jochem Wassenaar is the CEO of Space-Node, a team of experts in game server hosting, VPS infrastructure, and 24/7 streaming solutions with 15+ years of combined experience.

Since 2023
500+ servers hosted
4.8/5 avg rating

Our team specializes in Minecraft, FiveM, Rust, and 24/7 streaming infrastructure, operating enterprise-grade AMD Ryzen 9 hardware in Netherlands datacenters. We maintain GDPR compliance and ISO 27001-aligned security standards.


Start Minecraft Server in Minutes

Join content creators worldwide who trust our Minecraft infrastructure. Setup is instant and support is always available.
