rabin.blog

How to Properly Backup, Move, and Migrate Uptime Kuma (Docker + Direct Install)


There are two kinds of Uptime Kuma migrations:

  1. The one that works.
  2. The one where people export JSON, assume life is good, then discover half their setup is missing.

This guide is for the first one.

The safest and least error-prone way to migrate Uptime Kuma is:

Move the ENTIRE data directory.

Not screenshots.
Not copy-paste.
Not “I exported a backup JSON so I should be fine.”

The actual source of truth is the database and app data.


What Actually Needs to Be Backed Up

Uptime Kuma stores everything in its data directory.

Critical files:

kuma.db
kuma.db-wal
kuma.db-shm
uploads/

All four matter. kuma.db is the main SQLite database, kuma.db-wal and kuma.db-shm can hold writes that have not yet been merged into it, and uploads/ contains your status page images. Copy the whole directory.
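Before taking a backup, it is worth checking what is actually on disk. A small sketch, assuming the data directory is at /opt/uptime-kuma (adjust the path to yours):

```shell
#!/bin/bash
# Pre-backup sanity check for an Uptime Kuma data directory.
# The path passed at the bottom is an assumption -- point it at YOUR directory.
check_kuma_data() {
  local dir="$1"
  # kuma.db is mandatory: monitors, users, settings, status pages all live here.
  if [ ! -f "$dir/kuma.db" ]; then
    echo "FATAL: $dir/kuma.db not found"
    return 1
  fi
  # WAL/SHM files only exist if SQLite did not checkpoint on shutdown;
  # when present, they MUST travel together with kuma.db.
  for f in kuma.db-wal kuma.db-shm; do
    [ -f "$dir/$f" ] && echo "copy too: $dir/$f"
  done
  [ -d "$dir/uploads" ] && echo "copy too: $dir/uploads/"
  echo "OK: $dir looks like an Uptime Kuma data directory"
}

check_kuma_data /opt/uptime-kuma || echo "fix the problem above before backing up"
```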

kuma.db

That is your monitors, notifications, users, settings, status pages, and integrations.

If you lose that file, congratulations, you now own an empty monitoring dashboard.


Before You Start

Recommended Approach

  1. Stop Uptime Kuma completely
  2. Copy the entire data folder
  3. Restore the folder on the new system
  4. Start Uptime Kuma again

This avoids:

  • SQLite corruption
  • Partial copies
  • Missing WAL data
  • Broken monitors
  • Silent failures
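The copy itself is just a tar round trip. A self-contained sketch (throwaway temp paths, no real Uptime Kuma install needed) showing that archiving with -C and extracting elsewhere preserves the directory exactly:

```shell
#!/bin/bash
set -e

# Throwaway stand-in for /opt/uptime-kuma.
src=$(mktemp -d)
mkdir -p "$src/uptime-kuma/uploads"
echo "pretend-sqlite" > "$src/uptime-kuma/kuma.db"
echo "pretend-wal"    > "$src/uptime-kuma/kuma.db-wal"

# Step 2: archive the WHOLE directory. -C switches to the parent first,
# so the archive gets a clean top-level "uptime-kuma/" prefix.
tar -czf /tmp/kuma-demo-backup.tar.gz -C "$src" uptime-kuma

# Step 3: restore somewhere else (stand-in for the new server).
dst=$(mktemp -d)
tar -xzf /tmp/kuma-demo-backup.tar.gz -C "$dst"

# Nothing lost, nothing changed:
result=$(diff -r "$src/uptime-kuma" "$dst/uptime-kuma" && echo "identical")
echo "$result"

rm -rf "$src" "$dst" /tmp/kuma-demo-backup.tar.gz
```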

METHOD 1 — Docker-Based Uptime Kuma Migration

This is the most common setup.

Step 1 — Find Your Data Directory

services:
  uptime-kuma:
    image: louislam/uptime-kuma:latest
    container_name: uptime-kuma
    volumes:
      - /opt/uptime-kuma:/app/data

In this example:

/opt/uptime-kuma

is your actual data directory on the host.

Step 2 — Stop the Container

docker stop uptime-kuma

Step 3 — Create a Proper Backup

tar -czvf uptime-kuma-backup.tar.gz -C /opt uptime-kuma

Step 4 — Move Backup to New Server

scp uptime-kuma-backup.tar.gz user@newserver:/root/

Step 5 — Install Fresh Uptime Kuma on New Server

Copy over your docker-compose.yml (same image, same volume path), then:

docker compose up -d

Then stop it again:

docker stop uptime-kuma

Step 6 — Restore Your Backup

rm -rf /opt/uptime-kuma/*
tar -xzvf uptime-kuma-backup.tar.gz -C /opt/
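That rm -rf is irreversible, so it pays to prove the archive is sound before deleting anything. A hedged sketch, using the same filenames as the commands above (it assumes the archive was created with a top-level uptime-kuma/ prefix, as in Step 3):

```shell
#!/bin/bash
# Refuse to wipe the target unless the archive is readable AND
# actually contains the database file.
verify_backup() {
  local archive contents
  archive="$1"
  contents=$(tar -tzf "$archive" 2>/dev/null) || { echo "archive is corrupt or unreadable"; return 1; }
  printf '%s\n' "$contents" | grep -q '/kuma\.db$' || { echo "no kuma.db inside archive"; return 1; }
  echo "archive OK"
}

# Only wipe the target once verification passes, e.g.:
#   verify_backup uptime-kuma-backup.tar.gz && rm -rf /opt/uptime-kuma/*
```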

Step 7 — Start Uptime Kuma

docker start uptime-kuma

METHOD 2 — Direct Install Migration (Non-Docker)

Step 1 — Find the Data Directory

find / -type f -name "kuma.db" 2>/dev/null

Step 2 — Stop Uptime Kuma Properly

Check systemd first

systemctl status uptime-kuma
sudo systemctl stop uptime-kuma

If using PM2

pm2 list
pm2 stop uptime-kuma

If manually started

ps aux | grep node
kill <PID>

Use the process ID from the ps output. A plain kill sends SIGTERM, which lets Uptime Kuma shut down cleanly — avoid kill -9.
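A bare kill can still race with SQLite finishing its writes. A small generic helper (nothing Uptime Kuma specific) that sends SIGTERM and waits for the process to actually exit before you touch the files:

```shell
#!/bin/bash
# SIGTERM, then wait up to 30s for the process to really exit, so
# SQLite can finish writing before you copy the database files.
stop_gracefully() {
  local pid="$1"
  kill "$pid" 2>/dev/null || return 0   # already gone
  for _ in $(seq 1 30); do
    if ! kill -0 "$pid" 2>/dev/null; then
      echo "pid $pid exited"
      return 0
    fi
    sleep 1
  done
  echo "pid $pid still running after 30s, NOT copying yet" >&2
  return 1
}
```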

Step 3 — Backup the Data Folder

Adjust the path to wherever Step 1 found kuma.db:

tar -czvf uptime-kuma-direct-backup.tar.gz -C /opt/uptime-kuma data

Step 4 — Move Backup to New Server

scp uptime-kuma-direct-backup.tar.gz user@newserver:/root/

Step 5 — Install Fresh Uptime Kuma

git clone https://github.com/louislam/uptime-kuma.git
cd uptime-kuma
npm run setup

(npm run setup installs dependencies and builds the frontend; a plain npm install is not enough.)

Step 6 — Restore Data

rm -rf /opt/uptime-kuma/data/*
tar -xzvf uptime-kuma-direct-backup.tar.gz -C /opt/uptime-kuma/

Step 7 — Start Uptime Kuma Again

Start it the same way it ran before. With systemd:

sudo systemctl start uptime-kuma

The Migration Method People SHOULD Use

If you want the least chance of disaster:

  • Docker
  • Bind-mounted persistent volume
  • Nightly tar backup
  • rsync to NAS
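That routine can be wired up with two cron entries. Everything here is a placeholder — the script name, the backup path, and the NAS hostname — so substitute your own:

```shell
# /etc/cron.d/uptime-kuma-backup -- example schedule, paths and host are placeholders
# Nightly at 02:30: run the tar backup script
30 2 * * * root /usr/local/bin/uptime-kuma-backup.sh
# At 03:30: mirror the backup directory to the NAS
30 3 * * * root rsync -a --delete /backup/uptime-kuma/ nas:/backups/uptime-kuma/
```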

This is boring.

Boring is good.

Boring restores at 2AM when something dies.


What NOT To Do

Do NOT rely only on JSON export.

Inside Settings → Backup there is a JSON export. It covers monitors and notification settings, but not everything — heartbeat history, for one, does not come along.

Treat it as a convenience, not disaster recovery.

Do not use it as your only backup.


Recommended Backup Script

#!/bin/bash
set -euo pipefail

BACKUP_DIR="/backup/uptime-kuma"
DATE=$(date +%F-%H%M)

mkdir -p "$BACKUP_DIR"

docker stop uptime-kuma

# Restart the container even if the tar step fails
trap 'docker start uptime-kuma' EXIT

tar -czvf "$BACKUP_DIR/uptime-kuma-$DATE.tar.gz" -C /opt uptime-kuma
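Those archives pile up, so pair the script with a retention sweep. The 14-day window is arbitrary, and GNU find is assumed for -delete:

```shell
#!/bin/bash
BACKUP_DIR="/backup/uptime-kuma"

# Remove backup archives older than 14 days (GNU find's -delete).
[ -d "$BACKUP_DIR" ] && find "$BACKUP_DIR" -name 'uptime-kuma-*.tar.gz' -mtime +14 -delete
```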

Final Thoughts

People massively overcomplicate monitoring stacks.

Uptime Kuma itself is actually simple.

The important thing is respecting the database.

If you:

  • stop the service properly
  • move the entire data folder
  • restore it cleanly

then migration is usually painless.

If you skip steps because “it should probably work”…

that is usually where the suffering begins.