Ceph Proxmox backup

Jan 26, 2023

Wondering how to perform a Ceph Proxmox backup? Our Proxmox Support team is here to lend a hand with your queries and issues.

Ceph Proxmox backup

RBD (RADOS Block Device) volumes can be backed up using RBD snapshots and exports. Once we take snapshots, we can also run differential exports, which gives us differential backups from Ceph.

When Proxmox uses Ceph RBD as storage, it creates an RBD volume for each VM or CT.

CephFS is different and out of scope here. If you want to create snapshots on CephFS, simply create a named subfolder inside the magic .snap folder. This gives you a CephFS snapshot, and it can be used anywhere in the directory hierarchy. Here is an example used for part of a CephFS backup in a script:

mkdir /mnt/cephfs/docker/volumes/.snap/$(date +%Y%m%d-%Hh%M)

List RBD pools

root@pve1:~# ceph osd lspools 
1 device_health_metrics
2 cephfs_data
3 cephfs_metadata
4 cephblock

The first pool (device_health_metrics) is for Ceph internals. Pools 2 and 3 are for CephFS, so only pool 4 (cephblock) interests us.

List volumes in an RBD pool

root@pve1:~# rbd ls cephblock
vm-101-disk-0
vm-105-disk-0
vm-134-disk-0
vm-139-disk-0

Take a snapshot

rbd snap create {pool-name}/{image-name}@{snap-name}

So in our case:

root@pve1:~# rbd snap create cephblock/vm-139-disk-0@$(date +%Y%m%d-%Hh%M)                        
Creating snap: 100% complete...done.

List the snapshots

root@pve1:~# rbd snap ls cephblock/vm-139-disk-0
SNAPID  NAME            SIZE    PROTECTED  TIMESTAMP                
   46  20220123-17h43  15 GiB             Sun Jan 23 17:43:19 2022

Export the snapshot

root@pve1:~# rbd export cephblock/vm-139-disk-0@20220123-17h43 /tmp/vm-139-disk-0_20220123-17h43
Exporting image: 100% complete...done.
root@pve1:~# ls -lh /tmp/vm-139-disk-0_20220123-17h43
-rw-r--r-- 1 root root 15G 23 jan 18:05 /tmp/vm-139-disk-0_20220123-17h43

It is possible to export and compress in a single operation. Here is another example (thanks, Andrii):

rbd export cephblock/vm-101-disk-0@akira_2022-05-25T21:47Z - | nice xz -z8 -T4 > /tmp/vm-101-disk-0_akira_2022-05-25T21:47Z_FULL-RBD-EXPORT.xz

Take a subsequent snapshot

In order to take an incremental backup, we must take a new snapshot.

root@pve1:~# rbd snap create cephblock/vm-139-disk-0@$(date +%Y%m%d-%Hh%M)  
Creating snap: 100% complete...done.
root@pve1:~# rbd snap ls cephblock/vm-139-disk-0
SNAPID  NAME            SIZE    PROTECTED  TIMESTAMP                
   46  20220123-17h43  15 GiB             Sun Jan 23 17:43:19 2022
   47  20220123-22h49  15 GiB             Sun Jan 23 22:49:34 2022

Export the difference between the 2 snapshots

root@pve1:~# rbd export-diff --from-snap 20220123-17h43 cephblock/vm-139-disk-0@20220123-22h49 /tmp/vm-139-disk-0_20220123-17h43-20220123-22h49
Exporting image: 100% complete...done.
root@pve1:~# ls -lh /tmp/vm-139-disk-0_20220123-17h43-20220123-22h49
-rw-r--r-- 1 root root 34M 23 jan 22:58 /tmp/vm-139-disk-0_20220123-17h43-20220123-22h49
root@pve1:~# time nice xz -z -T4 /tmp/vm-139-disk-0_20220123-17h43-20220123-22h49  

real    0m2,916s
user    0m4,460s
sys     0m0,095s
root@pve1:~# ls -lh /tmp/vm-139-disk-0_20220123-17h43-20220123-22h49*              
-rw-r--r-- 1 root root 1,4M 23 jan 22:58 /tmp/vm-139-disk-0_20220123-17h43-20220123-22h49.xz

File format

The file format is very simple, which is why compression works quite well.
Just for fun: if you run the strings command on a recent diff from a VM with little binary-file activity, you will mostly see logs and text files:

strings /tmp/vm-139-disk-0_20220123-17h43-20220123-22h49

List all volumes including snapshots on the cephblock pool

root@pve1:~# rbd ls -l cephblock      
NAME                          SIZE    PARENT  FMT  PROT  LOCK
vm-101-disk-0                 40 GiB            2        excl
vm-105-disk-0                 15 GiB            2        excl
vm-134-disk-0                 40 GiB            2        excl
vm-139-disk-0                 15 GiB            2        excl
vm-139-disk-0@20220123-17h43  15 GiB            2             
vm-139-disk-0@20220123-22h49  15 GiB            2             
vm-139-disk-0@20220124-00h19  15 GiB            2            

You can grep on those containing @ to get only snapshots.
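
For example, filtering the listing above (anything with an @ in its name is a snapshot):

rbd ls -l cephblock | grep @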

Get the VM name by vmid

This might be useful for scripting, to include the VM/CT name in the filename. It may not be the best solution, but it is one possible way.

root@pve1:~# vmid=139
root@pve1:~# vmname=$(pvesh get /cluster/resources --type vm --output-format json | jq -r ".[] | select(.vmid==$vmid) | .name " | tr -cd "[:alnum:]")
root@pve1:~# echo $vmname
freepbx

Restore the base snapshot

Then import the full export, and import the diff on top of it (check the rbd manual for import and import-diff). A rough sketch is shown below.
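
A minimal sketch, assuming the full export and diff files created above; the destination image name vm-139-disk-0-restored is just an example, adapt it to your setup:

# Import the full export as a new image
rbd import /tmp/vm-139-disk-0_20220123-17h43 cephblock/vm-139-disk-0-restored
# Recreate the base snapshot so import-diff can find its starting point
rbd snap create cephblock/vm-139-disk-0-restored@20220123-17h43
# Apply the incremental diff on top of it
rbd import-diff /tmp/vm-139-disk-0_20220123-17h43-20220123-22h49 cephblock/vm-139-disk-0-restored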

Get the GB used in the ‘cephblock’ pool

rados -p cephblock -f json df | jq '.pools [0] .size_kb/1024/1024 | ceil'

Conclusion

In conclusion, our Support Engineers demonstrated how to back up Proxmox VM disks stored on Ceph using RBD snapshots, full exports, and differential exports.
