Wondering how to migrate OpenVZ to LXC on Proxmox? We can help you.
As part of our Server Management Services, we assist our customers with several similar queries.
Today, let us see the procedure our Support Techs follow to perform this task.
How to migrate OpenVZ to LXC on Proxmox?
OpenVZ is not available for Kernels above 2.6.32, therefore a migration is necessary.
Basically you have to follow these steps:
On the Proxmox VE 3.x node:
1. Firstly, note the network settings used by the container
2. Then, make a backup of the OpenVZ container
On the Proxmox VE 4.x node:
1. Restore/create an LXC container based on the backup
2. Then, configure the network with the previous settings
3. Boot and voilà, it works
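The steps above can be sketched end to end as a short script. Everything in it is an assumption for illustration (CT ID 100, storage "local", an example backup file name and IP address), and the commands are only printed rather than executed, since the first half runs on the 3.x node and the second half on the 4.x node:

```shell
# Dry-run sketch of the whole migration. CT ID 100, the storage "local",
# the backup file name and the IP address are all example values.
run() { echo "+ $*"; }   # print each command instead of executing it

# On the Proxmox VE 3.x node:
run vzctl stop 100
run vzdump 100 -storage local

# On the Proxmox VE 4.x node (after copying the dump over):
run pct restore 100 /var/lib/vz/dump/vzdump-openvz-100.tar
run pct set 100 -net0 name=eth0,bridge=vmbr0,ip=192.168.5.75/24
run pct start 100
```

Each of these steps is covered in detail below.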
Unsupported OpenVZ templates
Not all OpenVZ templates are supported.
If you try to convert an OpenVZ template with an unsupported OS, the pct restore command will fail with an error message such as:
unsupported fedora release ‘Fedora release 14 (Laughlin)’
Step by step conversion
Firstly, login with ssh on your Proxmox VE 3.x node:
Suppose you want to migrate three different containers: a CentOS container, an Ubuntu container, and a Debian container.
vzlist
CTID NPROC STATUS IP_ADDR HOSTNAME
100 20 running - centos6vz.proxmox.com
101 18 running - debian7vz.proxmox.com
102 20 running 192.168.15.142 ubuntu12vz.proxmox.com
Get the network configuration of the OpenVZ containers, and note it somewhere
A) If your container uses a venet device, you get the address directly from the command line:
vzlist 102
CTID NPROC STATUS IP_ADDR HOSTNAME
102 20 running 192.168.15.142 ubuntu12vz.proxmox.com
B) If your container uses veth, the network configuration is done inside the container.
How to find the network configuration depends on which OS is running inside the container:
If you have a CentOS based container, you can get the network configuration like this:
# start a root shell inside the container 100
vzctl enter 100
cat /etc/sysconfig/network-scripts/ifcfg-eth0
exit
A CentOS container may have more than one network interface; check for additional files such as ifcfg-eth1 with the same command.
If you have a Debian, Ubuntu or Turnkey Linux appliance (all network interfaces are available in one go here):
vzctl enter 101
cat /etc/network/interfaces
exit
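With several containers, the vzlist output shown earlier already tells you which ones use venet (IP listed) and which ones need a look inside the container (veth). A small sketch that automates this triage; the function name `classify` and the inlined sample output are assumptions for illustration. On a real node you would pipe `vzlist` into it directly:

```shell
# classify: read vzlist output on stdin and report venet vs. veth per CTID.
classify() {
  awk 'NR > 1 {
    if ($4 == "-")
      print $1 ": no IP in vzlist - likely veth, check inside the container"
    else
      print $1 ": venet with IP " $4
  }'
}

# Sample vzlist output from above (on a real node: vzlist | classify)
vzlist_output='CTID NPROC STATUS IP_ADDR HOSTNAME
100 20 running - centos6vz.proxmox.com
101 18 running - debian7vz.proxmox.com
102 20 running 192.168.15.142 ubuntu12vz.proxmox.com'

echo "$vzlist_output" | classify
```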
Make a backup of your containers
Firstly, choose on which storage you want to backup the containers.
# List available storages:
pvesm status
freenas nfs 1 27676672 128 27676544 0.50%
local dir 1 8512928 2122088 6390840 25.43%
nas-iso nfs 1 2558314496 421186560 2137127936 16.96%
By default, this storage does not allow backups to be stored, so make sure you enable the backup content type on it. (See Storage type Content)
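Backup content can be enabled from the CLI with `pvesm set`, which updates the storage configuration. A dry-run sketch (the command is only printed; remove the `echo` to apply it, and adjust the storage ID and content list to your own setup):

```shell
# Enable backup content on the storage named "local" (example values).
# Remove the leading `echo` to actually apply the change.
echo pvesm set local --content backup,vztmpl,iso
```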
Then backup all the containers
# Stop the container, and start a backup right after the shutdown:
vzctl stop 100 && vzdump 100 -storage local
vzctl stop 101 && vzdump 101 -storage local
vzctl stop 102 && vzdump 102 -storage local
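The three stop-and-backup pairs can also be written as one loop over the CT IDs. A dry-run sketch (commands are printed, not executed; drop the `echo` in `run` to execute on the real 3.x node):

```shell
run() { echo "+ $*"; }   # dry run: print instead of execute

# Stop and back up each container in turn (example CT IDs from above).
for CTID in 100 101 102; do
  run vzctl stop "$CTID"
  run vzdump "$CTID" -storage local
done
```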
At that point you can either:
A) Upgrade your Proxmox VE 3.x node to Proxmox VE 4.x
B) Copy the backups to a Proxmox VE 4.x node, and do the conversion on the Proxmox VE 4.x node
Suppose you follow option B) (copy the backups to the Proxmox VE 4.x node, and convert to LXC format)
# Copy each container tar backup to the pve4 node via ssh:
cd /var/lib/vz/dump/
scp vzdump-openvz-100-2015_08_27-10_46_47.tar [email protected]:/var/lib/vz/dump
scp vzdump-openvz-101-2015_08_27-10_50_44.tar [email protected]:/var/lib/vz/dump
scp vzdump-openvz-102-2015_08_27-10_56_34.tar [email protected]:/var/lib/vz/dump
Restore/Create LXCs based on your backup
Now switch to the Proxmox VE 4 node, and create containers based on the backup:
pct restore 100 /var/lib/vz/dump/vzdump-openvz-100-2015_08_27-10_46_47.tar
pct restore 101 /var/lib/vz/dump/vzdump-openvz-101-2015_08_27-10_50_44.tar
pct restore 102 /var/lib/vz/dump/vzdump-openvz-102-2015_08_27-10_56_34.tar
At that point you should be able to see your containers in the web interface, but they still have no network.
Please note: if you want or need to restore to a storage other than the default ‘local’ one, add “-storage STORAGEID” to the “pct restore” command.
E.g., if you have a ZFS storage called ‘local-zfs’, you can use the following command to restore:
pct restore 100 /var/lib/vz/dump/vzdump-openvz-100-2015_08_27-10_46_47.tar -storage local-zfs
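With many dumps, the CT ID can be derived from the backup file name and the restores looped. A sketch under the naming convention shown above (commands are printed, not executed; `local-zfs` is the example storage from above):

```shell
# Restore every OpenVZ dump found in the dump directory (dry run).
# The CT ID is the number right after "vzdump-openvz-" in the file name.
for f in /var/lib/vz/dump/vzdump-openvz-*.tar; do
  [ -e "$f" ] || continue          # skip if the glob matched nothing
  ctid=${f##*vzdump-openvz-}       # -> "100-2015_08_27-10_46_47.tar"
  ctid=${ctid%%-*}                 # -> "100"
  echo pct restore "$ctid" "$f" -storage local-zfs
done
```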
Add network configuration based on the original settings
LXC containers use virtual network adapters that are bridged to the physical interface of your host.
In Proxmox VE 3.x the configuration of each container using a veth device had to be done inside the container.
In Proxmox VE 4.x you can do this directly from the host.
Add network configuration via the GUI
For each container:
1. Firstly, select the container by clicking on it
2. Then, go to the Network tab
3. Next, click Add device
4. On the veth device panel, add a device with the parameters:
ID: net0
name: eth0
IP address: put your IP address and the corresponding netmask in the following format: 192.168.5.75/24
Add network configuration via the CLI
pct set 101 -net0 name=eth0,bridge=vmbr0,ip=192.168.15.144/24,gw=192.168.15.1
pct set 102 -net0 name=eth0,bridge=vmbr0,ip=192.168.15.145/24,gw=192.168.15.1
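The same `pct set` calls can be generated from a small CTID/IP/gateway table, which is handy when migrating many containers. A dry-run sketch with the example addresses from above (drop the `echo` to apply):

```shell
# One line per container: CTID, IP, gateway (example values from above).
while read -r ctid ip gw; do
  echo pct set "$ctid" -net0 "name=eth0,bridge=vmbr0,ip=${ip}/24,gw=${gw}"
done <<'EOF'
101 192.168.15.144 192.168.15.1
102 192.168.15.145 192.168.15.1
EOF
```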
Start the containers
pct start 100
pct start 101
pct start 102
And voilà, you can now log in to a container and check that your services are running:
pct enter 100
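Instead of an interactive shell, `pct exec` can run a single command inside the container, for example to confirm that the interface got its address. Shown as a dry run (remove the `echo`; the CT ID and the command are example values):

```shell
# Check eth0's IPv4 address inside container 100 without entering it.
echo pct exec 100 -- ip -4 addr show eth0
```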
[Need help with a similar query? We’d be glad to assist you]
In short, today we saw how our Support Techs migrate OpenVZ to LXC on Proxmox.