How to set up high-density VPS hosting using LXC (Linux Containers) and LXD

How many VPSs should I put on a server? As part of our hosting support services, we often get this question from VPS hosting providers. We’ve seen that anywhere from 1.5 to 2.5 times the normal server capacity is safe, depending on how the server is managed.

As a rule of thumb, only about 10% of VPS customers really hit their resource limits frequently. By moving this top 10% of heavy users onto a single low-density server, VPS providers can pack up to 2.5 times the normal number of customers into the rest of their servers.

Now, the question is, can this VPS density be higher? Is it safe to do so?

With hard server virtualization technologies like Xen and KVM, server density cannot go much higher than 2.5x, but container technologies like OpenVZ and Virtuozzo can attain much higher densities depending on how the server is configured.

So, when Canonical claimed that their new LXD “hypervisor” achieved 14.5 times better server density than KVM, we sat up and took notice. LXD is a “wrapper” around Linux Containers (LXC), which was the foundation for popular commercial products such as Virtuozzo.

So, the higher density claim was entirely plausible. The big difference was that LXD now made LXC administration easy and containers more secure. LXC could finally come out of the shadows and compete with the others.

After a few days of rigorous testing in our labs, we found LXD to be sufficiently stable to use in production servers. We set up an LXD server for a VPS host we support, and this is the story of how we did it, and the difference in density we noted.

LXD & LXC for VPS hosting – What was needed

The use of LXD and LXC is mostly limited to the DevOps world, and as such, its default settings are not oriented toward VPS hosting. So, for LXD to be a viable VPS hosting solution, we wanted the following to be resolved:

  1. Resource limitation – It should be easy for us to set and change disk space limit, CPU limit and memory limit on LXC containers.
  2. Quick provisioning based on templates – We should be able to provision a new container in a matter of seconds and we should be able to choose from a wide variety of server templates like Ubuntu, CentOS, LAMP, LNMP, etc.
  3. Services exposed through public IP – The default LXD configuration didn’t expose the containers on a public IP. We wanted the containers to be visible on a public IP, and customers to be able to access Web, Mail, FTP, etc.
  4. Backups – We should be able to automatically take daily backups of containers and put them in a central backup repository. This is needed to restore service in case of a hardware failure.
  5. VPS migration – We should be able to migrate a VPS to another server in case its resource usage exceeded what the server can offer.
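Most of these requirements map directly onto LXD commands. As an illustrative sketch (the container name, snapshot name and remote host are hypothetical, and the exact limits.* keys vary across LXD versions):

```
# lxc config set server01 limits.cpu 2
# lxc config set server01 limits.memory 1GB
# lxc snapshot server01 daily-backup
# lxc move server01 host2:server01
```

The first two commands cover requirement 1, snapshots form the basis of requirement 4, and `lxc move` against a remote covers requirement 5.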


Getting started with LXD & LXC

The basic setup was pretty straightforward. LXD is included in Ubuntu 15.04, so it was only a matter of running apt-get install lxd to get the hypervisor running. If you only have Ubuntu 14.04 servers, you’ll need to run the following commands:

# add-apt-repository ppa:ubuntu-lxc/lxd-stable
# apt-get update
# apt-get dist-upgrade
# apt-get install lxd

LXD has a repository for trusted images. So, we got our base images from their repo using the lxd-images command, like so:

# lxd-images import ubuntu --alias ubuntu

Then it was just a matter of launching this image to get our first container, named server01.

# lxc launch ubuntu server01
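The new container can then be inspected and entered with the usual LXD commands, for example:

```
# lxc list
# lxc exec server01 -- /bin/bash
```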

Putting the container on a public IP

Now, we had an Ubuntu server, but the default configuration of LXD assigns private IPs to containers, which are not visible from the internet. To be able to assign public IPs, the default network interface of the host server should be bridged to the containers.

For that, first we changed the server ethernet (eth0) to a bridge (br0), as shown below:

In /etc/network/interfaces,

auto eth0
iface eth0 inet static

was changed to

auto br0
iface br0 inet static
bridge_ports eth0
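Tying these together, a complete static br0 stanza would look something like this (the 203.0.113.x addresses are placeholders; use your own subnet, mask and gateway):

```
auto br0
iface br0 inet static
    address 203.0.113.10
    netmask 255.255.255.0
    gateway 203.0.113.1
    bridge_ports eth0
```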

Then we rebooted the server, and once we saw that the default IP was on br0, we disabled USE_LXC_BRIDGE in /etc/default/lxc-net, and set the network link (lxc.network.link) to br0 in the default LXC configuration at /etc/lxc/default.conf. For all containers already created, the following command was used to set their network link to br0:

# lxc config set server01 raw.lxc 'lxc.network.link = br0'

Then a static IP was added to the container by editing /var/lib/lxd/containers/<container-name>/rootfs/etc/network/interfaces.d/eth0.cfg.

Note: While configuring the container’s eth0, proper settings need to be defined based on the IP subnet (like subnet mask, gateway, etc.).
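For example, a static configuration in eth0.cfg might look like this (the 203.0.113.x addresses and the nameserver are placeholders for the customer's actual public IP, gateway and resolver):

```
auto eth0
iface eth0 inet static
    address 203.0.113.20
    netmask 255.255.255.0
    gateway 203.0.113.1
    dns-nameservers 8.8.8.8
```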

Once the changes were saved, the container was restarted, and the new public IP became visible from the internet.

A server visible on the internet needs to be secured. As is standard with all our VPS deployments, a set of security rules was then added to the network and firewall settings so that the VPS customer would be protected against a slew of common attacks prevalent on the internet.
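The exact rule set isn't reproduced here, but as a minimal sketch, host-side anti-spoofing for a bridged container can use iptables' physdev match (the veth device name and the 203.0.113.20 address are placeholders for the container's actual host-side interface and assigned public IP):

```
# iptables -A FORWARD -m physdev --physdev-in veth-server01 ! -s 203.0.113.20 -j DROP
```

A rule like this drops any forwarded packet leaving the container with a source address other than the one assigned to it.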

All these steps were then put into a simple shell script for fast provisioning of VPSs.
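The script itself isn't reproduced here, but its core could be sketched as a small wrapper function like the one below. The container name, image and limits.* values are illustrative, and setting LXC=echo gives a dry run that just prints the commands instead of calling LXD.

```shell
# Hypothetical sketch of the provisioning wrapper; not the exact script we use.
provision() {
    name=$1
    image=${2:-ubuntu}
    lxc=${LXC:-lxc}          # override with LXC=echo for a dry run

    "$lxc" launch "$image" "$name"                # create the container
    "$lxc" config set "$name" limits.cpu 2        # CPU limit
    "$lxc" config set "$name" limits.memory 1GB   # memory limit
}
```

In production, this is where the bridge configuration, static IP assignment and firewall rules from the steps above would be appended as well.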





  1. Good writeup and explanation of LXD. Take note of the work Canonical is doing with OpenStack, LXD/LXC and Juju.

    The “single installer method” in the following GitHub link will explain how to install all of OpenStack in LXC containers, and using the “--use-nclxd” option enables LXD via the Canonical-provided “nclxd” Neutron plugin, so Nova can create/manage LXC containers as a “cloud” compute option.

    • Thanks Brian.

      We are already trying out LXD with OpenStack. Will post our observations soon. BTW, I didn’t see a link in your comment. Did you mean this one?


      Will try it out and post the details here. 🙂

      EDIT: Saw your next comment after I posted this one. Thanks for the link. Will check it out.

  2. Thank you for helping to set up a high-density VPS, it is very useful information for me.

    • You are welcome. 🙂

      Please do let me know if you need any help in customization.

  3. thanks for the very useful article!

  4. What prevents the VPS user from changing the public IP address from within the container, and taking over another’s?


