
How to set up high density VPS hosting using LXC (Linux Containers) and LXD

Oct 9, 2015

How many VPSs should I put on a server? As part of our hosting support services, we often get this question from VPS hosting providers. We’ve seen that anywhere from 1.5 to 2.5 times the normal server capacity is safe, depending on how the server is managed.

As a rule of thumb, only about 10% of VPS customers hit their resource limits frequently. By moving these top 10% of heavy users onto a single low-density server, VPS providers can pack up to 2.5 times the normal number of customers into the rest of their servers.

Now, the question is, can this VPS density be higher? Is it safe to do so?

With hardware virtualization technologies like Xen and KVM, server density cannot go much higher than 2.5, but container technologies like OpenVZ and Virtuozzo can attain much higher densities depending on how the server is configured.

So, when Canonical claimed that their new LXD “hypervisor” achieved 14.5 times better server density than KVM, we sat up and took notice. LXD is a “wrapper” around Linux Containers (LXC), which was the foundation for popular commercial products such as Virtuozzo.

So, the higher density claim was entirely plausible. The big difference was that LXD now made LXC administration easy and containers more secure. LXC could finally come out of the shadows and compete with the others.

After a few days of rigorous testing in our labs, we found LXD to be sufficiently stable for production servers. We set up an LXD server for a VPS host we support, and this is the story of how we did it and the difference in density we noted.

LXD & LXC for VPS hosting – What was needed

The use of LXD and LXC is mostly limited to the DevOps world, and as such, its default settings are not oriented toward VPS hosting. So, for LXD to be a viable VPS hosting solution, we wanted the following resolved:

  1. Resource limitation – It should be easy for us to set and change disk space, CPU, and memory limits on LXC containers.
  2. Quick provisioning based on templates – We should be able to provision a new container in a matter of seconds, choosing from a wide variety of server templates like Ubuntu, CentOS, LAMP, LNMP, etc.
  3. Services exposed through public IP – The default LXD configuration didn’t expose the containers on a public IP. We wanted the containers to be visible on a public IP, with customers able to access services such as Web, Mail, and FTP.
  4. Backups – We should be able to automatically take daily backups of containers and put them in a central backup repository, so that service can be restored in case of a hardware failure.
  5. VPS migration – We should be able to migrate a VPS to another server in case its resource usage exceeds what the server can offer.


Getting started with LXD & LXC

The basic setup was pretty straightforward. LXD is included in Ubuntu 15.04, so it was only a matter of running apt-get install lxd to get the hypervisor running. If you only have Ubuntu 14.04 servers, you’ll need to run the following commands:

# add-apt-repository ppa:ubuntu-lxc/lxd-stable
# apt-get update
# apt-get dist-upgrade
# apt-get install lxd

LXD has a repository of trusted images, so we got our base images from their repo using the lxd-images command, like so:

# lxd-images import ubuntu --alias ubuntu

Then it was just a matter of launching the imported image to get our first container, named server01.

# lxc launch ubuntu server01
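
A quick check with lxc list shows the container’s state and any assigned addresses:

# lxc list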

Putting the container on a public IP

Now we had an Ubuntu server, but the default configuration of LXD assigns private IPs to containers, which are not visible from the internet. To be able to assign public IPs, the default network interface of the host server has to be bridged to the containers.

For that, we first changed the server’s ethernet interface (eth0) to a bridge (br0), as shown below:

In /etc/network/interfaces,

auto eth0
iface eth0 inet static

was changed to

auto br0
iface br0 inet static
bridge_ports eth0
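
Carried to completion, the br0 stanza looks something like this; the 203.0.113.x values below are placeholders for the host’s actual static settings:

auto br0
iface br0 inet static
    address 203.0.113.10
    netmask 255.255.255.0
    gateway 203.0.113.1
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0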

Then we rebooted the server. Once we saw that the default IP was on br0, we disabled USE_LXC_BRIDGE in /etc/default/lxc-net and set lxc.network.link to br0 in the default LXC profile at /etc/lxc/default.conf. For containers that were already created, the following command was used to set their network link to br0:

# lxc config set server01 raw.lxc 'lxc.network.link = br0'

Then a static IP was added to the container by editing /var/lib/lxd/containers/<container-name>/rootfs/etc/network/interfaces.d/eth0.cfg.

Note: While configuring the container’s eth0, proper settings need to be defined based on the IP subnet (subnet mask, gateway, etc.).
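
As a sketch, the eth0.cfg inside the container would look like the following, with the addresses again being placeholders for the customer’s assigned public IP and subnet:

auto eth0
iface eth0 inet static
    address 203.0.113.20
    netmask 255.255.255.0
    gateway 203.0.113.1
    dns-nameservers 8.8.8.8 8.8.4.4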

Once the changes were saved, the container was restarted, and the new public IP became visible from the internet.

A server visible on the internet needs to be secure. As is standard with all our VPS deployments, a set of security rules was then added to the network and firewall settings so that the VPS customer would be immune to a slew of common attacks prevalent on the internet.

All these steps were then put into a simple shell script for fast provisioning of VPSs.
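
The script itself is specific to that deployment, but a minimal sketch of the idea, assuming the br0 setup above and with the subnet values as placeholders, looks like this:

#!/bin/bash
# provision.sh -- illustrative sketch, not our production script
# Usage: provision.sh <container-name> <public-ip>
NAME=$1
IP=$2

# Launch a container from the imported base image
lxc launch ubuntu "$NAME"

# Attach the container's network link to the host bridge
lxc config set "$NAME" raw.lxc 'lxc.network.link = br0'

# Write a static public IP config into the container's rootfs
cat > "/var/lib/lxd/containers/$NAME/rootfs/etc/network/interfaces.d/eth0.cfg" <<EOF
auto eth0
iface eth0 inet static
    address $IP
    netmask 255.255.255.0
    gateway 203.0.113.1
EOF

# Restart the container so it comes up on its public IP
lxc restart "$NAME"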

Customizing the container and creating images

Now we had a container running a stable server image. What we needed next were images customized for various purposes, such as Mail server, Web server, VPN server, etc., so that we could commission special-purpose servers in seconds.

For this, relevant packages were installed on the base server image, configuration settings were optimized for the container environment, and firewall rules were updated to ensure smooth connectivity. These images were then saved using the lxc publish command, like so:

# lxc publish server01 --alias=ubuntu1404-LNMP

The above command shows the creation of an image for an LNMP server from an Ubuntu 14.04 base image. The same process was repeated for all the other image variants, and we then had a list of server templates we could use to quickly provision VPSs.
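
For example, the package-installation step for the LNMP image can be done with lxc exec against the running base container before publishing; the package list below is abridged and uses Ubuntu 14.04 package names:

# lxc exec server01 -- apt-get update
# lxc exec server01 -- env DEBIAN_FRONTEND=noninteractive apt-get install -y nginx mysql-server php5-fpm
# lxc publish server01 --alias=ubuntu1404-LNMP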

Note: Based on the purpose, additional settings may need to be applied to the LXC container to cover all possible usage scenarios. For example, LXC mount entries need to be adjusted to allow GUI (XOrg) display on VPN servers.

New containers could now be created within seconds using the lxc launch command like this:

# lxc launch ubuntu1404-LNMP server02

The full list of images could be viewed using:

# lxc image list

Configuring resource limits

Based on the VPS plan, the resources available to each container need to be limited. The CPU and memory limits of a container were changed using these commands:

# lxc config set server01 limits.cpus 1
# lxc config set server01 limits.memory 500

The above commands set the number of cores available to the container server01 to 1, and its memory to 500 MB. We saw that some container images had issues booting up with a low memory setting. This was fixed by adjusting the swap space in the LXC container profiles.

For new containers, the resource settings were defined in profiles under /etc/lxc/, and those profiles were referenced when creating new containers.
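
As an illustration, a small-plan profile (say, /etc/lxc/vps-small.conf; the file name and values here are ours) could set the cgroup limits directly, with the memsw line being the memory-plus-swap cap mentioned above:

lxc.cgroup.cpuset.cpus = 0
lxc.cgroup.memory.limit_in_bytes = 500M
lxc.cgroup.memory.memsw.limit_in_bytes = 1024M

Such a profile can then be passed to lxc-create at creation time with -f /etc/lxc/vps-small.conf.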

To limit the disk space, LXC containers were created on an LVM volume (which merits an entire article on its own), and the create command looked like this:

# lxc-create -t ubuntu1404-LNMP -n server03 -B lvm --fssize=5G

The above command created a container with 5 GB of disk space.
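
On the host, the new 5 GB logical volume backing the container can be confirmed with the standard LVM tools:

# lvs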


Taking VPS backups

To take backups, we used the lxc-clone command like so:

# lxc-clone -P /path/to/backup/drive/ server01 server01-$(date +%F)

This created a backup of the container server01 on our backup drive. Restoring the backup is as simple as copying the directory back to /var/lib/lxd/containers/. A backup script was created to automate this process, and space was saved by compressing the archives.
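
A minimal sketch of such a script, assuming lxc-ls can enumerate the containers and using date-stamped archives (the paths are placeholders), looks like this:

#!/bin/bash
# nightly-backup.sh -- illustrative sketch
BACKUP_DIR=/path/to/backup/drive
STAMP=$(date +%F)

for CT in $(lxc-ls); do
    # Clone the container into the backup area
    lxc-clone -P "$BACKUP_DIR" "$CT" "$CT-$STAMP"
    # Compress the clone and drop the uncompressed copy to save space
    tar -czf "$BACKUP_DIR/$CT-$STAMP.tar.gz" -C "$BACKUP_DIR" "$CT-$STAMP"
    rm -rf "${BACKUP_DIR:?}/$CT-$STAMP"
done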

Migrating the VPS containers

We never migrated a container to another server in the production environment, but in our test labs, the workable solution was to back up and restore the container on another server.
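
In the lab, that looked roughly like this; the hostnames are placeholders, and the container is stopped first so the copy is consistent:

# lxc stop server01
# lxc-clone -P /tmp/migrate/ server01 server01-copy
# rsync -a /tmp/migrate/server01-copy/ root@newhost:/var/lib/lxd/containers/server01/

Then, on the new server:

# lxc start server01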

The LXD docs say that the lxc move command should work when security settings are disabled. Since disabling security is not an option on production servers, that solution was not implemented.

Right now, the production server hosts 10 VPSs and everything works like a charm. That is far from the server’s capacity; going by the test results in our lab, this server (with 16 GB RAM) can easily take 40 VPSs or more.

Conclusion

Server density has always been a critical factor in running a VPS hosting business. For long, container technologies like OpenVZ and Virtuozzo have ruled the roost in delivering economical VPS hosting solutions.

Now, with the release of LXD, the VPS hosting business has a strong new choice that is as good as, or even better than, the others. If the steady pace of development is anything to go by, LXD could become the de facto high-density VPS hosting solution sooner rather than later.

Bobcares helps VPS and cloud hosting providers design, deploy, customize and maintain virtualized infrastructure that is custom tailored to meet their business goals. Contact Us to know how we can help you.

7 Comments

  1. bmullan

    Good writeup and explanation of LXD. Take note of the work Canonical is doing with OpenStack, LXD/LXC and Juju.

    The “single installer method” in the following github link explains how to install all of OpenStack in LXC containers, using the “--use-nclxd” option to enable the Canonical-provided “nclxd” Neutron plugin, which lets Nova create/manage LXC containers as a “cloud” compute option.

    • Visakh S

      Thanks Brian.

      We are already trying out LXD with OpenStack. Will post our observations soon. BTW, I didn’t see a link in your comment. Did you mean this one?

      https://github.com/Ubuntu-Solutions-Engineering/openstack-installer/issues/673#issuecomment-137955614

      Will try it out and post the details here. 🙂

      EDIT: Saw your next comment after I posted this one. Thanks for the link. Will check it out.

  2. NJ NatcoWeb.com

    Thank you for helping to set up a high-density VPS, it is very useful information for me.

    • Visakh S

      You are welcome. 🙂

      Please do let me know if you need any help in customization.

  3. Tom

    thanks for the very useful article!

  4. Sune Beck

    What prevents the VPS user from changing the public IP address from within the container, and take over another’s?

