Bobcares

Rancher LXC Proxmox | All About

by | Jan 24, 2024

Here is an overview of Rancher LXC Proxmox. Our LXC/LXD Support team is here to help you with your questions and concerns.

Proxmox is a popular open-source hypervisor with an excellent management interface.

Today, we are going to explore deploying K3s on Proxmox using LXC containers. This helps boost efficiency and resource utilization.

Creating LXC Containers

To begin with, we have to create LXC containers in the Proxmox UI. These containers will host our K3s instances. Furthermore, they need additional configuration to ensure smooth operation.

  1. First, click on “Create CT” and enable the advanced settings.
  2. Then, uncheck the “Unprivileged container” option.
  3. Now, enter the container details. Make sure that every machine has a static IP address configured in the network settings. Furthermore, if we use an internal DNS server, we should configure it on the next page.
  4. Select a template and allocate an appropriate root disk size.
  5. On the last page, uncheck “Start after created” and click finish.

Now, it is time to grant proper permissions to our containers. So, we have to SSH into our Proxmox host as the root user. Then, edit the configuration files for the created containers found in the /etc/pve/lxc directory.
In the /etc/pve/lxc directory, we can find files called XXX.conf. Here, XXX are the ID numbers of the containers we just created.

Next, add the following lines to each container’s configuration:

lxc.apparmor.profile: unconfined
lxc.cgroup.devices.allow: a
lxc.cap.drop:
lxc.mount.auto: "proc:rw sys:rw"
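
If several containers need the same overrides, the edit can be scripted. Below is a minimal sketch; the container IDs (201 and 202) are assumptions, and CONF_DIR defaults to a scratch directory so it can be dry-run safely. On a real Proxmox host, set CONF_DIR=/etc/pve/lxc before running.

```shell
# Sketch: append the required overrides to each container's config.
# IDs 201/202 are placeholders; CONF_DIR defaults to a scratch dir
# for a safe dry run (use /etc/pve/lxc on the real host).
CONF_DIR="${CONF_DIR:-$(mktemp -d)}"
for id in 201 202; do
  conf="$CONF_DIR/$id.conf"
  touch "$conf"   # the real file already exists on a Proxmox host
  cat >> "$conf" <<'EOF'
lxc.apparmor.profile: unconfined
lxc.cgroup.devices.allow: a
lxc.cap.drop:
lxc.mount.auto: "proc:rw sys:rw"
EOF
done
```
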

Next, we need to publish the kernel boot configuration into the container. The Kubelet uses this configuration to determine various settings for the runtime, so we need to copy it into the container. Start the container from the Proxmox web UI first, and then run the following command on the Proxmox host:

pct push <container id> /boot/config-$(uname -r) /boot/config-$(uname -r)
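
With more than one container, the push can be looped. This dry-run sketch (container IDs 201 and 202 are assumptions) only assembles and prints the commands so they can be reviewed; pipe the output to sh, or remove the printf indirection, to execute them:

```shell
# Dry-run sketch: build one "pct push" command per container so the
# commands can be reviewed before execution on the Proxmox host.
KVER="$(uname -r)"
CMDS=""
for id in 201 202; do
  CMDS="${CMDS}pct push $id /boot/config-$KVER /boot/config-$KVER
"
done
printf '%s' "$CMDS"
```
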

Now, make sure that /dev/kmsg exists in each container by creating the necessary files and systemd service.

This is not present by default in the containers and is used by the Kubelet for some logging operations. Create the file /usr/local/bin/conf-kmsg.sh in each container with the following content:


#!/bin/sh -e
if [ ! -e /dev/kmsg ]; then
    ln -s /dev/console /dev/kmsg
fi
mount --make-rshared /

Here, we symlink /dev/console to /dev/kmsg in case the latter does not exist, and mark the root filesystem's mounts as shared, which the Kubelet expects for mount propagation. Now, we have to create the file /etc/systemd/system/conf-kmsg.service and add the following content:


[Unit]
Description=Make sure /dev/kmsg exists

[Service]
Type=simple
RemainAfterExit=yes
ExecStart=/usr/local/bin/conf-kmsg.sh
TimeoutStartSec=0

[Install]
WantedBy=default.target

Then, run the following to enable the service:

chmod +x /usr/local/bin/conf-kmsg.sh
systemctl daemon-reload
systemctl enable --now conf-kmsg
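
Since these steps repeat for every container, they can be applied in one shot. A hedged sketch: BIN_DIR and UNIT_DIR default to a scratch area so it can be dry-run anywhere; inside the real container, set BIN_DIR=/usr/local/bin and UNIT_DIR=/etc/systemd/system.

```shell
# Sketch: write the kmsg helper and its systemd unit, then enable it.
# Scratch-dir defaults make this safe to dry-run outside a container.
SCRATCH="$(mktemp -d)"
BIN_DIR="${BIN_DIR:-$SCRATCH/bin}"
UNIT_DIR="${UNIT_DIR:-$SCRATCH/units}"
mkdir -p "$BIN_DIR" "$UNIT_DIR"

cat > "$BIN_DIR/conf-kmsg.sh" <<'EOF'
#!/bin/sh -e
if [ ! -e /dev/kmsg ]; then
    ln -s /dev/console /dev/kmsg
fi
mount --make-rshared /
EOF
chmod +x "$BIN_DIR/conf-kmsg.sh"

cat > "$UNIT_DIR/conf-kmsg.service" <<'EOF'
[Unit]
Description=Make sure /dev/kmsg exists

[Service]
Type=simple
RemainAfterExit=yes
ExecStart=/usr/local/bin/conf-kmsg.sh
TimeoutStartSec=0

[Install]
WantedBy=default.target
EOF

# Only reload/enable when writing to the real unit directory under systemd.
if command -v systemctl >/dev/null 2>&1 && [ "$UNIT_DIR" = /etc/systemd/system ]; then
  systemctl daemon-reload
  systemctl enable --now conf-kmsg
fi
```
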

Setting Up Container OS & K3s

Once the containers are ready, it is time to set up Rancher K3s on them.

Now, run this command on the control node to set up K3s:

curl -fsL https://get.k3s.io | sh -s - --disable traefik --node-name control.k8s

As we have to join our worker node to the K3s cluster, we need the cluster token.

We can get this by running the following command on the control node:

cat /var/lib/rancher/k3s/server/node-token

Then, head to the worker node and run this command to set up K3s and join the existing cluster. Replace <control-node-ip> with the control node's IP address and <cluster-token> with the token from the previous step:

curl -fsL https://get.k3s.io | K3S_URL=https://<control-node-ip>:6443 K3S_TOKEN=<cluster-token> sh -s - --node-name worker-1.k8s

After the above steps, we will be able to see the worker node appear when we run the “kubectl get nodes” command.

How to Set up NGINX Ingress Controller

Now, we are going to use the ingress-nginx/ingress-nginx Helm chart to set up the NGINX ingress controller.

This is done by adding the repo and loading the repo’s metadata. Then we have to install the chart as seen here:

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install nginx-ingress ingress-nginx/ingress-nginx --set controller.publishService.enabled=true

The controller.publishService.enabled setting tells the controller to publish the ingress service IP addresses to the ingress resources.

Once the chart completes, we will be able to see the resources appear in the output of the “kubectl get all” command.

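
To route traffic through the new controller, each application needs an Ingress resource. Here is a minimal hedged example; the host demo.k8s, the service name my-app, and port 80 are placeholders, not names from this setup. The manifest is written to a file so it can be reviewed and then applied with kubectl apply -f:

```shell
# Sketch: a minimal Ingress manifest for the ingress-nginx controller.
# Host, backend service name, and port are placeholder assumptions.
cat > /tmp/demo-ingress.yaml <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo
spec:
  ingressClassName: nginx
  rules:
  - host: demo.k8s
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app
            port:
              number: 80
EOF
# Then: kubectl apply -f /tmp/demo-ingress.yaml
```
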

Alternative method

Here is an alternative method that uses Rancher RKE1 with Docker.

Did you know that Rancher's RKE1 still uses Docker as its container runtime, even though Docker support has been deprecated by the Kubernetes project?

In fact, a few kernel modules, like overlay and aufs, have to be enabled in this case. Usually, Docker loads these modules itself, but this is not allowed inside an LXC container. So we have to do it ourselves with these commands:

modprobe aufs
modprobe overlay

Alternatively, we can make sure they are loaded during system boot by creating a file in /etc/modules-load.d:

cat > /etc/modules-load.d/docker.conf <<EOF
aufs
overlay
EOF
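
We can verify that the modules are actually available before installing Docker. A small sketch; note that on modern kernels overlay may be built in rather than loaded as a module, so /proc/filesystems is checked as well:

```shell
# Check which of the required modules are available; overlay may be
# built into the kernel, so /proc/filesystems is consulted too.
missing=""
checked=0
for m in aufs overlay; do
  checked=$((checked + 1))
  if grep -qw "$m" /proc/modules || grep -qw "$m" /proc/filesystems; then
    echo "$m: available"
  else
    missing="$missing $m"
    echo "$m: missing (try: modprobe $m)"
  fi
done
```
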

At this point, we have to create a container on the Proxmox host.

For example:

pct create containerid local:vztmpl/ubuntu-xx.xx-standard_xx.xx-1_amd64.tar.gz --cores 4 --memory 4096 --swap 2048 --hostname HOSTNAME --rootfs local:20 --net0 name=eth0,ip=192.168.0.100/24,bridge=vmbr10,gw=192.168.0.1 --onboot 1

The container needs to be privileged, with the nesting feature enabled, to run Docker inside it:

pct set containerid --features nesting=1

Now we can start the container, enter it, install Docker, and run a hello-world container.

Since the container should not be accessible via SSH and will only be used as a Docker host, we remove some packages:

pct start containerid
pct enter containerid
apt-get -y remove openssh-server postfix accountsservice networkd-dispatcher rsyslog cron dbus apparmor
wget -O - https://releases.rancher.com/install-docker/20.10.sh | sh
reboot

In case we don’t plan to run Kubernetes on the host, we can simply install the docker.io package from the distribution repositories instead of using the Rancher install script.

Then run:

docker run hello-world

With the deployment steps seen above, we can use the power of Proxmox and LXC containers to efficiently run K3s and manage Kubernetes workloads. Whether we choose K3s in standard LXC containers or opt for Docker within LXC containers, these configurations offer a robust Kubernetes environment on Proxmox.

[Need assistance with a different issue? Our team is available 24/7.]

Conclusion

In brief, our Support Experts walked us through deploying Rancher K3s on Proxmox using LXC containers, along with an alternative RKE1 setup running Docker inside an LXC container.
