
Ansible proxmox kubernetes


Wondering how to set up Kubernetes on Proxmox VMs with Ansible? Our in-house experts are here to help you out with this article. Our Proxmox support team is always ready to lend a hand with your queries and issues.


Today, let us see the steps our Support techs follow to set up Kubernetes using Ansible.

Initial Ansible Housekeeping

First we need to specify some variables similar to how we did it with Terraform.

Create a file in your working directory called ansible-vars.yml and put the following into it:

# specifying a CIDR for our cluster to use.
# can be basically any private range except for ranges already in use.
# apparently it isn't too hard to run out of IPs in a /24, so we're using a /22
pod_cidr: "10.16.0.0/22"

# this defines what the join command filename will be
join_command_location: "join_command.out"

# setting the home directory for retrieving, saving, and executing files
home_dir: "/home/ubuntu"

Equally important (and potentially a better starting point than the variables) is defining the hosts. In ansible-hosts.txt:

# this is a basic file putting different hosts into categories
# used by ansible to determine which actions to run on which hosts
[all]
10.98.1.41
10.98.1.51
10.98.1.52

[kube_server]
10.98.1.41

[kube_agents]
10.98.1.51
10.98.1.52

[kube_storage]
#10.98.1.61
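Before running any playbooks, it's worth confirming Ansible can actually reach every host over SSH. A quick connectivity check (assuming key-based SSH access is already set up for the ubuntu user) could look like:

ansible -i ansible-hosts.txt all -m ping -u ubuntu

Every host should respond with "pong".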

Installing Kubernetes dependencies with Ansible

Then we need a script to install the dependencies and the Kubernetes utilities themselves.

This script does quite a few things: it gets apt ready to install packages, adds the Docker and Kubernetes signing keys, installs Docker and Kubernetes, disables swap, and adds the ubuntu user to the docker group.

ansible-install-kubernetes-dependencies.yml:

# https://kubernetes.io/blog/2019/03/15/kubernetes-setup-using-ansible-and-vagrant/
# https://github.com/virtualelephant/vsphere-kubernetes/blob/master/ansible/cilium-install.yml#L57
# ansible .yml files define what tasks/operations to run
---
- hosts: all  # run on the "all" hosts category from ansible-hosts.txt
  # become means run as superuser
  become: true
  remote_user: ubuntu
  tasks:
    - name: Install packages that allow apt to be used over HTTPS
      apt:
        name: "{{ packages }}"
        state: present
        update_cache: yes
      vars:
        packages:
          - apt-transport-https
          - ca-certificates
          - curl
          - gnupg-agent
          - software-properties-common

    - name: Add an apt signing key for Docker
      apt_key:
        url: https://download.docker.com/linux/ubuntu/gpg
        state: present

    - name: Add apt repository for stable version
      apt_repository:
        repo: deb [arch=amd64] https://download.docker.com/linux/ubuntu xenial stable
        state: present

    - name: Install Docker and its dependencies
      apt:
        name: "{{ packages }}"
        state: present
        update_cache: yes
      vars:
        packages:
          - docker-ce
          - docker-ce-cli
          - containerd.io

    - name: Verify Docker is installed, enabled, and started
      service:
        name: docker
        state: started
        enabled: yes

    - name: Remove swapfile from /etc/fstab
      mount:
        name: "{{ item }}"
        fstype: swap
        state: absent
      with_items:
        - swap
        - none

    - name: Disable swap
      command: swapoff -a
      when: ansible_swaptotal_mb > 0  # only needed if swap is actually enabled

    - name: Add an apt signing key for Kubernetes
      apt_key:
        url: https://packages.cloud.google.com/apt/doc/apt-key.gpg
        state: present

    - name: Add apt repository for Kubernetes
      apt_repository:
        repo: deb https://apt.kubernetes.io/ kubernetes-xenial main
        state: present
        filename: kubernetes.list

    - name: Install Kubernetes binaries
      apt:
        name: "{{ packages }}"
        state: present
        update_cache: yes
      vars:
        packages:
          # it is usually recommended to specify which version you want to install
          - kubelet=1.23.6-00
          - kubeadm=1.23.6-00
          - kubectl=1.23.6-00

    - name: Hold Kubernetes binary versions (prevent them from being updated)
      dpkg_selections:
        name: "{{ item }}"
        selection: hold
      loop:
        - kubelet
        - kubeadm
        - kubectl

    # this has to do with nodes having different internal/external/mgmt IPs
    # {{ node_ip }} comes from vagrant, which I'm not using yet
    # - name: Configure node ip
    #   lineinfile:
    #     path: /etc/default/kubelet
    #     line: KUBELET_EXTRA_ARGS=--node-ip={{ node_ip }}

    - name: Restart kubelet
      service:
        name: kubelet
        daemon_reload: yes
        state: restarted

    - name: Add the ubuntu user to the docker group
      user:
        name: ubuntu
        groups: docker
        append: yes  # append so the user keeps its existing groups

    - name: Reboot to apply the swap disable
      reboot:
        reboot_timeout: 180  # allow 3 minutes for the reboot to happen

With our fresh VMs straight outta Terraform, let’s now run the Ansible script to install the dependencies.

Ansible command to run the Kubernetes dependency playbook (pretty straightforward: -i specifies the inventory/hosts file, and the next argument is the playbook file itself):

ansible-playbook -i ansible-hosts.txt ansible-install-kubernetes-dependencies.yml

Initialize the Kubernetes cluster on the master

With the dependencies installed, we can now proceed to initialize the Kubernetes cluster itself on the server/master machine.

This script sets Docker to use the systemd cgroups driver, initializes the cluster, copies the cluster config files to the ubuntu user's home directory, and installs the Calico networking plugin and the standard Kubernetes dashboard.

ansible-init-cluster.yml:

---
- hosts: kube_server
  become: true
  remote_user: ubuntu
  vars_files:
    - ansible-vars.yml
  tasks:
    - name: Set Docker to use the systemd cgroups driver
      copy:
        dest: "/etc/docker/daemon.json"
        content: |
          {
            "exec-opts": ["native.cgroupdriver=systemd"]
          }

    - name: Restart Docker
      service:
        name: docker
        state: restarted

    - name: Initialize Kubernetes cluster
      command: "kubeadm init --pod-network-cidr {{ pod_cidr }}"
      args:
        creates: /etc/kubernetes/admin.conf  # skip this task if the file already exists
      register: kube_init

    - name: Show kube init info
      debug:
        var: kube_init

    - name: Create .kube directory in user home
      file:
        path: "{{ home_dir }}/.kube"
        state: directory
        owner: 1000
        group: 1000

    - name: Configure .kube/config file in user home
      copy:
        src: /etc/kubernetes/admin.conf
        dest: "{{ home_dir }}/.kube/config"
        remote_src: yes
        owner: 1000
        group: 1000

    - name: Restart kubelet for config changes
      service:
        name: kubelet
        state: restarted

    - name: Get Calico networking manifest
      get_url:
        url: https://projectcalico.docs.tigera.io/manifests/calico.yaml
        dest: "{{ home_dir }}/calico.yaml"

    - name: Apply Calico networking
      become: no
      command: kubectl apply -f "{{ home_dir }}/calico.yaml"

    - name: Get dashboard manifest
      get_url:
        url: https://raw.githubusercontent.com/kubernetes/dashboard/v2.5.0/aio/deploy/recommended.yaml
        dest: "{{ home_dir }}/dashboard.yaml"

    - name: Apply dashboard
      become: no
      command: kubectl apply -f "{{ home_dir }}/dashboard.yaml"

Initializing the cluster took 53s on my machine.

One of the first tasks is downloading the container images, which takes up the majority of that time.

You should get 13 ok and 10 changed with the init.

I had two extra user check tasks because I was fighting some issues with applying the Calico networking.

ansible-playbook -i ansible-hosts.txt ansible-init-cluster.yml

Getting the join command and joining worker nodes

With the master up and running, we need to retrieve the join command.
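kubeadm can generate a fresh join command at any time, so the simplest approach is a small playbook that runs it on the master and saves the output to the local file defined in ansible-vars.yml. Here is a minimal sketch (the filename ansible-get-join-command.yml and the task layout are assumptions; kubeadm token create --print-join-command is the standard way to produce the command):

---
- hosts: kube_server
  become: false
  remote_user: ubuntu
  vars_files:
    - ansible-vars.yml
  tasks:
    - name: Generate the join command on the master
      become: true
      command: kubeadm token create --print-join-command
      register: join_command

    - name: Save the join command to a file on the Ansible control machine
      local_action: copy content="{{ join_command.stdout_lines[0] }}" dest="{{ join_command_location }}"

Run it with ansible-playbook -i ansible-hosts.txt ansible-get-join-command.yml, which leaves the command in join_command.out.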

Now to join the workers/agents, our Ansible playbook will read that join_command.out file and use it to join the cluster.

ansible-join-workers.yml:

---
- hosts: kube_agents
  become: true
  remote_user: ubuntu
  vars_files:
    - ansible-vars.yml
  tasks:
    - name: Set Docker to use the systemd cgroups driver
      copy:
        dest: "/etc/docker/daemon.json"
        content: |
          {
            "exec-opts": ["native.cgroupdriver=systemd"]
          }

    - name: Restart Docker
      service:
        name: docker
        state: restarted

    - name: Read the join command from the local file
      debug: msg={{ lookup('file', join_command_location) }}
      register: join_command_local

    - name: Show join command
      debug:
        var: join_command_local.msg

    - name: Join agents to the cluster
      command: "{{ join_command_local.msg }}"

And to actually join:

ansible-playbook -i ansible-hosts.txt ansible-join-workers.yml

With the two worker nodes/agents joined up to the cluster, you now have a full-fledged Kubernetes cluster up and running! Wait a few minutes, then log into the server and run kubectl get nodes to verify they are present and active (STATUS = Ready):

kubectl get nodes
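The output should look roughly like this (the hostnames here are placeholders; yours will match your VM names, and the ages will vary):

NAME          STATUS   ROLES                  AGE   VERSION
k8s-server    Ready    control-plane,master   10m   v1.23.6
k8s-agent-1   Ready    <none>                 2m    v1.23.6
k8s-agent-2   Ready    <none>                 2m    v1.23.6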

Kubernetes Dashboard

Everyone likes a dashboard. Kubernetes has a good one for poking/prodding around.

It appears to basically be a visual representation of most (all?) of the "get information" types of commands you can run with kubectl (kubectl get nodes, kubectl get pods, kubectl describe, etc.).

The dashboard was installed with the cluster init script but we still need to create a service account and cluster role binding for the dashboard.

Dashboard user/role creation

On the master machine, create a file called sa.yaml with the following contents:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard

And another file called clusterrole.yaml:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: admin-user
    namespace: kubernetes-dashboard

Apply both, then get the token to be used for logging in. The last command will spit out a long string; the token is the part starting with 'ey' and ending before the username (ubuntu).

kubectl apply -f sa.yaml
kubectl apply -f clusterrole.yaml
kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"
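Note that this secret-based lookup works because we pinned Kubernetes 1.23. On 1.24 and newer, service account token secrets are no longer created automatically, and a token can be requested directly instead (mentioned here only as an alternative for newer clusters):

kubectl -n kubernetes-dashboard create token admin-user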

SSH Tunnel & kubectl proxy

At this point, the dashboard has been running for a while. We just can’t get to it yet.

There are two distinct steps that need to happen.

The first is to create an SSH tunnel between your local machine and a machine in the cluster (we will be using the master).

Then, from within that SSH session, we will run kubectl proxy to expose the web services.

SSH command (the master's IP is 10.98.1.41 in this example; this assumes the ubuntu user from the playbooks and kubectl proxy's default port of 8001):
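ssh -L 8001:127.0.0.1:8001 ubuntu@10.98.1.41

Then, from within that SSH session, start the proxy: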

kubectl proxy
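It should print something like "Starting to serve on 127.0.0.1:8001"; leave it running while you use the dashboard.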

The Kubernetes Dashboard

At this point, you should be able to navigate to the dashboard page from a web browser on your local machine (http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/) and you'll be prompted to log in.

Make sure the token radio button is selected and paste in that long token from earlier.

It expires relatively quickly (a couple of hours, I think), so be ready to run the token retrieval command again.


Conclusion

In this article, our Support team demonstrated a quick and simple way to set up a Kubernetes cluster on Proxmox VMs using Ansible.

