
Nginx ingress on DigitalOcean kubernetes using Helm – How we set it up

Mar 9, 2021

Wondering how to set up Nginx ingress on DigitalOcean Kubernetes using Helm? We can help you with it.

Here at Bobcares, we often receive requests regarding DigitalOcean as part of our DigitalOcean Managed Services for web hosts and online service providers.

Today, let’s see how to set up Nginx ingress on DigitalOcean.

 

How to set up Nginx ingress on DigitalOcean Kubernetes using Helm

Now let’s take a look at how our Support Engineers set up Nginx ingress.

 

Step 1 — Setting Up Hello World Deployments

First, we create a Hello World app called hello-kubernetes so that we have some Services to route traffic to. To verify in the next steps that the Nginx Ingress routes traffic properly, we will deploy the app twice.

We store the deployment configuration on the local machine, naming the first deployment configuration file hello-kubernetes-first.yaml. We create it using a text editor:

$ nano hello-kubernetes-first.yaml

Then we add the below code in it.

apiVersion: v1
kind: Service
metadata:
  name: hello-kubernetes-first
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: hello-kubernetes-first
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kubernetes-first
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-kubernetes-first
  template:
    metadata:
      labels:
        app: hello-kubernetes-first
    spec:
      containers:
      - name: hello-kubernetes
        image: paulbouwer/hello-kubernetes:1.7
        ports:
        - containerPort: 8080
        env:
        - name: MESSAGE
          value: Hello from the first deployment!

We then save and close the file.

Then we will create the first variant of the hello-kubernetes app in Kubernetes by running the below command:

$ kubectl create -f hello-kubernetes-first.yaml

In order to verify the Service’s creation, we run the below command:

$ kubectl get service hello-kubernetes-first

As a result, we see that the newly created Service has a ClusterIP assigned, which means it is working properly. All traffic sent to it on port 80 is forwarded to the selected Deployment on port 8080.

Now that the first variant of the hello-kubernetes app is in place, we move on to the second one.

For that, we open a file named hello-kubernetes-second.yaml for editing:

$ nano hello-kubernetes-second.yaml

Then we add the below lines.

apiVersion: v1
kind: Service
metadata:
  name: hello-kubernetes-second
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: hello-kubernetes-second
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kubernetes-second
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-kubernetes-second
  template:
    metadata:
      labels:
        app: hello-kubernetes-second
    spec:
      containers:
      - name: hello-kubernetes
        image: paulbouwer/hello-kubernetes:1.7
        ports:
        - containerPort: 8080
        env:
        - name: MESSAGE
          value: Hello from the second deployment!

Then we save and close the file.

Now, we create it in Kubernetes with the below command:

$ kubectl create -f hello-kubernetes-second.yaml

Then, to verify that the second Service is up and running, we run the below command.

$ kubectl get service

As a result, both hello-kubernetes-first and hello-kubernetes-second are listed, which means Kubernetes created them successfully.

We now have two deployments of the hello-kubernetes app, each with an accompanying Service.
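Since the two manifests differ only in the variant name and the MESSAGE value, they can also be generated from a single template instead of being typed twice. Below is a minimal sketch; the template file name and the NAME/MSG placeholders are our own choices, not part of the original setup:

```shell
# Write a template with NAME and MSG placeholders.
cat > hello-kubernetes-template.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: hello-kubernetes-NAME
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: hello-kubernetes-NAME
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kubernetes-NAME
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-kubernetes-NAME
  template:
    metadata:
      labels:
        app: hello-kubernetes-NAME
    spec:
      containers:
      - name: hello-kubernetes
        image: paulbouwer/hello-kubernetes:1.7
        ports:
        - containerPort: 8080
        env:
        - name: MESSAGE
          value: MSG
EOF

# Substitute the placeholders once per variant.
for variant in first second; do
  sed -e "s/NAME/${variant}/g" \
      -e "s/MSG/Hello from the ${variant} deployment!/" \
      hello-kubernetes-template.yaml > "hello-kubernetes-${variant}.yaml"
done
```

Both generated files can then be applied with kubectl create -f as shown above.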

 

Step 2 — Installing the Kubernetes Nginx Ingress Controller

Now, we install the Kubernetes-maintained Nginx Ingress Controller using Helm. The project is hosted on GitHub:

https://github.com/kubernetes/ingress-nginx

In order to install the Nginx Ingress Controller to the cluster, we run the below command:

$ helm install nginx-ingress stable/nginx-ingress --set controller.publishService.enabled=true

The above command installs the Nginx Ingress Controller from the stable charts repository, names the Helm release nginx-ingress, and sets the publishService parameter to true. (If the stable repository is not yet known to Helm, we can add it first with helm repo add stable https://charts.helm.sh/stable. Note that the stable charts have since been deprecated; newer setups install the chart from the kubernetes/ingress-nginx repository instead.)

Now, Helm has logged what resources in Kubernetes it created as a part of the chart installation.

We run the below command to watch until the DigitalOcean Load Balancer becomes available (the -w flag watches for changes):

$ kubectl get services -o wide -w nginx-ingress-controller

 

Step 3 — Exposing the App Using an Ingress

Now, we create an Ingress Resource to expose the hello-kubernetes app deployments. Afterwards, we can test it by accessing it from the browser.

We shall store the Ingress in a file named hello-kubernetes-ingress.yaml. We create it using an editor.

$ nano hello-kubernetes-ingress.yaml

Then we add the below lines.

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: hello-kubernetes-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: hw1.your_domain
    http:
      paths:
      - backend:
          serviceName: hello-kubernetes-first
          servicePort: 80
  - host: hw2.your_domain
    http:
      paths:
      - backend:
          serviceName: hello-kubernetes-second
          servicePort: 80

Here, we define an Ingress Resource with the name hello-kubernetes-ingress. Then we specify two host rules so that hw1.your_domain is routed to the hello-kubernetes-first Service, and hw2.your_domain is routed to the Service from the second deployment (hello-kubernetes-second).

Then we create it in Kubernetes by running the following command:

$ kubectl create -f hello-kubernetes-ingress.yaml

Now, when we navigate to hw1.your_domain in the browser, we see the “Hello from the first deployment!” message.

The second variant (hw2.your_domain) will show the “Hello from the second deployment” message.

Through this, we have verified that the Ingress Controller correctly routes requests.
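The host rules behave like a lookup table from the request's Host header to a backend Service. A toy sketch of that mapping in plain shell (not kubectl; the function name is our own) makes the routing semantics explicit:

```shell
# Maps an incoming Host header to the backend Service name,
# mirroring the two rules in hello-kubernetes-ingress.yaml.
route_host() {
  case "$1" in
    hw1.your_domain) echo "hello-kubernetes-first" ;;
    hw2.your_domain) echo "hello-kubernetes-second" ;;
    *)               echo "default-backend" ;;
  esac
}

route_host hw1.your_domain   # hello-kubernetes-first
route_host hw2.your_domain   # hello-kubernetes-second
```

Any host not matched by a rule falls through to the Ingress Controller's default backend, which is what the wildcard branch stands in for here.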

 

Step 4 — Securing the Ingress Using Cert-Manager

In order to secure the Ingress Resources, we will install Cert-Manager, create a ClusterIssuer for production, and modify the configuration of the Ingress to take advantage of the TLS certificates.

Before installing Cert-Manager to the cluster via Helm, we manually apply the required CRDs from the jetstack/cert-manager repository by running the following command:

$ kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v0.14.1/cert-manager.crds.yaml

Then, we create a namespace for cert-manager by running the following command:

$ kubectl create namespace cert-manager

Next, we add the Jetstack Helm repository to Helm, which hosts the Cert-Manager chart. For that, we run the below command:

$ helm repo add jetstack https://charts.jetstack.io

Finally, we install Cert-Manager into the cert-manager namespace:

$ helm install cert-manager --version v0.14.1 --namespace cert-manager jetstack/cert-manager

Now, we create a ClusterIssuer that issues Let’s Encrypt certificates, and we store its configuration in a file named production_issuer.yaml. We create it and open it for editing:

$ nano production_issuer.yaml

Then we add the below lines:

apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    # Email address used for ACME registration
    email: your_email_address
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      # Name of a secret used to store the ACME account private key
      name: letsencrypt-prod-private-key
    # Add a single challenge solver, HTTP01 using nginx
    solvers:
    - http01:
        ingress:
          class: nginx

We then save and close the file.

After that, we roll it out with kubectl:

$ kubectl create -f production_issuer.yaml
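For context on what the http01 solver does: Let’s Encrypt verifies domain ownership by fetching a token from a well-known HTTP path on the domain, and cert-manager answers that request through the nginx Ingress automatically. A purely local sketch of the path convention only (the token and response values here are made up for illustration):

```shell
# Let's Encrypt requests http://<host>/.well-known/acme-challenge/<token>;
# cert-manager arranges for the expected key authorization to be served there.
mkdir -p webroot/.well-known/acme-challenge
echo "made-up-token.made-up-key-thumbprint" \
  > webroot/.well-known/acme-challenge/made-up-token
cat webroot/.well-known/acme-challenge/made-up-token
```

In the real flow none of this is done by hand; cert-manager creates a temporary solver Pod and Ingress rule for the challenge and removes them once validation succeeds.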

Now, to introduce the certificates to the Ingress Resource defined in the previous step, we open the hello-kubernetes-ingress.yaml.

$ nano hello-kubernetes-ingress.yaml

Then we add the tls section and the cluster-issuer annotation shown below:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: hello-kubernetes-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
  - hosts:
    - hw1.your_domain
    - hw2.your_domain
    secretName: hello-kubernetes-tls
  rules:
  - host: hw1.your_domain
    http:
      paths:
      - backend:
          serviceName: hello-kubernetes-first
          servicePort: 80
  - host: hw2.your_domain
    http:
      paths:
      - backend:
          serviceName: hello-kubernetes-second
          servicePort: 80

Then we re-apply this configuration to the cluster by running the following command:

$ kubectl apply -f hello-kubernetes-ingress.yaml

It can take some time for the Let’s Encrypt servers to issue a certificate for the domains. We can track the progress by periodically running:

$ kubectl describe certificate hello-kubernetes-tls

The certificate is ready when the last line of the Events output reads Certificate issued successfully.


Conclusion

In today’s writeup, we saw how our Support Engineers set up Nginx ingress on DigitalOcean Kubernetes using Helm.
