
Configuring Network Policies for Applications in GKE

Jul 9, 2021

Configuring network policies for applications in GKE? We can help you.

As part of our Google Cloud Platform Services, we assist our customers with several Google Cloud queries.

Today, let us see how we can configure network policies.

 

Configuring network policies for applications in GKE

Network policies let us limit connections between Pods, which improves security and reduces the blast radius of a compromise.

To begin, our Support Techs suggest that we:

a) Enable the Kubernetes Engine API:

  1. In Google Cloud Console, we go to the Kubernetes Engine page.
  2. Here, we create or select a project.
  3. Then we wait for the API and its services to enable.
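Alternatively, assuming the Cloud SDK is already installed and authenticated, we can enable the same API from the command line:

gcloud services enable container.googleapis.com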

b) Install the following command-line tools, which we will use throughout:

  1. gcloud, to create and delete Kubernetes Engine clusters.
  2. kubectl, to manage Kubernetes.

With gcloud we can install kubectl:

gcloud components install kubectl
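To confirm that kubectl installed correctly, we can print its client version:

kubectl version --client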

 

Set defaults for the gcloud command-line tool

Instead of typing the project ID and Compute Engine zone options in the gcloud command-line tool, we can set the defaults:

gcloud config set project project-id
gcloud config set compute/zone compute-zone
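For example, with a hypothetical project ID my-gke-project and the us-central1-a zone:

gcloud config set project my-gke-project
gcloud config set compute/zone us-central1-a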

 

Create a GKE cluster

We can create a container cluster with network policy enforcement using:

gcloud container clusters create test --enable-network-policy
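For a cluster that already exists, enforcement can be enabled in two steps instead (note that this recreates the cluster's node pools):

gcloud container clusters update test --update-addons=NetworkPolicy=ENABLED
gcloud container clusters update test --enable-network-policy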

 

Restrict incoming traffic to Pods

The Kubernetes NetworkPolicy resource lets us configure network access policies for the Pods.

Initially, we run a web server application with the label app=hello and expose it internally in the cluster:

kubectl run hello-web --labels app=hello \
--image=gcr.io/google-samples/hello-app:1.0 --port 8080 --expose
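Before applying any policy, we can quickly confirm that the Pod and the internal Service created by --expose are up:

kubectl get pods -l app=hello
kubectl get service hello-web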

Next, we configure a NetworkPolicy that allows traffic to the hello-web Pods only from Pods with the app=foo label.

Any other incoming traffic, from Pods that do not have this label, will be blocked.

The following manifest selects Pods with label app=hello and specifies an Ingress policy:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: hello-allow-from-foo
spec:
  policyTypes:
  - Ingress
  podSelector:
    matchLabels:
      app: hello
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: foo

To apply this policy to the cluster, we run:

kubectl apply -f hello-allow-from-foo.yaml
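We can then verify that the policy exists and inspect how its selectors were parsed:

kubectl get networkpolicy hello-allow-from-foo
kubectl describe networkpolicy hello-allow-from-foo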

 

Validate the Ingress policy

First, we run a temporary Pod with the label app=foo and get a shell in the Pod:

kubectl run -l app=foo --image=alpine --restart=Never --rm -i -t test-1

Then we make a request to the hello-web:8080 endpoint to verify that it allows the incoming traffic:

/ # wget -qO- --timeout=2 http://hello-web:8080
Hello, world!
Version: 1.0.0
Hostname: hello-web-2258067535-vbx6z
/ # exit

This confirms that traffic from the app=foo Pods to the app=hello Pods is allowed.

Next, we run a temporary Pod with a different label (app=other) and get a shell inside it:

kubectl run -l app=other --image=alpine --restart=Never --rm -i -t test-1
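Since this Pod does not carry the app=foo label, the policy blocks its traffic, and the same request should now time out:

/ # wget -qO- --timeout=2 http://hello-web:8080
wget: download timed out
/ # exit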

 

Restrict outgoing traffic from the Pods

Just as we can restrict incoming traffic, we can restrict outgoing traffic as well.

However, to be able to query internal hostnames we must allow DNS resolution in the egress network policies.

To do this, we deploy a NetworkPolicy that controls outbound traffic from Pods with the label app=foo, allowing traffic only to Pods with the label app=hello, plus the DNS traffic needed for name resolution.

The following manifest specifies a network policy controlling the egress traffic from Pods with the label app=foo, with two permitted destinations:

  • Pods in the same namespace with the label app=hello.
  • Cluster Pods or external endpoints on port 53 (DNS).

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: foo-allow-to-hello
spec:
  policyTypes:
  - Egress
  podSelector:
    matchLabels:
      app: foo
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: hello
  - ports:
    - port: 53
      protocol: TCP
    - port: 53
      protocol: UDP

To apply this policy to the cluster, we run:

kubectl apply -f foo-allow-to-hello.yaml
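At this point, both policies should be listed in the namespace:

kubectl get networkpolicy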

 

Validate the egress policy

Initially, we deploy a new web application called hello-web-2 and expose it internally in the cluster:

kubectl run hello-web-2 --labels app=hello-2 \
--image=gcr.io/google-samples/hello-app:1.0 --port 8080 --expose

Then, we run a temporary Pod with the label app=foo and open a shell inside it:

kubectl run -l app=foo --image=alpine --rm -i -t --restart=Never test-3

We need to validate that the Pod can establish connections to hello-web:8080:

/ # wget -qO- --timeout=2 http://hello-web:8080
Hello, world!
Version: 1.0.0
Hostname: hello-web-2258067535-vbx6z

Then we validate that the Pod cannot establish connections to hello-web-2:8080:

/ # wget -qO- --timeout=2 http://hello-web-2:8080
wget: download timed out

In addition, we validate that the Pod cannot establish connections to external websites and exit:

/ # wget -qO- --timeout=2 http://www.example.com
wget: download timed out
/ # exit
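The timeouts (rather than DNS errors) show that the port 53 rule is doing its job: names still resolve even though the connections themselves are blocked. We can double-check this from the same shell before exiting, since alpine's busybox includes nslookup:

/ # nslookup hello-web-2

This should return the cluster IP of the hello-web-2 Service.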

 

Clean up

To avoid incurring charges to our Google Cloud account for these resources, we can either delete the project that contains them, or keep the project and delete the individual resources.

Deleting the container cluster removes all the resources that make up the cluster:

gcloud container clusters delete test
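If we would rather keep the cluster, we can instead delete just the resources created in this walkthrough:

kubectl delete networkpolicy hello-allow-from-foo foo-allow-to-hello
kubectl delete service,pod hello-web hello-web-2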

[Need help with the configuration? We’d be happy to assist]

 

Conclusion

In short, we saw how our Support Techs configure network policies for applications in GKE.
