DigitalOcean Load Balancer WebSocket support works with or without backend keepalive enabled. Once the load balancer upgrades an HTTP connection to a WebSocket connection, messages flow back and forth through the WebSocket tunnel. Bobcares, as a part of our DigitalOcean Managed Service, offers solutions to every load balancer concern that comes our way.
DigitalOcean Load Balancer WebSocket
DigitalOcean Load Balancers support the WebSocket protocol without any additional configuration. When a connection is upgraded to WebSockets, the load balancer replaces the standard 60-second inactivity timeout with a special one-hour timeout. We can use WebSockets with or without backend keepalive turned on. The forwarding rule configurations that support WebSockets include:
- TCP
- HTTP to HTTP
- HTTPS to HTTP
- HTTPS to HTTPS
- HTTP/2 to HTTP
- HTTP/2 to HTTP/2
Challenges Of Using DigitalOcean Load Balancers With WebSockets
The issue our Support team is tackling here is running a multi-server WebSocket application (using Socket.IO) in Kubernetes on DigitalOcean's hosted K8s solution, with a DigitalOcean load balancer in front of an NGINX Ingress controller (ingress-nginx).
Rather than providing a step-by-step tutorial for setting up load balancers and Kubernetes on DigitalOcean, our Support team addresses the challenges users are likely to encounter when attempting the setup on their own.
So, assuming the cluster is operational and the ingress-nginx controller has been deployed as a Service of type "LoadBalancer" (which creates the DigitalOcean load balancer), what should the user be aware of?
Use HTTP And HTTPS As The Load Balancing Protocols
When Kubernetes creates the load balancer, its protocol is automatically set to TCP with the appropriate ports. Switching the protocol to HTTP and HTTPS resolves the issues that arise when TCP is used. An additional benefit of using HTTPS on the load balancer is that TLS/SSL termination is offloaded at the load balancer level, which is not possible when the load balancer protocol is TCP.
We can configure the protocol from the user interface, but for flexibility we should also set it in the Kubernetes manifests.
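For example, a minimal sketch of the ingress-nginx controller Service with the protocol annotation (the name and namespace are the usual ingress-nginx defaults; the full version appears under Final Configuration below):

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  annotations:
    # Switch the DigitalOcean load balancer protocol from TCP to HTTP
    service.beta.kubernetes.io/do-loadbalancer-protocol: http
spec:
  type: LoadBalancer
  ports:
    - name: http
      port: 80
      targetPort: 80
  selector:
    app.kubernetes.io/name: ingress-nginx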
Certificates With A Long Life For Cloudflare Users
DigitalOcean can generate Let's Encrypt certificates for us, but only if the app's DNS records are hosted with DigitalOcean. That doesn't work if we use a service like Cloudflare, since the DNS is hosted at Cloudflare instead.
Since the end user will only ever see the Cloudflare certificate, we really only need a valid certificate for secure transport, which could even be self-signed. The concern is to secure the transmission between Cloudflare and the DO load balancer.
We do have another option with Cloudflare. Although not fully managed, Cloudflare's Origin CA certificates let us create a certificate for our domains that is signed by Cloudflare and valid for up to 15 years. We can then add this certificate to DigitalOcean and assign it to our load balancer. To cover any additional domains later, we will need to regenerate the certificate.
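Once the Origin CA certificate has been added to DigitalOcean, a minimal sketch of pointing the load balancer at it, via annotations added to the ingress controller Service shown earlier (the certificate ID is a placeholder):

metadata:
  annotations:
    # Terminate TLS at the load balancer on port 443
    service.beta.kubernetes.io/do-loadbalancer-tls-ports: "443"
    # Placeholder ID; find the real one with "doctl compute certificate list"
    service.beta.kubernetes.io/do-loadbalancer-certificate-id: "xxx-xxx-xxx"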
Broken Long Polling And Multi-Server “Session ID Unknown” Disconnect Errors
If we run more than one pod serving Socket.IO and use something like Redis as a Pub/Sub backend for communication between the pods, we are going to run into a nasty issue: the client's console will be flooded with "Session ID unknown" errors, and the server logs will show clients repeatedly connecting and disconnecting every few seconds.
This is due to load balancing happening at multiple levels across the pods, both at the DO load balancer and at the Kubernetes Service level. Socket.IO starts with HTTP long polling against the endpoint and then sends an upgrade request (answered with HTTP 101 Switching Protocols) to switch the connection to WebSockets. The problem is that the follow-up request may not land on the same pod that handled the earlier ones, which results in "Session ID unknown" errors.
We can fix it in two ways.
- Use Session Affinity: Use sticky sessions so that any follow-up requests from the same client route to the same pod (a sketch of the annotations follows the client snippet below).
- Disable Long Polling: To disable long polling and force WebSockets only, set the transports property in the client to "websocket":
const ioSocket = io('https://ws.myapp1.com', { transports: ['websocket'] });
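For the first option, a minimal sketch of the sticky-session annotations on the Ingress resource (the same annotations appear under Final Configuration below):

metadata:
  annotations:
    # Pin follow-up requests from the same client to the same pod via a cookie
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
    nginx.ingress.kubernetes.io/session-cookie-hash: "sha1"
    nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"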
Numerous Client Re-connections
By default, nginx drops idle proxied connections after 60 seconds, so clients end up reconnecting roughly every minute. This is overkill, so let's change it by raising the proxy timeouts in the ingress annotation definitions.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myapp1-node
  namespace: myapp1
  annotations:
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
Redirecting HTTP To HTTPS
We can do this at the load balancer level by setting the service.beta.kubernetes.io/do-loadbalancer-redirect-http-to-https annotation to "true" in the ingress controller service definition.
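A minimal sketch of the redirect annotation on the ingress controller Service (note that the redirect only applies once the load balancer protocol is HTTP/HTTPS rather than TCP):

metadata:
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-protocol: http
    # Redirect requests on port 80 to HTTPS on port 443
    service.beta.kubernetes.io/do-loadbalancer-redirect-http-to-https: "true"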
Enable CORS
Depending on the application, we may need to enable CORS (Cross-Origin Resource Sharing) on the ingress so that clients can connect from other domains. We can do this at the ingress resource level with annotations.
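A minimal sketch of the CORS annotations on the Ingress resource (the allowed methods here are just an example; they also appear under Final Configuration below):

metadata:
  annotations:
    # Allow cross-origin requests from clients on other domains
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/cors-allow-methods: "PUT, GET, POST, OPTIONS"
    nginx.ingress.kubernetes.io/cors-allow-credentials: "true"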
Secure Web Sockets (WSS)
Once the protocol changes above are in place and the load balancer terminates TLS on port 443 with our new certificate, we can force clients to upgrade to WSS instead of WS connections, since traffic is encrypted all the way to the entry point of our cluster.
Final Configuration
The final configuration files of Ingress Resource and Ingress Controller Service look like the following:
Ingress Resource
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: vuepilot-node
  namespace: vuepilot
  annotations:
    #kubernetes.io/ingress.class: nginx-general
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
    nginx.ingress.kubernetes.io/session-cookie-hash: "sha1"
    nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/cors-allow-methods: "PUT, GET, POST, OPTIONS"
    nginx.ingress.kubernetes.io/cors-allow-credentials: "true"
    nginx.ingress.kubernetes.io/configuration-snippet: |
      more_set_headers "Access-Control-Allow-Origin: $http_origin";
spec:
  rules:
    - host: ws.vuepilot.com
      http:
        paths:
          - path: /
            backend:
              serviceName: vuepilot-node
              servicePort: 8080
Ingress Controller Service
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-protocol: http
    service.beta.kubernetes.io/do-loadbalancer-tls-ports: "443"
    service.beta.kubernetes.io/do-loadbalancer-redirect-http-to-https: "true"
    # Use "doctl compute certificate list" to get this ID
    service.beta.kubernetes.io/do-loadbalancer-certificate-id: "xxx-xxx-xxx"
spec:
  type: LoadBalancer
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 80
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
Conclusion
In this article, we looked at how DigitalOcean Load Balancers handle WebSockets, the challenges of using them with WebSocket applications, and the solutions our Support Team suggests for those challenges.