Apply Mutual TLS Over a Kubernetes Nginx Ingress Controller
This article explains how to implement mTLS with Kubernetes. Earthly simplifies the CI pipeline for DevOps professionals. Check it out.
The Problem That Mutual TLS Solves
Mutual TLS (Transport Layer Security) falls under the umbrella of the Zero Trust security model, where strict identity verification is required for any client, person, or device trying to access resources in a private network.
Mutual TLS solves the problem of authenticating both the client and the server in a communication session. In traditional TLS, only the server is authenticated to the client, leaving the client vulnerable to man-in-the-middle attacks.
Mutual TLS provides an additional layer of security by requiring the client to also present a digital certificate, which is verified by the server, ensuring that both parties in the communication are who they claim to be. This helps to prevent unauthorized access and protect against impersonation attacks.
In this article, you’ll learn the differences between TLS and mTLS, and demystify how to apply both TLS and mTLS connections between a client and a Kubernetes endpoint exposed through an Nginx Ingress Controller.
Differences Between TLS and mTLS
Transport Layer Security (TLS) is a protocol used to secure communication over a computer network. It provides a secure channel between two devices communicating over the internet, or between a client and a server.
TLS uses a combination of public key and TLS certificate encryption to secure the transmission of data. In a TLS connection, the client and server exchange messages to negotiate a set of encryption keys that will be used to secure the connection. Once the keys have been negotiated, the client and server use them to encrypt and decrypt the data transmitted between them. TLS certificates are also referred to as SSL certificates because TLS is the successor to SSL (Secure Sockets Layer).
Mutual TLS (mTLS) is an upgrade of TLS that requires both the client and the server to authenticate each other using digital certificates. In a mutual TLS connection, the client presents its own certificate to the server, and the server presents its own certificate to the client. This ensures that both the client and server are who they claim to be, providing an additional layer of security to the connection.
The diagram below shows what we’ll implement in the next steps. We’ll start by deploying an Nginx Ingress Controller, then deploy a simple HTTP application and expose it. We’ll then apply routing rules through a Kubernetes ingress resource. After that we’ll learn how to apply TLS and mutual TLS connections between the client and a Kubernetes endpoint.
Let’s get started!
Deploying the Nginx Ingress Controller to a Kubernetes Cluster
Nginx Ingress Controller is used to handle external traffic to a Kubernetes cluster. It provides load balancing, SSL termination, and name-based virtual hosting, among other features. Deploying an Nginx Ingress Controller to a Kubernetes cluster allows for easier and more efficient management of external traffic to the cluster and its services. Additionally, it can improve the scalability of the cluster.
To proceed, you should have the following prerequisites:
- An up-and-running Kubernetes cluster. You can use minikube or kind to start one in a local environment (see the example after this list).
- Local installations of kubectl and openssl
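If you don't already have a cluster, a minimal way to spin one up locally with minikube could look like this; it's just a sketch, and the driver and resource settings depend on your machine:

$ minikube start --driver=docker
$ kubectl get nodes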
Usually you need to check the Nginx Ingress official website to verify the installation steps according to your Kubernetes environment. You can check the README file and also the Getting Started document related to the Nginx Ingress Controller.
For this demo, I'm using minikube as my Kubernetes cluster environment, so the Nginx Ingress Controller can be enabled as follows:
$ minikube addons enable ingress
The pods related to the Nginx Ingress Controller are deployed in the ingress-nginx namespace. Now let's do a pre-flight check to make sure these pods are up and running before proceeding with the next steps:
$ kubectl get pods -n ingress-nginx
The image below shows the Nginx Ingress Controller pods running in the ingress-nginx namespace as expected, so we can proceed with the following steps.
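If you'd rather script this pre-flight check than eyeball the pod list, a kubectl wait along these lines should block until the controller pod reports ready; the label selector below matches the standard ingress-nginx manifests and the minikube addon, but verify it against your installation:

$ kubectl wait --namespace ingress-nginx \
    --for=condition=ready pod \
    --selector=app.kubernetes.io/component=controller \
    --timeout=120s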
Deploying and Exposing an HTTP Application
After making sure that all pods in the ingress-nginx namespace are up and running, let's deploy an HTTP application through a Kubernetes deployment to act as a backend service:
$ kubectl create deployment demo-app --image=httpd --port=80
Then expose this deployment locally through a service of type ClusterIP. A Kubernetes service resource is used to expose a deployed application either locally within the cluster or externally outside it. It creates a stable endpoint for clients to access the application running in the pods, and it acts as a load balancer, distributing traffic across the set of pods running the application.
In our demo, we choose to expose the deployment locally within the cluster through a Kubernetes service of type ClusterIP and externally through the Nginx Ingress Controller.
ClusterIP is a type of Kubernetes service that exposes an application only within the cluster while still providing load balancing and high availability. Traffic coming from outside (the internet) first hits the external endpoint exposed by the Nginx Ingress Controller, which routes it to one of the nodes in the cluster; the service then redirects it to one of the application pods. This enables load balancing and service discovery within the cluster. More details will be provided in the next steps.
$ kubectl expose deployment demo-app
The above command will create a ClusterIP service named demo-app (a roughly equivalent declarative manifest is sketched below).
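For reference, here is a minimal sketch of a declarative manifest roughly equivalent to what kubectl expose generates for this deployment. The selector assumes the default app=demo-app label that kubectl create deployment applies, so treat it as an illustration rather than the exact object the command produces.

apiVersion: v1
kind: Service
metadata:
  name: demo-app
spec:
  type: ClusterIP
  selector:
    app: demo-app    # default label set by kubectl create deployment
  ports:
  - port: 80         # service port
    targetPort: 80   # container port exposed by the httpd pod

Next, you need to expose this service so that it is accessible from outside the cluster. This can be done through a Kubernetes ingress resource, which will use nginx as its ingress class and will be hosted through test.localdev.me, our localhost DNS: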
$ kubectl create ingress test-localhost --class=nginx \
    --rule="test.localdev.me/*=demo-app:80"
So "test.localdev.me" will be our external gateway to the applications running in the Kubernetes cluster. When we curl "test.localdev.me" from outside, the Nginx Ingress Controller will route the traffic to the "demo-app" service inside the cluster, which in turn forwards it to one of the application pods.
All wildcard entries under *.localdev.me resolve to 127.0.0.1, which you can confirm by running:
$ nslookup test.localdev.me
Next, let's test the connection to our application endpoint from outside the cluster. This will be done through the port-forward technique, forwarding port 8080 on our local machine to port 80 of the Ingress Controller service in the ingress-nginx namespace:
$ kubectl port-forward -n ingress-nginx \
    service/ingress-nginx-controller 8080:80
Leave the above command running in the terminal and from another terminal, let’s try to curl the local endpoint:
$ curl http://test.localdev.me:8080
The response should be as shown below. This means that you are able to connect to the Kubernetes endpoint through the Nginx Ingress Controller but with an HTTP connection that is not secure.
<html><body><h1>It works!</h1></body></html>
Our goal in the next section is to secure this call so that we can hit the URL "test.localdev.me" over HTTPS. This requires a TLS server certificate to be in place and properly configured, so let's move on.
Enabling TLS Through Self-Signed Certificate
So far, we have deployed an Nginx Ingress Controller and an HTTP application to Kubernetes, exposed this application to the outside world through an ingress resource, and successfully accessed it from outside the cluster through an HTTP connection that's not secure.
In this section, we will focus on the steps needed to generate a server TLS certificate, which will be used to validate the request during the client-server connection. When a user/client connects to a server/app/website that uses TLS, the connection is established through a TLS handshake. It starts with the client sending a message to the server asking to set up an encrypted session, and the server responds with its public key and TLS certificate.
The client verifies the certificate and uses the public key to generate a new pre-master key, then sends it to the server. The server decrypts the pre-master key using its private key. Finally, the client and server use the pre-master key to derive a shared secret, which will be used to encrypt the messages.
Now you need to generate a self-signed server certificate for the domain "test.localdev.me". By creating a server certificate for a specific domain, the client can verify that it is communicating with the intended server and not an imposter, providing security and trust that it is talking to the right server. The openssl command will be used here to generate a self-signed server certificate:
$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -keyout server.key -out server.crt \
    -subj "/CN=test.localdev.me/O=test.localdev.me"
At this point, we have a server certificate "server.crt" which needs to be made available to the Kubernetes cluster through a Kubernetes secret resource. The following command will create a secret named "self-tls" that holds the server certificate and the private key:
$ kubectl create secret tls self-tls --key server.key --cert server.crt
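You can optionally confirm that the secret was created with the expected tls.crt and tls.key entries:

$ kubectl describe secret self-tls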
The ingress resource needs to be modified to add a tls section that refers to the secret we just created, which holds the server certificate.
$ kubectl edit ingress test-localhost
The above command will open a vi session to modify the ingress resource. You can refer to the YAML below to add the "tls:" section, which links the secret we just created ("self-tls") to the hostname "test.localdev.me". When a client hits this hostname over HTTPS, the "self-tls" server certificate will be presented to the client for verification. After verification, the request is routed into the cluster and a secure session is established.
This YAML is the declarative representation of the ingress resource “test-localhost” that has been created in previous steps. It contains the needed rule to route the external traffic coming through “test.localdev.me” to the target service “demo-app”.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-localhost
  namespace: default
spec:
  ingressClassName: nginx
  rules:
  - host: test.localdev.me
    http:
      paths:
      - backend:
          service:
            name: demo-app
            port:
              number: 80
        path: /
        pathType: Prefix
  tls:
  - hosts:
    - test.localdev.me
    secretName: self-tls
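If you prefer not to edit the resource interactively, a merge patch along the following lines should add the same tls section; this is just a sketch of an equivalent, non-interactive approach:

$ kubectl patch ingress test-localhost --type=merge \
    -p '{"spec":{"tls":[{"hosts":["test.localdev.me"],"secretName":"self-tls"}]}}'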
Now we want to hit the hostname (test.localdev.me) on port 443 to test the HTTPS connection, again through the port-forward technique. The command below forwards traffic from port 443 on the local machine to the Nginx Ingress Controller service running on port 443 inside the ingress-nginx namespace.
$ sudo kubectl port-forward -n ingress-nginx \
    service/ingress-nginx-controller 443:443
sudo is required here to bind the privileged port 443 on the local machine. Keep this command running in a separate terminal, as the port-forward will be used for the rest of the article.
From another terminal, let’s try to curl the local endpoint with HTTPS protocol:
$ curl -k -v https://test.localdev.me/
The -k flag is used to skip self-signed certificate verification, and -v prints verbose logs.
You can see in the verbose output that only one certificate has been verified: the server certificate.
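While the port-forward on 443 is still running, you can also inspect the certificate that the ingress presents by using openssl s_client; a quick check (the -servername flag makes sure Nginx serves the certificate for the right host):

$ openssl s_client -connect test.localdev.me:443 \
    -servername test.localdev.me </dev/null 2>/dev/null | \
    openssl x509 -noout -subject -issuer

For our self-signed certificate, the subject and issuer should both show test.localdev.me.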
Now we have a successful TLS connection which enables server certificate validation. Next, we will enable the mutual TLS to also validate the client identity.
Enabling Mutual TLS Through Self-Signed Certificate
In this section, we will add an extra layer of security: client certificate validation. This helps ensure that only authorized clients can establish a secure connection with the server and prevents unauthorized access to sensitive information. We will create a CA (Certificate Authority) to act as our verification gate, along with a client certificate and key that serve as the client's trusted identity.
First let’s create a CA. The main purpose of a CA is to affirm the identity of the certificate holder so that the recipient of the certificate can trust that the certificate was issued by a reputable and trustworthy entity.
$ openssl req -x509 -sha256 -newkey rsa:4096 -keyout ca.key \
    -out ca.crt -days 365 -nodes -subj '/CN=My Cert Authority'
Now that you have the CA certificate "ca.crt", we need to register it with the Kubernetes cluster, and again we'll define it as a Kubernetes secret. Run the following command to create a secret named ca-secret of type generic to hold the ca.crt file:
$ kubectl create secret generic ca-secret --from-file=ca.crt=ca.crt
Next we need to generate a Certificate Signing Request (CSR) and a client key. A CSR is a request for a certificate that contains information about the certificate holder; it is submitted to the CA, which signs it to produce the client certificate. The output of the following command is a client CSR, "client.csr", and a client key, "client.key".
$ openssl req -new -newkey rsa:4096 -keyout client.key \
    -out client.csr -nodes -subj '/CN=My Client'
Now we need to sign this CSR with the CA to generate the client certificate, which can be done via the command below.
$ openssl x509 -req -sha256 -days 365 -in client.csr \
    -CA ca.crt -CAkey ca.key -set_serial 02 -out client.crt
At this point, we have a client key "client.key" and a client certificate "client.crt", which will be used when communicating with the server through a mutual TLS connection.
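Before wiring these files into the ingress, you can double-check that the client certificate actually chains back to the CA you created; openssl should report client.crt: OK:

$ openssl verify -CAfile ca.crt client.crt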
Again, the ingress resource needs to be modified to add client verification annotations. One of these annotations, "nginx.ingress.kubernetes.io/auth-tls-secret", refers to the CA secret "default/ca-secret" that will verify the identity of clients requesting to communicate with the server. Another, "nginx.ingress.kubernetes.io/auth-tls-verify-client", enables client certificate verification.
$ kubectl edit ingress test-localhost
Below are the four annotations that need to be added to the ingress resource to enable client TLS verification.
metadata:
  annotations:
    nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream: "true"
    nginx.ingress.kubernetes.io/auth-tls-secret: default/ca-secret
    nginx.ingress.kubernetes.io/auth-tls-verify-client: "on"
    nginx.ingress.kubernetes.io/auth-tls-verify-depth: "1"
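As with the tls section earlier, these annotations can also be applied non-interactively instead of using kubectl edit; a kubectl annotate sketch that should be equivalent:

$ kubectl annotate ingress test-localhost \
    nginx.ingress.kubernetes.io/auth-tls-secret=default/ca-secret \
    nginx.ingress.kubernetes.io/auth-tls-verify-client=on \
    nginx.ingress.kubernetes.io/auth-tls-verify-depth=1 \
    nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream=true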
Validating the Mutual TLS Scenario
Now if you try to curl the exposed Kubernetes endpoint without providing the client certificate, you should get an error stating that no SSL certificate was provided, which means the annotations from the previous step are working correctly.
$ curl -k https://test.localdev.me
As you see, the error appears as expected because no client SSL certificate was provided during the call.
Try the same call again, this time providing the client key and certificate, and use -v to get more logs:
$ curl -k -v https://test.localdev.me/ --key client.key \
    --cert client.crt
As you can see in the command output, certificate verification ("CERT verify") happens twice: once for the server certificate and once for the client certificate, and the call succeeds as expected. Both the client and server identities are verified and trusted, giving us a mutually authenticated, secure connection.
You can find the complete ingress file on GitHub.
Notes on Applying Mutual TLS in a Production Environment
These notes are to be taken into consideration when you try the mutual TLS implementation in a Kubernetes production environment:
- Nginx Ingress Controller will be installed according to your Kubernetes production environment. You can check this installation guide for more context.
- Once you have the Ingress Controller installed, you need to configure a DNS record of type A for your domain name to point to the public IP of the Ingress Controller.
- The server certificate and CA certificate should be issued by a trusted certificate provider; you then only need to apply them to Kubernetes as secrets, as sketched below.
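For example, assuming your certificate provider hands you a certificate chain, a private key, and the CA bundle as files (the file names below are placeholders for whatever your provider gives you), loading them into the cluster looks just like it did in the demo:

$ kubectl create secret tls prod-tls --cert=fullchain.pem --key=privkey.pem
$ kubectl create secret generic prod-ca --from-file=ca.crt=ca-bundle.pem

You would then reference prod-tls in the ingress tls section and the prod-ca secret (qualified with its namespace) in the auth-tls-secret annotation.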
Conclusion
Mutual TLS offers comprehensive security for client-server communication, ensuring both parties are verified. We’ve explored the distinctions between TLS and mTLS, and illustrated how to secure Kubernetes Nginx Ingress Controller endpoints.
Once you’ve secured your Kubernetes environment, you might want to streamline your build process. Consider exploring Earthly, your next favorite build automation tool, to make this process more efficient and reliable.