[{"content":"The social media application you\u0026rsquo;ve just created has hit critical mass with thousands of user interactions per second. You\u0026rsquo;ve gathered your users\u0026rsquo; feedback and want to introduce a new feature that will drive engagement. However, you\u0026rsquo;re cautious about not disrupting active users with potential downtime if the new feature causes an overload on your servers.\nCurrently, you have a Kubernetes cluster with the Ingress-Nginx controller and a blue-green deployment setup. You can switch 100% of traffic to the new version immediately — but you\u0026rsquo;d rather divert only a small portion first and monitor the effects before fully committing. Fortunately, there\u0026rsquo;s a common solution to this: the canary deployment.\nCanary Deployments What do birds have to do with deploying software? In the days when coal mining was prevalent, miners would send canaries deep into the mines to detect early signs of carbon monoxide before fully venturing forth. In software development, a similar technique can be applied to network traffic: a small population of users act as the canaries and venture onto the new software version to stress-test the system.\nThe new version is typically accompanied by a monitoring tool such as Prometheus or OpenTelemetry, which reports back metrics like error rates and network latency. If the metrics meet predetermined standards, the operator incrementally shifts more traffic toward the new version.\nIn our hypothetical deployment, we have three namespaces: bg-switch, blue, and green. The bg-switch namespace is the single point of entry where a software operator can divert traffic from the old deployment (blue) to the new one (green). 
You can read more about the namespaced blue-green approach in Kubernetes Namespaces: The Secret Weapon for Zero-Risk Blue-Green Deployments.\nThe issue with the pure blue-green implementation is that it exposes 100% of traffic to the new version the moment you switch. Canary deployments limit that exposure area while still giving you the option to do a zero-to-a-hundred traffic switch if desired.\nImplementation We\u0026rsquo;ll implement canary deployments using Ingress-Nginx annotations combined with Kubernetes namespace segregation. The test application is Nginx. Any Kubernetes hosting solution works — k3d or a managed cluster from a cloud provider.\n1. Install Ingress-Nginx helm upgrade --install ingress-nginx ingress-nginx \\ --repo https://kubernetes.github.io/ingress-nginx \\ --namespace ingress-nginx --create-namespace The following examples use the hostname canary.localhost, a custom DNS entry under /etc/hosts.\n2. Create the Namespaces kubectl create namespace nginx-blue kubectl create namespace nginx-green kubectl create namespace canary-bg-switch nginx-green — hosts version 1.0 (current production) nginx-blue — hosts version 2.0 (the canary) canary-bg-switch — traffic controller, tunes the split between blue and green 3. 
Create the Canary Traffic Controller Components We create three components in canary-bg-switch: two ingresses and two services.\ngreen-ingress.yaml — the default ingress; all traffic flows here initially\napiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: canary-green-ingress namespace: canary-bg-switch spec: ingressClassName: nginx rules: - host: \u0026#34;canary.localhost\u0026#34; http: paths: - pathType: Prefix path: \u0026#34;/\u0026#34; backend: service: name: green-bg-switch-service port: number: 80 kubectl apply -f green-ingress.yaml -n canary-bg-switch blue-ingress.yaml — the canary ingress; starts at 0% traffic\napiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: canary-blue-ingress namespace: canary-bg-switch annotations: nginx.ingress.kubernetes.io/canary: \u0026#34;true\u0026#34; nginx.ingress.kubernetes.io/canary-weight: \u0026#34;0\u0026#34; spec: ingressClassName: nginx rules: - host: \u0026#34;canary.localhost\u0026#34; http: paths: - pathType: Prefix path: \u0026#34;/\u0026#34; backend: service: name: blue-bg-switch-service port: number: 80 The two annotations are the key:\nnginx.ingress.kubernetes.io/canary: \u0026quot;true\u0026quot; — marks this as the canary ingress nginx.ingress.kubernetes.io/canary-weight: \u0026quot;0\u0026quot; — sets 0% of traffic to blue; 100% stays on green kubectl apply -f blue-ingress.yaml -n canary-bg-switch green-service.yaml and blue-service.yaml — ExternalName services that route across namespaces\napiVersion: v1 kind: Service metadata: name: green-bg-switch-service namespace: canary-bg-switch spec: type: ExternalName externalName: nginx-green-svc.nginx-green.svc.cluster.local apiVersion: v1 kind: Service metadata: name: blue-bg-switch-service namespace: canary-bg-switch spec: type: ExternalName externalName: nginx-blue-svc.nginx-blue.svc.cluster.local ExternalName acts as a CNAME, using Kubernetes-native DNS (service-name.namespace.svc.cluster.local) to route traffic across namespaces. 
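Since these cluster-local DNS names follow a fixed pattern, they can be generated rather than hand-typed when scripting manifests; a minimal sketch (the helper name is ours, not part of any Kubernetes API):

```python
def cluster_local_dns(service: str, namespace: str) -> str:
    # Kubernetes in-cluster DNS for a Service: <service>.<namespace>.svc.cluster.local
    return f"{service}.{namespace}.svc.cluster.local"

print(cluster_local_dns("nginx-green-svc", "nginx-green"))
# → nginx-green-svc.nginx-green.svc.cluster.local
```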
Note: you cannot reference a cross-namespace DNS name directly inside an ingress backend.service.name — that\u0026rsquo;s why this intermediate service is necessary.\nkubectl apply -f green-service.yaml -n canary-bg-switch kubectl apply -f blue-service.yaml -n canary-bg-switch 4. Create the Blue and Green Applications blue-app.yaml\napiVersion: v1 kind: Service metadata: name: nginx-blue-svc namespace: nginx-blue spec: selector: app: nginx ports: - protocol: TCP port: 80 targetPort: 80 --- apiVersion: v1 kind: ConfigMap metadata: name: nginx-blue-config namespace: nginx-blue data: index.html: | \u0026lt;!DOCTYPE html\u0026gt; \u0026lt;html\u0026gt; \u0026lt;head\u0026gt; \u0026lt;title\u0026gt;Blue Deployment\u0026lt;/title\u0026gt; \u0026lt;style\u0026gt;body { background-color: blue; }\u0026lt;/style\u0026gt; \u0026lt;/head\u0026gt; \u0026lt;body\u0026gt;\u0026lt;h1\u0026gt;Blue Deployment\u0026lt;/h1\u0026gt;\u0026lt;/body\u0026gt; \u0026lt;/html\u0026gt; --- apiVersion: apps/v1 kind: Deployment metadata: name: nginx-blue-deployment namespace: nginx-blue spec: replicas: 1 selector: matchLabels: app: nginx template: metadata: labels: app: nginx spec: containers: - name: nginx image: nginx:latest ports: - containerPort: 80 volumeMounts: - name: nginx-config mountPath: /usr/share/nginx/html/index.html subPath: index.html volumes: - name: nginx-config configMap: name: nginx-blue-config kubectl apply -f blue-app.yaml -n nginx-blue kubectl port-forward svc/nginx-blue-svc 8000:80 -n nginx-blue green-app.yaml is identical except for green labels and background color:\nkubectl apply -f green-app.yaml -n nginx-green kubectl port-forward svc/nginx-green-svc 8001:80 -n nginx-green 5. 
Canary Traffic Switch in Action Verify all components are running:\nkubectl get all -n nginx-green kubectl get all -n nginx-blue kubectl get ingress -n canary-bg-switch kubectl get service -n canary-bg-switch With canary-weight: \u0026quot;0\u0026quot;, 100% of traffic at http://canary.localhost goes to the green deployment. To observe the split in real time, use this monitoring script:\nmonitor.sh\n#!/bin/bash TOTAL=1000 counter=0 blue=0 green=0 while [ $counter -lt $TOTAL ]; do response=$(curl -s http://canary.localhost) if [[ $response == *\u0026#34;Blue Deployment\u0026#34;* ]]; then ((blue++)) elif [[ $response == *\u0026#34;Green Deployment\u0026#34;* ]]; then ((green++)) fi ((counter++)) echo -ne \u0026#34;Blue: $blue ($(( blue * 100 / counter ))%), Green: $green ($(( green * 100 / counter ))%), Total: $counter\\r\u0026#34; sleep 0.1 done echo -e \u0026#34;\\nFinal split:\u0026#34; echo \u0026#34;Blue: $blue ($(( blue * 100 / TOTAL ))%)\u0026#34; echo \u0026#34;Green: $green ($(( green * 100 / TOTAL ))%)\u0026#34; chmod +x ./monitor.sh \u0026amp;\u0026amp; ./monitor.sh At 0% canary weight, the output will show:\nBlue: 0 (0%), Green: 80 (100%), Total: 80 Now shift 20% of traffic to the blue canary:\nkubectl patch ingress canary-blue-ingress -n canary-bg-switch \\ --type=json \\ -p \u0026#39;[{\u0026#34;op\u0026#34;: \u0026#34;replace\u0026#34;, \u0026#34;path\u0026#34;: \u0026#34;/metadata/annotations/nginx.ingress.kubernetes.io~1canary-weight\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;20\u0026#34;}]\u0026#39; The monitor will converge toward the 80/20 split:\nBlue: 9 (20%), Green: 36 (80%), Total: 45 Keep incrementally raising the canary weight as your metrics hold. 
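Rather than hand-editing the JSON Patch at each step, the promotion sequence can be scripted; a sketch (the helper name is hypothetical; note that `~1` is the RFC 6901 escape for the `/` inside the annotation key):

```python
import json

def canary_weight_patch(weight: int) -> str:
    """Build the JSON Patch that sets the canary-weight annotation.
    The '~1' in the path is the RFC 6901 escape for '/'."""
    if not 0 <= weight <= 100:
        raise ValueError("canary weight must be between 0 and 100")
    return json.dumps([{
        "op": "replace",
        "path": "/metadata/annotations/nginx.ingress.kubernetes.io~1canary-weight",
        "value": str(weight),  # annotation values must be strings
    }])

# Print a promotion sequence: 20% -> 50% -> 100%.
for w in (20, 50, 100):
    print("kubectl patch ingress canary-blue-ingress -n canary-bg-switch "
          f"--type=json -p '{canary_weight_patch(w)}'")
```

Gating each step on your monitoring metrics turns this into a simple automated rollout.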
If something goes wrong, patch it back to \u0026quot;0\u0026quot; and you\u0026rsquo;re fully back on green.\nThe full code for this post is available on GitHub.\n","permalink":"/posts/kubernetes-canary-deployments/","summary":"\u003cp\u003eThe social media application you\u0026rsquo;ve just created has hit critical mass with thousands of user interactions per second. You\u0026rsquo;ve gathered your users\u0026rsquo; feedback and want to introduce a new feature that will drive engagement. However, you\u0026rsquo;re wary of disrupting active users with downtime if the new feature overloads your servers.\u003c/p\u003e\n\u003cp\u003eCurrently, you have a Kubernetes cluster with the Ingress-Nginx controller and a blue-green deployment setup. You can switch 100% of traffic to the new version immediately — but you\u0026rsquo;d rather divert only a small portion first and monitor the effects before fully committing. Fortunately, there\u0026rsquo;s a common solution to this: the canary deployment.\u003c/p\u003e","title":"Kubernetes Canary: The Art of Zero Downtime Deployments"},{"content":"Kubernetes has made managing containerized applications seamless with its multitude of features, such as built-in horizontal scalability, service discovery, and much more. Kubernetes provides a rich and open framework that operators can take advantage of when managing their software development lifecycle. This extensibility and freedom sometimes make it difficult to provide a single solution to everyone\u0026rsquo;s needs — such as how to update a live application without disrupting the user experience.\nImagine a scenario where you have hundreds of users a day posting messages to your new social media application. You want to roll out a new feature that lets people rate messages. The testing team wants to run a final smoke test1 to ensure everything works as expected before releasing the feature to production. 
This is where a technique such as the blue-green deployment shines.\nBlue-Green Deployments So what exactly is a blue-green deployment? It\u0026rsquo;s a technique for delivering software where two identical instances of an application run simultaneously, but production traffic is routed to only one of them.\nIn our hypothetical scenario, we have the current social media application without the rating feature — version 1.0 (blue) — running live at http://example.ganba.local. We deploy the new version 2.0 (green), with the rating feature, at http://internal.example.ganba.local — a URL only accessible to teams on the local network. When the testing team validates the new version, the operations team switches traffic from blue to green. At that point, rolling back is as simple as switching back. Once the team is confident, the old resources can be safely removed.\nImplementation The approach I\u0026rsquo;m sharing uses Kubernetes namespaces. While there\u0026rsquo;s another popular approach using labels on deployments, namespaces are safer — they offer resource isolation and prevent scenarios like duplicate resource names or unintentionally overwriting existing resources.\nWe\u0026rsquo;ll use Nginx as our test application. Any Kubernetes hosting solution works — minikube or a managed cluster from a cloud provider.\n1. Install Ingress-Nginx We\u0026rsquo;ll use ingress-nginx as our ingress controller. If you have Helm installed:\nhelm upgrade --install ingress-nginx ingress-nginx \\ --repo https://kubernetes.github.io/ingress-nginx \\ --namespace ingress-nginx --create-namespace The following examples use the hostname ganba.local, a custom DNS entry under /etc/hosts.\n2. 
Create the Namespaces Set up three namespaces for the blue-green pipeline:\nkubectl create namespace nginx-blue kubectl create namespace nginx-green kubectl create namespace bg-switch nginx-blue — hosts version 1.0 nginx-green — hosts version 2.0 bg-switch — the traffic controller, routing between blue and green 3. Create the Traffic Controller Components Create two components in bg-switch: an ingress and a service.\ningress.yaml\napiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: blue-green-ingress namespace: bg-switch spec: ingressClassName: nginx rules: - host: \u0026#34;example.ganba.local\u0026#34; http: paths: - pathType: Prefix path: \u0026#34;/\u0026#34; backend: service: name: bg-switch-service port: number: 80 kubectl apply -f ingress.yaml -n bg-switch service-switch.yaml\napiVersion: v1 kind: Service metadata: name: bg-switch-service namespace: bg-switch spec: type: ExternalName externalName: nginx-blue-svc.nginx-blue.svc.cluster.local kubectl apply -f service-switch.yaml -n bg-switch The key here is type: ExternalName — the secret sauce of this implementation. It maps to a DNS name, and since Kubernetes provides internal DNS names in the form service-name.namespace.svc.cluster.local, we can redirect traffic to services in other namespaces entirely.\nThis service-switch.yaml is where you perform the blue-green switch. To flip to green, change externalName and re-apply:\nspec: type: ExternalName externalName: nginx-green-svc.nginx-green.svc.cluster.local 4. 
Create the Blue and Green Applications blue-app.yaml\n--- apiVersion: v1 kind: Service metadata: name: nginx-blue-svc namespace: nginx-blue spec: selector: app: nginx ports: - protocol: TCP port: 80 targetPort: 80 --- apiVersion: v1 kind: ConfigMap metadata: name: nginx-blue-config namespace: nginx-blue data: index.html: | \u0026lt;!DOCTYPE html\u0026gt; \u0026lt;html\u0026gt; \u0026lt;head\u0026gt; \u0026lt;title\u0026gt;Blue Deployment\u0026lt;/title\u0026gt; \u0026lt;style\u0026gt;body { background-color: blue; }\u0026lt;/style\u0026gt; \u0026lt;/head\u0026gt; \u0026lt;body\u0026gt;\u0026lt;h1\u0026gt;Blue Deployment\u0026lt;/h1\u0026gt;\u0026lt;/body\u0026gt; \u0026lt;/html\u0026gt; --- apiVersion: apps/v1 kind: Deployment metadata: name: nginx-blue-deployment namespace: nginx-blue spec: replicas: 1 selector: matchLabels: app: nginx template: metadata: labels: app: nginx spec: containers: - name: nginx image: nginx:latest ports: - containerPort: 80 volumeMounts: - name: nginx-config mountPath: /usr/share/nginx/html/index.html subPath: index.html volumes: - name: nginx-config configMap: name: nginx-blue-config kubectl apply -f blue-app.yaml -n nginx-blue Verify it locally:\nkubectl port-forward svc/nginx-blue-svc 8000:80 -n nginx-blue Open http://localhost:8000 to see the blue deployment.\ngreen-app.yaml follows the same structure with green labels and a green background. Apply it:\nkubectl apply -f green-app.yaml -n nginx-green kubectl port-forward svc/nginx-green-svc 8001:80 -n nginx-green 5. The Switch in Action Verify everything is running:\nkubectl get all -n nginx-blue kubectl get all -n nginx-green kubectl get ingress -n bg-switch kubectl get svc -n bg-switch With the service pointing to nginx-blue, http://example.ganba.local serves the blue deployment. 
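When scripting the cutover, a tiny check of which color actually answered a request can guard the switch; a sketch, assuming the page titles from the ConfigMaps above (the helper name is invented):

```python
def served_color(body: str) -> str:
    """Classify a response body by the title text the blue and green
    ConfigMaps serve; returns 'blue', 'green', or 'unknown'."""
    if "Blue Deployment" in body:
        return "blue"
    if "Green Deployment" in body:
        return "green"
    return "unknown"

print(served_color("<h1>Blue Deployment</h1>"))  # → blue
```

A post-switch script could curl the public hostname, feed the body through this check, and alert if the expected color is not being served.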
To switch to green:\nkubectl patch service bg-switch-service -n bg-switch \\ --type=merge \\ -p \u0026#39;{\u0026#34;spec\u0026#34;:{\u0026#34;externalName\u0026#34;:\u0026#34;nginx-green-svc.nginx-green.svc.cluster.local\u0026#34;}}\u0026#39; Navigate to http://example.ganba.local and you\u0026rsquo;re now on the green deployment. To roll back, patch it back to blue. No downtime, no drama.\nThe full code for this post is available on GitHub.\nI challenge you to take this further — automate the switch through a UI or custom scripts.\nSmoke Testing (software) — Wikipedia\u0026#160;\u0026#x21a9;\u0026#xfe0e;\n","permalink":"/posts/kubernetes-blue-green-deployments/","summary":"\u003cp\u003eKubernetes has made managing containerized applications seamless with its multitude of features, such as built-in horizontal scalability, service discovery, and much more. Kubernetes provides a rich and open framework that operators can take advantage of when managing their software development lifecycle. This extensibility and freedom sometimes make it difficult to provide a single solution to everyone\u0026rsquo;s needs — such as how to update a live application without disrupting the user experience.\u003c/p\u003e","title":"Kubernetes Namespaces: The Secret Weapon for Zero-Risk Blue-Green Deployments"}]