[My Journey to CCIE Automation #10] From Docker Compose to Kubernetes
My journey continues
👋 Hi, I’m Bjørnar Lintvedt
I’m a Senior Network Consultant at Bluetree, working at the intersection of networking and software development.
As mentioned in my first blog post, I’m preparing for the CCIE Automation lab exam — Cisco’s most advanced certification for network automation and programmability. I’m documenting the journey here with weekly, hands-on blog posts tied to the blueprint.
During my studies I’ve been building a real application called Nautix — a modular, container-based automation platform that ties together everything I’ve learned so far.
Blog post #10
Up until now, Nautix has been running on Docker Compose. That worked well in the early stages, but for Blueprint 4.3 – Package and deploy a solution by using Kubernetes, it was time to take the next step.
In this post, I’ll show how I migrated parts of Nautix to Kubernetes, running locally on my development laptop, and how that maps directly to the CCIE Automation blueprint.
Why Kubernetes?
Docker Compose is great for local development, but Kubernetes gives you:
- Declarative deployments
- Built-in health checking and self-healing
- Native secrets and configuration management
- Service discovery and load balancing
- A consistent operational model across environments
Local Kubernetes setup using kind
For local development, I chose kind (Kubernetes IN Docker).
Why kind?
- Lightweight and fast
- Runs entirely inside Docker
- Perfect for labs and experimentation
- Uses real Kubernetes APIs and tooling
Prerequisites
- Docker Desktop (with WSL2 integration)
- kubectl
- kind
Create a kind cluster with ingress support
I created the cluster with explicit port mappings so the ingress controller could be reached from my laptop:
# kind-nautix.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    extraPortMappings:
      - containerPort: 80
        hostPort: 80
      - containerPort: 443
        hostPort: 443
Create the cluster:
kind create cluster --name nautix --config kind-nautix.yaml
Install ingress-nginx:
kubectl apply -f \
https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/kind/deploy.yaml
At this point, I had a fully functional Kubernetes cluster running locally.
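A quick sanity check confirms the cluster is reachable (kind names the kubeconfig context kind-<cluster-name>, so kind-nautix here):

kubectl cluster-info --context kind-nautix
kubectl get nodes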
Namespace: isolating Nautix
First, I created a dedicated namespace:
apiVersion: v1
kind: Namespace
metadata:
  name: nautix
kubectl apply -f k8s/00-namespace.yaml
kubectl config set-context --current --namespace=nautix
Namespaces are a simple but powerful concept — they provide isolation and make it much easier to reason about resources.
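Scoping everything to one namespace also makes day-to-day inspection and cleanup straightforward:

# Core workload resources (pods, deployments, services) in the namespace
kubectl get all -n nautix

# Remove the entire lab in one go
kubectl delete namespace nautix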
Handling secrets
In Docker Compose, secrets often end up in .env files.
In Kubernetes, secrets are first-class objects.
Instead of committing secrets to Git, I created them imperatively using kubectl:
kubectl create secret generic nautix-secrets \
--from-literal=VAULT_DEV_ROOT_TOKEN_ID=root-token \
-n nautix
This creates a native Kubernetes Secret that can be safely referenced by Deployments without storing sensitive values in source control.
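Secret values are stored base64-encoded, so it is easy to double-check what actually landed in the cluster:

kubectl get secret nautix-secrets -n nautix \
  -o jsonpath='{.data.VAULT_DEV_ROOT_TOKEN_ID}' | base64 -d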
Deploying the Inventory service
The Inventory service is a stateless Flask API, making it a perfect candidate for a Deployment.
Inventory Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inventory
  namespace: nautix
spec:
  replicas: 1
  selector:
    matchLabels:
      app: inventory
  template:
    metadata:
      labels:
        app: inventory
    spec:
      containers:
        - name: inventory
          image: nautix-inventory:dev
          ports:
            - containerPort: 8000
          readinessProbe:
            httpGet:
              path: /healthz
              port: 8000
            initialDelaySeconds: 3
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8000
            initialDelaySeconds: 10
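Both probes assume the Flask app exposes a /healthz endpoint. A minimal sketch of what such an endpoint can look like (the actual Nautix route may differ):

# healthz.py - hypothetical, minimal health endpoint for the Inventory API
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/healthz")
def healthz():
    # Keep this cheap: return 200 as long as the process can serve requests.
    # Deeper dependency checks (database, Vault, etc.) belong in readiness logic.
    return jsonify(status="ok"), 200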
Inventory Service
apiVersion: v1
kind: Service
metadata:
  name: inventory
  namespace: nautix
spec:
  selector:
    app: inventory
  ports:
    - port: 8000
  type: ClusterIP
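One kind-specific detail: nautix-inventory:dev is a locally built image that doesn't exist in any registry, so it has to be loaded into the kind nodes before the pods can start:

kind load docker-image nautix-inventory:dev --name nautix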
Deploying Vault with persistent storage
Vault runs in dev mode for this lab, but I still wanted to demonstrate volumes and persistent storage.
PersistentVolumeClaim
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: vault-data
  namespace: nautix
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
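kind ships with a default StorageClass backed by the local-path provisioner, so the claim binds automatically once the Vault pod is scheduled:

kubectl get pvc vault-data -n nautix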
Vault Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vault
  namespace: nautix
spec:
  replicas: 1
  selector:
    matchLabels:
      app: vault
  template:
    metadata:
      labels:
        app: vault
    spec:
      containers:
        - name: vault
          image: vault:1.8.7
          ports:
            - containerPort: 8200
          env:
            - name: VAULT_DEV_LISTEN_ADDRESS
              value: "0.0.0.0:8200"
            - name: VAULT_DEV_ROOT_TOKEN_ID
              valueFrom:
                secretKeyRef:
                  name: nautix-secrets
                  key: VAULT_DEV_ROOT_TOKEN_ID
          volumeMounts:
            - name: data
              mountPath: /vault/file
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: vault-data
Vault Service
apiVersion: v1
kind: Service
metadata:
  name: vault
  namespace: nautix
spec:
  selector:
    app: vault
  ports:
    - port: 8200
  type: ClusterIP
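Before wiring up ingress, the quickest way to poke at a ClusterIP service from the laptop is a port-forward:

kubectl port-forward svc/vault 8200:8200 -n nautix
# Vault is now reachable on http://localhost:8200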
Exposing services with Ingress
To expose the services externally, I used host-based routing via Ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nautix
  namespace: nautix
spec:
  ingressClassName: nginx
  rules:
    - host: inventory.nautix.local
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: inventory
                port:
                  number: 8000
    - host: vault.nautix.local
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: vault
                port:
                  number: 8200
With the appropriate host entries on my laptop, I could now access:
- http://inventory.nautix.local
- http://vault.nautix.local
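Because the kind node maps ports 80 and 443 straight through to the laptop, the host entries simply point at localhost (C:\Windows\System32\drivers\etc\hosts on Windows, /etc/hosts on Linux/macOS):

127.0.0.1  inventory.nautix.local
127.0.0.1  vault.nautix.local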
Spinning everything up
With all manifests in place:
kubectl apply -f k8s/
Verify:
kubectl get pods
kubectl get svc
kubectl get ingress
At this point, both Inventory and Vault were running fully inside Kubernetes.
Managing pod lifecycle with kubectl
This is where Kubernetes really shines.
Scaling
kubectl scale deploy inventory --replicas=3
kubectl get pods -l app=inventory
Logs
kubectl logs deploy/inventory
kubectl logs -f deploy/inventory
Self-healing
kubectl delete pod <inventory-pod>
The pod is automatically recreated — no manual intervention required.
Health checks and self-healing
Where Docker Compose offers depends_on for startup ordering, Kubernetes uses readiness and liveness probes to manage traffic and restarts.
- Readiness controls whether traffic is sent to a pod
- Liveness controls when a pod should be restarted
This makes the platform far more resilient by default.
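Probe results and failures show up directly in the pod events, which is handy when tuning delays and thresholds:

kubectl describe pod -l app=inventory -n nautix
kubectl get events -n nautix --sort-by=.lastTimestamp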
Closing thoughts
By migrating even a small part of Nautix to Kubernetes, I was able to demonstrate every requirement in Blueprint 4.3 using real workloads:
- Declarative deployments
- Secure secret handling
- Persistent storage
- Ingress routing
- Health checks
- Scaling and lifecycle management
- Full control via kubectl
🔗 Useful Links
Blog series
- [#1] Intro + Building a Python CLI app
- [#2] Building an Inventory REST API + Docker
- [#3] Orchestration API + NETCONF
- [#4] Automating Network Discovery and Reports with Python & Ansible
- [#5] Building Network Pipelines for Reliable Changes with pyATS & GitLab CI
- [#6] Automating Cisco ACI Deployments with Terraform, Vault, and GitLab CI
- [#7] Exploring Model-Driven Telemetry for Real-Time Network Insights