Kubernetes IN Docker - local clusters for testing Kubernetes
$ go install sigs.k8s.io/[email protected] && time kind create cluster
kind (Kubernetes in Docker) is a tool for running local Kubernetes clusters using Docker container "nodes". kind was primarily designed for testing Kubernetes itself, but it can also be used for local development or CI.
Features
- Spin up, tear down, and rebuild clusters in seconds.
- Runs with limited compute resources: a single CPU core and 1 GB of RAM are enough.
- Supports multi-node (including HA) clusters; more nodes naturally require more resources.
- Pick whichever Kubernetes version you'd like to test.
- Built-in load balancing via Cloud Provider KIND, plus Ingress and Gateway API support.
- Supports Linux, macOS, and Windows.
Prerequisites
Required: Installing Docker
Required: Installing kubectl
Recommended: Installing Helm CLI
Recommended: Installing Cilium CLI
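A quick way to see which of these CLIs are already on your PATH (a minimal sketch; it only reports, it does not install anything):

```shell
# Report which prerequisite binaries are present on PATH.
for tool in docker kubectl helm cilium; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING"
  fi
done
```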
Install KinD
Follow the platform-specific instructions in the official KinD GitHub repo or the Quick Start guide.
# macOS (Homebrew)
$ brew install kind
# Linux (amd64)
$ [ $(uname -m) = x86_64 ] && curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.30.0/kind-linux-amd64
$ chmod +x ./kind
$ sudo mv ./kind /usr/local/bin/kind
Create a single node cluster with specific Kubernetes version
$ kind create cluster --image kindest/node:v1.33.4
Advanced Usage
Use a configuration file for advanced scenarios: for more complex setups, such as multi-node clusters or custom networking, define the cluster (including the node image) in a YAML configuration file.
Create a cluster with standard Kubernetes API server address and port
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  apiServerAddress: "0.0.0.0" # Binds to all interfaces
  apiServerPort: 6443 # Default Kubernetes API port
Save the file as kind-config.yaml and create the cluster:
$ kind create cluster --config kind-config.yaml
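Because the API server binds to 0.0.0.0, a remote machine can reach it by pointing a copy of the kubeconfig at the Docker host's address. A minimal sketch; the host IP 192.168.1.50 and the one-line kubeconfig sample are assumptions for illustration (in practice, pipe `kind get kubeconfig` through the same sed):

```shell
# Assumption: 192.168.1.50 stands in for your Docker host's LAN address.
HOST_IP=192.168.1.50

# Sample of the relevant kubeconfig line when apiServerAddress is 0.0.0.0.
cat > kubeconfig.sample <<'EOF'
server: https://0.0.0.0:6443
EOF

# Rewrite the server address so remote kubectl clients can connect.
sed "s|https://0.0.0.0:6443|https://${HOST_IP}:6443|" kubeconfig.sample > remote-kubeconfig
cat remote-kubeconfig   # -> server: https://192.168.1.50:6443
```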
Create a multi-node cluster
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  image: kindest/node:v1.34.0
- role: worker
  image: kindest/node:v1.34.0
- role: worker
  image: kindest/node:v1.34.0
- role: worker
  image: kindest/node:v1.34.0
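One way to write the config above straight from the shell, using the file name the create command below expects (the node image tag is an assumption; pick one that matches your installed kind release):

```shell
# Save the multi-node config (one control-plane, three workers).
cat > kind-multi-nodes-config.yaml <<'EOF'
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  image: kindest/node:v1.34.0
- role: worker
  image: kindest/node:v1.34.0
- role: worker
  image: kindest/node:v1.34.0
- role: worker
  image: kindest/node:v1.34.0
EOF

# Sanity check: exactly three worker entries.
grep -c 'role: worker' kind-multi-nodes-config.yaml   # -> 3
```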
Now create the cluster:
$ kind create cluster --config kind-multi-nodes-config.yaml
Create a multi-node HA cluster with Cilium CNI
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: control-plane
- role: control-plane
- role: worker
- role: worker
- role: worker
networking:
  disableDefaultCNI: true
  kubeProxyMode: none
Save the file as kind-no-proxy-config.yaml, then create the HA cluster without kube-proxy. Note that with the default CNI disabled, the nodes stay NotReady until a CNI (Cilium, installed below) is deployed:
$ kind create cluster --config kind-no-proxy-config.yaml
L3/L7 Traffic Management with Gateway API
For serving HTTP/2, gRPC, or WebSocket traffic, or for large clusters (more than about 10 workers and 100 services), consider the Gateway API rather than an Ingress controller.
Install Gateway API CRDs:
$ kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.3.0/standard-install.yaml
Verify that the installation succeeded:
$ kubectl get crd gatewayclasses.gateway.networking.k8s.io
Install Cilium CNI with Gateway Controller:
$ cilium install --set kubeProxyReplacement=true --set gatewayAPI.enabled=true
$ cilium status --wait
$ cilium config view
Local L2 Load-Balancer with MetalLB
Get the Docker network that KinD runs on:
$ docker network inspect kind | jq .[].IPAM.Config
Output
[
{
"Subnet": "172.18.0.0/16",
"Gateway": "172.18.0.1"
},
{
"Subnet": "fc00:f853:ccd:e793::/64",
"Gateway": "fc00:f853:ccd:e793::1"
}
]
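The MetalLB pool below needs a free range inside that IPv4 subnet. A small sketch of deriving one, assuming the /16 layout shown above (the inline jq pipeline for fetching the subnet is an illustrative variant of the inspect command):

```shell
# Take the IPv4 subnet reported by `docker network inspect kind` and carve out
# the .255.x range for MetalLB (assumes a 172.x.0.0/16 subnet as shown above).
# In practice: SUBNET=$(docker network inspect kind | jq -r '.[].IPAM.Config[].Subnet' | grep -v :)
SUBNET="172.18.0.0/16"
PREFIX=$(echo "$SUBNET" | cut -d. -f1-2)    # -> 172.18
echo "${PREFIX}.255.1-${PREFIX}.255.254"    # -> 172.18.255.1-172.18.255.254
```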
Create the MetalLB IPAddressPool and L2Advertisement resources, saved as metallb-config.yaml:
---
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: kind-pool
  namespace: metallb-system
spec:
  addresses:
  - 172.18.255.1-172.18.255.254 # A free range inside the IPv4 subnet reported by the docker command above.
  autoAssign: true
  avoidBuggyIPs: false
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: lb
  namespace: metallb-system
spec:
  ipAddressPools:
  - kind-pool
Install MetalLB:
$ helm repo add metallb https://metallb.github.io/metallb
$ helm install metallb metallb/metallb --namespace metallb-system --create-namespace
$ kubectl get pods -n metallb-system
Wait until all pods in the namespace are Running and Ready, then apply the MetalLB configuration:
$ kubectl apply -f metallb-config.yaml
Test whether MetalLB is working correctly:
$ kubectl create deployment kubernetes-bootcamp --image=gcr.io/google-samples/kubernetes-bootcamp:v1
$ kubectl expose deployment/kubernetes-bootcamp --type="LoadBalancer" --port 80
$ EXTERNAL_IP=$(kubectl get svc kubernetes-bootcamp -o json | jq -r '.status.loadBalancer.ingress[0].ip')
$ curl http://$EXTERNAL_IP/
Output
Hello Kubernetes bootcamp! | Running on: kubernetes-bootcamp-658f6cbd58-mnrnx | v=1
Deploying Bookinfo Applications
Verify the end-to-end setup by deploying Istio's Bookinfo sample application and Cilium's Gateway with HTTPRoutes:
$ kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.27/samples/bookinfo/platform/kube/bookinfo.yaml
$ kubectl apply -f https://raw.githubusercontent.com/cilium/cilium/v1.18.0/examples/kubernetes/gateway/basic-http.yaml
$ kubectl get svc cilium-gateway-my-gateway
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
cilium-gateway-my-gateway LoadBalancer 10.96.190.51 172.18.255.1 80:32243/TCP 30s
$ kubectl get gateway
NAME CLASS ADDRESS PROGRAMMED AGE
my-gateway cilium 172.18.255.1 True 98s
$ GATEWAY=$(kubectl get gateway my-gateway -o jsonpath='{.status.addresses[0].value}')
$ curl -v -H 'magic: foo' http://"$GATEWAY"\?great\=example
$ curl --fail -s http://$GATEWAY/details/1 | jq .
Output
{
"id": 1,
"author": "William Shakespeare",
"year": 1595,
"type": "paperback",
"pages": 200,
"publisher": "PublisherA",
"language": "English",
"ISBN-10": "1234567890",
"ISBN-13": "123-1234567890"
}
Delete KinD Cluster
$ kind delete cluster