Kubernetes VPN Access
Fix kubectl and Cluster Connectivity
Key Takeaways
VPNs conflict with Kubernetes due to overlapping IP ranges (10.x, 172.x)
Local clusters (minikube, Docker Desktop K8s) are especially affected
Split tunneling routes K8s tools directly while work traffic stays on the VPN
Why VPN Breaks Kubernetes Access
Kubernetes uses internal IP ranges for pods and services, typically inside 10.0.0.0/8 or 172.16.0.0/12. Corporate VPNs often claim the same ranges. When the ranges collide, the VPN's routes usually win.
- kubectl can't reach the API server
- Pod networking breaks
- Port forwarding fails
- Services become unreachable
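A quick way to see whether the VPN has claimed your cluster's route is to ask the routing table directly (a macOS sketch; 192.168.49.2 is a placeholder cluster IP, substitute the address from kubectl cluster-info):

```shell
# Ask the routing table which interface carries traffic to the API server.
# 192.168.49.2 is a placeholder; substitute your own cluster IP.
route -n get 192.168.49.2        # macOS
# ip route get 192.168.49.2      # Linux equivalent
# If the "interface:" line names the VPN tunnel (e.g. utun3),
# the VPN has claimed the route.
```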
Common Kubernetes VPN Issues
You'll recognize these symptoms:
- kubectl get pods → connection refused
- kubectl exec → timeout
- Minikube cluster unreachable after the VPN connects
- Docker Desktop K8s services inaccessible
- Port forwarding hangs or fails
- Ingress/LoadBalancer not responding
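To confirm it's a connectivity problem rather than a slow cluster, bound kubectl's wait and turn up its request logging (a sketch; both flags are standard kubectl options):

```shell
# Fail fast instead of hanging on a dead route
kubectl get pods --request-timeout=5s

# Verbose output shows which IP and URL kubectl is actually dialing
kubectl get pods -v=6 2>&1 | grep -i "get\|dial"
```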
Diagnosing the Conflict
```shell
# Check current routes
netstat -rn | grep -E "^(10\.|172\.)"

# Check K8s cluster IP
kubectl cluster-info

# Test API server connectivity
curl -k https://kubernetes.docker.internal:6443
```
If routes for 10.x or 172.x point to your VPN interface, you've found the conflict.
Local Kubernetes Options
Each local K8s option has VPN challenges:
Minikube
Uses a VM or container driver and often conflicts with VPN routes. Run minikube ip to find the cluster IP.
Docker Desktop Kubernetes
Runs in a Docker VM and is reachable at kubernetes.docker.internal. A VPN may redirect this hostname or its route.
Kind (Kubernetes in Docker)
Uses Docker's bridge networking, so it inherits the same Docker + VPN issues.
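For kind specifically, you can read the subnet of the Docker network it runs on and compare it against your VPN routes (assuming the default network name, kind):

```shell
# Print the subnet(s) of kind's Docker network (default name: "kind")
docker network inspect kind \
  --format '{{range .IPAM.Config}}{{.Subnet}}{{"\n"}}{{end}}'
```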
Fix 1: Check IP Range Conflicts
```shell
# See VPN routes
netstat -rn

# Compare with K8s service IPs
kubectl get svc --all-namespaces
```
If K8s and your VPN use the same subnet, that's your problem.
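To automate the comparison, you can walk every service ClusterIP and ask the routing table which interface would carry it (a macOS sketch; on Linux, swap route -n get for ip route get):

```shell
# For each service ClusterIP, show which interface routes it.
# A VPN interface (utunX) next to a cluster IP means a conflict.
kubectl get svc -A \
  -o jsonpath='{range .items[*]}{.spec.clusterIP}{"\n"}{end}' \
  | grep -Ev '^(None)?$' \
  | while read -r ip; do
      printf '%s -> ' "$ip"
      route -n get "$ip" 2>/dev/null | awk '/interface:/ {print $2}'
    done
```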
Fix 2: Change Minikube Network
Start minikube with a different subnet that doesn't overlap with your VPN:
```shell
minikube start --service-cluster-ip-range=10.200.0.0/16
```
This requires recreating your cluster.
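The service CIDR only takes effect on a fresh cluster, so the full sequence looks like this (10.200.0.0/16 is just an example range; pick one your VPN doesn't use):

```shell
# Destroy the old cluster and recreate it on a non-conflicting range
minikube delete
minikube start --service-cluster-ip-range=10.200.0.0/16

# Confirm services now land in the new range
kubectl get svc -A
```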
Fix 3: Use NodePort Instead of LoadBalancer
LoadBalancer IPs often conflict with VPN ranges. NodePort exposes the service on the node itself, which local clusters map to localhost:
```yaml
spec:
  type: NodePort
  ports:
    - port: 80
      nodePort: 30080
```
Access the service via localhost:30080 instead.
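If the service already exists, you can switch it to NodePort in place rather than editing YAML (a sketch; my-service and the port numbers are placeholders):

```shell
# Patch an existing service to NodePort (service name and ports are placeholders)
kubectl patch svc my-service \
  -p '{"spec": {"type": "NodePort", "ports": [{"port": 80, "nodePort": 30080}]}}'
```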
Fix 4: Route K8s Tools Direct (Recommended)
The cleanest fix: route your Kubernetes tools outside the VPN tunnel.
1. Install SplitTunnel on your Mac
2. Add Docker Desktop to "Direct" routing
3. Add Terminal (where you run kubectl) to "Direct"
4. Local K8s clusters become accessible
Best of both worlds: local development works, corporate access stays secure.
Accessing Corporate K8s on VPN
Your corporate Kubernetes cluster needs VPN access. But your local development cluster doesn't. With SplitTunnel, you can have both.
Route Terminal direct for local cluster access, or keep it on VPN for corporate cluster access. Switch kubectl contexts as needed.
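Switching between the two then comes down to kubectl contexts (docker-desktop is the context Docker Desktop creates; the corporate context name is a placeholder):

```shell
# List available contexts
kubectl config get-contexts

# Local cluster (Terminal routed direct)
kubectl config use-context docker-desktop

# Corporate cluster (requires the VPN; context name is a placeholder)
kubectl config use-context corp-cluster
```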
Verifying the Fix
After setting up SplitTunnel with Docker Desktop routed direct:
```shell
# Connect VPN first

# Test local cluster
kubectl config use-context docker-desktop
kubectl get nodes
# Should show Ready

# Test pod creation
kubectl run test --image=nginx --restart=Never
kubectl port-forward test 8080:80
# Access http://localhost:8080

# Clean up
kubectl delete pod test
```
Get Back to Coding
Route K8s tools direct while work apps stay on VPN. Access both clusters.
7-day free trial · Cancel anytime