This blog will help you install an IPv6-only Kubernetes cluster using Ubuntu VMs on Windows or any other OS.
Prerequisites:
1. Oracle VM VirtualBox on Windows: download from https://www.virtualbox.org/. You can use similar software on other operating systems.
2. Add a new virtual host adapter as explained in Add new Adapter Interface if you don't want to use the default IP address range for the virtual interface.
3. Create 2 VMs, 1 master and 1 worker node (you can add more worker nodes), with the requirements below:
- Version: Ubuntu (64-bit) [Ubuntu 16.04+]
- Processor: 2 CPUs
- RAM: 4 GB
- Network: NAT as adapter 1 and Host-only adapter as adapter 2
Address schema used:
master1 - 2019::100
worker1 - 2019::101
pod-network-cidr - 2019::1:192.168.0.0/112
service-cidr - 2019::10.0.0.0/112
Note! You can choose your own address schema.
Prerequisite
Make sure you have a default IPv6 route, e.g.:
ip -6 route add default dev enp0s8 via 2019::130
Note! The gateway does not need to be functional, i.e. 2019::130 can be any IP in the interface's CIDR range that is not assigned to any node.
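You can confirm the default route is in place with:
ip -6 route show default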
Installation steps:
1. Add an IPv6 address to the host-only interface (enp0s8) of the master and worker nodes
e.g. on the master node:
ifconfig enp0s8 inet6 add 2019::100/64 up
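If ifconfig is not available (newer Ubuntu releases deprecate it in favour of iproute2), a sketch of the equivalent with the ip command, assuming the interface is named enp0s8 on both nodes (use 2019::101 on the worker):
ip -6 addr add 2019::100/64 dev enp0s8
ip link set enp0s8 up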
2. Update /etc/hosts on all nodes with the IPv6 address and hostname of each node
2019::100 master1
2019::101 worker1
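A quick check that IPv6 name resolution works from the master (ping6 is available on Ubuntu 16.04; on newer releases plain ping handles IPv6 too):
ping6 -c 3 worker1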
3. Update /etc/sysctl.conf and add
net.ipv6.conf.all.forwarding=1
4. Reload sysctl
sysctl -p /etc/sysctl.conf
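Verify the change took effect; this should print 1:
sysctl net.ipv6.conf.all.forwarding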
5. Turn off swap using
sudo swapoff -a
6. Comment out the swap line in /etc/fstab so swap stays off across reboots
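A sketch of a one-liner that does this, assuming a standard fstab layout where the swap entry has "swap" as its own field:
sed -i '/\sswap\s/ s/^/#/' /etc/fstab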
7. Install Docker
apt-get update && apt-get install -y docker.io
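Optionally confirm Docker is up and note which cgroup driver it reports, since step 8c below must match it:
systemctl status docker
docker info | grep -i cgroup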
8. Install Kubernetes
On all nodes
a. Add the Kubernetes apt repository
apt-get update && apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat << EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
b. Install kubelet, kubeadm, and kubectl
apt-get update && apt-get install -y kubelet kubeadm kubectl
c. Edit /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and change the kubelet cgroup driver from systemd to cgroupfs so it matches Docker, e.g. with sed:
sed -i "s/cgroup-driver=systemd/cgroup-driver=cgroupfs/g" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
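Then reload systemd and restart the kubelet so the new driver is picked up:
systemctl daemon-reload
systemctl restart kubelet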
On Master node
a. Initialize the cluster with the IPv6 pod and service CIDRs
kubeadm init --pod-network-cidr=2019::1:192.168.0.0/112 --apiserver-advertise-address=2019::100 --service-cidr=2019::10.0.0.0/112
b. Set up kubectl access for your user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
c. Copy the join command generated in the kubeadm init output and run it on the worker nodes to join them to the cluster
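The generated command looks roughly like the sketch below; <token> and <hash> are placeholders for the values from your own kubeadm init output (note the bracketed IPv6 API server address):
kubeadm join [2019::100]:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
If you lose the output, you can regenerate the command on the master with kubeadm token create --print-join-command.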
d. Check whether the nodes are running
kubectl get nodes
Note! The STATUS will be NotReady until you install a CNI plugin
e. Install CNI
To install version 3.5.4, which includes the fix for the IPv6 issue:
wget https://docs.projectcalico.org/v3.5/getting-started/kubernetes/installation/hosted/etcd.yaml
wget https://docs.projectcalico.org/v3.5/getting-started/kubernetes/installation/hosted/calico.yaml
1. Edit etcd.yaml: update the attributes below in the calico-etcd DaemonSet, and replace the IPv4 cluster IP with an IPv6 cluster IP in the calico-etcd service
vi etcd.yaml
- --advertise-client-urls=http://[$(CALICO_ETCD_IP)]:6666
- --listen-client-urls=http://[::]:6666
- --listen-peer-urls=http://[::]:6667
#update the calico-etcd service and change the IPv4 address to an IPv6 cluster IP
clusterIP: 2019::a00:9f85
kubectl apply -f etcd.yaml
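Wait for the calico-etcd pod to be Running before applying calico.yaml (the exact pod name may vary between Calico versions):
kubectl get pods -n kube-system | grep calico-etcd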
2. Edit calico.yaml and enable the configurations below
vi calico.yaml
#update etcd_endpoints with IPV6 address
etcd_endpoints: "http://[2019::100]:6666"
# update the ipam type in calico-config
"ipam": {
"type": "host-local",
"subnet": "usePodCidr"
},
#update calico-node with below attributes
- name: CALICO_IPV4POOL_IPIP
value: "off"
- name: CALICO_IPV6POOL_CIDR
value: "2019::1:192.168.0.0/112"
- name: FELIX_IPV6SUPPORT
value: "true"
- name: IP6
value: "autodetect"
kubectl apply -f calico.yaml
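Watch the Calico pods until they are all Running; the nodes should flip to Ready shortly afterwards:
kubectl get pods -n kube-system -o wide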
To install the old version v3.3:
kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
wget https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
Edit calico.yaml and enable the configurations below
vi calico.yaml
- name: CALICO_IPV4POOL_IPIP
value: "off"
- name: CALICO_IPV6POOL_CIDR
value: "2019::1:192.168.0.0/112"
- name: FELIX_IPV6SUPPORT
value: "true"
- name: IP6
value: "autodetect"
kubectl apply -f calico.yaml
For Calico v3.3, you should apply a fix to get BGP peering between the nodes if you face the same problem as https://github.com/projectcalico/calico/issues/2458. The fix is available in Calico version 3.5.4.
Workaround: delete the BGPPeers and manually add IPv6 BGP peers.
Note! This workaround is not needed from Calico version 3.5.4 onwards.
f. Check the status of the pods in all namespaces
g. Check the status of the nodes. The status must be Ready, and then the cluster is ready to use
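The corresponding commands:
kubectl get pods --all-namespaces
kubectl get nodes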
References
https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/
https://www.projectcalico.org/enable-ipv6-on-kubernetes-with-project-calico/
https://docs.projectcalico.org/v3.5/getting-started/kubernetes/
https://docs.projectcalico.org/v3.5/usage/ipv6