Wednesday, 27 January 2021

How to install an IPv4/IPv6 Kubernetes Dual Stack Cluster?

 

   Steps to install Kubernetes Dual Stack Cluster 

Below are simple steps to set up a dual stack cluster using Ubuntu VMs. The host machine can run any OS; mine is a Windows machine.

1 master + 1 worker node deployment

 Prerequisites

  • Oracle VM Virtual Box : Download from https://www.virtualbox.org/
  • Turn on hardware virtualization support (VT-x/AMD-V) in the BIOS settings. (On Windows 10, also check the Hyper-V settings, since Hyper-V can conflict with VirtualBox.)

Create VMs

Create 2 VMs (1 master and 1 worker node) with the below requirements
  • Version: Ubuntu (64-bit)
  • Processor: 2 CPU
  • 4 GB RAM
  • Network: Select NAT as adapter 1 and a Host-only adapter as adapter 2

Initial setup on all nodes

  1. Turn off swap using sudo swapoff -a
             Then comment out the line with swap in /etc/fstab
              nano /etc/fstab
  2. Add an IPv6 address on all nodes
            E.g.  master# ifconfig enp0s8 inet6 add 2021::100/64 up
  3. Add the IP addresses (both IPv4 and IPv6) and hostnames of all nodes to /etc/hosts.
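For example, with the address schema used in this post (the IPv4 addresses and hostnames below are hypothetical; substitute your own):

```
# /etc/hosts (example entries)
192.168.56.100  master1
2021::100       master1
192.168.56.101  worker1
2021::101       worker1
```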
  4. Enable IPv6 forwarding by running the below command on all nodes
            sudo sysctl -w net.ipv6.conf.all.forwarding=1
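Note that sysctl -w does not survive a reboot; to make forwarding persistent you can also add it to a sysctl config file, e.g. (a sketch; IPv4 forwarding is included as well, since a dual stack cluster routes both families):

```
# /etc/sysctl.d/k8s.conf (apply with: sudo sysctl --system)
net.ipv6.conf.all.forwarding = 1
net.ipv4.ip_forward = 1
```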

      5. Install Docker

            apt-get update && apt-get install -y docker.io

      6. Install kubelet, kubeadm and kubectl

          apt-get update && apt-get install -y apt-transport-https curl
          curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
          cat << EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF

          apt-get update && apt-get install -y kubelet kubeadm kubectl

    7. Update the cgroup driver on all nodes
            nano /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

          Edit the file so that the kubelet cgroup driver matches the one Docker reports (check with docker info | grep -i cgroup). If Docker uses cgroupfs, change cgroup-driver=systemd to cgroup-driver=cgroupfs in the Environment line.

       #Also add the below configuration on all worker nodes so pods are reachable from the master via the correct node IP
        Environment="KUBELET_EXTRA_ARGS=--node-ip=<worker IP address>"
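Taken together, the kubelet drop-in additions can look like the excerpt below (a sketch; 2021::101 is a hypothetical worker address following the example schema, and cgroupfs assumes that is what Docker reports):

```
# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (excerpt, sketch)
Environment="KUBELET_EXTRA_ARGS=--cgroup-driver=cgroupfs --node-ip=2021::101"
```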

   8. Create kubeadm-config.yaml

    #Example config file
        apiVersion: kubeadm.k8s.io/v1beta2
        kind: ClusterConfiguration
        featureGates:
          IPv6DualStack: true
        apiServer:
          extraArgs:
            advertise-address: <API Address>
        networking:
          dnsDomain: cluster.local
          podSubnet: "<IPv4 range>/16,<IPv6 range>/48"
          serviceSubnet: "<IPv4 range>/16,<IPv6 range>/112"
        ---
        apiVersion: kubeadm.k8s.io/v1beta2
        kind: InitConfiguration
        localAPIEndpoint:
          advertiseAddress: <API Address>
          bindPort: 6443

     9. Initialize the cluster using the created config file
            kubeadm init --config /home/kubeadm-config.yaml
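Before running kubeadm init, it can be worth sanity-checking that the podSubnet and serviceSubnet values each contain exactly one IPv4 and one IPv6 CIDR. A rough shell sketch (check_dual is a hypothetical helper; the CIDRs below are examples):

```shell
#!/bin/sh
# Rough check that a comma-separated subnet list contains exactly one
# IPv4 CIDR and one IPv6 CIDR, as a dual-stack kubeadm config requires.
check_dual() {
  v4=$(echo "$1" | tr ',' '\n' | grep -Ec '^[0-9.]+/[0-9]+$' || true)
  v6=$(echo "$1" | tr ',' '\n' | grep -c ':' || true)
  [ "$v4" -eq 1 ] && [ "$v6" -eq 1 ]
}
check_dual "192.168.0.0/16,2021::/48" && echo "subnet list looks dual-stack"
```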

   10. Install Calico

       Edit the Calico manifest and enable dual stack as explained under the section 'Enable dual stack' in the Calico documentation, then apply it:

        kubectl apply -f calico.yaml

    11. Deploy your application Pods and check whether they have received both IPv4 and IPv6 addresses   :)
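To check, pull the pod addresses with something like kubectl get pod <pod> -o jsonpath='{.status.podIPs[*].ip}' and verify both families are present. A small sketch (has_dual_ips is a hypothetical helper and the addresses are made up):

```shell
#!/bin/sh
# Check that a space-separated IP list contains both an IPv4
# and an IPv6 address (i.e. the pod is dual-stack).
has_dual_ips() {
  echo "$1" | grep -Eq '[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+' &&
  echo "$1" | grep -q ':'
}
has_dual_ips "192.168.0.5 2021::1:5" && echo "pod is dual-stack"
```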


Wednesday, 20 May 2020

Rename Interface Name in Ubuntu 18.04


The below steps allow you to rename an interface in Ubuntu 18.04.

1. Check the MAC address of the interface that needs to be renamed using
      ip a
2. Create the below file
      nano /etc/udev/rules.d/70-persistent-net.rules
3. Add the below line to the file, providing the MAC address of the interface and the new interface name
 SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="MAC_ADDRESS", NAME="NEW_INTERFACENAME"
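The rule line is easy to mistype; the sketch below prints it for a given MAC address and name (make_udev_rule is a hypothetical helper):

```shell
#!/bin/sh
# Print a persistent-net udev rule for a given MAC address and
# new interface name; append the output to the rules file above.
make_udev_rule() {
  printf 'SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="%s", NAME="%s"\n' "$1" "$2"
}
make_udev_rule "08:00:27:aa:bb:cc" "mgmt0"
```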
4. Change the interface name in /etc/netplan/<filename>.yaml

          network:
            ethernets:
              NEW_INTERFACENAME:
5. Edit the grub file and add the below value
        nano /etc/default/grub
       GRUB_CMDLINE_LINUX="net.ifnames=1 biosdevname=0"
6. Update grub
      update-grub
7. Reboot the VM

Monday, 11 March 2019

Calico: Delete default BGPPeers and manually add IPV6 BGP Peers

This post shows how to remove the default BGP peers and add node-specific peers.

Prerequisite:  install-calicoctl-in-kubernetes-cluster.html 

1. Disable the full node-to-node mesh peering.

cat << EOF | calicoctl create -f -
apiVersion: projectcalico.org/v3
kind: BGPConfiguration
metadata:
   name: default
spec:
   logSeverityScreen: Info
   nodeToNodeMeshEnabled: false
   asNumber: 64512
EOF

2. Add Node specific BGP Peer


  • master1 to worker1

cat << EOF | calicoctl create -f -
apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: bgppeer-node-worker1
spec:
  peerIP: 2019::101
  node: master1
  asNumber: 64512
EOF


  • worker1 to master1

cat << EOF | calicoctl create -f -
apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: bgppeer-node-master1
spec:
  peerIP: 2019::100
  node: worker1
  asNumber: 64512
EOF
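With more nodes this peer-by-peer creation gets repetitive. A small generator sketch (gen_bgppeer is a hypothetical helper) whose output can be piped to calicoctl create -f -:

```shell
#!/bin/sh
# Emit a BGPPeer manifest that makes <node> peer with <peer-name> at <peer-ip>.
gen_bgppeer() {
  node="$1"; peer_name="$2"; peer_ip="$3"
  cat <<MANIFEST
apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: bgppeer-node-${peer_name}
spec:
  peerIP: ${peer_ip}
  node: ${node}
  asNumber: 64512
MANIFEST
}
# usage: gen_bgppeer master1 worker1 2019::101 | calicoctl create -f -
gen_bgppeer master1 worker1 2019::101
```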

root@worker1:~# calicoctl node status
Calico process is running.

IPv4 BGP status
+--------------+---------------+-------+----------+-------------+
| PEER ADDRESS |   PEER TYPE   | STATE |  SINCE   |    INFO     |
+--------------+---------------+-------+----------+-------------+
| 172.16.0.6   | node specific | up    | 12:30:30 | Established |
+--------------+---------------+-------+----------+-------------+

IPv6 BGP status
+--------------+---------------+-------+----------+-------------+
| PEER ADDRESS |   PEER TYPE   | STATE |  SINCE   |    INFO     |
+--------------+---------------+-------+----------+-------------+
| 2019::100    | node specific | up    | 12:30:08 | Established |
+--------------+---------------+-------+----------+-------------+


References
https://docs.projectcalico.org/v3.5/usage/configuration/bgp
https://docs.projectcalico.org/v3.5/reference/calicoctl/resources/bgpconfig 


Install calicoctl in kubernetes cluster

This document explains how to install calicoctl in a kubernetes cluster to check the peering status.

1. Download calicoctl to /usr/local/bin
cd /usr/local/bin
curl -O -L https://github.com/projectcalico/calicoctl/releases/download/v3.4.0/calicoctl
chmod +x calicoctl

2. Create calicoctl.cfg in the /etc/calico folder on each node

root@worker1:/usr/local/bin# cat /etc/calico/calicoctl.cfg
apiVersion: projectcalico.org/v3
kind: CalicoAPIConfig
metadata:
spec:
  datastoreType: "kubernetes"
  kubeconfig: "/root/.kube/config"   ## point to your kube config of the cluster

3. Check the status from each node

root@worker1:/usr/local/bin# calicoctl node status
Calico process is running.

IPv4 BGP status
+--------------+-------------------+-------+----------+-------------+
| PEER ADDRESS |     PEER TYPE     | STATE |  SINCE   |    INFO     |
+--------------+-------------------+-------+----------+-------------+
| 172.16.0.6   | node-to-node mesh | up    | 08:27:06 | Established |
+--------------+-------------------+-------+----------+-------------+


Wednesday, 27 February 2019

Set up internal IPV6 only kubernetes cluster on windows laptop

This blog will help you install an IPv6-only Kubernetes cluster using Ubuntu VMs on Windows or any other OS.

Prerequisites:
1. Oracle VM VirtualBox in Windows: Download from https://www.virtualbox.org/ You can use similar software in other OSes.
2. Add a new virtual host adapter as explained in Add new Adapter Interface if you don't want to use the default IP address range for the virtual interface
3. Create 2 VMs for 1 master and 1 worker node (you can add more worker nodes) with the below requirements
  • Version: Ubuntu (64-bit) [Ubuntu 16.04+]
  • Processor: 2 CPU
  • 4 GB RAM
  • Network: Select NAT as adapter 1 and a Host-only adapter as adapter 2
Address schema used

 master            -  2019::100
 worker1           -  2019::101
 pod-network-cidr  -  2019::1:192.168.0.0/112
 service-cidr      -  2019::10.0.0.0/112

Note! You can select your own address schema.

Prerequisite
Make sure you have an IPv6 default route
eg: ip -6 route add default dev enp0s8 via 2019::130
Note! The gateway does not have to be functional, i.e. 2019::130 can be an IP in the range of the interface CIDR that is not assigned to any node.

Installation steps:

1. Add an IPv6 address to the second adapter (enp0s8 here) of the master and worker nodes
eg on the master node
ifconfig enp0s8 inet6 add 2019::100/64 up

2. Update the /etc/hosts of all nodes with the IPv6 address and hostname of each node

2019::100 master1
2019::101 worker1


3. Update /etc/sysctl.conf and add
net.ipv6.conf.all.forwarding=1

4. reload sysctl
sysctl -p /etc/sysctl.conf

5. Turn off swap  using  
   sudo swapoff -a

6. Comment the line with swap in /etc/fstab

7. Install Docker
                     apt-get update && apt-get install -y docker.io
8.  Install Kubernetes

    On all nodes
     a.
      apt-get update && apt-get install -y apt-transport-https curl
      curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
      cat << EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF

     b. Install
    apt-get update && apt-get install -y kubelet kubeadm kubectl

      c. Edit the file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf so that the kubelet cgroup driver matches the one Docker reports; if Docker uses cgroupfs, change cgroup-driver=systemd to cgroup-driver=cgroupfs in the Environment line.


    On Master node

     a. Kubeadm init
            kubeadm init --pod-network-cidr=2019::1:192.168.0.0/112 --apiserver-advertise-address=2019::100 --service-cidr=2019::10.0.0.0/112
     b.
        mkdir -p $HOME/.kube  
        sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config 
       sudo chown $(id -u):$(id -g) $HOME/.kube/config

     c. Copy the join command generated as part of the kubeadm init output and apply it on the worker nodes to join the cluster

     d. check whether nodes are running
         kubectl get nodes 

Note! The STATUS will be NotReady until you install a CNI



    e.  Install CNI

To install version 3.5.4, which includes the fix for the IPv6 issue:
wget https://docs.projectcalico.org/v3.5/getting-started/kubernetes/installation/hosted/etcd.yaml
wget https://docs.projectcalico.org/v3.5/getting-started/kubernetes/installation/hosted/calico.yaml
1. Edit etcd.yaml: update the below attributes in the daemon set and replace the cluster IP with an IPv6 address in the service

 vi etcd.yaml

 - --advertise-client-urls=http://[$(CALICO_ETCD_IP)]:6666
 - --listen-client-urls=http://[::]:6666
 - --listen-peer-urls=http://[::]:6667

#update the service and change the IPv4 address to an IPv6 cluster IP for the calico-etcd service
clusterIP: 2019::a00:9f85

kubectl apply -f etcd.yaml

2. Edit calico.yaml and enable the below configurations
vi calico.yaml

#update etcd_endpoints with IPV6 address
etcd_endpoints: "http://[2019::100]:6666"


   # update the ipam type in calico-config
     "ipam": {
            "type": "host-local",
            "subnet": "usePodCidr"
          },

#update calico-node with below attributes
 - name: CALICO_IPV4POOL_IPIP
   value: "off"
 - name: CALICO_IPV6POOL_CIDR
   value: "2019::1:192.168.0.0/112"
 - name: FELIX_IPV6SUPPORT
   value: "true"
 - name: IP6
   value: "autodetect"

 kubectl apply -f calico.yaml


To install the older version v3.3
kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
wget https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml

Edit calico.yaml and enable below configurations

vi calico.yaml
 - name: CALICO_IPV4POOL_IPIP
   value: "off"
 - name: CALICO_IPV6POOL_CIDR
   value: "2019::1:192.168.0.0/112"
 - name: FELIX_IPV6SUPPORT
   value: "true"
 - name: IP6
   value: "autodetect"

 kubectl apply -f calico.yaml

For Calico v3.3, if you face the node-to-node peering issue described in https://github.com/projectcalico/calico/issues/2458, apply the workaround below. The fix is included in Calico version 3.5.4.

Workaround: delete the default BGPPeers and manually add IPv6 BGP Peers, as described in the post 'Calico: Delete default BGPPeers and manually add IPV6 BGP Peers' above.
Note! This workaround is not needed from Calico version 3.5.4 onwards.

  f. Check the status of the pods in all namespaces
         kubectl get pods --all-namespaces

g. Check the status of the nodes. Once the status is Ready, the cluster is ready to use
         kubectl get nodes




References

https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/ 
https://www.projectcalico.org/enable-ipv6-on-kubernetes-with-project-calico/ 
https://docs.projectcalico.org/v3.5/getting-started/kubernetes/
https://docs.projectcalico.org/v3.5/usage/ipv6

Wednesday, 2 January 2019

How to add new Host-only Ethernet Adapter Interface using Oracle Virtual Box

Add new Host only Ethernet Adapter

1. Open VirtualBox, click File and select Host Network Manager

2. Click Create and configure the address manually in the Adapter tab

3. Click the DHCP Server tab, select Enable Server and add the details

4. Click Apply. It will create a new adapter interface, which will be listed among the adapters in the Network settings of the VMs



Tuesday, 24 July 2018

Using MetalLB in Kubernetes for the external connectivity

MetalLB for External Connectivity

   This post explains how we can use MetalLB to attract traffic from a gateway router to a kubernetes cluster.

Prerequisites

  • kubernetes cluster ( I have 1 master + 2 node cluster)
  • Installed pod networking. Calico is used as CNI in my setup
  • Gateway router with BGP. 

Installation and configuration 

  1. Install MetalLB using the below yaml file.
kubectl apply -f https://raw.githubusercontent.com/google/metallb/v0.6.2/manifests/metallb.yaml

This will create one controller and two speaker pods. The controller is responsible for assigning external IP addresses from the address pool defined in the config map (step 3), and the speakers announce those addresses to the gateway router.

      2.  You can view the pods running successfully in namespace metallb-system

      3. Create a config map with BGP configuration details and address pool from which externalIP for the services will be picked automatically as shown below

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    peers:
    - peer-address: 172.16.0.11
      peer-asn: 65001
      my-asn: 64512
    address-pools:
    - name: default
      protocol: bgp
      addresses:
      - 80.11.12.0/30
In the above configuration, peer-address and peer-asn are the IP address and ASN of my gateway router.
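This setup uses BGP, but if no BGP-capable router is available, MetalLB also supports a layer2 mode. A minimal config sketch (the address range is a hypothetical example):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.16.0.240-172.16.0.250
```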

    4. After creating the config map, check in the speaker logs whether the BGP peer is established with the gateway router.

root@master1:~/metalLB# kubectl logs -n metallb-system speaker-fx4d9
......
{"caller":"bgp.go:62","event":"sessionUp","localASN":64512,"msg":"BGP session established","peer":"172.16.0.11:179","peerASN":65001,"ts":"2018-07-06T00:19:51.282711062Z"}

Create a service and test


  •      Create a service to check whether MetalLB assigns an externalIP. If you want a fixed externalIP for your service, set loadBalancerIP as in the example; note that the address must be in the range defined in the config map in step 3. If you omit it, MetalLB picks a free address from the pool automatically.

apiVersion: v1
kind: Service
metadata:
  name: metalservice
  namespace: default
spec:
  ports:
  - port: 5001
    protocol: TCP
    targetPort: 80
  selector:
    run: my-nginx
  loadBalancerIP: 80.11.12.1
  type: LoadBalancer

  •     Check whether the service got an external IP

root@master1:~/metalLB# kubectl get svc
NAME           TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
kubernetes     ClusterIP      10.96.0.1                 <none>        443/TCP          17h
metalservice   LoadBalancer   10.111.117.248   80.11.12.1    5001:30879/TCP   2h


In the speaker logs you can see the external IP is advertised 

{"caller":"bgp_controller.go:162","event":"updatedAdvertisements","ip":"80.11.12.1","msg":"making advertisements using BGP","numAds":1,"pool":"default","protocol":"bgp","service":"default/metalservice","ts":"2018-07-06T00:22:18.896880181Z"}


  • Verification from router

In the router you can see the externalIP address with the worker nodes as next hops

vm-011 ~ # gobgp global rib
   Network              Next Hop             AS_PATH              Age        Attrs
*> 80.11.12.1/32    172.16.0.10          64512                02:11:07   [{Origin: ?} {Med: 0} {LocalPref: }]
80.11.12.1/32      172.16.0.9           64512                02:11:07   [{Origin: ?} {Med: 0} {LocalPref:}]
*> 172.16.0.0/16        0.0.0.0                                   00:21:31   [{Origin: i} {Med: 0}]

Check whether you can access the service from the router

vm-011 ~ # wget http://80.11.12.1:5001
Connecting to 80.11.12.1:5001 (80.11.12.1:5001)
index.html           100% |**************************************************************************************************************************************************************|   612   0:00:00 ETA
vm-011 ~ #

