Wednesday, 26 February 2025

How to extend the Kubernetes API using Custom Resources and custom controllers?

Let's start by defining the Kubernetes API, Resource, Custom Resource (CR) and custom controller.

1. Kubernetes API
    The Kubernetes API allows querying and managing the state of API objects such as Pods, Namespaces, ConfigMaps and Events.

2. Resource
   A resource is an endpoint in the Kubernetes API that stores a collection of objects of a certain kind; for example, the pods resource contains a collection of Pod objects.

3. Custom resource
     A CR is an extension of the Kubernetes API that allows adding our own API objects, i.e. new types of resources beyond the built-in ones such as Pods and Services.
   Once the CR is installed, users can create and access its objects using kubectl.

A Custom Resource Definition (CRD) is the way to define a Custom Resource; it acts as a blueprint for the structure and behavior of the Custom Resource.

4. Custom controller
    A custom controller watches custom resources and reconciles them toward the desired state, automatically handling their lifecycle.


Common use cases for Custom Resources 

 Custom resources are used to extend Kubernetes with domain-specific configurations or functionality not covered by the built-in resources.

1. Defining application-specific configurations - e.g. a CR for managing database clusters.
2. Automating workflows - e.g. a CR for managing deployments with complex rules.
3. Implementing controllers and Operators - e.g. a CR for handling custom workloads like a machine learning job.
4. Storing persistent configuration - e.g. a CR for defining network policies or storage settings.

Example Custom resource 


Here is a simple Custom Resource Definition and Custom Resource for managing a deployment named "roshini" with some custom rules.

1. Custom Resource Definition (CRD)

This defines a new resource type called ManagedDeployment.
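
A minimal sketch of such a CRD (the API group apps.example.com and the spec fields replicas, autoscale, minReplicas and maxReplicas are illustrative choices):

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: manageddeployments.apps.example.com
spec:
  group: apps.example.com
  scope: Namespaced
  names:
    plural: manageddeployments
    singular: manageddeployment
    kind: ManagedDeployment
    shortNames:
      - mdep
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                replicas:      # initial number of replicas
                  type: integer
                autoscale:     # whether autoscaling is enabled
                  type: boolean
                minReplicas:
                  type: integer
                maxReplicas:
                  type: integer

Apply it with kubectl apply -f <crd-file>.yaml and Kubernetes starts serving the new manageddeployments endpoint.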


2. Custom Resource (CR)

This is an instance of ManagedDeployment for managing a deployment named "roshini" with complex rules.
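
A corresponding CR could look like this (using the illustrative apps.example.com group and fields from the CRD sketch above, matching the rules listed under "How It Works"):

apiVersion: apps.example.com/v1
kind: ManagedDeployment
metadata:
  name: roshini
spec:
  replicas: 3        # start with 3 replicas
  autoscale: true    # enable autoscaling
  minReplicas: 2     # can scale down to 2
  maxReplicas: 10    # can scale up to 10

Once applied with kubectl apply -f, it can be listed with kubectl get manageddeployments (or the short name mdep).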


 

How It Works:

  • The CRD registers ManagedDeployment as a new resource type in Kubernetes.
  • The CR (roshini) specifies:
    • Starts with 3 replicas
    • Enables autoscaling
    • Can scale between 2 and 10 replicas
  • A Custom Controller (if implemented) would read this CR and automatically adjust the deployment based on these rules.

This setup is useful for managing applications dynamically with Kubernetes Operators! 


Wednesday, 27 January 2021

How to install an IPv4/IPv6 Kubernetes Dual Stack Cluster?

 

   Steps to install Kubernetes Dual Stack Cluster 

The simple steps below set up a dual-stack cluster using Ubuntu VMs. You can set it up on any host OS; I use a Windows machine.

1 master + 2 node deployment

 Prerequisites

  • Oracle VM Virtual Box : Download from https://www.virtualbox.org/
  • Turn on hardware virtualization support in BIOS settings. (Hyper-V in Windows 10)

Create VMs

Create 2 VMs (1 master and 1 worker node) with the below requirements
  • Version: Ubuntu (64-bit)
  • Processor: 2 CPU
  • 4 GB RAM
  • Network: Select NAT as adapter 1 and Host-only adapter as adapter 2

Initial setup on all nodes

  1. Turn off swap using  sudo swapoff -a
             Comment out the line with swap in /etc/fstab
              nano /etc/fstab  
  2. Add IPv6 address on all nodes
            E.g.  master# ifconfig enp0s8 inet6 add 2021::100/64 up
  3. Add the IP addresses (both IPv4 and IPv6) and hostnames of all nodes in /etc/hosts.
  4. Run the below command on all nodes
    sudo sysctl -w net.ipv6.conf.all.forwarding=1

      5. Install docker

            apt-get update && apt-get install -y docker.io

      6. Install kubelet, kubeadm and kubectl

          apt-get update && apt-get install -y apt-transport-https curl
          curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
          cat << EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF

          apt-get update && apt-get install -y kubelet kubeadm kubectl

    7. Update the cgroup-driver on all nodes
            nano /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

          Edit the 10-kubeadm.conf file and add the below variable after the last Environment line
           Environment="cgroup-driver=systemd/cgroup-driver=cgroupfs"

       #Also add the below configuration on all worker nodes so that pods can be reached from the master
        Environment="KUBELET_EXTRA_ARGS=--node-ip=<worker IP address>"

   8. Create kubeadm-config.yaml

    #Example config file
        apiVersion: kubeadm.k8s.io/v1beta2
        kind: ClusterConfiguration
        featureGates:
          IPv6DualStack: true
        apiServer:
          extraArgs:
            advertise-address: <API Address>
        networking:
          dnsDomain: cluster.local
          podSubnet: "<IPV4 range>/16,<IPV6 range>/48"
          serviceSubnet: "<IPV4 range>/16,<IPV6 range>/112"
        ---
        apiVersion: kubeadm.k8s.io/v1beta2
        kind: InitConfiguration
        localAPIEndpoint:
          advertiseAddress: <API Address>
          bindPort: 6443

     9. Install K8s using the created config file
            kubeadm init --config /home/kubeadm-config.yaml

   10. Install Calico

       Edit the Calico configuration and enable dual stack as explained under the section 'Enable dualstack' in the Calico documentation (a sketch of the typical settings is shown after this list), then apply:

        kubectl apply -f calico.yaml

    11. Deploy your application Pods and check whether they have received both IPv4 and IPv6 addresses :)
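
For step 10, a sketch of the calico-node environment variables that are typically enabled for dual stack (these mirror the IPv6 settings shown in the IPv6-only cluster post below; the IPv6 pool CIDR is a placeholder, and the authoritative steps are in the Calico 'Enable dualstack' documentation):

        - name: IP6
          value: "autodetect"
        - name: FELIX_IPV6SUPPORT
          value: "true"
        - name: CALICO_IPV6POOL_CIDR
          value: "<IPV6 range>/48"

As a quick check for step 11, a dual-stack pod should report one IPv4 and one IPv6 address under status.podIPs (the pod name is a placeholder):

        kubectl get pods -o wide
        kubectl get pod <pod-name> -o jsonpath='{.status.podIPs}'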


Wednesday, 20 May 2020

Rename Interface Name in Ubuntu 18.04


The below steps allow you to rename a network interface in Ubuntu 18.04.

1. Check the MAC address of the interface that needs to be renamed using
      ip a
  2. Create the below file
      nano /etc/udev/rules.d/70-persistent-net.rules
3. Add the below line to the file, providing the MAC address of the interface and the new interface name
 SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="MAC_ADDRESS", NAME="NEW_INTERFACENAME"
4. Change the interface name in /etc/netplan/<filename>.yaml

          network:
            ethernets:
              NEW_INTERFACENAME:
5. Edit the GRUB file and add the below value
        nano /etc/default/grub
       GRUB_CMDLINE_LINUX="net.ifnames=1 biosdevname=0"
6.  Update grub
update-grub
7.  Reboot VM
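
As a worked example, assuming the interface's MAC address is 08:00:27:4e:66:a1 and the desired new name is mgmt0 (both values are placeholders for your own):

 # /etc/udev/rules.d/70-persistent-net.rules
 SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="08:00:27:4e:66:a1", NAME="mgmt0"

 # /etc/netplan/<filename>.yaml  (keep whatever addressing the interface already had; dhcp4 here is just an example)
 network:
   ethernets:
     mgmt0:
       dhcp4: true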

Monday, 11 March 2019

Calico: Delete default BGPPeers and manually add IPV6 BGP Peers

This post shows how to remove the default BGP peers and add node-specific peers.

Prerequisite:  install-calicoctl-in-kubernetes-cluster.html 

1. Disable full node to mesh peering.

cat << EOF | calicoctl create -f -
apiVersion: projectcalico.org/v3
kind: BGPConfiguration
metadata:
   name: default
spec:
   logSeverityScreen: Info
   nodeToNodeMeshEnabled: false
   asNumber: 64512
EOF
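
To confirm that the node-to-node mesh is disabled, the configuration can be read back (assuming calicoctl is configured as in the calicoctl install post below):

calicoctl get bgpConfiguration default -o yaml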

2. Add Node specific BGP Peer


  • master1 to worker1

cat << EOF | calicoctl create -f -
apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: bgppeer-node-worker1
spec:
  peerIP: 2019::101
  node: master1
  asNumber: 64512
EOF


  • worker1 to master1

cat << EOF | calicoctl create -f -
apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: bgppeer-node-master1
spec:
  peerIP: 2019::100
  node: worker1
  asNumber: 64512
EOF
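
The configured peers can be listed before checking the BGP session status shown below:

calicoctl get bgpPeer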

root@worker1:~# calicoctl node status
Calico process is running.

IPv4 BGP status
+--------------+---------------+-------+----------+-------------+
| PEER ADDRESS |   PEER TYPE   | STATE |  SINCE   |    INFO     |
+--------------+---------------+-------+----------+-------------+
| 172.16.0.6   | node specific | up    | 12:30:30 | Established |
+--------------+---------------+-------+----------+-------------+

IPv6 BGP status
+--------------+---------------+-------+----------+-------------+
| PEER ADDRESS |   PEER TYPE   | STATE |  SINCE   |    INFO     |
+--------------+---------------+-------+----------+-------------+
| 2019::100    | node specific | up    | 12:30:08 | Established |
+--------------+---------------+-------+----------+-------------+


References
https://docs.projectcalico.org/v3.5/usage/configuration/bgp
https://docs.projectcalico.org/v3.5/reference/calicoctl/resources/bgpconfig 


Install calicoctl in kubernetes cluster

This document explains how to install calicoctl in a kubernetes cluster to check the peering status.

1. Download calicoctl to /usr/local/bin
cd /usr/local/bin
curl -O -L https://github.com/projectcalico/calicoctl/releases/download/v3.4.0/calicoctl
chmod +x calicoctl

2. Create calicoctl.cfg in the /etc/calico folder on each node

root@worker1:/usr/local/bin# cat /etc/calico/calicoctl.cfg
apiVersion: projectcalico.org/v3
kind: CalicoAPIConfig
metadata:
spec:
  datastoreType: "kubernetes"
  kubeconfig: "/root/.kube/config"   ## point to your kube config of the cluster

3. Check the status from each node

root@worker1:/usr/local/bin# calicoctl node status
Calico process is running.

IPv4 BGP status
+--------------+-------------------+-------+----------+-------------+
| PEER ADDRESS |     PEER TYPE     | STATE |  SINCE   |    INFO     |
+--------------+-------------------+-------+----------+-------------+
| 172.16.0.6   | node-to-node mesh | up    | 08:27:06 | Established |
+--------------+-------------------+-------+----------+-------------+


Wednesday, 27 February 2019

Set up an internal IPv6-only Kubernetes cluster on a Windows laptop

This blog will help you install an IPv6-only Kubernetes cluster using Ubuntu VMs on Windows or any other OS.

Prerequisites:
1. Oracle VM VirtualBox on Windows: Download from https://www.virtualbox.org/ You can use similar software on other OSes.
2. Add a new virtual host adapter as explained in Add new Adapter Interface if you don't want to use the default IP address range for the virtual interface.
3. Create 2 VMs for 1 master and 1 worker node (you can add more worker nodes) with the below requirements
  • Version: Ubuntu (64-bit) [Ubuntu 16.04 +]
  • Processor: 2 CPU
  • 4 GB RAM
  • Network: Select NAT as adapter 1 and Host-only adapter as adapter 2
Address schemas used

 master                   -        2019::100 
 worker1                -       2019::101 
pod-network-cidr  -        2019::1:192.168.0.0/112 
service-cidr           -        2019::10.0.0.0/112
Note! You can select your own address schema.

Prerequisite
Make sure you have a default IPv6 route
e.g.: ip -6 route add default dev enp0s8 via 2019::130
Note! The gateway does not need to be functional, i.e. 2019::130 can be an IP in the range of the interface CIDR that is not assigned to any node.

Installation steps:

1. Add an IPv6 address to adapter 2 (enp0s8) of the master and worker nodes
e.g. on the master node
ifconfig enp0s8 inet6 add 2019::100/64 up

2. Update /etc/hosts on all nodes with the IPv6 address and hostname of each node

2019::100 master1
2019::101 worker1


3. Update /etc/sysctl.conf and add
net.ipv6.conf.all.forwarding=1

4. reload sysctl
sysctl -p /etc/sysctl.conf

5. Turn off swap  using  
   sudo swapoff -a

6. Comment the line with swap in /etc/fstab

7.  Install docker
                     apt-get update && apt-get install -y docker.io
8.  Install Kubernetes

    On all nodes
     a.
      apt-get update && apt-get install -y apt-transport-https curl
      curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
      cat << EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF

     b. Install
    apt-get update && apt-get install -y kubelet kubeadm kubectl

       c. Edit the file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and add the cgroup-driver Environment variable
   
           Environment="cgroup-driver=systemd/cgroup-driver=cgroupfs" 


    On Master node

     a. Kubeadm init
            kubeadm init --pod-network-cidr=2019::1:192.168.0.0/112 --apiserver-advertise-address=2019::100 --service-cidr=2019::10.0.0.0/112
     b.
        mkdir -p $HOME/.kube  
        sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config 
       sudo chown $(id -u):$(id -g) $HOME/.kube/config

     c. Copy the join command generated as part of the kubeadm init output and run it on the worker nodes to join the cluster

     d. check whether nodes are running
         kubectl get nodes 

Note! The STATUS will be NOT READY until you install a CNI



    e.  Install CNI

To install version 3.5.4, which includes the fix for the IPv6 issue:
wget https://docs.projectcalico.org/v3.5/getting-started/kubernetes/installation/hosted/etcd.yaml
wget https://docs.projectcalico.org/v3.5/getting-started/kubernetes/installation/hosted/calico.yaml
1. Edit etcd.yaml: update the below attributes in the DaemonSet and replace the cluster IP with an IPv6 address in the calico-etcd service

 vi etcd.yaml

 - --advertise-client-urls=http://[$(CALICO_ETCD_IP)]:6666
 - --listen-client-urls=http://[::]:6666
 - --listen-peer-urls=http://[::]:6667

#update the service and change the IPv4 address to an IPv6 cluster IP for the calico-etcd service
clusterIP: 2019::a00:9f85

kubectl apply -f etcd.yaml

2. Edit calico.yaml and enable the below configurations
vi calico.yaml

#update etcd_endpoints with IPV6 address
etcd_endpoints: "http://[2019::100]:6666"


   # update the ipam type in calico-config
     "ipam": {
            "type": "host-local",
            "subnet": "usePodCidr"
          },

#update calico-node with below attributes
 - name: CALICO_IPV4POOL_IPIP
   value: "off"
 - name: CALICO_IPV6POOL_CIDR
   value: "2019::1:192.168.0.0/112"
 - name: FELIX_IPV6SUPPORT
   value: "true"
 - name: IP6
   value: "autodetect"

 kubectl apply -f calico.yaml


To install old version V3.3
kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
wget https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml

Edit calico.yaml and enable below configurations

vi calico.yaml
 - name: CALICO_IPV4POOL_IPIP
   value: "off"
 - name: CALICO_IPV6POOL_CIDR
   value: "2019::1:192.168.0.0/112"
 - name: FELIX_IPV6SUPPORT
   value: "true"
 - name: IP6
   value: "autodetect"

 kubectl apply -f calico.yaml

You should apply the fix for this issue in Calico v3.3 if the nodes fail to peer with the same symptoms as https://github.com/projectcalico/calico/issues/2458. The fix is available in Calico version 3.5.4.

Workaround: Delete the default BGPPeers and manually add IPv6 BGP peers, following the workaround post above.
Note! This workaround is not needed from Calico version 3.5.4.

  f. Check the status of pods in all namespaces

g. Check the status of nodes. Once the status is Ready, the cluster is ready to use.
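
For example, the commands for these two checks:

kubectl get pods --all-namespaces -o wide
kubectl get nodes -o wide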




References

https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/ 
https://www.projectcalico.org/enable-ipv6-on-kubernetes-with-project-calico/ 
https://docs.projectcalico.org/v3.5/getting-started/kubernetes/
https://docs.projectcalico.org/v3.5/usage/ipv6

Wednesday, 2 January 2019

How to add new Host-only Ethernet Adapter Interface using Oracle Virtual Box

Add new Host only Ethernet Adapter

1. Open VirtualBox, click File, and select Host Network Manager.

2. Click Create and configure the address manually in the Adapter tab.


3. Click the DHCP Server tab, select Enable server, and add the details.
4. Click Apply. A new adapter interface is created and will be listed among the adapters in the Network settings of the VMs.