Building a Highly Available Kubernetes Cluster – Tripling the Control Plane


▼ For the kube-vip setup itself, see the post below

Setting up kube-vip for Kubernetes Control-plane HA – human_log (showinfo8.com)

The post above shows how to set up the VIP, but it assumes a Kubernetes cluster that is already running. This time, we will prepare the kubeadm configuration file up front and build the cluster from it.

Steps

  • kubeadm init – run it again, this time with the VIP included
    • execute the command with the --upload-certs flag
  • move the generated kube-vip manifest file into /etc/kubernetes/manifests/
  • then join the remaining control-plane nodes

We will proceed in these three steps.

First, we will write kubeadm-init.yml.

  • Configure the file as shown below.
  • In certSANs, list the IP of every control-plane node that will be accessed, plus the VIP.
  • controlPlaneEndpoint must be set, and it should point to the VIP.
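
If you would rather not write the file from scratch, kubeadm can print its defaults as a starting point; this is a convenience, not a required step. Edit the output down to something like the configuration below.

kubeadm config print init-defaults > kubeadm-init.yml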
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
clusterName: kubernetes
etcd:
  local:
    dataDir: /var/lib/etcd
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16
kubernetesVersion: v1.23.17
certificatesDir: /etc/kubernetes/pki
controlPlaneEndpoint: "192.168.0.20:6443"   # the VIP; clients and joining nodes reach the API server through this address
apiServer:
  extraArgs:
    authorization-mode: Node,RBAC
  certSANs:   # every control-plane node IP plus the VIP
    - "127.0.0.1"
    - "192.168.0.21"
    - "192.168.0.22"
    - "192.168.0.23"
    - "192.168.0.20"
controllerManager: {}
scheduler: {}

After that, run kubeadm init with the command below.

sudo kubeadm init --config=./kubeadm-init.yml --upload-certs

This creates the /etc/kubernetes/manifests/ directory on the node where init ran; once you move the kube-vip file into it, the kubelet creates it as a static pod.

( ※ Reference: Setting up kube-vip for Kubernetes Control-plane HA – human_log (showinfo8.com) )
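
A minimal sketch of the copy step, assuming the kube-vip manifest was generated as kube-vip.yml in the current directory per the post above (adjust the source path to wherever yours was written):

sudo cp ./kube-vip.yml /etc/kubernetes/manifests/kube-vip.yml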

root@ubuntu:~# ls -l /etc/kubernetes/manifests/
total 20
-rw------- 1 root root 2270 May  4 07:40 etcd.yaml
-rw------- 1 root root 4015 May  4 07:40 kube-apiserver.yaml
-rw------- 1 root root 3521 May  4 07:40 kube-controller-manager.yaml
-rw------- 1 root root 1441 May  4 07:40 kube-scheduler.yaml
-rw-r--r-- 1 root root 1454 May  4 07:40 kube-vip.yml

As shown below, you can confirm that the first control-plane node has come up normally.

[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 192.168.0.200:6443 --token qt949j.5t8cdmxstjeann38 \
        --discovery-token-ca-cert-hash sha256:9401439393b2ad6da3ae5120eb77bd40953bd33feb586de2028d59a94f407bda \
        --control-plane --certificate-key 9a6376814a696f6d1de5a736f9b99adf865042ffe33ea58f810ee3282d47291c

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.0.200:6443 --token qt949j.5t8cdmxstjeann38 \
        --discovery-token-ca-cert-hash sha256:9401439393b2ad6da3ae5120eb77bd40953bd33feb586de2028d59a94f407bda
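
Before joining the other nodes, it is worth checking that the kube-vip static pod is running and that the VIP answers. A quick check, assuming 192.168.0.20 is the VIP set as controlPlaneEndpoint above:

export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl -n kube-system get pods | grep kube-vip   # the static pod should be Running
ping -c 3 192.168.0.20                            # the VIP should respond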

Now let's join the second and third nodes as well. Run the following on each additional VM or server you have prepared.

kubeadm join 192.168.0.200:6443 --token qt949j.5t8cdmxstjeann38 \
        --discovery-token-ca-cert-hash sha256:9401439393b2ad6da3ae5120eb77bd40953bd33feb586de2028d59a94f407bda \
        --control-plane --certificate-key 9a6376814a696f6d1de5a736f9b99adf865042ffe33ea58f810ee3282d47291c

The join proceeds as shown below.

[certs] Generating "front-proxy-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
The 'update-status' phase is deprecated and will be removed in a future release. Currently it performs no operation
[mark-control-plane] Marking the node worker1 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node worker1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

        mkdir -p $HOME/.kube
        sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
        sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.
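
For the VIP to fail over, each control-plane node typically runs its own kube-vip static pod. If you followed the kube-vip post, place the manifest on the newly joined nodes as well; a sketch, assuming the same file name as on the first node:

sudo cp ./kube-vip.yml /etc/kubernetes/manifests/kube-vip.yml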


Once this is done, the nodes are listed as follows.

root@ubuntu:~#   mkdir -p $HOME/.kube
root@ubuntu:~#   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
root@ubuntu:~#   sudo chown $(id -u):$(id -g) $HOME/.kube/config
root@ubuntu:~# kubectl get node
NAME      STATUS     ROLES                  AGE     VERSION
master1   NotReady   control-plane,master   8m32s   v1.23.17
worker1   NotReady   control-plane,master   3m40s   v1.23.17
worker2   NotReady   control-plane,master   12s     v1.23.17
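
All three nodes report NotReady because no pod network add-on has been deployed yet (the init output above points to the add-ons page). Since podSubnet is 10.244.0.0/16, Flannel's default, Flannel is one natural choice. A sketch per the flannel README, assuming you go with Flannel; any CNI that matches the podSubnet works:

kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml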



That wraps up the cluster build. The approach shown here is relatively simple, but it meaningfully improves control-plane availability. A control plane tripled with kube-vip is a key ingredient of stable Kubernetes operations, and securing the availability your workloads demand is essential to providing uninterrupted service.

An HA setup like this makes the cluster more resilient, letting it recover quickly when unexpected failures occur. In enterprise environments this kind of high availability matters, and tools such as kube-vip make it much easier to build a stable cluster.

That's all for this post. Thank you!

I'll be back next week with a new topic.