I've successfully changed a node label in my Kubernetes cluster (created using kubeadm) using both kubectl replace and kubectl apply.
Required: If your node configuration was changed manually with an imperative command like kubectl label, you first need to fix the last-applied-configuration annotation using the following command (replace node2 with your node name):
kubectl get node node2 -o yaml | kubectl apply -f -
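kubectl also provides dedicated subcommands for working with this annotation; for example, you can print the node's current last-applied-configuration directly:
kubectl apply view-last-applied node/node2 -o yaml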
Note: This works the same way with all types of Kubernetes objects (with slightly different consequences, so always check the results).
Note 2: The --export argument for kubectl get is deprecated, and the command works well without it, but if you do use it, the last-applied-configuration annotation becomes much shorter and easier to read.
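For example (a sketch; since --export is deprecated, it may be removed from newer kubectl versions):
kubectl get node node2 -o yaml --export | kubectl apply -f -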
Without applying the existing configuration first, the next kubectl apply command will ignore all fields that are not present in the last-applied-configuration annotation.
The following example illustrates that behavior:
kubectl get node node2 -o yaml | grep node-role
{"apiVersion":"v1","kind":"Node","metadata":{"annotations":{"flannel.alpha.coreos.com/backend-data":"{\"VtepMAC\":\"46:c6:d1:f0:6c:0a\"}","flannel.alpha.coreos.com/backend-type":"vxlan","flannel.alpha.coreos.com/kube-subnet-manager":"true","flannel.alpha.coreos.com/public-ip":"10.156.0.11","kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"creationTimestamp":null,
"labels":{
"beta.kubernetes.io/arch":"amd64",
"beta.kubernetes.io/os":"linux",
"kubernetes.io/arch":"amd64",
"kubernetes.io/hostname":"node2",
"kubernetes.io/os":"linux",
"node-role.kubernetes.io/worker":""}, # <--- important line: only worker label is present
"name":"node2","selfLink":"/api/v1/nodes/node2"},"spec":{"podCIDR":"10.244.2.0/24"},"status":{"daemonEndpoints":{"kubeletEndpoint":{"Port":0}},"nodeInfo":{"architecture":"","bootID":"","containerRuntimeVersion":"","kernelVersion":"","kubeProxyVersion":"","kubeletVersion":"","machineID":"","operatingSystem":"","osImage":"","systemUUID":""}}}
node-role.kubernetes.io/santa: ""
node-role.kubernetes.io/worker: ""
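Note that the live object carries both the santa and worker labels, while the annotation records only worker. A more direct way to see this contrast (a sketch that requires jq; replace node2 with your node name):
# labels on the live object
kubectl get node node2 -o json | jq '.metadata.labels'
# labels recorded in the last-applied-configuration annotation
kubectl get node node2 -o json | jq -r '.metadata.annotations["kubectl.kubernetes.io/last-applied-configuration"]' | jq '.metadata.labels'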
Let's check what happens with the node-role.kubernetes.io/santa label if I try to replace worker with infra and remove santa (worker is present in the annotation, santa is not):
# kubectl diff compares the current online configuration with the configuration as it would be if applied
kubectl get node node2 -o yaml | sed 's@node-role.kubernetes.io/worker: ""@node-role.kubernetes.io/infra: ""@' | sed 's@node-role.kubernetes.io/santa: ""@@'| kubectl diff -f -
diff -u -N /tmp/LIVE-380689040/v1.Node..node2 /tmp/MERGED-682760879/v1.Node..node2
--- /tmp/LIVE-380689040/v1.Node..node2 2020-04-08 17:20:18.108809972 +0000
+++ /tmp/MERGED-682760879/v1.Node..node2 2020-04-08 17:20:18.120809972 +0000
@@ -18,8 +18,8 @@
kubernetes.io/arch: amd64
kubernetes.io/hostname: node2
kubernetes.io/os: linux
+ node-role.kubernetes.io/infra: "" # <-- created as desired
node-role.kubernetes.io/santa: "" # <-- ignored, because the label isn't present in the last-applied-configuration annotation
- node-role.kubernetes.io/worker: "" # <-- removed as desired
name: node2
resourceVersion: "60973814"
selfLink: /api/v1/nodes/node2
exit status 1
After fixing the annotation (by running kubectl get node node2 -o yaml | kubectl apply -f - ), kubectl apply replaces and removes labels as expected (the exit status 1 from kubectl diff above just means differences were found, not that an error occurred):
kubectl get node node2 -o yaml | sed 's@node-role.kubernetes.io/worker: ""@node-role.kubernetes.io/infra: ""@' | sed 's@node-role.kubernetes.io/santa: ""@@'| kubectl diff -f -
diff -u -N /tmp/LIVE-107488917/v1.Node..node2 /tmp/MERGED-924858096/v1.Node..node2
--- /tmp/LIVE-107488917/v1.Node..node2 2020-04-08 18:01:55.776699954 +0000
+++ /tmp/MERGED-924858096/v1.Node..node2 2020-04-08 18:01:55.792699954 +0000
@@ -18,8 +18,7 @@
kubernetes.io/arch: amd64
kubernetes.io/hostname: node2
kubernetes.io/os: linux
- node-role.kubernetes.io/santa: "" # <-- removed as desired
- node-role.kubernetes.io/worker: "" # <-- removed as desired, effectively replaced by the following label
+ node-role.kubernetes.io/infra: "" # <-- created as desired
name: node2
resourceVersion: "60978298"
selfLink: /api/v1/nodes/node2
exit status 1
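Once the diff output looks right, the same pipeline with kubectl apply instead of kubectl diff performs the actual change:
kubectl get node node2 -o yaml | sed 's@node-role.kubernetes.io/worker: ""@node-role.kubernetes.io/infra: ""@' | sed 's@node-role.kubernetes.io/santa: ""@@' | kubectl apply -f -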
Here are a few more examples:
# Check the original label (the last filter removes the last-applied-configuration annotation line)
$ kubectl get node node2 -o yaml | grep node-role | grep -v apiVersion
node-role.kubernetes.io/infra: ""
# Replace the label "infra" with "worker" using kubectl replace syntax
$ kubectl get node node2 -o yaml | sed 's@node-role.kubernetes.io/infra: ""@node-role.kubernetes.io/worker: ""@' | kubectl replace -f -
node/node2 replaced
# check the new state of the label
$ kubectl get node node2 -o yaml | grep node-role | grep -v apiVersion
node-role.kubernetes.io/worker: ""
# label replaced -------^^^^^^
# Replace the label "worker" back to "infra" using kubectl apply syntax
$ kubectl get node node2 -o yaml | sed 's@node-role.kubernetes.io/worker: ""@node-role.kubernetes.io/infra: ""@' | kubectl apply -f -
node/node2 configured
# check the new state of the label
$ kubectl get node node2 -o yaml | grep node-role | grep -v apiVersion
node-role.kubernetes.io/infra: ""
# label replaced -------^^^^^
# Remove the label from the node (for demonstration purposes)
$ kubectl get node node2 -o yaml | sed 's@node-role.kubernetes.io/infra: ""@@' | kubectl apply -f -
node/node2 configured
# check the new state of the label
$ kubectl get node node2 -o yaml | grep node-role | grep -v apiVersion
# empty output
# label "infra" has been removed
You may see the following warning the first time you use kubectl apply -f on a resource that was created using imperative commands like kubectl create or kubectl expose:
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
In this case, the last-applied-configuration annotation will be created with the content of the file used in the kubectl apply -f filename.yaml command, and it may not contain all the parameters and labels that are present in the live object.
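To avoid this situation from the start, you can create resources with --save-config so the annotation is written immediately, or set the annotation on an existing object from a manifest file (the deployment and file names here are just illustrations):
# create the object and record its full configuration in the annotation
kubectl create deployment my-app --image=nginx --save-config
# or create/update the annotation on an existing object from a manifest
kubectl apply set-last-applied -f my-app.yaml --create-annotation=true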