008. The Kubernetes scheduling system: label selectors
2021-04-29 01:29
1 Introduction to Kubernetes scheduling

Besides letting the Kubernetes scheduler pick a node for each pod automatically (by default it checks that a node has enough resources and tries to keep the load evenly spread), there are cases where we want more control over where a pod lands. Some machines in the cluster have better hardware (SSDs, more memory), and we want core services such as databases to run on them. Two services may exchange a lot of network traffic, so we would like them on the same machine, or at least in the same data center. Some special applications should run only on nodes we designate, and sometimes we want an application to run one copy on every node. For these different scenarios Kubernetes ships with several scheduling mechanisms: label selectors (nodeSelector), DaemonSets, node affinity, pod affinity, and taints and tolerations. This post covers the label selector.
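As a quick preview of the mechanism this post demonstrates, here is a minimal sketch of binding a pod to a labelled node with nodeSelector; the label disktype=ssd, the node-name placeholder, and the pod name are hypothetical and not taken from the environment used below.

# Assumption: a node has been labelled beforehand, e.g.
#   kubectl label node <node-name> disktype=ssd
apiVersion: v1
kind: Pod
metadata:
  name: nginx-on-ssd              # hypothetical pod name
spec:
  nodeSelector:
    disktype: ssd                 # only nodes carrying disktype=ssd are considered
  containers:
  - name: nginx
    image: nginx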
2 Label selectors

This is the approach we use most often: put a specific label on a node, and when starting the pod, use nodeSelector to name the label of the node it should be scheduled onto.
2.1 nodeSelector and node labels

[root@docker-server1 ingress]# kubectl get nodes
NAME              STATUS   ROLES    AGE     VERSION
192.168.132.131   Ready    master   3d18h   v1.17.0
192.168.132.132   Ready    

[root@docker-server1 ingress]# kubectl get pods -n ingress-nginx -o wide
NAME                                        READY   STATUS    RESTARTS   AGE   IP                NODE              NOMINATED NODE   READINESS GATES
nginx-ingress-controller-5c6985f9cc-wkngv   1/1     Running   0          22h   192.168.132.132   192.168.132.132   

[root@docker-server1 ingress]# kubectl get nodes --show-labels
NAME              STATUS   ROLES    AGE     VERSION   LABELS
192.168.132.131   Ready    master   3d18h   v1.17.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=192.168.132.131,kubernetes.io/os=linux,node-role.kubernetes.io/master=
192.168.132.132   Ready    

The labels Kubernetes puts on every node by default describe the operating system, the node name (hostname), and the CPU architecture.

[root@docker-server1 ingress]# vi nginx-controller.yaml
nodeSelector:
        kubernetes.io/os: linux

Under the current conditions this selector does not constrain anything, because every node in the cluster carries kubernetes.io/os=linux.

More than one label can be listed under nodeSelector, and they are combined with a logical AND: a node is eligible only if it carries all of them. If no node satisfies every label, the pod stays Pending.
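For example, a fragment in the same style as above, with two hypothetical labels; a node must carry both disktype=ssd and zone=wuhan for the pod to be schedulable:

nodeSelector:
        disktype: ssd           # hypothetical label
        zone: wuhan             # hypothetical label; both must be present on the same node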
2.2 Adding a unique label
Edit nginx-controller.yaml again and add the built-in hostname label, which is different on every node and therefore pins the pod to exactly one node:

nodeSelector:
        kubernetes.io/os: linux
        kubernetes.io/hostname: 192.168.132.133    # this label is unique to a single node

[root@docker-server1 ingress]# kubectl apply -f nginx-controller.yaml
namespace/ingress-nginx unchanged
configmap/nginx-configuration unchanged
configmap/tcp-services unchanged
configmap/udp-services unchanged
serviceaccount/nginx-ingress-serviceaccount unchanged
clusterrole.rbac.authorization.k8s.io/nginx-ingress-clusterrole unchanged
role.rbac.authorization.k8s.io/nginx-ingress-role unchanged
rolebinding.rbac.authorization.k8s.io/nginx-ingress-role-nisa-binding unchanged
clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress-clusterrole-nisa-binding unchanged
deployment.apps/nginx-ingress-controller configured
limitrange/ingress-nginx configured

[root@docker-server1 ingress]# kubectl get pods -n ingress-nginx -o wide
NAME                                        READY   STATUS              RESTARTS   AGE   IP                NODE              NOMINATED NODE   READINESS GATES
nginx-ingress-controller-5c6985f9cc-wkngv   1/1     Running             0          22h   192.168.132.132   192.168.132.132   

[root@docker-server1 ingress]# kubectl get pods -n ingress-nginx -o wide
NAME                                        READY   STATUS    RESTARTS   AGE    IP                NODE              NOMINATED NODE   READINESS GATES
nginx-ingress-controller-5cffd956bf-dm9qf   1/1     Running   0          4m3s   192.168.132.133   192.168.132.133   

After the rollout finishes, the controller pod is running on 192.168.132.133.
2.3 No matching label

busybox-deployment.yaml, whose nodeSelector (aaa: bbb) matches no label currently present on any node:
apiVersion: apps/v1
kind: Deployment
metadata:
    name: busybox
    namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      name: busybox
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 10%
      maxUnavailable: 0
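      # with maxUnavailable: 0, an existing replica is kept running until its
      # replacement is ready, so an old pod and a new pod can appear side by side during a rollout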
  template:
    metadata:
      labels:
        name: busybox
    spec:
      nodeSelector:
        aaa: bbb
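        # no node currently carries the label aaa=bbb, so this pod will not be schedulable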
      containers:
      - name: busybox
        image: busybox
        command:
          - /bin/sh
          - -c
          - "sleep 3600"
[root@docker-server1 deployment]# kubectl apply -f busybox-deployment.yaml
deployment.apps/busybox configured

[root@docker-server1 deployment]# kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
busybox-546555c84-2psbb             1/1     Running   13         24h
busybox-674bd96f74-m4fst            0/1     Pending   0          13s

[root@docker-server1 deployment]# kubectl describe pods busybox-674bd96f74-m4fst
Events:
  Type     Reason            Age        From               Message
  ----     ------            ----       ----               -------
  Warning  FailedScheduling  

The new pod stays Pending because no node carries a matching label.
2.4 Labelling the node

[root@docker-server1 deployment]# kubectl label node 192.168.132.132 aaa=bbb
node/192.168.132.132 labeled

[root@docker-server1 deployment]# kubectl get pods
NAME                                READY   STATUS              RESTARTS   AGE
busybox-546555c84-2psbb             1/1     Running             13         24h
busybox-674bd96f74-m4fst            0/1     ContainerCreating   0          7m26s

Once the label exists, the Pending pod is scheduled onto 192.168.132.132 and starts creating its container. The earlier scheduling failure is still visible in its events:

[root@docker-server1 deployment]# kubectl describe pods busybox-674bd96f74-m4fst
Events:
  Type     Reason            Age        From                      Message
  ----     ------            ----       ----                      -------
  Warning  FailedScheduling  
2.5 Removing the label

[root@docker-server1 deployment]# kubectl label node 192.168.132.132 aaa-

After the label is removed, the pod that is already running keeps running: nodeSelector is only evaluated when a pod is scheduled, not afterwards.

[root@docker-server1 deployment]# kubectl get pods -o wide
NAME                                READY   STATUS    RESTARTS   AGE     IP            NODE              NOMINATED NODE   READINESS GATES
busybox-674bd96f74-m4fst            1/1     Running   0          12m     10.244.1.29   192.168.132.132   
2.6 Deleting the pod

[root@docker-server1 deployment]# kubectl delete pods busybox-674bd96f74-m4fst
pod "busybox-674bd96f74-m4fst" deleted

[root@docker-server1 deployment]# kubectl get pods -o wide
NAME                                READY   STATUS    RESTARTS   AGE     IP            NODE              NOMINATED NODE   READINESS GATES
busybox-674bd96f74-8d7ml            0/1     Pending   0          65s     

The replacement pod never starts, because once again no node carries a matching label.
2.7 A custom label

Put a custom label on 192.168.132.132:

[root@docker-server1 deployment]# kubectl label node 192.168.132.132 ingress=enable
node/192.168.132.132 labeled

[root@docker-server1 deployment]# vim /yamls/ingress/nginx-controller.yaml
nodeSelector:
        ingress: enable
[root@docker-server1 deployment]# kubectl apply -f /yamls/ingress/nginx-controller.yaml
namespace/ingress-nginx unchanged
configmap/nginx-configuration unchanged
configmap/tcp-services unchanged
configmap/udp-services unchanged
serviceaccount/nginx-ingress-serviceaccount unchanged
clusterrole.rbac.authorization.k8s.io/nginx-ingress-clusterrole unchanged
role.rbac.authorization.k8s.io/nginx-ingress-role unchanged
rolebinding.rbac.authorization.k8s.io/nginx-ingress-role-nisa-binding unchanged
clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress-clusterrole-nisa-binding unchanged
deployment.apps/nginx-ingress-controller configured
limitrange/ingress-nginx configured

[root@docker-server1 deployment]# kubectl get pods -n ingress-nginx -o wide
NAME                                        READY   STATUS        RESTARTS   AGE   IP                NODE              NOMINATED NODE   READINESS GATES
nginx-ingress-controller-5cffd956bf-dm9qf   1/1     Terminating   0          32m   192.168.132.133   192.168.132.133   

[root@docker-server1 deployment]# kubectl get pods -n ingress-nginx -o wide
NAME                                        READY   STATUS    RESTARTS   AGE   IP                NODE              NOMINATED NODE   READINESS GATES
nginx-ingress-controller-79669b846b-nlrxl   1/1     Running   0          95s   192.168.132.132   192.168.132.132   
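Anticipating the advice at the end of this post, here is a hedged sketch of how such a custom label can keep the controller on a fixed set of nodes. It assumes ingress=enable has been applied to two nodes, and only the fields relevant to scheduling are spelled out; the pod labels and the image tag are illustrative, not copied from the manifest used above.

# Assumption: two nodes have been labelled with ingress=enable beforehand.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
spec:
  replicas: 2                      # the intent is one replica per labelled node
  selector:
    matchLabels:
      app: nginx-ingress           # hypothetical pod label, for illustration only
  template:
    metadata:
      labels:
        app: nginx-ingress
    spec:
      nodeSelector:
        ingress: enable            # both labelled nodes are eligible
      containers:
      - name: nginx-ingress-controller
        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.26.1   # tag illustrative

Note that nodeSelector alone does not guarantee the two replicas land on different labelled nodes; pod anti-affinity can be added if strict spreading is required.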
The other way is to use nodeName directly and name the node (here the node names are IP addresses):

[root@docker-server1 deployment]# vim /yamls/ingress/nginx-controller.yaml
      hostNetwork: true
      nodeName: "192.168.132.133"
[root@docker-server1 deployment]# kubectl get nodes
NAME              STATUS   ROLES    AGE     VERSION
192.168.132.131   Ready    master   3d19h   v1.17.0
192.168.132.132   Ready

Try not to pin pods with nodeName: if that node goes down, the manifest has to be edited with a new nodeName, whereas with a label selector it is enough to put the label on another node.

A practical setup is therefore: set the Deployment to two replicas, put the same label on two of the nodes, and select that label, so the ingress controller always runs on those fixed nodes (sketched above, after the ingress=enable example).

Author's note: the content of this post comes mainly from teacher Yan Wei of Yutian Education, and I verified all the operations myself. If you want to reprint it, please contact Yutian Education (http://www.yutianedu.com/) or teacher Yan himself (https://www.cnblogs.com/breezey/) for permission first. Thanks!