Deploying an Application Image with k8s

Preparation

Preparing the image

k8s always deploys from images, so the application package must first be built into an image.

The overall flow is:

  1. Package the application into an image
  2. Publish the image to an image registry
  3. Have k8s pull the image and deploy it

To keep the demo simple, I skip step 2 here: I build the image directly on one of the k8s cluster machines and let k8s deploy from that local image. (For reference, a sketch of what step 2 would look like is given right below.)
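
If you did want to publish the image to a registry (the skipped step 2), it would look roughly like the following once the image has been built in step 3 below; the registry address registry.example.com and the repository path are placeholders, not part of the original setup:

# hypothetical registry and repository path; adjust to your environment
nerdctl tag my-demo-project:1.0.0 registry.example.com/demo/my-demo-project:1.0.0 --namespace k8s.io
nerdctl login registry.example.com
nerdctl push registry.example.com/demo/my-demo-project:1.0.0 --namespace k8s.io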

  1. Prepare a project that exposes an endpoint returning some basic information:

  2. Build the project into a jar and upload it to any node of the k8s cluster

  3. Use the container runtime (docker, containerd, etc.) to build the application package into an image

    Tip: a runtime has to be installed before installing k8s anyway, so it is already available here

    Write the Dockerfile

    # Write the Dockerfile
    # FROM        pull a jdk8 image as the base image
    # MAINTAINER  maintainer/publisher information
    # COPY        copy files from the build context to the given location inside the image
    # RUN         shell commands to run at build time (form 1: RUN <command line>, form 2: RUN ["executable", "arg1", "arg2"])
    # ENTRYPOINT  command executed when the container starts (runs in the foreground)
    cat > Dockerfile <<EOF
    FROM java:8
    MAINTAINER  JustryDeng<13548417409@163.com>
    COPY my-demo-project.jar /
    RUN echo 'Asia/Shanghai' >/etc/timezone
    ENTRYPOINT ["java", "-jar", "/my-demo-project.jar"]
    EOF

    Build the image

    # Build the image with nerdctl, containerd's enhanced CLI (the namespace must be k8s.io, otherwise k8s will not see the image)
    nerdctl build -t my-demo-project:1.0.0 . --namespace k8s.io
    # List the images
    nerdctl images --namespace k8s.io | grep my-demo-project

    # Double-check with crictl, the CRI client used by k8s (k8s effectively operates on images through this interface; if this command cannot see the image, neither can k8s)
    crictl images | grep my-demo-project

  4. Make sure every node can obtain this image

    Note: if the image is pulled from an image registry this is a non-issue (skip this step); but since we are using a local image, every node needs its own copy of it

    Repeat the commands above on the other nodes and make sure k8s can see the image there as well (one way to copy the image over is sketched below)
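
    A rough sketch of copying the local image to another node (my own addition; node names and paths are placeholders):

    # on the node where the image was built: export it and copy it over
    nerdctl save -o my-demo-project-1.0.0.tar my-demo-project:1.0.0 --namespace k8s.io
    scp my-demo-project-1.0.0.tar root@node136:/root/
    # on the target node: import it and confirm k8s can see it
    nerdctl load -i my-demo-project-1.0.0.tar --namespace k8s.io
    crictl images | grep my-demo-project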

Deploying the image with k8s

Step 1: prepare a namespace

# Create the namespace dev
kubectl create namespace dev
# Delete a namespace
# kubectl delete namespaces {namespace}
# List all namespaces
kubectl get namespaces

Step 2: use a Deployment to bring up the Pods automatically

  1. Write the Deployment

    Tip: see here for more Deployment configuration options

    cat > my-demo-project-deployment.yaml <<EOF
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-demo-project # name of this Deployment
      namespace: dev
    spec:
      replicas: 3 # deploy 3 Pods
      selector:
        matchLabels:
          app: my-demo-project
      template: # the Deployment uses the template matched by spec.selector.matchLabels to create spec.replicas pods
        metadata:
          labels:
            app: my-demo-project
        spec:
          containers:
          - image: my-demo-project:1.0.0 # image name; can be viewed with the nerdctl images --names command
            name: my-demo-project  # container name
            imagePullPolicy: IfNotPresent  # image pull policy (use the local image if present, otherwise pull from the registry)
            ports:
            - containerPort: 8080 # container port
            resources: # resource quota
              limits:  # resource limits (upper bound)
                cpu: "2" # CPU limit, in number of cores
                memory: "1024Mi" # memory limit; Gi, Mi, G, M are all accepted
              requests: # requested resources (lower bound)
                cpu: "1"  # CPU request, in number of cores
                memory: "800Mi"  # memory request; Gi, Mi, G, M are all accepted
    EOF
  2. Start the Deployment (a scaling sketch follows below)

    # Start the deployment
    kubectl apply -f my-demo-project-deployment.yaml
    # List all deployments
    kubectl get deployment -o wide -A
    # List all pods
    kubectl get pod -o wide -A

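    Since the Deployment always reconciles toward the declared replica count, scaling up or down is just a matter of changing that number; a hypothetical example (my own addition, not part of the original steps):

    # scale the deployment to 5 pods, then watch the new pods come up
    kubectl scale deployment my-demo-project -n dev --replicas=5
    kubectl get pod -n dev -o wide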

Step 3: use a Service to aggregate the Pods and publish the application

Background: pod IPs change dynamically, and which node a pod ends up on is decided by the k8s scheduler and is not fixed; this is what Services are for. A Service provides built-in service discovery and load balancing for its pods: once a Service is associated with pods, requests to the Service are automatically load-balanced across them.

  1. Write the Service configuration

    cat > my-demo-project-service.yaml <<EOF
    apiVersion: v1
    kind: Service
    metadata:
      name: my-demo-project-service # name of the service
      namespace: dev # namespace
    spec:
      ports:
      - port: 80 # the service itself listens on port 80
        protocol: TCP
        targetPort: 8080 # the application inside the pod listens on 8080
        nodePort: 30018 # the port exposed on each node is 30018
      selector:
        app: my-demo-project # pods matched by this selector are aggregated by this service
      type: NodePort # service type is NodePort
    EOF
  2. Start the Service

    # Start the service
    kubectl apply -f my-demo-project-service.yaml
    # List all services
    kubectl get svc -o wide -A

    The service's selector is also visible here (pods matching the selector are the ones associated with the service):

  3. Access test

    Tip: at this point the application can already be reached via the IP and nodePort of any node hosting one of the pods

    • First, find the pods associated with the service

      After inspecting the service with kubectl get svc -o wide -A, we know its selector is app=my-demo-project, so we can locate the pods by that selector

      kubectl get pod -o wide -A --show-labels | grep 'app=my-demo-project'

      As shown, the service is associated with three pods, located on node135, node136, and node136 respectively (i.e. two of them landed on node136)

    • Next, look up the IP addresses of node135 and node136

       cat /etc/hosts

    • Access test

      Access the service at {nodeIp}:{nodePort}/{application url}

      Note: the service can be reached through any node that hosts one of its pods; once a request reaches the service, it is load-balanced to one of the pods, which handles the business logic. To aggregate its pods, the service comes with service discovery and load balancing built in. (A curl sketch of this load balancing follows after the two examples below.)

      • Access via node135: http://192.168.46.135:30018/info/李四

      • Access via node136: http://192.168.46.136:30018/info/李四

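      A quick way to exercise the load balancing from a shell (my own sketch; "test" is just a placeholder path parameter) is to hit the NodePort repeatedly:

      # send a handful of requests; the service spreads them across the three pods
      for i in $(seq 1 6); do curl -s http://192.168.46.135:30018/info/test; echo; done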

Step 4: use an Ingress to aggregate the Service and publish the application

An ingress setup consists of two parts:

  • ingress controller: turns newly added Ingress resources into Nginx configuration and makes it take effect; without an ingress controller, an Ingress resource on its own does nothing
  • Ingress resource: abstracts the Nginx configuration into an Ingress object, so adding a new service only requires writing a new Ingress yaml file

How ingress works:

  • step1: the ingress controller talks to the k8s API to dynamically detect changes to the Ingress rules in the cluster, reads them, and forwards traffic to the corresponding service in the cluster according to the defined rules

  • step2: each Ingress rule states which domain name maps to which service in the cluster; from this, and from the nginx configuration template in the ingress-controller, a corresponding piece of nginx configuration is generated

  • step3: that configuration is then written dynamically into the ingress-controller pod, which runs an nginx instance; the controller writes the generated configuration into nginx's config file and reloads it so it takes effect, achieving per-domain configuration and dynamic updates (a sketch of how to peek at this generated config is given right below)
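
For reference, once the controller from the step below is installed, you can look at the nginx configuration it generated for your Ingress rules. This is my own check, and the pod name is a placeholder you need to fill in from kubectl get pod:

# find the controller pod, then dump the generated nginx config and look for the server blocks
kubectl get pod -n ingress-nginx
kubectl exec -n ingress-nginx <ingress-nginx-controller-pod> -- cat /etc/nginx/nginx.conf | grep -A 3 'server_name'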

Sub-step 1 (skip if already deployed): deploy the ingress-controller in k8s

Tip: there are many ingress-controller implementations; we use ingress-nginx. k8s and ingress-nginx versions must match, and installing a mismatched version may fail. My k8s is 1.24.1, so the matching ingress-nginx is v1.2.0 (any 1.24.* k8s should use ingress-nginx 1.2.0).

1. Download the ingress-nginx deploy.yaml file
# may require a proxy to reach githubusercontent
wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.2.0/deploy/static/provider/cloud/deploy.yaml

Note: if you are using the v1.2.0 deploy.yaml, you can skip the download and use the already-modified version given below

2. Edit deploy.yaml

Changes to make:

  1. Change the deployment type from Deployment to DaemonSet

  2. Add hostNetwork: true to the pod spec in the yaml

  3. Replace the foreign image registry with your own private registry (or a domestic mirror)

    Note: if you switch to your own registry, you can pick an arbitrary image name for now and produce that image in the next step

  4. Set a node selector label (any custom label name will do) so the pods are only scheduled onto nodes carrying that label (we only set the label here; the next step applies the matching label to the nodes)

Here is the modified deploy.yaml for version 1.2.0:

apiVersion: v1
kind: Namespace
metadata:
  labels:
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  name: ingress-nginx
---
apiVersion: v1
automountServiceAccountToken: true
kind: ServiceAccount
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.2.0
  name: ingress-nginx
  namespace: ingress-nginx
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.2.0
  name: ingress-nginx-admission
  namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.2.0
  name: ingress-nginx
  namespace: ingress-nginx
rules:
- apiGroups:
  - ""
  resources:
  - namespaces
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - configmaps
  - pods
  - secrets
  - endpoints
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - networking.k8s.io
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - networking.k8s.io
  resources:
  - ingresses/status
  verbs:
  - update
- apiGroups:
  - networking.k8s.io
  resources:
  - ingressclasses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resourceNames:
  - ingress-controller-leader
  resources:
  - configmaps
  verbs:
  - get
  - update
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - create
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - patch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.2.0
  name: ingress-nginx-admission
  namespace: ingress-nginx
rules:
- apiGroups:
  - ""
  resources:
  - secrets
  verbs:
  - get
  - create
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.2.0
  name: ingress-nginx
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  - endpoints
  - nodes
  - pods
  - secrets
  - namespaces
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - networking.k8s.io
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - patch
- apiGroups:
  - networking.k8s.io
  resources:
  - ingresses/status
  verbs:
  - update
- apiGroups:
  - networking.k8s.io
  resources:
  - ingressclasses
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.2.0
  name: ingress-nginx-admission
rules:
- apiGroups:
  - admissionregistration.k8s.io
  resources:
  - validatingwebhookconfigurations
  verbs:
  - get
  - update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.2.0
  name: ingress-nginx
  namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx
subjects:
- kind: ServiceAccount
  name: ingress-nginx
  namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.2.0
  name: ingress-nginx-admission
  namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx-admission
subjects:
- kind: ServiceAccount
  name: ingress-nginx-admission
  namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.2.0
  name: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx
subjects:
- kind: ServiceAccount
  name: ingress-nginx
  namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.2.0
  name: ingress-nginx-admission
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx-admission
subjects:
- kind: ServiceAccount
  name: ingress-nginx-admission
  namespace: ingress-nginx
---
apiVersion: v1
data:
  allow-snippet-annotations: "true"
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.2.0
  name: ingress-nginx-controller
  namespace: ingress-nginx
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.2.0
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  externalTrafficPolicy: Local
  ports:
  - appProtocol: http
    name: http
    port: 80
    protocol: TCP
    targetPort: http
  - appProtocol: https
    name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  type: LoadBalancer
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.2.0
  name: ingress-nginx-controller-admission
  namespace: ingress-nginx
spec:
  ports:
  - appProtocol: https
    name: https-webhook
    port: 443
    targetPort: webhook
  selector:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  type: ClusterIP
---
apiVersion: apps/v1
# kind: Deployment # a Deployment may schedule several pods onto the same node, which defeats the purpose of high availability; a DaemonSet runs at most one pod per node, which is what we want
kind: DaemonSet
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.2.0
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  minReadySeconds: 0
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app.kubernetes.io/component: controller
      app.kubernetes.io/instance: ingress-nginx
      app.kubernetes.io/name: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/component: controller
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/name: ingress-nginx
    spec:
      # add hostNetwork: true
      hostNetwork: true
      containers:
      - args:
        - /nginx-ingress-controller
        - --publish-service=$(POD_NAMESPACE)/ingress-nginx-controller
        - --election-id=ingress-controller-leader
        - --controller-class=k8s.io/ingress-nginx
        - --ingress-class=nginx
        - --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
        - --validating-webhook=:8443
        - --validating-webhook-certificate=/usr/local/certificates/cert
        - --validating-webhook-key=/usr/local/certificates/key
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: LD_PRELOAD
          value: /usr/local/lib/libmimalloc.so
        # replace with a domestic mirror address (or your own private registry); pulling directly from the official registry is blocked, so we pull the image synced by Alibaba instead
        #image: k8s.gcr.io/ingress-nginx/controller:v1.2.0@sha256:d8196e3bc1e72547c5dec66d6556c0ff92a23f6d0919b206be170bc90d5f9185
        image: registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller:v1.2.0
        imagePullPolicy: IfNotPresent
        lifecycle:
          preStop:
            exec:
              command:
              - /wait-shutdown
        livenessProbe:
          failureThreshold: 5
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        name: controller
        # declare the ports; without this, the node hosting the ingress pod cannot be reached
        ports:
        - containerPort: 80
          name: http
          protocol: TCP
        - containerPort: 443
          name: https
          protocol: TCP
        - containerPort: 8443
          name: webhook
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        resources:
          requests:
            cpu: 100m
            memory: 90Mi
        securityContext:
          allowPrivilegeEscalation: true
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - ALL
          runAsUser: 101
        volumeMounts:
        - mountPath: /usr/local/certificates/
          name: webhook-cert
          readOnly: true
      dnsPolicy: ClusterFirst
      nodeSelector:
        kubernetes.io/os: linux
        # schedule the pods only onto nodes carrying our custom label custem/allow-deploy-ingress-nginx=true
        custem/allow-deploy-ingress-nginx: 'true'
      serviceAccountName: ingress-nginx
      terminationGracePeriodSeconds: 300
      volumes:
      - name: webhook-cert
        secret:
          secretName: ingress-nginx-admission
---
# admission controller: ingress-nginx-admission validates the Ingress configuration up front; if it is invalid, ingress-nginx-controller will not reload
apiVersion: batch/v1
kind: Job
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.2.0
  name: ingress-nginx-admission-create
  namespace: ingress-nginx
spec:
  template:
    metadata:
      labels:
        app.kubernetes.io/component: admission-webhook
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
        app.kubernetes.io/version: 1.2.0
      name: ingress-nginx-admission-create
    spec:
      containers:
      - args:
        - create
        - --host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.$(POD_NAMESPACE).svc
        - --namespace=$(POD_NAMESPACE)
        - --secret-name=ingress-nginx-admission
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        # replace with a domestic mirror address (or your own private registry); pulling directly from the official registry is blocked, so we pull the image synced by Alibaba instead
        #image: k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1@sha256:64d8c73dca984af206adf9d6d7e46aa550362b1d7a01f3a0a91b20cc67868660
        image: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen:v1.1.1
        imagePullPolicy: IfNotPresent
        name: create
        securityContext:
          allowPrivilegeEscalation: false
      nodeSelector:
        kubernetes.io/os: linux
      restartPolicy: OnFailure
      securityContext:
        fsGroup: 2000
        runAsNonRoot: true
        runAsUser: 2000
      serviceAccountName: ingress-nginx-admission
---
apiVersion: batch/v1
kind: Job
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.2.0
  name: ingress-nginx-admission-patch
  namespace: ingress-nginx
spec:
  template:
    metadata:
      labels:
        app.kubernetes.io/component: admission-webhook
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
        app.kubernetes.io/version: 1.2.0
      name: ingress-nginx-admission-patch
    spec:
      containers:
      - args:
        - patch
        - --webhook-name=ingress-nginx-admission
        - --namespace=$(POD_NAMESPACE)
        - --patch-mutating=false
        - --secret-name=ingress-nginx-admission
        - --patch-failure-policy=Fail
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        # replace with a domestic mirror address (or your own private registry); pulling directly from the official registry is blocked, so we pull the image synced by Alibaba instead
        #image: k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1@sha256:64d8c73dca984af206adf9d6d7e46aa550362b1d7a01f3a0a91b20cc67868660
        image: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen:v1.1.1
        imagePullPolicy: IfNotPresent
        name: patch
        securityContext:
          allowPrivilegeEscalation: false
      nodeSelector:
        kubernetes.io/os: linux
      restartPolicy: OnFailure
      securityContext:
        fsGroup: 2000
        runAsNonRoot: true
        runAsUser: 2000
      serviceAccountName: ingress-nginx-admission
---
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.2.0
  name: nginx
spec:
  controller: k8s.io/ingress-nginx
---
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.2.0
  name: ingress-nginx-admission
webhooks:
- admissionReviewVersions:
  - v1
  clientConfig:
    service:
      name: ingress-nginx-controller-admission
      namespace: ingress-nginx
      path: /networking/v1/ingresses
  failurePolicy: Fail
  matchPolicy: Equivalent
  name: validate.nginx.ingress.kubernetes.io
  rules:
  - apiGroups:
    - networking.k8s.io
    apiVersions:
    - v1
    operations:
    - CREATE
    - UPDATE
    resources:
    - ingresses
  sideEffects: None
3. Label the nodes that are allowed to run ingress-nginx

Here I allow node135 and node136 to host the ingress-nginx pods, so I label both of them (the label must match the one set in the previous step)

kubectl label nodes node135 custem/allow-deploy-ingress-nginx=true
kubectl label nodes node136 custem/allow-deploy-ingress-nginx=true

# list the nodes carrying this label
kubectl get node --show-labels | grep 'allow-deploy-ingress-nginx=true'
4. On each node, open the ports referenced in the config file

When I installed k8s I kept the firewall enabled for security reasons, so on the nodes that are allowed to run ingress-nginx (node135 and node136 in my case) the ports referenced in the config file, such as the probe ports, need to be opened one by one

# some of these ports may already be open; that's fine, opening them again does no harm
firewall-cmd --permanent --add-port=80/tcp
firewall-cmd --permanent --add-port=443/tcp
firewall-cmd --permanent --add-port=8443/tcp
firewall-cmd --permanent --add-port=10254/tcp
# reload to apply
firewall-cmd --reload
# list the opened ports
firewall-cmd --zone=public --list-ports
5. Apply deploy.yaml

This binds the service accounts, secrets and roles, and installs the ingress-controller

Apply deploy.yaml

kubectl apply -f deploy.yaml

(Wait a moment, then) check:

# check the status
kubectl get pod -o wide -A
# inspect details
#kubectl describe pods -n {namespace} {pod name}
# view logs
#kubectl logs -f -n {namespace} {podName}
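
Optionally (my own convenience check, not from the original write-up), wait for the controller pods to report Ready and confirm the admission webhook got registered:

# block until the controller pods are Ready (or the timeout expires)
kubectl wait --namespace ingress-nginx --for=condition=ready pod \
  --selector=app.kubernetes.io/component=controller --timeout=120s
# the validating webhook created by the admission jobs should now exist
kubectl get validatingwebhookconfiguration ingress-nginx-admission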

Note: later, when an Ingress resource is created, the admission webhook kicks in first and checks whether the resulting nginx configuration is valid; if it is not, the change never reaches the ingress-controller

Test with curl:

curl node135
curl node136

Sub-step 2: integrate the Service with an Ingress

Background: once a pod is rescheduled it may no longer sit on its original node, so accessing the Service through a specific node's IP may stop working; an Ingress lets us aggregate the Service one level further

1. Write the Ingress configuration
cat > my-demo-project-ingress.yaml <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-demo-project-ingress # name of this Ingress
  namespace: dev # namespace
spec:
  ingressClassName: nginx # must be nginx; otherwise the rules will not be added to nginx.conf
  rules:
  - host: test.k8s.idea-aedi.com # domain name (it must be a domain name)
    http:
      paths:
      # example: by default, if path is set to /abc, then a request to {address}/abc/xyz is routed to /abc/xyz on the specified service
      - path: / # the path
        pathType: Prefix
        backend:
          service:
            name: my-demo-project-service # name of the service
            port:
              number: 80 # port of that service
EOF
2. Start the Ingress
# Start the ingress
kubectl apply -f my-demo-project-ingress.yaml
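
A quick sanity check (my own addition) that the Ingress was created and picked up by the nginx ingress class:

# the HOSTS column should show test.k8s.idea-aedi.com once the controller has synced the rule
kubectl get ingress -n dev -o wide
kubectl describe ingress my-demo-project-ingress -n dev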

3. Configure the domain mapping & test access

(a) First, confirm which nodes the ingress-controller runs on and their IP addresses

kubectl get pod -o wide -A | grep ingress-nginx-controller

(b) On the client machine (I send the requests from Windows, so that Windows box is the client), configure the hostname mapping so the domain can be resolved

Append the ip-to-domain mapping to C:\Windows\System32\drivers\etc\hosts

192.168.46.135 test.k8s.idea-aedi.com
192.168.46.136 test.k8s.idea-aedi.com

(c) Access via the domain (a curl check is also sketched after these examples)

  • Port 80 (HTTP): http://test.k8s.idea-aedi.com/info/DengShuai

  • Port 443 (HTTPS): https://test.k8s.idea-aedi.com/info/DengShuai

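The same checks can be run from a shell on the client (my own sketch; it assumes the hosts entries above are in place):

curl http://test.k8s.idea-aedi.com/info/DengShuai
# or, without touching the hosts file, send the Host header directly to one of the controller nodes
curl -H 'Host: test.k8s.idea-aedi.com' http://192.168.46.135/info/DengShuai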
