<十> Installing a k8s-1.10 cluster with kubeadm

All the required packages have already been downloaded and bundled together; with them, the whole deployment can be completed offline.

The bundle has been uploaded to Baidu Cloud; the link is below:

File download

  File name: kubeadm installation package bundle
  File size: roughly 800-900 MB
  Download link: https://pan.baidu.com/s/1YPYO9PKpkdxRzsJDGsu5jw


This setup uses two servers: one as the master and one as the node.

1. Initialization.

Run these initialization steps on both servers.

yum -y install wget ntpdate lrzsz curl && ntpdate -u cn.pool.ntp.org

Disable swap.

swapoff -a
sed -i '11s/^/#/g' /etc/fstab  # permanent disable; adjust the line number to wherever the swap entry sits in your own fstab
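
If you'd rather not hard-code the line number, matching the swap entry by pattern is safer. A small sketch (assumes GNU sed; check your fstab before and after):

sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab   # comment out any fstab line whose fields include "swap"; keeps a .bak backup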

Configure hosts.

192.168.106.4 master
192.168.106.5 node
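
A quick way to apply this on both machines (assuming the hostnames haven't been set yet; adjust the IPs to your environment):

cat >> /etc/hosts << EOF
192.168.106.4 master
192.168.106.5 node
EOF
hostnamectl set-hostname master   # run on the master; use "node" on the node instead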

Tune kernel parameters.

cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

Then load the br_netfilter module and apply the settings.

modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf
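
A quick sanity check that the settings took effect:

sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward   # both should print "= 1"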

2. Bring the packages in.

1. Install from the pre-built bundle.

tar xf k8s.rpm.tar.gz
cd k8s.rpm
yum -y install ./*.rpm
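
After the install it's worth confirming which versions actually landed. A quick check (package names assumed from the bundle described above):

rpm -q docker-ce kubelet kubeadm kubectl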

This works because all the required RPMs were cached in advance and can be installed directly. If you follow the original online route instead, install as follows.

2. Install the original (online) way.

  • Step 1: Install the necessary system tools
    • sudo yum install -y yum-utils device-mapper-persistent-data lvm2
      
  • Step 2: Add the repository information
    • wget http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo && mv docker-ce.repo /etc/yum.repos.d
      
  • Step 3: List the docker versions available to install
    • yum list docker-ce.x86_64 --showduplicates | sort -r
      
  • Step 4: Install a specific version
    • yum -y install docker-ce-[VERSION]
      

    Here we install the 17.x release:

      yum -y install docker-ce-17.12.1.ce-1.el7.centos
      
  • Step 5: Install kubeadm, kubelet, and kubectl
  • First, configure the Aliyun Kubernetes repo:

      cat <<EOF > /etc/yum.repos.d/kubernetes.repo
      [kubernetes]
      name=Kubernetes
      baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
      enabled=1
      gpgcheck=0
      repo_gpgcheck=0
      gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
           http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
      EOF
      

    Then install:

      yum makecache fast && yum install -y kubelet-1.10.0-0 kubeadm-1.10.0-0 kubectl-1.10.0-0
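
Whichever install route you took, it's worth enabling kubelet at boot now as well; the kubeadm preflight check in section 6 below warns when it isn't enabled:

systemctl enable kubelet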
      

3. Start docker.

Start docker and enable it at boot.

systemctl start docker && systemctl enable docker && systemctl status docker
    

4. Import the images.

On the master:

docker load -i image-master.tar
    

Retag them to the names kubeadm expects:

docker tag cnych/kube-apiserver-amd64:v1.10.0 k8s.gcr.io/kube-apiserver-amd64:v1.10.0
docker tag cnych/kube-scheduler-amd64:v1.10.0 k8s.gcr.io/kube-scheduler-amd64:v1.10.0
docker tag cnych/kube-controller-manager-amd64:v1.10.0 k8s.gcr.io/kube-controller-manager-amd64:v1.10.0
docker tag cnych/kube-proxy-amd64:v1.10.0 k8s.gcr.io/kube-proxy-amd64:v1.10.0
docker tag cnych/k8s-dns-kube-dns-amd64:1.14.8 k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.8
docker tag cnych/k8s-dns-dnsmasq-nanny-amd64:1.14.8 k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.8
docker tag cnych/k8s-dns-sidecar-amd64:1.14.8 k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.8
docker tag cnych/etcd-amd64:3.1.12 k8s.gcr.io/etcd-amd64:3.1.12
docker tag cnych/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64
docker tag cnych/pause-amd64:3.1 k8s.gcr.io/pause-amd64:3.1
    

On the node:

docker load -i image-node.tar
    

Retag likewise:

docker tag cnych/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64
docker tag cnych/pause-amd64:3.1 k8s.gcr.io/pause-amd64:3.1
docker tag cnych/kube-proxy-amd64:v1.10.0 k8s.gcr.io/kube-proxy-amd64:v1.10.0
docker tag cnych/kubernetes-dashboard-amd64:v1.8.3 k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3
docker tag cnych/heapster-influxdb-amd64:v1.3.3 k8s.gcr.io/heapster-influxdb-amd64:v1.3.3
docker tag cnych/heapster-grafana-amd64:v4.4.3 k8s.gcr.io/heapster-grafana-amd64:v4.4.3
docker tag cnych/heapster-amd64:v1.4.2 k8s.gcr.io/heapster-amd64:v1.4.2
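
If you'd rather not type every docker tag line by hand, the retagging can be scripted. A minimal sketch, assuming every loaded cnych/* image maps to k8s.gcr.io under the same name, except flannel, which goes to quay.io/coreos:

# retag all loaded cnych images in one pass
docker images --format '{{.Repository}}:{{.Tag}}' | grep '^cnych/' | while read img; do
  name=${img#cnych/}                      # strip the cnych/ prefix
  case $name in
    flannel:*) docker tag "$img" "quay.io/coreos/$name" ;;
    *)         docker tag "$img" "k8s.gcr.io/$name" ;;
  esac
done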
    

5. Configure kubelet.

After the install, kubelet still needs configuring: the kubelet installed from the yum repo ships a config file that sets --cgroup-driver to systemd, while docker's cgroup driver is cgroupfs, and the two must match. Check docker's side with docker info:

[root@localhost ~]$docker info |grep Cgroup
Cgroup Driver: cgroupfs

Edit kubelet's configuration file, /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, changing the KUBELET_CGROUP_ARGS parameter to cgroupfs:

vim /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"
    

After the change, reload systemd.

systemctl daemon-reload
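
A quick grep to confirm the edit stuck:

grep cgroup-driver /etc/systemd/system/kubelet.service.d/10-kubeadm.conf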
    

6. Initialize the cluster.

With the preparation done, we can now initialize the cluster on the master node with kubeadm:

[root@localhost ~]$kubeadm init --kubernetes-version=v1.10.0 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.106.4
[init] Using Kubernetes version: v1.10.0
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
    [WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 17.12.1-ce. Max validated version: 17.03
    [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
    [WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[preflight] Starting the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.106.4]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [localhost] and IPs [127.0.0.1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [master] and IPs [192.168.106.4]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 19.502623 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node master as master by adding a label and a taint
[markmaster] Master master tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: y6829y.a3qvzgz8vm0horcr
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 192.168.106.4:6443 --token y6829y.a3qvzgz8vm0horcr --discovery-token-ca-cert-hash sha256:1c63e983c5319dc9b59a61133ad6e7941886125a603aa888f28547afc50a41ba
    

As the output suggests, run:

mkdir -p $HOME/.kube && sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config && sudo chown $(id -u):$(id -g) $HOME/.kube/config
    

Pay special attention to the final kubeadm join line: that is the command every node will use to join the cluster later.
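
If that join command gets lost, it can be reconstructed later instead of re-initializing. A sketch: newer kubeadm builds can print it directly, and the discovery hash can always be recomputed from the CA certificate:

kubeadm token create --print-join-command   # if your kubeadm supports this flag
# or recompute the discovery hash by hand:
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | \
  openssl rsa -pubin -outform der 2>/dev/null | \
  openssl dgst -sha256 -hex | sed 's/^.* //'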

You can now check node status on the master:

[root@localhost ~]$kubectl get nodes
NAME      STATUS     ROLES     AGE       VERSION
master    NotReady   master    2m        v1.10.0
    

The master shows NotReady because no pod network has been deployed yet, so apply the flannel manifest:

[root@localhost ~]$kubectl apply -f kube-flannel.yml
clusterrole.rbac.authorization.k8s.io "flannel" created
clusterrolebinding.rbac.authorization.k8s.io "flannel" created
serviceaccount "flannel" created
configmap "kube-flannel-cfg" created
daemonset.extensions "kube-flannel-ds-amd64" created
daemonset.extensions "kube-flannel-ds-arm64" created
daemonset.extensions "kube-flannel-ds-arm" created
daemonset.extensions "kube-flannel-ds-ppc64le" created
daemonset.extensions "kube-flannel-ds-s390x" created
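
While waiting, you can watch the flannel and kube-dns pods come up:

kubectl get pods --all-namespaces -o wide   # the master goes Ready once kube-flannel and kube-dns are Running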
    

Wait a moment and check again.

[root@localhost ~]$kubectl get nodes
NAME      STATUS     ROLES     AGE       VERSION
master    Ready      master    2m        v1.10.0
    

7. Join the node to the master.

On the node, run the join command recorded earlier:

kubeadm join 192.168.106.4:6443 --token y6829y.a3qvzgz8vm0horcr --discovery-token-ca-cert-hash sha256:1c63e983c5319dc9b59a61133ad6e7941886125a603aa888f28547afc50a41ba
    

Then check the status on the master again:

[root@localhost ~]$kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
master    Ready     master    3m        v1.10.0
node      Ready     <none>    1m        v1.10.0
    

8. Deploy the dashboard.

Create two files.

dashboard.yaml:

[root@localhost dashboard]$cat dashboard.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      serviceAccountName: kubernetes-dashboard
      containers:
      - name: kubernetes-dashboard
        image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3
        resources:
          limits:
            cpu: 100m
            memory: 300Mi
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 9090
        livenessProbe:
          httpGet:
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
    

dashboard-svc.yaml:

[root@localhost dashboard]$cat dashboard-svc.yaml

apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    k8s-app: kubernetes-dashboard
  type: NodePort
  ports:
  - port: 9090
    targetPort: 9090
    nodePort: 30033
    

Then create them:

[root@localhost dashboard]$kubectl create -f dashboard.yaml
serviceaccount "kubernetes-dashboard" created

[root@localhost dashboard]$kubectl create -f dashboard-svc.yaml
service "kubernetes-dashboard" created
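
To confirm the dashboard pod and its NodePort service are up, a routine check:

kubectl get pods,svc -n kube-system | grep dashboard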
    

Then visit any node's IP on port 30033 to reach the dashboard UI.
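
A quick reachability check from the shell (substitute any of your node IPs):

curl -s http://192.168.106.5:30033/ | head -n 5   # should return the dashboard's HTML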

This article was compiled with reference to:

https://www.jianshu.com/p/9c2e6733ef9b
https://www.qikqiak.com/post/manual-install-high-available-kubernetes-cluster/#10-%E9%83%A8%E7%BD%B2dashboard-%E6%8F%92%E4%BB%B6-a-id-dashboard-a

