
<7> Deploying the Web UI: kubernetes-1.8.6 Cluster Setup


*Series overview*

This is one article in a series on this site. If you also want to learn how to install k8s 1.8, you can follow this tutorial to build and study along, and you are welcome to raise any problems you run into during the setup so we can work through them together. I personally hit plenty of pitfalls while building this, but I kept a good attitude and worked my way out of them step by step. In a word: nothing is difficult to a willing mind.

The series index is here: k8s series overview.

1. Install the DNS add-on

Perform these installation steps on the Master node.

Download the installation files:

# cd
# wget https://github.com/kubernetes/kubernetes/releases/download/v1.8.6/kubernetes.tar.gz
# tar xzvf kubernetes.tar.gz

# cd /root/kubernetes/cluster/addons/dns
# mv kubedns-svc.yaml.sed kubedns-svc.yaml

# Replace $DNS_SERVER_IP in the file with 10.254.0.2
# sed -i 's/$DNS_SERVER_IP/10.254.0.2/g' ./kubedns-svc.yaml

# mv ./kubedns-controller.yaml.sed ./kubedns-controller.yaml

# Replace $DNS_DOMAIN with cluster.local
# sed -i 's/$DNS_DOMAIN/cluster.local/g' ./kubedns-controller.yaml

# ls *.yaml
kubedns-cm.yaml  kubedns-controller.yaml  kubedns-sa.yaml  kubedns-svc.yaml

[root@master1 dns]# kubectl create -f .
configmap "kube-dns" created
deployment "kube-dns" created
serviceaccount "kube-dns" created
service "kube-dns" created

2. Deploy the dashboard add-on

Deploy this on the master node.

Create the deployment manifests

Two files are needed.

1. File one:

dashboard.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      serviceAccountName: kubernetes-dashboard
      containers:
      - name: kubernetes-dashboard
        image: registry.docker-cn.com/kubernetesdashboarddev/kubernetes-dashboard-amd64:head
        resources:
          limits:
            cpu: 100m
            memory: 300Mi
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 9090
        livenessProbe:
          httpGet:
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"

2. File two:

dashboard-svc.yaml

apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    k8s-app: kubernetes-dashboard
  type: NodePort
  ports:
  - port: 9090
    targetPort: 9090
    nodePort: 8888

3. Apply the two manifests to create the resources:

kubectl create -f dashboard.yaml

kubectl create -f dashboard-svc.yaml
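
One caveat about the Service above: nodePort: 8888 falls outside the default NodePort range (30000-32767), so the apiserver will reject it unless it was started with a widened range. This is an assumption about how your apiserver is configured; the relevant flag looks like:

kube-apiserver ... --service-node-port-range=1-65535

Otherwise, pick a nodePort inside the default range.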

4. How do we access it?

First, check whether the deployed services are healthy:

[root@localhost ~]$kubectl get pod -n kube-system -o wide
NAME                                    READY     STATUS              RESTARTS   AGE   IP            NODE
kube-dns-778977457c-2rwdj               0/3       ErrImagePull        0          1m    172.30.24.2   192.168.106.5
kubernetes-dashboard-8665cd4dfb-jzxg6   0/1       ContainerCreating   0          26s   <none>        192.168.106.4

As you can clearly see above, our kubernetes-dashboard has been placed on 192.168.106.4.

So let's go visit it:

http://192.168.106.4:8888
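
If the page does not come up, it is worth confirming the NodePort the Service actually exposes (a quick check; the names match the manifests above):

# kubectl get svc kubernetes-dashboard -n kube-system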

3. Deploy the heapster add-on

1. Download the installation files

# wget https://github.com/kubernetes/heapster/archive/v1.5.0.tar.gz

# tar xzvf ./v1.5.0.tar.gz

# cd ./heapster-1.5.0/

[root@master1 heapster-1.5.0]# kubectl create -f deploy/kube-config/influxdb/
deployment "monitoring-grafana" created
service "monitoring-grafana" created
serviceaccount "heapster" created
deployment "heapster" created
service "heapster" created
deployment "monitoring-influxdb" created
service "monitoring-influxdb" created

[root@master1 heapster-1.5.0]#  kubectl create -f deploy/kube-config/rbac/heapster-rbac.yaml
clusterrolebinding "heapster" created

2. Deployment failure

But the deployment failed at this point. We can check the pod status from the command line:

[root@master ~]$kubectl get pod -n kube-system -o wide
NAME                                    READY     STATUS             RESTARTS   AGE       IP             NODE
heapster-55c5d9c56b-wh88h               0/1       ImagePullBackOff (image problem)   0          1d        172.30.101.2   192.168.106.4
helloworldapi-6ff9f5b6db-hstbr          0/1       ImagePullBackOff   0          4h        172.30.24.4    192.168.106.5
helloworldapi-6ff9f5b6db-qtg85          1/1       Running            0          4h        172.30.101.5   192.168.106.4
kube-dns-778977457c-2rwdj               0/3       ErrImagePull       0          1d        172.30.24.2    192.168.106.5
kubernetes-dashboard-8665cd4dfb-jzxg6   1/1       Running            1          1d        172.30.101.4   192.168.106.4
monitoring-grafana-5bccc9f786-2tqsn     0/1       ImagePullBackOff   0          1d        172.30.24.3    192.168.106.5
monitoring-influxdb-85cb4985d4-q8dp6    0/1       ImagePullBackOff   0          1d        172.30.101.3   192.168.106.4

The failure details can also be seen in the k8s web UI:

As you can see, the reported problem is a failed image pull. Hahaha, gcr.io is blocked by the firewall.

In that case, first download the image on a host that can reach gcr.io, then load it onto the nodes.

3. Download the image

For proxy configuration, follow this guide: http://doc.eryajf.net/docs/test/test-1am8o26r6csm9

Then pull the image on that host:

[root@localhost ~]$docker pull gcr.io/google_containers/heapster-amd64:v1.4.2

Trying to pull repository gcr.io/google_containers/heapster-amd64 ...
v1.4.2: Pulling from gcr.io/google_containers/heapster-amd64
4e4eeec2f874: Pull complete
5c479041def4: Pull complete
Digest: sha256:f58ded16b56884eeb73b1ba256bcc489714570bacdeca43d4ba3b91ef9897b20
Status: Downloaded newer image for gcr.io/google_containers/heapster-amd64:v1.4.2

4. Save the image

After pulling, export the image to a file.

Check the image:
[root@localhost ~]$docker images

REPOSITORY                                TAG                 IMAGE ID            CREATED             SIZE
gcr.io/google_containers/heapster-amd64   v1.4.2              d4e02f5922ca        13 months ago       73.4 MB

Save the image:
[root@localhost ~]$docker save gcr.io/google_containers/heapster-amd64 -o heapster-amd64
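
Note that docker save without a tag exports every tag of the repository. Since only v1.4.2 is present here the result is the same, but you can pin the tag explicitly:

[root@localhost ~]$docker save gcr.io/google_containers/heapster-amd64:v1.4.2 -o heapster-amd64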

5. Load the image

Then transfer the image archive to the k8s master and load it.

Transfer:
[root@localhost ~]$scp heapster-amd64 root@192.168.106.3:/mnt/

Load (streamed straight onto the node over ssh):
[root@master mnt]$cat heapster-amd64 | ssh root@node01 'docker load'
Loaded image: gcr.io/google_containers/heapster-amd64:v1.4.2
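
If several nodes need the image, you can load it onto each of them in one pass (a small convenience loop; node01 and node02 are this cluster's hostnames):

[root@master mnt]$for n in node01 node02; do cat heapster-amd64 | ssh root@$n 'docker load'; done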

Then check the images on that node:

[root@node01 ~]$docker images
REPOSITORY                                                                 TAG                 IMAGE ID            CREATED             SIZE
registry.docker-cn.com/kubernetesdashboarddev/kubernetes-dashboard-amd64   head                cf8ed7dd44bb        4 days ago          122MB
192.168.106.3:5000/centos                                                  v0323               5182e96772bf        7 weeks ago         200MB
192.168.106.3:5000/helloworldapi                                           v2.2                85378ce05520        6 months ago        329MB
registry.access.redhat.com/rhel7/pod-infrastructure                        latest              99965fb98423        11 months ago       209MB
gcr.io/google_containers/heapster-amd64                                    v1.4.2              d4e02f5922ca        13 months ago       73.4MB

6. Verify the result

Once the image is present on the node, the kubelet's image-pull back-off will succeed on its next retry and the container is started automatically.

First check from the master:

[root@master mnt]$kubectl get pod -n kube-system -o wide
NAME                                    READY     STATUS             RESTARTS   AGE       IP             NODE
heapster-55c5d9c56b-wh88h               1/1       Running (it's up!)            0          1d        172.30.101.2   192.168.106.4
helloworldapi-6ff9f5b6db-hstbr          0/1       ImagePullBackOff   0          5h        172.30.24.4    192.168.106.5
helloworldapi-6ff9f5b6db-qtg85          1/1       Running            0          5h        172.30.101.5   192.168.106.4
kube-dns-778977457c-2rwdj               0/3       ImagePullBackOff   0          1d        172.30.24.2    192.168.106.5
kubernetes-dashboard-8665cd4dfb-jzxg6   1/1       Running            1          1d        172.30.101.4   192.168.106.4
monitoring-grafana-5bccc9f786-2tqsn     0/1       ImagePullBackOff   0          1d        172.30.24.3    192.168.106.5
monitoring-influxdb-85cb4985d4-q8dp6    0/1       ImagePullBackOff   0          1d        172.30.101.3   192.168.106.4

Then take a look at the web UI:

Wait a little longer to give heapster time to collect metrics, and the monitoring graphs will appear.

A little while later...

Cluster overview:

Node list:

Node details:

4. The remaining add-ons

When I checked the created pods, I found that apart from the dashboard everything else had failed. So next we follow the same recipe as above: download the corresponding images and sort them out.

To speed up the downloads, you can swap the Google images for Aliyun mirrors. Each gcr.io image below is paired with the mirror pull command; a pull-and-retag sketch follows after the mirror lists.

gcr.io/google_containers/kubernetes-dashboard-amd64:v1.8.1
docker pull registry.docker-cn.com/kubernetesdashboarddev/kubernetes-dashboard-amd64:head


gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.5
docker pull registry.cn-hangzhou.aliyuncs.com/wonders/k8s-dns-kube-dns-amd64:1.14.5


gcr.io/google_containers/heapster-grafana-amd64:v4.4.3
docker pull registry.cn-hangzhou.aliyuncs.com/inspur_research/heapster-grafana-amd64:v4.4.3


gcr.io/google_containers/heapster-amd64:v1.4.2
docker pull registry.cn-hangzhou.aliyuncs.com/inspur_research/heapster-amd64:v1.4.2


gcr.io/google_containers/heapster-influxdb-amd64:v1.3.3
docker pull registry.cn-hangzhou.aliyuncs.com/inspur_research/heapster-influxdb-amd64:v1.3.3


gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.5
docker pull registry.cn-hangzhou.aliyuncs.com/inspur_research/k8s-dns-dnsmasq-nanny-amd64:1.14.5

Mirrors contributed by readers:

hub.c.163.com/zhijiansd/k8s-dns-kube-dns-amd64:1.14.5

hub.c.163.com/zhijiansd/k8s-dns-dnsmasq-nanny-amd64:1.14.5

hub.c.163.com/zhijiansd/k8s-dns-sidecar-amd64:1.14.5

Or search Aliyun's registry directly:

https://dev.aliyun.com/search.html
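
Whichever mirror you use, keep in mind that the manifests still reference the gcr.io names, so pulling alone is not enough, as we are about to discover. Here is a sketch that pulls each mirror and immediately aliases it to the expected gcr.io name (based on the Aliyun mappings above; verify that the mirror repositories still exist):

pull_and_retag() {
  # pull a mirror image, then alias it to the gcr.io name the manifests expect
  local src=$1 dst=$2
  docker pull "$src" && docker tag "$src" "$dst"
}
pull_and_retag registry.cn-hangzhou.aliyuncs.com/wonders/k8s-dns-kube-dns-amd64:1.14.5 gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.5
pull_and_retag registry.cn-hangzhou.aliyuncs.com/inspur_research/k8s-dns-dnsmasq-nanny-amd64:1.14.5 gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.5
pull_and_retag registry.cn-hangzhou.aliyuncs.com/inspur_research/heapster-amd64:v1.4.2 gcr.io/google_containers/heapster-amd64:v1.4.2
pull_and_retag registry.cn-hangzhou.aliyuncs.com/inspur_research/heapster-grafana-amd64:v4.4.3 gcr.io/google_containers/heapster-grafana-amd64:v4.4.3
pull_and_retag registry.cn-hangzhou.aliyuncs.com/inspur_research/heapster-influxdb-amd64:v1.3.3 gcr.io/google_containers/heapster-influxdb-amd64:v1.3.3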

After downloading, I repeated the steps above, but a new problem appeared: because the images were pulled under the Aliyun repository names, the deployment still failed.

OK, on to the next round of troubleshooting.

Check the pod situation on the server:

[root@master mnt]$kubectl get pods
No resources found.

What? Why is nothing there? This view no longer works here: the pods were all deployed into a dedicated namespace, so from now on we have to specify the namespace when querying.

[root@master mnt]$kubectl get pods -n kube-system
NAME                                    READY     STATUS             RESTARTS   AGE
heapster-55c5d9c56b-wh88h               1/1       Running            0          1d
helloworldapi-6ff9f5b6db-hstbr          0/1       ErrImagePull       0          6h
helloworldapi-6ff9f5b6db-qtg85          1/1       Running            0          6h
kube-dns-778977457c-2rwdj               0/3       CrashLoopBackOff   6          1d
kubernetes-dashboard-8665cd4dfb-jzxg6   1/1       Running            1          1d
monitoring-grafana-5bccc9f786-2tqsn     1/1       Running            0          1d
monitoring-influxdb-85cb4985d4-q8dp6    1/1       Running            0          1d
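
Alternatively, you can list pods across every namespace at once:

[root@master mnt]$kubectl get pods --all-namespaces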

Look at the detailed description:

[root@master mnt]$kubectl -n kube-system describe pods kube-dns-778977457c-2rwdj
Name:           kube-dns-778977457c-2rwdj
Namespace:      kube-system
Node:           192.168.106.5/192.168.106.5
Start Time:     Sat, 29 Sep 2018 11:44:34 +0800
Labels:         k8s-app=kube-dns
                pod-template-hash=3345330137
Annotations:    kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"kube-system","name":"kube-dns-778977457c","uid":"fa63df41-c399-11e8-823b-525400c7...
                scheduler.alpha.kubernetes.io/critical-pod=
Status:         Pending
IP:             172.30.24.2
Created By:     ReplicaSet/kube-dns-778977457c
Controlled By:  ReplicaSet/kube-dns-778977457c
Containers:
  kubedns:
    Container ID:
    Image:         gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.5
    Image ID:
    Ports:         10053/UDP, 10053/TCP, 10055/TCP
    Args:
      --domain=cluster.local.
      --dns-port=10053
      --config-dir=/kube-dns-config
      --v=2
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Limits:
      memory:  170Mi
    Requests:
      cpu:      100m
      memory:   70Mi
    Liveness:   http-get http://:10054/healthcheck/kubedns delay=60s timeout=5s period=10s #success=1 #failure=5
    Readiness:  http-get http://:8081/readiness delay=3s timeout=5s period=10s #success=1 #failure=3
    Environment:
      PROMETHEUS_PORT:  10055
    Mounts:
      /kube-dns-config from kube-dns-config (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-dns-token-vsmtl (ro)
  dnsmasq:
    Container ID:  docker://61e6ea7f3c478c0d925b221ca31b803ffeb023c1a5bc8a391fe490f63bffa971
    Image:         gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.5
    Image ID:      docker://sha256:96c7a868215290b21a9600a46ca2f80a7ceaa49ec6811ca2be7da6fcd8aaac77
    Ports:         53/UDP, 53/TCP
    Args:
      -v=2
      -logtostderr
      -configDir=/etc/k8s/dns/dnsmasq-nanny
      -restartDnsmasq=true
      --
      -k
      --cache-size=1000
      --log-facility=-
      --server=/cluster.local/127.0.0.1#10053
      --server=/in-addr.arpa/127.0.0.1#10053
      --server=/ip6.arpa/127.0.0.1#10053
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    137
      Started:      Sun, 30 Sep 2018 15:49:59 +0800
      Finished:     Sun, 30 Sep 2018 15:51:59 +0800
    Ready:          False
    Restart Count:  6
    Requests:
      cpu:        150m
      memory:     20Mi
    Liveness:     http-get http://:10054/healthcheck/dnsmasq delay=60s timeout=5s period=10s #success=1 #failure=5
    Environment:  <none>
    Mounts:
      /etc/k8s/dns/dnsmasq-nanny from kube-dns-config (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-dns-token-vsmtl (ro)
  sidecar:
    Container ID:
    Image:         gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.5
    Image ID:
    Port:          10054/TCP
    Args:
      --v=2
      --logtostderr
      --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,A
      --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,A
    State:          Waiting
      Reason:       ErrImagePull
    Ready:          False
    Restart Count:  0
    Requests:
      cpu:        10m
      memory:     20Mi
    Liveness:     http-get http://:10054/metrics delay=60s timeout=5s period=10s #success=1 #failure=5
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-dns-token-vsmtl (ro)
Conditions:
  Type           Status
  Initialized    True
  Ready          False
  PodScheduled   True
Volumes:
  kube-dns-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      kube-dns
    Optional:  true
  kube-dns-token-vsmtl:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kube-dns-token-vsmtl
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     CriticalAddonsOnly
Events:
  Type     Reason      Age                  From                    Message
  ----     ------      ----                 ----                    -------
  Warning  FailedSync  38m (x6369 over 1d)  kubelet, 192.168.106.5  Error syncing pod
  Warning  Failed      33m (x313 over 1d)   kubelet, 192.168.106.5  Failed to pull image "gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.5": rpc error: code = Unknown desc = Error response from daemon: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
  Warning  Failed      28m (x314 over 1d)   kubelet, 192.168.106.5  Failed to pull image "gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.5": rpc error: code = Unknown desc = Error response from daemon: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
  Normal   BackOff     18m (x6135 over 1d)  kubelet, 192.168.106.5  Back-off pulling image "gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.5"
  Warning  Unhealthy   8m (x14 over 14m)    kubelet, 192.168.106.5  Liveness probe failed: Get http://172.30.24.2:10054/healthcheck/dnsmasq: dial tcp 172.30.24.2:10054: getsockopt: connection refused
  Normal   Killing     3m (x6 over 13m)     kubelet, 192.168.106.5  Killing container with id docker://dnsmasq:Container failed liveness probe.. Container will be killed and recreated.

The last events say, roughly, that the corresponding images still cannot be found. Let's go to 192.168.106.5 and check whether the image exists there:

[root@node02 log]$docker images
REPOSITORY                                                                      TAG                 IMAGE ID            CREATED             SIZE
gcr.io/google_containers/heapster-grafana-amd64                                 v4.4.3              edd7018a59de        8 months ago        152MB
registry.cn-hangzhou.aliyuncs.com/inspur_research/heapster-grafana-amd64        v4.4.3              edd7018a59de        8 months ago        152MB
gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64                            1.14.5              96c7a8682152        8 months ago        41.4MB
registry.cn-hangzhou.aliyuncs.com/inspur_research/k8s-dns-dnsmasq-nanny-amd64   1.14.5              96c7a8682152        8 months ago        41.4MB
registry.cn-hangzhou.aliyuncs.com/wonders/k8s-dns-kube-dns-amd64                1.14.5              38844359ff14        9 months ago        49.4MB
registry.access.redhat.com/rhel7/pod-infrastructure                             latest              99965fb98423        11 months ago       209MB

Isn't the fifth entry exactly the kube-dns image we need? But note: because it was downloaded from the Aliyun repository, its name does not match the gcr.io name reported in the events, which is why the pull keeps failing. The fix is to add the matching tag.

Be sure to include the version tag when re-tagging.

[root@node02 log]$docker tag registry.cn-hangzhou.aliyuncs.com/wonders/k8s-dns-kube-dns-amd64:1.14.5 gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.5
[root@node02 log]$
[root@node02 log]$docker images
REPOSITORY                                                                      TAG                 IMAGE ID            CREATED             SIZE
gcr.io/google_containers/heapster-grafana-amd64                                 v4.4.3              edd7018a59de        8 months ago        152MB
registry.cn-hangzhou.aliyuncs.com/inspur_research/heapster-grafana-amd64        v4.4.3              edd7018a59de        8 months ago        152MB
gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64                            1.14.5              96c7a8682152        8 months ago        41.4MB
registry.cn-hangzhou.aliyuncs.com/inspur_research/k8s-dns-dnsmasq-nanny-amd64   1.14.5              96c7a8682152        8 months ago        41.4MB
gcr.io/google_containers/k8s-dns-kube-dns-amd64                                 1.14.5              38844359ff14        9 months ago        49.4MB
registry.cn-hangzhou.aliyuncs.com/wonders/k8s-dns-kube-dns-amd64                1.14.5              38844359ff14        9 months ago        49.4MB
registry.access.redhat.com/rhel7/pod-infrastructure                             latest              99965fb98423        11 months ago       209MB

With the tag fixed, go back to the master and check the pod status.

At this point you can also watch the relevant logs through /var/log/messages on the node.
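
For example (standard syslog tailing; the path is the CentOS default):

[root@node02 log]$tail -f /var/log/messages | grep -i kubelet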

At this point, the installation walkthrough, such as it is, is complete. In truth this is only one small, small step: going through this install merely makes you familiar with which components k8s needs. What each component is, what it does, how they cooperate with one another, and how k8s is actually used in practice are all areas that still need deeper study.

