Kubernetes Installation


Script backups

Kubernetes installation scripts · Introduction to the kubectl command-line tool: installation, configuration and common commands

sealos

https://sealyun.com/

Preparation

Install the virtual machines. Get rhel-8.1-x86_64-dvd.iso here (extraction code: 9n6h) wangbo/Wb19831010!

master:192.168.64.128,192.168.64.129,192.168.64.130 node:192.168.31.31,192.168.31.32,192.168.31.33

http://store.lameleg.com/ https://www.kubernetes.org.cn/5904.html https://www.cnblogs.com/hi-linux/archive/2019/10/14/11673002.html

Make sure /etc/hostname and /etc/hosts (edit them with vi) are correct on every machine.

After changing the hostname, a reboot is required for it to take effect.

tee /etc/hostname <<-'EOF'
k8smaster1
EOF

tee /etc/hosts <<-'EOF'
127.0.0.1   k8smaster1
::1         k8smaster1
EOF

tee /etc/hostname <<-'EOF'
k8smaster2
EOF

tee /etc/hosts <<-'EOF'
127.0.0.1   k8smaster2
::1         k8smaster2
EOF

tee /etc/hostname <<-'EOF'
k8smaster3
EOF

tee /etc/hosts <<-'EOF'
127.0.0.1   k8smaster3
::1         k8smaster3
EOF

tee /etc/hostname <<-'EOF'
k8snode1
EOF

tee /etc/hosts <<-'EOF'
127.0.0.1   k8snode1
::1         k8snode1
EOF
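On systemd-based systems such as RHEL 8, hostnamectl can also set the hostname (a minimal sketch; run it with the matching name on each machine, and a reboot as noted above remains the safest way to make sure every service picks it up):

hostnamectl set-hostname k8smaster1   # writes /etc/hostname and applies the name at runtime
hostname                              # verify the change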

Installation

su root
cd /
mkdir k8s
cd /k8s
wget -c https://sealyun.oss-cn-beijing.aliyuncs.com/latest/sealos && \
   chmod +x sealos && mv sealos /usr/bin
wget -c https://sealyun.oss-cn-beijing.aliyuncs.com/7b6af025d4884fdd5cd51a674994359c-1.18.0/kube1.18.0.tar.gz
sealos init --passwd Wb19831010! \
--master 192.168.31.21  --master 192.168.31.22  --master 192.168.31.23  \
--node 192.168.31.31 \
--pkg-url /k8s/kube1.18.0.tar.gz \
--version v1.18.0
Or, for v1.19.0:

sealos init --passwd Wb19831010! \
--master 192.168.31.21  --master 192.168.31.22  --master 192.168.31.23  \
--node 192.168.31.31 \
--pkg-url /k8s/kube1.19.0.tar.gz \
--version v1.19.0


Or, a single-master install to which more masters are joined afterwards:

sealos init --passwd Wb19831010! \
--master 192.168.31.21 \
--pkg-url /k8s/kube1.19.0.tar.gz \
--version v1.19.0
sealos join --master 192.168.31.22
kubectl get node
sealos init also accepts --user root and --passwd your-server-password to specify the SSH credentials used to reach each host.
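After sealos finishes, a quick sanity check from the first master might look like this (illustrative commands; node names depend on the hostnames configured earlier):

kubectl get nodes -o wide          # every master and node should report Ready
kubectl get pods -n kube-system    # control-plane and network pods should be Running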

Operations

Add masters

sealos join --master 192.168.0.6 --master 192.168.0.7
sealos join --master 192.168.0.6-192.168.0.9  # or a contiguous range of IPs

Add nodes

sealos join --node 192.168.0.6 --node 192.168.0.7
sealos join --node 192.168.0.6-192.168.0.9  # or a contiguous range of IPs

Remove specific master nodes

sealos clean --master 192.168.0.6 --master 192.168.0.7
sealos clean --master 192.168.0.6-192.168.0.9  # or a contiguous range of IPs

Remove specific worker nodes

sealos clean --node 192.168.0.6 --node 192.168.0.7
sealos clean --node 192.168.0.6-192.168.0.9  # or a contiguous range of IPs

Tear down the whole cluster

sealos clean


Install add-on packages:

sealos install --pkg-url dashboard.tar
sealos install --pkg-url prometheus.tar
sealos install --pkg-url contour.tar
sealos install --pkg-url kuboard.tar
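After installing an add-on package, you can check that its workloads came up; for the dashboard, for example (a sketch, since the namespace and object names depend on the package contents):

kubectl get pods -A | grep -i dashboard
kubectl get svc -A | grep -i dashboard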

Important

You must use the ECS instance's fixed IP, because kubeadm init's --api-advertise-addresses apparently only accepts an IP address and is also used during kubeadm join; after joining, worker nodes talk to the master through the IP given in --api-advertise-addresses.

kubeadm reset && systemctl start kubelet
kubeadm init --use-kubernetes-version v1.5.2 --api-advertise-addresses=106.14.12.142
kubeadm reset && systemctl start kubelet
kubeadm join --token=4096e4.94518c0d951b358b 106.14.12.142
kubectl get nodes
kubectl get po -n=kube-system -o wide

Check etcd status

etcdctl member list
docker rm $(docker ps -a -q) # remove all stopped containers
netstat -tunlp
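A few more inspection commands that can help here (a sketch; etcdctl cluster-health is the etcd v2 API form, while the v3 API uses endpoint health):

etcdctl cluster-health                  # etcd v2 API
ETCDCTL_API=3 etcdctl endpoint health   # etcd v3 API
systemctl status kubelet                # confirm kubelet itself is running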

Architecture

Kubernetes-architecture.png

yum install -y etcd kubernetes --skip-broken

rpm -Va --nofiles --nodigest

getting-started-guides · Quickly deploying Kubernetes 1.5.2 with kubeadm

Components

  • docker: the container runtime, which Kubernetes depends on. v1.11.2 is recommended, but v1.10.3 and v1.12.1 are known to work as well.
  • kubelet: the most core component of Kubernetes. It runs on all of the machines in your cluster and does things like starting pods and containers.
  • kubectl: the command to control the cluster once it’s running. You will only need this on the master, but it can be useful to have on the other nodes as well.
  • kubeadm: the command to bootstrap the cluster.
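Once these packages are installed, their versions can be checked like this (illustrative; output formats differ between releases):

docker version
kubelet --version
kubeadm version
kubectl version --client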

Pre-installation preparation

Configure hostname and hosts so the machines can reach each other by name

surface

tee /etc/hostname <<-'EOF'
surface
EOF

tee /etc/hosts <<-'EOF'
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
127.0.0.1   surface.ling2.cn
127.0.0.1   surface
106.14.12.142   master
EOF

sysctl kernel.hostname=surface

ecs

tee /etc/hostname <<-'EOF'
master
EOF

tee /etc/hosts <<-'EOF'
127.0.0.1 localhost
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
172.19.154.195 iZuf658obmsge7tzo8o2ekZ
127.0.0.1 www.ling2.cn
127.0.0.1   master
180.154.185.200  surface
EOF

sysctl kernel.hostname=master

Docker is already installed.

Installing kubelet and kubeadm on your hosts

  • Configure the package repository (mirror)

CentOS

cat <<EOF > /etc/yum.repos.d/k8s.repo
[kubelet]
name=kubelet
baseurl=http://files.rm-rf.ca/rpms/kubelet/
enabled=1
gpgcheck=0
EOF

Ubuntu (similar to CentOS)

# curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
# cat <<EOF > /etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
# apt-get update
# # Install docker if you don't have it already.
# apt-get install -y kubelet kubeadm kubectl kubernetes-cni
  • Install and start kubelet
 setenforce 0
 yum install -y kubelet kubeadm kubectl kubernetes-cni
 systemctl enable kubelet
 systemctl start kubelet
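If kubelet does not look healthy at this point, that is expected before kubeadm init has run: it crash-loops while waiting for a configuration. Its logs can be followed through the systemd journal (illustrative):

journalctl -u kubelet -f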

Initializing your master

The master is the machine where the “control plane” components run, including etcd (the cluster database) and the API server (which the kubectl CLI communicates with). All of these components run in pods started by kubelet. Right now you can’t run kubeadm init twice without tearing down the cluster in between, see Tear down. If you try to run kubeadm init and your machine is in a state that is incompatible with starting a Kubernetes cluster, kubeadm will warn you about things that might not work or it will error out for unsatisfied mandatory requirements. To initialize the master, pick one of the machines you previously installed kubelet and kubeadm on, and run:

Pulling the images

Because of the firewall, download the images first, using mirrors such as cloudnil (Docker Hub mirror; source code on GitHub):

images=(kube-proxy-amd64:v1.4.4 kube-discovery-amd64:1.0 kubedns-amd64:1.7 kube-scheduler-amd64:v1.4.4 kube-controller-manager-amd64:v1.4.4 kube-apiserver-amd64:v1.4.4 etcd-amd64:2.2.5 kube-dnsmasq-amd64:1.3 exechealthz-amd64:1.1 pause-amd64:3.0 kubernetes-dashboard-amd64:v1.4.1)
for imageName in ${images[@]} ; do
  docker pull mritd/$imageName
  docker tag mritd/$imageName gcr.io/google_containers/$imageName
  docker rmi mritd/$imageName
done

Or:

#!/bin/bash
images=(kube-proxy-amd64:v1.5.2 kube-discovery-amd64:1.0 kubedns-amd64:1.9 kube-scheduler-amd64:v1.5.2 kube-controller-manager-amd64:v1.5.2 kube-apiserver-amd64:v1.5.2 etcd-amd64:2.2.5 kube-dnsmasq-amd64:1.4 dnsmasq-metrics-amd64:1.0 exechealthz-amd64:1.2 pause-amd64:3.0 kubernetes-dashboard-amd64:v1.5.0)
for imageName in ${images[@]} ; do
  docker pull cloudnil/$imageName
  docker tag cloudnil/$imageName gcr.io/google_containers/$imageName
  docker rmi cloudnil/$imageName
done

Or:

#!/bin/bash
images=(kube-proxy-amd64:v1.5.2 kube-discovery-amd64:1.0 kubedns-amd64:1.9 kube-scheduler-amd64:v1.5.2 kube-controller-manager-amd64:v1.5.2 kube-apiserver-amd64:v1.5.2 etcd-amd64:2.2.5 kube-dnsmasq-amd64:1.4 dnsmasq-metrics-amd64:1.0 exechealthz-amd64:1.2 pause-amd64:3.0 kubernetes-dashboard-amd64:v1.5.0 nginx-ingress-controller:0.8.3)
for imageName in ${images[@]} ; do
  docker pull 102010cncger/$imageName
  docker tag 102010cncger/$imageName gcr.io/google_containers/$imageName
  docker rmi 102010cncger/$imageName
done

Or:
docker pull gcr.io/google_containers/kube-proxy-amd64:v1.6.0
docker pull gcr.io/google_containers/kube-controller-manager-amd64:v1.6.0
docker pull gcr.io/google_containers/kube-apiserver-amd64:v1.6.0
docker pull gcr.io/google_containers/kube-scheduler-amd64:v1.6.0

docker pull gcr.io/google_containers/pause-amd64:3.0
docker pull gcr.io/google_containers/etcd-amd64:3.0.17

docker pull gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.1
docker pull gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.1
docker pull gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.1

docker pull gcr.io/google_containers/dnsmasq-metrics-amd64:1.0
docker pull gcr.io/google_containers/nginx-ingress-controller:0.8.3

docker pull gcr.io/google_containers/kube-dnsmasq-amd64
docker pull gcr.io/google_containers/kubedns-amd64
docker pull gcr.io/google_containers/kube-discovery-amd64
docker pull gcr.io/google_containers/exechealthz-amd64
docker pull gcr.io/google_containers/kubernetes-dashboard-amd64
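Before running kubeadm init, it can be worth confirming that all the retagged images are present locally (simple check):

docker images | grep 'gcr.io/google_containers'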

kubeadm init

  • Installing kubeadm and kubelet creates the /etc/kubernetes directory, and kubeadm init first checks whether that directory exists, so we first reset the environment with kubeadm.
kubeadm reset && systemctl start kubelet
kubeadm init --api-advertise-addresses=172.16.1.101 --use-kubernetes-version v1.5.2
  • If using an external etcd cluster:
kubeadm init --use-kubernetes-version v1.5.2 --external-etcd-endpoints http://www.ling2.cn:2379
kubeadm init --use-kubernetes-version v1.5.2 --external-etcd-endpoints http://surface.ling2.cn:2379
  • If you plan to use the flannel network, add --pod-network-cidr=10.244.0.0/16. See Kubernetes_安装#安装Calico网络
  • If the machine has multiple NICs, set --api-advertise-addresses=<ip-address> (or, for 1.6.0: kubeadm init --apiserver-advertise-address <ip-address>) to match your environment. With a single NIC this can be omitted.
  • If you see an ebtables not found in system path error, install the ebtables package first. I was not prompted during my install; the system already ships with it.
yum install -y ebtables
  • You can view related information with the following command:
tail -f /var/log/messages

Note: this will autodetect the network interface to advertise the master on as the interface with the default gateway. If you want to use a different interface, specify the --api-advertise-addresses=<ip-address> argument to kubeadm init. If you want to use flannel as the pod network, specify --pod-network-cidr=10.244.0.0/16 if you’re using the daemonset manifest below. However, please note that this is not required for any other networks besides Flannel. Please refer to the kubeadm reference doc if you want to read more about the flags kubeadm init provides. This will download and install the cluster database and “control plane” components. This may take several minutes.

For example:

kubeadm init --api-advertise-addresses=192.168.44.129 --use-kubernetes-version v1.4.5

The output should look like:

<master/tokens> generated token: "f0c861.753c505740ecde4c"
<master/pki> created keys and certificates in "/etc/kubernetes/pki"
<util/kubeconfig> created "/etc/kubernetes/kubelet.conf"
<util/kubeconfig> created "/etc/kubernetes/admin.conf"
<master/apiclient> created API client configuration
<master/apiclient> created API client, waiting for the control plane to become ready
<master/apiclient> all control plane components are healthy after 61.346626 seconds
<master/apiclient> waiting for at least one node to register and become ready
<master/apiclient> first node is ready after 4.506807 seconds
<master/discovery> created essential addon: kube-discovery
<master/addons> created essential addon: kube-proxy
<master/addons> created essential addon: kube-dns

Kubernetes master initialised successfully!

You can connect any number of nodes by running:

kubeadm join --token <token> <master-ip>
kubeadm join --token=49926a.e6d4a7ecd26812d6 192.168.1.30

Keep the token above safe: it is required for other nodes to join the master created above.

By default the master does not run pods. If you need it to, for example in a single-machine environment, run the following command:

kubectl taint nodes --all dedicated-
node "test-01" tainted
taint key="dedicated" and effect="" not fo5nd.
taint key="dedicated" and`effect="" not found.

This will remove the “dedicated” taint from any nodes that have it, including the master node, meaning that the scheduler will then be able to schedule pods everywhere.

Check kubelet status

systemctl status kubelet

List the cluster nodes:

kubectl get nodes

Master Images

Image Name Version
gcr.io/google_containers/kube-apiserver-amd64 v1.6.0
gcr.io/google_containers/kube-controller-manager-amd64 v1.6.0
gcr.io/google_containers/kube-scheduler-amd64 v1.6.0
gcr.io/google_containers/kube-proxy-amd64 v1.6.0
gcr.io/google_containers/etcd-amd64 3.0.17
gcr.io/google_containers/pause-amd64 3.0
gcr.io/google_containers/k8s-dns-sidecar-amd64 1.14.1
gcr.io/google_containers/k8s-dns-kube-dns-amd64 1.14.1
gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64 1.14.1

Master Isolation

By default, your cluster will not schedule pods on the master for security reasons. If you want to be able to schedule pods on the master, e.g. a single-machine Kubernetes cluster for development, run:

kubectl taint nodes --all node-role.kubernetes.io/master-

With output looking something like:

node "test-01" tainted
taint key="dedicated" and effect="" not found.
taint key="dedicated" and effect="" not found.

This will remove the node-role.kubernetes.io/master taint from any nodes that have it, including the master node, meaning that the scheduler will then be able to schedule pods everywhere.

Installing a pod network

You must install a pod network add-on so that your pods can communicate with each other.

The network must be deployed before any applications. Also, kube-dns, a helper service, will not start up before a network is installed. kubeadm only supports CNI based networks (and does not support kubenet).

Several projects provide Kubernetes pod networks using CNI, some of which also support Network Policy. See the add-ons page for a complete list of available network add-ons.

New for Kubernetes 1.6: kubeadm 1.6 sets up a more secure cluster by default. As such it uses RBAC to grant limited privileges to workloads running on the cluster. This includes networking integrations. As such, ensure that you are using a network system that has been updated to run with 1.6 and RBAC.

You can install a pod network add-on with the following command:

kubectl apply -f <add-on.yaml>

Please refer to the specific add-on installation guide for exact details. You should only install one pod network per cluster.

If you are on another architecture than amd64, you should use the flannel or Weave Net overlay networks as described in the multi-platform section

NOTE: You can install only one pod network per cluster.

Once a pod network has been installed, you can confirm that it is working by checking that the kube-dns pod is Running in the output of

kubectl get pods --all-namespaces

And once the kube-dns pod is up and running, you can continue by joining your nodes.
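For example, to watch only the kube-dns pods you can filter on the label the kube-dns addon conventionally carries (assumed here to be k8s-app=kube-dns):

kubectl get pods -n kube-system -l k8s-app=kube-dns -w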

If your network is not working or kube-dns is not in the Running state, check out the troubleshooting section below.

Deploying the Weave network

Until Weave is deployed, DNS cannot start. The commands given by the official docs are as follows:

kubectl apply -f https://git.io/weave-kube
kubectl get pods

In the spirit of getting to the bottom of things, first grab this YAML:

wget https://git.io/weave-kube -O weave-kube.yaml

Then, using the same trick, open the file to see which images it needs, relay them through Docker Hub, pull and retag them locally, and finally run kubectl create -f:

docker pull mritd/weave-kube:1.7.2
docker tag mritd/weave-kube:1.7.2 weaveworks/weave-kube:1.7.2
docker rmi mritd/weave-kube:1.7.2
docker pull weaveworks/weave-kube:1.9.4
docker pull weaveworks/weave-npc:1.9.4
kubectl create -f weave-kube.yaml
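Once the manifest has been applied, the Weave pods should show up in kube-system (illustrative check):

kubectl get pods -n kube-system -o wide | grep weave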

Installing the Calico network (failed)

There are many network add-ons to choose from; pick calico, weave or flannel according to your needs. Calico has the best performance, while weave and flannel are about the same. Addons contains ready-made YAML. This deployment runs on an Alibaba Cloud VPC, and the flannel network created from the official flannel.yaml had problems, so Calico is tried here instead.

kubectl apply -f http://docs.projectcalico.org/v2.0/getting-started/kubernetes/installation/hosted/kubeadm/calico.yaml

If an external etcd is used, remove the following content from the manifest and modify etcd_endpoints: [ETCD_ENDPOINTS]:

Calico Kubernetes Hosted Install

calico.yaml

Flannel (failed)

github-kube-flannel.yml kube-flannel.yml

Adding nodes

kubeadm reset && systemctl start kubelet
kubeadm join --token 49926a.e6d4a7ecd26812d6 192.168.1.30

The nodes are where your workloads (containers and pods, etc) run. If you want to add any new machines as nodes to your cluster, for each machine: SSH to that machine, become root (e.g. sudo su -) and run the command that was output by kubeadm init. For example:

# kubeadm join --token <token> <master-ip>
<util/tokens> validating provided token
<node/discovery> created cluster info discovery client, requesting info from "http://138.68.156.129:9898/cluster-info/v1/?token-id=0f8588"
<node/discovery> cluster info object received, verifying signature using given token
<node/discovery> cluster info signature and contents are valid, will use API endpoints [https://138.68.156.129:443]
<node/csr> created API client to obtain unique certificate for this node, generating keys and certificate signing request
<node/csr> received signed certificate from the API server, generating kubelet configuration
<util/kubeconfig> created "/etc/kubernetes/kubelet.conf"

Node join complete:

  • Certificate signing request sent to master and response received.
  • Kubelet informed of new secure connection details.
kubectl get nodes

Deployment

  • In Kubernetes, a Deployment is responsible for creating and updating instances of your application
  • To be deployable on Kubernetes, an application has to be packaged in one of the container formats it supports (docker/rkt)
  • First, we create a Kubernetes Deployment. This Deployment is responsible for creating and updating our application instances.
  • Once the application is created, the Kubernetes master schedules the application instances onto nodes in the cluster.
  • Once the application instances are created, the Kubernetes Deployment Controller continuously monitors them.
  • If a machine failure or some other unforeseen event stops an application instance, the watching Deployment Controller notices immediately and spins up a replacement instance
  • Kubernetes provides this self-healing behaviour when failures occur, and it is one of the capabilities repeatedly emphasized when proposing Kubernetes.
  • You interact with the cluster through kubectl, Kubernetes' command-line interface. kubectl is installed only on the master and talks to the cluster via the Kubernetes API.
  • Command: kubectl version. If you recall, we already used this command when the installation finished; its output shows both client and server at version 1.4.1
  • Command: kubectl get nodes. This confirms the composition of the cluster and whether every node is in the Ready state
  • Resources can be created either with kubectl run or with a YAML file plus kubectl create; here we use the latter. First download kubernetes-dashboard.yaml. It sits in the same directory as easypack_kubernetes.sh, which was already fetched locally by the git clone in the previous article.
  • Compared with the latest official file, the only differences are the version number (we use the previously downloaded 1.4.1, while the latest should now be 1.4.2) and that imagePullPolicy was changed from Always to IfNotPresent. Otherwise it would always try to pull the image, which the network does not allow. This is basically the only small pitfall in installing and using Kubernetes 1.4; the overall experience is genuinely good.
  • kubectl get deployments lists the current Deployments and related information
  • kubectl describe svc kubernetes-dashboard --namespace=kube-system shows the port the service exposes externally; through that port we can reach the Kubernetes Dashboard UI
  • kubernetes-dashboard.yaml
kubectl create -f kubernetes-dashboard.yaml
kubectl get deployments --namespace=kube-system
kubectl describe svc kubernetes-dashboard --namespace=kube-system # describe shows the exposed NodePort, which can then be used to access the dashboard
kubectl get pod --namespace=kube-system
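As a minimal illustration of the Deployment behaviour described above, a throwaway nginx Deployment can be created and inspected like this (a hypothetical example, not part of the dashboard setup; on these 1.4/1.5-era clusters kubectl run creates a Deployment by default):

kubectl run nginx-demo --image=nginx --replicas=2   # creates a Deployment with 2 pod replicas
kubectl get deployments
kubectl get pods -l run=nginx-demo                  # pods created by the Deployment carry the run=<name> label
kubectl delete deployment nginx-demo                # clean up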

port: The port on which the service is exposed on the service’s cluster IP (virtual IP). Port is the service port which is accessed by others via the cluster IP.

That is, port is the port the service exposes on the cluster IP; <cluster ip>:port is the entry point for clients inside the cluster to access the service.

nodePort: On top of having a cluster-internal IP, expose the service on a port on each node of the cluster (the same port on each node). You'll be able to contact the service on any <nodeIP>:nodePort address. So nodePort is also a service port, one that can be reached through a node IP by clients outside the cluster.


First of all, nodePort is one of the ways Kubernetes lets clients outside the cluster reach a service (the other is LoadBalancer), so <nodeIP>:nodePort is the entry point for clients outside the cluster.

targetPort The port on the pod that the service should proxy traffic to.

targetPort is easy to understand: it is the port on the pod. Traffic arriving on port and nodePort ultimately flows through kube-proxy to the backend pod's targetPort and into the container.

Summary of port and nodePort: both are service ports. The former is exposed to clients inside the cluster, the latter to clients outside the cluster. Traffic from either port passes through the kube-proxy reverse proxy to the backend pod's targetPort and thus reaches the container inside the pod.
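To make the three ports concrete, here is a hypothetical NodePort Service (a sketch, not taken from the text above) for pods labeled app=web: port 80 is the cluster-internal service port, nodePort 30080 is opened on every node's IP, and targetPort 8080 is the container port the traffic finally reaches:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: NodePort
  selector:
    app: web
  ports:
  - port: 80          # <cluster ip>:80 inside the cluster
    targetPort: 8080  # container port on the backing pods
    nodePort: 30080   # <node ip>:30080 from outside the cluster
EOF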

References

This article describes in detail how the many pitfalls and firewall problems of this step were resolved. See also: setting up a Kubernetes cluster with kubeadm

  • Copy easypack_kubernetes.sh from https://github.com/liumiaocn/easypack/blob/master/k8s/easypack_kubernetes.sh
  • mkdir /docker/kubernetes
  • vi /docker/kubernetes/easypack_kubernetes.sh
  • sh /docker/kubernetes/easypack_kubernetes.sh MASTER


mkdir -p /alidata/dockerdata
cd /alidata/dockerdata
yum -y install git

master

git clone https://github.com/liumiaocn/easypack
cd easypack/k8s
sh easypack_kubernetes_1.5.1_nowall.sh MASTER

node

git clone https://github.com/liumiaocn/easypack
cd easypack/k8s
sh easypack_kubernetes.sh NODE token MASTERIP
kubectl version

Troubleshooting

/etc/kubernetes is not empty

cd /etc/kubernetes and inspect manifests; if there is nothing of value in it, just delete it:

rm -rf /etc/kubernetes/manifests

Then run init again.

Image pull failures

kubeadm reset
systemctl start kubelet.service
kubeadm init or kubeadm join

More references: setting up a Kubernetes cluster with kubeadm

  • Copy from https://github.com/liumiaocn/easypack/blob/master/k8s/easypack_kubernetes.sh
  • mkdir /docker
  • vi /docker/easypack_kubernetes.sh
  • sh /docker/easypack_kubernetes.sh MASTER

Installing Kubernetes on Linux with kubeadm

Overview

This quickstart shows you how to easily install a Kubernetes cluster on machines running Ubuntu 16.04, CentOS 7 or HypriotOS v1.0.1+. The installation uses a tool called kubeadm which is part of Kubernetes. As of v1.6, kubeadm aims to create a secure cluster out of the box via mechanisms such as RBAC. This process works with local VMs, physical servers and/or cloud servers. It is simple enough that you can easily integrate its use into your own automation (Terraform, Chef, Puppet, etc). See the full kubeadm reference for information on all kubeadm command-line flags and for advice on automating kubeadm itself.

kubeadm assumes you have a set of machines (virtual or real) that are up and running. It is designed to be part of a large provisioning system - or just for easy manual provisioning. kubeadm is a great choice where you have your own infrastructure (e.g. bare metal), or where you have an existing orchestration system (e.g. Puppet) that you have to integrate with. If you are not constrained, there are other higher-level tools built to give you complete clusters: On GCE, Google Container Engine gives you one-click Kubernetes clusters. On AWS, kops makes cluster installation and management easy. kops supports building high availability clusters (a feature that kubeadm is currently lacking but is building toward).

kubeadm Maturity

Aspect                   Maturity Level
Command line UX          beta
Config file              alpha
Selfhosting              alpha
kubeadm alpha commands   alpha
Implementation           alpha

The experience for the command line is currently in beta and we are trying hard not to change command line flags and break that flow. Other parts of the experience are still under active development. Specifically, kubeadm relies on some features (bootstrap tokens, cluster signing) that are still considered alpha. The implementation may change as the tool evolves to support easy upgrades and high availability (HA). Any commands under kubeadm alpha (not documented here) are, of course, alpha. Be sure to read the limitations. Specifically, configuring cloud providers is difficult. Upgrades are also not well documented or particularly easy.

Prerequisites

  • One or more machines running Ubuntu 16.04+, CentOS 7 or HypriotOS v1.0.1+
  • 1GB or more of RAM per machine (any less will leave little room for your apps)
  • Full network connectivity between all machines in the cluster (public or private network is fine)

Objectives

  • Install a secure Kubernetes cluster on your machines
  • Install a pod network on the cluster so that application components (pods) can talk to each other
  • Install a sample microservices application (a socks shop) on the cluster

Instructions

(1/4) Installing kubelet and kubeadm on your hosts

You will install the following packages on all the machines: docker: the container runtime, which Kubernetes depends on. v1.12 is recommended, but v1.10 and v1.11 are known to work as well. v1.13 and 17.03+ have not yet been tested and verified by the Kubernetes node team. kubelet: the most core component of Kubernetes. It runs on all of the machines in your cluster and does things like starting pods and containers. kubectl: the command to control the cluster once it’s running. You will only need this on the master, but it can be useful to have on the other nodes as well. kubeadm: the command to bootstrap the cluster.

Note: If you already have kubeadm installed, you should do a apt-get update && apt-get upgrade or yum update to get the latest version of kubeadm. See the kubeadm release notes if you want to read about the different kubeadm releases.

For each host in turn: SSH into the machine and become root if you are not already (for example, run sudo su -). If the machine is running Ubuntu or HypriotOS, run:

apt-get update && apt-get install -y apt-transport-https
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
# Install docker if you don't have it already.
apt-get install -y docker-engine
apt-get install -y kubelet kubeadm kubectl kubernetes-cni

If the machine is running CentOS, run:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://yum.kubernetes.io/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
       https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
setenforce 0
yum install -y docker kubelet kubeadm kubectl kubernetes-cni
systemctl enable docker && systemctl start docker
systemctl enable kubelet && systemctl start kubelet

The kubelet is now restarting every few seconds, as it waits in a crashloop for kubeadm to tell it what to do. Note: Disabling SELinux by running setenforce 0 is required in order to allow containers to access the host filesystem, which is required by pod networks for example. You have to do this until SELinux support is improved in the kubelet. While this guide is correct for kubeadm 1.6, the previous version is still available but can be a bit tricky to install. See below for details.

(2/4) Initializing your master

The master is the machine where the “control plane” components run, including etcd (the cluster database) and the API server (which the kubectl CLI communicates with). To initialize the master, pick one of the machines you previously installed kubeadm on, and run:

kubeadm init

Note: this will autodetect the network interface to advertise the master on as the interface with the default gateway. If you want to use a different interface, specify the --apiserver-advertise-address <ip-address> argument to kubeadm init. There are pod network implementations where the master also plays a role in allocating a set of network address space for each node. When using flannel as the pod network (described in step 3), specify --pod-network-cidr 10.244.0.0/16. This is not required for any other networks besides Flannel. Please refer to the kubeadm reference doc if you want to read more about the flags kubeadm init provides.

kubeadm init will first run a series of prechecks to ensure that the machine is ready to run Kubernetes. It will expose warnings and exit on errors. It will then download and install the cluster database and “control plane” components. This may take several minutes. You can’t run kubeadm init twice without tearing down the cluster in between, see Tear Down. The output should look like:

[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.6.0
[init] Using Authorization mode: RBAC
[preflight] Running pre-flight checks
[preflight] Starting the kubelet service
[certificates] Generated CA certificate and key.
[certificates] Generated API server certificate and key.
[certificates] API Server serving cert is signed for DNS names [kubeadm-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.138.0.4]
[certificates] Generated API server kubelet client certificate and key.
[certificates] Generated service account token signing key and public key.
[certificates] Generated front-proxy CA certificate and key.
[certificates] Generated front-proxy client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[apiclient] Created API client, waiting for the control plane to become ready
[apiclient] All control plane components are healthy after 16.772251 seconds
[apiclient] Waiting for at least one node to register and become ready
[apiclient] First node is ready after 5.002536 seconds
[apiclient] Test deployment succeeded
[token] Using token: <token>
[apiconfig] Created RBAC rules
[addons] Created essential addon: kube-proxy
[addons] Created essential addon: kube-dns

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run (as a regular user):

 sudo cp /etc/kubernetes/admin.conf $HOME/
 sudo chown $(id -u):$(id -g) $HOME/admin.conf
 export KUBECONFIG=$HOME/admin.conf

You should now deploy a pod network to the cluster. Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:

 http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node as root:

 kubeadm join --token <token> <master-ip>:<master-port>

Make a record of the kubeadm join command that kubeadm init outputs. You will need this in a moment. The token is used for mutual authentication between the master and the joining nodes. The token included here is secret, keep it safe — anyone with this token can add authenticated nodes to your cluster. These tokens can be listed, created and deleted with the kubeadm token command. See the reference guide.

Master Images

All of these components run in pods started by kubelet, and the following images are required and will be automatically pulled by kubelet if they are absent while kubeadm init is initializing your master:

Image Name Version
gcr.io/google_containers/kube-apiserver-amd64 v1.6.0
gcr.io/google_containers/kube-controller-manager-amd64 v1.6.0
gcr.io/google_containers/kube-scheduler-amd64 v1.6.0
gcr.io/google_containers/kube-proxy-amd64 v1.6.0
gcr.io/google_containers/etcd-amd64 3.0.17
gcr.io/google_containers/pause-amd64 3.0
gcr.io/google_containers/k8s-dns-sidecar-amd64 1.14.1
gcr.io/google_containers/k8s-dns-kube-dns-amd64 1.14.1
gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64 1.14.1

Master Isolation

By default, your cluster will not schedule pods on the master for security reasons. If you want to be able to schedule pods on the master, e.g. a single-machine Kubernetes cluster for development, run:

kubectl taint nodes --all node-role.kubernetes.io/master-

With output looking something like:

node "test-01" tainted
taint key="dedicated" and effect="" not found.
taint key="dedicated" and effect="" not found.

This will remove the node-role.kubernetes.io/master taint from any nodes that have it, including the master node, meaning that the scheduler will then be able to schedule pods everywhere.

(3/4) Installing a pod network

You must install a pod network add-on so that your pods can communicate with each other. The network must be deployed before any applications. Also, kube-dns, a helper service, will not start up before a network is installed. kubeadm only supports CNI based networks (and does not support kubenet). Several projects provide Kubernetes pod networks using CNI, some of which also support Network Policy. See the add-ons page for a complete list of available network add-ons.

New for Kubernetes 1.6: kubeadm 1.6 sets up a more secure cluster by default. As such it uses RBAC to grant limited privileges to workloads running on the cluster. This includes networking integrations. As such, ensure that you are using a network system that has been updated to run with 1.6 and RBAC.

You can install a pod network add-on with the following command:

kubectl apply -f <add-on.yaml>

Please refer to the specific add-on installation guide for exact details. You should only install one pod network per cluster. If you are on another architecture than amd64, you should use the flannel or Weave Net overlay networks as described in the multi-platform section. NOTE: You can install only one pod network per cluster. Once a pod network has been installed, you can confirm that it is working by checking that the kube-dns pod is Running in the output of kubectl get pods --all-namespaces. And once the kube-dns pod is up and running, you can continue by joining your nodes. If your network is not working or kube-dns is not in the Running state, check out the troubleshooting section below.

(4/4) Joining your nodes

The nodes are where your workloads (containers and pods, etc) run. To add new nodes to your cluster do the following for each machine: SSH to the machine, become root (e.g. sudo su -), and run the command that was output by kubeadm init. For example:

kubeadm join --token <token> <master-ip>:<master-port>

The output should look something like:

[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[preflight] Running pre-flight checks
[discovery] Trying to connect to API Server "10.138.0.4:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.138.0.4:6443"
[discovery] Cluster info signature and contents are valid, will use API Server "https://10.138.0.4:6443"
[discovery] Successfully established connection with API Server "10.138.0.4:6443"
[bootstrap] Detected server version: v1.6.0-beta.3
[bootstrap] The server supports the Certificates API (certificates.k8s.io/v1beta1)
[csr] Created API client to obtain unique certificate for this node, generating keys and certificate signing request
[csr] Received signed certificate from the API server, generating KubeConfig...
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"

Node join complete:

  • Certificate signing request sent to master and response
 received.
  • Kubelet informed of new secure connection details.

Run 'kubectl get nodes' on the master to see this machine join. A few seconds later, you should notice this node in the output from kubectl get nodes when run on the master.

(Optional) Controlling your cluster from machines other than the master

In order to get a kubectl on some other computer (e.g. laptop) to talk to your cluster, you need to copy the kubeconfig file from your master to your workstation like this:

scp root@<master ip>:/etc/kubernetes/admin.conf .
kubectl --kubeconfig ./admin.conf get nodes

Note: If you are using GCE, instances, by default, disable ssh access for root. First log in to the machine, copy the file someplace that can be accessed and then use gcloud compute copy-files.

(Optional) Connecting to the API Server

If you want to connect to the API Server from outside the cluster you can use kubectl proxy:

scp root@<master ip>:/etc/kubernetes/admin.conf .
kubectl --kubeconfig ./admin.conf proxy

You can now access the API Server locally at http://localhost:8001/api/v1

(Optional) Installing a sample application

Now it is time to take your new cluster for a test drive. Sock Shop is a sample microservices application that shows how to run and connect a set of services on Kubernetes. To learn more about the sample microservices app, see the GitHub README. Note that the Sock Shop demo only works on amd64.

kubectl create namespace sock-shop
kubectl apply -n sock-shop -f "https://github.com/microservices-demo/microservices-demo/blob/master/deploy/kubernetes/complete-demo.yaml?raw=true"

You can then find out the port that the NodePort feature of services allocated for the front-end service by running:

kubectl -n sock-shop get svc front-end

Output:

NAME        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
front-end   10.110.250.153   <nodes>       80:30001/TCP   59s

It takes several minutes to download and start all the containers, watch the output of kubectl get pods -n sock-shop to see when they’re all up and running. Then go to the IP address of your cluster’s master node in your browser, and specify the given port. So for example, http://<master_ip>:<port>. In the example above, this was 30001, but it may be a different port for you. If there is a firewall, make sure it exposes this port to the internet before you try to access it. To uninstall the socks shop, run kubectl delete namespace sock-shop on the master.

Tear down

To undo what kubeadm did, you should first drain the node and make sure that the node is empty before shutting it down. Talking to the master with the appropriate credentials, run:

kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
kubectl delete node <node name>

Then, on the node being removed, reset all kubeadm installed state:

kubeadm reset

If you wish to start over simply run kubeadm init or kubeadm join with the appropriate arguments.

Explore other add-ons

See the list of add-ons to explore other add-ons, including tools for logging, monitoring, network policy, visualization & control of your Kubernetes cluster.

What’s next

Learn about kubeadm’s advanced usage on the advanced reference doc. Learn more about Kubernetes concepts and kubectl in Kubernetes 101.

Feedback

Slack Channel: #sig-cluster-lifecycle
Mailing List: kubernetes-sig-cluster-lifecycle
GitHub Issues in the kubeadm repository

kubeadm is multi-platform

kubeadm deb packages and binaries are built for amd64, arm and arm64, following the multi-platform proposal. deb-packages are released for ARM and ARM 64-bit, but not RPMs (yet, reach out if there’s interest). Currently, only the pod networks flannel and Weave Net work on multiple architectures. For Weave Net just use its standard install. Flannel requires special installation instructions:

export ARCH=amd64
curl -sSL "https://github.com/coreos/flannel/blob/master/Documentation/kube-flannel.yml?raw=true" | sed "s/amd64/${ARCH}/g" | kubectl create -f -

Replace ARCH=amd64 with ARCH=arm or ARCH=arm64 depending on the platform you’re running on. Note that the Raspberry Pi 3 is in ARM 32-bit mode, so for RPi 3 you should set ARCH to arm, not arm64.

Cloudprovider integrations (experimental)

Enabling specific cloud providers is a common request. This currently requires manual configuration and is therefore not yet fully supported. If you wish to do so, edit the kubeadm dropin for the kubelet service (/etc/systemd/system/kubelet.service.d/10-kubeadm.conf) on all nodes, including the master. If your cloud provider requires any extra packages installed on host, for example for volume mounting/unmounting, install those packages. Specify the --cloud-provider flag to kubelet and set it to the cloud of your choice. If your cloudprovider requires a configuration file, create the file /etc/kubernetes/cloud-config on every node. The exact format and content of that file depends on the requirements imposed by your cloud provider. If you use the /etc/kubernetes/cloud-config file, you must append it to the kubelet arguments as follows: --cloud-config=/etc/kubernetes/cloud-config

Next, specify the cloud provider in the kubeadm config file. Create a file called kubeadm.conf with the following contents:

kind: MasterConfiguration
apiVersion: kubeadm.k8s.io/v1alpha1
cloudProvider: <cloud provider>

Lastly, run kubeadm init --config=kubeadm.conf to bootstrap your cluster with the cloud provider. This workflow is not yet fully supported, however we hope to make it extremely easy to spin up clusters with cloud providers in the future. (See this proposal for more information.) The Kubelet Dynamic Settings feature may also help to fully automate this process in the future.

Limitations

Please note: kubeadm is a work in progress and these limitations will be addressed in due course.

  • The cluster created here has a single master, with a single etcd database running on it. This means that if the master fails, your cluster loses its configuration data and will need to be recreated from scratch. Adding HA support (multiple etcd servers, multiple API servers, etc) to kubeadm is still a work-in-progress. Workaround: regularly back up etcd. The etcd data directory configured by kubeadm is at /var/lib/etcd on the master.
  • The HostPort and HostIP functionality does not work with kubeadm due to that CNI networking is used, see issue #31307. Workaround: use the NodePort feature of services instead, or use HostNetwork.
  • Some users on RHEL/CentOS 7 have reported issues with traffic being routed incorrectly due to iptables being bypassed. You should ensure net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl config, eg. cat /etc/sysctl.d/k8s.conf should have:

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1

  • Users can list, create and delete tokens using the kubeadm token command. See the reference guide for details.
  • If you are using VirtualBox (directly or via Vagrant), you will need to ensure that hostname -i returns a routable IP address (i.e. one on the second network interface, not the first one). By default, it doesn’t do this and kubelet ends up using the first non-loopback network interface, which is usually NATed. Workaround: Modify /etc/hosts, take a look at this Vagrantfile for how this can be achieved.

Troubleshooting

Pod Network Troubleshooting

You may have trouble in the configuration if you see the following statuses. This example is for canal but there may be similar errors for other pod network systems.

NAMESPACE     NAME                        READY   STATUS              RESTARTS   AGE
kube-system   canal-node-f0lqp            2/3     RunContainerError   2          48s
kube-system   canal-node-77d0h            2/3     CrashLoopBackOff    3          3m
kube-system   kube-dns-2924299975-7q1vq   0/4     ContainerCreating   0          15m

The three statuses RunContainerError and CrashLoopBackOff and ContainerCreating are very common.

To help diagnose what happened, you can use the following command to check what is in the logs:

kubectl describe -n kube-system po {YOUR_POD_NAME}

Do not use kubectl logs as they only work with Pods that have started. If you run:

kubectl logs -n kube-system canal-node-f0lqp

You will get the following error:

Error from server (BadRequest): the server rejected our request for an unknown reason (get pods canal-node-f0lqp)

The kubectl describe command gives you more details about what went wrong.

kubectl describe -n kube-system po kube-dns-2924299975-1l2t7

The events should show something like this:

 2m		2m		1	{kubelet nac}	spec.containers{flannel}		Warning		Failed		Failed to start container with docker id 927e7ccdc32b with error: Error response from daemon: {"message":"chown /etc/resolv.conf: operation not permitted"}

Or this:

 6m	1m	191	{kubelet nac}		Warning	FailedSync	Error syncing pod, skipping: failed to "SetupNetwork" for "kube-dns-2924299975-1l2t7_kube-system" with SetupNetworkError: "Failed to setup network for pod \"kube-dns-2924299975-1l2t7_kube-system(dee8ef21-fbcb-11e6-ba19-38d547e0006a)\" using network plugins \"cni\": open /run/flannel/subnet.env: no such file or directory; Skipping pod"

A web search on the error message may help narrow down the issue. Or communicate the errors you are seeing to the community/company that provides the pod network implementation you are using.
