Apr 16, 2020, 04:46 PM
W0416 16:19:38.932835 1292 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.1
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.15.0-1060-gcp
DOCKER_VERSION: 19.03.5
DOCKER_GRAPH_DRIVER: overlay2
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "", err: exit status 1
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kmaster kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.185.32.221]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kmaster localhost] and IPs [10.185.32.221 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kmaster localhost] and IPs [10.185.32.221 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0416 16:20:05.505210 1292 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0416 16:20:05.506303 1292 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.

	Unfortunately, an error has occurred:
		timed out waiting for the condition

	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'

	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.

	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'

error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
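For what it's worth, the two preflight warnings near the top ("cgroupfs" Docker cgroup driver, and the missing /proc/sys/net/bridge/bridge-nf-call-iptables) describe the most common cause of exactly this kubelet timeout on v1.18 with Docker. A hedged remediation sketch, assuming a systemd-based distro with Docker as the runtime (paths are the conventional ones; double-check against your setup before running). The sketch writes `daemon.json` to the current directory for inspection, with the real commands shown as comments:

```shell
# Sketch only: write the Docker daemon config locally so it can be reviewed;
# on a real node this file belongs at /etc/docker/daemon.json.
cat > daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

# Address the bridge-nf-call-iptables warning by loading br_netfilter
# and persisting the sysctl (commands commented out because they need root):
#   sudo modprobe br_netfilter
#   echo 'net.bridge.bridge-nf-call-iptables = 1' | sudo tee /etc/sysctl.d/k8s.conf
#   sudo sysctl --system

# Then apply the Docker cgroup-driver change and retry the init:
#   sudo cp daemon.json /etc/docker/daemon.json
#   sudo systemctl restart docker
#   sudo kubeadm reset -f
#   sudo kubeadm init

echo "wrote daemon.json with cgroup driver: systemd"
```

If the kubelet still refuses to come up, `journalctl -xeu kubelet` (as the output above already suggests) typically shows a line about the kubelet cgroup driver differing from the Docker cgroup driver when this mismatch is the cause.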