May 08, 2018, 05:10 AM
[szh@localhost cucushift]$ cucumber features/svc-catalog_asb/16628.feature
Using the default, devel and _devel profiles...
Feature: Ansible-service-broker related scenarios

waiting for operation up to 3600 seconds..
waiting for operation up to 3600 seconds..
waiting for operation up to 3600 seconds..
waiting for operation up to 3600 seconds..
waiting for operation up to 3600 seconds..
waiting for operation up to 3600 seconds..
waiting for operation up to 3600 seconds..
waiting for operation up to 3600 seconds..
  # @author zhsun@redhat.com
  # @case_id OCP-15939
  @admin @destructive
  Scenario: [ASB] Support concurrent, multiple APB source adapters # features/svc-catalog_asb/16628.feature:7
[03:37:50] INFO> === Before Scenario: [ASB] Support concurrent, multiple APB source adapters ===
[03:37:50] INFO> Shell Commands: mkdir -v -p '/home/szh/workdir/localhost-szh'
mkdir: created directory "/home/szh/workdir/localhost-szh"

[03:37:50] INFO> Exit Status: 0
[03:37:50] INFO> === End Before Scenario: [ASB] Support concurrent, multiple APB source adapters ===
    Given I have a project # features/step_definitions/project.rb:7
[03:37:50] INFO> HTTP GET zhsun_1@https://host-8-249-82.host.centralci.eng.rdu2.redhat.com:8443/oauth/authorize
[03:37:51] INFO> HTTP GET took 1.679 sec: 302 Found
[03:37:51] INFO> HTTP GET https://host-8-249-82.host.centralci.eng.rdu2.redhat.com:8443
[03:37:53] INFO> HTTP GET took 1.252 sec: 200 OK | application/json 4037 bytes

[03:37:53] INFO> REST get_user for user 'CucuShift::APIAccessor:@ose', base_opts: {:options=>{:oapi_version=>"v1", :api_version=>"v1", :accept=>"application/json", :content_type=>"application/json", :oauth_token=>"KQycFFotoZqvgO0EexddnrAqqThgwf2EcZGAGP8ECwY"}, :base_url=>"https://host-8-249-82.host.centralci.eng.rdu2.redhat.com:8443", :headers=>{"Accept"=>"<accept>", "Content-Type"=>"<content_type>", "Authorization"=>"Bearer <oauth_token>"}}, opts: {:username=>"~"}
[03:37:53] INFO> HTTP GET https://host-8-249-82.host.centralci.eng.rdu2.redhat.com:8443/oapi/v1/users/~
[03:37:54] INFO> HTTP GET took 1.357 sec: 200 OK | application/json 263 bytes

[03:37:54] INFO> cleaning-up user zhsun_1 projects
[03:37:54] INFO> Shell Commands: rm -f -- /home/szh/workdir/localhost-szh/ose_zhsun_1.kubeconfig

[03:37:54] INFO> Exit Status: 0
[03:37:54] INFO> Shell Commands: oc version --config=/tmp/kubeconfig20180508-22784-c0oghm
oc v3.9.11
kubernetes v1.9.1+a0ce1bc657
features: Basic-Auth GSSAPI Kerberos SPNEGO

[03:37:54] INFO> Exit Status: 0
[03:37:54] INFO> Shell Commands: oc login --token=KQycFFotoZqvgO0EexddnrAqqThgwf2EcZGAGP8ECwY --server=https://host-8-249-82.host.centralci.eng.rdu2.redhat.com:8443 --config=/home/szh/workdir/localhost-szh/ose_zhsun_1.kubeconfig --insecure-skip-tls-verify=true
Logged into "https://host-8-249-82.host.centralci.eng.rdu2.redhat.com:8443" as "zhsun_1" using the token provided.

You don't have any projects. You can try to create a new project, by running

    oc new-project <projectname>


[03:37:58] INFO> Exit Status: 0
[03:38:02] INFO> Shell Commands: oc new-project lxlyp --config=/home/szh/workdir/localhost-szh/ose_zhsun_1.kubeconfig
Now using project "lxlyp" on server "https://host-8-249-82.host.centralci.eng.rdu2.redhat.com:8443".

You can add applications to this project with the 'new-app' command. For example, try:

    oc new-app centos/ruby-22-centos7~https://github.com/openshift/ruby-ex.git

to build a new example application in Ruby.

[03:38:05] INFO> Exit Status: 0
[03:38:07] INFO> oc get projects lxlyp --output=yaml --config=/home/szh/workdir/localhost-szh/ose_zhsun_1.kubeconfig
[03:38:07] INFO> After 1 iterations and 2 seconds:
apiVersion: project.openshift.io/v1
kind: Project
metadata:
  annotations:
    openshift.io/description: ""
    openshift.io/display-name: ""
    openshift.io/requester: zhsun_1
    openshift.io/sa.scc.mcs: s0:c20,c5
    openshift.io/sa.scc.supplemental-groups: 1000390000/10000
    openshift.io/sa.scc.uid-range: 1000390000/10000
  creationTimestamp: 2018-05-08T03:36:44Z
  name: lxlyp
  resourceVersion: "193552"
  selfLink: /apis/project.openshift.io/v1/projects/lxlyp
  uid: 07756455-5271-11e8-a042-fa163edc217c
spec:
  finalizers:
  - openshift.io/origin
  - kubernetes
status:
  phase: Active

waiting for operation up to 3600 seconds..
waiting for operation up to 3600 seconds..
waiting for operation up to 3600 seconds..
waiting for operation up to 3600 seconds..
waiting for operation up to 3600 seconds..
waiting for operation up to 3600 seconds..
waiting for operation up to 3600 seconds..
waiting for operation up to 3600 seconds..
    And I select a random node's host # features/step_definitions/node.rb:9
[03:38:10] INFO> Remote cmd: `mkdir -v -p '/tmp/workdir/localhost-szh'` @ssh://root@host-8-249-82.host.centralci.eng.rdu2.redhat.com
mkdir: created directory "/tmp/workdir/localhost-szh"

[03:38:11] INFO> Exit Status: 0
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM2akNDQWRLZ0F3SUJBZ0lCQVRBTkJna3Foa2lHOXcwQkFRc0ZBREFtTVNRd0lnWURWUVFEREJ0dmNHVnUKYzJocFpuUXRjMmxuYm1WeVFERTFNalUyTlRnek5qSXdIaGNOTVRnd05UQTNNREUxT1RJeVdoY05Nak13TlRBMgpNREUxT1RJeldqQW1NU1F3SWdZRFZRUUREQnR2Y0dWdWMyaHBablF0YzJsbmJtVnlRREUxTWpVMk5UZ3pOakl3CmdnRWlNQTBHQ1NxR1NJYjNEUUVCQVFVQUE0SUJEd0F3Z2dFS0FvSUJBUUN5cjViT2hXZjJoNThKZnVFdEJTREsKTlkrTStTMTFOYVVyNTdsOXUxRlIvZmtZMFUrUjFnNHppVGl2Nk1GWms4RzRhQnhTOVR0KzlId3RDZTFHVEcwZwpmcldEUFdmTVRHQy9VaHpxK0Y4Q2FqTXVlcnc5MElpdmRrZnZZQWlucnE4eFlJd1VNUU9YV2dVazdabE9yZ0p5CkdLcFg2eXFqMGR0RzVlcTQ2NUtKRHM1Nkg3VHBBaVdhV2NsWGQ2aWNiWnl6OUEyT3ZHMk0yMmNDVW9Pbzk0bU4KYWE1dyt3RGVJK1R2ZmdWZjZBRkpqL1NMaXJSdTJxM09LQy9EbDBCdFdoU0JtYU9iZWsvbm03THlVdlA0MU1sawp0TnBQY3l2dVdMaEJ4L2dzcFpIUEQybU1ucEZ3YzFETnZCUXREOEFuZEdCNzg3NWZZbmk0RE93YUM1bzVvTEFiCkFnTUJBQUdqSXpBaE1BNEdBMVVkRHdFQi93UUVBd0lDcERBUEJnTlZIUk1CQWY4RUJUQURBUUgvTUEwR0NTcUcKU0liM0RRRUJDd1VBQTRJQkFRQU04RE1NbGNKNWNtcWE5aUxuYnJ3YnhCL1AvTWtwaFFsRXE4TFVtbktFaGpoMgpmRGhIeWZyVjNER0E2cDUweTYwNGhKMFFRdEVkeCtnMldoaFJQbFIxREIxV3kyQUxicU5teFZIcUdvUm5NSHM4CjN1dWUwUDFIdVBJYTlPV2hhT21HUVYyVTRHRU5UQzRBUHhJaXpWM1lMYmR6TVRYT0tXS0RRMWxxOGE5QWRIa00KL0hTZ0xWVGduOGU2bHNQcFZrNytGTis2RzVUdDYrTVNua25vTjhNR0VpYlJMU3lMdHlOa09PNmNUY3hsN0hNUQpEdnUvSWg0VjBvZDlETlJnUDBpb1pOZkp4VStjWVRhM05JZysxTUo4dFk5OEpsTEFuYk44VllQWDdHc3NXMjZoCnMzeXV5aVp3L0diNkFBQnIvWE5nOEhEQko1eTE0VzVPNFVOOWpNVTEKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://172.16.120.63:8443
  name: 172-16-120-63:8443
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM2akNDQWRLZ0F3SUJBZ0lCQVRBTkJna3Foa2lHOXcwQkFRc0ZBREFtTVNRd0lnWURWUVFEREJ0dmNHVnUKYzJocFpuUXRjMmxuYm1WeVFERTFNalUyTlRnek5qSXdIaGNOTVRnd05UQTNNREUxT1RJeVdoY05Nak13TlRBMgpNREUxT1RJeldqQW1NU1F3SWdZRFZRUUREQnR2Y0dWdWMyaHBablF0YzJsbmJtVnlRREUxTWpVMk5UZ3pOakl3CmdnRWlNQTBHQ1NxR1NJYjNEUUVCQVFVQUE0SUJEd0F3Z2dFS0FvSUJBUUN5cjViT2hXZjJoNThKZnVFdEJTREsKTlkrTStTMTFOYVVyNTdsOXUxRlIvZmtZMFUrUjFnNHppVGl2Nk1GWms4RzRhQnhTOVR0KzlId3RDZTFHVEcwZwpmcldEUFdmTVRHQy9VaHpxK0Y4Q2FqTXVlcnc5MElpdmRrZnZZQWlucnE4eFlJd1VNUU9YV2dVazdabE9yZ0p5CkdLcFg2eXFqMGR0RzVlcTQ2NUtKRHM1Nkg3VHBBaVdhV2NsWGQ2aWNiWnl6OUEyT3ZHMk0yMmNDVW9Pbzk0bU4KYWE1dyt3RGVJK1R2ZmdWZjZBRkpqL1NMaXJSdTJxM09LQy9EbDBCdFdoU0JtYU9iZWsvbm03THlVdlA0MU1sawp0TnBQY3l2dVdMaEJ4L2dzcFpIUEQybU1ucEZ3YzFETnZCUXREOEFuZEdCNzg3NWZZbmk0RE93YUM1bzVvTEFiCkFnTUJBQUdqSXpBaE1BNEdBMVVkRHdFQi93UUVBd0lDcERBUEJnTlZIUk1CQWY4RUJUQURBUUgvTUEwR0NTcUcKU0liM0RRRUJDd1VBQTRJQkFRQU04RE1NbGNKNWNtcWE5aUxuYnJ3YnhCL1AvTWtwaFFsRXE4TFVtbktFaGpoMgpmRGhIeWZyVjNER0E2cDUweTYwNGhKMFFRdEVkeCtnMldoaFJQbFIxREIxV3kyQUxicU5teFZIcUdvUm5NSHM4CjN1dWUwUDFIdVBJYTlPV2hhT21HUVYyVTRHRU5UQzRBUHhJaXpWM1lMYmR6TVRYT0tXS0RRMWxxOGE5QWRIa00KL0hTZ0xWVGduOGU2bHNQcFZrNytGTis2RzVUdDYrTVNua25vTjhNR0VpYlJMU3lMdHlOa09PNmNUY3hsN0hNUQpEdnUvSWg0VjBvZDlETlJnUDBpb1pOZkp4VStjWVRhM05JZysxTUo4dFk5OEpsTEFuYk44VllQWDdHc3NXMjZoCnMzeXV5aVp3L0diNkFBQnIvWE5nOEhEQko1eTE0VzVPNFVOOWpNVTEKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://host-8-249-82.host.centralci.eng.rdu2.redhat.com:8443
  name: host-8-249-82-host-centralci-eng-rdu2-redhat-com:8443
contexts:
- context:
    cluster: 172-16-120-63:8443
    namespace: default
    user: system:admin/172-16-120-63:8443
  name: default/172-16-120-63:8443/system:admin
- context:
    cluster: 172-16-120-63:8443
    namespace: default
    user: zhsun/172-16-120-63:8443
  name: default/172-16-120-63:8443/zhsun
- context:
    cluster: host-8-249-82-host-centralci-eng-rdu2-redhat-com:8443
    namespace: default
    user: system:admin/172-16-120-63:8443
  name: default/host-8-249-82-host-centralci-eng-rdu2-redhat-com:8443/system:admin
- context:
    cluster: 172-16-120-63:8443
    namespace: kube-service-catalog
    user: zhsun/172-16-120-63:8443
  name: kube-service-catalog/172-16-120-63:8443/zhsun
- context:
    cluster: 172-16-120-63:8443
    namespace: openshift-ansible-service-broker
    user: zhsun/172-16-120-63:8443
  name: openshift-ansible-service-broker/172-16-120-63:8443/zhsun
- context:
    cluster: 172-16-120-63:8443
    namespace: szh-project1
    user: test/172-16-120-63:8443
  name: szh-project1/172-16-120-63:8443/test
- context:
    cluster: 172-16-120-63:8443
    namespace: szh-project1
    user: zhsun/172-16-120-63:8443
  name: szh-project1/172-16-120-63:8443/zhsun
current-context: openshift-ansible-service-broker/172-16-120-63:8443/zhsun
kind: Config
preferences: {}
users:
- name: system:admin/172-16-120-63:8443
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURKRENDQWd5Z0F3SUJBZ0lCQmpBTkJna3Foa2lHOXcwQkFRc0ZBREFtTVNRd0lnWURWUVFEREJ0dmNHVnUKYzJocFpuUXRjMmxuYm1WeVFERTFNalUyTlRnek5qSXdIaGNOTVRnd05UQTNNREUxT1RJeldoY05NakF3TlRBMgpNREUxT1RJMFdqQk9NVFV3SEFZRFZRUUtFeFZ6ZVhOMFpXMDZZMngxYzNSbGNpMWhaRzFwYm5Nd0ZRWURWUVFLCkV3NXplWE4wWlcwNmJXRnpkR1Z5Y3pFVk1CTUdBMVVFQXhNTWMzbHpkR1Z0T21Ga2JXbHVNSUlCSWpBTkJna3EKaGtpRzl3MEJBUUVGQUFPQ0FROEFNSUlCQ2dLQ0FRRUFtaFQ5K2hNSmV5a0JkYjI2NlFJUzhzbHVYTVhrNk5IYgp4RUFCdVdxUnFLK1VBNmJxU2J2K2JOTVdaOGpXSCtCNU5rRkhObHhOd2drWlhPak5jUElvNXBCa0ExTlE4TFZHCmNwd0FFc0x5OE4xZlJGbTBMZ1JmVmpPWVNvMXdqKy84bThWdVFOL2hSWG9KajVRbWRvNXkvVmZORWxtdnY3YmIKWGVMb3d5RmYrODZGYktEMVE0Y0NJNjVqTHowOEluT0R5RkdiNGdZRVl6UDFob0NqLzRzYWQ3WmN2clp6cTVXegpPY3praXpCR2dmMGUrVi94SkJwTTh0d3d2a1c0VU5GOVkyeGM1dEV4VVBMcGYvUzJabkhpcFdvTlJXN1dzV3BBCnB6VjhoSm5SSTB3TndlK0lDSGVSTU5PbWRPNlFYSkRkRFgyUm80LzJpQzFycUpaRG55eUNEUUlEQVFBQm96VXcKTXpBT0JnTlZIUThCQWY4RUJBTUNCYUF3RXdZRFZSMGxCQXd3Q2dZSUt3WUJCUVVIQXdJd0RBWURWUjBUQVFILwpCQUl3QURBTkJna3Foa2lHOXcwQkFRc0ZBQU9DQVFFQXNQTHRMaGY4ZzNHZEFhTERCRUFHU3lVZHBJaHFYcUNwClVaQ2QrTTNYKytjOU45NmxZL3Q5TzlFUzlYZldjNlR2VmRyRy90RFJLbmQxc0QxenNSbmxGQ3NVOWFQMEFCNjQKRHVXbTBXNWorbFZvTEpXbEtmanJmRkNodXhCUlF4U0tOOTR5VFEzb1FZeWZ4S25EOFdKVDlPWEZ1Wi82TXNoWQpBREJkWnk3bEdGVFR6Z3pPSDhpcWJ4Ym9neXh5TjV5TElEcE1OOEF0eTBObkl6M2RXbDdYbmlYRE5YL29EZmhoCktHUkRVWjg1VGkzdXlObnlnY05VbUpuQzhDS0FxNlhpbmdKN3NLUkp0Mkovd1hRS3pPelN6eWFwRjFHcmd3UXAKRGhCUUlqWVpyejkvdEU4L2tCekVxbnFXMVVBSlZPS2lVYkRKYWI2a3RqbDBLQmllUUQ2ekRBPT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBbWhUOStoTUpleWtCZGIyNjZRSVM4c2x1WE1YazZOSGJ4RUFCdVdxUnFLK1VBNmJxClNiditiTk1XWjhqV0grQjVOa0ZITmx4Tndna1pYT2pOY1BJbzVwQmtBMU5ROExWR2Nwd0FFc0x5OE4xZlJGbTAKTGdSZlZqT1lTbzF3aisvOG04VnVRTi9oUlhvSmo1UW1kbzV5L1ZmTkVsbXZ2N2JiWGVMb3d5RmYrODZGYktEMQpRNGNDSTY1akx6MDhJbk9EeUZHYjRnWUVZelAxaG9Dai80c2FkN1pjdnJaenE1V3pPY3praXpCR2dmMGUrVi94CkpCcE04dHd3dmtXNFVORjlZMnhjNXRFeFVQTHBmL1MyWm5IaXBXb05SVzdXc1dwQXB6VjhoSm5SSTB3TndlK0kKQ0hlUk1OT21kTzZRWEpEZERYMlJvNC8yaUMxcnFKWkRueXlDRFFJREFRQUJBb0lCQVFDWm9nakRpcXZQYzhtUwozc1U1Zytua3oxZ05oUHlEOEl3U21FZWYyMVMxUDZ4MEg4QklHUHpOQVlTN294TnQ0V0s2NkVmYk9ob0dPUkJqCkJYV2pBcklwZ3h0Vi9ZTWRINExJMENkNmpZdXpBdWYwdlFUZFJWclNGc3ZvdWpMY01reEwvWVc3aGYrV1NPS3oKbU9McEg0d2tjNkYwaEp5cXFlYjlMRDB1STE4VE1RTGMrdHo3TThKeWNXbjlBNy92V203L3d3bG15MGxvbjA1MQpBbG12bi92RlROeFE2c1BPdGh0bVVMMHVhYWNqSXlBaTd3Z0V6WGR0dU9oNVdIcDRWVVkzb0F4V2Y0M01KQ3Y1CkE0MEsrZThaOTRmM3Y1UjBzZWRwNGF5bGNMSTY3T3plaFU5TzBRcDc3bjVyTFEyWWoveCszYzB1Tjk5OENCL0cKMlV1d2xwVjVBb0dCQU1ROG5QQzBlMUU2WCtnQklvSXFZVUxOeVh4ekhrTU80bGRHMWh3LzNtV293NnQwMkpPNQppRm85VkNxTkZlUTkyQWxjcC9FdW1hUCtMWnhHOWhacEZvOUduOFYvZHhscGRSSXNyUytZRmRHa01ianZJVXhMClU0ZzloSmlUdzQ1bXkzZzRCSjl4QlVMWld3bjdlTTJDU2FScWhTWVV6WVhsUEhyeVRqMEdHdW9uQW9HQkFNa0IKMUFMM3pMS0NTaW1MRmYrQ2xLOXVVQVNoTG1HTFViay9ZNFJaTFBhYWZ5SG5xK29rSkp3ajQ5Zi9PR0Q2WmhhVgpZV0s0K0Q5N05oYlgwazhUY2FFb3lUandzaUx6L3ljdlhWanh5Mk9NbFNGSElMcnYxcGkyWUxmcEN5Sk8wQVN5CitUTmUwZTREQVBGT3dKZWVrMVFRZVdVOEMvK0hGcWptQW1CZHoxYXJBb0dBTGR4TWdTUnN2V3I3QnVsYyt6YVUKVEZ4emZoWVpPR3ErRXMrSE1rcnEycTg2SzFPL1dhYSthdmh5TncrSjBqRnh6NTVMMFYybW9tREFvQWtsY2M5VgphbDZDOEZEOXNINm8wWHFIYnR3SUhWcFdGSFl1UGZwTXAwWlpPcDh0MEpYTmIyY0lEWXNJUTdzd1A0RDVsbTJnCmNFQkVwY1d1MGwva1pvZENLWmpIcVhrQ2dZRUF3M1Nxb3lRd2dOaGVveHpqREN6K3hjUGZ2VkM2R0JId0t0RXYKeE84dDRMSUVzNFJpVC9CTFJTVkZGYkRRTXNUcDVrWGxoNmpUaEc4Yi9vUm90WW56c1VGR3diNlRpWmQzWWJRVgo4anBVaXYyVnVlRS9PMmVpWmFiYVQ1c2x2M3VoblNBbllFTndvUjk4bUNqNjc3UThFTDlnUEFkeXAwdkZ4Z0M3CkNOZlBtTGNDZ1lBbG41Wm4zcDFXN0F6enArT3VTT08xVFZWeC9YSXRzQ0piZXlDd1VTSE9LOTBoZk9nZEVzbUgKTE52ZDhhWXMwaXdzMmdNSFAyMGs0VVdSNVNxTnhsSWRFWVgzSy9RWVl1WE4zWXNNRWtwNUJqaFhsNE1GWXN5eQpTUlQxZG5QbUs0Mnc1STN3cjl0VDg0eFVKbnB2QXQ2Sld1TFZBZ3YzSVFTWFZTS0h5d21aNWc9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
- name: test/172-16-120-63:8443
  user:
    token: jUThRNlil95UQ9tsa5HPYfuMiLQ239amBy3VjpkGA-g
- name: zhsun/172-16-120-63:8443
  user:
    token: qjeBLlYD4U9w1qdvgwVvvfUwbyLfMUB49kiryJFNErc

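The kubeconfig dump above wires three users (an admin client certificate and two token users) to seven contexts, and `current-context` selects which credentials `oc` sends. A minimal sketch of resolving that link with Ruby's stdlib YAML, run against a made-up config (the names and the token value are placeholders, not taken from this log):

```ruby
require 'yaml'

# Placeholder config mirroring the structure of the dump above.
SAMPLE_KUBECONFIG = <<~CFG
  apiVersion: v1
  kind: Config
  current-context: demo/cluster-a/alice
  contexts:
  - name: demo/cluster-a/alice
    context:
      cluster: cluster-a
      namespace: demo
      user: alice/cluster-a
  users:
  - name: alice/cluster-a
    user:
      token: not-a-real-token
CFG

# Follow current-context -> context.user -> users[].user.token.
def current_user_token(yaml_text)
  cfg  = YAML.safe_load(yaml_text)
  ctx  = cfg['contexts'].find { |c| c['name'] == cfg['current-context'] }
  user = cfg['users'].find { |u| u['name'] == ctx['context']['user'] }
  user['user']['token']
end

puts current_user_token(SAMPLE_KUBECONFIG)  # => not-a-real-token
```

This is the lookup any kubectl-style client performs before attaching a bearer token to its requests.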
[03:38:13] INFO> Shell Commands: rm -f -- /home/szh/workdir/localhost-szh/ose_admin.kubeconfig

[03:38:13] INFO> Exit Status: 0
[03:38:13] INFO> Shell Commands: oc config set-credentials admin --client-certificate=/home/szh/workdir/localhost-szh/clcert20180508-22784-2i8mpy --client-key=/home/szh/workdir/localhost-szh/clkey20180508-22784-xmlqpo --embed-certs=true --server=https://host-8-249-82.host.centralci.eng.rdu2.redhat.com:8443 --config=/home/szh/workdir/localhost-szh/ose_admin.kubeconfig --insecure-skip-tls-verify=true
User "admin" set.

[03:38:13] INFO> Exit Status: 0
[03:38:13] INFO> Shell Commands: oc config set-cluster default --server=https://host-8-249-82.host.centralci.eng.rdu2.redhat.com:8443 --config=/home/szh/workdir/localhost-szh/ose_admin.kubeconfig --insecure-skip-tls-verify=true
Cluster "default" set.

[03:38:13] INFO> Exit Status: 0
[03:38:13] INFO> Shell Commands: oc config set-context default --cluster=default --user=admin --config=/home/szh/workdir/localhost-szh/ose_admin.kubeconfig --insecure-skip-tls-verify=true
Context "default" created.

[03:38:13] INFO> Exit Status: 0
[03:38:13] INFO> Shell Commands: oc config use-context default --config=/home/szh/workdir/localhost-szh/ose_admin.kubeconfig --insecure-skip-tls-verify=true
Switched to context "default".

[03:38:14] INFO> Exit Status: 0
[03:38:14] INFO> Shell Commands: oc get nodes --output=yaml --config=/home/szh/workdir/localhost-szh/ose_admin.kubeconfig
apiVersion: v1
items:
- apiVersion: v1
  kind: Node
  metadata:
    annotations:
      volumes.kubernetes.io/controller-managed-attach-detach: "true"
    creationTimestamp: 2018-05-07T02:06:17Z
    labels:
      beta.kubernetes.io/arch: amd64
      beta.kubernetes.io/instance-type: "3"
      beta.kubernetes.io/os: linux
      failure-domain.beta.kubernetes.io/region: regionOne
      failure-domain.beta.kubernetes.io/zone: nova
      kubernetes.io/hostname: 172.16.120.63
      node-role.kubernetes.io/master: "true"
      role: node
    name: 172.16.120.63
    namespace: ""
    resourceVersion: "193591"
    selfLink: /api/v1/nodes/172.16.120.63
    uid: 3a9bf586-519b-11e8-9f32-fa163edc217c
  spec:
    externalID: e923cf0d-25dd-4e5f-9b3d-52fa092f8d97
    providerID: openstack:///e923cf0d-25dd-4e5f-9b3d-52fa092f8d97
  status:
    addresses:
    - address: 172.16.120.63
      type: InternalIP
    - address: 10.8.249.82
      type: ExternalIP
    - address: 172.16.120.63
      type: Hostname
    allocatable:
      cpu: "2"
      memory: 3779188Ki
      pods: "250"
    capacity:
      cpu: "2"
      memory: 3881588Ki
      pods: "250"
    conditions:
    - lastHeartbeatTime: 2018-05-08T03:36:49Z
      lastTransitionTime: 2018-05-07T02:06:17Z
      message: kubelet has sufficient disk space available
      reason: KubeletHasSufficientDisk
      status: "False"
      type: OutOfDisk
    - lastHeartbeatTime: 2018-05-08T03:36:49Z
      lastTransitionTime: 2018-05-07T02:06:17Z
      message: kubelet has sufficient memory available
      reason: KubeletHasSufficientMemory
      status: "False"
      type: MemoryPressure
    - lastHeartbeatTime: 2018-05-08T03:36:49Z
      lastTransitionTime: 2018-05-07T02:06:17Z
      message: kubelet has no disk pressure
      reason: KubeletHasNoDiskPressure
      status: "False"
      type: DiskPressure
    - lastHeartbeatTime: 2018-05-08T03:36:49Z
      lastTransitionTime: 2018-05-08T03:10:30Z
      message: kubelet is posting ready status
      reason: KubeletReady
      status: "True"
      type: Ready
    daemonEndpoints:
      kubeletEndpoint:
        Port: 10250
    images:
    - names:
      - registry.reg-aws.openshift.com:443/openshift3/ose-deployer@sha256:019d782e2d5e4fd1a04155a7d2b0fb5bdd37dcee2aab0f51409c3625c450d69b
      - registry.reg-aws.openshift.com:443/openshift3/ose-deployer:v3.9.27
      sizeBytes: 1249398675
    - names:
      - registry.reg-aws.openshift.com:443/openshift3/ose@sha256:b3da66417a58f8393fb5e3c18ab5ff0aa871cbf39ec9ef9499ad488217a87c0a
      - registry.reg-aws.openshift.com:443/openshift3/ose:v3.9.27
      sizeBytes: 1249387724
    - names:
      - registry.reg-aws.openshift.com:443/openshift3/ose-web-console@sha256:2b4e7533d9f4ee450fdb9dac3b096ef619538da7c0b9f8489e5aee4cee18cc3e
      - registry.reg-aws.openshift.com:443/openshift3/ose-web-console:v3.9.27
      sizeBytes: 480353079
    - names:
      - registry.reg-aws.openshift.com:443/openshift3/ose-template-service-broker@sha256:80eed2e98877a6252d2ba497ce8b33047fd37f9fabd01e9d1fbfd76f819f8a6a
      - registry.reg-aws.openshift.com:443/openshift3/ose-template-service-broker:v3.9.27
      sizeBytes: 309580007
    - names:
      - brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/ose-service-catalog@sha256:333cecc0e30e3938737d04cdf3b4055ea7a8d746ca3ef89138caff70ce7ac860
      - brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/ose-service-catalog:v3.9.27
      sizeBytes: 297671204
    - names:
      - registry.reg-aws.openshift.com:443/openshift3/registry-console@sha256:9f97701c4f588c8d6d1679e4262759f03ed8751ce7a72b3b7a7e7a11cd985141
      - registry.reg-aws.openshift.com:443/openshift3/registry-console:v3.9
      sizeBytes: 241904374
    - names:
      - docker.io/kubernetes/pause@sha256:2088df8eb02f10aae012e6d4bc212cabb0ada93cb05f09e504af0c9811e0ca14
      - docker.io/kubernetes/pause:latest
      sizeBytes: 250665
    nodeInfo:
      architecture: amd64
      bootID: 4aa9cd8a-ded9-43ae-bee7-e194809656b6
      containerRuntimeVersion: cri-o://1.9.11
      kernelVersion: 3.10.0-693.21.1.el7.x86_64
      kubeProxyVersion: v1.9.1+a0ce1bc657
      kubeletVersion: v1.9.1+a0ce1bc657
      machineID: 59c182966c84403bb130ee850992895b
      operatingSystem: linux
      osImage: Red Hat Enterprise Linux Server 7.4 (Maipo)
      systemUUID: E923CF0D-25DD-4E5F-9B3D-52FA092F8D97
- apiVersion: v1
  kind: Node
  metadata:
    annotations:
      volumes.kubernetes.io/controller-managed-attach-detach: "true"
    creationTimestamp: 2018-05-07T02:06:17Z
    labels:
      beta.kubernetes.io/arch: amd64
      beta.kubernetes.io/instance-type: "3"
      beta.kubernetes.io/os: linux
      failure-domain.beta.kubernetes.io/region: regionOne
      failure-domain.beta.kubernetes.io/zone: nova
      kubernetes.io/hostname: 172.16.120.67
      node-role.kubernetes.io/compute: "true"
      registry: enabled
      role: node
      router: enabled
    name: 172.16.120.67
    namespace: ""
    resourceVersion: "193596"
    selfLink: /api/v1/nodes/172.16.120.67
    uid: 3a5706a5-519b-11e8-9f32-fa163edc217c
  spec:
    externalID: e74d28d6-4fa7-45fc-bb83-eff4bf4d81e6
    providerID: openstack:///e74d28d6-4fa7-45fc-bb83-eff4bf4d81e6
  status:
    addresses:
    - address: 172.16.120.67
      type: InternalIP
    - address: 10.8.243.255
      type: ExternalIP
    - address: 172.16.120.67
      type: Hostname
    allocatable:
      cpu: "2"
      memory: 3779192Ki
      pods: "250"
    capacity:
      cpu: "2"
      memory: 3881592Ki
      pods: "250"
    conditions:
    - lastHeartbeatTime: 2018-05-08T03:36:56Z
      lastTransitionTime: 2018-05-07T02:06:16Z
      message: kubelet has sufficient disk space available
      reason: KubeletHasSufficientDisk
      status: "False"
      type: OutOfDisk
    - lastHeartbeatTime: 2018-05-08T03:36:56Z
      lastTransitionTime: 2018-05-07T02:06:16Z
      message: kubelet has sufficient memory available
      reason: KubeletHasSufficientMemory
      status: "False"
      type: MemoryPressure
    - lastHeartbeatTime: 2018-05-08T03:36:56Z
      lastTransitionTime: 2018-05-07T02:06:16Z
      message: kubelet has no disk pressure
      reason: KubeletHasNoDiskPressure
      status: "False"
      type: DiskPressure
    - lastHeartbeatTime: 2018-05-08T03:36:56Z
      lastTransitionTime: 2018-05-08T03:27:04Z
      message: kubelet is posting ready status
      reason: KubeletReady
      status: "True"
      type: Ready
    daemonEndpoints:
      kubeletEndpoint:
        Port: 10250
    images:
    - names:
      - registry.reg-aws.openshift.com:443/openshift3/ose-haproxy-router@sha256:77ca6449b1f3ab06190fed80af9a71d1a2c3680bb652b3ca60fbd9b5bfd82a44
      - registry.reg-aws.openshift.com:443/openshift3/ose-haproxy-router:v3.9.27
      sizeBytes: 1270002104
    - names:
      - registry.reg-aws.openshift.com:443/openshift3/ose-deployer@sha256:019d782e2d5e4fd1a04155a7d2b0fb5bdd37dcee2aab0f51409c3625c450d69b
      - registry.reg-aws.openshift.com:443/openshift3/ose-deployer:v3.9.27
      sizeBytes: 1249398675
    - names:
      - registry.reg-aws.openshift.com:443/openshift3/ose-sti-builder@sha256:555cbc72c24104d8561df96ccb94917e524cf9b10bee898bcd3eedee6490870a
      - registry.reg-aws.openshift.com:443/openshift3/ose-sti-builder:v3.9.27
      sizeBytes: 1249398416
    - names:
      - registry.reg-aws.openshift.com:443/openshift3/ose@sha256:b3da66417a58f8393fb5e3c18ab5ff0aa871cbf39ec9ef9499ad488217a87c0a
      - registry.reg-aws.openshift.com:443/openshift3/ose:v3.9.27
      sizeBytes: 1249387724
    - names:
      - docker.io/szh1124/midiawiki-apb@sha256:4ee73bf861e06208d8fc9c14e4e8344879c3092133f21a53f8f618acafeb5e1b
      - docker.io/szh1124/midiawiki-apb:latest
      sizeBytes: 971542838
    - names:
      - docker.io/szh1124/mysql-apb@sha256:17da5453b592be7831894bdbb772098bab9c934ff8e0c12b13aeee6c7d10f7db
      - docker.io/szh1124/mysql-apb:latest
      sizeBytes: 971380404
    - names:
      - docker.io/szh1124/mariadb-apb@sha256:ba34f30678ceae1f9e671c03f178eb3cffbd45fe4d85907aadceec151137e0dc
      - docker.io/szh1124/mariadb-apb:latest
      sizeBytes: 971379832
    - names:
      - brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/ose-ansible-service-broker@sha256:553c14e0f91f45e6288b926f3fc116534e6012f081ee498dc8c81d7c02f26de5
      - brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/ose-ansible-service-broker:v3.9.27
      sizeBytes: 565399278
    - names:
      - docker-registry.default.svc:5000/install-test/nodejs-mongodb-example@sha256:d53db1ac5d2557fc297179624d14a621f5c895835b5bcf9f86ba44134d63ada6
      sizeBytes: 538751235
    - names:
      - registry.access.redhat.com/openshift3/mediawiki-123@sha256:4810c9517de198c2cdeb47c101d0255d95101c197f2c06636d73c8c9d7e89a9f
      - registry.access.redhat.com/openshift3/mediawiki-123:latest
      sizeBytes: 523128495
    - names:
      - brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/rhscl/mongodb-32-rhel7@sha256:4ed8eb86e5ab93e8c74ce86fbfff4f06e6d9474c28c05ddeefa48464f579abf2
      sizeBytes: 457827666
    - names:
      - registry.reg-aws.openshift.com:443/openshift3/ose-docker-registry@sha256:7e0aa7672d91923f784b0188065904598cbd31a6b09acc9733f73ed6f6a1449d
      - registry.reg-aws.openshift.com:443/openshift3/ose-docker-registry:v3.9.27
      sizeBytes: 450135351
    - names:
      - registry.access.redhat.com/rhscl/mariadb-102-rhel7@sha256:2d8735e18e525da02c93e7b7c096d821bc154cc5725fa99f0f680856693d9a87
      - registry.access.redhat.com/rhscl/mariadb-102-rhel7:latest
      sizeBytes: 448233343
    - names:
      - registry.access.redhat.com/rhscl/mysql-57-rhel7@sha256:96ba047263b37e769a01bf62b384c122477a82a56a8e2477de85b8355d4459da
      - registry.access.redhat.com/rhscl/mysql-57-rhel7:latest
      sizeBytes: 442560309
    - names:
      - registry.reg-aws.openshift.com:443/openshift3/ose-template-service-broker@sha256:80eed2e98877a6252d2ba497ce8b33047fd37f9fabd01e9d1fbfd76f819f8a6a
      - registry.reg-aws.openshift.com:443/openshift3/ose-template-service-broker:v3.9.27
      sizeBytes: 309580007
    - names:
      - registry.access.redhat.com/rhel7/etcd@sha256:f3de7e64562d2237afff095c4fc6937d64092343072834dcb9319129b8f4b60a
      - registry.access.redhat.com/rhel7/etcd:latest
      sizeBytes: 266094780
    - names:
      - docker.io/library/registry@sha256:946480a23b33480b8e7cdb89b82c1bd6accae91a8e66d017e21e8b56551f6209
      sizeBytes: 40069443
    - names:
      - docker.io/aosqe/pod-for-ping@sha256:23ff71f5a8774055faedb0ae5aa882fcf8c1dd93e4a19b09ca1085478ad315ab
      - docker.io/aosqe/pod-for-ping:latest
      sizeBytes: 36424563
    - names:
      - docker.io/library/registry@sha256:feb40d14cd33e646b9985e2d6754ed66616fedb840226c4d917ef53d616dcd6c
      sizeBytes: 35754932
    - names:
      - docker.io/kubernetes/pause@sha256:2088df8eb02f10aae012e6d4bc212cabb0ada93cb05f09e504af0c9811e0ca14
      - docker.io/kubernetes/pause:latest
      sizeBytes: 250665
    nodeInfo:
      architecture: amd64
      bootID: b52aa2f2-797d-44a0-8eb6-fb17e807f152
      containerRuntimeVersion: cri-o://1.9.11
      kernelVersion: 3.10.0-693.21.1.el7.x86_64
      kubeProxyVersion: v1.9.1+a0ce1bc657
      kubeletVersion: v1.9.1+a0ce1bc657
      machineID: 59c182966c84403bb130ee850992895b
      operatingSystem: linux
      osImage: Red Hat Enterprise Linux Server 7.4 (Maipo)
      systemUUID: E74D28D6-4FA7-45FC-BB83-EFF4BF4D81E6
    volumesAttached:
    - devicePath: /dev/vdb
      name: kubernetes.io/cinder/570625a8-3a71-4a14-aee8-07c8059a512e
    - devicePath: /dev/vdc
      name: kubernetes.io/cinder/30a265bd-94e5-4700-a8ea-4c0fa5eab6ba
    volumesInUse:
    - kubernetes.io/cinder/30a265bd-94e5-4700-a8ea-4c0fa5eab6ba
    - kubernetes.io/cinder/570625a8-3a71-4a14-aee8-07c8059a512e
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

[03:38:16] INFO> Exit Status: 0
[03:38:16] INFO> Shell Commands: oc get nodes 172.16.120.63 --output=yaml --config=/home/szh/workdir/localhost-szh/ose_admin.kubeconfig
apiVersion: v1
kind: Node
metadata:
  annotations:
    volumes.kubernetes.io/controller-managed-attach-detach: "true"
  creationTimestamp: 2018-05-07T02:06:17Z
  labels:
    beta.kubernetes.io/arch: amd64
    beta.kubernetes.io/instance-type: "3"
    beta.kubernetes.io/os: linux
    failure-domain.beta.kubernetes.io/region: regionOne
    failure-domain.beta.kubernetes.io/zone: nova
    kubernetes.io/hostname: 172.16.120.63
    node-role.kubernetes.io/master: "true"
    role: node
  name: 172.16.120.63
  resourceVersion: "193591"
  selfLink: /api/v1/nodes/172.16.120.63
  uid: 3a9bf586-519b-11e8-9f32-fa163edc217c
spec:
  externalID: e923cf0d-25dd-4e5f-9b3d-52fa092f8d97
  providerID: openstack:///e923cf0d-25dd-4e5f-9b3d-52fa092f8d97
status:
  addresses:
  - address: 172.16.120.63
    type: InternalIP
  - address: 10.8.249.82
    type: ExternalIP
  - address: 172.16.120.63
    type: Hostname
  allocatable:
    cpu: "2"
    memory: 3779188Ki
    pods: "250"
  capacity:
    cpu: "2"
    memory: 3881588Ki
    pods: "250"
  conditions:
  - lastHeartbeatTime: 2018-05-08T03:36:49Z
    lastTransitionTime: 2018-05-07T02:06:17Z
    message: kubelet has sufficient disk space available
    reason: KubeletHasSufficientDisk
    status: "False"
    type: OutOfDisk
  - lastHeartbeatTime: 2018-05-08T03:36:49Z
    lastTransitionTime: 2018-05-07T02:06:17Z
    message: kubelet has sufficient memory available
    reason: KubeletHasSufficientMemory
    status: "False"
    type: MemoryPressure
  - lastHeartbeatTime: 2018-05-08T03:36:49Z
    lastTransitionTime: 2018-05-07T02:06:17Z
    message: kubelet has no disk pressure
    reason: KubeletHasNoDiskPressure
    status: "False"
    type: DiskPressure
  - lastHeartbeatTime: 2018-05-08T03:36:49Z
    lastTransitionTime: 2018-05-08T03:10:30Z
    message: kubelet is posting ready status
    reason: KubeletReady
    status: "True"
    type: Ready
  daemonEndpoints:
    kubeletEndpoint:
      Port: 10250
  images:
  - names:
    - registry.reg-aws.openshift.com:443/openshift3/ose-deployer@sha256:019d782e2d5e4fd1a04155a7d2b0fb5bdd37dcee2aab0f51409c3625c450d69b
    - registry.reg-aws.openshift.com:443/openshift3/ose-deployer:v3.9.27
    sizeBytes: 1249398675
  - names:
    - registry.reg-aws.openshift.com:443/openshift3/ose@sha256:b3da66417a58f8393fb5e3c18ab5ff0aa871cbf39ec9ef9499ad488217a87c0a
    - registry.reg-aws.openshift.com:443/openshift3/ose:v3.9.27
    sizeBytes: 1249387724
  - names:
    - registry.reg-aws.openshift.com:443/openshift3/ose-web-console@sha256:2b4e7533d9f4ee450fdb9dac3b096ef619538da7c0b9f8489e5aee4cee18cc3e
    - registry.reg-aws.openshift.com:443/openshift3/ose-web-console:v3.9.27
    sizeBytes: 480353079
  - names:
    - registry.reg-aws.openshift.com:443/openshift3/ose-template-service-broker@sha256:80eed2e98877a6252d2ba497ce8b33047fd37f9fabd01e9d1fbfd76f819f8a6a
    - registry.reg-aws.openshift.com:443/openshift3/ose-template-service-broker:v3.9.27
    sizeBytes: 309580007
  - names:
    - brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/ose-service-catalog@sha256:333cecc0e30e3938737d04cdf3b4055ea7a8d746ca3ef89138caff70ce7ac860
    - brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/ose-service-catalog:v3.9.27
    sizeBytes: 297671204
  - names:
    - registry.reg-aws.openshift.com:443/openshift3/registry-console@sha256:9f97701c4f588c8d6d1679e4262759f03ed8751ce7a72b3b7a7e7a11cd985141
    - registry.reg-aws.openshift.com:443/openshift3/registry-console:v3.9
    sizeBytes: 241904374
  - names:
    - docker.io/kubernetes/pause@sha256:2088df8eb02f10aae012e6d4bc212cabb0ada93cb05f09e504af0c9811e0ca14
    - docker.io/kubernetes/pause:latest
    sizeBytes: 250665
  nodeInfo:
    architecture: amd64
    bootID: 4aa9cd8a-ded9-43ae-bee7-e194809656b6
    containerRuntimeVersion: cri-o://1.9.11
    kernelVersion: 3.10.0-693.21.1.el7.x86_64
    kubeProxyVersion: v1.9.1+a0ce1bc657
    kubeletVersion: v1.9.1+a0ce1bc657
    machineID: 59c182966c84403bb130ee850992895b
    operatingSystem: linux
    osImage: Red Hat Enterprise Linux Server 7.4 (Maipo)
    systemUUID: E923CF0D-25DD-4E5F-9B3D-52FA092F8D97

[03:38:18] INFO> Exit Status: 0
[03:38:18] INFO> Remote cmd: `cd '/tmp/workdir/localhost-szh'
getent ahosts '172.16.120.63' | awk '{print $1; exit}'` @ssh://root@host-8-249-82.host.centralci.eng.rdu2.redhat.com
172.16.120.63

[03:38:19] INFO> Exit Status: 0
[03:38:19] INFO> Remote cmd: `cd '/tmp/workdir/localhost-szh'
ip route get '172.16.120.63' | sed -rn 's/^.*src (([0-9]+.?){4}|[0-9a-f:]+).*/\1/p'` @ssh://root@host-8-249-82.host.centralci.eng.rdu2.redhat.com
172.16.120.63

[03:38:21] INFO> Exit Status: 0
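The two remote commands above resolve the selected node's address: `getent ahosts` takes the first resolved IP, and the `ip route get … | sed` pipeline extracts the `src` hint, i.e. the local address the host would use to reach the node. The same `src` extraction in Ruby, run against a fabricated route line (the sample output is an assumption, not taken from this host):

```ruby
# Extract the preferred source IP from one line of `ip route get` output.
# Mirrors the sed expression in the log: take what follows "src ", either a
# dotted IPv4 quad or a run of hex digits/colons for IPv6. Returns nil if
# the line carries no src hint.
def route_src_ip(line)
  m = line.match(/src (([0-9]+\.?){4}|[0-9a-f:]+)/)
  m && m[1]
end

# Fabricated sample line; a real one comes from `ip route get <dest>`.
puts route_src_ip('172.16.120.63 dev eth0 src 172.16.120.99 uid 0')  # => 172.16.120.99
```

In the log both commands return the node IP itself because the command runs on the master, which is the node being probed.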
waiting for operation up to 3600 seconds..
waiting for operation up to 600 seconds..
waiting for operation up to 3600 seconds..
waiting for operation up to 900 seconds..
waiting for operation up to 3600 seconds..
waiting for operation up to 3600 seconds..
waiting for operation up to 3600 seconds..
waiting for operation up to 3600 seconds..
waiting for operation up to 600 seconds..
waiting for operation up to 3600 seconds..
waiting for operation up to 900 seconds..
waiting for operation up to 3600 seconds..
waiting for operation up to 3600 seconds..
waiting for operation up to 3600 seconds..
waiting for operation up to 600 seconds..
waiting for operation up to 3600 seconds..
waiting for operation up to 900 seconds..
waiting for operation up to 3600 seconds..
597 And I have a registry with htpasswd authentication enabled in my project # features/step_definitions/helper_services.rb:612
598 [03:38:21] INFO> Shell Commands: oc new-app --docker-image=registry:2 --config=/home/szh/workdir/localhost-szh/ose_admin.kubeconfig --namespace=lxlyp
599 --> Found Docker image d1fd7d8 (3 months old) from Docker Hub for "registry:2"
600
601 * An image stream will be created as "registry:2" that will track this image
602 * This image will be deployed in deployment config "registry"
603 * Port 5000/tcp will be load balanced by service "registry"
604 * Other containers can access this service through the hostname "registry"
605 * This image declares volumes and will default to use non-persistent, host-local storage.
606 You can add persistent volumes later by running 'volume dc/registry --add ...'
607 * WARNING: Image "registry:2" runs as the 'root' user which may not be permitted by your cluster administrator
608
609 --> Creating resources ...
610 imagestream "registry" created
611 deploymentconfig "registry" created
612 service "registry" created
613 --> Success
614 Application is not exposed. You can expose services to the outside world by executing one or more of the commands below:
615 'oc expose svc/registry'
616 Run 'oc status' to view your app.
617
618 [03:38:25] INFO> Exit Status: 0
619 [03:38:27] INFO> oc get pods --output=yaml -l deploymentconfig\=registry --config=/home/szh/workdir/localhost-szh/ose_zhsun_1.kubeconfig -n lxlyp
620 [03:38:27] INFO> 1 iterations for 2 sec, returned 1 pods, 1 matching
621 [03:38:29] INFO> oc get pods registry-1-w75qt --output=yaml --config=/home/szh/workdir/localhost-szh/ose_zhsun_1.kubeconfig --namespace=lxlyp
622 [03:38:29] INFO> After 1 iterations and 2 seconds:
623 apiVersion: v1
624 kind: Pod
625 metadata:
626 annotations:
627 openshift.io/deployment-config.latest-version: "1"
628 openshift.io/deployment-config.name: registry
629 openshift.io/deployment.name: registry-1
630 openshift.io/generated-by: OpenShiftNewApp
631 openshift.io/scc: restricted
632 creationTimestamp: 2018-05-08T03:37:06Z
633 generateName: registry-1-
634 labels:
635 app: registry
636 deployment: registry-1
637 deploymentconfig: registry
638 name: registry-1-w75qt
639 namespace: lxlyp
640 ownerReferences:
641 - apiVersion: v1
642 blockOwnerDeletion: true
643 controller: true
644 kind: ReplicationController
645 name: registry-1
646 uid: 133b8609-5271-11e8-a042-fa163edc217c
647 resourceVersion: "193669"
648 selfLink: /api/v1/namespaces/lxlyp/pods/registry-1-w75qt
649 uid: 14810760-5271-11e8-a042-fa163edc217c
650 spec:
651 containers:
652 - image: registry@sha256:feb40d14cd33e646b9985e2d6754ed66616fedb840226c4d917ef53d616dcd6c
653 imagePullPolicy: IfNotPresent
654 name: registry
655 ports:
656 - containerPort: 5000
657 protocol: TCP
658 resources: {}
659 securityContext:
660 capabilities:
661 drop:
662 - KILL
663 - MKNOD
664 - SETGID
665 - SETUID
666 runAsUser: 1000390000
667 terminationMessagePath: /dev/termination-log
668 terminationMessagePolicy: File
669 volumeMounts:
670 - mountPath: /var/lib/registry
671 name: registry-volume-1
672 - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
673 name: default-token-fmdcs
674 readOnly: true
675 dnsPolicy: ClusterFirst
676 imagePullSecrets:
677 - name: default-dockercfg-hq7mv
678 nodeName: 172.16.120.67
679 nodeSelector:
680 node-role.kubernetes.io/compute: "true"
681 restartPolicy: Always
682 schedulerName: default-scheduler
683 securityContext:
684 fsGroup: 1000390000
685 seLinuxOptions:
686 level: s0:c20,c5
687 serviceAccount: default
688 serviceAccountName: default
689 terminationGracePeriodSeconds: 30
690 volumes:
691 - emptyDir: {}
692 name: registry-volume-1
693 - name: default-token-fmdcs
694 secret:
695 defaultMode: 420
696 secretName: default-token-fmdcs
697 status:
698 conditions:
699 - lastProbeTime: null
700 lastTransitionTime: 2018-05-08T03:37:12Z
701 status: "True"
702 type: Initialized
703 - lastProbeTime: null
704 lastTransitionTime: 2018-05-08T03:37:14Z
705 status: "True"
706 type: Ready
707 - lastProbeTime: null
708 lastTransitionTime: 2018-05-08T03:37:06Z
709 status: "True"
710 type: PodScheduled
711 containerStatuses:
712 - containerID: cri-o://e93db227c1cde2511d1aea17539efde9392c2d0585c50b0231a76805be1f0eae
713 image: docker.io/library/registry@sha256:feb40d14cd33e646b9985e2d6754ed66616fedb840226c4d917ef53d616dcd6c
714 imageID: docker.io/library/registry@sha256:feb40d14cd33e646b9985e2d6754ed66616fedb840226c4d917ef53d616dcd6c
715 lastState: {}
716 name: registry
717 ready: true
718 restartCount: 0
719 state:
720 running:
721 startedAt: 2018-05-08T03:37:14Z
722 hostIP: 172.16.120.67
723 phase: Running
724 podIP: 10.128.0.124
725 qosClass: BestEffort
726 startTime: 2018-05-08T03:37:12Z
727
728 [03:38:29] INFO> HTTP GET https://raw.githubusercontent.com/openshift-qe/v3-testfiles/master/registry/htpasswd
729 [03:38:29] INFO> HTTP GET took 0.469 sec: 200 OK | text/plain 71 bytes
730
731 [03:38:29] INFO> Shell Commands: oc secrets new htpasswd-secret ./htpasswd --config=/home/szh/workdir/localhost-szh/ose_zhsun_1.kubeconfig --namespace=lxlyp
732 secret/htpasswd-secret
733
734 STDERR:
735 Command "new" is deprecated, use oc create secret
736
737 [03:38:31] INFO> Exit Status: 0
738 [03:38:31] INFO> Shell Commands: oc volume dc/registry --add=true --mount-path=/auth --type=secret --secret-name=htpasswd-secret --config=/home/szh/workdir/localhost-szh/ose_zhsun_1.kubeconfig --namespace=lxlyp
739 deploymentconfig "registry" updated
740
741 STDERR:
742 info: Generated volume name: volume-2rr4t
743
744 [03:38:47] INFO> Exit Status: 0
745 [03:38:47] INFO> Shell Commands: oc env dc/registry -e REGISTRY_AUTH_HTPASSWD_PATH\=/auth/htpasswd -e REGISTRY_AUTH_HTPASSWD_REALM\=Registry\ Realm -e REGISTRY_AUTH\=htpasswd --config=/home/szh/workdir/localhost-szh/ose_zhsun_1.kubeconfig --namespace=lxlyp
746 deploymentconfig "registry" updated
747
748 [03:38:49] INFO> Exit Status: 0
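(Editor's note) The `oc env` call above configures htpasswd auth via docker/distribution's convention of mapping `REGISTRY_`-prefixed environment variables onto its config tree. As a sketch, the three variables set on `dc/registry` correspond to this `config.yml` fragment (realm and path taken from the command above):

```yaml
# Equivalent registry config.yml section for:
#   REGISTRY_AUTH=htpasswd
#   REGISTRY_AUTH_HTPASSWD_REALM="Registry Realm"
#   REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd
auth:
  htpasswd:
    realm: Registry Realm
    path: /auth/htpasswd
```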
749 [03:38:51] INFO> oc get pods --output=yaml -l deploymentconfig\=registry --config=/home/szh/workdir/localhost-szh/ose_zhsun_1.kubeconfig -n lxlyp
750 [03:38:51] INFO> 1 iterations for 2 sec, returned 1 pods, 1 matching
751 [03:38:53] INFO> oc get pods registry-1-w75qt --output=yaml --config=/home/szh/workdir/localhost-szh/ose_zhsun_1.kubeconfig --namespace=lxlyp
752 [03:38:53] INFO> After 1 iterations and 2 seconds:
753 apiVersion: v1
754 kind: Pod
755 metadata:
756 annotations:
757 openshift.io/deployment-config.latest-version: "1"
758 openshift.io/deployment-config.name: registry
759 openshift.io/deployment.name: registry-1
760 openshift.io/generated-by: OpenShiftNewApp
761 openshift.io/scc: restricted
762 creationTimestamp: 2018-05-08T03:37:06Z
763 generateName: registry-1-
764 labels:
765 app: registry
766 deployment: registry-1
767 deploymentconfig: registry
768 name: registry-1-w75qt
769 namespace: lxlyp
770 ownerReferences:
771 - apiVersion: v1
772 blockOwnerDeletion: true
773 controller: true
774 kind: ReplicationController
775 name: registry-1
776 uid: 133b8609-5271-11e8-a042-fa163edc217c
777 resourceVersion: "193669"
778 selfLink: /api/v1/namespaces/lxlyp/pods/registry-1-w75qt
779 uid: 14810760-5271-11e8-a042-fa163edc217c
780 spec:
781 containers:
782 - image: registry@sha256:feb40d14cd33e646b9985e2d6754ed66616fedb840226c4d917ef53d616dcd6c
783 imagePullPolicy: IfNotPresent
784 name: registry
785 ports:
786 - containerPort: 5000
787 protocol: TCP
788 resources: {}
789 securityContext:
790 capabilities:
791 drop:
792 - KILL
793 - MKNOD
794 - SETGID
795 - SETUID
796 runAsUser: 1000390000
797 terminationMessagePath: /dev/termination-log
798 terminationMessagePolicy: File
799 volumeMounts:
800 - mountPath: /var/lib/registry
801 name: registry-volume-1
802 - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
803 name: default-token-fmdcs
804 readOnly: true
805 dnsPolicy: ClusterFirst
806 imagePullSecrets:
807 - name: default-dockercfg-hq7mv
808 nodeName: 172.16.120.67
809 nodeSelector:
810 node-role.kubernetes.io/compute: "true"
811 restartPolicy: Always
812 schedulerName: default-scheduler
813 securityContext:
814 fsGroup: 1000390000
815 seLinuxOptions:
816 level: s0:c20,c5
817 serviceAccount: default
818 serviceAccountName: default
819 terminationGracePeriodSeconds: 30
820 volumes:
821 - emptyDir: {}
822 name: registry-volume-1
823 - name: default-token-fmdcs
824 secret:
825 defaultMode: 420
826 secretName: default-token-fmdcs
827 status:
828 conditions:
829 - lastProbeTime: null
830 lastTransitionTime: 2018-05-08T03:37:12Z
831 status: "True"
832 type: Initialized
833 - lastProbeTime: null
834 lastTransitionTime: 2018-05-08T03:37:14Z
835 status: "True"
836 type: Ready
837 - lastProbeTime: null
838 lastTransitionTime: 2018-05-08T03:37:06Z
839 status: "True"
840 type: PodScheduled
841 containerStatuses:
842 - containerID: cri-o://e93db227c1cde2511d1aea17539efde9392c2d0585c50b0231a76805be1f0eae
843 image: docker.io/library/registry@sha256:feb40d14cd33e646b9985e2d6754ed66616fedb840226c4d917ef53d616dcd6c
844 imageID: docker.io/library/registry@sha256:feb40d14cd33e646b9985e2d6754ed66616fedb840226c4d917ef53d616dcd6c
845 lastState: {}
846 name: registry
847 ready: true
848 restartCount: 0
849 state:
850 running:
851 startedAt: 2018-05-08T03:37:14Z
852 hostIP: 172.16.120.67
853 phase: Running
854 podIP: 10.128.0.124
855 qosClass: BestEffort
856 startTime: 2018-05-08T03:37:12Z
857
858 [03:38:53] INFO> Shell Commands: oc get services registry --output=yaml --config=/home/szh/workdir/localhost-szh/ose_zhsun_1.kubeconfig --namespace=lxlyp
859 apiVersion: v1
860 kind: Service
861 metadata:
862 annotations:
863 openshift.io/generated-by: OpenShiftNewApp
864 creationTimestamp: 2018-05-08T03:37:04Z
865 labels:
866 app: registry
867 name: registry
868 namespace: lxlyp
869 resourceVersion: "193637"
870 selfLink: /api/v1/namespaces/lxlyp/services/registry
871 uid: 1364beab-5271-11e8-a042-fa163edc217c
872 spec:
873 clusterIP: 172.30.248.135
874 ports:
875 - name: 5000-tcp
876 port: 5000
877 protocol: TCP
878 targetPort: 5000
879 selector:
880 app: registry
881 deploymentconfig: registry
882 sessionAffinity: None
883 type: ClusterIP
884 status:
885 loadBalancer: {}
886
887 [03:38:55] INFO> Exit Status: 0
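(Editor's note) The ClusterIP Service above implies the usual in-cluster DNS endpoint of the form `<service>.<namespace>.svc:<port>` (optionally suffixed with the cluster domain, typically `cluster.local`). A trivial sketch of the name other pods would use to reach this registry:

```shell
# In-cluster endpoint implied by the Service dump above
# (service "registry", namespace "lxlyp", port 5000).
svc=registry; ns=lxlyp; port=5000
endpoint="${svc}.${ns}.svc:${port}"
echo "$endpoint"   # registry.lxlyp.svc:5000
```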
888 [03:38:55] INFO> Shell Commands: oc patch dc registry -p \{\"spec\":\{\"template\":\{\"spec\":\{\"containers\":\[\{\"name\":\"registry\",\"readinessProbe\":\{\"httpGet\":\{\"httpHeaders\":\[\{\"name\":\"Authorization\",\"value\":\"Basic\ dGVzdHVzZXI6dGVzdHBhc3N3b3Jk\"\}\],\"path\":\"/v2/\",\"port\":5000,\"scheme\":\"HTTP\"\}\}\}\]\}\}\}\} --config=/home/szh/workdir/localhost-szh/ose_zhsun_1.kubeconfig
889 deploymentconfig "registry" patched
890
891 [03:38:57] INFO> Exit Status: 0
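(Editor's note) The readiness probe patched in above attaches an `Authorization` header so the kubelet's `GET /v2/` passes the registry's new htpasswd check. The header value is standard HTTP Basic auth, i.e. base64 of `user:password`; decoding the value in the patch yields `testuser:testpassword` (inferred by decoding, not stated elsewhere in the log):

```shell
# The probe header is base64("user:password"); round-trip the value
# used in the oc patch above to confirm the credentials it encodes.
header_value=$(printf 'testuser:testpassword' | base64)
echo "$header_value"                       # dGVzdHVzZXI6dGVzdHBhc3N3b3Jk
printf '%s' "$header_value" | base64 -d    # testuser:testpassword
```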
892 [03:38:59] INFO> oc get pods --output=yaml -l deploymentconfig\=registry --config=/home/szh/workdir/localhost-szh/ose_zhsun_1.kubeconfig -n lxlyp
893 [03:38:59] INFO> 1 iterations for 2 sec, returned 1 pods, 1 matching
894 [03:39:01] INFO> oc get pods registry-1-w75qt --output=yaml --config=/home/szh/workdir/localhost-szh/ose_zhsun_1.kubeconfig --namespace=lxlyp
895 [03:39:01] INFO> After 1 iterations and 2 seconds:
896 apiVersion: v1
897 kind: Pod
898 metadata:
899 annotations:
900 openshift.io/deployment-config.latest-version: "1"
901 openshift.io/deployment-config.name: registry
902 openshift.io/deployment.name: registry-1
903 openshift.io/generated-by: OpenShiftNewApp
904 openshift.io/scc: restricted
905 creationTimestamp: 2018-05-08T03:37:06Z
906 generateName: registry-1-
907 labels:
908 app: registry
909 deployment: registry-1
910 deploymentconfig: registry
911 name: registry-1-w75qt
912 namespace: lxlyp
913 ownerReferences:
914 - apiVersion: v1
915 blockOwnerDeletion: true
916 controller: true
917 kind: ReplicationController
918 name: registry-1
919 uid: 133b8609-5271-11e8-a042-fa163edc217c
920 resourceVersion: "193669"
921 selfLink: /api/v1/namespaces/lxlyp/pods/registry-1-w75qt
922 uid: 14810760-5271-11e8-a042-fa163edc217c
923 spec:
924 containers:
925 - image: registry@sha256:feb40d14cd33e646b9985e2d6754ed66616fedb840226c4d917ef53d616dcd6c
926 imagePullPolicy: IfNotPresent
927 name: registry
928 ports:
929 - containerPort: 5000
930 protocol: TCP
931 resources: {}
932 securityContext:
933 capabilities:
934 drop:
935 - KILL
936 - MKNOD
937 - SETGID
938 - SETUID
939 runAsUser: 1000390000
940 terminationMessagePath: /dev/termination-log
941 terminationMessagePolicy: File
942 volumeMounts:
943 - mountPath: /var/lib/registry
944 name: registry-volume-1
945 - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
946 name: default-token-fmdcs
947 readOnly: true
948 dnsPolicy: ClusterFirst
949 imagePullSecrets:
950 - name: default-dockercfg-hq7mv
951 nodeName: 172.16.120.67
952 nodeSelector:
953 node-role.kubernetes.io/compute: "true"
954 restartPolicy: Always
955 schedulerName: default-scheduler
956 securityContext:
957 fsGroup: 1000390000
958 seLinuxOptions:
959 level: s0:c20,c5
960 serviceAccount: default
961 serviceAccountName: default
962 terminationGracePeriodSeconds: 30
963 volumes:
964 - emptyDir: {}
965 name: registry-volume-1
966 - name: default-token-fmdcs
967 secret:
968 defaultMode: 420
969 secretName: default-token-fmdcs
970 status:
971 conditions:
972 - lastProbeTime: null
973 lastTransitionTime: 2018-05-08T03:37:12Z
974 status: "True"
975 type: Initialized
976 - lastProbeTime: null
977 lastTransitionTime: 2018-05-08T03:37:14Z
978 status: "True"
979 type: Ready
980 - lastProbeTime: null
981 lastTransitionTime: 2018-05-08T03:37:06Z
982 status: "True"
983 type: PodScheduled
984 containerStatuses:
985 - containerID: cri-o://e93db227c1cde2511d1aea17539efde9392c2d0585c50b0231a76805be1f0eae
986 image: docker.io/library/registry@sha256:feb40d14cd33e646b9985e2d6754ed66616fedb840226c4d917ef53d616dcd6c
987 imageID: docker.io/library/registry@sha256:feb40d14cd33e646b9985e2d6754ed66616fedb840226c4d917ef53d616dcd6c
988 lastState: {}
989 name: registry
990 ready: true
991 restartCount: 0
992 state:
993 running:
994 startedAt: 2018-05-08T03:37:14Z
995 hostIP: 172.16.120.67
996 phase: Running
997 podIP: 10.128.0.124
998 qosClass: BestEffort
999 startTime: 2018-05-08T03:37:12Z
1000
1001waiting for operation up to 3600 seconds..
1003 And I add the insecure registry to docker config on the node # features/step_definitions/registry.rb:93
1004 [03:39:01] INFO> Remote cmd: `cd '/tmp/workdir/localhost-szh'
1005 systemctl status atomic-openshift-node` @ssh://root@host-8-249-82.host.centralci.eng.rdu2.redhat.com
1006 ● atomic-openshift-node.service - OpenShift Node
1007 Loaded: loaded (/etc/systemd/system/atomic-openshift-node.service; enabled; vendor preset: disabled)
1008 Drop-In: /usr/lib/systemd/system/atomic-openshift-node.service.d
1009 └─openshift-sdn-ovs.conf
1010 Active: active (running) since Mon 2018-05-07 23:10:16 EDT; 27min ago
1011 Docs: https://github.com/openshift/origin
1012 Process: 86945 ExecStopPost=/usr/bin/dbus-send --system --dest=uk.org.thekelleys.dnsmasq /uk/org/thekelleys/dnsmasq uk.org.thekelleys.SetDomainServers array:string: (code=exited, status=0/SUCCESS)
1013 Process: 86944 ExecStopPost=/usr/bin/rm /etc/dnsmasq.d/node-dnsmasq.conf (code=exited, status=0/SUCCESS)
1014 Process: 86951 ExecStartPre=/usr/bin/dbus-send --system --dest=uk.org.thekelleys.dnsmasq /uk/org/thekelleys/dnsmasq uk.org.thekelleys.SetDomainServers array:string:/in-addr.arpa/127.0.0.1,/cluster.local/127.0.0.1 (code=exited, status=0/SUCCESS)
1015 Process: 86949 ExecStartPre=/usr/bin/cp /etc/origin/node/node-dnsmasq.conf /etc/dnsmasq.d/ (code=exited, status=0/SUCCESS)
1016 Main PID: 86953 (openshift)
1017 Tasks: 16
1018 Memory: 64.3M
1019 CGroup: /system.slice/atomic-openshift-node.service
1020 └─86953 /usr/bin/openshift start node --config=/etc/origin/node/node-config.yaml --loglevel=5
1021
1022 May 07 23:37:42 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:42.872732 86953 http.go:96] Probe succeeded for http://10.129.0.5:9090/ping, Response: {200 OK 200 HTTP/1.1 1 1 map[Content-Length:[24] Content-Type:[application/json] Access-Control-Allow-Origin:[*]] 0xc42109d4a0 24 [] true false map[] 0xc4223a4000 <nil>}
1023 May 07 23:37:42 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:42.872848 86953 prober.go:118] Readiness probe for "registry-console-1-gnzd7_default(32c8e1e7-519c-11e8-9f32-fa163edc217c):registry-console" succeeded
1024 May 07 23:37:43 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:43.157770 86953 generic.go:183] GenericPLEG: Relisting
1025 May 07 23:37:43 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:43.308787 86953 kubelet.go:1924] SyncLoop (housekeeping)
1026 May 07 23:37:43 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:43.317084 86953 kubelet_pods.go:1118] Killing unwanted pod "registry-console-1-deploy"
1027 May 07 23:37:43 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:43.318589 86953 qos_container_manager_linux.go:317] [ContainerManager]: Updated QoS cgroup configuration
1028 May 07 23:37:43 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:43.444033 86953 prober.go:165] HTTP-Probe Host: https://10.129.0.4, Port: 8443, Path: /healthz
1029 May 07 23:37:43 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:43.444078 86953 prober.go:168] HTTP-Probe Headers: map[]
1030 May 07 23:37:43 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:43.450785 86953 http.go:96] Probe succeeded for https://10.129.0.4:8443/healthz, Response: {200 OK 200 HTTP/2.0 2 0 map[Date:[Tue, 08 May 2018 03:37:43 GMT] Content-Type:[text/plain; charset=utf-8] Content-Length:[2]] 0xc4212d2c00 2 [] false false map[] 0xc4223a4200 0xc420c6ce70}
1031 May 07 23:37:43 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:43.450853 86953 prober.go:118] Readiness probe for "webconsole-55dd868cdf-crvth_openshift-web-console(aebd73ce-519b-11e8-9f32-fa163edc217c):webconsole" succeeded
1032 May 07 23:37:44 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:44.158940 86953 generic.go:183] GenericPLEG: Relisting
1033 May 07 23:37:44 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:44.191284 86953 eviction_manager.go:221] eviction manager: synchronize housekeeping
1034 May 07 23:37:44 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:44.216922 86953 helpers.go:827] eviction manager: observations: signal=nodefs.inodesFree, available: 15712836, capacity: 15510Ki, time: 2018-05-07 23:37:39.096633994 -0400 EDT m=+1642.783424747
1035 May 07 23:37:44 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:44.217859 86953 helpers.go:827] eviction manager: observations: signal=imagefs.available, available: 14511288Ki, capacity: 31010Mi, time: 2018-05-07 23:37:39.096633994 -0400 EDT m=+1642.783424747
1036 May 07 23:37:44 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:44.218293 86953 helpers.go:827] eviction manager: observations: signal=imagefs.inodesFree, available: 15712836, capacity: 15510Ki, time: 2018-05-07 23:37:39.096633994 -0400 EDT m=+1642.783424747
1037 May 07 23:37:44 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:44.218965 86953 helpers.go:829] eviction manager: observations: signal=allocatableMemory.available, available: 3664168Ki, capacity: 3881588Ki
1038 May 07 23:37:44 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:44.219587 86953 helpers.go:827] eviction manager: observations: signal=memory.available, available: 2509952Ki, capacity: 3881588Ki, time: 2018-05-07 23:37:39.096633994 -0400 EDT m=+1642.783424747
1039 May 07 23:37:44 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:44.219960 86953 helpers.go:827] eviction manager: observations: signal=nodefs.available, available: 14511288Ki, capacity: 31010Mi, time: 2018-05-07 23:37:39.096633994 -0400 EDT m=+1642.783424747
1040 May 07 23:37:44 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:44.220361 86953 eviction_manager.go:325] eviction manager: no resources are starved
1041 May 07 23:37:45 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:45.117040 86953 prober.go:165] HTTP-Probe Host: https://10.129.0.8, Port: 8443, Path: /healthz
1042 May 07 23:37:45 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:45.117069 86953 prober.go:168] HTTP-Probe Headers: map[]
1043 May 07 23:37:45 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:45.123319 86953 http.go:96] Probe succeeded for https://10.129.0.8:8443/healthz, Response: {200 OK 200 HTTP/2.0 2 0 map[Content-Type:[text/plain; charset=utf-8] Content-Length:[2] Date:[Tue, 08 May 2018 03:37:45 GMT]] 0xc421169940 2 [] false false map[] 0xc4223a4500 0xc4214c91e0}
1044 May 07 23:37:45 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:45.123359 86953 prober.go:118] Readiness probe for "apiserver-qq6rl_openshift-template-service-broker(8beaacd9-519c-11e8-9f32-fa163edc217c):c" succeeded
1045 May 07 23:37:45 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:45.160531 86953 generic.go:183] GenericPLEG: Relisting
1046 May 07 23:37:45 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:45.296309 86953 kubelet.go:1901] SyncLoop (SYNC): 1 pods; webconsole-55dd868cdf-crvth_openshift-web-console(aebd73ce-519b-11e8-9f32-fa163edc217c)
1047 May 07 23:37:45 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:45.296354 86953 kubelet.go:1924] SyncLoop (housekeeping)
1048 May 07 23:37:45 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:45.301893 86953 kubelet_pods.go:1381] Generating status for "webconsole-55dd868cdf-crvth_openshift-web-console(aebd73ce-519b-11e8-9f32-fa163edc217c)"
1049 May 07 23:37:45 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:45.302383 86953 status_manager.go:353] Ignoring same status for pod "webconsole-55dd868cdf-crvth_openshift-web-console(aebd73ce-519b-11e8-9f32-fa163edc217c)", status: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2018-05-06 22:09:32 -0400 EDT Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2018-05-07 23:10:23 -0400 EDT Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2018-05-06 22:09:32 -0400 EDT Reason: Message:}] Message: Reason: HostIP:172.16.120.63 PodIP:10.129.0.4 StartTime:2018-05-06 22:09:32 -0400 EDT InitContainerStatuses:[] ContainerStatuses:[{Name:webconsole State:{Waiting:nil Running:&ContainerStateRunning{StartedAt:2018-05-06 22:14:10 -0400 EDT,} Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:true RestartCount:0 Image:registry.reg-aws.openshift.com:443/openshift3/ose-web-console:v3.9.27 ImageID:registry.reg-aws.openshift.com:443/openshift3/ose-web-console@sha256:2b4e7533d9f4ee450fdb9dac3b096ef619538da7c0b9f8489e5aee4cee18cc3e ContainerID:cri-o://138ebee17264174a3b2aad338b2a9d31108041c14f0b3f1e021721333b79e0b1}] QOSClass:Burstable}
1050 May 07 23:37:45 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:45.304513 86953 kubelet_pods.go:1118] Killing unwanted pod "registry-console-1-deploy"
1051 May 07 23:37:45 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:45.305258 86953 volume_manager.go:344] Waiting for volumes to attach and mount for pod "webconsole-55dd868cdf-crvth_openshift-web-console(aebd73ce-519b-11e8-9f32-fa163edc217c)"
1052 May 07 23:37:45 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:45.305767 86953 qos_container_manager_linux.go:317] [ContainerManager]: Updated QoS cgroup configuration
1053 May 07 23:37:45 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:45.316463 86953 desired_state_of_world_populator.go:298] Added volume "serving-cert" (volSpec="serving-cert") for pod "aebd73ce-519b-11e8-9f32-fa163edc217c" to desired state.
1054 May 07 23:37:45 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:45.316496 86953 desired_state_of_world_populator.go:298] Added volume "webconsole-config" (volSpec="webconsole-config") for pod "aebd73ce-519b-11e8-9f32-fa163edc217c" to desired state.
1055 May 07 23:37:45 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:45.316507 86953 desired_state_of_world_populator.go:298] Added volume "webconsole-token-rdcw4" (volSpec="webconsole-token-rdcw4") for pod "aebd73ce-519b-11e8-9f32-fa163edc217c" to desired state.
1056 May 07 23:37:45 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:45.407964 86953 kubelet.go:2103] Container runtime status: Runtime Conditions: RuntimeReady=true reason: message:, NetworkReady=true reason: message:
1057 May 07 23:37:45 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:45.416578 86953 operation_executor.go:895] Starting operationExecutor.MountVolume for volume "webconsole-token-rdcw4" (UniqueName: "kubernetes.io/secret/aebd73ce-519b-11e8-9f32-fa163edc217c-webconsole-token-rdcw4") pod "webconsole-55dd868cdf-crvth" (UID: "aebd73ce-519b-11e8-9f32-fa163edc217c") Volume is already mounted to pod, but remount was requested.
1058 May 07 23:37:45 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:45.416641 86953 volume_host.go:218] using default mounter/exec for kubernetes.io/secret
1059 May 07 23:37:45 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:45.416699 86953 reconciler.go:264] operationExecutor.MountVolume started for volume "webconsole-token-rdcw4" (UniqueName: "kubernetes.io/secret/aebd73ce-519b-11e8-9f32-fa163edc217c-webconsole-token-rdcw4") pod "webconsole-55dd868cdf-crvth" (UID: "aebd73ce-519b-11e8-9f32-fa163edc217c") Volume is already mounted to pod, but remount was requested.
1060 May 07 23:37:45 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:45.416713 86953 operation_executor.go:895] Starting operationExecutor.MountVolume for volume "serving-cert" (UniqueName: "kubernetes.io/secret/aebd73ce-519b-11e8-9f32-fa163edc217c-serving-cert") pod "webconsole-55dd868cdf-crvth" (UID: "aebd73ce-519b-11e8-9f32-fa163edc217c") Volume is already mounted to pod, but remount was requested.
1061 May 07 23:37:45 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:45.416722 86953 volume_host.go:218] using default mounter/exec for kubernetes.io/secret
1062 May 07 23:37:45 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:45.416746 86953 reconciler.go:264] operationExecutor.MountVolume started for volume "serving-cert" (UniqueName: "kubernetes.io/secret/aebd73ce-519b-11e8-9f32-fa163edc217c-serving-cert") pod "webconsole-55dd868cdf-crvth" (UID: "aebd73ce-519b-11e8-9f32-fa163edc217c") Volume is already mounted to pod, but remount was requested.
1063 May 07 23:37:45 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:45.416766 86953 operation_executor.go:895] Starting operationExecutor.MountVolume for volume "webconsole-config" (UniqueName: "kubernetes.io/configmap/aebd73ce-519b-11e8-9f32-fa163edc217c-webconsole-config") pod "webconsole-55dd868cdf-crvth" (UID: "aebd73ce-519b-11e8-9f32-fa163edc217c") Volume is already mounted to pod, but remount was requested.
1064 May 07 23:37:45 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:45.416776 86953 volume_host.go:218] using default mounter/exec for kubernetes.io/configmap
1065 May 07 23:37:45 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:45.416813 86953 reconciler.go:264] operationExecutor.MountVolume started for volume "webconsole-config" (UniqueName: "kubernetes.io/configmap/aebd73ce-519b-11e8-9f32-fa163edc217c-webconsole-config") pod "webconsole-55dd868cdf-crvth" (UID: "aebd73ce-519b-11e8-9f32-fa163edc217c") Volume is already mounted to pod, but remount was requested.
1066 May 07 23:37:45 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:45.416870 86953 configmap.go:187] Setting up volume webconsole-config for pod aebd73ce-519b-11e8-9f32-fa163edc217c at /var/lib/origin/openshift.local.volumes/pods/aebd73ce-519b-11e8-9f32-fa163edc217c/volumes/kubernetes.io~configmap/webconsole-config
1067 May 07 23:37:45 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:45.416893 86953 volume_host.go:218] using default mounter/exec for kubernetes.io/empty-dir
1068 May 07 23:37:45 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:45.416899 86953 volume_host.go:218] using default mounter/exec for kubernetes.io/empty-dir
1069 May 07 23:37:45 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:45.417341 86953 secret.go:186] Setting up volume webconsole-token-rdcw4 for pod aebd73ce-519b-11e8-9f32-fa163edc217c at /var/lib/origin/openshift.local.volumes/pods/aebd73ce-519b-11e8-9f32-fa163edc217c/volumes/kubernetes.io~secret/webconsole-token-rdcw4
1070 May 07 23:37:45 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:45.417361 86953 volume_host.go:218] using default mounter/exec for kubernetes.io/empty-dir
1071 May 07 23:37:45 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:45.417367 86953 volume_host.go:218] using default mounter/exec for kubernetes.io/empty-dir
1072 May 07 23:37:45 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:45.417548 86953 secret.go:186] Setting up volume serving-cert for pod aebd73ce-519b-11e8-9f32-fa163edc217c at /var/lib/origin/openshift.local.volumes/pods/aebd73ce-519b-11e8-9f32-fa163edc217c/volumes/kubernetes.io~secret/serving-cert
1073 May 07 23:37:45 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:45.417561 86953 volume_host.go:218] using default mounter/exec for kubernetes.io/empty-dir
1074 May 07 23:37:45 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:45.417566 86953 volume_host.go:218] using default mounter/exec for kubernetes.io/empty-dir
1075 May 07 23:37:45 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:45.421521 86953 secret.go:216] Received secret openshift-web-console/webconsole-token-rdcw4 containing (4) pieces of data, 4172 total bytes
1076 May 07 23:37:45 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:45.421663 86953 atomic_writer.go:332] /var/lib/origin/openshift.local.volumes/pods/aebd73ce-519b-11e8-9f32-fa163edc217c/volumes/kubernetes.io~secret/webconsole-token-rdcw4: current paths: [ca.crt namespace service-ca.crt token]
1077 May 07 23:37:45 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:45.421676 86953 atomic_writer.go:344] /var/lib/origin/openshift.local.volumes/pods/aebd73ce-519b-11e8-9f32-fa163edc217c/volumes/kubernetes.io~secret/webconsole-token-rdcw4: new paths: [ca.crt namespace service-ca.crt token]
1078 May 07 23:37:45 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:45.421683 86953 atomic_writer.go:347] /var/lib/origin/openshift.local.volumes/pods/aebd73ce-519b-11e8-9f32-fa163edc217c/volumes/kubernetes.io~secret/webconsole-token-rdcw4: paths to remove: map[]
1079 May 07 23:37:45 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:45.421787 86953 atomic_writer.go:156] pod openshift-web-console/webconsole-55dd868cdf-crvth volume webconsole-token-rdcw4: no update required for target directory /var/lib/origin/openshift.local.volumes/pods/aebd73ce-519b-11e8-9f32-fa163edc217c/volumes/kubernetes.io~secret/webconsole-token-rdcw4
1080 May 07 23:37:45 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:45.421937 86953 operation_generator.go:552] MountVolume.SetUp succeeded for volume "webconsole-token-rdcw4" (UniqueName: "kubernetes.io/secret/aebd73ce-519b-11e8-9f32-fa163edc217c-webconsole-token-rdcw4") pod "webconsole-55dd868cdf-crvth" (UID: "aebd73ce-519b-11e8-9f32-fa163edc217c")
1081 May 07 23:37:45 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:45.422258 86953 configmap.go:217] Received configMap openshift-web-console/webconsole-config containing (1) pieces of data, 706 total bytes
1082 May 07 23:37:45 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:45.422345 86953 atomic_writer.go:332] /var/lib/origin/openshift.local.volumes/pods/aebd73ce-519b-11e8-9f32-fa163edc217c/volumes/kubernetes.io~configmap/webconsole-config: current paths: [webconsole-config.yaml]
1083 May 07 23:37:45 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:45.422355 86953 atomic_writer.go:344] /var/lib/origin/openshift.local.volumes/pods/aebd73ce-519b-11e8-9f32-fa163edc217c/volumes/kubernetes.io~configmap/webconsole-config: new paths: [webconsole-config.yaml]
1084 May 07 23:37:45 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:45.422360 86953 atomic_writer.go:347] /var/lib/origin/openshift.local.volumes/pods/aebd73ce-519b-11e8-9f32-fa163edc217c/volumes/kubernetes.io~configmap/webconsole-config: paths to remove: map[]
1085 May 07 23:37:45 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:45.422395 86953 atomic_writer.go:156] pod openshift-web-console/webconsole-55dd868cdf-crvth volume webconsole-config: no update required for target directory /var/lib/origin/openshift.local.volumes/pods/aebd73ce-519b-11e8-9f32-fa163edc217c/volumes/kubernetes.io~configmap/webconsole-config
1086 May 07 23:37:45 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:45.422506 86953 operation_generator.go:552] MountVolume.SetUp succeeded for volume "webconsole-config" (UniqueName: "kubernetes.io/configmap/aebd73ce-519b-11e8-9f32-fa163edc217c-webconsole-config") pod "webconsole-55dd868cdf-crvth" (UID: "aebd73ce-519b-11e8-9f32-fa163edc217c")
1087 May 07 23:37:45 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:45.422639 86953 secret.go:216] Received secret openshift-web-console/webconsole-serving-cert containing (2) pieces of data, 4148 total bytes
1088 5月 07 23:37:45 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:45.422699 86953 atomic_writer.go:332] /var/lib/origin/openshift.local.volumes/pods/aebd73ce-519b-11e8-9f32-fa163edc217c/volumes/kubernetes.io~secret/serving-cert: current paths: [tls.crt tls.key]
1089 5月 07 23:37:45 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:45.422708 86953 atomic_writer.go:344] /var/lib/origin/openshift.local.volumes/pods/aebd73ce-519b-11e8-9f32-fa163edc217c/volumes/kubernetes.io~secret/serving-cert: new paths: [tls.crt tls.key]
1090 5月 07 23:37:45 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:45.422714 86953 atomic_writer.go:347] /var/lib/origin/openshift.local.volumes/pods/aebd73ce-519b-11e8-9f32-fa163edc217c/volumes/kubernetes.io~secret/serving-cert: paths to remove: map[]
1091 5月 07 23:37:45 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:45.422757 86953 atomic_writer.go:156] pod openshift-web-console/webconsole-55dd868cdf-crvth volume serving-cert: no update required for target directory /var/lib/origin/openshift.local.volumes/pods/aebd73ce-519b-11e8-9f32-fa163edc217c/volumes/kubernetes.io~secret/serving-cert
1092 5月 07 23:37:45 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:45.422865 86953 operation_generator.go:552] MountVolume.SetUp succeeded for volume "serving-cert" (UniqueName: "kubernetes.io/secret/aebd73ce-519b-11e8-9f32-fa163edc217c-serving-cert") pod "webconsole-55dd868cdf-crvth" (UID: "aebd73ce-519b-11e8-9f32-fa163edc217c")
1093 5月 07 23:37:45 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:45.605452 86953 volume_manager.go:372] All volumes are attached and mounted for pod "webconsole-55dd868cdf-crvth_openshift-web-console(aebd73ce-519b-11e8-9f32-fa163edc217c)"
1094 5月 07 23:37:45 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:45.610371 86953 kuberuntime_manager.go:442] Syncing Pod "webconsole-55dd868cdf-crvth_openshift-web-console(aebd73ce-519b-11e8-9f32-fa163edc217c)": &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:webconsole-55dd868cdf-crvth,GenerateName:webconsole-55dd868cdf-,Namespace:openshift-web-console,SelfLink:/api/v1/namespaces/openshift-web-console/pods/webconsole-55dd868cdf-crvth,UID:aebd73ce-519b-11e8-9f32-fa163edc217c,ResourceVersion:186779,Generation:0,CreationTimestamp:2018-05-06 22:09:32 -0400 EDT,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{app: openshift-web-console,pod-template-hash: 1188424789,webconsole: true,},Annotations:map[string]string{kubernetes.io/config.seen: 2018-05-07T23:10:17.279967297-04:00,kubernetes.io/config.source: api,openshift.io/scc: restricted,},OwnerReferences:[{extensions/v1beta1 ReplicaSet webconsole-55dd868cdf ae182bb5-519b-11e8-9f32-fa163edc217c 0xc42135b090 0xc42135b091}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{serving-cert {nil nil nil nil nil SecretVolumeSource{SecretName:webconsole-serving-cert,Items:[],DefaultMode:*400,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}} {webconsole-config {nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil &ConfigMapVolumeSource{LocalObjectReference:LocalObjectReference{Name:webconsole-config,},Items:[],DefaultMode:*440,Optional:nil,} nil nil nil nil nil nil nil nil}} {webconsole-token-rdcw4 {nil nil nil nil nil &SecretVolumeSource{SecretName:webconsole-token-rdcw4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{webconsole registry.reg-aws.openshift.com:443/openshift3/ose-web-console:v3.9.27 [/usr/bin/origin-web-console --audit-log-path=- -v=0 --config=/var/webconsole-config/webconsole-config.yaml] [] [{ 0 8443 TCP }] [] [] {map[] map[cpu:{{100 -3} {<nil>} 100m DecimalSI} memory:{{104857600
1095 5月 07 23:37:45 host-172-16-120-63 atomic-openshift-node[86953]: 0} {<nil>} 100Mi BinarySI}]} [{serving-cert false /var/serving-cert <nil>} {webconsole-config false /var/webconsole-config <nil>} {webconsole-token-rdcw4 true /var/run/secrets/kubernetes.io/serviceaccount <nil>}] [] Probe{Handler:Handler{Exec:&ExecAction{Command:[/bin/sh -c if [[ ! -f /tmp/webconsole-config.hash ]]; then \
1096 5月 07 23:37:45 host-172-16-120-63 atomic-openshift-node[86953]: md5sum /var/webconsole-config/webconsole-config.yaml > /tmp/webconsole-config.hash; \
1097 5月 07 23:37:45 host-172-16-120-63 atomic-openshift-node[86953]: elif [[ $(md5sum /var/webconsole-config/webconsole-config.yaml) != $(cat /tmp/webconsole-config.hash) ]]; then \
1098 5月 07 23:37:45 host-172-16-120-63 atomic-openshift-node[86953]: echo 'webconsole-config.yaml has changed.'; \
1099 5月 07 23:37:45 host-172-16-120-63 atomic-openshift-node[86953]: exit 1; \
1100 5月 07 23:37:45 host-172-16-120-63 atomic-openshift-node[86953]: fi && curl -k -f https://0.0.0.0:8443/console/],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} &Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:8443,Host:,Scheme:HTTPS,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} nil /dev/termination-log File IfNotPresent &SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[KILL MKNOD SETGID SETUID],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000080000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{node-role.kubernetes.io/master: true,},ServiceAccountName:webconsole,DeprecatedServiceAccount:webconsole,NodeName:172.16.120.63,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c9,c4,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:*1000080000,},ImagePullSecrets:[{webconsole-dockercfg-rdx22}],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/memory-pressure Exists NoSchedule <nil>}],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-05-06 22:09:32 -0400 EDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-05-07 23:10:23 -0400 EDT } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-05-06 22:09:32 -0400 EDT }],Message:,Reason:,HostIP:172.16.120.63,PodIP:10.129.0.4,StartTime:2018-05-06 22:09:32 -0400 EDT,ContainerStatuses:[{webconsole {nil ContainerStateRunning{StartedAt:2018-05-06 22:14:10 -0400 EDT,} nil} {nil nil nil} true 0 registry.reg-aws.openshift.com:443/openshift3/ose-web-console:v3.9.27 regis
1101 5月 07 23:37:45 host-172-16-120-63 atomic-openshift-node[86953]: try.reg-aws.openshift.com:443/openshift3/ose-web-console@sha256:2b4e7533d9f4ee450fdb9dac3b096ef619538da7c0b9f8489e5aee4cee18cc3e cri-o://138ebee17264174a3b2aad338b2a9d31108041c14f0b3f1e021721333b79e0b1}],QOSClass:Burstable,InitContainerStatuses:[],},}
1102 5月 07 23:37:45 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:45.610956 86953 kuberuntime_manager.go:571] computePodActions got {KillPod:false CreateSandbox:false SandboxID:54955088b7d6b394b62aaeb500d3a72e13578ca962010a059d2161750b0df99b Attempt:0 NextInitContainerToStart:nil ContainersToStart:[] ContainersToKill:map[]} for pod "webconsole-55dd868cdf-crvth_openshift-web-console(aebd73ce-519b-11e8-9f32-fa163edc217c)"
1103 5月 07 23:37:46 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:46.163235 86953 generic.go:183] GenericPLEG: Relisting
1104 5月 07 23:37:46 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:46.638213 86953 iptables.go:101] Syncing openshift iptables rules
1105 5月 07 23:37:46 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:46.638297 86953 iptables.go:419] running iptables -N [OPENSHIFT-FIREWALL-FORWARD -t filter]
1106 5月 07 23:37:46 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:46.640332 86953 iptables.go:419] running iptables -C [FORWARD -t filter -m comment --comment firewall overrides -j OPENSHIFT-FIREWALL-FORWARD]
1107 5月 07 23:37:46 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:46.642274 86953 iptables.go:419] running iptables -C [OPENSHIFT-FIREWALL-FORWARD -t filter -s 10.128.0.0/14 -m comment --comment attempted resend after connection close -m conntrack --ctstate INVALID -j DROP]
1108 5月 07 23:37:46 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:46.645913 86953 iptables.go:419] running iptables -C [OPENSHIFT-FIREWALL-FORWARD -t filter -d 10.128.0.0/14 -m comment --comment forward traffic from SDN -j ACCEPT]
1109 5月 07 23:37:46 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:46.650636 86953 iptables.go:419] running iptables -C [OPENSHIFT-FIREWALL-FORWARD -t filter -s 10.128.0.0/14 -m comment --comment forward traffic to SDN -j ACCEPT]
1110 5月 07 23:37:46 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:46.653374 86953 iptables.go:419] running iptables -N [OPENSHIFT-MASQUERADE -t nat]
1111 5月 07 23:37:46 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:46.656779 86953 iptables.go:419] running iptables -C [POSTROUTING -t nat -m comment --comment rules for masquerading OpenShift traffic -j OPENSHIFT-MASQUERADE]
1112 5月 07 23:37:46 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:46.660046 86953 iptables.go:419] running iptables -C [OPENSHIFT-MASQUERADE -t nat -s 10.128.0.0/14 -m comment --comment masquerade pod-to-service and pod-to-external traffic -j MASQUERADE]
1113 5月 07 23:37:46 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:46.662607 86953 iptables.go:419] running iptables -N [OPENSHIFT-ADMIN-OUTPUT-RULES -t filter]
1114 5月 07 23:37:46 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:46.671710 86953 iptables.go:419] running iptables -C [FORWARD -t filter -i tun0 ! -o tun0 -m comment --comment administrator overrides -j OPENSHIFT-ADMIN-OUTPUT-RULES]
1115 5月 07 23:37:46 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:46.675918 86953 iptables.go:419] running iptables -N [OPENSHIFT-FIREWALL-ALLOW -t filter]
1116 5月 07 23:37:46 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:46.677913 86953 iptables.go:419] running iptables -C [INPUT -t filter -m comment --comment firewall overrides -j OPENSHIFT-FIREWALL-ALLOW]
1117 5月 07 23:37:46 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:46.682371 86953 iptables.go:419] running iptables -C [OPENSHIFT-FIREWALL-ALLOW -t filter -p udp --dport 4789 -m comment --comment VXLAN incoming -j ACCEPT]
1118 5月 07 23:37:46 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:46.693985 86953 iptables.go:419] running iptables -C [OPENSHIFT-FIREWALL-ALLOW -t filter -i tun0 -m comment --comment from SDN to localhost -j ACCEPT]
1119 5月 07 23:37:46 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:46.696371 86953 iptables.go:419] running iptables -C [OPENSHIFT-FIREWALL-ALLOW -t filter -i docker0 -m comment --comment from docker to localhost -j ACCEPT]
1120 5月 07 23:37:46 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:46.706928 86953 iptables.go:99] syncIPTableRules took 68.71621ms
1121 5月 07 23:37:46 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:46.794050 86953 iptables.go:419] running iptables -N [KUBE-PORTALS-CONTAINER -t nat]
1122 5月 07 23:37:46 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:46.796334 86953 iptables.go:419] running iptables -C [PREROUTING -t nat -m comment --comment handle ClusterIPs; NOTE: this must be before the NodePort rules -j KUBE-PORTALS-CONTAINER]
1123 5月 07 23:37:46 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:46.798299 86953 iptables.go:419] running iptables -N [KUBE-PORTALS-HOST -t nat]
1124 5月 07 23:37:46 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:46.799761 86953 iptables.go:419] running iptables -C [OUTPUT -t nat -m comment --comment handle ClusterIPs; NOTE: this must be before the NodePort rules -j KUBE-PORTALS-HOST]
1125 5月 07 23:37:46 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:46.803372 86953 iptables.go:419] running iptables -N [KUBE-NODEPORT-CONTAINER -t nat]
1126 5月 07 23:37:46 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:46.804883 86953 iptables.go:419] running iptables -C [PREROUTING -t nat -m addrtype --dst-type LOCAL -m comment --comment handle service NodePorts; NOTE: this must be the last rule in the chain -j KUBE-NODEPORT-CONTAINER]
1127 5月 07 23:37:46 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:46.806763 86953 iptables.go:419] running iptables -N [KUBE-NODEPORT-HOST -t nat]
1128 5月 07 23:37:46 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:46.807948 86953 iptables.go:419] running iptables -C [OUTPUT -t nat -m addrtype --dst-type LOCAL -m comment --comment handle service NodePorts; NOTE: this must be the last rule in the chain -j KUBE-NODEPORT-HOST]
1129 5月 07 23:37:46 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:46.810746 86953 iptables.go:419] running iptables -N [KUBE-NODEPORT-NON-LOCAL -t filter]
1130 5月 07 23:37:46 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:46.811953 86953 iptables.go:419] running iptables -C [INPUT -t filter -m comment --comment Ensure that non-local NodePort traffic can flow -j KUBE-NODEPORT-NON-LOCAL]
1131 5月 07 23:37:46 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:46.943761 86953 dnsmasq.go:123] Instructing dnsmasq to set the following servers: [/in-addr.arpa/127.0.0.1 /cluster.local/127.0.0.1]
1132 5月 07 23:37:47 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:47.164573 86953 generic.go:183] GenericPLEG: Relisting
1133 5月 07 23:37:47 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:47.230910 86953 ovs.go:145] Executing: ovs-ofctl -O OpenFlow13 dump-flows br0 table=253
1134 5月 07 23:37:47 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:47.245908 86953 healthcheck.go:98] SDN healthcheck succeeded
1135 5月 07 23:37:47 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:47.296353 86953 kubelet.go:1924] SyncLoop (housekeeping)
1136 5月 07 23:37:47 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:47.308156 86953 kubelet_pods.go:1118] Killing unwanted pod "registry-console-1-deploy"
1137 5月 07 23:37:47 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:47.323110 86953 qos_container_manager_linux.go:317] [ContainerManager]: Updated QoS cgroup configuration
1138 5月 07 23:37:48 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:48.166388 86953 generic.go:183] GenericPLEG: Relisting
1139 5月 07 23:37:49 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:49.170281 86953 generic.go:183] GenericPLEG: Relisting
1140 5月 07 23:37:49 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:49.296384 86953 kubelet.go:1924] SyncLoop (housekeeping)
1141 5月 07 23:37:49 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:49.311355 86953 kubelet_pods.go:1118] Killing unwanted pod "registry-console-1-deploy"
1142 5月 07 23:37:49 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:49.316248 86953 qos_container_manager_linux.go:317] [ContainerManager]: Updated QoS cgroup configuration
1143 5月 07 23:37:50 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:50.173417 86953 generic.go:183] GenericPLEG: Relisting
1144 5月 07 23:37:50 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:50.408601 86953 kubelet.go:2103] Container runtime status: Runtime Conditions: RuntimeReady=true reason: message:, NetworkReady=true reason: message:
1145 5月 07 23:37:51 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:51.174784 86953 generic.go:183] GenericPLEG: Relisting
1146 5月 07 23:37:51 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:51.296335 86953 kubelet.go:1924] SyncLoop (housekeeping)
1147 5月 07 23:37:51 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:51.301265 86953 kubelet_pods.go:1118] Killing unwanted pod "registry-console-1-deploy"
1148 5月 07 23:37:51 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:51.303338 86953 qos_container_manager_linux.go:317] [ContainerManager]: Updated QoS cgroup configuration
1149 5月 07 23:37:51 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:51.316591 86953 openstack_instances.go:39] openstack.Instances() called
1150 5月 07 23:37:51 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:51.316633 86953 openstack_instances.go:46] Claiming to support Instances
1151 5月 07 23:37:51 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:51.316643 86953 openstack_instances.go:69] NodeAddresses(172.16.120.63) called
1152 5月 07 23:37:51 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:51.617915 86953 openstack_instances.go:76] NodeAddresses(172.16.120.63) => [{InternalIP 172.16.120.63} {ExternalIP 10.8.249.82}]
1153 5月 07 23:37:51 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:51.977344 86953 prober.go:165] HTTP-Probe Host: http://10.129.0.5, Port: 9090, Path: /ping
1154 5月 07 23:37:51 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:51.977384 86953 prober.go:168] HTTP-Probe Headers: map[]
1155 5月 07 23:37:51 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:51.978493 86953 http.go:96] Probe succeeded for http://10.129.0.5:9090/ping, Response: {200 OK 200 HTTP/1.1 1 1 map[Content-Length:[24] Content-Type:[application/json] Access-Control-Allow-Origin:[*]] 0xc421945620 24 [] true false map[] 0xc4229d6400 <nil>}
1156 5月 07 23:37:51 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:51.978536 86953 prober.go:118] Liveness probe for "registry-console-1-gnzd7_default(32c8e1e7-519c-11e8-9f32-fa163edc217c):registry-console" succeeded
1157 5月 07 23:37:52 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:52.030226 86953 prober.go:150] Exec-Probe Pod: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:webconsole-55dd868cdf-crvth,GenerateName:webconsole-55dd868cdf-,Namespace:openshift-web-console,SelfLink:/api/v1/namespaces/openshift-web-console/pods/webconsole-55dd868cdf-crvth,UID:aebd73ce-519b-11e8-9f32-fa163edc217c,ResourceVersion:186779,Generation:0,CreationTimestamp:2018-05-06 22:09:32 -0400 EDT,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{app: openshift-web-console,pod-template-hash: 1188424789,webconsole: true,},Annotations:map[string]string{kubernetes.io/config.seen: 2018-05-07T23:10:17.279967297-04:00,kubernetes.io/config.source: api,openshift.io/scc: restricted,},OwnerReferences:[{extensions/v1beta1 ReplicaSet webconsole-55dd868cdf ae182bb5-519b-11e8-9f32-fa163edc217c 0xc4211dd400 0xc4211dd401}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{serving-cert {nil nil nil nil nil SecretVolumeSource{SecretName:webconsole-serving-cert,Items:[],DefaultMode:*400,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}} {webconsole-config {nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil &ConfigMapVolumeSource{LocalObjectReference:LocalObjectReference{Name:webconsole-config,},Items:[],DefaultMode:*440,Optional:nil,} nil nil nil nil nil nil nil nil}} {webconsole-token-rdcw4 {nil nil nil nil nil &SecretVolumeSource{SecretName:webconsole-token-rdcw4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{webconsole registry.reg-aws.openshift.com:443/openshift3/ose-web-console:v3.9.27 [/usr/bin/origin-web-console --audit-log-path=- -v=0 --config=/var/webconsole-config/webconsole-config.yaml] [] [{ 0 8443 TCP }] [] [] {map[] map[cpu:{{100 -3} {<nil>} 100m DecimalSI} memory:{{104857600 0} {<nil>} 100Mi BinarySI}]} [{serving-cert false /var/serving-cert <nil>} {webconsole-config false
1158 5月 07 23:37:52 host-172-16-120-63 atomic-openshift-node[86953]: /var/webconsole-config <nil>} {webconsole-token-rdcw4 true /var/run/secrets/kubernetes.io/serviceaccount <nil>}] [] Probe{Handler:Handler{Exec:&ExecAction{Command:[/bin/sh -c if [[ ! -f /tmp/webconsole-config.hash ]]; then \
1159 5月 07 23:37:52 host-172-16-120-63 atomic-openshift-node[86953]: md5sum /var/webconsole-config/webconsole-config.yaml > /tmp/webconsole-config.hash; \
1160 5月 07 23:37:52 host-172-16-120-63 atomic-openshift-node[86953]: elif [[ $(md5sum /var/webconsole-config/webconsole-config.yaml) != $(cat /tmp/webconsole-config.hash) ]]; then \
1161 5月 07 23:37:52 host-172-16-120-63 atomic-openshift-node[86953]: echo 'webconsole-config.yaml has changed.'; \
1162 5月 07 23:37:52 host-172-16-120-63 atomic-openshift-node[86953]: exit 1; \
1163 5月 07 23:37:52 host-172-16-120-63 atomic-openshift-node[86953]: fi && curl -k -f https://0.0.0.0:8443/console/],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} &Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:8443,Host:,Scheme:HTTPS,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} nil /dev/termination-log File IfNotPresent &SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[KILL MKNOD SETGID SETUID],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000080000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{node-role.kubernetes.io/master: true,},ServiceAccountName:webconsole,DeprecatedServiceAccount:webconsole,NodeName:172.16.120.63,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c9,c4,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:*1000080000,},ImagePullSecrets:[{webconsole-dockercfg-rdx22}],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/memory-pressure Exists NoSchedule <nil>}],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-05-06 22:09:32 -0400 EDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-05-07 22:54:19 -0400 EDT } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-05-06 22:09:32 -0400 EDT }],Message:,Reason:,HostIP:172.16.120.63,PodIP:10.129.0.4,StartTime:2018-05-06 22:09:32 -0400 EDT,ContainerStatuses:[{webconsole {nil ContainerStateRunning{StartedAt:2018-05-06 22:14:10 -0400 EDT,} nil} {nil nil nil} true 0 registry.reg-aws.openshift.com:443/openshift3/ose-web-console:v3.9.27 regis
1164 5月 07 23:37:52 host-172-16-120-63 atomic-openshift-node[86953]: try.reg-aws.openshift.com:443/openshift3/ose-web-console@sha256:2b4e7533d9f4ee450fdb9dac3b096ef619538da7c0b9f8489e5aee4cee18cc3e cri-o://138ebee17264174a3b2aad338b2a9d31108041c14f0b3f1e021721333b79e0b1}],QOSClass:Burstable,InitContainerStatuses:[],},}, Container: {webconsole registry.reg-aws.openshift.com:443/openshift3/ose-web-console:v3.9.27 [/usr/bin/origin-web-console --audit-log-path=- -v=0 --config=/var/webconsole-config/webconsole-config.yaml] [] [{ 0 8443 TCP }] [] [] {map[] map[cpu:{{100 -3} {<nil>} 100m DecimalSI} memory:{{104857600 0} {<nil>} 100Mi BinarySI}]} [{serving-cert false /var/serving-cert <nil>} {webconsole-config false /var/webconsole-config <nil>} {webconsole-token-rdcw4 true /var/run/secrets/kubernetes.io/serviceaccount <nil>}] [] &Probe{Handler:Handler{Exec:&ExecAction{Command:[/bin/sh -c if [[ ! -f /tmp/webconsole-config.hash ]]; then \
1165 5月 07 23:37:52 host-172-16-120-63 atomic-openshift-node[86953]: md5sum /var/webconsole-config/webconsole-config.yaml > /tmp/webconsole-config.hash; \
1166 5月 07 23:37:52 host-172-16-120-63 atomic-openshift-node[86953]: elif [[ $(md5sum /var/webconsole-config/webconsole-config.yaml) != $(cat /tmp/webconsole-config.hash) ]]; then \
1167 5月 07 23:37:52 host-172-16-120-63 atomic-openshift-node[86953]: echo 'webconsole-config.yaml has changed.'; \
1168 5月 07 23:37:52 host-172-16-120-63 atomic-openshift-node[86953]: exit 1; \
1169 5月 07 23:37:52 host-172-16-120-63 atomic-openshift-node[86953]: fi && curl -k -f https://0.0.0.0:8443/console/],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} &Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:8443,Host:,Scheme:HTTPS,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} nil /dev/termination-log File IfNotPresent &SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[KILL MKNOD SETGID SETUID],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000080000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,} false false false}, Command: [/bin/sh -c if [[ ! -f /tmp/webconsole-config.hash ]]; then \
1170 5月 07 23:37:52 host-172-16-120-63 atomic-openshift-node[86953]: md5sum /var/webconsole-config/webconsole-config.yaml > /tmp/webconsole-config.hash; \
1171 5月 07 23:37:52 host-172-16-120-63 atomic-openshift-node[86953]: elif [[ $(md5sum /var/webconsole-config/webconsole-config.yaml) != $(cat /tmp/webconsole-config.hash) ]]; then \
1172 5月 07 23:37:52 host-172-16-120-63 atomic-openshift-node[86953]: echo 'webconsole-config.yaml has changed.'; \
1173 5月 07 23:37:52 host-172-16-120-63 atomic-openshift-node[86953]: exit 1; \
1174 5月 07 23:37:52 host-172-16-120-63 atomic-openshift-node[86953]: fi && curl -k -f https://0.0.0.0:8443/console/]
1175 5月 07 23:37:52 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:52.178659 86953 generic.go:183] GenericPLEG: Relisting
1176 5月 07 23:37:52 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:52.228561 86953 exec.go:38] Exec probe response: "<!doctype html>\n<html class=\"no-js layout-pf layout-pf-fixed\">\n<head>\n<meta http-equiv=\"X-UA-Compatible\" content=\"IE=EDGE\"/>\n<meta charset=\"utf-8\">\n<base href=\"/console/\">\n<title>OpenShift Web Console</title>\n<meta name=\"description\" content=\"\">\n<meta name=\"viewport\" content=\"width=device-width,initial-scale=1\">\n<link rel=\"icon\" type=\"image/png\" href=\"images/favicon.png\"/>\n<link rel=\"icon\" type=\"image/x-icon\" href=\"images/favicon.ico\"/>\n<link rel=\"apple-touch-icon-precomposed\" sizes=\"144x144\" href=\"images/apple-touch-icon-precomposed.png\">\n<link rel=\"mask-icon\" href=\"images/mask-icon.svg\" color=\"#DB242F\">\n<meta name=\"application-name\" content=\"OpenShift\">\n<meta name=\"msapplication-TileColor\" content=\"#000000\">\n<meta name=\"msapplication-TileImage\" content=\"images/mstile-144x144.png\">\n<link rel=\"stylesheet\" href=\"styles/vendor.css\">\n<link rel=\"stylesheet\" href=\"styles/main.css\">\n<style type=\"text/css\"></style>\n</head>\n<body class=\"console-os\" ng-class=\"{ 'has-project-bar': view.hasProject, 'has-project-search': view.hasProjectSearch }\">\n<osc-header></osc-header>\n<toast-notifications></toast-notifications>\n<notification-drawer-wrapper></notification-drawer-wrapper>\n<div class=\"container-pf-nav-pf-vertical\" ng-class=\"{ 'collapsed-nav': nav.collapsed }\">\n<div ng-view class=\"view\">\n<div class=\"middle\">\n<div class=\"middle-content\">\n<div class=\"empty-state-message loading\">\n<h2 class=\"text-center\" id=\"temporary-loading-message\" style=\"display: none\">Loading...</h2>\n<script>document.getElementById('temporary-loading-message').style.display = \"\";</script>\n</div>\n<noscript>\n<div class=\"attention-message\">\n<h1>JavaScript Required</h1>\n<p>The OpenShift web console requires JavaScript to provide a rich interactive experience. Please enable JavaScript to continue. If you do not wish to enable JavaScript or are unable to do so, you may use the
1177 5月 07 23:37:52 host-172-16-120-63 atomic-openshift-node[86953]: command-line tools to manage your projects and applications instead.</p>\n</div>\n</noscript>\n</div>\n</div>\n</div>\n</div>\n<script src=\"config.js\"></script>\n<!--[if lt IE 9]>\n <script src=\"scripts/oldieshim.js\"></script>\n <![endif]-->\n<script src=\"scripts/vendor.js\"></script>\n<script src=\"scripts/templates.js\"></script>\n<script src=\"scripts/scripts.js\"></script>\n</body>\n</html> % Total % Received % Xferd Average Speed Time Time Time Current\n Dload Upload Total Spent Left Speed\n\r 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\r100 2243 0 2243 0 0 46797 0 --:--:-- --:--:-- --:--:-- 47723\n"
1178 5月 07 23:37:52 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:52.228648 86953 prober.go:118] Liveness probe for "webconsole-55dd868cdf-crvth_openshift-web-console(aebd73ce-519b-11e8-9f32-fa163edc217c):webconsole" succeeded
1179 5月 07 23:37:52 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:52.659914 86953 config.go:141] Calling handler.OnEndpointsUpdate
1180 5月 07 23:37:52 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:52.671146 86953 proxier.go:872] Setting endpoints for "lxlyp/registry:5000-tcp" to [10.128.0.124:5000]
1181 5月 07 23:37:52 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:52.671194 86953 proxier.go:872] Setting endpoints for "lxlyp/registry:5000-tcp" to [10.128.0.124:5000]
1182 5月 07 23:37:52 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:52.870358 86953 prober.go:165] HTTP-Probe Host: http://10.129.0.5, Port: 9090, Path: /ping
1183 5月 07 23:37:52 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:52.870410 86953 prober.go:168] HTTP-Probe Headers: map[]
1184 5月 07 23:37:52 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:52.871331 86953 http.go:96] Probe succeeded for http://10.129.0.5:9090/ping, Response: {200 OK 200 HTTP/1.1 1 1 map[Content-Type:[application/json] Access-Control-Allow-Origin:[*] Content-Length:[24]] 0xc421128880 24 [] true false map[] 0xc4229deb00 <nil>}
1185 5月 07 23:37:52 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:52.871377 86953 prober.go:118] Readiness probe for "registry-console-1-gnzd7_default(32c8e1e7-519c-11e8-9f32-fa163edc217c):registry-console" succeeded
1186 5月 07 23:37:53 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:53.181611 86953 generic.go:183] GenericPLEG: Relisting
1187 5月 07 23:37:53 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:53.296334 86953 kubelet.go:1924] SyncLoop (housekeeping)
1188 5月 07 23:37:53 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:53.303058 86953 kubelet_pods.go:1118] Killing unwanted pod "registry-console-1-deploy"
1189 5月 07 23:37:53 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:53.303874 86953 qos_container_manager_linux.go:317] [ContainerManager]: Updated QoS cgroup configuration
1190 5月 07 23:37:53 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:53.443987 86953 prober.go:165] HTTP-Probe Host: https://10.129.0.4, Port: 8443, Path: /healthz
1191 5月 07 23:37:53 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:53.444033 86953 prober.go:168] HTTP-Probe Headers: map[]
1192 5月 07 23:37:53 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:53.456410 86953 http.go:96] Probe succeeded for https://10.129.0.4:8443/healthz, Response: {200 OK 200 HTTP/2.0 2 0 map[Content-Type:[text/plain; charset=utf-8] Content-Length:[2] Date:[Tue, 08 May 2018 03:37:53 GMT]] 0xc421255460 2 [] false false map[] 0xc4229df300 0xc422037ad0}
1193 5月 07 23:37:53 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:53.456509 86953 prober.go:118] Readiness probe for "webconsole-55dd868cdf-crvth_openshift-web-console(aebd73ce-519b-11e8-9f32-fa163edc217c):webconsole" succeeded
1194 5月 07 23:37:54 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:54.182973 86953 generic.go:183] GenericPLEG: Relisting
1195 5月 07 23:37:54 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:54.222456 86953 eviction_manager.go:221] eviction manager: synchronize housekeeping
1196 5月 07 23:37:54 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:54.254115 86953 helpers.go:827] eviction manager: observations: signal=memory.available, available: 2508588Ki, capacity: 3881588Ki, time: 2018-05-07 23:37:53.657053147 -0400 EDT m=+1657.343843956
1197 5月 07 23:37:54 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:54.254178 86953 helpers.go:827] eviction manager: observations: signal=nodefs.available, available: 14511344Ki, capacity: 31010Mi, time: 2018-05-07 23:37:53.657053147 -0400 EDT m=+1657.343843956
1198 5月 07 23:37:54 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:54.254196 86953 helpers.go:827] eviction manager: observations: signal=nodefs.inodesFree, available: 15712847, capacity: 15510Ki, time: 2018-05-07 23:37:53.657053147 -0400 EDT m=+1657.343843956
1199 5月 07 23:37:54 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:54.254211 86953 helpers.go:827] eviction manager: observations: signal=imagefs.available, available: 14511344Ki, capacity: 31010Mi, time: 2018-05-07 23:37:53.657053147 -0400 EDT m=+1657.343843956
1200 5月 07 23:37:54 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:54.254219 86953 helpers.go:827] eviction manager: observations: signal=imagefs.inodesFree, available: 15712847, capacity: 15510Ki, time: 2018-05-07 23:37:53.657053147 -0400 EDT m=+1657.343843956
1201 5月 07 23:37:54 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:54.254226 86953 helpers.go:829] eviction manager: observations: signal=allocatableMemory.available, available: 3663640Ki, capacity: 3881588Ki
1202 5月 07 23:37:54 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:54.254253 86953 eviction_manager.go:325] eviction manager: no resources are starved
1203 5月 07 23:37:54 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:54.831142 86953 reflector.go:428] github.com/openshift/origin/pkg/network/generated/informers/internalversion/factory.go:57: Watch close - *network.HostSubnet total 0 items received
1204 5月 07 23:37:55 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:55.117036 86953 prober.go:165] HTTP-Probe Host: https://10.129.0.8, Port: 8443, Path: /healthz
1205 5月 07 23:37:55 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:55.117087 86953 prober.go:168] HTTP-Probe Headers: map[]
1206 5月 07 23:37:55 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:55.200836 86953 http.go:96] Probe succeeded for https://10.129.0.8:8443/healthz, Response: {200 OK 200 HTTP/2.0 2 0 map[Content-Type:[text/plain; charset=utf-8] Content-Length:[2] Date:[Tue, 08 May 2018 03:37:55 GMT]] 0xc4211cacc0 2 [] false false map[] 0xc422e7f300 0xc4222aa4d0}
1207 5月 07 23:37:55 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:55.200914 86953 prober.go:118] Readiness probe for "apiserver-qq6rl_openshift-template-service-broker(8beaacd9-519c-11e8-9f32-fa163edc217c):c" succeeded
1208 5月 07 23:37:55 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:55.201180 86953 generic.go:183] GenericPLEG: Relisting
1209 5月 07 23:37:55 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:55.296371 86953 kubelet.go:1924] SyncLoop (housekeeping)
1210 5月 07 23:37:55 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:55.303808 86953 kubelet_pods.go:1118] Killing unwanted pod "registry-console-1-deploy"
1211 5月 07 23:37:55 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:55.304374 86953 qos_container_manager_linux.go:317] [ContainerManager]: Updated QoS cgroup configuration
1212 5月 07 23:37:55 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:55.409319 86953 kubelet.go:2103] Container runtime status: Runtime Conditions: RuntimeReady=true reason: message:, NetworkReady=true reason: message:
1213 5月 07 23:37:56 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:56.205142 86953 generic.go:183] GenericPLEG: Relisting
1214 5月 07 23:37:56 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:56.296343 86953 kubelet.go:1901] SyncLoop (SYNC): 1 pods; registry-console-1-gnzd7_default(32c8e1e7-519c-11e8-9f32-fa163edc217c)
1215 5月 07 23:37:56 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:56.297188 86953 kubelet_pods.go:1381] Generating status for "registry-console-1-gnzd7_default(32c8e1e7-519c-11e8-9f32-fa163edc217c)"
1216 5月 07 23:37:56 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:56.297781 86953 status_manager.go:353] Ignoring same status for pod "registry-console-1-gnzd7_default(32c8e1e7-519c-11e8-9f32-fa163edc217c)", status: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2018-05-06 22:13:14 -0400 EDT Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2018-05-07 23:10:22 -0400 EDT Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2018-05-06 22:13:14 -0400 EDT Reason: Message:}] Message: Reason: HostIP:172.16.120.63 PodIP:10.129.0.5 StartTime:2018-05-06 22:13:14 -0400 EDT InitContainerStatuses:[] ContainerStatuses:[{Name:registry-console State:{Waiting:nil Running:&ContainerStateRunning{StartedAt:2018-05-06 22:14:35 -0400 EDT,} Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:true RestartCount:0 Image:registry.reg-aws.openshift.com:443/openshift3/registry-console:v3.9 ImageID:registry.reg-aws.openshift.com:443/openshift3/registry-console@sha256:9f97701c4f588c8d6d1679e4262759f03ed8751ce7a72b3b7a7e7a11cd985141 ContainerID:cri-o://588f71d23c17a26cafa38ec43a17571bde4b3f6de433eee8c47fd541168eb69b}] QOSClass:BestEffort}
1217 5月 07 23:37:56 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:56.298380 86953 volume_manager.go:344] Waiting for volumes to attach and mount for pod "registry-console-1-gnzd7_default(32c8e1e7-519c-11e8-9f32-fa163edc217c)"
1218 5月 07 23:37:56 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:56.305133 86953 desired_state_of_world_populator.go:298] Added volume "default-token-z8sch" (volSpec="default-token-z8sch") for pod "32c8e1e7-519c-11e8-9f32-fa163edc217c" to desired state.
1219 5月 07 23:37:56 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:56.405337 86953 operation_executor.go:895] Starting operationExecutor.MountVolume for volume "default-token-z8sch" (UniqueName: "kubernetes.io/secret/32c8e1e7-519c-11e8-9f32-fa163edc217c-default-token-z8sch") pod "registry-console-1-gnzd7" (UID: "32c8e1e7-519c-11e8-9f32-fa163edc217c") Volume is already mounted to pod, but remount was requested.
1220 5月 07 23:37:56 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:56.406178 86953 volume_host.go:218] using default mounter/exec for kubernetes.io/secret
1221 5月 07 23:37:56 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:56.406650 86953 reconciler.go:264] operationExecutor.MountVolume started for volume "default-token-z8sch" (UniqueName: "kubernetes.io/secret/32c8e1e7-519c-11e8-9f32-fa163edc217c-default-token-z8sch") pod "registry-console-1-gnzd7" (UID: "32c8e1e7-519c-11e8-9f32-fa163edc217c") Volume is already mounted to pod, but remount was requested.
1222 5月 07 23:37:56 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:56.406783 86953 secret.go:186] Setting up volume default-token-z8sch for pod 32c8e1e7-519c-11e8-9f32-fa163edc217c at /var/lib/origin/openshift.local.volumes/pods/32c8e1e7-519c-11e8-9f32-fa163edc217c/volumes/kubernetes.io~secret/default-token-z8sch
1223 5月 07 23:37:56 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:56.407409 86953 volume_host.go:218] using default mounter/exec for kubernetes.io/empty-dir
1224 5月 07 23:37:56 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:56.407424 86953 volume_host.go:218] using default mounter/exec for kubernetes.io/empty-dir
1225 5月 07 23:37:56 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:56.412753 86953 secret.go:216] Received secret default/default-token-z8sch containing (4) pieces of data, 4109 total bytes
1226 5月 07 23:37:56 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:56.412910 86953 atomic_writer.go:332] /var/lib/origin/openshift.local.volumes/pods/32c8e1e7-519c-11e8-9f32-fa163edc217c/volumes/kubernetes.io~secret/default-token-z8sch: current paths: [ca.crt namespace service-ca.crt token]
1227 5月 07 23:37:56 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:56.412924 86953 atomic_writer.go:344] /var/lib/origin/openshift.local.volumes/pods/32c8e1e7-519c-11e8-9f32-fa163edc217c/volumes/kubernetes.io~secret/default-token-z8sch: new paths: [ca.crt namespace service-ca.crt token]
1228 5月 07 23:37:56 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:56.412932 86953 atomic_writer.go:347] /var/lib/origin/openshift.local.volumes/pods/32c8e1e7-519c-11e8-9f32-fa163edc217c/volumes/kubernetes.io~secret/default-token-z8sch: paths to remove: map[]
1229 5月 07 23:37:56 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:56.413326 86953 atomic_writer.go:156] pod default/registry-console-1-gnzd7 volume default-token-z8sch: no update required for target directory /var/lib/origin/openshift.local.volumes/pods/32c8e1e7-519c-11e8-9f32-fa163edc217c/volumes/kubernetes.io~secret/default-token-z8sch
1230 5月 07 23:37:56 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:56.413977 86953 operation_generator.go:552] MountVolume.SetUp succeeded for volume "default-token-z8sch" (UniqueName: "kubernetes.io/secret/32c8e1e7-519c-11e8-9f32-fa163edc217c-default-token-z8sch") pod "registry-console-1-gnzd7" (UID: "32c8e1e7-519c-11e8-9f32-fa163edc217c")
1231 5月 07 23:37:56 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:56.598671 86953 volume_manager.go:372] All volumes are attached and mounted for pod "registry-console-1-gnzd7_default(32c8e1e7-519c-11e8-9f32-fa163edc217c)"
1232 5月 07 23:37:56 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:56.602872 86953 kuberuntime_manager.go:442] Syncing Pod "registry-console-1-gnzd7_default(32c8e1e7-519c-11e8-9f32-fa163edc217c)": &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:registry-console-1-gnzd7,GenerateName:registry-console-1-,Namespace:default,SelfLink:/api/v1/namespaces/default/pods/registry-console-1-gnzd7,UID:32c8e1e7-519c-11e8-9f32-fa163edc217c,ResourceVersion:186774,Generation:0,CreationTimestamp:2018-05-06 22:13:14 -0400 EDT,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{app: registry-console,deployment: registry-console-1,deploymentconfig: registry-console,name: registry-console,},Annotations:map[string]string{kubernetes.io/config.seen: 2018-05-07T23:10:17.279999287-04:00,kubernetes.io/config.source: api,openshift.io/deployment-config.latest-version: 1,openshift.io/deployment-config.name: registry-console,openshift.io/deployment.name: registry-console-1,openshift.io/generated-by: OpenShiftNewApp,openshift.io/scc: restricted,},OwnerReferences:[{v1 ReplicationController registry-console-1 a7766e41-519b-11e8-9f32-fa163edc217c 0xc42068f630 0xc42068f631}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-z8sch {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z8sch,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{registry-console registry.reg-aws.openshift.com:443/openshift3/registry-console:v3.9 [] [] [{ 0 9090 TCP }] [] [{OPENSHIFT_OAUTH_PROVIDER_URL https://host-8-249-82.host.centralci.eng.rdu2.redhat.com:8443 nil} {OPENSHIFT_OAUTH_CLIENT_ID cockpit-oauth-client nil} {KUBERNETES_INSECURE false nil} {COCKPIT_KUBE_INSECURE false nil} {REGISTRY_ONLY true nil} {REGISTRY_HOST docker-registry-default.apps.0506-49c.qe.rhcloud.com nil}] {map[] map[]} [{default-token-z8sch true 
/var/run/secrets/kubernetes.io/serviceaccount <nil>}] [] Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ping,Por
1233 5月 07 23:37:56 host-172-16-120-63 atomic-openshift-node[86953]: t:9090,Host:,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} &Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ping,Port:9090,Host:,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} nil /dev/termination-log File IfNotPresent &SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[KILL MKNOD SETGID SETUID],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000000000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:172.16.120.63,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c1,c0,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:*1000000000,},ImagePullSecrets:[{default-dockercfg-tk975}],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-05-06 22:13:14 -0400 EDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-05-07 23:10:22 -0400 EDT } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-05-06 22:13:14 -0400 EDT }],Message:,Reason:,HostIP:172.16.120.63,PodIP:10.129.0.5,StartTime:2018-05-06 22:13:14 -0400 EDT,ContainerStatuses:[{registry-console {nil ContainerStateRunning{StartedAt:2018-05-06 22:14:35 -0400 EDT,} nil} {nil nil nil} true 0 registry.reg-aws.openshift.com:443/openshift3/registry-console:v3.9 
registry.reg-aws.openshift.com:443/openshift3/registry-console@sha256:9f97701c4f588c8d6d1679e4262759f03ed8751ce7a72b3b7a7e7a11cd985
1234 5月 07 23:37:56 host-172-16-120-63 atomic-openshift-node[86953]: 141 cri-o://588f71d23c17a26cafa38ec43a17571bde4b3f6de433eee8c47fd541168eb69b}],QOSClass:BestEffort,InitContainerStatuses:[],},}
1235 5月 07 23:37:56 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:56.605343 86953 kuberuntime_manager.go:571] computePodActions got {KillPod:false CreateSandbox:false SandboxID:53e8bc2004d7684a9446d1516edde199a3656586a19fcd19981b0e48be6aad1d Attempt:0 NextInitContainerToStart:nil ContainersToStart:[] ContainersToKill:map[]} for pod "registry-console-1-gnzd7_default(32c8e1e7-519c-11e8-9f32-fa163edc217c)"
1236 5月 07 23:37:57 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:57.207277 86953 generic.go:183] GenericPLEG: Relisting
1237 5月 07 23:37:57 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:57.296654 86953 kubelet.go:1924] SyncLoop (housekeeping)
1238 5月 07 23:37:57 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:57.303956 86953 kubelet_pods.go:1118] Killing unwanted pod "registry-console-1-deploy"
1239 5月 07 23:37:57 host-172-16-120-63 atomic-openshift-node[86953]: I0507 23:37:57.305191 86953 qos_container_manager_linux.go:317] [ContainerManager]: Updated QoS cgroup configuration
1240
1241 [03:39:20] INFO> Exit Status: 0
1242 [03:39:20] INFO> Node service will be restarted after scenario on 172.16.120.63
1243 [03:39:20] INFO> Remote cmd: `cd '/tmp/workdir/localhost-szh'
1244 find '/etc/sysconfig/docker' -maxdepth 0 -type f` @ssh://root@host-8-249-82.host.centralci.eng.rdu2.redhat.com
1245 /etc/sysconfig/docker
1246
1247 [03:39:21] INFO> Exit Status: 0
1248 [03:39:21] INFO> Remote cmd: `cd '/tmp/workdir/localhost-szh'
1249 tar --selinux --acls --xattrs -cvPf '/etc/sysconfig/docker.tar' '/etc/sysconfig/docker'` @ssh://root@host-8-249-82.host.centralci.eng.rdu2.redhat.com
1250 /etc/sysconfig/docker
1251
1252 [03:39:23] INFO> Exit Status: 0
1253 [03:39:23] INFO> Remote cmd: `cd '/tmp/workdir/localhost-szh'
1254 sed -i '/^INSECURE_REGISTRY*/d' /etc/sysconfig/docker` @ssh://root@host-8-249-82.host.centralci.eng.rdu2.redhat.com
1255
1256 [03:39:25] INFO> Exit Status: 0
1257 [03:39:25] INFO> Remote cmd: `cd '/tmp/workdir/localhost-szh'
1258 echo "INSECURE_REGISTRY='--insecure-registry 172.30.248.135:5000'" >> /etc/sysconfig/docker` @ssh://root@host-8-249-82.host.centralci.eng.rdu2.redhat.com
1259
1260 [03:39:26] INFO> Exit Status: 0
1261 [03:39:26] INFO> Remote cmd: `cd '/tmp/workdir/localhost-szh'
1262 systemctl restart docker` @ssh://root@host-8-249-82.host.centralci.eng.rdu2.redhat.com
1263
1264 [03:39:31] INFO> Exit Status: 0
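Editor's note: the two remote commands above implement an idempotent swap of the INSECURE_REGISTRY setting: delete any existing line, then append the new one. (The trailing `*` in the harness's pattern `/^INSECURE_REGISTRY*/d` only makes the final `Y` optional and is almost certainly unintended; plain `/^INSECURE_REGISTRY/d` expresses the intent.) A minimal sketch of the same pattern against a scratch file, with the registry IP copied from the log:

```shell
# Idempotent update of INSECURE_REGISTRY, demonstrated on a scratch copy
# (the path is a temp file, not a live /etc/sysconfig/docker).
cfg=$(mktemp)
printf 'OPTIONS=--selinux-enabled\nINSECURE_REGISTRY=old\n' > "$cfg"

# Delete any existing INSECURE_REGISTRY line, then append the new one,
# so repeated runs never accumulate duplicates.
sed -i '/^INSECURE_REGISTRY/d' "$cfg"
echo "INSECURE_REGISTRY='--insecure-registry 172.30.248.135:5000'" >> "$cfg"

grep -c '^INSECURE_REGISTRY' "$cfg"   # exactly one entry remains
```

Because the delete runs first, re-running the scenario leaves exactly one INSECURE_REGISTRY entry no matter how many times it executes.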
1265 [03:39:35] INFO> oc get pods --output=yaml -l app\=registry -l deploymentconfig\=registry --config=/home/szh/workdir/localhost-szh/ose_zhsun_1.kubeconfig -n lxlyp
1266 [03:39:35] INFO> After 1 iterations and 4 seconds:
1267 apiVersion: v1
1268 items:
1269 - apiVersion: v1
1270 kind: Pod
1271 metadata:
1272 annotations:
1273 openshift.io/deployment-config.latest-version: "4"
1274 openshift.io/deployment-config.name: registry
1275 openshift.io/deployment.name: registry-4
1276 openshift.io/generated-by: OpenShiftNewApp
1277 openshift.io/scc: restricted
1278 creationTimestamp: 2018-05-08T03:37:50Z
1279 generateName: registry-4-
1280 labels:
1281 app: registry
1282 deployment: registry-4
1283 deploymentconfig: registry
1284 name: registry-4-7xqw9
1285 namespace: lxlyp
1286 ownerReferences:
1287 - apiVersion: v1
1288 blockOwnerDeletion: true
1289 controller: true
1290 kind: ReplicationController
1291 name: registry-4
1292 uid: 2c7c6486-5271-11e8-a042-fa163edc217c
1293 resourceVersion: "193884"
1294 selfLink: /api/v1/namespaces/lxlyp/pods/registry-4-7xqw9
1295 uid: 2ec60375-5271-11e8-a042-fa163edc217c
1296 spec:
1297 containers:
1298 - env:
1299 - name: REGISTRY_AUTH_HTPASSWD_PATH
1300 value: /auth/htpasswd
1301 - name: REGISTRY_AUTH_HTPASSWD_REALM
1302 value: Registry Realm
1303 - name: REGISTRY_AUTH
1304 value: htpasswd
1305 image: registry@sha256:feb40d14cd33e646b9985e2d6754ed66616fedb840226c4d917ef53d616dcd6c
1306 imagePullPolicy: IfNotPresent
1307 name: registry
1308 ports:
1309 - containerPort: 5000
1310 protocol: TCP
1311 readinessProbe:
1312 failureThreshold: 3
1313 httpGet:
1314 httpHeaders:
1315 - name: Authorization
1316 value: Basic dGVzdHVzZXI6dGVzdHBhc3N3b3Jk
1317 path: /v2/
1318 port: 5000
1319 scheme: HTTP
1320 periodSeconds: 10
1321 successThreshold: 1
1322 timeoutSeconds: 1
1323 resources: {}
1324 securityContext:
1325 capabilities:
1326 drop:
1327 - KILL
1328 - MKNOD
1329 - SETGID
1330 - SETUID
1331 runAsUser: 1000390000
1332 terminationMessagePath: /dev/termination-log
1333 terminationMessagePolicy: File
1334 volumeMounts:
1335 - mountPath: /var/lib/registry
1336 name: registry-volume-1
1337 - mountPath: /auth
1338 name: volume-2rr4t
1339 - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
1340 name: default-token-fmdcs
1341 readOnly: true
1342 dnsPolicy: ClusterFirst
1343 imagePullSecrets:
1344 - name: default-dockercfg-hq7mv
1345 nodeName: 172.16.120.67
1346 nodeSelector:
1347 node-role.kubernetes.io/compute: "true"
1348 restartPolicy: Always
1349 schedulerName: default-scheduler
1350 securityContext:
1351 fsGroup: 1000390000
1352 seLinuxOptions:
1353 level: s0:c20,c5
1354 serviceAccount: default
1355 serviceAccountName: default
1356 terminationGracePeriodSeconds: 30
1357 volumes:
1358 - emptyDir: {}
1359 name: registry-volume-1
1360 - name: volume-2rr4t
1361 secret:
1362 defaultMode: 420
1363 secretName: htpasswd-secret
1364 - name: default-token-fmdcs
1365 secret:
1366 defaultMode: 420
1367 secretName: default-token-fmdcs
1368 status:
1369 conditions:
1370 - lastProbeTime: null
1371 lastTransitionTime: 2018-05-08T03:37:56Z
1372 status: "True"
1373 type: Initialized
1374 - lastProbeTime: null
1375 lastTransitionTime: 2018-05-08T03:38:04Z
1376 status: "True"
1377 type: Ready
1378 - lastProbeTime: null
1379 lastTransitionTime: 2018-05-08T03:37:50Z
1380 status: "True"
1381 type: PodScheduled
1382 containerStatuses:
1383 - containerID: cri-o://7f01224dd6199579bbe1b88649747df141e87288a0a7839d1139187e486bec34
1384 image: docker.io/library/registry@sha256:feb40d14cd33e646b9985e2d6754ed66616fedb840226c4d917ef53d616dcd6c
1385 imageID: docker.io/library/registry@sha256:feb40d14cd33e646b9985e2d6754ed66616fedb840226c4d917ef53d616dcd6c
1386 lastState: {}
1387 name: registry
1388 ready: true
1389 restartCount: 0
1390 state:
1391 running:
1392 startedAt: 2018-05-08T03:37:58Z
1393 hostIP: 172.16.120.67
1394 phase: Running
1395 podIP: 10.128.0.128
1396 qosClass: BestEffort
1397 startTime: 2018-05-08T03:37:56Z
1398 kind: List
1399 metadata:
1400 resourceVersion: ""
1401 selfLink: ""
1402
1403 And I log into auth registry on the node # features/step_definitions/registry.rb:138
1404 [03:39:35] INFO> Remote cmd: `cd '/tmp/workdir/localhost-szh'
1405 docker login -u testuser -p testpassword 172.30.248.135:5000` @ssh://root@host-8-249-82.host.centralci.eng.rdu2.redhat.com
1406 Login Succeeded
1407
1408 [03:39:37] INFO> Exit Status: 0
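Editor's note: the credentials accepted by this `docker login` are the same ones baked into the registry pod's readiness probe in the YAML above (`Authorization: Basic dGVzdHVzZXI6dGVzdHBhc3N3b3Jk`): the Basic value is simply the base64 encoding of `user:password`.

```shell
# The readiness probe's Basic header value is base64("testuser:testpassword");
# printf avoids the trailing newline that echo would fold into the encoding.
printf '%s' 'testuser:testpassword' | base64
# → dGVzdHVzZXI6dGVzdHBhc3N3b3Jk
```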
1409waiting for operation up to 3600 seconds..
1410 Given default registry service ip is stored in the :integrated_reg_ip clipboard # features/step_definitions/registry.rb:332
1411 [03:39:37] INFO> Shell Commands: oc get services docker-registry --output=yaml --config=/home/szh/workdir/localhost-szh/ose_admin.kubeconfig --namespace=default
1412 apiVersion: v1
1413 kind: Service
1414 metadata:
1415 creationTimestamp: 2018-05-07T02:07:43Z
1416 labels:
1417 docker-registry: default
1418 name: docker-registry
1419 namespace: default
1420 resourceVersion: "1899"
1421 selfLink: /api/v1/namespaces/default/services/docker-registry
1422 uid: 6db00884-519b-11e8-9f32-fa163edc217c
1423 spec:
1424 clusterIP: 172.30.10.211
1425 ports:
1426 - name: 5000-tcp
1427 port: 5000
1428 protocol: TCP
1429 targetPort: 5000
1430 selector:
1431 docker-registry: default
1432 sessionAffinity: ClientIP
1433 sessionAffinityConfig:
1434 clientIP:
1435 timeoutSeconds: 10800
1436 type: ClusterIP
1437 status:
1438 loadBalancer: {}
1439
1440 [03:39:39] INFO> Exit Status: 0
1441 When I run commands on the host: # features/step_definitions/node.rb:65
1442 [03:39:39] INFO> Remote cmd: `cd '/tmp/workdir/localhost-szh'
1443 docker pull docker.io/ansibleplaybookbundle/mediawiki-apb:v3.9` @ssh://root@host-8-249-82.host.centralci.eng.rdu2.redhat.com
1444 Trying to pull repository docker.io/ansibleplaybookbundle/mediawiki-apb ...
1445 v3.9: Pulling from docker.io/ansibleplaybookbundle/mediawiki-apb
1446 Digest: sha256:ab0b16ee1118d29747b295dae1ab5149a5686dd8fd780f5ab21ab08f919b7fde
1447 Status: Image is up to date for docker.io/ansibleplaybookbundle/mediawiki-apb:v3.9
1448
1449 [03:39:41] INFO> Exit Status: 0
1450 | docker pull docker.io/ansibleplaybookbundle/mediawiki-apb:v3.9 |
1451 Then the step should succeed # features/step_definitions/common.rb:4
1452 And I run commands on the host: # features/step_definitions/node.rb:65
1453 [03:39:41] INFO> Remote cmd: `cd '/tmp/workdir/localhost-szh'
1454 docker tag docker.io/ansibleplaybookbundle/mediawiki-apb:v3.9 172.30.10.211:5000/openshift/mediawiki-apb:latest` @ssh://root@host-8-249-82.host.centralci.eng.rdu2.redhat.com
1455
1456 [03:39:42] INFO> Exit Status: 0
1457 | docker tag docker.io/ansibleplaybookbundle/mediawiki-apb:v3.9 <%= cb.integrated_reg_ip %>/openshift/mediawiki-apb:latest |
1458 # | docker tag docker.io/ansibleplaybookbundle/mediawiki-apb:v3.9 <%= cb.integrated_reg_ip %>/<%= project.name %>/mediawiki-apb:latest |
1459 Then the step should succeed # features/step_definitions/common.rb:4
1460 When I run commands on the host: # features/step_definitions/node.rb:65
1461 [03:39:42] INFO> Remote cmd: `cd '/tmp/workdir/localhost-szh'
1462 docker push 172.30.10.211:5000/openshift/mediawiki-apb:latest` @ssh://root@host-8-249-82.host.centralci.eng.rdu2.redhat.com
1463 The push refers to a repository [172.30.10.211:5000/openshift/mediawiki-apb]
1464 b21f4ee3875d: Preparing
1465 cf793eeee526: Preparing
1466 ab2b24e436b8: Preparing
1467 6a0866bebe27: Preparing
1468 e15afa4858b6: Preparing
1469 6a0866bebe27: Layer already exists
1470 e15afa4858b6: Layer already exists
1471 ab2b24e436b8: Layer already exists
1472 cf793eeee526: Layer already exists
1473 b21f4ee3875d: Mounted from 4twct/mediawiki-apb
1474 latest: digest: sha256:ab0b16ee1118d29747b295dae1ab5149a5686dd8fd780f5ab21ab08f919b7fde size: 1370
1475
1476 [03:39:45] INFO> Exit Status: 0
1477 | docker push <%= cb.integrated_reg_ip %>/openshift/mediawiki-apb:latest |
1478 Then the step should succeed # features/step_definitions/common.rb:4
1479 When I run commands on the host: # features/step_definitions/node.rb:65
1480 [03:39:45] INFO> Remote cmd: `cd '/tmp/workdir/localhost-szh'
1481 docker pull docker.io/ansibleplaybookbundle/mariadb-apb:v3.9` @ssh://root@host-8-249-82.host.centralci.eng.rdu2.redhat.com
1482 Trying to pull repository docker.io/ansibleplaybookbundle/mariadb-apb ...
1483 v3.9: Pulling from docker.io/ansibleplaybookbundle/mariadb-apb
1484 Digest: sha256:cb95d4e7c600a1e459ecdbff056d174850c2df95286dedc51dc0498d21ff74c4
1485 Status: Image is up to date for docker.io/ansibleplaybookbundle/mariadb-apb:v3.9
1486
1487 [03:39:48] INFO> Exit Status: 0
1488 | docker pull docker.io/ansibleplaybookbundle/mariadb-apb:v3.9 |
1489 Then the step should succeed # features/step_definitions/common.rb:4
1490 And I run commands on the host: # features/step_definitions/node.rb:65
1491 [03:39:48] INFO> Remote cmd: `cd '/tmp/workdir/localhost-szh'
1492 docker tag docker.io/ansibleplaybookbundle/mariadb-apb:v3.9 172.30.10.211:5000/openshift/mariadb-apb:latest` @ssh://root@host-8-249-82.host.centralci.eng.rdu2.redhat.com
1493
1494 [03:39:50] INFO> Exit Status: 0
1495 | docker tag docker.io/ansibleplaybookbundle/mariadb-apb:v3.9 <%= cb.integrated_reg_ip %>/openshift/mariadb-apb:latest |
1496 Then the step should succeed # features/step_definitions/common.rb:4
1497 # When I docker push on the node to the registry the following images:
1498 # | <%= cb.integrated_reg_ip %>/openshift/mariadb-apb:v3.9 | docker.io/ansibleplaybookbundle/mariadb-apb:v3.9 |
1499 When I run commands on the host: # features/step_definitions/node.rb:65
1500 [03:39:50] INFO> Remote cmd: `cd '/tmp/workdir/localhost-szh'
1501 docker push 172.30.10.211:5000/openshift/mariadb-apb:latest` @ssh://root@host-8-249-82.host.centralci.eng.rdu2.redhat.com
1502 The push refers to a repository [172.30.10.211:5000/openshift/mariadb-apb]
1503 4edfbc7c2de9: Preparing
1504 cf793eeee526: Preparing
1505 ab2b24e436b8: Preparing
1506 6a0866bebe27: Preparing
1507 e15afa4858b6: Preparing
1508 e15afa4858b6: Mounted from openshift/mediawiki-apb
1509 6a0866bebe27: Mounted from openshift/mediawiki-apb
1510 ab2b24e436b8: Mounted from openshift/mediawiki-apb
1511 cf793eeee526: Mounted from openshift/mediawiki-apb
1512 4edfbc7c2de9: Mounted from openshift/mediawiki-apb
1513 latest: digest: sha256:cb95d4e7c600a1e459ecdbff056d174850c2df95286dedc51dc0498d21ff74c4 size: 1370
1514
1515 [03:39:53] INFO> Exit Status: 0
1516 | docker push <%= cb.integrated_reg_ip %>/openshift/mariadb-apb:latest |
1517 Then the step should succeed # features/step_definitions/common.rb:4
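Editor's note: both APB images go through the same mirror pattern: pull from docker.io, retag as `<integrated_reg_ip>/openshift/<name>:latest`, push. A sketch of how such a target reference can be assembled (the IP is the docker-registry clusterIP stored in the `:integrated_reg_ip` clipboard above; the variable names are illustrative):

```shell
# Build the integrated-registry target reference used by the tag/push steps.
reg_ip="172.30.10.211:5000"
src="docker.io/ansibleplaybookbundle/mediawiki-apb:v3.9"
name=$(basename "${src%:*}")          # strip the tag, keep the repo name
target="$reg_ip/openshift/$name:latest"
echo "$target"
# → 172.30.10.211:5000/openshift/mediawiki-apb:latest

# On a live host the mirror itself would then be:
#   docker pull "$src" && docker tag "$src" "$target" && docker push "$target"
```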
1518waiting for operation up to 3600 seconds..
1519waiting for operation up to 3600 seconds..
1520waiting for operation up to 3600 seconds..
1521waiting for operation up to 3600 seconds..
1522waiting for operation up to 3600 seconds..
1523waiting for operation up to 3600 seconds..
1524waiting for operation up to 3600 seconds..
1525 [03:39:53] INFO> === After Scenario: [ASB] Support concurrent, multiple APB source adapters ===
1526 [03:39:53] INFO> Remote cmd: `cd '/tmp/workdir/localhost-szh'
1527 tar xvPf '/etc/sysconfig/docker.tar' && rm -f '/etc/sysconfig/docker.tar'` @ssh://root@host-8-249-82.host.centralci.eng.rdu2.redhat.com
1528 /etc/sysconfig/docker
1529
1530 [03:39:55] INFO> Exit Status: 0
1531 [03:39:55] INFO> Remote cmd: `cd '/tmp/workdir/localhost-szh'
1532 systemctl restart docker` @ssh://root@host-8-249-82.host.centralci.eng.rdu2.redhat.com
1533
1534 [03:39:59] INFO> Exit Status: 0
1535 [03:40:02] INFO> Remote cmd: `cd '/tmp/workdir/localhost-szh'
1536 systemctl is-active docker` @ssh://root@host-8-249-82.host.centralci.eng.rdu2.redhat.com
1537 active
1538
1539 [03:40:04] INFO> Exit Status: 0
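Editor's note: the After Scenario hook restores /etc/sysconfig/docker from the tarball taken in Before Scenario, preserving SELinux labels, ACLs, and xattrs, then bounces docker and verifies it is active. The round-trip, sketched on a scratch file (the extra tar flags assume a GNU tar built with SELinux/xattr support):

```shell
# Backup/restore round-trip with tar, as the teardown does for
# /etc/sysconfig/docker; demonstrated on a temp file instead.
src=$(mktemp)
echo 'INSECURE_REGISTRY=original' > "$src"

# -P keeps absolute path names, so extraction restores the file in place.
tar --selinux --acls --xattrs -cPf "$src.tar" "$src"

echo 'INSECURE_REGISTRY=mutated' > "$src"    # simulate the scenario's edit
tar xPf "$src.tar" && rm -f "$src.tar"       # restore and drop the backup

cat "$src"   # back to the original contents
```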
1540 [03:40:22] INFO> before restart status of service atomic-openshift-node.service on host-8-249-82.host.centralci.eng.rdu2.redhat.com is: active
1541 [03:40:22] INFO> Remote cmd: `cd '/tmp/workdir/localhost-szh'
1542 systemctl restart atomic-openshift-node.service` @ssh://root@host-8-249-82.host.centralci.eng.rdu2.redhat.com
1543
1544 [03:40:24] INFO> Exit Status: 0
1545 [03:40:44] INFO> Remote cmd: `cd '/tmp/workdir/localhost-szh'
1546 systemctl status atomic-openshift-node.service` @ssh://root@host-8-249-82.host.centralci.eng.rdu2.redhat.com
1547 ● atomic-openshift-node.service - OpenShift Node
1548 Loaded: loaded (/etc/systemd/system/atomic-openshift-node.service; enabled; vendor preset: disabled)
1549 Drop-In: /usr/lib/systemd/system/atomic-openshift-node.service.d
1550 └─openshift-sdn-ovs.conf
1551 Active: active (running) since Mon 2018-05-07 23:39:04 EDT; 21s ago
1552 Docs: https://github.com/openshift/origin
1553 Process: 96634 ExecStopPost=/usr/bin/dbus-send --system --dest=uk.org.thekelleys.dnsmasq /uk/org/thekelleys/dnsmasq uk.org.thekelleys.SetDomainServers array:string: (code=exited, status=0/SUCCESS)
1554 Process: 96632 ExecStopPost=/usr/bin/rm /etc/dnsmasq.d/node-dnsmasq.conf (code=exited, status=0/SUCCESS)
1555 Process: 96640 ExecStartPre=/usr/bin/dbus-send --system --dest=uk.org.thekelleys.dnsmasq /uk/org/thekelleys/dnsmasq uk.org.thekelleys.SetDomainServers array:string:/in-addr.arpa/127.0.0.1,/cluster.local/127.0.0.1 (code=exited, status=0/SUCCESS)
1556 Process: 96637 ExecStartPre=/usr/bin/cp /etc/origin/node/node-dnsmasq.conf /etc/dnsmasq.d/ (code=exited, status=0/SUCCESS)
1557 Main PID: 96642 (openshift)
1558 Tasks: 13
1559 Memory: 76.2M
1560 CGroup: /system.slice/atomic-openshift-node.service
1561 └─96642 /usr/bin/openshift start node --config=/etc/origin/node/node-config.yaml --loglevel=5
1562
1563 5月 07 23:39:26 host-172-16-120-63 atomic-openshift-node[96642]: md5sum /var/webconsole-config/webconsole-config.yaml > /tmp/webconsole-config.hash; \
1564 5月 07 23:39:26 host-172-16-120-63 atomic-openshift-node[96642]: elif [[ $(md5sum /var/webconsole-config/webconsole-config.yaml) != $(cat /tmp/webconsole-config.hash) ]]; then \
1565 5月 07 23:39:26 host-172-16-120-63 atomic-openshift-node[96642]: echo 'webconsole-config.yaml has changed.'; \
1566 5月 07 23:39:26 host-172-16-120-63 atomic-openshift-node[96642]: exit 1; \
1567 5月 07 23:39:26 host-172-16-120-63 atomic-openshift-node[96642]: fi && curl -k -f https://0.0.0.0:8443/console/],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} &Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:8443,Host:,Scheme:HTTPS,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} nil /dev/termination-log File IfNotPresent &SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[KILL MKNOD SETGID SETUID],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000080000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,} false false false}, Command: [/bin/sh -c if [[ ! -f /tmp/webconsole-config.hash ]]; then \
1568 5月 07 23:39:26 host-172-16-120-63 atomic-openshift-node[96642]: md5sum /var/webconsole-config/webconsole-config.yaml > /tmp/webconsole-config.hash; \
1569 5月 07 23:39:26 host-172-16-120-63 atomic-openshift-node[96642]: elif [[ $(md5sum /var/webconsole-config/webconsole-config.yaml) != $(cat /tmp/webconsole-config.hash) ]]; then \
1570 5月 07 23:39:26 host-172-16-120-63 atomic-openshift-node[96642]: echo 'webconsole-config.yaml has changed.'; \
1571 5月 07 23:39:26 host-172-16-120-63 atomic-openshift-node[96642]: exit 1; \
1572 5月 07 23:39:26 host-172-16-120-63 atomic-openshift-node[96642]: fi && curl -k -f https://0.0.0.0:8443/console/]
1573 5月 07 23:39:26 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:26.603022 96642 exec.go:38] Exec probe response: "<!doctype html>\n<html class=\"no-js layout-pf layout-pf-fixed\">\n<head>\n<meta http-equiv=\"X-UA-Compatible\" content=\"IE=EDGE\"/>\n<meta charset=\"utf-8\">\n<base href=\"/console/\">\n<title>OpenShift Web Console</title>\n<meta name=\"description\" content=\"\">\n<meta name=\"viewport\" content=\"width=device-width,initial-scale=1\">\n<link rel=\"icon\" type=\"image/png\" href=\"images/favicon.png\"/>\n<link rel=\"icon\" type=\"image/x-icon\" href=\"images/favicon.ico\"/>\n<link rel=\"apple-touch-icon-precomposed\" sizes=\"144x144\" href=\"images/apple-touch-icon-precomposed.png\">\n<link rel=\"mask-icon\" href=\"images/mask-icon.svg\" color=\"#DB242F\">\n<meta name=\"application-name\" content=\"OpenShift\">\n<meta name=\"msapplication-TileColor\" content=\"#000000\">\n<meta name=\"msapplication-TileImage\" content=\"images/mstile-144x144.png\">\n<link rel=\"stylesheet\" href=\"styles/vendor.css\">\n<link rel=\"stylesheet\" href=\"styles/main.css\">\n<style type=\"text/css\"></style>\n</head>\n<body class=\"console-os\" ng-class=\"{ 'has-project-bar': view.hasProject, 'has-project-search': view.hasProjectSearch }\">\n<osc-header></osc-header>\n<toast-notifications></toast-notifications>\n<notification-drawer-wrapper></notification-drawer-wrapper>\n<div class=\"container-pf-nav-pf-vertical\" ng-class=\"{ 'collapsed-nav': nav.collapsed }\">\n<div ng-view class=\"view\">\n<div class=\"middle\">\n<div class=\"middle-content\">\n<div class=\"empty-state-message loading\">\n<h2 class=\"text-center\" id=\"temporary-loading-message\" style=\"display: none\">Loading...</h2>\n<script>document.getElementById('temporary-loading-message').style.display = \"\";</script>\n</div>\n<noscript>\n<div class=\"attention-message\">\n<h1>JavaScript Required</h1>\n<p>The OpenShift web console requires JavaScript to provide a rich interactive experience. 
Please enable JavaScript to continue. If you do not wish to enable JavaScript or are unable to do so, you may use the
1574 5月 07 23:39:26 host-172-16-120-63 atomic-openshift-node[96642]: command-line tools to manage your projects and applications instead.</p>\n</div>\n</noscript>\n</div>\n</div>\n</div>\n</div>\n<script src=\"config.js\"></script>\n<!--[if lt IE 9]>\n <script src=\"scripts/oldieshim.js\"></script>\n <![endif]-->\n<script src=\"scripts/vendor.js\"></script>\n<script src=\"scripts/templates.js\"></script>\n<script src=\"scripts/scripts.js\"></script>\n</body>\n</html> % Total % Received % Xferd Average Speed Time Time Time Current\n Dload Upload Total Spent Left Speed\n\r 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\r100 2243 0 2243 0 0 69949 0 --:--:-- --:--:-- --:--:-- 72354\n"
1575 5月 07 23:39:26 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:26.603148 96642 prober.go:118] Liveness probe for "webconsole-55dd868cdf-crvth_openshift-web-console(aebd73ce-519b-11e8-9f32-fa163edc217c):webconsole" succeeded
1576 5月 07 23:39:26 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:26.710572 96642 prober.go:165] HTTP-Probe Host: https://10.129.0.4, Port: 8443, Path: /healthz
1577 5月 07 23:39:26 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:26.710636 96642 prober.go:168] HTTP-Probe Headers: map[]
1578 5月 07 23:39:26 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:26.718778 96642 http.go:96] Probe succeeded for https://10.129.0.4:8443/healthz, Response: {200 OK 200 HTTP/2.0 2 0 map[Date:[Tue, 08 May 2018 03:39:26 GMT] Content-Type:[text/plain; charset=utf-8] Content-Length:[2]] 0xc421087220 2 [] false false map[] 0xc4224a0600 0xc420cd5340}
1579 5月 07 23:39:26 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:26.718850 96642 prober.go:118] Readiness probe for "webconsole-55dd868cdf-crvth_openshift-web-console(aebd73ce-519b-11e8-9f32-fa163edc217c):webconsole" succeeded
1580 5月 07 23:39:26 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:26.760704 96642 prober.go:165] HTTP-Probe Host: http://10.129.0.5, Port: 9090, Path: /ping
1581 5月 07 23:39:26 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:26.760757 96642 prober.go:168] HTTP-Probe Headers: map[]
1582 5月 07 23:39:26 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:26.762413 96642 http.go:96] Probe succeeded for http://10.129.0.5:9090/ping, Response: {200 OK 200 HTTP/1.1 1 1 map[Content-Type:[application/json] Access-Control-Allow-Origin:[*] Content-Length:[24]] 0xc42107b9c0 24 [] true false map[] 0xc4224a0800 <nil>}
1583 5月 07 23:39:26 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:26.762457 96642 prober.go:118] Liveness probe for "registry-console-1-gnzd7_default(32c8e1e7-519c-11e8-9f32-fa163edc217c):registry-console" succeeded
1584 5月 07 23:39:27 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:27.262283 96642 kubelet.go:1924] SyncLoop (housekeeping)
1585 5月 07 23:39:27 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:27.268864 96642 kubelet_pods.go:1118] Killing unwanted pod "registry-console-1-deploy"
1586 5月 07 23:39:27 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:27.270005 96642 qos_container_manager_linux.go:317] [ContainerManager]: Updated QoS cgroup configuration
1587 5月 07 23:39:27 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:27.338388 96642 generic.go:183] GenericPLEG: Relisting
1588 5月 07 23:39:27 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:27.944874 96642 openstack_instances.go:39] openstack.Instances() called
1589 5月 07 23:39:27 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:27.944917 96642 openstack_instances.go:46] Claiming to support Instances
1590 5月 07 23:39:27 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:27.944924 96642 openstack_instances.go:69] NodeAddresses(172.16.120.63) called
1591 5月 07 23:39:27 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:27.997389 96642 kubelet.go:2103] Container runtime status: Runtime Conditions: RuntimeReady=true reason: message:, NetworkReady=true reason: message:
1592 5月 07 23:39:28 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:28.058442 96642 eviction_manager.go:221] eviction manager: synchronize housekeeping
1593 5月 07 23:39:28 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:28.097302 96642 helpers.go:827] eviction manager: observations: signal=memory.available, available: 2443384Ki, capacity: 3881588Ki, time: 2018-05-07 23:39:22.356079658 -0400 EDT m=+18.287880560
1594 5月 07 23:39:28 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:28.097372 96642 helpers.go:827] eviction manager: observations: signal=nodefs.available, available: 14494868Ki, capacity: 31010Mi, time: 2018-05-07 23:39:22.356079658 -0400 EDT m=+18.287880560
1595 5月 07 23:39:28 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:28.097385 96642 helpers.go:827] eviction manager: observations: signal=nodefs.inodesFree, available: 15712843, capacity: 15510Ki, time: 2018-05-07 23:39:22.356079658 -0400 EDT m=+18.287880560
1596 5月 07 23:39:28 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:28.097397 96642 helpers.go:827] eviction manager: observations: signal=imagefs.available, available: 14494868Ki, capacity: 31010Mi, time: 2018-05-07 23:39:22.356079658 -0400 EDT m=+18.287880560
1597 5月 07 23:39:28 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:28.097407 96642 helpers.go:827] eviction manager: observations: signal=imagefs.inodesFree, available: 15712843, capacity: 15510Ki, time: 2018-05-07 23:39:22.356079658 -0400 EDT m=+18.287880560
1598 5月 07 23:39:28 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:28.097418 96642 helpers.go:829] eviction manager: observations: signal=allocatableMemory.available, available: 3660956Ki, capacity: 3881588Ki
1599 5月 07 23:39:28 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:28.097449 96642 eviction_manager.go:325] eviction manager: no resources are starved
1600 5月 07 23:39:28 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:28.228514 96642 openstack_instances.go:76] NodeAddresses(172.16.120.63) => [{InternalIP 172.16.120.63} {ExternalIP 10.8.249.82}]
1601 5月 07 23:39:28 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:28.339640 96642 generic.go:183] GenericPLEG: Relisting
1602 5月 07 23:39:29 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:29.262234 96642 kubelet.go:1924] SyncLoop (housekeeping)
1603 5月 07 23:39:29 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:29.272503 96642 kubelet_pods.go:1118] Killing unwanted pod "registry-console-1-deploy"
1604 5月 07 23:39:29 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:29.273905 96642 qos_container_manager_linux.go:317] [ContainerManager]: Updated QoS cgroup configuration
1605 5月 07 23:39:29 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:29.341167 96642 generic.go:183] GenericPLEG: Relisting
1606 5月 07 23:39:30 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:30.344184 96642 generic.go:183] GenericPLEG: Relisting
1607 5月 07 23:39:31 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:31.262317 96642 kubelet.go:1924] SyncLoop (housekeeping)
1608 5月 07 23:39:31 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:31.270611 96642 kubelet_pods.go:1118] Killing unwanted pod "registry-console-1-deploy"
1609 5月 07 23:39:31 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:31.271737 96642 qos_container_manager_linux.go:317] [ContainerManager]: Updated QoS cgroup configuration
1610 5月 07 23:39:31 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:31.349188 96642 generic.go:183] GenericPLEG: Relisting
1611 5月 07 23:39:32 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:32.353818 96642 generic.go:183] GenericPLEG: Relisting
1612 5月 07 23:39:32 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:32.998348 96642 kubelet.go:2103] Container runtime status: Runtime Conditions: RuntimeReady=true reason: message:, NetworkReady=true reason: message:
1613 5月 07 23:39:33 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:33.262249 96642 kubelet.go:1924] SyncLoop (housekeeping)
1614 5月 07 23:39:33 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:33.274572 96642 kubelet_pods.go:1118] Killing unwanted pod "registry-console-1-deploy"
1615 5月 07 23:39:33 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:33.276506 96642 qos_container_manager_linux.go:317] [ContainerManager]: Updated QoS cgroup configuration
1616 5月 07 23:39:33 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:33.355507 96642 generic.go:183] GenericPLEG: Relisting
1617 5月 07 23:39:34 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:34.260454 96642 prober.go:165] HTTP-Probe Host: http://10.129.0.5, Port: 9090, Path: /ping
1618 5月 07 23:39:34 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:34.262858 96642 prober.go:168] HTTP-Probe Headers: map[]
1619 5月 07 23:39:34 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:34.270966 96642 http.go:96] Probe succeeded for http://10.129.0.5:9090/ping, Response: {200 OK 200 HTTP/1.1 1 1 map[Content-Type:[application/json] Access-Control-Allow-Origin:[*] Content-Length:[24]] 0xc4210ba0a0 24 [] true false map[] 0xc421afa900 <nil>}
1620 5月 07 23:39:34 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:34.272001 96642 prober.go:118] Readiness probe for "registry-console-1-gnzd7_default(32c8e1e7-519c-11e8-9f32-fa163edc217c):registry-console" succeeded
1621 5月 07 23:39:34 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:34.357474 96642 generic.go:183] GenericPLEG: Relisting
1622 5月 07 23:39:34 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:34.411144 96642 iptables.go:101] Syncing openshift iptables rules
1623 5月 07 23:39:34 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:34.415717 96642 iptables.go:419] running iptables -N [OPENSHIFT-FIREWALL-FORWARD -t filter]
1624 5月 07 23:39:34 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:34.418748 96642 ovs.go:145] Executing: ovs-ofctl -O OpenFlow13 dump-flows br0 table=253
1625 5月 07 23:39:34 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:34.433334 96642 iptables.go:419] running iptables -C [FORWARD -t filter -m comment --comment firewall overrides -j OPENSHIFT-FIREWALL-FORWARD]
1626 5月 07 23:39:34 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:34.435590 96642 iptables.go:419] running iptables -C [OPENSHIFT-FIREWALL-FORWARD -t filter -s 10.128.0.0/14 -m comment --comment attempted resend after connection close -m conntrack --ctstate INVALID -j DROP]
1627 5月 07 23:39:34 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:34.442698 96642 healthcheck.go:98] SDN healthcheck succeeded
1628 5月 07 23:39:34 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:34.443942 96642 iptables.go:419] running iptables -C [OPENSHIFT-FIREWALL-FORWARD -t filter -d 10.128.0.0/14 -m comment --comment forward traffic from SDN -j ACCEPT]
1629 5月 07 23:39:34 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:34.446980 96642 iptables.go:419] running iptables -C [OPENSHIFT-FIREWALL-FORWARD -t filter -s 10.128.0.0/14 -m comment --comment forward traffic to SDN -j ACCEPT]
1630 5月 07 23:39:34 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:34.449380 96642 iptables.go:419] running iptables -N [OPENSHIFT-MASQUERADE -t nat]
1631 5月 07 23:39:34 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:34.451080 96642 iptables.go:419] running iptables -C [POSTROUTING -t nat -m comment --comment rules for masquerading OpenShift traffic -j OPENSHIFT-MASQUERADE]
1632 5月 07 23:39:34 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:34.454621 96642 iptables.go:419] running iptables -C [OPENSHIFT-MASQUERADE -t nat -s 10.128.0.0/14 -m comment --comment masquerade pod-to-service and pod-to-external traffic -j MASQUERADE]
1633 5月 07 23:39:34 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:34.461874 96642 iptables.go:419] running iptables -N [OPENSHIFT-ADMIN-OUTPUT-RULES -t filter]
1634 5月 07 23:39:34 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:34.463469 96642 iptables.go:419] running iptables -C [FORWARD -t filter -i tun0 ! -o tun0 -m comment --comment administrator overrides -j OPENSHIFT-ADMIN-OUTPUT-RULES]
1635 5月 07 23:39:34 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:34.465270 96642 iptables.go:419] running iptables -N [OPENSHIFT-FIREWALL-ALLOW -t filter]
1636 5月 07 23:39:34 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:34.466848 96642 iptables.go:419] running iptables -C [INPUT -t filter -m comment --comment firewall overrides -j OPENSHIFT-FIREWALL-ALLOW]
1637 5月 07 23:39:34 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:34.468676 96642 iptables.go:419] running iptables -C [OPENSHIFT-FIREWALL-ALLOW -t filter -p udp --dport 4789 -m comment --comment VXLAN incoming -j ACCEPT]
1638 5月 07 23:39:34 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:34.472707 96642 iptables.go:419] running iptables -C [OPENSHIFT-FIREWALL-ALLOW -t filter -i tun0 -m comment --comment from SDN to localhost -j ACCEPT]
1639 5月 07 23:39:34 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:34.474511 96642 iptables.go:419] running iptables -C [OPENSHIFT-FIREWALL-ALLOW -t filter -i docker0 -m comment --comment from docker to localhost -j ACCEPT]
1640 5月 07 23:39:34 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:34.476249 96642 iptables.go:99] syncIPTableRules took 65.138118ms
1641 5月 07 23:39:34 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:34.562969 96642 iptables.go:419] running iptables -N [KUBE-PORTALS-CONTAINER -t nat]
1642 5月 07 23:39:34 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:34.565492 96642 iptables.go:419] running iptables -C [PREROUTING -t nat -m comment --comment handle ClusterIPs; NOTE: this must be before the NodePort rules -j KUBE-PORTALS-CONTAINER]
1643 5月 07 23:39:34 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:34.567326 96642 iptables.go:419] running iptables -N [KUBE-PORTALS-HOST -t nat]
1644 5月 07 23:39:34 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:34.567831 96642 dnsmasq.go:123] Instructing dnsmasq to set the following servers: [/in-addr.arpa/127.0.0.1 /cluster.local/127.0.0.1]
1645 5月 07 23:39:34 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:34.571587 96642 iptables.go:419] running iptables -C [OUTPUT -t nat -m comment --comment handle ClusterIPs; NOTE: this must be before the NodePort rules -j KUBE-PORTALS-HOST]
1646 5月 07 23:39:34 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:34.575826 96642 iptables.go:419] running iptables -N [KUBE-NODEPORT-CONTAINER -t nat]
1647 5月 07 23:39:34 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:34.577528 96642 iptables.go:419] running iptables -C [PREROUTING -t nat -m addrtype --dst-type LOCAL -m comment --comment handle service NodePorts; NOTE: this must be the last rule in the chain -j KUBE-NODEPORT-CONTAINER]
1648 5月 07 23:39:34 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:34.579763 96642 iptables.go:419] running iptables -N [KUBE-NODEPORT-HOST -t nat]
1649 5月 07 23:39:34 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:34.581886 96642 iptables.go:419] running iptables -C [OUTPUT -t nat -m addrtype --dst-type LOCAL -m comment --comment handle service NodePorts; NOTE: this must be the last rule in the chain -j KUBE-NODEPORT-HOST]
1650 5月 07 23:39:34 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:34.583940 96642 iptables.go:419] running iptables -N [KUBE-NODEPORT-NON-LOCAL -t filter]
1651 5月 07 23:39:34 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:34.585357 96642 iptables.go:419] running iptables -C [INPUT -t filter -m comment --comment Ensure that non-local NodePort traffic can flow -j KUBE-NODEPORT-NON-LOCAL]
1652 5月 07 23:39:35 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:35.262274 96642 kubelet.go:1924] SyncLoop (housekeeping)
1653 5月 07 23:39:35 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:35.269474 96642 kubelet_pods.go:1118] Killing unwanted pod "registry-console-1-deploy"
1654 5月 07 23:39:35 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:35.270867 96642 qos_container_manager_linux.go:317] [ContainerManager]: Updated QoS cgroup configuration
1655 5月 07 23:39:35 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:35.295069 96642 prober.go:165] HTTP-Probe Host: https://10.129.0.8, Port: 8443, Path: /healthz
1656 5月 07 23:39:35 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:35.295491 96642 prober.go:168] HTTP-Probe Headers: map[]
1657 5月 07 23:39:35 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:35.339263 96642 http.go:96] Probe succeeded for https://10.129.0.8:8443/healthz, Response: {200 OK 200 HTTP/2.0 2 0 map[Content-Type:[text/plain; charset=utf-8] Content-Length:[2] Date:[Tue, 08 May 2018 03:39:35 GMT]] 0xc4210d7940 2 [] false false map[] 0xc4217a0200 0xc420cd5340}
1658 5月 07 23:39:35 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:35.339321 96642 prober.go:118] Readiness probe for "apiserver-qq6rl_openshift-template-service-broker(8beaacd9-519c-11e8-9f32-fa163edc217c):c" succeeded
1659 5月 07 23:39:35 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:35.359555 96642 generic.go:183] GenericPLEG: Relisting
1660 5月 07 23:39:36 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:36.360901 96642 generic.go:183] GenericPLEG: Relisting
1661 5月 07 23:39:36 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:36.482256 96642 prober.go:150] Exec-Probe Pod: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:webconsole-55dd868cdf-crvth,GenerateName:webconsole-55dd868cdf-,Namespace:openshift-web-console,SelfLink:/api/v1/namespaces/openshift-web-console/pods/webconsole-55dd868cdf-crvth,UID:aebd73ce-519b-11e8-9f32-fa163edc217c,ResourceVersion:189416,Generation:0,CreationTimestamp:2018-05-06 22:09:32 -0400 EDT,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{app: openshift-web-console,pod-template-hash: 1188424789,webconsole: true,},Annotations:map[string]string{kubernetes.io/config.seen: 2018-05-07T23:39:05.254025036-04:00,kubernetes.io/config.source: api,openshift.io/scc: restricted,},OwnerReferences:[{extensions/v1beta1 ReplicaSet webconsole-55dd868cdf ae182bb5-519b-11e8-9f32-fa163edc217c 0xc4211eebf0 0xc4211eebf1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{serving-cert {nil nil nil nil nil SecretVolumeSource{SecretName:webconsole-serving-cert,Items:[],DefaultMode:*400,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}} {webconsole-config {nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil &ConfigMapVolumeSource{LocalObjectReference:LocalObjectReference{Name:webconsole-config,},Items:[],DefaultMode:*440,Optional:nil,} nil nil nil nil nil nil nil nil}} {webconsole-token-rdcw4 {nil nil nil nil nil &SecretVolumeSource{SecretName:webconsole-token-rdcw4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{webconsole registry.reg-aws.openshift.com:443/openshift3/ose-web-console:v3.9.27 [/usr/bin/origin-web-console --audit-log-path=- -v=0 --config=/var/webconsole-config/webconsole-config.yaml] [] [{ 0 8443 TCP }] [] [] {map[] map[cpu:{{100 -3} {<nil>} 100m DecimalSI} 
memory:{{104857600 0} {<nil>} 100Mi BinarySI}]} [{serving-cert false /var/serving-cert <nil>} {webconsole-config false
1662 5月 07 23:39:36 host-172-16-120-63 atomic-openshift-node[96642]: /var/webconsole-config <nil>} {webconsole-token-rdcw4 true /var/run/secrets/kubernetes.io/serviceaccount <nil>}] [] Probe{Handler:Handler{Exec:&ExecAction{Command:[/bin/sh -c if [[ ! -f /tmp/webconsole-config.hash ]]; then \
1663 5月 07 23:39:36 host-172-16-120-63 atomic-openshift-node[96642]: md5sum /var/webconsole-config/webconsole-config.yaml > /tmp/webconsole-config.hash; \
1664 5月 07 23:39:36 host-172-16-120-63 atomic-openshift-node[96642]: elif [[ $(md5sum /var/webconsole-config/webconsole-config.yaml) != $(cat /tmp/webconsole-config.hash) ]]; then \
1665 5月 07 23:39:36 host-172-16-120-63 atomic-openshift-node[96642]: echo 'webconsole-config.yaml has changed.'; \
1666 5月 07 23:39:36 host-172-16-120-63 atomic-openshift-node[96642]: exit 1; \
1667 5月 07 23:39:36 host-172-16-120-63 atomic-openshift-node[96642]: fi && curl -k -f https://0.0.0.0:8443/console/],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} &Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:8443,Host:,Scheme:HTTPS,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} nil /dev/termination-log File IfNotPresent &SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[KILL MKNOD SETGID SETUID],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000080000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{node-role.kubernetes.io/master: true,},ServiceAccountName:webconsole,DeprecatedServiceAccount:webconsole,NodeName:172.16.120.63,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c9,c4,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:*1000080000,},ImagePullSecrets:[{webconsole-dockercfg-rdx22}],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/memory-pressure Exists NoSchedule <nil>}],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-05-06 22:09:32 -0400 EDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-05-07 23:10:23 -0400 EDT } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-05-06 22:09:32 -0400 EDT }],Message:,Reason:,HostIP:172.16.120.63,PodIP:10.129.0.4,StartTime:2018-05-06 22:09:32 -0400 EDT,ContainerStatuses:[{webconsole {nil ContainerStateRunning{StartedAt:2018-05-06 22:14:10 
-0400 EDT,} nil} {nil nil nil} true 0 registry.reg-aws.openshift.com:443/openshift3/ose-web-console:v3.9.27 regis
1668 5月 07 23:39:36 host-172-16-120-63 atomic-openshift-node[96642]: try.reg-aws.openshift.com:443/openshift3/ose-web-console@sha256:2b4e7533d9f4ee450fdb9dac3b096ef619538da7c0b9f8489e5aee4cee18cc3e cri-o://138ebee17264174a3b2aad338b2a9d31108041c14f0b3f1e021721333b79e0b1}],QOSClass:Burstable,InitContainerStatuses:[],},}, Container: {webconsole registry.reg-aws.openshift.com:443/openshift3/ose-web-console:v3.9.27 [/usr/bin/origin-web-console --audit-log-path=- -v=0 --config=/var/webconsole-config/webconsole-config.yaml] [] [{ 0 8443 TCP }] [] [] {map[] map[cpu:{{100 -3} {<nil>} 100m DecimalSI} memory:{{104857600 0} {<nil>} 100Mi BinarySI}]} [{serving-cert false /var/serving-cert <nil>} {webconsole-config false /var/webconsole-config <nil>} {webconsole-token-rdcw4 true /var/run/secrets/kubernetes.io/serviceaccount <nil>}] [] &Probe{Handler:Handler{Exec:&ExecAction{Command:[/bin/sh -c if [[ ! -f /tmp/webconsole-config.hash ]]; then \
1669 5月 07 23:39:36 host-172-16-120-63 atomic-openshift-node[96642]: md5sum /var/webconsole-config/webconsole-config.yaml > /tmp/webconsole-config.hash; \
1670 5月 07 23:39:36 host-172-16-120-63 atomic-openshift-node[96642]: elif [[ $(md5sum /var/webconsole-config/webconsole-config.yaml) != $(cat /tmp/webconsole-config.hash) ]]; then \
1671 5月 07 23:39:36 host-172-16-120-63 atomic-openshift-node[96642]: echo 'webconsole-config.yaml has changed.'; \
1672 5月 07 23:39:36 host-172-16-120-63 atomic-openshift-node[96642]: exit 1; \
1673 5月 07 23:39:36 host-172-16-120-63 atomic-openshift-node[96642]: fi && curl -k -f https://0.0.0.0:8443/console/],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} &Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:8443,Host:,Scheme:HTTPS,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} nil /dev/termination-log File IfNotPresent &SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[KILL MKNOD SETGID SETUID],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000080000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,} false false false}, Command: [/bin/sh -c if [[ ! -f /tmp/webconsole-config.hash ]]; then \
1674 5月 07 23:39:36 host-172-16-120-63 atomic-openshift-node[96642]: md5sum /var/webconsole-config/webconsole-config.yaml > /tmp/webconsole-config.hash; \
1675 5月 07 23:39:36 host-172-16-120-63 atomic-openshift-node[96642]: elif [[ $(md5sum /var/webconsole-config/webconsole-config.yaml) != $(cat /tmp/webconsole-config.hash) ]]; then \
1676 5月 07 23:39:36 host-172-16-120-63 atomic-openshift-node[96642]: echo 'webconsole-config.yaml has changed.'; \
1677 5月 07 23:39:36 host-172-16-120-63 atomic-openshift-node[96642]: exit 1; \
1678 5月 07 23:39:36 host-172-16-120-63 atomic-openshift-node[96642]: fi && curl -k -f https://0.0.0.0:8443/console/]
1679 5月 07 23:39:36 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:36.677193 96642 exec.go:38] Exec probe response: "<!doctype html>\n<html class=\"no-js layout-pf layout-pf-fixed\">\n<head>\n<meta http-equiv=\"X-UA-Compatible\" content=\"IE=EDGE\"/>\n<meta charset=\"utf-8\">\n<base href=\"/console/\">\n<title>OpenShift Web Console</title>\n<meta name=\"description\" content=\"\">\n<meta name=\"viewport\" content=\"width=device-width,initial-scale=1\">\n<link rel=\"icon\" type=\"image/png\" href=\"images/favicon.png\"/>\n<link rel=\"icon\" type=\"image/x-icon\" href=\"images/favicon.ico\"/>\n<link rel=\"apple-touch-icon-precomposed\" sizes=\"144x144\" href=\"images/apple-touch-icon-precomposed.png\">\n<link rel=\"mask-icon\" href=\"images/mask-icon.svg\" color=\"#DB242F\">\n<meta name=\"application-name\" content=\"OpenShift\">\n<meta name=\"msapplication-TileColor\" content=\"#000000\">\n<meta name=\"msapplication-TileImage\" content=\"images/mstile-144x144.png\">\n<link rel=\"stylesheet\" href=\"styles/vendor.css\">\n<link rel=\"stylesheet\" href=\"styles/main.css\">\n<style type=\"text/css\"></style>\n</head>\n<body class=\"console-os\" ng-class=\"{ 'has-project-bar': view.hasProject, 'has-project-search': view.hasProjectSearch }\">\n<osc-header></osc-header>\n<toast-notifications></toast-notifications>\n<notification-drawer-wrapper></notification-drawer-wrapper>\n<div class=\"container-pf-nav-pf-vertical\" ng-class=\"{ 'collapsed-nav': nav.collapsed }\">\n<div ng-view class=\"view\">\n<div class=\"middle\">\n<div class=\"middle-content\">\n<div class=\"empty-state-message loading\">\n<h2 class=\"text-center\" id=\"temporary-loading-message\" style=\"display: none\">Loading...</h2>\n<script>document.getElementById('temporary-loading-message').style.display = \"\";</script>\n</div>\n<noscript>\n<div class=\"attention-message\">\n<h1>JavaScript Required</h1>\n<p>The OpenShift web console requires JavaScript to provide a rich interactive experience. 
Please enable JavaScript to continue. If you do not wish to enable JavaScript or are unable to do so, you may use the
1680 5月 07 23:39:36 host-172-16-120-63 atomic-openshift-node[96642]: command-line tools to manage your projects and applications instead.</p>\n</div>\n</noscript>\n</div>\n</div>\n</div>\n</div>\n<script src=\"config.js\"></script>\n<!--[if lt IE 9]>\n <script src=\"scripts/oldieshim.js\"></script>\n <![endif]-->\n<script src=\"scripts/vendor.js\"></script>\n<script src=\"scripts/templates.js\"></script>\n<script src=\"scripts/scripts.js\"></script>\n</body>\n</html> % Total % Received % Xferd Average Speed Time Time Time Current\n Dload Upload Total Spent Left Speed\n\r 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\r100 2243 0 2243 0 0 39287 0 --:--:-- --:--:-- --:--:-- 40053\n"
1681 5月 07 23:39:36 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:36.677315 96642 prober.go:118] Liveness probe for "webconsole-55dd868cdf-crvth_openshift-web-console(aebd73ce-519b-11e8-9f32-fa163edc217c):webconsole" succeeded
1682 5月 07 23:39:36 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:36.710416 96642 prober.go:165] HTTP-Probe Host: https://10.129.0.4, Port: 8443, Path: /healthz
1683 5月 07 23:39:36 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:36.710457 96642 prober.go:168] HTTP-Probe Headers: map[]
1684 5月 07 23:39:36 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:36.718338 96642 http.go:96] Probe succeeded for https://10.129.0.4:8443/healthz, Response: {200 OK 200 HTTP/2.0 2 0 map[Content-Length:[2] Date:[Tue, 08 May 2018 03:39:36 GMT] Content-Type:[text/plain; charset=utf-8]] 0xc4210f55a0 2 [] false false map[] 0xc421afbc00 0xc420f1af20}
1685 5月 07 23:39:36 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:36.718389 96642 prober.go:118] Readiness probe for "webconsole-55dd868cdf-crvth_openshift-web-console(aebd73ce-519b-11e8-9f32-fa163edc217c):webconsole" succeeded
1686 5月 07 23:39:36 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:36.760668 96642 prober.go:165] HTTP-Probe Host: http://10.129.0.5, Port: 9090, Path: /ping
1687 5月 07 23:39:36 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:36.760708 96642 prober.go:168] HTTP-Probe Headers: map[]
1688 5月 07 23:39:36 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:36.761485 96642 http.go:96] Probe succeeded for http://10.129.0.5:9090/ping, Response: {200 OK 200 HTTP/1.1 1 1 map[Content-Type:[application/json] Access-Control-Allow-Origin:[*] Content-Length:[24]] 0xc4210f5680 24 [] true false map[] 0xc4217a0900 <nil>}
1689 5月 07 23:39:36 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:36.761518 96642 prober.go:118] Liveness probe for "registry-console-1-gnzd7_default(32c8e1e7-519c-11e8-9f32-fa163edc217c):registry-console" succeeded
1690 5月 07 23:39:37 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:37.262170 96642 kubelet.go:1924] SyncLoop (housekeeping)
1691 5月 07 23:39:37 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:37.270139 96642 kubelet_pods.go:1118] Killing unwanted pod "registry-console-1-deploy"
1692 5月 07 23:39:37 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:37.270624 96642 qos_container_manager_linux.go:317] [ContainerManager]: Updated QoS cgroup configuration
1693 5月 07 23:39:37 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:37.362323 96642 generic.go:183] GenericPLEG: Relisting
1694 5月 07 23:39:38 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:37.999882 96642 kubelet.go:2103] Container runtime status: Runtime Conditions: RuntimeReady=true reason: message:, NetworkReady=true reason: message:
1695 5月 07 23:39:38 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:38.097701 96642 eviction_manager.go:221] eviction manager: synchronize housekeeping
1696 5月 07 23:39:38 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:38.142787 96642 helpers.go:827] eviction manager: observations: signal=imagefs.available, available: 14494928Ki, capacity: 31010Mi, time: 2018-05-07 23:39:34.002222878 -0400 EDT m=+29.934023787
1697 5月 07 23:39:38 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:38.144329 96642 helpers.go:827] eviction manager: observations: signal=imagefs.inodesFree, available: 15712841, capacity: 15510Ki, time: 2018-05-07 23:39:34.002222878 -0400 EDT m=+29.934023787
1698 5月 07 23:39:38 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:38.145901 96642 helpers.go:829] eviction manager: observations: signal=allocatableMemory.available, available: 3661108Ki, capacity: 3881588Ki
1699 5月 07 23:39:38 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:38.146356 96642 helpers.go:827] eviction manager: observations: signal=memory.available, available: 2479716Ki, capacity: 3881588Ki, time: 2018-05-07 23:39:34.002222878 -0400 EDT m=+29.934023787
1700 5月 07 23:39:38 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:38.146758 96642 helpers.go:827] eviction manager: observations: signal=nodefs.available, available: 14494928Ki, capacity: 31010Mi, time: 2018-05-07 23:39:34.002222878 -0400 EDT m=+29.934023787
1701 5月 07 23:39:38 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:38.147197 96642 helpers.go:827] eviction manager: observations: signal=nodefs.inodesFree, available: 15712841, capacity: 15510Ki, time: 2018-05-07 23:39:34.002222878 -0400 EDT m=+29.934023787
1702 5月 07 23:39:38 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:38.147831 96642 eviction_manager.go:325] eviction manager: no resources are starved
1703 5月 07 23:39:38 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:38.257206 96642 openstack_instances.go:39] openstack.Instances() called
1704 5月 07 23:39:38 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:38.257563 96642 openstack_instances.go:46] Claiming to support Instances
1705 5月 07 23:39:38 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:38.257775 96642 openstack_instances.go:69] NodeAddresses(172.16.120.63) called
1706 5月 07 23:39:38 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:38.363499 96642 generic.go:183] GenericPLEG: Relisting
1707 5月 07 23:39:38 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:38.711234 96642 openstack_instances.go:76] NodeAddresses(172.16.120.63) => [{InternalIP 172.16.120.63} {ExternalIP 10.8.249.82}]
1708 5月 07 23:39:39 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:39.262356 96642 kubelet.go:1924] SyncLoop (housekeeping)
1709 5月 07 23:39:39 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:39.273074 96642 kubelet_pods.go:1118] Killing unwanted pod "registry-console-1-deploy"
1710 5月 07 23:39:39 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:39.274087 96642 qos_container_manager_linux.go:317] [ContainerManager]: Updated QoS cgroup configuration
1711 5月 07 23:39:39 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:39.366168 96642 generic.go:183] GenericPLEG: Relisting
1712 5月 07 23:39:40 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:40.368854 96642 generic.go:183] GenericPLEG: Relisting
1713 5月 07 23:39:41 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:41.262207 96642 kubelet.go:1924] SyncLoop (housekeeping)
1714 5月 07 23:39:41 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:41.268880 96642 kubelet_pods.go:1118] Killing unwanted pod "registry-console-1-deploy"
1715 5月 07 23:39:41 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:41.270524 96642 qos_container_manager_linux.go:317] [ContainerManager]: Updated QoS cgroup configuration
1716 5月 07 23:39:41 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:41.370552 96642 generic.go:183] GenericPLEG: Relisting
1717
1718 [03:41:03] INFO> Exit Status: 0
1719 [03:41:03] INFO> Remote cmd: `cd '/tmp/workdir/localhost-szh'
1720 systemctl status atomic-openshift-node` @ssh://root@host-8-249-82.host.centralci.eng.rdu2.redhat.com
1721 ● atomic-openshift-node.service - OpenShift Node
1722 Loaded: loaded (/etc/systemd/system/atomic-openshift-node.service; enabled; vendor preset: disabled)
1723 Drop-In: /usr/lib/systemd/system/atomic-openshift-node.service.d
1724 └─openshift-sdn-ovs.conf
1725 Active: active (running) since Mon 2018-05-07 23:39:04 EDT; 40s ago
1726 Docs: https://github.com/openshift/origin
1727 Process: 96634 ExecStopPost=/usr/bin/dbus-send --system --dest=uk.org.thekelleys.dnsmasq /uk/org/thekelleys/dnsmasq uk.org.thekelleys.SetDomainServers array:string: (code=exited, status=0/SUCCESS)
1728 Process: 96632 ExecStopPost=/usr/bin/rm /etc/dnsmasq.d/node-dnsmasq.conf (code=exited, status=0/SUCCESS)
1729 Process: 96640 ExecStartPre=/usr/bin/dbus-send --system --dest=uk.org.thekelleys.dnsmasq /uk/org/thekelleys/dnsmasq uk.org.thekelleys.SetDomainServers array:string:/in-addr.arpa/127.0.0.1,/cluster.local/127.0.0.1 (code=exited, status=0/SUCCESS)
1730 Process: 96637 ExecStartPre=/usr/bin/cp /etc/origin/node/node-dnsmasq.conf /etc/dnsmasq.d/ (code=exited, status=0/SUCCESS)
1731 Main PID: 96642 (openshift)
1732 Tasks: 13
1733 Memory: 60.9M
1734 CGroup: /system.slice/atomic-openshift-node.service
1735 └─96642 /usr/bin/openshift start node --config=/etc/origin/node/node-config.yaml --loglevel=5
1736
1737 5月 07 23:39:44 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:44.257017 96642 prober.go:165] HTTP-Probe Host: http://10.129.0.5, Port: 9090, Path: /ping
1738 5月 07 23:39:44 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:44.257082 96642 prober.go:168] HTTP-Probe Headers: map[]
1739 5月 07 23:39:44 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:44.259323 96642 http.go:96] Probe succeeded for http://10.129.0.5:9090/ping, Response: {200 OK 200 HTTP/1.1 1 1 map[Content-Length:[24] Content-Type:[application/json] Access-Control-Allow-Origin:[*]] 0xc4212ef5e0 24 [] true false map[] 0xc422027300 <nil>}
1740 5月 07 23:39:44 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:44.259393 96642 prober.go:118] Readiness probe for "registry-console-1-gnzd7_default(32c8e1e7-519c-11e8-9f32-fa163edc217c):registry-console" succeeded
1741 5月 07 23:39:44 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:44.376768 96642 generic.go:183] GenericPLEG: Relisting
1742 5月 07 23:39:45 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:45.262230 96642 kubelet.go:1924] SyncLoop (housekeeping)
1743 5月 07 23:39:45 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:45.269747 96642 kubelet_pods.go:1118] Killing unwanted pod "registry-console-1-deploy"
1744 5月 07 23:39:45 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:45.270468 96642 qos_container_manager_linux.go:317] [ContainerManager]: Updated QoS cgroup configuration
1745 5月 07 23:39:45 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:45.295020 96642 prober.go:165] HTTP-Probe Host: https://10.129.0.8, Port: 8443, Path: /healthz
1746 5月 07 23:39:45 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:45.295391 96642 prober.go:168] HTTP-Probe Headers: map[]
1747 5月 07 23:39:45 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:45.304579 96642 http.go:96] Probe succeeded for https://10.129.0.8:8443/healthz, Response: {200 OK 200 HTTP/2.0 2 0 map[Content-Type:[text/plain; charset=utf-8] Content-Length:[2] Date:[Tue, 08 May 2018 03:39:45 GMT]] 0xc4213f75e0 2 [] false false map[] 0xc422c54300 0xc4212b38c0}
1748 5月 07 23:39:45 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:45.304978 96642 prober.go:118] Readiness probe for "apiserver-qq6rl_openshift-template-service-broker(8beaacd9-519c-11e8-9f32-fa163edc217c):c" succeeded
1749 5月 07 23:39:45 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:45.378850 96642 generic.go:183] GenericPLEG: Relisting
1750 5月 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:46.380579 96642 generic.go:183] GenericPLEG: Relisting
1751 5月 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:46.481964 96642 prober.go:150] Exec-Probe Pod: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:webconsole-55dd868cdf-crvth,GenerateName:webconsole-55dd868cdf-,Namespace:openshift-web-console,SelfLink:/api/v1/namespaces/openshift-web-console/pods/webconsole-55dd868cdf-crvth,UID:aebd73ce-519b-11e8-9f32-fa163edc217c,ResourceVersion:189416,Generation:0,CreationTimestamp:2018-05-06 22:09:32 -0400 EDT,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{app: openshift-web-console,pod-template-hash: 1188424789,webconsole: true,},Annotations:map[string]string{kubernetes.io/config.seen: 2018-05-07T23:39:05.254025036-04:00,kubernetes.io/config.source: api,openshift.io/scc: restricted,},OwnerReferences:[{extensions/v1beta1 ReplicaSet webconsole-55dd868cdf ae182bb5-519b-11e8-9f32-fa163edc217c 0xc4211eebf0 0xc4211eebf1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{serving-cert {nil nil nil nil nil SecretVolumeSource{SecretName:webconsole-serving-cert,Items:[],DefaultMode:*400,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}} {webconsole-config {nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil &ConfigMapVolumeSource{LocalObjectReference:LocalObjectReference{Name:webconsole-config,},Items:[],DefaultMode:*440,Optional:nil,} nil nil nil nil nil nil nil nil}} {webconsole-token-rdcw4 {nil nil nil nil nil &SecretVolumeSource{SecretName:webconsole-token-rdcw4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{webconsole registry.reg-aws.openshift.com:443/openshift3/ose-web-console:v3.9.27 [/usr/bin/origin-web-console --audit-log-path=- -v=0 --config=/var/webconsole-config/webconsole-config.yaml] [] [{ 0 8443 TCP }] [] [] {map[] map[cpu:{{100 -3} {<nil>} 100m DecimalSI} 
memory:{{104857600 0} {<nil>} 100Mi BinarySI}]} [{serving-cert false /var/serving-cert <nil>} {webconsole-config false
1752 5月 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: /var/webconsole-config <nil>} {webconsole-token-rdcw4 true /var/run/secrets/kubernetes.io/serviceaccount <nil>}] [] Probe{Handler:Handler{Exec:&ExecAction{Command:[/bin/sh -c if [[ ! -f /tmp/webconsole-config.hash ]]; then \
1753 5月 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: md5sum /var/webconsole-config/webconsole-config.yaml > /tmp/webconsole-config.hash; \
1754 5月 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: elif [[ $(md5sum /var/webconsole-config/webconsole-config.yaml) != $(cat /tmp/webconsole-config.hash) ]]; then \
1755 5月 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: echo 'webconsole-config.yaml has changed.'; \
1756 5月 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: exit 1; \
1757 5月 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: fi && curl -k -f https://0.0.0.0:8443/console/],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} &Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:8443,Host:,Scheme:HTTPS,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} nil /dev/termination-log File IfNotPresent &SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[KILL MKNOD SETGID SETUID],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000080000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{node-role.kubernetes.io/master: true,},ServiceAccountName:webconsole,DeprecatedServiceAccount:webconsole,NodeName:172.16.120.63,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c9,c4,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:*1000080000,},ImagePullSecrets:[{webconsole-dockercfg-rdx22}],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/memory-pressure Exists NoSchedule <nil>}],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-05-06 22:09:32 -0400 EDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-05-07 23:10:23 -0400 EDT } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-05-06 22:09:32 -0400 EDT }],Message:,Reason:,HostIP:172.16.120.63,PodIP:10.129.0.4,StartTime:2018-05-06 22:09:32 -0400 EDT,ContainerStatuses:[{webconsole {nil ContainerStateRunning{StartedAt:2018-05-06 22:14:10 
-0400 EDT,} nil} {nil nil nil} true 0 registry.reg-aws.openshift.com:443/openshift3/ose-web-console:v3.9.27 regis
1758 5月 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: try.reg-aws.openshift.com:443/openshift3/ose-web-console@sha256:2b4e7533d9f4ee450fdb9dac3b096ef619538da7c0b9f8489e5aee4cee18cc3e cri-o://138ebee17264174a3b2aad338b2a9d31108041c14f0b3f1e021721333b79e0b1}],QOSClass:Burstable,InitContainerStatuses:[],},}, Container: {webconsole registry.reg-aws.openshift.com:443/openshift3/ose-web-console:v3.9.27 [/usr/bin/origin-web-console --audit-log-path=- -v=0 --config=/var/webconsole-config/webconsole-config.yaml] [] [{ 0 8443 TCP }] [] [] {map[] map[cpu:{{100 -3} {<nil>} 100m DecimalSI} memory:{{104857600 0} {<nil>} 100Mi BinarySI}]} [{serving-cert false /var/serving-cert <nil>} {webconsole-config false /var/webconsole-config <nil>} {webconsole-token-rdcw4 true /var/run/secrets/kubernetes.io/serviceaccount <nil>}] [] &Probe{Handler:Handler{Exec:&ExecAction{Command:[/bin/sh -c if [[ ! -f /tmp/webconsole-config.hash ]]; then \
1759 5月 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: md5sum /var/webconsole-config/webconsole-config.yaml > /tmp/webconsole-config.hash; \
1760 5月 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: elif [[ $(md5sum /var/webconsole-config/webconsole-config.yaml) != $(cat /tmp/webconsole-config.hash) ]]; then \
1761 5月 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: echo 'webconsole-config.yaml has changed.'; \
1762 5月 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: exit 1; \
1763 5月 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: fi && curl -k -f https://0.0.0.0:8443/console/],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} &Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:8443,Host:,Scheme:HTTPS,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} nil /dev/termination-log File IfNotPresent &SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[KILL MKNOD SETGID SETUID],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000080000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,} false false false}, Command: [/bin/sh -c if [[ ! -f /tmp/webconsole-config.hash ]]; then \
1764 5月 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: md5sum /var/webconsole-config/webconsole-config.yaml > /tmp/webconsole-config.hash; \
1765 5月 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: elif [[ $(md5sum /var/webconsole-config/webconsole-config.yaml) != $(cat /tmp/webconsole-config.hash) ]]; then \
1766 5月 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: echo 'webconsole-config.yaml has changed.'; \
1767 5月 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: exit 1; \
1768 5月 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: fi && curl -k -f https://0.0.0.0:8443/console/]
1769 5月 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:46.639553 96642 exec.go:38] Exec probe response: "<!doctype html>\n<html class=\"no-js layout-pf layout-pf-fixed\">\n<head>\n<meta http-equiv=\"X-UA-Compatible\" content=\"IE=EDGE\"/>\n<meta charset=\"utf-8\">\n<base href=\"/console/\">\n<title>OpenShift Web Console</title>\n<meta name=\"description\" content=\"\">\n<meta name=\"viewport\" content=\"width=device-width,initial-scale=1\">\n<link rel=\"icon\" type=\"image/png\" href=\"images/favicon.png\"/>\n<link rel=\"icon\" type=\"image/x-icon\" href=\"images/favicon.ico\"/>\n<link rel=\"apple-touch-icon-precomposed\" sizes=\"144x144\" href=\"images/apple-touch-icon-precomposed.png\">\n<link rel=\"mask-icon\" href=\"images/mask-icon.svg\" color=\"#DB242F\">\n<meta name=\"application-name\" content=\"OpenShift\">\n<meta name=\"msapplication-TileColor\" content=\"#000000\">\n<meta name=\"msapplication-TileImage\" content=\"images/mstile-144x144.png\">\n<link rel=\"stylesheet\" href=\"styles/vendor.css\">\n<link rel=\"stylesheet\" href=\"styles/main.css\">\n<style type=\"text/css\"></style>\n</head>\n<body class=\"console-os\" ng-class=\"{ 'has-project-bar': view.hasProject, 'has-project-search': view.hasProjectSearch }\">\n<osc-header></osc-header>\n<toast-notifications></toast-notifications>\n<notification-drawer-wrapper></notification-drawer-wrapper>\n<div class=\"container-pf-nav-pf-vertical\" ng-class=\"{ 'collapsed-nav': nav.collapsed }\">\n<div ng-view class=\"view\">\n<div class=\"middle\">\n<div class=\"middle-content\">\n<div class=\"empty-state-message loading\">\n<h2 class=\"text-center\" id=\"temporary-loading-message\" style=\"display: none\">Loading...</h2>\n<script>document.getElementById('temporary-loading-message').style.display = \"\";</script>\n</div>\n<noscript>\n<div class=\"attention-message\">\n<h1>JavaScript Required</h1>\n<p>The OpenShift web console requires JavaScript to provide a rich interactive experience. 
Please enable JavaScript to continue. If you do not wish to enable JavaScript or are unable to do so, you may use the
1770 5月 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: command-line tools to manage your projects and applications instead.</p>\n</div>\n</noscript>\n</div>\n</div>\n</div>\n</div>\n<script src=\"config.js\"></script>\n<!--[if lt IE 9]>\n <script src=\"scripts/oldieshim.js\"></script>\n <![endif]-->\n<script src=\"scripts/vendor.js\"></script>\n<script src=\"scripts/templates.js\"></script>\n<script src=\"scripts/scripts.js\"></script>\n</body>\n</html> % Total % Received % Xferd Average Speed Time Time Time Current\n Dload Upload Total Spent Left Speed\n\r 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\r100 2243 0 2243 0 0 54031 0 --:--:-- --:--:-- --:--:-- 56075\n"
1771 5月 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:46.639712 96642 prober.go:118] Liveness probe for "webconsole-55dd868cdf-crvth_openshift-web-console(aebd73ce-519b-11e8-9f32-fa163edc217c):webconsole" succeeded
1772 5月 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:46.710425 96642 prober.go:165] HTTP-Probe Host: https://10.129.0.4, Port: 8443, Path: /healthz
1773 5月 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:46.710481 96642 prober.go:168] HTTP-Probe Headers: map[]
1774 5月 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:46.717206 96642 http.go:96] Probe succeeded for https://10.129.0.4:8443/healthz, Response: {200 OK 200 HTTP/2.0 2 0 map[Content-Type:[text/plain; charset=utf-8] Content-Length:[2] Date:[Tue, 08 May 2018 03:39:46 GMT]] 0xc421305c40 2 [] false false map[] 0xc4212e0200 0xc420b3fd90}
1775 5月 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:46.717279 96642 prober.go:118] Readiness probe for "webconsole-55dd868cdf-crvth_openshift-web-console(aebd73ce-519b-11e8-9f32-fa163edc217c):webconsole" succeeded
1776 5月 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:46.760697 96642 prober.go:165] HTTP-Probe Host: http://10.129.0.5, Port: 9090, Path: /ping
1777 5月 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:46.760747 96642 prober.go:168] HTTP-Probe Headers: map[]
1778 5月 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:46.762466 96642 http.go:96] Probe succeeded for http://10.129.0.5:9090/ping, Response: {200 OK 200 HTTP/1.1 1 1 map[Content-Type:[application/json] Access-Control-Allow-Origin:[*] Content-Length:[24]] 0xc421305de0 24 [] true false map[] 0xc420cbc900 <nil>}
1779 5月 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:46.762511 96642 prober.go:118] Liveness probe for "registry-console-1-gnzd7_default(32c8e1e7-519c-11e8-9f32-fa163edc217c):registry-console" succeeded
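For readability: the exec liveness probe that the kubelet runs against the webconsole container (logged in full above, several times) is a config-drift check. It records an md5 checksum of webconsole-config.yaml on first run, and on later runs fails the probe if the file's checksum no longer matches, so the kubelet restarts the container with the fresh config. A minimal, testable sketch of that hash-check logic follows; `check_config` is a hypothetical helper name, and the trailing `curl -k -f https://0.0.0.0:8443/console/` health fetch from the real probe is omitted here.

```shell
# check_config CONFIG HASH
# Sketch of the webconsole exec-probe logic from the log above:
# returns nonzero once CONFIG's md5 differs from the recorded HASH.
check_config() {
  cfg=$1; hash=$2
  if [ ! -f "$hash" ]; then
    # First run: record the checksum (md5sum output includes the path,
    # so later comparisons only match for the same file contents + path).
    md5sum "$cfg" > "$hash"
  elif [ "$(md5sum "$cfg")" != "$(cat "$hash")" ]; then
    # Config changed since the hash was recorded: fail, so a kubelet
    # using this as a liveness probe would restart the container.
    echo 'config has changed.'
    return 1
  fi
}
```

In the real probe the shell one-liner ends with `fi && curl -k -f https://0.0.0.0:8443/console/`, which is why a successful probe response in the log is the console's HTML page followed by curl's progress meter.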
1780 5月 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:46.772943 96642 proxier.go:999] Syncing iptables rules
1781 5月 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:46.772999 96642 iptables.go:419] running iptables -N [KUBE-EXTERNAL-SERVICES -t filter]
1782 5月 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:46.780321 96642 iptables.go:419] running iptables -C [INPUT -t filter -m conntrack --ctstate NEW -m comment --comment kubernetes externally-visible service portals -j KUBE-EXTERNAL-SERVICES]
1783 5月 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:46.782953 96642 iptables.go:419] running iptables -N [KUBE-SERVICES -t filter]
1784 5月 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:46.784492 96642 iptables.go:419] running iptables -C [OUTPUT -t filter -m conntrack --ctstate NEW -m comment --comment kubernetes service portals -j KUBE-SERVICES]
1785 5月 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:46.786037 96642 iptables.go:419] running iptables -N [KUBE-SERVICES -t nat]
1786 5月 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:46.787689 96642 iptables.go:419] running iptables -C [OUTPUT -t nat -m comment --comment kubernetes service portals -j KUBE-SERVICES]
1787 5月 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:46.789216 96642 iptables.go:419] running iptables -N [KUBE-SERVICES -t nat]
1788 5月 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:46.790544 96642 iptables.go:419] running iptables -C [PREROUTING -t nat -m comment --comment kubernetes service portals -j KUBE-SERVICES]
1789 5月 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:46.792009 96642 iptables.go:419] running iptables -N [KUBE-POSTROUTING -t nat]
1790 5月 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:46.793277 96642 iptables.go:419] running iptables -C [POSTROUTING -t nat -m comment --comment kubernetes postrouting rules -j KUBE-POSTROUTING]
1791 5月 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:46.794683 96642 iptables.go:419] running iptables -N [KUBE-FORWARD -t filter]
1792 5月 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:46.796047 96642 iptables.go:419] running iptables -C [FORWARD -t filter -m comment --comment kubernetes forwarding rules -j KUBE-FORWARD]
1793 5月 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:46.797569 96642 iptables.go:321] running iptables-save [-t filter]
1794 5月 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:46.811825 96642 iptables.go:321] running iptables-save [-t nat]
1795 5月 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:46.819979 96642 proxier.go:1596] Restoring iptables rules: *filter
1796 5月 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: :KUBE-SERVICES - [0:0]
1797 5月 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: :KUBE-EXTERNAL-SERVICES - [0:0]
1798 5月 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: :KUBE-FORWARD - [0:0]
1799 5月 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: -A KUBE-FORWARD -m comment --comment "kubernetes forwarding rules" -m mark --mark 0x00000001/0x00000001 -j ACCEPT
1800 5月 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: COMMIT
1801 5月 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: *nat
1802 5月 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: :KUBE-SERVICES - [0:0]
1803 5月 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: :KUBE-NODEPORTS - [0:0]
1804 5月 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: :KUBE-POSTROUTING - [0:0]
1805 5月 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: :KUBE-MARK-MASQ - [0:0]
1806 5月 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: :KUBE-SVC-4JCRTMMYZAAYMIJ2 - [0:0]
1807 5月 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: :KUBE-SEP-JOIR3OGVG7VL7AFC - [0:0]
1808 5月 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: :KUBE-SVC-DEGCXZMVXZMJS2KL - [0:0]
1809 5月 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: :KUBE-SEP-K7VNBRD46WKLYWN2 - [0:0]
1810 5月 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: :KUBE-SVC-ADCURMKBWTVYQV3X - [0:0]
1811 5月 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: :KUBE-SEP-GLUHWDSLNBH4KGS6 - [0:0]
1812 5月 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: :KUBE-SVC-2AXVG4RE24ZKZUZT - [0:0]
1813 5月 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: :KUBE-SEP-7IU5JBFVKLQBSPX7 - [0:0]
1814 5月 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: :KUBE-SVC-JDVC66NYTQQGMHBE - [0:0]
1815 5月 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: :KUBE-SEP-4O3VEBDDLKGTR77A - [0:0]
1816 5月 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: :KUBE-SVC-OEA2LYDHMQ4UNN5R - [0:0]
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: :KUBE-SEP-AUFIHBXVFLC36ANV - [0:0]
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: :KUBE-SVC-ECTPRXTXBM34L34Q - [0:0]
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: :KUBE-SEP-K4QQRBK63CES657K - [0:0]
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: :KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: :KUBE-SEP-SEQNQBAUGWAZQ6TC - [0:0]
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: :KUBE-SVC-3VQ6B3MLH7E2SZT4 - [0:0]
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: :KUBE-SEP-LAWYVLBJXYLGCUJ4 - [0:0]
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: :KUBE-SVC-BA6I5HTZKAAAJT56 - [0:0]
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: :KUBE-SEP-VKAXQ4QLUM3D3GQO - [0:0]
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: :KUBE-SVC-52XAELSJYS7XYM5B - [0:0]
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: :KUBE-SEP-Z5TPR34L26QFQHWZ - [0:0]
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: :KUBE-SVC-GQKZAHCS5DTMHUQ6 - [0:0]
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: :KUBE-SEP-MRDZ6ZVGEEZIJFGP - [0:0]
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: :KUBE-SVC-IKV43KYNCXS2W7KZ - [0:0]
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: :KUBE-SEP-4LK2W6YQQEHWZIZI - [0:0]
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: :KUBE-SVC-53AY4HBMKMJUV7U4 - [0:0]
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: :KUBE-SEP-JKVHLMW65BAFBD7B - [0:0]
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: :KUBE-SVC-LY4FCGCV5NJRTFFA - [0:0]
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: :KUBE-SEP-AITEICTVDCVCKFOA - [0:0]
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: :KUBE-SVC-45FDQWGIHPUKH23I - [0:0]
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: :KUBE-SEP-65UFWHJENRZOTMNH - [0:0]
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: :KUBE-SEP-UQKKMWQZOKYYAV5R - [0:0]
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: :KUBE-SVC-YEIHQHG72SRW62I5 - [0:0]
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: :KUBE-SEP-GAKIGMNVM2GN3J4G - [0:0]
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: :KUBE-SVC-R2SMGUHFZ7VWTVNL - [0:0]
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: :KUBE-SEP-H36UBZ7QZ7RCNGNR - [0:0]
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: -A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x00000001/0x00000001 -j MASQUERADE
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: -A KUBE-MARK-MASQ -j MARK --set-xmark 0x00000001/0x00000001
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: -A KUBE-SERVICES -m comment --comment "default/router:1936-tcp cluster IP" -m tcp -p tcp -d 172.30.188.244/32 --dport 1936 -j KUBE-SVC-4JCRTMMYZAAYMIJ2
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: -A KUBE-SVC-4JCRTMMYZAAYMIJ2 -m comment --comment default/router:1936-tcp -j KUBE-SEP-JOIR3OGVG7VL7AFC
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: -A KUBE-SEP-JOIR3OGVG7VL7AFC -m comment --comment default/router:1936-tcp -s 172.16.120.67/32 -j KUBE-MARK-MASQ
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: -A KUBE-SEP-JOIR3OGVG7VL7AFC -m comment --comment default/router:1936-tcp -m tcp -p tcp -j DNAT --to-destination 172.16.120.67:1936
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: -A KUBE-SERVICES -m comment --comment "default/registry-console:registry-console cluster IP" -m tcp -p tcp -d 172.30.207.69/32 --dport 9000 -j KUBE-SVC-DEGCXZMVXZMJS2KL
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: -A KUBE-SVC-DEGCXZMVXZMJS2KL -m comment --comment default/registry-console:registry-console -j KUBE-SEP-K7VNBRD46WKLYWN2
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: -A KUBE-SEP-K7VNBRD46WKLYWN2 -m comment --comment default/registry-console:registry-console -s 10.129.0.5/32 -j KUBE-MARK-MASQ
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: -A KUBE-SEP-K7VNBRD46WKLYWN2 -m comment --comment default/registry-console:registry-console -m tcp -p tcp -j DNAT --to-destination 10.129.0.5:9090
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: -A KUBE-SERVICES -m comment --comment "openshift-web-console/webconsole:https cluster IP" -m tcp -p tcp -d 172.30.48.159/32 --dport 443 -j KUBE-SVC-ADCURMKBWTVYQV3X
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: -A KUBE-SVC-ADCURMKBWTVYQV3X -m comment --comment openshift-web-console/webconsole:https -j KUBE-SEP-GLUHWDSLNBH4KGS6
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: -A KUBE-SEP-GLUHWDSLNBH4KGS6 -m comment --comment openshift-web-console/webconsole:https -s 10.129.0.4/32 -j KUBE-MARK-MASQ
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: -A KUBE-SEP-GLUHWDSLNBH4KGS6 -m comment --comment openshift-web-console/webconsole:https -m tcp -p tcp -j DNAT --to-destination 10.129.0.4:8443
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: -A KUBE-SERVICES -m comment --comment "szh-project1/rhscl-mariadb:port-3306 cluster IP" -m tcp -p tcp -d 172.30.83.107/32 --dport 3306 -j KUBE-SVC-2AXVG4RE24ZKZUZT
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: -A KUBE-SVC-2AXVG4RE24ZKZUZT -m comment --comment szh-project1/rhscl-mariadb:port-3306 -j KUBE-SEP-7IU5JBFVKLQBSPX7
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: -A KUBE-SEP-7IU5JBFVKLQBSPX7 -m comment --comment szh-project1/rhscl-mariadb:port-3306 -s 10.128.0.39/32 -j KUBE-MARK-MASQ
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: -A KUBE-SEP-7IU5JBFVKLQBSPX7 -m comment --comment szh-project1/rhscl-mariadb:port-3306 -m tcp -p tcp -j DNAT --to-destination 10.128.0.39:3306
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: -A KUBE-SERVICES -m comment --comment "szh-project1/mediawiki123:web cluster IP" -m tcp -p tcp -d 172.30.85.130/32 --dport 8080 -j KUBE-SVC-JDVC66NYTQQGMHBE
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: -A KUBE-SVC-JDVC66NYTQQGMHBE -m comment --comment szh-project1/mediawiki123:web -j KUBE-SEP-4O3VEBDDLKGTR77A
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: -A KUBE-SEP-4O3VEBDDLKGTR77A -m comment --comment szh-project1/mediawiki123:web -s 10.128.0.26/32 -j KUBE-MARK-MASQ
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: -A KUBE-SEP-4O3VEBDDLKGTR77A -m comment --comment szh-project1/mediawiki123:web -m tcp -p tcp -j DNAT --to-destination 10.128.0.26:8080
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: -A KUBE-SERVICES -m comment --comment "openshift-ansible-service-broker/asb-etcd:port-2379 cluster IP" -m tcp -p tcp -d 172.30.48.44/32 --dport 2379 -j KUBE-SVC-OEA2LYDHMQ4UNN5R
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: -A KUBE-SVC-OEA2LYDHMQ4UNN5R -m comment --comment openshift-ansible-service-broker/asb-etcd:port-2379 -j KUBE-SEP-AUFIHBXVFLC36ANV
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: -A KUBE-SEP-AUFIHBXVFLC36ANV -m comment --comment openshift-ansible-service-broker/asb-etcd:port-2379 -s 10.128.0.10/32 -j KUBE-MARK-MASQ
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: -A KUBE-SEP-AUFIHBXVFLC36ANV -m comment --comment openshift-ansible-service-broker/asb-etcd:port-2379 -m tcp -p tcp -j DNAT --to-destination 10.128.0.10:2379
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: -A KUBE-SERVICES -m comment --comment "default/docker-registry:5000-tcp cluster IP" -m tcp -p tcp -d 172.30.10.211/32 --dport 5000 -j KUBE-SVC-ECTPRXTXBM34L34Q
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: -A KUBE-SVC-ECTPRXTXBM34L34Q -m comment --comment default/docker-registry:5000-tcp -m recent --name KUBE-SEP-K4QQRBK63CES657K --rcheck --seconds 10800 --reap -j KUBE-SEP-K4QQRBK63CES657K
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: -A KUBE-SVC-ECTPRXTXBM34L34Q -m comment --comment default/docker-registry:5000-tcp -j KUBE-SEP-K4QQRBK63CES657K
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: -A KUBE-SEP-K4QQRBK63CES657K -m comment --comment default/docker-registry:5000-tcp -s 10.128.0.4/32 -j KUBE-MARK-MASQ
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: -A KUBE-SEP-K4QQRBK63CES657K -m comment --comment default/docker-registry:5000-tcp -m recent --name KUBE-SEP-K4QQRBK63CES657K --set -m tcp -p tcp -j DNAT --to-destination 10.128.0.4:5000
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: -A KUBE-SERVICES -m comment --comment "default/kubernetes:https cluster IP" -m tcp -p tcp -d 172.30.0.1/32 --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: -A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment default/kubernetes:https -m recent --name KUBE-SEP-SEQNQBAUGWAZQ6TC --rcheck --seconds 10800 --reap -j KUBE-SEP-SEQNQBAUGWAZQ6TC
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: -A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment default/kubernetes:https -j KUBE-SEP-SEQNQBAUGWAZQ6TC
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: -A KUBE-SEP-SEQNQBAUGWAZQ6TC -m comment --comment default/kubernetes:https -s 172.16.120.63/32 -j KUBE-MARK-MASQ
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: -A KUBE-SEP-SEQNQBAUGWAZQ6TC -m comment --comment default/kubernetes:https -m recent --name KUBE-SEP-SEQNQBAUGWAZQ6TC --set -m tcp -p tcp -j DNAT --to-destination 172.16.120.63:8443
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: -A KUBE-SERVICES -m comment --comment "default/kubernetes:dns cluster IP" -m udp -p udp -d 172.30.0.1/32 --dport 53 -j KUBE-SVC-3VQ6B3MLH7E2SZT4
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: -A KUBE-SVC-3VQ6B3MLH7E2SZT4 -m comment --comment default/kubernetes:dns -m recent --name KUBE-SEP-LAWYVLBJXYLGCUJ4 --rcheck --seconds 10800 --reap -j KUBE-SEP-LAWYVLBJXYLGCUJ4
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: -A KUBE-SVC-3VQ6B3MLH7E2SZT4 -m comment --comment default/kubernetes:dns -j KUBE-SEP-LAWYVLBJXYLGCUJ4
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: -A KUBE-SEP-LAWYVLBJXYLGCUJ4 -m comment --comment default/kubernetes:dns -s 172.16.120.63/32 -j KUBE-MARK-MASQ
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: -A KUBE-SEP-LAWYVLBJXYLGCUJ4 -m comment --comment default/kubernetes:dns -m recent --name KUBE-SEP-LAWYVLBJXYLGCUJ4 --set -m udp -p udp -j DNAT --to-destination 172.16.120.63:8053
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: -A KUBE-SERVICES -m comment --comment "default/kubernetes:dns-tcp cluster IP" -m tcp -p tcp -d 172.30.0.1/32 --dport 53 -j KUBE-SVC-BA6I5HTZKAAAJT56
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: -A KUBE-SVC-BA6I5HTZKAAAJT56 -m comment --comment default/kubernetes:dns-tcp -m recent --name KUBE-SEP-VKAXQ4QLUM3D3GQO --rcheck --seconds 10800 --reap -j KUBE-SEP-VKAXQ4QLUM3D3GQO
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: -A KUBE-SVC-BA6I5HTZKAAAJT56 -m comment --comment default/kubernetes:dns-tcp -j KUBE-SEP-VKAXQ4QLUM3D3GQO
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: -A KUBE-SEP-VKAXQ4QLUM3D3GQO -m comment --comment default/kubernetes:dns-tcp -s 172.16.120.63/32 -j KUBE-MARK-MASQ
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: -A KUBE-SEP-VKAXQ4QLUM3D3GQO -m comment --comment default/kubernetes:dns-tcp -m recent --name KUBE-SEP-VKAXQ4QLUM3D3GQO --set -m tcp -p tcp -j DNAT --to-destination 172.16.120.63:8053
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: -A KUBE-SERVICES -m comment --comment "kube-service-catalog/apiserver:secure cluster IP" -m tcp -p tcp -d 172.30.219.163/32 --dport 443 -j KUBE-SVC-52XAELSJYS7XYM5B
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: -A KUBE-SVC-52XAELSJYS7XYM5B -m comment --comment kube-service-catalog/apiserver:secure -j KUBE-SEP-Z5TPR34L26QFQHWZ
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: -A KUBE-SEP-Z5TPR34L26QFQHWZ -m comment --comment kube-service-catalog/apiserver:secure -s 10.129.0.6/32 -j KUBE-MARK-MASQ
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: -A KUBE-SEP-Z5TPR34L26QFQHWZ -m comment --comment kube-service-catalog/apiserver:secure -m tcp -p tcp -j DNAT --to-destination 10.129.0.6:6443
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: -A KUBE-SERVICES -m comment --comment "default/router:80-tcp cluster IP" -m tcp -p tcp -d 172.30.188.244/32 --dport 80 -j KUBE-SVC-GQKZAHCS5DTMHUQ6
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: -A KUBE-SVC-GQKZAHCS5DTMHUQ6 -m comment --comment default/router:80-tcp -j KUBE-SEP-MRDZ6ZVGEEZIJFGP
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: -A KUBE-SEP-MRDZ6ZVGEEZIJFGP -m comment --comment default/router:80-tcp -s 172.16.120.67/32 -j KUBE-MARK-MASQ
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: -A KUBE-SEP-MRDZ6ZVGEEZIJFGP -m comment --comment default/router:80-tcp -m tcp -p tcp -j DNAT --to-destination 172.16.120.67:80
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: -A KUBE-SERVICES -m comment --comment "default/router:443-tcp cluster IP" -m tcp -p tcp -d 172.30.188.244/32 --dport 443 -j KUBE-SVC-IKV43KYNCXS2W7KZ
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: -A KUBE-SVC-IKV43KYNCXS2W7KZ -m comment --comment default/router:443-tcp -j KUBE-SEP-4LK2W6YQQEHWZIZI
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: -A KUBE-SEP-4LK2W6YQQEHWZIZI -m comment --comment default/router:443-tcp -s 172.16.120.67/32 -j KUBE-MARK-MASQ
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: -A KUBE-SEP-4LK2W6YQQEHWZIZI -m comment --comment default/router:443-tcp -m tcp -p tcp -j DNAT --to-destination 172.16.120.67:443
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: -A KUBE-SERVICES -m comment --comment "lxlyp/registry:5000-tcp cluster IP" -m tcp -p tcp -d 172.30.248.135/32 --dport 5000 -j KUBE-SVC-53AY4HBMKMJUV7U4
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: -A KUBE-SVC-53AY4HBMKMJUV7U4 -m comment --comment lxlyp/registry:5000-tcp -j KUBE-SEP-JKVHLMW65BAFBD7B
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: -A KUBE-SEP-JKVHLMW65BAFBD7B -m comment --comment lxlyp/registry:5000-tcp -s 10.128.0.128/32 -j KUBE-MARK-MASQ
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: -A KUBE-SEP-JKVHLMW65BAFBD7B -m comment --comment lxlyp/registry:5000-tcp -m tcp -p tcp -j DNAT --to-destination 10.128.0.128:5000
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: -A KUBE-SERVICES -m comment --comment "openshift-ansible-service-broker/asb:port-1338 cluster IP" -m tcp -p tcp -d 172.30.151.204/32 --dport 1338 -j KUBE-SVC-LY4FCGCV5NJRTFFA
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: -A KUBE-SVC-LY4FCGCV5NJRTFFA -m comment --comment openshift-ansible-service-broker/asb:port-1338 -j KUBE-SEP-AITEICTVDCVCKFOA
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: -A KUBE-SEP-AITEICTVDCVCKFOA -m comment --comment openshift-ansible-service-broker/asb:port-1338 -s 10.128.0.87/32 -j KUBE-MARK-MASQ
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: -A KUBE-SEP-AITEICTVDCVCKFOA -m comment --comment openshift-ansible-service-broker/asb:port-1338 -m tcp -p tcp -j DNAT --to-destination 10.128.0.87:1338
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: -A KUBE-SERVICES -m comment --comment "openshift-template-service-broker/apiserver: cluster IP" -m tcp -p tcp -d 172.30.18.83/32 --dport 443 -j KUBE-SVC-45FDQWGIHPUKH23I
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: -A KUBE-SVC-45FDQWGIHPUKH23I -m comment --comment openshift-template-service-broker/apiserver: -m statistic --mode random --probability 0.50000 -j KUBE-SEP-65UFWHJENRZOTMNH
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: -A KUBE-SEP-65UFWHJENRZOTMNH -m comment --comment openshift-template-service-broker/apiserver: -s 10.128.0.9/32 -j KUBE-MARK-MASQ
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: -A KUBE-SEP-65UFWHJENRZOTMNH -m comment --comment openshift-template-service-broker/apiserver: -m tcp -p tcp -j DNAT --to-destination 10.128.0.9:8443
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: -A KUBE-SVC-45FDQWGIHPUKH23I -m comment --comment openshift-template-service-broker/apiserver: -j KUBE-SEP-UQKKMWQZOKYYAV5R
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: -A KUBE-SEP-UQKKMWQZOKYYAV5R -m comment --comment openshift-template-service-broker/apiserver: -s 10.129.0.8/32 -j KUBE-MARK-MASQ
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: -A KUBE-SEP-UQKKMWQZOKYYAV5R -m comment --comment openshift-template-service-broker/apiserver: -m tcp -p tcp -j DNAT --to-destination 10.129.0.8:8443
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: -A KUBE-SERVICES -m comment --comment "install-test/mongodb:mongodb cluster IP" -m tcp -p tcp -d 172.30.149.245/32 --dport 27017 -j KUBE-SVC-YEIHQHG72SRW62I5
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: -A KUBE-SVC-YEIHQHG72SRW62I5 -m comment --comment install-test/mongodb:mongodb -j KUBE-SEP-GAKIGMNVM2GN3J4G
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: -A KUBE-SEP-GAKIGMNVM2GN3J4G -m comment --comment install-test/mongodb:mongodb -s 10.128.0.13/32 -j KUBE-MARK-MASQ
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: -A KUBE-SEP-GAKIGMNVM2GN3J4G -m comment --comment install-test/mongodb:mongodb -m tcp -p tcp -j DNAT --to-destination 10.128.0.13:27017
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: -A KUBE-SERVICES -m comment --comment "install-test/nodejs-mongodb-example:web cluster IP" -m tcp -p tcp -d 172.30.116.143/32 --dport 8080 -j KUBE-SVC-R2SMGUHFZ7VWTVNL
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: -A KUBE-SVC-R2SMGUHFZ7VWTVNL -m comment --comment install-test/nodejs-mongodb-example:web -j KUBE-SEP-H36UBZ7QZ7RCNGNR
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: -A KUBE-SEP-H36UBZ7QZ7RCNGNR -m comment --comment install-test/nodejs-mongodb-example:web -s 10.128.0.15/32 -j KUBE-MARK-MASQ
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: -A KUBE-SEP-H36UBZ7QZ7RCNGNR -m comment --comment install-test/nodejs-mongodb-example:web -m tcp -p tcp -j DNAT --to-destination 10.128.0.15:8080
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: -A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: COMMIT
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:46.829984 96642 iptables.go:381] running iptables-restore [-w 5 --noflush --counters]
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:46.844684 96642 healthcheck.go:235] Not saving endpoints for unknown healthcheck "default/registry-console"
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:46.844702 96642 healthcheck.go:235] Not saving endpoints for unknown healthcheck "openshift-template-service-broker/apiserver"
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:46.844709 96642 healthcheck.go:235] Not saving endpoints for unknown healthcheck "kube-service-catalog/apiserver"
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:46.844715 96642 healthcheck.go:235] Not saving endpoints for unknown healthcheck "openshift-web-console/webconsole"
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:46.844727 96642 proxier.go:974] syncProxyRules took 75.362843ms
May 07 23:39:46 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:46.844748 96642 bounded_frequency_runner.go:221] sync-runner: ran, next possible in 0s, periodic in 30s
May 07 23:39:47 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:47.262182 96642 kubelet.go:1924] SyncLoop (housekeeping)
May 07 23:39:47 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:47.267837 96642 kubelet_pods.go:1118] Killing unwanted pod "registry-console-1-deploy"
May 07 23:39:47 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:47.269134 96642 qos_container_manager_linux.go:317] [ContainerManager]: Updated QoS cgroup configuration
May 07 23:39:47 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:47.384546 96642 generic.go:183] GenericPLEG: Relisting
May 07 23:39:48 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:48.001701 96642 kubelet.go:2103] Container runtime status: Runtime Conditions: RuntimeReady=true reason: message:, NetworkReady=true reason: message:
May 07 23:39:48 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:48.153241 96642 eviction_manager.go:221] eviction manager: synchronize housekeeping
May 07 23:39:48 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:48.191439 96642 helpers.go:827] eviction manager: observations: signal=memory.available, available: 2479716Ki, capacity: 3881588Ki, time: 2018-05-07 23:39:34.002222878 -0400 EDT m=+29.934023787
May 07 23:39:48 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:48.191876 96642 helpers.go:827] eviction manager: observations: signal=nodefs.available, available: 14494928Ki, capacity: 31010Mi, time: 2018-05-07 23:39:34.002222878 -0400 EDT m=+29.934023787
May 07 23:39:48 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:48.192152 96642 helpers.go:827] eviction manager: observations: signal=nodefs.inodesFree, available: 15712841, capacity: 15510Ki, time: 2018-05-07 23:39:34.002222878 -0400 EDT m=+29.934023787
May 07 23:39:48 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:48.192412 96642 helpers.go:827] eviction manager: observations: signal=imagefs.available, available: 14494928Ki, capacity: 31010Mi, time: 2018-05-07 23:39:34.002222878 -0400 EDT m=+29.934023787
May 07 23:39:48 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:48.192677 96642 helpers.go:827] eviction manager: observations: signal=imagefs.inodesFree, available: 15712841, capacity: 15510Ki, time: 2018-05-07 23:39:34.002222878 -0400 EDT m=+29.934023787
May 07 23:39:48 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:48.192958 96642 helpers.go:829] eviction manager: observations: signal=allocatableMemory.available, available: 3664504Ki, capacity: 3881588Ki
May 07 23:39:48 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:48.193224 96642 eviction_manager.go:325] eviction manager: no resources are starved
May 07 23:39:48 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:48.385711 96642 generic.go:183] GenericPLEG: Relisting
May 07 23:39:48 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:48.739274 96642 openstack_instances.go:39] openstack.Instances() called
May 07 23:39:48 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:48.739312 96642 openstack_instances.go:46] Claiming to support Instances
May 07 23:39:48 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:48.739331 96642 openstack_instances.go:69] NodeAddresses(172.16.120.63) called
May 07 23:39:49 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:49.174261 96642 openstack_instances.go:76] NodeAddresses(172.16.120.63) => [{InternalIP 172.16.120.63} {ExternalIP 10.8.249.82}]
May 07 23:39:49 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:49.262190 96642 kubelet.go:1924] SyncLoop (housekeeping)
May 07 23:39:49 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:49.270563 96642 kubelet_pods.go:1118] Killing unwanted pod "registry-console-1-deploy"
May 07 23:39:49 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:49.272563 96642 qos_container_manager_linux.go:317] [ContainerManager]: Updated QoS cgroup configuration
May 07 23:39:49 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:49.387833 96642 generic.go:183] GenericPLEG: Relisting
May 07 23:39:50 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:50.389127 96642 generic.go:183] GenericPLEG: Relisting
May 07 23:39:51 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:51.262235 96642 kubelet.go:1924] SyncLoop (housekeeping)
May 07 23:39:51 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:51.267344 96642 kubelet_pods.go:1118] Killing unwanted pod "registry-console-1-deploy"
May 07 23:39:51 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:51.268442 96642 qos_container_manager_linux.go:317] [ContainerManager]: Updated QoS cgroup configuration
May 07 23:39:51 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:51.390673 96642 generic.go:183] GenericPLEG: Relisting
May 07 23:39:52 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:52.392630 96642 generic.go:183] GenericPLEG: Relisting
May 07 23:39:53 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:53.002434 96642 kubelet.go:2103] Container runtime status: Runtime Conditions: RuntimeReady=true reason: message:, NetworkReady=true reason: message:
May 07 23:39:53 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:53.262201 96642 kubelet.go:1924] SyncLoop (housekeeping)
May 07 23:39:53 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:53.268178 96642 kubelet_pods.go:1118] Killing unwanted pod "registry-console-1-deploy"
May 07 23:39:53 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:53.269090 96642 qos_container_manager_linux.go:317] [ContainerManager]: Updated QoS cgroup configuration
May 07 23:39:53 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:53.394134 96642 generic.go:183] GenericPLEG: Relisting
May 07 23:39:54 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:54.256981 96642 prober.go:165] HTTP-Probe Host: http://10.129.0.5, Port: 9090, Path: /ping
May 07 23:39:54 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:54.257024 96642 prober.go:168] HTTP-Probe Headers: map[]
May 07 23:39:54 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:54.259258 96642 http.go:96] Probe succeeded for http://10.129.0.5:9090/ping, Response: {200 OK 200 HTTP/1.1 1 1 map[Content-Type:[application/json] Access-Control-Allow-Origin:[*] Content-Length:[24]] 0xc4210f4240 24 [] true false map[] 0xc4217ba400 <nil>}
May 07 23:39:54 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:54.259313 96642 prober.go:118] Readiness probe for "registry-console-1-gnzd7_default(32c8e1e7-519c-11e8-9f32-fa163edc217c):registry-console" succeeded
May 07 23:39:54 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:54.396174 96642 generic.go:183] GenericPLEG: Relisting
May 07 23:39:55 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:55.262180 96642 kubelet.go:1924] SyncLoop (housekeeping)
May 07 23:39:55 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:55.274830 96642 kubelet_pods.go:1118] Killing unwanted pod "registry-console-1-deploy"
May 07 23:39:55 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:55.275929 96642 qos_container_manager_linux.go:317] [ContainerManager]: Updated QoS cgroup configuration
May 07 23:39:55 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:55.298420 96642 prober.go:165] HTTP-Probe Host: https://10.129.0.8, Port: 8443, Path: /healthz
May 07 23:39:55 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:55.298466 96642 prober.go:168] HTTP-Probe Headers: map[]
May 07 23:39:55 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:55.353964 96642 http.go:96] Probe succeeded for https://10.129.0.8:8443/healthz, Response: {200 OK 200 HTTP/2.0 2 0 map[Content-Type:[text/plain; charset=utf-8] Content-Length:[2] Date:[Tue, 08 May 2018 03:39:55 GMT]] 0xc4210d6520 2 [] false false map[] 0xc4221f4900 0xc420f1be40}
May 07 23:39:55 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:55.354028 96642 prober.go:118] Readiness probe for "apiserver-qq6rl_openshift-template-service-broker(8beaacd9-519c-11e8-9f32-fa163edc217c):c" succeeded
May 07 23:39:55 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:55.397379 96642 generic.go:183] GenericPLEG: Relisting
May 07 23:39:56 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:56.399026 96642 generic.go:183] GenericPLEG: Relisting
May 07 23:39:56 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:56.481743 96642 prober.go:150] Exec-Probe Pod: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:webconsole-55dd868cdf-crvth,GenerateName:webconsole-55dd868cdf-,Namespace:openshift-web-console,SelfLink:/api/v1/namespaces/openshift-web-console/pods/webconsole-55dd868cdf-crvth,UID:aebd73ce-519b-11e8-9f32-fa163edc217c,ResourceVersion:189416,Generation:0,CreationTimestamp:2018-05-06 22:09:32 -0400 EDT,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{app: openshift-web-console,pod-template-hash: 1188424789,webconsole: true,},Annotations:map[string]string{kubernetes.io/config.seen: 2018-05-07T23:39:05.254025036-04:00,kubernetes.io/config.source: api,openshift.io/scc: restricted,},OwnerReferences:[{extensions/v1beta1 ReplicaSet webconsole-55dd868cdf ae182bb5-519b-11e8-9f32-fa163edc217c 0xc4211eebf0 0xc4211eebf1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{serving-cert {nil nil nil nil nil SecretVolumeSource{SecretName:webconsole-serving-cert,Items:[],DefaultMode:*400,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}} {webconsole-config {nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil &ConfigMapVolumeSource{LocalObjectReference:LocalObjectReference{Name:webconsole-config,},Items:[],DefaultMode:*440,Optional:nil,} nil nil nil nil nil nil nil nil}} {webconsole-token-rdcw4 {nil nil nil nil nil &SecretVolumeSource{SecretName:webconsole-token-rdcw4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{webconsole registry.reg-aws.openshift.com:443/openshift3/ose-web-console:v3.9.27 [/usr/bin/origin-web-console --audit-log-path=- -v=0 --config=/var/webconsole-config/webconsole-config.yaml] [] [{ 0 8443 TCP }] [] [] {map[] map[cpu:{{100 -3} {<nil>} 100m DecimalSI} memory:{{104857600 0} {<nil>} 100Mi BinarySI}]} [{serving-cert false /var/serving-cert <nil>} {webconsole-config false
May 07 23:39:56 host-172-16-120-63 atomic-openshift-node[96642]: /var/webconsole-config <nil>} {webconsole-token-rdcw4 true /var/run/secrets/kubernetes.io/serviceaccount <nil>}] [] Probe{Handler:Handler{Exec:&ExecAction{Command:[/bin/sh -c if [[ ! -f /tmp/webconsole-config.hash ]]; then \
May 07 23:39:56 host-172-16-120-63 atomic-openshift-node[96642]: md5sum /var/webconsole-config/webconsole-config.yaml > /tmp/webconsole-config.hash; \
May 07 23:39:56 host-172-16-120-63 atomic-openshift-node[96642]: elif [[ $(md5sum /var/webconsole-config/webconsole-config.yaml) != $(cat /tmp/webconsole-config.hash) ]]; then \
May 07 23:39:56 host-172-16-120-63 atomic-openshift-node[96642]: echo 'webconsole-config.yaml has changed.'; \
May 07 23:39:56 host-172-16-120-63 atomic-openshift-node[96642]: exit 1; \
May 07 23:39:56 host-172-16-120-63 atomic-openshift-node[96642]: fi && curl -k -f https://0.0.0.0:8443/console/],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} &Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:8443,Host:,Scheme:HTTPS,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} nil /dev/termination-log File IfNotPresent &SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[KILL MKNOD SETGID SETUID],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000080000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{node-role.kubernetes.io/master: true,},ServiceAccountName:webconsole,DeprecatedServiceAccount:webconsole,NodeName:172.16.120.63,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c9,c4,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:*1000080000,},ImagePullSecrets:[{webconsole-dockercfg-rdx22}],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/memory-pressure Exists NoSchedule <nil>}],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-05-06 22:09:32 -0400 EDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-05-07 23:10:23 -0400 EDT } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-05-06 22:09:32 -0400 EDT }],Message:,Reason:,HostIP:172.16.120.63,PodIP:10.129.0.4,StartTime:2018-05-06 22:09:32 -0400 EDT,ContainerStatuses:[{webconsole {nil ContainerStateRunning{StartedAt:2018-05-06 22:14:10 -0400 EDT,} nil} {nil nil nil} true 0 registry.reg-aws.openshift.com:443/openshift3/ose-web-console:v3.9.27 regis
May 07 23:39:56 host-172-16-120-63 atomic-openshift-node[96642]: try.reg-aws.openshift.com:443/openshift3/ose-web-console@sha256:2b4e7533d9f4ee450fdb9dac3b096ef619538da7c0b9f8489e5aee4cee18cc3e cri-o://138ebee17264174a3b2aad338b2a9d31108041c14f0b3f1e021721333b79e0b1}],QOSClass:Burstable,InitContainerStatuses:[],},}, Container: {webconsole registry.reg-aws.openshift.com:443/openshift3/ose-web-console:v3.9.27 [/usr/bin/origin-web-console --audit-log-path=- -v=0 --config=/var/webconsole-config/webconsole-config.yaml] [] [{ 0 8443 TCP }] [] [] {map[] map[cpu:{{100 -3} {<nil>} 100m DecimalSI} memory:{{104857600 0} {<nil>} 100Mi BinarySI}]} [{serving-cert false /var/serving-cert <nil>} {webconsole-config false /var/webconsole-config <nil>} {webconsole-token-rdcw4 true /var/run/secrets/kubernetes.io/serviceaccount <nil>}] [] &Probe{Handler:Handler{Exec:&ExecAction{Command:[/bin/sh -c if [[ ! -f /tmp/webconsole-config.hash ]]; then \
May 07 23:39:56 host-172-16-120-63 atomic-openshift-node[96642]: md5sum /var/webconsole-config/webconsole-config.yaml > /tmp/webconsole-config.hash; \
May 07 23:39:56 host-172-16-120-63 atomic-openshift-node[96642]: elif [[ $(md5sum /var/webconsole-config/webconsole-config.yaml) != $(cat /tmp/webconsole-config.hash) ]]; then \
May 07 23:39:56 host-172-16-120-63 atomic-openshift-node[96642]: echo 'webconsole-config.yaml has changed.'; \
May 07 23:39:56 host-172-16-120-63 atomic-openshift-node[96642]: exit 1; \
May 07 23:39:56 host-172-16-120-63 atomic-openshift-node[96642]: fi && curl -k -f https://0.0.0.0:8443/console/],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} &Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:8443,Host:,Scheme:HTTPS,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} nil /dev/termination-log File IfNotPresent &SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[KILL MKNOD SETGID SETUID],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000080000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,} false false false}, Command: [/bin/sh -c if [[ ! -f /tmp/webconsole-config.hash ]]; then \
May 07 23:39:56 host-172-16-120-63 atomic-openshift-node[96642]: md5sum /var/webconsole-config/webconsole-config.yaml > /tmp/webconsole-config.hash; \
May 07 23:39:56 host-172-16-120-63 atomic-openshift-node[96642]: elif [[ $(md5sum /var/webconsole-config/webconsole-config.yaml) != $(cat /tmp/webconsole-config.hash) ]]; then \
May 07 23:39:56 host-172-16-120-63 atomic-openshift-node[96642]: echo 'webconsole-config.yaml has changed.'; \
May 07 23:39:56 host-172-16-120-63 atomic-openshift-node[96642]: exit 1; \
May 07 23:39:56 host-172-16-120-63 atomic-openshift-node[96642]: fi && curl -k -f https://0.0.0.0:8443/console/]
May 07 23:39:56 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:56.657814 96642 exec.go:38] Exec probe response: "<!doctype html>\n<html class=\"no-js layout-pf layout-pf-fixed\">\n<head>\n<meta http-equiv=\"X-UA-Compatible\" content=\"IE=EDGE\"/>\n<meta charset=\"utf-8\">\n<base href=\"/console/\">\n<title>OpenShift Web Console</title>\n<meta name=\"description\" content=\"\">\n<meta name=\"viewport\" content=\"width=device-width,initial-scale=1\">\n<link rel=\"icon\" type=\"image/png\" href=\"images/favicon.png\"/>\n<link rel=\"icon\" type=\"image/x-icon\" href=\"images/favicon.ico\"/>\n<link rel=\"apple-touch-icon-precomposed\" sizes=\"144x144\" href=\"images/apple-touch-icon-precomposed.png\">\n<link rel=\"mask-icon\" href=\"images/mask-icon.svg\" color=\"#DB242F\">\n<meta name=\"application-name\" content=\"OpenShift\">\n<meta name=\"msapplication-TileColor\" content=\"#000000\">\n<meta name=\"msapplication-TileImage\" content=\"images/mstile-144x144.png\">\n<link rel=\"stylesheet\" href=\"styles/vendor.css\">\n<link rel=\"stylesheet\" href=\"styles/main.css\">\n<style type=\"text/css\"></style>\n</head>\n<body class=\"console-os\" ng-class=\"{ 'has-project-bar': view.hasProject, 'has-project-search': view.hasProjectSearch }\">\n<osc-header></osc-header>\n<toast-notifications></toast-notifications>\n<notification-drawer-wrapper></notification-drawer-wrapper>\n<div class=\"container-pf-nav-pf-vertical\" ng-class=\"{ 'collapsed-nav': nav.collapsed }\">\n<div ng-view class=\"view\">\n<div class=\"middle\">\n<div class=\"middle-content\">\n<div class=\"empty-state-message loading\">\n<h2 class=\"text-center\" id=\"temporary-loading-message\" style=\"display: none\">Loading...</h2>\n<script>document.getElementById('temporary-loading-message').style.display = \"\";</script>\n</div>\n<noscript>\n<div class=\"attention-message\">\n<h1>JavaScript Required</h1>\n<p>The OpenShift web console requires JavaScript to provide a rich interactive experience. Please enable JavaScript to continue. If you do not wish to enable JavaScript or are unable to do so, you may use the
May 07 23:39:56 host-172-16-120-63 atomic-openshift-node[96642]: command-line tools to manage your projects and applications instead.</p>\n</div>\n</noscript>\n</div>\n</div>\n</div>\n</div>\n<script src=\"config.js\"></script>\n<!--[if lt IE 9]>\n <script src=\"scripts/oldieshim.js\"></script>\n <![endif]-->\n<script src=\"scripts/vendor.js\"></script>\n<script src=\"scripts/templates.js\"></script>\n<script src=\"scripts/scripts.js\"></script>\n</body>\n</html> % Total % Received % Xferd Average Speed Time Time Time Current\n Dload Upload Total Spent Left Speed\n\r 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\r100 2243 0 2243 0 0 48834 0 --:--:-- --:--:-- --:--:-- 49844\n"
May 07 23:39:56 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:56.657910 96642 prober.go:118] Liveness probe for "webconsole-55dd868cdf-crvth_openshift-web-console(aebd73ce-519b-11e8-9f32-fa163edc217c):webconsole" succeeded
May 07 23:39:56 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:56.710405 96642 prober.go:165] HTTP-Probe Host: https://10.129.0.4, Port: 8443, Path: /healthz
May 07 23:39:56 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:56.710469 96642 prober.go:168] HTTP-Probe Headers: map[]
May 07 23:39:56 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:56.716086 96642 http.go:96] Probe succeeded for https://10.129.0.4:8443/healthz, Response: {200 OK 200 HTTP/2.0 2 0 map[Content-Type:[text/plain; charset=utf-8] Content-Length:[2] Date:[Tue, 08 May 2018 03:39:56 GMT]] 0xc4210bac40 2 [] false false map[] 0xc4221f4d00 0xc421148370}
May 07 23:39:56 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:56.716134 96642 prober.go:118] Readiness probe for "webconsole-55dd868cdf-crvth_openshift-web-console(aebd73ce-519b-11e8-9f32-fa163edc217c):webconsole" succeeded
May 07 23:39:56 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:56.760666 96642 prober.go:165] HTTP-Probe Host: http://10.129.0.5, Port: 9090, Path: /ping
May 07 23:39:56 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:56.760708 96642 prober.go:168] HTTP-Probe Headers: map[]
May 07 23:39:56 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:56.761481 96642 http.go:96] Probe succeeded for http://10.129.0.5:9090/ping, Response: {200 OK 200 HTTP/1.1 1 1 map[Content-Length:[24] Content-Type:[application/json] Access-Control-Allow-Origin:[*]] 0xc4210a2200 24 [] true false map[] 0xc4221f4f00 <nil>}
May 07 23:39:56 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:56.761521 96642 prober.go:118] Liveness probe for "registry-console-1-gnzd7_default(32c8e1e7-519c-11e8-9f32-fa163edc217c):registry-console" succeeded
May 07 23:39:57 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:57.262173 96642 kubelet.go:1924] SyncLoop (housekeeping)
May 07 23:39:57 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:57.268867 96642 kubelet_pods.go:1118] Killing unwanted pod "registry-console-1-deploy"
May 07 23:39:57 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:57.269863 96642 qos_container_manager_linux.go:317] [ContainerManager]: Updated QoS cgroup configuration
May 07 23:39:57 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:57.401884 96642 generic.go:183] GenericPLEG: Relisting
May 07 23:39:58 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:58.003161 96642 kubelet.go:2103] Container runtime status: Runtime Conditions: RuntimeReady=true reason: message:, NetworkReady=true reason: message:
May 07 23:39:58 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:58.193577 96642 eviction_manager.go:221] eviction manager: synchronize housekeeping
May 07 23:39:58 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:58.240697 96642 helpers.go:829] eviction manager: observations: signal=allocatableMemory.available, available: 3664780Ki, capacity: 3881588Ki
May 07 23:39:58 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:58.240741 96642 helpers.go:827] eviction manager: observations: signal=memory.available, available: 2478760Ki, capacity: 3881588Ki, time: 2018-05-07 23:39:49.639539838 -0400 EDT m=+45.571340770
May 07 23:39:58 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:58.240769 96642 helpers.go:827] eviction manager: observations: signal=nodefs.available, available: 14494940Ki, capacity: 31010Mi, time: 2018-05-07 23:39:49.639539838 -0400 EDT m=+45.571340770
May 07 23:39:58 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:58.240777 96642 helpers.go:827] eviction manager: observations: signal=nodefs.inodesFree, available: 15712837, capacity: 15510Ki, time: 2018-05-07 23:39:49.639539838 -0400 EDT m=+45.571340770
May 07 23:39:58 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:58.240785 96642 helpers.go:827] eviction manager: observations: signal=imagefs.available, available: 14494940Ki, capacity: 31010Mi, time: 2018-05-07 23:39:49.639539838 -0400 EDT m=+45.571340770
May 07 23:39:58 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:58.240792 96642 helpers.go:827] eviction manager: observations: signal=imagefs.inodesFree, available: 15712837, capacity: 15510Ki, time: 2018-05-07 23:39:49.639539838 -0400 EDT m=+45.571340770
May 07 23:39:58 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:58.240859 96642 eviction_manager.go:325] eviction manager: no resources are starved
May 07 23:39:58 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:58.408010 96642 generic.go:183] GenericPLEG: Relisting
May 07 23:39:59 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:59.211116 96642 openstack_instances.go:39] openstack.Instances() called
May 07 23:39:59 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:59.214628 96642 openstack_instances.go:46] Claiming to support Instances
May 07 23:39:59 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:59.214649 96642 openstack_instances.go:69] NodeAddresses(172.16.120.63) called
May 07 23:39:59 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:59.262180 96642 kubelet.go:1924] SyncLoop (housekeeping)
May 07 23:39:59 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:59.269725 96642 kubelet_pods.go:1118] Killing unwanted pod "registry-console-1-deploy"
May 07 23:39:59 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:59.271178 96642 qos_container_manager_linux.go:317] [ContainerManager]: Updated QoS cgroup configuration
May 07 23:39:59 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:59.412501 96642 generic.go:183] GenericPLEG: Relisting
May 07 23:39:59 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:39:59.645047 96642 openstack_instances.go:76] NodeAddresses(172.16.120.63) => [{InternalIP 172.16.120.63} {ExternalIP 10.8.249.82}]
May 07 23:40:00 host-172-16-120-63 atomic-openshift-node[96642]: I0507 23:40:00.414333 96642 generic.go:183] GenericPLEG: Relisting

[03:41:22] INFO> Exit Status: 0
[03:41:22] INFO> cleaning-up user zhsun_1 projects
[03:41:24] INFO> Shell Commands: oc delete projects --all --config=/home/szh/workdir/localhost-szh/ose_zhsun_1.kubeconfig
project "lxlyp" deleted

[03:41:26] INFO> Exit Status: 0
[03:41:26] INFO> waiting up to 30 seconds for user clean-up to take place
[03:41:34] INFO> REST delete_oauthaccesstoken for user 'CucuShift::APIAccessor:zhsun_1@ose', base_opts: {:options=>{:oapi_version=>"v1", :api_version=>"v1", :accept=>"application/json", :content_type=>"application/json", :oauth_token=>"KQycFFotoZqvgO0EexddnrAqqThgwf2EcZGAGP8ECwY"}, :base_url=>"https://host-8-249-82.host.centralci.eng.rdu2.redhat.com:8443", :headers=>{"Accept"=>"<accept>", "Content-Type"=>"<content_type>", "Authorization"=>"Bearer <oauth_token>"}}, opts: {:token_to_delete=>"KQycFFotoZqvgO0EexddnrAqqThgwf2EcZGAGP8ECwY"}
[03:41:34] INFO> HTTP DELETE https://host-8-249-82.host.centralci.eng.rdu2.redhat.com:8443/oapi/v1/oauthaccesstokens/KQycFFotoZqvgO0EexddnrAqqThgwf2EcZGAGP8ECwY
[03:41:36] INFO> HTTP DELETE took 1.270 sec: 200 OK | application/json 206 bytes

[03:41:36] INFO> Remote cmd: `rm -r -f -- /tmp/workdir/localhost-szh` @ssh://root@host-8-249-82.host.centralci.eng.rdu2.redhat.com

[03:41:38] INFO> Exit Status: 0
[03:41:39] INFO> Shell Commands: rm -r -f -- /home/szh/workdir/localhost-szh

[03:41:39] INFO> Exit Status: 0
[03:41:39] INFO> === End After Scenario: [ASB] Support concurrent, multiple APB source adapters ===

1 scenario (1 passed)
18 steps (18 passed)
3m48.956s
[03:41:39] INFO> === At Exit ===
[szh@localhost cucushift]$
[szh@localhost cucushift]$ cucumber features/svc-catalog_asb/16628.feature