src-openEuler / KubeOS
Content risk flag: this issue has been flagged as containing sensitive information such as code security bugs or privacy leaks, and is not accessible to members outside the repository.
With opstype=config, apply a YAML that does not set sysconfigs; after waiting 1 minute, kubectl get osinstances shows with high probability that some nodes have not cleared spec.sysconfigs.version and status.sysconfigs.version
Accepted
#IANSZA
Defect
哦那真是一级棒
Created on 2024-08-31 10:47
**[Environment information]**

EBS-openEuler-24.09/rc2_openeuler-2024-08-20-08-34-38, KubeOS version: KubeOS-1.0.6-3.oe2409

**[Steps to reproduce]**

1. Set up a Kubernetes cluster with 3 worker nodes and deploy the KubeOS components and images.

2. Apply the following YAML:

```
+ cat upgrade_v1alpha1_os.yaml
apiVersion: upgrade.openeuler.org/v1alpha1
kind: OS
metadata:
  name: os-sample
spec:
  imagetype: containerd
  opstype: upgrade
  osversion: KubeOS openEuler-24.09-2024-08-20-08-34-38
  maxunavailable: 3
  containerimage: 9.82.186.7:5000/kubeos-x86_64-uefi:openeuler-24.09-2024-08-20-08-34-38
  evictpodforce: true
  imageurl: ""
  checksum: 994182f2a0747dc5671bbc7c945e1e94cffbdaf2c71d816faa27b3ac88b16e09
  flagSafe: true
  mtls: false
  upgradeconfigs:
    version: upg_1
    configs:
      - model: kernel.sysctl.persist
        contents:
          - key: upg_1
            value: '1000'
  upgradeconfigs:
    version: upg_2
    configs:
      - model: kernel.sysctl.persist
        contents:
          - key: upg_2
            value: '1000'
  sysconfigs:
    version: sys_1
    configs:
      - model: kernel.sysctl.persist
        contents:
          - key: sys_1
            value: '1000'
  sysconfigs:
    version: sys_2
    configs:
      - model: kernel.sysctl.persist
        contents:
          - key: sys_2
            value: '1000'
```

All nodes upgrade and apply the configuration normally, and the final state matches expectations:

```
+ kubectl get osinstances -o custom-columns=NAME:.metadata.name,NODESTATUS:.spec.nodestatus,UPGRADESYSCONFIG_SPEC:.spec.upgradeconfigs.version,UPGRADESYSCONFIG:status.upgradeconfigs.version,SYSCONFIG_SPEC:spec.sysconfigs.version,SYSCONFIG:status.sysconfigs.version
NAME    NODESTATUS   UPGRADESYSCONFIG_SPEC   UPGRADESYSCONFIG   SYSCONFIG_SPEC   SYSCONFIG
node1   idle         upg_2                   upg_2              sys_2            sys_2
node2   idle         upg_2                   upg_2              sys_2            sys_2
node3   idle         upg_2                   upg_2              sys_2            sys_2
```

3. Then apply the following YAML:

```
+ cat upgrade_v1alpha1_os.yaml
apiVersion: upgrade.openeuler.org/v1alpha1
kind: OS
metadata:
  name: os-sample
spec:
  imagetype: containerd
  opstype: config
  osversion: KubeOS openEuler-24.09-2024-08-20-08-34-38
  maxunavailable: 3
  containerimage: 9.82.186.7:5000/kubeos-x86_64-uefi:openeuler-24.09-2024-08-20-08-34-38
  evictpodforce: true
  imageurl: ""
  checksum: 994182f2a0747dc5671bbc7c945e1e94cffbdaf2c71d816faa27b3ac88b16e09
  flagSafe: true
  mtls: false
  upgradeconfigs:
    version: upg_3
    configs:
      - model: kernel.sysctl.persist
        contents:
          - key: upg_3
            value: '1000'
  upgradeconfigs:
    version: upg_4
    configs:
      - model: kernel.sysctl.persist
        contents:
          - key: upg_4
            value: '1000'
  sysconfigs:
    version: sys_1
    configs:
      - model: kernel.sysctl.persist
        contents:
          - key: sys_1
            value: '1000'
```

All nodes are configured normally, and the final state matches expectations:

```
+ kubectl get osinstances -o custom-columns=NAME:.metadata.name,NODESTATUS:.spec.nodestatus,UPGRADESYSCONFIG_SPEC:.spec.upgradeconfigs.version,UPGRADESYSCONFIG:status.upgradeconfigs.version,SYSCONFIG_SPEC:spec.sysconfigs.version,SYSCONFIG:status.sysconfigs.version
NAME    NODESTATUS   UPGRADESYSCONFIG_SPEC   UPGRADESYSCONFIG   SYSCONFIG_SPEC   SYSCONFIG
node1   idle         upg_2                   upg_2              sys_1            sys_1
node2   idle         upg_2                   upg_2              sys_1            sys_1
node3   idle         upg_2                   upg_2              sys_1            sys_1
```

4. Apply the following YAML, this time without sysconfigs:

```
+ cat upgrade_v1alpha1_os.yaml
apiVersion: upgrade.openeuler.org/v1alpha1
kind: OS
metadata:
  name: os-sample
spec:
  imagetype: containerd
  opstype: config
  osversion: KubeOS openEuler-24.09-2024-08-20-08-34-38
  maxunavailable: 3
  containerimage: 9.82.186.7:5000/kubeos-x86_64-uefi:openeuler-24.09-2024-08-20-08-34-38
  evictpodforce: true
  imageurl: ""
  checksum: 994182f2a0747dc5671bbc7c945e1e94cffbdaf2c71d816faa27b3ac88b16e09
  flagSafe: true
  mtls: false
```

5. Expected: no errors, and the previous sysconfig version fields are cleared on all nodes. Actual: after waiting 1 minute, some nodes still keep the previous "sys_1" value (a small re-check helper is sketched after the attachments below):

```
+ kubectl get osinstances -o custom-columns=NAME:.metadata.name,NODESTATUS:.spec.nodestatus,UPGRADESYSCONFIG_SPEC:.spec.upgradeconfigs.version,UPGRADESYSCONFIG:status.upgradeconfigs.version,SYSCONFIG_SPEC:spec.sysconfigs.version,SYSCONFIG:status.sysconfigs.version
NAME    NODESTATUS   UPGRADESYSCONFIG_SPEC   UPGRADESYSCONFIG   SYSCONFIG_SPEC   SYSCONFIG
node1   idle         upg_2                   upg_2              sys_1            sys_1
node2   idle         upg_2                   upg_2
node3   idle         upg_2                   upg_2
```

**[Actual result]**

See the summary in the steps above.

**[Other related attachments]**

1. operator and proxy logs:

```
+ kubectl get pods -A -o wide
NAMESPACE        NAME                                       READY   STATUS    RESTARTS        AGE     IP                NODE     NOMINATED NODE   READINESS GATES
kube-system      calico-kube-controllers-55fc758c88-p75dh   1/1     Running   0               7m43s   153.202.219.66    master   <none>           <none>
kube-system      calico-node-fhd9t                          1/1     Running   1 (2m47s ago)   4m49s   9.82.220.255      node2    <none>           <none>
kube-system      calico-node-nzmqq                          1/1     Running   1 (2m36s ago)   4m53s   9.82.167.91       node3    <none>           <none>
kube-system      calico-node-pdxnw                          1/1     Running   0               7m43s   9.82.235.161      master   <none>           <none>
kube-system      calico-node-vhp66                          1/1     Running   1 (2m47s ago)   5m2s    9.82.195.149      node1    <none>           <none>
kube-system      coredns-6d4b75cb6d-57w6h                   1/1     Running   0               7m43s   153.202.219.65    master   <none>           <none>
kube-system      coredns-6d4b75cb6d-svd7t                   1/1     Running   0               7m43s   153.202.219.67    master   <none>           <none>
kube-system      etcd-master                                1/1     Running   0               7m58s   9.82.235.161    master   <none>           <none>
kube-system      kube-apiserver-master                      1/1     Running   0               7m58s   9.82.235.161    master   <none>           <none>
kube-system      kube-controller-manager-master             1/1     Running   0               7m58s   9.82.235.161    master   <none>           <none>
kube-system      kube-proxy-9hz99                           1/1     Running   1 (2m36s ago)   4m53s   9.82.167.91       node3    <none>           <none>
kube-system      kube-proxy-nfkvr                           1/1     Running   0               7m43s   9.82.235.161    master   <none>           <none>
kube-system      kube-proxy-phbwb                           1/1     Running   1 (2m47s ago)   4m49s   9.82.220.255      node2    <none>           <none>
kube-system      kube-proxy-r7dzr                           1/1     Running   1 (2m47s ago)   5m2s    9.82.195.149      node1    <none>           <none>
kube-system      kube-scheduler-master                      1/1     Running   0               7m58s   9.82.235.161    master   <none>           <none>
upgrade-system   upgrade-operator-596499fc4c-6flwd          1/1     Running   0               7m36s   153.202.219.68    master   <none>           <none>
upgrade-system   upgrade-proxy-cq29r                        1/1     Running   1 (2m47s ago)   4m51s   153.202.166.130   node1    <none>           <none>
upgrade-system   upgrade-proxy-stz7x                        1/1     Running   1 (2m47s ago)   4m29s   153.202.104.2     node2    <none>           <none>
upgrade-system   upgrade-proxy-xdxds                        1/1     Running   1 (2m36s ago)   4m43s   153.202.135.2     node3    <none>           <none>
+ for pod in ${pod_list}
+ echo '===================================== upgrade-operator-596499fc4c-6flwd log ====================================='
===================================== upgrade-operator-596499fc4c-6flwd log =====================================
+ kubectl logs -n upgrade-system upgrade-operator-596499fc4c-6flwd
+ tail -n 20
ts=2024-08-30T14:21:01.837866742Z level=info logger=operator.OS msg="Update osinstance spec successfully"
ts=2024-08-30T14:21:01.843871714Z level=info logger=operator.OS msg="Add node upgrading label upgrade.openeuler.org/upgrading successfully"
ts=2024-08-30T14:21:01.843899112Z level=info logger=operator.OS msg="Upgrading node node2"
ts=2024-08-30T14:21:01.84819556Z level=info logger=operator.OS msg="Update osinstance spec successfully"
ts=2024-08-30T14:21:01.852125776Z level=info logger=operator.OS msg="Add node upgrading label upgrade.openeuler.org/upgrading successfully"
ts=2024-08-30T14:23:30.736791968Z level=info logger=operator.OS msg="Configuring node node1"
ts=2024-08-30T14:23:30.74546552Z level=info logger=operator.OS msg="Update osinstance spec successfully"
ts=2024-08-30T14:23:30.75120661Z level=info logger=operator.OS msg="Add node configuring label upgrade.openeuler.org/configuring successfully"
ts=2024-08-30T14:23:30.75125454Z level=info logger=operator.OS msg="Configuring node node3"
ts=2024-08-30T14:23:30.756366713Z level=info logger=operator.OS msg="Update osinstance spec successfully"
ts=2024-08-30T14:23:30.762743075Z level=info logger=operator.OS msg="Add node configuring label upgrade.openeuler.org/configuring successfully"
ts=2024-08-30T14:23:30.762778901Z level=info logger=operator.OS msg="Configuring node node2"
ts=2024-08-30T14:23:30.767629618Z level=info logger=operator.OS msg="Update osinstance spec successfully"
ts=2024-08-30T14:23:30.772334077Z level=info logger=operator.OS msg="Add node configuring label upgrade.openeuler.org/configuring successfully"
ts=2024-08-30T14:24:06.757355479Z level=info logger=operator.OS msg="Configuring node node3"
ts=2024-08-30T14:24:06.766480417Z level=info logger=operator.OS msg="Update osinstance spec successfully"
ts=2024-08-30T14:24:06.772216025Z level=info logger=operator.OS msg="Add node configuring label upgrade.openeuler.org/configuring successfully"
ts=2024-08-30T14:24:06.772254601Z level=info logger=operator.OS msg="Configuring node node2"
ts=2024-08-30T14:24:06.778089799Z level=info logger=operator.OS msg="Update osinstance spec successfully"
ts=2024-08-30T14:24:06.78350536Z level=info logger=operator.OS msg="Add node configuring label upgrade.openeuler.org/configuring successfully"
+ for pod in ${pod_list}
+ echo '===================================== upgrade-proxy-cq29r log ====================================='
===================================== upgrade-proxy-cq29r log =====================================
+ kubectl logs -n upgrade-system upgrade-proxy-cq29r
+ tail -n 20
[2024-08-30T14:23:13Z INFO proxy] os-proxy version is 1.0.6, start renconcile
[2024-08-30T14:23:13Z INFO kube_runtime::controller] reconciling object; object.ref=OS.v1alpha1.upgrade.openeuler.org/os-sample.default object.reason=object updated
[2024-08-30T14:23:13Z INFO proxy::controller::controller] Uncordon successfully nodenode1
[2024-08-30T14:23:28Z INFO kube_runtime::controller] reconciling object; object.ref=OS.v1alpha1.upgrade.openeuler.org/os-sample.default object.reason=reconciler requested retry
[2024-08-30T14:23:29Z INFO kube_runtime::controller] reconciling object; object.ref=OS.v1alpha1.upgrade.openeuler.org/os-sample.default object.reason=reconciler requested retry
[2024-08-30T14:23:44Z INFO kube_runtime::controller] reconciling object; object.ref=OS.v1alpha1.upgrade.openeuler.org/os-sample.default object.reason=reconciler requested retry
[2024-08-30T14:23:59Z INFO kube_runtime::controller] reconciling object; object.ref=OS.v1alpha1.upgrade.openeuler.org/os-sample.default object.reason=reconciler requested retry
[2024-08-30T14:24:05Z INFO kube_runtime::controller] reconciling object; object.ref=OS.v1alpha1.upgrade.openeuler.org/os-sample.default object.reason=reconciler requested retry
[2024-08-30T14:24:20Z INFO kube_runtime::controller] reconciling object; object.ref=OS.v1alpha1.upgrade.openeuler.org/os-sample.default object.reason=reconciler requested retry
[2024-08-30T14:24:35Z INFO kube_runtime::controller] reconciling object; object.ref=OS.v1alpha1.upgrade.openeuler.org/os-sample.default object.reason=reconciler requested retry
[2024-08-30T14:24:50Z INFO kube_runtime::controller] reconciling object; object.ref=OS.v1alpha1.upgrade.openeuler.org/os-sample.default object.reason=reconciler requested retry
[2024-08-30T14:25:05Z INFO kube_runtime::controller] reconciling object; object.ref=OS.v1alpha1.upgrade.openeuler.org/os-sample.default object.reason=reconciler requested retry
+ for pod in ${pod_list}
+ echo '===================================== upgrade-proxy-stz7x log ====================================='
===================================== upgrade-proxy-stz7x log =====================================
+ kubectl logs -n upgrade-system upgrade-proxy-stz7x
+ tail -n 20
[2024-08-30T14:23:13Z INFO proxy] os-proxy version is 1.0.6, start renconcile
[2024-08-30T14:23:13Z INFO kube_runtime::controller] reconciling object; object.ref=OS.v1alpha1.upgrade.openeuler.org/os-sample.default object.reason=object updated
[2024-08-30T14:23:13Z INFO proxy::controller::controller] Uncordon successfully nodenode2
[2024-08-30T14:23:28Z INFO kube_runtime::controller] reconciling object; object.ref=OS.v1alpha1.upgrade.openeuler.org/os-sample.default object.reason=reconciler requested retry
[2024-08-30T14:23:30Z INFO kube_runtime::controller] reconciling object; object.ref=OS.v1alpha1.upgrade.openeuler.org/os-sample.default object.reason=reconciler requested retry
[2024-08-30T14:23:45Z INFO kube_runtime::controller] reconciling object; object.ref=OS.v1alpha1.upgrade.openeuler.org/os-sample.default object.reason=reconciler requested retry
[2024-08-30T14:24:00Z INFO kube_runtime::controller] reconciling object; object.ref=OS.v1alpha1.upgrade.openeuler.org/os-sample.default object.reason=reconciler requested retry
[2024-08-30T14:24:06Z INFO kube_runtime::controller] reconciling object; object.ref=OS.v1alpha1.upgrade.openeuler.org/os-sample.default object.reason=reconciler requested retry
[2024-08-30T14:24:21Z INFO kube_runtime::controller] reconciling object; object.ref=OS.v1alpha1.upgrade.openeuler.org/os-sample.default object.reason=reconciler requested retry
[2024-08-30T14:24:21Z INFO proxy::controller::controller] config is none, No content can be configured.
[2024-08-30T14:24:36Z INFO kube_runtime::controller] reconciling object; object.ref=OS.v1alpha1.upgrade.openeuler.org/os-sample.default object.reason=reconciler requested retry
[2024-08-30T14:24:51Z INFO kube_runtime::controller] reconciling object; object.ref=OS.v1alpha1.upgrade.openeuler.org/os-sample.default object.reason=reconciler requested retry
[2024-08-30T14:25:06Z INFO kube_runtime::controller] reconciling object; object.ref=OS.v1alpha1.upgrade.openeuler.org/os-sample.default object.reason=reconciler requested retry
+ for pod in ${pod_list}
+ echo '===================================== upgrade-proxy-xdxds log ====================================='
===================================== upgrade-proxy-xdxds log =====================================
+ kubectl logs -n upgrade-system upgrade-proxy-xdxds
+ tail -n 20
[2024-08-30T14:23:23Z INFO proxy] os-proxy version is 1.0.6, start renconcile
[2024-08-30T14:23:23Z INFO kube_runtime::controller] reconciling object; object.ref=OS.v1alpha1.upgrade.openeuler.org/os-sample.default object.reason=object updated
[2024-08-30T14:23:23Z INFO proxy::controller::controller] Uncordon successfully nodenode3
[2024-08-30T14:23:29Z INFO kube_runtime::controller] reconciling object; object.ref=OS.v1alpha1.upgrade.openeuler.org/os-sample.default object.reason=reconciler requested retry
[2024-08-30T14:23:44Z INFO kube_runtime::controller] reconciling object; object.ref=OS.v1alpha1.upgrade.openeuler.org/os-sample.default object.reason=reconciler requested retry
[2024-08-30T14:23:59Z INFO kube_runtime::controller] reconciling object; object.ref=OS.v1alpha1.upgrade.openeuler.org/os-sample.default object.reason=reconciler requested retry
[2024-08-30T14:24:05Z INFO kube_runtime::controller] reconciling object; object.ref=OS.v1alpha1.upgrade.openeuler.org/os-sample.default object.reason=reconciler requested retry
[2024-08-30T14:24:05Z INFO proxy::controller::controller] config is none, No content can be configured.
[2024-08-30T14:24:20Z INFO kube_runtime::controller] reconciling object; object.ref=OS.v1alpha1.upgrade.openeuler.org/os-sample.default object.reason=reconciler requested retry
[2024-08-30T14:24:35Z INFO kube_runtime::controller] reconciling object; object.ref=OS.v1alpha1.upgrade.openeuler.org/os-sample.default object.reason=reconciler requested retry
[2024-08-30T14:24:50Z INFO kube_runtime::controller] reconciling object; object.ref=OS.v1alpha1.upgrade.openeuler.org/os-sample.default object.reason=reconciler requested retry
[2024-08-30T14:25:05Z INFO kube_runtime::controller] reconciling object; object.ref=OS.v1alpha1.upgrade.openeuler.org/os-sample.default object.reason=reconciler requested retry
```

2. node1 os-agent logs:

```
root@9.82.195.149's password:
● os-agent.service - Agent For KubeOS
     Loaded: loaded (/usr/lib/systemd/system/os-agent.service; enabled; preset: disabled)
     Active: active (running) since Fri 2024-08-30 22:22:26 CST; 2min 47s ago
   Main PID: 602 (os-agent)
      Tasks: 2 (limit: 23176)
     Memory: 4.4M ()
     CGroup: /system.slice/os-agent.service
             └─602 /usr/bin/os-agent

Aug 30 22:22:26 localhost os-agent[602]: [2024-08-30T14:22:26Z INFO os_agent] os-agent version is: 1.0.6
Aug 30 22:22:26 localhost os-agent[602]: [2024-08-30T14:22:26Z INFO os_agent] os-agent started, waiting for requests...
Aug 30 22:23:13 node1 os-agent[602]: [2024-08-30T14:23:13Z INFO os_agent::rpc::agent_impl] Start to configure
Aug 30 22:23:13 node1 os-agent[602]: [2024-08-30T14:23:13Z INFO manager::sys_mgmt::config] Start setting kernel.sysctl.persist
Aug 30 22:23:13 node1 os-agent[602]: [2024-08-30T14:23:13Z INFO manager::sys_mgmt::config] Add configuration "sys_2=1000"
Aug 30 22:23:13 node1 os-agent[602]: [2024-08-30T14:23:13Z INFO manager::sys_mgmt::config] Write configuration to file "/etc/sysctl.conf"
Aug 30 22:23:29 node1 os-agent[602]: [2024-08-30T14:23:29Z INFO os_agent::rpc::agent_impl] Start to configure
Aug 30 22:23:29 node1 os-agent[602]: [2024-08-30T14:23:29Z INFO manager::sys_mgmt::config] Start setting kernel.sysctl.persist
Aug 30 22:23:29 node1 os-agent[602]: [2024-08-30T14:23:29Z INFO manager::sys_mgmt::config] Add configuration "sys_1=1000"
Aug 30 22:23:29 node1 os-agent[602]: [2024-08-30T14:23:29Z INFO manager::sys_mgmt::config] Write configuration to file "/etc/sysctl.conf"
```

3. node2 os-agent logs:

```
● os-agent.service - Agent For KubeOS
     Loaded: loaded (/usr/lib/systemd/system/os-agent.service; enabled; preset: disabled)
     Active: active (running) since Fri 2024-08-30 22:22:26 CST; 2min 49s ago
   Main PID: 603 (os-agent)
      Tasks: 2 (limit: 23176)
     Memory: 4.4M ()
     CGroup: /system.slice/os-agent.service
             └─603 /usr/bin/os-agent

Aug 30 22:22:26 localhost os-agent[603]: [2024-08-30T14:22:26Z INFO os_agent] os-agent version is: 1.0.6
Aug 30 22:22:26 localhost os-agent[603]: [2024-08-30T14:22:26Z INFO os_agent] os-agent started, waiting for requests...
Aug 30 22:23:13 node2 os-agent[603]: [2024-08-30T14:23:13Z INFO os_agent::rpc::agent_impl] Start to configure
Aug 30 22:23:13 node2 os-agent[603]: [2024-08-30T14:23:13Z INFO manager::sys_mgmt::config] Start setting kernel.sysctl.persist
Aug 30 22:23:13 node2 os-agent[603]: [2024-08-30T14:23:13Z INFO manager::sys_mgmt::config] Add configuration "sys_2=1000"
Aug 30 22:23:13 node2 os-agent[603]: [2024-08-30T14:23:13Z INFO manager::sys_mgmt::config] Write configuration to file "/etc/sysctl.conf"
Aug 30 22:23:45 node2 os-agent[603]: [2024-08-30T14:23:45Z INFO os_agent::rpc::agent_impl] Start to configure
Aug 30 22:23:45 node2 os-agent[603]: [2024-08-30T14:23:45Z INFO manager::sys_mgmt::config] Start setting kernel.sysctl.persist
Aug 30 22:23:45 node2 os-agent[603]: [2024-08-30T14:23:45Z INFO manager::sys_mgmt::config] Add configuration "sys_1=1000"
Aug 30 22:23:45 node2 os-agent[603]: [2024-08-30T14:23:45Z INFO manager::sys_mgmt::config] Write configuration to file "/etc/sysctl.conf"
```

4. node3 os-agent logs:

```
● os-agent.service - Agent For KubeOS
     Loaded: loaded (/usr/lib/systemd/system/os-agent.service; enabled; preset: disabled)
     Active: active (running) since Fri 2024-08-30 22:22:36 CST; 2min 39s ago
   Main PID: 600 (os-agent)
      Tasks: 2 (limit: 23176)
     Memory: 4.4M ()
     CGroup: /system.slice/os-agent.service
             └─600 /usr/bin/os-agent

Aug 30 22:22:37 localhost os-agent[600]: [2024-08-30T14:22:37Z INFO os_agent] os-agent version is: 1.0.6
Aug 30 22:22:37 localhost os-agent[600]: [2024-08-30T14:22:37Z INFO os_agent] os-agent started, waiting for requests...
Aug 30 22:23:23 node3 os-agent[600]: [2024-08-30T14:23:23Z INFO os_agent::rpc::agent_impl] Start to configure
Aug 30 22:23:23 node3 os-agent[600]: [2024-08-30T14:23:23Z INFO manager::sys_mgmt::config] Start setting kernel.sysctl.persist
Aug 30 22:23:23 node3 os-agent[600]: [2024-08-30T14:23:23Z INFO manager::sys_mgmt::config] Add configuration "sys_2=1000"
Aug 30 22:23:23 node3 os-agent[600]: [2024-08-30T14:23:23Z INFO manager::sys_mgmt::config] Write configuration to file "/etc/sysctl.conf"
Aug 30 22:23:44 node3 os-agent[600]: [2024-08-30T14:23:44Z INFO os_agent::rpc::agent_impl] Start to configure
Aug 30 22:23:44 node3 os-agent[600]: [2024-08-30T14:23:44Z INFO manager::sys_mgmt::config] Start setting kernel.sysctl.persist
Aug 30 22:23:44 node3 os-agent[600]: [2024-08-30T14:23:44Z INFO manager::sys_mgmt::config] Add configuration "sys_1=1000"
Aug 30 22:23:44 node3 os-agent[600]: [2024-08-30T14:23:44Z INFO manager::sys_mgmt::config] Write configuration to file "/etc/sysctl.conf"
```
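Editor's note, not part of the original report: when re-running step 4, a small polling loop like the sketch below can make the intermittent failure easier to catch. It only relies on `kubectl` and the OSInstance fields already shown above; the loop itself, its 10-second interval and 2-minute budget are assumptions for illustration, not part of the reporter's procedure.

```
#!/usr/bin/env bash
# Hypothetical re-check helper (not from the original report): after applying the
# step-4 YAML (opstype=config, no sysconfigs), poll the OSInstances and list any
# node whose spec/status sysconfigs.version has not been cleared yet.
for i in $(seq 1 12); do
    # kubectl custom-columns prints <none> for fields that are absent.
    stale=$(kubectl get osinstances \
        -o custom-columns=NAME:.metadata.name,SYS_SPEC:.spec.sysconfigs.version,SYS_STATUS:.status.sysconfigs.version \
        --no-headers |
        awk '($2 != "<none>" && $2 != "") || ($3 != "<none>" && $3 != "") {print $1}')
    if [ -z "${stale}" ]; then
        echo "OK: sysconfigs.version cleared on all nodes"
        exit 0
    fi
    echo "after $((i * 10))s, still not cleared on:" ${stale}
    sleep 10
done
echo "FAIL: stale sysconfigs.version remains after 2 minutes" >&2
exit 1
```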
Comments (1)
Status: Accepted
Assignee: liyuanr (li-yuanrong)
Collaborators: none
Labels: sig/sig-CloudNative
Project: none (unplanned task)
Milestone: openEuler-24.09-round-2
Associated Pull Requests: none
Associated branch: none
Priority: not specified
Participants: 1