
WARNING WARNING WARNING WARNING WARNING

PLEASE NOTE: This document applies to the HEAD of the source tree

If you are using a released version of Kubernetes, you should refer to the docs that go with that version.

The latest release of this document can be found [here](http://releases.k8s.io/release-1.1/docs/user-guide/known-issues.md).

Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io).

# Known Issues

This document summarizes known issues with existing Kubernetes releases.

Please consult this document before filing new bugs.

## Release 1.0.1

* `exec` liveness/readiness probes leak resources due to Docker exec leaking resources (#10659)
* `docker load` sometimes hangs, which causes the `kube-apiserver` not to start. Restarting the Docker daemon should fix the issue (#10868)
* The kubelet on the master node doesn't register with the `kube-apiserver`, so statistics aren't collected for master daemons (#10891)
* Heapster and InfluxDB both leak memory (#10653)
* Heapster reports wrong node CPU/memory limit metrics (https://github.com/GoogleCloudPlatform/heapster/issues/399)
* Services that set `type=LoadBalancer` cannot use port 10250 because of Google Compute Engine firewall limitations
* Add-on services cannot be created or deleted via `kubectl` or the Kubernetes API (#11435)
* If a pod with a GCE PD is created and deleted in rapid succession, it may fail to attach/mount correctly, leaving PD data inaccessible (or, in the worst case, corrupted) (http://issue.k8s.io/11231#issuecomment-122049113)
  * Suggested temporary workaround: introduce a 1-2 minute delay between deleting and recreating a pod with a PD on the same node (see the first sketch after this list).
* Explicit errors while detaching a GCE PD could prevent the PD from ever being detached (#11321)
* GCE PDs may sometimes fail to attach (#11302)
* If multiple Pods use the same RBD volume in read-write mode, data on the RBD volume could get corrupted. This problem has been found in environments where both the apiserver and etcd restarted and Pods were redistributed.
  * A workaround is to ensure no other Ceph client is using the RBD volume before mapping the RBD image in read-write mode. For example, `rados -p poolname listwatchers image_name.rbd` can list the RBD clients that are mapping the image (see the second sketch after this list).
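
As a rough illustration of the delay workaround above, a redeploy could be paced like this (the pod name and manifest path are hypothetical):

```sh
# Delete the pod that uses the GCE PD, then wait 1-2 minutes before
# recreating it so the PD has time to fully detach from the node.
kubectl delete pod my-pd-pod        # hypothetical pod name
sleep 120                           # 2-minute safety margin
kubectl create -f my-pd-pod.yaml    # hypothetical manifest path
```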
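
A minimal sketch of the RBD watcher check above, assuming a hypothetical pool `mypool` and a format 1 image `myimage` (whose header object is `myimage.rbd`):

```sh
# List Ceph clients currently watching (i.e. mapping) the image's header object.
rados -p mypool listwatchers myimage.rbd

# Only if no watchers are reported, map the image read-write.
rbd map mypool/myimage
```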
