This document highlights and consolidates in one place the configuration best practices introduced throughout the user guide, the getting-started documentation, and the examples. This is a living document, so if you think of something that is not on this list but might be useful to others, please don't hesitate to file an issue or submit a PR.
- Use `kubectl create -f <directory>` where possible. This looks for config objects in all `.yaml`, `.yml`, and `.json` files in `<directory>` and passes them to `create`.
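For example, keeping all of an application's config files in one directory lets a single command create everything. The directory and file contents below are a hypothetical sketch, not taken from this guide; the final `kubectl` line is commented out because it requires a running cluster:

```shell
# Hypothetical layout: one directory holding all config objects for an app.
mkdir -p myapp-config

# A replication controller definition (.yaml files are picked up by kubectl).
cat > myapp-config/frontend-controller.yaml <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: frontend
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: myapp
        tier: frontend
    spec:
      containers:
      - name: frontend
        image: nginx
EOF

# A service definition (.json files are picked up as well).
cat > myapp-config/frontend-service.json <<'EOF'
{
  "apiVersion": "v1",
  "kind": "Service",
  "metadata": { "name": "frontend" },
  "spec": {
    "selector": { "app": "myapp", "tier": "frontend" },
    "ports": [ { "port": 80 } ]
  }
}
EOF

ls myapp-config
# kubectl create -f myapp-config   # creates every object defined above (needs a cluster)
```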
- Don't use `hostPort` unless absolutely necessary (e.g., for a node daemon), as it will prevent certain scheduling configurations due to port conflicts. Use apiserver proxying or port forwarding for debug/admin access, or a service for external service access. If you need to expose a pod's port on the host machine, consider using a `NodePort` service before resorting to `hostPort`. If you only need access to the port for debugging purposes, you can also use the kubectl proxy and apiserver proxy, or `kubectl port-forward`.
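As a sketch of the `NodePort` alternative (the names and port numbers here are illustrative assumptions, not from this guide): a `NodePort` service exposes the port on every node and lets the scheduler place the pod freely, whereas `hostPort` restricts the pod to nodes where that port is free:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-debug        # hypothetical service name
spec:
  type: NodePort
  selector:
    app: myapp             # matches the pods to expose
  ports:
  - port: 80               # port inside the cluster
    targetPort: 8080       # container port on the pod
    nodePort: 30080        # optional; must fall within the cluster's node port range
```

For purely debug access, `kubectl port-forward <pod-name> 8080:8080` avoids exposing the port on the nodes at all.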
- Avoid using `hostNetwork`, for the same reasons as `hostPort`.
- Rather than attaching a label to a set of pods to represent a service (e.g., `service: myservice`
) and another to represent the replication controller managing the pods (e.g., `controller: mycontroller`), attach labels that identify semantic attributes of your application or deployment, and select the appropriate subsets in your service and replication controller, such as `{ app: myapp, tier: frontend, deployment: v3 }`. A service can be made to span multiple deployments, such as across rolling updates, by simply omitting release-specific labels from its selector, rather than updating the service's selector to match the replication controller's selector fully.
- Don't use naked pods (i.e., pods not bound to a replication controller) if you can avoid it (except `restartPolicy: Never`
scenarios). A minimal Job is coming. See #1624. Naked pods will not be rescheduled in the event of node failure.
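The labeling scheme above can be sketched as follows (all names and the image are hypothetical): the replication controller selects on the full label set, while the service omits the release-specific `deployment` label so it keeps matching pods across rolling updates:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: frontend-v3
spec:
  replicas: 3
  selector:                   # full set, including the release-specific label
    app: myapp
    tier: frontend
    deployment: v3
  template:
    metadata:
      labels:
        app: myapp
        tier: frontend
        deployment: v3
    spec:
      containers:
      - name: frontend
        image: myapp-frontend:v3   # hypothetical image
---
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  selector:                   # omits 'deployment', so it spans releases
    app: myapp
    tier: frontend
  ports:
  - port: 80
```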