# harbor-helm

**Repository Path**: z0ukun/harbor-helm

## Basic Information

- **Project Name**: harbor-helm
- **Description**: No description available
- **Primary Language**: Unknown
- **License**: Apache-2.0
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 0
- **Forks**: 0
- **Created**: 2021-07-01
- **Last Updated**: 2022-07-13

## Categories & Tags

**Categories**: Uncategorized
**Tags**: None

## README

# Helm Chart for Harbor

**Notes:** The master branch is under heavy development; please use one of the stable versions instead.

A highly available solution for Harbor based on this chart can be found [here](docs/High%20Availability.md). Refer to the [guide](docs/Upgrade.md) to upgrade an existing deployment.

This repository, including its issues, focuses on deploying the Harbor chart via Helm. For functionality issues or Harbor questions, please open issues on [goharbor/harbor](https://github.com/goharbor/harbor).

## Introduction

This [Helm](https://github.com/kubernetes/helm) chart installs [Harbor](https://github.com/goharbor/harbor) in a Kubernetes cluster. Welcome to [contribute](CONTRIBUTING.md) to the Helm Chart for Harbor.

## Prerequisites

- Kubernetes cluster 1.20+
- Helm v3.2.0+

## Installation

### Add Helm repository

```bash
helm repo add harbor https://helm.goharbor.io
```

### Configure the chart

The following items can be set via the `--set` flag during installation or configured by editing `values.yaml` directly (you need to download the chart first).

#### Configure how to expose Harbor service

- **Ingress**: An ingress controller must be installed in the Kubernetes cluster. **Notes:** if TLS is disabled, the port must be included in the command when pulling/pushing images. Refer to issue [#5291](https://github.com/goharbor/harbor/issues/5291) for details.
- **ClusterIP**: Exposes the service on a cluster-internal IP. Choosing this value makes the service reachable only from within the cluster.
- **NodePort**: Exposes the service on each node's IP at a static port (the NodePort). You'll be able to reach the NodePort service from outside the cluster by requesting `NodeIP:NodePort`.
- **LoadBalancer**: Exposes the service externally using a cloud provider's load balancer.

#### Configure the external URL

The external URL for the Harbor core service is used to:

1. populate the docker/helm commands shown on the portal
2. populate the token service URL returned to the docker/notary client

Format: `protocol://domain[:port]`. Usually:

- if the service is exposed via `Ingress`, the `domain` should be the value of `expose.ingress.hosts.core`
- if the service is exposed via `ClusterIP`, the `domain` should be the value of `expose.clusterIP.name`
- if the service is exposed via `NodePort`, the `domain` should be the IP address of one Kubernetes node
- if the service is exposed via `LoadBalancer`, set the `domain` as your own domain name and add a CNAME record to map the domain name to the one you got from the cloud provider

If Harbor is deployed behind a proxy, set it to the URL of the proxy.

#### Configure how to persist data

- **Disable**: The data does not survive the termination of a pod.
- **Persistent Volume Claim (default)**: A default `StorageClass` is needed in the Kubernetes cluster to dynamically provision the volumes. Specify another StorageClass in `storageClass`, or set `existingClaim` if you already have existing persistent volumes to use.
- **External Storage (only for images and charts)**: For images and charts, the following external storages are supported: `azure`, `gcs`, `s3`, `swift` and `oss`.
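As an example, the exposure, external URL, and persistence settings described above can be combined in a small values override. This is a sketch, not a recommended configuration: the hostname `harbor.example.com`, the storage class `fast-ssd`, and the `50Gi` size are placeholder values, not chart defaults.

```yaml
# my-values.yaml — hypothetical override; hostname, storage class and size
# below are placeholders, replace them with your own values.
expose:
  type: ingress          # one of: ingress, clusterIP, nodePort, loadBalancer
  tls:
    enabled: true
    certSource: auto     # generate the TLS certificate automatically
  ingress:
    hosts:
      core: harbor.example.com

# Must match how the service is exposed, see "Configure the external URL"
externalURL: https://harbor.example.com

persistence:
  enabled: true
  persistentVolumeClaim:
    registry:
      storageClass: fast-ssd   # placeholder; omit to use the default StorageClass
      size: 50Gi
```

Pass the file with `helm install my-release harbor/harbor -f my-values.yaml`, or set individual keys with `--set`, e.g. `--set externalURL=https://harbor.example.com`.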
#### Configure the other items listed in the [configuration](#configuration) section

### Install the chart

Install the Harbor helm chart with a release name `my-release`:

```bash
helm install my-release harbor/harbor
```

## Uninstallation

To uninstall/delete the `my-release` deployment:

```bash
helm uninstall my-release
```

## Configuration

The following table lists the configurable parameters of the Harbor chart and their default values.

| Parameter | Description | Default |
| --------- | ----------- | ------- |
| **Expose** | | |
| `expose.type` | How to expose the service: `ingress`, `clusterIP`, `nodePort` or `loadBalancer`. Other values will be ignored and the creation of the service will be skipped | `ingress` |
| `expose.tls.enabled` | Enable TLS or not. Delete the `ssl-redirect` annotations in `expose.ingress.annotations` when TLS is disabled and `expose.type` is `ingress`. Note: if `expose.type` is `ingress` and TLS is disabled, the port must be included in the command when pulling/pushing images. Refer to https://github.com/goharbor/harbor/issues/5291 for details | `true` |
| `expose.tls.certSource` | The source of the TLS certificate. Set as `auto`, `secret` or `none` and fill in the information in the corresponding section: 1) auto: generate the TLS certificate automatically 2) secret: read the TLS certificate from the specified secret (the TLS certificate can be generated manually or by cert manager) 3) none: configure no TLS certificate for the ingress. If the default TLS certificate is configured in the ingress controller, choose this option | `auto` |
| `expose.tls.auto.commonName` | The common name used to generate the certificate; it is necessary when the type isn't `ingress` | |
| `expose.tls.secret.secretName` | The name of the secret which contains keys named `tls.crt` (the certificate) and `tls.key` (the private key) | |
| `expose.tls.secret.notarySecretName` | The name of the secret which contains keys named `tls.crt` (the certificate) and `tls.key` (the private key). Only needed when `expose.type` is `ingress` | |
| `expose.ingress.hosts.core` | The host of the Harbor core service in the ingress rule | `core.harbor.domain` |
| `expose.ingress.hosts.notary` | The host of the Harbor Notary service in the ingress rule | `notary.harbor.domain` |
| `expose.ingress.controller` | The ingress controller type. Currently supports `default`, `gce`, `alb` and `ncp` | `default` |
| `expose.ingress.kubeVersionOverride` | Allows overriding the Kubernetes version used while templating the ingress | |
| `expose.ingress.annotations` | The annotations used commonly for ingresses | |
| `expose.ingress.harbor.annotations` | The annotations specific to the harbor ingress | `{}` |
| `expose.ingress.harbor.labels` | The labels specific to the harbor ingress | `{}` |
| `expose.ingress.notary.annotations` | The annotations specific to the notary ingress | `{}` |
| `expose.ingress.notary.labels` | The labels specific to the notary ingress | `{}` |
| `expose.clusterIP.name` | The name of the ClusterIP service | `harbor` |
| `expose.clusterIP.annotations` | The annotations attached to the ClusterIP service | `{}` |
| `expose.clusterIP.ports.httpPort` | The service port Harbor listens on when serving HTTP | `80` |
| `expose.clusterIP.ports.httpsPort` | The service port Harbor listens on when serving HTTPS | `443` |
| `expose.clusterIP.ports.notaryPort` | The service port Notary listens on. Only needed when `notary.enabled` is set to `true` | `4443` |
| `expose.nodePort.name` | The name of the NodePort service | `harbor` |
| `expose.nodePort.ports.http.port` | The service port Harbor listens on when serving HTTP | `80` |
| `expose.nodePort.ports.http.nodePort` | The node port Harbor listens on when serving HTTP | `30002` |
| `expose.nodePort.ports.https.port` | The service port Harbor listens on when serving HTTPS | `443` |
| `expose.nodePort.ports.https.nodePort` | The node port Harbor listens on when serving HTTPS | `30003` |
| `expose.nodePort.ports.notary.port` | The service port Notary listens on. Only needed when `notary.enabled` is set to `true` | `4443` |
| `expose.nodePort.ports.notary.nodePort` | The node port Notary listens on. Only needed when `notary.enabled` is set to `true` | `30004` |
| `expose.loadBalancer.name` | The name of the service | `harbor` |
| `expose.loadBalancer.IP` | The IP of the loadBalancer. It only works when the loadBalancer supports assigning an IP | `""` |
| `expose.loadBalancer.ports.httpPort` | The service port Harbor listens on when serving HTTP | `80` |
| `expose.loadBalancer.ports.httpsPort` | The service port Harbor listens on when serving HTTPS | `443` |
| `expose.loadBalancer.ports.notaryPort` | The service port Notary listens on. Only needed when `notary.enabled` is set to `true` | |
| `expose.loadBalancer.annotations` | The annotations attached to the loadBalancer service | `{}` |
| `expose.loadBalancer.sourceRanges` | List of IP address ranges to assign to loadBalancerSourceRanges | `[]` |
| **Internal TLS** | | |
| `internalTLS.enabled` | Enable TLS for the components (chartmuseum, core, jobservice, portal, registry, trivy) | `false` |
| `internalTLS.certSource` | Method to provide TLS for the components; options are `auto`, `manual`, `secret` | `auto` |
| `internalTLS.trustCa` | The content of the trust CA, only available when `certSource` is `manual`. **Note**: all the internal certificates of the components must be issued by this CA | |
| `internalTLS.core.secretName` | The secret name for the core component, only available when `certSource` is `secret`. The secret must contain keys named: `ca.crt` - the CA certificate used to issue the internal key and crt pair for the components (all Harbor components must be issued by the same CA), `tls.crt` - the content of the TLS cert file, `tls.key` - the content of the TLS key file | |
| `internalTLS.core.crt` | Content of core's TLS cert file, only available when `certSource` is `manual` | |
| `internalTLS.core.key` | Content of core's TLS key file, only available when `certSource` is `manual` | |
| `internalTLS.jobservice.secretName` | The secret name for the jobservice component, only available when `certSource` is `secret`. The secret must contain keys named: `ca.crt` - the CA certificate used to issue the internal key and crt pair for the components (all Harbor components must be issued by the same CA), `tls.crt` - the content of the TLS cert file, `tls.key` - the content of the TLS key file | |
| `internalTLS.jobservice.crt` | Content of jobservice's TLS cert file, only available when `certSource` is `manual` | |
| `internalTLS.jobservice.key` | Content of jobservice's TLS key file, only available when `certSource` is `manual` | |
| `internalTLS.registry.secretName` | The secret name for the registry component, only available when `certSource` is `secret`. The secret must contain keys named: `ca.crt` - the CA certificate used to issue the internal key and crt pair for the components (all Harbor components must be issued by the same CA), `tls.crt` - the content of the TLS cert file, `tls.key` - the content of the TLS key file | |
| `internalTLS.registry.crt` | Content of registry's TLS cert file, only available when `certSource` is `manual` | |
| `internalTLS.registry.key` | Content of registry's TLS key file, only available when `certSource` is `manual` | |
| `internalTLS.portal.secretName` | The secret name for the portal component, only available when `certSource` is `secret`. The secret must contain keys named: `ca.crt` - the CA certificate used to issue the internal key and crt pair for the components (all Harbor components must be issued by the same CA), `tls.crt` - the content of the TLS cert file, `tls.key` - the content of the TLS key file | |
| `internalTLS.portal.crt` | Content of portal's TLS cert file, only available when `certSource` is `manual` | |
| `internalTLS.portal.key` | Content of portal's TLS key file, only available when `certSource` is `manual` | |
| `internalTLS.chartmuseum.secretName` | The secret name for the chartmuseum component, only available when `certSource` is `secret`. The secret must contain keys named: `ca.crt` - the CA certificate used to issue the internal key and crt pair for the components (all Harbor components must be issued by the same CA), `tls.crt` - the content of the TLS cert file, `tls.key` - the content of the TLS key file | |
| `internalTLS.chartmuseum.crt` | Content of chartmuseum's TLS cert file, only available when `certSource` is `manual` | |
| `internalTLS.chartmuseum.key` | Content of chartmuseum's TLS key file, only available when `certSource` is `manual` | |
| `internalTLS.trivy.secretName` | The secret name for the trivy component, only available when `certSource` is `secret`. The secret must contain keys named: `ca.crt` - the CA certificate used to issue the internal key and crt pair for the components (all Harbor components must be issued by the same CA), `tls.crt` - the content of the TLS cert file, `tls.key` - the content of the TLS key file | |
| `internalTLS.trivy.crt` | Content of trivy's TLS cert file, only available when `certSource` is `manual` | |
| `internalTLS.trivy.key` | Content of trivy's TLS key file, only available when `certSource` is `manual` | |
| **IPFamily** | | |
| `ipFamily.ipv4.enabled` | If the cluster is ipv4-enabled, all ipv4-related configs will be set correspondingly; currently this only affects the nginx-related components | `true` |
| `ipFamily.ipv6.enabled` | If the cluster is ipv6-enabled, all ipv6-related configs will be set correspondingly; currently this only affects the nginx-related components | `true` |
| **Persistence** | | |
| `persistence.enabled` | Enable data persistence or not | `true` |
| `persistence.resourcePolicy` | Set it to `keep` to avoid removing PVCs during a helm delete operation. Leaving it empty will delete PVCs after the chart is deleted. Does not affect PVCs created for the internal database and redis components | `keep` |
| `persistence.persistentVolumeClaim.registry.existingClaim` | Use an existing PVC, which must be created manually before being bound, and specify the `subPath` if the PVC is shared with other components | |
| `persistence.persistentVolumeClaim.registry.storageClass` | Specify the `storageClass` used to provision the volume, or the default StorageClass will be used (the default). Set it to `-` to disable dynamic provisioning | |
| `persistence.persistentVolumeClaim.registry.subPath` | The sub path used in the volume | |
| `persistence.persistentVolumeClaim.registry.accessMode` | The access mode of the volume | `ReadWriteOnce` |
| `persistence.persistentVolumeClaim.registry.size` | The size of the volume | `5Gi` |
| `persistence.persistentVolumeClaim.registry.annotations` | The annotations of the volume | |
| `persistence.persistentVolumeClaim.chartmuseum.existingClaim` | Use an existing PVC, which must be created manually before being bound, and specify the `subPath` if the PVC is shared with other components | |
| `persistence.persistentVolumeClaim.chartmuseum.storageClass` | Specify the `storageClass` used to provision the volume, or the default StorageClass will be used (the default). Set it to `-` to disable dynamic provisioning | |
| `persistence.persistentVolumeClaim.chartmuseum.subPath` | The sub path used in the volume | |
| `persistence.persistentVolumeClaim.chartmuseum.accessMode` | The access mode of the volume | `ReadWriteOnce` |
| `persistence.persistentVolumeClaim.chartmuseum.size` | The size of the volume | `5Gi` |
| `persistence.persistentVolumeClaim.chartmuseum.annotations` | The annotations of the volume | |
| `persistence.persistentVolumeClaim.jobservice.existingClaim` | Use an existing PVC, which must be created manually before being bound, and specify the `subPath` if the PVC is shared with other components | |
| `persistence.persistentVolumeClaim.jobservice.storageClass` | Specify the `storageClass` used to provision the volume, or the default StorageClass will be used (the default). Set it to `-` to disable dynamic provisioning | |
| `persistence.persistentVolumeClaim.jobservice.subPath` | The sub path used in the volume | |
| `persistence.persistentVolumeClaim.jobservice.accessMode` | The access mode of the volume | `ReadWriteOnce` |
| `persistence.persistentVolumeClaim.jobservice.size` | The size of the volume | `1Gi` |
| `persistence.persistentVolumeClaim.jobservice.annotations` | The annotations of the volume | |
| `persistence.persistentVolumeClaim.database.existingClaim` | Use an existing PVC, which must be created manually before being bound, and specify the `subPath` if the PVC is shared with other components. If an external database is used, the setting will be ignored | |
| `persistence.persistentVolumeClaim.database.storageClass` | Specify the `storageClass` used to provision the volume, or the default StorageClass will be used (the default). Set it to `-` to disable dynamic provisioning. If an external database is used, the setting will be ignored | |
| `persistence.persistentVolumeClaim.database.subPath` | The sub path used in the volume. If an external database is used, the setting will be ignored | |
| `persistence.persistentVolumeClaim.database.accessMode` | The access mode of the volume. If an external database is used, the setting will be ignored | `ReadWriteOnce` |
| `persistence.persistentVolumeClaim.database.size` | The size of the volume. If an external database is used, the setting will be ignored | `1Gi` |
| `persistence.persistentVolumeClaim.database.annotations` | The annotations of the volume | |
| `persistence.persistentVolumeClaim.redis.existingClaim` | Use an existing PVC, which must be created manually before being bound, and specify the `subPath` if the PVC is shared with other components. If external Redis is used, the setting will be ignored | |
| `persistence.persistentVolumeClaim.redis.storageClass` | Specify the `storageClass` used to provision the volume, or the default StorageClass will be used (the default). Set it to `-` to disable dynamic provisioning. If external Redis is used, the setting will be ignored | |
| `persistence.persistentVolumeClaim.redis.subPath` | The sub path used in the volume. If external Redis is used, the setting will be ignored | |
| `persistence.persistentVolumeClaim.redis.accessMode` | The access mode of the volume. If external Redis is used, the setting will be ignored | `ReadWriteOnce` |
| `persistence.persistentVolumeClaim.redis.size` | The size of the volume. If external Redis is used, the setting will be ignored | `1Gi` |
| `persistence.persistentVolumeClaim.redis.annotations` | The annotations of the volume | |
| `persistence.persistentVolumeClaim.trivy.existingClaim` | Use an existing PVC, which must be created manually before being bound, and specify the `subPath` if the PVC is shared with other components | |
| `persistence.persistentVolumeClaim.trivy.storageClass` | Specify the `storageClass` used to provision the volume, or the default StorageClass will be used (the default). Set it to `-` to disable dynamic provisioning | |
| `persistence.persistentVolumeClaim.trivy.subPath` | The sub path used in the volume | |
| `persistence.persistentVolumeClaim.trivy.accessMode` | The access mode of the volume | `ReadWriteOnce` |
| `persistence.persistentVolumeClaim.trivy.size` | The size of the volume | `5Gi` |
| `persistence.persistentVolumeClaim.trivy.annotations` | The annotations of the volume | |
| `persistence.imageChartStorage.disableredirect` | The configuration for managing redirects from content backends. For backends that do not support it (such as using MinIO for the `s3` storage type), set it to `true` to disable redirects. Refer to the [guide](https://github.com/docker/distribution/blob/master/docs/configuration.md#redirect) for more details | `false` |
| `persistence.imageChartStorage.caBundleSecretName` | Specify the `caBundleSecretName` if the storage service uses a self-signed certificate. The secret must contain a key named `ca.crt`, which will be injected into the trust store of the registry and chartmuseum containers | |
| `persistence.imageChartStorage.type` | The type of storage for images and charts: `filesystem`, `azure`, `gcs`, `s3`, `swift` or `oss`. The type must be `filesystem` if you want to use persistent volumes for registry and chartmuseum. Refer to the [guide](https://github.com/docker/distribution/blob/master/docs/configuration.md#storage) for more details | `filesystem` |
| **General** | | |
| `externalURL` | The external URL for the Harbor core service | `https://core.harbor.domain` |
| `caBundleSecretName` | The custom CA bundle secret name; the secret must contain a key named `ca.crt`, which will be injected into the trust store for the chartmuseum, core, jobservice, registry and trivy components | |
| `uaaSecretName` | If using external UAA auth which has a self-signed cert, you can provide a pre-created secret containing it under the key `ca.crt` | |
| `imagePullPolicy` | The image pull policy | |
| `imagePullSecrets` | The imagePullSecrets names for all deployments | |
| `updateStrategy.type` | The update strategy for deployments with persistent volumes (jobservice, registry and chartmuseum): `RollingUpdate` or `Recreate`. Set it to `Recreate` when `RWM` for volumes isn't supported | `RollingUpdate` |
| `logLevel` | The log level: `debug`, `info`, `warning`, `error` or `fatal` | `info` |
| `harborAdminPassword` | The initial password of the Harbor admin. Change it from the portal after launching Harbor | `Harbor12345` |
| `caSecretName` | The name of the secret which contains a key named `ca.crt`. Setting this enables the download link on the portal to download the CA certificate when the certificate isn't generated automatically | |
| `secretKey` | The key used for encryption. Must be a string of 16 chars | `not-a-secure-key` |
| `proxy.httpProxy` | The URL of the HTTP proxy server | |
| `proxy.httpsProxy` | The URL of the HTTPS proxy server | |
| `proxy.noProxy` | The URLs that the proxy settings do not apply to | `127.0.0.1,localhost,.local,.internal` |
| `proxy.components` | The component list that the proxy settings apply to | `core, jobservice, trivy` |
| `enableMigrateHelmHook` | Run the migration job via a helm hook. If `true`, the database migration will be separated from harbor-core and run as a pre-upgrade `migration-job` | `false` |
| **Nginx** (if the service is exposed via `ingress`, Nginx will not be used) | | |
| `nginx.image.repository` | Image repository | `goharbor/nginx-photon` |
| `nginx.image.tag` | Image tag | `dev` |
| `nginx.replicas` | The replica count | `1` |
| `nginx.revisionHistoryLimit` | The revision history limit | `10` |
| `nginx.resources` | The [resources] to allocate for the container | undefined |
| `nginx.automountServiceAccountToken` | Mount serviceAccountToken? | `false` |
| `nginx.nodeSelector` | Node labels for pod assignment | `{}` |
| `nginx.tolerations` | Tolerations for pod assignment | `[]` |
| `nginx.affinity` | Node/Pod affinities | `{}` |
| `nginx.podAnnotations` | Annotations to add to the nginx pod | `{}` |
| `nginx.priorityClassName` | The priority class to run the pod as | |
| **Portal** | | |
| `portal.image.repository` | Repository for portal image | `goharbor/harbor-portal` |
| `portal.image.tag` | Tag for portal image | `dev` |
| `portal.replicas` | The replica count | `1` |
| `portal.revisionHistoryLimit` | The revision history limit | `10` |
| `portal.resources` | The [resources] to allocate for the container | undefined |
| `portal.automountServiceAccountToken` | Mount serviceAccountToken? | `false` |
| `portal.nodeSelector` | Node labels for pod assignment | `{}` |
| `portal.tolerations` | Tolerations for pod assignment | `[]` |
| `portal.affinity` | Node/Pod affinities | `{}` |
| `portal.podAnnotations` | Annotations to add to the portal pod | `{}` |
| `portal.priorityClassName` | The priority class to run the pod as | |
| **Core** | | |
| `core.image.repository` | Repository for Harbor core image | `goharbor/harbor-core` |
| `core.image.tag` | Tag for Harbor core image | `dev` |
| `core.replicas` | The replica count | `1` |
| `core.revisionHistoryLimit` | The revision history limit | `10` |
| `core.startupProbe.initialDelaySeconds` | The initial delay in seconds for the startup probe | `10` |
| `core.resources` | The [resources] to allocate for the container | undefined |
| `core.automountServiceAccountToken` | Mount serviceAccountToken? | `false` |
| `core.nodeSelector` | Node labels for pod assignment | `{}` |
| `core.tolerations` | Tolerations for pod assignment | `[]` |
| `core.affinity` | Node/Pod affinities | `{}` |
| `core.podAnnotations` | Annotations to add to the core pod | `{}` |
| `core.secret` | Secret used when the core server communicates with other components. If a secret key is not specified, Helm will generate one. Must be a string of 16 chars | |
| `core.secretName` | Fill in the name of a Kubernetes secret if you want to use your own TLS certificate and private key for token encryption/decryption. The secret must contain keys named `tls.crt` (the certificate) and `tls.key` (the private key). The default key pair will be used if it isn't set | |
| `core.xsrfKey` | The XSRF key. Will be generated automatically if it isn't specified | |
| `core.priorityClassName` | The priority class to run the pod as | |
| `core.artifactPullAsyncFlushDuration` | The time duration for asynchronously updating artifact pull_time and repository pull_count | |
| **Jobservice** | | |
| `jobservice.image.repository` | Repository for jobservice image | `goharbor/harbor-jobservice` |
| `jobservice.image.tag` | Tag for jobservice image | `dev` |
| `jobservice.replicas` | The replica count | `1` |
| `jobservice.revisionHistoryLimit` | The revision history limit | `10` |
| `jobservice.maxJobWorkers` | The max job workers | `10` |
| `jobservice.jobLoggers` | The loggers for jobs: `file`, `database` or `stdout` | `file` |
| `jobservice.loggerSweeperDuration` | The jobLogger sweeper duration in days (ignored if `jobLoggers` is set to `stdout`) | `14` |
| `jobservice.resources` | The [resources] to allocate for the container | undefined |
| `jobservice.automountServiceAccountToken` | Mount serviceAccountToken? | `false` |
| `jobservice.nodeSelector` | Node labels for pod assignment | `{}` |
| `jobservice.tolerations` | Tolerations for pod assignment | `[]` |
| `jobservice.affinity` | Node/Pod affinities | `{}` |
| `jobservice.podAnnotations` | Annotations to add to the jobservice pod | `{}` |
| `jobservice.priorityClassName` | The priority class to run the pod as | |
| `jobservice.secret` | Secret used when the job service communicates with other components. If a secret key is not specified, Helm will generate one. Must be a string of 16 chars | |
| **Registry** | | |
| `registry.registry.image.repository` | Repository for registry image | `goharbor/registry-photon` |
| `registry.registry.image.tag` | Tag for registry image | `dev` |
| `registry.registry.resources` | The [resources] to allocate for the container | undefined |
| `registry.controller.image.repository` | Repository for registry controller image | `goharbor/harbor-registryctl` |
| `registry.controller.image.tag` | Tag for registry controller image | `dev` |
| `registry.controller.resources` | The [resources] to allocate for the container | undefined |
| `registry.replicas` | The replica count | `1` |
| `registry.revisionHistoryLimit` | The revision history limit | `10` |
| `registry.nodeSelector` | Node labels for pod assignment | `{}` |
| `registry.automountServiceAccountToken` | Mount serviceAccountToken? | `false` |
| `registry.tolerations` | Tolerations for pod assignment | `[]` |
| `registry.affinity` | Node/Pod affinities | `{}` |
| `registry.middleware` | Middleware is used to add support for a CDN between backend storage and `docker pull` recipients. See [official docs](https://github.com/docker/distribution/blob/master/docs/configuration.md#middleware) | |
| `registry.podAnnotations` | Annotations to add to the registry pod | `{}` |
| `registry.priorityClassName` | The priority class to run the pod as | |
| `registry.secret` | Secret used to secure the upload state between the client and the registry storage backend. See [official docs](https://github.com/docker/distribution/blob/master/docs/configuration.md#http). If a secret key is not specified, Helm will generate one. Must be a string of 16 chars | |
| `registry.credentials.username` | The username for accessing the registry instance, which is hosted by the htpasswd auth mode. For more details see the [official docs](https://github.com/docker/distribution/blob/master/docs/configuration.md#htpasswd) | `harbor_registry_user` |
| `registry.credentials.password` | The password for accessing the registry instance, which is hosted by the htpasswd auth mode. For more details see the [official docs](https://github.com/docker/distribution/blob/master/docs/configuration.md#htpasswd). It is suggested you update this value before installation | `harbor_registry_password` |
| `registry.credentials.htpasswdString` | Login and password in htpasswd string format. Excludes `registry.credentials.username` and `registry.credentials.password`. May come in handy when integrating with tools like Argo CD or Flux. This allows the same line to be generated each time the template is rendered, unlike Helm's `htpasswd` function, which generates a different line each time because of the salt | undefined |
| `registry.relativeurls` | If `true`, the registry returns relative URLs in Location headers. The client is responsible for resolving the correct URL. Needed if Harbor is behind a reverse proxy | `false` |
| `registry.upload_purging.enabled` | If `true`, enable purging of `_upload` directories | `true` |
| `registry.upload_purging.age` | Remove files in `_upload` directories which have existed for this period of time; the default is one week | `168h` |
| `registry.upload_purging.interval` | The interval of the purge operations | `24h` |
| `registry.upload_purging.dryrun` | If `true`, enable dry run for purging `_upload` directories | `false` |
| **Chartmuseum** | | |
| `chartmuseum.enabled` | Enable chartmuseum to store charts | `true` |
| `chartmuseum.absoluteUrl` | If `true`, ChartMuseum will return absolute URLs. The default behavior is to return relative URLs | `false` |
| `chartmuseum.image.repository` | Repository for chartmuseum image | `goharbor/chartmuseum-photon` |
| `chartmuseum.image.tag` | Tag for chartmuseum image | `dev` |
| `chartmuseum.replicas` | The replica count | `1` |
| `chartmuseum.revisionHistoryLimit` | The revision history limit | `10` |
| `chartmuseum.resources` | The [resources] to allocate for the container | undefined |
| `chartmuseum.automountServiceAccountToken` | Mount serviceAccountToken? | `false` |
| `chartmuseum.nodeSelector` | Node labels for pod assignment | `{}` |
| `chartmuseum.tolerations` | Tolerations for pod assignment | `[]` |
| `chartmuseum.affinity` | Node/Pod affinities | `{}` |
| `chartmuseum.podAnnotations` | Annotations to add to the chartmuseum pod | `{}` |
| `chartmuseum.priorityClassName` | The priority class to run the pod as | |
| **[Trivy][trivy]** | | |
| `trivy.enabled` | The flag to enable the Trivy scanner | `true` |
| `trivy.image.repository` | Repository for Trivy adapter image | `goharbor/trivy-adapter-photon` |
| `trivy.image.tag` | Tag for Trivy adapter image | `dev` |
| `trivy.resources` | The [resources] to allocate for the Trivy adapter container | |
| `trivy.automountServiceAccountToken` | Mount serviceAccountToken? | `false` |
| `trivy.replicas` | The number of Pod replicas | `1` |
| `trivy.debugMode` | The flag to enable Trivy debug mode | `false` |
| `trivy.vulnType` | Comma-separated list of vulnerability types. Possible values are `os` and `library` | `os,library` |
| `trivy.severity` | Comma-separated list of severities to be checked | `UNKNOWN,LOW,MEDIUM,HIGH,CRITICAL` |
| `trivy.ignoreUnfixed` | The flag to display only fixed vulnerabilities | `false` |
| `trivy.insecure` | The flag to skip verifying the registry certificate | `false` |
| `trivy.skipUpdate` | The flag to disable [Trivy DB][trivy-db] downloads from GitHub | `false` |
| `trivy.offlineScan` | The flag that prevents Trivy from sending API requests to identify dependencies | `false` |
| `trivy.timeout` | The duration to wait for scan completion | `5m0s` |
| `trivy.gitHubToken` | The GitHub access token to download the [Trivy DB][trivy-db] (see [GitHub rate limiting][trivy-rate-limiting]) | |
| `trivy.priorityClassName` | The priority class to run the pod as | |
| **Notary** | | |
| `notary.enabled` | Enable Notary? | `true` |
| `notary.server.image.repository` | Repository for notary server image | `goharbor/notary-server-photon` |
| `notary.server.image.tag` | Tag for notary server image | `dev` |
| `notary.server.replicas` | The replica count | `1` |
| `notary.server.resources` | The [resources] to allocate for the container | undefined |
| `notary.server.priorityClassName` | The priority class to run the pod as | |
| `notary.server.automountServiceAccountToken` | Mount serviceAccountToken? | `false` |
| `notary.signer.image.repository` | Repository for notary signer image | `goharbor/notary-signer-photon` |
| `notary.signer.image.tag` | Tag for notary signer image | `dev` |
| `notary.signer.replicas` | The replica count | `1` |
| `notary.signer.resources` | The [resources] to allocate for the container | undefined |
| `notary.signer.priorityClassName` | The priority class to run the pod as | |
| `notary.signer.automountServiceAccountToken` | Mount serviceAccountToken? | `false` |
| `notary.nodeSelector` | Node labels for pod assignment | `{}` |
| `notary.tolerations` | Tolerations for pod assignment | `[]` |
| `notary.affinity` | Node/Pod affinities | `{}` |
| `notary.podAnnotations` | Annotations to add to the notary pod | `{}` |
| `notary.secretName` | Fill in the name of a Kubernetes secret if you want to use your own TLS certificate authority, certificate and private key for notary communications. The secret must contain keys named `tls.ca`, `tls.crt` and `tls.key` that contain the CA, certificate and private key. They will be generated if not set | |
| **Database** | | |
| `database.type` | If an external database is used, set it to `external` | `internal` |
| `database.internal.image.repository` | Repository for database image | `goharbor/harbor-db` |
| `database.internal.image.tag` | Tag for database image | `dev` |
| `database.internal.password` | The password for the database | `changeit` |
| `database.internal.shmSizeLimit` | The limit for the size of shared memory for the internal PostgreSQL; conventionally around 50% of the memory limit of the container | `512Mi` |
| `database.internal.resources` | The [resources] to allocate for the container | undefined |
| `database.internal.automountServiceAccountToken` | Mount serviceAccountToken? | `false` |
| `database.internal.initContainer.migrator.resources` | The [resources] to allocate for the database migrator initContainer | undefined |
| `database.internal.initContainer.permissions.resources` | The [resources] to allocate for the database permissions initContainer | undefined |
| `database.internal.nodeSelector` | Node labels for pod assignment | `{}` |
| `database.internal.tolerations` | Tolerations for pod assignment | `[]` |
| `database.internal.affinity` | Node/Pod affinities | `{}` |
| `database.internal.priorityClassName` | The priority class to run the pod as | |
| `database.external.host` | The hostname of the external database | `192.168.0.1` |
| `database.external.port` | The port of the external database | `5432` |
| `database.external.username` | The username of the external database | `user` |
| `database.external.password` | The password of the external database | `password` |
| `database.external.coreDatabase` | The database used by the core service | `registry` |
| `database.external.notaryServerDatabase` | The database used by the Notary server | `notary_server` |
| `database.external.notarySignerDatabase` | The database used by the Notary signer | `notary_signer` |
| `database.external.sslmode` | Connection method of the external database (require, verify-full, verify-ca, disable) | `disable` |
| `database.maxIdleConns` | The maximum number of connections in the idle connection pool. If it is <= 0, no idle connections are retained | `50` |
| `database.maxOpenConns` | The maximum number of open connections to the database. If it is <= 0, then there is no limit on the number of open connections.
| `100` | | `database.podAnnotations` | Annotations to add to the database pod | `{}` | | **Redis** | | | | `redis.type` | If external redis is used, set it to `external` | `internal` | | `redis.internal.image.repository` | Repository for redis image | `goharbor/redis-photon` | | `redis.internal.image.tag` | Tag for redis image | `dev` | | `redis.internal.resources` | The [resources] to allocate for container | undefined | | `redis.internal.automountServiceAccountToken` | Mount serviceAccountToken? | `false` | | `redis.internal.nodeSelector` | Node labels for pod assignment | `{}` | | `redis.internal.tolerations` | Tolerations for pod assignment | `[]` | | `redis.internal.affinity` | Node/Pod affinities | `{}` | | `redis.internal.priorityClassName` | The priority class to run the pod as | | | `redis.external.addr` | The addr of external Redis: :. When using sentinel, it should be :,:,: | `192.168.0.2:6379` | | `redis.external.sentinelMasterSet` | The name of the set of Redis instances to monitor | | | `redis.external.coreDatabaseIndex` | The database index for core | `0` | | `redis.external.jobserviceDatabaseIndex` | The database index for jobservice | `1` | | `redis.external.registryDatabaseIndex` | The database index for registry | `2` | | `redis.external.chartmuseumDatabaseIndex` | The database index for chartmuseum | `3` | | `redis.external.trivyAdapterIndex` | The database index for trivy adapter | `5` | | `redis.external.password` | The password of external Redis | | | `redis.podAnnotations` | Annotations to add to the redis pod | `{}` | | **Exporter** | | | | `exporter.replicas` | The replica count | `1` | | `exporter.revisionHistoryLimit` | The revision history limit | `10` | | `exporter.podAnnotations` | Annotations to add to the exporter pod | `{}` | | `exporter.image.repository` | Repository for redis image | `goharbor/harbor-exporter` | | `exporter.image.tag` | Tag for exporter image | `dev` | | `exporter.nodeSelector` | Node labels for pod assignment | 
`{}` | | `exporter.tolerations` | Tolerations for pod assignment | `[]` | | `exporter.affinity` | Node/Pod affinities | `{}` | | `exporter.automountServiceAccountToken` | Mount serviceAccountToken? | `false` | | `exporter.cacheDuration` | the cache duration for information that exporter collected from Harbor | `30` | | `exporter.cacheCleanInterval` | cache clean interval for information that exporter collected from Harbor | `14400` | | `exporter.priorityClassName` | The priority class to run the pod as | | | **Metrics** | | | | `metrics.enabled` | if enable harbor metrics | `false` | | `metrics.core.path` | the url path for core metrics | `/metrics` | | `metrics.core.port` | the port for core metrics | `8001` | | `metrics.registry.path` | the url path for registry metrics | `/metrics` | | `metrics.registry.port` | the port for registry metrics | `8001` | | `metrics.exporter.path` | the url path for exporter metrics | `/metrics` | | `metrics.exporter.port` | the port for exporter metrics | `8001` | | `metrics.serviceMonitor.enabled` | create prometheus serviceMonitor. Requires prometheus CRD's | `false` | | `metrics.serviceMonitor.additionalLabels` | additional labels to upsert to the manifest | `""` | | `metrics.serviceMonitor.interval` | scrape period for harbor metrics | `""` | | `metrics.serviceMonitor.metricRelabelings` | metrics relabel to add/mod/del before ingestion | `[]` | | `metrics.serviceMonitor.relabelings` | relabels to add/mod/del to sample before scrape | `[]` | | **Trace** | | | | `trace.enabled` | Enable tracing or not | `false` | | `trace.provider` | The tracing provider: `jaeger` or `otel`. 
`jaeger` should be 1.26+ | `jaeger` | | `trace.sample_rate` | Set `sample_rate` to 1 if you want sampling 100% of trace data; set 0.5 if you want sampling 50% of trace data, and so forth | `1` | | `trace.namespace` | Namespace used to differentiate different harbor services | | | `trace.attributes` | `attributes` is a key value dict contains user defined attributes used to initialize trace provider | | | `trace.jaeger.endpoint` | The endpoint of jaeger | `http://hostname:14268/api/traces` | | `trace.jaeger.username` | The username of jaeger | | | `trace.jaeger.password` | The password of jaeger | | | `trace.jaeger.agent_host` | The agent host of jaeger | | | `trace.jaeger.agent_port` | The agent port of jaeger | `6831` | | `trace.otel.endpoint` | The endpoint of otel | `hostname:4318` | | `trace.otel.url_path` | The URL path of otel | `/v1/traces` | | `trace.otel.compression` | Whether enable compression or not for otel | `false` | | `trace.otel.insecure` | Whether establish insecure connection or not for otel | `true` | | `trace.otel.timeout` | The timeout of otel | `10s` | [resources]: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ [trivy]: https://github.com/aquasecurity/trivy [trivy-db]: https://github.com/aquasecurity/trivy-db [trivy-rate-limiting]: https://github.com/aquasecurity/trivy#github-rate-limiting
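As an example, the `database.*` parameters above can be combined to point the chart at an external PostgreSQL instead of the bundled one. The host, credentials and sslmode below are placeholders for illustration, not values to keep:

```yaml
# values.yaml sketch: use an external PostgreSQL.
# Host, credentials and sslmode are placeholders.
database:
  type: external
  external:
    host: "postgres.example.com"
    port: "5432"
    username: "harbor"
    password: "not-a-real-password"
    coreDatabase: "registry"
    notaryServerDatabase: "notary_server"
    notarySignerDatabase: "notary_signer"
    sslmode: "verify-full"
  maxIdleConns: 50
  maxOpenConns: 100
```

Apply it at install time with `helm install my-release harbor/harbor -f values.yaml`.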
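Similarly, the `redis.external.*` parameters can describe a Redis deployment fronted by Sentinel; per the `redis.external.addr` description, `addr` then carries a comma-separated list and `sentinelMasterSet` names the monitored master. Addresses, the master set name and the password are placeholders:

```yaml
# values.yaml sketch: external Redis behind Sentinel.
# Addresses, master set name and password are placeholders.
redis:
  type: external
  external:
    # Comma-separated <host>:<port> list of the Sentinel nodes
    addr: "192.168.0.2:26379,192.168.0.3:26379,192.168.0.4:26379"
    sentinelMasterSet: "mymaster"
    coreDatabaseIndex: "0"
    jobserviceDatabaseIndex: "1"
    registryDatabaseIndex: "2"
    chartmuseumDatabaseIndex: "3"
    trivyAdapterIndex: "5"
    password: "not-a-real-password"
```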
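The `metrics.*` parameters can be enabled together so that a Prometheus Operator picks up the Harbor endpoints; this sketch assumes the Prometheus CRDs are installed, and the `release: prometheus` label is a placeholder that must match your Prometheus instance's ServiceMonitor selector:

```yaml
# values.yaml sketch: expose Harbor metrics and create a
# ServiceMonitor (requires the Prometheus CRDs in the cluster).
metrics:
  enabled: true
  serviceMonitor:
    enabled: true
    additionalLabels:
      release: prometheus  # placeholder; match your Prometheus selector
    interval: 30s
```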
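Likewise, a tracing setup using the `trace.*` parameters might look as follows; the Jaeger collector endpoint is a placeholder for illustration:

```yaml
# values.yaml sketch: send 50% of traces to a Jaeger collector.
# The endpoint is a placeholder.
trace:
  enabled: true
  provider: jaeger
  sample_rate: 0.5
  jaeger:
    endpoint: "http://jaeger-collector.tracing:14268/api/traces"
```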