Using the GitLab Shell chart

  • Tier: Free, Premium, Ultimate
  • Offering: GitLab Self-Managed

The gitlab-shell sub-chart provides an SSH server configured for Git SSH access to GitLab.

Requirements

This chart depends on access to the Workhorse services, either as part of the complete GitLab chart or provided as an external service reachable from the Kubernetes cluster this chart is deployed onto.

Design Choices

To easily support SSH replicas, and to avoid using shared storage for the SSH authorized keys, we use the SSH AuthorizedKeysCommand to authenticate against the GitLab authorized keys endpoint. As a result, the AuthorizedKeys file is neither persisted nor updated within these pods.
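Conceptually, the lookup works the way AuthorizedKeysCommand always does in sshd: instead of reading a static file, the daemon invokes a command with the connecting user and key, and that command queries GitLab. A rough illustration of the mechanism (the binary path and arguments here are illustrative, not necessarily what the chart renders):

```plaintext
# sshd_config sketch (illustrative, not the chart's rendered configuration):
# sshd runs a helper that checks the presented key against the GitLab
# internal API, rather than consulting ~/.ssh/authorized_keys.
AuthorizedKeysCommand /usr/local/bin/gitlab-shell-authorized-keys-check git %u %k
AuthorizedKeysCommandUser git
```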

Configuration

The gitlab-shell chart is configured in two parts: external services, and chart settings. The port exposed through the Ingress is configured with global.shell.port, which defaults to 22 and also controls the Service's external port.
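For example, to expose SSH on a non-default port through both the Ingress and the Service:

```yaml
global:
  shell:
    port: 2223
```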

Installation command line options

| Parameter | Default | Description |
|-----------|---------|-------------|
| affinity | {} | Affinity rules for pod assignment |
| annotations | | Pod annotations |
| podLabels | | Supplemental Pod labels. Will not be used for selectors. |
| common.labels | | Supplemental labels that are applied to all objects created by this chart. |
| config.clientAliveInterval | 0 | Interval between keepalive pings on otherwise idle connections; the default value of 0 disables this ping |
| config.loginGraceTime | 60 | Time after which the server disconnects if the user has not successfully logged in |
| config.maxStartups.full | 100 | The refusal probability increases linearly, and all unauthenticated connection attempts are refused once the number of unauthenticated connections reaches this value |
| config.maxStartups.rate | 30 | sshd refuses connections with the specified probability when there are too many unauthenticated connections (optional) |
| config.maxStartups.start | 10 | sshd starts refusing connection attempts with some probability once there are more than the specified number of unauthenticated connections (optional) |
| config.proxyProtocol | false | Enable PROXY protocol support for the gitlab-sshd daemon |
| config.proxyPolicy | "use" | Specify the policy for handling the PROXY protocol. Value must be one of use, require, ignore, reject |
| config.proxyHeaderTimeout | "500ms" | The maximum duration gitlab-sshd will wait before giving up on reading the PROXY protocol header. Must include units: ms, s, or m. |
| config.ciphers | [aes128-gcm@openssh.com, chacha20-poly1305@openssh.com, aes256-gcm@openssh.com, aes128-ctr, aes192-ctr, aes256-ctr] | Specify the allowed ciphers |
| config.kexAlgorithms | [curve25519-sha256, curve25519-sha256@libssh.org, ecdh-sha2-nistp256, ecdh-sha2-nistp384, ecdh-sha2-nistp521, diffie-hellman-group14-sha256, diffie-hellman-group14-sha1] | Specifies the available KEX (key exchange) algorithms |
| config.macs | [hmac-sha2-256-etm@openssh.com, hmac-sha2-512-etm@openssh.com, hmac-sha2-256, hmac-sha2-512, hmac-sha1] | Specifies the available MAC (message authentication code) algorithms |
| config.publicKeyAlgorithms | [] | Custom list of public key algorithms. If empty, the default algorithms are used. |
| config.gssapi.enabled | false | Enable GSS-API support for the gitlab-sshd daemon |
| config.gssapi.keytab.secret | | The name of a Kubernetes secret holding the keytab for the gssapi-with-mic authentication method |
| config.gssapi.keytab.key | keytab | Key holding the keytab in the Kubernetes secret |
| config.gssapi.krb5Config | | Content of the /etc/krb5.conf file in the GitLab Shell container |
| config.gssapi.servicePrincipalName | | The Kerberos service name to be used by the gitlab-sshd daemon |
| config.lfs.pureSSHProtocol | false | Enable LFS pure SSH protocol support |
| config.pat.enabled | true | Enable personal access tokens (PATs) over SSH |
| config.pat.allowedScopes | [] | An array of scopes allowed for PATs generated over SSH |
| opensshd.supplemental_config | | Supplemental configuration, appended to sshd_config. Must strictly follow the sshd_config man page |
| deployment.livenessProbe.initialDelaySeconds | 10 | Delay before the liveness probe is initiated |
| deployment.livenessProbe.periodSeconds | 10 | How often to perform the liveness probe |
| deployment.livenessProbe.timeoutSeconds | 3 | When the liveness probe times out |
| deployment.livenessProbe.successThreshold | 1 | Minimum consecutive successes for the liveness probe to be considered successful after having failed |
| deployment.livenessProbe.failureThreshold | 3 | Minimum consecutive failures for the liveness probe to be considered failed after having succeeded |
| deployment.readinessProbe.initialDelaySeconds | 10 | Delay before the readiness probe is initiated |
| deployment.readinessProbe.periodSeconds | 5 | How often to perform the readiness probe |
| deployment.readinessProbe.timeoutSeconds | 3 | When the readiness probe times out |
| deployment.readinessProbe.successThreshold | 1 | Minimum consecutive successes for the readiness probe to be considered successful after having failed |
| deployment.readinessProbe.failureThreshold | 2 | Minimum consecutive failures for the readiness probe to be considered failed after having succeeded |
| deployment.strategy | {} | Allows one to configure the update strategy used by the deployment |
| deployment.terminationGracePeriodSeconds | 30 | Seconds Kubernetes waits for a pod to exit gracefully before forcibly terminating it |
| enabled | true | Shell enable flag |
| extraContainers | | Multiline literal style string containing a list of containers to include |
| extraInitContainers | | List of extra init containers to include |
| extraVolumeMounts | | List of extra volume mounts to add |
| extraVolumes | | List of extra volumes to create |
| extraEnv | | List of extra environment variables to expose |
| extraEnvFrom | | List of extra environment variables from other data sources to expose |
| hpa.behavior | {scaleDown: {stabilizationWindowSeconds: 300 }} | Behavior contains the specifications for up- and downscaling behavior (requires autoscaling/v2beta2 or higher) |
| hpa.customMetrics | [] | Custom metrics contains the specifications to use to calculate the desired replica count (overrides the default use of Average CPU Utilization configured in targetAverageUtilization) |
| hpa.cpu.targetType | AverageValue | Set the autoscaling CPU target type; must be either Utilization or AverageValue |
| hpa.cpu.targetAverageValue | 100m | Set the autoscaling CPU target value |
| hpa.cpu.targetAverageUtilization | | Set the autoscaling CPU target utilization |
| hpa.memory.targetType | | Set the autoscaling memory target type; must be either Utilization or AverageValue |
| hpa.memory.targetAverageValue | | Set the autoscaling memory target value |
| hpa.memory.targetAverageUtilization | | Set the autoscaling memory target utilization |
| hpa.targetAverageValue | | DEPRECATED Set the autoscaling CPU target value |
| image.pullPolicy | IfNotPresent | Shell image pull policy |
| image.pullSecrets | | Secrets for the image repository |
| image.repository | registry.gitlab.com/gitlab-org/build/cng/gitlab-shell | Shell image repository |
| image.tag | master | Shell image tag |
| init.image.repository | | initContainer image |
| init.image.tag | | initContainer image tag |
| init.containerSecurityContext | | initContainer specific securityContext |
| init.containerSecurityContext.allowPrivilegeEscalation | false | initContainer specific: Controls whether a process can gain more privileges than its parent process |
| init.containerSecurityContext.runAsNonRoot | true | initContainer specific: Controls whether the container runs with a non-root user |
| init.containerSecurityContext.capabilities.drop | [ "ALL" ] | initContainer specific: Removes Linux capabilities for the container |
| keda.enabled | false | Use KEDA ScaledObjects instead of HorizontalPodAutoscalers |
| keda.pollingInterval | 30 | The interval to check each trigger on |
| keda.cooldownPeriod | 300 | The period to wait after the last trigger reported active before scaling the resource back to 0 |
| keda.minReplicaCount | | Minimum number of replicas KEDA will scale the resource down to; defaults to minReplicas |
| keda.maxReplicaCount | | Maximum number of replicas KEDA will scale the resource up to; defaults to maxReplicas |
| keda.fallback | | KEDA fallback configuration; see the documentation |
| keda.hpaName | | The name of the HPA resource KEDA will create; defaults to keda-hpa-{scaled-object-name} |
| keda.restoreToOriginalReplicaCount | | Specifies whether the target resource should be scaled back to the original replica count after the ScaledObject is deleted |
| keda.behavior | | The specifications for up- and downscaling behavior; defaults to hpa.behavior |
| keda.triggers | | List of triggers to activate scaling of the target resource; defaults to triggers computed from hpa.cpu and hpa.memory |
| logging.format | json | Set to text for unstructured logs |
| logging.sshdLogLevel | ERROR | Log level for the underlying SSH daemon |
| priorityClassName | | Priority class assigned to pods |
| replicaCount | 1 | Shell replicas |
| serviceLabels | {} | Supplemental service labels |
| service.allocateLoadBalancerNodePorts | Not set (uses the Kubernetes default) | Allows disabling NodePort allocation on a LoadBalancer Service; see the documentation |
| service.externalTrafficPolicy | Cluster | Shell service external traffic policy (Cluster or Local) |
| service.internalPort | 2222 | Shell internal port |
| service.nodePort | | Sets the shell nodePort, if set |
| service.name | gitlab-shell | Shell service name |
| service.type | ClusterIP | Shell service type |
| service.loadBalancerIP | | IP address to assign to the LoadBalancer (if supported) |
| service.loadBalancerSourceRanges | | List of IP CIDRs allowed access to the LoadBalancer (if supported) |
| serviceAccount.annotations | {} | ServiceAccount annotations |
| serviceAccount.automountServiceAccountToken | false | Indicates whether or not the default ServiceAccount access token should be mounted in pods |
| serviceAccount.create | false | Indicates whether or not a ServiceAccount should be created |
| serviceAccount.enabled | false | Indicates whether or not to use a ServiceAccount |
| serviceAccount.name | | Name of the ServiceAccount. If not set, the full chart name is used |
| securityContext.fsGroup | 1000 | Group ID under which the pod should be started |
| securityContext.runAsUser | 1000 | User ID under which the pod should be started |
| securityContext.fsGroupChangePolicy | | Policy for changing ownership and permission of the volume (requires Kubernetes 1.23) |
| securityContext.seccompProfile.type | RuntimeDefault | Seccomp profile to use |
| containerSecurityContext | | Override the container securityContext under which the container is started |
| containerSecurityContext.runAsUser | 1000 | Allows overwriting the specific security context user under which the container is started |
| containerSecurityContext.allowPrivilegeEscalation | false | Controls whether a process of the container can gain more privileges than its parent process |
| containerSecurityContext.runAsNonRoot | true | Controls whether the container runs with a non-root user |
| containerSecurityContext.capabilities.drop | [ "ALL" ] | Removes Linux capabilities for the container |
| sshDaemon | openssh | Selects which SSH daemon to run; possible values: openssh, gitlab-sshd |
| tolerations | [] | Toleration labels for pod assignment |
| traefik.entrypoint | gitlab-shell | When using Traefik, which Traefik entrypoint to use for GitLab Shell. Defaults to gitlab-shell |
| traefik.tcpMiddlewares | [] | When using Traefik, which TCP middlewares to add to the IngressRouteTCP resource. No middlewares by default |
| workhorse.serviceName | webservice | Workhorse service name (by default, Workhorse is a part of the webservice Pods / Service) |
| metrics.enabled | false | Whether a metrics endpoint should be made available for scraping (requires sshDaemon=gitlab-sshd) |
| metrics.port | 9122 | Metrics endpoint port |
| metrics.path | /metrics | Metrics endpoint path |
| metrics.serviceMonitor.enabled | false | Whether a ServiceMonitor should be created to enable Prometheus Operator to manage the metrics scraping; note that enabling this removes the prometheus.io scrape annotations |
| metrics.serviceMonitor.additionalLabels | {} | Additional labels to add to the ServiceMonitor |
| metrics.serviceMonitor.endpointConfig | {} | Additional endpoint configuration for the ServiceMonitor |
| metrics.annotations | | DEPRECATED Set explicit metrics annotations. Replaced by template content. |
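Because the metrics endpoint requires the bundled gitlab-sshd daemon, both settings have to be changed together. A minimal sketch of enabling it (ports and path shown are the chart defaults):

```yaml
sshDaemon: gitlab-sshd
metrics:
  enabled: true
  port: 9122
  path: /metrics
```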

Chart configuration examples

extraEnv

extraEnv allows you to expose additional environment variables in all containers in the pods.

Below is an example use of extraEnv:

extraEnv:
  SOME_KEY: some_value
  SOME_OTHER_KEY: some_other_value

When the container is started, you can confirm that the environment variables are exposed:

env | grep SOME
SOME_KEY=some_value
SOME_OTHER_KEY=some_other_value

extraEnvFrom

extraEnvFrom allows you to expose additional environment variables from other data sources in all containers in the pods.

Below is an example use of extraEnvFrom:

extraEnvFrom:
  MY_NODE_NAME:
    fieldRef:
      fieldPath: spec.nodeName
  MY_CPU_REQUEST:
    resourceFieldRef:
      containerName: test-container
      resource: requests.cpu
  SECRET_THING:
    secretKeyRef:
      name: special-secret
      key: special_token
      # optional: boolean
  CONFIG_STRING:
    configMapKeyRef:
      name: useful-config
      key: some-string
      # optional: boolean

image.pullSecrets

pullSecrets allows you to authenticate to a private registry to pull images for a pod.

Additional details about private registries and their authentication methods can be found in the Kubernetes documentation.

Below is an example use of pullSecrets:

image:
  repository: my.shell.repository
  tag: latest
  pullPolicy: Always
  pullSecrets:
  - name: my-secret-name
  - name: my-secondary-secret-name

serviceAccount

This section controls if a ServiceAccount should be created and if the default access token should be mounted in pods.

| Name | Type | Default | Description |
|------|------|---------|-------------|
| annotations | Map | {} | ServiceAccount annotations. |
| automountServiceAccountToken | Boolean | false | Controls if the default ServiceAccount access token should be mounted in pods. You should not enable this unless it is required by certain sidecars to work properly (for example, Istio). |
| create | Boolean | false | Indicates whether or not a ServiceAccount should be created. |
| enabled | Boolean | false | Indicates whether or not to use a ServiceAccount. |
| name | String | | Name of the ServiceAccount. If not set, the full chart name is used. |

livenessProbe/readinessProbe

deployment.livenessProbe and deployment.readinessProbe provide a mechanism to help control the termination of Pods under some scenarios.

Larger repositories benefit from tuning the liveness and readiness probe times to match their typical long-running connections. Set the readiness probe duration shorter than the liveness probe duration to minimize potential interruptions during clone and push operations. Increase terminationGracePeriodSeconds to give these operations more time before the pod is terminated. Consider the example below as a starting point for tuning GitLab Shell pods for increased stability and efficiency with larger repository workloads.

deployment:
  livenessProbe:
    initialDelaySeconds: 10
    periodSeconds: 20
    timeoutSeconds: 3
    successThreshold: 1
    failureThreshold: 10
  readinessProbe:
    initialDelaySeconds: 10
    periodSeconds: 5
    timeoutSeconds: 2
    successThreshold: 1
    failureThreshold: 3
  terminationGracePeriodSeconds: 300

Reference the official Kubernetes Documentation for additional details regarding this configuration.

tolerations

tolerations allow you to schedule pods on tainted worker nodes.

Below is an example use of tolerations:

tolerations:
- key: "node_label"
  operator: "Equal"
  value: "true"
  effect: "NoSchedule"
- key: "node_label"
  operator: "Equal"
  value: "true"
  effect: "NoExecute"

affinity

For more information, see affinity.

annotations

annotations allows you to add annotations to the GitLab Shell pods.

Below is an example use of annotations:

annotations:
  kubernetes.io/example-annotation: annotation-value

External Services

This chart should be attached to the Workhorse service.

Workhorse

workhorse:
  host: workhorse.example.com
  serviceName: webservice
  port: 8181
| Name | Type | Default | Description |
|------|------|---------|-------------|
| host | String | | The hostname of the Workhorse server. This can be omitted in favor of serviceName. |
| port | Integer | 8181 | The port on which to connect to the Workhorse server. |
| serviceName | String | webservice | The name of the service operating the Workhorse server. By default, Workhorse is a part of the webservice Pods / Service. If this is present and host is not, the chart templates the hostname of the service (and current .Release.Name) in place of the host value. This is convenient when using Workhorse as a part of the overall GitLab chart. |

Chart settings

The following values are used to configure the GitLab Shell Pods.

hostKeys.secret

The name of the Kubernetes secret from which to obtain the SSH host keys. The key names in the secret must start with ssh_host_ for GitLab Shell to use them.
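For example, to point the chart at a pre-created secret (the secret name here is illustrative):

```yaml
hostKeys:
  secret: gitlab-shell-host-keys
```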

authToken

GitLab Shell uses an Auth Token in its communication with Workhorse. Share the token with GitLab Shell and Workhorse using a shared Secret.

authToken:
 secret: gitlab-shell-secret
 key: secret
| Name | Type | Default | Description |
|------|------|---------|-------------|
| authToken.key | String | | The name of the key in the above secret that contains the auth token. |
| authToken.secret | String | | The name of the Kubernetes Secret to pull from. |

LoadBalancer Service

If the service.type is set to LoadBalancer, you can optionally specify service.loadBalancerIP to create the LoadBalancer with a user-specified IP (if your cloud provider supports it).

You can also optionally specify a list of service.loadBalancerSourceRanges to restrict the CIDR ranges that can access the LoadBalancer (if your cloud provider supports it).

Additional information about the LoadBalancer service type can be found in the Kubernetes documentation.

service:
  type: LoadBalancer
  loadBalancerIP: 1.2.3.4
  loadBalancerSourceRanges:
  - 5.6.7.8/32
  - 10.0.0.0/8

OpenSSH supplemental configuration

When making use of OpenSSH's sshd (via sshDaemon: openssh), it is possible to provide supplemental configuration in two ways: via .opensshd.supplemental_config, and by mounting configuration snippets matching /etc/ssh/sshd_config.d/*.conf.

Any configuration supplied must meet the functional requirements of sshd_config. Ensure you read the manual page.

opensshd.supplemental_config

The content of .opensshd.supplemental_config is placed directly at the end of the sshd_config file within the container. This value should be a multi-line string.

Example: enabling older clients that rely on the deprecated ssh-rsa key algorithms. Note that enabling deprecated algorithms, such as ssh-rsa, creates significant security vulnerabilities. The likelihood of exploitation is significantly amplified on publicly exposed GitLab instances with these changes.

opensshd:
    supplemental_config: |-
      HostKeyAlgorithms +ssh-rsa,ssh-rsa-cert-v01@openssh.com
      PubkeyAcceptedAlgorithms +ssh-rsa,ssh-rsa-cert-v01@openssh.com
      CASignatureAlgorithms +ssh-rsa

sshd_config.d

You may provide full configuration snippets to sshd by mounting content into /etc/ssh/sshd_config.d, with file names matching *.conf. Note that these are included after the default configuration, which is required for the application to function in the container and within the chart. These values do not override the contents of sshd_config; they extend it.

Example: mounting a single item of a ConfigMap into the container via extraVolumes and extraVolumeMounts:

extraVolumes: |
  - name: gitlab-sshdconfig-extra
    configMap:
      name: gitlab-sshdconfig-extra

extraVolumeMounts: |
  - name: gitlab-sshdconfig-extra
    mountPath: /etc/ssh/sshd_config.d/extra.conf
    subPath: extra.conf

Configuring the networkpolicy

This section controls the NetworkPolicy. This configuration is optional and is used to limit Egress and Ingress of the Pods to specific endpoints.

| Name | Type | Default | Description |
|------|------|---------|-------------|
| enabled | Boolean | false | This setting enables the NetworkPolicy |
| ingress.enabled | Boolean | false | When set to true, the Ingress network policy will be activated. This will block all Ingress connections unless rules are specified. |
| ingress.rules | Array | [] | Rules for the Ingress policy; for details, see https://kubernetes.io/docs/concepts/services-networking/network-policies/#the-networkpolicy-resource and the example below |
| egress.enabled | Boolean | false | When set to true, the Egress network policy will be activated. This will block all Egress connections unless rules are specified. |
| egress.rules | Array | [] | Rules for the Egress policy; for details, see https://kubernetes.io/docs/concepts/services-networking/network-policies/#the-networkpolicy-resource and the example below |

Example Network Policy

The gitlab-shell service requires Ingress connections on port 22 and Egress connections to various endpoints, including the default Workhorse port 8181. This example adds the following network policy:

  • Allows Ingress requests:

    • From the nginx-ingress pod to port 2222

    • From the prometheus pod to port 9122

      Access from prometheus to port 9122 is only necessary when the SSH daemon is set to gitlab-sshd

  • Allows Egress requests:

    • To the webservice pod to port 8181
    • To the gitaly pod to port 8075

Note that the example provided is only an example and may not be complete.

The example is based on the assumption that kube-dns was deployed to the namespace kube-system, prometheus was deployed to the namespace monitoring and nginx-ingress was deployed to the namespace nginx-ingress.

networkpolicy:
  enabled: true
  ingress:
    enabled: true
    rules:
      - from:
          - namespaceSelector:
              matchLabels:
                kubernetes.io/metadata.name: nginx-ingress
            podSelector:
              matchLabels:
                app: nginx-ingress
                component: controller
        ports:
          - port: 2222
      - from:
          - namespaceSelector:
              matchLabels:
                kubernetes.io/metadata.name: monitoring
            podSelector:
              matchLabels:
                app: prometheus
                component: server
                release: gitlab
        ports:
          - port: 9122
  egress:
    enabled: true
    rules:
      - to:
          - podSelector:
              matchLabels:
                app: gitaly
        ports:
          - port: 8075
      - to:
          - podSelector:
              matchLabels:
                app: webservice
        ports:
          - port: 8181
      - to:
          - namespaceSelector:
              matchLabels:
                kubernetes.io/metadata.name: kube-system
            podSelector:
              matchLabels:
                k8s-app: kube-dns
        ports:
          - port: 53
            protocol: UDP

Configuring KEDA

The keda section enables the installation of KEDA ScaledObjects instead of regular HorizontalPodAutoscalers. This configuration is optional and can be used when there is a need for autoscaling based on custom or external metrics.

Most settings default to the values set in the hpa section where applicable.

If the following are true, CPU and memory triggers are added automatically based on the CPU and memory thresholds set in the hpa section:

  • triggers is not set.
  • The corresponding request.cpu.request or request.memory.request setting is also set to a non-zero value.

If no triggers are set, the ScaledObject is not created.

Refer to the KEDA documentation for more details about those settings.

| Name | Type | Default | Description |
|------|------|---------|-------------|
| enabled | Boolean | false | Use KEDA ScaledObjects instead of HorizontalPodAutoscalers |
| pollingInterval | Integer | 30 | The interval to check each trigger on |
| cooldownPeriod | Integer | 300 | The period to wait after the last trigger reported active before scaling the resource back to 0 |
| minReplicaCount | Integer | | Minimum number of replicas KEDA will scale the resource down to; defaults to minReplicas |
| maxReplicaCount | Integer | | Maximum number of replicas KEDA will scale the resource up to; defaults to maxReplicas |
| fallback | Map | | KEDA fallback configuration; see the documentation |
| hpaName | String | | The name of the HPA resource KEDA will create; defaults to keda-hpa-{scaled-object-name} |
| restoreToOriginalReplicaCount | Boolean | | Specifies whether the target resource should be scaled back to the original replica count after the ScaledObject is deleted |
| behavior | Map | | The specifications for up- and downscaling behavior; defaults to hpa.behavior |
| triggers | Array | | List of triggers to activate scaling of the target resource; defaults to triggers computed from hpa.cpu and hpa.memory |

See examples/keda/gitlab-shell.yml for a usage example of keda.
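As a rough sketch, enabling KEDA with the chart defaults from the table above might look like the following (trigger computation then falls back to the hpa.cpu and hpa.memory settings):

```yaml
keda:
  enabled: true
  pollingInterval: 30
  cooldownPeriod: 300
```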