Connecting GitLab with a Kubernetes cluster

Introduced in GitLab 10.1.

Connect your project to Google Kubernetes Engine (GKE) or an existing Kubernetes cluster in a few steps.

Overview

With one or more Kubernetes clusters associated with your project, you can use Review Apps, deploy your applications, run your pipelines, take advantage of Auto DevOps, and much more, all from within GitLab.

There are two options when adding a new cluster to your project: either associate your account with Google Kubernetes Engine (GKE) so that you can create new clusters from within GitLab, or provide the credentials of an existing Kubernetes cluster.

NOTE: Note: From GitLab 11.6 you can also associate a Kubernetes cluster with your groups. Learn more about group Kubernetes clusters.

Adding and creating a new GKE cluster via GitLab

TIP: Tip: Every new Google Cloud Platform (GCP) account receives $300 in credit upon sign up, and in partnership with Google, GitLab is able to offer an additional $200 for new GCP accounts to get started with GitLab's Google Kubernetes Engine Integration. All you have to do is follow this link and apply for credit.

NOTE: Note: The Google authentication integration must be enabled in GitLab at the instance level. If that's not the case, ask your GitLab administrator to enable it. On GitLab.com, this is enabled.

Requirements

Before creating your first cluster on Google Kubernetes Engine with GitLab's integration, make sure the following requirements are met:

  • A billing account is set up on Google Cloud Platform and you have permissions to use it.
  • The Kubernetes Engine API and related services are enabled on your GCP project.

Creating the cluster

If all of the above requirements are met, you can proceed to create and add a new Kubernetes cluster to your project:

  1. Navigate to your project's Operations > Kubernetes page.

    NOTE: Note: You need Maintainer permissions and above to access the Kubernetes page.

  2. Click Add Kubernetes cluster.

  3. Click Create with Google Kubernetes Engine.

  4. Connect your Google account (if you haven't already done so) by clicking the Sign in with Google button.

  5. From there on, choose your cluster's settings:

    • Kubernetes cluster name - The name you wish to give the cluster.
    • Environment scope - The environment associated with this cluster.
    • Google Cloud Platform project - Choose the project you created in your GCP console that will host the Kubernetes cluster. Learn more about Google Cloud Platform projects.
    • Zone - Choose the zone in which the cluster will be created.
    • Number of nodes - Enter the number of nodes you wish the cluster to have.
    • Machine type - The machine type of the Virtual Machine instance that the cluster will be based on.
    • RBAC-enabled cluster - Leave this checked if using the default GKE creation options; see the RBAC section for more information.
  6. Finally, click the Create Kubernetes cluster button.

After a couple of minutes, your cluster will be ready to go. You can now proceed to install some pre-defined applications.

Adding an existing Kubernetes cluster

To add an existing Kubernetes cluster to your project:

  1. Navigate to your project's Operations > Kubernetes page.

    NOTE: Note: You need Maintainer permissions and above to access the Kubernetes page.

  2. Click Add Kubernetes cluster.

  3. Click Add an existing Kubernetes cluster and fill in the details:

    • Kubernetes cluster name (required) - The name you wish to give the cluster.

    • Environment scope (required) - The environment associated with this cluster.

    • API URL (required) - The URL that GitLab uses to access the Kubernetes API. Kubernetes exposes several APIs; we want the "base" URL that is common to all of them, e.g., https://kubernetes.example.com rather than https://kubernetes.example.com/api/v1.

    • CA certificate (required) - A valid Kubernetes certificate is needed to authenticate to the cluster. We will use the certificate created by default.

      • List the secrets with kubectl get secrets, and one should be named similar to default-token-xxxxx. Copy that secret name for use below.
      • Get the certificate by running this command:
      kubectl get secret <secret name> -o jsonpath="{['data']['ca\.crt']}" | base64 --decode
    • Token - GitLab authenticates against Kubernetes using service tokens, which are scoped to a particular namespace. The token used should belong to a service account with cluster-admin privileges. To create this service account:

      1. Create a file called gitlab-admin-service-account.yaml with contents:

        apiVersion: v1
        kind: ServiceAccount
        metadata:
          name: gitlab-admin
          namespace: kube-system
      2. Apply the service account to your cluster:

        kubectl apply -f gitlab-admin-service-account.yaml

        Output:

        serviceaccount "gitlab-admin" created
      3. Create a file called gitlab-admin-cluster-role-binding.yaml with contents:

        apiVersion: rbac.authorization.k8s.io/v1beta1
        kind: ClusterRoleBinding
        metadata:
          name: gitlab-admin
        roleRef:
          apiGroup: rbac.authorization.k8s.io
          kind: ClusterRole
          name: cluster-admin
        subjects:
        - kind: ServiceAccount
          name: gitlab-admin
          namespace: kube-system
      4. Apply the cluster role binding to your cluster:

        kubectl apply -f gitlab-admin-cluster-role-binding.yaml

        Output:

        clusterrolebinding "gitlab-admin" created
      5. Retrieve the token for the gitlab-admin service account:

        kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep gitlab-admin | awk '{print $1}')

      Copy the <authentication_token> value from the output:

      Name:         gitlab-admin-token-b5zv4
      Namespace:    kube-system
      Labels:       <none>
      Annotations:  kubernetes.io/service-account.name=gitlab-admin
                    kubernetes.io/service-account.uid=bcfe66ac-39be-11e8-97e8-026dce96b6e8
      
      Type:  kubernetes.io/service-account-token
      
      Data
      ====
      ca.crt:     1025 bytes
      namespace:  11 bytes
      token:      <authentication_token>

      NOTE: Note: For GKE clusters, you will need the container.clusterRoleBindings.create permission to create a cluster role binding. You can follow the Google Cloud documentation to grant access.

    • Project namespace (optional) - You don't have to fill it in; by leaving it blank, GitLab will create one for you. Also:

      • Each project should have a unique namespace.
      • The project namespace is not necessarily the namespace of the secret; this is the case if you're using a secret with broader permissions, like the secret from default.
      • You should not use default as the project namespace.
      • If you or someone created a secret specifically for the project, usually with limited permissions, the secret's namespace and project namespace may be the same.
  4. Finally, click the Create Kubernetes cluster button.

After a couple of minutes, your cluster will be ready to go. You can now proceed to install some pre-defined applications.

To determine the:

  • API URL, run kubectl cluster-info | grep 'Kubernetes master' | awk '/http/ {print $NF}'.
  • Token:
    1. List the secrets by running: kubectl get secrets. Note the name of the secret you need the token for.
    2. Get the token for the appropriate secret by running: kubectl get secret <SECRET_NAME> -o jsonpath="{['data']['token']}" | base64 --decode.
  • CA certificate, run kubectl get secret <secret name> -o jsonpath="{['data']['ca\.crt']}" | base64 --decode.
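Optionally, you can sanity-check these values before entering them in GitLab by querying the API directly. A minimal sketch, assuming you saved the decoded CA certificate to a local file named ca.crt (the API URL, secret name, and file name are placeholders):

API_URL=https://kubernetes.example.com
TOKEN=$(kubectl get secret <secret name> -o jsonpath="{['data']['token']}" | base64 --decode)
# A valid token and certificate should return the list of API versions
# rather than an authorization error.
curl --cacert ca.crt --header "Authorization: Bearer $TOKEN" "$API_URL/api"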

Security implications

CAUTION: Important: The security of the whole cluster is based on a model where developers are trusted, so only trusted users should be allowed to control your clusters.

The default cluster configuration grants access to a wide set of functionalities needed to successfully build and deploy a containerized application. Bear in mind that the same credentials are used for all the applications running on the cluster.

Base domain

Introduced in GitLab 11.8.

NOTE: Note: You do not need to specify a base domain on cluster settings when using GitLab Serverless. The domain in that case will be specified as part of the Knative installation. See Installing Applications.

Specifying a base domain will automatically set KUBE_INGRESS_BASE_DOMAIN as an environment variable. If you are using Auto DevOps, this domain will be used for the different stages, for example Auto Review Apps and Auto Deploy.

The domain should have a wildcard DNS record configured to point to the Ingress IP address. After Ingress has been installed (see Installing Applications), you can either:

  • Create an A record that points to the Ingress IP address with your domain provider.
  • Enter a wildcard DNS address using a service such as nip.io or xip.io. For example, 192.168.1.1.xip.io.
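You can verify that the wildcard record resolves as expected with dig; example.com and the IP address are placeholders:

# Every name under the wildcard should resolve to the Ingress IP address.
dig +short app1.example.com
dig +short anything-else.example.com
# Both queries should print the same Ingress IP, e.g. 192.168.1.1.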

Access controls

When creating a cluster in GitLab, you will be asked if you would like to create an Attribute-based access control (ABAC) cluster, or a Role-based access control (RBAC) one.

NOTE: Note: RBAC is recommended and the GitLab default.

Whether ABAC or RBAC is enabled, GitLab will create the necessary service accounts and privileges in order to install and run GitLab managed applications:

  • If GitLab is creating the cluster, a gitlab service account with cluster-admin privileges will be created in the default namespace, which will be used by GitLab to manage the newly created cluster.

  • A project service account with edit privileges will be created in the project namespace (also created by GitLab), which will be used in deployment jobs.

    NOTE: Note: Restricted service account for deployment was introduced in GitLab 11.5.

  • When you install Helm Tiller into your cluster, the tiller service account will be created with cluster-admin privileges in the gitlab-managed-apps namespace. This service account will be added to the installed Helm Tiller and will be used by Helm to install and run GitLab managed applications. Helm Tiller will also create additional service accounts and other resources for each installed application. Consult the documentation of the Helm charts for each application for details.

If you are adding an existing Kubernetes cluster, ensure the token of the account has administrator privileges for the cluster.
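One way to spot-check this, assuming the gitlab-admin service account created in the section above (adjust the namespace and account name to match your own setup):

# Prints "yes" if the service account may perform any action on any resource.
kubectl auth can-i '*' '*' --all-namespaces \
  --as=system:serviceaccount:kube-system:gitlab-admin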

The following sections summarize which resources will be created on ABAC/RBAC clusters.

Attribute-based access control (ABAC)

Name | Kind | Details | Created when
gitlab | ServiceAccount | default namespace | Creating a new GKE Cluster
gitlab-token | Secret | Token for gitlab ServiceAccount | Creating a new GKE Cluster
tiller | ServiceAccount | gitlab-managed-apps namespace | Installing Helm Tiller
tiller-admin | ClusterRoleBinding | cluster-admin roleRef | Installing Helm Tiller
Project namespace | ServiceAccount | Uses namespace of Project | Creating/Adding a new GKE Cluster
Project namespace | Secret | Token for project ServiceAccount | Creating/Adding a new GKE Cluster

Role-based access control (RBAC)

Name | Kind | Details | Created when
gitlab | ServiceAccount | default namespace | Creating a new GKE Cluster
gitlab-admin | ClusterRoleBinding | cluster-admin roleRef | Creating a new GKE Cluster
gitlab-token | Secret | Token for gitlab ServiceAccount | Creating a new GKE Cluster
tiller | ServiceAccount | gitlab-managed-apps namespace | Installing Helm Tiller
tiller-admin | ClusterRoleBinding | cluster-admin roleRef | Installing Helm Tiller
Project namespace | ServiceAccount | Uses namespace of Project | Creating/Adding a new GKE Cluster
Project namespace | Secret | Token for project ServiceAccount | Creating/Adding a new GKE Cluster
Project namespace | RoleBinding | edit roleRef | Creating/Adding a new GKE Cluster

Security of GitLab Runners

GitLab Runners have privileged mode enabled by default, which allows them to execute special commands and run Docker in Docker. This functionality is needed to run some of the Auto DevOps jobs. It also means the containers run in privileged mode, so you should be aware of some important details.

The privileged flag gives all capabilities to the running container, which in turn can do almost everything that the host can do. Be aware of the inherent security risk associated with performing docker run operations on arbitrary images as they effectively have root access.

If you don't want to use GitLab Runner in privileged mode, first make sure that you don't have it installed via the applications, and then use the Runner's Helm chart to install it manually.
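A minimal sketch of such a manual installation, using Helm 2 syntax to match the Tiller-based setup described on this page; the GitLab URL and registration token are placeholders, and you should check the chart's documentation for the authoritative value names:

helm repo add gitlab https://charts.gitlab.io
helm repo update
# runners.privileged=false runs job containers without the privileged flag.
helm install --namespace gitlab-managed-apps --name gitlab-runner \
  --set gitlabUrl=https://gitlab.example.com/ \
  --set runnerRegistrationToken=<registration token> \
  --set runners.privileged=false \
  gitlab/gitlab-runner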

Installing applications

NOTE: Note: Before starting the installation of applications, make sure that time is synchronized between your GitLab server and your Kubernetes cluster. Otherwise, installation could fail and you may get errors like Error: remote error: tls: bad certificate in the stdout of pods created by GitLab in your Kubernetes cluster.
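One way to spot-check clock drift, assuming you can run kubectl against the cluster and a throwaway busybox pod is acceptable:

# On the GitLab server:
date -u
# Inside the cluster, for comparison:
kubectl run time-check --rm -it --restart=Never --image=busybox -- date -u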

GitLab provides a one-click install for various applications which can be added directly to your configured cluster. Those applications are needed for Review Apps and deployments. You can install them after you create a cluster.

To see a list of available applications to install:

  1. Navigate to your project's Operations > Kubernetes.
  2. Select your cluster.

Install Helm Tiller first because it's used to install other applications.

NOTE: Note: As of GitLab 11.6, Helm Tiller will be upgraded to the latest version supported by GitLab before installing any of the applications.

Application | GitLab version | Description | Helm Chart
Helm Tiller | 10.2+ | Helm is a package manager for Kubernetes and is required to install all the other applications. It is installed in its own pod inside the cluster which can run the helm CLI in a safe environment. | n/a
Ingress | 10.2+ | Ingress can provide load balancing, SSL termination, and name-based virtual hosting. It acts as a web proxy for your applications and is useful if you want to use Auto DevOps or deploy your own web apps. | stable/nginx-ingress
Cert Manager | 11.6+ | Cert Manager is a native Kubernetes certificate management controller that helps with issuing certificates. Installing Cert Manager on your cluster will issue a certificate by Let's Encrypt and ensure that certificates are valid and up-to-date. | stable/cert-manager
Prometheus | 10.4+ | Prometheus is an open-source monitoring and alerting system useful to supervise your deployed applications. | stable/prometheus
GitLab Runner | 10.6+ | GitLab Runner is the open source project that is used to run your jobs and send the results back to GitLab. It is used in conjunction with GitLab CI/CD, the open-source continuous integration service included with GitLab that coordinates the jobs. When installing the GitLab Runner via the applications, it will run in privileged mode by default. Make sure you read the security implications before doing so. | runner/gitlab-runner
JupyterHub | 11.0+ | JupyterHub is a multi-user service for managing notebooks across a team. Jupyter Notebooks provide a web-based interactive programming environment used for data analysis, visualization, and machine learning. We use a custom Jupyter image that installs additional useful packages on top of the base Jupyter. Authentication will be enabled only for project members with Developer or higher access to the project. You will also see ready-to-use DevOps Runbooks built with Nurtch's Rubix library. More information on creating executable runbooks can be found in our Nurtch documentation. | jupyter/jupyterhub
Knative | 11.5+ | Knative provides a platform to create, deploy, and manage serverless workloads from a Kubernetes cluster. It is used in conjunction with, and includes, Istio to provide an external IP address for all programs hosted by Knative. You will be prompted to enter a wildcard domain where your applications will be exposed. Configure your DNS server to use the external IP address for that domain. Any application created and installed will be accessible as <program_name>.<kubernetes_namespace>.<domain_name>. This requires your Kubernetes cluster to have RBAC enabled. | knative/knative

With the exception of Knative, the applications will be installed in a dedicated namespace called gitlab-managed-apps.

CAUTION: Caution: If you have an existing Kubernetes cluster with Tiller already installed, you should be careful as GitLab cannot detect it. In this case, installing Tiller via the applications will result in the cluster having it twice, which can lead to confusion during deployments.
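To find out whether Tiller is already running before installing it from GitLab, you can search for its pods; the labels below are the ones used by a stock Tiller deployment, so adjust them if your installation differs:

# Tiller pods are typically labeled app=helm,name=tiller.
kubectl get pods --all-namespaces -l app=helm,name=tiller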

Upgrading applications

Introduced in GitLab 11.8.

Users can perform a one-click upgrade for the GitLab Runner application when an upgrade is available.

To upgrade the GitLab Runner application:

  1. Navigate to your project's Operations > Kubernetes.
  2. Select your cluster.
  3. Click the Upgrade button for the Runner application.

The Upgrade button will not be shown if there is no upgrade available.

NOTE: Note: Upgrades will reset values back to the values built into the runner chart, plus the values set by values.yaml.

Getting the external endpoint

NOTE: Note: With the following procedure, a load balancer must be installed in your cluster to obtain the endpoint. You can use either Ingress, or Knative's own load balancer (Istio) if using Knative.

In order to publish your web application, you first need to find the endpoint, which will be either an IP address or a hostname associated with your load balancer.

Let GitLab fetch the external endpoint

Introduced in GitLab 10.6.

If you installed Ingress or Knative, you should see the Ingress Endpoint on this same page within a few minutes. If you don't see this, GitLab might not be able to determine the external endpoint of your ingress application, in which case you should manually determine it.

Manually determining the external endpoint

If the cluster is on GKE, click the Google Kubernetes Engine link in the Advanced settings, or go directly to the Google Kubernetes Engine dashboard and select the proper project and cluster. Then click Connect and execute the gcloud command in a local terminal or using the Cloud Shell.

If the cluster is not on GKE, follow the specific instructions for your Kubernetes provider to configure kubectl with the right credentials. The output of the following examples will show the external endpoint of your cluster. This information can then be used to set up DNS entries and forwarding rules that allow external access to your deployed applications.

If you installed the Ingress via the Applications, run the following command:

kubectl get service --namespace=gitlab-managed-apps ingress-nginx-ingress-controller -o jsonpath='{.status.loadBalancer.ingress[0].ip}'

Some Kubernetes clusters return a hostname instead, like Amazon EKS. For these platforms, run:

kubectl get service --namespace=gitlab-managed-apps ingress-nginx-ingress-controller -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'

For Istio/Knative, the command will be different:

kubectl get svc --namespace=istio-system knative-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip} '

Otherwise, you can list the IP addresses of all load balancers:

kubectl get svc --all-namespaces -o jsonpath='{range.items[?(@.status.loadBalancer.ingress)]}{.status.loadBalancer.ingress[*].ip} '

Using a static IP

By default, an ephemeral external IP address is associated with the cluster's load balancer. If you associate the ephemeral IP with your DNS and the IP changes, your apps will become unreachable and you'd have to change the DNS record again. To avoid that, you should change it into a static, reserved IP.

Read how to promote an ephemeral external IP address in GKE.
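On GKE, promoting the address amounts to reserving the load balancer's current IP as a static one. A minimal sketch; the address name, IP, and region are placeholders:

# Reserve the ephemeral IP currently in use as a named static address.
gcloud compute addresses create gitlab-ingress-ip \
  --addresses=35.200.100.50 \
  --region=us-central1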

Pointing your DNS at the external endpoint

Once you've set up the external endpoint, you should associate it with a wildcard DNS record such as *.example.com in order to be able to reach your apps. If your external endpoint is an IP address, use an A record. If your external endpoint is a hostname, use a CNAME record.

Multiple Kubernetes clusters [PREMIUM]

Introduced in GitLab Premium 10.3.

With GitLab Premium, you can associate more than one Kubernetes cluster with your project. That way you can have different clusters for different environments, like dev, staging, production, and so on.

Simply add another cluster, like you did the first time, and make sure to set an environment scope that will differentiate the new cluster from the rest.

Setting the environment scope [PREMIUM]

When adding more than one Kubernetes cluster to your project, you need to differentiate them with an environment scope. The environment scope associates clusters with environments, similar to how environment-specific variables work.

The default environment scope is *, which means all jobs, regardless of their environment, will use that cluster. Each scope can be used by only a single cluster in a project; otherwise, a validation error will occur. Also, jobs that don't have an environment keyword set will not be able to access any cluster.


For example, let's say the following Kubernetes clusters exist in a project:

Cluster | Environment scope
Development | *
Staging | staging
Production | production

And the following environments are set in .gitlab-ci.yml:

stages:
- test
- deploy

test:
  stage: test
  script: sh test

deploy to staging:
  stage: deploy
  script: make deploy
  environment:
    name: staging
    url: https://staging.example.com/

deploy to production:
  stage: deploy
  script: make deploy
  environment:
    name: production
    url: https://example.com/

The result will then be:

  • The development cluster will be used for the "test" job.
  • The staging cluster will be used for the "deploy to staging" job.
  • The production cluster will be used for the "deploy to production" job.

Deployment variables

The Kubernetes cluster integration exposes the following deployment variables in the GitLab CI/CD build environment.

Variable | Description
KUBE_URL | Equal to the API URL.
KUBE_TOKEN | The Kubernetes token of the project service account.
KUBE_NAMESPACE | The Kubernetes namespace is auto-generated if not specified. The default value is <project_name>-<project_id>. You can overwrite it to use a different one if needed; otherwise the KUBE_NAMESPACE variable will receive the default value.
KUBE_CA_PEM_FILE | Path to a file containing PEM data. Only present if a custom CA bundle was specified.
KUBE_CA_PEM | (deprecated) Raw PEM data. Only if a custom CA bundle was specified.
KUBECONFIG | Path to a file containing kubeconfig for this deployment. The CA bundle would be embedded if specified. This config also embeds the same token defined in KUBE_TOKEN, so you likely will need only this variable. This variable name is also automatically picked up by kubectl, so you won't actually need to reference it explicitly if using kubectl.
KUBE_INGRESS_BASE_DOMAIN | From GitLab 11.8, this variable can be used to set a domain per cluster. See cluster domains for more information.

NOTE: Note: Prior to GitLab 11.5, KUBE_TOKEN was the Kubernetes token of the main service account of the cluster integration.
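For example, a deployment job's script can call kubectl directly. Because KUBECONFIG is set and picked up automatically, no extra flags are needed; deployment.yaml below is a placeholder for your own manifest:

# Inside a CI job whose environment matches the cluster's scope:
kubectl get pods --namespace "$KUBE_NAMESPACE"
kubectl apply --namespace "$KUBE_NAMESPACE" -f deployment.yaml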

Troubleshooting missing KUBECONFIG or KUBE_TOKEN

GitLab will create a new service account specifically for your CI builds. The new service account is created when the cluster is added to the project. Sometimes there may be errors that cause the service account creation to fail.

In such instances, your build will not be passed the KUBECONFIG or KUBE_TOKEN variables and, if you are using Auto DevOps, your Auto DevOps pipelines will no longer trigger a production deploy build. You will need to check the logs to debug why the service account creation failed.

A common reason for failure is that the token you gave GitLab did not have cluster-admin privileges as GitLab expects.

Another common reason these variables are not passed to your builds is that the job must have a matching environment:name. If your build has no environment:name set, it will not be passed the Kubernetes credentials. You can verify this from within the job itself, as in the sketch below.
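A minimal debugging sketch; note that only the presence of the token is printed, never its value:

# If either check fails, confirm the job sets environment:name and that
# the service account was created without errors.
echo "KUBECONFIG=${KUBECONFIG:-<unset>}"
test -n "$KUBE_TOKEN" && echo "KUBE_TOKEN is set"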

Monitoring your Kubernetes cluster [ULTIMATE]

Introduced in GitLab Ultimate 10.6.

When Prometheus is deployed, GitLab will automatically monitor the cluster's health. At the top of the cluster settings page, CPU and memory utilization are displayed, along with the total amount available. Keeping an eye on cluster resources can be important; if the cluster runs out of memory, pods may be shut down or fail to start.

Cluster Monitoring

Enabling or disabling the Kubernetes cluster integration

After you have successfully added your cluster information, you can enable the Kubernetes cluster integration:

  1. Click the Enabled/Disabled switch
  2. Hit Save for the changes to take effect

You can now start using your Kubernetes cluster for your deployments.

To disable the Kubernetes cluster integration, follow the same procedure.

Removing the Kubernetes cluster integration

NOTE: Note: You need Maintainer permissions and above to remove a Kubernetes cluster integration.

NOTE: Note: When you remove a cluster, you only remove its relation to GitLab, not the cluster itself. You can remove the cluster by visiting the GKE dashboard or using kubectl.

To remove the Kubernetes cluster integration from your project, simply click the Remove integration button. You will then be able to follow the procedure and add a Kubernetes cluster again.

View Kubernetes pod logs from GitLab [ULTIMATE]

Learn how to easily view the logs of running pods in connected Kubernetes clusters.

What you can get with the Kubernetes integration

Here's what you can do with GitLab if you enable the Kubernetes integration.

Deploy Boards [PREMIUM]

GitLab's Deploy Boards offer a consolidated view of the current health and status of each CI environment running on Kubernetes, displaying the status of the pods in the deployment. Developers and other teammates can view the progress and status of a rollout, pod by pod, in the workflow they already use without any need to access Kubernetes.

Read more about Deploy Boards

Canary Deployments [PREMIUM]

Leverage Kubernetes' Canary deployments and visualize your canary deployments right inside the Deploy Board, without the need to leave GitLab.

Read more about Canary Deployments

Kubernetes monitoring

Automatically detect and monitor Kubernetes metrics. Automatic monitoring of NGINX ingress is also supported.

Read more about Kubernetes monitoring

Auto DevOps

Auto DevOps automatically detects, builds, tests, deploys, and monitors your applications.

To make full use of Auto DevOps (Auto Deploy, Auto Review Apps, and Auto Monitoring), you will need the Kubernetes project integration enabled.

Read more about Auto DevOps

Web terminals

NOTE: Note: Introduced in GitLab 8.15. You must be the project owner or have maintainer permissions to use terminals. Support is limited to the first container in the first pod of your environment.

When enabled, the Kubernetes service adds web terminal support to your environments. This is based on the exec functionality found in Docker and Kubernetes, so you get a new shell session within your existing containers. To use this integration, you should deploy to Kubernetes using the deployment variables above, ensuring any pods you create are labeled with app=$CI_ENVIRONMENT_SLUG. GitLab will do the rest!
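To confirm that your deployment produces pods the terminal can attach to, you can list them by that label; a minimal check, run with the same credentials your deployment jobs use:

# The web terminal looks for pods labeled app=$CI_ENVIRONMENT_SLUG.
kubectl get pods --namespace "$KUBE_NAMESPACE" -l app="$CI_ENVIRONMENT_SLUG"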

Integrating Amazon EKS cluster with GitLab

Serverless