Using Google Cloud Service Accounts on GKE

Nick Joyce
Published in Real Kinetic Blog
Apr 4, 2019 · 6 min read

GKE is a managed Kubernetes offering from Google Cloud Platform (GCP). The services you deploy to a cluster work together to form your application. Each service needs to be able to communicate with its neighbours, and that communication typically needs to be authenticated and authorised. This post walks you through setting up and using Google Cloud service accounts to authorise access to Google Cloud services such as storage and KMS.

When you create the cluster, you provide a service account and set of scopes (or permissions) that make up the default credentials that the underlying nodes (aka VMs) will use to access other Google Cloud Services.

Defining the service account and access scopes in the create GKE cluster screen

The default for new clusters is to use the “Compute Engine” default service account along with a default set of scopes, including:

  • Read-only access to Google Cloud Storage (GCS)
  • Write access to Compute Engine logs
  • Write access to publish metric data to your Google Cloud projects
  • Read-only access to Service Management features required for Google Cloud Endpoints
  • Read/write access to Service Control features required for Google Cloud Endpoints
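If you create clusters from the command line instead, the same defaults can be requested explicitly. A minimal sketch, assuming an example cluster name and zone:

gcloud container clusters create example-cluster \
    --zone us-central1-a \
    --scopes gke-default

Here gke-default is an alias for the default scope set.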

Inherited Credentials

All Kubernetes pods running within the cluster will inherit these credentials by default when contacting other Google Cloud services as the network packets all appear to originate from the VM IP, not the pod IP. If you want to communicate with a service beyond the default scopes, you will need to provide your own service account credentials.
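You can see this for yourself from inside the cluster: the metadata server reports the node’s default service account scopes regardless of which pod asks. A quick check, assuming the curlimages/curl image is acceptable in your environment:

# Run a throwaway pod and query the node's metadata server
kubectl run -it --rm metadata-test --image=curlimages/curl --restart=Never --command -- \
    curl -s -H "Metadata-Flavor: Google" \
    "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/scopes"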

As a side note, since many pods can be running on a single VM at any one time, it is important to apply the principle of least privilege to ensure pods cannot access services they were not intended to reach. The “Compute Engine” default service account does a good job here and should not be changed unless you have a specific need.

Configuring Service-to-Service Authentication

What if a pod needs to write to a specific Google Cloud Storage bucket? That permission is not available using the default cluster credentials (nor should it be). Following the principle of least privilege, the answer is to create a service account that has write access to the specific bucket and use it when communicating with the GCS APIs.

What is a Service Account?

Service accounts are a primitive within the IAM (Identity & Access Management) service provided by GCP. They provide a mechanism for non-humans to interact with Google Cloud APIs in a controlled and managed way, enabling not only authentication and authorisation but also rate limiting, auditing, and monitoring.

A lot more information on service accounts is available in the GCP documentation.

Getting Started

At this point, I’m going to assume you have set up your GKE cluster and have your gcloud CLI configured to the right project/cluster. The gsutil CLI is provided by the gcloud SDK. Installation documentation is available if needed.

First, set a few environment variables that the rest of the commands will reference:

# Adjust these values to match your environment
export PROJECT_ID=$(gcloud config get-value core/project)
export SERVICE_ACCOUNT_NAME="my-app-gcs-service-account"
export GCS_BUCKET_NAME="my-app"

Create the Bucket

Skip this step if the bucket already exists.

gsutil mb gs://${GCS_BUCKET_NAME}/
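You can confirm the bucket was created with:

gsutil ls -b gs://${GCS_BUCKET_NAME}/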

Detailed documentation on creation of GCS buckets is available.

Create the Service Account

gcloud iam service-accounts create ${SERVICE_ACCOUNT_NAME} --display-name="My App Service Account"

This creates a new service account within your GCP project. You can list all the service accounts for the project by running:

gcloud iam service-accounts list

Configure the Necessary Permissions

We could assign the objectViewer role to the newly created service account at the project level and read access would work, but continuing with the theme of least privilege, we want to restrict this service account to the specific bucket. GCS provides bucket-level policies that can be applied through the gsutil CLI. Since we want to be able to read and write to the bucket at runtime, we need the correct IAM role. In this case, objectAdmin is what we’re looking for:

gsutil iam ch serviceAccount:${SERVICE_ACCOUNT_NAME}@${PROJECT_ID}.iam.gserviceaccount.com:objectAdmin gs://${GCS_BUCKET_NAME}/
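To double-check that the binding took effect, dump the bucket’s IAM policy and look for the service account in the output:

gsutil iam get gs://${GCS_BUCKET_NAME}/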

Create a Service Account Key

Now that we have the service account created and configured correctly, we need to get some credentials that we can use to communicate with the Google APIs. We do this by creating a key associated with the service account:

gcloud iam service-accounts keys create --iam-account "${SERVICE_ACCOUNT_NAME}@${PROJECT_ID}.iam.gserviceaccount.com" service-account.json

This command will create the key and output the contents to service-account.json. This file contains sensitive information so act accordingly. Don’t, for example, commit it to your source repository or bake it into the container image. :)

Service Account Keys

Keys generated for service accounts allow direct access to Google Cloud APIs. Because of this, it is your responsibility to ensure they are kept safe and that you follow best practices for managing keys.
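Part of that housekeeping is knowing which keys exist. You can list the keys for a service account at any time, which is useful when rotating or revoking them:

gcloud iam service-accounts keys list \
    --iam-account "${SERVICE_ACCOUNT_NAME}@${PROJECT_ID}.iam.gserviceaccount.com"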

Deployment to Kubernetes

Okay that was a lot of set up, but we’re finally ready to deploy our app to GKE and use our newly created service account.

Container images should look for credentials in known places in the underlying filesystem, typically using an environment variable to point to the location. We’re going to mount the Kubernetes Secret at the location specified by our environment variable. This is so we can separate out the code from any runtime configuration and be able to run the same container image in different environments (such as dev and staging) before hitting production. It also allows SRE staff to have control over the service account at runtime.

Do you want to be woken up at 3AM to roll a new container image just because new creds were required? Didn’t think so. :)

Kubernetes Secrets

We are going to use a Kubernetes Secret to hold the contents of the key for us. In a hardened environment, using something like Vault to hold the contents might be more appropriate, but that decision does not have an impact on this blog post.

kubectl create secret generic my-app-sa-key --from-file service-account.json

This will create a secret in Kubernetes called my-app-sa-key in the default namespace using the contents of the file we created earlier. You can list all the secrets in Kubernetes (that you can see) via:

kubectl get secret
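To inspect the secret without printing its contents, describe it; the output lists the key name (service-account.json) and its size in bytes:

kubectl describe secret my-app-sa-key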

Deploying the App

We’re now ready to configure a Kubernetes Deployment for our application that uses our newly created secret.

Here is some example yaml used to configure that deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      name: my-app
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: IMAGE_URL
          env:
            - name: "BUCKET_NAME"
              value: my-app
            - name: "GOOGLE_APPLICATION_CREDENTIALS"
              value: "/var/run/secret/cloud.google.com/service-account.json"
          volumeMounts:
            - name: "service-account"
              mountPath: "/var/run/secret/cloud.google.com"
            - name: "certs"
              mountPath: "/etc/ssl/certs"
      volumes:
        - name: "service-account"
          secret:
            secretName: "my-app-sa-key"
        - name: "certs"
          hostPath:
            path: "/etc/ssl/certs"

Note: IMAGE_URL should be replaced with whatever is appropriate for your environment.

Save this file to my-app.yaml and deploy:

kubectl apply -f my-app.yaml

Points to note:

  • We used an environment variable called GOOGLE_APPLICATION_CREDENTIALS to point to the file that contains the credentials necessary to access the bucket. Google’s client libraries all use this environment variable to provide a clean and consistent API for obtaining credentials.
  • We’re mounting the my-app-sa-key secret using the volume name “service-account”. This will create a file named service-account.json available at the specified mountPath with the contents of the Kubernetes Secret we created earlier.
  • We’re mounting the host’s SSL certs into the running container. This allows runtime control over what certificates are available and considered valid/acceptable. This step is not strictly necessary but a useful demonstration of what is possible.
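As a quick sanity check once the pod is running, you can confirm the environment variable is set and the key file is mounted where we expect (assuming your kubectl version supports exec’ing via the Deployment name and the image includes standard shell utilities):

kubectl exec deployment/my-app -- printenv GOOGLE_APPLICATION_CREDENTIALS
kubectl exec deployment/my-app -- ls /var/run/secret/cloud.google.com/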

All Done!

Congrats on getting this far!

We should now have a running application on Google Kubernetes Engine using a service account that only has read and write access to a specific GCS bucket and nothing more. We created a service account key and provided it to the running application in a configurable way that does not involve modifying the underlying image each time the credentials need to be updated.

Cleaning Up

To tear down the application and Service Account we used, these steps should be run in order:

1. kubectl delete -f my-app.yaml
2. kubectl delete secret my-app-sa-key
3. gsutil iam ch -d serviceAccount:${SERVICE_ACCOUNT_NAME}@${PROJECT_ID}.iam.gserviceaccount.com:objectAdmin gs://${GCS_BUCKET_NAME}/
4. gcloud iam service-accounts delete "${SERVICE_ACCOUNT_NAME}@${PROJECT_ID}.iam.gserviceaccount.com"
5. rm service-account.json

If you created the GCS bucket, run the final step:

gsutil rm -r gs://${GCS_BUCKET_NAME}/

Real Kinetic has extensive experience leveraging Kubernetes, GKE, and GCP. Learn more about working with us.
