Sometimes your Google Cloud Run app needs to communicate with or consume other services. This can be as simple as reading an object in Cloud Storage, sending an email, or connecting to a database. What identity does Cloud Run use? Can I change the identity? How do I use this identity to secure my services?

In this article, I will cover these questions. We will create a service account, create and lock down a Cloud Storage Bucket, encrypt our secrets with Cloud KMS and deploy a Cloud Run instance that securely gets and decrypts secrets from Cloud Storage.

The default Cloud Run identity is the Compute Engine default service account. Unless you have changed this service account, it has the Editor role (roles/editor). This role has vast permissions across the Google Cloud Platform. This service account is also shared with other services such as Compute Engine and Cloud Functions.

In the latest Cloud Run alpha release, Google added a new command-line option, --service-account. (Update, June 11, 2019: this option is now in beta.) This option allows you to specify a service account to use as the Cloud Run identity, which means you can use a different identity for each of your Cloud Run services. This is a big feature for Cloud Run. You need not create and download keys for this service account, so there is no key leakage or key management. This is inherently powerful and secure.

When storing parameters and secrets, it is very important to limit who/what can access these secrets. By using a unique identity, you can lock down and secure access to secrets.

This article just touches upon Cloud Run Identity. Other Google Cloud services, such as Pub/Sub, can use the Cloud Run Identity for authorization. You can also use this identity in your calls to your own services. In another article, I will discuss the low-level details of Cloud Run identity and how to verify identities.

In this article we will:

  1. Create a new service account. No permissions are assigned to this service account.
  2. Create a KMS Keyring and Key.
  3. Add the service account as an IAM member to the KMS key for decryption.
  4. Create a Cloud Storage Bucket and lock down access to only Project Owners and this service account.
  5. Add the service account as an IAM member to the storage bucket members list.
  6. Encrypt our secrets with KMS and copy to Cloud Storage.
  7. Configure Cloud Run to use this service account as an identity for service-to-service access.
  8. Our application in Cloud Run will access the encrypted secrets in Cloud Storage, decrypt using KMS and display for review.

Star of the Show

Google added the command-line option --service-account to the alpha version of the gcloud run deploy command. I cannot find when this feature was added to the Cloud SDK. I am testing with Cloud SDK 238.0.0 released May 28, 2019. As with all alpha and beta commands, do not use in production.

Update: June 11, 2019. This command-line option is now in the beta commands. This was released in the Cloud SDK version 250.0.0. Run the command gcloud components update to get the latest version.

This command-line option supports specifying the service account to use for the Cloud Run identity. When you request ADC (Application Default Credentials), this will be the service account used for your OAuth tokens. This feature means that you can create a service account with no permissions, no keys, no JSON file, etc. Then add this service account email address to the services you want to consume securely. Examples in this article are Cloud Storage and KMS.

Google’s description of --service-account:

Email address of the IAM service account associated with the revision of the service. The service account represents the identity of the running revision, and determines what permissions the revision has. If not provided, the revision will use the project’s default service account.

Getting Started

This article assumes that you have the gcloud CLI installed and configured with credentials. This article is CLI-based; we will not be using the Google Cloud Console. Google’s GUI is excellent, but I prefer the CLI because I can create scripts, write better documentation, etc. Sometimes there are options that are not available in the GUI and you must use the CLI. Step 7 uses a new Cloud Run feature, --service-account, which is only available in the alpha and beta versions of the CLI.

Verify that the correct project is the default project:
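A quick way to check (output formatting may vary by SDK version):

```shell
# Show the currently configured default project
gcloud config list project
```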

If the correct project is not displayed, use this command to change the default project:
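For example (replace my-project-id with your own project ID):

```shell
# Set the default project for subsequent gcloud commands
gcloud config set project my-project-id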

You can list the projects in your account. Some security configurations will not allow you to list projects. In that case, you will need to specify the default project manually as shown above.
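To list the projects:

```shell
# List the projects visible to your credentials
gcloud projects list
```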

Enable the Cloud Run Service
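If the Cloud Run API is not yet enabled on your project:

```shell
# Enable the Cloud Run API for the current project
gcloud services enable run.googleapis.com
```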

Set the default region for Cloud Run. Today the choice is obvious, but more regions will be announced soon:
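At the time of writing, us-central1 is the available region:

```shell
# Set the default region for Cloud Run (fully managed)
gcloud config set run/region us-central1
```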

For Cloud Run on GKE:
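The cluster name and zone below are examples; substitute your own:

```shell
# Set the default cluster and cluster location for Cloud Run on GKE
gcloud config set run/cluster my-cluster
gcloud config set run/cluster_location us-central1-a
```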

If you develop for BOTH Cloud Run and Cloud Run on GKE, you cannot set the properties as described above, because they will conflict. Instead, supply the --region parameter as needed on the gcloud command line for Cloud Run, and supply the --cluster and --cluster-location parameters as needed on the gcloud command line for Cloud Run on GKE.

Software Requirements

Download Git Repository

I have published the files for this article on GitHub.

License: MIT License

Clone my repository to your system:
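The placeholders below stand in for the repository URL and directory from GitHub:

```shell
# Clone the article's repository and change into it
git clone <REPOSITORY_URL>
cd <REPOSITORY_DIRECTORY>
```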

My repository has build scripts for Linux and for Windows. I tested Linux with the Google Cloud Shell and Windows with Windows 10 Professional.

Provided that you have correctly set up the gcloud CLI, the build scripts will do everything automatically.

Linux setup:

  • sets up the build environment. Review the settings in this file. You can override the CLI settings for some items.
  • build and destroy everything. There are smaller scripts that do specific things such as deploy to Cloud Run.
  • Execute chmod +x *.sh to make each script executable.

Windows setup:

  • env.bat sets up the build environment. Review the settings in this file. You can override the CLI settings for some items.
  • setup.bat and cleanup.bat build and destroy everything. There are smaller scripts that do specific things such as deploy to Cloud Run.

Tip: Change to the scripts-linux or scripts-windows directory. For Linux execute ./ For Windows execute setup.bat. Everything will be created, built and deployed.

The shell script creates several environment variables that are used by the other scripts. Edit this file to tailor it to your environment.
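A sketch of what such an environment file might contain. The variable names and values here are my assumptions for illustration; the actual names in the repository may differ:

```shell
# Hypothetical build-environment settings; substitute your own values
export GCP_PROJECT="my-project-id"
export SA_NAME="cloudrun-secrets"
export SA_EMAIL="${SA_NAME}@${GCP_PROJECT}.iam.gserviceaccount.com"
export BUCKET="gs://${GCP_PROJECT}-secrets"
export KEYRING="cloudrun-keyring"
export KEY="cloudrun-key"
```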

Step 1 – Create the Service Account

For this project, we will create a new service account. This service account will provide the identity for Cloud Run. Cloud Storage and KMS will use this identity for authorization.

The file defines the name for the service account:

Create the service account:
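The service account name below is an example; substitute the name from your environment file:

```shell
# Create a service account with no roles attached
gcloud iam service-accounts create cloudrun-secrets \
  --display-name "Cloud Run Secrets Identity"
```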

Step 2 – Create the KMS Keyring and Key

Create the KMS Keyring. Keyrings cannot be deleted, so this is a one-time operation.
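The keyring name and location below are examples:

```shell
# Create a KMS keyring (keyrings cannot be deleted later)
gcloud kms keyrings create cloudrun-keyring --location us-central1
```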

Create the KMS Key.
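The key name below is an example:

```shell
# Create a symmetric encryption key on the keyring
gcloud kms keys create cloudrun-key \
  --location us-central1 \
  --keyring cloudrun-keyring \
  --purpose encryption
```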

Step 3 – Setup KMS IAM Policy

We will now add the service account to the KMS policy for the keyring and key that we created. This will allow Cloud Run to decrypt data.
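A sketch of the IAM binding, assuming the example names used above:

```shell
# Grant the service account permission to decrypt with this key only
gcloud kms keys add-iam-policy-binding cloudrun-key \
  --location us-central1 \
  --keyring cloudrun-keyring \
  --member "serviceAccount:cloudrun-secrets@my-project-id.iam.gserviceaccount.com" \
  --role roles/cloudkms.cryptoKeyDecrypter
```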

Step 4 – Encrypt the Secrets

For this article, I created a config.json file. This is to simulate storing database credentials:
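The field names and values below are placeholders I invented to illustrate the idea; never store real credentials unencrypted:

```json
{
  "database": "mydb",
  "username": "dbuser",
  "password": "example-password"
}
```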

Encrypt config.json using Cloud KMS and store the encrypted results in config.enc:
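Using the example keyring and key names from above:

```shell
# Encrypt config.json with Cloud KMS, producing config.enc
gcloud kms encrypt \
  --location us-central1 \
  --keyring cloudrun-keyring \
  --key cloudrun-key \
  --plaintext-file config.json \
  --ciphertext-file config.enc
```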

Step 5 – Create the Cloud Storage Bucket

For this project, we will create a new storage bucket. This bucket will hold our encrypted secrets file config.enc.

The file defines the name for the bucket:
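The bucket name below is an example; bucket names must be globally unique:

```shell
# Create the storage bucket in the same region as Cloud Run
gsutil mb -l us-central1 gs://my-project-id-secrets
```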

Change the default ACL for this bucket to private:
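Assuming the example bucket name:

```shell
# Set the default object ACL on the bucket to private
gsutil defacl set private gs://my-project-id-secrets
```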

Enable versioning on this bucket. This prevents objects from easily being deleted.
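Assuming the example bucket name:

```shell
# Enable object versioning on the bucket
gsutil versioning set on gs://my-project-id-secrets
```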

Our encrypted secrets file is config.enc. Copy this file to the bucket:
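Assuming the example bucket name:

```shell
# Copy the encrypted secrets file to the bucket
gsutil cp config.enc gs://my-project-id-secrets/
```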

Change the ACL for this object to private:
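Assuming the example bucket and object names:

```shell
# Set the ACL on the object itself to private
gsutil acl set private gs://my-project-id-secrets/config.enc
```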

We will assign two IAM roles to the bucket allowing this service account to access both the bucket and the objects stored in the bucket. We are only granting read access. Note that we are applying the IAM permissions to the bucket and not the service account. The service account has no IAM permissions.

Assign the IAM role legacyBucketReader to the bucket:
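A sketch using the example service account and bucket names:

```shell
# Grant the service account read access to the bucket (not its objects)
gsutil iam ch \
  "serviceAccount:cloudrun-secrets@my-project-id.iam.gserviceaccount.com:legacyBucketReader" \
  gs://my-project-id-secrets
```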

Assign the IAM role legacyObjectReader to the config.enc object:
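Object-level access is granted through the object's ACL; READ on an object corresponds to the legacy object reader role. A sketch, assuming the example names:

```shell
# Grant the service account read access to this one object
gsutil acl ch \
  -u "cloudrun-secrets@my-project-id.iam.gserviceaccount.com:READ" \
  gs://my-project-id-secrets/config.enc
```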

If you add additional objects to this bucket, repeat the last command to assign rights to access the object.

Summary. We have created a new bucket, copied our encrypted secrets file to the bucket, and locked down permissions so that the only identities that can access anything in this bucket are the Project Owners and our new service account.

Step 6 – Build the Docker Image

The file defines the name for the image:

Use Cloud Build to build the image:
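The image name below is an example; substitute your own project and image names:

```shell
# Build the container image with Cloud Build and push it to GCR
gcloud builds submit --tag gcr.io/my-project-id/cloudrun-secrets .
```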

Step 7 – Deploy Image to Cloud Run:
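A sketch of the deploy command, assuming the example service account and image names. Note the --service-account option, the star of the show; --allow-unauthenticated makes the service publicly reachable so we can verify it in a browser:

```shell
# Deploy to Cloud Run using our dedicated service account as the identity
gcloud beta run deploy cloudrun-secrets \
  --image gcr.io/my-project-id/cloudrun-secrets \
  --service-account cloudrun-secrets@my-project-id.iam.gserviceaccount.com \
  --allow-unauthenticated
```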

Step 8 – Verify that everything works

When the deploy command completes, you will see a message similar to the following. Make note of the service URL:

Open a web browser. Enter the URL. You should see a screen similar to this:

If you do not see a screen similar to this, but instead see not_defined for each parameter or a stack trace error message, go to the debugging section.

Step 9 – Cleanup

Once you are finished with this example, execute the cleanup script. This script will delete the bucket, the service account, the IAM permissions from KMS, and the Cloud Run service.

The cleanup script does not delete the following items:

  • The image stored in the Google Container Registry
  • The KMS Keyring
  • The KMS Key

Additional Thoughts

In this article, we encrypted our secrets file config.json and copied it to Cloud Storage. The example Python code loads the secrets file on every HTTP request. A better approach is to load the secrets file once when the container starts. This will reduce the response time for HTTP requests. However, this brings up the issue of how do you rotate credentials?

There are several strategies that come to mind to rotate credentials:

  • Unless necessary, do not immediately invalidate the current credentials when rotating. Instead, create new credentials that overlap the old credentials for a period of time.
  • An option is to add “smarts” to the code so that when the current credentials no longer work, reload the secrets from Cloud Storage and try again. This will provide an automated retry when credential rotation occurs.
  • When the Cloud Run container starts, it will load whatever credentials are stored in the secrets file. Update the secrets file with new credentials.
  • Once the new credentials are in place, issue a new Cloud Run deploy command. This will cause all Cloud Run instances to start with a new version loading the new secrets file.
  • After a short period of time, invalidate the old credentials.
  • An option is to keep track of what time the current credentials were loaded inside the container. Perhaps every 15-minutes, reload the credentials. This will reduce the requirement for issuing a new deploy command. You would update the secrets file in Cloud Storage, wait for 15 minutes and then delete the old credentials.
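The retry and periodic-reload strategies above can be sketched in Python. This is a minimal illustration, not the article's actual sample code: the loader callable stands in for the Cloud Storage fetch plus KMS decrypt, and the exception type your database client raises on bad credentials will differ in practice:

```python
import time


class SecretCache:
    """Cache decrypted secrets and reload them when stale or invalid.

    load_secrets is any callable that fetches and decrypts the secrets
    (e.g., from Cloud Storage + KMS); it is injected so this sketch
    stays self-contained.
    """

    def __init__(self, load_secrets, max_age_seconds=900):
        self._load = load_secrets
        self._max_age = max_age_seconds
        self._secrets = None
        self._loaded_at = 0.0

    def get(self):
        # Reload if never loaded, or older than max_age (15 minutes here).
        if self._secrets is None or time.time() - self._loaded_at > self._max_age:
            self.refresh()
        return self._secrets

    def refresh(self):
        self._secrets = self._load()
        self._loaded_at = time.time()


def connect_with_retry(cache, connect):
    """Try to connect; on auth failure, reload secrets once and retry."""
    try:
        return connect(cache.get())
    except PermissionError:
        cache.refresh()  # credentials may have been rotated
        return connect(cache.get())
```

With overlapping old and new credentials, a rotation then only costs one failed attempt before the container picks up the new secrets, without redeploying.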

The service account we created has no IAM permissions other than being able to access one KMS key and one object in Cloud Storage. Cloud Run cannot access any other Google Cloud services such as Pub/Sub. Review what services your Cloud Run instance requires access to and add the corresponding permissions to the service account.


Debugging

The first step in debugging is to open the Stackdriver Logging section of the Google Cloud Console. My sample code logs messages, including error messages, to Stackdriver. This will help you pinpoint what is going wrong.

KMS Key Problems:

If you see a KMS-related error message in the logs, it means that either the KMS key does not exist or the IAM permissions are missing.

Additional Information

  • Google’s Seth Vargo wrote an article, “Secrets in Serverless”. His article is excellent and focuses on Cloud Functions, which shares many common configuration features with Cloud Run. Seth’s article started me on my journey to write this article.
  • Ahmet’s Cloud Run FAQ is a must read and reference for everything Cloud Run.
  • For a good 5-minute video introduction to Cloud KMS keys: Data Encryption and Managed Encryption Keys – Take5


I write free articles about technology. The image in this article is courtesy of Pixabay at Pexels, which provides free images.