Google Cloud Memorystore – Redis

Date created: April 17, 2019
Last updated: April 19, 2019

Introduction

Redis (REmote DIctionary Server) is one of the most popular databases in the world. Redis is an in-memory key-value store. Google Cloud Memorystore is Google’s managed service for Redis. As part of my deep dive into Google Cloud, I need to know both Redis on Google Cloud VMs (Compute Engine, App Engine Flexible and Kubernetes) and Google Cloud Memorystore.
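As a quick illustration of the key-value model, here is a minimal sketch using redis-cli against a local Redis server (the key name is arbitrary):

redis-cli SET user:1000 "Ada"
redis-cli GET user:1000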

This article is being developed. I plan to add more content around installing and setting up Redis, Clustering (Sharding), HA and programming typical use cases.

Redis Popularity

Redis ranks #7 among all database types and #1 among key-value databases.

What is Google Cloud Memorystore?

“Cloud Memorystore for Redis provides a fully managed in-memory data store service built on scalable, secure, and highly available infrastructure managed by Google. Use Cloud Memorystore to build application caches that provide sub-millisecond data access. Cloud Memorystore is compatible with the Redis protocol, allowing easy migration with zero code changes.” Google Source.
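As a rough sketch of how simple provisioning is, creating a basic instance looks something like this (instance name, size and region are placeholders):

gcloud redis instances create my-instance --size=1 --region=us-central1 --tier=basic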

Google Cloud and Redis Labs Expand Strategic Partnership

Google Next ’19. Watch this video for a demonstration of the next iteration of Redis on Google Cloud. Very cool product demo. Clusters, replication, flash storage, persistence, monitoring, Stackdriver and more are supported.

Excellent videos for learning Redis

I watched a lot of videos on Redis. I have listed the ones below that I think are good. The Pluralsight course below will give you a solid introduction to Redis. Advanced items such as Redis Streams are not covered in the Pluralsight course. For advanced Redis look at Redis University below. Their courses are free and good.

Excellent videos for learning Google Cloud Memorystore

I watched a lot of videos on Cloud Memorystore. I have listed the ones that I think are good. If you are preparing for a Google Cloud Certification, the Pluralsight course below is perfect and will give you everything you need in a couple of hours.

Redis University

These courses are good. A plus is that you are constantly being quizzed during the course which helps to reinforce knowledge. Another plus is their Virtual Lab which lets you play with Redis online. This includes running Python programs to test Redis features.

Home page for the following resources:

  • Online Class: Virtual Lab Primer – PR001
  • Online Class: Introduction to Redis Data Structures – RU101
  • Online Class: Redis Search – RU201
  • Online Class: Redis Streams – RU202
Redis Books
Popular Redis Use Cases
Redis in the Cloud
Google Cloud Memorystore
  • Cloud Memorystore for Redis FAQ
    • Cloud Memorystore has a number of limitations. Read the FAQ before deciding. In some cases you should deploy Redis on your own VMs. At Google Next ’19 there was a demo of the next iteration of Redis on GCP. I expect big changes this year for Cloud Memorystore and Redis Enterprise.
  • Note: Cloud Memorystore for Redis does not currently support persistence.
  • Product constraints
  • Maintenance policy
    • Basic Tier Cloud Memorystore for Redis instances experience a full cache flush during maintenance. If your application cannot withstand a full cache flush, create it with a Standard Tier instance.
Redis Clients

Google Cloud Certified – Associate Cloud Engineer

Date created: March 22, 2019
Last updated: March 25, 2019

Today I took the Associate Cloud Engineer exam and passed. The exam was medium difficulty. However, I took this exam for granted and I did not study or prepare at all.

This exam is not a “walk in the park”. You need to know a lot of small details to pass this exam. This is a command line gcloud style exam. Lots of questions on projects, service accounts, IAM, Kubernetes, etc. There were not many questions on actual Google Cloud services. I expected more questions on all the core services (Compute Engine, App Engine, SQL, Spanner, PubSub, BigQuery, Datastore, etc.) but they were seldom mentioned. This exam leans more towards what you should do when you first create a project in Google Cloud. Setting up IAM, projects, permissions, etc. There were a few questions on storage lifecycle (moving data to nearline / coldline). A fair number of questions on Kubernetes command line (kubectl).

I took the exam at the Bellevue College North Campus. Today is my 14th certification exam at this facility over the past few years. This is a great place to take exams. Lots of parking, quiet, clean and very organized. The testing computers are older, but I have not had any problems for any of my exams. There is a large cafeteria on the first level. You can relax with coffee or a soft drink, have lunch, etc. before you take your exam. I mention this because it is important to relax a bit before taking an exam. Stress reduces your IQ and memory.

Once I completed the exam, the screen showed a pass, but I will have to wait seven to ten days for my actual certificate. No score is given, just Pass or Fail. Update: certification confirmation received March 25th.

Key areas to study:
  • The CLI gcloud. Know how to create projects and configurations (see the sketch after this list). Know all the basic commands for Compute Engine.
  • The Kubernetes CLI kubectl.
  • The Storage CLI gsutil.
  • Understand IAM, Roles, Permissions, etc. very well.
  • Understand the basics of VPCs. There were a number of questions on subnets.
  • Understand Storage and migrating data to nearline and coldline.
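A minimal sketch of the kind of gcloud commands you should be able to run from memory (project ID, configuration name and zone are placeholders):

gcloud projects create my-project-id
gcloud config configurations create my-config
gcloud config set project my-project-id
gcloud config set compute/zone us-central1-a
gcloud compute instances create my-vm
gcloud compute instances list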
Suggested Online Training:
Qwiklabs

Absolutely complete as many labs as you can. Set a target of 100 completed labs that you understand well. Repeat each lab until you understand everything. I completed over 200 labs and 20 quests before I took my first Google certification exam. They really helped reinforce my memory with lots of small details.

Qwiklabs Home

Suggestions:

Prepare well before taking this exam. I was surprised at the difficulty level for an associate level exam. I just took both the professional security and network exams, which are vastly harder than the associate, so I walked into the exam without preparing at all. Luckily, I do everything with the CLI instead of the console GUI, so most of the exam was easy.

Even if you know Google Cloud well and you plan to skip this certification and go right to the Professional Cloud Architect, don’t. Take this exam. The type of questions and knowledge required is different. This will ensure that you have a solid understanding from top to bottom. Creating projects and configuring credentials is not something that we normally do every day. This exam will make sure that you know how.

Google Compute – Stackdriver Logging – Installation, Setup & Debugging

Date created: March 10, 2018
Last updated: March 10, 2019

Google Stackdriver is a very good product for monitoring and logging your compute instances on Google Cloud, AWS, Azure, Alibaba, etc.

This article covers Stackdriver logging for Google Compute instances running Debian 9.

To make sure that Stackdriver is installed on each instance, I create instance templates that contain a script in the custom metadata section to automate Stackdriver installation and setup.

An important item to remember: startup scripts are executed every time an instance starts, not just on instance creation.

In my startup script for Debian 9 Stretch, I install Google Stackdriver logging and monitoring agents.

If you are manually creating a Compute instance, copy this script into the Automation -> Startup script section when creating the instance.

Installing the Logging Agent
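The following commands follow Google’s published install script approach for Debian at the time of writing; verify the URL against the current documentation:

curl -sSO https://dl.google.com/cloudagents/install-logging-agent.sh
sudo bash install-logging-agent.sh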

Installing the Monitoring Agent
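And similarly for the monitoring agent (again, verify the URL against the current documentation):

curl -sSO https://dl.google.com/cloudagents/install-monitoring-agent.sh
sudo bash install-monitoring-agent.sh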

Sending a test Stackdriver log message

logger "Hello Stackdriver

This message is sent to Stackdriver and can be found in Stackdriver Logging -> GCE VM Instance -> Instance Name. If you do not see this message after about 15 seconds, check for Stackdriver errors in the logfile on the instance.

Stackdriver logfile

To see the latest logs in the Stackdriver logfile for debugging:

tail /var/log/google-fluentd/google-fluentd.log

Common Stackdriver errors

No service account assigned to the VM instance

Missing IAM permission to write to Stackdriver

Compute instances without a public IP address

For instances without external IP addresses, you must enable Private Google Access to allow the Stackdriver Logging agent to send logs.

Verify that your instance can resolve the following DNS hostnames:

      • oauth2.googleapis.com
      • monitoring.googleapis.com
      • stackdriver.googleapis.com
Google Stackdriver service account file location

Stackdriver will check the following location and, if a file is present, use these credentials instead of the metadata service account credentials.

/etc/google/auth/application_default_credentials.json

IAM Permissions required for Stackdriver

Stackdriver Monitoring

Your VM instance needs the permission monitoring.timeSeries.create which can be added via the role roles/monitoring.metricWriter. Link.

Stackdriver Logging

Your VM instance needs the permission logging.logEntries.create which can be added via the role roles/logging.logWriter. Link.

Stackdriver Error Reporting

Your VM instance needs the permission errorreporting.errorEvents.create which can be added via the role roles/errorreporting.writer. Link.

Stackdriver Profiler

Your VM instance needs the permissions cloudprofiler.profiles.create and cloudprofiler.profiles.update which can be added via the role roles/cloudprofiler.agent. Link.

Stackdriver Trace

Your VM instance needs the permission cloudtrace.traces.patch which can be added via the role roles/cloudtrace.agent. Link.

Stackdriver Debugger

You don’t directly give members permissions; instead, you grant them one or more roles on a GCP resource, which have one or more permissions bundled within them. Refer to this document.

To determine the currently installed versions:

Stackdriver Monitoring
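One way to check on Debian, assuming the standard monitoring agent package name stackdriver-agent:

dpkg -l stackdriver-agent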

Output:

 

Stackdriver Logging
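Likewise for the logging agent, assuming the package name google-fluentd:

dpkg -l google-fluentd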

Output:

Add a startup Script remotely

You can add a startup-script for a running instance from the CLI. Note: this command will replace the existing startup script.

Copy the following to a local file. In this example startup.script. Modify to fit your requirements:
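A minimal sketch of startup.script that installs both Stackdriver agents (based on Google’s published install scripts; verify the URLs against current documentation):

#! /bin/bash
# Startup scripts run as root, so sudo is not required.
# Install the Stackdriver logging agent
curl -sSO https://dl.google.com/cloudagents/install-logging-agent.sh
bash install-logging-agent.sh
# Install the Stackdriver monitoring agent
curl -sSO https://dl.google.com/cloudagents/install-monitoring-agent.sh
bash install-monitoring-agent.sh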

 

Execute the following command from your desktop:
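Something like this (instance name and zone are placeholders):

gcloud compute instances add-metadata my-instance --zone us-central1-a --metadata-from-file startup-script=startup.script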

 

You can also store your scripts in Google Storage:
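This uses the startup-script-url metadata key instead (instance, zone and bucket names are placeholders):

gcloud compute instances add-metadata my-instance --zone us-central1-a --metadata startup-script-url=gs://my-bucket/startup.script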

 

The startup script will be executed the next time the instance reboots.

Restarting the Stackdriver agent

sudo service google-fluentd restart

Stackdriver agent status

sudo service google-fluentd status

Upgrading the Stackdriver agent – Debian & Ubuntu systems

sudo apt-get install --only-upgrade google-fluentd

Note: This command does not change the agent’s configuration files. To get the latest default configuration and catch-all configuration files run the following command instead.

Uninstall the Stackdriver agent – Debian & Ubuntu systems
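Assuming the standard package names, removal looks like this:

# Logging agent
sudo apt-get remove google-fluentd google-fluentd-catch-all-config
# Monitoring agent
sudo apt-get remove stackdriver-agent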

 

Google Cloud – Compute Engine Service Accounts

Date created: March 1, 2019
Last updated: March 3, 2019

Note: This article is evolving as I document my deep dive.

Contents:

Introduction

Service accounts are the keys to the cloud kingdom.

This is the first of my “The Master Series” on Google Cloud. In this article we will dive deep into Compute Engine Service Accounts. We will investigate service accounts, instance metadata, access scopes, identity and access management (IAM), impersonation, firewall rules, Stackdriver, auditing, logging events, alerting and best practices.

I have written a number of articles on service accounts on this site. However, this article will be different as we will investigate items that are undocumented, little known or interesting. We will experiment, do the unexpected, create scenarios and test.

Service accounts are one of the most misunderstood features in Google Cloud. One of the reasons is that Google designed service accounts with power, flexibility and features. Service accounts are both an identity and a resource. Service accounts can act and be impersonated. Understanding service accounts is important to properly authorize and secure cloud resources.

March 1, 2019 – Day #1 – Basics and FAQ

What is a Compute Engine service account?

A service account is a special account that can be used by services and applications running on your Compute Engine instance to interact with other Google Cloud Platform APIs. Applications can use service account credentials to authorize themselves to a set of APIs and perform actions within the permissions granted to the service account and virtual machine instance. In addition, you can create firewall rules that allow or deny traffic to and from instances based on the service account that you associate with each instance. Source.

What is a Compute Engine default service account?

New projects are created with the Compute Engine default service account, identifiable using this email:

[PROJECT_NUMBER]-compute@developer.gserviceaccount.com

The default service account is created by Google and added to your project automatically, but you have full control over the account.

What is a Compute Engine Service Agent aka Compute Engine System service account?

See my related article: Google Cloud – Compute Engine System Service Account.

What permissions does the Compute Engine default service account have?

Project Editor – roles/editor

Project Editor is one of the primitive roles that Google created early on in Google Cloud. In this article, I will recommend removing the Project Editor role from the Compute Engine default service account and assigning specific IAM predefined or custom roles. Google also recommends this. This advice goes for any primitive role (Owner, Editor, Viewer).

FIX: Find the reference for Google recommending removing Project Editor from a service account.

What resources rely on the Compute Engine default service account?

Managed instance groups and autoscaling use the credentials of this account to create, delete, and manage instances.

Can you create a VM instance without a service account?

Yes. The instance will still be able to access most metadata, but will not be able to interact with other Google Cloud Platform APIs.

Which items do not work on VM instances without a service account?

Verifying the identity of instances will not work. More information about VM instance identity.

Can you authorize a VM instance without a Compute Engine service account?

Yes, you can authorize the instance using several methods. The first method is gcloud auth application-default login to provide user account credentials to use for Application Default Credentials. The VM instance will need Internet access to reach Google Accounts. This gcloud command will write credentials to:

~/.config/gcloud/application_default_credentials.json

The second method is to use gcloud auth login to provide user account credentials. The VM instance will need Internet access to reach Google Accounts. This gcloud command will write credentials to:

~/.config/gcloud/legacy_credentials/john.hanley@jhanley.com/adc.json

User credentials persist across reboots. If your goal is security and you removed the default service account, using gcloud auth login or gcloud auth application-default login will defeat your goal of an instance with no credentials. Revoke the credentials with gcloud auth revoke or gcloud auth application-default revoke.

The last method, which is also the best method, is to use service account credentials in a JSON file. Copy your service account file to your instance and authorize it using gcloud auth activate-service-account [ACCOUNT] --key-file=[KEY_FILE].

See my article: Google Cloud – Setting up Gcloud with Service Account Credentials which goes into detail on how to correctly setup authorization with service account credentials.

What happens if you delete the default service account while a VM instance is running?

Existing running instances will error with “Invalid Credentials” for gcloud.

Instance metadata will not have the entries in /computeMetadata/v1/instance/service-accounts/

FIX – Double check: Software will fail to obtain Application Default Credentials.

What happens if you delete the default service account for new VM instances?

Creating new VM instances with the default service account will fail with an error that the service account was not found. You will still be able to create new VM instances if you specify “No service account” when configuring the new VM instance.

How do I recreate the Compute Engine default service account?

You will need to contact the Google Cloud Compute Engine team to recover your service account.

Contact the Compute Engine team.

What does Google Cloud use internally for a Service Account identifier?

Google Cloud uses the unique ID assigned to a service account at creation.

This is important to know because you can create a service account, assign roles, delete the service account and then create a new service account with the same name. The role bindings are not immediately deleted. This means that if you recreate a service account with the same name, the old bindings will still be in effect for a while for the old (deleted) service account.

Let’s look at the default Compute Engine service account for my account:
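Using the describe command (substitute your project number):

gcloud iam service-accounts describe [PROJECT_NUMBER]-compute@developer.gserviceaccount.com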

The returned details include a uniqueId field. The uniqueId is used internally.

Recommendation: Delete the roles assigned to a service account before deleting the service account.

More information here.

What are Compute Engine best practices?

Part 1: Source

In general, Google recommends that each instance that needs to call a Google API should run as a service account with the minimum permissions necessary for that instance to do its job. In practice, this means you should configure service accounts for your instances with the following process:

Create a new service account rather than using the Compute Engine default service account.

Grant IAM roles to that service account for only the resources that it needs.

Configure the instance to run as that service account.

Grant the instance the https://www.googleapis.com/auth/cloud-platform scope to allow full access to all Google Cloud APIs, so that the IAM permissions of the instance are completely determined by the IAM roles of the service account.

Part 2: Source

Restrict who can act as service accounts. Users who are Service Account Users for a service account can indirectly access all the resources the service account has access to. Therefore, be cautious when granting the serviceAccountUser role to a user.

Grant the service account only the minimum set of permissions required to achieve their goal. Learn about Granting roles to a service account for specific resources.

Create service accounts for each service with only the permissions required for that service.

Use the display name of a service account to keep track of the service accounts. When you create a service account, populate its display name with the purpose of the service account.

Define a naming convention for your service accounts.

Implement processes to automate the rotation of user-managed service account keys.

Take advantage of the IAM service account API to implement key rotation.

Audit service accounts and keys using either the serviceAccount.keys.list() method or the Logs Viewer page in the console.

Do not delete service accounts that are in use by running instances on Google App Engine or Google Compute Engine.


March 2, 2019 – Day #2 – Auditing, Alerting & Stackdriver

Stackdriver can provide a wealth of information about service accounts if you know how to use Stackdriver logs. Today we will cover how to use Stackdriver logs to audit events. Then we will use Pub/Sub and Cloud Functions to process Stackdriver logs looking for specific events and creating an action, such as sending an email, when a specific event occurs.

Auditing

In order to perform an audit, you need to obtain information:

  • What resources do you have?
  • What resources have you had in the past?
  • What has been done using those resources?
  • What has been done to those resources?

For this deep dive we are only interested in service account resources.

I created a new project so that the number of resources is limited. Then I enabled the Compute Engine API.

What resources do you have?

The first step is to list all of the service accounts that are currently in a project.
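The list command is all that is needed:

gcloud iam service-accounts list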

Which provided me with:

Only one service account in my project.

What resources have you had in the past?

Google does not provide a method to easily determine this. We will use Stackdriver to review the events for this project. Stackdriver stores events related to service accounts in the “Activity” log. The resource type within this log is “service_account”.
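A sketch of reviewing these events with gcloud (the filter is illustrative):

gcloud logging read 'resource.type="service_account"'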

This provides a lot of information. Let’s save this output to a file and then parse it.
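For example, saving the entries as JSON and pulling out the actions with a hypothetical jq one-liner (the file name is arbitrary):

gcloud logging read 'resource.type="service_account"' --format=json > activity.json
jq -r '.[].protoPayload.methodName' activity.json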

This provides us with a list of actions on service accounts. In our case, not much has happened.

This shows that we created a service account and then created a service account key. Normal stuff. However, if you saw activity where service accounts were being created and deleted, this might indicate that someone is trying to hide their activity or grant themselves permissions for use when not at work. The only way to know is to keep track of activity on resources. When something unexpected happens, investigate. You can also see the principal email address for each activity.

Knowing who does what to whom is an important part of auditing.

From the Stackdriver logs, you can reconstruct what resources you had in the past and who created and deleted those resources.

What has been done using those resources?

Unfortunately, Google Cloud does not log all activity using service accounts. Since service accounts are the mechanism to obtain an Access Token, which authorizes API calls, the number of log entries would match the number of API calls and then some. This would result in massive log files that would be expensive to store.

However, certain administrative activities are logged. The principal will be the service account email address that was used to create, delete, etc. other resource types. This can provide you with a higher-level overview of activity by this service account.

What has been done to those resources?

By parsing the Stackdriver logs, we can see what activity has been done to a service account. Actions such as create, delete, create keys, etc. can be tracked in detail by time and who performed the action.

Audit service account usage.

Next we will use a Compute Engine default service account to create a Compute Engine VM. A common security problem that I see is a user created with IAM permissions that do not allow creating VM instances, but who is allowed to connect to VMs using SSH where the Compute Engine default service account is set to Project Editor. This service account then allows the user to bypass the IAM user account permissions and use the service account to create VM instances.

Create a new VM instance:
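For example (instance name and zone are placeholders):

gcloud compute instances create instance-1 --zone us-central1-a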

Connect to the new VM instance:
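Using gcloud (same placeholder name):

gcloud compute ssh instance-1 --zone us-central1-a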

While inside the SSH terminal session, create a new VM instance. This VM instance is created using the Compute Engine service account.
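From inside the SSH session, the same create command now authenticates with the instance’s service account:

gcloud compute instances create instance-2 --zone us-central1-a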

Exit the SSH session.

Now let’s look at the Stackdriver logs for Compute Engine activities. The resource type within this log is “gce_instance”. Notice I set the --freshness command line option to 1 hour since we just created the VM.
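Something like this (the output file name is arbitrary):

gcloud logging read 'resource.type="gce_instance"' --freshness=1h --format=json > gce.json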

Now search for activity:
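Again, a hypothetical jq pass over the saved file:

jq -r '.[].protoPayload.methodName' gce.json | sort -u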

Which provided me with one action:

Looking at the logfile for this action, I can see the principalEmail that created the instance:

Which is a Compute Engine default service account. The format for Compute Engine default service accounts:

[PROJECT_NUMBER]-compute@developer.gserviceaccount.com

I created a more complicated jq command that outputs information in CSV:
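A sketch of such a command, using audit log fields present in these entries (timestamp, principal email, method name, caller IP):

jq -r '.[] | [.timestamp, .protoPayload.authenticationInfo.principalEmail, .protoPayload.methodName, .protoPayload.requestMetadata.callerIp] | @csv' gce.json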

Which results in this output. Notice that some lines have empty fields. This is due to events being logged at the start and the completion of an action.

This example displays the date, user email, action and IP address.


March 3, 2019 – Day #3 – Stackdriver Logs, PubSub & Cloud Functions

Alerting

Manually looking through or searching logfiles is not much fun. The boredom can make you overlook the obvious when there is too much information to review.

Today we will enable Stackdriver export, create a Pub/Sub topic and create a Cloud Function. These combined services will automate monitoring events that involve service accounts. I will just create a simple example that you can expand upon for more serious monitoring of Stackdriver logging events.
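As a rough sketch of the plumbing with gcloud (all names are hypothetical, the Cloud Function body is omitted, and note that the sink’s writer identity must be granted the Pub/Sub Publisher role on the topic):

gcloud pubsub topics create sa-activity
gcloud logging sinks create sa-activity-sink \
    pubsub.googleapis.com/projects/[PROJECT_ID]/topics/sa-activity \
    --log-filter='resource.type="service_account"'
gcloud functions deploy processSaEvent --runtime nodejs8 --trigger-topic sa-activity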

Google Cloud – The Master Series

Date created: February 27, 2019
Last updated: March 2, 2019

Introduction

This month I completed two beta Google certification exams (Security, Network) with another exam scheduled for March 11th. In preparing for these exams I realized that it is important to master a number of GCP topics / subjects. These topics become your core that you rely upon to pass these certification exams.

Something interesting happened when I published my article on Google Security. My website went nuts. Traffic increased dramatically (100x) and there is a hit to this article every couple of seconds, 24 hours a day. Totally took me by surprise. This dramatic traffic increase tells me that a lot of people want technical information about GCP. This is why I am creating this series of articles.

This article series will follow the pattern that I created for my preparation for the Google Professional Cloud Security Engineer Certification Beta exam. I will create an article that deep dives into one topic / subject with a list of materials and the schedule that I used to master that one single item. I will not be covering global topics, such as VPC. Instead I will focus on one well-defined item such as Firewalls for App Engine or SSH for Compute Engine. My goal is to master well-defined topics, very deeply and broadly across GCP, one at a time. Add 50 or 100 topics together and you become an expert – note, I am just guessing at the number. I am already very proficient with GCP – now my goal is a deep understanding in many areas.

I am a software developer. I approach every service within GCP with a programmer’s eye. I want to know the APIs, how everything plugs together, how to optimize security, performance and pricing. I am not satisfied with only an architect’s view of a service. I usually dig deep into the SDKs, API calls, libraries and documentation. I turn on gcloud debugging and study the API calls and data formats.  I write code, test and experiment – over and over.

One item that I have noticed when I deep dive into one specific topic: I am often taken all over GCP as I see how everything fits together and affects each other. Take SSH for example. I quickly got into project and instance metadata, IAM security, RSA keypairs, etc., etc. No topic in GCP is simple. The extent of their services amazes me very often.

The source for my articles will be real-world items from my normal consulting on Google Cloud plus items that I notice from the Google certification exams and Stack Overflow.

Released articles:

Upcoming articles ideas:
  • Google Cloud – Compute Engine System Service Account
  • Identity Aware Proxy Authorization (Python)
  • Google Cloud Billing
  • SSH and Compute Engine

There will be many more to come. If you have an idea that you would like to see an article cover, send me an email: blog@jhanley.com.

Google Professional Cloud Network Engineer Certification

Date created: February 10, 2019
Last updated: March 13, 2019

Update March 13, 2019. I passed this certification.

Update February 21, 2019. Bad news. My work schedule has been so long each day that I have not been able to study at all during the evenings. That is the reason that this article has not been updated. I will be taking the exam tomorrow with no preparation. I will have to trust my knowledge of and experience with Google Cloud Platform. I cannot change the date as this is the last day that the exam is available. This is just the reality that sometimes you are required to put your job and your customers first. I will post an update after taking the exam.

On January 24, 2019, Google announced two new professional level certifications. The security certification beta runs from February 8 to February 28, 2019. The networking certification beta runs from February 2 to February 23, 2019. Those are the dates that are available in my area (Seattle).

I have decided to take the network certification beta also. I signed up with Kryterion to take the exam on February 22, 2019. The exam is four hours long and should be very challenging. This blog will track my progress as I prepare for this certification and the results. I am also taking the Professional Cloud Security Engineer exam tomorrow, which leaves only ten days to prepare, so this should be interesting given that I do not yet have a Google Cloud certification. I do have both AWS specialty certifications for security and networking. My other certifications are listed here. Read my article on preparing for the Professional Cloud Security Engineer here.

I have worked with Google Cloud off and on again for about eight years. However, starting in 2018 I started to work with Google Cloud a lot. Google’s services and market really started to take off last year. I think the reason is due to their excellent big data platforms. I feel that I have a good chance of passing the network certification exam.

Special Mention: Google Qwiklabs

As part of my goal to really dig deep into Google Cloud Platform, I used Google Qwiklabs almost every day for four months. I continue to use Qwiklabs often. During the past four months, I completed 20 Quests and over 215 labs. The ability to follow predesigned labs for practice is very useful when combined with consistent daily study sessions. Qwiklabs has great value and protects you from unplanned expenses when you forget and leave resources running in the cloud. Just like cooking in the kitchen, following a recipe gives you a foundation to build upon and create your own recipes.

Link to my profile on Google Qwiklabs.

Google Cloud Developer’s Cheat Sheet

Very nice poster, courtesy of Greg Wilson, that shows the large breadth of Google Cloud Platform. Project home page.

Networking section from Greg’s poster:

February 12, 2019 – Day #1 – Preparation Start:
  • A thorough review of the certification exam guide. I printed this document and then checked off every area that I was not at an advanced level with. I then narrowed down this list to 10 areas to focus on, one per day.
Documents Studied:
Notes:
  • Google is offering a webinar Networking in Google Cloud Platform: Getting Started and Getting Certified on February 22, 9:45 AM to 10:30 AM PST.
  • A Cloud Guru offers the course Certified Advanced Networking – Specialty. This course is designed for AWS, but a major part of the course is just networking that is cloud agnostic. This was one of the materials that I used for the AWS certification. You should know in detail 75% of this course for GCP. This is my favorite course for cloud networking and I highly recommend that you study a large part of this course. I will repeat large sections of this course as I prepare for Google’s network exam.
    • Watch Chapter 2 – Networking Refresher
    • Watch Chapter 4 – Design & Implement Hybrid Networks at Scale
  • My article: Google Cloud Private DNS Zones. Understanding DNS is very important for Google Cloud.
  • Over the past few months I have taken a number of courses by “Loonycorn” authors: Janani Ravi and Vitthal Srinivasan. They are specializing in Google Cloud Platform and Google Big Data. Their courses are very good. Last month they released a number of new Google courses on Pluralsight. Click each author’s name above to see the course list. I will be including these courses in my studies: Leveraging Advanced Networking and Load Balancing Services on the GCP and Designing Scalable Data Architectures on the Google Cloud.
  • I highly recommend Pluralsight for training courses. Their courses are very consistent with a high level of quality. So much so that I always have a paid subscription. Most months I pick a random course from Pluralsight just to learn something new or to refresh something old.
Total time spent today: about 3 hours.
February 13, 2019 – Day #2 – Data Center to GCP
Documents Studied:
Videos Watched:
Interesting new term: ALTS

Google’s Application Layer Transport Security (ALTS) is a mutual authentication and transport encryption system developed by Google and typically used for securing Remote Procedure Call (RPC) communications within Google’s infrastructure. ALTS is similar in concept to mutually authenticated TLS but has been designed and optimized to meet the needs of Google’s datacenter environments.

Interesting new product / technology: BoringSSL

BoringSSL is a fork of OpenSSL that is designed to meet Google’s needs. Currently BoringSSL is the SSL library in Chrome/Chromium, Android (but it’s not part of the NDK) and a number of other apps/programs.

Interesting Points:
  • Google Andromeda
    • Google Cloud customers now enjoy significantly improved intra-zone network latency with the release of Andromeda 2.1, a software-defined network (SDN) stack that underpins all of Google Cloud Platform (GCP). The latest version of Andromeda reduces network latency between Compute Engine VMs by 40% over Andromeda 2.0 and by nearly a factor of 8 since we first launched Andromeda in 2014.
  • Legacy Networks
    • You can still create a legacy network, which does not have any subnets. Legacy networks have a single global IP range. You cannot create subnets in a legacy network or switch from legacy to auto or custom VPC networks. Documentation here and here.
  • Network performance is related to VM core count.
  • Private Google Access enables VM instances with only internal (private) IP addresses (no external IP addresses) to reach the public IP addresses of Google APIs and services. You enable Private Google Access at the subnet level. When enabled, instances in the subnet that only have private IP addresses can send traffic to Google APIs and services through the default route (0.0.0.0/0) with a next hop to the default Internet gateway.
  • A VPC Service Controls service perimeter controls access to Google APIs and services. To enable Private Google Access within a service perimeter, your VM instances must send requests to restricted.googleapis.com instead of *.googleapis.com. Enabling this feature provides access to supported Google APIs and services.
  • Jupiter Data-center Fabric
  • B4 Backbone
    • Datacenter to Datacenter
    • SDN platform
    • High throughput (multiple terabits)
  • B2 Backbone
    • Google to Internet
    • Protected network with high SLA
    • Traffic is carried as far as possible on Google (cold potato)
  • Espresso
    • SDN to Peering Edge
    • Faster, Low-latency access to Google Services – best availability & user experience
    • Dynamically choose from where to serve customers based on end-to-end requirements.
  • Access Transparency
    • Trust is paramount when choosing a cloud provider. We want to be as open and transparent as possible, allowing customers to see what happens to their data. Now, with Access Transparency, we’ll provide you with an audit log of authorized administrative accesses by Google Support and Engineering, as well as justifications for those accesses, for many GCP services, and we’ll be adding more throughout the year. With Access Transparency, we can continue to maintain high performance and reliability for your environment while remaining accountable to the trust you place in our service.
Total time spent today: about 4 hours.

To be continued each day as I prepare for the Google Professional Network Engineer Certification beta exam.

 

Google Professional Cloud Security Engineer Certification

Date created: January 30, 2019
Last updated: March 29, 2019
Exam Completed: February 15, 2019

Part 1: Introduction
Part 2: Post Exam Review
Part 3: Daily Study
Part 4: Tips and Advice
Part 5: Final Exam

Update March 29, 2019. Google invited me to participate in creating the final security exam. Jump to Part 5.

Part 1: Introduction

On January 24, 2019, Google announced two new professional level certifications. The security certification beta runs from February 8 to February 28, 2019. The networking certification beta runs from February 2 to February 23, 2019. Those are the dates that are available in my area (Seattle).

I have decided to take the security certification beta now. I signed up with Kryterion to take the exam on February 11, 2019. The exam is four hours long and should be very challenging. This blog will track my progress as I prepare for this certification and the results. I only have ten days to prepare, so this should be interesting given that I do not yet have a Google Cloud certification. I do have both AWS specialty certifications for security and networking. I plan to also take the Google Cloud networking exam at some point.

Update (Feb-10-19)- I scheduled the Professional Cloud Network Engineer exam beta for February 22, 2019. Might as well knock out that certification attempt also. I have started another article for the network exam.

Update (Feb-10-19) – Due to the bad weather in Seattle, the testing center will be closed tomorrow. I might be able to schedule for Wednesday pending the weather report tomorrow.

Update (Feb-12-19) – Rescheduled exam for Feb-15-2019. I had a hard time getting another exam scheduled. The testing center is booked. Kryterion told me that the number of people taking GCP exams has exploded. This is very interesting and matches what I have seen with customer interest in Google Cloud.

My background has extensive work in security. This includes software development and forensics going back over two decades. I also have two other professional level certifications in Security. One with AWS and the other with CWNP. My other certifications are listed here.

I have worked with Google Cloud off and on again for about eight years. However, starting in 2018 I started to work with Google Cloud a lot. Google’s services and market really started to take off last year. I think the reason is due to their excellent big data platforms. I feel that I have a good chance of passing the security certification as I have really focussed on Google Cloud security and I have written a number of articles about Google Cloud on this website.

I am creating a list of training materials that I plan to use to prepare and then in detail for each day. After I take the certification, I will write a review  on how well these materials fit into the beta exam.

Note: In the past I have watched a number of excellent security videos by Google. I will list each one here after I find them again. I plan to watch each one again.

I do not believe in just watching videos. You can watch a video on golf, but you have to hit the range to practice over and over again. Same thing with any IT service. Study the fundamentals, read the FAQ and documentation and then design these services into something that you can test against.

Study, practice and repeat. Every year new services are enhanced / released. New technologies are announced. New vulnerabilities are discovered. Building secure systems is not something you guess at. You either know how to build security into your designs or you don’t.

Update: March 29, 2019:

Special Mention: Loonycorn

Over the past few months I have taken a number of courses by “Loonycorn” authors: Janani Ravi and Vitthal Srinivasan. They are specializing in Google Cloud Platform and Google Big Data. Their courses are very good. Last month they released a number of new Google courses on Pluralsight. Click each author’s name above to see the course list.

Special Mention: Google Qwiklabs

As part of my goal to really dig deep into Google Cloud Platform, I used Google Qwiklabs almost every day for four months. I continue to use Qwiklabs often. During the past four months, I completed 20 Quests and over 215 labs. The ability to follow predesigned labs for practice is very useful when combined with consistent daily study sessions. Qwiklabs has great value and protects you from unplanned expenses when you forget and leave resources running in the cloud. Just like cooking in the kitchen, following a recipe gives you a foundation to build upon and create your own recipes.

Link to my profile on Google Qwiklabs.

Google Cloud Developer’s Cheat Sheet

Very nice poster, courtesy of Greg Wilson, that shows the large breadth of Google Cloud Platform. Project home page.

Identity and Security section from Greg’s poster:

Special Mention: Exam Reviews from other test takers:

If you have taken this exam send me an email with a link to your article and I will put your review on this list: blog@jhanley.com

Part 2: Post Exam Review

The test is 133 questions with four hours to complete it.

Overall, this test is very good quality with only a few minor things here and there. Nothing that would affect your score. However, this test is much too long. After 80 questions, I was tired and had problems staying 100% focused. This exam is very hard. You will really need to know Google services and Google best practices for security. Read those whitepapers.

My tips after taking the exam. These tips might save you from failing.

  1. Study the security whitepapers. Further down is a list of whitepapers to study.
  2. Study Google Cloud networking. Understand VPCs, Firewalls, Peering, etc.
  3. Study IAP.
  4. Study Active Directory integration with IAP.
  5. Study DLP.
  6. Study KMS and Encryption.

The number one item that surprised me was the number of networking related questions. I have MVP awards in both security and networking, so this really should not surprise me that Google considers networking a vital component in security. I am taking the Google Professional Cloud Network Engineer Certification on February 22. At that time I will know which one to take first (networking or security). Just plan to take both so that you have that advanced level of knowledge covering both.


Update March 13, 2019

I passed the Network Engineer exam.

Update Feb 27, 2019

I took the Network Engineer exam on the 22nd. The network exam is a bit harder and there is not much overlap of security knowledge on the network exam. There is some overlap of network knowledge on the security exam. However, the network knowledge required for the network exam is much deeper and more complex with a big focus on hybrid technologies (VPNs, Interconnect, etc.) that is not present on the security exam. Both exams are challenging and require solid knowledge of GCP services. You can pass the network exam without solid GCP security knowledge; however, you will not pass the security exam without solid GCP network knowledge.

Which one should you take first? This is a toss-up. I love both topics so my interest level for both is very high every day. I recommend a program where you prepare for both exams interleaving security and networking. Take the security exam first, then polish your hybrid networking skills and then take the network exam.


I was right in that encryption and KMS would be all over the exam. Make sure that you understand DEK, KEK, CMEK, CSEK and KMS. There was only one question where HSM was mentioned.

I practiced hard with Identity-Aware Proxy (IAP). I wish I had done more homemade labs on IAP. There were a lot of questions on IAP.

Google places a high importance on understanding identity in this exam. Study each identity related service including federation (AD -> Google Identity) and how to synchronize identity providers. Make sure that you know G Suite enough to set everything up. I was surprised with G Suite being on the exam. I was prepared at a basic level as I use G Suite for my domain and email.

IAM was everywhere. Make sure that you understand IAM at the Organization and Project level.

Another area that surprised me was the CLI gcloud or rather the lack of gcloud. I had expected a lot of questions that would require solid knowledge of command line statements. There were none.

There were a number of questions on DLP. I was not well prepared for this service. I knew the basics but not enough for this exam.

The exam had numerous questions on security best practices. I wish that I would have spent more time drilling into the security whitepapers.

My recommendations which I am reading again post exam:

In this article I documented a lot of the material that I studied to prepare. I recommend that you go through every video, document and whitepaper, then add a lot more to my list. Google’s security exam is very broad and deep. The “Professional” part really means it. The Google Security exam is harder than the AWS Security Specialty.

What would I do differently now that I have taken the exam? I would allocate more time, maybe four weeks instead of two and study the whitepapers more in-depth. I would spend more time on Active Directory with Google Identity. I would spend more time on IAP. I would spend more time on DLP.

Part 3: Daily Study
January 29, 2019 – Day #1 – Preparation Start:
  • A thorough review of the certification exam guide. I printed this document and then checked off every area that I was not at an advanced level with. I then narrowed down this list to 10 areas to focus on, one per day.
  • I searched the Internet for training materials for Google Cloud Security. The list of materials available is listed above. I did a cram session on Linux Academy’s security essentials course. This is an excellent introductory level course and made for a good review of the basics. I made notes of areas to focus on. I spent a lot of time in the Google console making sure that I knew all details well.
  • Total time spent: the entire day – about 12 hours.
  • I only plan to spend about two hours per day from now until the exam. However, I like to start with a cram session so that I have a solid understanding of my weak areas. Then I can create a realistic plan to succeed.
January 30, 2019 – Day #2 – KMS, HSM and Encryption

Today my focus is on KMS and encryption in general for GCP.

Encryption and the management of encryption permissions and keys is a very important topic for cloud services. I expect that this topic will be everywhere on the Google exam. As I prepare for certification I will review the encryption features for each GCP service in detail.

Documents Studied:
Videos Watched:
Interesting new term: ALTS

Google’s Application Layer Transport Security (ALTS) is a mutual authentication and transport encryption system developed by Google and typically used for securing Remote Procedure Call (RPC) communications within Google’s infrastructure. ALTS is similar in concept to mutually authenticated TLS but has been designed and optimized to meet the needs of Google’s datacenter environments.

Interesting new product / technology: BoringSSL

Tip: This is on the exam

BoringSSL is a fork of OpenSSL that is designed to meet Google’s needs. Currently BoringSSL is the SSL library in Chrome/Chromium, Android (but it’s not part of the NDK) and a number of other apps/programs.

Action items requiring more study / work before the exam:
  • Istio – Linux Academy shows Istio being used for VM to VM traffic. Their slide showed this as user configurable.
Interesting Points:
  • All data within GCP is encrypted. This means “At Rest” and “In Transit”.
  • Encrypted data is broken into chunks and distributed across data centers with unique keys.
  • Each chunk uses its own unique key for encryption.
  • The key for each chunk is itself encrypted by another key.
  • HSM is not available on global keyrings.
  • Google supports public-key wrapped CSEK (Customer Supplied Encryption Key) also called RSA key wrapping. Details here and here.
Total time spent today: about 4 hours.
January 31, 2019 – Day #3 – Encryption & Services

Today my focus is on encryption and individual Google services. My first task is to create a lab practicing with CMEK (Customer Managed Encryption Keys) for BigQuery.

Tip: BigQuery was not on the exam. CMEK was.

Documents Studied:
Videos Watched:
Interesting Points:
  • BigQuery does not support the global region for KeyRings or keys.
  • You need the BigQuery service account to set up permissions for KMS. The easiest way is via the CLI:
    • bq show --encryption_service_account
  • No special arrangements are required to query a table protected by Cloud KMS.
  • You can change the encryption key using the bq update command. You can also change the encryption key when copying a table using the bq cp command (see the sketch after this list).
  • You can remove the encryption key using the bq cp command copying the table to itself.
  • BigQuery does not automatically rotate a table encryption key when the Cloud KMS key associated with the table rotates. Existing tables continue to use the key version with which they were created. New tables will use the current key version.
  • BigQuery supports the SQL command SESSION_USER() which returns the email address of the user running the query. Documentation here.
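A sketch of re-encrypting a table in place with bq cp, assuming the documented --destination_kms_key flag (all names are placeholders):

bq cp -f \
    --destination_kms_key projects/my-project/locations/us/keyRings/my-ring/cryptoKeys/my-key \
    my_dataset.my_table my_dataset.my_table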
Total time spent today: about 2 hours.
February 1, 2019 – Day #4 – Network Security

Section 2 of the exam guide is “Configuring Network Security”. Today I will start an in-depth review of Google Cloud Networking.

Tip: Lots of networking questions on the exam.

Documents Studied:
Videos Watched:
Interesting Points:
  • Google Andromeda
    • Google Cloud customers now enjoy significantly improved intra-zone network latency with the release of Andromeda 2.1, a software-defined network (SDN) stack that underpins all of Google Cloud Platform (GCP). The latest version of Andromeda reduces network latency between Compute Engine VMs by 40% over Andromeda 2.0 and by nearly a factor of 8 since we first launched Andromeda in 2014.
  • Legacy Networks
    • You can still create a legacy network, which does not have any subnets. Legacy networks have a single global IP range. You cannot create subnets in a legacy network or switch from legacy to auto or custom VPC networks. Documentation here and here.
  • Network performance is related to VM core count.
  • Private Google Access enables VM instances with only internal (private) IP addresses (no external IP addresses) to reach the public IP addresses of Google APIs and services. You enable Private Google Access at the subnet level. When enabled, instances in the subnet that only have private IP addresses can send traffic to Google APIs and services through the default route (0.0.0.0/0) with a next hop to the default Internet gateway.
  • A VPC Service Controls service perimeter controls access to Google APIs and services. To enable Private Google Access within a service perimeter, your VM instances must send requests to restricted.googleapis.com instead of *.googleapis.com. Enabling this feature provides access to supported Google APIs and services.
  • Jupiter Data-center Fabric
  • B4 Backbone
    • Datacenter to Datacenter
    • SDN platform
    • High throughput (multiple terabits)
  • B2 Backbone
    • Google to Internet
    • Protected network with high SLA
    • Traffic is carried as far as possible on Google (cold potato)
  • Espresso
    • SDN to Peering Edge
    • Faster, Low-latency access to Google Services – best availability & user experience
    • Dynamically choose from where to serve customers based on end-to-end requirements.
  • Access Transparency
    • Trust is paramount when choosing a cloud provider. We want to be as open and transparent as possible, allowing customers to see what happens to their data. Now, with Access Transparency, we’ll provide you with an audit log of authorized administrative accesses by Google Support and Engineering, as well as justifications for those accesses, for many GCP services, and we’ll be adding more throughout the year. With Access Transparency, we can continue to maintain high performance and reliability for your environment while remaining accountable to the trust you place in our service.
Today’s Summary:

The more that I prepare for the security exam the more that I also want to take the networking exam. I am very experienced at an advanced networking level both for data centers and cloud infrastructures. Tomorrow I will review the networking certification exam guide and consider scheduling that exam also.

Update – I scheduled the Professional Cloud Network Engineer exam beta for February 22, 2019. Might as well knock out that certification attempt also.

Total time spent today: about 4 hours.
February 2, 2019 – Day #5 – Network Security

Section 2 of the exam guide is “Configuring Network Security”. Today I will continue my in-depth review of Google Cloud Networking.

Documents Studied:
Videos Watched:
Interesting Points:
  • DSR (Direct Server Return)
    • When the packet arrives at the selected service endpoint, it is decapsulated and consumed. The response, when ready, is put into an IP packet with the source address being the VIP and the destination address being the IP of the user. We use Direct Server Return (DSR) to send responses directly to the router so that Maglev does not need to handle returning packets, which are typically larger in size. This paper focuses on the load balancing of incoming user traffic. The implementation of DSR is out of the scope of this paper. Link
  • Network Load Balancer (NLB)
    • Regional only
    • Layer 4
    • Does not support IPv6
    • No traffic routing based on L7
    • No TLS termination/offload
    • Client IP preserved – does not need x-forwarded-for
    • IP based session affinity
Today’s Summary:

There are a lot of details to pay attention to for Google load balancers. Nothing really related to security, and today’s preparation is better suited for the networking certification. Time well spent as I learned a few subtle features.

Total time spent today: about 2 hours.

 

February 3, 2019 – Day #6 – Network Security

Today I will split the study time in two. The first half on network security. Then work on section “4.3 Monitoring for security events” which includes Cloud Security Scanner and Forseti.

Tip: Forseti is on the exam.

Documents Studied:
Videos Watched:
Interesting Points:
  • SSL proxy can handle HTTPS, but this is not recommended. Link. I think that a lot of people confuse SSL with HTTPS.
  • SSL Proxy Load Balancing supports ports 25, 43, 110, 143, 195, 443, 465, 587, 700, 993, 995, 1883, and 5222. I am not sure if knowing all port numbers is important. (Tip: port numbers were not on the exam). There were a few that I had to look up:
    • 195 – DNSIX Network Level Module Audit
    • 465 – TCP: URL Rendezvous Directory for SSM (Cisco protocol)
    • 465 – UDP: Authenticated SMTP over TLS/SSL (SMTPS)
    • 700 – Extensible Provisioning Protocol (EPP)
    • 1883 – MQTT (formerly MQ Telemetry Transport)
    • 5222 – Extensible Messaging and Presence Protocol (XMPP) client connection
  • Better utilization of the virtual machine instances — SSL processing can be very CPU intensive if the ciphers used are not CPU efficient. To maximize CPU performance, use ECDSA SSL certs, TLS 1.2 and prefer the ECDHE-ECDSA-AES128-GCM-SHA256 cipher suite for SSL between the load balancer and your instances. Link. Note the wording “between the load balancer and your instances”.
  • Organization policies support restricting at lower levels in hierarchy. This contrasts with IAM policies where inheritance can only expand lower in the hierarchy. Video at 14:30.
Total time spent today: about 4 hours.

 

February 4, 2019 – Day #7 – Cloud Storage Security

Today I will focus on Cloud Storage security (IAM and ACLs).

Tip: Study IAM and ACLs in detail for the exam.

Documents Studied:
Videos Watched:
  • None
Interesting Points:
  • GCS: You cannot grant discrete permissions for reading or writing ACLs or other metadata. To allow someone to read and write ACLs, you must grant them OWNER permission. Link.
  • This table lists the different naming conventions used by Google APIs:
  • Google Cloud Storage uses scope, also called grantee, to specify who it is that has a given permission. OAuth uses scopes to define permissions.
  • Special identifier for all Google account holders:
    • This special scope identifier represents anyone who is authenticated with a Google account. The special scope identifier for all Google account holders is allAuthenticatedUsers.
  • Special identifier for all users:
    • This special scope identifier represents anyone who is on the Internet, with or without a Google account. The special scope identifier for all users is allUsers.
Total time spent today: about 1 hour.
February 5-7, 2019 – Day #8,9,10 – No Study

I did not have time during these days to study. My work, which currently is GCP, had me working long hours.

February 8, 2019 – Day #11 – Google Cloud Endpoints

Google Cloud Endpoints is a service that I have not worked with previously. I decided that API security is very important today as just about everything that we do with customer facing systems involves APIs of some sort. Time to learn about another Google Cloud service.

Tip: Only one question on Cloud Endpoints. Focus on IAP for the exam.

Documents Studied:
  • Google Cloud Endpoints documentation home page.
    • I started with the home page and read just about everything.
Videos Watched:
  • None
Labwork:
  • Qwiklabs: Cloud Endpoints: Qwik Start
    • This is a good introductory lab for Cloud Endpoints. Highly recommended as a first step. Complete this lab and then go back and hit the documentation. Then complete the next tutorial.
  • Getting Started with Endpoints on Compute Engine with Docker. Link.
Interesting Points:

It took a lot of effort to really understand Cloud Endpoints and how to put them to work. This is an interesting service that appears simple on the surface but has lots of details to master.

Today’s Summary:

Google Cloud Endpoints is a very interesting service that has good potential to both protect your external-facing APIs and improve the standardization of your APIs. Supported authorization methods include API Keys, Firebase Authentication, Auth0, Google ID Tokens, and JWTs (Service Accounts).

Total time spent today: about 4 hours.
February 9, 2019 – Day #12 – Google Cloud API Security

Today I am continuing on API security. I don’t have much time this evening as we had a power failure in Seattle that lasted until 6 PM.

Documents Studied:
  • None
Videos Watched:
Today’s Summary:

I lost all day due to the power failure. I did spend time doing a mental review of everything so far, so not all was lost.

Total time spent today: about 1 hour.
February 10, 2019 – Day #13 – Google Cloud Security Review

Today is the last day before the exam. Today I will spend all day reviewing. I have created a list of topics to review. At this point I am very comfortable taking the exam tomorrow based upon the certification exam guide.

Items that I will review today:

Tip: I put an asterisk next to the items that are important for the exam.

    • *Cloud Identity
    • *Google Cloud Directory Sync
    • *Cloud Security Scanner
    • *Cloud Interconnect
    • *Forseti
    • *VPC Peering
    • *Shared VPC
    • *Private Access
    • *DNSSEC
    • *IAP
    • IPSEC
    • Enclave computing
Videos Watched:
Labwork:

Tip: Cloud Functions was not on the exam. VPC Peering is on the exam.

  • Qwiklabs: Controlling Access to Google Cloud Functions
    • This is a very good lab that shows how to use Google service accounts to create an OAuth token to authorize Cloud Function requests. This is primarily service-to-service authorization. I plan to enhance this lab to use Google Accounts for User authorization at a later date and publish this as an article here on my blog.
  • Qwiklabs: VPC Network Peering
  • Qwiklabs: Network Performance Testing
  • Qwiklabs: Network Tiers – Optimizing Network Spend

My next post should be a review of taking the actual exam.

Total time spent today: all day.
February 11, 2019 – Day #14 – Exam Day

I am not taking the exam today. The testing center is closed due to the bad weather we are having in Seattle. I will update later once I can get a date scheduled. In the meantime I will now prepare for the network certification beta exam. Security exam rescheduled for February 15, 2019.

See below for the real exam day.

February 13, 2019 – Extra Study

Since my exam has been delayed to Friday due to weather, I will continue studying. For the most part I will study some of the less commonly used features to round out my knowledge.

Documents Studied:
Videos Watched:
February 15, 2019 – Exam Completed

Last night, I ate my favorite food, tacos, and went to bed early. No study yesterday. I got up early and went out for a good breakfast (Denver omelet and hash browns). Arrived at the testing center about 30 minutes early, completed my paperwork and began the test at 11:45 AM. Finished the test 2 hours 10 minutes later.

Did I pass? Yes, I am confident that I passed the exam. I had no problems in any area except for DLP and Active Directory + IAP. I completed the exam in almost half the allocated time. I should learn the exam results once the beta period closes, which I think is February 28.

In summary, I highly recommend that everyone who takes Google Cloud Platform seriously prepare for and take this exam. The security knowledge required to properly and safely manage cloud services and applications is mandatory today. This exam is very broad in respect to Google services and technologies. Spend the time to properly prepare. Cramming for this exam will probably result in failure or a low score.

Good luck on your journey in the cloud.

Tips & Advice
  • Learn the Google Cloud Services in depth before preparing for this certification. You should already be at the Google Cloud Professional Architect level in knowledge and experience.
  • Do not expect to cram for this exam and succeed.
  • Set a date and pay for the exam. You can always change the exam date. However, by actually paying for the exam, I find that I get focussed quickly.
  • Do not worry about failing the certification exam (money aside). The experience will help you really understand your weak points.
  • Your goal should be polishing your knowledge on the certification subject and not cramming to pass. You should already know GCP well before preparing for the security exam. So many cloud engineers and architects that I interview have two or three certifications and cannot remember the important details three months later. When I drill down into their exam preparation I find that they often crammed eight hours a day and then just barely passed the exam without really mastering the certification. There is no replacement for experience. Certifications are great but terribly embarrassing when you cannot apply what you should know.
  • Set aside consistent, everyday study and practice sessions: four or five days per week.
  • You do not need to study the day before the exam. Take a break and let your mind rest and consolidate all those nerve connections that you built.
  • It can take 20 minutes to get into deep concentration. Try to set aside two to three hours each study day for this exam. You will need a quiet place with no disturbances. Tell the kids that you are doing homework preparing for a final exam.
  • Communicate to your family and friends your goal. They will understand that your time will be limited for a couple of weeks. Give them a hint to throw you a dinner or party when you take the exam even if you fail.
  • Stop when you get tired. It is hard to be focussed and maintain good memory when you are tired. Note: I am not saying stop when you are lazy. Dedication and focus requires will power. Know your limits and the difference between tired (fatigue) and lazy.
  • Electronic notes are OK. Handwritten notes on paper reinforce longer-term memory. When I watch a training video, I stop fifty or a hundred times and make little detail notes. I repeat sections sometimes two or three times. I put an asterisk next to items I must study in more detail. I go back over my notes the next day and hit those asterisks.
  • Exercise, good eating and plenty of rest are critical items during your study periods (and all the time). Your comprehension and memory will be much higher when your body is functioning well.
  • I go into a mode I call “total absorption” when I am preparing for a certification. I put training videos on my iPhone and play them while driving (listen only), sitting at lunch, etc. I increase my exercise by walking a lot and listening to recorded classes, YouTube, etc while walking. I skip lots of other little things, to become totally focussed on the certification subject. I think and repeat in my mind everything that I am studying. I keep reviewing the exam objectives over and over. I just become one with the subject matter.

To give you a real comparison, I consider certification the final steps in my mastery of Google Cloud Platform. I have been working with GCP for around 8 years (off and on again). I wrote a C++ SDK for Google Cloud Storage as my first project for GCP. I consult in Security, Networking and Big Data for GCP. For the past five months I have been studying GCP in detail to begin preparation for certification. I did not use certification as my method to learn GCP. My depth of knowledge of GCP is very broad. I am #1 every month, #2 for total questions answered and #7 of all time for GCP on Stack Overflow: link.

What I am trying to say is “Do not be in a hurry to certify, learn Google Cloud Platform well and then certify”.

Part 5: Final Exam

After I took the security certification beta exam, Google contacted me to let me know I passed and that my score was very high. Google asked me to participate in creating the final security certification exam.

I am under NDA for this process. I will add that Google Cloud is using science instead of raw scores to determine passing grades. This is an interesting comparison to other cloud vendors. For the person taking the test, I think that this is a better method and gives more importance to experience over raw memorization.

The exam has been reduced from 113 questions to 40. The beta exam was a monster covering huge areas of Google Cloud Platform, G Suite and security in general.  The final exam is now reasonable in knowledge and experience required.

One tip: this exam now tests for your experience and understanding of Google Cloud security. This is not an exam where you can watch a few video courses, memorize facts and then pass. The exam is designed to test for three years of security experience and at least one year of GCP experience.

The final exam will be available at Google Next ’19 for scheduling.

Google OAuth 2.0 – Testing with Curl – Refresh Access Token

In my previous article on how to test Google OAuth 2.0 flows from the command line I showed how to generate Google OAuth 2.0 Access Token, Refresh Token and ID Token.

In this article I will show how to refresh an Access Token.

You will need your Client ID, Client Secret and Refresh Token.

In this example, the Client ID and Client Secret are stored in the Google secrets file /config/client_secrets.json. The Refresh Token is stored in the file refresh.token. The refresh.token file was created by curl_oauth.bat from my previous article.

Windows Batch Script:
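A sketch of the script, assuming jq.exe is on the PATH and a client_secrets.json with Google’s standard top-level key "installed" (a web-type client uses .web instead):

@echo off
REM Read the Client ID and Client Secret from the Google secrets file
for /f "delims=" %%i in ('jq -r ".installed.client_id" \config\client_secrets.json') do set CLIENT_ID=%%i
for /f "delims=" %%i in ('jq -r ".installed.client_secret" \config\client_secrets.json') do set CLIENT_SECRET=%%i

REM Read the Refresh Token saved by the previous article's script
set /p REFRESH_TOKEN=<refresh.token

REM Exchange the Refresh Token for a new Access Token (and ID Token)
curl -s ^
 -d client_id=%CLIENT_ID% ^
 -d client_secret=%CLIENT_SECRET% ^
 -d refresh_token=%REFRESH_TOKEN% ^
 -d grant_type=refresh_token ^
 https://www.googleapis.com/oauth2/v4/token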

The output from https://www.googleapis.com/oauth2/v4/token looks like this:
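The values below are truncated placeholders; a real response has this general shape:

{
  "access_token": "ya29.Glu...(truncated)",
  "expires_in": 3600,
  "scope": "https://www.googleapis.com/auth/cloud-platform",
  "token_type": "Bearer",
  "id_token": "eyJhbGciOiJSUzI1NiIs...(truncated)"
}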

Notice the new Access Token and ID Token.

In summary to refresh a Google OAuth 2.0 Access Token requires three items:

  1. Client ID
  2. Client Secret
  3. Refresh Token

However, in order to obtain a Refresh Token, the original OAuth 2.0 authorization request must have included access_type=offline. To force Google to issue a new Refresh Token on a repeat authorization, also add prompt=consent.

 

Google OAuth 2.0 – Testing with Curl – Version 2

If you have ever wanted to test Google OAuth 2.0 flows from the command line you will like this short article.

This article is the second version. I wrote a previous article on using curl, but that version did not have a custom webserver to handle the OAuth callback. This version includes a webserver to automate the entire process. Read the first version for an introduction on using curl with OAuth.

Also see my next article on how to refresh an access token:

Google OAuth 2.0 – Testing with Curl – Refresh Access Token

This article is for Windows Command Prompt users but should be easily adaptable to Linux and Mac also.

You will need your Google Client ID and Client Secret. These can be obtained from the Google Console under APIs & Services -> Credentials. In the following example code, these are stored in the file /config/client_secrets.json

These examples also use the program jq for processing the Json output. You can download a copy here.

In the following example, the Scope is cloud-platform. Modify to use the scopes that you want to test with. Here are a few scopes that you can test with:

OAuth 2.0 Scopes for Google APIs

Note that in this example the code uses three scopes. The last two are for the Client ID Token.

Details:
  1. Copy the following statements to a Windows batch file.
  2. Modify to fit your environment.
  3. Modify the script for the browser that you want to use.
  4. Run the batch file.
  5. A browser will be launched.
  6. The browser will go to https://accounts.google.com where you can complete the Google OAuth 2.0 authentication.
  7. The script will complete the OAuth 2.0 code exchange for a Token.
  8. The Token will be displayed in the command prompt.

The returned Token contains the Access Token, Refresh Token and Client ID Token.

Windows Batch Script:
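A sketch of the script. Assumptions: jq.exe is on the PATH, the OAuth client was created with the redirect URI http://localhost:9000, the scopes are cloud-platform plus openid and email for the ID Token, and webserver.py (below) writes the authorization code to the file code.txt:

@echo off
for /f "delims=" %%i in ('jq -r ".installed.client_id" \config\client_secrets.json') do set CLIENT_ID=%%i
for /f "delims=" %%i in ('jq -r ".installed.client_secret" \config\client_secrets.json') do set CLIENT_SECRET=%%i

set "SCOPE=https://www.googleapis.com/auth/cloud-platform%%20openid%%20email"
set "REDIRECT_URI=http://localhost:9000"
set "AUTH_URL=https://accounts.google.com/o/oauth2/v2/auth?client_id=%CLIENT_ID%&redirect_uri=%REDIRECT_URI%&scope=%SCOPE%&response_type=code&access_type=offline"

REM Launch the default browser to complete the Google login
start "" "%AUTH_URL%"

REM Wait for the OAuth callback and capture the authorization code
python webserver.py
set /p AUTH_CODE=<code.txt

REM Exchange the authorization code for tokens
curl -s ^
 -d client_id=%CLIENT_ID% ^
 -d client_secret=%CLIENT_SECRET% ^
 -d code=%AUTH_CODE% ^
 -d redirect_uri=%REDIRECT_URI% ^
 -d grant_type=authorization_code ^
 https://www.googleapis.com/oauth2/v4/token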

Web Server Python code (webserver.py):
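A minimal sketch; port 9000 and the file name code.txt are my choices and must match the batch script above:

# Listens for Google's OAuth 2.0 callback, saves the ?code= value to
# code.txt, answers the browser, and then exits.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

class OAuthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        qs = parse_qs(urlparse(self.path).query)
        code = qs.get('code', [''])[0]
        with open('code.txt', 'w') as f:
            f.write(code)
        self.send_response(200)
        self.send_header('Content-Type', 'text/html')
        self.end_headers()
        self.wfile.write(b'Authorization code received. You may close this window.')

server = HTTPServer(('localhost', 9000), OAuthHandler)
server.handle_request()   # handle the single callback, then return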

 

Google Cloud IAM – Listing Projects

This article shows how to display a list of Google Cloud Projects that you have access to list. This article includes two examples in Python that use two different Google Cloud Python libraries. These examples produce the same output as the Google CLI gcloud projects list command.

You can only list projects for which you have permissions to access. This means that you cannot see all projects, only those that you have the rights to access. It also means that you can list projects across accounts: the listing shows every project that the credentials specified in the examples can access. In my examples below I show which scopes are required. I show how to use Application Default Credentials (ADC) and Service Account Credentials (Json file format).

These examples have been tested with Python 3.6 on Windows 10 Professional.

Example 1 using the Python Client Library (services discovery method):
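A sketch, assuming google-api-python-client and google-auth are installed with pip. It uses ADC and the cloud-platform scope; swap in service account credentials if needed:

import google.auth
from googleapiclient import discovery

# ADC locates the credentials; the cloud-platform scope is sufficient
credentials, _ = google.auth.default(
    scopes=['https://www.googleapis.com/auth/cloud-platform'])
service = discovery.build('cloudresourcemanager', 'v1', credentials=credentials)

# Page through the projects, like 'gcloud projects list'
request = service.projects().list()
while request is not None:
    response = request.execute()
    for project in response.get('projects', []):
        print('{}  {}'.format(project['projectId'], project['name']))
    request = service.projects().list_next(request, response)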

Example 2 using the Python Google Cloud Resource Manager API Client Library:
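A sketch, assuming the google-cloud-resource-manager library is installed with pip. Client() uses ADC; use Client.from_service_account_json('service_account.json') for a Json key file:

from google.cloud import resource_manager

client = resource_manager.Client()
for project in client.list_projects():
    print('{}  {}'.format(project.project_id, project.name))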

 

Google Cloud IAM – Member Types

Google Cloud IAM supports a number of member types that can be authorized to access Google Cloud resources. The following member types can be added to Google Cloud IAM to authorize access to your Google Cloud Platform services.

Google IAM Member Types:

  • Google account – individual (me@example.com)
  • Google group – (team@example.com)
  • G Suite domain – (@example.com)
  • Cloud Identity domain – same as G Suite domain without Google services
  • Service account – Json or P12 file for program access
Useful commands:

List current project: gcloud config list project

List all projects: gcloud projects list

List service accounts: gcloud iam service-accounts list

Listing IAM members is more difficult. Roles are assigned to projects. Members are assigned to roles. This command will list everything: gcloud projects get-iam-policy development-123456.

For several gcloud commands such as add-iam-policy-binding you must prefix the member identifier with the type such as: user:, group:, serviceAccount: and domain:.

For example: john@example.com is specified as user:john@example.com.

Google Account

A Google Account is a user name and password that can be used to login to Google applications and Google services. Any email address that is associated with a Google account can be an identity.

The following gcloud command will add the user john@example.com to IAM and assign the role roles/iam.serviceAccountUser
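For example:

gcloud projects add-iam-policy-binding development-123456 --member user:john@example.com --role roles/iam.serviceAccountUser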

After this command (takes about 60 seconds to take effect) the user will be able to list and get details for the project’s service accounts. Change the project development-123456 to match your project.

This command will remove the role from the user.
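The matching remove command:

gcloud projects remove-iam-policy-binding development-123456 --member user:john@example.com --role roles/iam.serviceAccountUser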

Note: You can replace “projects” in the previous commands with “organizations” for organization level commands and inheritance. I will discuss organizations in a future article.

Google Accounts Signup

Google Group

A Google Group is a G Suite Group that includes one or more Google Account members. These members are assigned the same privileges to access Google Cloud services.

The following gcloud command will add the G Suite group storage-admins@example.com to IAM and assign the role roles/storage.admin. Everyone in this group will have full control of buckets and objects.
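For example:

gcloud projects add-iam-policy-binding development-123456 --member group:storage-admins@example.com --role roles/storage.admin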

Google Groups

Google G Suite Domain

A Google G Suite Domain represents all users in a G Suite domain name. This is also called Google Apps Domain.

Google Apps

Google Cloud Identity Domain

Google Cloud Identity is the authentication system from Google G Suite. Cloud Identity manages users, devices and apps without providing Google services.

Cloud Identity

Service Account

A Service Account is a special type of Google account that belongs to your application or virtual machine, instead of to an individual user. Service Account credentials are typically stored in Json files, but can also be accessed through other methods, such as the Compute Engine metadata server.

The following gcloud command will add the service account sa-storage-admin@example.com to IAM and assign the role roles/storage.admin. This service account will have full control of buckets and objects.
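For example (real service account addresses end in .iam.gserviceaccount.com; the address below is illustrative):

gcloud projects add-iam-policy-binding development-123456 --member serviceAccount:sa-storage-admin@development-123456.iam.gserviceaccount.com --role roles/storage.admin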

Understanding Service Accounts

allUsers

The special identifier allUsers is an identifier that represents anyone who is on the internet, including authenticated and unauthenticated users. Note that some GCP APIs require authentication of any user accessing the service, and in those cases, allUsers will only imply authorization for all authenticated users.

Note: in IAM policy bindings, allUsers is specified by itself, without a type prefix such as user: or group:.

Warning: I do not recommend using this member type. There is no security.

allAuthenticatedUsers

The special identifier allAuthenticatedUsers is a special identifier that represents anyone who is authenticated with a Google account or a service account. Users who are not authenticated, such as anonymous visitors, are not included.

Note: in IAM policy bindings, allAuthenticatedUsers is specified by itself, without a type prefix such as user: or group:.

Warning: I do not recommend using this member type. There is no security.

 

Google Cloud – Creating OAuth Access Tokens for REST API Calls

The following example shows several important steps to call Google Cloud APIs in Python without using an SDK. Similar code works in just about any language (C#, Java, PHP, Node.js).

Change the source code with the filename of your service account Json file, your Google Zone and your Project ID.

This example will list the instances in one zone for the specified project. From this example you will know the framework to call almost any Google Cloud API.

This code will show you how to:

  1. Load service account credentials from a Json file.
  2. Extract the Private Key used to sign requests.
  3. Create a JWT (Json Web Token) for Google OAuth 2.0.
  4. Set the Google Scopes (permissions).
  5. Sign the JWT to create a Signed-JWT (JWS).
  6. Exchange the Signed-JWT for a Google OAuth 2.0 Access Token.
  7. Set the expiration time. This program defaults to 3600 seconds (1 hour).
  8. Call a Google API and set the Authorization Header.
  9. Process the returned Json results and display the name of each instance.

Example program in Python 3.x:
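A sketch of the program. Assumptions: the PyJWT and requests packages are installed (pip install pyjwt cryptography requests); the file name, Project ID and Zone are placeholders:

import json
import time

import jwt       # PyJWT
import requests

SA_FILE = 'service_account.json'   # your service account Json file
PROJECT = 'development-123456'     # your Project ID
ZONE = 'us-central1-a'             # your Google Zone
SCOPES = 'https://www.googleapis.com/auth/cloud-platform'
TOKEN_URL = 'https://www.googleapis.com/oauth2/v4/token'

# Steps 1 and 2: load the credentials; the private_key field signs requests
with open(SA_FILE) as f:
    sa = json.load(f)

# Steps 3, 4, 5 and 7: build the JWT claim set and sign it (Signed-JWT / JWS)
now = int(time.time())
claims = {
    'iss': sa['client_email'],
    'scope': SCOPES,
    'aud': TOKEN_URL,
    'iat': now,
    'exp': now + 3600,   # expiration: 3600 seconds (1 hour)
}
signed_jwt = jwt.encode(claims, sa['private_key'], algorithm='RS256')

# Step 6: exchange the Signed-JWT for an OAuth 2.0 Access Token
resp = requests.post(TOKEN_URL, data={
    'grant_type': 'urn:ietf:params:oauth:grant-type:jwt-bearer',
    'assertion': signed_jwt})
resp.raise_for_status()
access_token = resp.json()['access_token']

# Step 8: call a Google API with the Authorization header
url = ('https://www.googleapis.com/compute/v1/projects/'
       '{}/zones/{}/instances'.format(PROJECT, ZONE))
r = requests.get(url, headers={'Authorization': 'Bearer ' + access_token})
r.raise_for_status()

# Step 9: display the name of each instance
for instance in r.json().get('items', []):
    print(instance['name'])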

 

Google Cloud – Converting Service Account Credentials from P12 to Json Format

I have written a number of articles about Google Cloud Credentials. For Service Account credentials, there are two on-disk formats: P12 and Json.

This article shows how to convert these credentials from P12 to Json.
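One way to do the conversion, sketched: extract the unencrypted private key with openssl, then place it (with real newlines escaped as \n) into a Json skeleton along with your service account’s email address. File names are placeholders:

openssl pkcs12 -in service_account.p12 -nodes -nocerts -passin pass:notasecret -out private_key.pem

{
  "type": "service_account",
  "private_key": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n",
  "client_email": "test@development-123456.iam.gserviceaccount.com",
  "token_uri": "https://www.googleapis.com/oauth2/v4/token"
}

The simplest alternative is to create a new Json key for the same service account in the console under IAM & admin -> Service accounts.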

 

Google Cloud – Extracting Private Key from Service Account P12 Credentials

Google Service Account Credentials are available in two file formats: Json and P12. P12 is also known as PFX. The following code shows how to process a P12 file and split into Private Key and Certificate. This code also works with normal SSL Certificate Bundles (PFX).

In another article I show how to use P12 credentials (Private Key) to create Google Access Tokens.

Note: The P12 file format is deprecated. The Google recommended format is now Json.
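A sketch using openssl; the file names and the default password notasecret are illustrative.

Extract the private key:

openssl pkcs12 -in service_account.p12 -nodes -nocerts -passin pass:notasecret -out private_key.pem

Extract the certificate:

openssl pkcs12 -in service_account.p12 -nokeys -passin pass:notasecret -out certificate.pem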

 

Google Cloud – Recovering from UFW lockout

Introduction

You have a Debian instance running in Google Cloud Compute Engine. You connect to this instance via SSH. One day you decide to enable the UFW firewall and your SSH connection drops. You cannot reconnect.

Problem

The problem is that by enabling UFW you blocked SSH access.

Solution

This article shows two methods of solving this problem.

The first method is to create a startup-script that disables UFW. The second method attaches the boot disk to another instance and modifies the file /etc/ufw/ufw.conf.

Method 1:

Step 1:

Log in to the Google Cloud Console. Go to Compute Engine -> VM instances. Click on your instance. Click on the Edit button.

Step 2:

Scroll down to the section “Custom metadata”. For Key enter startup-script. For Value enter:
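A one-line script that disables the firewall is enough (sketch):

#! /bin/bash
/usr/sbin/ufw disable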

Click the Save button

Note: An option is to just enable SSH in the startup-script.

Step 3:

Reboot your instance. During reboot the startup-script will run disabling the UFW firewall. Login to your instance using SSH.

Step 4:

Repeat Step #2 except this time, delete the startup-script. Otherwise the firewall will be disabled each time your instance boots.

Method 2:

STEP 1:

Shut down your instance with the UFW problem. Log in to the Google Cloud Console. Go to Compute Engine -> VM instances. Click on your instance and make note of the “Boot disk” name. This will be the first disk under “Boot disk and local disks”.

STEP 2:

Create a snapshot of the boot disk before doing anything further. While still in Compute Engine -> Disks, click on your boot disk, then click on “CREATE SNAPSHOT”.

STEP 3:

Create a new instance in the same zone. A micro instance will work.

STEP 4:

Open a Cloud Shell prompt (this also works from your desktop if gcloud is setup). Execute this command. Replace NAME with your instance name (broken system) and DISK with the boot disk name and ZONE with the zone that the system is in:
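The command has this form:

gcloud compute instances detach-disk NAME --disk=DISK --zone=ZONE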

Make sure that the previous command did not report an error.

STEP 5:

Now we will attach this disk to the new instance that you created.

Make sure that the repair instance is running before attaching the second disk. Sometimes an instance can get confused on which disk to boot from if more than one disk is bootable.

Go to Compute Engine -> VM instances. Click on your instance. Click Edit. Under “Additional disks” click “Add item”. For name enter/select the disk that you detached from your broken instance. Click Save.

STEP 6:

SSH into your new instance with both disks attached.

STEP 7:

Follow these steps carefully. We will mount the second disk’s root file system at /mnt/repair. Then we will change the contents of /mnt/repair/etc/ufw/ufw.conf to disable the firewall.

  • Become superuser. Execute sudo -s
  • Execute df. Make sure that /dev/sdb1 is not mounted.
  • Create a directory for the mountpoint: mkdir /mnt/repair
  • Mount the second disk: mount /dev/sdb1 /mnt/repair
  • Change directories: cd /mnt/repair/etc/ufw
  • Edit ufw.conf
  • Change ENABLED=yes to ENABLED=no
  • Shutdown the repair system: halt
STEP 8:

Now reverse the procedure and move the second disk back to your original instance and reattach using the command below. Then start your instance and connect via SSH.

Note: To reattach the boot disk you have to use gcloud with the --boot option.
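For example:

gcloud compute instances attach-disk NAME --disk=DISK --zone=ZONE --boot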

Google Cloud – Creating Access Tokens from Service Account P12 Credentials

Google Service Account Credentials are available in two file formats: Json and P12. P12 is also known as PFX. The following code shows how to use P12 credentials to list the buckets in Google Cloud Storage without using an SDK.

Note: The P12 file format is deprecated. The Google recommended format is now Json.

You will need the Google Service Account Email address, Service Account Credentials in P12 format, the P12 password (defaults to notasecret) and your project Id (gcloud config list project)

The P12 file is processed and the Private Key is loaded into memory to be used to sign a Json Web Token (JWT). This Signed-JWT (JWS) is then passed to a Google OAuth 2.0 endpoint to be exchanged for a Bearer Access Token. This token is then used in the HTTP Authorization header to authorize Google API calls.

The HTTP headers for a Google API request look like this:
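For example (token truncated):

GET /storage/v1/b?project=development-123456 HTTP/1.1
Host: www.googleapis.com
Authorization: Bearer ya29.c.Elo4...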

Example Python 3.x source code:
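A sketch of the flow, assuming pyOpenSSL, PyJWT and requests are installed with pip; file names and the Project ID are placeholders. Only the key-loading step differs from the Json-credentials example in my other article:

import time

import jwt       # PyJWT
import requests
from OpenSSL import crypto

SA_EMAIL = 'test@development-123456.iam.gserviceaccount.com'  # placeholder
PROJECT = 'development-123456'                                # placeholder
TOKEN_URL = 'https://www.googleapis.com/oauth2/v4/token'

# Load the P12 file and extract the Private Key in PEM format
p12 = crypto.load_pkcs12(open('service_account.p12', 'rb').read(), b'notasecret')
pem_key = crypto.dump_privatekey(crypto.FILETYPE_PEM, p12.get_privatekey())

# Build and sign the JWT (JWS), then exchange it for an Access Token
now = int(time.time())
claims = {
    'iss': SA_EMAIL,
    'scope': 'https://www.googleapis.com/auth/devstorage.read_only',
    'aud': TOKEN_URL,
    'iat': now,
    'exp': now + 3600,
}
signed_jwt = jwt.encode(claims, pem_key, algorithm='RS256')
resp = requests.post(TOKEN_URL, data={
    'grant_type': 'urn:ietf:params:oauth:grant-type:jwt-bearer',
    'assertion': signed_jwt})
resp.raise_for_status()
token = resp.json()['access_token']

# List the buckets in the project
r = requests.get('https://www.googleapis.com/storage/v1/b',
                 params={'project': PROJECT},
                 headers={'Authorization': 'Bearer ' + token})
r.raise_for_status()
for bucket in r.json().get('items', []):
    print(bucket['name'])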

 

Google Cloud Stackdriver – IP Addresses

I have worked with Google Cloud Stackdriver for about three months. The more I learn about Stackdriver the more I like it. Great product for logging, monitoring, error reporting, application tracing and application debugging and more.

One of the items that I don’t like with web server uptime checks is that the health checks clutter up my server’s log files. I want to know what IP addresses Stackdriver agents use so that I can filter my log files.

This week I learned about a new Google Cloud beta feature that supports listing Stackdriver IP addresses: Getting uptime-check IP addresses

The following code in Python will generate a table with information about the IP addresses that Stackdriver agents use.
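A sketch using the google-cloud-monitoring library (pip install google-cloud-monitoring); the column formatting is my own, and region is printed as its raw enum value:

from google.cloud import monitoring_v3

client = monitoring_v3.UptimeCheckServiceClient()
print('{:<10} {:<25} {}'.format('REGION', 'LOCATION', 'IP ADDRESS'))
for ip in client.list_uptime_check_ips():
    print('{:<10} {:<25} {}'.format(ip.region, ip.location, ip.ip_address))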

This results in a table similar to the following.

By modifying the Python code, you can generate the list of IP addresses in any format required to filter your log files.

The following code in C# does the same thing. Notice that my code requires the beta03 .NET libraries. This is a new feature from Google Cloud.

 

Google Cloud Application Default Credentials

Google Cloud Application Default Credentials (ADC) are not credentials. ADC is a strategy to locate Google Cloud Service Account credentials.

If the environment variable GOOGLE_APPLICATION_CREDENTIALS is set, ADC will use the filename that the variable points to for service account credentials. This file is a Google Cloud Service Account credentials file in Json format. The previous P12 (PFX) certificates are deprecated.

If the environment variable is not set, the default service account is used for credentials if the application is running on Compute Engine, App Engine, Kubernetes Engine or Cloud Functions.

If the previous two steps fail to find valid credentials, ADC will fail, and an error occurs.

Also read my articles Setting up Gcloud with Service Account Credentials and Creating and Authorizing Service Account Credentials with the CLI for more information about Service Account Credentials.

Do not confuse Service Account Credentials with the credentials obtained by gcloud auth application-default login even though application-default looks similar. This command obtains User Account Credentials which Google no longer recommends for Google Cloud access.

User Account Credentials are useful when combining Google Cloud access with other Google services such as Gmail, Drive, Calendar, etc. Another type of Google User Credential is Firebase Authentication.

The example uses ADC to locate and create credentials:
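A minimal sketch using the google-auth library:

import google.auth

# ADC checks the environment first, then the metadata server
credentials, project = google.auth.default()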

ADC uses similar methods to create credentials using the following examples.

Specify Service Account credentials via the environment:
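On Windows (the path is a placeholder):

set GOOGLE_APPLICATION_CREDENTIALS=c:\config\service_account.json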

The previous Python code checks the environment first and will use service_account.json for credentials.

Loading credentials from Json:
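A sketch; the file name is a placeholder:

from google.oauth2 import service_account

credentials = service_account.Credentials.from_service_account_file('service_account.json')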

Service account default credentials on Compute Engine:
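A sketch using google-auth:

from google.auth import compute_engine

credentials = compute_engine.Credentials()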

Service account default credentials on App Engine:
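A sketch using google-auth:

from google.auth import app_engine

credentials = app_engine.Credentials()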

Best practices for managing credentials

Google Cloud Credentials provide access to services and data in the cloud. Protect these credentials.

Do not embed credentials in source code or configuration files. If your application is running in the cloud, attach a service account to your Google Cloud Service and use ADC to obtain these credentials from the instance’s metadata.

Create one set of credentials for testing. Once you are ready to deploy into production, create another set and delete / revoke the testing credentials. This will make sure that no credentials were accidentally leaked or used in the application. Create a new set of credentials for the next phase of testing.

Only transfer credentials over HTTPS or other types of encrypted secure channels. Never transfer credentials in the clear over HTTP.

Do not embed or use long-term credentials in client applications. Use OAuth 2.0 to obtain short-lived temporary credentials. In another article I show how to take a service account credentials and create short-term credentials. This is a good practice when you need to allocate credentials for applications and users.

Periodically rotate credentials. After a grace period revoke the older credentials. This practice will make sure that credentials are in known documented locations (as you will need to change them).

Use a Key Management Service (KMS) or Kubernetes Secrets for encryption keys, certificates, secure configuration variables and anything that needs to be kept private. Build your applications and infrastructure to be secure and free from credential and key leakage.

Google OAuth 2.0 – Testing with Curl

If you have ever wanted to test Google OAuth 2.0 flows from the command line you will like this short article.

[Update: I thought about the problem below with the copy and paste requirement. I created a simple python web server which listens to the OAuth 2.0 callback which automates the two curl commands. I will document this in a follow-up Part 2 article.]

Google OAuth 2.0 – Testing with Curl – Version 2
Google OAuth 2.0 – Testing with Curl – Refresh Access Token

This article is for Windows Command Prompt users but should be easily adaptable to Linux and Mac also.

You will need your Google Client ID and Client Secret. These can be obtained from the Google Console under APIs & Services -> Credentials. In the following example code, these are stored in the file /config/client_secrets.json

These examples also use the program jq for processing the Json output. You can download a copy here.

In the following example, the Scope is cloud-platform. Modify to use the scopes that you want to test with. Here are a few scopes that you can test with:

OAuth 2.0 Scopes for Google APIs

Details:
  1. Copy the following statements to a Windows batch file.
  2. Modify to fit your environment.
  3. Modify the script for the browser that you want to use.
  4. Run the batch file.
  5. A browser will be launched.
  6. The browser will go to https://accounts.google.com where you can complete the Google OAuth 2.0 authentication.
  7. Once complete a code will be displayed in the browser window.
  8. Copy this code (control-c) from the browser window and paste into the command prompt window (control-rightclick).
  9. The script will complete the OAuth 2.0 code exchange for a Token.
  10. The Token will be displayed in the command prompt.

The returned Token contains an Access Token that can be used in more curl commands.

Windows Batch Script:
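A sketch of the script that matches the steps above. It assumes jq.exe is on the PATH and that client_secrets.json uses Google’s standard top-level key "installed" (a web-type client uses .web instead); the browser paths are typical install locations:

@echo off
for /f "delims=" %%i in ('jq -r ".installed.client_id" \config\client_secrets.json') do set CLIENT_ID=%%i
for /f "delims=" %%i in ('jq -r ".installed.client_secret" \config\client_secrets.json') do set CLIENT_SECRET=%%i

set "SCOPE=https://www.googleapis.com/auth/cloud-platform"
set "REDIRECT_URI=urn:ietf:wg:oauth:2.0:oob"
set "ENDPOINT=https://accounts.google.com/o/oauth2/v2/auth"
set "AUTH_URL=%ENDPOINT%?client_id=%CLIENT_ID%&redirect_uri=%REDIRECT_URI%&scope=%SCOPE%&response_type=code"

REM Launch a browser to log in (two alternatives commented out)
REM start "" "C:\Program Files (x86)\Google\Chrome\Application\chrome.exe" "%AUTH_URL%"
REM start "" "C:\Program Files\Mozilla Firefox\firefox.exe" "%AUTH_URL%"
start "" "%AUTH_URL%"

set /p AUTH_CODE="Enter the authorization code: "

curl -s ^
 -d client_id=%CLIENT_ID% ^
 -d client_secret=%CLIENT_SECRET% ^
 -d code=%AUTH_CODE% ^
 -d redirect_uri=%REDIRECT_URI% ^
 -d grant_type=authorization_code ^
 https://www.googleapis.com/oauth2/v4/token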

The final output looks like this:
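Something like this (token truncated; you will also see a refresh_token if you request access_type=offline):

{
  "access_token": "ya29.Glu...(truncated)",
  "expires_in": 3600,
  "token_type": "Bearer"
}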

Example curl command using Access Token:
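For example, listing the buckets in a project (assuming the token is in the ACCESS_TOKEN variable):

curl -H "Authorization: Bearer %ACCESS_TOKEN%" "https://www.googleapis.com/storage/v1/b?project=development-123456"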

Tip: Save the Access Token to a file

Modify the last line of the batch script to use jq to process the output:
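A sketch; the jq pipe and the two read-back lines are appended to the script:

curl -s -d client_id=%CLIENT_ID% -d client_secret=%CLIENT_SECRET% -d code=%AUTH_CODE% -d redirect_uri=%REDIRECT_URI% -d grant_type=authorization_code https://www.googleapis.com/oauth2/v4/token | jq -r ".access_token" > access.token

set /p ACCESS_TOKEN=<access.token
echo %ACCESS_TOKEN%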

The last two lines show how to read the Access Token that was saved to a file for further use in more scripts.

Remember, tokens expire after 60 minutes, which is the default value.

This example implements the most common type of OAuth application – Web Server Application.

In code above, we begin by creating the login endpoint:
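set "ENDPOINT=https://accounts.google.com/o/oauth2/v2/auth"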

and build a URL containing the endpoint and query parameters:
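set "AUTH_URL=%ENDPOINT%?client_id=%CLIENT_ID%&redirect_uri=%REDIRECT_URI%&scope=%SCOPE%&response_type=code"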

  • response_type=code – Indicates that your server expects to receive an authorization code
  • client_id – The client ID you received when you first created the application
  • redirect_uri – Indicates the URI to return the user to after authorization is complete
  • scope – One or more scope values indicating which parts of the user’s account you wish to access
  • state – A random string generated by your application, which you’ll verify later (optional – not used in our example program)

The login url then looks similar to this:
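The client_id is replaced with a placeholder here:

https://accounts.google.com/o/oauth2/v2/auth?client_id=CLIENT_ID.apps.googleusercontent.com&redirect_uri=urn:ietf:wg:oauth:2.0:oob&scope=https://www.googleapis.com/auth/cloud-platform&response_type=code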

Notice the special redirect_uri used in the URL: urn:ietf:wg:oauth:2.0:oob

urn:ietf:wg:oauth:2.0:oob

This value signals to the Google Authorization Server that the authorization code should be returned in the title bar of the browser, with the page text prompting the user to copy the code and paste it in the application. This is useful when the client (such as a Windows application) cannot listen on an HTTP port without significant client configuration.

Next we launch a web browser using this code to login using Google Accounts. Three different browsers are listed with two being commented out so that you can select one for your test case.
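The browser paths are typical install locations; the last line simply uses the default browser:

REM start "" "C:\Program Files (x86)\Google\Chrome\Application\chrome.exe" "%AUTH_URL%"
REM start "" "C:\Program Files\Mozilla Firefox\firefox.exe" "%AUTH_URL%"
start "" "%AUTH_URL%"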

After the user completes the OAuth authentication (login), a code will be displayed in the browser. This part of the script allows the user to enter that code into the example script:
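set /p AUTH_CODE="Enter the authorization code: "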

The next step is to exchange the code for OAuth tokens:
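curl -s ^
 -d client_id=%CLIENT_ID% ^
 -d client_secret=%CLIENT_SECRET% ^
 -d code=%AUTH_CODE% ^
 -d redirect_uri=%REDIRECT_URI% ^
 -d grant_type=authorization_code ^
 https://www.googleapis.com/oauth2/v4/token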

 

Google Cloud – Creating and Authorizing Service Account Credentials with the CLI

This article is written for Windows, but the same principles apply to Linux and Mac.

A service account is a special Google account that is used with applications or services, such as, Google Compute Engine. Service account credentials are stored in a file. There are two file formats, Json and P12.

The Json format is the recommended format for service account credential files. This format consists of multiple Json keys, with the private key being the critical value which is used to sign API requests. This file can be viewed in any text editor.

The P12 format, otherwise known as PKCS #12 or PFX, is a binary format for storing a certificate, intermediate certificates and the private key in an encrypted file. Common file suffixes are .p12 or .pfx. The following openssl command will display a P12 file. Note the option -nodes. This means No DES, not NODES. Use this option so that you can see the unencrypted private key. In this example the password is notasecret.
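openssl pkcs12 -in test_google_account.p12 -nodes -passin pass:notasecret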

Now let’s create a service account using the gcloud CLI. First, let’s set some environment variables to reduce mistakes. Modify with your Google Project ID.
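The variable names here are my own; substitute your values:

set PROJECT_ID=development-123456
set SA_NAME=test-service-account
set MEMBER=%SA_NAME%@%PROJECT_ID%.iam.gserviceaccount.com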

Notice the MEMBER variable. The format for service account credentials is always USER@PROJECT suffixed with the domain iam.gserviceaccount.com.

This command will display the current Project ID:
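gcloud config list project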

To get fancy using jq:
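gcloud config list --format=json | jq -r ".core.project"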

Outputs:
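development-123456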

Using jq makes it easy to set environment variables to chain commands together.

Create a service account using the previous environment variables:
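gcloud iam service-accounts create %SA_NAME% --display-name "Test Service Account"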

Command output:

The next step is to authorize the service account with permissions. In this example, we authorize the role viewer:
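gcloud projects add-iam-policy-binding %PROJECT_ID% --member serviceAccount:%MEMBER% --role roles/viewer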

To authorize a service account, you apply the role to the project and not to the service account itself. IAM policies applied to the service account manage who can use the service account and not the service account permissions. Remember to apply permissions for a service account to the project.

To quote Google:

In addition to being an identity, a service account is a resource which has IAM policies attached to it. These policies determine who can use the service account.

For instance, Alice can have the editor role on a service account and Bob can have viewer role on a service account. This is just like granting roles for any other GCP resource.

Now that we have created and authorized this service account, create and download a service account credentials file to be used later in our software.
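gcloud iam service-accounts keys create test_google_account.json --iam-account %MEMBER%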

Command output:

To download a P12 format add the command line option:
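--key-file-type=p12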

gcloud iam service-accounts create

gcloud projects add-iam-policy-binding

gcloud iam service-accounts keys create

 

Google Cloud – Where are my credentials stored

Google Cloud stores your credentials in a database on your system. These credentials can then be used over and over. Google’s choice of a database means that the CLI and SDK tools can manage a huge number of credentials efficiently. Credentials are managed by configurations.

However, Google also chose not to encrypt the database storing these credentials and I think that this is a potential security weakness and should be reconsidered. IMHO all data should be encrypted. Data that authorizes or protects other data MUST be encrypted.

More details about configurations are in another article that I wrote.

A gcloud configuration is a set of properties that govern the behavior of gcloud and other Google Cloud SDK tools. When you first install gcloud on your desktop a configuration named default is created.

A gcloud configuration is managed by gcloud config configurations. To see the list of configurations on your system:
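gcloud config configurations list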

This will output a list of configurations present on your system:
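The values are illustrative:

NAME     IS_ACTIVE  ACCOUNT            PROJECT             DEFAULT_ZONE   DEFAULT_REGION
default  True       user1@example.com  development-123456  us-central1-a  us-central1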

The link between a set of configurations and a set of credentials in the database is via the account id.

The databases are stored in the following directory. Replace username with your Windows user name.
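C:\Users\username\AppData\Roaming\gcloud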

For Linux:
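~/.config/gcloud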

Credentials are stored in two files, access_tokens.db and credentials.db, in this directory. Both of these files are SQLite databases. To see the contents of these databases I wrote two small Python programs.

ACCESS_TOKENS.DB

The database access_tokens.db contains a table named access_tokens with four columns account_id access_token token_expiry rapt_token.

Table schema:
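Reconstructed from the four columns listed above; the exact DDL in your version of the SDK may differ:

CREATE TABLE "access_tokens" (
    account_id TEXT PRIMARY KEY,
    access_token TEXT,
    token_expiry TIMESTAMP,
    rapt_token TEXT
);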

The column account_id is the email address associated with the credentials.

The access_token is the access token used for authenticating requests, for example in CURL and REST APIs. In another article, I will cover in detail what access tokens and credentials look like and how to use them in your own software. I will also cover how to generate access tokens from credentials.

The token_expiry is the date that the token expires.

The rapt_token is involved with token refresh. I have not yet investigated how to use this.

This Python program will output the contents of the access_tokens.db database.
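A sketch; the path is built from the Windows APPDATA environment variable (use ~/.config/gcloud on Linux), and the token is truncated to keep it out of the output:

import os
import sqlite3

# %APPDATA%\gcloud\access_tokens.db on Windows
db_path = os.path.join(os.environ['APPDATA'], 'gcloud', 'access_tokens.db')

conn = sqlite3.connect(db_path)
rows = conn.execute(
    'SELECT account_id, access_token, token_expiry, rapt_token FROM access_tokens')
for account_id, access_token, token_expiry, rapt_token in rows:
    print('account_id:  ', account_id)
    print('token_expiry:', token_expiry)
    print('access_token:', (access_token or '')[:16] + '...')
conn.close()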

This is the output from the program. I have obfuscated the output to protect the access tokens.

CREDENTIALS.DB

The database credentials.db contains a table named credentials with two columns account_id value.

Table schema:
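Reconstructed from the two columns listed above; the value column may be TEXT rather than BLOB in your SDK version:

CREATE TABLE "credentials" (
    account_id TEXT PRIMARY KEY,
    value BLOB
);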

The column account_id is the email address associated with the credentials.

The column value is your credentials in Json format. I will cover the format of credentials in detail in another article.

This Python program will output the contents of the credentials.db database.
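A sketch, with the same Windows path assumption as the previous program:

import os
import sqlite3

# %APPDATA%\gcloud\credentials.db on Windows
db_path = os.path.join(os.environ['APPDATA'], 'gcloud', 'credentials.db')

conn = sqlite3.connect(db_path)
for account_id, value in conn.execute('SELECT account_id, value FROM credentials'):
    print('account_id:', account_id)
    print(value)   # the credentials in Json format
conn.close()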

This is the output from the program. I have obfuscated the output to protect the credentials by deleting them from the listing.

There you have it: details on where Google stores credentials on your system, the format of the databases and what the credentials look like.

Google Cloud – Setting up Gcloud with Service Account Credentials

In this article we will download and install the Google gcloud CLI. Then we will set up gcloud with Google Service Account credentials. This article is for Windows-based systems, but the same principles apply to Linux and Mac systems.

Step 1 – Download gcloud

Google Cloud SDK Installer

Step 2 – Launch the installer

At the Completing the Google Cloud SDK Setup Wizard, deselect Run gcloud init to configure the Cloud SDK. The reason is that we only want to use Service Account credentials.

Step 3 – Access a Google public bucket

This command should succeed and provide a listing of the files in this bucket. This command verifies that the CLI is installed. We have not setup credentials yet.
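Any publicly readable bucket will do; for example:

gsutil ls gs://gcp-public-data-landsat/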

Step 4 – Access one of your own private buckets

This step will verify that you have no credentials. Change the bucket name to a private bucket that you own.
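gsutil ls gs://your-private-bucket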

This command should fail. If it succeeds you have a public bucket that anyone can access.

Step 5 – Create Google Service Account credentials.

You can skip this step if you already have credentials to use.

In this example we will only grant Storage Admin to these credentials.

  1. Go to IAM & admin -> Service accounts
  2. Click CREATE SERVICE ACCOUNT
  3. Enter a Service account name and Service account description
  4. Click CREATE
  5. In the next screen Service account permissions, select a role.
  6. Select Storage -> Storage Admin
  7. Click CONTINUE
  8. Click Create key
  9. Check the JSON radio button for the Key type
  10. Save the json file to your local computer.

Make note of the email address that Google Cloud created for these credentials.

Step 6 – Configure gcloud with the Google Service Account credentials

In this example, the email address is: test@development-123456.iam.gserviceaccount.com

The credentials file is: test_google_account.json

Modify these items to what you created in step 5.
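gcloud auth activate-service-account test@development-123456.iam.gserviceaccount.com --key-file=test_google_account.json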

Step 7 – Verify that the credentials work

Change the bucket name to a private bucket that you own.
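gsutil ls gs://your-private-bucket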

This command should now succeed.

You have now successfully configured gcloud to work with Google Service Account credentials.

In a follow-on article I will show you how to use these same credentials when programming, for example, in Python, C#, etc. Then we will cover in detail what Google Service Account credentials are and how to programmatically generate Access Tokens from these credentials.

Google Cloud – Understanding Gcloud Configurations

This article is written for Windows, but the same principles apply to Linux and Mac.

I need to work with multiple Google Cloud accounts and be able to easily switch my credentials between accounts. For those of you with AWS backgrounds, think profiles.

A gcloud configuration is a set of properties that govern the behavior of gcloud and other Google Cloud SDK tools. When you first install gcloud on your desktop a configuration named default is created.

A gcloud configuration is managed by gcloud config configurations. To see the list of configurations on your system:
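gcloud config configurations list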

This will output a list of configurations present on your system:
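The values are illustrative:

NAME     IS_ACTIVE  ACCOUNT            PROJECT             DEFAULT_ZONE   DEFAULT_REGION
default  True       user1@example.com  development-123456  us-central1-a  us-central1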

The creation of a configuration can be accomplished with gcloud or manually.

Command line Method #1:

Using the gcloud CLI, create a new configuration. This configuration will be empty. In this case I am creating a configuration named dev.
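gcloud config configurations create dev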

Now that we have a new configuration created, we need to activate it.
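gcloud config configurations activate dev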

Set the account. The account is the email address that Google Cloud IAM created for you or that you authorized in Google Cloud IAM. This account is either a Google Account email address or a Google Service Account email address.
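gcloud config set account user1@example.com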

The next step is to authorize the dev configuration.
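For a Google Account (for a service account, use gcloud auth activate-service-account instead):

gcloud auth login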

There are additional optional items that you can set in the new configuration such as default project, region and zone. Review the manual method below to see more options.

Command line Method #2

You can also use gcloud init to create a new configuration.

This will prompt you for your account, a default project and, optionally, a default region and zone.

Manual Method:

For the manual method, the first step is to setup gcloud with a default account. Then go to the directory where configurations are stored and create new ones. The configurations are stored in the following directory. Replace username with your Windows user name.
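C:\Users\username\AppData\Roaming\gcloud\configurations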

For Linux:
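~/.config/gcloud/configurations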

List the contents of this directory. Each configuration starts with config_.
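dir C:\Users\username\AppData\Roaming\gcloud\configurations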

To create a new configuration named dev, copy config_default to config_dev.
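copy config_default config_dev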

Now using your favorite editor, modify the file. My config_default looks like this.
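The values are anonymized:

[core]
account = user1@example.com
project = development-123456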

The important item to modify is account = user1@example.com. This user ID will be used for authentication.

The minimum configuration looks like this:
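[core]
account = user1@example.com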

Now that we have a new configuration created, we need to activate it.
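gcloud config configurations activate dev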

The active configuration is stored in this file.
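C:\Users\username\AppData\Roaming\gcloud\active_config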

The next step is to authorize the dev configuration.
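gcloud auth login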

We now have two configurations, default and dev. To switch back to the default configuration.
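gcloud config configurations activate default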

gcloud auth login creates the default configuration if it does not exist.

Most gcloud commands accept the command line option --configuration=CONFIGURATION_NAME. For example:
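gcloud compute instances list --configuration=dev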

gcloud also supports the environment variable CLOUDSDK_ACTIVE_CONFIG_NAME.

To list the Google accounts that have been authorized:
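gcloud auth list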

This will display a list like this:
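The accounts shown are illustrative:

       Credentialed Accounts
ACTIVE  ACCOUNT
*       user1@example.com
        test@development-123456.iam.gserviceaccount.com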

The active account is the one with the ‘*’ in the left column.

To set the project for the current configuration:
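gcloud config set project development-123456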

To set the region for the current configuration:
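gcloud config set compute/region us-central1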

To set the zone for the current configuration:
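gcloud config set compute/zone us-central1-a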

 

Reference documentation:

Managing SDK Configurations

gcloud config configurations

gcloud auth login

gcloud auth activate-service-account

gcloud config set

Authorizing Cloud SDK Tools

 

Google Cloud Private DNS Zones

On October 23, 2018, Google introduced private DNS zones for Google Cloud DNS. This is an important announcement as this keeps internal DNS names private. Today’s article covers how to implement this new feature in Google Cloud Platform.

What are Google Cloud Private DNS Zones? A DNS server can provide a feature called split-horizon DNS. This means that the information returned to a DNS query can change based upon the location of who is asking. For Google Cloud DNS, queries can arrive from the public Internet or from a Google Cloud VPC.

Google Cloud DNS now provides the ability to:

  1. Create private DNS zones to provide DNS name resolution to your private network resources (VMs, load balancers, etc.).
  2. Connect a private zone to a single network or multiple networks, giving you flexibility when designing your internal network architectures.
  3. Create split-horizon DNS architectures where identical or overlapping zones can coexist between public and private zones in Cloud DNS, or across different GCP networks.
  4. Utilize IAM-based, DNS-specific roles to delegate administrative or editor access to manage or view managed private zones.

Above four bullets copied from “Introducing Private DNS Zones“.

Private zones for Google Cloud DNS are a beta feature. Creating a private zone currently requires the gcloud CLI.

For this article, we will use the domain name “example.com”. We will setup both private and public zones.

Step 1 – Create the private zone.
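The zone name and network are placeholders:

gcloud beta dns managed-zones create example-private --dns-name="example.com." --description="Private zone for example.com" --visibility=private --networks=default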

Step 2 – Create the public zone.

Note you can skip this step if you are not using Google Cloud DNS for your domain name.
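Otherwise, the public zone is created with the standard (GA) command; the zone name is a placeholder:

gcloud dns managed-zones create example-public --dns-name="example.com." --description="Public zone for example.com"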