Set Up Identity Federation

The Crafting system can act as an OIDC provider. For services that support identity federation with an OIDC provider, such as AWS and GCP, a client can exchange a Crafting-issued JWT for the service's access token. This eliminates the need for interactive logins or for storing sensitive credentials inside a sandbox.
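The exchange relies on standard JWT claims: the cloud provider verifies the token's issuer (iss) and audience (aud) against the values configured in its IAM. As an illustration only (the token below is fabricated, not a real Crafting token), the claims a federation check inspects can be read by base64url-decoding the token's payload segment:

```python
import base64
import json

def jwt_claims(token: str) -> dict:
    """Decode the (unverified) payload segment of a JWT."""
    payload_b64 = token.split(".")[1]
    # Restore base64url padding before decoding.
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Fabricated sample token (header.payload.signature), for illustration only.
header = base64.urlsafe_b64encode(json.dumps({"alg": "RS256"}).encode()).decode().rstrip("=")
payload = base64.urlsafe_b64encode(json.dumps(
    {"iss": "https://sandboxes.cloud", "aud": "my-org", "sub": "user@example.com"}
).encode()).decode().rstrip("=")
sample = f"{header}.{payload}.signature"

claims = jwt_claims(sample)
print(claims["iss"], claims["aud"])  # the provider URL and audience configured in IAM
```

The iss value maps to the Provider URL and aud to the Audience configured in the steps below.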

Setup with AWS

  1. Add an Identity Provider to IAM: From IAM, add an Identity Provider of type OpenID Connect with the following details:

- Provider URL: https://SYS-DOMAIN (SYS-DOMAIN is sandboxes.cloud for Crafting SaaS, or the specific DNS hostname for a self-hosted deployment)
- Audience: Your org name in the Crafting system

  2. Assign a role: Add an AssumeRole policy (Trust relationships) to the designated role, where <PROVIDER-NAME> is the name of the OIDC provider added to IAM in the previous step:
   {
       "Version": "2012-10-17",
       "Statement": [
           {
               "Effect": "Allow",
               "Principal": {
                   "Federated": "arn:aws:iam::<ACCOUNT-ID>:oidc-provider/<PROVIDER-NAME>"
               },
               "Action": "sts:AssumeRoleWithWebIdentity",
               "Condition": {
                   "StringEquals": {
                       "<PROVIDER-NAME>:aud": [
                           "<YOUR ORG NAME>"
                       ]
                   }
               }
           }
       ]
   }
  3. Configure the workspace: Use the following content as $AWS_CONFIG_FILE:
   [default]
   region = <YOUR REGION>
   credential_process = idfed aws <ACCOUNT-ID> <ROLE-NAME>

This can be injected as ~/.aws/config, or saved to another file with the AWS_CONFIG_FILE environment variable pointing to it. It can also be stored as a secret (for example, named aws-config) and accessed at AWS_CONFIG_FILE=/run/sandbox/fs/secrets/shared/aws-config. There is no sensitive information in this file.
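The credential_process setting is a standard AWS CLI/SDK hook: the configured command must print a JSON document with temporary credentials to stdout. The exact output of idfed is not shown here, but any conforming helper emits the shape documented by AWS, sketched below with placeholder values:

```python
import json

# JSON shape the AWS CLI expects on stdout from any credential_process helper.
# Version is required and must be 1; Expiration marks the credentials as temporary,
# prompting the CLI to re-invoke the helper when they expire.
sample_output = json.dumps({
    "Version": 1,
    "AccessKeyId": "<access-key-id>",
    "SecretAccessKey": "<secret-access-key>",
    "SessionToken": "<session-token>",
    "Expiration": "2030-01-01T00:00:00Z",
})

creds = json.loads(sample_output)
assert creds["Version"] == 1  # the CLI rejects other versions
```

Because the credentials are fetched on demand and expire, nothing long-lived is ever written to disk.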

With this setup, all sandbox users can use the AWS CLI from workspaces to access the AWS account directly. You can also attach the AssumeRole policy to multiple roles and use profiles in $AWS_CONFIG_FILE to specify different roles for different processes:

[default]
region = <YOUR REGION>
credential_process = idfed aws <ACCOUNT-ID> <DEFAULT-ROLE-NAME>

[profile role1]
region = <YOUR REGION>
credential_process = idfed aws <ACCOUNT-ID> <ROLE1-NAME>

[profile role2]
region = <YOUR REGION>
credential_process = idfed aws <ACCOUNT-ID> <ROLE2-NAME>

Use the AWS_PROFILE environment variable before launching a process to run it under the corresponding role.

To quickly validate the setup, run:

aws sts get-caller-identity
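If federation is working, the reported identity is the assumed role rather than an IAM user; the output looks roughly like the following (account, role, and session values are placeholders):

```
{
    "UserId": "<role-id>:<session-name>",
    "Account": "<ACCOUNT-ID>",
    "Arn": "arn:aws:sts::<ACCOUNT-ID>:assumed-role/<ROLE-NAME>/<session-name>"
}
```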

Setup for EKS

For EKS specifically, the Crafting system can be added as an OIDC provider. The following Terraform example demonstrates this:

variable "cluster_name" {
  description = "Name of the cluster"
}

variable "crafting_org" {
  description = "Org name in the Crafting sandbox system"
}

variable "crafting_server_url" {
  description = "The server URL of the Crafting system"
  default     = "https://sandboxes.cloud" # Change this for self-hosted Crafting.
}

resource "aws_eks_identity_provider_config" "crafting" {
  cluster_name = var.cluster_name

  oidc {
    client_id                     = var.crafting_org
    identity_provider_config_name = "crafting"
    issuer_url                    = var.crafting_server_url
  }
}

In a workspace, use aws eks update-kubeconfig ... to obtain a kubeconfig file, then edit the user section as follows:

users:
- name: crafting
  user:
    tokenFile: /run/sandbox/fs/metadata/owner/token

Then create a RoleBinding or ClusterRoleBinding in the EKS cluster for fine-grained access control. The subject should be of kind User with a name in the format https://CRAFTING-SERVER-HOST#EMAIL. For example:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: crafting-user-foo
subjects:
- kind: User
  name: 'https://sandboxes.cloud#foo@gmail.com'
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: bar-cluster-role
  apiGroup: rbac.authorization.k8s.io

Setup with GCP

  1. Add an Identity Provider to IAM: This can be done from the IAM / Workload Identity Federation menu in the Google Cloud Console, or with the Google Cloud SDK:
   gcloud iam workload-identity-pools create ${POOL_ID} --location=global
   gcloud iam workload-identity-pools providers create-oidc ${PROVIDER_ID} \
       --issuer-uri="https://sandboxes.cloud" --allowed-audiences=${SANDBOX_ORG} \
       --attribute-mapping="google.subject=assertion.sub" \
       --workload-identity-pool=${POOL_ID} --location=global
  2. Bind to a service account (multiple service accounts can be bound):
   gcloud iam service-accounts add-iam-policy-binding --role roles/iam.workloadIdentityUser \
       --member "principalSet://iam.googleapis.com/projects/${PROJECT_NUMBER}/locations/global/workloadIdentityPools/${POOL_ID}/*" \
       ${SERVICE_ACCOUNT_NAME}@${PROJECT_ID}.iam.gserviceaccount.com
  3. Configure the sandbox: Use the following content for the file pointed to by $GOOGLE_APPLICATION_CREDENTIALS:
   {
     "type": "external_account",
     "audience": "//iam.googleapis.com/projects/<PROJECT-NUMBER>/locations/global/workloadIdentityPools/<POOL_ID>/providers/<PROVIDER_ID>",
     "subject_token_type": "urn:ietf:params:oauth:token-type:jwt",
     "token_url": "https://sts.googleapis.com/v1/token",
     "service_account_impersonation_url": "https://iamcredentials.googleapis.com/v1/projects/-/serviceAccounts/<SERVICE_ACCOUNT_NAME>@<PROJECT_ID>.iam.gserviceaccount.com:generateAccessToken",
     "credential_source": {
         "file": "/run/sandbox/fs/metadata/1000/token",
         "format": {
             "type": "text"
         }
     }
   }
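Under the hood, this external_account configuration drives a standard OAuth 2.0 token exchange against the token_url, followed by service-account impersonation. A sketch of the exchange request the client library constructs (field names follow the GCP Security Token Service API; all values are placeholders):

```python
import json

# The sandbox-provided JWT is read from credential_source.file.
subject_token = "<contents of the sandbox token file>"

# Body of the POST to https://sts.googleapis.com/v1/token (the token_url above).
exchange_request = {
    "grantType": "urn:ietf:params:oauth:grant-type:token-exchange",
    "audience": "//iam.googleapis.com/projects/<PROJECT-NUMBER>/locations/global/"
                "workloadIdentityPools/<POOL_ID>/providers/<PROVIDER_ID>",
    "scope": "https://www.googleapis.com/auth/cloud-platform",
    "requestedTokenType": "urn:ietf:params:oauth:token-type:access_token",
    "subjectToken": subject_token,
    "subjectTokenType": "urn:ietf:params:oauth:token-type:jwt",
}
print(json.dumps(exchange_request, indent=2))
```

The returned federated token is then posted to the service_account_impersonation_url to obtain the final service-account access token; the client libraries do both steps automatically.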

For accessing a GKE cluster specifically, use the following as the user credential in the kubeconfig file:

   apiVersion: v1
   kind: Config
   ...
   users:
   - name: foo
     user:
       exec:
         apiVersion: client.authentication.k8s.io/v1beta1
         command: idfed
         args:
         - gke

With this setup, processes in the sandbox can access the GCP project and GKE clusters.

The JSON configuration above can also be stored as a secret (for example, named gcp-account.json, with the GKE kubeconfig as kubeconfig) and referenced via environment variables in the sandbox or workspace definition:

env:
- GOOGLE_APPLICATION_CREDENTIALS=/run/sandbox/fs/secrets/shared/gcp-account.json
- CLOUDSDK_AUTH_CREDENTIAL_FILE_OVERRIDE=$GOOGLE_APPLICATION_CREDENTIALS
- KUBECONFIG=/run/sandbox/fs/secrets/shared/kubeconfig

To quickly validate the setup, run:

gcloud auth print-access-token

It is not recommended to use gcloud auth login, as it saves user login credentials in the home directory.