Deploy Rocketship on DigitalOcean Kubernetes
This walkthrough recreates the production proof-of-concept we validated on DigitalOcean Kubernetes (DOKS). It covers standing up Temporal, publishing Rocketship images to DigitalOcean Container Registry (DOCR), terminating TLS through an NGINX ingress, and wiring the CLI via profiles.
The steps assume you control public DNS for cli.rocketship.globalbank.com, app.rocketship.globalbank.com, and auth.rocketship.globalbank.com (or equivalent) and can issue a SAN certificate that covers all three hosts.
Prerequisites
- DigitalOcean account with:
  - A Kubernetes cluster (2 × CPU-optimised nodes were used during validation)
  - DigitalOcean Container Registry (registry.digitalocean.com/<registry>) enabled
- doctl authenticated (doctl auth init)
- kubectl configured for the cluster (doctl kubernetes cluster kubeconfig save <cluster-name>)
- Docker CLI with Buildx
- Helm 3
- TLS assets certificate.crt and private.key (ZeroSSL issues these; concatenate the intermediate bundle with the server cert if required)
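A quick sanity check that the tooling is in place before you start (standard doctl/kubectl/Helm/Buildx commands):

```bash
# Verify authentication and cluster access before running any deployment steps
doctl account get          # confirms doctl auth init succeeded
doctl registry get         # shows the DOCR endpoint (registry.digitalocean.com/<registry>)
kubectl get nodes          # should list the DOKS worker nodes
helm version --short
docker buildx version
```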
All commands below run from the repository root.
1. Set Up Namespaces and Ingress Controller
kubectl create namespace rocketship
kubectl config set-context --current --namespace=rocketship
# Install ingress-nginx (DigitalOcean automatically provisions a Load Balancer)
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
--version 4.13.2 \
--namespace ingress-nginx --create-namespace \
--set controller.service.annotations."service\.beta\.kubernetes\.io/do-loadbalancer-enable-proxy-protocol"="true"
The annotation enables PROXY protocol support on DigitalOcean’s load balancer, which keeps source IPs available in the ingress logs. Omit or adjust if you do not need it.
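Once the controller is running, DigitalOcean provisions the load balancer and exposes its public IP on the controller service. Note the EXTERNAL-IP; the DNS records in step 10 point at it:

```bash
# Re-run until EXTERNAL-IP switches from <pending> to the load balancer IP
kubectl get svc -n ingress-nginx ingress-nginx-controller
```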
2. Install Temporal
helm repo add temporal https://go.temporal.io/helm-charts
helm repo update
helm install temporal temporal/temporal \
--version 0.66.0 \
--namespace rocketship \
--set server.replicaCount=1 \
--set cassandra.config.cluster_size=1 \
--set elasticsearch.replicas=1 \
--set prometheus.enabled=false \
--set grafana.enabled=false \
--wait --timeout 15m
Register the Temporal logical namespace the Rocketship worker will use:
kubectl exec -n rocketship deploy/temporal-admintools -- \
temporal operator namespace create --namespace default
(Keep default unless you intend to manage multiple namespaces; update Helm values accordingly later.)
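You can confirm the namespace was registered from the same admin-tools pod (which bundles the temporal CLI used above):

```bash
kubectl exec -n rocketship deploy/temporal-admintools -- \
  temporal operator namespace list
```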
3. Create the TLS Secret
Issue a SAN certificate that covers cli.rocketship.globalbank.com, app.rocketship.globalbank.com, and auth.rocketship.globalbank.com (Let’s Encrypt or ZeroSSL work well). After you have the combined cert/key, update the secret:
# optional: remove the old secret if it exists
kubectl delete secret globalbank-tls -n rocketship 2>/dev/null || true
# create the secret with the new cert/key
kubectl create secret tls globalbank-tls \
--namespace rocketship \
--cert=/etc/letsencrypt/live/rocketship.sh/fullchain.pem \
--key=/etc/letsencrypt/live/rocketship.sh/privkey.pem
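It is worth confirming the certificate actually covers all three hosts and is not near expiry; with OpenSSL 1.1.1+ the SANs can be printed directly:

```bash
# Inspect the certificate backing the secret (paths as in the command above)
openssl x509 -in /etc/letsencrypt/live/rocketship.sh/fullchain.pem \
  -noout -subject -enddate -ext subjectAltName
kubectl get secret globalbank-tls -n rocketship
```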
4. Authenticate the Registry Inside the Cluster
Create the image pull secret with doctl and apply it to the rocketship namespace:
doctl registry kubernetes-manifest --namespace rocketship > do-registry-secret.yaml
kubectl apply -f do-registry-secret.yaml
The secret name is typically registry-<registry-name> and is referenced automatically by the chart when imagePullSecrets is set.
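To double-check the exact secret name before referencing it from the chart:

```bash
# Image pull secrets show up as type kubernetes.io/dockerconfigjson
kubectl get secrets -n rocketship | grep registry
```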
5. Build and Push Rocketship Images
DigitalOcean’s nodes run on linux/amd64, so build multi-architecture images to avoid “exec format error” crashes:
export REGISTRY=registry.digitalocean.com/rocketship
export TAG=v0.1-test
# Engine
docker buildx build \
--platform linux/amd64,linux/arm64 \
-f .docker/Dockerfile.engine \
-t $REGISTRY/rocketship-engine:$TAG . \
--push
# Worker
docker buildx build \
--platform linux/amd64,linux/arm64 \
-f .docker/Dockerfile.worker \
-t $REGISTRY/rocketship-worker:$TAG . \
--push
# Auth broker
docker buildx build \
--platform linux/amd64,linux/arm64 \
-f .docker/Dockerfile.authbroker \
-t $REGISTRY/rocketship-auth-broker:$TAG . \
--push
Re-run these commands whenever you change code; keep the tag stable (for example v0.1-test) so the Helm release pulls the updated digest.
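You can confirm the images and tag landed in DOCR with doctl:

```bash
doctl registry repository list-tags rocketship-engine
doctl registry repository list-tags rocketship-worker
doctl registry repository list-tags rocketship-auth-broker
```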
6. Deploy the Rocketship Helm Chart
Before installing the chart, create the secrets that hold the auth broker’s configuration:
# 1) Postgres connection string (replace with the managed database DSN)
kubectl create secret generic globalbank-auth-broker-database \
--namespace rocketship \
--from-literal=DATABASE_URL='postgres://rocketship:<password>@<host>:5432/rocketship?sslmode=require'
# 2) 32-byte refresh-token signing key (base64 encoded)
kubectl create secret generic globalbank-auth-broker-secrets \
--namespace rocketship \
--from-literal=ROCKETSHIP_BROKER_REFRESH_KEY="$(openssl rand -base64 32)"
The Postgres database backs user/org membership and refresh tokens. The generated ROCKETSHIP_BROKER_REFRESH_KEY is used to HMAC refresh tokens before they are stored, so rotate it carefully (invalidate existing sessions as needed). Enabling the chart’s bundled Postgres (--set postgres.enabled=true) auto-generates the broker database secret, so you can skip the globalbank-auth-broker-database step and simply supply postgres.auth.password in the Helm command.
Create a values override file (deploy/do-values.yaml) or inline the settings:
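If you go the values-file route, a minimal sketch of deploy/do-values.yaml that mirrors the --set flags below (verify the key names against the chart's values schema before relying on it):

```bash
cat > deploy/do-values.yaml <<EOF
temporal:
  host: temporal-frontend.rocketship:7233
  namespace: default
engine:
  image:
    repository: ${REGISTRY}/rocketship-engine
    tag: ${TAG}
worker:
  image:
    repository: ${REGISTRY}/rocketship-worker
    tag: ${TAG}
imagePullSecrets:
  - name: registry-rocketship
ingress:
  enabled: true
  className: nginx
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: GRPC
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
  tls:
    - secretName: globalbank-tls
      hosts:
        - cli.rocketship.globalbank.com
  hosts:
    - host: cli.rocketship.globalbank.com
      paths:
        - path: /
          pathType: Prefix
EOF
# Then install with: helm install rocketship charts/rocketship --namespace rocketship -f deploy/do-values.yaml
```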
# Optional: to use the bundled Postgres chart instead of an external DB, add
#   --set postgres.enabled=true --set postgres.auth.password=$POSTGRES_PASSWORD
helm install rocketship charts/rocketship \
--namespace rocketship \
--set temporal.host=temporal-frontend.rocketship:7233 \
--set temporal.namespace=default \
--set engine.image.repository=$REGISTRY/rocketship-engine \
--set engine.image.tag=$TAG \
--set worker.image.repository=$REGISTRY/rocketship-worker \
--set worker.image.tag=$TAG \
--set imagePullSecrets[0].name=registry-rocketship \
--set ingress.enabled=true \
--set ingress.className=nginx \
--set ingress.annotations."nginx\.ingress\.kubernetes\.io/backend-protocol"=GRPC \
--set ingress.annotations."nginx\.ingress\.kubernetes\.io/ssl-redirect"="true" \
--set ingress.annotations."nginx\.ingress\.kubernetes\.io/proxy-body-size"="0" \
--set ingress.tls[0].secretName=globalbank-tls \
--set ingress.tls[0].hosts[0]=cli.rocketship.globalbank.com \
--set ingress.hosts[0].host=cli.rocketship.globalbank.com \
--set ingress.hosts[0].paths[0].path=/ \
--set ingress.hosts[0].paths[0].pathType=Prefix \
--wait
Confirm the pods are healthy: rocketship-engine, rocketship-worker, rocketship-auth-broker, and rocketship-web-oauth2-proxy should report READY 1/1. Temporal services may restart once while Cassandra and Elasticsearch initialise; that is expected.
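A quick way to watch them settle:

```bash
kubectl get pods -n rocketship
# Rocketship pods should show READY 1/1; Temporal pods may restart once or twice while their dependencies come up
```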
7. Enable Auth for the Web UI (optional)
After the gRPC ingress is live you can optionally front the engine’s HTTP port with oauth2-proxy. Choose the option that matches your organisation:
Option A — GitHub broker (reuse CLI device flow)
- Create or reuse a GitHub OAuth app: visit https://github.com/settings/developers (or your organisation equivalent) and register an OAuth App for the CLI device flow. Record the generated Client ID and Client Secret; you will supply them via Kubernetes secrets. The Authorization callback can be any valid HTTPS URL because device flow does not redirect end users.
- Create the broker secrets: use the client ID/secret captured in the previous step.
# GitHub OAuth app credentials
kubectl create secret generic globalbank-github-oauth \
--namespace rocketship \
--from-literal=ROCKETSHIP_GITHUB_CLIENT_ID=YOUR_GITHUB_CLIENT_ID \
--from-literal=ROCKETSHIP_GITHUB_CLIENT_SECRET=YOUR_GITHUB_CLIENT_SECRET

# Database DSN (managed Postgres or self-hosted)
kubectl create secret generic globalbank-auth-broker-database \
--namespace rocketship \
--from-literal=DATABASE_URL='postgres://rocketship:<password>@<host>:5432/rocketship?sslmode=require'

# Refresh-token HMAC key (32 bytes, base64 encoded)
kubectl create secret generic globalbank-auth-broker-secrets \
--namespace rocketship \
--from-literal=ROCKETSHIP_BROKER_REFRESH_KEY="$(openssl rand -base64 32)"

# JWKS signing material (PEM formatted private key + matching cert)
kubectl create secret generic globalbank-auth-broker-signing \
--namespace rocketship \
--from-file=signing-key.pem=./signing-key.pem

# Web front-door OAuth client (used by oauth2-proxy). Create a SECOND GitHub OAuth app with
# callback URL https://app.globalbank.rocketship.sh/oauth2/callback and plug its credentials below.
kubectl create secret generic oauth2-proxy-credentials \
--namespace rocketship \
--from-literal=clientID=YOUR_WEB_OAUTH_CLIENT_ID \
--from-literal=clientSecret=YOUR_WEB_OAUTH_CLIENT_SECRET \
--from-literal=cookieSecret=$(python -c "import secrets, base64; print(base64.urlsafe_b64encode(secrets.token_bytes(32)).decode())")
- Review charts/rocketship/values-github-selfhost.yaml and charts/rocketship/values-github-web.yaml:
  - Ensure the public hostnames (cli/globalbank/app/globalbank) match your ingress controller.
  - Replace the placeholder YOUR_GITHUB_CLIENT_ID (and the corresponding secret) with the values from your OAuth app.
- The oauth2-proxy preset points its issuer at https://auth.globalbank.rocketship.sh, which is served by the broker deployment.
- Apply the presets alongside the base ingress values:
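A sketch of that upgrade, assuming the release from step 6 (--reuse-values keeps the image and ingress overrides you already applied):

```bash
helm upgrade --install rocketship charts/rocketship \
  --namespace rocketship \
  -f charts/rocketship/values-github-selfhost.yaml \
  -f charts/rocketship/values-github-web.yaml \
  --reuse-values \
  --wait
```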
- Verify the flow: visit https://app.globalbank.rocketship.sh/ in a new session. You should be redirected to GitHub, approve the OAuth app you created, and land on the proxied Rocketship health page (/healthz).
If the CLI returns permission denied (roles: pending) after logging in, call POST https://auth.globalbank.rocketship.sh/api/orgs with the bearer token to create the first organisation/project, or ask an existing admin to invite you. Pending users cannot run suites until they belong to a project.
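For example, with the bearer token obtained from rocketship login (the JSON body shown is hypothetical; check the broker's API for the exact field names):

```bash
# Hypothetical payload; consult the auth broker API for the actual request schema
curl -X POST https://auth.globalbank.rocketship.sh/api/orgs \
  -H "Authorization: Bearer $ROCKETSHIP_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"name": "globalbank"}'
```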
Option B — Bring your own IdP (Auth0/Okta/Azure AD)
- Create the broker secrets (signing key, database DSN, and refresh-token key):
openssl genrsa -out signing-key.pem 2048

kubectl create secret generic globalbank-auth-broker-signing \
--namespace rocketship \
--from-file=signing-key.pem

kubectl create secret generic globalbank-auth-broker-database \
--namespace rocketship \
--from-literal=DATABASE_URL='postgres://rocketship:<password>@<host>:5432/rocketship?sslmode=require'

kubectl create secret generic globalbank-auth-broker-secrets \
--namespace rocketship \
--from-literal=ROCKETSHIP_BROKER_REFRESH_KEY="$(openssl rand -base64 32)"
- Bootstrap oauth2-proxy credentials. Use the web-application client ID/secret from your IdP (you can reuse a single client for both the CLI and the UI if your IdP supports it).
COOKIE_SECRET=$(python3 - <<'PY'
import secrets
print(secrets.token_hex(16))
PY
)
kubectl create secret generic oauth2-proxy-credentials \
--namespace rocketship \
--from-literal=clientID=YOUR_IDP_CLIENT_ID \
--from-literal=clientSecret=YOUR_IDP_CLIENT_SECRET \
--from-literal=cookieSecret=$COOKIE_SECRET
- Review charts/rocketship/values-oidc-web.yaml:
  - Set OAUTH2_PROXY_OIDC_ISSUER_URL to your IdP’s issuer URL (e.g. https://auth.globalbank.com/oidc).
  - Update OAUTH2_PROXY_REDIRECT_URL to match the web hostname (https://app.globalbank.rocketship.sh/oauth2/callback).
- Populate auth.oidc.* with the native-app client (CLI device flow) details from your IdP.
- Apply the preset:
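A sketch of that upgrade (the full command, including image overrides, appears under Option 2 below; --reuse-values keeps the overrides already applied to the release):

```bash
helm upgrade --install rocketship charts/rocketship \
  --namespace rocketship \
  -f charts/rocketship/values-production.yaml \
  -f charts/rocketship/values-oidc-web.yaml \
  --reuse-values \
  --wait
```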
- Verify the flow: browse to https://app.globalbank.rocketship.sh/, complete your IdP login, and confirm the proxied Rocketship health page renders (/healthz).
8. Enable Token Authentication for gRPC (recommended)
Issue a long-lived token for the engine so only authenticated CLI or CI jobs can invoke workflows.
- Generate a token and store it in a Kubernetes secret (replace the example value):
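For example, using the secret name and key that the values snippet below references:

```bash
# Any sufficiently random value works; openssl is used here for convenience
kubectl create secret generic rocketship-engine-token \
  --namespace rocketship \
  --from-literal=token="$(openssl rand -hex 32)"
```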
- Patch your Helm values (or create values-token.yaml) to inject the token:
engine:
env:
- name: ROCKETSHIP_ENGINE_TOKEN
valueFrom:
secretKeyRef:
name: rocketship-engine-token
key: token
Apply it alongside the production values:
helm upgrade --install rocketship charts/rocketship \
--namespace rocketship \
-f charts/rocketship/values-production.yaml \
-f charts/rocketship/values-github-cloud.yaml \
-f charts/rocketship/values-github-web.yaml \
-f values-token.yaml \
--set engine.image.repository=$REGISTRY/rocketship-engine \
--set engine.image.tag=$TAG \
--set worker.image.repository=$REGISTRY/rocketship-worker \
--set worker.image.tag=$TAG \
--set auth.broker.image.repository=$REGISTRY/rocketship-auth-broker \
--set auth.broker.image.tag=$TAG \
--wait
- Validate the flows. Browse to https://app.rocketship.globalbank.com/ in a fresh session; you should be redirected through GitHub and land back on the proxied Rocketship UI after approval. The CLI commands below walk you through device flow (https://github.com/login/device) and persist the refresh token locally.
rocketship profile create cloud grpcs://cli.rocketship.globalbank.com
rocketship profile use cloud
rocketship login
rocketship status
The broker stores only hashed refresh tokens in Postgres (keyed via ROCKETSHIP_BROKER_REFRESH_KEY). Rotate that secret (and the signing key) by updating the Kubernetes secrets and rerunning helm upgrade. If the CLI reports permission denied (roles: pending), use the issued bearer token to call POST https://auth.globalbank.rocketship.sh/api/orgs and create the first organisation, or ask an administrator to invite you.
Option 2 – Bring your own IdP (Auth0, Okta, Azure AD, …)
If you already manage an internal IdP, point the chart at it. Provision the necessary applications in your provider (typically a native app for the CLI device flow and a web app for oauth2-proxy), then update charts/rocketship/values-oidc-web.yaml with your issuer, client IDs, and scopes.
helm upgrade --install rocketship charts/rocketship \
--namespace rocketship \
-f charts/rocketship/values-production.yaml \
-f charts/rocketship/values-oidc-web.yaml \
--set engine.image.repository=$REGISTRY/rocketship-engine \
--set engine.image.tag=$TAG \
--set worker.image.repository=$REGISTRY/rocketship-worker \
--set worker.image.tag=$TAG \
--set auth.broker.image.repository=$REGISTRY/rocketship-auth-broker \
--set auth.broker.image.tag=$TAG \
--wait
After rollout, point your CLI profile at the engine (rocketship profile create <name> grpcs://cli.rocketship.globalbank.com) and run rocketship login. The CLI follows the device flow your IdP exposes and automatically refreshes the issued token on subsequent commands.
9. RBAC Considerations
Regardless of where Rocketship runs (cloud usage-based, dedicated enterprise, or self-hosted), the recommended RBAC model is the same:
- Issue Rocketship JWTs that carry organisation/team roles. The broker (or customer IdP) mints access tokens with claims such as org, project, and role (admin, editor, viewer, service-account).
- Engine enforces on every RPC. When the CLI calls CreateRun, ListRuns, etc., the engine reads the claims and rejects calls from users without the required role. Tokens are short-lived and verified via JWKS, so enforcement is consistent across cloud and self-hosted clusters.
- Role management lives in Rocketship. Maintain an RBAC table in Rocketship Cloud (or the broker) so you can invite users, sync GitHub teams if desired, or import roles from customer IdPs. The engine only consumes the resulting claims; it doesn’t need to know whether they originated from GitHub, Okta, or internal configuration.
- Future enhancements (optional): provide an rbac.yaml or Terraform provider so self-hosted clusters can seed organisations/roles declaratively, and add UI to sync GitHub org/team membership if customers opt in.
This approach lets you offer the same RBAC semantics in every environment. Usage-based customers rely on the GitHub-backed broker, while enterprise tenants with their own IdP simply mint tokens that include the same claim set.
10. Point DNS at the Load Balancer
Create A (or CNAME) records for cli.rocketship.globalbank.com, app.rocketship.globalbank.com, and auth.rocketship.globalbank.com pointing at the ingress load balancer IP (see step 6). DNS propagation usually completes within a minute on DigitalOcean DNS, but public resolvers may take longer.
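If the zone lives in DigitalOcean DNS, doctl can create the records; this sketch assumes the zone is globalbank.com and reads the load balancer IP from the ingress controller service:

```bash
LB_IP=$(kubectl get svc -n ingress-nginx ingress-nginx-controller \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
for host in cli.rocketship app.rocketship auth.rocketship; do
  doctl compute domain records create globalbank.com \
    --record-type A --record-name "$host" --record-data "$LB_IP" --record-ttl 300
done
```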
11. Smoke Test the Endpoint
The Rocketship health endpoint answers gRPC, so a plain HTTPS request returns HTTP 415 with an application/grpc content type, which confirms end-to-end TLS:
curl -v https://cli.rocketship.globalbank.com/healthz
curl -v https://auth.rocketship.globalbank.com/healthz
Create and use the default cloud profile from the CLI (already pointing at cli.rocketship.globalbank.com:443):
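These are the same profile and login commands shown earlier:

```bash
rocketship profile create cloud grpcs://cli.rocketship.globalbank.com
rocketship profile use cloud
rocketship login
rocketship status
```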
If you see a connection refused message against 127.0.0.1:7700, ensure you are running a CLI build that includes the profile resolution fixes introduced in PR #2.
12. Updating the Deployment
- Rebuild and push the images with the same tag (or bump the TAG).
- Run helm upgrade rocketship charts/rocketship ... with the updated values.
- Watch rollout status:
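For example, assuming the deployment names match the pod names from step 6:

```bash
kubectl rollout status deploy/rocketship-engine -n rocketship
kubectl rollout status deploy/rocketship-worker -n rocketship
kubectl rollout status deploy/rocketship-auth-broker -n rocketship
```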
13. Troubleshooting Tips
- CrashLoopBackOff with exec /bin/engine: exec format error indicates the image was built for the wrong architecture. Rebuild with --platform linux/amd64.
- If the worker logs show Namespace <name> is not found, rerun the Temporal namespace creation step and verify temporal.namespace in the Helm values matches.
- curl connecting to 127.0.0.1 usually means DNS hasn’t propagated or the CLI profile points at the wrong port (7700 vs 443). Profiles created with grpcs:// automatically default to port 443.
With these steps you have a durable Rocketship installation bridging a managed Temporal stack, ingress TLS, and CLI profiles—ready for teams to run suites from their laptops or CI pipelines.