Deploy the BlazeMeter private location to your Kubernetes cluster using a Helm chart. The chart allows advanced/custom configuration of your BlazeMeter private location deployment.
- A BlazeMeter account
- A Kubernetes cluster
- Latest Helm installed
- The Kubernetes cluster must fulfill the BlazeMeter private location requirements
To start, you will need the Harbour_ID, Ship_ID, and Auth_token from BlazeMeter. You can generate these either from the BlazeMeter GUI or through the API, as described below.
- Get the Harbour_ID, Ship_ID and Auth_token through the BlazeMeter GUI
- Log in to BlazeMeter and create a private location.
- Copy the Harbour_ID once the private location has been created in BlazeMeter.
- Create an agent.
- Copy the Ship_ID and Auth_token; the Harbour_ID is also shown when you click the add agent button.
- Get the Harbour_ID, Ship_ID and Auth_token through the BlazeMeter API
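A minimal sketch of the API route, assuming you have already generated a BlazeMeter API key ID and secret in your account settings. The endpoint path and response field names below are assumptions; verify them against the BlazeMeter API documentation for your account.

```shell
# Placeholders: substitute your own API key ID and secret.
BLZ_API_KEY_ID="your-api-key-id"
BLZ_API_KEY_SECRET="your-api-key-secret"
BLZ_API="https://a.blazemeter.com/api/v4"

# List your private locations. In the JSON response, each entry's "id" is the
# Harbour_ID, and the agents (ships) attached to it carry the Ship_ID values.
# The trailing "|| true" keeps the sketch from aborting a calling script when
# the request fails (e.g., no network access).
curl -s --max-time 15 -u "${BLZ_API_KEY_ID}:${BLZ_API_KEY_SECRET}" \
  "${BLZ_API}/private-locations" || true
```

The Auth_token for an agent is returned when the agent is created, whether through the GUI or the API.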
- Pull/download the chart tar file from the GitHub repository (Download the latest Chart)
- Untar the chart:
```shell
tar -xvf helm-crane(version).tgz
```
- Open the values file to apply configurations as per your deployment requirements.
Before installing the chart, you must provide your BlazeMeter harbour_id, ship_id, and authtoken in the values.yaml file. These values are required for the Crane deployment to register and authenticate with BlazeMeter.
Refer to 2.0 for instructions on how to obtain these values.
Example:
```yaml
env:
  authtoken: "YOUR_AUTH_TOKEN"
  harbour_id: "YOUR_HARBOUR_ID"
  ship_id: "YOUR_SHIP_ID"
```
- Replace the example values above with your actual credentials.
If you want to keep your credentials secure and not store them directly in values.yaml, you can use one of the following integrations:
- SecretProviderClass (CSI Driver)
- ExternalSecrets Operator
When either of these integrations is enabled (secretProviderClass.enable: yes or externalSecretsOperator.enable: yes), the env.authtoken, env.harbour_id, and env.ship_id values in values.yaml will be ignored, and the credentials will be sourced from your external secret store.
Example:
```yaml
secretProviderClass:
  enable: yes
  provider: aws
  # ...other configuration...
```
or
```yaml
externalSecretsOperator:
  enable: yes
  # ...other configuration...
```
Important:
- Only set credentials in one place. If both env and a secret integration are set, the secret integration takes precedence.
- Do not commit sensitive values to version control.
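As an alternative to editing values.yaml, the same env values can be supplied on the command line with Helm's --set flag. This is a sketch: the release name, chart path, and namespace below are placeholders.

```shell
RELEASE="crane"        # placeholder release name
CHART="./helm-crane"   # placeholder path to the unpacked chart
NAMESPACE="blazemeter" # placeholder namespace

# Guarded so the sketch is a no-op on machines without helm installed;
# "|| true" tolerates failure when no cluster is reachable.
if command -v helm >/dev/null 2>&1; then
  helm install "$RELEASE" "$CHART" --namespace "$NAMESPACE" \
    --set env.authtoken="YOUR_AUTH_TOKEN" \
    --set env.harbour_id="YOUR_HARBOUR_ID" \
    --set env.ship_id="YOUR_SHIP_ID" || true
fi
```

Note that values passed with --set are still stored in the Helm release metadata, so the secret-store integrations above remain the safer option for sensitive credentials.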
The .Values.deployment section in values.yaml controls how the main Crane deployment is created, including the service account, RBAC roles, and the restart policy of the deployment (should it fail).
Example:
```yaml
deployment:
  role: # (Optional) Name of an existing Role to use in the namespace. If not set, defaults to <releaseName>-role.
  clusterrole: # (Optional) Name of an existing ClusterRole to use. If not set, defaults to <releaseName>-clusterrole.
  serviceAccount:
    create: false # Set to true to create a new ServiceAccount, or false to use an existing one.
    name: # (Optional) Name of the ServiceAccount to use. Leave empty to use the default.
    annotations: # (Optional) Annotations to add to the ServiceAccount.
      eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/example-role
      custom.annotation/key: custom-value
  restartPolicy: # (Optional) Pod restart policy. Defaults to "Always".
```
- role: Use an existing Kubernetes Role for RBAC. Leave empty to let the chart create one.
- clusterrole: Use an existing ClusterRole for cluster-wide permissions. Leave empty to let the chart create one.
- serviceAccount.create: If true, the chart creates a new ServiceAccount. If false, you must specify an existing ServiceAccount in serviceAccount.name.
- serviceAccount.name: Name of the ServiceAccount to use. If empty, the default ServiceAccount is used.
- serviceAccount.annotations: Add custom annotations (e.g., for IAM roles or workload identity).
- restartPolicy: Pod restart policy (Always, OnFailure, or Never). Defaults to Always.
Notes:
- If your cluster uses IAM roles for service accounts (IRSA) or workload identity, add the required annotations under serviceAccount.annotations.
- If create: false, the chart will not create or modify the existing ServiceAccount, and the annotations in values.yaml will be ignored.
- If you want to use pre-existing RBAC roles, specify their names in role and clusterrole.
- For most installations, you can leave these fields at their defaults unless you have specific security or compliance requirements.
Example for creating a new ServiceAccount with a custom IAM role:
```yaml
deployment:
  serviceAccount:
    create: true
    name: my-crane-sa
    annotations:
      eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/my-crane-role
```
The chart supports overriding the default images used for Crane and its components through the imageOverride section in your values.yaml file. This allows you to specify custom registries, images, tags, and pull policies for all relevant containers.
Example configuration:
```yaml
imageOverride:
  docker_registry: "gcr.io/<custom-registry>"
  craneImage: "gcr.io/<custom-registry>/blazemeter/crane"
  tag: "latest-master"
  auto_update: true
  auto_update_running_containers: false
  executorImages:
    "taurus-cloud:latest": "pathToYourRepo/taurus-cloud:version"
    "torero:latest": "pathToYourRepo/torero:version"
    "service-mock:latest": "pathToYourRepo/service-mock:version"
    "mock-pc-service:latest": "pathToYourRepo/mock-pc-service:version"
    "sv-bridge:latest": "pathToYourRepo/sv-bridge:version"
    "doduo:latest": "pathToYourRepo/doduo:version"
  pullPolicy: "Always"
  testImage: "gcr.io/verdant-bulwark-278/cranehook"
  testTag: "latest"
```
- docker_registry: Custom Docker registry for all images.
- craneImage: Path to the Crane image.
- tag: Image tag to use.
- auto_update: Enable or disable automatic updates.
- auto_update_running_containers: Control auto-update for running containers.
- executorImages: Map of executor/component images to override.
- pullPolicy: Image pull policy (Always, IfNotPresent, etc.).
- testImage and testTag: Image and tag for the test hook.
Note:
- If you do not need to override images, you can leave this section commented or empty, and the chart will use the default images provided by BlazeMeter.
If your environment requires the use of a proxy, you can enable and configure proxy settings for the Crane deployment. Set enable to yes and provide the relevant proxy URLs in your values.yaml file.
Example configuration:
```yaml
proxy:
  enable: yes
  http_path: "http://your-http-proxy:port"
  https_path: "https://your-https-proxy:port"
  no_proxy: "kubernetes.default,127.0.0.1,localhost,myHostname.com"
```
- enable: Set to yes to activate proxy configuration.
- http_path: (Optional) HTTP proxy URL.
- https_path: (Optional) HTTPS proxy URL.
- no_proxy: (Optional) Comma-separated list of hosts or domains that should bypass the proxy.
Note:
- Only set the proxy values if your cluster requires outbound traffic to go through a proxy. The no_proxy field helps exclude internal or local addresses (defaults to "kubernetes.default,127.0.0.1,localhost").
- If you plan to configure the Kubernetes installation to use CA certificates, make changes to the following section of the values.yaml file:
  - Change enable to yes.
  - Provide the certificate file names for both (ca_subpath & aws_subpath). Copy/move these cert files into the same directory as this chart and provide just the names of the certs instead of the complete path.
```yaml
ca_bundle:
  enable: no
  request_ca_bundle: "certificate.crt"
  aws_ca_bundle: "certificate2.crt"
  volume:
    volume_name: "volume-cm"
    mount_path: "/var/cm"
    readOnly: true
```
[4.6] Adding gridProxy configuration (only configure if required, and for GUI functional testing only)
- If you plan to configure your Crane installation to use gridProxy, make changes to the following section of the values.yaml file. Grid Proxy enables you to run Selenium functional tests in BlazeMeter without using a local server. You can run Grid Proxy over the HTTPS protocol using the following methods:
```yaml
gridProxy:
  enable: yes
  a_environment: 'https://your.environment.net'
  tlsKeyGrid: "certificate.key" # The private key for the domain used to run the BlazeMeter Grid proxy over HTTPS. Value in string format.
  tlsCertGrid: "certificate.crt" # The public certificate for the domain used to run the BlazeMeter Grid proxy over HTTPS. Value in string format.
  mount_path: "/etc/ssl/certs/doduo"
  doduoPort: 9070 # The user-defined port where to run Doduo (BlazeMeter Grid Proxy). By default, Doduo listens on port 8000.
  volume:
    volume_name: "tls-files"
    mount_path: "/etc/ssl/certs/doduo"
    readOnly: true
```
- If you plan to deploy the BlazeMeter Crane as a non-privileged installation, make changes to the following part of the values file. Change enable to yes and the deployment and its pods will automatically run as non-root/non-privileged. You can amend runAsGroup and runAsUser to any value of your choice. The same user/group ID must be used for both Crane and child resources.
```yaml
non_privilege_container:
  enable: no
  runAsGroup: 1337
  runAsUser: 1337
```
Note:
- Non-root deployment requires an additional feature to be enabled at the account level; please contact support to enable this feature.
- This will automatically configure securityContext.Capabilities to drop all for Crane and child resources.
If your Private Location will run service-virtualisation (mock services), enable the service_virtualization section in your values.yaml file. This allows you to expose mock services using either Istio or NGINX ingress controllers.
```yaml
service_virtualization:
  enable: yes
  ingressType: nginx # or istio, depending on your cluster setup
  credentialName: "wildcard-credential"
  web_expose_subdomain: "mydomain.local"
```
- enable: Set to yes to activate service virtualisation.
- ingressType: Choose nginx or istio based on your ingress controller.
- credentialName: Name of the credential (e.g., wildcard certificate) to use.
- web_expose_subdomain: Subdomain to expose mock services.
Note:
- Only one ingress type can be enabled at a time. Ensure the corresponding ingress controller (NGINX or Istio) is installed and configured in your cluster.
- For more details, see the Blazemeter guide.
You can add custom labels to the main Crane deployment, crane pod and its child resources (such as executor pods) using the following sections in your values.yaml file. This is useful for organizing, tracking, or applying policies to your resources.
There are two label sections:
- labelsCrane: Labels for the Crane pod and deployment.
- labelsExecutors: Labels for child resources (executors/agents).
Each section has:
- enable: Set to yes to apply the labels.
- syntax: Provide your labels in JSON format.
Example configuration:
```yaml
labelsCrane:
  enable: yes
  syntax: {"purpose": "loadtest", "owner": "devops"}
labelsExecutors:
  enable: yes
  syntax: {"type": "executor", "region": "us-east-1"}
```
Notes:
- Use these sections to ensure your Crane deployment and all related resources are labeled according to your organization’s standards.
- These labels are added in addition to any default labels set by the Helm chart and BlazeMeter.
- If enable is set to no, labels will not be applied for that resource type.
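Once applied, the labels can be used to select pods with kubectl. A sketch, using the label values from the example configuration above (the namespace is a placeholder):

```shell
NAMESPACE="blazemeter" # placeholder namespace
# Guarded so the sketch is a no-op on machines without kubectl;
# "|| true" tolerates failure when no cluster is reachable.
if command -v kubectl >/dev/null 2>&1; then
  # Crane pod(s), selected by a labelsCrane entry:
  kubectl -n "$NAMESPACE" get pods -l purpose=loadtest || true
  # Executor pods, selected by a labelsExecutors entry:
  kubectl -n "$NAMESPACE" get pods -l type=executor || true
fi
```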
- This configuration is used to specify the tolerations for Crane and child resources. Switch enable to yes and add tolerations for Crane and child resources in JSON format, as per the example:
```yaml
tolerationCrane:
  enable: no
  syntax: [{ "effect": "NoSchedule", "key": "lifecycle", "operator": "Equal", "value": "spot" }]
tolerationExecutors:
  enable: no
  syntax: [{ "effect": "NoSchedule", "key": "lifecycle", "operator": "Equal", "value": "spot" }]
```
Note:
- tolerationCrane holds the tolerations declared for Crane, and tolerationExecutors holds the tolerations declared for child resources.
- This configuration is used to specify the node selectors for Crane and child resources. Switch enable to yes and add node selectors for Crane and child resources in JSON format, as per the example:
```yaml
nodeSelectorCrane:
  enable: no
  syntax: {"label_1": "label_1_value", "label_2": "label_2_value"}
nodeSelectorExecutor:
  enable: no
  syntax: {"label_1": "label_1_value", "label_2": "label_2_value"}
```
Note:
- nodeSelectorCrane holds the node selectors declared for Crane, and nodeSelectorExecutor holds the node selectors declared for child resources.
You can specify CPU, memory, and ephemeral storage resource requests and limits for both the Crane deployment and its child resources. Use the resourcesCrane section for the main Crane deployment, and resourcesExecutors for child resources (executors/agents).
Add or update the following in your values.yaml file:
```yaml
# Resource requests and limits for the Crane deployment.
resourcesCrane:
  requests:
    CPU: 250m
    MEM: 512Mi
    storage: 100 # Ephemeral storage in MB (optional)
  limits:
    CPU: 1 # Example: 1 core
    MEM: 2Gi
    storage: 1024 # Ephemeral storage in MB (optional)
# Resource requests and limits for child resources (executors/agents).
resourcesExecutors:
  requests:
    CPU: 1000m
    MEM: 4096 # This value should be an integer (Mi), unlike other values that support k8s standard notation.
    storage: 100 # Ephemeral storage in MB (optional)
  limits:
    CPU: 2
    MEM: 8Gi
    storage: 1024
```
Notes:
- resourcesCrane applies to the main Crane deployment; resourcesExecutors applies to child resources created by the agent.
- For resourcesExecutors, the MEM value should be an integer (in Mi), not a string (e.g., 4096, not 4096Mi).
- The storage field is optional and represents ephemeral storage in MB.
- If you do not need to set resource limits or requests, you can omit these sections or leave them empty.
A Pod Disruption Budget (PDB) ensures that a minimum number of pods remain available during voluntary disruptions (such as node drains or cluster upgrades). You can configure a PDB for the Crane deployment by enabling the following settings in your values.yaml file.
- Enable PDB: Set enable to yes to activate the PDB.
- minAvailable / maxUnavailable: Specify either minAvailable (minimum pods that must be available) or maxUnavailable (maximum pods that can be unavailable). If both are set, minAvailable takes precedence.
- matchLabels: Specify the labels to match pods for the PDB.
Example configuration:
```yaml
podDisruptionBudget:
  enable: yes
  # Only one of minAvailable or maxUnavailable should be set.
  minAvailable: 1
  # maxUnavailable: 1
  matchLabels: {"app": "crane"}
```
Note:
- If you do not require a PDB, leave enable as no.
The SecretProviderClass resource is used with the Secrets Store CSI Driver to mount secrets, keys, or certificates from external secret management systems (such as Azure Key Vault, AWS Secrets Manager, or HashiCorp Vault) into Kubernetes pods as files or Kubernetes secrets.
You can enable and configure SecretProviderClass for the Crane deployment by updating the following section in your values.yaml file:
- Enable SecretProviderClass: Set enable to yes to activate the integration.
- provider: Specify the external secrets provider (e.g., azure, aws, vault).
- objects: Add a list of provider-specific objects (such as secrets, aliases, keys, etc.).
- secretObjects: (Optional) Define Kubernetes secrets to be created from the mounted content.
- envName: This is not a standard SecretProviderClass parameter; it specifies the environment variable that the specific secret is going to replace/populate.
Example configuration:
```yaml
secretProviderClass:
  enable: yes
  provider: aws
  # This is in JSON, to allow users to configure a different spec, e.g. secretPath, secretKey, objectAlias, etc.
  objects: [{ "objectName": "arn:aws:secretsmanager:ap-southeast-2:{{AWS ACCOUNT}}:secret:harbour-id-{{dummy}}","objectType": "secretsmanager","objectAlias": "harbour-id-opl"},{"objectName": "arn:aws:secretsmanager:ap-southeast-2:{{AWS ACCOUNT}}:secret:ship-id-{{dummy}}","objectType": "secretsmanager","objectAlias": "ship-id-opl"}]
  secretObjects:
    # Comment out the below section if you do not plan to create secrets in the namespace.
    - secretName: auth-token
      type: Opaque
      data:
        - key: auth-token-key
          objectName: auth-token-opl
      envName: AUTH_TOKEN
    - secretName: harbour-id
      type: Opaque
      data:
        - key: harbour-id-key
          objectName: harbour-id-opl
      envName: HARBOR_ID
    - secretName: ship-id
      type: Opaque
      data:
        - key: ship-id-key
          objectName: ship-id-opl
      envName: SHIP_ID
```
Notes:
- You can specify as many as you need in the same map/slice fashion. The chart is designed to loop over these items.
- The parameters and secretObjects fields should be customized based on your secrets provider and use case.
- If you do not require SecretProviderClass integration, leave enable as no.
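After enabling the integration, you can check that the SecretProviderClass and the synced Kubernetes secrets exist. A sketch, using the secret names from the example above (the namespace is a placeholder):

```shell
NAMESPACE="blazemeter" # placeholder namespace
# Guarded so the sketch is a no-op on machines without kubectl;
# "|| true" tolerates failure when no cluster is reachable.
if command -v kubectl >/dev/null 2>&1; then
  # The SecretProviderClass resource created by the chart:
  kubectl -n "$NAMESPACE" get secretproviderclass || true
  # Secrets created from secretObjects; note the CSI driver only creates these
  # once a pod has actually mounted the secrets volume:
  kubectl -n "$NAMESPACE" get secret auth-token harbour-id ship-id || true
fi
```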
The ExternalSecrets Operator allows you to synchronize secrets from external secret management systems (such as AWS Secrets Manager or Google Cloud Secret Manager) into Kubernetes secrets. This integration is useful if you want your Crane deployment to automatically fetch and manage secrets from your external provider.
You can enable and configure the ExternalSecrets Operator for the Crane deployment by updating the following section in your values.yaml file:
- Enable ExternalSecrets Operator: Set enable to yes to activate the integration.
- volume: (Optional) Configure the volume name, mount path, and readOnly flag for mounting secrets.
- externalSecret: Configure the ExternalSecret resource:
  - name: Name of the ExternalSecret resource.
  - refreshInterval: How often the operator should refresh the secret.
  - target.name: Name of the Kubernetes Secret to create.
  - data: List of secrets to fetch, mapping secretKey (Kubernetes key) to remoteRef.key (external secret name) and envName (environment variable to populate).
- secretStore: Configure the SecretStore resource:
  - name: Name of the SecretStore.
  - provider: Configure your secrets provider (e.g., AWS or GCP).
    - AWS: Set enable for aws to true, and specify service and region.
      - authSecretRef: (Optional) Use this if you want to authenticate to AWS using static credentials (not recommended for production).
        - accessKeyID: Reference to a Kubernetes Secret containing your AWS access key ID.
        - secretAccessKey: Reference to a Kubernetes Secret containing your AWS secret access key.
      - If authSecretRef.enable is false, the chart will use the service account associated with the deployment (recommended).
    - GCP: Set enable for gcpsm to true, and specify projectID.
      - secretRef: (Optional) Use this if you want to authenticate to GCP using a static service account key.
        - secretAccessKeySecretRef: Reference to a Kubernetes Secret containing your GCP credentials.
      - If secretRef.enable is false, the chart will use Workload Identity with the service account (recommended).
Example configuration:
```yaml
externalSecretsOperator:
  enable: yes
  volume:
    name:
    readOnly:
    path:
  externalSecret:
    name: blaze-external-secret
    refreshInterval: "15s"
    target:
      name: blazemeter-secrets-store
    data:
      - secretKey: ship-id
        remoteRef:
          key: ship-id
        envName: SHIP_ID
      - secretKey: harbour-id
        remoteRef:
          key: harbour-id
        envName: HARBOR_ID
      - secretKey: auth-token
        remoteRef:
          key: auth-token
        envName: AUTH_TOKEN
  secretStore:
    name: blaze-secret-store
    provider:
      aws:
        enable: true
        service: SecretsManager
        region: ap-southeast-2
        # Optionally configure authentication using static credentials:
        authSecretRef:
          enable: false
        # ---- <Rest of the config> ----
```
Notes:
- Only enable the provider you intend to use (aws or gcpsm). For other providers (such as Azure), please contact support.
- The chart will use the service account associated with the deployment for authentication unless authSecretRef (AWS) or secretRef (GCP) is enabled. authSecretRef and secretRef allow you to reference Kubernetes secrets for static credentials, but using IAM roles (AWS) or Workload Identity (GCP) is recommended for production.
- The data section allows you to map external secrets to Kubernetes secrets and environment variables.
- If you do not require ExternalSecrets Operator integration, leave enable as no.
- For more details, see the ExternalSecrets Operator documentation.
You can add custom annotations to the main Crane deployment and its child resources (such as executor pods) using the annotations sections in your values.yaml file. This is useful for integrating with cluster autoscaler, admission controllers, monitoring systems, service meshes like Istio, or other Kubernetes tools that rely on pod annotations.
There are two annotation sections:
- annotationsCrane: Annotations for the Crane pod.
- annotationsExecutor: Annotations for child resources (executors/agents).
Each section has:
- enable: Set to yes to apply the annotations.
- syntax: Provide your annotations in JSON format.
Common use-case examples:
- Istio Service Mesh Integration:
```yaml
annotationsCrane:
  enable: yes
  syntax: {"sidecar.istio.io/inject": "true", "sidecar.istio.io/proxyCPU": "100m", "sidecar.istio.io/proxyMemory": "128Mi"}
annotationsExecutor:
  enable: yes
  syntax: {"sidecar.istio.io/inject": "true", "traffic.sidecar.istio.io/excludeOutboundPorts": "443,8080"}
```
- Custom Resource Management or Prometheus:
```yaml
annotationsCrane:
  enable: yes
  syntax: {"prometheus.io/scrape": "true", "prometheus.io/port": "5000"}
annotationsExecutor:
  enable: yes
  syntax: {"custom.company.com/workload-type": "load-testing", "custom.company.com/billing-code": "project-alpha"}
```
Notes:
- annotationsCrane applies annotations only to the Crane pod; annotationsExecutor applies annotations to all child resources (executor/agent pods) created by Crane.
- The syntax field must be valid JSON.
- Child resources automatically get "cluster-autoscaler.kubernetes.io/safe-to-evict": "false" by default to prevent premature eviction during tests.
- These annotations are added in addition to any default annotations set by the chart.
- If enable is set to no, custom annotations will not be applied for that resource type.
Use these sections to ensure your Crane deployment and related resources work seamlessly with your cluster's automation, monitoring, service mesh, and management tools.
- Once the values are updated, verify that they are correctly used in the Helm chart:
```shell
helm lint <path-to-chart>
helm template <path-to-chart>
```
This will print the template Helm will use to install this chart. Check the values, and if something is missing, make amendments.
- Install the Helm chart:
```shell
helm install crane /path/to/chart --namespace <namespace name>
```
Here, crane is the name we are setting for the chart release.
After installing the chart, you can verify both the deployment and the underlying Kubernetes infrastructure using Helm’s built-in test hooks. This chart includes a test pod that checks for essential connectivity and configuration, ensuring your environment is ready for BlazeMeter workloads.
To execute the test:
```shell
helm test <release-name> -n <namespace>
```
- Replace <release-name> with the name you used for your Helm release (e.g., crane).
- Replace <namespace> with the namespace where you installed the chart.
The test pod will:
- Validate that the cluster resources are suitable to run Crane and the child deployments.
- Check for required roles and mappings.
- Verify network connectivity and DNS resolution from within the cluster.
- Validate that the required k8s resources are deployed to support Crane and its functionality.
- Success:
If the test passes, you’ll see output similar to:
```
NAME: crane
LAST DEPLOYED: Tue Jun  3 20:24:12 2025
NAMESPACE: default
STATUS: deployed
REVISION: 5
TEST SUITE: cranetesthook
Last Started: Tue Jun  3 20:24:24 2025
Last Completed: Tue Jun  3 20:24:30 2025
Phase: Succeeded
```
This means your chart and infrastructure are ready.
- Failure:
If the test fails, review the logs for details; the --logs flag will point to the issue causing the failure. Common issues include missing secrets, network restrictions, or misconfigured values/specs. Address any reported issues and re-run the test.
- You can add the --logs flag to helm test to automatically print the test pod logs:
```shell
helm test <release-name> --logs
```
- If the test pod is stuck or fails to start, check for k8s scheduler errors (possible with third-party admission controllers), image pull errors, or missing configuration.
If you continue to encounter issues, please contact your cloud or DevOps team for assistance.
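The troubleshooting steps above can be driven with kubectl. A sketch for inspecting a stuck test pod (the namespace and test pod name are placeholders; list the pods first to find the actual name):

```shell
NAMESPACE="blazemeter"         # placeholder namespace
TEST_POD="crane-cranetesthook" # placeholder; use the real test pod name from 'get pods'
# Guarded so the sketch is a no-op on machines without kubectl;
# "|| true" tolerates failure when no cluster is reachable.
if command -v kubectl >/dev/null 2>&1; then
  kubectl -n "$NAMESPACE" get pods || true
  # Scheduling and image-pull problems show up under "Events":
  kubectl -n "$NAMESPACE" describe pod "$TEST_POD" || true
  kubectl -n "$NAMESPACE" logs "$TEST_POD" || true
fi
```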
To upgrade your existing Helm release to a new version of the chart, use the helm upgrade command. This allows you to apply new chart versions or updated configuration values without uninstalling and reinstalling.
```shell
helm upgrade <release-name> /path/to/newchart -n <namespace>
```
- Replace <release-name> with the name of your Helm release (e.g., crane).
- Replace /path/to/newchart with the path to the new or updated chart directory or .tgz file.
- Replace <namespace> with the namespace where your release is installed.
If you have a custom values.yaml file, specify it with the -f flag:
```shell
helm upgrade <release-name> /path/to/newchart -n <namespace> -f /path/to/values.yaml
```
You can specify multiple -f flags to merge several values files.
- Before upgrading, you can preview the changes (requires the helm-diff plugin):
```shell
helm diff upgrade <release-name> /path/to/newchart -n <namespace> -f /path/to/values.yaml
```
- If you want to force resource updates (for example, if only config or secrets changed), add --force:
```shell
helm upgrade <release-name> /path/to/newchart -n <namespace> --force
```
- After upgrading, verify the deployment and run the Helm test as described in the previous section.
If you encounter issues during upgrade, review the output for errors and consult the Helm upgrade documentation.
- To uninstall the Helm chart, run:
```shell
helm uninstall <release-name> -n <namespace name>
```
- 1.4.3: Inclusion of securityContext.Capabilities, which defaults to drop: ["ALL"] in our chart for child resources/executors. (No change to the values YAML file.)
- 1.4.2: Support for custom annotations with Crane and child resources.
- 1.4.1: Added default values for secret wildcard credential for test-hook. Fixed minor condition handling for istio-based test-hook role configuration. No changes to main chart functionality.
- 1.4.0: Added support for Pod Disruption Budgets (PDB) and SecretProviderClass integration. Introduced ExternalSecrets Operator support. Addition of testHook for faster/accurate validation of installation. Simplified the image override usage. Incorporation of ingress setup & usage in one single config. Other minor bug fixes and template enhancements. Extended documentations on chart usage.
- 1.3.1: Readiness and Liveness probes are now added.
- 1.3.0: Chart can support image-override configuration. gridProxy is in working configuration. Resource (CPU & MEM) limits/requests are now configurable for Crane and child resources, as well as for ephemeral storage. Simplified nesting and values configuration. The chart can now work with a non-default serviceAccount. Tolerations, nodeSelector and labels can be declared for Crane and child resources separately. Major fixes and calibrations.
- Anything below 1.3.0: UNSUPPORTED
