HYPERFLEET-683 | ci: update the adapter deployment logic #37
yingzhanredhat wants to merge 1 commit into openshift-hyperfleet:main from
Conversation
Skipping CI for Draft Pull Request.
Walkthrough

Replaces adapter auto-discovery with explicit environment-driven adapter deployment configuration. Adds CLUSTER_TIER0_ADAPTERS_DEPLOYMENT, NODEPOOL_TIER0_ADAPTERS_DEPLOYMENT, ADAPTERS_FILE_DIR and TESTDATA_DIR variables and CLI options. Removes RELEASE_PREFIX and switches Helm release names to namespace-based naming (e.g., api-${NAMESPACE}, sentinel-${NAMESPACE}-${resource}). Adapter discovery now validates entries from environment variables and sources configs from ADAPTERS_FILE_DIR; testdata paths renamed from "clusters-" / "nodepools-" to the new "" layout (e.g., cl-deployment, np-configmap).

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User as CLI
    participant DeployScript as deploy-clm.sh
    participant AdapterLib as lib/adapter.sh
    participant FS as ADAPTERS_FILE_DIR
    participant Helm as Helm/Kubernetes
    User->>DeployScript: invoke (with --cluster-tier0-adapters / --adapters-file-dir)
    DeployScript->>AdapterLib: export CLUSTER_TIER0_ADAPTERS_DEPLOYMENT, NODEPOOL_TIER0_ADAPTERS_DEPLOYMENT, ADAPTERS_FILE_DIR
    AdapterLib->>FS: validate adapter directories for each resource_type|adapter_name
    FS-->>AdapterLib: exists / error
    AdapterLib->>Helm: helm upgrade --install release=adapter-${NAMESPACE}-${adapter_name} (values from ADAPTERS_FILE_DIR)
    Helm-->>AdapterLib: deployment result
    AdapterLib-->>DeployScript: status/logs
    DeployScript-->>User: final status/logs
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Force-pushed f6ace85 to b9d467e
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
The full list of commands accepted by this bot can be found here.

Details
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
deploy-scripts/lib/adapter.sh (1)
196-199: ⚠️ Potential issue | 🟠 Major: Propagate discovery failures instead of returning success

Lines 196-199 and 264-267 treat any `discover_adapters` failure as "no adapters" and return 0. This masks real misconfigurations (missing adapter-config directory, invalid adapter list, missing adapter directory) and can make CI appear green while deployment/uninstall never happened.

Suggested fix

```diff
@@
-    if ! adapters=$(discover_adapters); then
-        log_warning "No adapters found to deploy"
-        return 0
-    fi
+    if ! adapters=$(discover_adapters); then
+        log_error "Adapter discovery failed; aborting deployment"
+        return 1
+    fi
@@
-    if ! adapters=$(discover_adapters); then
-        log_warning "No adapters found to uninstall"
-        return 0
-    fi
+    if ! adapters=$(discover_adapters); then
+        log_error "Adapter discovery failed; aborting uninstall"
+        return 1
+    fi
```

Also applies to: 264-267
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@deploy-scripts/lib/adapter.sh` around lines 196 - 199, The current pattern "if ! adapters=$(discover_adapters); then log_warning ...; return 0" swallows real errors from discover_adapters; change it to capture the command's exit status and propagate failures: call discover_adapters, check its exit code, and if non-zero log an error (use log_error or log_warning with clear context) and return/exit with that non-zero status; only treat a successful command that returns an empty adapter list as the benign "no adapters" case (log_warning and return 0). Apply this same fix wherever the same pattern appears (the discover_adapters invocation at the top and the second occurrence around lines 264-267), referencing the discover_adapters invocation and the log_warning usage to locate the spots to change.
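The distinction the prompt draws, a failing `discover_adapters` versus a successful call that returns an empty list, can be sketched as follows. The function bodies here are illustrative stand-ins for the real logic in `deploy-scripts/lib/adapter.sh`, not the actual implementation:

```shell
#!/usr/bin/env bash
# Stand-ins for the real logging helpers in lib/common.sh (names assumed).
log_error()   { echo "ERROR: $*" >&2; }
log_warning() { echo "WARNING: $*" >&2; }

# Stand-in discovery: fails when the config directory is missing (a real
# misconfiguration), otherwise echoes the adapter list, which may be empty.
discover_adapters() {
  if [[ ! -d "${ADAPTERS_FILE_DIR:-}" ]]; then
    return 1                                   # genuine failure: non-zero exit
  fi
  echo "${CLUSTER_TIER0_ADAPTERS_DEPLOYMENT:-}"  # may legitimately be empty
}

deploy_all_adapters() {
  local adapters
  if ! adapters=$(discover_adapters); then
    log_error "Adapter discovery failed; aborting deployment"
    return 1                                   # propagate instead of masking
  fi
  if [[ -z "${adapters}" ]]; then
    log_warning "No adapters found to deploy"  # benign empty-list case
    return 0
  fi
  echo "deploying: ${adapters}"
}
```

The key point is that the exit status of `discover_adapters` and the emptiness of its output are two separate signals, and only the former should abort the run.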
🧹 Nitpick comments (1)
deploy-scripts/lib/sentinel.sh (1)
109-115: Use exact release matching during uninstall existence checks.

Line 114 uses a prefix grep and can match unintended releases. Prefer exact filtering (same pattern already used in `deploy-scripts/lib/api.sh`).

Proposed change

```diff
-    if ! helm list -n "${NAMESPACE}" | grep -q "^${release_name}"; then
+    if [[ -z "$(helm list -n "${NAMESPACE}" -q -f "^${release_name}$")" ]]; then
         log_warning "Release '${release_name}' not found in namespace '${NAMESPACE}'"
         return 0
     fi
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@deploy-scripts/lib/sentinel.sh` around lines 109 - 115, The exist-check uses a prefix grep and can match unintended releases; update the if condition that checks release_name to perform an exact match (for example use `helm list -n "${NAMESPACE}" -q | grep -xq -- "${release_name}"` or equivalent) so the variable release_name is matched exactly when deciding whether to log the warning/uninstall in the uninstall block that references release_name and NAMESPACE.
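The exact-match idea can be illustrated without a live cluster. `list_releases` below is a hypothetical stand-in for `helm list -n "${NAMESPACE}" -q`; the point is that `grep -xq` (whole-line match) rejects the prefix collisions that `grep -q "^prefix"` would accept:

```shell
#!/usr/bin/env bash
# Exact-match existence check; in the real script, list_releases would be
# replaced by the actual helm invocation.
release_exists() {
  local release_name="$1"
  list_releases | grep -xq -- "${release_name}"   # -x: match the whole line
}

# Simulated `helm list -q` output: two releases sharing a common prefix.
list_releases() {
  printf '%s\n' "sentinel-ns1-clusters" "sentinel-ns1-clusters-extra"
}
```

With the old prefix grep, a check for `sentinel-ns1-clusters` would also "find" `sentinel-ns1-clusters-extra`; with whole-line matching, each release name is matched exactly.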
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@deploy-scripts/.env.example`:
- Line 31: The dotenv example currently uses quoted values for environment
variables (e.g.,
CLUSTER_TIER0_ADAPTERS_DEPLOYMENT="cl-namespace,cl-job,cl-deployment") which
triggers QuoteCharacter warnings from dotenv-linter; remove the surrounding
double quotes for that variable and the other affected variables mentioned in
the comment (the similar entries at the other locations) so they become unquoted
comma-separated values (e.g.,
CLUSTER_TIER0_ADAPTERS_DEPLOYMENT=cl-namespace,cl-job,cl-deployment) to satisfy
the linter while preserving the same value content.
- Around line 46-50: The default value for API_ADAPTERS_CLUSTER currently
pre-populates adapters opposite the "leave empty by default" guidance; change
the API_ADAPTERS_CLUSTER variable so it is empty by default (e.g.,
API_ADAPTERS_CLUSTER="") or remove the preset value and keep the explanatory
comment so tests must explicitly set API_ADAPTERS_CLUSTER per test case; update
the adjacent comment to clarify that adapters must be set per test case when
needed and that an empty default prevents accidental configuration leakage.
In `@deploy-scripts/deploy-clm.sh`:
- Around line 58-61: The script allows running an install with ADAPTERS_FILE_DIR
missing/invalid or with empty CLUSTER_TIER0_ADAPTERS_DEPLOYMENT /
NODEPOOL_TIER0_ADAPTERS_DEPLOYMENT; add validation in parse_arguments() after
action validation to fail fast: check ADAPTERS_FILE_DIR exists and is a
directory (and is readable) and ensure at least one of
CLUSTER_TIER0_ADAPTERS_DEPLOYMENT or NODEPOOL_TIER0_ADAPTERS_DEPLOYMENT is
non-empty when action is install (or the relevant install action string), and
call fatal/exit with a clear error message if validations fail so the installer
cannot proceed with invalid adapter inputs.
In `@deploy-scripts/lib/adapter.sh`:
- Line 95: The release name currently set as local
release_name="adapter-${NAMESPACE}-${adapter_name}" can collide across resource
types; update both occurrences to include the resource kind/type so names are
unique (for example: local
release_name="adapter-${NAMESPACE}-${resource_type}-${adapter_name}" or include
the variable used to differentiate clusters vs nodepools), ensure the variable
used (resource_type or adapter_kind) is defined or passed into the function that
sets release_name, and update all subsequent uses (helm
install/upgrade/uninstall calls that reference release_name) to use the new
release_name format so cluster and nodepool adapters no longer target the same
Helm release.
- Around line 35-40: The script currently trims adapter tokens but doesn't
reject empty entries, allowing a blank adapter_name to match adapter_configs_dir
(e.g., "${adapter_configs_dir}/") and later cause dangerous rm -rf behavior;
update the loops that populate cluster_adapter_array/nodepool arrays to validate
adapter_name after trimming and skip if empty (i.e., after adapter_name=$(echo
... | xargs) add a guard that continues the loop when adapter_name is empty),
and apply the same non-empty validation for the other analogous loops/variables
(references: cluster_adapter_array, adapter_name, adapter_configs_dir,
adapter_dirs) so no empty token is used to build filesystem paths.
---
Outside diff comments:
In `@deploy-scripts/lib/adapter.sh`:
- Around line 196-199: The current pattern "if ! adapters=$(discover_adapters);
then log_warning ...; return 0" swallows real errors from discover_adapters;
change it to capture the command's exit status and propagate failures: call
discover_adapters, check its exit code, and if non-zero log an error (use
log_error or log_warning with clear context) and return/exit with that non-zero
status; only treat a successful command that returns an empty adapter list as
the benign "no adapters" case (log_warning and return 0). Apply this same fix
wherever the same pattern appears (the discover_adapters invocation at the top
and the second occurrence around lines 264-267), referencing the
discover_adapters invocation and the log_warning usage to locate the spots to
change.
---
Nitpick comments:
In `@deploy-scripts/lib/sentinel.sh`:
- Around line 109-115: The exist-check uses a prefix grep and can match
unintended releases; update the if condition that checks release_name to perform
an exact match (for example use `helm list -n "${NAMESPACE}" -q | grep -xq --
"${release_name}"` or equivalent) so the variable release_name is matched
exactly when deciding whether to log the warning/uninstall in the uninstall
block that references release_name and NAMESPACE.
ℹ️ Review info
Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 3f1c6028-8817-44c9-8736-084a216cefcd
📒 Files selected for processing (23)
- deploy-scripts/.env.example
- deploy-scripts/deploy-clm.sh
- deploy-scripts/lib/adapter.sh
- deploy-scripts/lib/api.sh
- deploy-scripts/lib/sentinel.sh
- testdata/adapter-configs/cl-deployment/adapter-config.yaml
- testdata/adapter-configs/cl-deployment/adapter-task-config.yaml
- testdata/adapter-configs/cl-deployment/adapter-task-resource-deployment.yaml
- testdata/adapter-configs/cl-deployment/values.yaml
- testdata/adapter-configs/cl-job/adapter-config.yaml
- testdata/adapter-configs/cl-job/adapter-task-config.yaml
- testdata/adapter-configs/cl-job/adapter-task-resource-job-role.yaml
- testdata/adapter-configs/cl-job/adapter-task-resource-job-rolebinding.yaml
- testdata/adapter-configs/cl-job/adapter-task-resource-job-serviceaccount.yaml
- testdata/adapter-configs/cl-job/adapter-task-resource-job.yaml
- testdata/adapter-configs/cl-job/values.yaml
- testdata/adapter-configs/cl-namespace/adapter-config.yaml
- testdata/adapter-configs/cl-namespace/adapter-task-config.yaml
- testdata/adapter-configs/cl-namespace/values.yaml
- testdata/adapter-configs/np-configmap/adapter-config.yaml
- testdata/adapter-configs/np-configmap/adapter-task-config.yaml
- testdata/adapter-configs/np-configmap/adapter-task-resource-configmap.yaml
- testdata/adapter-configs/np-configmap/values.yaml
Force-pushed b9d467e to 04bd20e
♻️ Duplicate comments (4)
deploy-scripts/lib/adapter.sh (4)
235-235: ⚠️ Potential issue | 🟠 Major: Apply same release naming fix in uninstall function.

Ensure `uninstall_adapter_instance` uses the same naming convention to match the installed release.

Proposed fix

```diff
-    local release_name="adapter-${NAMESPACE}-${adapter_name}"
+    local release_name="adapter-${NAMESPACE}-${resource_type}-${adapter_name}"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@deploy-scripts/lib/adapter.sh` at line 235, uninstall_adapter_instance currently uses a different release naming convention than install, causing mismatched release names; update the release_name computation inside the uninstall_adapter_instance function to use the same pattern as the (fixed) install path, release_name="adapter-${NAMESPACE}-${resource_type}-${adapter_name}", so the uninstall targets the same Helm release created at install time (refer to the release_name variable and uninstall_adapter_instance function to locate and change the code).
35-45: ⚠️ Potential issue | 🟠 Major: Guard against empty adapter names after trimming.

If CLUSTER_TIER0_ADAPTERS_DEPLOYMENT contains a trailing comma (e.g., "cl-namespace,"), the resulting array includes an empty string. After xargs trimming, this empty entry passes through and can match ${adapter_configs_dir}/ (which exists), leading to potential issues downstream, particularly at line 115 where rm -rf "${dest_adapter_dir}" could target an unintended path.

Proposed fix

```diff
     for adapter_name in "${cluster_adapter_array[@]}"; do
         # Trim whitespace
         adapter_name=$(echo "${adapter_name}" | xargs)
+        if [[ -z "${adapter_name}" ]]; then
+            continue
+        fi
         if [[ -d "${adapter_configs_dir}/${adapter_name}" ]]; then
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@deploy-scripts/lib/adapter.sh` around lines 35 - 45, After trimming adapter_name in the for-loop over cluster_adapters, guard against empty names by checking if adapter_name is an empty string and skip it (or log a warning) before testing the directory; update the loop that uses adapter_name (the block that calls xargs, checks [[ -d "${adapter_configs_dir}/${adapter_name}" ]], appends to adapter_dirs, and calls log_error) to return/continue early when adapter_name is empty so you never test or operate on "${adapter_configs_dir}/" inadvertently.
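The trim-then-guard loop can be sketched as a standalone function, assuming the comma-separated format of CLUSTER_TIER0_ADAPTERS_DEPLOYMENT (the function name `parse_adapter_list` is illustrative, not from adapter.sh):

```shell
#!/usr/bin/env bash
# Split a comma-separated adapter list, trim whitespace, and skip blank
# tokens so a path like "${adapter_configs_dir}/" is never built from an
# empty name.
parse_adapter_list() {
  local raw="$1" adapter_name
  local -a result=()
  IFS=',' read -ra tokens <<< "${raw}"
  for adapter_name in "${tokens[@]}"; do
    adapter_name=$(echo "${adapter_name}" | xargs)  # trim surrounding whitespace
    if [[ -z "${adapter_name}" ]]; then
      continue                                      # guard: skip empty entries
    fi
    result+=("${adapter_name}")
  done
  printf '%s\n' "${result[@]}"
}
```

A trailing comma or doubled comma in the input now yields only the valid names instead of an empty entry.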
95-95: ⚠️ Potential issue | 🟠 Major: Release naming can collide across resource types.

The release name adapter-${NAMESPACE}-${adapter_name} doesn't include resource_type. If the same adapter_name is configured for both clusters and nodepools, both deployments target the same Helm release, causing overwrites during install and incorrect uninstalls.

Proposed fix

```diff
-    local release_name="adapter-${NAMESPACE}-${adapter_name}"
+    local release_name="adapter-${NAMESPACE}-${resource_type}-${adapter_name}"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@deploy-scripts/lib/adapter.sh` at line 95, The release name currently built as local release_name="adapter-${NAMESPACE}-${adapter_name}" can collide across resource types; change the construction to include resource_type (e.g., local release_name="adapter-${NAMESPACE}-${resource_type}-${adapter_name}") and update any downstream uses of the release_name variable in this script (install/uninstall/upgrade/helm commands) so the new name is used everywhere; ensure the resource_type variable is set/validated before use and sanitize or quote variables if needed to keep the release name valid for Helm.
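A small sketch of the proposed naming scheme (`build_release_name` is a hypothetical helper, not a function in adapter.sh; NAMESPACE, resource_type, and adapter_name mirror the script's variables):

```shell
#!/usr/bin/env bash
# Include the resource type in the release name so a cluster adapter and a
# nodepool adapter that share an adapter_name map to distinct Helm releases.
build_release_name() {
  local resource_type="$1" adapter_name="$2"
  echo "adapter-${NAMESPACE}-${resource_type}-${adapter_name}"
}
```

With the old scheme, `adapter-${NAMESPACE}-shared` would be produced for both a clusters and a nodepools adapter named `shared`; including the resource type makes the two names distinct.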
50-60: ⚠️ Potential issue | 🟠 Major: Same empty-entry guard needed for nodepool adapters.

Apply the same empty-string validation after trimming to prevent similar issues with NODEPOOL_TIER0_ADAPTERS_DEPLOYMENT.

Proposed fix

```diff
     for adapter_name in "${nodepool_adapter_array[@]}"; do
         # Trim whitespace
         adapter_name=$(echo "${adapter_name}" | xargs)
+        if [[ -z "${adapter_name}" ]]; then
+            continue
+        fi
         if [[ -d "${adapter_configs_dir}/${adapter_name}" ]]; then
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@deploy-scripts/lib/adapter.sh` around lines 50 - 60, After trimming each entry from nodepool_adapters (variable nodepool_adapters and loop variable adapter_name), add the same empty-string guard used for other adapter lists: if adapter_name is empty after trimming, skip to the next iteration (continue) instead of attempting to check the directory or returning an error; this prevents false failures when NODEPOOL_TIER0_ADAPTERS_DEPLOYMENT contains empty entries. Ensure you reference adapter_name, adapter_configs_dir, and adapter_dirs in the updated logic so valid adapters are still appended as "nodepools|${adapter_name}".
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@deploy-scripts/lib/adapter.sh`:
- Line 235: uninstall_adapter_instance currently uses a different release naming
convention than install, causing mismatched release names; update the
release_name computation inside the uninstall_adapter_instance function to use
the same pattern as the (fixed) install path,
release_name="adapter-${NAMESPACE}-${resource_type}-${adapter_name}", so the
uninstall targets the same Helm release created at install time (refer to the
release_name variable and uninstall_adapter_instance function to locate and
change the code).
- Around line 35-45: After trimming adapter_name in the for-loop over
cluster_adapters, guard against empty names by checking if adapter_name is an
empty string and skip it (or log a warning) before testing the directory; update
the loop that uses adapter_name (the block that calls xargs, checks [[ -d
"${adapter_configs_dir}/${adapter_name}" ]], appends to adapter_dirs, and calls
log_error) to return/continue early when adapter_name is empty so you never test
or operate on "${adapter_configs_dir}/" inadvertently.
- Line 95: The release name currently built as local
release_name="adapter-${NAMESPACE}-${adapter_name}" can collide across resource
types; change the construction to include resource_type (e.g., local
release_name="adapter-${NAMESPACE}-${resource_type}-${adapter_name}") and update
any downstream uses of the release_name variable in this script
(install/uninstall/upgrade/helm commands) so the new name is used everywhere;
ensure the resource_type variable is set/validated before use and sanitize or
quote variables if needed to keep the release name valid for Helm.
- Around line 50-60: After trimming each entry from nodepool_adapters (variable
nodepool_adapters and loop variable adapter_name), add the same empty-string
guard used for other adapter lists: if adapter_name is empty after trimming,
skip to the next iteration (continue) instead of attempting to check the
directory or returning an error; this prevents false failures when
NODEPOOL_TIER0_ADAPTERS_DEPLOYMENT contains empty entries. Ensure you reference
adapter_name, adapter_configs_dir, and adapter_dirs in the updated logic so
valid adapters are still appended as "nodepools|${adapter_name}".
ℹ️ Review info
Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: e5dcd95f-f982-4d80-8fa9-4ffd4e3ce7cb
🚧 Files skipped from review as they are similar to previous changes (5)
- deploy-scripts/lib/sentinel.sh
- testdata/adapter-configs/cl-namespace/values.yaml
- testdata/adapter-configs/cl-job/values.yaml
- deploy-scripts/deploy-clm.sh
- testdata/adapter-configs/cl-deployment/values.yaml
Force-pushed 04bd20e to ef671f6
♻️ Duplicate comments (1)
deploy-scripts/deploy-clm.sh (1)
291-301: ⚠️ Potential issue | 🟠 Major: Add fail-fast validation for adapter deployment inputs.

Lines 291-301 add parsing for new adapter flags, but there is still no guard after argument validation to block installs when adapter inputs are invalid. This should fail early when adapter install is enabled.

Proposed fix

```diff
     if [[ "${ACTION}" != "install" && "${ACTION}" != "uninstall" ]]; then
         log_error "Invalid action: ${ACTION}. Must be 'install' or 'uninstall'"
         exit 1
     fi
+
+    if [[ "${ACTION}" == "install" && "${INSTALL_ADAPTER}" == "true" ]]; then
+        if [[ -z "${CLUSTER_TIER0_ADAPTERS_DEPLOYMENT}" && -z "${NODEPOOL_TIER0_ADAPTERS_DEPLOYMENT}" ]]; then
+            log_error "Adapter installation requires at least one of:"
+            log_error "  --cluster-tier0-adapters or --nodepool-tier0-adapters"
+            exit 1
+        fi
+
+        if [[ ! -d "${ADAPTERS_FILE_DIR}" || ! -r "${ADAPTERS_FILE_DIR}" ]]; then
+            log_error "ADAPTERS_FILE_DIR is not a readable directory: ${ADAPTERS_FILE_DIR}"
+            exit 1
+        fi
+    fi

     # Validate at least one component is selected
     if [[ "${INSTALL_API}" == "false" && "${INSTALL_SENTINEL}" == "false" && "${INSTALL_ADAPTER}" == "false" ]]; then
         log_error "At least one component must be selected for installation"
         exit 1
```

Also applies to: 338-355
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@deploy-scripts/deploy-clm.sh` around lines 291 - 301, Add a fail-fast validation step after argument parsing that checks when adapter installation is enabled that CLUSTER_TIER0_ADAPTERS_DEPLOYMENT, NODEPOOL_TIER0_ADAPTERS_DEPLOYMENT and ADAPTERS_FILE_DIR are present and valid; if any required adapter input is missing or invalid, print a clear error and exit 1 to block the install. Implement the same guard for the other parsing block referenced (the block around the second occurrence handling lines 338-355) so both flag sets validate consistently, and include the variable names CLUSTER_TIER0_ADAPTERS_DEPLOYMENT, NODEPOOL_TIER0_ADAPTERS_DEPLOYMENT and ADAPTERS_FILE_DIR in the error text for easy debugging.
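A sketch of the guard as a standalone function, using the variable names from deploy-clm.sh. The function `validate_adapter_inputs` is hypothetical, written with `return` instead of `exit` so it can be tested in isolation; the real fix would live inline in parse_arguments():

```shell
#!/usr/bin/env bash
# Fail fast when adapter installation is requested with invalid inputs.
validate_adapter_inputs() {
  if [[ "${ACTION}" != "install" || "${INSTALL_ADAPTER}" != "true" ]]; then
    return 0   # nothing to validate for uninstall or non-adapter runs
  fi
  if [[ -z "${CLUSTER_TIER0_ADAPTERS_DEPLOYMENT}" && -z "${NODEPOOL_TIER0_ADAPTERS_DEPLOYMENT}" ]]; then
    echo "ERROR: adapter install requires --cluster-tier0-adapters or --nodepool-tier0-adapters" >&2
    return 1
  fi
  if [[ ! -d "${ADAPTERS_FILE_DIR}" || ! -r "${ADAPTERS_FILE_DIR}" ]]; then
    echo "ERROR: ADAPTERS_FILE_DIR is not a readable directory: ${ADAPTERS_FILE_DIR}" >&2
    return 1
  fi
}
```

Calling this immediately after action validation means the installer never reaches the Helm steps with an empty adapter list or a missing config directory.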
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@deploy-scripts/deploy-clm.sh`:
- Around line 291-301: Add a fail-fast validation step after argument parsing
that checks when adapter installation is enabled that
CLUSTER_TIER0_ADAPTERS_DEPLOYMENT, NODEPOOL_TIER0_ADAPTERS_DEPLOYMENT and
ADAPTERS_FILE_DIR are present and valid; if any required adapter input is
missing or invalid, print a clear error and exit 1 to block the install.
Implement the same guard for the other parsing block referenced (the block
around the second occurrence handling lines 338-355) so both flag sets validate
consistently, and include the variable names CLUSTER_TIER0_ADAPTERS_DEPLOYMENT,
NODEPOOL_TIER0_ADAPTERS_DEPLOYMENT and ADAPTERS_FILE_DIR in the error text for
easy debugging.
ℹ️ Review info
Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 86c932bc-2406-417a-9961-dec809ef48a2
🚧 Files skipped from review as they are similar to previous changes (4)
- deploy-scripts/lib/adapter.sh
- testdata/adapter-configs/np-configmap/values.yaml
- deploy-scripts/.env.example
- testdata/adapter-configs/cl-deployment/values.yaml
```
        log_error "Failed to parse adapter directory name: ${dir_name}"
        return 1
    fi
    # Extract resource_type and adapter_name from format: resource_type|adapter_name
```
The old parse_adapter_name() call provided validation for both install and
uninstall. After the refactor, install_adapter_instance got inline validation
(L101-104), but uninstall_adapter_instance did not — the descriptor is parsed
without any check.
Should be straightforward to add the same guard here:

```shell
# Extract resource_type and adapter_name from format: resource_type|adapter_name
local resource_type="${dir_name%%|*}"
local adapter_name="${dir_name##*|}"
# Validate the descriptor format and ensure both parts are non-empty
if [[ -z "${resource_type}" || -z "${adapter_name}" || "${dir_name}" != *"|"* ]]; then
    log_error "Invalid adapter descriptor '${dir_name}'. Expected format: resource_type|adapter_name"
    return 1
fi
```

```
API_ADAPTERS_CLUSTER=cl-namespace,cl-job,cl-deployment
```
```
# Adapters for API nodepool configuration
# API_ADAPTERS_NODEPOOL=""
```
Also give it the default value `np-configmap`.
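The `%%|*` / `##*|` parameter expansions in the descriptor guard suggested above can be exercised with a small illustrative sketch; `parse_descriptor` is hypothetical, not the real adapter.sh code:

```shell
#!/usr/bin/env bash
# Parse a "resource_type|adapter_name" descriptor; echo "type name" on
# success, return 1 on any malformed input.
parse_descriptor() {
  local dir_name="$1"
  local resource_type="${dir_name%%|*}"   # everything before the first '|'
  local adapter_name="${dir_name##*|}"    # everything after the last '|'
  # Reject missing separator or an empty half on either side
  if [[ -z "${resource_type}" || -z "${adapter_name}" || "${dir_name}" != *"|"* ]]; then
    return 1
  fi
  echo "${resource_type} ${adapter_name}"
}
```

Note the `!= *"|"*` check is what catches inputs with no separator at all, where both expansions would otherwise return the full string and pass the non-empty tests.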
Update the E2E deployment scripts to use explicit environment variable configuration for adapter deployment instead of auto-discovery.