diff --git a/test-design/testcases/adapter.md b/test-design/testcases/adapter.md index adce183..6468c79 100644 --- a/test-design/testcases/adapter.md +++ b/test-design/testcases/adapter.md @@ -2,16 +2,20 @@ ## Table of Contents -1. [Adapter framework can detect and report failures to cluster API endpoints](#test-title-adapter-framework-can-detect-and-report-failures-to-cluster-api-endpoints) -2. [Adapter framework can detect and handle resource timeouts](#test-title-adapter-framework-can-detect-and-handle-resource-timeouts) +1. [Adapter can detect and report invalid K8s resource failures](#test-title-adapter-can-detect-and-report-invalid-k8s-resource-failures) +2. [Adapter can detect and handle precondition timeouts](#test-title-adapter-can-detect-and-handle-precondition-timeouts) +3. [Adapter can recover from crash and process redelivered events](#test-title-adapter-can-recover-from-crash-and-process-redelivered-events) +4. [Adapter can process pending events after restart](#test-title-adapter-can-process-pending-events-after-restart) +5. [Adapter can handle duplicate events for same cluster idempotently](#test-title-adapter-can-handle-duplicate-events-for-same-cluster-idempotently) +6. [API can handle incomplete adapter status reports gracefully](#test-title-api-can-handle-incomplete-adapter-status-reports-gracefully) --- -## Test Title: Adapter framework can detect and report failures to cluster API endpoints +## Test Title: Adapter can detect and report invalid K8s resource failures ### Description -This test validates that the adapter framework correctly detects and reports failures when attempting to create invalid Kubernetes resources on the target cluster. It ensures that when an adapter's configuration contains invalid K8s resource objects, the framework properly handles the API server rejection, logs meaningful error messages, and reports the failure status back to the HyperFleet API with appropriate condition states and error details. 
+This test validates that the adapter framework correctly detects and reports failures when attempting to create invalid Kubernetes resources on the target cluster. It ensures that when an adapter's configuration contains invalid K8s resource objects, the framework properly handles the API server rejection and reports the failure status back to the HyperFleet API with appropriate condition states and error details. --- @@ -32,55 +36,55 @@ This test validates that the adapter framework correctly detects and reports fai ### Preconditions 1. Environment is prepared using [hyperfleet-infra](https://github.com/openshift-hyperfleet/hyperfleet-infra) with all required platform resources 2. HyperFleet API and HyperFleet Sentinel services are deployed and running successfully +3. A dedicated test adapter is deployed via Helm with AdapterConfig containing invalid K8s resource objects --- ### Test Steps -#### Step 1: Test template rendering errors -**Action:** -- Configure AdapterConfig with invalid AdapterConfig (invalid K8s resource object) -- Deploy the test adapter - -**Expected Result:** -- Adapter detects template rendering error -- Log reports failure with clear error message - -#### Step 2: Send POST request to create a new cluster +#### Step 1: Send POST request to create a new cluster **Action:** - Execute cluster creation request: ```bash curl -X POST ${API_URL}/api/hyperfleet/v1/clusters \ -H "Content-Type: application/json" \ - -d + -d @testdata/payloads/clusters/cluster-request.json ``` **Expected Result:** - API returns successful response -#### Step 3: Wait for timeout and Verify Timeout Handling +#### Step 2: Verify adapter status reports failure **Action:** -- Wait for some minutes -- Verify adapter status +- Poll adapter statuses: +```bash +curl -X GET ${API_URL}/api/hyperfleet/v1/clusters/{cluster_id}/statuses +``` **Expected Result:** +- The test adapter reports `Available` condition with `status: "False"`, with reason indicating invalid K8s resource + 
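When this check is automated, the polling in the step above can be factored into a small helper. Below is a minimal Python sketch, assuming the statuses payload keeps the `items[].adapter` / `items[].conditions[]` shape used elsewhere in these test cases; the adapter name and `reason` value in the demo are illustrative, not actual API output:

```python
import time

def wait_for_condition(fetch_statuses, adapter, cond_type, want_status,
                       timeout=120.0, interval=5.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll until `adapter` reports condition `cond_type` with `want_status`.

    `fetch_statuses` is any callable returning the decoded JSON of
    GET /clusters/{cluster_id}/statuses; injecting it keeps the helper
    testable without a live API.
    """
    deadline = clock() + timeout
    while True:
        for item in fetch_statuses().get("items", []):
            if item.get("adapter") != adapter:
                continue
            for cond in item.get("conditions", []):
                if cond.get("type") == cond_type and cond.get("status") == want_status:
                    return cond
        if clock() >= deadline:
            raise TimeoutError(f"{adapter}: {cond_type} != {want_status} after {timeout}s")
        sleep(interval)

# Demo against a canned payload (adapter name and reason are hypothetical):
sample = {"items": [{"adapter": "test-adapter", "conditions": [
    {"type": "Available", "status": "False",
     "reason": "InvalidResource", "message": "Invalid Kubernetes object"}]}]}
cond = wait_for_condition(lambda: sample, "test-adapter", "Available", "False")
print(cond["reason"])  # → InvalidResource
```

In a real run, `fetch_statuses` would wrap the `curl`/HTTP GET shown above; the timeout keeps a misbehaving adapter from hanging the suite.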
+#### Step 3: Cleanup resources + +**Action:** +- Delete the namespace created for this cluster: ```bash - curl -X GET ${API_URL}/api/hyperfleet/v1/clusters//statuses \ - | jq -r '.items[] | select(.adapter=="") | .conditions[] | select(.type=="Available")' +kubectl delete namespace {cluster_id} +``` +- Uninstall the test adapter Helm release - # Example: - # { - # "type": "Available", - # "status": "False", - # "reason": "`invalid k8s object` resource is invalid", - # "message": "Invalid Kubernetes object" - # } +**Expected Result:** +- Namespace and all associated resources are deleted successfully +- Test adapter deployment is removed + +**Note:** This is a workaround cleanup method. Once CLM supports DELETE operations for "clusters" resource type, the namespace deletion should be replaced with: +```bash +curl -X DELETE ${API_URL}/api/hyperfleet/v1/clusters/{cluster_id} ``` --- - -## Test Title: Adapter framework can detect and handle resource timeouts +## Test Title: Adapter can detect and handle precondition timeouts ### Description @@ -104,71 +108,508 @@ This test validates that the adapter framework correctly detects and handles res ### Preconditions 1. Environment is prepared using [hyperfleet-infra](https://github.com/openshift-hyperfleet/hyperfleet-infra) with all required platform resources 2. HyperFleet API and HyperFleet Sentinel services are deployed and running successfully +3. 
A dedicated timeout-adapter is deployed via Helm with AdapterConfig containing preconditions that cannot be met, for example: +```yaml +preconditions: + - name: "clusterStatus" + apiCall: + method: "GET" + url: "{{ .hyperfleetApiBaseUrl }}/api/hyperfleet/{{ .hyperfleetApiVersion }}/clusters/{{ .clusterId }}" + timeout: 10s + retryAttempts: 3 + retryBackoff: "exponential" + capture: + - name: "clusterName" + field: "name" + - name: "clusterPhase" + field: "status.phase" + - name: "generationId" + field: "generation" + conditions: + - field: "clusterPhase" + operator: "in" + values: ["NotReady", "Ready"] +``` --- ### Test Steps -#### Step 1: Configure adapter with timeout setting -**Action:** -- Configure AdapterConfig with non-existed conditions that can't meet the precondition -```yaml - preconditions: - - name: "clusterStatus" - apiCall: - method: "GET" - url: "{{ .hyperfleetApiBaseUrl }}/api/hyperfleet/{{ .hyperfleetApiVersion }}/clusters/{{ .clusterId }}" - timeout: 10s - retryAttempts: 3 - retryBackoff: "exponential" - capture: - - name: "clusterName" - field: "name" - - name: "clusterPhase" - field: "status.phase" - - name: "generationId" - field: "generation" - conditions: - - field: "clusterPhase" - operator: "in" - values: ["NotReady", "Ready"] -``` -- Deploy the test adapter - -**Expected Result:** -- Adapter loads configuration successfully -- Adapter pods are running successfully -- Adapter logs show successful initialization - -#### Step 2: Send POST request to create a new cluster +#### Step 1: Send POST request to create a new cluster **Action:** - Execute cluster creation request: ```bash curl -X POST ${API_URL}/api/hyperfleet/v1/clusters \ -H "Content-Type: application/json" \ - -d + -d @testdata/payloads/clusters/cluster-request.json ``` **Expected Result:** - API returns successful response -#### Step 3: Wait for timeout and Verify Timeout Handling +#### Step 2: Verify adapter status reports timeout +**Action:** +- Poll adapter statuses: +```bash 
+curl -X GET ${API_URL}/api/hyperfleet/v1/clusters/{cluster_id}/statuses +``` + +**Expected Result:** +- The timeout-adapter reports `Available` condition with `status: "False"`, with reason indicating timeout (e.g., `"reason": "JobTimeout"`) + +#### Step 3: Cleanup resources + +**Action:** +- Delete the namespace created for this cluster: +```bash +kubectl delete namespace {cluster_id} +``` +- Uninstall the timeout-adapter Helm release + +**Expected Result:** +- Namespace and all associated resources are deleted successfully +- Timeout-adapter deployment is removed + +**Note:** This is a workaround cleanup method. Once CLM supports DELETE operations for "clusters" resource type, the namespace deletion should be replaced with: +```bash +curl -X DELETE ${API_URL}/api/hyperfleet/v1/clusters/{cluster_id} +``` + +--- + + +## Test Title: Adapter can recover from crash and process redelivered events + +### Description + +This test validates that when an adapter crashes during event processing, the system ensures that pending events are eventually processed after the adapter recovers. This ensures that no events are lost due to adapter failures and the system maintains eventual consistency. + +--- + +| **Field** | **Value** | +|-----------|-----------| +| **Pos/Neg** | Negative | +| **Priority** | Tier2 | +| **Status** | Draft | +| **Automation** | Not Automated | +| **Version** | MVP | +| **Created** | 2026-02-11 | +| **Updated** | 2026-03-04 | + + +--- + +### Preconditions +1. Environment is prepared using [hyperfleet-infra](https://github.com/openshift-hyperfleet/hyperfleet-infra) +2. HyperFleet API, Sentinel, and Adapter services are deployed +3. A dedicated crash-adapter is deployed via Helm with pre-configured crash behavior (`SIMULATE_RESULT=crash`), separate from the normal adapters used in other tests +4. 
Message broker is configured with appropriate acknowledgment deadline and retry policy + +--- + +### Test Steps + +#### Step 1: Create a cluster to trigger event +**Action:** +- Submit a POST request to create a Cluster resource: +```bash +curl -X POST ${API_URL}/api/hyperfleet/v1/clusters \ + -H "Content-Type: application/json" \ + -d @testdata/payloads/clusters/cluster-request.json +``` + +**Expected Result:** +- API returns successful response with cluster ID + +**Note:** After the cluster is created, Sentinel will detect the new cluster during its polling cycle and publish an event to the broker, which triggers the crash-adapter to receive and process the event. + +#### Step 2: Verify crash-adapter crashes on event receipt +**Action:** +- Monitor crash-adapter pod status: +```bash +kubectl get pods -n hyperfleet -l app.kubernetes.io/instance=crash-adapter -w +``` + +**Expected Result:** +- crash-adapter pod crashes (CrashLoopBackOff or Error state) + +**Note:** The unacknowledged message will be redelivered by the broker, which is verified in Step 4. + +#### Step 3: Restore crash-adapter to normal mode **Action:** -- Wait for some minutes -- Verify adapter status +- Upgrade crash-adapter Helm release with `SIMULATE_RESULT=success` **Expected Result:** +- crash-adapter pod starts and remains Running + +#### Step 4: Verify message redelivery and processing +**Action:** +- Wait for crash-adapter to process redelivered message +- Check cluster status: ```bash - curl -X GET ${API_URL}/api/hyperfleet/v1/clusters//statuses \ - | jq -r '.items[] | select(.adapter=="") | .conditions[] | select(.type=="Available")' +curl -s ${API_URL}/api/hyperfleet/v1/clusters/{cluster_id}/statuses | jq '.' 
+``` + +**Expected Result:** +- crash-adapter eventually processes the cluster event via redelivered message +- Status is reported with appropriate conditions +- No event is lost + +#### Step 5: Cleanup resources +**Action:** +- Delete the namespace created for this cluster: +```bash +kubectl delete namespace {cluster_id} +``` +- Uninstall the crash-adapter Helm release + +**Expected Result:** +- Namespace and all associated resources are deleted successfully +- crash-adapter deployment is removed + +**Note:** This is a workaround cleanup method. Once CLM supports DELETE operations for "clusters" resource type, the namespace deletion should be replaced with: +```bash +curl -X DELETE ${API_URL}/api/hyperfleet/v1/clusters/{cluster_id} +``` + +--- + +### Notes + +- Message redelivery behavior depends on the message broker configuration: + - For Google Pub/Sub: Uses acknowledgment deadline and retry policy + - If adapter crashes before acknowledging message, broker will redeliver after the acknowledgment deadline expires +- Sentinel may also republish events during its polling cycle if generation > observed_generation + +--- + +## Test Title: Adapter can process pending events after restart + +### Description + +This test validates that adapters can recover from restarts and continue processing pending events. Pending events should be eventually processed after the adapter restarts. + +--- + +| **Field** | **Value** | +|-----------|-----------| +| **Pos/Neg** | Positive | +| **Priority** | Tier2 | +| **Status** | Draft | +| **Automation** | Not Automated | +| **Version** | MVP | +| **Created** | 2026-02-11 | +| **Updated** | 2026-03-04 | + + +--- + +### Preconditions +1. HyperFleet system is deployed and running +2. 
Adapter is running normally + +--- + +### Test Steps + +#### Step 1: Create cluster and verify initial processing +**Action:** +- Submit a POST request to create a Cluster resource: +```bash +curl -X POST ${API_URL}/api/hyperfleet/v1/clusters \ + -H "Content-Type: application/json" \ + -d @testdata/payloads/clusters/cluster-request.json +``` +- Wait for adapter to process and verify status: +```bash +curl -X GET ${API_URL}/api/hyperfleet/v1/clusters/{cluster_id}/statuses +``` + +**Expected Result:** +- Cluster created and processed +- Adapter status shows Available=True + +--- + +#### Step 2: Restart adapter pod +**Action:** +- Delete an adapter pod to trigger restart: +```bash +kubectl delete pod -n hyperfleet -l app.kubernetes.io/instance= +``` + +**Expected Result:** +- Adapter pod is terminated +- New adapter pod starts up automatically + +--- + +#### Step 3: Create new cluster while adapter is restarting +**Action:** +- Create another cluster via API during adapter restart window: +```bash +curl -X POST ${API_URL}/api/hyperfleet/v1/clusters \ + -H "Content-Type: application/json" \ + -d @testdata/payloads/clusters/cluster-request.json +``` + +**Expected Result:** +- Cluster created successfully (API is independent of adapter) - # Example: - # { - # "type": "Available", - # "status": "False", - # "reason": "JobTimeout", - # "message": "Validation job did not complete within 30 seconds" - # } +--- + +#### Step 4: Verify adapter processes pending events after restart +**Action:** +- Wait for adapter to fully restart +- Check status of the new cluster: +```bash +curl -X GET ${API_URL}/api/hyperfleet/v1/clusters/{cluster_id}/statuses +``` + +**Expected Result:** +- Both clusters have adapter statuses with Applied=True, Available=True + +#### Step 5: Cleanup resources +**Action:** +- Delete the namespaces created for both clusters: +```bash +kubectl delete namespace {cluster_id_1} +kubectl delete namespace {cluster_id_2} +``` + +**Expected Result:** +- Namespaces 
and all associated resources are deleted successfully + +**Note:** This is a workaround cleanup method. Once CLM supports DELETE operations for "clusters" resource type, the namespace deletion should be replaced with: +```bash +curl -X DELETE ${API_URL}/api/hyperfleet/v1/clusters/{cluster_id} +``` + +--- + +## Test Title: Adapter can handle duplicate events for same cluster idempotently + +### Description + +This test validates that when multiple events are published for the same cluster (e.g., due to retries or rapid updates), the adapter handles them idempotently without creating duplicate resources. + +--- + +| **Field** | **Value** | +|-----------|-----------| +| **Pos/Neg** | Positive | +| **Priority** | Tier1 | +| **Status** | Draft | +| **Automation** | Not Automated | +| **Version** | MVP | +| **Created** | 2026-02-11 | +| **Updated** | 2026-03-04 | + + +--- + +### Preconditions +1. HyperFleet system is deployed and running +2. Adapter uses discovery pattern to check existing resources + +--- + +### Test Steps + +#### Step 1: Create cluster and wait for initial processing +**Action:** +- Submit a POST request to create a Cluster resource: +```bash +curl -X POST ${API_URL}/api/hyperfleet/v1/clusters \ + -H "Content-Type: application/json" \ + -d @testdata/payloads/clusters/cluster-request.json +``` +- Wait for adapter to process and verify status: +```bash +curl -X GET ${API_URL}/api/hyperfleet/v1/clusters/{cluster_id}/statuses +``` + +**Expected Result:** +- Cluster created +- Adapter creates namespace and other resources +- Status reported + +--- + +#### Step 2: Trigger multiple events for same cluster +**Action:** +- Restart Sentinel to trigger re-polling: +```bash +kubectl delete pod -n hyperfleet -l app.kubernetes.io/name=sentinel +``` + +**Expected Result:** +- Multiple events may be published for the same cluster + +--- + +#### Step 3: Verify no duplicate resources created +**Action:** +```bash +# Check namespace count (should be exactly 1) +kubectl 
get namespace {cluster_id} + +# Check job count (should be 1 or managed correctly) +kubectl get jobs -n {cluster_id} +``` + +**Expected Result:** +- Only one namespace exists for the cluster +- No duplicate jobs or resources + +--- + +#### Step 4: Verify status is consistent +**Action:** +```bash +curl -s ${API_URL}/api/hyperfleet/v1/clusters/{cluster_id}/statuses | jq '.items | length' +``` + +**Expected Result:** +- Only one status entry per adapter +- Status reflects latest state (not duplicated) + +#### Step 5: Cleanup resources +**Action:** +- Delete the namespace created for this cluster: +```bash +kubectl delete namespace {cluster_id} +``` + +**Expected Result:** +- Namespace and all associated resources are deleted successfully + +**Note:** This is a workaround cleanup method. Once CLM supports DELETE operations for "clusters" resource type, the namespace deletion should be replaced with: +```bash +curl -X DELETE ${API_URL}/api/hyperfleet/v1/clusters/{cluster_id} +``` + +--- + +## Test Title: API can handle incomplete adapter status reports gracefully + +### Description + +This test validates that the HyperFleet API can gracefully handle adapter status reports that are missing expected fields. When an adapter reports status with incomplete or missing condition fields (e.g., missing `reason`, `message`, or `observed_generation`), the API should accept the report without crashing and store what is available, rather than rejecting the entire status update. + +--- + +| **Field** | **Value** | +|-----------|-----------| +| **Pos/Neg** | Negative | +| **Priority** | Tier2 | +| **Status** | Draft | +| **Automation** | Not Automated | +| **Version** | MVP | +| **Created** | 2026-02-11 | +| **Updated** | 2026-02-11 | + + +--- + +### Preconditions +1. Environment is prepared using [hyperfleet-infra](https://github.com/openshift-hyperfleet/hyperfleet-infra) with all required platform resources +2. HyperFleet API is deployed and running successfully +3. 
A cluster resource has been created and its cluster_id is available + +--- + +### Test Steps + +#### Step 1: Submit a status report with missing optional fields +**Action:** +- Send a status report with minimal fields (missing `reason`, `message`): +```bash +curl -X POST ${API_URL}/api/hyperfleet/v1/clusters/{cluster_id}/statuses \ + -H "Content-Type: application/json" \ + -d '{ + "adapter": "test-incomplete-adapter", + "conditions": [ + { + "type": "Applied", + "status": "True" + } + ] + }' +``` + +**Expected Result:** +- API accepts the status report (HTTP 200/201) +- API does not return an error or crash +- Status is stored with available fields; missing fields default to empty or null + +#### Step 2: Submit a status report with missing observed_generation +**Action:** +- Send a status report without `observed_generation`: +```bash +curl -X POST ${API_URL}/api/hyperfleet/v1/clusters/{cluster_id}/statuses \ + -H "Content-Type: application/json" \ + -d '{ + "adapter": "test-no-generation-adapter", + "conditions": [ + { + "type": "Applied", + "status": "True", + "reason": "ResourceApplied", + "message": "Resource applied successfully" + } + ] + }' +``` + +**Expected Result:** +- API accepts the status report +- `observed_generation` defaults to 0 or null +- Cluster conditions are not corrupted + +#### Step 3: Submit a status report with empty conditions array +**Action:** +- Send a status report with an empty conditions list: +```bash +curl -X POST ${API_URL}/api/hyperfleet/v1/clusters/{cluster_id}/statuses \ + -H "Content-Type: application/json" \ + -d '{ + "adapter": "test-empty-conditions-adapter", + "conditions": [] + }' +``` + +**Expected Result:** +- API either accepts the report with empty conditions, or returns a clear validation error (HTTP 400) +- API does not crash or return HTTP 500 + +#### Step 4: Verify cluster state is not corrupted +**Action:** +- Retrieve the cluster and its statuses: +```bash +curl -s 
${API_URL}/api/hyperfleet/v1/clusters/{cluster_id} | jq '.conditions' +curl -s ${API_URL}/api/hyperfleet/v1/clusters/{cluster_id}/statuses | jq '.' +``` + +**Expected Result:** +- Cluster conditions remain consistent and valid +- Previously reported statuses from other adapters are not affected +- Incomplete status entries are stored but do not interfere with cluster Ready/Available evaluation + +#### Step 5: Cleanup resources + +**Action:** +- Delete the namespace created for this cluster: +```bash +kubectl delete namespace {cluster_id} +``` + +**Expected Result:** +- Namespace and all associated resources are deleted successfully + +**Note:** This is a workaround cleanup method. Once CLM supports DELETE operations for "clusters" resource type, the namespace deletion should be replaced with: +```bash +curl -X DELETE ${API_URL}/api/hyperfleet/v1/clusters/{cluster_id} ``` --- diff --git a/test-design/testcases/cluster.md b/test-design/testcases/cluster.md index 4d9526b..cf9fd94 100644 --- a/test-design/testcases/cluster.md +++ b/test-design/testcases/cluster.md @@ -2,12 +2,27 @@ ## Table of Contents -1. [Clusters Resource Type - Workflow Validation](#test-title-clusters-resource-type---workflow-validation) -2. [Clusters Resource Type - K8s Resources Check Aligned with Preinstalled Clusters Related Adapters Specified](#test-title-clusters-resource-type---k8s-resources-check-aligned-with-preinstalled-clusters-related-adapters-specified) -3. [Clusters Resource Type - Adapter Dependency Relationships Workflow Validation for Preinstalled Clusters Related Dependent Adapters](#test-title-clusters-resource-type---adapter-dependency-relationships-workflow-validation-for-preinstalled-clusters-related-dependent-adapters) +1. [Cluster can complete end-to-end workflow with all required adapters](#test-title-cluster-can-complete-end-to-end-workflow-with-all-required-adapters) +2. 
[Cluster adapters can create K8s resources with correct state and metadata](#test-title-cluster-adapters-can-create-k8s-resources-with-correct-state-and-metadata) +3. [Cluster adapters can enforce dependency order during workflow](#test-title-cluster-adapters-can-enforce-dependency-order-during-workflow) +4. [Cluster can reflect adapter failure in top-level status](#test-title-cluster-can-reflect-adapter-failure-in-top-level-status) +5. [API can reject cluster with invalid name format (RFC 1123)](#test-title-api-can-reject-cluster-with-invalid-name-format-rfc-1123) +6. [API can reject cluster with name exceeding max length](#test-title-api-can-reject-cluster-with-name-exceeding-max-length) + +--- + +## Test Variables + +```bash +# Required adapters from configs/config.yaml under adapters.cluster +export ADAPTER_NAMESPACE='cl-namespace' +export ADAPTER_JOB='cl-job' +export ADAPTER_DEPLOYMENT='cl-deployment' +``` + --- -## Test Title: Clusters Resource Type - Basic Workflow Validation +## Test Title: Cluster can complete end-to-end workflow with all required adapters ### Description @@ -74,9 +89,9 @@ curl -X GET ${API_URL}/api/hyperfleet/v1/clusters/{cluster_id}/statuses **Expected Result:** - Response returns HTTP 200 (OK) status code - All required adapters from config are present in the response: - - `clusters-namespace` - - `clusters-job` - - `clusters-deployment` + - `${ADAPTER_NAMESPACE}` + - `${ADAPTER_JOB}` + - `${ADAPTER_DEPLOYMENT}` - Each required adapter has all required condition types: `Applied`, `Available`, `Health` - Each condition has `status: "True"` indicating successful execution - **Adapter condition metadata validation** (for each condition in adapter.conditions): @@ -125,7 +140,7 @@ curl -X DELETE ${API_URL}/api/hyperfleet/v1/clusters/{cluster_id} --- -## Test Title: Clusters Resource Type - K8s Resources Check Aligned with Preinstalled Clusters Related Adapters Specified +## Test Title: Cluster adapters can create K8s resources with correct 
state and metadata ### Description @@ -179,7 +194,7 @@ curl -X GET ${API_URL}/api/hyperfleet/v1/clusters/{cluster_id}/statuses ``` **Expected Result:** -- All required adapters from config (cl-namespace, cl-job, cl-deployment) are present +- All required adapters from config (${ADAPTER_NAMESPACE}, ${ADAPTER_JOB}, ${ADAPTER_DEPLOYMENT}) are present - Each adapter has all three conditions (`Applied`, `Available`, `Health`) with `status: True` **Note:** Required adapters are configurable via `configs/config.yaml` under `adapters.cluster` @@ -189,7 +204,7 @@ curl -X GET ${API_URL}/api/hyperfleet/v1/clusters/{cluster_id}/statuses **Action:** - For each required adapter, retrieve and validate corresponding Kubernetes resources: -**For cl-namespace adapter:** +**For ${ADAPTER_NAMESPACE} adapter:** ```bash kubectl get namespace {cluster_id} -o yaml ``` @@ -200,7 +215,7 @@ kubectl get namespace {cluster_id} -o yaml - Required annotations: - `hyperfleet.io/generation`: Equals "1" for new creation request -**For cl-job adapter:** +**For ${ADAPTER_JOB} adapter:** ```bash kubectl get job -n {cluster_id} -l hyperfleet.io/cluster-id={cluster_id},hyperfleet.io/resource-type=job -o yaml ``` @@ -211,7 +226,7 @@ kubectl get job -n {cluster_id} -l hyperfleet.io/cluster-id={cluster_id},hyperfl - Required annotations: - `hyperfleet.io/generation`: Equals "1" for new creation request -**For cl-deployment adapter:** +**For ${ADAPTER_DEPLOYMENT} adapter:** ```bash kubectl get deployment -n {cluster_id} -l hyperfleet.io/cluster-id={cluster_id},hyperfleet.io/resource-type=deployment -o yaml ``` @@ -240,11 +255,11 @@ curl -X DELETE ${API_URL}/api/hyperfleet/v1/clusters/{cluster_id} --- -## Test Title: Clusters Resource Type - Adapter Dependency Relationships Workflow Validation +## Test Title: Cluster adapters can enforce dependency order during workflow ### Description -This test validates that CLM correctly handles adapter dependency relationships when processing a clusters resource 
request. Specifically, it verifies the dependency relationship where the cl-deployment adapter depends on the cl-job adapter completion. The test continuously polls and validates throughout the workflow period to ensure: (1) cl-deployment's Applied condition remains False until cl-job's Available condition reaches True, enforcing the dependency precondition; (2) during cl-job execution, cl-deployment's Available condition stays Unknown (never False), confirming the adapter waits correctly without attempting execution; (3) successful completion with cl-deployment's Available eventually transitioning to True. This validation demonstrates that the workflow engine properly enforces adapter dependencies and ensures dependent adapters wait for prerequisites before executing. +This test validates that CLM correctly handles adapter dependency relationships when processing a clusters resource request. Specifically, it verifies the dependency relationship where the ${ADAPTER_DEPLOYMENT} adapter depends on the ${ADAPTER_JOB} adapter completion. The test continuously polls and validates throughout the workflow period to ensure: (1) ${ADAPTER_DEPLOYMENT}'s Applied condition remains False until ${ADAPTER_JOB}'s Available condition reaches True, enforcing the dependency precondition; (2) during ${ADAPTER_JOB} execution, ${ADAPTER_DEPLOYMENT}'s Available condition stays Unknown (never False), confirming the adapter waits correctly without attempting execution; (3) successful completion with ${ADAPTER_DEPLOYMENT}'s Available eventually transitioning to True. This validation demonstrates that the workflow engine properly enforces adapter dependencies and ensures dependent adapters wait for prerequisites before executing. 
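The per-poll rule described above can be captured in a single predicate. The following Python sketch is illustrative (function name and payload shape are assumptions); the adapter names use the `cl-job` / `cl-deployment` values from the Test Variables section:

```python
def assert_waiting_invariant(statuses, job="cl-job", deployment="cl-deployment"):
    """Check one poll of /statuses: while the job adapter's Available is not
    True, the deployment adapter must still be waiting (Applied=False,
    Available=Unknown). Raises AssertionError on a violation."""
    conds = {item["adapter"]: {c["type"]: c["status"] for c in item["conditions"]}
             for item in statuses.get("items", [])}
    dep = conds.get(deployment)
    if dep is None:                      # deployment adapter not reporting yet
        return
    if conds.get(job, {}).get("Available") != "True":
        assert dep.get("Applied") == "False", "deployment applied before job finished"
        assert dep.get("Available") == "Unknown", "deployment Available left Unknown early"

# One poll taken while the job is still running:
poll = {"items": [
    {"adapter": "cl-job", "conditions": [{"type": "Available", "status": "False"}]},
    {"adapter": "cl-deployment", "conditions": [
        {"type": "Applied", "status": "False"},
        {"type": "Available", "status": "Unknown"}]}]}
assert_waiting_invariant(poll)  # passes: the dependent adapter is correctly waiting
```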
--- @@ -283,45 +298,45 @@ curl -X POST ${API_URL}/api/hyperfleet/v1/clusters \ **Expected Result:** - API returns successful response -#### Step 2: Verify cl-deployment initial state and dependency waiting behavior +#### Step 2: Verify ${ADAPTER_DEPLOYMENT} initial state and dependency waiting behavior **Action:** -- Poll adapter statuses to capture cl-deployment's initial waiting state: +- Poll adapter statuses to capture ${ADAPTER_DEPLOYMENT}'s initial waiting state: ```bash curl -X GET ${API_URL}/api/hyperfleet/v1/clusters/{cluster_id}/statuses ``` **Expected Result:** -At the initial state (when cl-deployment first appears in statuses): +At the initial state (when ${ADAPTER_DEPLOYMENT} first appears in statuses): - Response returns HTTP 200 (OK) status code -- The `cl-deployment` adapter is present with initial waiting state: - - `Applied` condition has `status: "False"` (deployment hasn't been applied yet, waiting for cl-job dependency) +- The `${ADAPTER_DEPLOYMENT}` adapter is present with initial waiting state: + - `Applied` condition has `status: "False"` (deployment hasn't been applied yet, waiting for ${ADAPTER_JOB} dependency) - `Available` condition has `status: "Unknown"` (deployment hasn't been applied yet) - `Health` condition has `status: "True"` (adapter itself is healthy, just waiting) #### Step 3: Verify dependency relationship and condition transitions throughout entire workflow **Action:** -- Continuously poll adapter statuses from the initial state until cl-deployment completes: +- Continuously poll adapter statuses from the initial state until ${ADAPTER_DEPLOYMENT} completes: ```bash curl -X GET ${API_URL}/api/hyperfleet/v1/clusters/{cluster_id}/statuses ``` **Expected Result:** -Throughout the entire period (from initial state until cl-deployment completes), validate the following on each poll: +Throughout the entire period (from initial state until ${ADAPTER_DEPLOYMENT} completes), validate the following on each poll: -**Validation 1 - 
Dependency enforcement (during cl-job execution):** -- While `cl-job` adapter's `Available` condition has NOT reached `status: "True"`: - - The `cl-deployment` adapter's `Applied` condition must remain `status: "False"` - - The `cl-deployment` adapter's `Available` condition must remain `status: "Unknown"` (never `status: "False"`) - - This validates that cl-deployment waits for cl-job to complete without attempting to apply resources +**Validation 1 - Dependency enforcement (during ${ADAPTER_JOB} execution):** +- While `${ADAPTER_JOB}` adapter's `Available` condition has NOT reached `status: "True"`: + - The `${ADAPTER_DEPLOYMENT}` adapter's `Applied` condition must remain `status: "False"` + - The `${ADAPTER_DEPLOYMENT}` adapter's `Available` condition must remain `status: "Unknown"` (never `status: "False"`) + - This validates that ${ADAPTER_DEPLOYMENT} waits for ${ADAPTER_JOB} to complete without attempting to apply resources **Validation 2 - Success condition:** -- Once `cl-job` adapter's `Available` reaches `status: "True"`, cl-deployment can proceed with execution -- Once `cl-deployment` completes execution, its `Available` condition eventually becomes `status: "True"` +- Once `${ADAPTER_JOB}` adapter's `Available` reaches `status: "True"`, ${ADAPTER_DEPLOYMENT} can proceed with execution +- Once `${ADAPTER_DEPLOYMENT}` completes execution, its `Available` condition eventually becomes `status: "True"` - This confirms the complete dependency workflow succeeded -**Note:** After cl-job completes, cl-deployment's `Available` condition may temporarily be `False` (e.g., `MinimumReplicasUnavailable` during deployment startup) before becoming `True`, which is expected behavior and not validated. +**Note:** After ${ADAPTER_JOB} completes, ${ADAPTER_DEPLOYMENT}'s `Available` condition may temporarily be `False` (e.g., `MinimumReplicasUnavailable` during deployment startup) before becoming `True`, which is expected behavior and not validated. 
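Taken together, Validation 1, Validation 2, and the tolerated startup dip from the note above can be checked by replaying a recorded sequence of polls. A Python sketch with an abbreviated, illustrative poll history (adapter names follow the `cl-job` / `cl-deployment` values from the Test Variables section):

```python
def validate_workflow(polls, job="cl-job", deployment="cl-deployment"):
    """Replay recorded /statuses polls, oldest first.

    While the job adapter has not reached Available=True, enforce that the
    deployment adapter keeps waiting (Validation 1). After that point no
    per-poll check is made -- a temporary Available=False during deployment
    startup is tolerated -- but the final poll must show the deployment
    with Available=True (Validation 2).
    """
    job_done = False
    for statuses in polls:
        conds = {i["adapter"]: {c["type"]: c["status"] for c in i["conditions"]}
                 for i in statuses["items"]}
        job_done = job_done or conds.get(job, {}).get("Available") == "True"
        dep = conds.get(deployment, {})
        if not job_done and dep:
            assert dep.get("Applied") == "False"
            assert dep.get("Available") == "Unknown"
    final = {i["adapter"]: {c["type"]: c["status"] for c in i["conditions"]}
             for i in polls[-1]["items"]}
    assert final.get(deployment, {}).get("Available") == "True"

# Abbreviated history: waiting -> job done with startup dip -> success
history = [
    {"items": [
        {"adapter": "cl-job", "conditions": [{"type": "Available", "status": "False"}]},
        {"adapter": "cl-deployment", "conditions": [
            {"type": "Applied", "status": "False"},
            {"type": "Available", "status": "Unknown"}]}]},
    {"items": [
        {"adapter": "cl-job", "conditions": [{"type": "Available", "status": "True"}]},
        {"adapter": "cl-deployment", "conditions": [
            {"type": "Applied", "status": "True"},
            {"type": "Available", "status": "False"}]}]},   # tolerated startup dip
    {"items": [
        {"adapter": "cl-job", "conditions": [{"type": "Available", "status": "True"}]},
        {"adapter": "cl-deployment", "conditions": [
            {"type": "Applied", "status": "True"},
            {"type": "Available", "status": "True"}]}]},
]
validate_workflow(history)  # the recorded workflow satisfies both validations
```

In practice the history would be accumulated by polling the statuses endpoint on an interval until the deployment adapter completes, then validated in one pass.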
#### Step 4: Cleanup resources @@ -338,3 +353,194 @@ kubectl delete namespace {cluster_id} ```bash curl -X DELETE ${API_URL}/api/hyperfleet/v1/clusters/{cluster_id} ``` + +--- + +## Test Title: Cluster can reflect adapter failure in top-level status + +### Description + +This test validates that the end-to-end workflow correctly handles adapter failure scenarios. When an adapter reports a failure status (e.g., `Health=False`), the cluster's top-level conditions (`Ready`, `Available`) should remain `False`, accurately reflecting that the cluster has not reached a healthy state. + +--- + +| **Field** | **Value** | +|-----------|-----------| +| **Pos/Neg** | Negative | +| **Priority** | Tier1 | +| **Status** | Draft | +| **Automation** | Not Automated | +| **Version** | MVP | +| **Created** | 2026-02-11 | +| **Updated** | 2026-03-04 | + + +--- + +### Preconditions + +1. Environment is prepared using [hyperfleet-infra](https://github.com/openshift-hyperfleet/hyperfleet-infra) with all required platform resources +2. HyperFleet API and HyperFleet Sentinel services are deployed and running successfully +3. 
A dedicated failure-adapter is deployed via Helm with pre-configured failure behavior (e.g., `SIMULATE_RESULT=failure`), separate from the normal adapters used in other tests + +--- + +### Test Steps + +#### Step 1: Submit an API request to create a Cluster resource + +**Action:** +- Submit a POST request to create a Cluster resource: +```bash +curl -X POST ${API_URL}/api/hyperfleet/v1/clusters \ + -H "Content-Type: application/json" \ + -d @testdata/payloads/clusters/cluster-request.json +``` + +**Expected Result:** +- API returns successful response with cluster ID + +#### Step 2: Verify adapter failure is reported via status API + +**Action:** +- Poll adapter statuses until the failure-adapter reports its status: +```bash +curl -X GET ${API_URL}/api/hyperfleet/v1/clusters/{cluster_id}/statuses +``` + +**Expected Result:** +- The failure-adapter is present in the statuses response +- The failure-adapter reports `Health` condition with `status: "False"`, with reason and message indicating the failure +- The failure-adapter reports `Available` condition with `status: "False"` + +#### Step 3: Verify cluster top-level status reflects adapter failure + +**Action:** +- Retrieve cluster status: +```bash +curl -X GET ${API_URL}/api/hyperfleet/v1/clusters/{cluster_id} +``` + +**Expected Result:** +- Cluster `Ready` condition remains `status: "False"` +- Cluster `Available` condition remains `status: "False"` +- Cluster does not transition to Ready state while any adapter reports failure + +#### Step 4: Cleanup resources + +**Action:** +- Delete the namespace created for this cluster: +```bash +kubectl delete namespace {cluster_id} +``` +- Uninstall the failure-adapter Helm release + +**Expected Result:** +- Namespace and all associated resources are deleted successfully +- Failure-adapter deployment is removed + +**Note:** This is a workaround cleanup method. 
Once CLM supports DELETE operations for "clusters" resource type, the namespace deletion should be replaced with: +```bash +curl -X DELETE ${API_URL}/api/hyperfleet/v1/clusters/{cluster_id} +``` + +--- + +## Test Title: API can reject cluster with invalid name format (RFC 1123) + +### Description + +This test validates that the HyperFleet API correctly rejects cluster creation requests with invalid name formats that don't comply with RFC 1123 DNS label naming conventions. + +--- + +| **Field** | **Value** | +|-----------|-----------| +| **Pos/Neg** | Negative | +| **Priority** | Tier1 | +| **Status** | Draft | +| **Automation** | Not Automated | +| **Version** | MVP | +| **Created** | 2026-02-11 | +| **Updated** | 2026-02-11 | + +--- + +### Preconditions +1. HyperFleet API is deployed and running successfully +2. API is accessible via port-forward or ingress + +--- + +### Test Steps + +#### Step 1: Send POST request with invalid name format +**Action:** +```bash +curl -X POST ${API_URL}/api/hyperfleet/v1/clusters \ + -H "Content-Type: application/json" \ + -d '{ + "kind": "Cluster", + "name": "Invalid_Name_With_Underscore", + "spec": {"platform": {"type": "gcp", "gcp": {"projectID": "test", "region": "us-central1"}}} + }' +``` + +**Expected Result:** +- API returns HTTP 400 Bad Request +- Response contains validation error: +```json +{ + "code": "HYPERFLEET-VAL-000", + "detail": "name must match pattern ^[a-z0-9]([-a-z0-9]*[a-z0-9])?$", + "status": 400, + "title": "Validation Failed" +} +``` + +--- + +## Test Title: API can reject cluster with name exceeding max length + +### Description + +This test validates that the HyperFleet API correctly rejects cluster creation requests with names exceeding the maximum allowed length. 
+ +--- + +| **Field** | **Value** | +|-----------|-----------| +| **Pos/Neg** | Negative | +| **Priority** | Tier2 | +| **Status** | Draft | +| **Automation** | Not Automated | +| **Version** | MVP | +| **Created** | 2026-02-11 | +| **Updated** | 2026-02-11 | + +--- + +### Preconditions +1. HyperFleet API is deployed and running successfully + +--- + +### Test Steps + +#### Step 1: Send POST request with name exceeding max length +**Action:** +```bash +curl -X POST ${API_URL}/api/hyperfleet/v1/clusters \ + -H "Content-Type: application/json" \ + -d '{ + "kind": "Cluster", + "name": "this-is-a-very-long-cluster-name-that-exceeds-the-maximum-allowed-length-for-cluster-names", + "spec": {"platform": {"type": "gcp", "gcp": {"projectID": "test", "region": "us-central1"}}} + }' +``` + +**Expected Result:** +- API returns HTTP 400 Bad Request +- Response contains validation error about name length + +--- diff --git a/test-design/testcases/concurrent-processing.md b/test-design/testcases/concurrent-processing.md new file mode 100644 index 0000000..1c22d4f --- /dev/null +++ b/test-design/testcases/concurrent-processing.md @@ -0,0 +1,204 @@ +# Feature: Concurrent Processing + +## Table of Contents + +1. [System can process concurrent cluster creations without event loss or resource conflicts](#test-title-system-can-process-concurrent-cluster-creations-without-event-loss-or-resource-conflicts) +2. [Multiple nodepools can coexist under same cluster without conflicts](#test-title-multiple-nodepools-can-coexist-under-same-cluster-without-conflicts) + +--- + +## Test Title: System can process concurrent cluster creations without event loss or resource conflicts + +### Description + +This test validates that the system can handle multiple cluster creation requests submitted simultaneously without message loss, resource conflicts, or processing failures. It ensures that the system can correctly process concurrent events and all clusters reach their expected final state. 
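The steps in this test lean on two reusable pieces of logic: asserting that concurrently returned cluster IDs are all distinct, and polling a readiness probe until success or timeout. A bash sketch of both — the helper names are illustrative and not part of the product:

```shell
# all_unique: succeeds only if every argument (e.g. the cluster IDs returned
# by the parallel POSTs) is distinct.
all_unique() {
  [ "$(printf '%s\n' "$@" | sort | wc -l)" -eq \
    "$(printf '%s\n' "$@" | sort -u | wc -l)" ]
}

# wait_until TIMEOUT INTERVAL CMD...: re-runs CMD (e.g. a curl | jq readiness
# probe) every INTERVAL seconds until it succeeds or TIMEOUT seconds elapse.
wait_until() {
  local timeout=$1 interval=$2; shift 2
  local deadline=$(( $(date +%s) + timeout ))
  until "$@"; do
    [ "$(date +%s)" -lt "$deadline" ] || return 1
    sleep "$interval"
  done
}
```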
+ +--- + +| **Field** | **Value** | +|-----------|-----------| +| **Pos/Neg** | Positive | +| **Priority** | Tier1 | +| **Status** | Draft | +| **Automation** | Not Automated | +| **Version** | MVP | +| **Created** | 2026-02-11 | +| **Updated** | 2026-02-11 | + + +--- + +### Preconditions +1. Environment is prepared using [hyperfleet-infra](https://github.com/openshift-hyperfleet/hyperfleet-infra) with all required platform resources +2. HyperFleet API, Sentinel, and Adapter services are deployed and running successfully +3. The adapters defined in testdata/adapter-configs are all deployed successfully + +--- + +### Test Steps + +#### Step 1: Submit 5 cluster creation requests simultaneously +**Action:** +- Submit 5 POST requests in parallel (each call generates a unique name via `{{.Random}}` template): +```bash +for i in $(seq 1 5); do + curl -X POST ${API_URL}/api/hyperfleet/v1/clusters \ + -H "Content-Type: application/json" \ + -d @testdata/payloads/clusters/cluster-request.json & +done +wait +``` + +**Expected Result:** +- All 5 requests return successful responses (HTTP 200/201) +- Each response contains a unique cluster ID +- No request is rejected or fails due to concurrency + +#### Step 2: Wait for all clusters to be processed +**Action:** +- For each cluster created in Step 1, poll its status until Ready state or a timeout is reached: +```bash +curl -s ${API_URL}/api/hyperfleet/v1/clusters/{cluster_id} | jq '.conditions' +``` + +**Expected Result:** +- All 5 clusters eventually reach Ready=True and Available=True +- No cluster is stuck in a pending or processing state indefinitely + +#### Step 3: Verify Kubernetes resources for all clusters +**Action:** +- For each cluster created in Step 1, check that it has its own namespace and expected resources: +```bash +kubectl get namespace {cluster_id} +kubectl get jobs -n {cluster_id} +``` + +**Expected Result:** +- 5 separate namespaces exist (one per cluster) +- Each namespace contains the expected 
jobs/resources created by adapters +- No cross-contamination between clusters (resources are isolated) + +#### Step 4: Verify adapter statuses for all clusters +**Action:** +- For each cluster created in Step 1, check that it has complete adapter status reports: +```bash +curl -s ${API_URL}/api/hyperfleet/v1/clusters/{cluster_id}/statuses | jq '.items | length' +``` + +**Expected Result:** +- Each cluster has the expected number of adapter status entries +- All adapters report Applied=True, Available=True, Health=True for each cluster +- No missing status reports (no message was lost) + +#### Step 5: Cleanup resources +**Action:** +- For each cluster created in Step 1, delete the namespace: +```bash +kubectl delete namespace {cluster_id} +``` + +**Expected Result:** +- All namespaces and associated resources are deleted successfully + +**Note:** This is a workaround cleanup method. Once CLM supports DELETE operations for "clusters" resource type, this step should be replaced with API DELETE calls. + +--- + +## Test Title: Multiple nodepools can coexist under same cluster without conflicts + +### Description + +This test validates that multiple nodepools can be created under the same cluster and coexist without conflicts. It verifies that each nodepool is processed independently by the adapters, has its own set of Kubernetes resources, and reports its own status without interfering with other nodepools. + +--- + +| **Field** | **Value** | +|-----------|---------------| +| **Pos/Neg** | Positive | +| **Priority** | Tier1 | +| **Status** | Draft | +| **Automation** | Not Automated | +| **Version** | MVP | +| **Created** | 2026-02-11 | +| **Updated** | 2026-03-04 | + + +--- + +### Preconditions + +1. Environment is prepared using [hyperfleet-infra](https://github.com/openshift-hyperfleet/hyperfleet-infra) with all required platform resources +2. HyperFleet API and HyperFleet Sentinel services are deployed and running successfully +3. 
The adapters defined in testdata/adapter-configs are all deployed successfully +4. A cluster resource has been created and its cluster_id is available + - **Cleanup**: Cluster resource cleanup should be handled in test suite teardown where cluster was created + +--- + +### Test Steps + +#### Step 1: Create multiple nodepools under the same cluster +**Action:** +- Submit 3 POST requests to create NodePool resources (each call generates a unique name via `{{.Random}}` template): +```bash +for i in 1 2 3; do + curl -X POST ${API_URL}/api/hyperfleet/v1/nodepools \ + -H "Content-Type: application/json" \ + -d @testdata/payloads/nodepools/gcp.json +done +``` + +**Expected Result:** +- All 3 nodepools are created successfully +- Each returns a unique nodepool ID + +#### Step 2: Verify all nodepools appear in the list +**Action:** +```bash +curl -s ${API_URL}/api/hyperfleet/v1/clusters/{cluster_id}/nodepools | jq '.items | length' +``` + +**Expected Result:** +- List contains all 3 nodepools +- Each nodepool has a distinct ID and name + +#### Step 3: Verify each nodepool reaches Ready state independently +**Action:** +- For each nodepool created in Step 1, check its conditions: +```bash +curl -s ${API_URL}/api/hyperfleet/v1/nodepools/{nodepool_id} | jq '.conditions' +``` + +**Expected Result:** +- All 3 nodepools eventually reach Ready=True and Available=True +- Each nodepool's adapter status is independent (one nodepool's failure does not block others) + +#### Step 4: Verify Kubernetes resources are isolated per nodepool +**Action:** +- Check that each nodepool has its own set of resources: +```bash +kubectl get configmaps -n {cluster_id} -l nodepool-id +``` + +**Expected Result:** +- Each nodepool's resources are labeled/named distinctly +- No resource name collisions between nodepools +- Resources for one nodepool do not overwrite resources of another + +#### Step 5: Cleanup resources + +**Action:** +- Delete nodepool-specific Kubernetes resources: +```bash +kubectl 
delete configmaps -n {cluster_id} -l nodepool-id +``` + +**Expected Result:** +- All nodepool-specific resources are deleted successfully + +**Note:** This is a workaround cleanup method. Once CLM supports DELETE operations for "nodepools" resource type, this step should be replaced with: +```bash +curl -X DELETE ${API_URL}/api/hyperfleet/v1/nodepools/{nodepool_id} +``` + +--- diff --git a/test-design/testcases/nodepool.md b/test-design/testcases/nodepool.md index c783973..8d95cf3 100644 --- a/test-design/testcases/nodepool.md +++ b/test-design/testcases/nodepool.md @@ -2,12 +2,14 @@ ## Table of Contents -1. [Nodepools Resource Type - Workflow Validation](#test-title-nodepools-resource-type---workflow-validation) -2. [Nodepools Resource Type - K8s Resource Check Aligned with Preinstalled NodePool Related Adapters Specified](#test-title-nodepools-resource-type---k8s-resource-check-aligned-with-preinstalled-nodepool-related-adapters-specified) +1. [Nodepool can complete end-to-end workflow with required adapters](#test-title-nodepool-can-complete-end-to-end-workflow-with-required-adapters) +2. [Nodepool adapters can create K8s resources with correct metadata](#test-title-nodepool-adapters-can-create-k8s-resources-with-correct-metadata) +3. [API can reject nodepool creation with non-existent cluster](#test-title-api-can-reject-nodepool-creation-with-non-existent-cluster) +4. 
[API can reject nodepool with name exceeding 15 characters](#test-title-api-can-reject-nodepool-with-name-exceeding-15-characters) --- -## Test Title: Nodepools Resource Type - Workflow Validation +## Test Title: Nodepool can complete end-to-end workflow with required adapters ### Description @@ -23,7 +25,7 @@ This test validates that the workflow can work correctly for nodepools resource | **Automation** | Not Automated | | **Version** | MVP | | **Created** | 2026-02-04 | -| **Updated** | 2026-02-05 | +| **Updated** | 2026-03-04 | --- @@ -46,7 +48,7 @@ This test validates that the workflow can work correctly for nodepools resource ```bash curl -X POST ${API_URL}/api/hyperfleet/v1/nodepools \ -H "Content-Type: application/json" \ - -d @testdata/payloads/nodepools/nodepool-request.json + -d @testdata/payloads/nodepools/gcp.json ``` **Expected Result:** @@ -85,7 +87,20 @@ curl -X GET ${API_URL}/api/hyperfleet/v1/nodepools/{nodepool_id} **Expected Result:** - Final nodepool conditions have `status: True` for both condition `{"type": "Ready"}` and `{"type": "Available"}` -#### Step 4: Cleanup resources +#### Step 4: Verify nodepool appears in list by cluster + +**Action:** +- Retrieve all nodepools belonging to the cluster: +```bash +curl -s ${API_URL}/api/hyperfleet/v1/clusters/{cluster_id}/nodepools | jq '.' 
+``` + +**Expected Result:** +- Response returns HTTP 200 (OK) status code +- Response contains an array of nodepools +- The created nodepool appears in the list with matching id, name, and cluster_id + +#### Step 5: Cleanup resources **Action:** - Delete nodepool-specific Kubernetes resources: @@ -103,7 +118,7 @@ curl -X DELETE ${API_URL}/api/hyperfleet/v1/nodepools/{nodepool_id} --- -## Test Title: Nodepools Resource Type - K8s Resource Check Aligned with Preinstalled NodePool Related Adapters Specified +## Test Title: Nodepool adapters can create K8s resources with correct metadata ### Description @@ -142,7 +157,7 @@ This test verifies that the Kubernetes resources of different types (e.g., confi ```bash curl -X POST ${API_URL}/api/hyperfleet/v1/nodepools \ -H "Content-Type: application/json" \ - -d @testdata/payloads/nodepools/nodepool-request.json + -d @testdata/payloads/nodepools/gcp.json ``` **Expected Result:** @@ -173,3 +188,143 @@ curl -X DELETE ${API_URL}/api/hyperfleet/v1/nodepools/{nodepool_id} ``` --- + +## Test Title: API can reject nodepool creation with non-existent cluster + +### Description + +This test validates that the HyperFleet API correctly validates the existence of the parent cluster resource when creating a nodepool, returning HTTP 404 Not Found for non-existent clusters. This ensures proper resource hierarchy validation and prevents orphaned nodepool records. + +--- + +| **Field** | **Value** | +|-----------|-----------| +| **Pos/Neg** | Negative | +| **Priority** | Tier1 | +| **Status** | Draft | +| **Automation** | Not Automated | +| **Version** | MVP | +| **Created** | 2026-02-11 | +| **Updated** | 2026-03-04 | + +--- + +### Preconditions +1. HyperFleet API is deployed and running successfully +2. 
No cluster exists with the test cluster ID + +--- + +### Test Steps + +#### Step 1: Attempt to create nodepool with non-existent cluster ID +**Action:** +```bash +curl -X POST ${API_URL}/api/hyperfleet/v1/clusters/{fake_cluster_id}/nodepools \ + -H "Content-Type: application/json" \ + -d '{ + "kind": "NodePool", + "name": "np-test", + "spec": { + "nodeCount": 1, + "platform": { + "type": "gcp", + "gcp": { + "machineType": "n2-standard-4" + } + } + } + }' +``` + +**Expected Result:** +- API returns HTTP 404 Not Found +- Response follows RFC 9457 Problem Details format: +```json +{ + "type": "https://api.hyperfleet.io/errors/not-found", + "title": "Not Found", + "status": 404, + "detail": "Cluster not found" +} +``` + +--- + +#### Step 2: Verify error response format (RFC 9457) +**Action:** +- Parse the error response and verify required fields + +**Expected Result:** +- Response contains `type` field +- Response contains `title` field +- Response contains `status` field with value 404 +- Optional: Response contains `detail` field with descriptive message + +--- + +#### Step 3: Verify no nodepool was created +**Action:** +```bash +curl -X GET ${API_URL}/api/hyperfleet/v1/clusters/{fake_cluster_id}/nodepools +``` + +**Expected Result:** +- API returns HTTP 404 (cluster doesn't exist, so nodepools list is not accessible) +- OR returns empty list if API allows listing nodepools for non-existent clusters + +--- + +## Test Title: API can reject nodepool with name exceeding 15 characters + +### Description + +This test validates that the HyperFleet API correctly rejects nodepool creation requests with names exceeding 15 characters. + +--- + +| **Field** | **Value** | +|-----------|-----------| +| **Pos/Neg** | Negative | +| **Priority** | Tier2 | +| **Status** | Draft | +| **Automation** | Not Automated | +| **Version** | MVP | +| **Created** | 2026-02-11 | +| **Updated** | 2026-02-11 | + +--- + +### Preconditions +1. HyperFleet API is deployed and running successfully +2. 
A valid cluster exists + +--- + +### Test Steps + +#### Step 1: Send POST request with nodepool name exceeding 15 characters +**Action:** +```bash +curl -X POST ${API_URL}/api/hyperfleet/v1/clusters/{cluster_id}/nodepools \ + -H "Content-Type: application/json" \ + -d '{ + "kind": "NodePool", + "name": "this-is-too-long-nodepool-name", + "spec": {"nodeCount": 1, "platform": {"type": "gcp", "gcp": {"machineType": "n2-standard-4"}}} + }' +``` + +**Expected Result:** +- API returns HTTP 400 Bad Request +- Response contains validation error: +```json +{ + "code": "HYPERFLEET-VAL-000", + "detail": "name must be at most 15 characters", + "status": 400, + "title": "Validation Failed" +} +``` + +---
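As with cluster names, the 15-character limit above can be pre-checked client-side when generating nodepool names. A bash sketch — the length cap comes from this test's expected error, while the character pattern is an assumption (reusing the cluster-name pattern, which this document does not explicitly state applies to nodepools):

```shell
# Client-side pre-check mirroring the expected rejection.
# 15-character cap: from the expected validation error above.
# Character pattern: assumed to match the cluster-name pattern (not
# confirmed for nodepools by this document).
is_valid_nodepool_name() {
  local name=$1
  [ "${#name}" -le 15 ] || return 1
  [[ $name =~ ^[a-z0-9]([-a-z0-9]*[a-z0-9])?$ ]]
}
```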