feat: Ruff precommit and GHA check #31
Conversation
Signed-off-by: Alberto Perdomo <aperdomo@redhat.com>
[APPROVALNOTIFIER] This PR is NOT APPROVED.
This pull-request has been approved by: (no approvers yet). The full list of commands accepted by this bot can be found here.
Details: Needs approval from an approver in each of these files. Approvers can indicate their approval by writing a comment.
📝 Walkthrough
Adds Ruff linting (pre-commit + GitHub Actions) and applies broad stylistic/typing normalization (string quoting, trailing commas, PEP 604 generics, multi-line signatures). Changes are primarily formatting, import cleanup, and type-hint modernizations; a few small behavioral fixes/observable tweaks exist (runtime task variable fix, tunnelling ssh_flags default, removed local summary_file assignment).
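The thread doesn't show the PR's actual configuration; as a rough illustration of what a Ruff pre-commit hook typically looks like (the revision pin and `--fix` argument below are placeholders, not taken from the PR):

```yaml
# .pre-commit-config.yaml (illustrative sketch)
repos:
  - repo: https://github.com/astral-sh/ruff-pre-commit
    rev: v0.6.9  # placeholder revision; pin to whatever the repo standardizes on
    hooks:
      - id: ruff          # linter; covers rules like C408, F401, I001 seen in this review
        args: [--fix]
      - id: ruff-format   # formatter
```

A matching GitHub Actions job usually just runs `ruff check .` so CI and the local hook agree on the same rule set.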
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 11
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (29)
projects/core/notifications/github/api.py (4)
21-21: ⚠️ Potential issue | 🔴 Critical
Replace `dict()` call with dictionary literal.
Ruff C408 flags this as an unnecessary `dict()` call that should be rewritten as a literal for better readability and performance.

🔧 Proposed fix

```diff
-headers = dict(Authorization=f"Bearer {jwt}") | COMMON_HEADERS
+headers = {"Authorization": f"Bearer {jwt}"} | COMMON_HEADERS
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/core/notifications/github/api.py` at line 21, The assignment to headers uses an unnecessary dict() call; replace the expression creating headers (the variable headers that currently uses dict(Authorization=f"Bearer {jwt}") | COMMON_HEADERS) with a dictionary literal that merges the Authorization entry and COMMON_HEADERS (e.g., a literal containing the "Authorization" key and unpacking COMMON_HEADERS) so Ruff C408 is satisfied and readability/performance are improved.
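As a general illustration of the pattern CodeRabbit is suggesting (the constant value below is hypothetical, not from the PR), the keyword-argument `dict()` form and the literal form build the same mapping, and both work with the `|` merge operator (Python 3.9+):

```python
# Hypothetical stand-ins for the PR's jwt and COMMON_HEADERS values.
COMMON_HEADERS = {"Accept": "application/vnd.github+json"}
jwt = "abc123"

# The dict() call flagged by Ruff C408 ...
headers_call = dict(Authorization=f"Bearer {jwt}") | COMMON_HEADERS

# ... and the equivalent dictionary literal.
headers_literal = {"Authorization": f"Bearer {jwt}"} | COMMON_HEADERS

assert headers_call == headers_literal
assert headers_literal["Authorization"] == "Bearer abc123"
```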
1-4: ⚠️ Potential issue | 🔴 Critical
Fix import order to pass Ruff I001 check.
The imports are not sorted according to Ruff's isort rules. According to the pipeline failure, the import block needs to be organized.

📦 Proposed fix for import ordering

```diff
+import json
+import logging
 import requests
-import json
-import logging
 from datetime import datetime
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/core/notifications/github/api.py` around lines 1 - 4, The import block in projects/core/notifications/github/api.py is not sorted per Ruff/isort rules; reorder imports so standard-library imports come first (alphabetically) and third-party imports after: change to "from datetime import datetime" followed by "import json" and "import logging", then the third-party "import requests" to satisfy Ruff I001.
53-53: ⚠️ Potential issue | 🔴 Critical
Replace `dict()` call with dictionary literal.
Ruff C408 flags this as an unnecessary `dict()` call.

🔧 Proposed fix

```diff
-data = dict(body=message)
+data = {"body": message}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/core/notifications/github/api.py` at line 53, Replace the unnecessary dict() call that assigns to the variable named data (the line creating data with body and message) with a dictionary literal by constructing a mapping with the "body" key mapped to the message value; update the assignment in the same scope (where data is defined) to use a literal mapping instead of dict().
51-51: ⚠️ Potential issue | 🔴 Critical
Replace `dict()` call with dictionary literal.
Ruff C408 flags this as an unnecessary `dict()` call.

🔧 Proposed fix

```diff
-headers = dict(Authorization=f"Bearer {user_token}") | COMMON_HEADERS
+headers = {"Authorization": f"Bearer {user_token}"} | COMMON_HEADERS
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/core/notifications/github/api.py` at line 51, Replace the unnecessary dict() call when building headers: instead of using dict(Authorization=f"Bearer {user_token}") | COMMON_HEADERS, create a dictionary literal for the Authorization entry and merge it with COMMON_HEADERS (e.g., use a literal for the "Authorization" key and combine with COMMON_HEADERS via the mapping merge operator or dict unpacking) so the variables headers, user_token and COMMON_HEADERS are used without calling dict().

projects/core/ci_entrypoint/fournos.py (2)
118-118: ⚠️ Potential issue | 🟡 Minor
Use modern union syntax for type annotation.
Ruff reports UP045: Use `X | None` for type annotations. The `Optional[Path]` annotation should use the modern union syntax.

🔧 Proposed fix

```diff
-def parse_and_save_pr_arguments_fournos() -> Optional[Path]:
+def parse_and_save_pr_arguments_fournos() -> Path | None:
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/core/ci_entrypoint/fournos.py` at line 118, Update the return type annotation of function parse_and_save_pr_arguments_fournos from Optional[Path] to the modern union syntax Path | None; adjust imports accordingly (remove Optional from typing if no longer used) and ensure any type comments or references to Optional are updated to the new X | None form.
9-15: ⚠️ Potential issue | 🟡 Minor
Address Ruff pipeline failures: unsorted imports.
The Ruff check reports I001: Import block is un-sorted or un-formatted. The imports should be organized according to Ruff's isort rules.

🔧 Proposed fix for import organization

```diff
 import os
-import yaml
 import logging
 from pathlib import Path
 from typing import Optional
+
+import yaml

 logger = logging.getLogger(__name__)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/core/ci_entrypoint/fournos.py` around lines 9 - 15, The import block in fournos.py is not sorted per Ruff/isort; reorder imports so standard library imports (os, logging, pathlib.Path, typing.Optional) come first, then third-party (yaml), and ensure alphabetical order within each group and a single blank line between groups, e.g., group and alphabetize os, logging, from pathlib import Path, from typing import Optional, then yaml; alternatively run isort/ruff --fix to apply the correct formatting; keep the existing logger = logging.getLogger(__name__) line unchanged.

projects/core/ci_entrypoint/github/pr_args.py (2)
11-20: ⚠️ Potential issue | 🟡 Minor
Address Ruff pipeline failures: unsorted imports and unused import.
The Ruff check reports:
- I001: Import block is un-sorted or un-formatted
- F401: 're' imported but unused

🔧 Proposed fix

```diff
 import json
 import logging
 import os
-import re
 import sys
+import urllib.error
+import urllib.request
 from pathlib import Path
-from typing import Dict, List, Optional, Tuple, Any, Callable
-import urllib.request
-import urllib.error
+from typing import Any, Callable, Dict, List, Optional, Tuple
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/core/ci_entrypoint/github/pr_args.py` around lines 11 - 20, The import block in pr_args.py is unsorted and includes an unused import 're'; fix by removing the unused 're' import and reordering imports to follow the project's import grouping and alphabetical order (standard library, third-party, local) as expected by Ruff/isort; update the top-level imports (the list containing json, logging, os, sys, Path, typing names, urllib.request, urllib.error) so they are sorted and grouped correctly and run the project's formatter/ruff to verify the I001 and F401 errors are resolved.
138-139: ⚠️ Potential issue | 🟡 Minor
Use exception chaining in except clause.
Ruff reports B904: raise exceptions within an except clause should use 'raise ... from err'. This preserves the exception chain for better debugging.

🔧 Proposed fix

```diff
 except Exception as e:
-    raise Exception(f"Invalid /var directive: {line} - {e}")
+    raise Exception(f"Invalid /var directive: {line} - {e}") from e
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/core/ci_entrypoint/github/pr_args.py` around lines 138 - 139, Change the bare re-raise in the except block to use exception chaining so the original exception is preserved: replace the current "raise Exception(f\"Invalid /var directive: {line} - {e}\")" in the except Exception as e: handler with a chained raise using "from e" (i.e., raise Exception(f"Invalid /var directive: {line} - {e}") from e) so the original traceback is kept.

projects/jump_ci/testing/tunnelling.py (1)
92-99: ⚠️ Potential issue | 🔴 Critical
Critical: f-string interpolation broken by quote normalization.
The quote style change broke the f-string interpolation for `ansible_ssh_common_args`. The current code:

```
ansible_ssh_common_args: "{" ".join(SSH_FLAGS)}"
```

will be parsed as `{" "}` (interpolating a space literal) followed by the text `.join(SSH_FLAGS)}"`, producing malformed YAML output like `ansible_ssh_common_args: " .join(SSH_FLAGS)}"`. The original intent was to join SSH flags with spaces. You need to keep the single quotes inside the f-string interpolation.

🐛 Proposed fix

```diff
 extra_vars_yaml_content = f"""
 ansible_port: {remote_host_port}
 ansible_ssh_private_key_file: {private_key_path}
 ansible_ssh_user: {bastion_user}
-ansible_ssh_common_args: "{" ".join(SSH_FLAGS)}"
+ansible_ssh_common_args: "{' '.join(SSH_FLAGS)}"
 """
```

To satisfy Ruff's quote preference while keeping correct behavior, you could also extract the join operation:

```python
ssh_common_args = " ".join(SSH_FLAGS)
extra_vars_yaml_content = f"""
ansible_port: {remote_host_port}
ansible_ssh_private_key_file: {private_key_path}
ansible_ssh_user: {bastion_user}
ansible_ssh_common_args: "{ssh_common_args}"
"""
```

🤖 Prompt for AI Agents
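A standalone sketch of that second suggestion, with placeholder values for the variables that come from the surrounding function (none of these values are from the PR):

```python
# Placeholder values; the real ones come from the tunnelling setup code.
SSH_FLAGS = ["-o", "StrictHostKeyChecking=no", "-o", "UserKnownHostsFile=/dev/null"]
remote_host_port = 2222
private_key_path = "/tmp/jump_ci_key"
bastion_user = "cloud-user"

# Hoisting the join keeps Ruff's double-quote style without nesting
# same-style quotes inside the f-string expression.
ssh_common_args = " ".join(SSH_FLAGS)
extra_vars_yaml_content = f"""
ansible_port: {remote_host_port}
ansible_ssh_private_key_file: {private_key_path}
ansible_ssh_user: {bastion_user}
ansible_ssh_common_args: "{ssh_common_args}"
"""

assert 'ansible_ssh_common_args: "-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null"' in extra_vars_yaml_content
```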
Verify each finding against the current code and only fix it if needed. In `@projects/jump_ci/testing/tunnelling.py` around lines 92 - 99, The f-string for extra_vars_yaml_content currently breaks interpolation around ansible_ssh_common_args because the inner quotes were normalized; fix by computing the join outside the f-string (e.g., create ssh_common_args = " ".join(SSH_FLAGS)) and then reference that variable inside extra_vars_yaml_content (ansible_ssh_common_args: "{ssh_common_args}"), or alternatively escape/adjust quotes so the expression {" ".join(SSH_FLAGS)} remains a valid f-string; update the construction of extra_vars_yaml_content used by the print/flush accordingly (see extra_vars_yaml_content and SSH_FLAGS).

projects/core/ci_entrypoint/run_ci.py (4)
125-128: ⚠️ Potential issue | 🟡 Minor
Use exception chaining with `raise ... from`.
The pipeline flags B904: within an except clause, raise should use `raise ... from err` or `raise ... from None` to preserve the exception chain.

🔧 Proposed fix

```diff
 except subprocess.CalledProcessError as pip_error:
     print(f"❌ Failed to install {'/'.join(packages)}: {pip_error}")
-    raise RuntimeError("failed to install the extra packages")
+    raise RuntimeError("failed to install the extra packages") from pip_error
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/core/ci_entrypoint/run_ci.py` around lines 125 - 128, The except block catching subprocess.CalledProcessError (pip_error) should preserve the original exception chain; instead of re-raising a new RuntimeError without context, change the raise to use exception chaining (raise RuntimeError("failed to install the extra packages") from pip_error) so the original pip_error is preserved; locate the except handling that prints f"❌ Failed to install {'/'.join(packages)}: {pip_error}" and update the raise accordingly.
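What `raise ... from` buys you, sketched with a hypothetical stand-in for the pip call (no subprocess is actually run here): the original error survives on `__cause__`, and tracebacks render it with "The above exception was the direct cause of the following exception".

```python
def install_packages(packages: list[str]) -> None:
    """Hypothetical stand-in for the pip install step in run_ci.py."""
    try:
        raise OSError(f"pip failed for {'/'.join(packages)}")  # simulated failure
    except OSError as pip_error:
        # B904-compliant: chain the original error instead of discarding it.
        raise RuntimeError("failed to install the extra packages") from pip_error

try:
    install_packages(["ruff", "pyyaml"])
except RuntimeError as err:
    assert isinstance(err.__cause__, OSError)   # original error preserved
    assert "ruff/pyyaml" in str(err.__cause__)
```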
522-523: ⚠️ Potential issue | 🟡 Minor
Unused variable `e` in exception handler.
The pipeline flags F841: Local variable `e` is assigned but never used. Either use the exception or remove the binding.

🔧 Proposed fix

```diff
-except Exception as e:
+except Exception:
     logging.exception("Unexpected exception")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/core/ci_entrypoint/run_ci.py` around lines 522 - 523, The except block binds an unused variable `e` causing F841; change the handler from "except Exception as e:" to "except Exception:" so logging.exception("Unexpected exception") can remain and the unused local is removed (update the except clause in run_ci.py where that try/except resides).
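For background on why the binding can simply be dropped (a generic sketch, not the PR's code): `logging.exception` fetches the active exception from the interpreter itself, so the handler never needs `as e` when the exception object isn't otherwise referenced:

```python
import io
import logging

# Capture log output so the effect is visible without a real CI run.
stream = io.StringIO()
demo_logger = logging.getLogger("f841_demo")
demo_logger.addHandler(logging.StreamHandler(stream))

try:
    1 / 0
except Exception:  # no "as e" needed
    demo_logger.exception("Unexpected exception")

output = stream.getvalue()
assert "Unexpected exception" in output
assert "ZeroDivisionError" in output  # the traceback is appended automatically
```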
194-239: ⚠️ Potential issue | 🟠 Major
Duplicate function definition `find_ci_script`.
The function `find_ci_script` is defined twice (lines 194-215 and 218-239) with identical implementations. The pipeline flags this as F811: Redefinition of unused `find_ci_script` from line 194. Remove one of the duplicate definitions.

🐛 Proposed fix - remove duplicate

```diff
     return None

-def find_ci_script(project_dir: Path, operation: str) -> Optional[Path]:
-    """
-    Find the appropriate CI script for the operation.
-
-    Args:
-        project_dir: Project directory path
-        operation: Operation to perform (e.g., 'ci')
-
-    Returns:
-        Path to CI script if found, None otherwise
-    """
-    # Check possible locations for CI scripts
-    possible_locations = [
-        # Operation-specific script in orchestration subdirectory
-        project_dir / "orchestration" / f"{operation}.py",
-    ]
-
-    for script_path in possible_locations:
-        if script_path.exists() and os.access(script_path, os.X_OK):
-            return script_path
-
-    return None
-
-
 def get_available_projects() -> List[str]:
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/core/ci_entrypoint/run_ci.py` around lines 194 - 239, The file contains two identical definitions of find_ci_script which causes a redefinition error; remove one of the duplicate function blocks so there is only a single find_ci_script implementation, keeping the docstring and behavior that checks project_dir / "orchestration" / f"{operation}.py" and verifies existence and executability via script_path.exists() and os.access(script_path, os.X_OK); ensure any imports used by the remaining function (Path, Optional, os) are preserved and run tests to confirm no references rely on the removed duplicate.
20-32: ⚠️ Potential issue | 🟡 Minor
Import block is unsorted per Ruff I001.
The pipeline reports that the import block is unsorted or unformatted. Consider running `ruff check --fix` or `ruff format` to auto-fix.

🔧 Suggested import ordering

```diff
-# Import CI preparation module
-from projects.core.ci_entrypoint import prepare_ci
-
 import os
 import sys
 import subprocess
 import logging
 import signal
 import time
 from pathlib import Path
 from typing import List, Optional

 import click

+# Import CI preparation module
+from projects.core.ci_entrypoint import prepare_ci
+
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/core/ci_entrypoint/run_ci.py` around lines 20 - 32, The import block at the top of run_ci.py is unsorted (Ruff I001); reorder imports into the standard groups (stdlib, third-party, local) and sort them alphabetically within each group—e.g., stdlib: os, signal, subprocess, sys, time; third-party: click; typing and pathlib treated as stdlib; local: from projects.core.ci_entrypoint import prepare_ci—then run an auto-fix like `ruff check --fix` or `ruff format` to apply the changes.

projects/core/ci_entrypoint/prepare_ci.py (3)
334-338: ⚠️ Potential issue | 🟡 Minor
Remove redundant `f` prefix on static log strings.
Lines 334 and 337 use f-strings without interpolation, which Ruff flags (F541).

Proposed fix

```diff
-    logger.warning(f"PULL_BASE_SHA not set. Showing the last commits from main.")
+    logger.warning("PULL_BASE_SHA not set. Showing the last commits from main.")
@@
-    logger.warning(f"PULL_PULL_SHA not set. Showing the last commits from main.")
+    logger.warning("PULL_PULL_SHA not set. Showing the last commits from main.")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/core/ci_entrypoint/prepare_ci.py` around lines 334 - 338, The two logger.warning calls use unnecessary f-strings for static messages; replace the f-strings with plain string literals in the logger.warning calls that reference the PULL_BASE_SHA and PULL_PULL_SHA checks (the logger.warning(...) calls shown alongside the PULL_BASE_SHA and pull_sha/os.environ.get("PULL_PULL_SHA", "") logic) so Ruff F541 is avoided—i.e., remove the leading f from those two message strings.
660-662: ⚠️ Potential issue | 🔴 Critical
Drop unused exception binding and non-interpolated f-string (Ruff blocker).
Line 660 binds `e` but never uses it (F841), and line 661 has an unnecessary f-string (F541).

Proposed fix

```diff
-    except Exception as e:
-        logger.exception(f"Failed to send notifications")
+    except Exception:
+        logger.exception("Failed to send notifications")
         # Don't fail the entire job if notifications fail
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/core/ci_entrypoint/prepare_ci.py` around lines 660 - 662, Remove the unused exception binding and the unnecessary f-string in the notification error handler: change the except clause from "except Exception as e:" to "except Exception:" and call logger.exception with a normal string literal (e.g., logger.exception("Failed to send notifications")) so the exception is still logged without F841/F541 lint errors in the notification-sending block of prepare_ci.py.
9-23: ⚠️ Potential issue | 🔴 Critical
Fix import block ordering and remove unused `glob` (Ruff blockers).
Line 16 (`glob`) is unused, and lines 41-43 are module imports placed after executable code (`logger = ...`), which triggers E402/I001 and blocks CI.

Proposed fix

```diff
 import os
 import sys
 import logging
 import yaml
 import threading
 import time
 import subprocess
-import glob
 import json
 import shutil
 import requests
 from datetime import datetime
 from pathlib import Path
 from typing import Optional, List
 from enum import StrEnum

+import projects.core.ci_entrypoint.fournos as fournos
+import projects.core.ci_entrypoint.github.pr_args as github_pr_args
+import projects.core.notifications.send as send
+
 IS_LIGHTWEIGHT_IMAGE = os.environ.get("FORGE_LIGHT_IMAGE")
@@
 # Set up logging
 logger = logging.getLogger(__name__)
-
-import projects.core.ci_entrypoint.fournos as fournos
-import projects.core.ci_entrypoint.github.pr_args as github_pr_args
-import projects.core.notifications.send as send
```

Also applies to: 41-43
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/core/ci_entrypoint/prepare_ci.py` around lines 9 - 23, Remove the unused glob import and move any imports that currently occur after executable code up into the top import block, keeping imports ordered by stdlib, third-party, then local; specifically delete the "glob" import and ensure all module imports appear before the line that defines "logger = logging.getLogger(...)" (or any other executable statements) so Ruff errors E402/I001 are resolved. Ensure import ordering groups (e.g., os, sys, logging, pathlib, datetime, typing, enum) come first, followed by third-party imports like requests and yaml, then local modules.

projects/cluster/toolbox/build_image/main.py (4)
277-279: ⚠️ Potential issue | 🟡 Minor
Remove unused variable `summary_file`.
The variable `summary_file` is assigned but never used. Either use it or remove the assignment.

🔧 Proposed fix

```diff
 def generate_build_summary(args, ctx):
     """Generate a summary of the build process"""
-    summary_file = args.artifact_dir / "artifacts" / "build_summary.txt"
-
     logger.info("=== Shipwright Build Summary ===")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/cluster/toolbox/build_image/main.py` around lines 277 - 279, The variable summary_file assigned as args.artifact_dir / "artifacts" / "build_summary.txt" is unused; remove the assignment (delete the summary_file variable) or, if intended, actually write or read from that path before logging. Locate the assignment to summary_file in main.py near the logger.info("=== Shipwright Build Summary ===") call and either remove the summary_file line or replace it with the intended file usage (e.g., open/write/read) so the variable is not left unused.
140-142: ⚠️ Potential issue | 🟡 Minor
Remove unused variable `delete_result`.
The variable `delete_result` is assigned but never used. Either use it or remove the assignment.

🔧 Proposed fix

```diff
 # Check if build already exists and delete if it does
-delete_result = shell.run(
+shell.run(
     f"oc delete build {ctx.build_name} -n {args.namespace} --ignore-not-found",
 )
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/cluster/toolbox/build_image/main.py` around lines 140 - 142, The variable delete_result is assigned from shell.run but never used; remove the unused assignment by calling shell.run(...) directly (e.g. replace "delete_result = shell.run(f\"oc delete build {ctx.build_name} -n {args.namespace} --ignore-not-found\")" with a direct call to shell.run(...)) or, if you need the result, consume it (e.g. check or log the return) — update the code around the shell.run invocation that references ctx.build_name, args.namespace, and delete_result accordingly.
11-30: ⚠️ Potential issue | 🟡 Minor
Fix import ordering to satisfy Ruff I001.
Imports should be organized: standard library first, then third-party, then local imports. The current order violates this convention.

📦 Proposed fix for import ordering

```diff
 #!/usr/bin/env python3
 """
 Shipwright Build Image Toolbox

 Creates and manages Shipwright builds for container images.
 Ensures ImageStream exists, creates the build definition,
 triggers the build, and waits for completion.
 """
+import json
+import logging
+
 from projects.core.dsl import (
     task,
     retry,
     always,
     execute_tasks,
     shell,
     toolbox,
     template,
 )
-
-from datetime import datetime
-from pathlib import Path
-import time
-import json
-
-import logging
-
 logger = logging.getLogger("TOOLBOX")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/cluster/toolbox/build_image/main.py` around lines 11 - 30, The imports are misordered (Ruff I001); move standard-library imports (datetime, pathlib.Path, time, json, logging) to the top, followed by third-party (none here), then the local package import from projects.core.dsl (task, retry, when, always, execute_tasks, clear_tasks, shell, toolbox, template); ensure grouping and single blank line between sections and remove extraneous blank lines so the import block starts with datetime/Path/time/json/logging and then the from projects.core.dsl import ... line.
10-27: ⚠️ Potential issue | 🟡 Minor
Remove unused imports flagged by Ruff.
The following imports are unused and should be removed to pass Ruff checks:
- Line 10: `env` from `projects.core.library`
- Line 14: `when` from `projects.core.dsl`
- Line 17: `clear_tasks` from `projects.core.dsl`
- Line 24: `datetime`
- Line 25: `Path` from `pathlib`
- Line 26: `time`

🔧 Proposed fix

```diff
-from projects.core.library import env
 from projects.core.dsl import (
     task,
     retry,
-    when,
     always,
     execute_tasks,
-    clear_tasks,
     shell,
     toolbox,
     template,
 )
-from datetime import datetime
-from pathlib import Path
-import time
 import json
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/cluster/toolbox/build_image/main.py` around lines 10 - 27, Remove the unused imports flagged by Ruff: delete env from the projects.core.library import and remove when and clear_tasks from the projects.core.dsl import list, and also remove datetime, Path, and time imports; update the import blocks (task, retry, always, execute_tasks, shell, toolbox, template) to only include used symbols so the top of main.py no longer imports env, when, clear_tasks, datetime, Path, or time.

projects/core/dsl/cli.py (2)
98-104: ⚠️ Potential issue | 🟡 Minor
Fix type comparison to use `is` instead of `==`.
Type comparisons should use `is` for identity checks rather than `==` for equality. This is more reliable and follows Python best practices.

🔧 Proposed fix

```diff
-        if param.annotation == bool:
+        if param.annotation is bool:
             # Boolean args are always optional flags
             cli_name = f"--{param_name.replace('_', '-')}"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/core/dsl/cli.py` around lines 98 - 104, The boolean type check in the CLI argument builder currently uses equality (param.annotation == bool); change this to an identity check (param.annotation is bool) within the same block that handles boolean args (the branch that builds cli_name and calls parser.add_argument with action="store_true", dest=param_name, help=help_text) so the type comparison uses Python's recommended "is" for type identity.
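A small sketch of the identity check on real `inspect` signature data (the function below is hypothetical, standing in for the DSL's task functions):

```python
import inspect

def handler(verbose: bool, name: str, count: int) -> None:
    """Hypothetical task function with annotated parameters."""

params = inspect.signature(handler).parameters

# Identity comparison, as the fix suggests: annotations resolved at def time
# are the actual type objects, so "is" matches them exactly.
assert params["verbose"].annotation is bool
assert params["name"].annotation is str
assert params["count"].annotation is int
assert params["verbose"].annotation is not int  # bool is a distinct object, despite subclassing int
```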
105-110: ⚠️ Potential issue | 🟡 Minor
Fix type comparison to use `is` or `isinstance()`.
The type check on line 105 should also use `is` for identity comparison or `isinstance()` for type checking.

🔧 Proposed fix

```diff
-        elif param.annotation in (str, int, float):
+        elif param.annotation is str or param.annotation is int or param.annotation is float:
             arg_type = param.annotation
```

Or more concisely, using a set membership check:

```diff
-        elif param.annotation in (str, int, float):
+        elif param.annotation in {str, int, float}:
             arg_type = param.annotation
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/core/dsl/cli.py` around lines 105 - 110, The conditional comparing param.annotation to built-in types should use identity or isinstance checks rather than equality; update the branch in the CLI argument-typing logic that assigns arg_type (the block referencing param.annotation and setting arg_type to str/int/float) to use param.annotation is str / is int / is float or use isinstance(param.annotation, type) or a tuple identity check (e.g., param.annotation in (str, int, float) replaced with a proper identity/instance check) so arg_type is determined reliably.

projects/core/library/run.py (1)
1-12: ⚠️ Potential issue | 🟡 Minor
Fix import organization issues.
Multiple issues detected:
- E402: Module-level statement (`logging.getLogger().setLevel(logging.INFO)`) appears before all imports are complete
- F811: `signal` is imported twice (line 1 and line 7)
- Import statements should be at the top without intervening code

📦 Proposed fix

```diff
-import sys, os, signal
+import itertools
+import json
+import logging
+import os
+import signal
+import subprocess
+import sys
 import traceback
-import logging
-
-logging.getLogger().setLevel(logging.INFO)
-import json
-import signal
-import itertools
-import subprocess
+from . import config, env

-from . import env, config
+logging.getLogger().setLevel(logging.INFO)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/core/library/run.py` around lines 1 - 12, Reorder the module so all import statements appear first, remove the duplicate import of signal (keep a single "import signal"), and move the module-level call logging.getLogger().setLevel(logging.INFO) to after the imports; ensure imports are grouped (standard libs first: sys, os, signal, itertools, subprocess, traceback, logging, json) followed by local imports (from . import env, config) and then set the logging level.

projects/core/library/config.py (1)
218-229: ⚠️ Potential issue | 🟡 Minor
Fix exception chaining in except block.
When raising a new exception inside an except block, use `from ex` to properly chain exceptions or `from None` to suppress the context. This helps with debugging by preserving the exception chain.

🔧 Proposed fix

```diff
 except IndexError as ex:
     if default_value != ...:
         if warn:
             logging.warning(
                 f"get_config: {jsonpath} --> missing. Returning the default value: {default_value}"
             )
         return default_value

     logging.error(f"get_config: {jsonpath} --> {ex}")
-    raise KeyError(f"Key '{jsonpath}' not found in {self.config_path}")
+    raise KeyError(f"Key '{jsonpath}' not found in {self.config_path}") from ex
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/core/library/config.py` around lines 218 - 229, In the except IndexError as ex block inside get_config (where value = jsonpath_ng.parse(jsonpath).find(self.config)[0].value is attempted), preserve exception chaining by changing the final raise to include the original exception (e.g. raise KeyError(f"Key '{jsonpath}' not found in {self.config_path}") from ex) so the IndexError context is kept for debugging; keep the existing default_value/warn logic intact and only update the raise statement.

projects/core/dsl/runtime.py (3)
147-150: ⚠️ Potential issue | 🔴 Critical
Fix undefined variable in always-task execution path.
`_execute_single_task(task_info, ...)` uses an undefined name and raises `NameError` when this path executes. Use the popped variable.

💡 Proposed fix

```diff
 while file_tasks:
     try:
         current_task_info = file_tasks.pop(0)
-        _execute_single_task(task_info, args, shared_context)
+        _execute_single_task(current_task_info, args, shared_context)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/core/dsl/runtime.py` around lines 147 - 150, The always-task branch is calling _execute_single_task with an undefined name task_info causing a NameError; change the invocation to pass the popped variable current_task_info (i.e., call _execute_single_task(current_task_info, args, shared_context)) so the function receives the correct task object; search for similar uses in the same function or surrounding code to ensure no other calls mistakenly use task_info instead of current_task_info.
354-365: ⚠️ Potential issue | 🟠 Major
Drop unused `args_added` state to satisfy Ruff.
`args_added` is assigned but never read, causing F841.

💡 Proposed fix

```diff
-    args_added = False
     for key, value in function_args.items():
         if (
             key not in ["function_args"] and value is not None
         ):  # Skip internal parameters and None values
             if isinstance(value, bool):
                 if value:  # Only add flag if True
                     script_content += " \\\n    " + f"--{key.replace('_', '-')}"
-                    args_added = True
             else:
                 script_content += " \\\n    " + f'--{key.replace("_", "-")} "{value}"'
-                args_added = True
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/core/dsl/runtime.py` around lines 354 - 365, The variable args_added in the loop that builds script_content (iterating function_args) is assigned but never used; remove the unused state by deleting the args_added = False initialization and all subsequent assignments to args_added inside the loop (the lines setting args_added = True), leaving the logic that appends flags/args to script_content intact; ensure no other code relies on args_added before committing changes.
85-89: ⚠️ Potential issue | 🟠 Major
Remove unused `artifact_dir` binding to unblock Ruff.
The context-manager target is unused and triggers F841.

💡 Proposed fix

```diff
-    with env.NextArtifactDir(command_name) as artifact_dir:
+    with env.NextArtifactDir(command_name):
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/core/dsl/runtime.py` around lines 85 - 89, The with-statement binds artifact_dir from env.NextArtifactDir(command_name) but that local is never used (triggering F841); remove the unused binding by changing the context manager to `with env.NextArtifactDir(command_name):` (or use a throwaway name like `_`), then keep the existing conversion of function_args into types.SimpleNamespace and the assignment to args.artifact_dir = env.ARTIFACT_DIR so env.NextArtifactDir is still entered but no unused variable remains.

projects/core/dsl/task.py (1)
5-16:⚠️ Potential issue | 🟠 MajorClean and sort imports to satisfy Ruff.
typesandyamlare unused, and the block is not Ruff-formatted (I001,F401).💡 Proposed fix
```diff
 import functools
+import inspect
 import logging
+import os
 import time
-import inspect
-import os
-import types
-import yaml
 from typing import Callable, Any, Optional
```
Verify each finding against the current code and only fix it if needed. In `@projects/core/dsl/task.py` around lines 5 - 16, Remove the unused imports `types` and `yaml` from the import block and reorder the imports to satisfy Ruff/isort: group and sort standard library imports (functools, inspect, logging, os, time), then third-party/typing imports (from typing import Any, Callable, Optional), then local project imports (import projects.core.library.env as env, from .log import log_task_header, log_execution_banner, from .script_manager import get_script_manager); ensure no other unused names remain and run Ruff to confirm I001/F401 are resolved.projects/cluster/toolbox/rebuild_image/main.py (1)
10-30:⚠️ Potential issue | 🟠 MajorFix import block ordering and remove unused imports.
Current imports fail Ruff (
I001,F401) and block the workflow.💡 Proposed fix
```diff
-from projects.core.library import env
 from projects.core.dsl import (
-    task,
-    retry,
-    when,
-    always,
+    always,
+    execute_tasks,
+    retry,
+    shell,
     task,
-    execute_tasks,
-    clear_tasks,
-    shell,
-    toolbox,
     template,
+    toolbox,
 )
-from datetime import datetime
-from pathlib import Path
-import time
 import json
-
 import logging
```
Verify each finding against the current code and only fix it if needed. In `@projects/cluster/toolbox/rebuild_image/main.py` around lines 10 - 30, Reorder and clean the import block so it satisfies Ruff/isort: place standard library imports (datetime, time, json, logging, pathlib.Path) first, then third-party, then local/project imports (projects.core.library.env and projects.core.dsl symbols), and remove any unused names; specifically verify which of env, task, retry, when, always, execute_tasks, clear_tasks, shell, toolbox, template and which of datetime, Path, time, json, logging are actually used in this file and delete the unused imports. After updating the import order and pruning unused symbols, run ruff/isort to confirm I001/F401 are resolved.
🧹 Nitpick comments (6)
projects/jump_ci/testing/tunnelling.py (1)
113-113: Mutable default argument.Using a mutable default argument
ssh_flags=[]can lead to unexpected behavior if the list is modified. Consider usingNoneas the default and initializing inside the function.♻️ Proposed fix
```diff
-    ssh_flags=[],
+    ssh_flags=None,
 ):
+    if ssh_flags is None:
+        ssh_flags = []
```
Verify each finding against the current code and only fix it if needed. In `@projects/jump_ci/testing/tunnelling.py` at line 113, The function currently uses a mutable default argument ssh_flags=[], which can be shared across calls; change the parameter default to ssh_flags=None and inside the function (in the body of the function that declares ssh_flags) initialize it with something like: if ssh_flags is None: ssh_flags = [] so each call gets a fresh list; update any type hints or callers if needed and ensure subsequent logic uses the newly initialized ssh_flags variable (refer to the ssh_flags parameter in the tunnelling function in tunnelling.py).projects/core/ci_entrypoint/github/pr_args.py (1)
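As an aside on the pattern flagged above, a toy illustration (hypothetical functions, not from this repo) of why a mutable default is evaluated once at definition time and shared across calls:

```python
def append_item(item, bucket=[]):  # anti-pattern: one shared list for every call
    bucket.append(item)
    return bucket


def append_item_safe(item, bucket=None):  # fix: allocate a fresh list per call
    if bucket is None:
        bucket = []
    bucket.append(item)
    return bucket


first = append_item("a")
second = append_item("b")
# `first` and `second` are the SAME list object, now holding ["a", "b"]
```

The same sharing would bite `ssh_flags` the first time any caller mutated the default list.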
17-17: Use modern typing imports.Ruff reports
UP035: typing imports should come from collections.abc. For Python 3.9+,Dict,List, andTuplecan be replaced with built-indict,list, andtuple.♻️ Proposed fix for modern type hints
```diff
-from typing import Dict, List, Optional, Tuple, Any, Callable
+from collections.abc import Callable
+from typing import Any, Optional

 # Then update usages throughout the file:
 #   Dict[str, Any] -> dict[str, Any]
 #   List[str]      -> list[str]
 #   Tuple[...]     -> tuple[...]
```
Verify each finding against the current code and only fix it if needed. In `@projects/core/ci_entrypoint/github/pr_args.py` at line 17, Replace legacy typing imports on the top of projects/core/ci_entrypoint/github/pr_args.py: swap Dict, List, Tuple for the built-in types dict, list, tuple and import Callable from collections.abc (keep Optional and Any from typing if still used). Update the import statement so only Optional and Any remain in typing while Callable is imported from collections.abc, ensuring all type annotations using Dict/List/Tuple/Callable are updated to use dict/list/tuple/Callable respectively.projects/foreign_testing/orchestration/foreign_testing.py (1)
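For reference, a minimal sketch of the modernized style (illustrative names, not the real `pr_args` helpers):

```python
from collections.abc import Callable
from typing import Any, Optional


def load_overrides(raw: list[str]) -> dict[str, Any]:
    """Parse KEY=VALUE pairs into a dict, using PEP 585 built-in generics."""
    return dict(pair.split("=", 1) for pair in raw)


def pick_handler(
    name: str, handlers: dict[str, Callable[[str], str]]
) -> Optional[Callable[[str], str]]:
    """`Callable` now comes from collections.abc instead of typing."""
    return handlers.get(name)


overrides = load_overrides(["region=eu", "retries=3"])
```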
1-9: Consider moving logger initialization after all imports.The
logger = logging.getLogger(__name__)at line 4 is placed between import statements. While functional, conventionally module-level initializations come after all imports.♻️ Suggested reordering
```diff
 import pathlib
 import logging
-
-logger = logging.getLogger(__name__)
 import os
 import shutil

 from projects.core.library import env, config, run
 from projects.core.ci_entrypoint import run_ci
+
+logger = logging.getLogger(__name__)
```
Verify each finding against the current code and only fix it if needed. In `@projects/foreign_testing/orchestration/foreign_testing.py` around lines 1 - 9, The logger instantiation is placed between imports; move the module-level initialization logger = logging.getLogger(__name__) to after all import statements so it sits with other top-level initializations; update the placement in projects/foreign_testing/orchestration/foreign_testing.py so logger is defined after pathlib, logging, os, shutil and the from ... imports (referencing the logger symbol and module __name__) to follow conventional ordering.projects/core/ci_entrypoint/prepare_ci.py (1)
293-305: Return annotations/docstrings should match actual behavior.
precheck_artifact_dir()andsystem_prechecks()are described/typed as returningbool, but they currently returnNone(or raise). Consider switching both to-> None(and update docstrings) unless you plan to return explicit booleans.Also applies to: 364-370
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/core/ci_entrypoint/prepare_ci.py` around lines 293 - 305, precheck_artifact_dir and system_prechecks are annotated and documented as returning bool but they currently return None (or raise); update both functions to have a return type of -> None and revise their docstrings to reflect they do not return a value (remove or replace the "Returns: bool" section with "Returns: None" or no Returns section). Locate the functions by name (precheck_artifact_dir and system_prechecks) and ensure there are no implicit bare returns expecting a boolean; alternatively, if you prefer explicit booleans instead, change the implementation to return True/False consistently and keep the bool annotation—pick one approach and make the signature, docstring, and implementation consistent..github/workflows/ruff.yaml (1)
19-24: Pin GitHub Actions to commit SHAs for supply-chain hardening.Replace mutable version tags with full commit SHAs at line 19 (
actions/checkout@v4) and line 22 (actions/setup-python@v5). This prevents unexpected behavior changes from upstream updates and reduces supply-chain risk.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/ruff.yaml around lines 19 - 24, Replace the mutable action tags actions/checkout@v4 and actions/setup-python@v5 with their corresponding immutable commit SHAs to pin workflow dependencies; locate the two usages of actions/checkout and actions/setup-python in the workflow and update the version strings to the full commit SHA values (e.g., actions/checkout@<checkout-sha> and actions/setup-python@<setup-python-sha>) ensuring you use the official commit SHAs from each action's repository release page and update both occurrences consistently.projects/core/dsl/template.py (1)
122-123: Simplify unnecessary list comprehension.The list comprehension
[t for t in templates_dir.glob("*.j2")]can be simplified tolist(templates_dir.glob("*.j2"))for better readability.♻️ Proposed simplification
```diff
-    available_templates = [t for t in templates_dir.glob("*.j2")]
+    available_templates = list(templates_dir.glob("*.j2"))
```
Verify each finding against the current code and only fix it if needed. In `@projects/core/dsl/template.py` around lines 122 - 123, Replace the unnecessary list comprehension used to populate available_templates with a direct list() call around the generator from templates_dir.glob("*.j2"); specifically, change the assignment to available_templates so it uses list(templates_dir.glob("*.j2")) (the surrounding logic that raises FileNotFoundError if empty should remain unchanged) to improve readability in the code that includes the available_templates variable and the subsequent FileNotFoundError raise.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 2050195c-454b-4a7f-a676-268e914cf247
📒 Files selected for processing (72)
- .github/workflows/ruff.yaml
- .pre-commit-config.yaml
- projects/cluster/toolbox/build_image/main.py
- projects/cluster/toolbox/rebuild_image/main.py
- projects/core/ci_entrypoint/fournos.py
- projects/core/ci_entrypoint/github/pr_args.py
- projects/core/ci_entrypoint/prepare_ci.py
- projects/core/ci_entrypoint/run_ci.py
- projects/core/dsl/__init__.py
- projects/core/dsl/cli.py
- projects/core/dsl/context.py
- projects/core/dsl/log.py
- projects/core/dsl/runner.py
- projects/core/dsl/runtime.py
- projects/core/dsl/script_manager.py
- projects/core/dsl/shell.py
- projects/core/dsl/task.py
- projects/core/dsl/template.py
- projects/core/dsl/toolbox.py
- projects/core/dsl/utils/__init__.py
- projects/core/dsl/utils/k8s.py
- projects/core/image/ocpci/include_containerfile.py
- projects/core/launcher/forge_launcher.py
- projects/core/library/ci.py
- projects/core/library/cli.py
- projects/core/library/config.py
- projects/core/library/env.py
- projects/core/library/run.py
- projects/core/library/vault.py
- projects/core/notifications/github/__init__.py
- projects/core/notifications/github/api.py
- projects/core/notifications/github/gen_jwt.py
- projects/core/notifications/send.py
- projects/core/notifications/slack/__init__.py
- projects/core/notifications/slack/api.py
- projects/foreign_testing/orchestration/cli.py
- projects/foreign_testing/orchestration/foreign_testing.py
- projects/fournos_launcher/orchestration/ci.py
- projects/fournos_launcher/orchestration/cli.py
- projects/fournos_launcher/orchestration/submit.py
- projects/fournos_launcher/orchestration/utils.py
- projects/fournos_launcher/toolbox/submit_and_wait/main.py
- projects/jump_ci/orchestration/ci.py
- projects/jump_ci/testing/prepare_jump_ci.py
- projects/jump_ci/testing/test.py
- projects/jump_ci/testing/tunnelling.py
- projects/jump_ci/testing/utils.py
- projects/jump_ci/testing/utils_gethostname.py
- projects/jump_ci/toolbox/jump_ci.py
- projects/legacy/ansible-config/callback_plugins/human_log.py
- projects/legacy/ansible-config/callback_plugins/json_to_logfile.py
- projects/legacy/library/__init__.py
- projects/legacy/library/ansible_toolbox.py
- projects/legacy/library/common.py
- projects/legacy/library/config.py
- projects/legacy/library/env.py
- projects/legacy/library/export.py
- projects/legacy/library/run.py
- projects/legacy/library/run_toolbox.py
- projects/legacy/library/sizing.py
- projects/llm_d/orchestration/ci.py
- projects/llm_d/orchestration/cli.py
- projects/llm_d/orchestration/prepare_llmd.py
- projects/llm_d/orchestration/test_llmd.py
- projects/llm_d/toolbox/capture_isvc_state/main.py
- projects/repo/toolbox/repo.py
- projects/skeleton/orchestration/ci.py
- projects/skeleton/orchestration/cli.py
- projects/skeleton/orchestration/prepare_skeleton.py
- projects/skeleton/orchestration/test_skeleton.py
- projects/skeleton/toolbox/cluster_info/main.py
- pyproject.toml
```python
with tarfile.open(tar_file, "r:gz") as tar:
    tar.extract("oc", temp_dir, filter="data")
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Description: Check Python version requirement in project configuration
# Check pyproject.toml for python version requirement
fd -t f 'pyproject.toml' --exec cat {} \; | grep -A 5 'python'
# Check setup.py for python_requires
fd -t f 'setup.py' --exec cat {} \; | grep -A 2 'python_requires'
# Check any version files
fd -t f '.python-version' --exec cat {} \;
Repository: openshift-psap/forge
Length of output: 627
Incompatible with project's Python version requirement.
The `filter="data"` parameter on line 37 was introduced in Python 3.12 (PEP 706) and only backported to the 3.11 line as of 3.11.4, but the project specifies `requires-python = ">=3.11"` in pyproject.toml. On older 3.11 interpreters this raises a TypeError at runtime. Either raise the minimum Python requirement, or guard/remove the filter parameter.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@projects/fournos_launcher/orchestration/utils.py` around lines 36 - 37, The
tar.extract call uses the filter="data" kwarg which requires Python 3.12+,
conflicting with our project's requires-python >=3.11; remove the filter
parameter to restore compatibility. Locate the tarfile.open block (the variable
tar from tarfile.open(tar_file, "r:gz")) and change tar.extract("oc", temp_dir,
filter="data") to tar.extract("oc", temp_dir) (or an equivalent extraction call
without the filter kwarg) so the code runs on Python 3.11.
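If dropping the filter outright is undesirable, one middle ground is feature detection — a sketch (the helper name `safe_extract` is made up for illustration):

```python
import tarfile


def safe_extract(tar: tarfile.TarFile, member: str, dest: str) -> None:
    """Use the PEP 706 "data" extraction filter when the runtime supports it."""
    try:
        tar.extract(member, dest, filter="data")
    except TypeError:  # older interpreters without the `filter` parameter
        tar.extract(member, dest)
```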
```python
if not line or line.startswith("#"):
    continue
```
Initialize and validate group before using it in machine entries.
`group` is only set when a line starts with `# `, but later entries always use it. If the file starts with a machine row, this can fail at runtime (UnboundLocalError on Line 34).
Suggested fix
def parse_machines():
machines = {}
+ group = None
with open(MACHINES_FILE) as f:
for _line in f.readlines():
line = _line.strip()
if line.startswith("# "):
group = line.strip("# ")
@@
instance, cpu, memory, price, *accel = line.split(", ")
+ if group is None:
+ raise ValueError("Machine entry found before any '# <group>' header")🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@projects/legacy/library/sizing.py` around lines 25 - 26, The parser currently
sets group only when encountering header lines like those starting with "# " but
later always uses group for machine rows (causing UnboundLocalError if a machine
row appears first); initialize group to None (or an empty string) before the
loop and, inside the machine-entry handling code (the block that reads machine
rows and uses group), validate that group is set (raise a clear ValueError or
assign a sensible default) before using it; update references to the group
variable in the parsing logic so they check for None/empty and provide a helpful
error message mentioning the offending line or set a default group name.
|
Fixing issues still. Marking as WIP. |
Actionable comments posted: 2
♻️ Duplicate comments (1)
projects/legacy/library/run.py (1)
101-106:⚠️ Potential issue | 🟠 Major
_dict_to_run_toolbox_args()still drops serialization and escaping.Line 101-Line 106 still compute transformed values but build args from raw
v, so dict/list values aren’t passed as JSON and quote escaping is ineffective for string values.Proposed fix
```diff
 def _dict_to_run_toolbox_args(args_dict):
     args = []
     for k, v in args_dict.items():
-        if isinstance(v, dict) or isinstance(v, list):
-            json.dumps(v)
-            arg = f'--{k}="{v}"'
-        else:
-            str(v).replace("'", "'")
-            arg = f"--{k}='{v}'"
+        if isinstance(v, (dict, list)):
+            val = json.dumps(v)
+        else:
+            val = str(v)
+
+        safe_val = val.replace("'", "'\"'\"'")
+        arg = f"--{k}='{safe_val}'"
         args.append(arg)
     return " ".join(args)
```
Verify each finding against the current code and only fix it if needed. In `@projects/legacy/library/run.py` around lines 101 - 106, The helper _dict_to_run_toolbox_args currently computes transformed values but then builds args from the raw v; fix it by assigning the transformed value to a variable and using that when building arg: for dict/list call json.dumps(v) and set value = json.dumps(v) and for other values set value = str(v).replace("'", "\\'") (or otherwise escape single quotes), then construct arg using value (e.g., arg = f'--{k}="{value}"' for JSON/dict/list and arg = f"--{k}='{value}'" for strings) so serialization and escaping actually take effect; update places referencing v/k/arg in _dict_to_run_toolbox_args accordingly.
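The fixed serializer can be checked in isolation; a standalone version of the suggestion (renamed to avoid clashing with the real helper):

```python
import json


def dict_to_toolbox_args(args_dict):
    """JSON-encode containers and shell-escape single quotes (sketch of the fix)."""
    args = []
    for k, v in args_dict.items():
        val = json.dumps(v) if isinstance(v, (dict, list)) else str(v)
        safe_val = val.replace("'", "'\"'\"'")  # close quote, escaped quote, reopen
        args.append(f"--{k}='{safe_val}'")
    return " ".join(args)
```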
🧹 Nitpick comments (2)
projects/jump_ci/orchestration/ci.py (1)
53-60: Optional: deduplicate repeated--clusteroption definition.The same Click option shape is repeated for
lock_clusterandunlock_cluster. Consider extracting a small decorator factory to keep this centralized.♻️ Suggested refactor
```diff
+def cluster_option(help_text: str):
+    return click.option(
+        "--cluster",
+        "cluster",
+        nargs=1,
+        metavar="KEY",
+        help=help_text,
+        default=None,
+    )
+
 @main.command()
 @click.pass_context
-@click.option(
-    "--cluster",
-    "cluster",
-    nargs=1,
-    metavar="KEY",
-    help="Give the name of the cluster to lock",
-    default=None,
-)
+@cluster_option("Give the name of the cluster to lock")
 def lock_cluster(ctx, cluster):
 @@
 @main.command()
 @click.pass_context
-@click.option(
-    "--cluster",
-    "cluster",
-    nargs=1,
-    metavar="KEY",
-    help="Give the name of the cluster to unlock",
-    default=None,
-)
+@cluster_option("Give the name of the cluster to unlock")
 def unlock_cluster(ctx, cluster):
```

Also applies to: 88-95
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/jump_ci/orchestration/ci.py` around lines 53 - 60, The Click option for "--cluster" is duplicated for the lock_cluster and unlock_cluster command decorators; extract a small decorator factory (e.g., cluster_option) that returns click.option(...) with the existing args ( "--cluster", "cluster", nargs=1, metavar="KEY", help="Give the name of the cluster to lock", default=None ) and use that decorator on both the lock_cluster and unlock_cluster command functions to remove repetition and centralize option configuration (refer to the existing click.option usage on lock_cluster and unlock_cluster to replicate the exact option signature).projects/jump_ci/testing/tunnelling.py (1)
91-97: Prefer structured YAML emission over manual string templating.Line 91-Line 97 manually build YAML and can break when SSH flags contain YAML-sensitive characters. Safer to dump a dict via
yaml.safe_dump.Proposed refactor
```diff
-    extra_vars_yaml_content = f"""
-ansible_port: {remote_host_port}
-ansible_ssh_private_key_file: {private_key_path}
-ansible_ssh_user: {bastion_user}
-ansible_ssh_common_args: "{" ".join(SSH_FLAGS)}"
-"""
-    print(extra_vars_yaml_content, file=extra_vars_file)
+    extra_vars = {
+        "ansible_port": remote_host_port,
+        "ansible_ssh_private_key_file": str(private_key_path),
+        "ansible_ssh_user": bastion_user,
+        "ansible_ssh_common_args": " ".join(SSH_FLAGS or []),
+    }
+    yaml.safe_dump(extra_vars, extra_vars_file, sort_keys=False)
```
Verify each finding against the current code and only fix it if needed. In `@projects/jump_ci/testing/tunnelling.py` around lines 91 - 97, The code currently builds extra_vars_yaml_content via string templating (variable extra_vars_yaml_content) which can break if SSH_FLAGS contains YAML-sensitive characters; instead construct a dictionary like {"ansible_port": remote_host_port, "ansible_ssh_private_key_file": private_key_path, "ansible_ssh_user": bastion_user, "ansible_ssh_common_args": " ".join(SSH_FLAGS)} and write it using yaml.safe_dump to extra_vars_file; ensure yaml is imported (safe_dump) and that ansible_ssh_common_args remains a single joined string before dumping to preserve the existing semantics.
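To see why `safe_dump` is more robust than string templating, a standalone round-trip with made-up values (requires PyYAML; none of these values come from the real tunnel setup):

```python
import io

import yaml  # PyYAML

extra_vars = {
    "ansible_port": 2222,
    "ansible_ssh_private_key_file": "/tmp/id_rsa",
    "ansible_ssh_user": "bastion",
    "ansible_ssh_common_args": '-o ProxyCommand="ssh -W %h:%p jump"',
}

buf = io.StringIO()
yaml.safe_dump(extra_vars, buf, sort_keys=False)
rendered = buf.getvalue()
# The embedded double quotes are escaped for us, so the YAML stays parseable.
```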
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@projects/core/ci_entrypoint/run_ci.py`:
- Around line 324-339: The parser in parse_cli_help currently calls line =
line.strip(), which removes leading spaces needed by the subsequent checks;
instead preserve the original line (e.g., raw_line = line) and use a trimmed
version only for content comparisons. Change the loop to keep raw_line = line
(no strip), use trimmed = raw_line.strip() for the "commands:" detection
(trimmed.lower().startswith("commands:")), detect section breaks by checking
raw_line (e.g., in_commands_section and raw_line and not raw_line.startswith("
")), and detect command rows with raw_line.startswith(" ") when extracting
entries — update references to line in that block to use raw_line/trimmed
accordingly in parse_cli_help.
- Around line 517-519: The --verbose click option currently uses is_flag=True
with default=True which inverts its behavior (verbose is on by default and -v
disables it); update the click.option for "--verbose" (the decorator specifying
"--verbose", "-v", is_flag=True, help="Enable verbose output") to use
default=False so that providing -v enables verbose output consistent with the
help and downstream logic.
---
Duplicate comments:
In `@projects/legacy/library/run.py`:
- Around line 101-106: The helper _dict_to_run_toolbox_args currently computes
transformed values but then builds args from the raw v; fix it by assigning the
transformed value to a variable and using that when building arg: for dict/list
call json.dumps(v) and set value = json.dumps(v) and for other values set value
= str(v).replace("'", "\\'") (or otherwise escape single quotes), then construct
arg using value (e.g., arg = f'--{k}="{value}"' for JSON/dict/list and arg =
f"--{k}='{value}'" for strings) so serialization and escaping actually take
effect; update places referencing v/k/arg in _dict_to_run_toolbox_args
accordingly.
---
Nitpick comments:
In `@projects/jump_ci/orchestration/ci.py`:
- Around line 53-60: The Click option for "--cluster" is duplicated for the
lock_cluster and unlock_cluster command decorators; extract a small decorator
factory (e.g., cluster_option) that returns click.option(...) with the existing
args ( "--cluster", "cluster", nargs=1, metavar="KEY", help="Give the name of
the cluster to lock", default=None ) and use that decorator on both the
lock_cluster and unlock_cluster command functions to remove repetition and
centralize option configuration (refer to the existing click.option usage on
lock_cluster and unlock_cluster to replicate the exact option signature).
In `@projects/jump_ci/testing/tunnelling.py`:
- Around line 91-97: The code currently builds extra_vars_yaml_content via
string templating (variable extra_vars_yaml_content) which can break if
SSH_FLAGS contains YAML-sensitive characters; instead construct a dictionary
like {"ansible_port": remote_host_port, "ansible_ssh_private_key_file":
private_key_path, "ansible_ssh_user": bastion_user, "ansible_ssh_common_args": "
".join(SSH_FLAGS)} and write it using yaml.safe_dump to extra_vars_file; ensure
yaml is imported (safe_dump) and that ansible_ssh_common_args remains a single
joined string before dumping to preserve the existing semantics.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: c23a89a3-9362-4fa1-89c3-6b4271466fb7
📒 Files selected for processing (62)
- projects/cluster/toolbox/build_image/main.py
- projects/cluster/toolbox/rebuild_image/main.py
- projects/core/ci_entrypoint/fournos.py
- projects/core/ci_entrypoint/github/pr_args.py
- projects/core/ci_entrypoint/prepare_ci.py
- projects/core/ci_entrypoint/run_ci.py
- projects/core/dsl/__init__.py
- projects/core/dsl/cli.py
- projects/core/dsl/log.py
- projects/core/dsl/runner.py
- projects/core/dsl/runtime.py
- projects/core/dsl/script_manager.py
- projects/core/dsl/shell.py
- projects/core/dsl/task.py
- projects/core/dsl/template.py
- projects/core/dsl/toolbox.py
- projects/core/image/ocpci/include_containerfile.py
- projects/core/launcher/forge_launcher.py
- projects/core/library/ci.py
- projects/core/library/cli.py
- projects/core/library/config.py
- projects/core/library/env.py
- projects/core/library/run.py
- projects/core/library/vault.py
- projects/core/notifications/github/api.py
- projects/core/notifications/send.py
- projects/core/notifications/slack/api.py
- projects/foreign_testing/orchestration/ci.py
- projects/foreign_testing/orchestration/cli.py
- projects/foreign_testing/orchestration/foreign_testing.py
- projects/fournos_launcher/orchestration/ci.py
- projects/fournos_launcher/orchestration/cli.py
- projects/fournos_launcher/orchestration/submit.py
- projects/fournos_launcher/orchestration/utils.py
- projects/fournos_launcher/toolbox/submit_and_wait/main.py
- projects/jump_ci/orchestration/ci.py
- projects/jump_ci/testing/prepare_jump_ci.py
- projects/jump_ci/testing/test.py
- projects/jump_ci/testing/tunnelling.py
- projects/jump_ci/testing/utils.py
- projects/jump_ci/testing/utils_gethostname.py
- projects/jump_ci/toolbox/jump_ci.py
- projects/legacy/ansible-config/callback_plugins/human_log.py
- projects/legacy/ansible-config/callback_plugins/json_to_logfile.py
- projects/legacy/library/ansible_toolbox.py
- projects/legacy/library/common.py
- projects/legacy/library/config.py
- projects/legacy/library/env.py
- projects/legacy/library/export.py
- projects/legacy/library/run.py
- projects/legacy/library/sizing.py
- projects/llm_d/orchestration/ci.py
- projects/llm_d/orchestration/cli.py
- projects/llm_d/orchestration/prepare_llmd.py
- projects/llm_d/orchestration/test_llmd.py
- projects/llm_d/toolbox/capture_isvc_state/main.py
- projects/repo/toolbox/repo.py
- projects/skeleton/orchestration/ci.py
- projects/skeleton/orchestration/cli.py
- projects/skeleton/orchestration/prepare_skeleton.py
- projects/skeleton/orchestration/test_skeleton.py
- projects/skeleton/toolbox/cluster_info/main.py
✅ Files skipped from review due to trivial changes (37)
- projects/core/notifications/slack/api.py
- projects/legacy/library/common.py
- projects/fournos_launcher/orchestration/ci.py
- projects/skeleton/orchestration/prepare_skeleton.py
- projects/llm_d/orchestration/prepare_llmd.py
- projects/foreign_testing/orchestration/ci.py
- projects/core/library/env.py
- projects/skeleton/orchestration/ci.py
- projects/fournos_launcher/orchestration/utils.py
- projects/foreign_testing/orchestration/foreign_testing.py
- projects/llm_d/orchestration/ci.py
- projects/core/library/ci.py
- projects/llm_d/orchestration/cli.py
- projects/core/dsl/toolbox.py
- projects/core/notifications/github/api.py
- projects/core/dsl/runner.py
- projects/skeleton/orchestration/cli.py
- projects/legacy/ansible-config/callback_plugins/human_log.py
- projects/core/dsl/shell.py
- projects/legacy/library/env.py
- projects/fournos_launcher/orchestration/submit.py
- projects/skeleton/orchestration/test_skeleton.py
- projects/jump_ci/testing/utils.py
- projects/llm_d/toolbox/capture_isvc_state/main.py
- projects/jump_ci/testing/prepare_jump_ci.py
- projects/fournos_launcher/orchestration/cli.py
- projects/fournos_launcher/toolbox/submit_and_wait/main.py
- projects/core/dsl/template.py
- projects/core/dsl/__init__.py
- projects/core/dsl/task.py
- projects/legacy/library/sizing.py
- projects/jump_ci/testing/test.py
- projects/core/launcher/forge_launcher.py
- projects/core/library/cli.py
- projects/legacy/library/ansible_toolbox.py
- projects/core/ci_entrypoint/prepare_ci.py
- projects/core/notifications/send.py
🚧 Files skipped from review as they are similar to previous changes (19)
- projects/foreign_testing/orchestration/cli.py
- projects/jump_ci/testing/utils_gethostname.py
- projects/repo/toolbox/repo.py
- projects/skeleton/toolbox/cluster_info/main.py
- projects/legacy/ansible-config/callback_plugins/json_to_logfile.py
- projects/core/ci_entrypoint/fournos.py
- projects/cluster/toolbox/build_image/main.py
- projects/llm_d/orchestration/test_llmd.py
- projects/core/dsl/cli.py
- projects/jump_ci/toolbox/jump_ci.py
- projects/core/image/ocpci/include_containerfile.py
- projects/core/library/run.py
- projects/legacy/library/export.py
- projects/cluster/toolbox/rebuild_image/main.py
- projects/core/dsl/log.py
- projects/core/dsl/script_manager.py
- projects/core/library/config.py
- projects/core/ci_entrypoint/github/pr_args.py
- projects/core/library/vault.py
```diff
     """Generate a summary of the build process"""

-    summary_file = args.artifact_dir / "artifacts" / "build_summary.txt"
+    args.artifact_dir / "artifacts" / "build_summary.txt"
```
this line should be removed
|
thanks Alberto, that's a 'small' but great improvement 😅 |
Signed-off-by: Alberto Perdomo <aperdomo@redhat.com>
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
projects/core/dsl/runtime.py (3)
143-148:⚠️ Potential issue | 🔴 CriticalOnly run
@alwaystasks in the recovery loop.Line 148 now uses the right task object, but this block still drains every remaining entry in
file_tasks. Becauseprojects/core/dsl/task.py:225-237already storestask_info["always_execute"], a normal task after the first failure will still run here unless you filter on that flag.💡 Suggested fix
```diff
     if file_tasks:
         logging.warning("Executing the `@always` tasks ...")
         while file_tasks:
             try:
                 current_task_info = file_tasks.pop(0)
-                _execute_single_task(current_task_info, args, shared_context)
+                if current_task_info.get("always_execute"):
+                    _execute_single_task(current_task_info, args, shared_context)
```
Verify each finding against the current code and only fix it if needed. In `@projects/core/dsl/runtime.py` around lines 143 - 148, The recovery loop currently drains all entries from file_tasks and executes them regardless of their flag; change it to only execute tasks whose task_info["always_execute"] is truthy. In the block that pops and calls _execute_single_task, filter file_tasks (or iterate a copy) and only pop/execute entries where task_info.get("always_execute") is True, leaving normal tasks in file_tasks untouched so they aren’t executed here; reference the task_info["always_execute"] flag stored by projects/core/dsl/task.py and the _execute_single_task call to locate where to apply this filter.
213-267:⚠️ Potential issue | 🟠 MajorFix unbound variable errors that would mask task failures.
Line 215 (
create_task_parameters) can fail beforecontextis assigned, but line 263 unconditionally callsvars(context). Also, line 249 shadows the outer exception variableewith an inner exception of the same name; Python clears that binding when the inner handler exits, causingoriginal_exception=eandraise task_error from eon lines 259–260 to raiseUnboundLocalErrorinstead of the actual task failure.Suggested fixes:
- Initialize
context = Nonebefore the try block to handle pre-creation failures- Rename the inner exception handler's variable from
as e:toas path_error:to avoid shadowing- Check
context is not Nonebefore accessing it in line 263Diff
```diff
 def _execute_single_task(task_info, args, shared_context):
     """Execute a single task with condition checking"""
     task_name = task_info["name"]
     task_func = task_info["func"]
     condition = task_info["condition"]
     task_status = task_info["status"] = {}

     # Check condition if present
-    try:
+    context = None
+    try:
         # Create readonly args and mutable context
         readonly_args, context = create_task_parameters(args, shared_context)
@@
-    except Exception as e:
+    except Exception as original_exception:
         co_filename = task_func.original_func.__code__.co_filename
         try:
             co_filename = Path(co_filename).relative_to(env.FORGE_HOME)
-        except ValueError as e:
+        except ValueError as path_error:
             logging.warning(
-                f"Path {co_filename} isn't relative to FORGE_HOME={env.FORGE_HOME} ({e})"
+                f"Path {co_filename} isn't relative to FORGE_HOME={env.FORGE_HOME} ({path_error})"
             )
             pass  # Use absolute path if file is outside FORGE_HOME
@@
         task_error = TaskExecutionError(
             task_name=task_name,
             task_description=task_func.__doc__ or "No description",
-            original_exception=e,
+            original_exception=original_exception,
             task_args=vars(args) if hasattr(args, "__dict__") else None,
-            task_context=vars(context) if hasattr(context, "__dict__") else None,
+            task_context=vars(context) if context is not None and hasattr(context, "__dict__") else None,
             task_location=task_location,
             artifact_dir=str(env.ARTIFACT_DIR),
         )
-        raise task_error from e
+        raise task_error from original_exception
```
Verify each finding against the current code and only fix it if needed. In `@projects/core/dsl/runtime.py` around lines 213 - 267, The try/except can leak an UnboundLocalError because context may not be set if create_task_parameters fails and the inner except shadows the outer exception variable `e`; fix by initializing context = None before calling create_task_parameters in the block around create_task_parameters and task execution (reference create_task_parameters and variable context), rename the inner ValueError handler variable to path_error (the block that handles Path(...).relative_to(env.FORGE_HOME)) to avoid shadowing the outer exception `e`, and guard uses of context when building TaskExecutionError (use vars(context) only if context is not None) so that TaskExecutionError.original_exception and the final `raise task_error from e` correctly reference the original task exception; keep TaskExecutionError construction and task_location logic (referencing task_func.original_func, env.FORGE_HOME, env.ARTIFACT_DIR) unchanged otherwise.
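The shadowing failure mode described above is easy to reproduce in isolation. This is a minimal standalone sketch, not the project's code: Python deletes the `as` binding when an `except` block exits, so a shadowed outer `e` becomes unbound.

```python
def demo():
    try:
        raise RuntimeError("task failed")
    except Exception as e:  # outer binding
        try:
            raise ValueError("bad path")
        except ValueError as e:  # inner handler shadows `e`...
            pass                 # ...and Python deletes `e` when this block exits
        return repr(e)           # outer `e` is gone -> UnboundLocalError

try:
    demo()
except UnboundLocalError as err:
    print(f"UnboundLocalError: {err}")
```

This is why renaming the inner handler's variable (or capturing the outer exception under a distinct name) is necessary, not just stylistic.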
339-356: ⚠️ Potential issue | 🟠 Major
Quote restart-script arguments with shell escaping.
Lines 345 and 356 interpolate raw values into a shell script without escaping. An argument containing
", newlines, backticks, or$()will corruptrestart.shor execute arbitrary commands when the user reruns it.💡 Suggested fix
Add this import near the top of the file:
`import shlex`
Then build the command with shell-escaped tokens:
```diff
-        script_content += f'python "{rel_filename}"'
+        script_content += f"python {shlex.quote(rel_filename)}"
@@
         if isinstance(value, bool):
             if value:  # Only add flag if True
-                script_content += " \\\n    " + f"--{key.replace('_', '-')}"
+                script_content += " \\\n    " + shlex.quote(f"--{key.replace('_', '-')}")
         else:
-            script_content += " \\\n    " + f'--{key.replace("_", "-")} "{value}"'
+            script_content += (
+                " \\\n    "
+                + shlex.quote(f"--{key.replace('_', '-')}")
+                + " "
+                + shlex.quote(str(value))
+            )
```
Verify each finding against the current code and only fix it if needed. In `@projects/core/dsl/runtime.py` around lines 339 - 356, The restart script currently interpolates raw rel_filename and function_args values into script_content (see variables script_content, rel_filename, function_args), which allows quotes, newlines or shell metacharacters to break or execute commands; fix by importing shlex and using shlex.quote() when inserting rel_filename and when formatting each argument value (and for flag booleans include the quoted flag token), e.g., wrap the python command target with shlex.quote(rel_filename) and use shlex.quote(str(value)) for non-bool args while preserving the existing line-continuation formatting so the generated restart.sh is safe from injection.
♻️ Duplicate comments (1)
projects/legacy/library/run.py (1)
101-106: ⚠️ Potential issue | 🟠 Major
`_dict_to_run_toolbox_args` still builds malformed CLI args.
At Line 102 and Line 105, transformed values are discarded and the raw `v` is interpolated. This breaks JSON serialization for dict/list values and leaves quoting fragile for string values.
Proposed fix
```diff
 import itertools
 import json
 import logging
 import os
 import pathlib
+import shlex
 import signal
 import subprocess
 import sys
 import traceback
@@
 def _dict_to_run_toolbox_args(args_dict):
     args = []
     for k, v in args_dict.items():
-        if isinstance(v, dict) or isinstance(v, list):
-            json.dumps(v)
-            arg = f'--{k}="{v}"'
-        else:
-            str(v).replace("'", "'")
-            arg = f"--{k}='{v}'"
+        val = json.dumps(v) if isinstance(v, (dict, list)) else str(v)
+        arg = f"--{k}={shlex.quote(val)}"
         args.append(arg)
     return " ".join(args)
```
Verify each finding against the current code and only fix it if needed. In `@projects/legacy/library/run.py` around lines 101 - 106, In _dict_to_run_toolbox_args, the transformed values are thrown away — json.dumps(v) and str(v).replace(...) are called but their results aren't used, so args use the raw v and produce malformed CLI fragments; fix by storing the transformed value (e.g., serialized = json.dumps(v) for dict/list and escaped = str(v).replace("'", "\\'") or other safe-quoting for strings) and interpolate that stored value into the arg construction (use the serialized/escaped variable in the f-strings for arg), ensuring dict/list use double-quoted JSON and string values are properly escaped before wrapping in single quotes.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@projects/core/ci_entrypoint/github/pr_args.py`:
- Around line 124-135: The current /var parsing accepts strings that contain ":"
but not ": " and silently returns {var_content: var_content}, which
misconfigures overrides; update the fallback in the try block (the branch
handling var_content without ": ") to reject such cases instead of returning a
raw map—use the same validation as the earlier check (raise an Exception
referencing the original line, e.g., "Invalid /var directive format: {line}
(expected 'key: value')") so only properly formatted "key: value" pairs are
accepted; edit the logic around the var_content handling (the split branch and
its fallback) to enforce this rejection.
In `@projects/core/ci_entrypoint/prepare_ci.py`:
- Around line 292-299: precheck_artifact_dir() and system_prechecks() are
declared to return bool but currently return None on success; update both
functions so they return True on successful precheck paths (e.g., when
ARTIFACT_DIR is present or other checks pass) and return False or raise
consistently on failure paths (preserve existing RuntimeError for OpenShift CI
case if appropriate). Ensure you modify the return at the end of
precheck_artifact_dir() (where it currently just returns/falls through) to
return True, and similarly adjust the end/fallthrough returns in
system_prechecks() (lines noted in the review) so callers always receive a
boolean per the -> bool annotation.
---
Outside diff comments:
In `@projects/core/dsl/runtime.py`:
- Around line 143-148: The recovery loop currently drains all entries from
file_tasks and executes them regardless of their flag; change it to only execute
tasks whose task_info["always_execute"] is truthy. In the block that pops and
calls _execute_single_task, filter file_tasks (or iterate a copy) and only
pop/execute entries where task_info.get("always_execute") is True, leaving
normal tasks in file_tasks untouched so they aren’t executed here; reference the
task_info["always_execute"] flag stored by projects/core/dsl/task.py and the
_execute_single_task call to locate where to apply this filter.
- Around line 213-267: The try/except can leak an UnboundLocalError because
context may not be set if create_task_parameters fails and the inner except
shadows the outer exception variable `e`; fix by initializing context = None
before calling create_task_parameters in the block around create_task_parameters
and task execution (reference create_task_parameters and variable context),
rename the inner ValueError handler variable to path_error (the block that
handles Path(...).relative_to(env.FORGE_HOME)) to avoid shadowing the outer
exception `e`, and guard uses of context when building TaskExecutionError (use
vars(context) only if context is not None) so that
TaskExecutionError.original_exception and the final `raise task_error from e`
correctly reference the original task exception; keep TaskExecutionError
construction and task_location logic (referencing task_func.original_func,
env.FORGE_HOME, env.ARTIFACT_DIR) unchanged otherwise.
- Around line 339-356: The restart script currently interpolates raw
rel_filename and function_args values into script_content (see variables
script_content, rel_filename, function_args), which allows quotes, newlines or
shell metacharacters to break or execute commands; fix by importing shlex and
using shlex.quote() when inserting rel_filename and when formatting each
argument value (and for flag booleans include the quoted flag token), e.g., wrap
the python command target with shlex.quote(rel_filename) and use
shlex.quote(str(value)) for non-bool args while preserving the existing
line-continuation formatting so the generated restart.sh is safe from injection.
---
Duplicate comments:
In `@projects/legacy/library/run.py`:
- Around line 101-106: In _dict_to_run_toolbox_args, the transformed values are
thrown away — json.dumps(v) and str(v).replace(...) are called but their results
aren't used, so args use the raw v and produce malformed CLI fragments; fix by
storing the transformed value (e.g., serialized = json.dumps(v) for dict/list
and escaped = str(v).replace("'", "\\'") or other safe-quoting for strings) and
interpolate that stored value into the arg construction (use the
serialized/escaped variable in the f-strings for arg), ensuring dict/list use
double-quoted JSON and string values are properly escaped before wrapping in
single quotes.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 1e5e6b56-6f6c-446d-a606-c2a17d1e1450
📒 Files selected for processing (37)
- projects/cluster/toolbox/build_image/main.py
- projects/cluster/toolbox/rebuild_image/main.py
- projects/core/ci_entrypoint/fournos.py
- projects/core/ci_entrypoint/github/pr_args.py
- projects/core/ci_entrypoint/prepare_ci.py
- projects/core/ci_entrypoint/run_ci.py
- projects/core/dsl/cli.py
- projects/core/dsl/context.py
- projects/core/dsl/log.py
- projects/core/dsl/runtime.py
- projects/core/dsl/script_manager.py
- projects/core/dsl/task.py
- projects/core/launcher/forge_launcher.py
- projects/core/library/config.py
- projects/core/library/run.py
- projects/core/library/vault.py
- projects/core/notifications/github/api.py
- projects/core/notifications/send.py
- projects/core/notifications/slack/api.py
- projects/fournos_launcher/orchestration/cli.py
- projects/fournos_launcher/toolbox/submit_and_wait/main.py
- projects/jump_ci/testing/prepare_jump_ci.py
- projects/jump_ci/testing/test.py
- projects/jump_ci/testing/tunnelling.py
- projects/jump_ci/testing/utils_gethostname.py
- projects/legacy/ansible-config/callback_plugins/human_log.py
- projects/legacy/library/ansible_toolbox.py
- projects/legacy/library/config.py
- projects/legacy/library/env.py
- projects/legacy/library/export.py
- projects/legacy/library/run.py
- projects/legacy/library/sizing.py
- projects/llm_d/toolbox/capture_isvc_state/main.py
- projects/repo/toolbox/repo.py
- projects/skeleton/orchestration/test_skeleton.py
- projects/skeleton/toolbox/cluster_info/main.py
- pyproject.toml
💤 Files with no reviewable changes (1)
- projects/core/notifications/slack/api.py
✅ Files skipped from review due to trivial changes (18)
- projects/core/notifications/github/api.py
- projects/core/dsl/context.py
- projects/repo/toolbox/repo.py
- projects/fournos_launcher/orchestration/cli.py
- projects/legacy/ansible-config/callback_plugins/human_log.py
- projects/skeleton/orchestration/test_skeleton.py
- projects/core/notifications/send.py
- projects/core/library/run.py
- projects/legacy/library/sizing.py
- projects/jump_ci/testing/test.py
- projects/core/dsl/task.py
- projects/legacy/library/env.py
- projects/cluster/toolbox/rebuild_image/main.py
- projects/jump_ci/testing/prepare_jump_ci.py
- projects/legacy/library/ansible_toolbox.py
- projects/core/dsl/script_manager.py
- projects/skeleton/toolbox/cluster_info/main.py
- projects/core/dsl/log.py
🚧 Files skipped from review as they are similar to previous changes (12)
- projects/legacy/library/export.py
- projects/core/dsl/cli.py
- projects/core/ci_entrypoint/fournos.py
- pyproject.toml
- projects/jump_ci/testing/tunnelling.py
- projects/core/launcher/forge_launcher.py
- projects/fournos_launcher/toolbox/submit_and_wait/main.py
- projects/core/ci_entrypoint/run_ci.py
- projects/llm_d/toolbox/capture_isvc_state/main.py
- projects/cluster/toolbox/build_image/main.py
- projects/core/library/config.py
- projects/core/library/vault.py
| if ":" not in var_content: | ||
| raise Exception(f"Invalid /var directive format: {line} (expected 'key: value')") | ||
|
|
||
| try: | ||
| if ': ' in var_content: | ||
| key, value = var_content.split(': ', 1) | ||
| if ": " in var_content: | ||
| key, value = var_content.split(": ", 1) | ||
| return {key.strip(): value.strip()} | ||
| else: | ||
| # Fallback for other formats | ||
| return {var_content: var_content} | ||
| except Exception as e: | ||
| raise Exception(f"Invalid /var directive: {line} - {e}") | ||
| raise Exception(f"Invalid /var directive: {line} - {e}") from e |
/var parsing has an incorrect fallback branch.
At Line 128-133, values containing : but not ": " are accepted and converted to {raw: raw}. That silently misconfigures overrides instead of parsing or rejecting the directive.
Proposed fix
```diff
 def handle_var_directive(line: str) -> dict[str, Any]:
@@
     try:
-        if ": " in var_content:
-            key, value = var_content.split(": ", 1)
-            return {key.strip(): value.strip()}
-        else:
-            # Fallback for other formats
-            return {var_content: var_content}
+        key, value = var_content.split(":", 1)
+        key = key.strip()
+        value = value.strip()
+        if not key:
+            raise ValueError("empty key")
+        return {key: value}
     except Exception as e:
         raise Exception(f"Invalid /var directive: {line} - {e}") from e
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@projects/core/ci_entrypoint/github/pr_args.py` around lines 124 - 135, The
current /var parsing accepts strings that contain ":" but not ": " and silently
returns {var_content: var_content}, which misconfigures overrides; update the
fallback in the try block (the branch handling var_content without ": ") to
reject such cases instead of returning a raw map—use the same validation as the
earlier check (raise an Exception referencing the original line, e.g., "Invalid
/var directive format: {line} (expected 'key: value')") so only properly
formatted "key: value" pairs are accepted; edit the logic around the var_content
handling (the split branch and its fallback) to enforce this rejection.
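The stricter parsing from the suggested fix can be exercised on its own. This is a sketch mirroring the proposed logic, with a hypothetical helper name:

```python
def parse_var(var_content: str) -> dict[str, str]:
    # Split on the first ":" regardless of spacing; reject empty keys.
    key, value = var_content.split(":", 1)
    key, value = key.strip(), value.strip()
    if not key:
        raise ValueError("empty key")
    return {key: value}

print(parse_var("timeout: 30"))  # → {'timeout': '30'}
print(parse_var("timeout:30"))   # colon without a space now parses too
```

Directives like `:30` (empty key) now fail loudly instead of being silently turned into a `{raw: raw}` mapping.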
```diff
     artifact_dir = os.environ.get("ARTIFACT_DIR")

     if artifact_dir:
         logger.info(f"Using ARTIFACT_DIR={artifact_dir}.")
         return

-    if os.environ.get('OPENSHIFT_CI') == 'true':
+    if os.environ.get("OPENSHIFT_CI") == "true":
         raise RuntimeError("ARTIFACT_DIR not set, cannot proceed without it in OpenShift CI.")
```
Boolean return contract is violated in precheck helpers.
precheck_artifact_dir() and system_prechecks() are annotated as -> bool but return None on success (Line 296 and function fallthrough). This can silently break callers expecting a boolean.
Proposed fix
```diff
 def precheck_artifact_dir() -> bool:
@@
     if artifact_dir:
         logger.info(f"Using ARTIFACT_DIR={artifact_dir}.")
-        return
+        return True
@@
     os.environ["ARTIFACT_DIR"] = default_dir
     Path(default_dir).mkdir(parents=True, exist_ok=True)
     logger.info(f"Using ARTIFACT_DIR={default_dir} as default artifacts directory.")
+    return True
@@
 def system_prechecks() -> bool:
@@
     # Download PR information if available
     download_pr_information()
+    return True
```
Also applies to: 357-368
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@projects/core/ci_entrypoint/prepare_ci.py` around lines 292 - 299,
precheck_artifact_dir() and system_prechecks() are declared to return bool but
currently return None on success; update both functions so they return True on
successful precheck paths (e.g., when ARTIFACT_DIR is present or other checks
pass) and return False or raise consistently on failure paths (preserve existing
RuntimeError for OpenShift CI case if appropriate). Ensure you modify the return
at the end of precheck_artifact_dir() (where it currently just returns/falls
through) to return True, and similarly adjust the end/fallthrough returns in
system_prechecks() (lines noted in the review) so callers always receive a
boolean per the -> bool annotation.
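A minimal sketch of the corrected contract (the default directory and the absence of a logger are placeholders, not the repo's values): every path either returns an explicit `True` or raises.

```python
import os
from pathlib import Path

def precheck_artifact_dir() -> bool:
    artifact_dir = os.environ.get("ARTIFACT_DIR")
    if artifact_dir:
        return True  # explicit success instead of a bare `return`
    if os.environ.get("OPENSHIFT_CI") == "true":
        raise RuntimeError("ARTIFACT_DIR not set, cannot proceed without it in OpenShift CI.")
    default_dir = "/tmp/ci-artifacts"  # placeholder default
    os.environ["ARTIFACT_DIR"] = default_dir
    Path(default_dir).mkdir(parents=True, exist_ok=True)
    return True
```

With this shape, callers written as `if not precheck_artifact_dir(): ...` behave correctly instead of silently treating a `None` success as failure.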
Summary
Add a GHA check for format using Ruff and additionally a pre-commit hook for local fixes on every git commit. To use the pre-commit hook, the standard flow is:
- `pip install .[dev]`
- `pre-commit install`
- `git commit` automatically runs the configured hooks on the files you are committing

Summary by CodeRabbit
Style
Chores