
fix: pass max_retries and timeout_sec from YAML config to LLMClient#232

Closed
dennis-lynch-nv wants to merge 1 commit into aiming-lab:main from dennis-lynch-nv:fix/llm-config-max-retries-passthrough


Conversation

@dennis-lynch-nv

Problem

The LlmConfig dataclass and _parse_llm_config() in config.py did not read max_retries or timeout_sec from the YAML config file. Additionally, LLMClient.from_rc_config() in client.py did not forward these values to LLMConfig.

This caused the LLMClient to always use the hardcoded default of 3 retries and 300s timeout regardless of what the user set in config.arc.yaml.

For example, setting max_retries: 8 in config had no effect — the client still retried only 3 times.
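For reference, a config fragment like the following would exercise the fix. The key names come from this PR; the `llm:` nesting is an assumption about how config.arc.yaml is laid out:

```yaml
# Assumed structure of config.arc.yaml (nesting is a guess)
llm:
  max_retries: 8    # previously ignored; client always retried 3 times
  timeout_sec: 1900 # previously ignored; client always used 300s
```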

Fix

  1. config.py — Added max_retries and timeout_sec fields to the LlmConfig dataclass, and parse them in _parse_llm_config()
  2. client.py — LLMClient.from_rc_config() now forwards both values to LLMConfig

Verification

```python
from researchclaw.config import load_config
from researchclaw.llm.client import LLMClient

cfg = load_config('config.arc.yaml')  # max_retries: 8, timeout_sec: 1900
client = LLMClient.from_rc_config(cfg)
assert client.config.max_retries == 8     # previously: always 3
assert client.config.timeout_sec == 1900  # previously: always 300
```

LlmConfig dataclass and _parse_llm_config() did not read max_retries
or timeout_sec from the YAML config. LLMClient.from_rc_config() also
did not forward these values to LLMConfig. This caused the LLMClient
to always use the hardcoded default of 3 retries regardless of the
user's config.arc.yaml setting.

Fixes:
- Add max_retries and timeout_sec fields to LlmConfig dataclass
- Parse both values in _parse_llm_config() from YAML
- Forward both values in LLMClient.from_rc_config() to LLMConfig
@Jiaaqiliu
Collaborator

Hi @dennis-lynch-nv, thank you for the contribution! The config passthrough fix is a good idea, but we found two issues in the current diff:

  1. Debug print left in production code (line 469): print(f"[DEBUG] LLM call to {url} model={model} max_tokens={max_tokens}") — this would print on every LLM call
  2. Dead code: last_err = f"Timeout/OSError: {exc}" after a continue statement (unreachable)
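The second issue can be illustrated with a small sketch. This is a hypothetical retry loop, not the actual client.py code; it shows why an assignment placed after `continue` can never run:

```python
# Illustrative only: demonstrates the unreachable-code pattern the review flags.
def call_with_retries(fn, max_retries=3):
    last_err = None
    for _attempt in range(max_retries):
        try:
            return fn()
        except (TimeoutError, OSError) as exc:
            # The assignment must come BEFORE `continue` to take effect.
            last_err = f"Timeout/OSError: {exc}"
            continue
            # Any statement placed here would be unreachable dead code:
            # `continue` has already jumped to the next loop iteration.
    raise RuntimeError(last_err)
```

Moving the assignment above `continue` (as shown) preserves the error message for the final RuntimeError; leaving it after `continue` means `last_err` silently stays stale.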

We've merged #234 (your error tracking fix, which is the clean subset of this PR). For the config passthrough (max_retries/timeout_sec), please feel free to resubmit a clean PR with just that change — it's a valid improvement!

@Jiaaqiliu closed this Apr 23, 2026
