fix: ManagedAgent provider overload conflates hosted-runtime vs LLM-routing #1550
MervinPraison merged 3 commits into main from claude/issue-1549-20260424-1027
Conversation
…-routing: propose HostedAgent / LocalAgent split (fixes #1549)
- Create new HostedAgent class for hosted runtimes (Anthropic managed infrastructure)
- Create new LocalAgent class for local loops with optional cloud compute
- Update ManagedAgent factory with deprecation warnings for LLM routing overload
- Add proper error handling for compute provider misuse
- Maintain full backward compatibility with all existing imports
- Update examples to demonstrate new clear semantics
- Add comprehensive tests for new backend semantics

🤖 Generated with [Claude Code](https://claude.ai/code)
Co-authored-by: MervinPraison <MervinPraison@users.noreply.github.com>
…ent/LocalAgent classes
- Create new canonical examples demonstrating HostedAgent vs LocalAgent usage
- Update existing examples to use clear semantic distinctions
- Add runtime_hosted_anthropic.py for true managed runtime usage
- Add runtime_local_*.py examples for local execution with different LLMs
- Update all_providers.py to use LocalAgent with compute backends
- Update local_basic.py to use LocalAgent semantics
- Add comprehensive test suite for backend semantics

🤖 Generated with [Claude Code](https://claude.ai/code)
Co-authored-by: MervinPraison <MervinPraison@users.noreply.github.com>
@coderabbitai review
/review
✅ Actions performed: Review triggered.
Important: Review skipped. Bot user detected.
Configuration used: defaults | Review profile: CHILL | Plan: Pro
Walkthrough
This PR implements a major architectural refactoring to separate hosted managed runtimes from local agent loops by introducing HostedAgent and LocalAgent.
Changes
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
Possibly related issues
Possibly related PRs
Pre-merge checks | ✅ Passed checks (5 passed)
@copilot Do a thorough review of this PR. Read ALL existing reviewer comments above from Qodo, Coderabbit, and Gemini first and incorporate their findings. Review areas:
Greptile Summary
This PR introduces HostedAgent and LocalAgent to separate hosted-runtime semantics from LLM routing.
Confidence Score: 3/5
Not safe to merge: the new test suite contains assertions that contradict the implementation and will fail on first run. A P1 mismatch exists between the ManagedAgent docstring, the test expectations (ValueError for compute providers), and the actual code (DeprecationWarning + passthrough). test_managed_agent_compute_provider_errors will fail, breaking CI. The implementation logic itself is otherwise coherent.
tests/unit/integrations/test_backend_semantics.py (test_managed_agent_compute_provider_errors), src/praisonai/praisonai/integrations/managed_agents.py (ManagedAgent docstring)
Important Files Changed
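The mismatch described above can be reproduced in miniature. The sketch below is not the real factory: `factory` is a hypothetical stand-in mirroring the reported behavior (DeprecationWarning plus passthrough), against which a ValueError-expecting test necessarily fails.

```python
import warnings

def factory(provider):
    """Hypothetical stand-in: compute providers get a DeprecationWarning
    and a passthrough return, NOT the ValueError the test suite expects."""
    if provider in {"e2b", "modal", "flyio", "daytona", "docker"}:
        warnings.warn(f"provider='{provider}' is deprecated",
                      DeprecationWarning, stacklevel=2)
        return "LocalManagedAgent"
    return "AnthropicManagedAgent"

# What a ValueError-expecting test observes against this behavior:
raised = False
with warnings.catch_warnings():
    warnings.simplefilter("ignore", DeprecationWarning)
    try:
        factory("e2b")
    except ValueError:
        raised = True

# raised stays False, so an assert-raises-ValueError test fails here
```

This is exactly the docstring-vs-tests-vs-code drift the score calls out: either the implementation must raise, or the docstring and tests must accept the deprecation path.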
Flowchart
%%{init: {'theme': 'neutral'}}%%
flowchart TD
MA["ManagedAgent(provider=?)"] -->|"provider=None"| AD{"Auto-detect\nenv vars"}
AD -->|"ANTHROPIC_API_KEY set"| ANT["provider='anthropic'"]
AD -->|"no key"| LOC["provider='local'"]
MA -->|"provider='anthropic'"| ANT
    MA -->|"provider in {openai,gemini,ollama,local}"| WARN1["DeprecationWarning\n→ LocalManagedAgent(provider=...)"]
    MA -->|"provider in {e2b,modal,flyio,daytona,docker}"| WARN2["DeprecationWarning\n→ LocalManagedAgent(compute=provider)\n⚠ docstring claims ValueError"]
MA -->|"unknown provider"| ERR["ValueError"]
ANT --> AMA["AnthropicManagedAgent"]
LOC --> WARN1
HA["HostedAgent(provider='anthropic')"] -->|"provider != 'anthropic'"| ERR2["ValueError (with hint)"]
HA -->|"provider='anthropic'"| AMA
LA["LocalAgent(compute=?, config=?)"] -->|"provider= kwarg present"| DEPWARN["DeprecationWarning\n+ preserves provider for routing"]
LA --> LMA["LocalManagedAgent"]
DEPWARN --> LMA
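The routing in the flowchart can be sketched as a plain function. This is an illustrative stand-in, not the real factory: `route_managed_agent`, its `env` parameter, and its string return values are assumptions introduced here so the sketch is self-contained.

```python
import warnings

LLM_HINTS = {"openai", "gemini", "ollama", "local"}
COMPUTE_HINTS = {"e2b", "modal", "flyio", "daytona", "docker"}

def route_managed_agent(provider=None, env=None):
    """Illustrative stand-in for ManagedAgent(provider=...) routing.

    Returns the name of the class the factory would construct.
    """
    env = {} if env is None else env
    auto_detected = provider is None
    if auto_detected:
        has_key = env.get("ANTHROPIC_API_KEY") or env.get("CLAUDE_API_KEY")
        provider = "anthropic" if has_key else "local"
    if provider == "anthropic":
        return "AnthropicManagedAgent"
    if provider in LLM_HINTS:
        if not auto_detected:  # an auto-detected 'local' must not warn
            warnings.warn(
                f"ManagedAgent(provider='{provider}') is deprecated; "
                f"use LocalAgent(config=LocalAgentConfig(model=...))",
                DeprecationWarning, stacklevel=2)
        return "LocalManagedAgent"
    if provider in COMPUTE_HINTS:
        warnings.warn(
            f"ManagedAgent(provider='{provider}') is deprecated; "
            f"use LocalAgent(compute='{provider}')",
            DeprecationWarning, stacklevel=2)
        return "LocalManagedAgent"
    raise ValueError(f"Unknown provider: {provider!r}")
```

The `auto_detected` flag is what keeps the no-argument `ManagedAgent()` path from emitting a spurious deprecation warning, as the review below also points out.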
Actionable comments posted: 1
Caution
Some comments are outside the diff and can't be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/praisonai/praisonai/integrations/managed_agents.py (1)
1100-1137: ⚠️ Potential issue | Major
Auto-detect fallback triggers a spurious `DeprecationWarning` for callers that never passed `provider=`.

When `provider is None` and `ANTHROPIC_API_KEY`/`CLAUDE_API_KEY` are not set, `provider` is reassigned to `"local"` on line 1105 and then falls into the LLM-routing branch on line 1120, which emits a `DeprecationWarning` (with `stacklevel=2` pointing at the user). That warning blames the caller for a "deprecated" usage they didn't actually use: they called `ManagedAgent()` with no arguments.

This is a backward-compatibility regression for the most common implicit-construction path. Either:
- Remember that `provider` was auto-detected and skip the warning, or
- Route the auto-detected `"local"` case directly to `LocalManagedAgent` without passing through the deprecation branch.

🔧 Proposed fix
```diff
-    if provider is None:
-        # Auto-detect
-        if os.getenv("ANTHROPIC_API_KEY") or os.getenv("CLAUDE_API_KEY"):
-            provider = "anthropic"
-        else:
-            provider = "local"
+    auto_detected = provider is None
+    if auto_detected:
+        # Auto-detect
+        if os.getenv("ANTHROPIC_API_KEY") or os.getenv("CLAUDE_API_KEY"):
+            provider = "anthropic"
+        else:
+            provider = "local"
@@
     # LLM routing hints (deprecated usage)
     elif provider in {"openai", "gemini", "ollama", "local"}:
-        warnings.warn(
-            f"ManagedAgent(provider='{provider}') is deprecated. "
-            f"Use LocalAgent directly with model= instead: "
-            f"LocalAgent(config=LocalAgentConfig(model='gpt-4o-mini'))",
-            DeprecationWarning,
-            stacklevel=2
-        )
+        if not auto_detected:
+            warnings.warn(
+                f"ManagedAgent(provider='{provider}') is deprecated. "
+                f"Use LocalAgent directly with model= instead: "
+                f"LocalAgent(config=LocalAgentConfig(model='gpt-4o-mini'))",
+                DeprecationWarning,
+                stacklevel=2,
+            )
         from .managed_local import LocalManagedAgent
         return LocalManagedAgent(provider=provider, **kwargs)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/praisonai/praisonai/integrations/managed_agents.py` around lines 1100 - 1137, The auto-detect path sets provider="local" and then triggers the deprecation warning; change the logic to remember when provider was auto-detected (e.g., set a flag when provider is None before the ENV checks) and, if auto_detected and provider == "local", bypass the warnings.warn branch and directly import and return LocalManagedAgent(provider=provider, **kwargs) (the same return used in the LLM-routing branch), or alternatively skip issuing the DeprecationWarning when provider was originally None; update the code around the provider auto-detection and the LLM-routing branch in managed_agents.py so the flag is checked before calling warnings.warn and before returning LocalManagedAgent.
🧹 Nitpick comments (5)
src/praisonai/praisonai/integrations/hosted_agent.py (1)
31-33: Optional: `import os` appears unused in this module.
Only `AnthropicManagedAgent`/`ManagedConfig` and `Optional`/`Any` are referenced. If no future use is planned here, dropping `import os` would keep the module lean; harmless otherwise.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/praisonai/praisonai/integrations/hosted_agent.py` around lines 31 - 33, The module imports an unused symbol "os" which should be removed to keep the file lean; open hosted_agent.py and delete the unused import line "import os" (leaving the existing imports for Optional, Any and from .managed_agents import AnthropicManagedAgent, ManagedConfig intact), then run lint/type checks to confirm no usage remains.

tests/unit/integrations/test_backend_semantics.py (2)
173-177: Nit: use `is` for alias identity checks.
`HostedAgentConfig == ManagedConfig` and `LocalAgentConfig == LocalManagedConfig` work today because class `==` falls back to identity, but the intent here is "these two names refer to the same object." `is` expresses that directly and won't be affected if someone adds a custom `__eq__`/metaclass later.

♻️ Proposed refactor
```diff
-    # HostedAgentConfig should alias ManagedConfig
-    assert HostedAgentConfig == ManagedConfig
-
-    # LocalAgentConfig should alias LocalManagedConfig
-    assert LocalAgentConfig == LocalManagedConfig
+    # HostedAgentConfig should alias ManagedConfig
+    assert HostedAgentConfig is ManagedConfig
+
+    # LocalAgentConfig should alias LocalManagedConfig
+    assert LocalAgentConfig is LocalManagedConfig
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/integrations/test_backend_semantics.py` around lines 173 - 177, The assertions check identity but use equality; update the two assertions to use identity comparison: replace "HostedAgentConfig == ManagedConfig" with "HostedAgentConfig is ManagedConfig" and "LocalAgentConfig == LocalManagedConfig" with "LocalAgentConfig is LocalManagedConfig" so the test explicitly asserts the two names refer to the same object (use the symbols HostedAgentConfig, ManagedConfig, LocalAgentConfig, LocalManagedConfig to locate the assertions).
68-74: Assertion on exact warning count is fragile.
`warnings.simplefilter("always")` records every warning raised inside the block, not just the one from `LocalAgent.__init__`. If `LocalManagedAgent.__init__` (or any lazy import it triggers) ever emits its own `DeprecationWarning`/`UserWarning`, including from stdlib, this test will fail on `len(w) == 1`.

Prefer filtering by category or matching on content:

♻️ Proposed refactor
```diff
-    with warnings.catch_warnings(record=True) as w:
-        warnings.simplefilter("always")
-        LocalAgent(provider="openai", config=LocalAgentConfig(model="gpt-4o-mini"))
-        assert len(w) == 1
-        assert issubclass(w[0].category, DeprecationWarning)
-        assert "provider=" in str(w[0].message)
-        assert "config.model=" in str(w[0].message)
+    with warnings.catch_warnings(record=True) as w:
+        warnings.simplefilter("always")
+        LocalAgent(provider="openai", config=LocalAgentConfig(model="gpt-4o-mini"))
+        dep = [x for x in w
+               if issubclass(x.category, DeprecationWarning)
+               and "provider=" in str(x.message)]
+        assert len(dep) == 1
+        assert "config.model=" in str(dep[0].message)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/integrations/test_backend_semantics.py` around lines 68 - 74, The test currently asserts an exact warning count (len(w) == 1) which is fragile; change the assertions inside the warnings.catch_warnings(record=True) block to check that at least one recorded warning from LocalAgent(__init__) matches the expected category and message content instead of asserting len(w) == 1, e.g., replace the len(w) assertion with a predicate like any(issubclass(rec.category, DeprecationWarning) and "provider=" in str(rec.message) and "config.model=" in str(rec.message) for rec in w) so the test looks for a matching warning produced by LocalAgent/LocalAgentConfig rather than relying on exact count.

examples/python/managed-agents/provider/runtime_local_openai.py (1)
22-44: Nit: add expected-output comments to the example.
Per the examples guideline, canonical examples should show expected output in comments so readers can judge correctness without running the script. Consider annotating the `print(...)` lines with the approximate expected values (e.g. `# Result: Paris`, `# Usage: in=~120, out=~30`) so this doubles as runnable documentation.

As per coding guidelines: "Code examples must run without modification (copy-paste success), include necessary imports, use realistic but simple data, show expected output in comments, and be the shortest way to accomplish the task".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@examples/python/managed-agents/provider/runtime_local_openai.py` around lines 22 - 44, Annotate the example's print statements with inline expected-output comments (without changing runtime behavior) so readers can see canonical outputs: after the agent.start call for "What is the capital of France?" add a comment like "# Result: Paris"; after the second multi-turn agent.start add a comment like "# Result: France" or similar; annotate the metadata prints referencing local.agent_id, local.agent_version, local.environment_id, local.session_id with example UUID/version values (e.g. "# Agent ID: 123e4567..."); after the local.retrieve_session usage print add an approximate token-usage comment like "# Usage: in=~120, out=~30"; and after the tool execution agent.start add a comment like "# Tool result: Created hello.txt"; ensure all comments are inline (so code remains copy-paste runnable) and reference the existing symbols agent.start and local.retrieve_session/local.* when adding them.

src/praisonai/praisonai/integrations/managed_agents.py (1)
1078-1099: Docstring vs implementation drift for `provider="anthropic"`.
The docstring says `provider="anthropic" → HostedAgent(provider="anthropic", ...)` (line 1087), but the implementation returns `AnthropicManagedAgent(...)` directly (line 1109). That's semantically fine since `HostedAgent` subclasses `AnthropicManagedAgent`, but the docstring will mislead maintainers who try to trace the canonical path. Either route through `HostedAgent` or update the docstring to say `AnthropicManagedAgent`.
Also applies to: 1107-1109
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/praisonai/praisonai/integrations/managed_agents.py` around lines 1078 - 1099, The docstring and implementation diverge for provider="anthropic": the docstring says to return HostedAgent(provider="anthropic", ...) but the code constructs AnthropicManagedAgent(...) directly; update them to be consistent by either 1) changing the factory implementation to instantiate/return HostedAgent(provider="anthropic", ...) so the canonical path goes through HostedAgent, or 2) update the docstring to state that the factory returns AnthropicManagedAgent for provider="anthropic" (and keep the existing behavior). Reference the symbols provider="anthropic", HostedAgent, and AnthropicManagedAgent when making the change so maintainers can find and reconcile the factory and its docstring.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/praisonai/praisonai/integrations/hosted_agent.py`:
- Around line 66-72: The ValueError in hosted_agent.py currently always suggests
using LocalAgent(compute='{provider}', config=LocalAgentConfig(...)) which is
misleading for LLM-hint providers; update the error message in the branch that
checks provider (the block raising ValueError) to detect whether provider is a
compute backend (e.g., 'e2b','modal','flyio','daytona','docker') or an LLM hint
(e.g., 'openai','gemini','ollama') and provide the appropriate guidance: for
compute backends suggest LocalAgent(compute='...'),
config=LocalAgentConfig(...); for LLM hints suggest setting the model via
LocalAgentConfig(model='...') (or show both patterns in a single message), and
reference ManagedAgent/LocalAgent/LocalAgentConfig in the text so callers can
find the correct usage.
---
Outside diff comments:
In `@src/praisonai/praisonai/integrations/managed_agents.py`:
- Around line 1100-1137: The auto-detect path sets provider="local" and then
triggers the deprecation warning; change the logic to remember when provider was
auto-detected (e.g., set a flag when provider is None before the ENV checks)
and, if auto_detected and provider == "local", bypass the warnings.warn branch
and directly import and return LocalManagedAgent(provider=provider, **kwargs)
(the same return used in the LLM-routing branch), or alternatively skip issuing
the DeprecationWarning when provider was originally None; update the code around
the provider auto-detection and the LLM-routing branch in managed_agents.py so
the flag is checked before calling warnings.warn and before returning
LocalManagedAgent.
---
Nitpick comments:
In `@examples/python/managed-agents/provider/runtime_local_openai.py`:
- Around line 22-44: Annotate the example's print statements with inline
expected-output comments (without changing runtime behavior) so readers can see
canonical outputs: after the agent.start call for "What is the capital of
France?" add a comment like "# Result: Paris"; after the second multi-turn
agent.start add a comment like "# Result: France" or similar; annotate the
metadata prints referencing local.agent_id, local.agent_version,
local.environment_id, local.session_id with example UUID/version values (e.g. "#
Agent ID: 123e4567..."); after the local.retrieve_session usage print add an
approximate token-usage comment like "# Usage: in=~120, out=~30"; and after the
tool execution agent.start add a comment like "# Tool result: Created hello.txt"
; ensure all comments are inline (so code remains copy-paste runnable) and
reference the existing symbols agent.start and local.retrieve_session/local.*
when adding them.
In `@src/praisonai/praisonai/integrations/hosted_agent.py`:
- Around line 31-33: The module imports an unused symbol "os" which should be
removed to keep the file lean; open hosted_agent.py and delete the unused import
line "import os" (leaving the existing imports for Optional, Any and from
.managed_agents import AnthropicManagedAgent, ManagedConfig intact), then run
lint/type checks to confirm no usage remains.
In `@src/praisonai/praisonai/integrations/managed_agents.py`:
- Around line 1078-1099: The docstring and implementation diverge for
provider="anthropic": the docstring says to return
HostedAgent(provider="anthropic", ...) but the code constructs
AnthropicManagedAgent(...) directly; update them to be consistent by either 1)
changing the factory implementation to instantiate/return
HostedAgent(provider="anthropic", ...) so the canonical path goes through
HostedAgent, or 2) update the docstring to state that the factory returns
AnthropicManagedAgent for provider="anthropic" (and keep the existing behavior).
Reference the symbols provider="anthropic", HostedAgent, and
AnthropicManagedAgent when making the change so maintainers can find and
reconcile the factory and its docstring.
In `@tests/unit/integrations/test_backend_semantics.py`:
- Around line 173-177: The assertions check identity but use equality; update
the two assertions to use identity comparison: replace "HostedAgentConfig ==
ManagedConfig" with "HostedAgentConfig is ManagedConfig" and "LocalAgentConfig
== LocalManagedConfig" with "LocalAgentConfig is LocalManagedConfig" so the test
explicitly asserts the two names refer to the same object (use the symbols
HostedAgentConfig, ManagedConfig, LocalAgentConfig, LocalManagedConfig to locate
the assertions).
- Around line 68-74: The test currently asserts an exact warning count (len(w)
== 1) which is fragile; change the assertions inside the
warnings.catch_warnings(record=True) block to check that at least one recorded
warning from LocalAgent(__init__) matches the expected category and message
content instead of asserting len(w) == 1, e.g., replace the len(w) assertion
with a predicate like any(issubclass(rec.category, DeprecationWarning) and
"provider=" in str(rec.message) and "config.model=" in str(rec.message) for rec
in w) so the test looks for a matching warning produced by
LocalAgent/LocalAgentConfig rather than relying on exact count.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 2d73841b-c648-495a-98c0-6a1566a4fece
Files selected for processing (12)
- examples/python/managed-agents/provider/all_providers.py
- examples/python/managed-agents/provider/local_basic.py
- examples/python/managed-agents/provider/runtime_hosted_anthropic.py
- examples/python/managed-agents/provider/runtime_local_gemini.py
- examples/python/managed-agents/provider/runtime_local_ollama.py
- examples/python/managed-agents/provider/runtime_local_openai.py
- src/praisonai/praisonai/__init__.py
- src/praisonai/praisonai/integrations/__init__.py
- src/praisonai/praisonai/integrations/hosted_agent.py
- src/praisonai/praisonai/integrations/local_agent.py
- src/praisonai/praisonai/integrations/managed_agents.py
- tests/unit/integrations/test_backend_semantics.py
```python
        if provider != "anthropic":
            raise ValueError(
                f"Managed runtime for provider '{provider}' is not yet available. "
                f"Currently supported: 'anthropic'. "
                f"For local execution with cloud compute, use: "
                f"LocalAgent(compute='{provider}', config=LocalAgentConfig(...))"
            )
```
Misleading guidance for LLM-hint providers in the ValueError.
The error unconditionally suggests `LocalAgent(compute='{provider}', config=LocalAgentConfig(...))`. That guidance only makes sense when provider is a compute provider (e2b/modal/flyio/daytona/docker). For LLM hints such as openai/gemini/ollama (the exact legacy values the ManagedAgent factory now re-routes), `compute='openai'` is invalid; users should set the model instead via `LocalAgentConfig(model='gpt-4o-mini')` (or `'gemini/...'`, `'ollama/...'`). Consider differentiating the message (or pointing to both patterns) so the suggestion doesn't send LLM-hint callers down a dead end.
🛠️ Suggested message refinement
if provider != "anthropic":
+ _llm_hints = {"openai", "gemini", "ollama", "local"}
+ _compute_hints = {"e2b", "modal", "flyio", "daytona", "docker"}
+ if provider in _llm_hints:
+ hint = (
+ f"For local agent loops with this LLM, use: "
+ f"LocalAgent(config=LocalAgentConfig(model='...')) "
+ f"(e.g. 'gpt-4o-mini', 'gemini/gemini-2.0-flash', 'ollama/llama3')."
+ )
+ elif provider in _compute_hints:
+ hint = (
+ f"For local execution with cloud compute, use: "
+ f"LocalAgent(compute='{provider}', config=LocalAgentConfig(...))"
+ )
+ else:
+ hint = (
+ "Use LocalAgent(config=LocalAgentConfig(model='...')) for local loops, "
+ "or LocalAgent(compute='e2b'|'modal'|'docker'|...) for cloud-sandboxed tools."
+ )
raise ValueError(
f"Managed runtime for provider '{provider}' is not yet available. "
- f"Currently supported: 'anthropic'. "
- f"For local execution with cloud compute, use: "
- f"LocalAgent(compute='{provider}', config=LocalAgentConfig(...))"
+ f"Currently supported: 'anthropic'. {hint}"
    )

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/praisonai/praisonai/integrations/hosted_agent.py` around lines 66 - 72,
The ValueError in hosted_agent.py currently always suggests using
LocalAgent(compute='{provider}', config=LocalAgentConfig(...)) which is
misleading for LLM-hint providers; update the error message in the branch that
checks provider (the block raising ValueError) to detect whether provider is a
compute backend (e.g., 'e2b','modal','flyio','daytona','docker') or an LLM hint
(e.g., 'openai','gemini','ollama') and provide the appropriate guidance: for
compute backends suggest LocalAgent(compute='...'),
config=LocalAgentConfig(...); for LLM hints suggest setting the model via
LocalAgentConfig(model='...') (or show both patterns in a single message), and
reference ManagedAgent/LocalAgent/LocalAgentConfig in the text so callers can
find the correct usage.
@claude You are the FINAL architecture reviewer. If the branch is under MervinPraison/PraisonAI (not a fork), you are able to make modifications to this branch and push directly. SCOPE: Focus ONLY on Python packages (praisonaiagents, praisonai). Do NOT modify praisonai-rust or praisonai-ts. Read ALL comments above from Gemini, Qodo, CodeRabbit, and Copilot carefully before responding.
Phase 1: Review per AGENTS.md
Phase 2: FIX Valid Issues
Phase 3: Final Verdict
Claude finished @MervinPraison's task. View job
Final Architecture Review & Issue Resolution ✅
Comprehensive review completed per AGENTS.md with all critical issues resolved.
Todo List
Phase 1: Architecture Review per AGENTS.md
✅ Protocol-Driven Core: Changes correctly follow the protocol-driven pattern with HostedAgent/LocalAgent in the wrapper layer as thin wrappers over existing core SDK classes.
✅ Backward Compatibility: Factory approach preserves all existing imports with proper deprecation cycle.
✅ Performance: No hot-path regressions; new classes are minimal overhead wrappers.
Phase 2: Critical P1 Issues - All Fixed ✅
Issues Addressed
✅ P1 Issue 1: ManagedAgent Compute Provider Breaking Change
✅ P1 Issue 2: LocalAgent Silent LLM Routing Loss
✅ P1 Issue 3: Spurious Deprecation Warnings
Additional Improvements
Phase 3: Final Verdict - APPROVE ✅
Files Modified
Validation Results
✅ Auto-detection works without spurious warnings
This PR successfully implements the HostedAgent/LocalAgent architectural split while maintaining full backward compatibility. All critical issues identified by previous reviewers have been resolved.
Recommendation: MERGE
The PR now provides the semantic clarity of HostedAgent vs LocalAgent while preserving all existing behavior. The architectural split is sound and follows AGENTS.md principles correctly.
Files committed: 74c4cecc | Branch: claude/issue-1549-20260424-1027
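The test-fragility fix claimed above follows a general pattern: filter recorded warnings by category and message instead of asserting an exact count. A stdlib-only sketch, where `emit` is a hypothetical stand-in for `LocalAgent.__init__` when the deprecated `provider=` is passed:

```python
import warnings

def emit(provider):
    """Stand-in for LocalAgent.__init__ receiving the deprecated provider= kwarg."""
    warnings.warn(
        f"provider='{provider}' is deprecated; set config.model= instead",
        DeprecationWarning, stacklevel=2)

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    emit("openai")
    warnings.warn("unrelated noise", UserWarning)  # extra warnings must not break the check

# Filter by category and content rather than asserting len(caught) == 1:
dep = [w for w in caught
       if issubclass(w.category, DeprecationWarning) and "provider=" in str(w.message)]
assert len(dep) == 1
assert "config.model=" in str(dep[0].message)
```

The filtered assertion stays green even if a lazy import or a stdlib module emits its own warning inside the recording block.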
- Fix ManagedAgent compute provider routing: maintain backward compatibility for e2b/modal/etc. by routing to LocalManagedAgent with deprecation warning instead of ValueError (fixes hard breaking change)
- Fix LocalAgent provider preservation: preserve provider value for LLM routing to maintain _resolve_model() prefix behavior (ollama/, gemini/)
- Fix spurious deprecation warnings: auto-detected provider=local no longer triggers warnings for users who never passed provider=
- Remove unused os import from hosted_agent.py
- Improve error messages with provider-specific guidance
- Fix test fragility with exact warning counts and identity checks

Addresses all P1 issues identified by Greptile and CodeRabbit reviewers.

Co-authored-by: Mervin Praison <MervinPraison@users.noreply.github.com>
Fixes #1549
Creates new HostedAgent and LocalAgent classes to replace the overloaded ManagedAgent provider pattern.
Summary
Changes
Generated with Claude Code
Summary by CodeRabbit
Release Notes
New Features
- `HostedAgent` for Anthropic-managed cloud execution
- `LocalAgent` for local-only agent execution

Deprecation
- `ManagedAgent` is deprecated; use `HostedAgent` for hosted execution or `LocalAgent` for local execution
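The release-note semantics can be illustrated end to end with stub classes. The real praisonai signatures are not reproduced here, so treat every name and parameter below as an assumption; this is a sketch of the split, not the library's implementation.

```python
import warnings

class HostedAgent:
    """Stub: Anthropic-managed cloud execution."""
    def __init__(self, provider="anthropic"):
        if provider != "anthropic":
            raise ValueError("Managed runtime currently supported: 'anthropic'")
        self.provider = provider

class LocalAgent:
    """Stub: local agent loop; compute= optionally sandboxes tools in the cloud."""
    def __init__(self, compute=None, model="gpt-4o-mini"):
        self.compute = compute
        self.model = model

def ManagedAgent(provider="anthropic", **kwargs):
    """Stub of the deprecated facade routing to the two explicit classes."""
    warnings.warn("ManagedAgent is deprecated; use HostedAgent or LocalAgent",
                  DeprecationWarning, stacklevel=2)
    if provider == "anthropic":
        return HostedAgent(provider)
    return LocalAgent(**kwargs)

hosted = HostedAgent()             # hosted runtime (managed infrastructure)
local = LocalAgent(compute="e2b")  # local loop with cloud-sandboxed tools
```

Keeping the facade as a thin, warning-emitting router is what lets existing `ManagedAgent` imports keep working through the deprecation cycle.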