Conversation

Important: Review skipped. Auto incremental reviews are disabled on this repository. Please check the settings in the CodeRabbit UI or the ⚙️ Run configuration. Use the checkbox below for a quick retry.
Walkthrough

This PR implements a comprehensively refactored OpenFeature provider architecture for Superposition across Rust, Java, and Python, introducing pluggable data sources (HTTP/file-based), configurable refresh strategies (polling/watch/on-demand/manual), FFI-based local evaluation with caching, and both local and remote provider implementations.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant LocalProvider as LocalResolutionProvider
    participant DataSrc as Data Source<br/>(HTTP/File)
    participant FFICache as FFI Provider Cache
    participant SDKClient as Superposition SDK
    Client->>LocalProvider: initialize(context)
    LocalProvider->>DataSrc: fetch_config()
    DataSrc->>SDKClient: get_config()
    SDKClient-->>DataSrc: Config response
    DataSrc-->>LocalProvider: FetchResponse[ConfigData]
    LocalProvider->>FFICache: update with config
    LocalProvider->>LocalProvider: start refresh_strategy
    LocalProvider-->>Client: ready
    Client->>LocalProvider: resolve_boolean_details(flag_key, context)
    LocalProvider->>LocalProvider: merge contexts
    LocalProvider->>LocalProvider: convert context to FFI format
    LocalProvider->>FFICache: evaluate with query_data
    FFICache-->>LocalProvider: resolved value
    LocalProvider->>LocalProvider: convert FFI result to Python
    LocalProvider-->>Client: FlagResolutionDetails
    rect rgba(100, 200, 150, 0.5)
        Note over LocalProvider,DataSrc: Refresh Strategy (e.g., Polling/Watch)
        LocalProvider->>LocalProvider: refresh_strategy trigger
        LocalProvider->>DataSrc: fetch_config(if_modified_since)
        alt Config Updated
            DataSrc-->>LocalProvider: FetchResponse[ConfigData]
            LocalProvider->>FFICache: atomic cache update
        else 304 Not Modified
            DataSrc-->>LocalProvider: FetchResponse.not_modified()
            LocalProvider->>LocalProvider: keep last-known-good
        end
    end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 1 | ❌ 2

❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (1 passed)

✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Pull request overview
This PR updates the provider spec to support a refactored Python OpenFeature provider architecture (local-cached + remote-eval providers), while also aligning Rust provider/bindings with a ConfigData shape change and adding an FFI API for applicable-variant resolution.
Changes:
- Refactors Python provider into modular data sources + providers + shared interfaces/types, and adds extensive documentation/examples.
- Updates Rust provider to fetch config and experiments concurrently during refresh and renames `ConfigData.config` → `ConfigData.data`.
- Adds `ProviderCache.get_applicable_variants` to Rust UniFFI and updates generated bindings (Python + Java).
Reviewed changes
Copilot reviewed 23 out of 23 changed files in this pull request and generated 21 comments.
Show a summary per file
| File | Description |
|---|---|
| crates/superposition_provider/src/local_provider.rs | Concurrent refresh via tokio::join! and updated ConfigData field access. |
| crates/superposition_provider/src/data_source/http.rs | Constructs ConfigData using renamed data field. |
| crates/superposition_provider/src/data_source/file.rs | Constructs/filters ConfigData using renamed data field. |
| crates/superposition_provider/src/data_source.rs | Renames ConfigData.config → ConfigData.data and updates Display. |
| crates/superposition_core/src/ffi.rs | Adds UniFFI-exposed ProviderCache.get_applicable_variants. |
| clients/python/provider/superposition_provider/types.py | Introduces expanded refresh strategy and provider options types for Python. |
| clients/python/provider/superposition_provider/remote_provider.py | Adds Python remote provider that evaluates via API per request. |
| clients/python/provider/superposition_provider/local_provider.py | Adds Python local cached provider using FFI ProviderCache. |
| clients/python/provider/superposition_provider/interfaces.py | Adds Python “trait-like” interfaces for bulk resolution and variant meta. |
| clients/python/provider/superposition_provider/http_data_source.py | Adds Python HTTP data source using the SDK client. |
| clients/python/provider/superposition_provider/file_data_source.py | Adds Python file data source with parsing + watch support. |
| clients/python/provider/superposition_provider/exp_config.py | Refactors legacy experimentation config to use shared conversions helpers. |
| clients/python/provider/superposition_provider/data_source.py | Adds Python data-source abstraction and FetchResponse/ConfigData wrappers. |
| clients/python/provider/superposition_provider/conversions.py | Centralizes SDK/Document → Python/FFI conversion utilities. |
| clients/python/provider/superposition_provider/cac_config.py | Updates legacy CAC config to reuse shared conversion helpers. |
| clients/python/provider/superposition_provider/builder.py | Adds builder + convenience constructors for provider initialization. |
| clients/python/provider/superposition_provider/init.py | Adds package exports for the refactored Python provider. |
| clients/python/provider/README_REFACTORED.md | Adds refactor overview, migration guide, and usage examples. |
| clients/python/provider/PROVIDER_ARCHITECTURE.md | Adds deeper architecture documentation aligned to the OpenFeature guide. |
| clients/python/provider/examples.py | Adds runnable examples for local/remote providers and strategies. |
| clients/python/bindings/superposition_bindings/superposition_client.py | Regenerates Python UniFFI bindings to include get_applicable_variants. |
| clients/OPENFEATURE_PROVIDER_IMPLEMENTATION.md | Adds a cross-language OpenFeature provider implementation guide. |
| clients/java/bindings/src/main/kotlin/uniffi/superposition_client/superposition_client.kt | Regenerates Java UniFFI bindings to include getApplicableVariants. |
```python
__all__ = [
    # Data sources
    "SuperpositionDataSource",
    "FetchResponse",
    "ConfigData",
    "ExperimentData",
    "FileDataSource",
    "HttpDataSource",
    # Traits
    "AllFeatureProvider",
    "FeatureExperimentMeta",
    # Providers
    "LocalResolutionProvider",
    "SuperpositionAPIProvider",
    "RefreshStrategy",
    # Types
    "SuperpositionOptions",
    "EvaluationCacheOptions",
    "PollingStrategy",
    "OnDemandStrategy",
    "WatchStrategy",
    "ManualStrategy",
    "RefreshStrategyType",
    "ConfigurationOptions",
    "ExperimentationOptions",
    "LocalProviderOptions",
    "RemoteProviderOptions",
    "SuperpositionProviderOptions",
]
```
Docs/examples expect create_local_provider, create_remote_provider, ProviderBuilder, etc. to be importable from superposition_provider, but __init__.py doesn’t re-export anything from builder.py. Re-export these symbols (and add them to __all__) or update docs to import from superposition_provider.builder.
```python
from .local_provider import LocalResolutionProvider, RefreshStrategy
from .remote_provider import SuperpositionAPIProvider
from .file_data_source import FileDataSource
from .http_data_source import HttpDataSource
from .types import (
    SuperpositionOptions,
    PollingStrategy,
```
builder.py imports RefreshStrategy from .local_provider, but that type lives in types.py (and isn’t exported by local_provider.py). This will fail at import time; import RefreshStrategy from .types instead.
## New File Structure

```
superposition_provider/
├── __init__.py              # Package exports
├── traits.py                # AllFeatureProvider, FeatureExperimentMeta
├── data_source.py           # SuperpositionDataSource interface
├── file_data_source.py      # Local file-based implementation
├── http_data_source.py      # HTTP API implementation
├── local_provider.py        # LocalResolutionProvider (caching + refresh)
├── remote_provider.py       # SuperpositionAPIProvider (direct remote)
├── builder.py               # Fluent provider builder
├── conversions.py           # Type conversion utilities
├── types.py                 # Configuration types
├── provider.py              # Old provider (for compatibility)
├── configuration_client.py  # Old config client (legacy)
├── cac_config.py            # Old CAC config (legacy)
└── exp_config.py            # Old exp config (legacy)
```
The refactored docs reference traits.py (file structure section), but the implementation adds interfaces.py and imports that elsewhere. Align the documentation and module naming to avoid confusion and broken import instructions.
```python
await self._ensure_fresh_data()

if not self.ffi_cache or not self.cached_config:
    raise RuntimeError("Provider not properly initialized")

# Merge contexts
targeting_key, query_data = self._merge_contexts(context)

if targeting_key is None:
    return []

try:
    # Use FFI for local evaluation
    return self.ffi_cache.get_applicable_variants(
        query_data,
        prefix_filter,
        targeting_key,
    )
```
get_applicable_variants() calls self.ffi_cache.get_applicable_variants(...) even when no experiment config has been loaded into the FFI cache (e.g., initial experiment fetch failed, or experiments are disabled). The Rust FFI method returns an error when experiments aren’t initialized, so this should short-circuit to [] when self.cached_experiments is None (or when experiments are disabled) to match the Rust provider behavior.
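A minimal sketch of the suggested short-circuit, using simplified stand-ins for the provider and the FFI cache (the real classes live in local_provider.py and the Rust FFI; names here are illustrative):

```python
from typing import Any, List, Optional

class LocalResolutionProvider:
    """Stripped-down stand-in for the real provider (sketch only)."""

    def __init__(self) -> None:
        self.ffi_cache: Optional[Any] = object()       # stand-in for the FFI ProviderCache
        self.cached_experiments: Optional[Any] = None  # experiments never loaded

    def get_applicable_variants(
        self, query_data: dict, prefix_filter: Optional[str], targeting_key: str
    ) -> List[str]:
        # Short-circuit: the Rust FFI errors when experiments were never
        # initialized, so return [] to mirror the Rust provider's behavior.
        if self.cached_experiments is None:
            return []
        return self.ffi_cache.get_applicable_variants(
            query_data, prefix_filter, targeting_key
        )

provider = LocalResolutionProvider()
assert provider.get_applicable_variants({}, None, "user-1") == []
```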
```python
try:
    now = datetime.now(timezone.utc)
    # Read and parse file
    with open(self.config_path, 'r') as f:
        content = f.read()

    config = _parse_config_file(content, self.config_path)
```
fetch_config() calls open(self.config_path, ...) but config_path is declared as Optional[str] and may be None (default). Add an explicit check and raise a clear error if config_path isn’t provided to avoid a confusing TypeError at runtime.
Suggested change:

```python
if self.config_path is None:
    logger.error("FileDataSource.fetch_config called but config_path was not provided.")
    raise ValueError("config_path must be set to fetch configuration from file.")
try:
    now = datetime.now(timezone.utc)
    config_path = self.config_path
    # Read and parse file
    with open(config_path, 'r') as f:
        content = f.read()
    config = _parse_config_file(content, config_path)
```
```python
def __init__(
    self,
    primary_source: SuperpositionDataSource,
    fallback_source: Optional[SuperpositionDataSource] = None,
    refresh_strategy: RefreshStrategy = default_on_demand_strategy(),
):
```
refresh_strategy uses a constructed dataclass instance as a default argument (default_on_demand_strategy()). Because default arguments are evaluated at import time, this instance can be unintentionally shared across provider instances if mutated; prefer refresh_strategy: Optional[RefreshStrategy] = None and assign the default inside __init__.
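The safer pattern can be shown with a self-contained sketch (the strategy and provider names mirror the snippet above, but the fields are simplified stand-ins):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class OnDemandStrategy:
    ttl_seconds: int = 60

class LocalResolutionProvider:
    def __init__(self, refresh_strategy: Optional[OnDemandStrategy] = None):
        # The default is created per call, not once at definition time, so
        # two providers never share (and accidentally mutate) one instance.
        self.refresh_strategy = (
            refresh_strategy if refresh_strategy is not None else OnDemandStrategy()
        )

p1, p2 = LocalResolutionProvider(), LocalResolutionProvider()
assert p1.refresh_strategy is not p2.refresh_strategy
```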
Actionable comments posted: 13
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
clients/python/provider/superposition_provider/exp_config.py (1)

227-228: ⚠️ Potential issue | 🔴 Critical

Bug: Missing `return` statement. The method retrieves from cache but doesn't return the value, causing it to always return `None`.

Proposed fix:

```diff
 def _get_from_eval_cache(self, key: str) -> Optional[Any]:
-    self.evaluation_cache.get(key)
+    return self.evaluation_cache.get(key)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@clients/python/provider/superposition_provider/exp_config.py` around lines 227-228: the _get_from_eval_cache method calls self.evaluation_cache.get(key) but does not return its result; change the method so it returns the fetched value (i.e., add a return of self.evaluation_cache.get(key)) so callers of _get_from_eval_cache receive the cached value, keeping the signature `def _get_from_eval_cache(self, key: str) -> Optional[Any]:` unchanged.

clients/python/provider/superposition_provider/cac_config.py (2)

318-319: ⚠️ Potential issue | 🔴 Critical

Bug: Missing `return` statement. Same issue as in exp_config.py: the method doesn't return the retrieved value.

Proposed fix:

```diff
 def _get_from_eval_cache(self, key: str) -> Optional[Any]:
-    self.evaluation_cache.get(key)
+    return self.evaluation_cache.get(key)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@clients/python/provider/superposition_provider/cac_config.py` around lines 318-319: the method _get_from_eval_cache is missing a return, so it never returns the cached value; change the body to return self.evaluation_cache.get(key) so callers receive the cached result.

34-69: 🛠️ Refactor suggestion | 🟠 Major

Duplicate code: `document_to_python_value` exists in conversions.py. This function is duplicated from conversions.py (lines 318-353). Import it instead to maintain a single source of truth.

Proposed fix:

```diff
-from smithy_core.documents import Document
-from smithy_core.shapes import ShapeType
+from .conversions import to_dimension_type, document_to_python_value
-
-def document_to_python_value(doc: Document) -> Any:
-    """Recursively unwrap smithy_core.Document into plain Python values."""
-    if doc.is_none():
-        return None
-
-    match doc.shape_type:
-        case ShapeType.BOOLEAN:
-            return doc.as_boolean()
-        # ... rest of the function
```

Update line 7 to also import `document_to_python_value`:

```python
from .conversions import to_dimension_type, document_to_python_value
```

And remove lines 19-20 (the now-unnecessary imports) and lines 34-69 (the duplicated function).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@clients/python/provider/superposition_provider/cac_config.py` around lines 34-69: the function document_to_python_value is duplicated here; remove the local implementation and import the canonical version from conversions.py instead: add document_to_python_value to the existing import from .conversions (alongside to_dimension_type) and delete the duplicate function definition in cac_config.py, also removing any now-unused imports (e.g., Decimal if only used by the removed code) so cac_config.py uses the single source-of-truth document_to_python_value from conversions.py.
🧹 Nitpick comments (4)
clients/python/provider/superposition_provider/conversions.py (2)

363-381: Incorrect type annotations for enum conversion functions.

Both `to_group_type` and `to_experiment_status_type` have parameter type `str` but pattern-match against SDK enum values. The type annotations should match the actual expected types.

Proposed fix:

```diff
-def to_group_type(sdk_group_type: str) -> GroupType:
+def to_group_type(sdk_group_type: SDKGroupType) -> GroupType:
     match sdk_group_type:
         case SDKGroupType.USER_CREATED:
             return GroupType.USER_CREATED
         case _:
             return GroupType.SYSTEM_GENERATED

-def to_experiment_status_type(sdk_status_type: str) -> ExperimentStatusType:
+def to_experiment_status_type(sdk_status_type: SDKExperimentStatusType) -> ExperimentStatusType:
     match sdk_status_type:
         case SDKExperimentStatusType.CREATED:
             return ExperimentStatusType.CREATED
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@clients/python/provider/superposition_provider/conversions.py` around lines 363-381: Both conversion functions declare their input as str but actually pattern-match on SDK enums; update the parameter annotations to SDKGroupType for to_group_type and SDKExperimentStatusType for to_experiment_status_type, and ensure the functions return the corresponding target enums (GroupType and ExperimentStatusType) by returning GroupType.USER_CREATED (not SDKGroupType.USER_CREATED) and the appropriate ExperimentStatusType members in each match arm; adjust any imports if necessary to reference SDKGroupType and SDKExperimentStatusType.
80-152: Remove commented-out code.

This large block of commented code (`document_to_value`) appears to be superseded by `document_to_python_value` at the bottom of the file. Remove it to improve maintainability.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@clients/python/provider/superposition_provider/conversions.py` around lines 80-152: Delete the large commented-out document_to_value block entirely; it is redundant with the active function document_to_python_value. Remove all commented lines from the start of the def comment through its final fallback return so only document_to_python_value remains, then run linters/tests to ensure no references to document_to_value remain (if any, replace them to call document_to_python_value).

clients/python/provider/superposition_provider/exp_config.py (1)
19-21: Reconsider global logger level manipulation.

This code sets all loggers not starting with "python" to INFO level, which could suppress useful debug output from dependencies. Consider removing this or making it configurable.

```diff
-for name in logging.root.manager.loggerDict:
-    if not name.startswith("python"):  # e.g., "my_project"
-        logging.getLogger(name).setLevel(logging.INFO)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@clients/python/provider/superposition_provider/exp_config.py` around lines 19-21: The global logger-level manipulation iterates logging.root.manager.loggerDict and forces non-"python"-prefixed loggers to INFO via logging.getLogger(...).setLevel(logging.INFO); remove this unconditional loop or make it configurable: provide a flag (e.g., an env var or a config option like set_default_logger_level) that, when true, applies the level change, or instead only target module-specific loggers (e.g., this project's package names) by using explicit logger names; update the code to consult that flag or a whitelist before calling setLevel so dependencies' debug logs aren't suppressed.

clients/python/provider/superposition_provider/types.py (1)
62-71: Consider documenting the default strategy parameters.

The `WatchStrategy` dataclass defaults `debounce_ms` to 1000, but `default_watch_strategy()` creates one with 500. This inconsistency could confuse users about what the "default" actually is.

Consider either:
- Aligning the values
- Adding docstrings to the factory functions explaining the chosen defaults
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@clients/python/provider/superposition_provider/types.py` around lines 62 - 71, The WatchStrategy dataclass defines debounce_ms default as 1000 but default_watch_strategy() constructs WatchStrategy(500), causing an inconsistency; update them to match (choose one value and set both debounce_ms default in the WatchStrategy dataclass and the value used in default_watch_strategy()), or add a docstring to default_watch_strategy() explaining why it returns 500 while the dataclass default is 1000 so users understand the intended “default” behavior; refer to WatchStrategy, its debounce_ms field, and default_watch_strategy when making the change.
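If the values are aligned, the single-source-of-truth option looks roughly like this sketch (500 is chosen arbitrarily here; the real default is the maintainers' call):

```python
from dataclasses import dataclass

@dataclass
class WatchStrategy:
    debounce_ms: int = 500  # single source of truth (value illustrative)

def default_watch_strategy() -> WatchStrategy:
    # Reuse the dataclass default rather than hard-coding a second number.
    return WatchStrategy()

assert default_watch_strategy().debounce_ms == WatchStrategy().debounce_ms
```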
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@clients/OPENFEATURE_PROVIDER_IMPLEMENTATION.md`:
- Around line 429-431: Update the diagram and text to reflect the renamed field
on the ConfigData wrapper: replace references to ConfigData {config: Config,
fetchedAt: LocalDateTime} with the current shape that uses the data property
(e.g., ConfigData {data: Config, fetchedAt: LocalDateTime}), and ensure any
examples or prose match the runtime access patterns shown in the provider code
(such as data.data.* and assignments like c.data = ...), so new SDKs use
ConfigData.data rather than a non-existent config field.
In `@clients/python/provider/superposition_provider/__init__.py`:
- Around line 13-43: The package entrypoint is missing exports for the
builder/factory API used by docs/examples; import and re-export ProviderBuilder
and the factory functions referenced in README and examples (ProviderBuilder,
create_local_provider, create_remote_provider, create_local_provider_with_files)
from their defining modules (e.g., the module that defines ProviderBuilder and
create_* functions) in __init__.py so that from superposition_provider import
ProviderBuilder/create_local_provider/create_remote_provider/create_local_provider_with_files
works; ensure you add these imports alongside the existing re-exports (also
apply the same change to the later export block referenced in lines 45-73).
In `@clients/python/provider/superposition_provider/builder.py`:
- Around line 128-156: The _build_local_provider function is constructing
HttpDataSource and LocalResolutionProvider with incorrect parameters; change the
HttpDataSource instantiation to pass the single options object (e.g.,
HttpDataSource(options)) instead of separate endpoint/token/org_id/workspace_id,
and call LocalResolutionProvider with only the supported arguments
primary_source, fallback_source, and refresh_strategy (remove refresh_interval
and on_demand_ttl), ensuring fallback_source is still created via FileDataSource
when self.config_file_path or self.experiment_file_path is set; update any local
variable names (primary_source, fallback_source, refresh_strategy) accordingly
so the returned LocalResolutionProvider uses the correct constructor signature.
In `@clients/python/provider/superposition_provider/cac_config.py`:
- Line 7: The module currently imports to_dimension_type from .conversions but
duplicates document_to_python_value locally; update the import list to also
import document_to_python_value from .conversions and remove the local duplicate
definition/usage so the code uses the shared conversion function (reference
symbols: to_dimension_type, document_to_python_value, and the local duplicate
currently in cac_config.py).
In `@clients/python/provider/superposition_provider/conversions.py`:
- Around line 363-368: The to_group_type function is returning the SDK enum
value instead of the FFI enum for the USER_CREATED branch; update the
USER_CREATED case in to_group_type to return GroupType.USER_CREATED (the
FFI/FFI-exposed enum) rather than SDKGroupType.USER_CREATED, keeping the
fallback case returning GroupType.SYSTEM_GENERATED so the function consistently
returns GroupType values.
In `@clients/python/provider/superposition_provider/file_data_source.py`:
- Around line 96-99: The FileDataSource currently constructs ConfigData using an
incorrect keyword argument (`config=`) which doesn't match ConfigData's
signature (data, fetched_at), causing attribute errors in fetch_filtered_config
and violating FetchResponse[ConfigData]; update the construction in
FileDataSource (and the similar block around lines 123-135) to pass the payload
as the first argument or use the correct keyword name (`data=`) and keep
fetched_at=now so you return FetchResponse.data(ConfigData(data=config,
fetched_at=now)).
- Around line 149-172: The code claims support for experiments via
supports_experiments() while fetch_candidate_active_experiments and
fetch_matching_active_experiments still raise NotImplementedError; change
supports_experiments to not advertise support until real fetchers exist—either
implement the two async fetch methods to use self.experiment_path, or, as a safe
stopgap, make supports_experiments() return False (remove the experiment_path
truthy check) so callers won’t assume experiment capability when
fetch_candidate_active_experiments and fetch_matching_active_experiments remain
unimplemented.
In `@clients/python/provider/superposition_provider/http_data_source.py`:
- Around line 86-88: The except blocks catching UnknownApiError in
_fetch_and_cache_config and _fetch_and_cache_experiments should not be mapped
unconditionally to FetchResponse.not_modified() because init fetches use
if_modified_since=None and UnknownApiError denotes real failures; locate the two
handlers that currently "return FetchResponse.not_modified()" for
UnknownApiError in http_data_source.py and change the behavior to surface the
failure instead of faking a 304: either re-raise the UnknownApiError or return a
distinct error FetchResponse (e.g., FetchResponse.error or similar) when
init=True or when if_modified_since is None, and only treat a NotModified result
when you have an explicit 304/conditional success path. Ensure you reference the
_fetch_and_cache_config and _fetch_and_cache_experiments flows so the init case
is handled differently from a real 304.
In `@clients/python/provider/superposition_provider/local_provider.py`:
- Around line 323-324: The code dereferences a nullable cache
(self.cached_experiments) when computing if_modified_since before calling
primary_source.fetch_active_experiments, causing AttributeError when the cache
is None; update the call sites (the one around fetch_active_experiments and the
similar occurrence at lines referenced near 359-360) to guard the cache by using
a conditional check (e.g., use self.cached_experiments.fetched_at only if
self.cached_experiments is not None, otherwise pass None) so
fetch_active_experiments receives None when the cache is absent.
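The guard described in this item can be sketched as follows (ExperimentData here is a stand-in that only models the fetched_at attribute):

```python
from datetime import datetime, timezone
from typing import Optional

class ExperimentData:
    def __init__(self, fetched_at: datetime):
        self.fetched_at = fetched_at

def if_modified_since(cached: Optional[ExperimentData]) -> Optional[datetime]:
    # Guard the nullable cache: on the first fetch nothing is cached yet,
    # so pass None instead of raising AttributeError on None.fetched_at.
    return cached.fetched_at if cached is not None else None

assert if_modified_since(None) is None
stamp = datetime.now(timezone.utc)
assert if_modified_since(ExperimentData(stamp)) == stamp
```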
In `@clients/python/provider/superposition_provider/remote_provider.py`:
- Around line 250-261: The code in _get_merged_context currently serializes
eval_ctx.attributes with json.dumps before returning query_data, which breaks
remote path handling because dict_to_document expects raw Python values; update
_get_merged_context (and likewise local helper _merge_contexts if present) to
return the attributes map as raw values (dict[str, Any]) instead of JSON strings
so that dict_to_document receives original types; ensure the function
signature/return types are adjusted accordingly (returning
eval_ctx.targeting_key and eval_ctx.attributes directly) and remove any
json.dumps wrapping around attributes used for the remote/provider path.
In `@clients/python/provider/superposition_provider/types.py`:
- Around line 110-119: The dataclass LocalProviderOptions has
config_fetch_strategy typed as RefreshStrategy but defaults to None, causing a
type mismatch and runtime errors in
LocalResolutionProvider._start_refresh_strategy; fix this by making
config_fetch_strategy non-None by supplying a default strategy using
dataclasses.field(default_factory=default_polling_strategy) (ensure you import
field and default_polling_strategy) so the type remains RefreshStrategy and
_start_refresh_strategy no longer needs to handle None.
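A sketch of the default_factory fix (PollingStrategy and LocalProviderOptions are simplified stand-ins for the real types in types.py):

```python
from dataclasses import dataclass, field

@dataclass
class PollingStrategy:
    interval_seconds: int = 30

def default_polling_strategy() -> PollingStrategy:
    return PollingStrategy()

@dataclass
class LocalProviderOptions:
    # default_factory keeps the field non-Optional and gives every options
    # object its own strategy instance, so _start_refresh_strategy never
    # has to handle None.
    config_fetch_strategy: PollingStrategy = field(
        default_factory=default_polling_strategy
    )

a, b = LocalProviderOptions(), LocalProviderOptions()
assert a.config_fetch_strategy is not b.config_fetch_strategy
```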
In `@crates/superposition_core/src/ffi.rs`:
- Around line 392-444: The method get_applicable_variants currently locks
self.data twice (once to read exp_config.experiments and experiment_groups, then
again to read config.dimensions), which can produce an inconsistent snapshot;
change it to acquire the mutex once, clone out exp_config.experiments,
exp_config.experiment_groups and cache_data.config.dimensions while holding the
single lock (use the same cache_data and exp_config variables currently
referenced), then release the lock and call the free function
get_applicable_variants(&dimensions, experiments, &experiment_groups,
&dimension_data, &targeting_key, prefix) so all inputs come from the same atomic
snapshot. Ensure you still preserve the existing error mapping
(OperationError::Unexpected) when lock acquisition fails.
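The single-snapshot idea generalizes beyond Rust: acquire one lock, clone every input while holding it, then evaluate outside the lock. A Python analogue with threading.Lock (all names and data are illustrative, not the real FFI types):

```python
import threading

class ProviderCache:
    def __init__(self):
        self._lock = threading.Lock()
        self.experiments = {"exp-1": {"variant": "a"}}
        self.experiment_groups = {}
        self.dimensions = {"os": ["ios", "android"]}

    def snapshot(self):
        # Clone all inputs under a single lock acquisition so the caller
        # sees one consistent state, never a mix of two concurrent updates.
        with self._lock:
            return (
                dict(self.experiments),
                dict(self.experiment_groups),
                dict(self.dimensions),
            )

exps, groups, dims = ProviderCache().snapshot()
assert "exp-1" in exps and "os" in dims
```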
In `@crates/superposition_provider/src/local_provider.rs`:
- Around line 271-336: The current do_refresh() mutates cached_config and
cached_experiments inside each future causing partial publishes when one fetch
succeeds and the other fails; change the logic to fetch both results into
temporary local variables (e.g., use local vars like new_config:
Option<ConfigData>, new_experiments: Option<ExperimentsData> or enum
FetchOutcome) by calling primary.fetch_config(...) and
primary.fetch_active_experiments(...) without writing to cached_config or
cached_experiments, then await both with tokio::join! and only if both fetch
outcomes indicate success (or NotModified where appropriate) acquire the write
locks and update cached_config and cached_experiments together; ensure errors
return without mutating caches and preserve existing log messages and use the
same function names (do_refresh, cached_config, cached_experiments,
primary.fetch_config, primary.fetch_active_experiments) to locate changes.
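The fetch-into-temporaries pattern can be sketched in Python with asyncio (the fetchers are stubs standing in for primary.fetch_config and primary.fetch_active_experiments):

```python
import asyncio

async def fetch_config():
    return {"flag": True}  # stub for primary.fetch_config(...)

async def fetch_active_experiments():
    return []  # stub for primary.fetch_active_experiments(...)

class Provider:
    def __init__(self):
        self.cached_config = None
        self.cached_experiments = None

    async def do_refresh(self):
        # Fetch into temporaries first; publish both caches only when both
        # fetches succeeded, so a partial failure never half-updates state.
        try:
            new_config, new_experiments = await asyncio.gather(
                fetch_config(), fetch_active_experiments()
            )
        except Exception:
            return  # keep last-known-good caches untouched
        self.cached_config = new_config
        self.cached_experiments = new_experiments

p = Provider()
asyncio.run(p.do_refresh())
assert p.cached_config == {"flag": True} and p.cached_experiments == []
```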
---
Outside diff comments:
In `@clients/python/provider/superposition_provider/cac_config.py`:
- Around line 318-319: The method _get_from_eval_cache is missing a return, so
it never returns the cached value; update the implementation in cac_config.py
(method _get_from_eval_cache) to return the retrieved value (e.g., change the
body to return self.evaluation_cache.get(key)) so callers receive the cached
result.
- Around line 34-69: The function document_to_python_value is duplicated here;
remove the local implementation and import the canonical version from
conversions.py instead: add document_to_python_value to the existing import from
.conversions (alongside to_dimension_type) and delete the duplicate function
definition in cac_config.py, also removing any now-unused imports (e.g., Decimal
if only used by the removed code) so cac_config.py uses the single
source-of-truth document_to_python_value from conversions.py.
In `@clients/python/provider/superposition_provider/exp_config.py`:
- Around line 227-228: The _get_from_eval_cache method in exp_config.py calls
self.evaluation_cache.get(key) but does not return its result; change the method
so it returns the fetched value (i.e., add a return of
self.evaluation_cache.get(key)) so callers of _get_from_eval_cache receive the
cached value; ensure you update the function _get_from_eval_cache and keep the
signature def _get_from_eval_cache(self, key: str) -> Optional[Any]: unchanged.
---
Nitpick comments:
In `@clients/python/provider/superposition_provider/conversions.py`:
- Around line 363-381: Both conversion functions declare their input as str but
actually pattern-match on SDK enums; update the parameter annotations to
SDKGroupType for to_group_type and SDKExperimentStatusType for
to_experiment_status_type, and ensure the functions return the corresponding
target enums (GroupType and ExperimentStatusType) by returning
GroupType.USER_CREATED (not SDKGroupType.USER_CREATED) and the appropriate
ExperimentStatusType members in each match arm; adjust any imports if necessary
to reference SDKGroupType and SDKExperimentStatusType.
- Around line 80-152: Delete the large commented-out document_to_value block
entirely; it is redundant with the active function
document_to_python_value—remove all commented lines from the start of the def
comment through its final fallback return so only document_to_python_value
remains, and then run linters/tests to ensure no references to document_to_value
remain (if any, replace them to call document_to_python_value).
In `@clients/python/provider/superposition_provider/exp_config.py`:
- Around line 19-21: The global logger-level manipulation in exp_config.py
iterates logging.root.manager.loggerDict and forces non-"python"-prefixed
loggers to INFO via logging.getLogger(...).setLevel(logging.INFO); remove this
unconditional loop or make it configurable: provide a flag (e.g., ENV var or
config option like set_default_logger_level) that, when true, applies the level
change, or instead only target module-specific loggers (e.g., package name(s)
for this project) by using explicit logger names; update the code around
logging.root.manager.loggerDict and logging.getLogger to consult that flag or a
whitelist before calling setLevel so dependencies' debug logs aren't suppressed.
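One way the suggested gating could look, as a sketch: the env var name and logger prefix below are illustrative assumptions, not the project's actual configuration surface.

```python
import logging
import os


def apply_default_logger_levels(
    prefixes: tuple[str, ...] = ("superposition_provider",),
) -> None:
    """Raise whitelisted loggers to INFO only when explicitly enabled.

    Gated behind an env flag so dependencies' DEBUG logs are never
    silenced unconditionally; only loggers matching the given
    package prefixes are touched.
    """
    if os.environ.get("SUPERPOSITION_SET_DEFAULT_LOG_LEVEL") != "1":
        return
    for name in list(logging.root.manager.loggerDict):
        if name.startswith(prefixes):
            logging.getLogger(name).setLevel(logging.INFO)
```

Compared to the current unconditional loop over all non-"python" loggers, this touches nothing unless the operator opts in, and even then only the project's own logger namespace.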
In `@clients/python/provider/superposition_provider/types.py`:
- Around line 62-71: The WatchStrategy dataclass defines debounce_ms default as
1000 but default_watch_strategy() constructs WatchStrategy(500), causing an
inconsistency; update them to match (choose one value and set both debounce_ms
default in the WatchStrategy dataclass and the value used in
default_watch_strategy()), or add a docstring to default_watch_strategy()
explaining why it returns 500 while the dataclass default is 1000 so users
understand the intended “default” behavior; refer to WatchStrategy, its
debounce_ms field, and default_watch_strategy when making the change.
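A sketch of the single-source-of-truth option for the debounce default (the 1000 ms value is illustrative; pick whichever value is actually intended):

```python
from dataclasses import dataclass


@dataclass
class WatchStrategy:
    # Single source of truth for the debounce default.
    debounce_ms: int = 1000


def default_watch_strategy() -> WatchStrategy:
    # Relying on the dataclass default keeps the factory and the
    # dataclass from drifting apart again.
    return WatchStrategy()
```

Because the factory no longer hard-codes its own value, changing `debounce_ms` in one place updates both construction paths.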
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 6f79bd33-73ce-4366-a915-3b96cafbe579
📒 Files selected for processing (23)
- clients/OPENFEATURE_PROVIDER_IMPLEMENTATION.md
- clients/java/bindings/src/main/kotlin/uniffi/superposition_client/superposition_client.kt
- clients/python/bindings/superposition_bindings/superposition_client.py
- clients/python/provider/PROVIDER_ARCHITECTURE.md
- clients/python/provider/README_REFACTORED.md
- clients/python/provider/examples.py
- clients/python/provider/superposition_provider/__init__.py
- clients/python/provider/superposition_provider/builder.py
- clients/python/provider/superposition_provider/cac_config.py
- clients/python/provider/superposition_provider/conversions.py
- clients/python/provider/superposition_provider/data_source.py
- clients/python/provider/superposition_provider/exp_config.py
- clients/python/provider/superposition_provider/file_data_source.py
- clients/python/provider/superposition_provider/http_data_source.py
- clients/python/provider/superposition_provider/interfaces.py
- clients/python/provider/superposition_provider/local_provider.py
- clients/python/provider/superposition_provider/remote_provider.py
- clients/python/provider/superposition_provider/types.py
- crates/superposition_core/src/ffi.rs
- crates/superposition_provider/src/data_source.rs
- crates/superposition_provider/src/data_source/file.rs
- crates/superposition_provider/src/data_source/http.rs
- crates/superposition_provider/src/local_provider.rs
```
┌─ ConfigData ──────────────────────────────────┐
│ {config: Config, fetchedAt: LocalDateTime}    │
└───────────────────────────────────────────────┘
```
Update the guide to the renamed ConfigData.data shape.
This diagram still shows ConfigData {config: ...}, but the current provider code now reads and writes ConfigData.data (data.data.*, c.data = ...). Leaving the old field name here will steer new SDK implementations toward the wrong wrapper contract.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@clients/OPENFEATURE_PROVIDER_IMPLEMENTATION.md` around lines 429 - 431,
Update the diagram and text to reflect the renamed field on the ConfigData
wrapper: replace references to ConfigData {config: Config, fetchedAt:
LocalDateTime} with the current shape that uses the data property (e.g.,
ConfigData {data: Config, fetchedAt: LocalDateTime}), and ensure any examples or
prose match the runtime access patterns shown in the provider code (such as
data.data.* and assignments like c.data = ...), so new SDKs use ConfigData.data
rather than a non-existent config field.
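A minimal Python sketch of the renamed wrapper shape the guide should document (the payload type and `fetched_at` naming are illustrative for Python; the Java guide uses `LocalDateTime`):

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Any


@dataclass
class ConfigData:
    # Renamed field: the payload now lives under `data`, not `config`.
    data: dict[str, Any]
    fetched_at: datetime = field(default_factory=datetime.now)


c = ConfigData(data={"default_configs": {"flag": True}})
# Access pattern matching the provider code (data.data.*):
print(c.data["default_configs"]["flag"])
```

Any SDK following the guide's old `config` field would fail attribute lookups against providers built on this shape.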
```python
from .data_source import (
    SuperpositionDataSource,
    FetchResponse,
    ConfigData,
    ExperimentData,
)
from .file_data_source import FileDataSource
from .http_data_source import HttpDataSource

# Traits
from .traits import AllFeatureProvider, FeatureExperimentMeta

# Providers
from .local_provider import LocalResolutionProvider, RefreshStrategy
from .remote_provider import SuperpositionAPIProvider

# Types
from .types import (
    SuperpositionOptions,
    EvaluationCacheOptions,
    PollingStrategy,
    OnDemandStrategy,
    WatchStrategy,
    ManualStrategy,
    RefreshStrategy as RefreshStrategyType,
    ConfigurationOptions,
    ExperimentationOptions,
    LocalProviderOptions,
    RemoteProviderOptions,
    SuperpositionProviderOptions,
)
```
Re-export the builder/factory API before documenting it as the package entrypoint.
README_REFACTORED.md, PROVIDER_ARCHITECTURE.md, and examples.py all use from superposition_provider import ProviderBuilder / create_local_provider / create_remote_provider / create_local_provider_with_files, but this __init__.py never imports or exposes them. As merged, the advertised entrypoint raises ImportError.
Suggested export surface

```diff
 # Providers
 from .local_provider import LocalResolutionProvider, RefreshStrategy
 from .remote_provider import SuperpositionAPIProvider
+from .builder import (
+    ProviderBuilder,
+    create_local_provider,
+    create_local_provider_with_files,
+    create_remote_provider,
+)
@@
     "LocalProviderOptions",
     "RemoteProviderOptions",
     "SuperpositionProviderOptions",
+    "ProviderBuilder",
+    "create_local_provider",
+    "create_local_provider_with_files",
+    "create_remote_provider",
 ]
```

Also applies to: 45-73
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@clients/python/provider/superposition_provider/__init__.py` around lines 13 -
43, The package entrypoint is missing exports for the builder/factory API used
by docs/examples; import and re-export ProviderBuilder and the factory functions
referenced in README and examples (ProviderBuilder, create_local_provider,
create_remote_provider, create_local_provider_with_files) from their defining
modules (e.g., the module that defines ProviderBuilder and create_* functions)
in __init__.py so that from superposition_provider import
ProviderBuilder/create_local_provider/create_remote_provider/create_local_provider_with_files
works; ensure you add these imports alongside the existing re-exports (also
apply the same change to the later export block referenced in lines 45-73).
clients/python/provider/superposition_provider/local_provider.py (outdated; resolved)
Force-pushed 7463d4f to 77f5179
@coderabbitai full review

✅ Actions performed: Full review triggered.
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
clients/python/provider/superposition_provider/cac_config.py (1)
281-282: ⚠️ Potential issue | 🟠 Major

Bug: Missing `return` statement causes cache lookup to always return `None`.

Same issue as in `exp_config.py` - the method retrieves the cached value but doesn't return it.

Proposed fix

```diff
 def _get_from_eval_cache(self, key: str) -> Optional[Any]:
-    self.evaluation_cache.get(key)
+    return self.evaluation_cache.get(key)
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@clients/python/provider/superposition_provider/cac_config.py` around lines 281 - 282: the `_get_from_eval_cache` method currently calls `self.evaluation_cache.get(key)` but does not return the value, causing callers to always get `None`; update the method to return the retrieved value (e.g., `return self.evaluation_cache.get(key)`) so cache lookups work correctly.

clients/python/provider/superposition_provider/exp_config.py (1)

227-228: ⚠️ Potential issue | 🟠 Major

Bug: Missing `return` statement causes cache lookup to always return `None`.

The method retrieves the cached value but doesn't return it. This breaks the evaluation cache functionality.

Proposed fix

```diff
 def _get_from_eval_cache(self, key: str) -> Optional[Any]:
-    self.evaluation_cache.get(key)
+    return self.evaluation_cache.get(key)
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@clients/python/provider/superposition_provider/exp_config.py` around lines 227 - 228: the method `_get_from_eval_cache` currently calls `self.evaluation_cache.get(key)` but does not return the result; add a return so the cached value is returned, restoring correct cache lookup behavior.
🧹 Nitpick comments (10)
clients/python/provider/superposition_provider/cac_config.py (1)
19-20: Remove unused imports.

`Document` and `ShapeType` are imported but no longer used directly in this file since `document_to_python_value` is now imported from `conversions.py`.

Proposed fix

```diff
-from smithy_core.documents import Document
-from smithy_core.shapes import ShapeType
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@clients/python/provider/superposition_provider/cac_config.py` around lines 19 - 20: remove the unused imports `Document` and `ShapeType` from the top import block so only required imports (e.g., `document_to_python_value` from `conversions.py`) remain; ensure no other references to `Document` or `ShapeType` exist in the file after removal, and run tests or lint to confirm no import errors.

clients/python/provider/superposition_provider/conversions.py (3)
80-152: Remove commented-out dead code.

This large block of commented-out code (`document_to_value`) appears to be replaced by `document_to_python_value` (lines 318-353). Dead code adds maintenance burden and confusion. Consider removing it.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@clients/python/provider/superposition_provider/conversions.py` around lines 80 - 152, Remove the large commented-out dead block defining document_to_value: delete the entire commented section (the commented function and its internals) so only the active implementation document_to_python_value remains; ensure no other references to document_to_value exist (search for symbol document_to_value) and run tests/type-checking to confirm nothing else depends on the removed commented code.
31-53: Docstring slightly misleading, but logic is sound.

The docstring states "Converts all values to JSON strings" but string values pass through unchanged (line 45) while non-strings are JSON-encoded (line 48). This is intentional to avoid double-quoting strings (e.g., `"hello"` staying as `hello` rather than `"\"hello\""`), and the reverse function `query_data_to_evaluation_context` handles this correctly by falling back to the original string when JSON parsing fails.

Consider updating the docstring to clarify this behavior:

```diff
-    """Convert evaluation context attributes to query data.
-
-    Converts all values to JSON strings for API compatibility.
+    """Convert evaluation context attributes to query data.
+
+    String values pass through unchanged; non-string values are JSON-encoded.
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@clients/python/provider/superposition_provider/conversions.py` around lines 31 - 53, The docstring for evaluation_context_to_query_data is misleading because it says "Converts all values to JSON strings" while the function intentionally leaves existing str values unchanged to avoid double-quoting; update the docstring of evaluation_context_to_query_data to state that non-string values are JSON-encoded while string values are passed through as-is (and mention that query_data_to_evaluation_context will attempt to JSON-decode entries and fall back to the original string), so readers understand the asymmetric behavior between evaluation_context_to_query_data and query_data_to_evaluation_context.
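The asymmetric behavior described above can be sketched end-to-end with simplified stand-ins for the real converters (function names mirror the review; the bodies are illustrative):

```python
import json
from typing import Any


def evaluation_context_to_query_data(attrs: dict[str, Any]) -> dict[str, str]:
    # Strings pass through unchanged; everything else is JSON-encoded.
    return {k: v if isinstance(v, str) else json.dumps(v) for k, v in attrs.items()}


def query_data_to_evaluation_context(query: dict[str, str]) -> dict[str, Any]:
    out: dict[str, Any] = {}
    for k, v in query.items():
        try:
            out[k] = json.loads(v)  # recovers numbers, bools, lists, dicts
        except json.JSONDecodeError:
            out[k] = v  # plain strings fall back unchanged
    return out


attrs = {"tier": "gold", "retries": 3, "beta": True}
assert query_data_to_evaluation_context(evaluation_context_to_query_data(attrs)) == attrs
```

Note the one lossy corner this scheme accepts: a string that happens to be valid JSON, such as `"3"`, decodes back as the integer `3` on the return trip.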
336-339: Precision loss on BIG_DECIMAL conversion is acceptable but worth noting.

Converting `Decimal` to `float` (line 339) can lose precision for high-precision decimal values. The comment "for JSON compatibility" documents this trade-off. If precision is critical for monetary or scientific values in the future, consider using a string representation instead.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@clients/python/provider/superposition_provider/conversions.py` around lines 336 - 339: the current `ShapeType.BIG_DECIMAL` handling converts `Decimal` via `doc.as_decimal()` to `float`, which can lose precision; change the conversion to return the string form of the Decimal (e.g., `str(decimal_val)` when `decimal_val` is not None) and update the comment to note that BIG_DECIMAL is serialized as a string to preserve precision for monetary/scientific values while remaining JSON-compatible.

clients/python/provider/superposition_provider/http_data_source.py (1)
89-90: Use bare `raise` in these pass-through handlers.

`raise e` adds no value and re-anchors the traceback to this helper, which makes SDK failures harder to debug. Dropping the generic `except Exception` blocks entirely, or switching them to bare `raise`, preserves the original call site.

♻️ Minimal cleanup

```diff
-        except Exception as e:
-            raise e
+        except Exception:
+            raise
```

Also applies to: 153-154
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@clients/python/provider/superposition_provider/http_data_source.py` around lines 89 - 90: replace the pass-through exception handlers that use `except Exception as e: raise e` with bare re-raises (`except Exception: raise`) or remove the unnecessary try/except entirely; update both occurrences that capture `Exception as e` so the original traceback is preserved instead of being re-anchored by raising the caught exception object.

clients/python/provider/superposition_provider/builder.py (1)
14-19: Import `ManualStrategy` at the module level for consistency.

`ManualStrategy` is imported inline at line 159 but other strategies (`PollingStrategy`, `OnDemandStrategy`, `WatchStrategy`) are imported at the top. This inconsistency makes the code harder to follow and could mask import errors until runtime.

♻️ Proposed fix

```diff
 from .types import (
     SuperpositionOptions,
     PollingStrategy,
     OnDemandStrategy,
     WatchStrategy,
+    ManualStrategy,
 )
```

Then at line 159:

```diff
     else:
-        from .types import ManualStrategy
         return ManualStrategy()
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@clients/python/provider/superposition_provider/builder.py` around lines 14 - 19: move the inline import of `ManualStrategy` to the module-level import block for consistency and to surface import errors early; add `ManualStrategy` to the existing `from .types import (...)` list alongside `SuperpositionOptions`, `PollingStrategy`, `OnDemandStrategy`, and `WatchStrategy`, and remove the inline `from .types import ManualStrategy` statement near line 159 so all strategy classes are imported in one place.

clients/python/provider/superposition_provider/local_provider.py (1)
42-47: Function call in default argument (B008) - consider using `None` with a factory in the body.

The static analyzer flags `default_on_demand_strategy()` in the default argument. While this works for immutable dataclasses, the idiomatic pattern is to use `None` as a sentinel and instantiate in the function body.

♻️ Optional: Avoid function call in default

```diff
 def __init__(
     self,
     primary_source: SuperpositionDataSource,
     fallback_source: Optional[SuperpositionDataSource] = None,
-    refresh_strategy: RefreshStrategy = default_on_demand_strategy(),
+    refresh_strategy: Optional[RefreshStrategy] = None,
 ):
     ...
     self.primary_source = primary_source
     self.fallback_source = fallback_source
-    self.refresh_strategy = refresh_strategy
+    self.refresh_strategy = refresh_strategy or default_on_demand_strategy()
```

This avoids the function being called at module import time.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@clients/python/provider/superposition_provider/local_provider.py` around lines 42 - 47: the `__init__` signature currently calls `default_on_demand_strategy()` as a default argument, which runs the factory at import time; change the parameter to `refresh_strategy: Optional[RefreshStrategy] = None` and inside `__init__` set `refresh_strategy = refresh_strategy or default_on_demand_strategy()` so the factory is invoked at runtime only; update type hints/imports as needed to accept `Optional[RefreshStrategy]`.

clients/python/provider/superposition_provider/remote_provider.py (2)
82-88: `shutdown()` should also be `async` for consistency.

Same rationale as `initialize()` - for consistency with `LocalResolutionProvider.shutdown()` and to support potential async cleanup in the future.

♻️ Proposed fix

```diff
-    def shutdown(self):
+    async def shutdown(self):
         """Shutdown the provider."""
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@clients/python/provider/superposition_provider/remote_provider.py` around lines 82 - 88, The shutdown method in SuperpositionAPIProvider should be made async to match initialize() and LocalResolutionProvider.shutdown(); change def shutdown(self): to async def shutdown(self):, perform any async cleanup (e.g., await self.client.aclose() or other client-specific async close if available) before setting self.client = None and self.status = ProviderStatus.NOT_READY, and update any callers to await provider.shutdown() where this provider is used; ensure the method signature and any tests/mocks are updated to reflect the coroutine.
66-80: Make `SuperpositionAPIProvider.initialize()` async to match the OpenFeature provider contract and align with `LocalResolutionProvider`.

`SuperpositionAPIProvider`'s `initialize()` method is synchronous, while `LocalResolutionProvider`'s `initialize()` is async. Both share the same base class (`FeatureProvider` from openfeature) and all other methods in `SuperpositionAPIProvider` are async. For consistency with the async-first pattern and the OpenFeature provider interface expectations, `initialize()` should be async.

♻️ Proposed fix

```diff
-    def initialize(self, evaluation_context: EvaluationContext):
+    async def initialize(self, evaluation_context: EvaluationContext):
         """Initialize the provider.

         Args:
             evaluation_context: Global evaluation context to apply to all resolutions.
         """
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@clients/python/provider/superposition_provider/remote_provider.py` around lines 66 - 80: `SuperpositionAPIProvider.initialize` is currently synchronous but must be async to match the OpenFeature `FeatureProvider` contract and `LocalResolutionProvider`; change the signature to `async def initialize(self, evaluation_context: EvaluationContext)` and keep the existing body (setting `self.global_context`, `self.status = ProviderStatus.READY`, and the logger calls), ensure callers await `initialize`, and preserve the exception handling (`logger.error`, `self.status = ProviderStatus.FATAL`, re-raise) so behavior remains the same.

clients/python/provider/superposition_provider/types.py (1)
47-48: Consider adding type hints to factory functions.

The default factory functions work correctly but lack explicit return type annotations.

♻️ Optional: Add return type hints

```diff
-def default_polling_strategy():
-    return PollingStrategy(60, 30)
+def default_polling_strategy() -> PollingStrategy:
+    return PollingStrategy(60, 30)
```

Same pattern applies to `default_on_demand_strategy()` and `default_watch_strategy()`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@clients/python/provider/superposition_provider/types.py` around lines 47 - 48, Add explicit return type annotations to the factory functions so their signatures declare they return the appropriate strategy types (e.g., annotate default_polling_strategy() to return PollingStrategy) and apply the same pattern to default_on_demand_strategy() and default_watch_strategy(); update the function definitions to include the return type references (PollingStrategy or the corresponding strategy classes) to improve type checking and IDE hints while keeping the current returned values unchanged.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@clients/python/provider/examples.py`:
- Around line 318-338: The main() currently claims to "Run all examples" but
only invokes example_local_with_polling, example_local_on_demand,
example_remote_provider, and example_error_handling; update it to either call
the missing example_local_with_file_fallback(), example_manual_refresh(), and
example_openfeature_client() in the proper order (and adjust the printed
numbered headers to match) or change the header/text to indicate this is a
curated subset (e.g., "Run selected examples") and keep the existing calls;
locate main() and the printed section headers to implement the chosen fix so
numbering and invoked functions (example_local_with_file_fallback,
example_manual_refresh, example_openfeature_client) stay consistent.
In `@clients/python/provider/superposition_provider/file_data_source.py`:
- Around line 216-220: The close() method contains dead code because _watch_task
is never assigned in this class (watch() yields an async generator) — the actual
background task is created by LocalResolutionProvider._start_refresh_strategy()
and stored as LocalResolutionProvider._background_task; remove the unused
self._watch_task handling from close() (and delete _watch_task from __init__ if
present) or, if intended, change the implementation to reference the correct
background task attribute (LocalResolutionProvider._background_task) so close()
cancels the real task instead of a never-set _watch_task.
- Around line 88-102: The fetch_config method lacks validation of
self.config_path which will cause open(self.config_path, 'r') to raise a
TypeError when config_path is None; inside FileDataSource.fetch_config, add a
guard that checks if self.config_path is a non-empty string (or not None), and
if not, log a clear error via logger.error and raise a ValueError (or return an
appropriate FetchResponse error) before attempting to open the file so that
_parse_config_file, ConfigData, and FetchResponse are only used when a valid
path is present.
In `@clients/python/provider/superposition_provider/local_provider.py`:
- Line 362: In _ensure_fresh_data, avoid accessing self.cached_config.fetched_at
when self.cached_config may be None by adding a null guard like the existing
cached_experiments check: compute should_refresh_config only if
self.cached_config is not None and then call
is_elapsed(self.cached_config.fetched_at); otherwise treat it as needing refresh
(True). Update the logic around cached_config in _ensure_fresh_data to mirror
the pattern used for cached_experiments to prevent AttributeError when
cached_config is None.
- Around line 289-298: _guard against cached_config being None in
_fetch_and_cache_config: when computing if_modified_since use a null-safe
expression (similar to cached_experiments handling) so that if
self.cached_config is None you pass None to primary_source.fetch_config; update
the reference in the method _fetch_and_cache_config (and any subsequent uses of
self.cached_config.fetched_at) to check for self.cached_config before accessing
fetched_at and only read fetched_at when cached_config is not None.
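The null-safe pattern both prompts describe, as a standalone sketch (the `ConfigData` stand-in, TTL value, and `is_elapsed` signature are illustrative, not the provider's actual helpers):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Any, Optional


@dataclass
class ConfigData:
    data: dict[str, Any]
    fetched_at: datetime


def is_elapsed(fetched_at: datetime, ttl: timedelta = timedelta(seconds=60)) -> bool:
    return datetime.now() - fetched_at > ttl


def should_refresh_config(cached_config: Optional[ConfigData]) -> bool:
    # Null guard first: no cache at all means we must refresh; only
    # touch fetched_at once cached_config is known to be present.
    return cached_config is None or is_elapsed(cached_config.fetched_at)
```

Short-circuiting on `cached_config is None` is exactly the pattern the review says already guards `cached_experiments`, so applying it to `cached_config` removes the `AttributeError` path on first use.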
---
Outside diff comments:
In `@clients/python/provider/superposition_provider/cac_config.py`:
- Around line 281-282: The _get_from_eval_cache method in cac_config.py
currently calls self.evaluation_cache.get(key) but does not return the value,
causing callers to always get None; update the method _get_from_eval_cache to
return the retrieved value (e.g., return self.evaluation_cache.get(key)) so
cache lookups work correctly, ensuring you reference the method name
_get_from_eval_cache and the evaluation_cache attribute when making the change.
In `@clients/python/provider/superposition_provider/exp_config.py`:
- Around line 227-228: The method _get_from_eval_cache currently calls
self.evaluation_cache.get(key) but does not return the result; add a return so
the cached value is returned (i.e., change the body of _get_from_eval_cache to
return the result of self.evaluation_cache.get(key)) to restore correct cache
lookup behavior in the class.
---
Nitpick comments:
In `@clients/python/provider/superposition_provider/builder.py`:
- Around line 14-19: The inline import of ManualStrategy should be moved to the
module-level import block for consistency and to surface import errors early:
add ManualStrategy to the existing import list alongside SuperpositionOptions,
PollingStrategy, OnDemandStrategy, and WatchStrategy at the top of builder.py,
and remove the inline "from .strategies import ManualStrategy" statement near
line 159 so all strategy classes are imported in one place.
In `@clients/python/provider/superposition_provider/cac_config.py`:
- Around line 19-20: Remove the unused imports Document and ShapeType from
cac_config.py: locate the top import block where Document and ShapeType are
imported and delete those two names so only required imports (e.g.,
document_to_python_value from conversions.py) remain; ensure no other references
to Document or ShapeType exist in the file after removal (check functions like
any that reference document_to_python_value) and run tests or lint to confirm no
import errors.
In `@clients/python/provider/superposition_provider/conversions.py`:
- Around line 80-152: Remove the large commented-out dead block defining
document_to_value: delete the entire commented section (the commented function
and its internals) so only the active implementation document_to_python_value
remains; ensure no other references to document_to_value exist (search for
symbol document_to_value) and run tests/type-checking to confirm nothing else
depends on the removed commented code.
- Around line 31-53: The docstring for evaluation_context_to_query_data is
misleading because it says "Converts all values to JSON strings" while the
function intentionally leaves existing str values unchanged to avoid
double-quoting; update the docstring of evaluation_context_to_query_data to
state that non-string values are JSON-encoded while string values are passed
through as-is (and mention that query_data_to_evaluation_context will attempt to
JSON-decode entries and fall back to the original string), so readers understand
the asymmetric behavior between evaluation_context_to_query_data and
query_data_to_evaluation_context.
- Around line 336-339: The current ShapeType.BIG_DECIMAL handling converts
Decimal via doc.as_decimal() to float which can lose precision; change the
conversion to return the string form of the Decimal (e.g., use str(decimal_val)
when decimal_val is not None) and update the comment to note that we serialize
BIG_DECIMAL as a string to preserve precision for monetary/scientific values
while remaining JSON-compatible; reference ShapeType.BIG_DECIMAL,
doc.as_decimal(), and decimal_val to locate the change.
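The precision trade-off behind this suggestion is easy to demonstrate (the digit count below is illustrative):

```python
import json
from decimal import Decimal

# More significant digits than a 64-bit float can represent (~15-17).
amount = Decimal("1234567.8901234567890123456789")

as_float = float(amount)   # lossy narrowing
as_string = str(amount)    # lossless textual form

assert Decimal(as_string) == amount       # round-trips exactly
assert Decimal(repr(as_float)) != amount  # precision already gone

# Both forms are JSON-compatible; only the string keeps every digit.
print(json.dumps({"float": as_float, "string": as_string}))
```

The cost of the string form is that consumers must parse it back into a decimal type themselves, which is the usual convention for monetary values in JSON APIs.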
In `@clients/python/provider/superposition_provider/http_data_source.py`:
- Around line 89-90: Replace the pass-through exception handlers that use
"except Exception as e: raise e" in http_data_source.py with bare re-raises
(i.e., use "except Exception: raise") or remove the unnecessary try/except
entirely; specifically update both occurrences that capture Exception as e (the
handler around the HTTP data source call and the second similar handler later in
the module) so the original traceback is preserved instead of being re-anchored
by raising the caught exception object.
In `@clients/python/provider/superposition_provider/local_provider.py`:
- Around line 42-47: The __init__ signature currently calls
default_on_demand_strategy() as a default argument which triggers a function at
import time; change the parameter refresh_strategy: RefreshStrategy = None (or
Optional[RefreshStrategy] = None) and inside SuperpositionProvider.__init__ (or
the class containing this __init__) set refresh_strategy = refresh_strategy or
default_on_demand_strategy() (or explicitly if refresh_strategy is None:
refresh_strategy = default_on_demand_strategy()) so the factory is invoked at
runtime only; update type hints/imports as needed to accept
Optional[RefreshStrategy].
In `@clients/python/provider/superposition_provider/remote_provider.py`:
- Around line 82-88: The shutdown method in SuperpositionAPIProvider should be
made async to match initialize() and LocalResolutionProvider.shutdown(); change
def shutdown(self): to async def shutdown(self):, perform any async cleanup
(e.g., await self.client.aclose() or other client-specific async close if
available) before setting self.client = None and self.status =
ProviderStatus.NOT_READY, and update any callers to await provider.shutdown()
where this provider is used; ensure the method signature and any tests/mocks are
updated to reflect the coroutine.
- Around line 66-80: SuperpositionAPIProvider.initialize is currently
synchronous but must be async to match the OpenFeature FeatureProvider contract
and LocalResolutionProvider; change the method signature to async def
initialize(self, evaluation_context: EvaluationContext) and keep the existing
body (setting self.global_context, self.status = ProviderStatus.READY, and the
logger calls) but ensure any callers await initialize; preserve the exception
handling (logger.error, set self.status = ProviderStatus.FATAL, and re-raise) so
behavior remains the same while aligning the method with the async pattern used
by other methods in SuperpositionAPIProvider and LocalResolutionProvider.
In `@clients/python/provider/superposition_provider/types.py`:
- Around line 47-48: Add explicit return type annotations to the factory
functions so their signatures declare they return the appropriate strategy types
(e.g., annotate default_polling_strategy() to return PollingStrategy) and apply
the same pattern to default_on_demand_strategy() and default_watch_strategy();
update the function definitions to include the return type references
(PollingStrategy or the corresponding strategy classes) to improve type checking
and IDE hints while keeping the current returned values unchanged.
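The annotated factories could look like the sketch below. The dataclass fields (`interval_seconds`, `ttl_seconds`, defaults) are illustrative stand-ins for whatever the real strategy classes in `types.py` carry; only the return-annotation pattern is the point.

```python
from dataclasses import dataclass


@dataclass
class PollingStrategy:
    interval_seconds: int = 60
    timeout_seconds: int = 30


@dataclass
class OnDemandStrategy:
    ttl_seconds: int = 60


def default_polling_strategy() -> PollingStrategy:
    # Explicit return type: type checkers and IDEs now know what this yields.
    return PollingStrategy()


def default_on_demand_strategy() -> OnDemandStrategy:
    return OnDemandStrategy()
```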
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: fc9d7b34-0a84-43e1-b401-7fc42ac06378
📒 Files selected for processing (23)
- clients/OPENFEATURE_PROVIDER_IMPLEMENTATION.md
- clients/java/bindings/src/main/kotlin/uniffi/superposition_client/superposition_client.kt
- clients/python/bindings/superposition_bindings/superposition_client.py
- clients/python/provider/PROVIDER_ARCHITECTURE.md
- clients/python/provider/README_REFACTORED.md
- clients/python/provider/examples.py
- clients/python/provider/superposition_provider/__init__.py
- clients/python/provider/superposition_provider/builder.py
- clients/python/provider/superposition_provider/cac_config.py
- clients/python/provider/superposition_provider/conversions.py
- clients/python/provider/superposition_provider/data_source.py
- clients/python/provider/superposition_provider/exp_config.py
- clients/python/provider/superposition_provider/file_data_source.py
- clients/python/provider/superposition_provider/http_data_source.py
- clients/python/provider/superposition_provider/interfaces.py
- clients/python/provider/superposition_provider/local_provider.py
- clients/python/provider/superposition_provider/remote_provider.py
- clients/python/provider/superposition_provider/types.py
- crates/superposition_core/src/ffi.rs
- crates/superposition_provider/src/data_source.rs
- crates/superposition_provider/src/data_source/file.rs
- crates/superposition_provider/src/data_source/http.rs
- crates/superposition_provider/src/local_provider.rs
clients/python/provider/examples.py (Outdated)
```python
async def main():
    """Run all examples."""
    print("=" * 60)
    print("Example 1: Local Provider with Polling")
    print("=" * 60)
    await example_local_with_polling()

    print("\n" + "=" * 60)
    print("Example 2: Local Provider with On-Demand Refresh")
    print("=" * 60)
    await example_local_on_demand()

    print("\n" + "=" * 60)
    print("Example 3: Remote Provider for Serverless")
    print("=" * 60)
    await example_remote_provider()

    print("\n" + "=" * 60)
    print("Example 4: Error Handling")
    print("=" * 60)
    await example_error_handling()
```
main() only runs a subset of the advertised examples.
Line 319 says “Run all examples,” but this runner skips example_local_with_file_fallback(), example_manual_refresh(), and example_openfeature_client(). That also makes the printed numbering drift from the section headers above. Either execute the missing routines or rename this as a curated subset runner.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@clients/python/provider/examples.py` around lines 318 - 338, The main()
currently claims to "Run all examples" but only invokes
example_local_with_polling, example_local_on_demand, example_remote_provider,
and example_error_handling; update it to either call the missing
example_local_with_file_fallback(), example_manual_refresh(), and
example_openfeature_client() in the proper order (and adjust the printed
numbered headers to match) or change the header/text to indicate this is a
curated subset (e.g., "Run selected examples") and keep the existing calls;
locate main() and the printed section headers to implement the chosen fix so
numbering and invoked functions (example_local_with_file_fallback,
example_manual_refresh, example_openfeature_client) stay consistent.
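One way to keep the runner and its headers in sync, as the prompt suggests, is a table-driven `main()` where the numbering comes from the list itself. This is a sketch: the coroutine bodies here are empty stubs standing in for the real example functions in `examples.py`.

```python
import asyncio


# Stubs standing in for the real example coroutines in examples.py.
async def example_local_with_polling() -> None: ...
async def example_local_on_demand() -> None: ...
async def example_local_with_file_fallback() -> None: ...
async def example_manual_refresh() -> None: ...
async def example_remote_provider() -> None: ...
async def example_error_handling() -> None: ...
async def example_openfeature_client() -> None: ...


async def main() -> None:
    """Run all examples; numbering is derived from the list so it cannot drift."""
    examples = [
        ("Local Provider with Polling", example_local_with_polling),
        ("Local Provider with On-Demand Refresh", example_local_on_demand),
        ("Local Provider with File Fallback", example_local_with_file_fallback),
        ("Manual Refresh", example_manual_refresh),
        ("Remote Provider for Serverless", example_remote_provider),
        ("Error Handling", example_error_handling),
        ("OpenFeature Client", example_openfeature_client),
    ]
    for number, (title, run) in enumerate(examples, start=1):
        print("=" * 60)
        print(f"Example {number}: {title}")
        print("=" * 60)
        await run()
```

Adding or removing an example then updates both the execution order and the printed headers in one place.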
```python
try:
    now = datetime.now(timezone.utc)
    # Read and parse file
    with open(self.config_path, 'r') as f:
        content = f.read()

    config = _parse_config_file(content, self.config_path)

    return FetchResponse.data(ConfigData(
        data=config,
        fetched_at=now,
    ))
except Exception as e:
    logger.error(f"Failed to fetch config from {self.config_path}: {e}")
    raise
```
Missing null check for self.config_path before file read.
If FileDataSource is initialized with config_path=None, calling fetch_config() will raise a TypeError when open(None, 'r') is called. Consider adding validation.
🛡️ Proposed fix

```diff
 async def fetch_config(
     self,
     if_modified_since: Optional[datetime] = None,
 ) -> FetchResponse[ConfigData]:
+    if not self.config_path:
+        raise ValueError("config_path is required to fetch configuration")
+
     if if_modified_since is not None:
         logger.debug("FileDataSource: ignoring if_modified_since, always reading fresh from file")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@clients/python/provider/superposition_provider/file_data_source.py` around
lines 88 - 102, The fetch_config method lacks validation of self.config_path
which will cause open(self.config_path, 'r') to raise a TypeError when
config_path is None; inside FileDataSource.fetch_config, add a guard that checks
if self.config_path is a non-empty string (or not None), and if not, log a clear
error via logger.error and raise a ValueError (or return an appropriate
FetchResponse error) before attempting to open the file so that
_parse_config_file, ConfigData, and FetchResponse are only used when a valid
path is present.
```python
async def _fetch_and_cache_config(self, init: bool = False) -> None:
    """Fetch and cache configuration from primary/fallback sources."""
    try:
        # Try primary source
        if_modified_since = None if init else self.cached_config.fetched_at
        response = await self.primary_source.fetch_config(if_modified_since)
        if response.get_data():
            self.cached_config = response.get_data()
            self._update_config_ffi_cache()
            logger.info("Config updated from primary source")
```
Potential AttributeError if cached_config is None during non-init refresh.
Line 293 accesses self.cached_config.fetched_at without a null guard. While cached_config should be set after successful initialization, defensive coding would add the same guard pattern used for cached_experiments.
🛡️ Proposed fix

```diff
 try:
     # Try primary source
-    if_modified_since = None if init else self.cached_config.fetched_at
+    if_modified_since = (
+        None if init or self.cached_config is None
+        else self.cached_config.fetched_at
+    )
     response = await self.primary_source.fetch_config(if_modified_since)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@clients/python/provider/superposition_provider/local_provider.py` around
lines 289 - 298, _guard against cached_config being None in
_fetch_and_cache_config: when computing if_modified_since use a null-safe
expression (similar to cached_experiments handling) so that if
self.cached_config is None you pass None to primary_source.fetch_config; update
the reference in the method _fetch_and_cache_config (and any subsequent uses of
self.cached_config.fetched_at) to check for self.cached_config before accessing
fetched_at and only read fetched_at when cached_config is not None.
```python
def is_elapsed(cached_at: datetime) -> bool:
    return (datetime.now(timezone.utc) - cached_at).total_seconds() > ttl

should_refresh_config = self.cached_config.fetched_at is None or is_elapsed(self.cached_config.fetched_at)
```
Same null guard needed for cached_config in _ensure_fresh_data.
Line 362 directly accesses self.cached_config.fetched_at without checking if cached_config is None. This mirrors the pattern already applied for cached_experiments at lines 364-366.
🛡️ Proposed fix

```diff
-should_refresh_config = self.cached_config.fetched_at is None or is_elapsed(self.cached_config.fetched_at)
+should_refresh_config = (
+    self.cached_config is None
+    or self.cached_config.fetched_at is None
+    or is_elapsed(self.cached_config.fetched_at)
+)
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@clients/python/provider/superposition_provider/local_provider.py` at line
362, In _ensure_fresh_data, avoid accessing self.cached_config.fetched_at when
self.cached_config may be None by adding a null guard like the existing
cached_experiments check: compute should_refresh_config only if
self.cached_config is not None and then call
is_elapsed(self.cached_config.fetched_at); otherwise treat it as needing refresh
(True). Update the logic around cached_config in _ensure_fresh_data to mirror
the pattern used for cached_experiments to prevent AttributeError when
cached_config is None.
Force-pushed from bf6d704 to ba39961.

@coderabbitai full review

✅ Actions performed: Full review triggered.
Actionable comments posted: 18
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
clients/python/provider/superposition_provider/cac_config.py (1)
281-282: ⚠️ Potential issue | 🟠 Major

Missing `return` statement in `_get_from_eval_cache`.

The method calls `self.evaluation_cache.get(key)` but doesn't return the result, so callers will always receive `None`.

🐛 Proposed fix

```diff
 def _get_from_eval_cache(self, key: str) -> Optional[Any]:
-    self.evaluation_cache.get(key)
+    return self.evaluation_cache.get(key)
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@clients/python/provider/superposition_provider/cac_config.py` around lines 281 - 282, The helper method _get_from_eval_cache calls self.evaluation_cache.get(key) but never returns it, so callers always get None; update _get_from_eval_cache to return the result of self.evaluation_cache.get(key) (i.e., add a return statement) while keeping the signature Optional[Any], ensuring any cached value is propagated to callers.

clients/python/provider-sdk-tests/main.py (1)
336-453: ⚠️ Potential issue | 🟠 Major

Missing `provider.shutdown()` in first test phase.

The `LocalResolutionProvider` initialized at line 342-346 is never shut down before the next test phase begins. This could leave background polling tasks running and cause resource leaks or test interference.

🐛 Proposed fix

Add shutdown before the finally block ends:

```diff
 except Exception as error:
     print(f"\n❌ Error running tests: {error}")
     raise error
 finally:
+    await provider.shutdown()
     print("OpenFeature closed successfully")
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@clients/python/provider-sdk-tests/main.py` around lines 336 - 453, The LocalResolutionProvider instance created as provider (see LocalResolutionProvider(primary_source=primary_source, refresh_strategy=refresh_strategy) and await provider.initialize(EvaluationContext())) is never shut down; add an awaited call to provider.shutdown() after the test block (before the final/cleanup phase or before creating/initializing any new provider) to stop background polling tasks and release resources; ensure you await provider.shutdown() and consider clearing the provider via api.set_provider(None) if appropriate.
♻️ Duplicate comments (3)
clients/python/provider/superposition_provider/local_provider.py (2)
456-464: ⚠️ Potential issue | 🟡 Minor

Add null guard for `cached_config` to match `cached_experiments` pattern.

Line 459 accesses `self.cached_config.fetched_at` without checking if `cached_config` is `None`, while lines 460-464 properly guard `cached_experiments`. This inconsistency could cause `AttributeError` in edge cases.

🛡️ Proposed fix

```diff
-should_refresh_config = self.cached_config.fetched_at is None or is_elapsed(self.cached_config.fetched_at)
+should_refresh_config = (
+    self.cached_config is None
+    or self.cached_config.fetched_at is None
+    or is_elapsed(self.cached_config.fetched_at)
+)
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@clients/python/provider/superposition_provider/local_provider.py` around lines 456 - 464, The check for should_refresh_config accesses self.cached_config.fetched_at without guarding for self.cached_config being None; update the should_refresh_config expression to mirror the should_refresh_experiments pattern by first checking if self.cached_config is None or self.cached_config.fetched_at is None before calling is_elapsed, referencing is_elapsed, should_refresh_config, self.cached_config, and fetched_at so the null guard prevents AttributeError in edge cases.
378-387: ⚠️ Potential issue | 🟡 Minor

Add null guard for `cached_config` in non-init refresh path.

Line 382 accesses `self.cached_config.fetched_at` without checking for `None` when `init=False`. While this path should only be reached after successful initialization, defensive coding would add the same guard pattern.

🛡️ Proposed fix

```diff
 async def _fetch_and_cache_config(self, init: bool = False) -> None:
     """Fetch and cache configuration from primary/fallback sources."""
     try:
         # Try primary source
-        if_modified_since = None if init else self.cached_config.fetched_at
+        if_modified_since = (
+            None if init or self.cached_config is None
+            else self.cached_config.fetched_at
+        )
         response = await self.primary_source.fetch_config(if_modified_since)
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@clients/python/provider/superposition_provider/local_provider.py` around lines 378 - 387, In _fetch_and_cache_config, add a null guard before accessing self.cached_config.fetched_at when init is False: ensure you check that self.cached_config is not None (or use getattr(self.cached_config, "fetched_at", None)) and fall back to None for if_modified_since so the non-init refresh path won't raise if cached_config is unexpectedly None; update the variable used in the call to self.primary_source.fetch_config(if_modified_since) accordingly and keep the rest of the logic intact (refs: method _fetch_and_cache_config, attribute cached_config.fetched_at).

clients/python/provider/superposition_provider/file_data_source.py (1)
119-133: ⚠️ Potential issue | 🟡 Minor

Missing null check for `self.file_path` before file read.

If `FileDataSource` is initialized with `file_path=None`, calling `_fetch_config_with_filters()` will raise a `TypeError` when `open(None, 'r')` is called.

🛡️ Proposed fix

```diff
 async def _fetch_config_with_filters(
     self,
     context: Optional[Dict[str, Any]] = None,
     prefix_filter: Optional[List[str]] = None,
     if_modified_since: Optional[datetime] = None,
 ) -> FetchResponse[ConfigData]:
+    if not self.file_path:
+        raise ValueError("file_path is required to fetch configuration")
+
     if if_modified_since is not None:
         logger.debug("FileDataSource: ignoring if_modified_since, always reading fresh from file")
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@clients/python/provider/superposition_provider/file_data_source.py` around lines 119 - 133, The method _fetch_config_with_filters in FileDataSource should validate that self.file_path is not None/empty before attempting to open the file; add an explicit null/empty check at the top of the try block (or before it) and if invalid log a clear error referencing self.file_path and raise a descriptive exception (e.g., ValueError) or return an appropriate FetchResponse error instead of letting open(None, 'r') trigger a TypeError; ensure the log message mentions FileDataSource and the problematic file_path so it’s easy to trace.
🧹 Nitpick comments (9)
clients/python/provider/pyproject.toml (1)
14-15: Prefer compatible version ranges for newly added runtime deps.

If this wheel is intended to be consumed as a library, exact pins on `openfeature-sdk` and `watchdog` make downstream dependency resolution much harder. Unless the provider depends on exact patch behavior, use compatible ranges here instead of `==`.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@clients/python/provider/pyproject.toml` around lines 14 - 15, Replace the exact pinned dependencies "openfeature-sdk==0.8.3" and "watchdog==6.0.0" with compatible version ranges so downstream consumers can resolve ranges; for example change "openfeature-sdk==0.8.3" to "openfeature-sdk>=0.8,<0.9" (or use "~=0.8" if you prefer PEP 440 caret semantics) and "watchdog==6.0.0" to "watchdog>=6.0,<7.0" to allow patch updates while preventing breaking minor bumps.

clients/python/provider/superposition_provider/exp_config.py (1)
116-123: Extract the repeated SDK client bootstrap.

The auth/client setup is duplicated in both fetch paths. A small helper would keep future auth or endpoint changes from drifting between experiments and experiment-groups.

♻️ Possible cleanup

```diff
 class ExperimentationConfig():
+    @staticmethod
+    def _create_client(superposition_options: SuperpositionOptions) -> Superposition:
+        resolver, schemes = bearer_auth_config(token=superposition_options.token)
+        sdk_config = Config(
+            endpoint_uri=superposition_options.endpoint,
+            http_auth_scheme_resolver=resolver,
+            http_auth_schemes=schemes,
+        )
+        return Superposition(config=sdk_config)
+
     @staticmethod
     async def _get_experiments(superposition_options: SuperpositionOptions) -> Optional[list[FfiExperiment]]:
         ...
-        (resolver, schemes) = bearer_auth_config(
-            token=superposition_options.token
-        )
-        sdk_config = Config(
-            endpoint_uri=superposition_options.endpoint,
-            http_auth_scheme_resolver=resolver,
-            http_auth_schemes=schemes
-        )
-
-        # Create Superposition client
-        client = Superposition(config=sdk_config)
+        client = ExperimentationConfig._create_client(superposition_options)
```

Also applies to: 159-166

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@clients/python/provider/superposition_provider/exp_config.py` around lines 116 - 123, Extract the duplicated SDK client bootstrap into a single helper function (e.g., create_superposition_sdk_config) that takes superposition_options (or token and endpoint) and returns the Config built by calling bearer_auth_config(token=...) and then Config(endpoint_uri=..., http_auth_scheme_resolver=resolver, http_auth_schemes=schemes); replace the two inline blocks that call bearer_auth_config and construct Config (the one using superposition_options.token and superposition_options.endpoint) with calls to this helper so both fetch paths share the same bootstrap logic.
33-33: Redundant `RefreshStrategy` export under two names.

`RefreshStrategy` is exported directly (line 23, included in `__all__` at line 55) and also aliased as `RefreshStrategyType` (line 33, line 63). This creates potential confusion. Consider keeping only one export name.

Also applies to: 55-55, 63-63

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@clients/python/provider/superposition_provider/__init__.py` at line 33, The module currently exports RefreshStrategy twice (as RefreshStrategy and as an alias RefreshStrategyType) causing confusion; remove the redundant alias by deleting the alias import/export of RefreshStrategyType and any references to RefreshStrategyType in __all__ so only RefreshStrategy remains exported (update any internal uses that referenced RefreshStrategyType to use RefreshStrategy instead); specifically edit the import/export lines that declare RefreshStrategyType and remove "RefreshStrategyType" from the __all__ list.

clients/OPENFEATURE_PROVIDER_IMPLEMENTATION.md (1)
37-37: Consider adding language specifiers to pseudocode blocks.

Multiple code blocks use pseudocode/ASCII diagrams without language specifiers. Adding `text` or `plaintext` as the language identifier would satisfy markdown linters and improve rendering consistency.

Also applies to: 64-64, 104-104, 128-128

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@clients/OPENFEATURE_PROVIDER_IMPLEMENTATION.md` at line 37, Several fenced code blocks in the document are plain pseudocode/ASCII diagrams and lack a language specifier; update each such block (the fenced blocks that currently begin with ``` and contain pseudocode/ASCII diagrams) to use a language identifier like ```text or ```plaintext so markdown linters render them correctly and avoid warnings; search for the ASCII/pseudocode blocks and replace the opening triple backticks with ```text or ```plaintext.

clients/python/provider-sdk-tests/main.py (1)
316-688: Consider extracting duplicated test logic into a helper function.

The same set of 8 pricing/currency tests is repeated three times with only the provider type changing. Extracting these into a parameterized helper would reduce duplication and make maintenance easier.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@clients/python/provider-sdk-tests/main.py` around lines 316 - 688, Duplicate test blocks for pricing/currency should be extracted into a reusable helper; create a helper like run_pricing_tests_sync(client) for synchronous providers and run_pricing_tests_async(client) for async providers (or a single run_pricing_tests(client, is_async)) that runs the eight assertions currently using client.get_integer_value / client.get_string_value and client.get_integer_value_async / client.get_string_value_async, then replace the repeated blocks after creating LocalResolutionProvider, SuperpositionAPIProvider, and the fallback-local provider with calls to the helper(s); reference the provider constructors LocalResolutionProvider, SuperpositionAPIProvider, HttpDataSource(wrong_http_options), and the client methods get_integer_value/get_string_value and get_integer_value_async/get_string_value_async to locate and refactor the code.
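A parameterized helper along the lines the nitpick suggests could be sketched as below. The flag keys, defaults, and accessor signatures are hypothetical placeholders; the real suite would pass the OpenFeature client's `get_*_value` (or `*_async`) methods for whichever provider is under test.

```python
import asyncio
from typing import Any, Awaitable, Callable, List, Tuple

# Illustrative flag keys and defaults; the real tests define their own.
PRICING_INT_FLAGS = [("base_price", 0), ("surge_multiplier", 1)]
PRICING_STR_FLAGS = [("currency", "USD")]


async def run_pricing_tests(
    get_int: Callable[[str, int], Awaitable[int]],
    get_str: Callable[[str, str], Awaitable[str]],
) -> List[Tuple[str, Any]]:
    """Run the shared pricing/currency checks against any provider's accessors."""
    results: List[Tuple[str, Any]] = []
    for key, default in PRICING_INT_FLAGS:
        results.append((key, await get_int(key, default)))
    for key, default in PRICING_STR_FLAGS:
        results.append((key, await get_str(key, default)))
    return results
```

Each test phase then becomes one call to `run_pricing_tests(...)` with that provider's accessors, so the eight checks live in a single place.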
86-90: Use bare `raise` here.

`raise e` resets the traceback to this helper, which makes upstream SDK failures much harder to debug. Either drop the catch or re-raise with bare `raise`.

♻️ Proposed fix

```diff
-    except Exception as e:
-        raise e
+    except Exception:
+        raise
@@
-    except Exception as e:
-        raise e
+    except Exception:
+        raise
```

Also applies to: 150-154

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@clients/python/provider/superposition_provider/http_data_source.py` around lines 86 - 90, The except blocks in http_data_source.py that catch Exception and re-raise with "raise e" (specifically the UnknownApiError handling and the broader Exception handler around FetchResponse.not_modified()) must be changed to preserve tracebacks: either remove the generic "except Exception as e" entirely or re-raise using a bare "raise" instead of "raise e"; update the UnknownApiError block to return FetchResponse.not_modified() and let other exceptions propagate with bare raise (and apply the same change to the similar handler around lines 150-154), ensuring you edit the except blocks in the functions/methods where UnknownApiError is caught rather than altering unrelated logic.

clients/python/provider/superposition_provider/local_provider.py (2)
158-162: Replace `()` with `pass` for clarity.

The empty tuple `()` is a valid but confusing no-op. Use `pass` explicitly.

✨ Proposed fix

```diff
 match self.refresh_strategy:
     case OnDemandStrategy():
         logger.debug("ON_DEMAND strategy: data might be stale, use async resolve to ensure freshness")
     case _:
-        ()
+        pass
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@clients/python/provider/superposition_provider/local_provider.py` around lines 158 - 162, The default branch of the match on self.refresh_strategy currently uses the confusing no-op expression `()`; replace that with the explicit no-op `pass` in the match _ case (the branch alongside `case OnDemandStrategy():`) so the intent is clear and Pythonic.
42-47: Avoid mutable default argument from function call.

`default_on_demand_strategy()` is called at function definition time, creating a shared default instance. This can cause subtle bugs if the strategy object is mutated.

♻️ Proposed fix

```diff
 def __init__(
     self,
     primary_source: SuperpositionDataSource,
     fallback_source: Optional[SuperpositionDataSource] = None,
-    refresh_strategy: RefreshStrategy = default_on_demand_strategy(),
+    refresh_strategy: Optional[RefreshStrategy] = None,
 ):
+    if refresh_strategy is None:
+        refresh_strategy = default_on_demand_strategy()
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@clients/python/provider/superposition_provider/local_provider.py` around lines 42 - 47, The __init__ in local_provider.py currently uses default_on_demand_strategy() as a default argument which is evaluated at import time and can create a shared mutable RefreshStrategy instance; change the signature to accept Optional[RefreshStrategy] = None and inside __init__ set self.refresh_strategy = refresh_strategy or default_on_demand_strategy() so a fresh strategy is created per instance, ensuring RefreshStrategy instances are not shared across objects.

clients/python/provider/superposition_provider/interfaces.py (1)
339-344: Format `_to_bool` to follow PEP 8 (one statement per line).

The static analyzer flags multiple statements on one line. While functional, this style is discouraged.

✨ Proposed fix

```diff
 def _to_bool(value: Any) -> Optional[bool]:
-    if isinstance(value, bool): return value
+    if isinstance(value, bool):
+        return value
     if isinstance(value, str):
-        if value.lower() == "true": return True
-        if value.lower() == "false": return False
-    if isinstance(value, (int, float)): return value != 0
+        if value.lower() == "true":
+            return True
+        if value.lower() == "false":
+            return False
+    if isinstance(value, (int, float)):
+        return value != 0
     return None
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@clients/python/provider/superposition_provider/interfaces.py` around lines 339 - 344, The helper function _to_bool contains multiple statements on single lines which violates PEP 8; rewrite it so each conditional and return is on its own line (keep the same logic for bool, str checks with .lower() comparisons, numeric check using isinstance(value, (int, float)) returning value != 0) and add an explicit return None at the end so the function’s control flow is clear; update the body of _to_bool accordingly without changing its behavior.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@clients/python/provider-sdk-tests/main.py`:
- Around line 571-688: The fallback test phase creates a LocalResolutionProvider
instance (provider) and initializes it but never calls provider.shutdown(), so
add a shutdown call for the provider to ensure proper cleanup; after the test
assertions complete (before printing the final "OpenFeature closed successfully"
or in the finally block) call provider.shutdown() on the LocalResolutionProvider
instance (and await it if it's async) to mirror the other test phases that
properly tear down the provider.
- Around line 455-569: The SuperpositionAPIProvider instance created in the test
block (variable provider, constructed via
SuperpositionAPIProvider(http_options)) is not being shut down; add a call to
provider.shutdown() in the finally cleanup (or just before the final print) to
mirror the first test phase cleanup so resources are released properly; ensure
you call provider.shutdown() only if provider is defined/initialized to avoid
NameError.
In `@clients/python/provider/examples/local_file_example.py`:
- Line 15: Remove the unused RefreshStrategy import from the import statement
that currently reads "from superposition_provider.types import OnDemandStrategy,
RefreshStrategy" so only OnDemandStrategy is imported; update the import to
exclude RefreshStrategy and verify there are no references to RefreshStrategy
elsewhere in the file (e.g., any example code or function calls) before
committing.
In `@clients/python/provider/examples/local_file_watch_example.py`:
- Around line 31-37: Move the call to await provider.initialize() inside the try
block that surrounds provider usage so that any exception during initialization
still triggers provider.shutdown() (or the existing shutdown path) in the
finally/except cleanup; specifically update the block where provider is created
via builder.create_local_provider_with_files and ensure provider.initialize() is
invoked inside the try and provider.shutdown() is always called in finally
(apply the same change to the second occurrence around the code between lines
50-59), referencing the variables/methods provider,
builder.create_local_provider_with_files, provider.initialize, and
provider.shutdown to locate and fix the code.
In `@clients/python/provider/examples/local_with_fallback_example.py`:
- Line 19: The example incorrectly awaits and passes unsupported kwargs to
create_local_provider; update calls to use the synchronous builder API signature
by calling create_local_provider(endpoint=..., token=..., org_id=...,
workspace_id=..., refresh_interval=...) (remove any extra kwargs like
SuperpositionOptions/PollingStrategy) and do not await its return; also adjust
the fallback/example usages that currently await or pass unsupported options
(referencing create_local_provider in builder.py and the examples in
local_with_fallback_example.py) so the provider is created synchronously with
only the supported parameters.
In `@clients/python/provider/examples/README.md`:
- Around line 158-163: The sample config TOML under [default-configs] is not
parseable because `enum = [...]` is placeholder text; update the `currency`
entry in the config example (the `currency` key within the [default-configs]
block) to use a valid TOML array or remove the enum field entirely (e.g.,
replace the placeholder with a real array of string values like ["Rupee",
"Dollar"] or drop the enum line) so the TOML block parses correctly along with
the existing `timeout` and `price` entries.
- Around line 185-189: The README's dependency step only installs
superposition-sdk and openfeature but the examples import
superposition_provider, so in a fresh checkout the examples will raise
ModuleNotFoundError; update the installation instructions to also install the
local provider package (the package providing the superposition_provider
module)—e.g., add a command that installs the local package in editable mode
with pip (so the examples can import superposition_provider) and include a brief
note clarifying that the local package must be installed before running the
examples.
In `@clients/python/provider/PROVIDER_ARCHITECTURE.md`:
- Around line 119-129: The snippet uses the old HttpDataSource constructor;
update it to pass a SuperpositionOptions instance to HttpDataSource (import
SuperpositionOptions) and populate that options object with endpoint, token,
org_id and workspace_id (or renamed keys if the options use different names),
e.g., create a SuperpositionOptions(...) and pass it into
HttpDataSource(superposition_options) so the code matches the new
HttpDataSource(Signature: SuperpositionOptions) API instead of endpoint/token
kwargs.
- Around line 302-316: The Error Handling section conflicts with
README_REFACTORED.md: update this section to reflect the actual runtime behavior
of the provider (resolve_bool/resolve_string/resolve_* return Resolution objects
with an error_code field rather than throwing FlagNotFound/TypeMismatch); remove
or reframe the statements that say callers should catch
FlagNotFound/TypeMismatch, and instead document that DataSourceError and
ProviderError are only thrown for fatal init/fetch failures while flag
evaluation returns resolution details (including error_code and default value
usage) for type mismatches or missing flags; reference
resolve_bool/resolve_string/resolve_* and the exception types DataSourceError
and ProviderError in the text so integrators can find the relevant code paths.
In `@clients/python/provider/README_REFACTORED.md`:
- Around line 155-164: The README example uses the old HttpDataSource
constructor; update it to build and pass a SuperpositionOptions instance instead
(create a SuperpositionOptions with endpoint, token, org_id, workspace_id) and
then construct HttpDataSource with that options object; adjust any other data
source examples (e.g., FileDataSource) and usages in LocalResolutionProvider or
PollingStrategy to follow the new SuperpositionOptions-based constructor pattern
so the example no longer raises TypeError.
- Around line 319-326: The example MockDataSource.fetch_config uses the old
ConfigData field name "config"; update the call constructing ConfigData to use
the current field name "data" (i.e., pass data={"flag1":"value1"}) so the
example aligns with how ConfigData is constructed elsewhere and remains
compatible with FetchResponse.data/ConfigData usage in tests and docs.
In `@clients/python/provider/superposition_provider/builder.py`:
- Around line 79-82: with_watch_refresh() sets refresh_strategy to "watch" but
_build_local_provider() still instantiates an HttpDataSource as the primary
source, producing a LocalResolutionProvider(HttpDataSource, ...,
WatchStrategy()) which cannot be watched; change _build_local_provider() (and
parallel logic in the 124-160 block) to detect when self.refresh_strategy ==
"watch" and create a file-backed primary (e.g., FileDataSource or the existing
local file loader) instead of HttpDataSource, or otherwise construct
LocalResolutionProvider with a watchable data source, so that
LocalResolutionProvider and WatchStrategy operate on a truly watchable source
(refer to with_watch_refresh, _build_local_provider, LocalResolutionProvider,
HttpDataSource, and WatchStrategy).
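A minimal sketch of the selection logic this fix calls for, assuming a hypothetical helper name and a simplified `(kind, config)` return shape; the real builder would construct `FileDataSource`/`HttpDataSource` instances instead:

```python
# Pick a watchable (file-backed) primary source whenever the "watch"
# refresh strategy is configured, instead of always building HTTP.
def choose_primary_source(refresh_strategy: str, file_path, http_options):
    if refresh_strategy == "watch":
        if not file_path:
            # WatchStrategy cannot observe an HTTP endpoint.
            raise ValueError("watch refresh requires a file-backed data source")
        return ("file", file_path)
    return ("http", http_options)
```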
In `@clients/python/provider/superposition_provider/conversions.py`:
- Around line 42-52: The loop building query_data must JSON-encode string values
so that downstream query_data_to_evaluation_context() which json.loads()'s every
string doesn't coerce literal strings like "true", "123", or "null" into other
types; in conversions.py change the branch for isinstance(value, str) to set
query_data[key] to json.dumps(value) (preserving the existing try/except
behavior for non-strings: attempt json.dumps(value) and on TypeError/ValueError
log via logger.warning and fall back to str(value)), keeping the same keys and
error logging semantics.
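A sketch of the corrected loop under the assumptions above (function and variable names are illustrative, not the exact ones in conversions.py):

```python
import json
import logging

logger = logging.getLogger(__name__)

def to_query_data(attributes: dict) -> dict:
    """JSON-encode every value, including plain strings, so a later
    json.loads() round-trips "true"/"123"/"null" back to the original
    str instead of coercing the type."""
    query_data = {}
    for key, value in attributes.items():
        if isinstance(value, str):
            query_data[key] = json.dumps(value)  # "true" -> '"true"'
        else:
            try:
                query_data[key] = json.dumps(value)
            except (TypeError, ValueError):
                logger.warning("Could not JSON-encode %s, using str()", key)
                query_data[key] = str(value)
    return query_data
```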
In `@clients/python/provider/superposition_provider/file_data_source.py`:
- Around line 46-55: The on_modified handler uses
asyncio.create_task(self.callback(self.file_path)) without keeping a reference
so exceptions can be lost; change it to create and retain the Task (e.g., assign
to self._tasks or similar) and attach a done callback that logs exceptions and
removes the task from the collection. Specifically, in on_modified (and the
class that defines self.callback and self.file_path) create the task via
asyncio.create_task(...), add task.add_done_callback(lambda t: handle/log
t.exception() if t.exception() is not None and remove the task from
self._tasks), and ensure self._tasks is initialized (e.g., set()) so tasks are
not garbage-collected and their exceptions are surfaced.
In `@clients/python/provider/superposition_provider/interfaces.py`:
- Around line 257-262: The async error handler in resolve_typed_async is missing
the default value in the returned FlagResolutionDetails, unlike the sync
resolve_typed; update the except block inside resolve_typed_async to return
FlagResolutionDetails with the default as the first/primary value (e.g.,
FlagResolutionDetails(default, reason=..., error_code=ErrorCode.GENERAL,
error_message=...)) so the fallback is preserved; reference resolve_typed_async,
resolve_typed, FlagResolutionDetails, default, and ErrorCode.GENERAL to locate
and mirror the sync path behavior.
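A runnable sketch of the fixed error path, with simplified stand-ins for `FlagResolutionDetails` and the error codes (the real classes come from the OpenFeature SDK):

```python
import asyncio

class FlagResolutionDetails:
    def __init__(self, value, reason=None, error_code=None, error_message=None):
        self.value = value
        self.reason = reason
        self.error_code = error_code
        self.error_message = error_message

async def resolve_typed_async(resolver, default):
    try:
        return FlagResolutionDetails(await resolver())
    except Exception as e:
        # Previously the default was dropped here; keep it as the value,
        # mirroring the sync resolve_typed path.
        return FlagResolutionDetails(default, reason="ERROR",
                                     error_code="GENERAL",
                                     error_message=str(e))

async def failing():
    raise RuntimeError("boom")

async def succeeding():
    return True
```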
In `@clients/python/provider/superposition_provider/remote_provider.py`:
- Around line 172-179: The sync resolution helpers (e.g.,
resolve_boolean_details) call resolve_bool which relies on async-only
resolve_all_features_with_filter and therefore always fails; update
resolve_boolean_details to immediately raise NotImplementedError with a clear
message (e.g., "Synchronous resolution is not supported; use the async
resolve_*_details methods") instead of calling resolve_bool, and apply the same
change to resolve_string_details, resolve_integer_details,
resolve_float_details, and resolve_object_details so all synchronous detail
methods explicitly and consistently raise NotImplementedError with that clear
guidance.
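A sketch of the fail-fast shape (class body is an illustrative fragment, not the full provider):

```python
_SYNC_UNSUPPORTED = (
    "Synchronous resolution is not supported; "
    "use the async resolve_*_details methods"
)

class SuperpositionAPIProviderSketch:
    # Each sync detail method raises immediately instead of calling
    # async-only internals that would always fail at runtime.
    def resolve_boolean_details(self, flag_key, default_value,
                                evaluation_context=None):
        raise NotImplementedError(_SYNC_UNSUPPORTED)

    def resolve_string_details(self, flag_key, default_value,
                               evaluation_context=None):
        raise NotImplementedError(_SYNC_UNSUPPORTED)
```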
In `@crates/superposition_core/src/ffi.rs`:
- Around line 223-232: The FFI currently treats each HashMap<String,String>
value as JSON and calls serde_json::from_str, which rejects bare strings; update
the parsing so that when converting the incoming dimension_data (used by
ffi_parse_config_file_with_filters) and the map in
ProviderCache::get_applicable_variants you attempt serde_json::from_str on each
value but on Err fall back to producing
serde_json::Value::String(original_value.clone()); adjust json_from_map (or the
mapping logic where json_from_map is invoked) to implement this
try-parse-then-fallback behavior so Kotlin/Java callers can pass plain strings
like "Boston" without pre-serializing them.
In `@crates/superposition_provider/tests/integration_test.rs`:
- Around line 409-426: The test leaks LocalResolutionProvider refresh loops
because provider_clone.initialize(...) and the fallback provider aren't shut
down; after using provider_clone (the instance created with provider.clone())
call its async shutdown/stop method and await it (e.g.,
provider_clone.shutdown().await or the equivalent API) before the block ends,
and likewise locate the fallback provider instance used later and invoke its
shutdown/stop and await it before leaving that scope so the background polling
tasks are terminated; ensure you use the actual async shutdown method provided
by LocalResolutionProvider and handle/unwrap any returned Result.
---
Outside diff comments:
In `@clients/python/provider-sdk-tests/main.py`:
- Around line 336-453: The LocalResolutionProvider instance created as provider
(see LocalResolutionProvider(primary_source=primary_source,
refresh_strategy=refresh_strategy) and await
provider.initialize(EvaluationContext())) is never shut down; add an awaited
call to provider.shutdown() after the test block (before the final/cleanup phase
or before creating/initializing any new provider) to stop background polling
tasks and release resources; ensure you await provider.shutdown() and consider
clearing the provider via api.set_provider(None) if appropriate.
In `@clients/python/provider/superposition_provider/cac_config.py`:
- Around line 281-282: The helper method _get_from_eval_cache calls
self.evaluation_cache.get(key) but never returns it, so callers always get None;
update _get_from_eval_cache to return the result of
self.evaluation_cache.get(key) (i.e., add a return statement) while keeping the
signature Optional[Any], ensuring any cached value is propagated to callers.
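A minimal sketch of the one-line fix, with a plain dict standing in for the real evaluation cache:

```python
from typing import Any, Optional

class EvalCacheHolder:
    def __init__(self):
        self.evaluation_cache: dict = {}

    def _get_from_eval_cache(self, key: str) -> Optional[Any]:
        # Previously the lookup result was dropped; return it.
        return self.evaluation_cache.get(key)
```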
---
Duplicate comments:
In `@clients/python/provider/superposition_provider/file_data_source.py`:
- Around line 119-133: The method _fetch_config_with_filters in FileDataSource
should validate that self.file_path is not None/empty before attempting to open
the file; add an explicit null/empty check at the top of the try block (or
before it) and if invalid log a clear error referencing self.file_path and raise
a descriptive exception (e.g., ValueError) or return an appropriate
FetchResponse error instead of letting open(None, 'r') trigger a TypeError;
ensure the log message mentions FileDataSource and the problematic file_path so
it’s easy to trace.
In `@clients/python/provider/superposition_provider/local_provider.py`:
- Around line 456-464: The check for should_refresh_config accesses
self.cached_config.fetched_at without guarding for self.cached_config being
None; update the should_refresh_config expression to mirror the
should_refresh_experiments pattern by first checking if self.cached_config is
None or self.cached_config.fetched_at is None before calling is_elapsed,
referencing is_elapsed, should_refresh_config, self.cached_config, and
fetched_at so the null guard prevents AttributeError in edge cases.
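A sketch of the guarded check, with `is_elapsed` and the `cached_config` shape (an object carrying a `fetched_at` timestamp) simplified into stand-ins:

```python
import time
from types import SimpleNamespace

def is_elapsed(fetched_at: float, ttl_seconds: float) -> bool:
    return (time.time() - fetched_at) >= ttl_seconds

def should_refresh_config(cached_config, ttl_seconds: float) -> bool:
    # Guard both a missing cache and a missing timestamp before
    # touching fetched_at, mirroring should_refresh_experiments.
    if cached_config is None or cached_config.fetched_at is None:
        return True
    return is_elapsed(cached_config.fetched_at, ttl_seconds)
```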
- Around line 378-387: In _fetch_and_cache_config, add a null guard before
accessing self.cached_config.fetched_at when init is False: ensure you check
that self.cached_config is not None (or use getattr(self.cached_config,
"fetched_at", None)) and fall back to None for if_modified_since so the non-init
refresh path won't raise if cached_config is unexpectedly None; update the
variable used in the call to self.primary_source.fetch_config(if_modified_since)
accordingly and keep the rest of the logic intact (refs: method
_fetch_and_cache_config, attribute cached_config.fetched_at).
---
Nitpick comments:
In `@clients/OPENFEATURE_PROVIDER_IMPLEMENTATION.md`:
- Line 37: Several fenced code blocks in the document are plain pseudocode/ASCII
diagrams and lack a language specifier; update each such block (the fenced
blocks that currently begin with ``` and contain pseudocode/ASCII diagrams) to
use a language identifier like ```text or ```plaintext so markdown linters
render them correctly and avoid warnings; search for the ASCII/pseudocode blocks
(e.g., the pseudocode/diagram blocks shown around the sections referenced in the
review) and replace the opening triple backticks with ```text or ```plaintext.
In `@clients/python/provider-sdk-tests/main.py`:
- Around line 316-688: Duplicate test blocks for pricing/currency should be
extracted into a reusable helper; create a helper like
run_pricing_tests_sync(client) for synchronous providers and
run_pricing_tests_async(client) for async providers (or a single
run_pricing_tests(client, is_async)) that runs the eight assertions currently
using client.get_integer_value / client.get_string_value and
client.get_integer_value_async / client.get_string_value_async, then replace the
repeated blocks after creating LocalResolutionProvider,
SuperpositionAPIProvider, and the fallback-local provider with calls to the
helper(s); reference the provider constructors LocalResolutionProvider,
SuperpositionAPIProvider, HttpDataSource(wrong_http_options), and the client
methods get_integer_value/get_string_value and
get_integer_value_async/get_string_value_async to locate and refactor the code.
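A sketch of what the extracted helper could look like: the eight repeated blocks collapse into a table of (attributes, expected price, expected currency) cases. The `make_context` parameter is a hypothetical hook for building `EvaluationContext` objects; the expected values come from the assertions in main.py:

```python
CASES = [
    ({}, 10000, "Rupee"),
    ({"name": "Agush"}, 5000, "Rupee"),
    ({"name": "Sauyav", "city": "Boston"}, 5000, "Dollar"),
    ({"name": "John"}, 10000, "Rupee"),
    ({"name": "Sauyav", "city": "Berlin"}, 5000, "Euro"),
    ({"name": "John", "city": "Boston"}, 10000, "Dollar"),
    ({"name": "karbik"}, 1, "Rupee"),
    ({"name": "karbik", "city": "Boston"}, 1, "Dollar"),
]

def run_pricing_tests_sync(client, make_context):
    # Drives the same eight assertions the inline blocks repeat today.
    for attributes, expected_price, expected_currency in CASES:
        ctx = make_context(attributes)
        assert client.get_integer_value("price", 0, ctx) == expected_price
        assert client.get_string_value("currency", "", ctx) == expected_currency
```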
In `@clients/python/provider/pyproject.toml`:
- Around line 14-15: Replace the exact pinned dependencies
"openfeature-sdk==0.8.3" and "watchdog==6.0.0" with compatible version ranges so
downstream consumers can resolve ranges; for example change
"openfeature-sdk==0.8.3" to "openfeature-sdk>=0.8,<0.9" (or use "~=0.8" if you
prefer PEP 440 caret semantics) and "watchdog==6.0.0" to "watchdog>=6.0,<7.0" to
allow patch updates while preventing breaking minor bumps.
In `@clients/python/provider/superposition_provider/__init__.py`:
- Line 33: The module currently exports RefreshStrategy twice (as
RefreshStrategy and as an alias RefreshStrategyType) causing confusion; remove
the redundant alias by deleting the alias import/export of RefreshStrategyType
and any references to RefreshStrategyType in __all__ so only RefreshStrategy
remains exported (update any internal uses that referenced RefreshStrategyType
to use RefreshStrategy instead); specifically edit the import/export lines that
declare RefreshStrategyType and remove "RefreshStrategyType" from the __all__
list.
In `@clients/python/provider/superposition_provider/exp_config.py`:
- Around line 116-123: Extract the duplicated SDK client bootstrap into a single
helper function (e.g., create_superposition_sdk_config) that takes
superposition_options (or token and endpoint) and returns the Config built by
calling bearer_auth_config(token=...) and then Config(endpoint_uri=...,
http_auth_scheme_resolver=resolver, http_auth_schemes=schemes); replace the two
inline blocks that call bearer_auth_config and construct Config (the one using
superposition_options.token and superposition_options.endpoint) with calls to
this helper so both fetch paths share the same bootstrap logic.
In `@clients/python/provider/superposition_provider/http_data_source.py`:
- Around line 86-90: The except blocks in http_data_source.py that catch
Exception and re-raise with "raise e" (specifically the UnknownApiError handling
and the broader Exception handler around FetchResponse.not_modified()) must be
changed to preserve tracebacks: either remove the generic "except Exception as
e" entirely or re-raise using a bare "raise" instead of "raise e"; update the
UnknownApiError block to return FetchResponse.not_modified() and let other
exceptions propagate with bare raise (and apply the same change to the similar
handler around lines 150-154), ensuring you edit the except blocks in the
functions/methods where UnknownApiError is caught rather than altering unrelated
logic.
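A sketch of the traceback-preserving pattern, with a stand-in exception type and a string standing in for `FetchResponse.not_modified()`:

```python
class UnknownApiError(Exception):
    """Stand-in for the SDK error treated as a 304/not-modified."""

def fetch(call):
    try:
        return call()
    except UnknownApiError:
        return "not_modified"  # stands in for FetchResponse.not_modified()
    except Exception:
        # Bare `raise` preserves the original traceback;
        # `raise e` would rewrite it at this line.
        raise
```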
In `@clients/python/provider/superposition_provider/interfaces.py`:
- Around line 339-344: The helper function _to_bool contains multiple statements
on single lines which violates PEP 8; rewrite it so each conditional and return
is on its own line (keep the same logic for bool, str checks with .lower()
comparisons, numeric check using isinstance(value, (int, float)) returning value
!= 0) and add an explicit return None at the end so the function’s control flow
is clear; update the body of _to_bool accordingly without changing its behavior.
In `@clients/python/provider/superposition_provider/local_provider.py`:
- Around line 158-162: The default branch of the match on self.refresh_strategy
currently uses the confusing no-op expression `()`; replace that with the
explicit no-op `pass` in the match _ case (the branch alongside `case
OnDemandStrategy():`) so the intent is clear and Pythonic.
- Around line 42-47: The __init__ in local_provider.py currently uses
default_on_demand_strategy() as a default argument which is evaluated at import
time and can create a shared mutable RefreshStrategy instance; change the
signature to accept Optional[RefreshStrategy] = None and inside __init__ (method
__init__ of the class in local_provider.py) set self.refresh_strategy =
refresh_strategy or default_on_demand_strategy() so a fresh strategy is created
per instance, ensuring RefreshStrategy instances are not shared across objects.
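A sketch of the standard `Optional`-default pattern this asks for, with a hypothetical stand-in for the strategy class:

```python
from typing import Optional

class OnDemandStrategy:
    """Stand-in for the real refresh strategy type."""

def default_on_demand_strategy() -> OnDemandStrategy:
    return OnDemandStrategy()

class LocalResolutionProviderSketch:
    def __init__(self, refresh_strategy: Optional[OnDemandStrategy] = None):
        # Evaluated per call, not once at import time, so instances
        # never share a mutable strategy object.
        self.refresh_strategy = refresh_strategy or default_on_demand_strategy()
```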
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: cfcfe2f1-35a0-40b3-916b-e1b78ed10844
⛔ Files ignored due to path filters (1)
clients/python/provider-sdk-tests/uv.lock is excluded by !**/*.lock
📒 Files selected for processing (37)
- clients/OPENFEATURE_PROVIDER_IMPLEMENTATION.md
- clients/java/bindings/src/main/kotlin/uniffi/superposition_client/superposition_client.kt
- clients/python/bindings/superposition_bindings/superposition_client.py
- clients/python/provider-sdk-tests/config.toml
- clients/python/provider-sdk-tests/main.py
- clients/python/provider/PROVIDER_ARCHITECTURE.md
- clients/python/provider/README_REFACTORED.md
- clients/python/provider/examples/README.md
- clients/python/provider/examples/__init__.py
- clients/python/provider/examples/config.toml
- clients/python/provider/examples/local_file_example.py
- clients/python/provider/examples/local_file_watch_example.py
- clients/python/provider/examples/local_http_example.py
- clients/python/provider/examples/local_with_fallback_example.py
- clients/python/provider/examples/polling_example.py
- clients/python/provider/pyproject.toml
- clients/python/provider/superposition_provider/__init__.py
- clients/python/provider/superposition_provider/builder.py
- clients/python/provider/superposition_provider/cac_config.py
- clients/python/provider/superposition_provider/conversions.py
- clients/python/provider/superposition_provider/data_source.py
- clients/python/provider/superposition_provider/exp_config.py
- clients/python/provider/superposition_provider/file_data_source.py
- clients/python/provider/superposition_provider/http_data_source.py
- clients/python/provider/superposition_provider/interfaces.py
- clients/python/provider/superposition_provider/local_provider.py
- clients/python/provider/superposition_provider/remote_provider.py
- clients/python/provider/superposition_provider/types.py
- crates/superposition_core/src/ffi.rs
- crates/superposition_provider/src/data_source.rs
- crates/superposition_provider/src/data_source/file.rs
- crates/superposition_provider/src/data_source/http.rs
- crates/superposition_provider/src/local_provider.rs
- crates/superposition_provider/src/remote_provider.rs
- crates/superposition_provider/src/traits.rs
- crates/superposition_provider/tests/config.toml
- crates/superposition_provider/tests/integration_test.rs
💤 Files with no reviewable changes (1)
- crates/superposition_provider/src/remote_provider.rs
```python
try:
    print("\n=== Starting OpenFeature tests ===\n")
    print(f"Running on CPU architecture: {platform.machine()}")

    print("Testing SuperpositionAPIProvider")

    provider = SuperpositionAPIProvider(http_options)
    print("Provider created successfully")

    api.set_provider(provider)
    print("Provider initialized successfully\n")

    client = api.get_client()

    # Test 1: Default values (no context)
    print("Test 1: Default values (no context)")
    evaluation_context = EvaluationContext()
    price = await client.get_integer_value_async("price", 0, evaluation_context)
    currency = await client.get_string_value_async("currency", "", evaluation_context)
    print(f" - Retrieved price: {price}, currency: {currency}")
    assert price == 10000, "Default price should be 10000"
    assert currency == "Rupee", "Default currency should be Rupee"
    print(" ✓ Test passed\n")

    # Test 2: Platinum customer - Agush, no city
    print("Test 2: Platinum customer - Agush (no city)")
    evaluation_context = EvaluationContext(attributes={"name": "Agush"})
    price = await client.get_integer_value_async("price", 0, evaluation_context)
    currency = await client.get_string_value_async("currency", "", evaluation_context)
    print(f" - Retrieved price: {price}, currency: {currency}")
    assert price == 5000, "Price should be default 5000 (platinum customer)"
    assert currency == "Rupee", "Currency should be default Rupee"
    print(" ✓ Test passed\n")

    # Test 3: Platinum customer - Sauyav, no city
    print("Test 3: Platinum customer - Sauyav (no city)")
    evaluation_context = EvaluationContext(
        attributes={"name": "Sauyav", "city": "Boston"}
    )
    price = await client.get_integer_value_async("price", 0, evaluation_context)
    currency = await client.get_string_value_async("currency", "", evaluation_context)
    print(f" - Retrieved price: {price}, currency: {currency}")
    assert price == 5000, "Price should be 5000"
    assert currency == "Dollar", "Currency should be dollar"
    print(" ✓ Test passed\n")

    print("Test 4: Regular customer - John (no city)")
    evaluation_context = EvaluationContext(attributes={"name": "John"})
    price = await client.get_integer_value_async("price", 0, evaluation_context)
    currency = await client.get_string_value_async("currency", "", evaluation_context)
    print(f" - Retrieved price: {price}, currency: {currency}")
    assert price == 10000, "Price should be default 10000"
    assert currency == "Rupee", "Currency should be default Rupee"
    print(" ✓ Test passed\n")

    print("Test 5: Platinum customer - Sauyav with city Berlin")
    evaluation_context = EvaluationContext(
        attributes={"name": "Sauyav", "city": "Berlin"}
    )
    price = await client.get_integer_value_async("price", 0, evaluation_context)
    currency = await client.get_string_value_async("currency", "", evaluation_context)
    print(f" - Retrieved price: {price}, currency: {currency}")
    assert price == 5000, "Price should be 5000"
    assert currency == "Euro", "Currency should be Euro in Berlin"
    print(" ✓ Test passed\n")

    print("Test 6: Regular customer - John with city Boston")
    evaluation_context = EvaluationContext(
        attributes={"name": "John", "city": "Boston"}
    )
    price = await client.get_integer_value_async("price", 0, evaluation_context)
    currency = await client.get_string_value_async("currency", "", evaluation_context)
    print(f" - Retrieved price: {price}, currency: {currency}")
    assert price == 10000, "Price should be default 10000"
    assert currency == "Dollar", "Currency should be Dollar in Boston"
    print(" ✓ Test passed\n")

    print("Test 7: Edge case customer - karbik (specific override)")
    evaluation_context = EvaluationContext(attributes={"name": "karbik"})
    price = await client.get_integer_value_async("price", 0, evaluation_context)
    currency = await client.get_string_value_async("currency", "", evaluation_context)
    print(f" - Retrieved price: {price}, currency: {currency}")
    assert price == 1, "Price should be 1 for karbik"
    assert currency == "Rupee", "Currency should be default Rupee"
    print(" ✓ Test passed\n")

    print("Test 8: Edge case customer - karbik with city Boston")
    evaluation_context = EvaluationContext(
        attributes={"name": "karbik", "city": "Boston"}
    )
    price = await client.get_integer_value_async("price", 0, evaluation_context)
    currency = await client.get_string_value_async("currency", "", evaluation_context)
    print(f" - Retrieved price: {price}, currency: {currency}")
    assert price == 1, "Price should be 1 for karbik"
    assert currency == "Dollar", "Currency should be Dollar in Boston"
    print(" ✓ Test passed\n")

    print("Test 9: Experiment case: Kolkata pricing")
    evaluation_context = EvaluationContext(
        targeting_key="test",
        attributes={"city": "Kolkata"}
    )
    price = await client.get_integer_value_async("price", 0, evaluation_context)
    currency = await client.get_string_value_async("currency", "", evaluation_context)
    print(f" - Retrieved price: {price}, currency: {currency}")
    assert price in [8000, 7000], "Price should be either 8000 (control) or 7000 (experimental) in Kolkata"
    assert currency == "Rupee", "Currency should be Rupee in Kolkata"
    print(" ✓ Experiment Test passed\n")

    print("\n=== All tests passed! ===\n")
except Exception as error:
    print(f"\n❌ Error running tests: {error}")
    raise error
finally:
    print("OpenFeature closed successfully")
```
Missing provider.shutdown() in SuperpositionAPIProvider test phase.
Similar to the first test phase, the SuperpositionAPIProvider should be shut down to ensure proper resource cleanup.
🐛 Proposed fix

```diff
 except Exception as error:
     print(f"\n❌ Error running tests: {error}")
     raise error
 finally:
+    await provider.shutdown()
     print("OpenFeature closed successfully")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@clients/python/provider-sdk-tests/main.py` around lines 455 - 569, The
SuperpositionAPIProvider instance created in the test block (variable provider,
constructed via SuperpositionAPIProvider(http_options)) is not being shut down;
add a call to provider.shutdown() in the finally cleanup (or just before the
final print) to mirror the first test phase cleanup so resources are released
properly; ensure you call provider.shutdown() only if provider is
defined/initialized to avoid NameError.
```python
try:
    print("\n=== Starting OpenFeature tests ===\n")
    print(f"Running on CPU architecture: {platform.machine()}")

    print("Testing local provider with wrong HTTP data source and polling refresh strategy, with fallback to file data source")

    provider = LocalResolutionProvider(primary_source=HttpDataSource(wrong_http_options), fallback_source=fallback_source, refresh_strategy=refresh_strategy)
    print("Provider created successfully")

    # Initialize the provider
    await provider.initialize(EvaluationContext())
    api.set_provider(provider)
    print("Provider initialized successfully\n")

    client = api.get_client()

    # Test 1: Default values (no context)
    print("Test 1: Default values (no context)")
    evaluation_context = EvaluationContext()
    price = client.get_integer_value("price", 0, evaluation_context)
    currency = client.get_string_value("currency", "", evaluation_context)
    print(f" - Retrieved price: {price}, currency: {currency}")
    assert price == 10000, "Default price should be 10000"
    assert currency == "Rupee", "Default currency should be Rupee"
    print(" ✓ Test passed\n")

    # Test 2: Platinum customer - Agush, no city
    print("Test 2: Platinum customer - Agush (no city)")
    evaluation_context = EvaluationContext(attributes={"name": "Agush"})
    price = client.get_integer_value("price", 0, evaluation_context)
    currency = client.get_string_value("currency", "", evaluation_context)
    print(f" - Retrieved price: {price}, currency: {currency}")
    assert price == 5000, "Price should be default 5000 (platinum customer)"
    assert currency == "Rupee", "Currency should be default Rupee"
    print(" ✓ Test passed\n")

    # Test 3: Platinum customer - Sauyav, no city
    print("Test 3: Platinum customer - Sauyav (no city)")
    evaluation_context = EvaluationContext(
        attributes={"name": "Sauyav", "city": "Boston"}
    )
    price = client.get_integer_value("price", 0, evaluation_context)
    currency = client.get_string_value("currency", "", evaluation_context)
    print(f" - Retrieved price: {price}, currency: {currency}")
    assert price == 5000, "Price should be 5000"
    assert currency == "Dollar", "Currency should be dollar"
    print(" ✓ Test passed\n")

    print("Test 4: Regular customer - John (no city)")
    evaluation_context = EvaluationContext(attributes={"name": "John"})
    price = client.get_integer_value("price", 0, evaluation_context)
    currency = client.get_string_value("currency", "", evaluation_context)
    print(f" - Retrieved price: {price}, currency: {currency}")
    assert price == 10000, "Price should be default 10000"
    assert currency == "Rupee", "Currency should be default Rupee"
    print(" ✓ Test passed\n")

    print("Test 5: Platinum customer - Sauyav with city Berlin")
    evaluation_context = EvaluationContext(
        attributes={"name": "Sauyav", "city": "Berlin"}
    )
    price = client.get_integer_value("price", 0, evaluation_context)
    currency = client.get_string_value("currency", "", evaluation_context)
    print(f" - Retrieved price: {price}, currency: {currency}")
    assert price == 5000, "Price should be 5000"
    assert currency == "Euro", "Currency should be Euro in Berlin"
    print(" ✓ Test passed\n")

    print("Test 6: Regular customer - John with city Boston")
    evaluation_context = EvaluationContext(
        attributes={"name": "John", "city": "Boston"}
    )
    price = client.get_integer_value("price", 0, evaluation_context)
    currency = client.get_string_value("currency", "", evaluation_context)
    print(f" - Retrieved price: {price}, currency: {currency}")
    assert price == 10000, "Price should be default 10000"
    assert currency == "Dollar", "Currency should be Dollar in Boston"
    print(" ✓ Test passed\n")

    print("Test 7: Edge case customer - karbik (specific override)")
    evaluation_context = EvaluationContext(attributes={"name": "karbik"})
    price = client.get_integer_value("price", 0, evaluation_context)
    currency = client.get_string_value("currency", "", evaluation_context)
    print(f" - Retrieved price: {price}, currency: {currency}")
    assert price == 1, "Price should be 1 for karbik"
    assert currency == "Rupee", "Currency should be default Rupee"
    print(" ✓ Test passed\n")

    print("Test 8: Edge case customer - karbik with city Boston")
    evaluation_context = EvaluationContext(
        attributes={"name": "karbik", "city": "Boston"}
    )
    price = client.get_integer_value("price", 0, evaluation_context)
    currency = client.get_string_value("currency", "", evaluation_context)
    print(f" - Retrieved price: {price}, currency: {currency}")
    assert price == 1, "Price should be 1 for karbik"
    assert currency == "Dollar", "Currency should be Dollar in Boston"
    print(" ✓ Test passed\n")

    print("Experiment not supported in file data source, skipping experiment test")
    # print("Test 9: Experiment case: Kolkata pricing")
    # evaluation_context = EvaluationContext(
    #     targeting_key="test",
    #     attributes={"city": "Kolkata"}
    # )
    # price = client.get_integer_value("price", 0, evaluation_context)
    # currency = client.get_string_value("currency", "", evaluation_context)
    # print(f" - Retrieved price: {price}, currency: {currency}")
    # assert price in [8000, 7000], "Price should be either 8000 (control) or 7000 (experimental) in Kolkata"
    # assert currency == "Rupee", "Currency should be Rupee in Kolkata"
    # print(" ✓ Experiment Test passed\n")

    print("\n=== All tests passed! ===\n")
except Exception as error:
    print(f"\n❌ Error running tests: {error}")
    raise error
finally:
    print("OpenFeature closed successfully")
```
Missing provider.shutdown() in fallback test phase.
The third test phase also lacks provider shutdown, which should be added for consistency and proper cleanup.
🐛 Proposed fix

```diff
 except Exception as error:
     print(f"\n❌ Error running tests: {error}")
     raise error
 finally:
+    await provider.shutdown()
     print("OpenFeature closed successfully")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@clients/python/provider-sdk-tests/main.py` around lines 571 - 688, The
fallback test phase creates a LocalResolutionProvider instance (provider) and
initializes it but never calls provider.shutdown(), so add a shutdown call for
the provider to ensure proper cleanup; after the test assertions complete
(before printing the final "OpenFeature closed successfully" or in the finally
block) call provider.shutdown() on the LocalResolutionProvider instance (and
await it if it's async) to mirror the other test phases that properly tear down
the provider.
```python
def on_modified(self, event):
    """Called when a file is modified."""
    if event.is_directory:
        return

    # Check if the modified file matches our watched file
    if os.path.abspath(event.src_path) == os.path.abspath(self.file_path):
        logger.info(f"File changed (watchdog event): {self.file_path}")
        # Schedule the callback in the event loop
        asyncio.create_task(self.callback(self.file_path))
```
Fire-and-forget task may silently swallow exceptions.
asyncio.create_task() without storing a reference means exceptions in self.callback() will be silently lost, and the task may be garbage collected unexpectedly. The static analysis hint (RUF006) is valid here.
🛠️ Proposed fix
+import weakref
+
class _FileEventHandler(FileSystemEventHandler):
"""Handler for file system events from watchdog."""
def __init__(self, file_path: str, callback):
...
super().__init__()
self.file_path = file_path
self.callback = callback
+ self._pending_tasks: set = set()
def on_modified(self, event):
"""Called when a file is modified."""
if event.is_directory:
return
# Check if the modified file matches our watched file
if os.path.abspath(event.src_path) == os.path.abspath(self.file_path):
logger.info(f"File changed (watchdog event): {self.file_path}")
# Schedule the callback in the event loop
- asyncio.create_task(self.callback(self.file_path))
+ task = asyncio.create_task(self.callback(self.file_path))
+ self._pending_tasks.add(task)
+ task.add_done_callback(self._pending_tasks.discard)

🧰 Tools
🪛 Ruff (0.15.7)
[warning] 55-55: Store a reference to the return value of asyncio.create_task
(RUF006)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@clients/python/provider/superposition_provider/file_data_source.py` around
lines 46 - 55, The on_modified handler uses
asyncio.create_task(self.callback(self.file_path)) without keeping a reference
so exceptions can be lost; change it to create and retain the Task (e.g., assign
to self._tasks or similar) and attach a done callback that logs exceptions and
removes the task from the collection. Specifically, in on_modified (and the
class that defines self.callback and self.file_path) create the task via
asyncio.create_task(...), add task.add_done_callback(lambda t: handle/log
t.exception() if t.exception() is not None and remove the task from
self._tasks), and ensure self._tasks is initialized (e.g., set()) so tasks are
not garbage-collected and their exceptions are surfaced.
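The retained-task pattern the review asks for can be sketched in isolation. This is a minimal illustration, not the actual provider code; the handler class and callback names are hypothetical stand-ins for `_FileEventHandler` and its watchdog callback:

```python
import asyncio

class FileEventHandlerSketch:
    """Keeps strong references to scheduled tasks so they are not
    garbage-collected and their exceptions are surfaced, per RUF006."""

    def __init__(self, callback):
        self.callback = callback
        self._pending_tasks: set = set()

    def schedule(self, path: str) -> None:
        task = asyncio.create_task(self.callback(path))
        self._pending_tasks.add(task)  # strong reference until done
        task.add_done_callback(self._log_and_discard)

    def _log_and_discard(self, task: asyncio.Task) -> None:
        self._pending_tasks.discard(task)
        if not task.cancelled() and task.exception() is not None:
            # surface instead of silently swallowing
            print(f"callback failed: {task.exception()!r}")

async def main():
    seen = []

    async def on_change(path):
        seen.append(path)

    handler = FileEventHandlerSketch(on_change)
    handler.schedule("/tmp/config.json")
    await asyncio.sleep(0)  # yield so the scheduled task runs
    return seen

result = asyncio.run(main())
print(result)
```

The done callback both discards the reference and reports failures, which is the behavior the proposed fix adds on top of the simple `self._pending_tasks.discard` variant.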
def resolve_boolean_details(
    self,
    flag_key: str,
    default_value: bool,
    evaluation_context: Optional[EvaluationContext] = None,
) -> FlagResolutionDetails[bool]:
    """Resolve a boolean flag."""
    return self.resolve_bool(flag_key, default_value, evaluation_context)
Sync resolution methods will always fail.
resolve_boolean_details calls resolve_bool (from AllFeatureProvider), which calls resolve_all_features, which calls resolve_all_features_with_filter — but that method raises NotImplementedError at line 112. All sync resolve_*_details methods will fail at runtime.
Either implement sync resolution or document that only async methods work, potentially by overriding the sync methods to raise NotImplementedError directly with a clear message.
🛠️ Proposed fix - Make the limitation explicit
def resolve_boolean_details(
self,
flag_key: str,
default_value: bool,
evaluation_context: Optional[EvaluationContext] = None,
) -> FlagResolutionDetails[bool]:
"""Resolve a boolean flag."""
- return self.resolve_bool(flag_key, default_value, evaluation_context)
+ raise NotImplementedError(
+ "Synchronous resolution is not supported. Use resolve_boolean_details_async instead."
+ )

Apply similarly to resolve_string_details, resolve_integer_details, resolve_float_details, and resolve_object_details.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@clients/python/provider/superposition_provider/remote_provider.py` around
lines 172 - 179, The sync resolution helpers (e.g., resolve_boolean_details)
call resolve_bool which relies on async-only resolve_all_features_with_filter
and therefore always fails; update resolve_boolean_details to immediately raise
NotImplementedError with a clear message (e.g., "Synchronous resolution is not
supported; use the async resolve_*_details methods") instead of calling
resolve_bool, and apply the same change to resolve_string_details,
resolve_integer_details, resolve_float_details, and resolve_object_details so
all synchronous detail methods explicitly and consistently raise
NotImplementedError with that clear guidance.
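The fail-fast behavior the fix proposes is straightforward to sketch. The class name below is a hypothetical stand-in for the provider under review; only the error-raising shape matters:

```python
from typing import Optional

class RemoteProviderSketch:
    """Sketch of the proposed fix: sync resolve_* methods raise
    immediately with clear guidance, instead of failing deep inside
    the async-only resolution path."""

    def resolve_boolean_details(
        self,
        flag_key: str,
        default_value: bool,
        evaluation_context: Optional[dict] = None,
    ):
        raise NotImplementedError(
            "Synchronous resolution is not supported; "
            "use the async resolve_*_details methods"
        )

provider = RemoteProviderSketch()
try:
    provider.resolve_boolean_details("my-flag", False)
except NotImplementedError as e:
    message = str(e)
print(message)
```

Raising at the method boundary gives callers an actionable message at the call site, rather than a confusing `NotImplementedError` several frames down in `resolve_all_features_with_filter`.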
fn ffi_parse_config_file_with_filters(
    file_content: String,
    format: String,
    dimension_data: Option<HashMap<String, String>>,
    prefix: Option<Vec<String>>,
) -> Result<Config, OperationError> {
    let dimension_data = dimension_data
        .map(json_from_map)
        .transpose()
        .map_err(|err| OperationError::Unexpected(err.to_string()))?;
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
printf '\n# Rust helper used by the new entry points\n'
sed -n '22,32p' crates/superposition_core/src/ffi.rs
printf '\n# New entry points that call json_from_map\n'
sed -n '223,233p' crates/superposition_core/src/ffi.rs
sed -n '421,431p' crates/superposition_core/src/ffi.rs
printf '\n# Kotlin binding forwards Map<String, String> values as raw strings\n'
sed -n '1958,1982p' clients/java/bindings/src/main/kotlin/uniffi/superposition_client/superposition_client.kt
sed -n '1533,1538p' clients/java/bindings/src/main/kotlin/uniffi/superposition_client/superposition_client.kt
sed -n '2322,2327p' clients/java/bindings/src/main/kotlin/uniffi/superposition_client/superposition_client.kt
printf '\n# JSON grammar check: bare strings are invalid JSON, quoted strings are valid\n'
python - <<'PY'
import json
samples = ["Boston", '"Boston"', "42", "true", '{"city":"Boston"}']
for sample in samples:
try:
print(f"{sample!r} -> {json.loads(sample)!r}")
except Exception as exc:
print(f"{sample!r} -> ERROR: {exc}")
PY

Repository: juspay/superposition
Length of output: 3846
Accept bare string dimension values at the FFI boundary.
Both new entry points take HashMap<String, String> but immediately feed each value into serde_json::from_str. That rejects normal strings like Boston unless callers pre-serialize them as JSON ("\"Boston\""), which does not match the Kotlin/Java surface being exposed. Either encode at the bindings or fall back to Value::String when JSON parsing fails.
The Rust helpers expect JSON literals (json_from_map uses serde_json::from_str on each value), while Kotlin bindings forward plain Map<String, String> values as raw strings. Bare strings fail parsing; only pre-serialized JSON or native JSON types ("42", "true") are accepted.
Affects: ffi_parse_config_file_with_filters (lines 223–232) and ProviderCache::get_applicable_variants (lines 421–430).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@crates/superposition_core/src/ffi.rs` around lines 223 - 232, The FFI
currently treats each HashMap<String,String> value as JSON and calls
serde_json::from_str, which rejects bare strings; update the parsing so that
when converting the incoming dimension_data (used by
ffi_parse_config_file_with_filters) and the map in
ProviderCache::get_applicable_variants you attempt serde_json::from_str on each
value but on Err fall back to producing
serde_json::Value::String(original_value.clone()); adjust json_from_map (or the
mapping logic where json_from_map is invoked) to implement this
try-parse-then-fallback behavior so Kotlin/Java callers can pass plain strings
like "Boston" without pre-serializing them.
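The try-parse-then-fallback rule is easy to sanity-check outside the FFI. This Python sketch mirrors the proposed Rust behavior (`serde_json::from_str`, falling back to `Value::String` on error) using `json.loads`; the helper name is illustrative:

```python
import json

def parse_dimension_value(raw: str):
    """Treat the value as a JSON literal if it parses; otherwise keep it
    as a plain string (the fallback the review asks json_from_map to add)."""
    try:
        return json.loads(raw)
    except ValueError:
        return raw

samples = ["Boston", '"Boston"', "42", "true", '{"city":"Boston"}']
results = {raw: parse_dimension_value(raw) for raw in samples}
print(results)
# bare "Boston" survives as a string; "42", "true", and the object
# still parse as their native JSON types
```

With this fallback, Kotlin/Java callers can pass a plain `Map<String, String>` like `{"city": "Boston"}` without pre-serializing values, while JSON-typed values keep their current semantics.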
mahatoankitkumar
left a comment
Weakref not implemented
    from superposition_core for proper parsing.
    """
    try:
        query_data = {k: json.dumps(v) for k, v in context.items()} if context else None
But the context can be an empty dict {}; is that valid?

Can we rather have an `if context is not None` check rather than a falsy check?

An empty dict should not give false, right? And even if it does give false, that should not be a problem either way.

An empty dict will give false; empty is a falsy value in Python. But yes, it won't be a problem in our case. It's fine.
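The truthiness point under discussion is quick to demonstrate; the helper names below are illustrative, not the provider's actual code:

```python
def query_data_falsy(context):
    # the reviewed expression: an empty dict is falsy, so it collapses to None
    return {k: str(v) for k, v in context.items()} if context else None

def query_data_explicit(context):
    # the suggested alternative: only None collapses to None
    return {k: str(v) for k, v in context.items()} if context is not None else None

print(query_data_falsy({}), query_data_explicit({}))
```

The falsy check maps `{}` to `None`, while the explicit `is not None` check preserves it as an empty dict; as the thread concludes, either behavior is acceptable here since an empty query has no dimensions to match.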
        data=config_response_to_ffi_config(response),
    ))
except UnknownApiError as e:
    # this is a hack for now, need to fix it properly by checking the content of UnknownApiError
Can we also log it as a warning?

This is not a warning or an issue; it just needs to be handled separately. That is only being mentioned here; it's not an issue.
async def close(self) -> None:
    """Close the HTTP client."""
    if self.client:
        self.client = None
Setting self.client = None does not close any underlying HTTP connections, connection pools, or sockets.
Can we check if our SDK has some method that closes it?

I don't think there is anything available for this.

Then this close function doesn't do what it intends to.
Problem
Describe the problem you are trying to solve here
Solution
Provide a brief summary of your solution so that reviewers can understand your code
Environment variable changes
What ENVs need to be added or changed
Pre-deployment activity
Things needed to be done before deploying this change (if any)
Post-deployment activity
Things needed to be done after deploying this change (if any)
API changes
Possible Issues in the future
Describe any possible issues that could occur because of this change
Summary by CodeRabbit
New Features
Documentation