
feat: add local/remote provider implementations following new structure #881

Closed
knutties wants to merge 4 commits into main from local-provider-clean

Conversation

Collaborator

@knutties knutties commented Feb 17, 2026

Problem

#745

Solution

Implementation based on the spec in #745

Environment variable changes

NA

Pre-deployment activity

NA

Post-deployment activity

NA

API changes

NA

Summary by CodeRabbit

New Features

  • Added LocalResolutionProvider for in-process feature resolution with configurable refresh strategies (Polling, OnDemand, Watch, Manual)
  • Added SuperpositionAPIProvider for remote feature resolution with optional response caching
  • Support for file-based and HTTP data source configurations
  • Bulk feature resolution APIs with optional prefix filtering and context-driven experiment variant determination
  • Primary and fallback data source support for resilience

@knutties knutties requested a review from a team as a code owner February 17, 2026 10:56

semanticdiff-com bot commented Feb 17, 2026

Contributor

coderabbitai bot commented Feb 17, 2026

Important

Review skipped

Auto incremental reviews are disabled on this repository.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.


Walkthrough

This PR introduces a trait-based, pluggable architecture for the superposition_provider crate, adding data source abstractions (HTTP, File), two provider implementations (LocalResolutionProvider for in-process caching and refresh, SuperpositionAPIProvider for remote resolution), supporting multiple refresh strategies, and providing concrete examples demonstrating the new APIs.

Changes

Cohort / File(s) Summary
Core Architecture & Traits
src/traits.rs, src/types.rs, src/data_source.rs, src/data_source/file.rs, src/data_source/http.rs
New async traits (AllFeatureProvider, FeatureExperimentMeta) and SuperpositionDataSource abstraction with ConfigData/ExperimentData wrappers. Extended RefreshStrategy with Watch variant. Concrete FileDataSource (TOML-based, +file watching) and HttpDataSource (+parallel experiment fetching) implementations with filtering support.
Provider Implementations
src/local_provider.rs, src/remote_provider.rs
LocalResolutionProvider with primary+fallback sources, multiple refresh strategies (Polling/OnDemand/Watch/Manual), RwLock-protected caches, TTL-based freshness, and polling/watching background tasks (703 lines). SuperpositionAPIProvider with optional TTL-based response caching, cache key generation from EvaluationContext, and remote resolution workflow (506 lines).
Integration & API Expansion
src/lib.rs, src/client.rs, src/utils.rs
New module re-exports for data_source, local_provider, remote_provider, traits. Logging branches added for Watch/Manual refresh strategies. New evaluation_context_to_query utility for converting EvaluationContext to query tuples.
Examples & Configuration
examples/config.toml, examples/local_file_example.rs, examples/local_file_watch_example.rs, examples/local_http_example.rs, examples/local_with_fallback_example.rs, examples/polling_example.rs
Six new examples demonstrating FileDataSource with OnDemand/Watch refresh, HttpDataSource with Polling, and local-with-fallback patterns (up to 102 lines each). Config example shows default configs, dimensions, and context-based overrides (TOML).
Project Setup
Cargo.toml
Added dev dependencies: env_logger = "0.11", notify = "7" for file watching in examples.
Documentation
docs/plans/2026-02-14-configuration-resolver-design.md, docs/plans/2026-02-14-configuration-resolver-plan.md
Design and high-level plan documentation (226 and 1761 lines) detailing trait-based architecture, module layout, data source abstractions, refresh strategies, filtering semantics, and use cases.

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant App as Application
    participant LRP as LocalResolutionProvider
    participant DS as DataSource<br/>(File/HTTP)
    participant Cache as Config Cache
    participant Refresh as Refresh<br/>Strategy

    App->>LRP: new(primary, fallback, strategy)
    App->>LRP: init()

    LRP->>DS: fetch_config()
    DS-->>LRP: ConfigData
    LRP->>Cache: store ConfigData

    alt On-Demand/Polling
        LRP->>Refresh: start_polling(interval)
        activate Refresh
        loop Every interval
            Refresh->>Refresh: elapsed >= TTL?
            alt TTL expired
                Refresh->>DS: fetch_config()
                DS-->>Refresh: ConfigData
                Refresh->>Cache: update ConfigData
            end
        end
    else Watch Strategy
        LRP->>DS: watch()
        DS-->>LRP: WatchStream
        activate Refresh
        loop On file event
            Refresh->>DS: fetch_config()
            DS-->>Refresh: ConfigData
            Refresh->>Cache: update ConfigData
        end
    end

    App->>LRP: resolve_all_features(context)
    LRP->>LRP: ensure_fresh_data()
    LRP->>Cache: read ConfigData
    LRP->>LRP: eval_with_context(context)
    LRP-->>App: Map<String, Value>
```
```mermaid
sequenceDiagram
    participant App as Application
    participant SAPI as SuperpositionAPIProvider
    participant Cache as Response<br/>Cache
    participant SDK as Superposition<br/>SDK Client

    App->>SAPI: new(options) or with_cache(options)

    App->>SAPI: resolve_all_features(context)

    alt Cache enabled & hit
        SAPI->>Cache: lookup(cache_key)
        Cache-->>SAPI: cached result
        SAPI->>SAPI: apply optional<br/>prefix_filter
        SAPI-->>App: filtered result
    else Cache miss or disabled
        SAPI->>SDK: get_resolved_config(query)
        SDK-->>SAPI: Document response
        SAPI->>SAPI: convert to<br/>Map<String, Value>
        alt Cache enabled
            SAPI->>Cache: store(cache_key, result)
        end
        SAPI->>SAPI: apply optional<br/>prefix_filter
        SAPI-->>App: filtered result
    end
```

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs

  • fix: exp and config partial apply #831: Introduces new evaluate_local_cohorts variants and alters experiment filtering signatures; directly impacts the dimension-based filtering and experiment-matching logic used by the new FileDataSource and HttpDataSource implementations in this PR.

Suggested labels

enhancement, P1

Suggested reviewers

  • Datron
  • mahatoankitkumar
  • ayushjain17

Poem

🐰 A trait-based garden now blooms with new life,
Where FileDataSource and HTTP thrive, free from strife,
LocalResolutionProvider tends the cache with care,
While Watch and Poll refresh the data with flair,
From remote APIs to files, configurations dance,
Giving features their feature-flag romance! ✨

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

| Check name | Status | Explanation | Resolution |
| --- | --- | --- | --- |
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 24.36%, below the required threshold of 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold. |

✅ Passed checks (2 passed)

| Check name | Status | Explanation |
| --- | --- | --- |
| Description Check | ✅ Passed | Check skipped: CodeRabbit's high-level summary is enabled. |
| Title check | ✅ Passed | The title 'feat: add local/remote provider implementations following new structure' clearly and specifically summarizes the main change: adding implementations for local and remote providers based on a new architectural structure. |


Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 6

🧹 Nitpick comments (18)
docs/plans/2026-02-14-configuration-resolver-design.md (1)

18-32: Add language specifier to the fenced code block.

The module layout code block (line 18) is missing a language specifier, flagged by markdownlint (MD040). Use ```text for plain-text diagrams.

Fix
-```
+```text
 crates/superposition_provider/src/
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/plans/2026-02-14-configuration-resolver-design.md` around lines 18 - 32,
The fenced code block that shows the module layout (starts with ``` and the text
"crates/superposition_provider/src/") needs a language specifier to satisfy
markdownlint MD040; change the opening fence from ``` to ```text so the
plain-text diagram is marked correctly, leaving the rest of the block (the tree
lines like "├── lib.rs", "types.rs", etc.) unchanged.
crates/superposition_provider/src/client.rs (2)

100-105: Watch/Manual strategies are silently no-ops in the legacy client.

These new arms only log and don't start any refresh mechanism. If someone mistakenly configures CacConfig (the legacy client) with Watch or Manual, they'll get no refresh with no warning beyond an info-level log. Consider logging at warn level to indicate these strategies are not supported by the legacy client, or documenting that users should migrate to LocalResolutionProvider.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/superposition_provider/src/client.rs` around lines 100 - 105, The
current RefreshStrategy::Watch and ::Manual arms in client.rs only log at info
and silently do nothing for the legacy CacConfig client; change those info!
calls to warn! and make the message explicit that Watch and Manual are not
supported by the legacy client (e.g., "Watch/Manual refresh strategies are not
supported by legacy CacConfig; no automatic refresh will occur — migrate to
LocalResolutionProvider for full support"), so users see a clear warning, and
optionally add a brief note recommending migration to LocalResolutionProvider in
the same warning.

135-175: last_updated read lock is held across the async get_config call.

In on_demand_config, the RwLock read guard last_updated (acquired at line 139) is not explicitly dropped before the should_refresh check and subsequent write path. While the guard does go out of scope at line 140 due to the block, the pattern is fragile — if someone refactors and removes the block braces, the read guard would be held across the write acquisition at line 152, causing a deadlock with a tokio::sync::RwLock. The existing block scope works, but the empty block with a trailing semicolon is a code smell.

Cleaner pattern
-        let last_updated;
-        {
-            last_updated = self.last_updated.read().await;
-        }
-        let should_refresh = match *last_updated {
-            Some(last) => (now - last).num_seconds() > ttl as i64,
-            None => true,
-        };
+        let should_refresh = {
+            let last_updated = self.last_updated.read().await;
+            match *last_updated {
+                Some(last) => (now - last).num_seconds() > ttl as i64,
+                None => true,
+            }
+        };
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/superposition_provider/src/client.rs` around lines 135 - 175, The read
guard on self.last_updated in on_demand_config is being acquired in a fragile
empty block and could accidentally be held across the async get_config call,
risking a deadlock when later acquiring write locks (e.g., cached_config or
last_updated write). Fix by explicitly scoping or dropping the read guard before
any await: capture the value (e.g., let last = { let g =
self.last_updated.read().await; *g }; or call drop(last_updated) right before
the match that may call self.get_config().await) so no RwLock read guard remains
held when calling get_config or when later acquiring mutable locks (references:
on_demand_config, self.last_updated, self.get_config, self.cached_config).
crates/superposition_provider/examples/local_file_watch_example.rs (1)

33-37: provider.close() is never called due to the infinite loop.

The provider's file watcher and any spawned tasks will never be cleaned up. Consider adding a signal handler (e.g., tokio::signal::ctrl_c()) to break out of the loop and call close().

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/superposition_provider/examples/local_file_watch_example.rs` around
lines 33 - 37, The infinite loop prevents cleaning up the provider; modify the
loop around provider.resolve_all_features(&context).await so it listens for a
shutdown signal (e.g., using tokio::signal::ctrl_c().await or select! with a
shutdown future) and breaks when received, then call and await provider.close()
to ensure the file watcher and spawned tasks are cleaned up; reference the
existing provider, resolve_all_features, and context symbols and ensure
provider.close() is invoked after exiting the loop (and handle any errors from
close()).
crates/superposition_provider/src/lib.rs (1)

10-16: Potential namespace pollution from multiple glob re-exports.

pub use client::*, pub use provider::*, pub use types::*, and pub use traits::* all glob-export into the crate root. As the crate grows, this increases the risk of name collisions and makes it harder for consumers to discover where a symbol originates. The new modules already use selective re-exports (lines 11–14) — consider migrating the older modules to the same pattern over time.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/superposition_provider/src/lib.rs` around lines 10 - 16, The root
issue is unsafe glob re-exports (pub use client::*, pub use provider::*, pub use
types::*, pub use traits::*); replace these with selective, explicit re-exports
to avoid namespace pollution. Open lib.rs and for each module (client, provider,
types, traits) replace the glob with explicit symbol lists (e.g., pub use
client::{ClientType, ClientError, new_client} and similarly for
provider::{ProviderTrait, ProviderImpl}, types::{TypeA, TypeB}, traits::{TraitA,
TraitB}) — include only the public items intended for consumers and update
imports where needed; when new public items are added in the future, add them
explicitly or re-export the module as pub mod client to keep origins clear.
crates/superposition_provider/Cargo.toml (1)

34-34: Consider gating notify behind a feature flag.

notify is a non-trivial dependency (pulls in platform-specific file-watching backends) but is only used by FileDataSource. Users who only use HttpDataSource would unnecessarily pull in the additional dependency. A file-watch or file-data-source feature flag would allow keeping the default dependency footprint lighter for users who don't need file-based configuration.
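One possible shape for the suggested gating. The `file-watch` feature name and the optional-dependency wiring are assumptions for illustration, not something this PR defines:

```toml
# Hypothetical Cargo.toml sketch: gate notify behind a `file-watch` feature
# so users of HttpDataSource alone do not pull in the platform-specific
# file-watching backends.
[features]
default = []
file-watch = ["dep:notify"]

[dependencies]
notify = { version = "7", optional = true }
```

The watch-related code in `FileDataSource` would then be compiled under `#[cfg(feature = "file-watch")]`.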

crates/superposition_provider/examples/local_with_fallback_example.rs (1)

39-41: Initialization wait is timing-dependent.

sleep(Duration::from_secs(2)) is a best-effort wait for the provider to become ready. If the HTTP source is slow and the file fallback takes a moment to parse, the client may still be NotReady. Since this is an example, it's acceptable, but a brief comment noting this caveat (or a TODO suggesting readiness polling) would help users adapting this for production use.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/superposition_provider/examples/local_with_fallback_example.rs` around
lines 39 - 41, Add a short explanatory comment near the
sleep(Duration::from_secs(2)).await call noting that this is a best-effort,
timing-dependent wait for the provider to initialize via OpenFeature and may not
guarantee readiness if the HTTP source or file fallback is slow; suggest
replacing with readiness polling or retry logic in production so the client
doesn't remain NotReady. Reference the existing sleep call and the concepts
"provider initialization via OpenFeature", "file fallback", and "NotReady" so
maintainers know where to improve.
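A deadline-based readiness poll is one way to replace the fixed sleep. A minimal std-only sketch follows; the `poll_status` closure and the `ProviderStatus` enum here are hypothetical stand-ins for querying the OpenFeature client's provider state:

```rust
use std::time::{Duration, Instant};

#[derive(Clone, Copy, Debug, PartialEq)]
enum ProviderStatus {
    NotReady,
    Ready,
}

// Poll a status source until it reports Ready or the deadline passes.
// `poll_status` stands in for whatever readiness query the real client
// exposes; the timeout/interval values are caller-chosen.
fn wait_until_ready(
    mut poll_status: impl FnMut() -> ProviderStatus,
    timeout: Duration,
    interval: Duration,
) -> Result<(), &'static str> {
    let deadline = Instant::now() + timeout;
    loop {
        if poll_status() == ProviderStatus::Ready {
            return Ok(());
        }
        if Instant::now() >= deadline {
            return Err("provider did not become ready within the timeout");
        }
        std::thread::sleep(interval);
    }
}
```

In an async example the sleep would be `tokio::time::sleep` inside the same loop, but the bounded-deadline structure is unchanged.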
crates/superposition_provider/src/data_source.rs (1)

84-89: Doc comment on watch() is overly specific.

The comment says "Set up a file watcher" but this trait is transport-agnostic. Non-file data sources could implement watching via other mechanisms (SSE, WebSocket, etc.).

📝 Suggested wording
-    /// Set up a file watcher and return a stream of change notifications.
+    /// Set up a change watcher and return a stream of change notifications.
     ///
     /// Returns `Ok(None)` if this data source does not support watching.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/superposition_provider/src/data_source.rs` around lines 84 - 89, The
doc comment for the trait method `watch()` is too specific about "Set up a file
watcher" while the trait is transport-agnostic; update the comment on `fn
watch(&self) -> Result<Option<crate::types::WatchStream>>` to describe a generic
change-notification/watch stream (e.g., "Return an optional stream of change
notifications; implementations may use file watchers, SSE, WebSockets, polling,
etc.") and keep the note about returning `Ok(None)` when watching is not
supported.
crates/superposition_provider/src/data_source/file.rs (2)

100-132: Watch implementation looks correct.

The watcher is stored in the Arc<Mutex<…>> to keep it alive, try_send avoids blocking the notify thread, and the bounded channel (16) provides reasonable back-pressure. One subtlety: calling watch() a second time silently replaces the previous watcher (dropping its sender), which will cause the old WatchStream receiver to close. If that's intentional, a brief doc comment would help; if not, consider returning an error when a watcher is already active.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/superposition_provider/src/data_source/file.rs` around lines 100 -
132, The current watch() silently replaces an existing watcher which closes any
previous WatchStream; update watch() to check the Arc<Mutex<Option<...>>>
(self.watcher) before creating a new watcher and return a
SuperpositionError::ConfigError if a watcher is already active instead of
overwriting it, i.e., lock self.watcher, if guard.is_some() return Err(...) with
a clear message like "watcher already active", otherwise proceed to create/store
the watcher and return the WatchStream; reference the watch() function, the
self.watcher mutex/guard, and the WatchStream type when making the change.

51-74: Duplicated filtering logic with HttpDataSource::fetch_filtered_config.

The dimension-filtering and prefix-filtering blocks (lines 59-71) are identical to http.rs lines 166-179. Consider extracting a shared helper (e.g., on ConfigData or a free function) to keep filtering logic in one place:

fn apply_filters(
    config: Config,
    context: Option<&Map<String, Value>>,
    prefix_filter: Option<&[String]>,
) -> Config { ... }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/superposition_provider/src/data_source/file.rs` around lines 51 - 74,
The filtering logic in fetch_filtered_config is duplicated with
HttpDataSource::fetch_filtered_config; extract a single helper (e.g., a free fn
apply_filters(config: Config, context: Option<&Map<String, Value>>,
prefix_filter: Option<&[String]>) -> Config or an associated method on
ConfigData) and call it from both fetch_filtered_config implementations; the
helper should apply config.filter_by_dimensions when context is Some and
non-empty and apply config.filter_by_prefix when prefix_filter is Some and
non-empty (creating a HashSet<String> via prefixes.iter().cloned() as done now),
then return the filtered Config which you wrap with ConfigData::new in each
caller.
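The extraction can be sketched with std types only; `Config` below is a stand-in key/value map rather than the crate's real `Config`, and only the prefix branch is shown (the dimension branch would sit alongside it in the same helper):

```rust
use std::collections::{HashMap, HashSet};

// Stand-in for the crate's Config type, reduced to a flat map for
// illustration purposes.
type Config = HashMap<String, String>;

/// Hypothetical shared helper: apply optional prefix filtering in one place
/// so FileDataSource and HttpDataSource stop duplicating it. A dimension
/// filter (config.filter_by_dimensions in the real code) would be applied
/// here as well, before the prefix pass.
fn apply_prefix_filter(config: Config, prefix_filter: Option<&[String]>) -> Config {
    match prefix_filter {
        Some(prefixes) if !prefixes.is_empty() => {
            let set: HashSet<&String> = prefixes.iter().collect();
            config
                .into_iter()
                .filter(|(key, _)| set.iter().any(|p| key.starts_with(p.as_str())))
                .collect()
        }
        // None or an empty list means no filtering.
        _ => config,
    }
}
```

Both `fetch_filtered_config` implementations would then reduce to a fetch followed by a single call into this helper.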
crates/superposition_provider/src/utils.rs (1)

580-598: Near-duplicate of context_to_dimension_data (lines 553-578).

Both methods iterate custom_fields with convert_evaluation_context_value_to_serde_value. The only difference is how targeting_key is handled (embedded in the map vs. returned separately). Consider factoring the shared iteration into a private helper and having both methods call it, to keep the conversion logic in one place.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/superposition_provider/src/utils.rs` around lines 580 - 598, The two
functions evaluation_context_to_query and context_to_dimension_data duplicate
the iteration over ctx.custom_fields and calls to
Self::convert_evaluation_context_value_to_serde_value; extract that shared logic
into a private helper (e.g., fn convert_custom_fields(ctx:
&open_feature::EvaluationContext) -> Map<String, Value>) that returns the
Map<String, Value>, then have evaluation_context_to_query return (helper(ctx),
ctx.targeting_key.clone()) and have context_to_dimension_data call the same
helper and then insert the targeting_key into the returned map as it currently
does; update both functions to use the new helper to centralize the conversion
logic.
crates/superposition_provider/src/data_source/http.rs (1)

23-31: Cache the SDK client to avoid recreating it on every fetch call.

create_client() is invoked on each call to fetch_config and fetch_experiments_and_groups. Per the SDK documentation, client construction is expensive due to connection thread pool initialization and should be done once at application start-up. Since Client is Arc-wrapped (cloning is cheap), construct the client once in HttpDataSource::new and reuse it across all methods.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/superposition_provider/src/data_source/http.rs` around lines 23 - 31,
The create_client() helper currently builds a new expensive Client on each call
(used by fetch_config and fetch_experiments_and_groups); instead, construct the
Client once in HttpDataSource::new, add a field on HttpDataSource (e.g., client:
Client) to store it, initialize that field using the SdkConfig building logic
now inside create_client, and update fetch_config and
fetch_experiments_and_groups to reuse self.client.clone() rather than calling
create_client(); remove or repurpose create_client() so client construction is
not repeated and rely on the Arc-wrapped Client cloning for cheap copies.
crates/superposition_provider/src/local_provider.rs (3)

519-535: init() already sets status to Ready; initialize() sets it again redundantly.

init() (Line 164–167) sets self.status to Ready on success. Then initialize() unconditionally sets it to Ready again at Line 531–532. Consider removing the status update from one location to keep a single source of truth for status transitions.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/superposition_provider/src/local_provider.rs` around lines 519 - 535,
The initialize() method redundantly sets self.status to ProviderStatus::Ready
after calling init(), but init() already sets the status to Ready on success;
remove the duplicate status write in initialize() (the block that sets *status =
ProviderStatus::Ready after init()) so status transitions are managed in a
single place (init()), leaving initialize() to only set NotReady, call init(),
handle the error path by setting ProviderStatus::Error, and return.

257-295: Polling task unconditionally fetches experiments, unlike do_refresh().

do_refresh() (Line 235) gates experiment fetching on self.primary.supports_experiments(), but start_polling always calls fetch_active_experiments. While this is harmless for sources returning Ok(None), it's an inconsistency that could become a problem if a data source starts returning errors for unsupported operations.

♻️ Suggested fix
 tokio::spawn(async move {
     loop {
         sleep(Duration::from_secs(interval)).await;
         // ... config refresh ...

+        if primary.supports_experiments() {
             match primary.fetch_active_experiments().await {
                 // ...
             }
+        }
     }
 })
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/superposition_provider/src/local_provider.rs` around lines 257 - 295,
The polling loop in start_polling unconditionally calls
primary.fetch_active_experiments(), unlike do_refresh(), so add the same guard:
call primary.supports_experiments().await (or use the same supports_experiments
check used in do_refresh()) and only run the fetch_active_experiments() branch
when that returns true; otherwise skip it (leaving cached_experiments
untouched). Locate start_polling and modify the experiments refresh section to
mirror the conditional logic in do_refresh(), using
cached_experiments.write().await to set Some(data) when fetched.

537-686: Significant code duplication with remote_provider.rs.

The resolve_bool_value, resolve_string_value, resolve_int_value, resolve_float_value, resolve_struct_value, and status() implementations are nearly identical between LocalResolutionProvider and SuperpositionAPIProvider. Consider extracting shared resolution logic into a helper function or a default trait method to reduce duplication.

Also applies to: 692-702

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/superposition_provider/src/local_provider.rs` around lines 537 - 686,
The resolve_* methods in LocalResolutionProvider (resolve_bool_value,
resolve_string_value, resolve_int_value, resolve_float_value,
resolve_struct_value) duplicate logic present in SuperpositionAPIProvider;
extract the common flow into a shared helper (e.g., a private async method like
resolve_typed_value or a default trait method on the provider trait) that: calls
resolve_all_features(evaluation_context).await, checks config.get(flag_key),
returns FlagNotFound, performs type conversion (use a generic conversion closure
or Result-returning converter param), and maps conversion errors to TypeMismatch
(special-case ConversionUtils::serde_value_to_struct_value for StructValue). Do
the same for the duplicated status() logic by moving it to the trait default or
a shared helper used by both providers; update the provider methods to call the
new helper to eliminate duplication while preserving the exact error
codes/messages and logging calls (error!).
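The shared-helper shape could look like the following sketch, with minimal stand-ins for `serde_json::Value` and the OpenFeature error codes; all names here are illustrative, not the crate's actual API:

```rust
use std::collections::HashMap;

// Minimal stand-ins, for illustration only.
#[derive(Clone, Debug, PartialEq)]
enum Value {
    Bool(bool),
    Str(String),
}

#[derive(Debug, PartialEq)]
enum ResolveError {
    FlagNotFound,
    TypeMismatch,
}

// One generic helper both providers could call from each resolve_* method:
// look up the flag in the already-resolved map, then run a type-specific
// converter. Missing keys map to FlagNotFound, failed conversions to
// TypeMismatch.
fn resolve_typed<T>(
    config: &HashMap<String, Value>,
    flag_key: &str,
    convert: impl Fn(&Value) -> Option<T>,
) -> Result<T, ResolveError> {
    let value = config.get(flag_key).ok_or(ResolveError::FlagNotFound)?;
    convert(value).ok_or(ResolveError::TypeMismatch)
}
```

Each `resolve_bool_value`, `resolve_string_value`, etc. then shrinks to calling `resolve_all_features` and passing the appropriate converter closure, keeping the error mapping in one place.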
crates/superposition_provider/src/remote_provider.rs (2)

495-505: Verbose status() match can be simplified.

The exhaustive match reconstructs each variant identically. If ProviderStatus implements Clone or Copy, this can be a direct dereference.

♻️ Suggested simplification
     fn status(&self) -> ProviderStatus {
         match self.status.try_read() {
-            Ok(status) => match *status {
-                ProviderStatus::Ready => ProviderStatus::Ready,
-                ProviderStatus::Error => ProviderStatus::Error,
-                ProviderStatus::NotReady => ProviderStatus::NotReady,
-                ProviderStatus::STALE => ProviderStatus::STALE,
-            },
+            Ok(status) => *status,
             Err(_) => ProviderStatus::NotReady,
         }
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/superposition_provider/src/remote_provider.rs` around lines 495 - 505,
The status() method currently matches every ProviderStatus variant just to
return the same value; simplify by returning the read guard value directly
instead of exhaustive matching: after successfully calling
self.status.try_read(), return either *status (if ProviderStatus is Copy) or
status.clone() (if it implements Clone) and preserve the Err(_) branch returning
ProviderStatus::NotReady; update the body of status() to use the direct
dereference/clone of the guard from self.status.try_read().

147-155: Store SDK client in struct to avoid recreation on cache misses.

create_client() is invoked inside resolve_remote() on every cache miss, recreating the client each time. For a provider handling many concurrent evaluations, consider constructing the client once in new()/with_cache() and storing it as a struct field to preserve connection pooling benefits.

♻️ Suggested refactor
 pub struct SuperpositionAPIProvider {
     options: SuperpositionOptions,
+    client: Client,
     cache: Option<Arc<RwLock<ResponseCache>>>,
     metadata: ProviderMetadata,
     status: RwLock<ProviderStatus>,
 }

Then initialize in the constructors and remove the create_client() method.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/superposition_provider/src/remote_provider.rs` around lines 147 - 155,
The create_client() function currently builds a new Client on every cache miss
(called from resolve_remote()); refactor by creating the Client once and storing
it as a field on the provider struct (e.g., add a client: Client field to the
RemoteProvider struct), initialize that client in the
constructors/new()/with_cache() using the SdkConfig builder code from
create_client(), and remove the create_client() method so resolve_remote()
reuses self.client to benefit from connection pooling and avoid repeated
recreation.
docs/plans/2026-02-14-configuration-resolver-plan.md (1)

1-1987: Plan document code diverges from actual implementation — consider reconciling or noting discrepancies.

Several implementation details in the plan differ from the actual code in remote_provider.rs and local_provider.rs. For example, the plan's SuperpositionAPIProvider (Task 7) uses get_context_from_evaluation_context as a method and a different caching/request strategy, while the implementation uses ConversionUtils::evaluation_context_to_query and serde_value_to_document for SDK context construction. The plan's LocalResolutionProvider (Task 6) also lacks the Watch strategy that the implementation includes.

If this plan is intended as living documentation, it may be worth updating it to reflect the final implementation, or adding a note that it's a pre-implementation design sketch.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/plans/2026-02-14-configuration-resolver-plan.md` around lines 1 - 1987,
The plan document diverges from the actual implementation; update the plan or
mark it as a design sketch to avoid confusion. Edit the plan to reflect the real
code: mention that SuperpositionAPIProvider (remote_provider.rs) uses
ConversionUtils::evaluation_context_to_query and serde_value_to_document for SDK
context construction and the actual caching/request flow rather than the plan's
get_context_from_evaluation_context and get_resolved_config approach, and
mention the exact cache behavior; update LocalResolutionProvider
(local_provider.rs) to include the Watch refresh strategy (and any differences
in init/refresh/polling/do_refresh/start_polling/ensure_fresh_data
implementations and method names like eval_with_context, get_dimensions_info,
get_applicable_variants usage). Ensure the plan lists current exported
types/traits (SuperpositionDataSource, ConfigData, ExperimentData,
AllFeatureProvider, FeatureExperimentMeta) and notes any intentional deviations
(e.g., Watch strategy, SDK API differences) so the document matches the
repository state.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@crates/superposition_provider/examples/local_file_watch_example.rs`:
- Around line 33-37: The loop currently calls
provider.resolve_all_features(&context).await.unwrap(), which will panic on
transient TOML parse errors; change it to handle the Result from
resolve_all_features (e.g., use match or unwrap_or_else) so that on Err you log
the error (with context) and continue the loop instead of crashing, while on Ok
you print the config — update references around resolve_all_features, provider
and context to implement this non-panicking behavior.

In `@crates/superposition_provider/src/local_provider.rs`:
- Around line 556-563: The Err(e) arms in the resolve_*_value methods (e.g.,
resolve_boolean_value and the other resolve methods in local_provider.rs)
incorrectly map all evaluation/network errors to
EvaluationErrorCode::FlagNotFound; change those arms to return EvaluationError {
code: EvaluationErrorCode::General, message: Some(format!("Error evaluating flag
'{}': {}", flag_key, e)) } (and keep the error! log) so non-missing-flag errors
propagate as General; apply the same replacement to the other Err(e) arms
referenced (the blocks at the other resolve_*_value implementations around the
given ranges).

In `@crates/superposition_provider/src/remote_provider.rs`:
- Around line 84-104: The cache_key function currently builds a delimited string
in cache_key(context: &EvaluationContext) using '|' and '=' which can collide
when targeting_key, custom field names, or values contain those characters; fix
by producing a collision-safe key: after constructing the deterministic parts
(still use context.targeting_key and the sorted context.custom_fields with
ConversionUtils::convert_evaluation_context_value_to_serde_value), compute a
cryptographic hash (e.g., SHA-256) or use a Rust Hasher to derive a compact hex
digest of the joined string and return that digest instead of the raw delimited
string so keys cannot collide due to delimiter characters.
- Around line 359-366: The Err(e) arms in the resolve_*_value methods (e.g.,
resolve_boolean_value, resolve_string_value, resolve_number_value,
resolve_object_value, resolve_struct_value) incorrectly map all failures to
EvaluationErrorCode::FlagNotFound; change them to preserve real error semantics
by mapping only genuine "flag not found" cases to
EvaluationErrorCode::FlagNotFound and mapping all other failures to
EvaluationErrorCode::General (or propagate the existing EvaluationError code if
e is already an EvaluationError). Update each Err(e) branch to inspect or
convert e into an EvaluationError carrying the original error text (message:
Some(e.to_string())) and use EvaluationErrorCode::General for
network/serialization/cache errors so callers can distinguish missing flags from
other failures.
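The error-code mapping described in the last item above can be sketched as follows. This is a minimal, std-only illustration with simplified stand-ins for the open-feature types (the real `EvaluationError`/`EvaluationErrorCode` come from the `open_feature` crate and differ in detail); the point is routing only genuine missing-flag cases to `FlagNotFound`:

```rust
// Simplified stand-ins for the open-feature error types referenced above.
#[derive(Debug, PartialEq)]
enum EvaluationErrorCode {
    FlagNotFound,
    General,
}

#[derive(Debug)]
struct EvaluationError {
    code: EvaluationErrorCode,
    message: Option<String>,
}

// Map a resolution failure: only a genuine "missing flag" becomes
// FlagNotFound; everything else (network, serialization, cache) is General,
// so callers can distinguish the two cases.
fn map_resolution_error(flag_key: &str, err: &str, flag_missing: bool) -> EvaluationError {
    if flag_missing {
        EvaluationError {
            code: EvaluationErrorCode::FlagNotFound,
            message: Some(format!("Flag '{}' not found", flag_key)),
        }
    } else {
        EvaluationError {
            code: EvaluationErrorCode::General,
            message: Some(format!("Error evaluating flag '{}': {}", flag_key, err)),
        }
    }
}
```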

In `@docs/plans/2026-02-14-configuration-resolver-plan.md`:
- Line 15: The markdown skips an h2 between the document title "Configuration
Resolver Implementation Plan" and the "Task 1: Define core traits (`traits.rs`)"
heading; update the task heading to use an h2 (change "### Task 1: Define core
traits (`traits.rs`)" to "## Task 1: Define core traits (`traits.rs`)") or
insert a new "## Tasks" section above the existing "### Task 1" so the heading
hierarchy is sequential and satisfies markdownlint MD001.
- Around line 1-3: The file "Configuration Resolver Implementation Plan"
contains a Claude-specific directive line that must be removed from shipped
docs; open the document titled "Configuration Resolver Implementation Plan" and
delete the entire line starting with "> **For Claude:** REQUIRED SUB-SKILL: Use
superpowers:executing-plans..." (the internal AI-assistant instruction), then
save/commit the change and optionally scan the repo for other similar
agent-specific prompt artifacts and remove them as well.

---

Nitpick comments:
In `@crates/superposition_provider/examples/local_file_watch_example.rs`:
- Around line 33-37: The infinite loop prevents cleaning up the provider; modify
the loop around provider.resolve_all_features(&context).await so it listens for
a shutdown signal (e.g., using tokio::signal::ctrl_c().await or select! with a
shutdown future) and breaks when received, then call and await provider.close()
to ensure the file watcher and spawned tasks are cleaned up; reference the
existing provider, resolve_all_features, and context symbols and ensure
provider.close() is invoked after exiting the loop (and handle any errors from
close()).

In `@crates/superposition_provider/examples/local_with_fallback_example.rs`:
- Around line 39-41: Add a short explanatory comment near the
sleep(Duration::from_secs(2)).await call noting that this is a best-effort,
timing-dependent wait for the provider to initialize via OpenFeature and may not
guarantee readiness if the HTTP source or file fallback is slow; suggest
replacing with readiness polling or retry logic in production so the client
doesn't remain NotReady. Reference the existing sleep call and the concepts
"provider initialization via OpenFeature", "file fallback", and "NotReady" so
maintainers know where to improve.

In `@crates/superposition_provider/src/client.rs`:
- Around line 100-105: The current RefreshStrategy::Watch and ::Manual arms in
client.rs only log at info and silently do nothing for the legacy CacConfig
client; change those info! calls to warn! and make the message explicit that
Watch and Manual are not supported by the legacy client (e.g., "Watch/Manual
refresh strategies are not supported by legacy CacConfig; no automatic refresh
will occur — migrate to LocalResolutionProvider for full support"), so users see
a clear warning, and optionally add a brief note recommending migration to
LocalResolutionProvider in the same warning.
- Around line 135-175: The read guard on self.last_updated in on_demand_config
is being acquired in a fragile empty block and could accidentally be held across
the async get_config call, risking a deadlock when later acquiring write locks
(e.g., cached_config or last_updated write). Fix by explicitly scoping or
dropping the read guard before any await: capture the value (e.g., let last = {
let g = self.last_updated.read().await; *g }; or call drop(last_updated) right
before the match that may call self.get_config().await) so no RwLock read guard
remains held when calling get_config or when later acquiring mutable locks
(references: on_demand_config, self.last_updated, self.get_config,
self.cached_config).
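The guard-scoping fix for `on_demand_config` can be sketched like this. The real code uses `tokio::sync::RwLock` and async methods; this sync, std-only sketch with a hypothetical `Provider` shows the same pattern of copying the value out in a tight scope so no read guard is held across the later fetch or write-lock acquisition:

```rust
use std::sync::RwLock;
use std::time::{SystemTime, UNIX_EPOCH};

// Hypothetical stand-in for the provider state.
struct Provider {
    last_updated: RwLock<u64>,
}

impl Provider {
    fn on_demand_config(&self) -> u64 {
        // Copy the value out and drop the read guard in a tight scope,
        // so no guard is held across the (async, in the real code) fetch
        // or across the write-lock acquisition below.
        let last = { *self.last_updated.read().unwrap() };

        let now = SystemTime::now()
            .duration_since(UNIX_EPOCH)
            .unwrap()
            .as_secs();
        if now > last {
            // Safe: no read guard is still held while taking the write lock.
            *self.last_updated.write().unwrap() = now;
        }
        *self.last_updated.read().unwrap()
    }
}
```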

In `@crates/superposition_provider/src/data_source.rs`:
- Around line 84-89: The doc comment for the trait method `watch()` is too
specific about "Set up a file watcher" while the trait is transport-agnostic;
update the comment on `fn watch(&self) ->
Result<Option<crate::types::WatchStream>>` to describe a generic
change-notification/watch stream (e.g., "Return an optional stream of change
notifications; implementations may use file watchers, SSE, WebSockets, polling,
etc.") and keep the note about returning `Ok(None)` when watching is not
supported.

In `@crates/superposition_provider/src/data_source/file.rs`:
- Around line 100-132: The current watch() silently replaces an existing watcher
which closes any previous WatchStream; update watch() to check the
Arc<Mutex<Option<...>>> (self.watcher) before creating a new watcher and return
a SuperpositionError::ConfigError if a watcher is already active instead of
overwriting it, i.e., lock self.watcher, if guard.is_some() return Err(...) with
a clear message like "watcher already active", otherwise proceed to create/store
the watcher and return the WatchStream; reference the watch() function, the
self.watcher mutex/guard, and the WatchStream type when making the change.
- Around line 51-74: The filtering logic in fetch_filtered_config is duplicated
with HttpDataSource::fetch_filtered_config; extract a single helper (e.g., a
free fn apply_filters(config: Config, context: Option<&Map<String, Value>>,
prefix_filter: Option<&[String]>) -> Config or an associated method on
ConfigData) and call it from both fetch_filtered_config implementations; the
helper should apply config.filter_by_dimensions when context is Some and
non-empty and apply config.filter_by_prefix when prefix_filter is Some and
non-empty (creating a HashSet<String> via prefixes.iter().cloned() as done now),
then return the filtered Config which you wrap with ConfigData::new in each
caller.
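The shared-helper extraction suggested above might look like the following sketch. The real `Config` type carries dimension metadata and a `filter_by_dimensions` step; here a plain `HashMap` stands in, and only the prefix filter is shown:

```rust
use std::collections::HashMap;

// Hypothetical simplified Config; the real crate's Config also supports
// filter_by_dimensions with an evaluation context.
type Config = HashMap<String, String>;

// One shared helper both FileDataSource and HttpDataSource can call,
// instead of duplicating the filtering in each fetch_filtered_config.
fn apply_filters(config: Config, prefix_filter: Option<&[String]>) -> Config {
    match prefix_filter {
        Some(prefixes) if !prefixes.is_empty() => config
            .into_iter()
            .filter(|(k, _)| prefixes.iter().any(|p| k.starts_with(p)))
            .collect(),
        _ => config,
    }
}
```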

In `@crates/superposition_provider/src/data_source/http.rs`:
- Around line 23-31: The create_client() helper currently builds a new expensive
Client on each call (used by fetch_config and fetch_experiments_and_groups);
instead, construct the Client once in HttpDataSource::new, add a field on
HttpDataSource (e.g., client: Client) to store it, initialize that field using
the SdkConfig building logic now inside create_client, and update fetch_config
and fetch_experiments_and_groups to reuse self.client.clone() rather than
calling create_client(); remove or repurpose create_client() so client
construction is not repeated and rely on the Arc-wrapped Client cloning for
cheap copies.
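The construct-once pattern recommended above can be sketched with a hypothetical stand-in client (the real code builds an AWS `SdkConfig`-backed client, which is what makes repeated construction expensive):

```rust
// Hypothetical stand-in for an expensive-to-build HTTP client.
#[derive(Clone)]
struct Client {
    endpoint: String,
}

struct HttpDataSource {
    // Built once in new(); per-request use is a cheap clone/borrow.
    client: Client,
}

impl HttpDataSource {
    fn new(endpoint: &str) -> Self {
        let client = Client {
            endpoint: endpoint.to_string(),
        };
        Self { client }
    }

    fn fetch_config(&self) -> String {
        // Reuse self.client instead of rebuilding a client on every call.
        format!("GET {}/config", self.client.endpoint)
    }
}
```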

In `@crates/superposition_provider/src/lib.rs`:
- Around line 10-16: The root issue is unsafe glob re-exports (pub use
client::*, pub use provider::*, pub use types::*, pub use traits::*); replace
these with selective, explicit re-exports to avoid namespace pollution. Open
lib.rs and for each module (client, provider, types, traits) replace the glob
with explicit symbol lists (e.g., pub use client::{ClientType, ClientError,
new_client} and similarly for provider::{ProviderTrait, ProviderImpl},
types::{TypeA, TypeB}, traits::{TraitA, TraitB}) — include only the public items
intended for consumers and update imports where needed; when new public items
are added in the future, add them explicitly or re-export the module as pub mod
client to keep origins clear.

In `@crates/superposition_provider/src/local_provider.rs`:
- Around line 519-535: The initialize() method redundantly sets self.status to
ProviderStatus::Ready after calling init(), but init() already sets the status
to Ready on success; remove the duplicate status write in initialize() (the
block that sets *status = ProviderStatus::Ready after init()) so status
transitions are managed in a single place (init()), leaving initialize() to only
set NotReady, call init(), handle the error path by setting
ProviderStatus::Error, and return.
- Around line 257-295: The polling loop in start_polling unconditionally calls
primary.fetch_active_experiments(), unlike do_refresh(), so add the same guard:
call primary.supports_experiments().await (or use the same supports_experiments
check used in do_refresh()) and only run the fetch_active_experiments() branch
when that returns true; otherwise skip it (leaving cached_experiments
untouched). Locate start_polling and modify the experiments refresh section to
mirror the conditional logic in do_refresh(), using
cached_experiments.write().await to set Some(data) when fetched.
- Around line 537-686: The resolve_* methods in LocalResolutionProvider
(resolve_bool_value, resolve_string_value, resolve_int_value,
resolve_float_value, resolve_struct_value) duplicate logic present in
SuperpositionAPIProvider; extract the common flow into a shared helper (e.g., a
private async method like resolve_typed_value or a default trait method on the
provider trait) that: calls resolve_all_features(evaluation_context).await,
checks config.get(flag_key), returns FlagNotFound, performs type conversion (use
a generic conversion closure or Result-returning converter param), and maps
conversion errors to TypeMismatch (special-case
ConversionUtils::serde_value_to_struct_value for StructValue). Do the same for
the duplicated status() logic by moving it to the trait default or a shared
helper used by both providers; update the provider methods to call the new
helper to eliminate duplication while preserving the exact error codes/messages
and logging calls (error!).
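The `resolve_typed_value` helper proposed above could take the shape of this std-only sketch, where the config and error types are simplified stand-ins and each `resolve_*_value` method passes its own converter closure:

```rust
use std::collections::HashMap;

// Simplified stand-ins for the real error codes and config value type.
#[derive(Debug, PartialEq)]
enum ErrorCode {
    FlagNotFound,
    TypeMismatch,
}

// Shared generic helper: lookup + conversion in one place. Each
// resolve_*_value wrapper supplies a converter for its target type.
fn resolve_typed_value<T>(
    config: &HashMap<String, String>,
    flag_key: &str,
    convert: impl Fn(&str) -> Option<T>,
) -> Result<T, ErrorCode> {
    let raw = config.get(flag_key).ok_or(ErrorCode::FlagNotFound)?;
    convert(raw).ok_or(ErrorCode::TypeMismatch)
}
```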

In `@crates/superposition_provider/src/remote_provider.rs`:
- Around line 495-505: The status() method currently matches every
ProviderStatus variant just to return the same value; simplify by returning the
read guard value directly instead of exhaustive matching: after successfully
calling self.status.try_read(), return either *status (if ProviderStatus is
Copy) or status.clone() (if it implements Clone) and preserve the Err(_) branch
returning ProviderStatus::NotReady; update the body of status() to use the
direct dereference/clone of the guard from self.status.try_read().
- Around line 147-155: The create_client() function currently builds a new
Client on every cache miss (called from resolve_remote()); refactor by creating
the Client once and storing it as a field on the provider struct (e.g., add a
client: Client field to the RemoteProvider struct), initialize that client in
the constructors/new()/with_cache() using the SdkConfig builder code from
create_client(), and remove the create_client() method so resolve_remote()
reuses self.client to benefit from connection pooling and avoid repeated
recreation.

In `@crates/superposition_provider/src/utils.rs`:
- Around line 580-598: The two functions evaluation_context_to_query and
context_to_dimension_data duplicate the iteration over ctx.custom_fields and
calls to Self::convert_evaluation_context_value_to_serde_value; extract that
shared logic into a private helper (e.g., fn convert_custom_fields(ctx:
&open_feature::EvaluationContext) -> Map<String, Value>) that returns the
Map<String, Value>, then have evaluation_context_to_query return (helper(ctx),
ctx.targeting_key.clone()) and have context_to_dimension_data call the same
helper and then insert the targeting_key into the returned map as it currently
does; update both functions to use the new helper to centralize the conversion
logic.
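The shared `convert_custom_fields` helper suggested above might look like this sketch, with a simplified stand-in `EvaluationContext` (the real one comes from the `open_feature` crate, and the real conversion goes through `ConversionUtils`; plain strings stand in for values here):

```rust
use std::collections::BTreeMap;

// Hypothetical simplified context; fields are plain strings here.
struct EvaluationContext {
    targeting_key: Option<String>,
    custom_fields: BTreeMap<String, String>,
}

// The single shared conversion point both callers use.
fn convert_custom_fields(ctx: &EvaluationContext) -> BTreeMap<String, String> {
    ctx.custom_fields.clone()
}

fn evaluation_context_to_query(
    ctx: &EvaluationContext,
) -> (BTreeMap<String, String>, Option<String>) {
    (convert_custom_fields(ctx), ctx.targeting_key.clone())
}

fn context_to_dimension_data(ctx: &EvaluationContext) -> BTreeMap<String, String> {
    let mut map = convert_custom_fields(ctx);
    // Only this caller folds the targeting key into the map.
    if let Some(tk) = &ctx.targeting_key {
        map.insert("targeting_key".to_string(), tk.clone());
    }
    map
}
```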

In `@docs/plans/2026-02-14-configuration-resolver-design.md`:
- Around line 18-32: The fenced code block that shows the module layout (starts
with ``` and the text "crates/superposition_provider/src/") needs a language
specifier to satisfy markdownlint MD040; change the opening fence from ``` to
```text so the plain-text diagram is marked correctly, leaving the rest of the
block (the tree lines like "├── lib.rs", "types.rs", etc.) unchanged.


Comment on lines +33 to +37
```rust
loop {
    let config = provider.resolve_all_features(&context).await.unwrap();
    println!("Config: {:?}", config);
    tokio::time::sleep(tokio::time::Duration::from_secs(2)).await;
}
```
⚠️ Potential issue | 🟡 Minor

unwrap() in a loop will crash on transient file parse errors.

When users edit the TOML file (as the example encourages), intermediate saves may produce temporarily invalid content. Consider using match or unwrap_or_else to log errors and continue the loop, which better demonstrates the watch strategy's resilience.

Suggested improvement
```diff
     loop {
-        let config = provider.resolve_all_features(&context).await.unwrap();
-        println!("Config: {:?}", config);
+        match provider.resolve_all_features(&context).await {
+            Ok(config) => println!("Config: {:?}", config),
+            Err(e) => eprintln!("Resolution error (will retry): {}", e),
+        }
         tokio::time::sleep(tokio::time::Duration::from_secs(2)).await;
     }
```

Comment on lines +84 to +104
```rust
fn cache_key(context: &EvaluationContext) -> String {
    let mut parts: Vec<String> = Vec::new();

    // Include targeting_key
    if let Some(ref tk) = context.targeting_key {
        parts.push(format!("tk={}", tk));
    }

    // Include sorted custom_fields for deterministic keys
    let mut field_keys: Vec<&String> = context.custom_fields.keys().collect();
    field_keys.sort();
    for k in field_keys {
        if let Some(v) = context.custom_fields.get(k) {
            let serde_val =
                ConversionUtils::convert_evaluation_context_value_to_serde_value(v);
            parts.push(format!("{}={}", k, serde_val));
        }
    }

    parts.join("|")
}
```
⚠️ Potential issue | 🟡 Minor

Cache key is susceptible to collisions when field keys or values contain the delimiter characters.

The key format tk=...|field=value|... uses | and = as delimiters without escaping. If a targeting key, field name, or field value contains these characters, distinct contexts could produce the same cache key. For example, context {a: "x|b=y"} and context {a: "x", b: "y"} would collide.

Consider hashing the final key (e.g., SHA-256 or using a Hasher) to avoid structural collisions, or escaping the delimiter characters.
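A collision-safe variant can be sketched with the standard library alone. This uses a simplified context (the real `EvaluationContext` comes from the open-feature crate) and `DefaultHasher` as a stand-in for SHA-256; a cryptographic hash would be needed if keys must resist adversarial collisions:

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::BTreeMap;
use std::hash::{Hash, Hasher};

// Hypothetical simplified context: targeting key plus string fields.
// Hashing length-prefixed components instead of joining with delimiters
// means "x|b=y" in one field cannot collide with two separate fields.
fn cache_key(targeting_key: Option<&str>, fields: &BTreeMap<String, String>) -> String {
    let mut hasher = DefaultHasher::new();
    if let Some(tk) = targeting_key {
        tk.len().hash(&mut hasher);
        tk.hash(&mut hasher);
    }
    for (k, v) in fields {
        // BTreeMap iteration is already sorted, keeping keys deterministic.
        k.len().hash(&mut hasher);
        k.hash(&mut hasher);
        v.len().hash(&mut hasher);
        v.hash(&mut hasher);
    }
    format!("{:016x}", hasher.finish())
}
```

With this scheme the colliding example from the comment — `{a: "x|b=y"}` versus `{a: "x", b: "y"}` — produces distinct keys.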


Comment on lines +1 to +3
```markdown
# Configuration Resolver Implementation Plan

> **For Claude:** REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task.
```
⚠️ Potential issue | 🟡 Minor

Remove AI-agent prompt instruction from shipped documentation.

Line 3 contains a Claude-specific directive (> **For Claude:** REQUIRED SUB-SKILL: Use superpowers:executing-plans...). This is an internal AI-assisted development artifact and should not be committed to the repository's documentation.


Collaborator Author

will remove at the end.


---

```markdown
### Task 1: Define core traits (`traits.rs`)
```
@coderabbitai coderabbitai bot Feb 17, 2026

⚠️ Potential issue | 🟡 Minor

Heading levels skip from h1 to h3.

The document starts with # Configuration Resolver Implementation Plan (h1) and jumps directly to ### Task 1 (h3), skipping h2. This violates heading hierarchy best practices and is flagged by markdownlint (MD001).

Consider using ## Task 1: ... or adding a ## Tasks section header before the task list.

🧰 Tools
🪛 markdownlint-cli2 (0.20.0)

[warning] 15-15: Heading levels should only increment by one level at a time
Expected: h2; Actual: h3

(MD001, heading-increment)


Collaborator Author

will remove at the end.

Contributor

@knutties, sounds good! I'll keep this noted for when you address it at the end. 👍

@ayushjain17
Collaborator

open new PR #930
