
Conversation

@michaelteboul
Collaborator

Description of the goal of the PR

Description:

Changes this PR introduces (fill it before implementation)

  • A new attention model for basket modelling that uses dot products instead of distances between embeddings

Checklist before requesting a review

  • I have commented my code, particularly in hard-to-understand areas
  • I have typed my code
  • I have created / updated the docstrings
  • I have updated the README, if relevant
  • I have updated the requirements files if a new package is used
  • I have tested my code
  • The CI pipeline passes
  • I have performed a self-review of my code

@gemini-code-assist
Contributor

Summary of Changes

Hello @michaelteboul, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a novel self-attention model, SelfAttentionModelScalar, designed to improve item recommendations within a basket modeling context. The primary innovation is its use of dot products for calculating embedding similarities, moving away from traditional distance-based approaches. This change aims to enhance the model's ability to learn both immediate (short-term) and enduring (long-term) user preferences, with additional mechanisms to account for price and store-specific influences on purchasing decisions.

Highlights

  • New Model Introduction: A new SelfAttentionModelScalar has been added for basket recommendation, which is inspired by existing self-attention models but utilizes dot products instead of distance metrics for embedding similarity.
  • Configurable Latent Sizes: The model allows for configurable latent sizes for short-term, long-term, and price embeddings, providing flexibility in capturing different aspects of user and item preferences.
  • Enhanced Utility Calculation: The utility computation now incorporates optional item intercepts, price effects, and store effects, allowing for a more nuanced understanding of item desirability.
  • Robust Negative Sampling and Loss: The implementation includes a method for sampling negative items that are distinct from already purchased items and the next item (see the sketch below), and the loss function uses binary cross-entropy with L2 regularization for training.
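
To make the sampling step concrete, here is a minimal NumPy sketch in the spirit of the model's get_negative_samples; the helper name and exact signature below are illustrative assumptions, not the PR's actual API.

import numpy as np

def sample_negatives(available_items, purchased_items, next_item, n_samples, rng=None):
    """Draw n_samples item IDs from the available set, excluding basket items and the next item."""
    rng = rng or np.random.default_rng()
    candidates = np.flatnonzero(available_items)  # item IDs with availability == 1
    forbidden = set(np.asarray(purchased_items).tolist()) | {next_item}
    candidates = np.array([i for i in candidates if i not in forbidden])
    return rng.choice(candidates, size=n_samples, replace=False)

# Example: 10 items, all available, basket {1, 4}, next item 7
negatives = sample_negatives(np.ones(10), np.array([1, 4]), 7, n_samples=3)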


@gemini-code-assist bot left a comment


Code Review

This pull request introduces a new SelfAttentionModelScalar for basket modeling, utilizing a dot product for embeddings instead of distance. The changes primarily involve adding this new model class, which inherits from BaseBasketModel. The implementation includes attention mechanisms, short-term and long-term utility calculations, and a custom loss function. While the overall structure is sound, several docstring inaccuracies, shape mismatches in comments, and a critical logical error in the compute_psi method's tf.einsum operations need to be addressed to ensure correctness and maintainability.

if self.store_effects:
    theta_store = tf.gather(self.theta, indices=store_batch)
    # Compute the dot product along the last dimension
    store_preferences = tf.einsum("kj,klj->kl", theta_store, x_item)

high

The tf.einsum operation tf.einsum("kj,klj->kl", theta_store, x_item) is likely incorrect given the shapes of theta_store and x_item. theta_store is (batch_size, d) and x_item is (batch_size, d). To compute a batch-wise dot product, you should use tf.reduce_sum(theta_store * x_item, axis=-1) or tf.einsum("bd,bd->b", theta_store, x_item). The current einsum pattern implies x_item has an extra dimension L which it does not. This will lead to runtime errors or incorrect calculations.

Suggested change
store_preferences = tf.einsum("kj,klj->kl", theta_store, x_item)
store_preferences = tf.reduce_sum(theta_store * x_item, axis=-1)
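
For reference, a standalone check (not from the PR) that the two suggested batch-wise forms agree on rank-2 inputs:

import tensorflow as tf

# theta_store and x_item both have shape (batch_size, d), as described above
batch_size, d = 4, 8
theta_store = tf.random.normal((batch_size, d))
x_item = tf.random.normal((batch_size, d))

# Batch-wise dot product: one scalar per row, output shape (batch_size,)
via_einsum = tf.einsum("bd,bd->b", theta_store, x_item)
via_reduce_sum = tf.reduce_sum(theta_store * x_item, axis=-1)

tf.debugging.assert_near(via_einsum, via_reduce_sum)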

price_effects = (
    -1
    # Compute the dot product along the last dimension
    * tf.einsum("kj,klj->kl", delta_store, beta_item)

high

Similar to the previous comment, the tf.einsum operation tf.einsum("kj,klj->kl", delta_store, beta_item) is incorrect. delta_store is (batch_size, latent_sizes["price"]) and beta_item is (batch_size, latent_sizes["price"]). For a batch-wise dot product, it should be tf.reduce_sum(delta_store * beta_item, axis=-1) or tf.einsum("bp,bp->b", delta_store, beta_item).

Suggested change
* tf.einsum("kj,klj->kl", delta_store, beta_item)
* tf.reduce_sum(delta_store * beta_item, axis=-1)

    attention_weights = tf.ones_like(scaled_scores)  # Shape: (batch_size, L, 1)

else:
    # Masque de la diagonale, désactivé pour l'instant

medium

The comment Masque de la diagonale, désactivé pour l'instant is in French. Please translate it to English for consistency. Also, the code scaled_scores = tf.where(diag_mask, tf.constant(-np.inf, dtype=scaled_scores.dtype), scaled_scores) does apply the diagonal mask, which contradicts the 'désactivé' (deactivated) part of the comment. Please either remove the masking code if it's truly deactivated or update the comment to reflect that it is active.
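
For illustration, a minimal standalone sketch of an active diagonal mask, assuming scaled_scores has shape (batch_size, L, L); names mirror the snippet above, not the full PR:

import numpy as np
import tensorflow as tf

batch_size, L = 2, 5
scaled_scores = tf.random.normal((batch_size, L, L))

# Boolean mask that is True on the diagonal of each (L, L) score matrix
diag_mask = tf.cast(tf.eye(L, batch_shape=[batch_size]), tf.bool)

# Setting diagonal scores to -inf zeroes them out after the softmax,
# so each basket item cannot attend to itself
scaled_scores = tf.where(diag_mask, tf.constant(-np.inf, dtype=scaled_scores.dtype), scaled_scores)
attention_weights = tf.nn.softmax(scaled_scores, axis=-1)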

self,
n_items: int,
n_users: int,
n_stores: int,

medium

The n_stores parameter is present in the function signature but is not documented in the instantiate method's docstring. Please add its description.

Comment on lines +148 to +149
shape=(n_stores, self.d)
), # Dimension for 1 item: latent_sizes["preferences"]

medium

The comment # Dimension for 1 item: latent_sizes["preferences"] is misleading. latent_sizes does not contain a 'preferences' key, and theta's shape is (n_stores, self.d) which corresponds to latent_sizes["short_term"].

Comment on lines +537 to +542
basket_batch: np.ndarray
Batch of baskets (ID of items already in the baskets) (arrays) for each purchased item
Shape must be (batch_size, max_basket_size)
store_batch: np.ndarray
Batch of store IDs (integers) for each purchased item
Shape must be (batch_size,)

medium

The basket_batch parameter is listed in the docstring but is not used in the compute_psi function signature. This is misleading and should be removed from the docstring.

"""
store_batch = tf.cast(store_batch, dtype=tf.int32)
price_batch = tf.cast(price_batch, dtype=tf.float32)
x_item = tf.gather(self.X, indices=item_batch) # Shape: (batch_size, L, d)

medium

The comment Shape: (batch_size, L, d) for x_item is incorrect. Since item_batch has shape (batch_size,) and self.X has shape (n_items, d), x_item will have shape (batch_size, d).
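
A quick standalone check of the claimed shape (values are arbitrary; names mirror the snippet):

import tensorflow as tf

n_items, d = 10, 3
X = tf.random.normal((n_items, d))      # item embedding table, shape (n_items, d)
item_batch = tf.constant([1, 5, 2, 7])  # shape (batch_size,) with batch_size = 4

x_item = tf.gather(X, indices=item_batch)
print(x_item.shape)  # (4, 3), i.e. (batch_size, d); no L dimension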

Comment on lines +70 to +74
Whether to include item intercept in the model, by default True
price_effects: bool, optional
Whether to include price effects in the model, by default True
epsilon_price: float, optional
Epsilon value to add to prices to avoid NaN values (log(0)), by default 1e-4

medium

The parameters store_effects, l2_regularization, and dropout_rate are defined in the __init__ signature but are missing from the docstring. Please add their descriptions for better clarity.

Comment on lines +49 to +50
short_term_weight : float
Weighting factor between long-term and short-term preferences.

medium

The docstring refers to short_term_weight, but the actual parameter name is short_term_ratio. Please correct this mismatch.

Comment on lines +598 to +641
    ) -> tuple[tf.Variable]:
        """Compute total loss.

        Parameters
        ----------
        item_batch: np.ndarray
            Batch of purchased items ID (integers)
            Shape must be (batch_size,)
        basket_batch: np.ndarray
            Batch of baskets (ID of items already in the baskets) (arrays) for each purchased item
            Shape must be (batch_size, max_basket_size)
        future_batch: np.ndarray
            Batch of items to be purchased in the future (ID of items not yet in the
            basket) (arrays) for each purchased item
            Shape must be (batch_size, max_basket_size)
            Here for signature reasons, unused for this model
        store_batch: np.ndarray
            Batch of store IDs (integers) for each purchased item
            Shape must be (batch_size,)
        week_batch: np.ndarray
            Batch of week numbers (integers) for each purchased item
            Shape must be (batch_size,)
        price_batch: np.ndarray
            Batch of prices (floats) for each purchased item
            Shape must be (batch_size,)
        available_item_batch: np.ndarray
            List of availability matrices (indicating the availability (1) or not (0)
            of the products) (arrays) for each purchased item
            Shape must be (batch_size, n_items)
        user_batch: np.ndarray
            Batch of user IDs (integers) for each purchased item
            Shape must be (batch_size,)
        is_training: bool
            Whether the model is in training mode or not, to activate dropout if needed.
            True by default, because compute_batch_loss is only used during training.

        Returns
        -------
        tf.Variable
            Value of the loss for the batch (Hinge loss),
            Shape must be (1,)
        _: None
            Placeholder to match the signature of the parent class method
        """

medium

The return type hint for compute_batch_loss is tuple[tf.Variable], but the function returns a tuple of two tf.Tensors (loss and loglikelihood). It should be tuple[tf.Tensor, tf.Tensor].

Additionally, the docstring states _: None for the second return value, but loglikelihood is actually returned. Please update the docstring to reflect this.

Finally, epsilon = 0.0 is used when computing loglikelihood. If tf.sigmoid(...) evaluates to 0, tf.math.log(0) will result in NaN. A small positive epsilon (e.g., 1e-8) should be used to prevent this.

    ) -> tuple[tf.Tensor, tf.Tensor]:
        """Compute total loss.

        Parameters
        ----------
        item_batch: np.ndarray
            Batch of purchased items ID (integers)
            Shape must be (batch_size,)
        basket_batch: np.ndarray
            Batch of baskets (ID of items already in the baskets) (arrays) for each purchased item
            Shape must be (batch_size, max_basket_size)
        future_batch: np.ndarray
            Batch of items to be purchased in the future (ID of items not yet in the
            basket) (arrays) for each purchased item
            Shape must be (batch_size, max_basket_size)
            Here for signature reasons, unused for this model
        store_batch: np.ndarray
            Batch of store IDs (integers) for each purchased item
            Shape must be (batch_size,)
        week_batch: np.ndarray
            Batch of week numbers (integers) for each purchased item
            Shape must be (batch_size,)
        price_batch: np.ndarray
            Batch of prices (floats) for each purchased item
            Shape must be (batch_size,)
        available_item_batch: np.ndarray
            List of availability matrices (indicating the availability (1) or not (0)
            of the products) (arrays) for each purchased item
            Shape must be (batch_size, n_items)
        user_batch: np.ndarray
            Batch of user IDs (integers) for each purchased item
            Shape must be (batch_size,)
        is_training: bool
            Whether the model is in training mode or not, to activate dropout if needed.
            True by default, because compute_batch_loss is only used during training.

        Returns
        -------
        batch_loss: tf.Tensor
            Value of the loss for the batch (Hinge loss),
            Shape must be (1,)
        loglikelihood: tf.Tensor
            Computed log-likelihood of the batch of items
            Approximated by difference of utilities between positive and negative samples
            Shape must be (1,)
        """
        _ = future_batch  # Unused for this model
        batch_size = len(item_batch)

        negative_samples = tf.stack(
            [
                self.get_negative_samples(
                    available_items=available_item_batch[idx],
                    purchased_items=basket_batch[idx],
                    next_item=item_batch[idx],
                    n_samples=self.n_negative_samples,
                )
                for idx in range(batch_size)
            ],
            axis=0,
        )  # Shape: (batch_size, n_negative_samples)

        item_batch = tf.cast(item_batch, tf.int32)
        negative_samples = tf.cast(negative_samples, tf.int32)

        augmented_item_batch = tf.cast(
            tf.concat([tf.expand_dims(item_batch, axis=-1), negative_samples], axis=1),
            dtype=tf.int32,
        )  # Shape: (batch_size, 1 + n_negative_samples)

        basket_batch_ragged = tf.cast(
            tf.ragged.boolean_mask(basket_batch, basket_batch != -1),
            dtype=tf.int32,
        )
        basket_batch = basket_batch_ragged.to_tensor(self.n_items)
        augmented_price_batch = tf.gather(
            params=price_batch, indices=augmented_item_batch, batch_dims=1
        )
        all_utilities = self.compute_batch_utility(
            item_batch=augmented_item_batch,
            basket_batch=basket_batch,
            store_batch=store_batch,
            week_batch=week_batch,
            price_batch=augmented_price_batch,
            available_item_batch=available_item_batch,
            user_batch=user_batch,
            is_training=is_training,
        )  # Shape: (batch_size, 1 + n_negative_samples)

        positive_samples_utility = tf.gather(params=all_utilities, indices=[0], axis=1)
        negative_samples_utility = tf.gather(
            params=all_utilities, indices=tf.range(1, self.n_negative_samples + 1), axis=1
        )  # (batch_size, n_negative_samples)

        ridge_regularization = self.l2_regularization * tf.add_n(
            [tf.nn.l2_loss(weight) for weight in self.trainable_weights]
        )
        epsilon = 1e-8

@github-actions

github-actions bot commented Dec 2, 2025

Coverage

Coverage Report for Python 3.9
File                                 Stmts   Miss   Cover   Missing
choice_learn
   __init__.py                           2      0    100%
   tf_ops.py                            62      1     98%   283
choice_learn/basket_models
   __init__.py                           4      0    100%
   alea_carta.py                       148     22     85%   86–90, 92–96, 98–102, 106, 109, 131, 159, 308, 431–455
   base_basket_model.py                235     27     89%   111–112, 123, 141, 185, 255, 377, 485, 585–587, 676, 762, 772, 822–830, 891–894, 934–935
   basic_attention_model.py             89      4     96%   424, 427, 433, 440
   self_attention_model.py             133      9     93%   71, 73, 75, 450–454, 651
   self_attention_model_scalar.py      164    164      0%   3–735
   shopper.py                          184      9     95%   130, 159, 325, 345, 360, 363, 377, 489, 618
choice_learn/basket_models/data
   __init__.py                           2      0    100%
   basket_dataset.py                   190     30     84%   74–77, 295–297, 407, 540–576, 636, 658–661, 700–705, 790–801, 849
   preprocessing.py                     94     78     17%   43–45, 128–364
choice_learn/basket_models/datasets
   __init__.py                           3      0    100%
   bakery.py                            38      3     92%   47, 51, 61
   synthetic_dataset.py                 81      6     93%   62, 194–199, 247
choice_learn/basket_models/utils
   __init__.py                           0      0    100%
   permutation.py                       22      1     95%   37
choice_learn/data
   __init__.py                           3      0    100%
   choice_dataset.py                   649     33     95%   198, 250, 283, 421, 463–464, 589, 724, 738, 840, 842, 937, 957–961, 1140, 1159–1161, 1179–1181, 1209, 1214, 1223, 1240, 1281, 1293, 1307, 1346, 1361, 1366, 1395, 1408, 1443–1444
   indexer.py                          241     23     90%   20, 31, 45, 60–67, 202–204, 219–230, 265, 291, 582
   storage.py                          161      6     96%   22, 33, 51, 56, 61, 71
   store.py                             72     72      0%   3–275
choice_learn/datasets
   __init__.py                           4      0    100%
   base.py                             400      5     99%   42–43, 153–154, 714
   expedia.py                          102     83     19%   37–301
   tafeng.py                            49      0    100%
choice_learn/datasets/data
   __init__.py                           0      0    100%
choice_learn/models
   __init__.py                          14      2     86%   15–16
   base_model.py                       303     21     93%   144, 186, 283, 291, 297, 306, 346, 353, 382, 401, 432–433, 442–443, 544, 546, 562, 566, 568, 691–692
   baseline_models.py                   49      0    100%
   conditional_logit.py                269     26     90%   49, 52, 54, 85, 88, 91–95, 98–102, 136, 206, 212–216, 351, 388, 445, 520–526, 651, 685, 822, 826
   halo_mnl.py                         124      2     98%   186, 374
   latent_class_base_model.py          286     39     86%   55–61, 273–279, 288, 325–330, 497–500, 605, 624, 665–701, 715, 720, 751–752, 774–775, 869–870, 974
   latent_class_mnl.py                  62      6     90%   257–261, 296
   learning_mnl.py                      67      3     96%   157, 182, 188
   nested_logit.py                     291     12     96%   55, 77, 160, 269, 351, 484, 530, 600, 679, 848, 900, 904
   reslogit.py                         132      6     95%   285, 360, 369, 374, 382, 432
   rumnet.py                           236      3     99%   748–751, 982
   simple_mnl.py                       139      6     96%   167, 275, 347, 355, 357, 359
   tastenet.py                          94      3     97%   142, 180, 188
choice_learn/toolbox
   __init__.py                           0      0    100%
   assortment_optimizer.py              27      6     78%   28–30, 93–95, 160–162
   gurobi_opt.py                       236    236      0%   3–675
   or_tools_opt.py                     230     11     95%   103, 107, 296–305, 315, 319, 607, 611
choice_learn/utils
   metrics.py                           85     43     49%   74, 126–130, 147–166, 176, 190–199, 211–232, 242
TOTAL                                  5776   1001     83%

Tests   Skipped   Failures   Errors   Time
221     0 💤      0 ❌        0 🔥     8m 7s ⏱️

@github-actions

github-actions bot commented Dec 2, 2025

Coverage

Coverage Report for Python 3.11
File                                 Stmts   Miss   Cover   Missing
choice_learn
   __init__.py                           2      0    100%
   tf_ops.py                            62      1     98%   283
choice_learn/basket_models
   __init__.py                           4      0    100%
   alea_carta.py                       148     22     85%   86–90, 92–96, 98–102, 106, 109, 131, 159, 308, 431–455
   base_basket_model.py                235     27     89%   111–112, 123, 141, 185, 255, 377, 485, 585–587, 676, 762, 772, 822–830, 891–894, 934–935
   basic_attention_model.py             89      4     96%   424, 427, 433, 440
   self_attention_model.py             133      9     93%   71, 73, 75, 450–454, 651
   self_attention_model_scalar.py      164    164      0%   3–735
   shopper.py                          184      9     95%   130, 159, 325, 345, 360, 363, 377, 489, 618
choice_learn/basket_models/data
   __init__.py                           2      0    100%
   basket_dataset.py                   190     30     84%   74–77, 295–297, 407, 540–576, 636, 658–661, 700–705, 790–801, 849
   preprocessing.py                     94     78     17%   43–45, 128–364
choice_learn/basket_models/datasets
   __init__.py                           3      0    100%
   bakery.py                            38      3     92%   47, 51, 61
   synthetic_dataset.py                 81      6     93%   62, 194–199, 247
choice_learn/basket_models/utils
   __init__.py                           0      0    100%
   permutation.py                       22      1     95%   37
choice_learn/data
   __init__.py                           3      0    100%
   choice_dataset.py                   649     33     95%   198, 250, 283, 421, 463–464, 589, 724, 738, 840, 842, 937, 957–961, 1140, 1159–1161, 1179–1181, 1209, 1214, 1223, 1240, 1281, 1293, 1307, 1346, 1361, 1366, 1395, 1408, 1443–1444
   indexer.py                          241     23     90%   20, 31, 45, 60–67, 202–204, 219–230, 265, 291, 582
   storage.py                          161      6     96%   22, 33, 51, 56, 61, 71
   store.py                             72     72      0%   3–275
choice_learn/datasets
   __init__.py                           4      0    100%
   base.py                             400      5     99%   42–43, 153–154, 714
   expedia.py                          102     83     19%   37–301
   tafeng.py                            49      0    100%
choice_learn/datasets/data
   __init__.py                           0      0    100%
choice_learn/models
   __init__.py                          14      2     86%   15–16
   base_model.py                       303     21     93%   144, 186, 283, 291, 297, 306, 346, 353, 382, 401, 432–433, 442–443, 544, 546, 562, 566, 568, 691–692
   baseline_models.py                   49      0    100%
   conditional_logit.py                269     26     90%   49, 52, 54, 85, 88, 91–95, 98–102, 136, 206, 212–216, 351, 388, 445, 520–526, 651, 685, 822, 826
   halo_mnl.py                         124      2     98%   186, 374
   latent_class_base_model.py          286     39     86%   55–61, 273–279, 288, 325–330, 497–500, 605, 624, 665–701, 715, 720, 751–752, 774–775, 869–870, 974
   latent_class_mnl.py                  62      6     90%   257–261, 296
   learning_mnl.py                      67      3     96%   157, 182, 188
   nested_logit.py                     291     12     96%   55, 77, 160, 269, 351, 484, 530, 600, 679, 848, 900, 904
   reslogit.py                         132      6     95%   285, 360, 369, 374, 382, 432
   rumnet.py                           236      3     99%   748–751, 982
   simple_mnl.py                       139      6     96%   167, 275, 347, 355, 357, 359
   tastenet.py                          94      3     97%   142, 180, 188
choice_learn/toolbox
   __init__.py                           0      0    100%
   assortment_optimizer.py              27      6     78%   28–30, 93–95, 160–162
   gurobi_opt.py                       238    238      0%   3–675
   or_tools_opt.py                     230     11     95%   103, 107, 296–305, 315, 319, 607, 611
choice_learn/utils
   metrics.py                           85     43     49%   74, 126–130, 147–166, 176, 190–199, 211–232, 242
TOTAL                                  5778   1003     83%

Tests   Skipped   Failures   Errors   Time
221     0 💤      0 ❌        0 🔥     8m 37s ⏱️

@github-actions

github-actions bot commented Dec 2, 2025

Coverage

Coverage Report for Python 3.10
File                                 Stmts   Miss   Cover   Missing
choice_learn
   __init__.py                           2      0    100%
   tf_ops.py                            62      1     98%   283
choice_learn/basket_models
   __init__.py                           4      0    100%
   alea_carta.py                       148     22     85%   86–90, 92–96, 98–102, 106, 109, 131, 159, 308, 431–455
   base_basket_model.py                235     27     89%   111–112, 123, 141, 185, 255, 377, 485, 585–587, 676, 762, 772, 822–830, 891–894, 934–935
   basic_attention_model.py             89      4     96%   424, 427, 433, 440
   self_attention_model.py             133      9     93%   71, 73, 75, 450–454, 651
   self_attention_model_scalar.py      164    164      0%   3–735
   shopper.py                          184      9     95%   130, 159, 325, 345, 360, 363, 377, 489, 618
choice_learn/basket_models/data
   __init__.py                           2      0    100%
   basket_dataset.py                   190     30     84%   74–77, 295–297, 407, 540–576, 636, 658–661, 700–705, 790–801, 849
   preprocessing.py                     94     78     17%   43–45, 128–364
choice_learn/basket_models/datasets
   __init__.py                           3      0    100%
   bakery.py                            38      3     92%   47, 51, 61
   synthetic_dataset.py                 81      6     93%   62, 194–199, 247
choice_learn/basket_models/utils
   __init__.py                           0      0    100%
   permutation.py                       22      1     95%   37
choice_learn/data
   __init__.py                           3      0    100%
   choice_dataset.py                   649     33     95%   198, 250, 283, 421, 463–464, 589, 724, 738, 840, 842, 937, 957–961, 1140, 1159–1161, 1179–1181, 1209, 1214, 1223, 1240, 1281, 1293, 1307, 1346, 1361, 1366, 1395, 1408, 1443–1444
   indexer.py                          241     23     90%   20, 31, 45, 60–67, 202–204, 219–230, 265, 291, 582
   storage.py                          161      6     96%   22, 33, 51, 56, 61, 71
   store.py                             72     72      0%   3–275
choice_learn/datasets
   __init__.py                           4      0    100%
   base.py                             400      5     99%   42–43, 153–154, 714
   expedia.py                          102     83     19%   37–301
   tafeng.py                            49      0    100%
choice_learn/datasets/data
   __init__.py                           0      0    100%
choice_learn/models
   __init__.py                          14      2     86%   15–16
   base_model.py                       303     21     93%   144, 186, 283, 291, 297, 306, 346, 353, 382, 401, 432–433, 442–443, 544, 546, 562, 566, 568, 691–692
   baseline_models.py                   49      0    100%
   conditional_logit.py                269     26     90%   49, 52, 54, 85, 88, 91–95, 98–102, 136, 206, 212–216, 351, 388, 445, 520–526, 651, 685, 822, 826
   halo_mnl.py                         124      2     98%   186, 374
   latent_class_base_model.py          286     39     86%   55–61, 273–279, 288, 325–330, 497–500, 605, 624, 665–701, 715, 720, 751–752, 774–775, 869–870, 974
   latent_class_mnl.py                  62      6     90%   257–261, 296
   learning_mnl.py                      67      3     96%   157, 182, 188
   nested_logit.py                     291     12     96%   55, 77, 160, 269, 351, 484, 530, 600, 679, 848, 900, 904
   reslogit.py                         132      6     95%   285, 360, 369, 374, 382, 432
   rumnet.py                           236      3     99%   748–751, 982
   simple_mnl.py                       139      6     96%   167, 275, 347, 355, 357, 359
   tastenet.py                          94      3     97%   142, 180, 188
choice_learn/toolbox
   __init__.py                           0      0    100%
   assortment_optimizer.py              27      6     78%   28–30, 93–95, 160–162
   gurobi_opt.py                       238    238      0%   3–675
   or_tools_opt.py                     230     11     95%   103, 107, 296–305, 315, 319, 607, 611
choice_learn/utils
   metrics.py                           85     43     49%   74, 126–130, 147–166, 176, 190–199, 211–232, 242
TOTAL                                  5778   1003     83%

Tests   Skipped   Failures   Errors   Time
221     0 💤      0 ❌        0 🔥     8m 37s ⏱️

@github-actions

github-actions bot commented Dec 2, 2025

Coverage

Coverage Report for Python 3.12
File                                 Stmts   Miss   Cover   Missing
choice_learn
   __init__.py                           2      0    100%
   tf_ops.py                            62      1     98%   283
choice_learn/basket_models
   __init__.py                           4      0    100%
   alea_carta.py                       148     22     85%   86–90, 92–96, 98–102, 106, 109, 131, 159, 308, 431–455
   base_basket_model.py                235     27     89%   111–112, 123, 141, 185, 255, 377, 485, 585–587, 676, 762, 772, 822–830, 891–894, 934–935
   basic_attention_model.py             89      4     96%   424, 427, 433, 440
   self_attention_model.py             133      9     93%   71, 73, 75, 450–454, 651
   self_attention_model_scalar.py      164    164      0%   3–735
   shopper.py                          184      9     95%   130, 159, 325, 345, 360, 363, 377, 489, 618
choice_learn/basket_models/data
   __init__.py                           2      0    100%
   basket_dataset.py                   190     30     84%   74–77, 295–297, 407, 540–576, 636, 658–661, 700–705, 790–801, 849
   preprocessing.py                     94     78     17%   43–45, 128–364
choice_learn/basket_models/datasets
   __init__.py                           3      0    100%
   bakery.py                            38      3     92%   47, 53, 61
   synthetic_dataset.py                 81      6     93%   62, 194–199, 247
choice_learn/basket_models/utils
   __init__.py                           0      0    100%
   permutation.py                       22      1     95%   37
choice_learn/data
   __init__.py                           3      0    100%
   choice_dataset.py                   649     33     95%   198, 250, 283, 421, 463–464, 589, 724, 738, 840, 842, 937, 957–961, 1140, 1159–1161, 1179–1181, 1209, 1214, 1223, 1240, 1281, 1293, 1307, 1346, 1361, 1366, 1395, 1408, 1443–1444
   indexer.py                          241     23     90%   20, 31, 45, 60–67, 202–204, 219–230, 265, 291, 582
   storage.py                          161      6     96%   22, 33, 51, 56, 61, 71
   store.py                             72     72      0%   3–275
choice_learn/datasets
   __init__.py                           4      0    100%
   base.py                             400      5     99%   42–43, 153–154, 714
   expedia.py                          102     83     19%   37–301
   tafeng.py                            49      0    100%
choice_learn/datasets/data
   __init__.py                           0      0    100%
choice_learn/models
   __init__.py                          14      2     86%   15–16
   base_model.py                       303     21     93%   144, 186, 283, 291, 297, 306, 346, 353, 382, 401, 432–433, 442–443, 544, 546, 562, 566, 568, 691–692
   baseline_models.py                   49      0    100%
   conditional_logit.py                269     26     90%   49, 52, 54, 85, 88, 91–95, 98–102, 136, 206, 212–216, 351, 388, 445, 520–526, 651, 685, 822, 826
   halo_mnl.py                         124     18     85%   186, 341, 360, 364–380
   latent_class_base_model.py          286     39     86%   55–61, 273–279, 288, 325–330, 497–500, 605, 624, 665–701, 715, 720, 751–752, 774–775, 869–870, 974
   latent_class_mnl.py                  62      6     90%   257–261, 296
   learning_mnl.py                      67      3     96%   157, 182, 188
   nested_logit.py                     291     12     96%   55, 77, 160, 269, 351, 484, 530, 600, 679, 848, 900, 904
   reslogit.py                         132      6     95%   285, 360, 369, 374, 382, 432
   rumnet.py                           236      3     99%   748–751, 982
   simple_mnl.py                       139      6     96%   167, 275, 347, 355, 357, 359
   tastenet.py                          94      3     97%   142, 180, 188
choice_learn/toolbox
   __init__.py                           0      0    100%
   assortment_optimizer.py              27      6     78%   28–30, 93–95, 160–162
   gurobi_opt.py                       238    238      0%   3–675
   or_tools_opt.py                     230     11     95%   103, 107, 296–305, 315, 319, 607, 611
choice_learn/utils
   metrics.py                           85     43     49%   74, 126–130, 147–166, 176, 190–199, 211–232, 242
TOTAL                                  5778   1019     82%

Tests   Skipped   Failures   Errors   Time
221     0 💤      1 ❌        0 🔥     7m 54s ⏱️
