solver parameterised pytest #780
base: master
Conversation
Added some more functionality. You can now define more than one solver:

python -m pytest ./tests --solver exact,gurobi

Solver-specific tests will be filtered to these two. Non-solver-specific tests will be parametrised with each of these solvers. You can also run on all installed solvers:

python -m pytest ./tests --solver all

Or skip all tests which depend on "solving a model":

python -m pytest ./tests --solver None

Also added a README with instructions on how to use the testsuite, how to use the new decorators and in general how to write new tests.
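For reference, such an option is typically registered through pytest's addoption hook in conftest.py. A minimal sketch, not necessarily the exact code of this PR (which may also validate the names against SolverLookup):

# conftest.py -- minimal sketch of registering the --solver option
def pytest_addoption(parser):
    # accepts a comma-separated list of solver names, "all" or "None"
    parser.addoption(
        "--solver",
        action="store",
        default=None,
        help="comma-separated solver names, 'all', or 'None'",
    )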
Currently a lot of tests have their own solver parametrisation, for example:

SOLVERNAMES = [name for name, solver in SolverLookup.base_solvers() if solver.supported()]

def _generate_inputs(generator):
    # pair every supported solver with every expression produced by the generator
    exprs = []
    for solver in SOLVERNAMES:
        exprs += [(solver, expr) for expr in generator(solver)]
    return exprs

@pytest.mark.parametrize(("solver", "constraint"), list(_generate_inputs(bool_exprs)), ids=str)
def test_bool_constraints(solver, constraint):
    ...

This makes it so that for the default behavior (no
Now examples with optional dependencies are also "skipped": an exception due to missing dependencies causes the test to be ignored. This is a hacky way of not having to label each example script with its dependencies. An additional downside is that when the required dependencies are installed, the test is always run (even when the required solver has been excluded from the --solver selection).
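The mechanism described here could look roughly like the following; run_example is just an illustrative name, not the actual helper used in this PR:

# sketch: skip an example script when one of its optional dependencies is missing
import runpy
import pytest

def run_example(path):
    # execute an example script; if it raises because an optional dependency
    # is not installed, report the test as skipped instead of failed
    try:
        runpy.run_path(path, run_name="__main__")
    except ImportError as exc:
        pytest.skip(f"example {path} skipped, missing optional dependency: {exc}")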
Had to remove the parametrisation on some tests that use
Attempt to parametrise our pytest test-suite. The default behavior stays the same: generic tests get run on the default solver OR-Tools, and the more solver-specific tests / across-solver tests check which backends are available on the current system.

But sometimes you want to just run the testsuite on a single solver, without having to uninstall all other solvers or create a fresh environment. Now you can pass an optional argument --solver, e.g.:

python -m pytest ./tests --solver exact

This will have three consequences:
- generic tests will be run with exact instead of the default ortools
- solver-specific tests targeting a solver other than exact will be filtered out
- across-solver tests will only be instantiated with exact

In general, I opted for filtering instead of skipping tests, so the non-exact tests will not count towards the total number of tests. I believe we should reserve "skipping" for tests which don't get run due to reasons of which we want to inform the user, e.g. missing dependencies which they need to install. When the user provides the --solver option, they already know that tests targeting other solvers won't be run, so it would just clutter the results if we were to skip instead of filter those tests.

To parameterise a unittest class for the "generic" tests, simply decorate it with:
@pytest.mark.usefixtures("solver")

After which self.solver will be available, matching the user-provided solver argument.
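For illustration, a fixture behind this could attach the chosen solver name to the test class via a class-scoped fixture plus pytest_generate_tests. This is a sketch only (it assumes the --solver option sketched above, and omits the handling of "all"/"None"); the actual fixture in this PR may differ:

# conftest.py -- sketch of how self.solver could be made available on test classes
import pytest

@pytest.fixture(scope="class")
def solver(request):
    # request.param is filled in by pytest_generate_tests below
    if request.cls is not None:
        # attach it to the class so unittest-style tests can read self.solver
        request.cls.solver = request.param
    return request.param

def pytest_generate_tests(metafunc):
    # parametrise every test (class) that uses the 'solver' fixture with the
    # solver(s) chosen on the command line, falling back to ortools
    if "solver" in metafunc.fixturenames:
        chosen = metafunc.config.getoption("--solver") or "ortools"
        metafunc.parametrize("solver", chosen.split(","), indirect=True, scope="class")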
All solver-specific tests can now be decorated with:

@pytest.mark.requires_solver("<SOLVER_NAME>")

And will automatically be skipped if a --solver argument has been provided which doesn't match SOLVER_NAME.
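Such a marker can be honoured with a small hook along these lines; this is a sketch, not the literal code of this PR, which may register and check the marker differently:

# conftest.py -- sketch of acting on the requires_solver marker
import pytest

def pytest_configure(config):
    # register the custom marker so pytest doesn't warn about it
    config.addinivalue_line(
        "markers", "requires_solver(name): test needs a specific solver backend")

def pytest_runtest_setup(item):
    marker = item.get_closest_marker("requires_solver")
    if marker is None:
        return
    required = marker.args[0]                      # e.g. "gurobi"
    chosen = item.config.getoption("--solver")
    if chosen is not None and required not in chosen.split(","):
        pytest.skip(f"requires solver {required!r}, but --solver={chosen} was given")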
For the across-solver tests which use generators (those in test_constraints.py), the pytest_collection_modifyitems hook will filter out parameterised pytest functions which have been instantiated with a solver different from the user-provided one. Both the arguments solver and solver_name get filtered on.
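Conceptually, that hook deselects items whose parametrisation carries a different solver than the requested one. A simplified sketch; the hook in this PR may differ, e.g. in how the "all" and "None" values are treated:

# conftest.py -- sketch of the central filtering step
def pytest_collection_modifyitems(config, items):
    chosen = config.getoption("--solver")
    if chosen is None:
        return                                   # default behaviour: keep every item
    wanted = set(chosen.split(","))
    selected, deselected = [], []
    for item in items:
        callspec = getattr(item, "callspec", None)
        params = callspec.params if callspec is not None else {}
        # check both parametrisation argument names used across the test-suite
        solver = params.get("solver", params.get("solver_name"))
        if solver is None or solver in wanted:
            selected.append(item)
        else:
            deselected.append(item)
    if deselected:
        config.hook.pytest_deselected(items=deselected)
        items[:] = selected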
There are still some smaller places (see test_solveAll.py) where cp.SolverLookup.base_solvers() is used more directly, which can't be filtered without making changes to the test itself (not possible with one of the decorators / callback functions).

As a further improvement, it might be possible to merge the following two (Already did it ;) ). So do the skipping when a solver is not available also through the first mark, and skip the tests more centrally in pytest_collection_modifyitems.

Using this parameterisation with a solver different from OR-Tools revealed some issues with our testsuite related to #779