
[breaking] [Tests] Large refactoring of Test#1404

Closed
odow wants to merge 23 commits into master from od/test_refactor

Conversation

@odow
Member

@odow odow commented Jun 22, 2021

This PR was motivated by issue #1398

Where we are

Our current testing regime is comprehensive, but a bit all over the place.

There's a mix of approaches: functions like MOI.Test.unittest that wrap a whole collection of tests, and standalone ones like MOI.Test.default_status_test that you just need to remember to add. Even the documentation for how to test a solver is complicated (#224)! https://jump.dev/MathOptInterface.jl/dev/submodules/Test/overview/#How-to-test-a-solver

The current design is also bad because it's hard to add new tests.

As evidenced by the documentation (https://jump.dev/MathOptInterface.jl/dev/submodules/Test/overview/#How-to-add-a-test), you need to write a test, then write a test for the test, and then make sure that everything works. It's hard to run a small subset of tests, and it's hard to decide where to put a new test, and where to put the test of the test.

This is also evidenced by the large number of open test issues that aren't getting addressed, ranging from #470 (August 2018!) to #1201 (November 2020). If tests were easier to add, these would get fixed more quickly!

Naming and visibility of each test is also a problem (#1029): if linear10btest fails, what does that mean?

New design

The basic design is runtests, a single entry point to all tests in MOI.

Instead of breaking down tests by files or dictionaries, tests are normal Julia functions with descriptive names that can be excluded or included by the user.

Here's the code to test the MockOptimizer:

MOI.Test.runtests(
    MOIU.MockOptimizer(MOIU.UniversalFallback(MOIU.Model{Float64}())),
    MOI.Test.Config(basis = true),
    exclude = ["test_nonlinear_"],
)

Much better.
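For illustration, a runtests entry point along these lines could be sketched as follows. This is a simplified sketch, not the actual implementation: the real discovery and exclusion logic in MOI.Test may differ, and error handling is omitted.

```julia
# Simplified sketch of a `runtests`-style driver. It discovers every
# function in the enclosing module whose name starts with "test_",
# skips any whose name contains an `exclude` substring, and runs
# the rest against the given model and config.
function runtests(model, config; exclude = String[])
    for name in names(@__MODULE__; all = true)
        test_name = string(name)
        startswith(test_name, "test_") || continue
        any(needle -> occursin(needle, test_name), exclude) && continue
        test_function = getfield(@__MODULE__, name)
        test_function(model, config)
    end
    return
end
```

Because tests are plain functions, excluding a whole family (e.g., every `"test_nonlinear_"` test) is just substring matching on the function name.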

There is also a need for certain tests to modify the model prior to running the test (changing solver parameters/tolerances, for example). That can be achieved by overloading setup_test(::typeof(f), ::MOI.ModelLike, ::MOI.Test.Config) for the particular function f.

Decisions and TODOs

This is horribly breaking, but I think we're okay with that. Sorting out the tests is a high priority.

  • Update documentation
  • Finish migrating tests
    • attributes
  • basic_constraint_tests
    • constraints
    • modifications
    • objectives
    • solve
    • variables
    • contconic
    • contlinear
    • contquadratic
    • intconic
    • intlinear
    • modellike
    • nlp
  • It's also apparent that we would need some style guide for the test names. How descriptive do you get? These integration ones are difficult because they cover lots of topics. The shorter functions would be easier.
  • We need an easier way of writing tests for the tests. Perhaps we can create a callback in runtests that sets up the model (e.g., modifying parameters, etc) and the config? Then we could make sure that every test is actually tested, and it would be simpler to add new tests.
  • Split into smaller PRs? Now that I've worked out the design and migrated the majority of the tests, it seems to work well.

@odow odow added the breaking and Submodule: Tests labels Jun 22, 2021
@odow odow changed the base branch from master to od/whitespace June 22, 2021 01:58
@odow odow changed the base branch from od/whitespace to master June 22, 2021 01:58
@odow odow force-pushed the od/test_refactor branch from e0060c0 to 422dc6a Compare June 22, 2021 02:43
@odow odow changed the title from [breaking] [Tests] Large refactoring of Test/contlinear.jl to [breaking] [Tests] Large refactoring of Test Jun 22, 2021
@odow
Member Author

odow commented Jun 22, 2021

I think I'm making progress on the design.

The main premise is that you write a short test, and then immediately afterwards you define setup_test which handles the overload for Utilities.MockOptimizer:

"""
test_solve_with_upperbound(model::MOI.ModelLike, config::Config)
Test setting the upper bound of a variable, confirm that it solves correctly,
and if `config.duals=true`, check that the dual is computed correctly.
"""
function test_solve_with_upperbound(model::MOI.ModelLike, config::Config)
MOI.empty!(model)
@test MOI.is_empty(model)
MOIU.loadfromstring!(
model,
"""
variables: x
maxobjective: 2.0x
x <= 1.0
x >= 0.0
""",
)
x = MOI.get(model, MOI.VariableIndex, "x")
c1 = MOI.ConstraintIndex{MOI.SingleVariable,MOI.LessThan{Float64}}(x.value)
# We test this after the creation of every `SingleVariable` constraint
# to ensure a good coverage of corner cases.
@test c1.value == x.value
c2 = MOI.ConstraintIndex{MOI.SingleVariable,MOI.GreaterThan{Float64}}(
x.value,
)
@test c2.value == x.value
test_model_solution(
model,
config;
objective_value = 2.0,
variable_primal = [(x, 1.0)],
constraint_primal = [(c1, 1.0), (c2, 1.0)],
constraint_dual = [(c1, -2.0), (c2, 0.0)],
)
return
end
function setup_test(
::typeof(test_solve_with_upperbound),
model::MOIU.MockOptimizer,
::Config,
)
MOIU.set_mock_optimize!(
model,
(mock::MOIU.MockOptimizer) -> MOIU.mock_optimize!(
mock,
MOI.OPTIMAL,
(MOI.FEASIBLE_POINT, [1]),
MOI.FEASIBLE_POINT,
(MOI.SingleVariable, MOI.LessThan{Float64}) => [-2.0],
(MOI.SingleVariable, MOI.GreaterThan{Float64}) => [0.0],
),
)
model.eval_variable_constraint_dual = false
return () -> model.eval_variable_constraint_dual = true
end

That means you only need to write the test in one place, and you don't have to worry about hunting through the tests of the tests to figure out how to test the test!

Then, there's just a single entry point where you can run all of the tests in MOI for your solver:

MOI.Test.runtests(
    MOIU.MockOptimizer(MOIU.UniversalFallback(MOIU.Model{Float64}())),
    MOI.Test.Config(basis = true),
    # Oops! Name clash.
    exclude = [
        "test_linear_mixed_complementarity",
        "test_qp_complementarity_constraint",
    ],
)

odow added 2 commits June 22, 2021 15:54
The basic design is 'runtests', a single entry point to all solver tests.
Instead of breaking down tests by files or dictionaries, tests are
normal Julia functions with descriptive names that can be excluded
or included by the user.
…bles.jl

This makes things much easier: you now write the test and a check with
MockOptimizer in a single place.

It's also a demonstration that this works for unit tests as well as the
larger integration tests.
@odow odow force-pushed the od/test_refactor branch from c4f7da3 to e2da047 Compare June 22, 2021 03:55
Contributor

@dourouc05 dourouc05 left a comment


This would indeed make the Test module much easier to understand, in my opinion!

Test that the [`MOI.SolverName`](@ref) attribute is implemented for `model`.
"""
-function solver_name(model::MOI.ModelLike, config::Config)
+function test_SolverName(model::MOI.ModelLike, config::Config)
Contributor


The name of the function is not terribly consistent with the other function names (the same holds for all attributes). Maybe something like test_attribute(model, ::MOI.SolverName, config), with the same change for setup_test?

It could also be easier for solvers to implement, with a simple loop over attributes that are supported instead of a longer list of function calls.

Member Author


Yeah this is my naming problem. I don't have a good solution.

Specific attributes:

test_attribute_SolverName
test_modification_MultirowChange
test_modification_ScalarCoefficientChange_objective
test_modification_ScalarCoefficientChange_constraint

Problem classes?

test_lp_
test_milp_
test_qp_
test_qcp_
test_nlp_
test_soc_
test_conic_

Ideally, we need a formulaic way of generating test names:

test_ + class + feature + unique identifier?

  • test_lp_TerminationStatus_INFEASIBLE
  • test_lp_TerminationStatus_DUAL_INFEASIBLE
  • test_lp_PrimalStatus_INFEASIBILITY_CERTIFICATE
  • test_milp_integration_knapsack
  • test_soc_VectorOfVariables_extra_terms
  • test_soc_VectorAffineFunction_empty_row

You want to be able to say things like
"run all test_soc_ problems excluding INFEAS"

@blegat
Member

blegat commented Jun 23, 2021

I feel that the tests are easier to maintain if solvers don't need to exclude all tests involving things they don't support.
The set of things a solver supports is fixed, while what it does not support may grow as MOI grows.
So at the beginning of each test, the test will do:

function test_...(model, config, force=false)
    if !(MOI.supports(...) && MOI.supports_constraint(...))
        @test !force
        return
    end
end

So by default, calling runtests with no exclude should work for all solvers, and any excluded tests will point to actual bugs in the MOI wrappers or the solvers.
If a solver wants to check that supports is implemented correctly, it can pass force = [...] with the names of some tests so that those tests are called with force=true.
To make the tests easier to write and more readable, we can create a macro like @require force MOI.supports_constraint(...) which is rewritten into:

if force
    @test MOI.supports_constraint(...)
elseif !MOI.supports_constraint(...)
    return
end
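Such a macro could be sketched roughly as below. This is an untested sketch assuming `using Test` is in scope; hygiene and error reporting would need more care in a real implementation.

```julia
# Rough sketch of the proposed `@require` macro. It expands to the
# if/elseif pattern shown above: under `force`, assert support via
# `@test`; otherwise, return early to silently skip the test.
macro require(force, expr)
    return quote
        if $(esc(force))
            @test $(esc(expr))
        elseif !$(esc(expr))
            return
        end
    end
end
```

A test body would then start with, e.g., `@require force MOI.supports_constraint(model, F, S)` and proceed knowing the constraint type is supported.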

@odow
Member Author

odow commented Jun 23, 2021

👍 to this. I'm not planning on merging this before I update some of the solvers to check logistics.

I envisaged something like:

if !MOI.supports_constraint(model, F, S)
    if force
        @warn("Skipping test xxx because you don't support F-in-S")
    end
    return
end

My plan is:

  • First, convert all of the existing tests.
  • Then, bring some naming structure.
  • And then, add these skips.

@odow odow added this to the v0.10 milestone Jun 24, 2021
function test_intconic()
    MOI.Test.intconictest(BRIDGED, CONFIG)
end
# This line at tne end of the file runs all the tests!
Member


tne -> the

@joaquimg
Member

This is very breaking.
Should we do it in two steps, since MOI 0.10 will already break many methods?
So 0.10 breaks all the methods, and 0.11 only updates the tests. They could even be released on the same day (?).
Doing both at once might be cumbersome for wrappers: developers will have to figure out whether they are supposed to pass a test that was renamed, while at the same time fixing many other changes.

@odow odow mentioned this pull request Jun 24, 2021
22 tasks
@odow
Member Author

odow commented Jun 24, 2021

Closing for now.

I have a plan for progress that doesn't involve this multi-thousand line diff that breaks every existing solver :)

Issue #1398 is to track progress.

@odow odow closed this Jun 24, 2021
@odow odow deleted the od/test_refactor branch June 24, 2021 21:08
@odow odow restored the od/test_refactor branch June 24, 2021 21:39
@odow odow deleted the od/test_refactor branch July 13, 2021 22:04

Labels

breaking · Submodule: Tests (About the Tests submodule)


4 participants