[breaking] [Tests] Large refactoring of Test #1404
Conversation
I think I'm making progress on the design. The main premise is that you write a short test, and then, immediately afterwards, you define the corresponding check (see `MathOptInterface.jl/src/Test/UnitTests/variables.jl`, lines 202 to 258 at `ca415c0`). That means you only need to write the test in one place, and you don't have to worry about hunting through the tests of the tests to figure out how to test the test! Then, there's just a single entry point where you can run all of the tests in MOI for your solver (see `MathOptInterface.jl/test/Test/Test.jl`, lines 14 to 22 at `ca415c0`).
The basic design is `runtests`, a single entry point to all tests. Instead of breaking tests down by files or dictionaries, tests are normal Julia functions with descriptive names that can be excluded or included by the user.
…bles.jl This makes things much easier: you now write the test and a check with the `MockOptimizer` in a single place. It's also a demonstration that this works for unit tests as well as for the larger integration tests.
dourouc05 left a comment
This would indeed make the Test module much easier to understand, in my opinion!
src/Test/UnitTests/attributes.jl
```diff
 Test that the [`MOI.SolverName`](@ref) attribute is implemented for `model`.
 """
-function solver_name(model::MOI.ModelLike, config::Config)
+function test_SolverName(model::MOI.ModelLike, config::Config)
```
The name of the function is not terribly consistent with the other function names (the same holds for all attributes). Maybe something like `test_attribute(model, ::MOI.SolverName, config)`, with the same change for `setup_test`?
It could also be easier for solvers to implement, with a simple loop over attributes that are supported instead of a longer list of function calls.
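As a sketch of that suggestion, dispatching on the attribute type lets a solver loop over the attributes it supports instead of calling a long list of name-mangled functions. Everything below is illustrative stand-in code, not actual MOI definitions:

```julia
using Test

# Hypothetical stand-ins for a MOI.ModelLike and two MOI attributes;
# this only sketches the dispatch-based naming scheme.
struct FakeModel end
struct SolverName end
struct Silent end

get_attribute(::FakeModel, ::SolverName) = "Mock"
get_attribute(::FakeModel, ::Silent) = false

# One generic entry point, dispatched on the attribute type:
test_attribute(model, ::SolverName, config) =
    @test get_attribute(model, SolverName()) isa AbstractString

test_attribute(model, ::Silent, config) =
    @test get_attribute(model, Silent()) isa Bool

# A solver can then run every supported attribute in a simple loop:
for attr in (SolverName(), Silent())
    test_attribute(FakeModel(), attr, nothing)
end
```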
Yeah this is my naming problem. I don't have a good solution.
Specific attributes:

- `test_attribute_SolverName`
- `test_modification_MultirowChange`
- `test_modification_ScalarCoefficientChange_objective`
- `test_modification_ScalarCoefficientChange_constraint`

Problem classes?

- `test_lp_`
- `test_milp_`
- `test_qp_`
- `test_qcp_`
- `test_nlp_`
- `test_soc_`
- `test_conic_`

Ideally, we need a formulaic way of generating test names:

`test_` + class + feature + unique identifier?

- `test_lp_TerminationStatus_INFEASIBLE`
- `test_lp_TerminationStatus_DUAL_INFEASIBLE`
- `test_lp_PrimalStatus_INFEASIBILITY_CERTIFICATE`
- `test_milp_integration_knapsack`
- `test_soc_VectorOfVariables_extra_terms`
- `test_soc_VectorAffineFunction_empty_row`
You want to be able to say things like
"run all test_soc_ problems excluding INFEAS"
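That kind of substring-based selection falls out naturally once tests are plain functions discovered by name. A sketch of the idea (the module, function names, and `selected_tests` helper here are all hypothetical, not the PR's actual code):

```julia
# Hypothetical module of tests, discovered by name and filtered with
# include/exclude substrings.
module FakeTests
test_soc_VectorOfVariables_extra_terms() = nothing
test_soc_VectorAffineFunction_empty_row() = nothing
test_lp_TerminationStatus_INFEASIBLE() = nothing
end

function selected_tests(mod; include_str = "", exclude = String[])
    return [
        n for n in names(mod; all = true) if
        startswith(string(n), "test_") &&
        occursin(include_str, string(n)) &&
        !any(e -> occursin(e, string(n)), exclude)
    ]
end

# "run all test_soc_ problems excluding INFEAS":
selected_tests(FakeTests; include_str = "test_soc_", exclude = ["INFEAS"])
```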
I feel that it's easier to maintain the tests if the solver doesn't need to include all the tests involving things it doesn't support.

```julia
function test_...(model, config, force = false)
    if !(MOI.supports(...) && MOI.supports_constraint(...))
        @test !force
        return
    end
end
```

So by default, calling the test on a model that lacks support just returns, while `force = true` turns the skip into a failure. Alternatively:

```julia
if force
    @test MOI.supports_constraint(...)
elseif !MOI.supports_constraint(...)
    return
end
```
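Filled in with a concrete (entirely hypothetical) support check, the `force` pattern behaves like this:

```julia
using Test

# Hypothetical stand-in for MOI.supports_constraint: a toy model that
# records whether it supports the constraint under test.
struct Model
    supports::Bool
end
supports_constraint(model::Model) = model.supports

function test_something(model; force::Bool = false)
    if !supports_constraint(model)
        # Skipping is fine by default, but `force = true` makes the
        # skipped test register as a failure.
        @test !force
        return :skipped
    end
    return :ran
end

test_something(Model(false))               # :skipped
test_something(Model(true))                # :ran
test_something(Model(true); force = true)  # :ran
```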
👍 to this. I'm not planning on merging this before I update some of the solvers to check logistics. I envisaged something like:

```julia
if !MOI.supports_constraint(model, F, S)
    if force
        @warn("Skipping test xxx because you don't support F-in-S")
    end
    return
end
```

My plan is
```julia
function test_intconic()
    MOI.Test.intconictest(BRIDGED, CONFIG)
end
# This line at the end of the file runs all the tests!
```
This is very breaking.
Closing for now. I have a plan for progress that doesn't involve this multi-thousand-line diff that breaks every existing solver :) Issue #1398 tracks progress.
This PR was motivated by issue #1398
Where we are
Our current testing regime is comprehensive, but a bit all over the place.
There's a mix of things like `MOI.Test.unittest` that wrap a whole lot of tests, and others like `MOI.Test.default_status_test` that you just need to add. Even the documentation for how to test a solver is complicated (#224)! https://jump.dev/MathOptInterface.jl/dev/submodules/Test/overview/#How-to-test-a-solver

The current design is also bad because it's hard to add new tests. As evidenced by the documentation (https://jump.dev/MathOptInterface.jl/dev/submodules/Test/overview/#How-to-add-a-test), you need to write a test, then write a test for the test, and then make sure that everything works. It's hard to run a small contingent of tests, it's hard to decide where to put a new test, and it's hard to decide where to put the test of the test.
This is also evidenced by the large number of open test issues that aren't getting addressed. These range from #470 (August 2018!) to #1201 (November 2020). If tests were easier to add, it would happen quicker!
Naming and visibility of each test is also a problem: #1029. If `linear10btest` fails, what does that mean?

New design
The basic design is `runtests`, a single entry point to all tests in MOI. Instead of breaking tests down by files or dictionaries, tests are normal Julia functions with descriptive names that can be excluded or included by the user.
Here's the code to test the `MockOptimizer`:

Much better.
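To make the entry-point idea concrete, here is a self-contained sketch of what a name-based `runtests` could look like. The `DemoTests` module, the discovery loop, and the `exclude` keyword are all illustrative, not the PR's actual implementation:

```julia
using Test

# Illustrative: a module of tests, plus a runner that discovers every
# function whose name starts with `test_` and runs each in its own @testset,
# so failures are attributed to a descriptive name.
module DemoTests
using Test
test_arithmetic_add() = @test 1 + 1 == 2
test_arithmetic_mul() = @test 2 * 3 == 6
end

function runtests(mod; exclude = String[])
    ran = String[]
    for n in names(mod; all = true)
        name = string(n)
        startswith(name, "test_") || continue
        any(e -> occursin(e, name), exclude) && continue
        @testset "$name" begin
            getfield(mod, n)()
        end
        push!(ran, name)
    end
    return ran
end

runtests(DemoTests)                     # runs both tests
runtests(DemoTests; exclude = ["mul"])  # runs only test_arithmetic_add
```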
There is also a need for certain tests to modify the model prior to running the test (changing solver parameters or tolerances, for example). That can be achieved by overloading `setup_test(::typeof(f), ::MOI.ModelLike, ::MOI.Test.Config)` for the particular function `f`.

Decisions and TODOs
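The `::typeof(f)` overloading trick can be sketched like this. The no-op fallback, the placeholder test, and the `Dict`-based "model parameters" are all hypothetical stand-ins for illustration:

```julia
# Illustrative sketch of the setup_test hook: a no-op fallback, plus one
# overload that tweaks a (hypothetical) tolerance before a specific test.
setup_test(::Any, model, config) = nothing

test_needs_tight_tolerance(model, config) = nothing  # placeholder test body

# Overload for this one test only, selected by dispatching on the
# function's singleton type:
function setup_test(::typeof(test_needs_tight_tolerance), model, config)
    model[:tol] = 1e-9  # pretend the model stores parameters in a Dict
    return
end

model = Dict{Symbol, Any}()
setup_test(test_needs_tight_tolerance, model, nothing)  # hook fires
model[:tol]  # 1.0e-9
```

Because each test is itself a function, Julia's dispatch picks the specific `setup_test` method for that test and falls back to the no-op everywhere else, so per-test setup lives next to the test it belongs to.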
This is horribly breaking, but I think we're okay with that. Sorting out the tests is a high priority.
`runtests` that sets up the model (e.g., modifying parameters, etc.) and the config? Then we could make sure that every test is actually tested, and it would be simpler to add new tests.