Currently, MOI only provides one model implementation: subtypes of `MOIU.AbstractModel` created with `MOIU.@model`, the most common choice being `MOIU.Model`.
This represents models in the following way:
We have a struct containing:
- The objective, which can be a `SingleVariable`, `ScalarAffineFunction`, or `ScalarQuadraticFunction`.
- A data structure for `SingleVariable` constraints.
- One field per supported function type, each containing one field per supported set type, each holding a data structure that stores the constraints of that function-set combination.
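To make that layout concrete, here is a self-contained toy sketch of such a struct; all names are hypothetical and only illustrate the one-field-per-function-set layout, not the actual code generated by `MOIU.@model`:

```julia
# Toy illustration only: the real fields generated by `MOIU.@model`
# have different names and richer types.
struct ToyModel{T}
    # Objective coefficients (stands in for SingleVariable,
    # ScalarAffineFunction, or ScalarQuadraticFunction).
    objective::Vector{T}
    # Data structure for SingleVariable constraints (variable bounds).
    variable_bounds::Dict{Int,Tuple{T,T}}
    # One field per supported (function, set) combination:
    affine_in_equalto::Vector{Tuple{Vector{T},T}}   # a'x == b
    affine_in_lessthan::Vector{Tuple{Vector{T},T}}  # a'x <= b
end

model = ToyModel{Float64}(
    [1.0, -1.0],                        # objective
    Dict(1 => (0.0, 1.0)),              # bound on variable 1
    [([1.0, 2.0], 3.0)],                # one equality constraint
    Tuple{Vector{Float64},Float64}[],   # no inequality constraints
)
```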
This is a good choice for the cache JuMP uses to store the model, as it gives good performance for all MOI operations. However, when copying this model to a solver, the solver must do some work to transform it into its canonical form.
If we look more closely, the forms required by solvers are combinations of the following:
- An objective type.
- A data structure for `SingleVariable` constraints.
- One or more separate data structures for subsets of constraint types. For example, some conic solvers have one data structure for `VectorAffineFunction`-in-`Zeros` and one for `VectorAffineFunction`-in-`K` for the other cones `K`. LP solvers such as Clp, GLPK, and HiGHS need all the constraints combined into a single matrix, stored either row-wise or column-wise.
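For the LP case, the single mixed matrix is just a sparse constraint matrix. A small self-contained sketch with Julia's standard-library `SparseArrays` (CSC is column-wise storage; wrapping in `transpose` would give a row-wise view):

```julia
using SparseArrays

# Two constraints over two variables, mixed into one matrix:
#   x1 + 2x2 <= 4
#   3x1 -  x2 <= 5
I = [1, 1, 2, 2]          # constraint (row) indices
J = [1, 2, 1, 2]          # variable (column) indices
V = [1.0, 2.0, 3.0, -1.0] # coefficients
A = sparse(I, J, V, 2, 2) # stored column-wise (CSC)

# The raw CSC buffers are what a solver API would consume directly:
A.colptr  # column pointers (one-based here)
A.rowval  # row index of each stored nonzero
A.nzval   # value of each stored nonzero
```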
If we refactor `AbstractModel`, it could cover all these use cases simply by being parametrized by:
- The type of the objective.
- The type of the data structure for `SingleVariable` constraints.
- The type of the data structure for the other constraints.
This third type is either:
- `VectorOfConstraints` (see Refactor Model and UniversalFallback #1245), or
- `MatrixOfConstraints`, wrapping either a `MutableSparseMatrixCSC` or a `Transpose{<:MutableSparseMatrixCSC}`. With `MutableSparseMatrixCSC`, incrementally adding constraints is not supported. See https://github.com/jump-dev/MatrixOptInterface.jl/blob/master/src/sparse_matrix.jl for what it would look like.
Then each solver would be able to significantly simplify its `copy_to` function. For example, an LP solver could do:
```julia
const MODEL = MOI.Utilities.GenericModel{
    MOI.ScalarAffineFunction{Float64},
    ContinuousVariables,
    MatrixOfConstraints{..., SparseMatrixCSC{..., MOI.ZeroBasedIndexing}},
}

function MOI.copy_to(dest::Optimizer, src::MODEL)
    # No need to shift `rowval` by `-1` as it already uses zero-based indexing.
    Solver_C_function(src.constraints.matrix.rowval, ...)
    return identity_index_map(src)
end

function MOI.copy_to(dest::Optimizer, src::MOI.ModelLike)
    model = MODEL()
    index_map = MOI.copy_to(model, src)
    MOI.copy_to(dest, model)
    return index_map
end
```
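The zero-based-indexing remark can be illustrated without MOI: Julia's standard `SparseMatrixCSC` is one-based, so handing it to a C API normally forces the wrapper to allocate shifted copies of `colptr` and `rowval`; a storage type that is zero-based from the start would let the buffers be passed as-is. A sketch of the shift that would otherwise be needed:

```julia
using SparseArrays

A = sparse([1, 2, 1], [1, 1, 2], [1.0, 3.0, 2.0], 2, 2)

# One-based CSC buffers must be shifted before calling a C solver:
colptr0 = A.colptr .- 1  # allocates a shifted copy
rowval0 = A.rowval .- 1  # allocates a shifted copy
# A zero-based storage type would make these copies unnecessary.
```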
To drastically increase the chance that a `MODEL` is copied to the solver directly, we could add an MOI attribute and replace the `Utilities.Model{with_bridge_type}()` in `Utilities.UniversalFallback(Utilities.Model{with_bridge_type}())` (`MathOptInterface.jl/src/instantiate.jl`, line 122 at e9b03cb) by `MOI.get(optimizer, MOI.PreferredModelType())()`.
The result of all this is two-fold:
- It would drastically simplify `copy_to` for solver wrappers and ensure they are all efficiently optimized.
- It would avoid the data copy currently done in `copy_to`, since the cache of the `CachingOptimizer` will already contain the data in the right form.