Currently, solvers such as GLPK skip `copy_to` when used via JuMP, due to the behavior of `CachingOptimizer`. For example:
```julia
model = Model(GLPK.Optimizer)
@variable(model, x)
optimize!(model)
```
This doesn't use `copy_to`, even though GLPK provides a fast implementation of it.
Rather than making an ad-hoc change, we should take the time to thoroughly document the caching optimizer system, choose a design that makes the most sense, and then implement it. There are quite a few points at which the various layers are added, dropped, emptied, and reset.
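For reference, the layer-manipulation entry points in question can be exercised directly through `MOI.Utilities`. This is a sketch, assuming current `MOI.Utilities.CachingOptimizer` behavior; the exact states observed may differ across JuMP/MOI versions:

```julia
using JuMP, GLPK
const MOIU = MOI.Utilities

model = Model(GLPK.Optimizer)
@variable(model, x)

# backend(model) returns the MOIU.CachingOptimizer wrapping the GLPK optimizer.
b = backend(model)
MOIU.state(b)  # one of NO_OPTIMIZER, EMPTY_OPTIMIZER, ATTACHED_OPTIMIZER
MOIU.mode(b)   # AUTOMATIC or MANUAL

optimize!(model)

# The points at which layers are added, dropped, emptied, and reset:
MOIU.reset_optimizer(b)   # empty the inner optimizer; the cache is kept
MOIU.attach_optimizer(b)  # copy the cache back into the inner optimizer
MOIU.drop_optimizer(b)    # detach the inner optimizer entirely
```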