I am revisiting the performance results I shared in #108, and I have found that the time to translate the initial model to the solver with POI now scales quite poorly relative to plain JuMP (I don't recall seeing this problem when I last tried, after #114 landed last year).
```julia
using JuMP, Gurobi, ParametricOptInterface
const POI = ParametricOptInterface

function create_model(n, use_param)
    # Initialize the model object
    if use_param
        model = Model(() -> POI.Optimizer(Gurobi.Optimizer(); evaluate_duals = false))
        @variable(model, Mp in POI.Parameter(2.0))
    else
        model = Model(Gurobi.Optimizer)
        Mp = 2.0
    end
    set_silent(model)
    set_time_limit_sec(model, 0) # We want to include how long it takes to reach the solver, but that's it.
    # Add the variables
    @variable(model, d)
    @variable(model, 0 ≤ y[1:n, 1:2] ≤ 1)
    @variable(model, z[0:n, 0:n, 1:n], Bin)
    @variable(model, 0 ≤ s[0:n, 0:n, 1:n])
    @variable(model, r[0:n, 0:n, 1:n, 1:2])
    # Set the objective
    @objective(model, Min, d)
    # Add the constraints
    @constraint(model, [i ∈ 0:n, j ∈ 0:n], sum(z[i, j, f] for f ∈ 1:n) == 1)
    @constraint(model, [i ∈ 0:n, j ∈ 0:n, f ∈ 1:n], s[i, j, f] == d + Mp * (1 - z[i, j, f]))
    @constraint(model, [i ∈ 0:n, j ∈ 0:n, f ∈ 1:n], r[i, j, f, 1] == i / n - y[f, 1])
    @constraint(model, [i ∈ 0:n, j ∈ 0:n, f ∈ 1:n], r[i, j, f, 2] == j / n - y[f, 2])
    @constraint(model, [i ∈ 0:n, j ∈ 0:n, f ∈ 1:n], r[i, j, f, 1]^2 + r[i, j, f, 2]^2 ≤ s[i, j, f]^2)
    # Return the model
    return model
end

# Set test settings
n = 10 # This can vary the problem size
# Account for the JIT time
optimize!(create_model(2, true))
optimize!(create_model(2, false))
# Time with POI
@time optimize!(create_model(n, true))
# Time without POI
@time optimize!(create_model(n, false))
```
With n = 10, using POI v0.4.3, JuMP v1.9, and Gurobi.jl v1.0, I get:

```
0.075508 seconds (529.07 k allocations: 28.067 MiB) # POI
0.056373 seconds (405.03 k allocations: 23.210 MiB, 32.37% gc time, 17.23% compilation time) # JuMP
```
With n = 20 I get:

```
2.823058 seconds (3.84 M allocations: 198.879 MiB, 1.44% gc time) # POI (factor of 37 increase)
0.186521 seconds (2.78 M allocations: 155.695 MiB, 23.99% gc time) # JuMP (factor of 3 increase)
```
With n = 30 I get:

```
34.339958 seconds (12.53 M allocations: 646.085 MiB, 1.24% gc time) # POI (factor of 12 increase)
0.685356 seconds (9.04 M allocations: 505.112 MiB, 27.98% gc time) # JuMP (factor of 4 increase)
```
With n = 40 I get:

```
283.982509 seconds (29.20 M allocations: 1.475 GiB, 0.29% gc time) # POI (factor of 8 increase)
2.037777 seconds (21.06 M allocations: 1.148 GiB, 35.14% gc time) # JuMP (factor of 3 increase)
```
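For what it's worth, here is a quick back-of-the-envelope in plain Julia relating model size to the build times above. The constraint counts follow directly from `create_model`; the "roughly quadratic" reading of POI's scaling is my own interpretation of the numbers, not a profiled result:

```julia
# Constraint counts implied by create_model(n, ...):
#   (n+1)^2           assignment constraints (sum over f == 1)
#   3 * (n+1)^2 * n   affine equality constraints (the s and r definitions)
#   (n+1)^2 * n       second-order cone constraints
n_constraints(n) = (n + 1)^2 + 3 * (n + 1)^2 * n + (n + 1)^2 * n

# Measured POI build times from above (seconds)
poi_time = Dict(10 => 0.0755, 20 => 2.823, 30 => 34.34, 40 => 283.98)

for n in (10, 20, 30, 40)
    println("n = $n: ", n_constraints(n), " constraints, POI build: ", poi_time[n], " s")
end

# From n = 10 to n = 40 the constraint count grows by a factor of ~55, but
# POI's build time grows ~3800x -- much closer to quadratic in model size
# (55^2 ≈ 3000) than linear, whereas JuMP's ~36x growth is roughly linear.
```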
For my tests, this poor scaling of the initial build/solve time negates the benefit of POI's faster re-solves.