🚀 Add NonLinearProgram Support to DiffOpt.jl (#260)
Conversation
frapac left a comment:
Good job @andrewrosemberg! I appreciate the numerous unit tests you have shipped with this PR.
I will try to run your code locally on small NLP instances, to assess how fast the current implementation is (my main concern is the total number of allocations). I will also try to test your code on parameterized OPF instances, to assess how far we can get in terms of size.
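For the allocation check, something along these lines should do (a sketch; BenchmarkTools is my tool of choice here, not something used in the PR):

```julia
using BenchmarkTools

# Reports runtime and total allocations of one forward-differentiation pass;
# `model` is an already-solved DiffOpt model, as in the MWEs below.
@btime DiffOpt.forward_differentiate!($model)
```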
@andrewrosemberg Playing with your branch right now, I must say I like DiffOpt's interface! I made a dumb mistake when solving the problem HS15 and retrieving the sensitivity. A MWE is:

```julia
model = Model(() -> DiffOpt.diff_optimizer(Ipopt.Optimizer))
@variable(model, p[1:2] ∈ MOI.Parameter.([100.0, 1.0]))
@variable(model, x[1:2])
set_upper_bound(x[1], 0.5)
@objective(model, Min, p[1] * (x[2] - x[1]^2)^2 + (p[2] - x[1])^2)
@constraint(model, x[1] * x[2] >= 1.0)
@constraint(model, x[1] + x[2]^2 >= 0.0)
optimize!(model)
# set parameter perturbations
MOI.set(model, DiffOpt.ForwardParameter(), p[1], 1.0)
# forward differentiate
DiffOpt.forward_differentiate!(model)
Δx = [MOI.get(model, DiffOpt.ForwardVariablePrimal(), var) for var in x]
```
I forgot to specify the sensitivity for the second parameter, `p[2]`.
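Presumably the fix is to seed a perturbation for every parameter, even a zero one:

```julia
# hypothetical fix: seed the second parameter explicitly
MOI.set(model, DiffOpt.ForwardParameter(), p[2], 0.0)
```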
Also, if I query the solution after calling `forward_differentiate!`, I do not get the values I expect. The MWE is:

```julia
model = Model(() -> DiffOpt.diff_optimizer(Ipopt.Optimizer))
@variable(model, p[1:2] ∈ MOI.Parameter.([100.0, 1.0]))
@variable(model, x[1:2])
set_upper_bound(x[1], 0.5)
@objective(model, Min, p[1] * (x[2] - x[1]^2)^2 + (p[2] - x[1])^2)
@constraint(model, x[1] * x[2] >= 1.0)
@constraint(model, x[1] + x[2]^2 >= 0.0)
optimize!(model)
# set parameter perturbations
MOI.set(model, DiffOpt.ForwardParameter(), p[1], 1.0)
MOI.set(model, DiffOpt.ForwardParameter(), p[2], 1.0)
# forward differentiate
DiffOpt.forward_differentiate!(model)
JuMP.value.(x)
```
Output:
@frapac good question! I am fine either way. @joaquimg what would be the consistent approach given the rest of DiffOpt?
I don't think this is the desired outcome, haha. Not sure how to ask MOI to ignore the forward attributes, but I will look into it. @joaquimg do you have an idea on how to avoid this?
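A possible user-side workaround in the meantime (untested sketch): read the solution before differentiating, and reuse the cached values afterwards.

```julia
x_val = JuMP.value.(x)                 # cache the solution first
DiffOpt.forward_differentiate!(model)  # then differentiate
# use x_val below instead of calling value.(x) again
```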
@andrewrosemberg Another issue I noted: if we are using non-standard indexing in JuMP (as in `x[0:2]` below), retrieving the sensitivities breaks. A MWE is:

```julia
model = Model(() -> DiffOpt.diff_optimizer(Ipopt.Optimizer))
@variable(model, p[1:2] ∈ MOI.Parameter.([100.0, 1.0]))
# N.B.: use non-standard indexing
@variable(model, x[0:2])
set_upper_bound(x[1], 0.5)
@objective(model, Min, p[1] * (x[2] - x[1]^2)^2 + (p[2] - x[1])^2)
@constraint(model, x[1] * x[2] >= 1.0)
@constraint(model, x[1] + x[2]^2 >= 0.0)
optimize!(model)
# set parameter perturbations
MOI.set(model, DiffOpt.ForwardParameter(), p[1], 1.0)
MOI.set(model, DiffOpt.ForwardParameter(), p[2], 1.0)
# forward differentiate
DiffOpt.forward_differentiate!(model)
Δx = [MOI.get(model, DiffOpt.ForwardVariablePrimal(), var) for var in x]
```
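Until that is handled, a trivial workaround sketch is to declare the variables with standard 1-based indexing (which the earlier MWEs show working) and shift indices manually:

```julia
# workaround: 1-based container instead of x[0:2]; x_std[i + 1] plays the role of x[i]
@variable(model, x_std[1:3])
```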
Pushing it one step further, I have tried to differentiate an ACOPF instance using DiffOpt. I re-used the code in rosetta-opf and ported it to DiffOpt. You can find a gist here. Two observations:
Codecov Report

Attention: Patch coverage is

Additional details and impacted files:

```
@@            Coverage Diff             @@
##           master     #260      +/-   ##
==========================================
+ Coverage   86.67%   89.07%   +2.40%
==========================================
  Files          13       15       +2
  Lines        1478     1968     +490
==========================================
+ Hits         1281     1753     +472
- Misses        197      215      +18
```

☔ View full report in Codecov by Sentry.
```julia
objective_sense(form::Form) = form.sense
objective_sense(model::Model) = objective_sense(model.model)
```
This PR introduces a new module, `NonLinearProgram`, to extend DiffOpt.jl's functionality for differentiating nonlinear optimization problems (NLPs). The implementation integrates with JuMP-based nonlinear models and supports advanced derivative computation through custom evaluator and differentiation logic.

🆕 Features
**Nonlinear Model Differentiation**: computes sensitivities of primal variables (`focus_vars`) and dual variables (`focus_duals`) with respect to a given set of parameters.

**Core Structures** (sketched in code after this list):
- `Cache`: stores primal variables, parameters, the evaluator, and constraints for efficient reuse.
- `ForwCache`: holds results of forward differentiation, including sensitivities for specified variables.
- `ReverseCache`: holds results of reverse differentiation (implemented in this PR).

**Integration with DiffOpt API**:
- Implements `DiffOpt.AbstractModel` for seamless compatibility with DiffOpt's API.
- Implements the `forward_differentiate!` and `reverse_differentiate!` functions for NLPs.
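A rough sketch of the three caching structures described above; the field names and types are illustrative guesses, not the PR's actual definitions:

```julia
import MathOptInterface as MOI

# Illustrative only: what the caches plausibly hold, per the descriptions above.
struct Cache
    primal_vars::Vector{MOI.VariableIndex}       # primal variables of interest
    params::Vector{MOI.VariableIndex}            # parameters to differentiate w.r.t.
    evaluator::MOI.Nonlinear.Evaluator           # oracle for Jacobians/Hessians
    cons::Vector{MOI.Nonlinear.ConstraintIndex}  # constraint references
end

struct ForwCache
    primal_Δs::Dict{MOI.VariableIndex,Float64}   # forward sensitivities per variable
end

struct ReverseCache
    Δp::Vector{Float64}                          # reverse sensitivities per parameter
end
```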
🔧 How It Works

**Custom Sensitivity Calculations**: derivatives are computed at the optimal solution via the utilities in `nlp_utilities.jl` (see the sketch of the underlying linear algebra below).
**Forward Differentiation**: propagates parameter perturbations to the primal/dual solution; results are stored in `ForwCache`.
**Reverse Differentiation**: propagates a seed on the solution back to the parameters; results are stored in `ReverseCache`.
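For background: the standard way to obtain such sensitivities (and, I believe, what the utilities above implement) is to apply the implicit function theorem to the KKT conditions at the optimum. Writing the KKT residual as $F(s, p) = 0$, where $s$ collects the primal and dual solution and $p$ the parameters, the sensitivities solve the linear system

```math
\frac{\partial F}{\partial s}\,\frac{\partial s}{\partial p} = -\frac{\partial F}{\partial p}.
```

Forward mode solves this system once per parameter direction; reverse mode solves the transposed system once per seed on $s$.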
📜 Implementation Highlights

**Forward Differentiation**: computes sensitivities of the variables w.r.t. `params`; results are cached in `ForwCache`.
**Reverse Differentiation**: results are cached in `ReverseCache`.
**Custom Utilities**: uses `create_evaluator`, `compute_sensitivity`, and other utilities from `nlp_utilities.jl` for efficient derivative computation.

📋 Example Usage
Forward Differentiation
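A minimal forward-mode sketch on a toy problem (my own example, using only the attributes exercised in the MWEs above):

```julia
using JuMP, DiffOpt, Ipopt

model = Model(() -> DiffOpt.diff_optimizer(Ipopt.Optimizer))
@variable(model, p ∈ MOI.Parameter(1.0))
@variable(model, x)
@constraint(model, x >= p)
@objective(model, Min, x^2)
optimize!(model)  # here x* = p = 1.0, so dx*/dp = 1

# seed the parameter perturbation, then differentiate forward
MOI.set(model, DiffOpt.ForwardParameter(), p, 1.0)
DiffOpt.forward_differentiate!(model)
dx = MOI.get(model, DiffOpt.ForwardVariablePrimal(), x)  # ≈ 1.0
```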
Reverse Differentiation
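And a reverse-mode sketch on the same model; I am assuming the attribute names `DiffOpt.ReverseVariablePrimal` and `DiffOpt.ReverseParameter` mirror the forward ones, which this thread does not confirm:

```julia
# seed the variable of interest, then read back the parameter sensitivity
MOI.set(model, DiffOpt.ReverseVariablePrimal(), x, 1.0)  # assumed attribute name
DiffOpt.reverse_differentiate!(model)
dp = MOI.get(model, DiffOpt.ReverseParameter(), p)       # assumed attribute name; ≈ 1.0
```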
🚧 TODO
🛠 Future Work