Improved precompilation #496
base: master
Conversation
Codecov Report
✅ All modified and coverable lines are covered by tests.

Additional details and impacted files:

@@            Coverage Diff            @@
##           master     #496      +/-   ##
===========================================
- Coverage   84.69%   69.60%   -15.10%
===========================================
  Files          49       14       -35
  Lines        4398     1033     -3365
===========================================
- Hits         3725      719     -3006
+ Misses        673      314      -359

☔ View full report in Codecov by Sentry.
@sshin23 We can check if we have the HSL linear solvers available with …
@frapac @amontoison With these changes, we can eliminate the TTFS: the remaining ~1 sec is spent loading Julia and MadNLP.
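For reference, one way to measure the TTFS, as a minimal sketch (ADNLPModels and the Rosenbrock objective below are assumed purely for illustration):

using MadNLP, ADNLPModels

# Small test problem (Rosenbrock); any AbstractNLPModel would do.
nlp = ADNLPModel(x -> (1 - x[1])^2 + 100 * (x[2] - x[1]^2)^2, [-1.2, 1.0])

@time madnlp(nlp; print_level = MadNLP.ERROR)  # first solve: includes compilation (TTFS)
@time madnlp(nlp; print_level = MadNLP.ERROR)  # second solve: pure runtime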
@@ -0,0 +1,17 @@
module MadNLPJuMPExt
One of the pain points in this implementation is that whenever we import a package that extends a function, e.g.,

struct MyModel <: AbstractNLPModel{Float64,Vector{Float64}} end
NLPModels.obj(m::MyModel, x::AbstractVector) = 0.0

any function that depends on NLPModels.obj is invalidated and needs to be recompiled. Because of this behavior, we need to redo precompilation whenever we import, e.g., ExaModels, MOI, JuMP, etc.
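As a side note, those invalidations can be observed directly with SnoopCompile (not part of this PR; @snoop_invalidations assumes a recent SnoopCompileCore):

using SnoopCompileCore

# Record the method invalidations caused by loading a package that extends
# NLPModels functions (ExaModels here; MOI and JuMP behave similarly).
invalidations = @snoop_invalidations using ExaModels

# Load SnoopCompile only after recording, since loading it triggers
# invalidations of its own.
using SnoopCompile
trees = invalidation_trees(invalidations)  # grouped by the method that caused them
length(uinvalidated(invalidations))        # number of unique invalidated instances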
@amontoison @frapac @apozharski Due to the invalidation issue, I was wondering if we can revisit our overall package structure. We can individually write precompilation scripts, e.g., MadNLPHSLJuMPExt, MadNLPPardisoJuMPExt, etc., but we'll need one for every such combination. I wonder if we can introduce …
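For concreteness, a rough sketch of what one of those per-extension precompile scripts could look like, assuming PrecompileTools (the module name follows the MadNLPJuMPExt extension added in this PR; the workload itself is hypothetical):

module MadNLPJuMPExt

using PrecompileTools: @compile_workload
import JuMP, MadNLP

@compile_workload begin
    # Solve a tiny model once at precompile time so the JuMP -> MOI -> MadNLP
    # code paths end up in this extension's precompile cache.
    model = JuMP.Model(MadNLP.Optimizer)
    JuMP.set_silent(model)
    JuMP.@variable(model, x >= 0)
    JuMP.@objective(model, Min, (x - 1.0)^2)
    JuMP.optimize!(model)
end

end # module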
The one concern I would have is about the impact of this on the C interface in … Edit: On the other hand, ideally it won't need to maintain its own precompile, which would be a benefit.
frapac left a comment
Overall it looks good to me, except for a few minor comments. Just out of curiosity, by how much do we improve the time-to-first-solve with this PR, compared to master?
  J[3] = 1.0    # (2, 1)
  J[4] = 2*x[2] # (2, 2)
- return J
+ return
I think NLPModels' API requires explicitly returning the Jacobian J.
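For context, a minimal sketch of that convention (MyJacModel is a hypothetical stand-in; NLPModels' in-place functions return the array they fill):

import NLPModels

# Hypothetical model type, only to illustrate the return convention.
struct MyJacModel <: NLPModels.AbstractNLPModel{Float64,Vector{Float64}} end

function NLPModels.jac_coord!(nlp::MyJacModel, x::AbstractVector, vals::AbstractVector)
    vals[1] = 1.0       # (2, 1)
    vals[2] = 2 * x[2]  # (2, 2)
    return vals         # return the filled array, not `nothing`
end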
Instantiate a new `MadNLPSolver` associated to the nonlinear program
`nlp::AbstractNLPModel`. The options are passed as optional arguments.

function MadNLPSolver(nlp::AbstractNLPModel; kwargs...)
    @nospecialize
Is there any reason why we need @nospecialize? I would maybe add a comment explaining why we need this macro.
Without @nospecialize, MadNLP tries to specialize this function on nlp's type. That is, if you have an ExaModel input, which has a complicated type signature, MadNLP will try to compile a version of MadNLPSolver specialized for that data type. You end up having to compile lots of MadNLP functions for these different types. This change allows MadNLP to be compiled only once.
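To illustrate the effect, a standalone sketch with hypothetical types (not MadNLP's actual code):

abstract type AbstractModel end
struct SimpleModel <: AbstractModel end
struct FancyModel{T,VT} <: AbstractModel end  # stand-in for a heavily parameterized type like an ExaModel

function build_solver(m::AbstractModel; kwargs...)
    @nospecialize  # compile this method once for AbstractModel, not per concrete subtype
    return typeof(m)
end

build_solver(SimpleModel())
build_solver(FancyModel{Float64,Vector{Float64}}())
# Both calls share one non-specialized method instance, so a new model type
# does not force recompilation of the solver constructor.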
I see, thank you for the clarification! It makes sense to use @nospecialize in that case. I think not compiling MadNLP on every ExaModels instance is an improvement.
- get_counters(nlp::NLPModels.AbstractNLPModel) = nlp.counters
  get_counters(nlp::NLPModels.AbstractNLSModel) = nlp.counters.counters
+ get_counters(@nospecialize(nlp::NLPModels.AbstractNLPModel)) = nlp.counters
Can we move this macro outside the function's arguments? (I confess I am not an expert on @nospecialize, so feel free to discard this comment.)
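For reference, the Julia manual documents both placements; the two forms below are alternative definitions of the same method, shown with the signature from this diff:

import NLPModels

# (a) annotate the argument directly in the signature:
get_counters(@nospecialize(nlp::NLPModels.AbstractNLPModel)) = nlp.counters

# (b) equivalently, in a long-form function, put the macro at the top of the body:
function get_counters(nlp::NLPModels.AbstractNLPModel)
    @nospecialize nlp
    return nlp.counters
end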
For the compilation of MadNLPGPU, the underlying issue is probably GPUCompiler #611: GPU instructions leak as CPU instructions during compilation. Help is needed to build a better MWE so that it can be filed upstream as a Julia issue.
I tried MadNLPGPU as well, but it seems not to be working as of now.