Refactor msr #1781
Conversation
RoyStegeman
left a comment
Not completely done yet with my review, but I ran out of time. Anyway, here are a few comments.
```python
IDX = {
    'photon': 0,
    'sigma': 1,
    'g': 2,
    'v': 3,
    'v3': 4,
    'v8': 5,
    'v15': 6,
    'v35': 7,
    'v24': 8,
}
```
It doesn't affect the result, but the order of v35 and v24 is reversed.
Since we rely on hardcoding the basis of the fktable anyway, it may be best to have a dict like this one stored in a single location which all modules that need it can then import (which at the moment I think are just x_operations.py, the current module and writer.py).
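As a concrete illustration of that suggestion, a shared constants module could look like the sketch below. The module location and the name `FK_FLAVOUR_IDX` are hypothetical (not existing code), and the v24/v35 entries are listed in the conventional order, per the comment above.

```python
# Hypothetical shared module (e.g. an n3fit/utils.py or similar) holding the
# fktable flavour-index mapping in one place, so that x_operations.py,
# msr_normalization.py and writer.py can all import the same constant.
# Note: v24 and v35 are listed here in the conventional order.
FK_FLAVOUR_IDX = {
    'photon': 0,
    'sigma': 1,
    'g': 2,
    'v': 3,
    'v3': 4,
    'v8': 5,
    'v15': 6,
    'v24': 7,
    'v35': 8,
}
```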
I agree, do you want me to do that in this commit? If so I'm not sure where to put it.
That's also what I'm not too sure about, perhaps we should add an n3fit/utils.py? @scarlehoff do you have any ideas?
The information should be here: nnpdf/validphys2/src/validphys/pdfbases.py, line 695 in 9cf5f9e.
Incidentally, the name of that function, fitbasis_to_NN31IC, is no longer correct.
Some things that we said (a very long time ago) someone should do are:
- Create a function that generates the PDF output -> fktable rotation and use that in the fit (half of the work @APJansen already did with "Implement FkRotation as subclass of Rotation by rewriting using a rotation tensor" #1772)
- Define in that module what the "final fitbasis" actually is so that both fitbasis_to_NN31IC and any other function can use that information.
Maybe fitbasis_to_fitoutput would be more correct. Or even skip that step and have just fitbasis_to_fkbasis since the fitoutput->fkbasis step is known and constant. That would also require (small) changes to the QED fit or the integration.
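That last option can be sketched as a simple composition of rotation matrices. The function and argument names below are hypothetical, purely for illustration:

```python
import numpy as np

def compose_rotations(rot_fitbasis_to_fitoutput, rot_fitoutput_to_fkbasis):
    """Hypothetical sketch: since the fitoutput -> fkbasis step is known and
    constant, a direct fitbasis -> fkbasis rotation could be precomputed once
    by composing the two matrices."""
    return rot_fitoutput_to_fkbasis @ rot_fitbasis_to_fitoutput

# Toy usage with small matrices, just to show the composition order:
r_fit_to_out = np.eye(3)
r_out_to_fk = 2.0 * np.eye(3)
r_fit_to_fk = compose_rotations(r_fit_to_out, r_out_to_fk)
```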
This is probably better left to another PR?
scarlehoff
left a comment
I wrote a lot because I was writing as I went but basically the summary is:
- Feel free to change the whole msr_normalization.py layer (as long as the inputs and outputs are the same); the more things you can put as constants at the top of the module, the better.
- I'd strongly prefer to leave the input fitted (integrated) PDF and the photon separated.
```diff
-normalization_factor = MSR_Normalization(output_dim, mode, name="msr_weights")(
-    pdf_integrated, photon_integral
-)
+normalization_factor = MSR_Normalization(mode, name="msr_weights")(pdf_integrated)
```
Why this change? I feel the previous one was actually much clearer since it makes the difference between the PDF integration and the photon explicit in the MSR.
(and they are really different objects, while the PDF comes from the NN, the photon comes from an analytical computation and is constant throughout the fit, so the separation in the call is good)
Maybe I misunderstand (I never understood why the photon can't be treated the same) but they're not really different objects right? They might arise differently but they are all integrals right? I thought a dedicated layer that inserts the photon integral in the first component makes it more transparent, and simplifies the MSR layer.
But if you still prefer the old way I'll change it back.
The photon is analytically computed before the fit
https://github.com/NNPDF/nnpdf/blob/9cf5f9ecc83ec59fff804f859b531198139021fe/validphys2/src/validphys/photon/compute.py#L125C13-L125C13
Computing the integral in the same way as the others would require too many points, and the photon contribution is not large, so we don't need the same level of precision.
So, while they are all "the result of integrals", the photon is a constant that gets injected into the calculation while the others are quantities that are computed and change from epoch to epoch.
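To make the distinction concrete, here is a minimal sketch of how a constant photon integral can enter next to the epoch-dependent PDF integrals. This is a hypothetical helper with a simplified momentum-sum-rule formula, not the actual n3fit implementation:

```python
def msr_gluon_normalization(sigma_integral, gluon_integral, photon_integral=0.0):
    """Simplified momentum-sum-rule sketch (hypothetical helper, not n3fit code).

    sigma_integral and gluon_integral come from integrating the NN output and
    change from epoch to epoch; photon_integral is computed analytically once,
    before the fit, and stays constant throughout.
    """
    # Rescale the gluon so the total momentum fraction integrates to 1.
    return (1.0 - photon_integral - sigma_integral) / gluon_integral
```

For instance, with sigma_integral=0.5, gluon_integral=0.4 and no photon, the factor is 1.25.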
Ok, I changed it back to what it was.
Any idea why the CI is failing? I thought it was some fluke, but it happened twice in a row now; it just says the operation was cancelled when solving the environment.
Greetings from your nice fit 🤖 !
Check the report carefully, and please buy me a ☕, or better, a GPU 😉!
@scarlehoff @RoyStegeman Tests are failing at the stage of resolving the environment. Also on master now after merging the preprocessing and rotation PRs, though I assume only because they reran and something external has happened; I can't imagine why the code changes would break the conda environment when I didn't add or remove any dependencies. Not sure what to do about this?
I see the CI has come back to life (yay!) Is this ready for review? (I'm going to be on holiday starting next week and I'd like to review the changes because of being a control freak, but I also don't want to block the progress in the GPU/multireplica stuff!)
Ah great :) Yes it's ready. You wouldn't really block progress, but if it's possible to review before you go that'd be great :) I'll reply to #1773 there.
scarlehoff
left a comment
Beyond the stylistic nitpicking, I think this is good to go.
Ah great :) Yes it's ready. You wouldn't really block progress, but if it's possible to review before you go that'd be great :)
Better safe than sorry!
```diff
 """
-y = op.flatten(pdf_integrated)
+y = pdf_integrated[0]  # get rid of the batch dimension
+photon_integral = photon_integral[0]  # get rid of the batch dimension
```
Does the photon integral need a batch dimension? You could just as well remove the [[ below (or is it because of symmetry arguments?)
(You can also do photon_integral[0][0] directly here, btw.)
I suppose this was more relevant when I actually joined them together before being passed to this layer, but still having the same shapes makes it easier to change things I think, even if it's a bit superfluous now.
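The shape bookkeeping being discussed can be illustrated with a small numpy sketch. The shapes here are illustrative assumptions, not the exact n3fit tensor shapes:

```python
import numpy as np

# Both inputs carry a leading batch dimension of size 1, kept for symmetry.
pdf_integrated = np.ones((1, 9))      # (batch, flavours)
photon_integral = np.array([[0.01]])  # (batch, 1)

y = pdf_integrated[0]           # drop the batch dimension -> shape (9,)
photon = photon_integral[0][0]  # drop the batch dimension and take the scalar
```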
Co-authored-by: Juan M. Cruz-Martinez <juacrumar@lairen.eu>
Conda broke again... I'll rerun the tests once in a while and merge it if it passes. Thanks for the review :)
Adds a regression test for the MSR_Normalization layer. Mainly improving readability by rewriting the indices.
Also makes it compatible with higher rank pdfs by changing a flatten(y) to y[0] and allowing for an out_shape rather than only a 1d output_dim. Awaiting further tests as per discussion here.
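The flatten(y) -> y[0] change can be illustrated as follows (the rank-3 shape is an assumption for illustration): flatten collapses any extra replica dimension, while indexing away only the batch dimension preserves the rest of the structure.

```python
import numpy as np

pdf = np.zeros((1, 5, 9))  # hypothetical (batch, replicas, flavours) shape

flat = pdf.flatten()  # shape (45,): the replica structure is lost
y = pdf[0]            # shape (5, 9): only the batch dimension is removed
```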