
[WIP] Running n3fit on the flavour basis#689

Closed
scarlehoff wants to merge 11 commits into master from n3fit_in_flavour_basis

Conversation

@scarlehoff
Member

This is a template commit with the two or three changes needed to run n3fit in the flavour basis.

The basis rotation needs to be implemented in n3fit/src/n3fit/layers/rotations.py, I've added an example. Ideally it would be a class taking the basis information and preparing the rotation dynamically instead of having it fixed. The basis rotation should, however, be limited to a class (or a number of classes with a switch).
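For the record, a minimal sketch of the kind of layer I mean (the class name and details are made up here just to illustrate, this is not the example added in the commit):

import numpy as np
import tensorflow as tf

class FlavourRotation(tf.keras.layers.Layer):
    """ Illustrative sketch: holds a fixed flavour -> evolution matrix and
    applies it to the last (flavour) axis of the NN output """

    def __init__(self, rotation_matrix, **kwargs):
        super().__init__(**kwargs)
        self.rotation_matrix = tf.constant(rotation_matrix, dtype=tf.float32)

    def call(self, x_flav):
        # x_flav: (batch, n_x, n_flavours) -> (batch, n_x, n_evolution)
        return tf.tensordot(x_flav, self.rotation_matrix, axes=1)

# e.g. an identity "rotation" over 8 flavours
layer = FlavourRotation(np.eye(8, dtype="float32"))
print(layer(tf.zeros((1, 50, 8))).shape)  # (1, 50, 8)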

In n3fit/src/n3fit/model_gen.py we need to have the information about the basis, be it the name of the basis or the basis list from the runcard: anything that can tell n3fit "hey, this is the basis you'll be using", but nothing more.

Assigning @scarrazza as placeholder developer for now.

@scarlehoff scarlehoff added the n3fit and 4.0-blocker labels Mar 30, 2020
@scarrazza
Member

@scarlehoff the code is fine, the fitbasis flag is something we can keep in the runcard for the time being. At some point the runcard layout should be discussed.

Review comment thread on n3fit/src/n3fit/layers/rotations.py (outdated)
@scarlehoff scarlehoff changed the base branch from n3fit_use_tf_two to master April 1, 2020 15:39
@tgiani
Contributor

tgiani commented Apr 2, 2020

@scarlehoff at some point in #684 you were mentioning some documentation. Has it been done? Or should it be part of this PR? Also I'm not able to find the fits you ran in the flavour basis on the server, for example 270320_DIS_flavbas_jcm. Could you please upload it?

@scarlehoff
Member Author

@scarlehoff at some point in #684 you were mentioning some documentation. Has it been done?

I don't think so...

Or should it be part of this PR?

Would be appreciated!

Also I'm not able to find the fits you ran in the flavour basis on the server, for example 270320_DIS_flavbas_jcm. Could you please upload it?

Sure, I've uploaded 270320_DIS_flavbas_jcm_1 https://data.nnpdf.science/fits/270320_DIS_flavbas_jcm_1.tar.gz corresponding to this report https://vp.nnpdf.science/L33wX9PlSQWGoZkBrpnfjg==/

@tgiani
Contributor

tgiani commented Apr 2, 2020

uhm and what have you used to produce the effective preprocessing exponents table appearing in this report? Looking at pdf.md I see effective_exponents_table but if I run

template_text: |
 {@ effective_exponents_table @}

basis: flavour

fit: 270320_DIS_flavbas_jcm_1

actions_:
 - report(main=True)

I get

Could not process the resource 'next_effective_exponents_table', required by:
 - effective_exponents_table_internal
 - effective_exponents_table
 - template_text
 - report
Unknown basis 'FLAFLA'

@scarlehoff
Member Author

The basis in the runcard of the fit is fake I'm afraid. I ran that with a frankenstein code. The ranges I took from using #684 with the 3.1 fit. You would need to run a new fit with this code to get something useful.
For that you can start by hardcoding fitbasis = flavour in the model_gen file, for instance.

@tgiani
Contributor

tgiani commented Apr 6, 2020

@scarlehoff @Zaharid is the last commit similar to what you had in mind?

@scarlehoff
Member Author

That's dangerous because it will only work as long as you are using pure python things and it can easily break if things come in a different order, for instance. Thinking about it, I guess what I have now in mind (which, mind you, is different from last week) is to let pdfbases.py generate a rotation matrix in the init. Like:

def __init__(self, flav_info):
    rotation_matrix = pdfbases.some_function_that_generates_trans_matrix(flav_info)
    self.rotation_matrix = self.np_to_tensor(rotation_matrix)

def call(self, xflav):
    evol_basis = self.tensordot(self.rotation_matrix, xflav, axes=1)
    return evol_basis

Some transpositions might be missing, but this would be the idea. That way the rotation is 100% done by pdfbases.py and at the same time n3fit doesn't see any of that (it just gets some 8x8 matrix and applies it; what's inside doesn't matter).

The docs for tensordot: https://www.tensorflow.org/api_docs/python/tf/tensordot

@tgiani
Contributor

tgiani commented Apr 7, 2020

ok, will have a go

@tgiani
Contributor

tgiani commented Apr 7, 2020

@scarlehoff which flavour ordering should I consider to construct the rotation matrix in pdfbases? I mean, when you wrote the call method of FlavourToEvolution you added a comment saying

# Let's decide that the input is
# u, ubar, d, dbar, s, sbar, c, g
# TODO: it needs to match

where I guess you are referring to the fact that x_raw[0]=u, x_raw[1]=ubar ..... and so on, right?
Where is this ordering coming from? From the basis dictionary in the runcard? What do you mean by "it needs to match"? (sorry for the list of questions...)

If we choose that the input x_raw is always going to be like this, then to construct the rotation matrix I don't need any information from the basis dictionary; it is given by something like

[[1,1,1,1,1,1,1,1,0],
[0,0,0,0,0,0,0,0,1],
[1,-1,1,-1,1,-1,1,-1,0],
[1,-1,-1,1,0,0,0,0,0],
[1,-1,1,-1,-2,2,0,0,0],
[1,1,-1,-1,0,0,0,0,0],
[1,1,1,1,-2,-2,0,0,0],
[0,0,0,0,0,0,1,1,0]]

and I can hardcode it in pdfbasis.
But I guess this is not what you want..?

@scarlehoff
Member Author

where I guess you are referring to the fact that x_raw[0]=u, x_raw[1]=ubar ..... and so on. Right?

Yes exactly.

Where is this ordering coming from? From the basis dictionary in the runcard? What do you mean by "it needs to match"? (sorry for the list of questions...)

The order is completely arbitrary, it is just the output of the NN and by itself doesn't mean anything. It "receives" a meaning when you apply the rotation, so you just have to make sure that the rotation is following the same ordering as in the runcard.

If we choose that the input x_raw is always going to be like this, then to construct the rotation matrix I don't need any information from the basis dictionary; it is given by something like

Actually, I think it is better to construct the rotation matrix by looking at the basis dictionary; that way, if someone messes up the ordering of the runcard it will still work the same. A way you can do that is by constructing the matrix using the dictionary in pdfbases:

'photon': {'photon': 1},

so that the first line of the matrix (the one corresponding to sigma) is constructed automatically from the basis dictionary. So if the dictionary order is
'u' 'd' 's' 'c' 'ubar' 'dbar' 'sbar' 'g' you get what you have, but if it happens to be 'g' 'd' 's' 'c' 'ubar' 'dbar' 'sbar' 'u' you automatically get [0,1,1,1,1,1,1,1,1].

Hope that makes sense!
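Something like this, just to illustrate the idea (a reduced composition dictionary and a made-up helper name, not the actual pdfbases code):

import numpy as np

# Reduced, illustrative version of the composition entries in pdfbases
# (the real dictionary has all the evolution-basis elements)
EVOL_COMPOSITION = {
    "sigma": {"u": 1, "ubar": 1, "d": 1, "dbar": 1},
    "v3": {"u": 1, "ubar": -1, "d": -1, "dbar": 1},
    "g": {"g": 1},
}

def rotation_from_runcard(runcard_flavours, composition=EVOL_COMPOSITION):
    """ Build an (n_evol, n_flav) matrix whose columns follow the runcard
    ordering, so shuffling the runcard basis shuffles the columns with it """
    matrix = np.zeros((len(composition), len(runcard_flavours)))
    for i, coeffs in enumerate(composition.values()):
        for j, flavour in enumerate(runcard_flavours):
            matrix[i, j] = coeffs.get(flavour, 0)
    return matrix

# The sigma row adapts automatically to the ordering given in the runcard:
print(rotation_from_runcard(["u", "d", "ubar", "dbar", "g"])[0])  # [1. 1. 1. 1. 0.]
print(rotation_from_runcard(["g", "d", "ubar", "dbar", "u"])[0])  # [0. 1. 1. 1. 1.]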

@tgiani
Contributor

tgiani commented Apr 7, 2020

@scarlehoff ok thanks, I've tried to do something like that, let me know if it might work

@scarlehoff
Member Author

Perfect. Haven't tested but it looks good.

The only thing is that instead of reshaping the output (which can lead to mistakes, as you might reshape in the wrong order, and it forces you to know the shape of the x at compile time, which is not always true) it is better to reorganize the tensor product so that the output already has the right shape.

If I'm not wrong, here I think you can avoid the first transpose and then invert the order of the arguments in the tensor product, and the output will automatically have the right shape.
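Just to make the shapes explicit (toy dimensions, not the actual code):

import tensorflow as tf

xflav = tf.zeros((1, 40, 8))   # (batch, n_x, n_flav_nn), the NN output
rotation = tf.zeros((8, 9))    # (n_flav_nn, n_flav_out)

# Matrix first: the flavour axis ends up in front, so a transpose/reshape
# is needed afterwards
out_a = tf.tensordot(rotation, xflav, axes=[[0], [2]])
print(out_a.shape)             # (9, 1, 40)

# NN output first: the result already has the desired shape
out_b = tf.tensordot(xflav, rotation, axes=1)
print(out_b.shape)             # (1, 40, 9)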

@tgiani
Contributor

tgiani commented Apr 7, 2020

ok, then if you're happy with the code I'll start testing it with a DIS-only fit, and if everything looks fine I'll move on to iterating the global ones

@tgiani
Contributor

tgiani commented Apr 8, 2020

I guess it looks fine
https://vp.nnpdf.science/jLSe77mBSqW8DYkHJ4f8Uw==

@scarlehoff
Member Author

Thanks! The code looks quite good, I like it. It is basically what I had in mind.

Wrt the report, I am a bit worried about the low-x behaviour of some of the flavours, but it might be because it is a DIS fit. Maybe it makes sense to do a 3.1 global fit with the same parameters.

As a wish list for the future:
1 - some tests, ensuring that the functions do what they are expected to, etc. (a toy example below)
2 - given that it works, the next step I think would be for n3fit to receive, instead of the basis dictionary and doing something with it, a "basis" object that includes:
a) the rotation matrix
b) the range of the preprocessing
c) any other flavour info I might be missing
In this way we reduce the dependency of n3fit on what the actual flavours are: it will receive just a rotation matrix to apply (which can be the identity) and a list of preprocessing ranges to apply. It will also catch possible errors in the runcard before they happen.
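For 1, something as simple as this pytest-style check would already help (the matrix here is hand-written and purely illustrative):

import numpy as np

def test_flavour_to_evolution_rotation():
    """ Toy check: a 3x5 rotation (sigma, v3, g rows over u, d, ubar, dbar, g)
    applied to a pure-u input gives the expected evolution combination """
    rotation = np.array(
        [[1, 1, 1, 1, 0],    # sigma
         [1, -1, -1, 1, 0],  # v3
         [0, 0, 0, 0, 1]]    # g
    )
    pure_u = np.array([1.0, 0.0, 0.0, 0.0, 0.0])
    np.testing.assert_allclose(rotation @ pure_u, [1.0, 1.0, 0.0])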

@tgiani
Contributor

tgiani commented Apr 10, 2020

sure, it's here
https://data.nnpdf.science/fits/080420-global-flavbas-tg.tar.gz
It's the same runcard as your DIS-only fit from last week, but using all the NNPDF3.1 datasets

@tgiani
Contributor

tgiani commented Apr 10, 2020

should I use the same runcard for the iteration? or should I change some of the settings?

@scarlehoff
Member Author

No, that's fine. I was just wondering whether there were any different settings.

@tgiani
Contributor

tgiani commented Apr 15, 2020

some update:
I ran two iterations for the preprocessing exponents, 120420-global-flavbas-tg and 140420-global-flavbas-tg. The reports are https://vp.nnpdf.science/XCRr29HQRI-yNaKYK0YJdQ==/
and https://vp.nnpdf.science/ffLlFxZ3TgOvZLzQyJnlTw==/. I'm comparing them with 110420-global-evol-tg, which is a baseline obtained from the same runcard (same experiments and settings) but giving the evolution basis as input, using the code from this branch. Looking at the second report, the one showing the last iteration, it looks as if the preprocessing exponents have almost converged, however:

  • the chi2 is generally worse; in particular the one for LHCb is very bad
  • the small-x behaviour of s and sbar is totally different from the baseline
  • c is also really weird at medium and high values of x
  • the PDF errors are generally much bigger than those of the baseline

Not sure how to proceed, we can discuss later at the PC

@scarlehoff
Member Author

scarlehoff commented Apr 15, 2020

I am worried about the large-x behaviour actually, the u-quark for instance is completely off... I think we want to do some kind of check to ensure we are fitting what we think we are fitting.

The first two things that come to mind are

  1. A possible bug in the rotation that is mixing flavours "wrongly" (like the T3 having an extra contribution from the c quark or whatever)
  2. The momentum sum rule not being computed correctly. This is not impossible: maybe the flavour basis makes our naive MSR (which is basically a summation) too weak; it works well in the evolution basis. Is there a VP action to compute it from the PDF set to check whether it is ok? (a rough check is sketched at the end of this comment)

Edit: the sum rules enter here

layers["fitbasis"], layer_pdf

which is using the fitbasis layer (which is the layer after applying the rotation, so it should be ok).
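If there is no ready-made action, a rough numerical check straight from the exported PDF set would already tell us something. A sketch assuming lhapdf and scipy are available (the set name and scale are just examples):

import lhapdf
from scipy.integrate import quad

pdf = lhapdf.mkPDF("NNPDF31_nnlo_as_0118", 0)
Q = 1.65  # GeV, roughly the fitting scale
flavours = [21, 1, -1, 2, -2, 3, -3, 4, -4]  # g, d, dbar, u, ubar, s, sbar, c, cbar

def momentum_density(x):
    # xfxQ returns x*f(x), so the MSR integrand is just the sum over flavours
    return sum(pdf.xfxQ(pid, x, Q) for pid in flavours)

msr, err = quad(momentum_density, 1e-9, 1.0, limit=200)
print(f"momentum sum rule: {msr:.4f} +- {err:.1e}  (should be close to 1)")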

@tgiani
Contributor

tgiani commented Apr 17, 2020

@scarlehoff I've computed the sum rules for the fits in the evolution and flavour basis using vp2
https://vp.nnpdf.science/CyIR6VJmSdKJIDZiTe2Zew==/
clearly in the case of the flavour basis they are much more unstable.
Assuming that the rotation is done correctly (I will check further that this is the case), the problem could be the following: when you use the evolution basis each sum rule involves the integration of a single neural net (two in the case of the gluon normalization), while when you work in the flavour basis each sum rule involves the computation of some linear combination of neural nets. Of course, if the rotation is done correctly everything is conceptually ok, however maybe numerically this does make a difference, meaning that in the case of the flavour basis the integrand is basically more complicated. I could try to do the following:

  1. test the integrator you are using to compute the sum rules to see if it needs to be improved in some way when working in the flavour basis (no idea how...)
  2. implement the sum rules in the flavour basis, rather than in the evolution basis. In this way the normalization would be computed for u, d, s and g rather than for V, V3, V8 and g, and it would be applied to the PDFs before doing the rotation to the evolution basis (see below).

I'm not sure how to proceed for 1). I guess I can try to use the function check_integration in msr.py, which btw seems to be broken in this branch. As for 2), I've looked a bit at the code and I don't think it would be too difficult to implement (maybe I'm wrong...), what do you think?
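Just to write out explicitly what I mean in 2), the counting sum rules in the two bases (standard values, nothing n3fit-specific):

# flavour basis: integral(u - ubar) = 2, integral(d - dbar) = 1, integral(s - sbar) = 0
u_v, d_v, s_v = 2, 1, 0

# the evolution-basis combinations follow directly
V = u_v + d_v + s_v       # sum over q of (q - qbar)
V3 = u_v - d_v            # (u - ubar) - (d - dbar)
V8 = u_v + d_v - 2 * s_v  # (u - ubar) + (d - dbar) - 2(s - sbar)
assert (V, V3, V8) == (3, 1, 3)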

@scarlehoff
Member Author

Ok, so the sum rules are indeed all over the place. This is good! (having a culprit for your problems is always good :P)

About the possibility of having a problem in the rotation, of course this has to be checked to ensure everything is ok (maybe the gluon in the rotation matrix coming from vp is in the 3rd position and in n3fit in the 2nd position for instance) but:

when you use the evolution basis each sum rule involves the integration of a single neural net (two in the case of the gluon normalization), while when you work in the flavour basis, each sum rule involves the computation of some linear combination of neural nets

I think you are probably right here.

  1. test the integrator you are using to compute the sum rules to see if it needs to be improved in some way when working in the flavour basis (no idea how...)

As a first approximation for this we can just add more points to the integration (I think right now it is something like 1000; we can try 10 times more, the fit will be slower but we'll get a lot of information from there)

2. implement the sum rules in the flavour basis, rather than in the evolution basis. In this way the normalization would be computed for u, d, s and g rather than for V, V3, V8 and g, and it would be applied to the PDFs before doing the rotation to the evolution basis.

I think this would be much better in the long run.

We can try first adding more points with our eyes closed and see how the results change (just to have some more info).

Wrt the suggestion in today's PC about running without preprocessing, the easiest thing would be to just set the range for alpha and beta to [0.0, 0.0]

@tgiani
Contributor

tgiani commented Apr 17, 2020

ok, so having in mind today's PC I think I could proceed as follows:

  1. fits (in both flavour and evolution basis) without preprocessing and without sum rules.
    We should get more or less the same result in the two bases; if not, there's probably a bug in the rotation

  2. fits adding preprocessing (but not sum rules). For the one in the flavour basis we should implement alpha_q = alpha_qbar for q = u, d, s as suggested today

If up to this point the results in the two bases are compatible then everything is fine and we can move on to studying the best way to implement the sum rules, and whether or not the integrator is the problem. To do this:

  3. fits adding sum rules:
    3.1) keep the same implementation of the sum rules and increase the number of points in the integration when running in the flavour basis
    3.2) implement the sum rules directly in the flavour basis

@tgiani
Contributor

tgiani commented Apr 19, 2020

I thought that to run a fit without sum rules it would have been enough to set impose_sumrule = False in ModelTrainer. Doing this, n3fit crashes with

[CRITICAL]: Bug in n3fit ocurred. Please report it.
Traceback (most recent call last):
  File "/home/tommy/physics/nnpdfgit/nnpdf/n3fit/src/n3fit/n3fit.py", line 188, in run
    super().run()
  File "/home/tommy/physics/nnpdfgit/nnpdf/validphys2/src/validphys/app.py", line 144, in run
    super().run()
  File "/home/tommy/miniconda3/envs/n3fit/lib/python3.7/site-packages/reportengine/app.py", line 361, in run
    rb.execute_sequential()
  File "/home/tommy/miniconda3/envs/n3fit/lib/python3.7/site-packages/reportengine/resourcebuilder.py", line 168, in execute_sequential
    perform_final=self.perform_final)
  File "/home/tommy/miniconda3/envs/n3fit/lib/python3.7/site-packages/reportengine/resourcebuilder.py", line 175, in get_result
    fres =  function(**kwdict)
  File "/home/tommy/physics/nnpdfgit/nnpdf/n3fit/src/n3fit/performfit.py", line 287, in performfit
    replica_path_set, output_path.name, training_chi2, val_chi2, true_chi2
  File "/home/tommy/physics/nnpdfgit/nnpdf/n3fit/src/n3fit/io/writer.py", line 89, in write_data
    self.timings,
  File "/home/tommy/physics/nnpdfgit/nnpdf/n3fit/src/n3fit/io/writer.py", line 208, in storefit
    result = pdf_function(xgrid)
  File "/home/tommy/physics/nnpdfgit/nnpdf/n3fit/src/n3fit/performfit.py", line 269, in pdf_function
    [integrator_input], [], extra_tensors=[(export_xgrid, layer_pdf)]
  File "/home/tommy/physics/nnpdfgit/nnpdf/n3fit/src/n3fit/backends/keras_backend/MetaModel.py", line 96, in __init__
    super(MetaModel, self).__init__(input_list, output_list, **kwargs)
  File "/home/tommy/miniconda3/envs/n3fit/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training.py", line 146, in __init__
    super(Model, self).__init__(*args, **kwargs)
  File "/home/tommy/miniconda3/envs/n3fit/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/network.py", line 169, in __init__
    self._init_graph_network(*args, **kwargs)
  File "/home/tommy/miniconda3/envs/n3fit/lib/python3.7/site-packages/tensorflow_core/python/training/tracking/base.py", line 457, in _method_wrapper
    result = method(self, *args, **kwargs)
  File "/home/tommy/miniconda3/envs/n3fit/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/network.py", line 275, in _init_graph_network
    self._validate_graph_inputs_and_outputs()
  File "/home/tommy/miniconda3/envs/n3fit/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/network.py", line 1328, in _validate_graph_inputs_and_outputs
    ' (missing previous layer metadata).')
ValueError: Input tensors to a MetaModel must come from `tf.keras.Input`. Received: None (missing previous layer metadata).

I think the problem is that, when setting impose_sumrule = False, the xgrid integrator_input used to perform the sum rule integrals was set to None. However, this same xgrid is used in the function pdf_function of performfit.py, so it needs to be set even if the sum rules are not imposed. The last commit takes care of this, maybe there's a better way to do it.
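Roughly the idea (the helper name and shapes here are made up, not the actual commit):

import numpy as np

def generate_integration_input(npoints=2000):
    """ Hypothetical helper: a log-spaced xgrid with the usual
    (1, npoints, 1) shape expected as model input """
    return np.logspace(-9, 0, npoints).reshape(1, npoints, 1)

impose_sumrule = False  # example setting

# build the xgrid unconditionally, since pdf_function in performfit.py
# needs it to dump the PDF grid at the end of the fit...
integrator_input = generate_integration_input()

# ...and only wire it into the momentum sum rule when requested
if impose_sumrule:
    msr_input = integrator_input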

@tgiani
Contributor

tgiani commented Apr 19, 2020

thinking again about what was discussed at the PC on Friday, I'm writing down some considerations here before I forget:
as far as I understand, one of the main reasons for the preprocessing is that it ensures an integrable behaviour of the PDFs at small and large x, so that the integrals of the sum rules can be easily computed.
If we implement the preprocessing in the flavour basis and then compute the sum rules in the evolution basis, as we have done so far, there's not much point in the preprocessing itself. It looks to me like we have 3 options:

  1.  • parametrize the PDFs in the flavour basis
      • apply preprocessing
      • rotate to the evolution basis
      • compute and apply the sum rules in the evolution basis

  2.  • parametrize the PDFs in the flavour basis
      • apply preprocessing (imposing \alpha_q = \alpha_qbar)
      • compute and apply the sum rules in the flavour basis
      • rotate to the evolution basis

  3.  • parametrize the PDFs in the flavour basis
      • rotate to the evolution basis
      • apply preprocessing
      • compute and apply the sum rules in the evolution basis

The first option is what is implemented right now, and we have seen that it doesn't work well.
The second is basically what was suggested at the PC on Friday.
The third is similar to the second (imposing \alpha_q = \alpha_qbar is not completely equivalent to, but similar to, imposing the sum rules in the evolution basis) but would require less work from the code point of view, and maybe it is also a bit cleaner. Also, the preprocessing exponents should in principle be the same as the old ones.

Having to choose between 2) and 3), I would start with 3)

@scarlehoff
Member Author

Yeah, the impose_sumrule flag has been broken for a while sadly (I basically never thought we'd need to turn it off, so I didn't worry when something broke it...).

If we implement the preprocessing in the flavour basis and then compute the sum rules in the evolution basis, as we have done so far, there's not much point in the preprocessing itself

Maybe in theory, but in practice the preprocessing is also driving the behaviour of the fit at small (and large) x, so a fit without preprocessing will produce very different results.

Having to choose between 2) and 3), I would start with 3)

The problem with 3) is that I am not sure whether it counts as "fitting in the flavour basis". That said, as a first test it might make sense and it should work out of the box, because that rotation should be easily "absorbed" by the neural network. I.e., I think in 3), at first approximation, you should get exactly the same results as before.

@tgiani
Contributor

tgiani commented Apr 20, 2020

The problem with 3) is that I am not sure whether it counts as "fitting in the flavour basis".

I guess that the point of the flavour basis is about imposing positivity of each single flavour, and in this way you could do that by imposing the positivity of each neural net, without messing with preprocessing and sum rules. However, I guess it makes sense to discuss this on Friday

That said, as a first test it might make sense and it should work out of the box because that rotation should be easily "absorbed" by the neural network. i.e., I think in 3) at first approximation you should get exactly the same results as before.

yes, that's what I was thinking, with the only difference that now you can impose positivity on each neural net. I'll start with this and then we can look at 2) as well

@scarlehoff
Member Author

I guess that the point of the flavour basis is about imposing positivity of each single flavour,

This is what doesn't convince me, because you are imposing positivity on some function which is not the final flavour function, and then later on you multiply by the preprocessing and normalization in a different basis. I think positivity will not be preserved if you rotate back to the flavour basis.

Actually, thinking about it, I am not even sure you can apply preprocessing before the normalization of the sum rules...

But it is just a conjecture at this point, maybe it works fine.

@tgiani
Contributor

tgiani commented Apr 21, 2020

Ok, I've changed my mind :) I'm now thinking of implementing 2), which is what was discussed at the PC.
I've modified the preprocessing layer in order to have the same preprocessing exponents for both quarks and antiquarks when working in the flavour basis.

Not sure if this is the best way: I've replaced the list kernel, which contained the preprocessing parameters in a given order, with a dictionary, so that it is easier to identify the quark-antiquark pairs without having to rely on the specific order in which the different flavours appear in the runcard. If the evolution basis is used, nothing changes.

@scarlehoff
Member Author

Not sure if this is the best way: I've replaced the list kernel, which contained the preprocessing parameters in a given order, with a dictionary,

The problem is, inside the call you actually want to have a list (or an array). Also, the order at that point is fixed by the fktables, so it is ok for it to be fixed.
You can change the order inside build, however (so you are free to check the name of the flavour and add it to the correct position there).
Beyond all that, the kernels are actually parameters used by tensorflow, so if you pass a dictionary it needs to convert the full dictionary.

All that said, being such a small dictionary it doesn't really matter, and also we have decided not to train the preprocessing so it matters even less, but:
since you are not allowed to use this information elsewhere and it only serves between the build and call methods within the same layer, I would keep the list.

tl;dr, I'd do the "digestion" of the basis in __init__ and build such that the call method is as simple as possible.
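A minimal sketch of the pattern I mean, assuming a flav_info list of dicts with keys like "fl", "smallx", "largex" (the class and the x^(1-alpha)(1-x)^beta prefactor here are illustrative, not the actual n3fit layer):

import tensorflow as tf

class FlavourBasisPreprocessing(tf.keras.layers.Layer):
    """ Illustrative sketch: all the digestion of the basis happens in
    __init__/build, the kernel stays an ordered list, and quark/antiquark
    pairs share the same exponents """

    def __init__(self, flav_info, **kwargs):
        super().__init__(**kwargs)
        self.flav_info = flav_info

    def build(self, input_shape):
        shared = {}
        self.alphas, self.betas = [], []
        for entry in self.flav_info:
            name = entry["fl"]
            # "ubar" shares its exponents with "u", etc.
            partner = name[:-3] if name.endswith("bar") else name
            if partner not in shared:
                shared[partner] = (
                    self.add_weight(
                        name=f"alpha_{partner}",
                        initializer=tf.constant_initializer(entry["smallx"][0]),
                        trainable=False,
                    ),
                    self.add_weight(
                        name=f"beta_{partner}",
                        initializer=tf.constant_initializer(entry["largex"][0]),
                        trainable=False,
                    ),
                )
            alpha, beta = shared[partner]
            self.alphas.append(alpha)
            self.betas.append(beta)
        super().build(input_shape)

    def call(self, x):
        # x: (batch, n_x, 1) -> prefactor per flavour: (batch, n_x, n_flavours)
        alpha = tf.stack(self.alphas)
        beta = tf.stack(self.betas)
        return tf.pow(x, 1 - alpha) * tf.pow(1 - x, beta)

The call method then never needs to know which flavour is which.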

@tgiani
Contributor

tgiani commented Apr 22, 2020

ah I see, yeah I've broken everything... ok, I'll give it another go

@tgiani
Contributor

tgiani commented Apr 22, 2020

Also, the order at that point is fixed by the fktables so it is ok to be fixed.

Uhm, here the order is still the one which comes from the runcard, right? I mean, just like the rotation matrix of the FlavourToEvolution class is built dynamically according to the order in which the flavours are given in the runcard, I would say the order here should be read from the runcard as well. No?

@tgiani
Contributor

tgiani commented Apr 22, 2020

btw, here there are some funny-looking fits without preprocessing and without sum rules, in both the flavour and evolution basis
https://vp.nnpdf.science/1a3dZy5CRKCPhEeTJv8FiQ==
I guess the point is that at some point at small x the neural nets start being 0, so everything is screwed up?

@tgiani
Contributor

tgiani commented Apr 25, 2020

@scarlehoff I was trying to implement the subtraction at x=1 discussed on Friday, could the last commit work for that at first approximation?

@scarlehoff
Member Author

@scarlehoff I was trying to implement the subtraction at x=1 discussed on Friday, could the last commit work for that at first approximation?

From a practical point of view it could, but we should discuss on Wednesday with @scarrazza how to go about this, as this is no longer about the "flavour basis" but rather about preprocessing.

imo we should do a second branch to deal with the preprocessing and, once that's dealt with, come back here (you get an operation for the ones_like in PR #728, which should be merged soon).

But let's talk on Wednesday.

def dense_me(x):
    """ Takes an input tensor `x` and applies all layers
    from the `list_of_pdf_layers` in order """
    x0 = operations.m_tensor_ones_like(x)
Member Author

Instead of creating a full tensor of the same size as x every time, it'd be better to just feed a 1.0

Contributor

uhm, I don't understand... what do you mean by "just feed a 1.0"?

Member Author

Instead of passing an array of, say, 50 1.0s, you can just pass a single 1.0 and it should be the same.
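In terms of shapes the difference is just this (illustrative numbers):

import tensorflow as tf

x = tf.random.uniform((1, 50, 1))   # a typical xgrid input

# what the current code does: a ones tensor with the same shape as x
x0_full = tf.ones_like(x)           # shape (1, 50, 1)

# the suggestion: the network evaluated at x = 1 is the same number for
# every grid point, so a single 1.0 is enough and can be broadcast later
x0_single = tf.ones((1, 1, 1))      # shape (1, 1, 1)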

@scarlehoff
Member Author

scarlehoff commented Apr 28, 2020

To merge the flavour part (rotation) it would be better to have a new branch with only that.

@tgiani
Contributor

tgiani commented Apr 29, 2020

ok, I've created a new branch off master where I've cherry-picked the first commits of this branch regarding the implementation of the flavour basis. The corresponding PR is #749. We can merge that one while we keep working on preprocessing and sum rules in this one

@scarrazza scarrazza closed this May 13, 2020
@scarrazza scarrazza deleted the n3fit_in_flavour_basis branch August 27, 2021 21:41

Labels

n3fit Issues and PRs related to n3fit


Development

Successfully merging this pull request may close these issues:
Expose basis choice in n3fit runcard
n3fit should use existing code for pdfbases
