[WIP] Data Keyword #515
Conversation
|
Do we need to have both? I think that what we should do instead is to have a production rule that you need to put in front explicitly to get data from experiments. In your runcard you would do:

```yaml
experiments:
  <...>

actions_:
  - data_from_experiments action_taking_data
```
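In reportengine terms, the suggested production rule could look something like the following minimal sketch. The `Config` class, the resolution logic, and the dataset structure here are mocked-up assumptions for illustration; only the `data_from_experiments` name comes from the comment above.

```python
# Minimal mock of a reportengine-style Config: keys present in the runcard
# are handled by parse_<key> methods, while produce_<key> methods derive
# new resources from already-parsed ones.
class Config:
    def __init__(self, runcard):
        self.runcard = runcard

    def parse_experiments(self, value):
        # In validphys this would construct experiment objects; here we
        # just pass the raw list through.
        return value

    def produce_data_from_experiments(self, experiments):
        # Flatten the datasets of every experiment into a single resource
        # that downstream actions can depend on.
        return [ds for exp in experiments for ds in exp["datasets"]]

    def resolve(self, key):
        # Very simplified resolution: try an explicit runcard key first,
        # then fall back to a production rule.
        if key in self.runcard:
            return getattr(self, f"parse_{key}")(self.runcard[key])
        producer = getattr(self, f"produce_{key}", None)
        if producer is not None:
            # Simplification: the only dependency we model is `experiments`.
            return producer(self.resolve("experiments"))
        raise KeyError(key)

runcard = {
    "experiments": [
        {"name": "NMC", "datasets": ["NMCPD", "NMC"]},
        {"name": "ATLAS", "datasets": ["ATLASWZRAP36PB"]},
    ],
}
c = Config(runcard)
print(c.resolve("data_from_experiments"))  # ['NMCPD', 'NMC', 'ATLASWZRAP36PB']
```

The point being that an action only gets the flattened data if the rule is named explicitly in the runcard, which keeps the dependency visible.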
|
That works nicely on the level of a single action, but what about vp-comparefits? Is this just going to work on fits which have specified the new keyword?
|
So, to expand further: we want to be able to use the `data` keyword.
|
I'd say we could patch up vp-comparefits to apply the production rule automatically; we do something similar for the theory covmat anyway. That said, one argument against the production rule approach would be that then you cannot quietly rewrite all actions to work with `data`. The argument in favour would be that then everything is nice and explicit.
|
Ok, I only just read your comment and was already trying something a bit different this morning. I haven't fully finished it yet, but what about this approach? Similar to `use_cuts`, you can do a … The only annoying thing is the old …
|
Ok, obviously this won't pass CI yet because I changed …
|
@Zaharid sorry to bother you; since you were away last week I guess you didn't look at this, but I was wondering if you could have a look? I don't feel comfortable doing any more work on it until then, since I'm concerned I'm going down the wrong path with it.
|
I'm not quite sure how I'd go about all the details, but I'd say that one thing to improve over how … So at some level there would be … In a sense the biggest problem is to come up with good names for all of these things.
|
Ok, after having a conversation in Milan, writing this here for the record:

- The production rule which adds these together will produce a new namespace with defaults filled in, and save this to a lock file (#496).
- Defaults will be added per dataset to the PLOTTING file. A new file shall be added with global defaults.
- `parse_data` looks in the runcard for the `data` key and then constructs the data object.
- `parse_data_defaults` takes `data` as input; it will first look in the runcard (in the case it is a lock file) and fill in the defaults from there. If the defaults are not specified in the runcard, it shall take them from the set location.
- `produce_data_plus_defaults` (might be renamed) takes the above functions and creates a new runcard object (lock file), updated with a `defaults` key if it didn't already exist, and returns the namespace with those defaults filled in the correct places.
- `parse_theory_defaults` gets defaults according to the specific theory, like cfacs and cfac uncertainties, either from the runcard or the set location.
- `produce_data_plus_theory_defaults` (also might be renamed) finally adds the theory defaults to this namespace and updates the lock file.
- Now that all changes to the lock file have happened, it can be saved. This should happen before the code starts, and could possibly have uses in debugging: we can check that people's defaults are up to date, for example.
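A rough sketch of how the defaults/lock-file chain described above could fit together. The function names follow the list above, but everything else is an assumption: the dict-based "lock file", the hard-coded `GLOBAL_DEFAULTS` (standing in for the set location), and the specific default values are all illustrative.

```python
import copy

# Stand-in for the file of global defaults described above; the keys and
# values here are made up for illustration.
GLOBAL_DEFAULTS = {"q2min": 3.49, "w2min": 12.5}

def parse_data(runcard):
    # Look in the runcard for the `data` key and construct the data
    # object (here just the raw list of dataset names).
    return runcard["data"]

def parse_data_defaults(runcard):
    # Prefer defaults already recorded in the runcard (i.e. it is a lock
    # file); otherwise fall back to the set location, mocked here by
    # GLOBAL_DEFAULTS.
    return runcard.get("defaults", copy.deepcopy(GLOBAL_DEFAULTS))

def produce_data_plus_defaults(runcard):
    # Build the lock file: the original runcard updated with a `defaults`
    # key if it did not already have one, plus the namespace with the
    # defaults filled in per dataset.
    defaults = parse_data_defaults(runcard)
    lockfile = dict(runcard, defaults=defaults)
    namespace = {ds: dict(defaults) for ds in parse_data(runcard)}
    return lockfile, namespace

runcard = {"data": ["NMCPD", "ATLASWZRAP36PB"]}
lockfile, ns = produce_data_plus_defaults(runcard)
print(lockfile["defaults"])   # {'q2min': 3.49, 'w2min': 12.5}
print(ns["NMCPD"]["q2min"])   # 3.49
```

Running the same function on the resulting lock file is then a no-op for the defaults, which is what makes saving the lock file before the code starts useful for reproducibility.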
|
see #651
I will add some more description here later.

I basically realised straight away that I actually don't know a good way to parse both `experiments` and `data` and produce the same object, with a warning for the former and, in the case that both are missing, raising the correct exception that the `data` keyword is missing. I found that this does some of the right things, but having both a production rule and a parse for the same thing seems like I'm not using this in the way it was intended.

Any suggestions? Being able to process `experiments` is purely from a backwards-compatibility perspective.
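For the record, one way the backwards-compatibility behaviour described here could work, written as a plain function rather than a real reportengine parse/produce method. `ConfigError`, the message text, and the experiment structure are placeholders; only the `data`/`experiments` keys and the warn-then-fall-back behaviour come from the discussion above.

```python
import warnings

class ConfigError(Exception):
    """Placeholder for the real configuration error type."""

def produce_data(runcard):
    # Accept the new `data` key directly; fall back to the deprecated
    # `experiments` key with a warning; and complain specifically about
    # the missing `data` key if neither is present.
    if "data" in runcard:
        return runcard["data"]
    if "experiments" in runcard:
        warnings.warn(
            "The `experiments` key is deprecated; use `data` instead.",
            DeprecationWarning,
        )
        return [ds for exp in runcard["experiments"] for ds in exp["datasets"]]
    raise ConfigError("The `data` keyword is missing from the runcard.")

print(produce_data({"data": ["NMCPD"]}))  # ['NMCPD']
```

Keeping the whole shim inside one production rule at least localises the deprecation logic, instead of spreading a parse and a production rule for the same key across the config.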