
Cache of operators #173

@scarlehoff

Description


At the moment, in order to generate a theory, we need to generate an enormous number of EKOs.

However, since many datasets share the same scales and all pineappl grids share the same x-factors, it should be possible to build a cache of EKOs (for a given theory).

So for instance, if I'm going to run:

pineko theory ekos dataset_n 410000000

pineko should be able to:

  1. Read all the EKOs already present in the eko folder (the folder in which the EKO for dataset_n will be generated)
  2. Read the relevant operator cards (no need to open all the EKOs themselves)
  3. Determine which of the operators for dataset_n have already been computed, and take them directly from there.
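The lookup in the steps above could be sketched as follows. This is only a sketch under assumed names: the card layout (a dict with a `"Q2grid"` key) and the function name are hypothetical, not pineko's actual schema.

```python
def find_cached_operators(target_card, existing_cards):
    """Map each Q2 point requested by target_card to an already-computed EKO.

    target_card: {"Q2grid": [...]} -- card for the dataset we want (assumed layout)
    existing_cards: {eko_name: card} -- cards already present in the eko folder

    Returns (reused, missing): reused maps Q2 -> name of the EKO that already
    contains that operator; missing lists the Q2 points still to be computed.
    """
    # Index every Q2 point that some existing EKO already covers.
    cache = {}
    for name, card in existing_cards.items():
        for q2 in card["Q2grid"]:
            cache.setdefault(q2, name)

    reused, missing = {}, []
    for q2 in target_card["Q2grid"]:
        if q2 in cache:
            reused[q2] = cache[q2]
        else:
            missing.append(q2)
    return reused, missing
```

With this, pineko would only need to run eko for the `missing` points and could copy the `reused` operators from their source files.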

The (ideal) next step would be to not save all operators per dataset, but only the union of all operators requested across all operator cards.
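The "union" step is just a set union over the requested scale points, so the storage saving comes from deduplicating operators shared between datasets. A minimal sketch, again assuming a hypothetical card layout with a `"Q2grid"` key:

```python
def union_q2_points(all_cards):
    """Union of all Q2 points requested by any operator card.

    Storing one operator per point in this union, instead of one full EKO
    per dataset, avoids duplicating operators shared between datasets.
    all_cards: iterable of card dicts with a "Q2grid" key (assumed layout).
    """
    points = set()
    for card in all_cards:
        points.update(card["Q2grid"])
    return sorted(points)
```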

I'm wondering whether this is a crazy idea or whether it could be doable. I'm particularly interested in the ideal next step, since I'm having storage problems...

Metadata


Labels: enhancement (New feature or request), question (Further information is requested)
