Design UI to run fitting experiments #135

@jonc125

Description

This will also need to link up with work on the back-end (fc-runner) to run these for real.

Should probably look much like running experiments in terms of logic flow.

UI elements related to fitting:

  1. See all fitting results for a given Model version
    • Listed by Dataset.
    • Compare fits for a given Model to different Datasets - overlay histograms. This should be a standard ‘compare’ type view, but for FittingResults rather than Predictions or Datasets.
    • Compare fits for multiple Models to same Dataset (as above). (This would include fitting multiple versions of the same Model.)
    • Model vs Dataset matrix view to make it easy to set up such comparisons?
      • (And click on blank cells to run new fitting experiments?)
  2. See all fits run for a given Dataset
    • List by Model, etc. as above.
  3. From a Protocol, see fits made using it?
    • Group by Model and/or Dataset? Submatrix view?
  4. Running a new fitting experiment
    • Start from a Dataset - if a/the linked Protocol has at least one FittingSpec associated?
      1. Select FittingSpec to use
      2. Select Model(s) to fit, constrained by (a) having parameters corresponding to the required priors, and (b) being the right species, cell type, or other matching metadata?
        • What should this look like in the UI?
        • Drop-down list of all (visible) Models (that match constraints)
        • Other multi-select drop-downs to filter by available ontology terms?
      3. Having selected a Model, default to the latest version but with option to pick earlier one.
      4. Select version of linked Protocol to use (could be done in parallel with 1)
      5. Allow any parts of FittingSpec to be altered? (E.g. priors, noise, method, constraints) Certainly not initially, but maybe as a later feature (see also guided UI, below)?
    • Start from a Model
      • Would then have to select a Dataset, then follow steps 1, 4 and 5 above.
    • Start from a FittingSpec
      • Select the Dataset and Model; proceed as though from a Dataset
    • Start from a FittingResult - call this “re-fit to new model/data”
      • As above, but with Model & Dataset pre-selected but adjustable.
      • Should it be possible to change FittingSpec in this scenario? I’m thinking not.
        • So new Dataset would have to link to same Protocol.
        • Can change Protocol version?
    • Start from a Prediction? Probably not.
      • Similarly to starting from a Model, but Protocol is pre-selected, restricting FittingSpec and Dataset options?
  5. From a FittingResult, create a clone of the original Model with best fit parameter values?
    • Copy the history (fork the repo) and add a new commit with the parameter changes
    • New Model should also link back to the FittingResult, for provenance
    • Then you can do forward simulations from this state, and also download the new CellML
    • Predictions generated from such a Model need to make clear that they were generated from a fitted model - so they don't look super prescient when compared to data!
    • Ideally you’d also want to be able to do multiple forward simulations sampling from the fitted distributions, and compare distributions of outputs. This would be a separate UI flow I think, going direct from the FittingResult to perhaps a new type of result. (PredictionSet? Or just use Prediction, but with result files that are histograms rather than line plots? That would need to link to a FittingResult rather than a Model, though, which suggests a new type.)
  6. A guided UI to set up a new fitting spec? Or just textual, but generic enough to be easily reused? See also Epic 4: Develop fitting spec & implement it using FC+PINTS project_issues#60.
    • The ability to clone entities would help here too.
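The Model vs Dataset matrix view in item 1 could be driven by a simple lookup built from existing results. A minimal sketch, assuming plain-dict stand-ins for the real FittingResult entities (field names here are hypothetical, not the actual schema):

```python
from collections import defaultdict

# Hypothetical minimal records; the real entities would be ORM objects.
fitting_results = [
    {"model": "model_x", "dataset": "dataset_a", "id": 1},
    {"model": "model_x", "dataset": "dataset_b", "id": 2},
    {"model": "model_y", "dataset": "dataset_a", "id": 3},
]

def build_matrix(results):
    """Map (model, dataset) -> list of FittingResult ids.

    Cells with entries render as links to compare views; blank cells
    are candidates for launching a new fitting experiment.
    """
    matrix = defaultdict(list)
    for r in results:
        matrix[(r["model"], r["dataset"])].append(r["id"])
    return dict(matrix)

matrix = build_matrix(fitting_results)
assert matrix[("model_x", "dataset_a")] == [1]
# A blank cell: no fits yet, so the UI could offer "run new fit" here.
assert ("model_y", "dataset_b") not in matrix
```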
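The constraint in item 4 step 2 - only offering Models whose parameters cover the FittingSpec's priors - amounts to a subset check. A sketch under assumed shapes (a set of prior parameter names, and a mapping from model name to its annotated parameters; neither is the real schema):

```python
def models_fittable_with(spec_prior_params, models):
    """Return names of models whose parameters cover every prior in the spec.

    `spec_prior_params`: parameter names the FittingSpec defines priors for.
    `models`: mapping of model name -> set of its parameter annotations.
    Metadata filters (species, cell type, ...) would be applied separately.
    """
    required = set(spec_prior_params)
    return [name for name, params in models.items() if required <= params]

models = {
    "model_a": {"g_Na", "g_K", "g_CaL"},
    "model_b": {"g_Na", "g_K"},
}
# Both models expose g_Na; only model_a also exposes g_CaL.
assert models_fittable_with({"g_Na"}, models) == ["model_a", "model_b"]
assert models_fittable_with({"g_Na", "g_CaL"}, models) == ["model_a"]
```

This list would then populate the drop-down of matching Models, with further multi-select filters narrowing by ontology terms.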
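The clone-with-best-fit flow in item 5 boils down to: fork, apply parameter values, record provenance. A sketch using a plain dict as a stand-in for a model version (the real implementation would fork the git repo and add a commit to the CellML; `fitted_from` is a hypothetical name for the provenance link back to the FittingResult):

```python
import copy

def clone_with_fit(model, fitting_result_id, best_fit):
    """Fork a model and apply best-fit parameter values.

    Returns a new model dict; the original is left untouched, mirroring
    how a repo fork leaves the source Model version unchanged.
    """
    new_model = copy.deepcopy(model)
    new_model["parameters"].update(best_fit)
    new_model["fitted_from"] = fitting_result_id  # provenance link
    return new_model

original = {"name": "model_x", "parameters": {"g_Na": 75.0, "g_K": 0.46}}
fitted = clone_with_fit(original, fitting_result_id=42, best_fit={"g_Na": 68.2})
assert fitted["parameters"]["g_Na"] == 68.2
assert original["parameters"]["g_Na"] == 75.0  # original untouched
assert fitted["fitted_from"] == 42
```

Carrying the `fitted_from` link through to any Predictions run from the clone is what lets the UI flag them as fitted-model outputs, so they don't look super prescient when compared against the data.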
