
Add FittingResult model #134

@jonc125

Description

Probably in a fitting app, cf. #133?

See https://github.com/ModellingWebLab/project_issues/wiki/Workshop-notes-2018 for ideas.

Viewing results should hopefully just reuse code from #131 etc. But we'll need to do (at least some of) #135 before this can be fully implemented and tested.

The plan is being sketched out further in #203 and #241. Possible steps:

  • Do the Runnable refactor outlined in First outline of fitting result models #241, with DB migrations, ensuring tests still pass, etc. This can be a single PR. (A sketch of the resulting models follows this list.)
    • This will mean that the RunningExperiment table won't need changing for fitting experiments, and indeed receipt of finished experiments in experiments/processing.py:process_callback (and cancelling running experiments) won't need any additions.
  • Then, building on that and on Use foreign keys for model/proto version references in Experiment table #250, create the FittingResult model inspired by the outline in First outline of fitting result models #241. No views initially, just tests of the model directly? (Like experiments/tests/test_models.py.)
    • So create the FittingResult and FittingResultVersion models.
    • FittingResult needs to link to repocache tables for entity versions, not use SHA strings.
  • Then basic views along the lines of experiments/views.py, trying to reuse code where possible, as was done for the specs & datasets (see the view sketch after this list):
    • FittingResultVersionListView
    • FittingResultVersionView
    • FittingResultVersionJsonView
    • FittingResultDeleteView
    • FittingResultVersionDeleteView
    • FittingResultFileDownloadView
    • FittingResultVersionArchiveView
  • Adapt submit_experiment to create FittingResult etc. instances, refactoring to share code where possible.
  • Harder views, which will probably need JS rework, e.g. as discussed in Create ExperimentalDataset app with basic model & views #131:
    • FittingResultComparison[Json]View
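
To make the shape of the first two steps concrete, here's a minimal sketch of what the models might look like after the Runnable refactor. It assumes multi-table inheritance for Runnable, a UserCreatedModelMixin shared base, repocache version classes named CachedModelVersion / CachedProtocolVersion / CachedFittingSpecVersion, and a datasets.Dataset model from #131; all of these names are guesses pending #241 and #250, not the final design.

```python
# Sketch only: class and field names follow the outline in #241 but are
# assumptions until that refactor actually lands.
from django.db import models

from core.models import UserCreatedModelMixin  # assumed shared base class
from repocache.models import (  # assumed repocache version tables (cf. #250)
    CachedFittingSpecVersion,
    CachedModelVersion,
    CachedProtocolVersion,
)


class Runnable(models.Model):
    """Base for anything the back-end can run (experiment or fitting)."""
    STATUS_QUEUED = 'QUEUED'
    STATUS_RUNNING = 'RUNNING'
    STATUS_SUCCESS = 'SUCCESS'
    STATUS_FAILED = 'FAILED'

    status = models.CharField(max_length=16, default=STATUS_QUEUED)
    created_at = models.DateTimeField(auto_now_add=True)
    # RunningExperiment keeps a single FK to Runnable, so process_callback
    # and cancellation work unchanged for fitting experiments.


class FittingResult(UserCreatedModelMixin, models.Model):
    """A fit of one model version to a dataset under a protocol + spec."""
    # Link to repocache version rows, not raw SHA strings.
    model_version = models.ForeignKey(CachedModelVersion, on_delete=models.CASCADE)
    protocol_version = models.ForeignKey(CachedProtocolVersion, on_delete=models.CASCADE)
    fittingspec_version = models.ForeignKey(CachedFittingSpecVersion, on_delete=models.CASCADE)
    dataset = models.ForeignKey('datasets.Dataset', on_delete=models.CASCADE)

    class Meta:
        unique_together = (
            ('model_version', 'protocol_version', 'fittingspec_version', 'dataset'),
        )


class FittingResultVersion(Runnable):
    """One run of a FittingResult; archive files hang off this."""
    fittingresult = models.ForeignKey(
        FittingResult, on_delete=models.CASCADE, related_name='versions',
    )
```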
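
Similarly, a sketch of one of the simpler views, in the spirit of experiments/views.py. The template path and ordering here are illustrative, and the real version would presumably pull in whatever visibility mixins end up shared with the experiment views:

```python
# Sketch of FittingResultVersionListView; everything here except the
# generic DetailView machinery is an assumption about how the code
# gets factored out of experiments/views.py.
from django.views.generic import DetailView

from .models import FittingResult


class FittingResultVersionListView(DetailView):
    """List all versions (runs) of a single FittingResult."""
    model = FittingResult
    context_object_name = 'fittingresult'
    template_name = 'fitting/fittingresult_versions.html'  # assumed path

    def get_context_data(self, **kwargs):
        # Newest run first, mirroring the experiment version list.
        kwargs['versions'] = self.object.versions.order_by('-created_at')
        return super().get_context_data(**kwargs)
```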

General notes:

  • To get submission of fitting experiments to work we'll need a 'fitting' version of experiments/processing.py:submit_experiment that creates the FittingResult etc. instances. We may be able to refactor so they share some code, though (see the sketch after these notes).
  • We don't want to try automatically migrating the existing hacky fitting specs (stored as Protocol instances) to FittingSpec instances etc. Instead, manually re-create the ones we care about (just one really, i.e. Kylie sine wave cell 5).
  • We'll need to consider how best to test this as we develop, including building enough of the fitting experiment submission UI (Design UI to run fitting experiments #135) to test it out manually. Perhaps better to use a sample result from the current hacky implementation as a test case? (Maybe downsample the 3MB CSV for speed; a sketch for that follows too!)
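
On the submission note, one possible shape for the shared-code refactor. Every name below is illustrative (submit_runnable doesn't exist, and the payload keys are made up); the point is just that both submission paths could reduce to "create the right Runnable, then send one common payload to the back-end":

```python
# Hypothetical refactor sketch, not the current processing.py API.

def submit_runnable(runnable, payload, user):
    """Shared tail of submit_experiment / submit_fittingresult: this is
    where the existing HTTP call to the back-end service would move."""
    ...


def submit_fittingresult(fittingresult, user):
    """Fitting twin of submit_experiment: create the version, then share."""
    version = fittingresult.versions.create()
    payload = {
        'model': fittingresult.model_version.sha,        # repocache fields,
        'protocol': fittingresult.protocol_version.sha,  # per #250
        'fittingSpec': fittingresult.fittingspec_version.sha,
        'dataset': str(fittingresult.dataset.id),        # illustrative
    }
    return submit_runnable(version, payload, user)
```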
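
And for the last note, cutting the 3MB sample CSV down could be as crude as keeping the header plus every Nth data row; a throwaway sketch (file names hypothetical):

```python
import csv


def downsample_csv(src_path, dest_path, keep_every=10):
    """Copy a CSV, keeping the header row and every Nth data row."""
    with open(src_path, newline='') as src, open(dest_path, 'w', newline='') as dest:
        reader = csv.reader(src)
        writer = csv.writer(dest)
        writer.writerow(next(reader))  # header
        for i, row in enumerate(reader):
            if i % keep_every == 0:
                writer.writerow(row)


# e.g. downsample_csv('kylie-cell5-full.csv', 'kylie-cell5-small.csv', keep_every=20)
```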
