Functionality to patch models with changes #1031
Conversation
@alexdewar This is nearly working. Before I finish it up, are you happy with the general approach (i.e. the approach for building patches, saving to a temp directory, etc.)? One annoying thing is that it doesn't work with multiple tests in the same file because of an issue with the logger (this is why the tests are currently failing). I guess we could just keep all these tests in separate files, but if we're going to do this a lot then that could get a bit annoying.
```rust
/// Merge a TOML patch into a base TOML string and return the merged TOML.
fn merge_model_toml(base_toml: &str, patch: &toml::value::Table) -> Result<String> {
```
Could this function just consume `patch` instead?
Possibly, but since it's called by `ModelPatch::build` on `self.toml_patch`, you'd have to clone it first.
alexdewar left a comment
I think this looks great in general. Would be super handy!
We could probably add more helper functions/macros for the common cases, e.g. "if you add this line to assets.csv of the simple example, validation should fail with this error message", but we can add those as we go along.
I think it is a problem if it can only be used in integration tests though -- ideally we'll want to do this in normal unit tests. Can we rework things so we can optionally skip setting up the logger for validating/running models? For validation, we could actually just call the load_model function rather than handle_validate_command, because we also don't need or want to load settings.toml.
Thanks
Good idea
Agree we can just call `load_model`. The other problem is with the check at lines 178 to 180 in db2c40a. What do you think we should do about this?
It's a bit gross but I think we'll just have to disable the check when running tests 😞. Maybe something like this?

```rust
// Set flag signalling whether broken model options are allowed or not
let result = BROKEN_OPTIONS_ALLOWED.set(model_params.allow_broken_options);
if result.is_err() {
    if cfg!(test) {
        // Sanity check
        assert_eq!(
            model_params.allow_broken_options,
            broken_model_options_allowed()
        );
    } else {
        panic!("Attempted to set BROKEN_OPTIONS_ALLOWED twice");
    }
}
```

It should probably live in its own function. I guess this is the downside of having mutable global variables... The problem with this approach is that we'll only be able to have unit tests for non-broken or broken options, but not both. We can work around this by having any tests for broken options as an integration test instead, so it runs in a separate process.
Codecov Report

❌ Patch coverage is

Additional details and impacted files

```
@@ Coverage Diff @@
##             main    #1031      +/-   ##
==========================================
+ Coverage   80.71%   81.07%   +0.35%
==========================================
  Files          52       53       +1
  Lines        6924     7193     +269
  Branches     6924     7193     +269
==========================================
+ Hits         5589     5832     +243
- Misses       1063     1066      +3
- Partials      272      295      +23
```

☔ View full report in Codecov by Sentry.
Thanks @alexdewar. I think this is ready for another look now. I've added a couple of macros that could prove useful for testing. Currently they're just for applying file patches to the simple model, but they could be adapted to TOML patches and other example models. Like you've mentioned, we won't be able to test broken options like this. I initially thought we could set `allow_broken_results` to true as part of the patch within
alexdewar left a comment
This is great!
I've got a couple of small suggestions about the API, but these aren't important and could be done later.
- I'd add a `ModelPatch::from_example` helper, as that's what we'll want to do 99% of the time.
- I'd provide a macro to check the actual error message in case of error, so the test doesn't pass because some other error has occurred. I know there are a lot of places where we don't check the error message, but I think this is pretty fragile. More recently I've been trying to use the `assert_error!` macro in `fixture.rs` for tests. Example: we rename a commodity in the `simple` example; now a test which checks for errors related to commodities continues to pass, but because the commodity doesn't exist, not because the region was wrong (or something). We then break the functionality for checking regions are valid for commodities, but the test continues to pass, because all we're checking is whether there is an error, not which error occurred.

We could have `assert_validation_succeeds!` and `assert_validation_fails_with!` macros, or something.
Up to you whether you fancy doing this now or opening an issue. I'm sure we'll want to tweak the API as we go along anyway.
Now I think we need to use this in anger! A lot of the code doesn't have tests (or doesn't test all possible cases). We might not have time to retrofit tests for old code, but it would be good to add new tests along with new code as much as possible. Now it should be a lot easier 😄
Thanks!
Good idea, I've added this
Yeah, could be useful. I figured it's best for now just to return the error and leave it up to the caller whether to check the specific error message. But if we find we're doing this a lot, then yes, a dedicated macro would be helpful. I've also found with this kind of thing that the specific error message isn't always predictable or what you might expect. For example, deleting a commodity from the simple example you get

Not necessarily a bad error message, but if you're writing a test for this, you wouldn't necessarily predict that that's the error message you'd get. I could also imagine fairly innocent changes to the code, such as changing the order that files are read, leading to a completely different error message. That said, maybe the takeaway here is that we need to be a bit more intentional with error messages. In this case, I think something more direct like
👍
Description
Following on from the discussion in #937, this PR introduces a way to make changes to existing models for testing purposes. For example, if you wanted to make sure that the validation checks flag a certain parameter combination, you could introduce these changes as a patch to an existing example model, then run the validate command (`handle_validate_command`) on the patched model. At the moment this is only for testing purposes, rather than letting users patch models with "_diff" files as discussed previously (I explored this in #1026, but it proved very fiddly).

The approach is to build a `ModelPatch` object, which can be made up of one or multiple `FilePatch` objects, and/or a TOML table to indicate changes to `model.toml`. The `ModelPatch` gets applied to an existing model, outputting a new set of files to a temporary directory. The temporary directory can then be passed to `load_model` the same way that a permanent model directory would be, and will be deleted after the command has finished.

I've included some examples of how you might use this in `tests/patch.rs` (not necessarily good/useful examples, just examples to check everything is working). There's currently one annoyance which I'll comment on below.

Fixes # (issue)
Type of change
Key checklist

- `$ cargo test`
- `$ cargo doc`

Further checks