The option of not rewriting the data to disk unless the hash has changed is a great feature. However, it is currently not implemented for imputed datasets, which is where massive data writing is common (and the original reason for suggesting the feature). Is there a reason not to apply something like the `.hashifyFile` function in an `lapply` loop over the imputed data list and the expected filename(s)? That would not catch changes to the number of imputations in the data list, but I consider that a very rare case.
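For illustration, here is a minimal sketch of what I have in mind. It uses the CRAN `digest` package as a stand-in for the package-internal `.hashifyFile` (whose exact signature I don't know), and the names `write_if_changed`, `implist`, and `filenames` are mine:

```r
# Sketch: write each imputed dataset only when its hash has changed.
# Assumes digest::digest() in place of the internal .hashifyFile.
library(digest)

write_if_changed <- function(dat, file) {
  new_hash  <- digest(dat)                      # hash the in-memory object
  hash_file <- paste0(file, ".hash")
  old_hash  <- if (file.exists(hash_file)) readLines(hash_file)[1] else ""
  if (!identical(new_hash, old_hash)) {         # rewrite only when the hash differs
    saveRDS(dat, file)
    writeLines(new_hash, hash_file)
  }
  invisible(new_hash)
}

# Toy example: two "imputed" datasets and their target files
implist   <- list(data.frame(x = 1:3), data.frame(x = 4:6))
filenames <- file.path(tempdir(), c("imp1.rds", "imp2.rds"))
invisible(Map(write_if_changed, implist, filenames))
```

On a second run with unchanged data, the hashes match and no file is rewritten, which is where the time savings would come from.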
I now wait about 10 minutes for each model I run (imputed datasets, different data).