say: adding canonical-data.json #399
kytrinyx merged 1 commit into exercism:master from junedev:say-canonical-data
Conversation
Awesome! Thanks so much for this work. I think this can go in as-is, but I do have some suggestions that we may want to discuss.

I've mostly seen exception expectations written as

Is there a plan behind the order of the tests? There are two things I look for when ordering tests:

For example, these two tests probably pass with the same change. And there may be some other cases. In cases like this we can usually remove one of these tests as duplicative.

I'm fine with your test descriptions.
To check what to do in case of an exception, I looked at some of the other JSON files and happened to see only ones, like binary and alphametics, that use null. I have now switched to -1 to follow the instructions in the README and the majority of the other test data files. I also removed the redundant test case "one million two". Other than that, I find it a bit difficult to judge what the perfect order and minimal set would be, as it might depend on the implementation path people take.
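For illustration, an error case under that -1 convention might look like the following sketch (the exact description text and field layout here are assumptions, not copied from the merged file):

```json
{
  "description": "numbers below zero are out of range",
  "input": -1,
  "expected": -1
}
```

The -1 in `expected` is the sentinel meaning "an error is expected here"; tracks translate it into whatever error-signaling mechanism their language uses (exceptions, `Maybe`/`Result` types, error returns, and so on).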
I prefer
I agree that it's unfortunate that some descriptions simply duplicate the expected output. What is the alternative, as it relates to this JSON file? Simply delete the description for those cases from the JSON file? I would find that acceptable, I think. I would indeed keep the descriptions on the bounds checks, though. If it's better to be consistent and have descriptions on all the cases, even if some of them are redundant, I find that more acceptable than having no descriptions at all, since at least the last few cases benefit from them.
I used

I do not consider this issue to block this PR. We will talk about it in #401.
I think I prefer consistency. Either way, let's ship this! Thank you ❤️ ❤️
This is to close https://github.com/exercism/todo/issues/142.
The test cases were quite homogeneous in the different tracks. I ignored the following outliers:
For discussion: The descriptions in my PR, and in the existing tests, currently contain only the number being tested instead of a real description. This is not "best practice", but changing to declarative descriptions would add little value and would mean a lot of work for the individual tracks to adapt. Any thoughts on that?
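As a concrete example of the trade-off (this case and its wording are hypothetical, not taken from the actual file), a number-only description:

```json
{
  "description": "1234567890",
  "input": 1234567890,
  "expected": "one billion two hundred thirty-four million five hundred sixty-seven thousand eight hundred ninety"
}
```

versus a declarative one for the same case:

```json
{
  "description": "a number that uses every grouping up to billions",
  "input": 1234567890,
  "expected": "one billion two hundred thirty-four million five hundred sixty-seven thousand eight hundred ninety"
}
```

The first form tells test-suite readers nothing the input doesn't already say; the second names the property the case exercises, at the cost of every track regenerating its test descriptions.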
Complete list of links to the implementations:
https://github.com/exercism/xcsharp/tree/master/exercises/say
https://github.com/exercism/xcpp/tree/master/say
https://github.com/exercism/xecmascript/tree/master/exercises/say
https://github.com/exercism/xelm/tree/master/exercises/say
https://github.com/exercism/xfsharp/tree/master/exercises/say
https://github.com/exercism/xgo/tree/master/exercises/say
https://github.com/exercism/xhaskell/tree/master/exercises/say
https://github.com/exercism/xjavascript/tree/master/exercises/say
https://github.com/exercism/xocaml/tree/master/exercises/say
https://github.com/exercism/xperl5/tree/master/say
https://github.com/exercism/xpython/tree/master/exercises/say
https://github.com/exercism/xracket/tree/master/exercises/say
https://github.com/exercism/xruby/tree/master/exercises/say
https://github.com/exercism/xscala/tree/master/exercises/say