Conversation
devkabiir
left a comment
The PR accomplishes its purpose, so it's good to merge. I do have some open questions, though.
/// Edge cases are scenarios that are problems that occur under specific conditions
/// Read more: https://en.wikipedia.org/wiki/Edge_case
group("edge case tests", edgeCaseTests);
As one of the early exercises, I believe the purpose here was to inform the learner about edge cases as they progress in the track/exercise. At some point it even made sense to me to distribute the test cases and group them in order of their rareness, as this is beneficial in real-world software development: first you want to get the MVP out, then you want to iron out all the bugs and edge cases.
As this was a hand-written exercise, it had the human element of thinking about the journey of the learner.
I am curious to know what you think about automating this?
It removes the human element and makes it purely mechanical, which is useful when you don't have enough human resources.
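For illustration, the common-to-rare ordering described above could look something like this in a Dart test file. This is a hypothetical sketch, not code from this PR: the group names, the placeholder assertions, and the ordering are assumptions about how such a hand-written suite might be laid out.

```dart
import 'package:test/test.dart';

void main() {
  // Hypothetical ordering: groups arranged from the most common
  // scenarios (the MVP) down to the rarest edge cases, so the learner
  // meets them in roughly the order they matter in real development.
  group("basic tests", () {
    test("typical input", () {
      expect(1 + 1, equals(2)); // placeholder assertion
    });
  });

  /// Edge cases are scenarios that are problems that occur under
  /// specific conditions.
  /// Read more: https://en.wikipedia.org/wiki/Edge_case
  group("edge case tests", () {
    test("empty input", () {
      expect("".isEmpty, isTrue); // placeholder assertion
    });
  });
}
```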
I think that makes sense, if you'd like to open a PR that suggests the grouping in the problem-specifications (or start with an issue, maybe, in case the overwhelming majority doesn't want to go down that road; then you won't have done the actual work up-front).
if you'd like to open a PR that suggests the grouping in the problem-specifications (or start with an issue,
This is again moving in a direction of consistency and automation over preserving the human element.
My question was more about the fact that by automating, you'd be removing (or at least reducing) the human creativity that each exercise can have.
Each exercise's solving focus would shift to the problem at hand, instead of the programming language, paradigm, testing, various programming concepts, etc.
Which is great, because problem solving is really what we do. But it is equally important to learn that you have many more tools available to solve the problem.
By automating test generation in a consistent format, you could end up limiting learners to only use specific tools to solve the problem.
My question was more about the fact that by automating, you'd be removing (or at least reducing) the human creativity that each exercise can have.
I 100% agree that we want to introduce people to the language, its idioms, paradigms, etc.
The generator isn't something that is enforced. It's just a tool. The generator doesn't automate design choices—those have to be made up front. It then takes those choices, and replicates them, using the data from problem-specifications. The generator is fairly flexible. We can always tweak what a test suite will look like. Or we can ignore the generator and hand-craft a test suite if we need to.
In the case of the thought that was put into this test suite, it seems like it's not about Dart, but about the broader user experience for someone solving the exercise (conceptually). That's worth discussing as an improvement to the test data, since it is language-agnostic.
@kytrinyx Spot on, if we can't assume someone has prior programming experience, then my intention was to introduce some broader concepts and terminology in the early exercises.
However, with the move to Exercism V3 these practice exercises can be tackled in any order, correct?
And we don't have a curriculum for the non-practice exercises, so assuming my understanding is correct, we'll have to rethink how we bring up these concepts to students.
I apologize for not getting the non-practice exercises created. I thought each track could set up its own path and I would work on the non-practice exercises every so often.
But during that time, changes kept being made to V3 about how things were done and I just got tired of having to refactor and change what I was doing.
So I got a little burned out. And now my time is pretty well spent during the week.
I do hope someone can set up the non-practice exercises and keep some of these intentions of mine for the students who are coming fresh-faced into this track.
Spot on, if we can't assume someone has prior programming experience, then my intention was to introduce some broader concepts and terminology in the early exercises.
Got it. That makes a lot of sense.
Overall we do assume that people have prior programming experience if they're using Exercism (even if it's not a whole lot). In the early days of Exercism we really didn't understand who was using it, but now it's clear that Exercism is a tool for learning programming languages if you already know how to program (at least a little).
However, with the move to Exercism V3 these practice exercises can be tackled in any order, correct?
Yes, that is true.
I apologize for not getting the non-practice exercises created.
No apologies necessary! You have no obligation whatsoever to do that; I'm just grateful for all the work that you've put into this track.
So I got a little burned out.
❤️ I completely understand!
Even if we do set up a syllabus for the Dart track, it will be focused on Dart-specific concepts rather than broader concepts within programming. We're starting to think about how to create courses that are specific to things other than programming languages, but it's a ways out yet.
/// Edge cases are scenarios that are problems that occur under specific conditions
/// Read more: https://en.wikipedia.org/wiki/Edge_case
group("edge case tests", edgeCaseTests);
group("Anagram: anagram tests - ", anagramTests);
A top-level group is enough for this exercise, as the canonical data does not have any grouping. We do not need to extract the test cases into a named function anagramTests; a closure would suffice here. But I can understand that this would be special-casing in your test generator.
No, that's no problem, and a good shout. I'll tweak it so that it does it that way.
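With that tweak, the generated suite might look roughly like this: a single top-level group with the test cases inlined in a closure, instead of a separate named anagramTests function. This is a sketch under assumptions; the `Anagram` class, its `match` method, and the import path are illustrative names, not the track's actual stub, though the two cases shown do appear in the anagram canonical data.

```dart
import 'package:test/test.dart';
import 'package:anagram/anagram.dart'; // exercise stub; path assumed

void main() {
  // A single top-level group mirroring the flat structure of
  // canonical-data.json; the cases live in a closure rather than
  // in a separate named function such as anagramTests.
  group("Anagram", () {
    test("no matches", () {
      final detector = Anagram("diaper");
      expect(detector.match(["hello", "world", "zombies", "pants"]),
          isEmpty);
    });

    test("detects two anagrams", () {
      final detector = Anagram("solemn");
      expect(detector.match(["lemons", "cherry", "melons"]),
          equals(["lemons", "melons"]));
    });
  });
}
```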
@devkabiir thanks for taking the time to discuss all the details. It sounds like there are three avenues here:
Which approach would you prefer?
That sounds like more work for you. Future iterations of the problem spec would require you to do the same.
I might have been misleading; I sometimes do meta-commentary. Since we've been discussing #410 #412 #408, I was talking in general about the grouping for tests in all exercises, not just this one.
More exercises/tracks will benefit from this, as it will require your generator to consider such cases, and future syncs/updates will be automatable as a result. I think the least effort, most fun, and best outcome would be to go with option 3, since you intend to polish things up in this track and bring it up to speed, specifically by adding more exercises. This is what concerned me about the test generator: when adding new exercises, generated tests would inevitably gravitate implementers towards a specific paradigm. But it's still better for the track than not having enough exercises.
That works for me.
In terms of adding exercises, I would be delighted to have help defining what a "typical" test would look like for a given exercise, and then generate based on that. I can't do the design work, as I have never worked with Dart, so I wouldn't have good instincts. But if you're open to spending some time up front hashing out what the general API of the subject under test should be, etc., then I think that should be pretty simple to work with (for most exercises; there are some exceptions that I still don't have a good solution for). Let's get the existing exercises brought up to speed first, and once that's all good, I can open issues for a couple of exercises that we can discuss and try out the generator with, and then iterate on the process from there.
This regenerates the anagram exercise following the structure of the canonical-data.json file in the problem-specifications repository.
This brings in updates to the docs, as well as some additional tests.
Force-pushed from a8879f3 to ae835ea