Implement orquesta-rehearse command to run unit tests for workflow #209
Conversation
guzzijones left a comment
fixes for fixture_mock
Codecov Report
@@ Coverage Diff @@
## master #209 +/- ##
==========================================
- Coverage 94.04% 93.28% -0.76%
==========================================
Files 41 45 +4
Lines 2735 3141 +406
Branches 545 644 +99
==========================================
+ Hits 2572 2930 +358
- Misses 100 128 +28
- Partials 63 83 +20
Continue to review full report at Codecov.
guzzijones left a comment
fixture errors are not workflow errors
… workflow conduct
This isn't so much a WIP anymore. I have done some internal testing and fixed some bugs. I still need to clean up some python2 patching in the unit tests for builtins.
Please move test-related changes under
As for the example fixture, is it possible to simplify it to the following? For a result too large to include in the test fixture, it is cleaner to save it to a separate file and point to that file, e.g. /path/to/extract_indicators/result.json
m4dcoder left a comment
Please address my comments. Thanks.
@guzzijones Also, I don't think this addresses the with-items task yet.
@guzzijones I am also wondering if we can add a separate command to list the routes for a workflow for reference. That way, users don't need to figure out the expected routes for the workflow themselves and can use that reference to indicate which route number to use in the test fixtures.
Yes will do
Yes will do
I need to understand what 0 means in this context?
I will attempt to add result_file as a property that is mutually exclusive with result.
Add an orquesta-rehearse command that takes -p (base path) and -t (relative path to the test spec) as arguments to run the test for the workflow definition.
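As a rough illustration only (the pack path and spec file name here are made up, not from this PR), an invocation would look something like `orquesta-rehearse -p /path/to/pack -t tests/extract_indicators.yaml`.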
I will run through this on Monday with a fresh workflow and test the UI. I need to test a workflow with multiple routes. It helped me a lot to have the routes printed out in debug mode so I could go back and fix my test fixture.
So you can't interpret the |
@guzzijones Can you clarify your question just in case? The expected_task_sequences and mock_action_executions are separated because there are use cases where a task can have multiple action executions; the relationship of task to action execution is one:many. The most obvious example is with-items: there can be multiple mock action executions for a with-items task without triggering a new task sequence entry. The other example is task retry, which doesn't write a new entry in the task sequence because that would inflate the size of the workflow state. Also, there are cases where we want to test intermediate status changes (i.e. pending) and not necessarily a direct transition from running to succeeded, failed, or another completed status. This also requires multiple mock action executions for the task.

I think I see what you mean. It is possible to have the mock action executions as an array under the task in the task sequence. I separate them for readability, to avoid mock action executions being interlaced with the task sequence. It's just harder to read that way. Now I can look at a test spec and read the expected task sequence in its entirety.
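To make the shape of this concrete, here is a rough, hypothetical sketch of a test spec with the two sections kept separate. Only the separation of expected_task_sequences and mock_action_executions comes from the discussion above; the workflow path, task names, and the fields inside each entry (task_id, status, result) are illustrative assumptions, not the final schema.

```yaml
# Hypothetical test spec sketch; only the separation of the two top-level
# sections reflects the discussion above. Field names inside each entry
# (task_id, status, result) and the workflow path are assumed.
workflow: workflows/extract_indicators.yaml

# The expected task sequence stays readable in one place.
expected_task_sequences:
  - task1
  - task2   # with-items task: one sequence entry, several action executions below
  - task3   # default mock (succeeded, no result), so no entry is needed below

# One task can map to many action executions (with-items, retries,
# intermediate status changes) without adding task sequence entries.
mock_action_executions:
  - task_id: task1
    status: succeeded
    result:
      stdout: ok
  - task_id: task2
    status: succeeded
    result: item-a
  - task_id: task2
    status: succeeded
    result: item-b
```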
Ah yes, I don't think my way of using the task name as a key in the spec would have worked in the case of a cycle. I sorta cheated with with-items results and just used a list before.
Btw, if the mock action execution is just default (succeeded and no result), then you don't need to provide a mock action execution entry for that task.
It appears this new test fixture spec diverges from this comment from before: #209 (comment). I am curious why the change?
@guzzijones From the comment you link to here, I see three differences: 1) move the class to the orquesta/tests/mocks namespace, 2) move the exceptions to the orquesta/tests/exceptions namespace, and 3) a different schema for task sequences. For 1 and 2, I refactored and renamed the set of specs and classes that was implemented initially. On reconsidering where the classes should reside, the orquesta/tests namespace is specifically for unit tests and other modules related to testing. Since WorkflowRehearsal and related classes are exposed as user features, they should live in the main orquesta namespace.
Ah. Yes I missed this comment somehow.
I have rewritten 2 workflow test specs in this new format and tested them against workflows. It works. 👍
The diff syntax is hard to read without color coding. The same UX issue applies if I have missing tasks, etc.
Sure, I can look at improving the error messages. The task sequences and mock action executions schema will stay as is. The list of action executions needs to be treated as a stream of action execution updates into the workflow engine. I cannot do that if the list is merged with the task sequences.
Refactor error messages in workflow rehearsal to be more user-friendly when printed out to the console.
Works for me
Any plans to merge this soon?
@guzzijones Yes, we will merge this before the next StackStorm release.
Implement the orquesta-rehearse command to allow users to run unit tests for a workflow definition. The command takes a base path (typically the StackStorm pack directory) and the relative path to the test spec, which contains detail about which workflow definition and which expectations to test.

The following is the output of displaying help for the command.
The following is the sample workflow used for the tests below.
The following is the sample test spec for the successful tests below.
The following is the output for executing a successful test.
The following is the output for executing a failed test.
The following is the output for executing a successful test with debug turned on.
The following is the output for executing multiple test specs in a directory. Please note that if using result_path in the mock action execution, then the results should be stored in separate files under a subdirectory.
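As a purely illustrative sketch of that layout, a mock action execution entry might reference its stored result along these lines. Only the result_path property name is taken from the note above; the other fields, the file location, and how the path is resolved (relative to the base path or to the test spec) are assumptions here.

```yaml
# Hypothetical mock action execution entry; only the result_path property
# name is taken from the note above, the other fields are assumed.
mock_action_executions:
  - task_id: extract_indicators
    status: succeeded
    # Large result kept in its own file under a subdirectory instead of inline.
    result_path: extract_indicators/result.json
```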
NOTE: Workflow definitions that use YAQL/Jinja functions that communicate with the StackStorm database are not supported in this PR.