
Conversation

@guzzijones (Contributor) commented Jun 11, 2020

Implement the orquesta-rehearse command to allow users to run unit tests for workflow definitions. The command takes a base path (typically the StackStorm pack directory) and the relative path to a test spec, which describes which workflow definition to test and what behavior to expect.

The following is the help output for the command.

$ orquesta-rehearse -h
usage: A utility for testing orquesta workflows. [-h] -p BASE_PATH
                                                 (-f TEST_SPEC | -d TEST_SPEC_DIR)
                                                 [--debug]

optional arguments:
  -h, --help            show this help message and exit
  -p BASE_PATH, --base-path BASE_PATH
                        The base path where the workflow definition, tests,
                        and files are located.
  -f TEST_SPEC, --test-spec TEST_SPEC
                        The path to the test spec (relative from base path) to
                        run the test.
  -d TEST_SPEC_DIR, --test-spec-dir TEST_SPEC_DIR
                        The path to the test spec directory (relative from
                        base path) to run multiple tests.
  --debug               Set the log level to debug.

The following is the sample workflow used for the tests below.

version: 1.0

description: A basic sequential workflow.

input:
  - name

vars:
  - greeting: null

output:
  - greeting: <% ctx().greeting %>

tasks:
  task1:
    action: core.echo message=<% ctx().name %>
    next:
      - when: <% succeeded() %>
        publish: greeting=<% result() %>
        do: task2
  task2:
    action: core.echo
    input:
      message: "All your base are belong to us!"
    next:
      - when: <% succeeded() %>
        publish:
          - greeting: <% ctx("greeting") %>, <% result() %>
        do:
          - task3
  task3:
    action: core.echo message=<% ctx('greeting') %>
    next:
      - when: <% succeeded() %>
        publish: greeting=<% result() %>

The following is the sample test spec for the successful tests below.

---
workflow: workflows/sequential.yaml
inputs:
  name: Stanley
expected_task_sequence:
  - task1
  - task2
  - task3
  - continue
mock_action_executions:
  - task_id: task1
    result: Stanley
  - task_id: task2
    result: All your base are belong to us!
  - task_id: task3
    result: Stanley, All your base are belong to us!
expected_output:
  greeting: Stanley, All your base are belong to us!

The following is the output for executing a successful test.

$ orquesta-rehearse -p /path/to/stackstorm/pack -f tests/success.yaml
2020-12-18 21:16:29,232 - INFO - The test spec "/path/to/stackstorm/pack/tests/success.yaml" is successfully loaded.
2020-12-18 21:16:29,232 - INFO - Start test for workflow "/path/to/stackstorm/pack/workflows/sequential.yaml".
2020-12-18 21:16:29,309 - INFO - Completed running test and the workflow execution matches the test spec.

The following is the output for executing a failed test.

$ orquesta-rehearse -p /path/to/stackstorm/pack -f tests/failure.yaml
2020-12-18 21:23:47,677 - INFO - The test spec "/path/to/stackstorm/pack/tests/failure.yaml" is successfully loaded.
2020-12-18 21:23:47,678 - INFO - Start test for workflow "/path/to/stackstorm/pack/workflows/sequential.yaml".
2020-12-18 21:23:47,825 - ERROR - The actual task sequence ['task1', 'task2', 'task3', 'continue'] does not match expected ['task1', 'task3', 'task2', 'continue'].

The following is the output for executing a successful test with debug turned on.

$ orquesta-rehearse -p /path/to/stackstorm/pack -f tests/success.yaml --debug
2020-12-18 21:16:32,303 - INFO - The test spec "/path/to/stackstorm/pack/tests/success.yaml" is successfully loaded.
2020-12-18 21:16:32,304 - INFO - Start test for workflow "/path/to/stackstorm/pack/workflows/sequential.yaml".
2020-12-18 21:16:32,334 - DEBUG - Task "task1__r0" is set to "running".
2020-12-18 21:16:32,334 - DEBUG - Applying action execution with "succeeded" status to task "task1__r0".
2020-12-18 21:16:32,337 - DEBUG - Task "task1__r0" is set to "succeeded".
2020-12-18 21:16:32,339 - DEBUG - Task "task2__r0" is identified to run next.
2020-12-18 21:16:32,346 - DEBUG - Task "task2__r0" is set to "running".
2020-12-18 21:16:32,346 - DEBUG - Applying action execution with "succeeded" status to task "task2__r0".
2020-12-18 21:16:32,350 - DEBUG - Task "task2__r0" is set to "succeeded".
2020-12-18 21:16:32,363 - DEBUG - Task "task3__r0" is identified to run next.
2020-12-18 21:16:32,371 - DEBUG - Task "task3__r0" is set to "running".
2020-12-18 21:16:32,371 - DEBUG - Applying action execution with "succeeded" status to task "task3__r0".
2020-12-18 21:16:32,392 - DEBUG - Task "task3__r0" is set to "succeeded".
2020-12-18 21:16:32,412 - DEBUG - The workflow completed with status "succeeded".
2020-12-18 21:16:32,413 - DEBUG - The workflow output with "{'greeting': 'Stanley, All your base are belong to us!'}".
2020-12-18 21:16:32,415 - DEBUG - The workflow completed with no errors.
2020-12-18 21:16:32,415 - DEBUG - Comparing with expected task execution sequence.
2020-12-18 21:16:32,416 - DEBUG - Comparing with expected workflow route(s).
2020-12-18 21:16:32,417 - DEBUG - Comparing with expected workflow status.
2020-12-18 21:16:32,419 - DEBUG - Comparing with expected workflow output.
2020-12-18 21:16:32,420 - INFO - Completed running test and the workflow execution matches the test spec.

The following is the output for executing multiple test specs in a directory. Please note that when using result_path in a mock action execution, the results should be stored in separate files under a subdirectory (a sketch of this follows the output below).

$ orquesta-rehearse -p /path/to/stackstorm/pack -d tests
2020-12-18 22:05:06,306 - INFO - Using the test spec directory "/path/to/stackstorm/pack/tests".
2020-12-18 22:05:06,307 - INFO - Identified the following test specs in the directory: tests/failure.yaml, tests/success.yaml
2020-12-18 22:05:06,323 - INFO - The test spec "/path/to/stackstorm/pack/tests/failure.yaml" is successfully loaded.
2020-12-18 22:05:06,323 - INFO - Start test for workflow "/path/to/stackstorm/pack/workflows/sequential.yaml".
2020-12-18 22:05:06,363 - ERROR - The actual task sequence ['task1', 'task2', 'task3', 'continue'] does not match expected ['task1', 'task3', 'task2', 'continue'].
2020-12-18 22:05:06,376 - INFO - The test spec "/path/to/stackstorm/pack/tests/success.yaml" is successfully loaded.
2020-12-18 22:05:06,376 - INFO - Start test for workflow "/path/to/stackstorm/pack/workflows/sequential.yaml".
2020-12-18 22:05:06,422 - INFO - Completed running test and the workflow execution matches the test spec.
2020-12-18 22:05:06,422 - ERROR - There are errors processing test spec(s). Please review details above.
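
The note above mentions result_path. As a rough, untested sketch (the exact field handling and path resolution are assumptions based on that note, not something shown elsewhere in this description), the success spec could point a potentially large result to a separate file instead of embedding it inline:

---
workflow: workflows/sequential.yaml
inputs:
  name: Stanley
expected_task_sequence:
  - task1
  - task2
  - task3
  - continue
mock_action_executions:
  - task_id: task1
    result: Stanley
  - task_id: task2
    result: All your base are belong to us!
  - task_id: task3
    # Assumed: result_path is resolved relative to the base path and is read
    # in place of an inline result.
    result_path: tests/results/task3.json
expected_output:
  greeting: Stanley, All your base are belong to us!

Here tests/results/task3.json would contain the JSON-encoded result, e.g. "Stanley, All your base are belong to us!".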

NOTE: Workflow definitions that use YAQL/Jinja functions that communicate with the StackStorm database are not supported in this PR.

@guzzijones (Contributor, Author) left a comment:

fixes for fixture_mock

@codecov-commenter commented Jun 15, 2020

Codecov Report

Merging #209 into master will decrease coverage by 0.75%.
The diff coverage is 92.26%.


@@            Coverage Diff             @@
##           master     #209      +/-   ##
==========================================
- Coverage   94.04%   93.28%   -0.76%     
==========================================
  Files          41       45       +4     
  Lines        2735     3141     +406     
  Branches      545      644      +99     
==========================================
+ Hits         2572     2930     +358     
- Misses        100      128      +28     
- Partials       63       83      +20     
Impacted Files Coverage Δ
orquesta/composers/base.py 91.66% <0.00%> (ø)
orquesta/composers/mock.py 57.14% <0.00%> (ø)
orquesta/requests.py 100.00% <ø> (ø)
orquesta/specs/mistral/__init__.py 100.00% <ø> (ø)
orquesta/specs/mistral/v2/policies.py 100.00% <ø> (ø)
orquesta/specs/native/__init__.py 100.00% <ø> (ø)
orquesta/utils/context.py 90.47% <ø> (ø)
orquesta/utils/date.py 89.65% <ø> (ø)
orquesta/utils/dictionary.py 90.90% <ø> (ø)
orquesta/utils/expression.py 100.00% <ø> (ø)
... and 43 more

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update 8035754...f5638a9.

@guzzijones (Contributor, Author) left a comment:

fixture errors are not workflow errors

@guzzijones changed the title from "WIP - add console script to test orquesta workflows" to "add console script to test orquesta workflows" on Jun 22, 2020.
@guzzijones (Contributor, Author) commented:

This isn't so much a WIP anymore. I have done some internal testing and fixed some bugs. I still need to clean up some python2 patching of builtins in the unit tests.

@m4dcoder (Collaborator) commented Jun 22, 2020

@guzzijones

Please move test-related changes under orquesta/tests.

  • Can you move the mock classes under orquesta/tests/mocks.py?
  • Can you move the exception classes under orquesta/tests/exceptions.py?

As for the example fixture, is it possible to simplify it to the following?

---
  workflow: file_mitigate.yaml

  routes: [[],["indicators_exist__t0"]]

  task_sequence:
    - task: file_exist
      # route default to 0 if not given
      route: 0
      status: succeeded
      result: true
    - task: extract_indicators
      status: succeeded
      result_file: /path/to/extract_indicators/result.json
    - task: indicators_exist
      status: succeeded
      result: None
    - task: fail
      route: 1
      result: None

  workflow_status: failed

  workflow_output: ...

For results too large to include in the test fixture, it is cleaner to save them to a separate file and point to it.

/path/to/extract_indicators/result.json

{
  "Hash": []
}

@m4dcoder (Collaborator) left a comment:

Please address my comments. Thanks.

@m4dcoder (Collaborator) commented:

@guzzijones Also, I don't think this addresses with-items tasks yet.

@m4dcoder (Collaborator) commented:

@guzzijones I am also wondering whether we can add a separate command that lists the routes for a workflow. That way, users don't need to figure out the expected routes themselves and can use the listing to determine which route number to use in the test fixtures.

@guzzijones (Contributor, Author) commented:

@guzzijones

Please move test-related changes under orquesta/tests.

  • Can you move the mock classes under orquesta/tests/mocks.py?

Yes will do

  • Can you move the exception classes under orquesta/tests/exceptions.py?

Yes will do

As for the example fixture, is it possible to simplify it to the following?

---
  workflow: file_mitigate.yaml

  routes: [[],["indicators_exist__t0"]]

  task_sequence:
    - task: file_exist
      # route default to 0 if not given

I need to understand: what does 0 mean in this context?

      route: 0
      status: succeeded
      result: true
    - task: extract_indicators
      status: succeeded
      result_file: /path/to/extract_indicators/result.json
    - task: indicators_exist
      status: succeeded
      result: None
    - task: fail
      route: 1
      result: None

  workflow_status: failed

  workflow_output: ...


For results too large to include in the test fixture, it is cleaner to save them to a separate file and point to it.

/path/to/extract_indicators/result.json

I will attempt to add result_file as a property that is mutually exclusive with result.

{
  "Hash": []
}

@guzzijones closed this on Jun 23, 2020.
@m4dcoder changed the title from "add console script to test orquesta workflows" to "Implement orquesta-rehearse command to allow users run unit tests for workflow definition" on Dec 18, 2020.
@m4dcoder changed the title from "Implement orquesta-rehearse command to allow users run unit tests for workflow definition" to "Implement orquesta-rehearse command to run unit tests for workflow" on Dec 18, 2020.
@m4dcoder removed the WIP label on Dec 18, 2020.
@guzzijones (Contributor, Author) commented:

I will run through this on Monday with a fresh workflow and test the UI. I need to test a workflow with multiple routes. It helped me a lot that debug mode printed out the route, so I could go back and fix my test fixture.

@guzzijones (Contributor, Author) commented:

So you can't interpret the expected_task_sequence from the mock_action_executions? @m4dcoder

@m4dcoder (Collaborator) commented Dec 18, 2020

So you can't interpret the expected_task_sequence from the mock_action_executions?

@guzzijones Can you clarify your question just in case?

The expected_task_sequence and mock_action_executions are separated because there are use cases where a task can have multiple action executions. The relationship of task to action execution is one-to-many.

The most obvious example is with-items. There can be multiple mock action executions for a with-items task without triggering a new task sequence entry. The other example is task retry, which doesn't write a new entry in the task sequence because that would inflate the size of the workflow state.

Also, there are cases where we want to test intermediate status changes (e.g. pending) rather than a direct transition from running to succeeded, failed, or another completed status. This requires multiple mock action executions for the task.

I think I see what you mean. It is possible to have the mock action executions as an array under each task in the task sequence. I separated them for readability, to avoid the mock action executions interlacing with the task sequence. It's just harder to read that way. Now I can look at a test spec and read the expected task sequence in its entirety.
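
To make the one-to-many relationship concrete, here is a hedged sketch of a spec fragment; the status field on a mock entry mirrors the proposed fixture earlier in this thread and is an assumption rather than a confirmed part of the final schema:

expected_task_sequence:
  - task1
mock_action_executions:
  # Two action execution updates are streamed to the same task: an
  # intermediate "pending" status followed by the completing "succeeded"
  # status with a result. The task still appears only once in the sequence.
  - task_id: task1
    status: pending
  - task_id: task1
    status: succeeded
    result: Stanley

For a with-items task, the same pattern would apply, with one mock entry per item.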

@guzzijones (Contributor, Author) commented:

So you can't interpret the expected_task_sequence from the mock_action_executions?

@guzzijones Can you clarify your question just in case?

The expected_task_sequence and mock_action_executions are separated because there are use cases where a task can have multiple action executions. The relationship of task to action execution is one-to-many.

The most obvious example is with-items. There can be multiple mock action executions for a with-items task without triggering a new task sequence entry.

Also, there are cases where we want to test intermediate status changes (e.g. pending) rather than a direct transition from running to succeeded, failed, or another completed status. This requires multiple mock action executions for the task.

Ah yes, I don't think my way of using the task name as a key in the spec would have worked in the case of a cycle. I sort of cheated with with-items results before and just used a list.

@m4dcoder (Collaborator) commented:

Btw, if the mock action execution is just the default (succeeded and no result), then you don't need to provide a mock action execution entry for that task.
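
As a hedged, untested illustration of that default (not verified against the sample workflow, whose intermediate publish steps would then see empty results), a fragment of the earlier success spec relying on it might list only the task whose mocked result the expected output actually depends on:

expected_task_sequence:
  - task1
  - task2
  - task3
  - continue
mock_action_executions:
  # task1 and task2 are assumed to fall back to the default mock
  # (succeeded, no result); only task3, whose mocked result determines
  # the final greeting, is listed explicitly.
  - task_id: task3
    result: Stanley, All your base are belong to us!
expected_output:
  greeting: Stanley, All your base are belong to us!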

@guzzijones (Contributor, Author) commented:

It appears this new test fixture spec diverges from the earlier comment: #209 (comment).

I am curious: why the change?

@m4dcoder (Collaborator) commented Dec 19, 2020

@guzzijones From the comment you linked to, I see three differences: 1) move the classes to the orquesta/tests/mocks namespace, 2) move the exceptions to the orquesta/tests/exceptions namespace, and 3) a different schema for task sequences.

1 + 2) I refactored and renamed the set of specs and classes that was implemented initially. On reconsidering where the classes should reside: the orquesta/tests namespace is specifically for unit tests and other modules related to testing. Since WorkflowRehearsal and related classes are exposed as user features, they should be moved to the main orquesta namespace.
3) This is already covered in the comments above on task sequences and mock action executions.

@guzzijones (Contributor, Author) commented:

Ah. Yes, I missed this comment somehow:
"I think I see what you mean. It is possible to have the mock action executions as an array under each task in the task sequence. I separated them for readability, to avoid the mock action executions interlacing with the task sequence. It's just harder to read that way. Now I can look at a test spec and read the expected task sequence in its entirety."

@guzzijones (Contributor, Author) commented:

I have rewritten 2 workflow test specs in this new format and tested them against workflows. It works. 👍

  1. If I put in an incorrect expected_routes, I see that it doesn't print out the routes array for me, so I can't paste it into the 'expected_routes' section. Instead I get the diff below, which is hard to decipher.
    Notice that some of the task names are cut off (e.g. has[49 chars]icket__t0), which makes it hard to read.
 Comparing with expected workflow route(s).
2020-12-21 19:35:21,489 - ERROR - Lists differ: [[], [117 chars]__t0'], ['has_traffic__t0', 'bc_check__t0', 'h[75 chars]t0']] != [[], [117 chars]__t0', 'add_comment_mitigate_indicators_no_follow_up__t0']]

First differing element 3:
['has[49 chars]icket__t0']
['has[49 chars]icket__t0', 'add_comment_mitigate_indicators_no_follow_up__t0']

First list contains 1 additional elements.
First extra element 4:
['has_traffic__t0', 'bc_check__t0', 'has_traffic_pub_ticket__t0', 'add_comment_mitigate_indicators_no_follow_up__t0']

  [[],
   ['has_traffic__t0'],
   ['has_traffic__t0', 'bc_check__t0'],
-  ['has_traffic__t0', 'bc_check__t0', 'has_traffic_pub_ticket__t0'],
   ['has_traffic__t0',
    'bc_check__t0',
    'has_traffic_pub_ticket__t0',
    'add_comment_mitigate_indicators_no_follow_up__t0']] : The lists of workflow routes do not match.

The diff syntax is hard to read without color coding.
My solution would be to just print out "X not equal to Y" so I could copy and paste the routes into my workflow test fixture.
I think you need to say "found X vs expected Y" instead of "First differing element...".
That UX needs work, imo.

The same UX issue applies when I have missing tasks, etc.

  2. After rewriting the test specs for a large workflow in this new format, I don't feel like separating the expected_task_sequence from the mock_action_executions adds much value. I end up with tasks in two different parts of the spec, and the routes are now formatted as __rX.

@m4dcoder (Collaborator) commented:

@guzzijones

Sure, I can look at improving the error messages.

The task sequences and mock action executions schema will stay as is. The list of action executions needs to be treated as a stream of action execution updates into the workflow engine. I cannot do that if the list is merged with the task sequences.

Refactor error messages in workflow rehearsal to be more user-friendly when printed to the console.
@guzzijones (Contributor, Author) commented Dec 22, 2020

Works for me

(venv_orquesta_bin) [xxxx@xxxx]$ orquesta-rehearse -p /shared/dev/e390084/lmintel/actions/workflows -f tests/mitigate_success_no_traffic.yaml
2020-12-22 23:10:07,473 - INFO - The test spec "/shared/dev/e390084/lmintel/actions/workflows/tests/mitigate_success_no_traffic.yaml" is successfully loaded.
2020-12-22 23:10:07,473 - INFO - Start test for workflow "/shared/dev/e390084/lmintel/actions/workflows/mitigate.yaml".
2020-12-22 23:10:09,195 - ERROR - The actual routes [[], ['has_traffic__t0'], ['has_traffic__t0', 'bc_check__t0'], ['has_traffic__t0', 'bc_check__t0', 'has_traffic_pub_ticket__t0'], ['has_traffic__t0', 'bc_check__t0', 'has_traffic_pub_ticket__t0', 'add_comment_mitigate_indicators_no_follow_up__t0']] does not match expected [[], ['has_traffic__t0'], ['has_traffic__t0', 'bc_check__t0'], ['has_traffic__t0', 'bc_check__t0', 'has_traffic_pub_ticket__t0', 'add_comment_mitigate_indicators_no_follow_up__t0']].

@guzzijones (Contributor, Author) commented:

Any plans to merge this soon?

@m4dcoder (Collaborator) commented:

@guzzijones Yes, we will merge this before the next StackStorm release.

@m4dcoder merged commit 0cacb39 into StackStorm:master on Jan 17, 2021.