56 changes: 51 additions & 5 deletions README.md
@@ -9,11 +9,13 @@ Produce an easy-to-read summary of your project's test data as part of your GitH
* Integrates easily with your existing GitHub Actions workflow
* Produces summaries from JUnit XML and TAP test output
* Compatible with most testing tools for most development platforms
* Customizable to show just a summary, just failed tests, or all test results.
* Produces step outputs, so you can pass summary data to other actions
* Customizable to show just a summary, just failed tests, or all test results
* Output can go to the [GitHub job summary](https://docs.github.com/en/actions/using-workflows/workflow-commands-for-github-actions#adding-a-job-summary) (default), to a file or `stdout`

Getting Started
---------------
To set up the test summary action, just add a few lines of YAML to your GitHub Actions workflow. For example, if your test harness produces JUnit XML outputs in the `test/results/` directory, and you want to produce a test summary in a file named `test-summary.md`, add a new step to your workflow YAML after your build and test step:
To set up the test summary action, just add a few lines of YAML to your GitHub Actions workflow. For example, if your test harness produces JUnit XML outputs in the `test/results/` directory, and you want the output attached to the job summary, add a new step to your workflow YAML after your build and test step:

```yaml
- name: Test Summary
  uses: test-summary/action@v1
  with:
    paths: "test/results/**/TEST-*.xml"
  if: always()
```

Update `paths` to match the test output file(s) that your test harness produces.

> Note the `if: always()` conditional in this workflow step: you should always use this so that the test summary creation step runs _even if_ the previous steps have failed. This allows your test step to fail -- due to failing tests -- but still produce a test summary.

Upload the markdown
-------------------
The prior "getting started" step generates a summary in GitHub-flavored Markdown (GFM). Once the markdown is generated, you can upload it as a build artifact, add it to a pull request comment, or add it to an issue. For example, to upload the markdown generated in the prior example as a build artifact:
Generating and uploading a markdown file
----------------------------------------
You can also generate the summary in a GitHub-flavored Markdown (GFM) file, and upload it as a build artifact, add it to a pull request comment, or add it to an issue. Use the `output` parameter to define the target file.

For example, to create a summary and upload the markdown as a build artifact:

```yaml
- name: Test Summary
uses: test-summary/action@v1
with:
paths: "test/results/**/TEST-*.xml"
output: test-summary.md
if: always()

- name: Upload test summary
uses: actions/upload-artifact@v3
with:
    name: test-summary
    path: test-summary.md
if: always()
```

Outputs
-------
This action also generates several outputs you can reference in other steps, or even from your job or workflow. These outputs are `passed`, `failed`, `skipped`, and `total`.

For example, you may want to send a summary to Slack:

```yaml
- name: Test Summary
id: test_summary
uses: test-summary/action@v1
with:
paths: "test/results/**/TEST-*.xml"
if: always()
- name: Notify Slack
uses: slackapi/slack-github-action@v1.19.0
with:
payload: |-
{
"message": "${{ steps.test_summary.outputs.passed }}/${{ steps.test_summary.outputs.total }} tests passed"
}
if: always()
```
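Step outputs like these can also be promoted to job outputs so that a later job in the workflow can read them. A minimal sketch, using hypothetical `test` and `report` job names (the `jobs.<job_id>.outputs` and `needs` mechanics are standard GitHub Actions):

```yaml
jobs:
  test:
    runs-on: ubuntu-latest
    # Promote the step outputs so downstream jobs can reference them
    outputs:
      passed: ${{ steps.test_summary.outputs.passed }}
      total: ${{ steps.test_summary.outputs.total }}
    steps:
      - uses: actions/checkout@v3
      - run: make test
      - name: Test Summary
        id: test_summary
        uses: test-summary/action@v1
        with:
          paths: "test/results/**/TEST-*.xml"
        if: always()

  report:
    needs: test
    runs-on: ubuntu-latest
    steps:
      - run: echo "${{ needs.test.outputs.passed }}/${{ needs.test.outputs.total }} tests passed"
```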


Examples
--------
There are examples for setting up a GitHub Actions step with many different platforms [in the examples repository](https://github.com/test-summary/examples).
@@ -94,13 +129,24 @@ Options are specified on the [`with` map](https://docs.github.com/en/actions/usi
```yaml
- uses: test-summary/action@v2
with:
paths: "test/results/**/TEST-*.xml"
output: "test/results/summary.md"
```

If this is not specified, the output goes to the workflow job summary.

This file is [GitHub Flavored Markdown (GFM)](https://github.github.com/gfm/) and may include permitted HTML.

* **`show`: the test results to summarize in a table** (optional)
This controls whether a test summary table is created, and which tests it includes. Possible values are `all`, `none`, `pass`, `skip`, and `fail`, and values can be combined, separated by commas. The default is `fail`; that is, the summary table shows only the failed tests. For example, to show failed and skipped tests:

```yaml
- uses: test-summary/action@v1
with:
paths: "test/results/**/TEST-*.xml"
show: "fail, skip"
```

FAQ
---
* **How is the summary graphic generated? Does any of my data ever leave GitHub?**
97 changes: 56 additions & 41 deletions package-lock.json

Some generated files are not rendered by default.

4 changes: 2 additions & 2 deletions package.json
@@ -29,7 +29,7 @@
"author": "Edward Thomson",
"license": "MIT",
"dependencies": {
"@actions/core": "^1.6.0",
"@actions/core": "^1.10.0",
"glob": "^7.2.0",
"glob-promise": "^4.2.2",
"xml2js": "^0.4.23"
@@ -47,7 +47,7 @@
"eslint-plugin-github": "^4.3.5",
"eslint-plugin-jest": "^26.1.1",
"jest": "^27.5.1",
"mocha": "^9.2.1",
"mocha": "^9.2.2",
"mocha-junit-reporter": "^2.0.2",
"mocha-multi-reporters": "^1.5.1",
"prettier": "^2.5.1",
6 changes: 6 additions & 0 deletions src/index.ts
@@ -115,6 +115,12 @@ async function run(): Promise<void> {
const writefile = util.promisify(fs.writeFile)
await writefile(outputFile, output)
}

core.setOutput('passed', total.counts.passed)
core.setOutput('failed', total.counts.failed)
core.setOutput('skipped', total.counts.skipped)
core.setOutput('total', total.counts.passed + total.counts.failed + total.counts.skipped)

} catch (error) {
if (error instanceof Error) {
core.setFailed(error.message)
4 changes: 2 additions & 2 deletions src/test_parser.ts
@@ -241,9 +241,9 @@ async function parseJunitXml(xml: any): Promise<TestResult> {
status = TestStatus.Skip

counts.skipped++
} else if (testcase.failure) {
} else if (testcase.failure || testcase.error) {
status = TestStatus.Fail
details = testcase.failure[0]._
details = (testcase.failure || testcase.error)[0]._

counts.failed++
} else {
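For context on the parser change above: JUnit XML distinguishes assertion failures (`<failure>`) from unexpected exceptions (`<error>`), and the parser now counts both as failed. A hypothetical report exercising each kind might look like:

```xml
<testsuite name="ExampleSuite" tests="2" failures="1" errors="1">
  <!-- An assertion that did not hold: reported as <failure> -->
  <testcase name="testAssertion" classname="ExampleTest">
    <failure message="expected 2 but was 3"/>
  </testcase>
  <!-- An unexpected exception: reported as <error>, previously not counted as a failure -->
  <testcase name="testException" classname="ExampleTest">
    <error message="NullPointerException"/>
  </testcase>
</testsuite>
```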
19 changes: 19 additions & 0 deletions test/junit.ts
@@ -87,4 +87,23 @@ describe("junit", async () => {
expect(result.suites[0].cases[8].name).to.eql("skipsTestNine")
expect(result.suites[0].cases[9].name).to.eql("skipsTestTen")
})

it("parses bazel", async() => {
// Not a perfect example of Bazel JUnit output - it typically does one file
// per test target, and aggregates all the test cases from the test tooling
// into one JUnit testsuite / testcase. This does depend on the actual
// test platform; my experience is mostly with py_test() targets.
const result = await parseJunitFile(`${resourcePath}/04-bazel-junit.xml`)

expect(result.counts.passed).to.eql(1)
expect(result.counts.failed).to.eql(1)
expect(result.counts.skipped).to.eql(0)

expect(result.suites.length).to.eql(2)

expect(result.suites[0].cases[0].name).to.eql("dummy/path/to/project/and/failing_test_target")
expect(result.suites[0].cases[0].status).to.eql(TestStatus.Fail)
expect(result.suites[1].cases[0].name).to.eql("dummy/path/to/project/and/passing_test_target")
expect(result.suites[1].cases[0].status).to.eql(TestStatus.Pass)
})
})