From 2f880a62cf4050db4d8926695dfeb629ea579fae Mon Sep 17 00:00:00 2001
From: Dmitrii Gridnev
Date: Mon, 9 Feb 2026 19:46:58 +0300
Subject: [PATCH] docs: unify documentation structure across all reporters

- Standardize README structure: Features, Installation, Quick Start, Configuration, Usage, Running Tests, Requirements
- Add table of contents for better navigation
- Add PyPI version and downloads badges to all READMEs
- Create detailed usage guides in docs/usage.md for each reporter
- Add new documentation for pytest: ATTACHMENTS.md, STEPS.md, PARAMETERS.md
- Restructure qase-python-commons README as configuration reference
- Link all reporters to qase-python-commons for full configuration options
- Fix status mapping: use 'invalid' instead of 'blocked' for failed (other exception)
---
 .gitignore                        |   5 +-
 qase-behave/README.md             | 230 ++++++++----
 qase-behave/docs/usage.md         | 447 +++++++++++++++++------
 qase-pytest/README.md             | 318 +++++++++--------
 qase-pytest/docs/ATTACHMENTS.md   | 356 +++++++++++++++++++
 qase-pytest/docs/PARAMETERS.md    | 321 +++++++++++++++++
 qase-pytest/docs/STEPS.md         | 432 +++++++++++++++++++++++
 qase-pytest/docs/usage.md         | 565 +++++++++++++++++++++++-------
 qase-python-commons/README.md     | 478 +++++++++++--------------
 qase-robotframework/README.md     | 240 ++++++++-----
 qase-robotframework/docs/usage.md | 492 ++++++++++++++++++--------
 qase-tavern/README.md             | 236 ++++++++++---
 qase-tavern/docs/usage.md         | 450 ++++++++++++++++++++----
 13 files changed, 3502 insertions(+), 1068 deletions(-)
 create mode 100644 qase-pytest/docs/ATTACHMENTS.md
 create mode 100644 qase-pytest/docs/PARAMETERS.md
 create mode 100644 qase-pytest/docs/STEPS.md

diff --git a/.gitignore b/.gitignore
index 81f165b9..c9226f9e 100644
--- a/.gitignore
+++ b/.gitignore
@@ -59,4 +59,7 @@ target/
 qaseio.lock
 
 #Ipython Notebook
-.ipynb_checkpoints
\ No newline at end of file
+.ipynb_checkpoints
+
+.planning/
+CLAUDE.md
diff --git a/qase-behave/README.md b/qase-behave/README.md
index ea767b69..b38954a7 100644
--- a/qase-behave/README.md
+++ b/qase-behave/README.md
@@ -1,79 +1,76 @@
 # [Qase TestOps](https://qase.io) Behave Reporter
 
-[![License](https://lxgaming.github.io/badges/License-Apache%202.0-blue.svg)](https://www.apache.org/licenses/LICENSE-2.0)
+[![PyPI version](https://img.shields.io/pypi/v/qase-behave?style=flat-square)](https://pypi.org/project/qase-behave/)
+[![PyPI downloads](https://img.shields.io/pypi/dm/qase-behave?style=flat-square)](https://pypi.org/project/qase-behave/)
+[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg?style=flat-square)](https://www.apache.org/licenses/LICENSE-2.0)
 
-## Installation
+Qase Behave Reporter enables seamless integration between your Behave BDD tests and [Qase TestOps](https://qase.io), providing automatic test result reporting, test case management, and comprehensive test analytics.
+
+## Features
 
-To install the latest version, run:
+- Link automated tests to Qase test cases by ID
+- Auto-create test cases from your feature files
+- Report test results with rich metadata (fields, attachments)
+- Automatic step reporting from Gherkin scenarios
+- Multi-project reporting support
+- Flexible configuration (file, environment variables)
+
+## Installation
 
 ```sh
 pip install qase-behave
 ```
 
-## Getting started
-
-The Behave reporter can auto-generate test cases
-and suites from your test data.
-Test results of subsequent test runs will match the same test cases
-as long as their names and file paths don't change.
-
-You can also annotate the tests with the IDs of existing test cases
-from Qase.io before executing tests. It's a more reliable way to bind
-autotests to test cases, that persists when you rename, move, or
-parameterize your tests.
-
-### Metadata
-
-- `qase.id` - set the ID of the test case
-- `qase.fields` - set the fields of the test case
-- `qase.suite` - set the suite of the test case
-- `qase.ignore` - ignore the test case in Qase. The test will be executed, but the results will not be sent to Qase.
-
-For detailed instructions on using annotations and methods, refer to [Usage](docs/usage.md).
+## Quick Start
 
-For information about attaching files and content or adding comments to test results, see [Attachments](docs/ATTACHMENTS.md).
+**1. Create `qase.config.json` in your project root:**
 
-### Multi-Project Support
-
-Qase Behave Reporter supports sending test results to multiple Qase projects simultaneously.
-You can specify different test case IDs for each project using the `@qase.project_id.PROJECT_CODE:IDS` tag format.
-
-For detailed information, configuration, and examples, see the [Multi-Project Support Guide](docs/MULTI_PROJECT.md).
+```json
+{
+  "mode": "testops",
+  "testops": {
+    "project": "YOUR_PROJECT_CODE",
+    "api": {
+      "token": "YOUR_API_TOKEN"
+    }
+  }
+}
+```
 
-For example:
+**2. Add Qase ID to your scenario:**
 
 ```gherkin
-Feature: Example tests
+Feature: Authentication
 
-  @qase.id:1 @qase.fields:{"description":"It_is_simple_test"} @qase.suite:MySuite
-  Scenario: Example test
-    Given I have a simple test
-    When I run it
-    Then it should pass
+  @qase.id:1
+  Scenario: User can log in with valid credentials
+    Given I am on the login page
+    When I enter valid credentials
+    Then I should see the dashboard
 ```
 
-To execute Behave tests and report them to Qase.io, run the command:
+**3. Run your tests:**
 
-```bash
+```sh
 behave --format=qase.behave.formatter:QaseFormatter
 ```
 
-You can try it with the example project at [`examples/behave`](../examples/behave/).
-
 ## Configuration
 
-Qase Behave Reporter is configured in multiple ways:
+The reporter is configured via (in order of priority):
 
-- using a config file `qase.config.json`
-- using environment variables
-- using command line options
+1. **Environment variables** (`QASE_*`, highest priority)
+2. **Config file** (`qase.config.json`)
 
-Environment variables override the values given in the config file,
-and command line options override both other values.
+### Minimal Configuration
 
-For complete configuration reference, see the [qase-python-commons README](../qase-python-commons/README.md) which contains all available configuration options.
+| Option | Environment Variable | Description |
+|--------|---------------------|-------------|
+| `mode` | `QASE_MODE` | Set to `testops` to enable reporting |
+| `testops.project` | `QASE_TESTOPS_PROJECT` | Your Qase project code |
+| `testops.api.token` | `QASE_TESTOPS_API_TOKEN` | Your Qase API token |
 
-### Example: qase.config.json
+### Example `qase.config.json`
 
 ```json
 {
@@ -82,11 +79,10 @@ For complete configuration reference, see the [qase-python-commons README](../qa
   "testops": {
     "project": "YOUR_PROJECT_CODE",
     "api": {
-      "token": "YOUR_API_TOKEN",
-      "host": "qase.io"
+      "token": "YOUR_API_TOKEN"
     },
     "run": {
-      "title": "Test run title"
+      "title": "Behave Automated Run"
     },
     "batch": {
       "size": 100
@@ -100,21 +96,131 @@ For complete configuration reference, see the [qase-python-commons README](../qa
         "format": "json"
       }
     }
-  },
-  "logging": {
-    "console": true,
-    "file": false
-  },
-  "environment": "local"
+  }
 }
 ```
+> **Full configuration reference:** See [qase-python-commons](../qase-python-commons/README.md) for all available options including logging, status mapping, execution plans, and more.
+
+## Usage
+
+### Link Tests with Test Cases
+
+Associate your scenarios with Qase test cases using the `@qase.id` tag:
+
+```gherkin
+Feature: Shopping Cart
+
+  @qase.id:1
+  Scenario: Add item to cart
+    Given I am on the product page
+    When I click add to cart
+    Then the item should be in my cart
+
+  @qase.id:2
+  Scenario: Remove item from cart
+    Given I have an item in my cart
+    When I click remove
+    Then my cart should be empty
+```
+
+### Add Metadata
+
+Enhance your scenarios with additional information using the `@qase.fields` tag:
+
+```gherkin
+Feature: Checkout
+
+  @qase.id:1
+  @qase.fields:{"severity":"critical","priority":"high","layer":"e2e"}
+  @qase.suite:Checkout
+  Scenario: Complete purchase
+    Given I have items in my cart
+    When I complete checkout
+    Then I should see order confirmation
+```
+
+**Note:** In field values, use underscores (`_`) instead of spaces. They will be automatically converted.
+
+### Ignore Tests
+
+Exclude specific scenarios from Qase reporting:
+
+```gherkin
+Feature: Authentication
+
+  @qase.ignore
+  Scenario: Work in progress test
+    Given this test is not ready
+    Then it should not be reported
+```
+
+### Test Result Statuses
+
+| Behave Result | Qase Status |
+|---------------|-------------|
+| Passed | `passed` |
+| Failed (AssertionError) | `failed` |
+| Failed (other exception) | `invalid` |
+| Skipped | `skipped` |
+
+### Attachments
+
+Attach files and content to test results in step definitions:
+
+```python
+from behave import given, when, then
+from qase.behave import qase
+
+@when('I take a screenshot')
+def step_impl(context):
+    screenshot = context.browser.get_screenshot_as_png()
+    qase.attach(content=screenshot, file_name="screenshot.png", mime_type="image/png")
+```
+
+> For detailed usage examples, see the [Usage Guide](docs/usage.md).
+
+## Running Tests
+
+### Basic Execution
+
+```sh
+behave --format=qase.behave.formatter:QaseFormatter
+```
+
+### With Environment Variables
+
+```sh
+export QASE_MODE=testops
+export QASE_TESTOPS_PROJECT=PROJ
+export QASE_TESTOPS_API_TOKEN=your_token
+behave --format=qase.behave.formatter:QaseFormatter
+```
+
+### With Custom Run Title
+
+```sh
+export QASE_TESTOPS_RUN_TITLE="Regression Run"
+behave --format=qase.behave.formatter:QaseFormatter
+```
+
 ## Requirements
 
-We maintain the reporter on [LTS versions of Python](https://devguide.python.org/versions/).
+- Python >= 3.9
+- behave >= 1.2.6
+
+## Documentation
+
+| Guide | Description |
+|-------|-------------|
+| [Usage Guide](docs/usage.md) | Complete usage reference with all tags and options |
+| [Attachments](docs/ATTACHMENTS.md) | Adding screenshots, logs, and files to test results |
+| [Multi-Project Support](docs/MULTI_PROJECT.md) | Reporting to multiple Qase projects |
+
+## Examples
 
-`python >= 3.9`
-`behave >= 1.2.6`
+See the [examples directory](../examples/single/behave/) for complete working examples.
 
-
+## License
+Apache License 2.0. See [LICENSE](../LICENSE) for details.
diff --git a/qase-behave/docs/usage.md b/qase-behave/docs/usage.md
index bac4d509..1ae1de6f 100644
--- a/qase-behave/docs/usage.md
+++ b/qase-behave/docs/usage.md
@@ -1,199 +1,424 @@
 # Qase Integration in Behave
 
-This guide demonstrates how to integrate Qase with Behave, providing instructions on how to add Qase IDs,
-fields and suites to your test cases.
+This guide provides comprehensive instructions for integrating Qase with Behave BDD framework.
 
 > **Configuration:** For complete configuration reference including all available options, environment variables, and examples, see the [qase-python-commons README](../../qase-python-commons/README.md).
 
 ---
 
-## Adding QaseID to a Test
+## Table of Contents
 
-To associate a QaseID with a test in Behave, use the `@qase.id` tag. This tag accepts a single integer
-representing the test's ID in Qase.
+- [Adding QaseID](#adding-qaseid)
+- [Adding Fields](#adding-fields)
+- [Adding Suite](#adding-suite)
+- [Ignoring Tests](#ignoring-tests)
+- [Working with Attachments](#working-with-attachments)
+- [Multi-Project Support](#multi-project-support)
+- [Running Tests](#running-tests)
+- [Complete Examples](#complete-examples)
 
-### Example:
+---
+
+## Adding QaseID
+
+Link your scenarios to existing test cases in Qase using the `@qase.id` tag.
+
+### Single ID
 
 ```gherkin
-Feature: Example tests
+Feature: Authentication
 
   @qase.id:1
-  Scenario: Example test
-    Given I have a simple test
-    When I run it
-    Then it should pass
+  Scenario: User can log in
+    Given I am on the login page
+    When I enter valid credentials
+    Then I should see the dashboard
 ```
 
-### Multi-Project Support
+### Multiple Scenarios
 
-Qase Behave Reporter supports sending test results to multiple Qase projects simultaneously with different test case IDs for each project.
+```gherkin
+Feature: Shopping Cart
 
-For detailed information, configuration, examples, and troubleshooting, see the [Multi-Project Support Guide](MULTI_PROJECT.md).
+  @qase.id:1
+  Scenario: Add item to cart
+    Given I am on a product page
+    When I click add to cart
+    Then the item should be in my cart
+
+  @qase.id:2
+  Scenario: Remove item from cart
+    Given I have an item in my cart
+    When I click remove
+    Then my cart should be empty
+```
+
+### Multi-Project Support
+
+To send test results to multiple Qase projects simultaneously, see the [Multi-Project Support Guide](MULTI_PROJECT.md).
 
 ---
 
-## Adding Fields to a Test
+## Adding Fields
+
+Add metadata to your scenarios using the `@qase.fields` tag with JSON format.
+
+### System Fields
 
-The `qase.fields` tag allows you to add additional metadata to a test case. You can specify multiple fields to
-enhance test case information in Qase. In field values, underscores (_) should be used instead of spaces. The reporter
-will automatically replace all underscores with spaces.
+| Field | Description | Example Values |
+|-------|-------------|----------------|
+| `description` | Test case description | Any text |
+| `preconditions` | Test preconditions | Any text |
+| `postconditions` | Test postconditions | Any text |
+| `severity` | Test severity | `blocker`, `critical`, `major`, `normal`, `minor`, `trivial` |
+| `priority` | Test priority | `high`, `medium`, `low` |
+| `layer` | Test layer | `e2e`, `api`, `unit` |
 
-### System Fields:
+### Example
 
-- `description` — Description of the test case.
-- `preconditions` — Preconditions for the test case.
-- `postconditions` — Postconditions for the test case.
-- `severity` — Severity of the test case (e.g., `critical`, `major`).
-- `priority` — Priority of the test case (e.g., `high`, `low`).
-- `layer` — Test layer (e.g., `UI`, `API`).
+```gherkin
+Feature: Checkout
+
+  @qase.id:1
+  @qase.fields:{"severity":"critical","priority":"high","layer":"e2e"}
+  Scenario: Complete purchase
+    Given I have items in my cart
+    When I complete checkout
+    Then I should see order confirmation
+```
 
-### Example:
+### Multiple Fields
 
 ```gherkin
-Feature: Example tests
+Feature: User Management
 
-  @qase.fields:{"description":"It_is_simple_test"}
-  Scenario: Example test
-    Given I have a simple test
-    When I run it
-    Then it should pass
+  @qase.id:1
+  @qase.fields:{"description":"Verify_user_registration_flow","preconditions":"User_is_not_registered","severity":"critical"}
+  Scenario: User registration
+    Given I am on the registration page
+    When I fill in the registration form
+    And I submit the form
+    Then I should see a confirmation message
 ```
 
+**Note:** Use underscores (`_`) instead of spaces in field values. They will be automatically converted to spaces.
+
 ---
 
-## Adding a Suite to a Test
+## Adding Suite
 
-To assign a suite or sub-suite to a test, use the `qase.suite` tag. It can receive a suite name, and optionally a
-sub-suite, both as strings.
+Organize scenarios into suites using the `@qase.suite` tag.
 
-### Example:
+### Simple Suite
 
 ```gherkin
-Feature: Example tests
+Feature: Authentication
 
-  @qase.suite:MySuite
-  Scenario: Example test
-    Given I have a simple test
-    When I run it
-    Then it should pass
+  @qase.id:1
+  @qase.suite:Authentication
+  Scenario: User login
+    Given I am on the login page
+    When I enter valid credentials
+    Then I should see the dashboard
+```
+
+### Nested Suite
+
+Use `||` to create nested suites:
 
-  @qase.suite:MySuite||SubSuite
-  Scenario: Example test
-    Given I have a simple test
-    When I run it
-    Then it should pass
+```gherkin
+Feature: Authentication
+
+  @qase.id:1
+  @qase.suite:Authentication||Login
+  Scenario: Valid login
+    Given I am on the login page
+    When I enter valid credentials
+    Then I should see the dashboard
+
+  @qase.id:2
+  @qase.suite:Authentication||Login||OAuth
+  Scenario: Google login
+    Given I am on the login page
+    When I click "Login with Google"
+    Then I should be redirected to Google
 ```
 
 ---
 
-## Ignoring a Test in Qase
+## Ignoring Tests
 
-To exclude a test from being reported to Qase (while still executing the test in Behave), use the `qase.ignore`
-tag. The test will run, but its result will not be sent to Qase.
-
-### Example:
+Exclude scenarios from Qase reporting while still executing them:
 
 ```gherkin
-Feature: Example tests
+Feature: Experimental
 
   @qase.ignore
-  Scenario: Example test
-    Given I have a simple test
-    When I run it
-    Then it should pass
+  Scenario: Work in progress
+    Given this feature is not ready
+    Then it should not be reported to Qase
 ```
 
 ---
 
-## Adding Attachments to Tests
+## Working with Attachments
 
-Qase Behave supports attaching files and content to test results. You can attach files or content either to the test case (scenario level) or to a specific test step.
+Attach files and content to test results in step definitions.
 
 ### Attach to Test Case
 
-Use `qase.attach()` to attach files or content to the test case. This is useful for screenshots, logs, or data files that are relevant to the entire test scenario.
-
-### Example:
+```python
+from behave import given, when, then
+from qase.behave import qase
 
-```gherkin
-Feature: Example tests
+@when('I take a screenshot')
+def step_impl(context):
+    # Attach file from path
+    qase.attach(file_path="/path/to/screenshot.png")
 
-  @qase.id:1
-  Scenario: Example test with attachments
-    Given I have a test with a file
-    When I attach a screenshot
-    Then the attachment should be in the test case
+@then('I save the response')
+def step_impl(context):
+    # Attach content directly
+    qase.attach(
+        content=context.response.text,
+        file_name="response.json",
+        mime_type="application/json"
+    )
 ```
 
+### Attach to Step
+
 ```python
-from behave import *
+from behave import when
 from qase.behave import qase
 
-@given('I have a test with a file')
+@when('I complete the form')
 def step_impl(context):
-    # Attach an existing file to the test case
-    qase.attach(file_path="/path/to/your/file.txt")
+    # Fill form...
 
-@when('I attach a screenshot')
-def step_impl(context):
-    # Attach binary data (e.g., screenshot) to the test case
-    screenshot_data = b"binary_screenshot_data"
-    qase.attach(
-        content=screenshot_data,
-        file_name="screenshot.png",
+    # Attach screenshot to this specific step
+    screenshot = context.browser.get_screenshot_as_png()
+    qase.attach_to_step(
+        content=screenshot,
+        file_name="form_completed.png",
         mime_type="image/png"
     )
 ```
 
-### Attach to Test Step
+### Method Parameters
+
+| Parameter | Type | Required | Description |
+|-----------|------|----------|-------------|
+| `file_path` | `str` | No* | Path to file to attach |
+| `content` | `str` or `bytes` | No* | Content to attach |
+| `file_name` | `str` | No | Custom filename |
+| `mime_type` | `str` | No | MIME type (auto-detected) |
+
+\* Either `file_path` or `content` must be provided.
+
+> For more details, see [Attachments Guide](ATTACHMENTS.md).
+
+---
 
-Use `qase.attach_to_step()` to attach files or content directly to a specific test step. This is useful when you want to associate attachments with a particular step execution.
+## Multi-Project Support
 
-### Example:
+Send test results to multiple Qase projects using the `@qase.project_id` tag:
 
 ```gherkin
-Feature: Example tests
+Feature: Shared Functionality
+
+  @qase.project_id.PROJ1:1,2
+  @qase.project_id.PROJ2:10
+  Scenario: Test reported to multiple projects
+    Given I perform an action
+    Then it should be reported to PROJ1 and PROJ2
+```
+
+For detailed configuration, examples, and troubleshooting, see the [Multi-Project Support Guide](MULTI_PROJECT.md).
+
+---
+
+## Running Tests
+
+### Basic Execution
+
+```sh
+behave --format=qase.behave.formatter:QaseFormatter
+```
+
+### With Environment Variables
+
+```sh
+export QASE_MODE=testops
+export QASE_TESTOPS_PROJECT=PROJ
+export QASE_TESTOPS_API_TOKEN=your_token
+behave --format=qase.behave.formatter:QaseFormatter
+```
+
+### With Config File
+
+Create `qase.config.json`:
+
+```json
+{
+  "mode": "testops",
+  "testops": {
+    "project": "PROJ",
+    "api": {
+      "token": "your_token"
+    }
+  }
+}
+```
+
+Then run:
+
+```sh
+behave --format=qase.behave.formatter:QaseFormatter
+```
+
+### Run Specific Feature
+
+```sh
+behave --format=qase.behave.formatter:QaseFormatter features/login.feature
+```
+
+### With Tags
+
+```sh
+behave --format=qase.behave.formatter:QaseFormatter --tags=@smoke
+```
+
+---
+
+## Complete Examples
+
+### Full Feature File
+
+```gherkin
+@qase.suite:Authentication
+Feature: User Authentication
+  As a user
+  I want to authenticate
+  So that I can access my account
+
+  Background:
+    Given the application is running
 
   @qase.id:1
-
+  @qase.fields:{"severity":"critical","priority":"high"}
+  Scenario: Successful login
+    Given I am on the login page
+    When I enter username "testuser"
+    And I enter password "testpass"
+    And I click the login button
+    Then I should see the dashboard
+    And I should see "Welcome, testuser"
+
+  @qase.id:2
+  @qase.fields:{"severity":"major","priority":"medium"}
+  Scenario: Failed login with invalid password
+    Given I am on the login page
+    When I enter username "testuser"
+    And I enter password "wrongpass"
+    And I click the login button
+    Then I should see an error message
+    And I should remain on the login page
+
+  @qase.id:3
+  @qase.suite:Authentication||Logout
+  Scenario: User logout
+    Given I am logged in as "testuser"
+    When I click the logout button
+    Then I should be redirected to the login page
+
+  @qase.ignore
+  Scenario: Password reset (WIP)
+    Given I am on the login page
+    When I click "Forgot password"
+    Then I should see the password reset form
 ```
 
+### Step Definitions with Attachments
+
 ```python
-from behave import *
+from behave import given, when, then
 from qase.behave import qase
 
-@when('I attach a screenshot to this step')
+@given('I am on the login page')
 def step_impl(context):
-    # Attach binary data to the current step
-    screenshot_data = b"binary_screenshot_data"
+    context.browser.goto("/login")
+
+@when('I enter username "{username}"')
+def step_impl(context, username):
+    context.browser.fill("#username", username)
+
+@when('I enter password "{password}"')
+def step_impl(context, password):
+    context.browser.fill("#password", password)
+
+@when('I click the login button')
+def step_impl(context):
+    context.browser.click("#login-btn")
+    # Attach screenshot after clicking
+    screenshot = context.browser.screenshot()
     qase.attach_to_step(
-        content=screenshot_data,
-        file_name="step_screenshot.png",
+        content=screenshot,
+        file_name="after_click.png",
         mime_type="image/png"
     )
-
-    # Attach text content to the current step
-    qase.attach_to_step(
-        content="Step execution log",
-        file_name="step_log.txt"
-    )
+
+@then('I should see the dashboard')
+def step_impl(context):
+    assert context.browser.url.endswith("/dashboard")
+
+@then('I should see "{text}"')
+def step_impl(context, text):
+    assert text in context.browser.content()
 ```
 
-### Method Parameters
+### Example Project Structure
 
-Both `qase.attach()` and `qase.attach_to_step()` accept the same parameters:
+```
+my-project/
+├── qase.config.json
+├── features/
+│   ├── environment.py
+│   ├── login.feature
+│   ├── checkout.feature
+│   └── steps/
+│       ├── login_steps.py
+│       └── checkout_steps.py
+└── requirements.txt
+```
+
+---
+
+## Troubleshooting
+
+### Tests Not Appearing in Qase
 
-- `file_path`: Path to the file to attach (mutually exclusive with `content`)
-- `content`: Content to attach as string or bytes (mutually exclusive with `file_path`)
-- `file_name`: Name for the attachment (auto-detected from `file_path` if not provided)
-- `mime_type`: MIME type of the attachment (auto-detected if not provided)
+1. Verify `mode` is set to `testops`
+2. Check API token has write permissions
+3. Verify project code is correct
+4. Ensure formatter is specified: `--format=qase.behave.formatter:QaseFormatter`
+
+### Fields Not Applying
+
+1. Verify JSON syntax is correct
+2. Use underscores instead of spaces in values
+3. Check for typos in field names
+
+### Attachments Not Uploading
+
+1. Verify file path exists
+2. Check file permissions
+3. Enable debug logging: `"debug": true`
+
+---
 
-**Notes:**
+## See Also
 
-- Either `file_path` or `content` must be provided, but not both
-- If `file_name` is not provided, it will be derived from `file_path` or default to "attachment.txt"
-- If `mime_type` is not provided, it will be auto-detected from the file extension or default to "text/plain"
-- Attachments are automatically included in the test result when the scenario completes
+- [Configuration Reference](../../qase-python-commons/README.md)
+- [Attachments Guide](ATTACHMENTS.md)
+- [Multi-Project Support](MULTI_PROJECT.md)
diff --git a/qase-pytest/README.md b/qase-pytest/README.md
index 2e88d51f..8b57e4de 100644
--- a/qase-pytest/README.md
+++ b/qase-pytest/README.md
@@ -1,36 +1,80 @@
 # [Qase TestOps](https://qase.io) Pytest Reporter
 
-[![License](https://lxgaming.github.io/badges/License-Apache%202.0-blue.svg)](https://www.apache.org/licenses/LICENSE-2.0)
+[![PyPI version](https://img.shields.io/pypi/v/qase-pytest?style=flat-square)](https://pypi.org/project/qase-pytest/)
+[![PyPI downloads](https://img.shields.io/pypi/dm/qase-pytest?style=flat-square)](https://pypi.org/project/qase-pytest/)
+[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg?style=flat-square)](https://www.apache.org/licenses/LICENSE-2.0)
 
-## Installation
+Qase Pytest Reporter enables seamless integration between your Pytest tests and [Qase TestOps](https://qase.io), providing automatic test result reporting, test case management, and comprehensive test analytics.
+
+## Features
 
-To install the latest version, run:
+- Link automated tests to Qase test cases by ID
+- Auto-create test cases from your test code
+- Report test results with rich metadata (fields, attachments, steps)
+- Support for parameterized tests
+- Multi-project reporting support
+- Flexible configuration (file, environment variables, CLI)
+- Built-in support for Playwright-based tests
+
+## Installation
 
 ```sh
-pip install pre qase-pytest
+pip install qase-pytest
 ```
 
-## Upgrade from 4.x to 5.x and to 6.x
+## Quick Start
 
-The new version 6.x of the Pytest reporter has breaking changes.
-To migrate from versions 4.x or 5.x, follow the [upgrade guide](docs/UPGRADE.md).
+**1. Create `qase.config.json` in your project root:**
 
-## Configuration
+```json
+{
+  "mode": "testops",
+  "testops": {
+    "project": "YOUR_PROJECT_CODE",
+    "api": {
+      "token": "YOUR_API_TOKEN"
+    }
+  }
+}
+```
+
+**2. Add Qase ID to your test:**
+
+```python
+from qase.pytest import qase
 
-Qase Pytest Reporter is configured in multiple ways:
+@qase.id(1)
+def test_example():
+    assert True
+```
 
-- using a config file `qase.config.json`
-- using environment variables
-- using command line options
+**3. Run your tests:**
 
-Environment variables override the values given in the config file,
-and command line options override both other values.
+```sh
+pytest
+```
+
+## Upgrading
+
+For migration guides between major versions, see [Upgrade Guide](docs/UPGRADE.md).
+
+## Configuration
 
-For complete configuration reference, see the [qase-python-commons README](../qase-python-commons/README.md) which contains all available configuration options.
+The reporter is configured via (in order of priority):
 
+1. **CLI options** (`--qase-*`, highest priority)
+2. **Environment variables** (`QASE_*`)
+3. **Config file** (`qase.config.json`)
 
-### Example: qase.config.json
+### Minimal Configuration
+
+| Option | Environment Variable | CLI Option | Description |
+|--------|---------------------|------------|-------------|
+| `mode` | `QASE_MODE` | `--qase-mode` | Set to `testops` to enable reporting |
+| `testops.project` | `QASE_TESTOPS_PROJECT` | `--qase-testops-project` | Your Qase project code |
+| `testops.api.token` | `QASE_TESTOPS_API_TOKEN` | `--qase-testops-api-token` | Your Qase API token |
+
+### Example `qase.config.json`
 
 ```json
 {
@@ -39,11 +83,10 @@ For complete configuration reference, see the [qase-python-commons README](../qa
   "testops": {
     "project": "YOUR_PROJECT_CODE",
     "api": {
-      "token": "YOUR_API_TOKEN",
-      "host": "qase.io"
+      "token": "YOUR_API_TOKEN"
    },
     "run": {
-      "title": "Test run title"
+      "title": "Pytest Automated Run"
     },
     "batch": {
       "size": 100
@@ -62,187 +105,178 @@ For complete configuration reference, see the [qase-python-commons README](../qa
     "pytest": {
       "captureLogs": true
     }
-  },
-  "logging": {
-    "console": true,
-    "file": false
-  },
-  "environment": "local"
+  }
 }
 ```
 
+> **Full configuration reference:** See [qase-python-commons](../qase-python-commons/README.md) for all available options including logging, status mapping, execution plans, and more.
+
 ## Usage
 
-For detailed instructions on using annotations and methods, refer to [Usage](docs/usage.md).
+### Link Tests with Test Cases
 
-### Multi-Project Support
+Associate your tests with Qase test cases using test case IDs:
 
-Qase Pytest Reporter supports sending test results to multiple Qase projects simultaneously.
-You can specify different test case IDs for each project using the `@qase.project_id()` decorator.
+```python
+from qase.pytest import qase
 
-For detailed information, configuration, and examples, see the [Multi-Project Support Guide](docs/MULTI_PROJECT.md).
+# Single ID
+@qase.id(1)
+def test_single_id():
+    assert True
+
+# Multiple IDs
+@qase.id([2, 3])
+def test_multiple_ids():
+    assert True
+```
 
-### Link tests with test cases in Qase TestOps
+### Add Metadata
 
-To link the automated tests with the test cases in Qase TestOps, use the `@qase.id()` decorator.
-Other test data, such as case title, system and custom fields,
-can be added with `@qase.title()` and `@qase.fields()`:
+Enhance your tests with additional information:
 
 ```python
 from qase.pytest import qase
 
-@qase.id(13)
-@qase.title("My first test")
+@qase.id(1)
+@qase.title("User Login Test")
+@qase.suite("Authentication")
 @qase.fields(
     ("severity", "critical"),
     ("priority", "high"),
-    ("layer", "unit"),
-    ("description", "Try to login to Qase TestOps using login and password"),
-    ("preconditions", "*Precondition 1*. Markdown is supported."),
+    ("layer", "e2e"),
+    ("description", "Verify user can log in with valid credentials"),
+    ("preconditions", "User account exists in the system"),
 )
-def test_example_1():
-    pass
-
-@qase.id([14, 15])
-def test_example_2():
-    pass
+def test_user_login():
+    assert True
 ```
 
-Each unique number can only be assigned once to the class or function being used.
+### Ignore Tests
 
-### Ignore a particular test
-
-To exclude a particular test from the report, use the `@qase.ignore` decorator:
+Exclude specific tests from Qase reporting (test still runs, but results are not sent):
 
 ```python
 from qase.pytest import qase
 
-@qase.ignore
-def test_example_1():
-    pass
+@qase.ignore()
+def test_not_reported():
+    assert True
 ```
 
-### Possible test result statuses
+### Test Result Statuses
 
-- PASSED - when test passed
-- FAILED - when test failed with `AssertionError`
-- BLOCKED - when test failed with any other exception
-- SKIPPED - when test has been skipped
+| Pytest Result | Qase Status |
+|---------------|-------------|
+| Passed | `passed` |
+| Failed (AssertionError) | `failed` |
+| Failed (other exception) | `invalid` |
+| Skipped | `skipped` |
 
-### Capture network logs
+### Attachments
 
-To capture the network logs, enable the `http` option in the `framework.capture` section
-of the configuration file.
+Attach files, screenshots, and logs to test results:
 
-The Qase Pytest reporter will capture all HTTP requests and responses
-and save them as a test steps automatically.
+```python
+from qase.pytest import qase
 
-### Add attachments to test results
+def test_with_attachments():
+    # Attach file from path
+    qase.attach("/path/to/file.txt")
 
-To upload screenshots, logs, and other information to Qase.io,
-use `qase.attach()`.
-It works both with files in the filesystem and with data available in the code.
-There is no limit on the amount of attachments from a single test.
+    # Attach with custom MIME type
+    qase.attach(("/path/to/file.json", "application/json"))
 
-```python
-import pytest
-from qase.pytest import qase
+    # Attach content from memory
+    qase.attach((b"screenshot data", "image/png", "screenshot.png"))
 
-@qase.title("File attachments")
-def test_example_1():
-    # attach files from the filesystem:
-    qase.attach("/path/to/file", "/path/to/file/2")
-    # to add multiple attachments, pass them in tuples:
-    qase.attach(
-        ("/path/to/file/1", "application/json"),
-        ("/path/to/file/3", "application/xml"),
-    )
-
-@pytest.fixture(scope="session")
-def driver():
-    driver = webdriver.Chrome()
-    yield driver
-    logs = "\n".join(str(row) for row in driver.get_log('browser')).encode('utf-8')
-    # attach logs from a code variable as a text file:
-    qase.attach((logs, "text/plain", "browser.log"))
-    driver.quit()
-
-@qase.id(12)
-def test_example_2(driver):
-    # attach the output of driver.get_screenshot_as_png() as a png image
-    qase.attach((driver.get_screenshot_as_png(), "image/png", "result.png"))
+    assert True
 ```
 
-### Linking code with steps
+### Test Steps
 
-To mark a test step, either annotate a function with `@qase.step()`,
-or use the `with qase.step()` context:
+Define test steps for detailed reporting:
 
 ```python
 from qase.pytest import qase
 
-@qase.step("First step")  # test step name
-def some_step():
-    sleep(5)
+@qase.step("Open login page")
+def open_login():
+    pass
 
-@qase.step("Second step")  # test step name
-def another_step():
-    sleep(3)
+@qase.step("Enter credentials")
+def enter_credentials(username, password):
+    pass
 
-# ...
+def test_login():
+    open_login()
+    enter_credentials("user", "pass")
 
-def test_example():
-    some_step()
-    another_step()
-    # test step hash
-    with qase.step("Third step"):
-        sleep(1)
+    # Inline step with context manager
+    with qase.step("Click login button"):
+        pass
 ```
 
-### Creating new testrun according to current pytest run
+> For detailed usage examples, see the [Usage Guide](docs/usage.md).
+
+## Running Tests
+
+### Basic Execution
+
+```sh
+pytest
+```
 
-By default, qase-pytest will create a new test run in Qase TestOps
-and report results to this test run.
-To provide a custom name for this run, add
-the option `--qase-testops-run-title`.
+### With CLI Options
 
-```bash
+```sh
 pytest \
     --qase-mode=testops \
-    --qase-testops-api-token= \
-    --qase-testops-project=PRJCODE \ # project, where your testrun would be created
-    --qase-testops-run-title=My\ First\ Automated\ Run
+    --qase-testops-project=PROJ \
+    --qase-testops-api-token=your_token
 ```
 
-### Sending tests to existing testrun
+### With Environment Variables
 
-Test results can be reported to an existing test run in Qase using its ID.
-This is useful when a test run combines tests from multiple sources:
+```sh
+export QASE_MODE=testops
+export QASE_TESTOPS_PROJECT=PROJ
+export QASE_TESTOPS_API_TOKEN=your_token
+pytest
+```
 
-* manual and automated
-* autotests from different frameworks
-* tests running in multiple shards on different machines
+### With Existing Test Run
 
-For example, if the test run has ID=3, the following command will
-run tests and report results to this test run:
+```sh
+pytest --qase-testops-run-id=123
+```
 
-```bash
-pytest \
-    --qase-mode=testops \
-    --qase-testops-api-token= \
-    --qase-testops-project=PRJCODE \ # project, where the test run is created
-    --qase-testops-run-id=3 # testrun id
+### With Test Plan
+
+```sh
+pytest --qase-testops-plan-id=456
 ```
 
-### Creating test run based on test plan (selective launch)
+## Requirements
 
-Create a new testrun base on a testplan. Testrun in Qase TestOps will contain only those
-test results. `qase-pytest` supports selective execution.
+- Python >= 3.9 +- pytest >= 7.0.0 -```bash -pytest \ - --qase-mode=testops \ - --qase-testops-api-token= \ - --qase-testops-project=PRJCODE \ # project, where your testrun exists in - --qase-testops-plan-id=3 # testplan id -``` +## Documentation + +| Guide | Description | +|-------|-------------| +| [Usage Guide](docs/usage.md) | Complete usage reference with all decorators and options | +| [Attachments](docs/ATTACHMENTS.md) | Adding screenshots, logs, and files to test results | +| [Steps](docs/STEPS.md) | Defining test steps for detailed reporting | +| [Parameters](docs/PARAMETERS.md) | Working with parameterized tests | +| [Multi-Project Support](docs/MULTI_PROJECT.md) | Reporting to multiple Qase projects | +| [Upgrade Guide](docs/UPGRADE.md) | Migration guide for breaking changes | + +## Examples + +See the [examples directory](../examples/) for complete working examples. + +## License + +Apache License 2.0. See [LICENSE](../LICENSE) for details. diff --git a/qase-pytest/docs/ATTACHMENTS.md b/qase-pytest/docs/ATTACHMENTS.md new file mode 100644 index 00000000..db2019ca --- /dev/null +++ b/qase-pytest/docs/ATTACHMENTS.md @@ -0,0 +1,356 @@ +# Attachments in Pytest + +This guide covers how to attach files, screenshots, logs, and other content to your Qase test results. + +--- + +## Overview + +Qase Pytest Reporter supports attaching various types of content to test results: + +- **Files** — Attach files from the filesystem +- **Screenshots** — Attach images captured during test execution +- **Logs** — Attach text logs or console output +- **Binary data** — Attach any binary content from memory + +Attachments appear in the test result details in Qase TestOps. 
+ +--- + +## Attaching Files + +### From File Path + +```python +from qase.pytest import qase + +def test_with_file_attachment(): + qase.attach("/path/to/screenshot.png") + qase.attach("/path/to/test-data.json") + assert True +``` + +### Multiple Files + +```python +from qase.pytest import qase + +def test_with_multiple_files(): + qase.attach( + "/path/to/file1.txt", + "/path/to/file2.txt", + "/path/to/file3.txt" + ) + assert True +``` + +### With Custom MIME Type + +```python +from qase.pytest import qase + +def test_with_mime_types(): + qase.attach( + ("/path/to/data.json", "application/json"), + ("/path/to/report.xml", "application/xml"), + ) + assert True +``` + +--- + +## Attaching Content from Memory + +### Text Content + +```python +from qase.pytest import qase + +def test_with_text_content(): + log_content = "Step 1: Started\nStep 2: Completed\nStep 3: Verified" + qase.attach((log_content, "text/plain", "execution.log")) + assert True +``` + +### Binary Content (Screenshots) + +```python +from qase.pytest import qase + +def test_with_screenshot(browser): + # Capture screenshot as bytes + screenshot_bytes = browser.screenshot() + + # Attach with filename + qase.attach((screenshot_bytes, "image/png", "screenshot.png")) + assert True +``` + +### JSON Data + +```python +import json +from qase.pytest import qase + +def test_with_json_data(): + data = {"user": "test", "result": "success", "items": [1, 2, 3]} + json_str = json.dumps(data, indent=2) + qase.attach((json_str, "application/json", "response.json")) + assert True +``` + +--- + +## Attachments in Fixtures + +### Attach on Teardown + +```python +import pytest +from qase.pytest import qase + +@pytest.fixture +def browser(): + driver = create_webdriver() + yield driver + + # Attach screenshot regardless of test outcome + screenshot = driver.get_screenshot_as_png() + qase.attach((screenshot, "image/png", "final_state.png")) + + # Attach browser logs + logs = driver.get_log('browser') + log_text = 
'\n'.join(str(entry) for entry in logs) + qase.attach((log_text, "text/plain", "browser.log")) + + driver.quit() +``` + +### Attach Only on Failure + +```python +import pytest +from qase.pytest import qase + +@pytest.fixture +def browser(request): + driver = create_webdriver() + yield driver + + # Attach screenshot only if test failed + if request.node.rep_call.failed: + screenshot = driver.get_screenshot_as_png() + qase.attach((screenshot, "image/png", "failure_screenshot.png")) + + driver.quit() + +@pytest.hookimpl(tryfirst=True, hookwrapper=True) +def pytest_runtest_makereport(item, call): + outcome = yield + rep = outcome.get_result() + setattr(item, f"rep_{rep.when}", rep) +``` + +--- + +## Method Reference + +### `qase.attach()` + +Attach content to the test case. + +**Signatures:** + +```python +# Single file path +qase.attach("/path/to/file") + +# Multiple file paths +qase.attach("/path/to/file1", "/path/to/file2") + +# File with MIME type +qase.attach(("/path/to/file", "mime/type")) + +# Content from memory +qase.attach((content, "mime/type", "filename")) +``` + +**Parameters:** + +| Format | Description | +|--------|-------------| +| `str` | File path (MIME type auto-detected) | +| `(str, str)` | (file_path, mime_type) | +| `(bytes\|str, str, str)` | (content, mime_type, filename) | + +--- + +## MIME Types + +Common MIME types are auto-detected based on file extension: + +| Extension | MIME Type | +|-----------|-----------| +| `.png` | `image/png` | +| `.jpg`, `.jpeg` | `image/jpeg` | +| `.gif` | `image/gif` | +| `.svg` | `image/svg+xml` | +| `.webp` | `image/webp` | +| `.txt` | `text/plain` | +| `.log` | `text/plain` | +| `.json` | `application/json` | +| `.xml` | `application/xml` | +| `.html` | `text/html` | +| `.csv` | `text/csv` | +| `.pdf` | `application/pdf` | +| `.zip` | `application/zip` | + +For other file types, specify MIME type explicitly. 
+ +--- + +## Common Use Cases + +### Selenium Screenshots + +```python +from selenium import webdriver +from qase.pytest import qase + +def test_with_selenium(selenium_driver): + selenium_driver.get("https://example.com") + + # Take screenshot + screenshot = selenium_driver.get_screenshot_as_png() + qase.attach((screenshot, "image/png", "page.png")) + + # Full page screenshot (if supported) + selenium_driver.save_screenshot("/tmp/full_page.png") + qase.attach("/tmp/full_page.png") + + assert selenium_driver.title == "Example Domain" +``` + +### Playwright Screenshots + +```python +from qase.pytest import qase + +def test_with_playwright(page): + page.goto("https://example.com") + + # Screenshot as bytes + screenshot = page.screenshot() + qase.attach((screenshot, "image/png", "playwright_screenshot.png")) + + # Screenshot of specific element + element_screenshot = page.locator("#main").screenshot() + qase.attach((element_screenshot, "image/png", "element.png")) + + assert page.title() == "Example Domain" +``` + +### API Response Logs + +```python +import requests +import json +from qase.pytest import qase + +def test_api_response(): + response = requests.get("https://api.example.com/users") + + # Attach response body + qase.attach(( + json.dumps(response.json(), indent=2), + "application/json", + "response.json" + )) + + # Attach response headers + headers_text = '\n'.join(f"{k}: {v}" for k, v in response.headers.items()) + qase.attach((headers_text, "text/plain", "headers.txt")) + + assert response.status_code == 200 +``` + +### Browser Console Logs + +```python +from qase.pytest import qase + +def test_with_console_logs(browser): + browser.goto("https://example.com") + + # Get console logs (browser-specific) + console_logs = browser.get_console_logs() + log_text = '\n'.join(str(log) for log in console_logs) + qase.attach((log_text, "text/plain", "console.log")) + + assert True +``` + +### HAR Files (Network Traffic) + +```python +from qase.pytest import qase + 
+def test_with_har(browser):
+    # Record network traffic as a HAR file (Playwright);
+    # the HAR is written when the context is closed
+    context = browser.new_context(record_har_path="network.har")
+    page = context.new_page()
+
+    page.goto("https://example.com")
+
+    # Close the context to flush the HAR, then attach it
+    context.close()
+    qase.attach("network.har")
+
+    # (Playwright tracing — context.tracing.start()/stop() — produces a
+    # trace.zip you can attach the same way, but it is not a HAR file.)
+    assert True
+```
+
+---
+
+## Troubleshooting
+
+### Attachments Not Appearing
+
+1. Verify the file path exists and is readable
+2. Check file permissions
+3. Enable debug logging:
+   ```json
+   {
+     "debug": true
+   }
+   ```
+4. Check console output for upload errors
+
+### Large Files
+
+Large attachments may slow down test execution. Consider:
+- Compressing files before attaching
+- Limiting screenshot resolution
+- Only attaching on failure
+- Setting reasonable size limits in your tests
+
+### Binary Data Issues
+
+When attaching binary data:
+- Always provide a filename with appropriate extension
+- Specify the correct MIME type
+- Ensure content is bytes, not string (for binary data)
+
+```python
+# Correct
+qase.attach((binary_data, "image/png", "screenshot.png"))
+
+# Incorrect - missing filename
+qase.attach((binary_data, "image/png"))
+```
+
+---
+
+## See Also
+
+- [Usage Guide](usage.md)
+- [Steps Guide](STEPS.md)
+- [Configuration Reference](../../qase-python-commons/README.md)
diff --git a/qase-pytest/docs/PARAMETERS.md b/qase-pytest/docs/PARAMETERS.md
new file mode 100644
index 00000000..1d3e26e7
--- /dev/null
+++ b/qase-pytest/docs/PARAMETERS.md
@@ -0,0 +1,321 @@
+# Parameters in Pytest
+
+This guide covers how to work with parameterized tests and control which parameters are reported to Qase.
+
+---
+
+## Overview
+
+When using `@pytest.mark.parametrize`, Qase Pytest Reporter automatically captures parameter values and includes them in test results. This helps distinguish between different test variations.
+ +You can control which parameters are reported using: +- `@qase.ignore_parameters()` — Exclude specific parameters from reports +- `@qase.parametrize_ignore()` — Replace parametrize entirely with ignored parameters + +--- + +## Basic Parameterized Tests + +Parameters are automatically captured and reported: + +```python +import pytest +from qase.pytest import qase + +@pytest.mark.parametrize("username,password", [ + ("user1", "pass1"), + ("user2", "pass2"), + ("user3", "pass3"), +]) +@qase.id(1) +def test_login(username, password): + assert login(username, password) +``` + +Each parameter combination creates a separate test result in Qase with the parameter values visible. + +--- + +## Ignoring Parameters + +### Using `@qase.ignore_parameters()` + +Exclude specific parameters from Qase reports while still using them in tests: + +```python +import pytest +from qase.pytest import qase + +@pytest.mark.parametrize("browser", ["chrome", "firefox", "safari"]) +@pytest.mark.parametrize("user", ["admin", "regular"]) +@qase.ignore_parameters("browser") # Only 'user' is reported +def test_login(browser, user): + # browser parameter is used but not reported to Qase + driver = create_driver(browser) + login(driver, user) + assert True +``` + +### Multiple Parameters + +Ignore multiple parameters: + +```python +import pytest +from qase.pytest import qase + +@pytest.mark.parametrize("env", ["staging", "production"]) +@pytest.mark.parametrize("browser", ["chrome", "firefox"]) +@pytest.mark.parametrize("user_type", ["admin", "user"]) +@qase.ignore_parameters("env", "browser") # Only 'user_type' is reported +def test_cross_browser(env, browser, user_type): + assert True +``` + +--- + +## Using `@qase.parametrize_ignore()` + +Replace `@pytest.mark.parametrize` entirely — parameters are used in tests but never reported: + +```python +from qase.pytest import qase + +@qase.parametrize_ignore( + "internal_id,debug_data", + [ + ("id-001", {"verbose": True}), + ("id-002", {"verbose": 
False}), + ], + ids=["verbose", "quiet"] +) +def test_with_internal_data(internal_id, debug_data): + # internal_id and debug_data are used but not reported + process(internal_id, debug_data) + assert True +``` + +### With Test IDs + +```python +from qase.pytest import qase + +@qase.parametrize_ignore( + "test_input,expected", + [ + ("3+5", 8), + ("2+4", 6), + ("6*9", 54), + ], + ids=["addition_1", "addition_2", "multiplication"] +) +def test_eval(test_input, expected): + assert eval(test_input) == expected +``` + +--- + +## Combining Approaches + +Use both decorators together for fine-grained control: + +```python +import pytest +from qase.pytest import qase + +@qase.parametrize_ignore("debug_info", [{"log": True}, {"log": False}]) +@pytest.mark.parametrize("browser", ["chrome", "firefox"]) +@pytest.mark.parametrize("user", ["alice", "bob"]) +@pytest.mark.parametrize("env", ["staging", "prod"]) +@qase.ignore_parameters("browser", "env") +@qase.id(1) +def test_complex(debug_info, browser, user, env): + # Only 'user' is reported to Qase + # debug_info - ignored via parametrize_ignore + # browser, env - ignored via ignore_parameters + assert True +``` + +--- + +## Common Use Cases + +### Browser/Environment Testing + +When running the same test across multiple browsers or environments: + +```python +import pytest +from qase.pytest import qase + +@pytest.mark.parametrize("browser", ["chrome", "firefox", "safari"]) +@qase.ignore_parameters("browser") +@qase.id(1) +def test_login_flow(browser): + # Same test case in Qase, different browser runs + driver = get_driver(browser) + assert login(driver, "user", "pass") +``` + +### Test Data vs Test Logic + +Separate test data (that should be tracked) from test configuration (that shouldn't): + +```python +import pytest +from qase.pytest import qase + +@pytest.mark.parametrize("username,expected_role", [ + ("admin@example.com", "admin"), + ("user@example.com", "user"), +]) +@pytest.mark.parametrize("timeout", [5, 10, 30]) # 
Infrastructure config +@qase.ignore_parameters("timeout") +def test_user_role(username, expected_role, timeout): + # username and expected_role are reported (test data) + # timeout is not reported (test configuration) + user = fetch_user(username, timeout=timeout) + assert user.role == expected_role +``` + +### Sensitive Data + +Exclude sensitive information from reports: + +```python +import pytest +from qase.pytest import qase + +@pytest.mark.parametrize("user", ["admin", "regular"]) +@pytest.mark.parametrize("api_key", [API_KEY_1, API_KEY_2]) +@qase.ignore_parameters("api_key") +def test_api_access(user, api_key): + # api_key is used but not visible in Qase reports + response = make_api_call(user, api_key) + assert response.status_code == 200 +``` + +### Debug/Verbose Modes + +Exclude debug-only parameters: + +```python +import pytest +from qase.pytest import qase + +@qase.parametrize_ignore( + "debug_mode,verbose", + [(True, True), (False, False)], + ids=["debug", "normal"] +) +@pytest.mark.parametrize("feature", ["login", "checkout", "search"]) +def test_features(debug_mode, verbose, feature): + # feature is reported + # debug_mode and verbose are not + run_test(feature, debug=debug_mode, verbose=verbose) + assert True +``` + +--- + +## Global Parameter Exclusion + +Use configuration to exclude parameters across all tests: + +### Via Config File + +```json +{ + "excludeParams": ["password", "api_key", "token", "secret"] +} +``` + +### Via Environment Variable + +```bash +export QASE_EXCLUDE_PARAMS="password,api_key,token,secret" +``` + +This applies to all tests without needing decorators. + +--- + +## How Parameters Appear in Qase + +When parameters are reported, they appear in the test result: + +``` +Test: test_login[admin-staging] +Parameters: + - user: admin + - environment: staging +``` + +Ignored parameters are completely omitted from the report. 
+ +--- + +## Best Practices + +### Report Meaningful Parameters + +Report parameters that: +- Distinguish test variations +- Are relevant to understanding failures +- Help identify which scenarios are covered + +### Ignore Infrastructure Parameters + +Ignore parameters that: +- Don't affect test logic (browser, timeout) +- Contain sensitive data +- Are for debugging only +- Create noise in reports + +### Use Descriptive IDs + +```python +@pytest.mark.parametrize("status,expected", [ + (200, "success"), + (404, "not_found"), + (500, "error"), +], ids=["success_response", "not_found_response", "server_error"]) +def test_handle_response(status, expected): + assert handle(status) == expected +``` + +--- + +## Troubleshooting + +### All Parameter Combinations Creating Separate Test Cases + +If you want one test case with multiple runs: +1. Use `@qase.ignore_parameters()` for varying parameters +2. Or use global `excludeParams` configuration + +### Parameters Not Appearing + +1. Check if parameters are in `excludeParams` config +2. Verify `@qase.ignore_parameters()` isn't applied +3. Enable debug logging to see parameter processing + +### Wrong Parameters Ignored + +Decorator order matters: +```python +# Parameters are processed in decorator order +@pytest.mark.parametrize("a", [1, 2]) +@pytest.mark.parametrize("b", [3, 4]) +@qase.ignore_parameters("a") # Must come after parametrize +def test_order(a, b): + pass +``` + +--- + +## See Also + +- [Usage Guide](usage.md) +- [Multi-Project Support](MULTI_PROJECT.md) +- [Configuration Reference](../../qase-python-commons/README.md) diff --git a/qase-pytest/docs/STEPS.md b/qase-pytest/docs/STEPS.md new file mode 100644 index 00000000..d8a346f7 --- /dev/null +++ b/qase-pytest/docs/STEPS.md @@ -0,0 +1,432 @@ +# Test Steps in Pytest + +This guide covers how to define and report test steps for detailed execution tracking in Qase. + +--- + +## Overview + +Test steps provide granular visibility into test execution. 
Each step is reported separately, showing: + +- Step name and description +- Step status (passed/failed) +- Step duration +- Attachments (if any) +- Error details (on failure) + +--- + +## Defining Steps + +### Using Decorator + +Annotate functions as test steps: + +```python +from qase.pytest import qase + +@qase.step("Open login page") +def open_login_page(browser): + browser.goto("/login") + +@qase.step("Enter credentials") +def enter_credentials(browser, username, password): + browser.fill("#username", username) + browser.fill("#password", password) + +@qase.step("Click login button") +def click_login(browser): + browser.click("#login-btn") + +def test_login_flow(browser): + open_login_page(browser) + enter_credentials(browser, "user", "pass") + click_login(browser) + assert browser.url == "/dashboard" +``` + +### Using Context Manager + +Use the context manager for inline steps: + +```python +from qase.pytest import qase + +def test_checkout_flow(browser): + with qase.step("Add item to cart"): + browser.click("#add-to-cart") + assert browser.locator(".cart-count").text_content() == "1" + + with qase.step("Proceed to checkout"): + browser.click("#checkout") + assert "/checkout" in browser.url + + with qase.step("Complete payment"): + browser.fill("#card", "4111111111111111") + browser.click("#pay") + assert browser.locator(".success").is_visible() +``` + +### Dynamic Step Names + +Include parameters in step names for better traceability: + +```python +from qase.pytest import qase + +@qase.step("Login as '{username}'") +def login_as(browser, username, password): + browser.fill("#username", username) + browser.fill("#password", password) + browser.click("#login") + +def test_user_roles(browser): + login_as(browser, "admin", "admin123") # Step: "Login as 'admin'" + assert is_admin_dashboard_visible(browser) +``` + +--- + +## Nested Steps + +Create hierarchical step structures: + +```python +from qase.pytest import qase + +@qase.step("Create test user") +def 
create_user(): + pass + +@qase.step("Create test product") +def create_product(): + pass + +def test_order_creation(browser): + with qase.step("Setup test data"): + create_user() + create_product() + + with qase.step("Create order"): + with qase.step("Add product to cart"): + browser.click("#add-to-cart") + + with qase.step("Checkout"): + browser.click("#checkout") + browser.click("#confirm") + + with qase.step("Verify order created"): + assert browser.locator(".order-id").is_visible() +``` + +This creates the following step hierarchy: +``` +├── Setup test data +│ ├── Create test user +│ └── Create test product +├── Create order +│ ├── Add product to cart +│ └── Checkout +└── Verify order created +``` + +--- + +## Steps with Expected Results + +Define expected results for documentation and verification: + +```python +from qase.pytest import qase + +@qase.step("Submit form", expected="Form is submitted and success message appears") +def submit_form(browser): + browser.click("#submit") + assert browser.locator(".success").is_visible() + +@qase.step("Verify redirect", expected="User is redirected to dashboard page") +def verify_redirect(browser): + assert browser.url.endswith("/dashboard") +``` + +--- + +## Steps with Attachments + +Attach content to a specific step: + +```python +from qase.pytest import qase + +def test_with_step_attachments(browser): + with qase.step("Fill form"): + browser.fill("#name", "Test User") + browser.fill("#email", "test@example.com") + + with qase.step("Submit and verify"): + browser.click("#submit") + + # Attach screenshot to this step + screenshot = browser.screenshot() + qase.attach((screenshot, "image/png", "after_submit.png")) + + assert browser.locator(".success").is_visible() +``` + +--- + +## Step Status + +Steps automatically inherit status from execution: + +| Execution | Step Status | +|-----------|-------------| +| Completes normally | Passed | +| Raises AssertionError | Failed | +| Raises other exception | Invalid | + +### 
Step Failure Behavior
+
+When a step fails:
+1. The step is marked as failed
+2. The error message is captured
+3. The exception propagates out of the step, so unless it is caught, the test stops and later steps do not run
+4. The overall test is marked as failed
+
+```python
+from qase.pytest import qase
+
+def test_with_failing_step():
+    with qase.step("First step"):
+        assert True  # Passes
+
+    with qase.step("Failing step"):
+        assert False  # Fails - step marked as failed, AssertionError propagates
+
+    with qase.step("Third step"):
+        # Never reached: the failing step above aborts the test
+        assert True
+```
+
+---
+
+## Best Practices
+
+### Keep Steps Atomic
+
+Each step should represent a single action or verification:
+
+```python
+# Good: One action per step
+@qase.step("Click login button")
+def click_login(browser):
+    browser.click("#login-btn")
+
+@qase.step("Enter username")
+def enter_username(browser, username):
+    browser.fill("#username", username)
+
+# Avoid: Multiple actions in one step
+@qase.step("Fill form and submit")  # Too broad
+def fill_and_submit(browser):
+    browser.fill("#username", "user")
+    browser.fill("#password", "pass")
+    browser.click("#submit")
+```
+
+### Use Descriptive Names
+
+```python
+# Good: Clear action description
+@qase.step("Verify user is redirected to dashboard after login")
+def verify_dashboard_redirect(browser):
+    assert "/dashboard" in browser.url
+
+# Avoid: Vague names
+@qase.step("Check page")  # What page? What check?
+def check(browser):
+    pass
+```
+
+### Include Context in Step Names
+
+```python
+# Good: Include relevant context
+@qase.step("Add product '{product_name}' to cart")
+def add_to_cart(browser, product_name):
+    browser.click(f"[data-product='{product_name}'] .add-to-cart")
+
+# Better than generic:
+@qase.step("Add product")  # Which product?
+def add_to_cart(browser, product_name): + pass +``` + +### Group Related Steps + +```python +from qase.pytest import qase + +def test_e2e_purchase(browser): + # Authentication group + with qase.step("User authentication"): + with qase.step("Navigate to login"): + browser.goto("/login") + with qase.step("Enter credentials"): + login(browser, "user", "pass") + + # Purchase group + with qase.step("Complete purchase"): + with qase.step("Select product"): + select_product(browser, "Widget") + with qase.step("Checkout"): + checkout(browser) + + # Verification group + with qase.step("Verify purchase"): + assert order_confirmation_visible(browser) +``` + +--- + +## Common Patterns + +### Page Object Steps + +```python +from qase.pytest import qase + +class LoginPage: + def __init__(self, browser): + self.browser = browser + + @qase.step("Open login page") + def open(self): + self.browser.goto("/login") + + @qase.step("Enter username '{username}'") + def enter_username(self, username): + self.browser.fill("#username", username) + + @qase.step("Enter password") + def enter_password(self, password): + self.browser.fill("#password", password) + + @qase.step("Click login button") + def click_login(self): + self.browser.click("#login") + +def test_login(browser): + login_page = LoginPage(browser) + login_page.open() + login_page.enter_username("testuser") + login_page.enter_password("password123") + login_page.click_login() + assert browser.url == "/dashboard" +``` + +### API Testing Steps + +```python +import requests +from qase.pytest import qase + +@qase.step("Create user via API") +def create_user(name, email): + response = requests.post("/api/users", json={"name": name, "email": email}) + assert response.status_code == 201 + return response.json()["id"] + +@qase.step("Get user by ID") +def get_user(user_id): + response = requests.get(f"/api/users/{user_id}") + assert response.status_code == 200 + return response.json() + +@qase.step("Delete user") +def 
delete_user(user_id): + response = requests.delete(f"/api/users/{user_id}") + assert response.status_code == 204 + +def test_user_crud(): + user_id = create_user("Test", "test@example.com") + user = get_user(user_id) + assert user["name"] == "Test" + delete_user(user_id) +``` + +### Setup/Teardown Steps + +```python +import pytest +from qase.pytest import qase + +@pytest.fixture +def test_user(): + with qase.step("Setup: Create test user"): + user = create_test_user() + + yield user + + with qase.step("Teardown: Delete test user"): + delete_user(user.id) + +def test_user_profile(test_user): + with qase.step("View user profile"): + profile = get_profile(test_user.id) + assert profile is not None +``` + +--- + +## Troubleshooting + +### Steps Not Appearing + +1. Verify the step decorator/context is properly imported from `qase.pytest` +2. Check that steps are executed within a test context +3. Enable debug logging to trace step recording + +### Nested Steps Flattened + +Ensure you're using the context manager correctly for nesting: + +```python +# Correct: Nested context managers +with qase.step("Parent step"): + with qase.step("Child step"): + pass + +# Incorrect: Sequential, not nested +with qase.step("Step 1"): + pass +with qase.step("Step 2"): # Not nested under Step 1 + pass +``` + +### Step Duration Shows 0 + +Steps need measurable execution time. Very fast steps may show 0ms duration. This is normal for simple operations. 
+ +### Steps Not Reporting Failure Details + +Ensure the exception is raised within the step context: + +```python +# Correct: Exception within step +with qase.step("Verify result"): + assert result == expected # Failure captured in step + +# Incorrect: Exception outside step +with qase.step("Get result"): + result = get_result() +assert result == expected # Failure not associated with step +``` + +--- + +## See Also + +- [Usage Guide](usage.md) +- [Attachments Guide](ATTACHMENTS.md) +- [Configuration Reference](../../qase-python-commons/README.md) diff --git a/qase-pytest/docs/usage.md b/qase-pytest/docs/usage.md index fa73e1b9..7425c806 100644 --- a/qase-pytest/docs/usage.md +++ b/qase-pytest/docs/usage.md @@ -1,71 +1,128 @@ # Qase Integration in Pytest -This guide demonstrates how to integrate Qase with Pytest, providing instructions on how to add Qase IDs, fields, suites, and other metadata to your test cases. +This guide provides comprehensive instructions for integrating Qase with Pytest. > **Configuration:** For complete configuration reference including all available options, environment variables, and examples, see the [qase-python-commons README](../../qase-python-commons/README.md). --- -## Adding QaseID to a Test +## Table of Contents + +- [Adding QaseID](#adding-qaseid) +- [Adding Title](#adding-title) +- [Adding Fields](#adding-fields) +- [Adding Suite](#adding-suite) +- [Ignoring Tests](#ignoring-tests) +- [Muting Tests](#muting-tests) +- [Working with Attachments](#working-with-attachments) +- [Working with Steps](#working-with-steps) +- [Working with Parameters](#working-with-parameters) +- [Multi-Project Support](#multi-project-support) +- [Running Tests](#running-tests) +- [Complete Examples](#complete-examples) -To associate a QaseID with a test in Pytest, use the `@qase.id` decorator. +--- + +## Adding QaseID + +Link your automated tests to existing test cases in Qase by specifying the test case ID. 
+ +### Single ID ```python from qase.pytest import qase @qase.id(1) -def test_example(): +def test_single_case(): assert True +``` + +### Multiple IDs + +Link one test to multiple test cases: -@qase.id([2, 3]) # Multiple IDs -def test_multiple_ids(): +```python +from qase.pytest import qase + +@qase.id([2, 3, 4]) +def test_linked_to_multiple_cases(): assert True ``` ### Multi-Project Support -Qase Pytest Reporter supports sending test results to multiple Qase projects simultaneously with different test case IDs for each project. - -For detailed information, configuration, examples, and troubleshooting, see the [Multi-Project Support Guide](MULTI_PROJECT.md). +To send test results to multiple Qase projects simultaneously, see the [Multi-Project Support Guide](MULTI_PROJECT.md). --- -## Adding Title to a Test +## Adding Title -To set a custom title for a test case: +Set a custom title for the test case (overrides the auto-generated title from the function name): ```python from qase.pytest import qase -@qase.title("User Login Test") +@qase.title("Verify user can log in with valid credentials") def test_login(): assert True ``` --- -## Adding Fields to a Test +## Adding Fields + +Add metadata to your test cases using fields. Both system and custom fields are supported. -The `qase.fields` decorator allows you to add additional metadata to a test case. 
+### System Fields + +| Field | Description | Example Values | +|-------|-------------|----------------| +| `description` | Test case description | Any text | +| `preconditions` | Test preconditions | Any text (supports Markdown) | +| `postconditions` | Test postconditions | Any text | +| `severity` | Test severity | `blocker`, `critical`, `major`, `normal`, `minor`, `trivial` | +| `priority` | Test priority | `high`, `medium`, `low` | +| `layer` | Test layer | `e2e`, `api`, `unit` | + +### Example ```python from qase.pytest import qase @qase.fields( - ("priority", "high"), ("severity", "critical"), - ("layer", "UI"), - ("custom_field", "custom_value") + ("priority", "high"), + ("layer", "e2e"), + ("description", "Verify user can complete checkout process"), + ("preconditions", "- User is logged in\n- Cart has items"), +) +def test_checkout(): + assert True +``` + +### Custom Fields + +Custom fields defined in your Qase project can also be set: + +```python +from qase.pytest import qase + +@qase.fields( + ("browser", "chrome"), + ("environment", "staging"), + ("custom_field_slug", "custom_value"), ) -def test_example(): +def test_with_custom_fields(): assert True ``` --- -## Adding a Suite to a Test +## Adding Suite -To assign a suite or sub-suite to a test: +Organize tests into suites and sub-suites. 
+ +### Simple Suite ```python from qase.pytest import qase @@ -73,21 +130,31 @@ from qase.pytest import qase @qase.suite("Authentication") def test_login(): assert True +``` -@qase.suite("Authentication", "Login") # With description -def test_login_with_description(): +### Suite with Description + +```python +from qase.pytest import qase + +@qase.suite("Authentication", "Tests for user authentication flows") +def test_login(): assert True ``` ### Nested Suites -You can create nested suites using dot notation: +Use dot notation to create a nested suite hierarchy: ```python from qase.pytest import qase @qase.suite("Authentication.Login") -def test_login(): +def test_valid_login(): + assert True + +@qase.suite("Authentication.Login.OAuth") +def test_google_login(): assert True @qase.suite("Authentication.Logout") @@ -95,199 +162,447 @@ def test_logout(): assert True ``` +This creates the following structure in Qase: +``` +Authentication/ +├── Login/ +│ ├── test_valid_login +│ └── OAuth/ +│ └── test_google_login +└── Logout/ + └── test_logout +``` + --- -## Ignoring a Test in Qase +## Ignoring Tests -To exclude a test from being reported to Qase (while still executing the test): +Exclude a test from Qase reporting. The test still executes in pytest, but results are not sent to Qase: ```python from qase.pytest import qase @qase.ignore() -def test_example(): +def test_not_reported_to_qase(): + # This test runs but is not reported assert True ``` --- -## Muting a Test - -To mark a test as muted (it will not affect the test run status): +## Muting Tests -### Using Decorator +Mark a test as muted. Muted tests are reported to Qase but do not affect the test run status: ```python from qase.pytest import qase @qase.muted() -def test_example(): +def test_flaky_test(): + # Results are reported but won't fail the run assert True ``` --- -## Working with Parameters +## Working with Attachments + +Attach files, screenshots, logs, and other content to your test results.
-### Ignoring Parameters +### Attach File from Path -There are two ways to exclude specific parameters from Qase reports: +```python +from qase.pytest import qase -#### Using parametrize_ignore +def test_with_file(): + qase.attach("/path/to/screenshot.png") + qase.attach("/path/to/logs.txt") + assert True +``` -To exclude parameters from a specific parametrize decorator: +### Attach with MIME Type ```python from qase.pytest import qase -@qase.parametrize_ignore( - "test_input,expected", - [("3+5", 8), ("2+4", 6), ("6*9", 42)], - ids=["add_3_5", "add_2_4", "multiply_6_9"] -) -@pytest.mark.parametrize( - "param1,param2", - [(1, 2), (3, 4), (5, 6)], - ids=["param1_1_2", "param1_3_4", "param1_5_6"]) -def test_eval(test_input, expected, param1, param2): - print(param1, param2) - assert eval(test_input) == expected +def test_with_mime_type(): + qase.attach( + ("/path/to/data.json", "application/json"), + ("/path/to/report.xml", "application/xml"), + ) + assert True ``` -#### Using ignore_parameters +### Attach Content from Memory + +```python +from qase.pytest import qase -To exclude specific parameters from any parametrize decorator: +def test_with_content(): + # Text content + qase.attach(("Log content here", "text/plain", "test.log")) + + # Binary content (e.g., screenshot) + screenshot_bytes = driver.get_screenshot_as_png() + qase.attach((screenshot_bytes, "image/png", "screenshot.png")) + + assert True +``` + +### Attach in Fixtures ```python +import pytest from qase.pytest import qase -@pytest.mark.parametrize("browser", ["chrome", "firefox"]) -@pytest.mark.parametrize("user", ["user1", "user2"]) -@qase.ignore_parameters("user", "browser") -def test_login(browser, user): - # Both browser and user parameters will be ignored in Qase reports - assert browser in ["chrome", "firefox"] - assert user in ["user1", "user2"] +@pytest.fixture +def browser(): + driver = create_driver() + yield driver + + # Attach screenshot on teardown + screenshot = 
driver.get_screenshot_as_png() + qase.attach((screenshot, "image/png", "final_state.png")) + + driver.quit() ``` -You can also ignore only specific parameters: +> For more details, see [Attachments Guide](ATTACHMENTS.md). + +--- + +## Working with Steps + +Define test steps for detailed reporting in Qase. + +### Using Decorator ```python from qase.pytest import qase +@qase.step("Open login page") +def open_login_page(): + # Implementation + pass + +@qase.step("Enter username '{username}'") +def enter_username(username): + # Implementation + pass + +@qase.step("Click login button") +def click_login(): + # Implementation + pass + +def test_login_flow(): + open_login_page() + enter_username("testuser") + click_login() + assert True +``` + +### Using Context Manager + +```python +from qase.pytest import qase + +def test_checkout(): + with qase.step("Add item to cart"): + # Add item logic + pass + + with qase.step("Proceed to checkout"): + # Checkout logic + pass + + with qase.step("Complete payment"): + # Payment logic + pass + + assert True +``` + +### Nested Steps + +```python +from qase.pytest import qase + +def test_complex_flow(): + with qase.step("Setup test data"): + with qase.step("Create user"): + pass + with qase.step("Create product"): + pass + + with qase.step("Execute test"): + pass + + assert True +``` + +### Steps with Expected Results + +```python +from qase.pytest import qase + +@qase.step("Verify user is redirected", expected="User sees dashboard page") +def verify_redirect(): + # Verification logic + pass +``` + +> For more details, see [Steps Guide](STEPS.md). + +--- + +## Working with Parameters + +Report parameterized test data to Qase. 
+ +### Basic Parameterized Test + +```python +import pytest +from qase.pytest import qase + +@pytest.mark.parametrize("username,password", [ + ("user1", "pass1"), + ("user2", "pass2"), +]) +@qase.id(1) +def test_login(username, password): + assert login(username, password) +``` + +### Ignoring Specific Parameters + +Exclude specific parameters from Qase reports using `@qase.ignore_parameters()`: + +```python +import pytest +from qase.pytest import qase + @pytest.mark.parametrize("browser", ["chrome", "firefox"]) @pytest.mark.parametrize("user", ["user1", "user2"]) -@pytest.mark.parametrize("env", ["staging", "production"]) -@qase.ignore_parameters("user") -def test_login(browser, user, env): - # Only user parameter will be ignored, browser and env will be included - assert browser in ["chrome", "firefox"] - assert user in ["user1", "user2"] - assert env in ["staging", "production"] +@qase.ignore_parameters("browser") # Only 'user' will be reported +def test_login(browser, user): + assert True ``` -#### Combining both approaches +### Using parametrize_ignore -You can use both decorators together: +Replace `@pytest.mark.parametrize` entirely to ignore those parameters: ```python from qase.pytest import qase -@qase.parametrize_ignore("test_data", ["data1", "data2"]) +@qase.parametrize_ignore( + "internal_data", + [("data1",), ("data2",)], + ids=["case1", "case2"] +) +@pytest.mark.parametrize("visible_param", ["a", "b"]) +def test_with_mixed_params(internal_data, visible_param): + # internal_data is not reported, visible_param is reported + assert True +``` + +### Combining Both Approaches + +```python +import pytest +from qase.pytest import qase + +@qase.parametrize_ignore("debug_data", ["d1", "d2"]) @pytest.mark.parametrize("browser", ["chrome", "firefox"]) +@pytest.mark.parametrize("user", ["alice", "bob"]) @qase.ignore_parameters("browser") -def test_combined(browser, test_data): - # Both test_data (from parametrize_ignore) and browser (from ignore_parameters) will 
be ignored - assert browser in ["chrome", "firefox"] - assert test_data in ["data1", "data2"] +def test_complex_params(debug_data, browser, user): + # Only 'user' is reported to Qase + # 'debug_data' ignored via parametrize_ignore + # 'browser' ignored via ignore_parameters + assert True ``` +> For more details, see [Parameters Guide](PARAMETERS.md). + --- -## Advanced Configuration +## Multi-Project Support -For complete configuration options including profilers, log capture, xfail status, execution plans, and all other settings, see the [qase-python-commons README](../../qase-python-commons/README.md) and [Pytest Configuration Reference](CONFIGURATION.md). +Send test results to multiple Qase projects simultaneously using `@qase.project_id()`: + +```python +from qase.pytest import qase -### Quick Reference +@qase.project_id("PROJ1", 1, 2) # IDs 1, 2 in PROJ1 +@qase.project_id("PROJ2", 10) # ID 10 in PROJ2 +def test_shared_functionality(): + assert True +``` -* **Profilers**: Use `--qase-profilers=sleep,network` to enable profilers -* **Log Capture**: Use `--qase-pytest-capture-logs=true` to capture pytest logs -* **XFail Status**: Use `--qase-pytest-xfail-status-xfail=skipped` to configure xfail status -* **Execution Plans**: Use `--qase-execution-plan-path=plan.json` to run specific tests +For detailed configuration, examples, and troubleshooting, see the [Multi-Project Support Guide](MULTI_PROJECT.md). 
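The `@qase.project_id()` decorators shown above require the reporter to run in `testops_multi` mode. A minimal `qase.config.json` sketch, assuming two hypothetical project codes `PROJ1` and `PROJ2` (option names follow the qase-python-commons configuration reference; the token and run titles are placeholders):

```json
{
  "mode": "testops_multi",
  "testops": {
    "api": {
      "token": "<api_token>",
      "host": "qase.io"
    }
  },
  "testops_multi": {
    "default_project": "PROJ1",
    "projects": [
      { "code": "PROJ1", "run": { "title": "PROJ1 automated run" } },
      { "code": "PROJ2", "run": { "title": "PROJ2 automated run" } }
    ]
  }
}
```

Tests without an explicit `@qase.project_id()` mapping are reported to `default_project`.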
--- -## Multiple Decorators +## Running Tests -You can combine multiple decorators: +### Basic Execution -```python -from qase.pytest import qase +```sh +pytest +``` -@qase.id(1) -@qase.title("User Login Test") -@qase.suite("Authentication") -@qase.fields( - ("priority", "high"), - ("severity", "critical"), - ("layer", "UI") -) -def test_login(): - assert True +### With CLI Options + +```sh +pytest \ + --qase-mode=testops \ + --qase-testops-project=PROJ \ + --qase-testops-api-token=your_token \ + --qase-testops-run-title="Regression Run" +``` + +### With Environment Variables + +```sh +export QASE_MODE=testops +export QASE_TESTOPS_PROJECT=PROJ +export QASE_TESTOPS_API_TOKEN=your_token +pytest +``` + +### With Existing Test Run + +Report results to an existing test run (useful for parallel execution or mixed manual/automated runs): + +```sh +pytest --qase-testops-run-id=123 +``` + +### With Test Plan + +Run only tests from a specific test plan: + +```sh +pytest --qase-testops-plan-id=456 +``` + +### With Environment + +```sh +pytest --qase-environment=staging +``` + +### With Log Capture + +```sh +pytest --qase-pytest-capture-logs=true ``` --- -## Examples +## Complete Examples -### Complete Test Example +### Full Test Example ```python import pytest from qase.pytest import qase @qase.id(1) -@qase.title("User Registration Test") +@qase.title("User Registration Flow") @qase.suite("Authentication.Registration") @qase.fields( - ("priority", "high"), ("severity", "critical"), - ("layer", "UI") + ("priority", "high"), + ("layer", "e2e"), + ("description", "Verify new user can register successfully"), + ("preconditions", "- Application is running\n- Email service is available"), ) -def test_user_registration(): - # Test implementation - assert True +def test_user_registration(browser): + with qase.step("Open registration page"): + browser.goto("/register") + + with qase.step("Fill registration form"): + browser.fill("#email", "test@example.com") + 
browser.fill("#password", "SecurePass123") + + with qase.step("Submit form"): + browser.click("#submit") + + with qase.step("Verify success message"): + assert browser.text_content(".success") == "Registration successful" + @qase.id([2, 3]) @qase.title("User Login Test") @qase.suite("Authentication.Login") -@qase.ignore() # This test will be ignored in Qase -def test_user_login(): - # Test implementation - assert True +@pytest.mark.parametrize("email,password,expected", [ + ("valid@example.com", "correct", True), + ("valid@example.com", "wrong", False), +]) +def test_login(browser, email, password, expected): + with qase.step(f"Login with {email}"): + result = login(browser, email, password) + + assert result == expected + + +@qase.ignore() +def test_work_in_progress(): + # Not reported to Qase + pass +``` + +### Example Project Structure -@qase.parametrize_ignore( - "email,password", - [("user1@example.com", "pass1"), ("user2@example.com", "pass2")], - ids=["user1", "user2"] -) -def test_login_with_different_users(email, password): - # Test implementation - assert True +``` +my-project/ +├── qase.config.json +├── conftest.py +├── tests/ +│ ├── test_auth.py +│ ├── test_checkout.py +│ └── test_api.py +└── requirements.txt ``` -### Running Tests +--- -```bash -# Basic run -pytest --qase-mode=testops --qase-testops-project=MYPROJECT --qase-testops-api-token=YOUR_TOKEN +## Troubleshooting -# With environment and plan -pytest --qase-mode=testops --qase-testops-project=MYPROJECT --qase-testops-api-token=YOUR_TOKEN --qase-environment=staging --qase-testops-plan-id=123 +### Tests Not Appearing in Qase -# With execution plan -pytest --qase-mode=testops --qase-testops-project=MYPROJECT --qase-testops-api-token=YOUR_TOKEN --qase-execution-plan-path=plan.json +1. Verify `mode` is set to `testops` (not `off` or `report`) +2. Check API token has write permissions +3. Verify project code is correct +4. 
Check for errors in console output (enable `debug: true`) -# With profilers and logs -pytest --qase-mode=testops --qase-testops-project=MYPROJECT --qase-testops-api-token=YOUR_TOKEN --qase-profilers=network,db --qase-pytest-capture-logs=true -``` +### Attachments Not Uploading + +1. Verify file path exists and is readable +2. Check file size (large files may take time) +3. Enable debug logging to see upload status + +### Results Going to Wrong Test Cases + +1. Verify QaseID matches the test case ID in Qase +2. Check for duplicate IDs in your test suite +3. Verify you're using the correct project code + +### Parameterized Tests Creating Duplicates + +Use `@qase.ignore_parameters()` to exclude parameters that shouldn't differentiate test cases. + +--- + +## See Also + +- [Configuration Reference](../../qase-python-commons/README.md) +- [Attachments Guide](ATTACHMENTS.md) +- [Steps Guide](STEPS.md) +- [Parameters Guide](PARAMETERS.md) +- [Multi-Project Support](MULTI_PROJECT.md) +- [Upgrade Guide](UPGRADE.md) diff --git a/qase-python-commons/README.md b/qase-python-commons/README.md index a657d5dc..f435756a 100644 --- a/qase-python-commons/README.md +++ b/qase-python-commons/README.md @@ -1,20 +1,46 @@ # Qase Python Commons -## Description - -This module is an SDK for developing test reporters for Qase TMS. -It's using `qase-api-client` as an API client, and all Qase Python reporters are, in turn, -using this package. -You should use it if you're developing your own test reporter for a special-purpose framework. 
- -To report results from tests using a popular framework or test runner, -don't install this module directly and -use the corresponding reporter module instead: - -* [Pytest](https://github.com/qase-tms/qase-python/tree/main/qase-pytest#readme) -* [Behave](https://github.com/qase-tms/qase-python/tree/main/qase-behave#readme) -* [Robot Framework](https://github.com/qase-tms/qase-python/tree/main/qase-robotframework#readme) -* [Tavern](https://github.com/qase-tms/qase-python/tree/main/qase-tavern#readme) +[![PyPI version](https://img.shields.io/pypi/v/qase-python-commons?style=flat-square)](https://pypi.org/project/qase-python-commons/) +[![PyPI downloads](https://img.shields.io/pypi/dm/qase-python-commons?style=flat-square)](https://pypi.org/project/qase-python-commons/) +[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg?style=flat-square)](https://www.apache.org/licenses/LICENSE-2.0) + +Core library for all Qase Python reporters. Contains the complete configuration reference. 
+ +## Table of Contents + +- [About](#about) +- [Installation](#installation) +- [Reporters](#reporters) +- [Configuration](#configuration) + - [Configuration Priority](#configuration-priority) + - [Reporter Modes](#reporter-modes) + - [Common Options](#common-options) + - [TestOps Options (Single Project)](#testops-options-single-project) + - [TestOps Multi Options (Multiple Projects)](#testops-multi-options-multiple-projects) + - [Local Report Options](#local-report-options) + - [Logging Options](#logging-options) + - [Framework-Specific Options](#framework-specific-options) +- [Configuration Examples](#configuration-examples) + - [Single Project (testops)](#single-project-testops) + - [Multiple Projects (testops_multi)](#multiple-projects-testops_multi) + - [Environment Variables](#environment-variables) +- [Additional Features](#additional-features) + - [Status Mapping](#status-mapping) + - [Status Filtering](#status-filtering) + - [External Links](#external-links) + - [Test Run Configurations](#test-run-configurations) + +--- + +## About + +This module is an SDK for developing test reporters for Qase TMS. It uses `qase-api-client` as an API client, and all Qase Python reporters depend on this package. + +**Use this library if:** +- You're developing a custom reporter for a specialized framework +- You need a complete configuration reference + +**For testing, use the ready-made reporters** — see [Reporters](#reporters) section. 
## Installation @@ -22,99 +48,139 @@ use the corresponding reporter module instead: pip install qase-python-commons ``` +## Reporters + +For popular frameworks, use the ready-made reporters: + +| Framework | Package | Documentation | +|-----------|---------|---------------| +| Pytest | `qase-pytest` | [README](https://github.com/qase-tms/qase-python/tree/main/qase-pytest#readme) | +| Behave | `qase-behave` | [README](https://github.com/qase-tms/qase-python/tree/main/qase-behave#readme) | +| Robot Framework | `qase-robotframework` | [README](https://github.com/qase-tms/qase-python/tree/main/qase-robotframework#readme) | +| Tavern | `qase-tavern` | [README](https://github.com/qase-tms/qase-python/tree/main/qase-tavern#readme) | + +--- + ## Configuration -Qase Python Reporters can be configured in multiple ways: - -* using a config file `qase.config.json` -* using environment variables -* using command line options (for frameworks that support it, like pytest and tavern) - -Environment variables override the values given in the config file, -and command line options override both other values. 
- -All configuration options are listed in the tables below: - -### Common Configuration - -| Description | Config file | Environment variable | Default value | Required | Possible values | -|-----------------------------------------------------------------------------------------------------------------------|----------------------------|---------------------------------|-----------------------------------------|----------|----------------------------| -| **Common** | | | | | | -| Mode of reporter | `mode` | `QASE_MODE` | `off` | No | `testops`, `testops_multi`, `report`, `off` | -| Fallback mode of reporter | `fallback` | `QASE_FALLBACK` | `off` | No | `testops`, `testops_multi`, `report`, `off` | -| Environment | `environment` | `QASE_ENVIRONMENT` | undefined | No | Any string | -| Root suite | `rootSuite` | `QASE_ROOT_SUITE` | undefined | No | Any string | -| Enable debug logs | `debug` | `QASE_DEBUG` | `False` | No | `True`, `False` | -| Execution plan path | `executionPlan.path` | `QASE_EXECUTION_PLAN_PATH` | `./build/qase-execution-plan.json` | No | Any string | -| Exclude parameters from test results | `excludeParams` | `QASE_EXCLUDE_PARAMS` | undefined | No | Comma-separated list of parameter names | -| Map test result statuses to different values (format: `fromStatus=toStatus`) | `statusMapping` | `QASE_STATUS_MAPPING` | undefined | No | Object mapping statuses (e.g., `{"invalid": "failed", "skipped": "passed"}`) | -| **Logging configuration** | | | | | | -| Enable/disable console output for reporter logs | `logging.console` | `QASE_LOGGING_CONSOLE` | `True` | No | `True`, `False` | -| Enable/disable file output for reporter logs | `logging.file` | `QASE_LOGGING_FILE` | Same as `debug` setting | No | `True`, `False` | -| **Qase Report configuration** | | | | | | -| Driver used for report mode | `report.driver` | `QASE_REPORT_DRIVER` | `local` | No | `local` | -| Path to save the report | `report.connection.path` | `QASE_REPORT_CONNECTION_PATH` | 
`./build/qase-report` | No | Any string | -| Local report format | `report.connection.format` | `QASE_REPORT_CONNECTION_FORMAT` | `json` | No | `json`, `jsonp` | -| **Qase TestOps configuration (single project)** | | | | | | -| Token for [API access](https://developers.qase.io/#authentication) | `testops.api.token` | `QASE_TESTOPS_API_TOKEN` | undefined | Yes* | Any string | -| Qase API host. For enterprise users, specify address: `example.qase.io` | `testops.api.host` | `QASE_TESTOPS_API_HOST` | `qase.io` | No | Any string | -| Code of your project, which you can take from the URL: `https://app.qase.io/project/DEMOTR` - `DEMOTR` is the project code | `testops.project` | `QASE_TESTOPS_PROJECT` | undefined | Yes* | Any string | -| Qase test run ID | `testops.run.id` | `QASE_TESTOPS_RUN_ID` | undefined | No | Any integer | -| Qase test run title | `testops.run.title` | `QASE_TESTOPS_RUN_TITLE` | `Automated run ` | No | Any string | -| Qase test run description | `testops.run.description` | `QASE_TESTOPS_RUN_DESCRIPTION` | ` automated run` | No | Any string | -| Qase test run complete | `testops.run.complete` | `QASE_TESTOPS_RUN_COMPLETE` | `True` | No | `True`, `False` | -| Array of tags to be added to the test run | `testops.run.tags` | `QASE_TESTOPS_RUN_TAGS` | `[]` | No | Array of strings | -| External link to associate with test run (e.g., Jira ticket) | `testops.run.externalLink` | `QASE_TESTOPS_RUN_EXTERNAL_LINK` | undefined | No | JSON object with `type` (`jiraCloud` or `jiraServer`) and `link` (e.g., `PROJ-123`) | -| Qase test plan ID | `testops.plan.id` | `QASE_TESTOPS_PLAN_ID` | undefined | No | Any integer | -| Size of batch for sending test results | `testops.batch.size` | `QASE_TESTOPS_BATCH_SIZE` | `200` | No | Any integer (1 to 2000) | -| Enable defects for failed test cases | `testops.defect` | `QASE_TESTOPS_DEFECT` | `False` | No | `True`, `False` | -| Filter test results by status (comma-separated list of statuses to exclude from reporting) | 
`testops.statusFilter` | `QASE_TESTOPS_STATUS_FILTER` | undefined | No | Array of strings (`passed`, `failed`, `skipped`, `invalid`) | -| Configuration values to create/find in groups (format: `group1=value1,group2=value2`) | `testops.configurations.values` | `QASE_TESTOPS_CONFIGURATIONS_VALUES` | undefined | No | Array of objects with `name` and `value` fields | -| Create configuration groups if they don't exist | `testops.configurations.createIfNotExists` | `QASE_TESTOPS_CONFIGURATIONS_CREATE_IF_NOT_EXISTS` | `false` | No | `True`, `False` | -| Enable public report link generation and display after test run completion | `testops.showPublicReportLink` | `QASE_TESTOPS_SHOW_PUBLIC_REPORT_LINK` | `False` | No | `True`, `False` | -| **Qase TestOps Multi-Project configuration** | | | | | | -| Default project code for tests without explicit project mapping | `testops_multi.default_project` | `QASE_TESTOPS_MULTI_DEFAULT_PROJECT` | undefined | No | Any string (must match one of the project codes in `projects`) | -| Array of project configurations | `testops_multi.projects` | N/A (use config file) | `[]` | Yes** | Array of project configuration objects | -| Project code | `testops_multi.projects[].code` | N/A | undefined | Yes** | Any string | -| Project-specific test run title | `testops_multi.projects[].run.title` | N/A | `Automated Run ` | No | Any string | -| Project-specific test run description | `testops_multi.projects[].run.description` | N/A | `Automated Run ` | No | Any string | -| Project-specific test run complete | `testops_multi.projects[].run.complete` | N/A | `True` | No | `True`, `False` | -| Project-specific test run ID | `testops_multi.projects[].run.id` | N/A | undefined | No | Any integer | -| Project-specific test run tags | `testops_multi.projects[].run.tags` | N/A | `[]` | No | Array of strings | -| Project-specific external link | `testops_multi.projects[].run.externalLink` | N/A | undefined | No | JSON object with `type` and `link` | -| 
Project-specific test plan ID | `testops_multi.projects[].plan.id` | N/A | undefined | No | Any integer | -| Project-specific environment | `testops_multi.projects[].environment` | N/A | Uses global `environment` if not set | No | Any string or integer (environment ID) | - -\* Required when using `testops` mode +### Configuration Priority + +Qase Python reporters support three configuration methods (in order of priority): + +1. **CLI options** (pytest and tavern only) — highest priority +2. **Environment variables** (`QASE_*`) +3. **Config file** (`qase.config.json`) — lowest priority + +### Reporter Modes + +The reporter mode is set via the `mode` option: + +| Mode | Description | +|------|-------------| +| `testops` | Send results to a single Qase project | +| `testops_multi` | Send results to multiple projects | +| `report` | Generate a local JSON report | +| `off` | Reporter disabled (default) | + +### Common Options + +| Description | Config file | Environment variable | Default | Required | +|-------------|-------------|---------------------|---------|----------| +| Reporter mode | `mode` | `QASE_MODE` | `off` | No | +| Fallback mode | `fallback` | `QASE_FALLBACK` | `off` | No | +| Environment | `environment` | `QASE_ENVIRONMENT` | — | No | +| Root suite | `rootSuite` | `QASE_ROOT_SUITE` | — | No | +| Debug mode | `debug` | `QASE_DEBUG` | `False` | No | +| Execution plan path | `executionPlan.path` | `QASE_EXECUTION_PLAN_PATH` | `./build/qase-execution-plan.json` | No | +| Exclude parameters | `excludeParams` | `QASE_EXCLUDE_PARAMS` | — | No | +| Status mapping | `statusMapping` | `QASE_STATUS_MAPPING` | — | No | + +### TestOps Options (Single Project) + +| Description | Config file | Environment variable | Default | Required | +|-------------|-------------|---------------------|---------|----------| +| API token | `testops.api.token` | `QASE_TESTOPS_API_TOKEN` | — | Yes* | +| API host | `testops.api.host` | `QASE_TESTOPS_API_HOST` | `qase.io` | No | +| 
Project code | `testops.project` | `QASE_TESTOPS_PROJECT` | — | Yes* | +| Test run ID | `testops.run.id` | `QASE_TESTOPS_RUN_ID` | — | No | +| Test run title | `testops.run.title` | `QASE_TESTOPS_RUN_TITLE` | `Automated run ` | No | +| Test run description | `testops.run.description` | `QASE_TESTOPS_RUN_DESCRIPTION` | ` automated run` | No | +| Complete test run | `testops.run.complete` | `QASE_TESTOPS_RUN_COMPLETE` | `True` | No | +| Test run tags | `testops.run.tags` | `QASE_TESTOPS_RUN_TAGS` | `[]` | No | +| External link | `testops.run.externalLink` | `QASE_TESTOPS_RUN_EXTERNAL_LINK` | — | No | +| Test plan ID | `testops.plan.id` | `QASE_TESTOPS_PLAN_ID` | — | No | +| Batch size | `testops.batch.size` | `QASE_TESTOPS_BATCH_SIZE` | `200` | No | +| Create defects | `testops.defect` | `QASE_TESTOPS_DEFECT` | `False` | No | +| Status filter | `testops.statusFilter` | `QASE_TESTOPS_STATUS_FILTER` | — | No | +| Configuration values | `testops.configurations.values` | `QASE_TESTOPS_CONFIGURATIONS_VALUES` | — | No | +| Create configurations | `testops.configurations.createIfNotExists` | `QASE_TESTOPS_CONFIGURATIONS_CREATE_IF_NOT_EXISTS` | `false` | No | +| Show public report link | `testops.showPublicReportLink` | `QASE_TESTOPS_SHOW_PUBLIC_REPORT_LINK` | `False` | No | + +\* Required when using `testops` mode + +### TestOps Multi Options (Multiple Projects) + +| Description | Config file | Environment variable | Default | Required | +|-------------|-------------|---------------------|---------|----------| +| Default project | `testops_multi.default_project` | `QASE_TESTOPS_MULTI_DEFAULT_PROJECT` | — | No | +| Projects array | `testops_multi.projects` | — | `[]` | Yes** | +| Project code | `testops_multi.projects[].code` | — | — | Yes** | +| Test run title | `testops_multi.projects[].run.title` | — | `Automated Run ` | No | +| Test run description | `testops_multi.projects[].run.description` | — | `Automated Run ` | No | +| Complete test run | 
`testops_multi.projects[].run.complete` | — | `True` | No | +| Test run ID | `testops_multi.projects[].run.id` | — | — | No | +| Test run tags | `testops_multi.projects[].run.tags` | — | `[]` | No | +| External link | `testops_multi.projects[].run.externalLink` | — | — | No | +| Test plan ID | `testops_multi.projects[].plan.id` | — | — | No | +| Environment | `testops_multi.projects[].environment` | — | Global | No | + \** Required when using `testops_multi` mode -### Framework-Specific Configuration +**Multi-project annotations:** -#### Pytest +| Framework | Syntax | +|-----------|--------| +| Pytest | `@qase.project_id("CODE", 1, 2, 3)` | +| Behave | `@qase.project_id.CODE:1,2,3` | +| Robot Framework | `Q-PROJECT.CODE-1,2,3` | +| Tavern | `QaseProjectID.CODE=1,2,3` in test name | -| Description | Config file | Environment variable | CLI option | Default value | Required | Possible values | -|------------------------------------------------|--------------------------------------|----------------------------------|------------------------------------|-----------------------------------------|----------|----------------------------| -| Capture logs | `framework.pytest.captureLogs` | `QASE_PYTEST_CAPTURE_LOGS` | `--qase-pytest-capture-logs` | `False` | No | `true`, `false` | -| XFail status for failed tests | `framework.pytest.xfailStatus.xfail` | `QASE_PYTEST_XFAIL_STATUS_XFAIL` | `--qase-pytest-xfail-status-xfail` | `Skipped` | No | Any string | -| XFail status for passed tests | `framework.pytest.xfailStatus.xpass` | `QASE_PYTEST_XFAIL_STATUS_XPASS` | `--qase-pytest-xfail-status-xpass` | `Passed` | No | Any string | +See details: [Pytest](../qase-pytest/docs/MULTI_PROJECT.md) | [Behave](../qase-behave/docs/MULTI_PROJECT.md) | [Robot Framework](../qase-robotframework/docs/MULTI_PROJECT.md) | [Tavern](../qase-tavern/docs/MULTI_PROJECT.md) -#### Behave +### Local Report Options -Behave reporter uses the same common configuration options. 
There are no framework-specific options for Behave. +| Description | Config file | Environment variable | Default | +|-------------|-------------|---------------------|---------| +| Driver | `report.driver` | `QASE_REPORT_DRIVER` | `local` | +| Report path | `report.connection.path` | `QASE_REPORT_CONNECTION_PATH` | `./build/qase-report` | +| Report format | `report.connection.format` | `QASE_REPORT_CONNECTION_FORMAT` | `json` | -#### Robot Framework +### Logging Options -Robot Framework reporter uses the same common configuration options. There are no framework-specific options for Robot Framework. +| Description | Config file | Environment variable | Default | +|-------------|-------------|---------------------|---------| +| Console output | `logging.console` | `QASE_LOGGING_CONSOLE` | `True` | +| File output | `logging.file` | `QASE_LOGGING_FILE` | Same as `debug` | -#### Tavern +### Framework-Specific Options -Tavern reporter uses the same common configuration options. There are no framework-specific options for Tavern. +#### Pytest -## Configuration Examples +| Description | Config file | Environment variable | CLI | Default | +|-------------|-------------|---------------------|-----|---------| +| Capture logs | `framework.pytest.captureLogs` | `QASE_PYTEST_CAPTURE_LOGS` | `--qase-pytest-capture-logs` | `False` | +| XFail status (failed) | `framework.pytest.xfailStatus.xfail` | `QASE_PYTEST_XFAIL_STATUS_XFAIL` | `--qase-pytest-xfail-status-xfail` | `Skipped` | +| XFail status (passed) | `framework.pytest.xfailStatus.xpass` | `QASE_PYTEST_XFAIL_STATUS_XPASS` | `--qase-pytest-xfail-status-xpass` | `Passed` | + +#### Behave, Robot Framework, Tavern -### Single Project Configuration (`testops` mode) +These frameworks use only the common configuration options. 
-Example `qase.config.json` config: +--- + +## Configuration Examples + +### Single Project (testops) ```json { @@ -122,95 +188,31 @@ Example `qase.config.json` config: "fallback": "report", "debug": false, "environment": "local", - "excludeParams": ["password", "token"], - "statusMapping": { - "invalid": "failed", - "skipped": "passed" - }, - "logging": { - "console": true, - "file": true - }, - "report": { - "driver": "local", - "connection": { - "local": { - "path": "./build/qase-report", - "format": "json" - } - } - }, "testops": { "api": { "token": "", "host": "qase.io" }, - "project": "", + "project": "DEMO", "run": { "title": "Regress run", - "description": "Regress run description", + "description": "Automated regression tests", "complete": true, - "tags": ["tag1", "tag2"], - "externalLink": { - "type": "jiraCloud", - "link": "PROJ-123" - } + "tags": ["regression", "automated"] }, - "defect": false, "batch": { "size": 100 - }, - "statusFilter": ["passed", "skipped"], - "showPublicReportLink": true, - "configurations": { - "values": [ - { - "name": "group1", - "value": "value1" - }, - { - "name": "group2", - "value": "value2" - } - ], - "createIfNotExists": true - } - }, - "framework": { - "pytest": { - "captureLogs": true, - "xfailStatus": { - "xfail": "Skipped", - "xpass": "Passed" - } } } } ``` -### Multi-Project Configuration (`testops_multi` mode) - -Example `qase.config.json` config for multi-project reporting: +### Multiple Projects (testops_multi) ```json { "mode": "testops_multi", "fallback": "report", - "debug": false, - "environment": "local", - "logging": { - "console": true, - "file": false - }, - "report": { - "driver": "local", - "connection": { - "local": { - "path": "./build/qase-report", - "format": "json" - } - } - }, "testops": { "api": { "token": "", @@ -218,9 +220,7 @@ Example `qase.config.json` config for multi-project reporting: }, "batch": { "size": 100 - }, - "statusFilter": ["passed"], - "showPublicReportLink": true + } }, 
"testops_multi": { "default_project": "DEMO1", @@ -228,27 +228,16 @@ Example `qase.config.json` config for multi-project reporting: { "code": "DEMO1", "run": { - "title": "DEMO1 Multi-Project Run", - "description": "Test run for DEMO1 project", - "complete": true, - "tags": ["staging", "regression"], - "externalLink": { - "type": "jiraCloud", - "link": "PROJ-123" - } - }, - "plan": { - "id": 1 + "title": "DEMO1 Test Run", + "tags": ["staging"] }, "environment": "staging" }, { "code": "DEMO2", "run": { - "title": "DEMO2 Multi-Project Run", - "description": "Test run for DEMO2 project", - "complete": true, - "tags": ["production", "regression"] + "title": "DEMO2 Test Run", + "tags": ["production"] }, "environment": "production" } @@ -257,92 +246,32 @@ Example `qase.config.json` config for multi-project reporting: } ``` -### Environment Variables Example +### Environment Variables ```bash # Common settings export QASE_MODE="testops" export QASE_FALLBACK="report" export QASE_ENVIRONMENT="local" -export QASE_DEBUG="true" -export QASE_ROOT_SUITE="MyTestSuite" -export QASE_EXCLUDE_PARAMS="password,token" -export QASE_STATUS_MAPPING="invalid=failed,skipped=passed" - -# Logging configuration -export QASE_LOGGING_CONSOLE="true" -export QASE_LOGGING_FILE="false" +export QASE_DEBUG="false" -# Report mode configuration -export QASE_REPORT_DRIVER="local" -export QASE_REPORT_CONNECTION_PATH="./build/qase-report" -export QASE_REPORT_CONNECTION_FORMAT="json" - -# TestOps configuration (single project) +# TestOps export QASE_TESTOPS_API_TOKEN="" -export QASE_TESTOPS_API_HOST="qase.io" export QASE_TESTOPS_PROJECT="DEMO" -export QASE_TESTOPS_RUN_TITLE="My Test Run" -export QASE_TESTOPS_RUN_DESCRIPTION="Test run description" +export QASE_TESTOPS_RUN_TITLE="Automated Run" export QASE_TESTOPS_RUN_COMPLETE="true" -export QASE_TESTOPS_RUN_TAGS="tag1,tag2" -export QASE_TESTOPS_RUN_EXTERNAL_LINK='{"type":"jiraCloud","link":"PROJ-123"}' -export QASE_TESTOPS_PLAN_ID="1" -export 
QASE_TESTOPS_BATCH_SIZE="100" -export QASE_TESTOPS_DEFECT="false" -export QASE_TESTOPS_STATUS_FILTER="passed,skipped" -export QASE_TESTOPS_SHOW_PUBLIC_REPORT_LINK="true" - -# TestOps configurations -export QASE_TESTOPS_CONFIGURATIONS_VALUES='[{"name":"browser","value":"chrome"},{"name":"os","value":"linux"}]' -export QASE_TESTOPS_CONFIGURATIONS_CREATE_IF_NOT_EXISTS="true" - -# Pytest-specific -export QASE_PYTEST_CAPTURE_LOGS="true" -export QASE_PYTEST_XFAIL_STATUS_XFAIL="Skipped" -export QASE_PYTEST_XFAIL_STATUS_XPASS="Passed" -# Multi-project configuration (default project only) -export QASE_TESTOPS_MULTI_DEFAULT_PROJECT="DEMO1" +# Pytest +export QASE_PYTEST_CAPTURE_LOGS="true" ``` -## Multi-Project Support - -The multi-project feature allows you to send test results to multiple Qase projects simultaneously, with different test case IDs for each project. This is useful when: - -* You need to report the same test to different projects -* Different projects track the same functionality with different test case IDs -* You want to maintain separate test runs for different environments or teams - -### How It Works +--- -1. Configure multiple projects in `testops_multi.projects` array -2. Each project can have its own run configuration (title, description, tags, plan, environment) -3. Use framework-specific annotations to map test cases to projects: - * **Pytest**: Use `@qase.project_id()` decorator - * **Behave**: Use `@qase.project_id.PROJECT_CODE:IDS` tag format - * **Robot Framework**: Use `Q-PROJECT.PROJECT_CODE-IDS` tag format - * **Tavern**: Use `QaseProjectID.PROJECT_CODE=IDS` format in test names -4. 
Tests without explicit project mapping will be sent to the `default_project` +## Additional Features -### Framework-Specific Documentation +### Status Mapping -For detailed framework-specific documentation on multi-project support, see: - -* **[Pytest Multi-Project Guide](../qase-pytest/docs/MULTI_PROJECT.md)** - Detailed guide for using multi-project support with Pytest, including decorator usage, parametrized tests, and test classes -* **[Behave Multi-Project Guide](../qase-behave/docs/MULTI_PROJECT.md)** - Detailed guide for using multi-project support with Behave, including tag formats, feature-level tags, and scenario mapping -* **[Robot Framework Multi-Project Guide](../qase-robotframework/docs/MULTI_PROJECT.md)** - Detailed guide for using multi-project support with Robot Framework, including tag formats, suite-level tags, and parameter handling -* **[Tavern Multi-Project Guide](../qase-tavern/docs/MULTI_PROJECT.md)** - Detailed guide for using multi-project support with Tavern, including test name formats, extraction rules, and troubleshooting - -### Example Usage - -For detailed examples, see the [multi-project examples directory](../examples/multiproject/). - -## Status Mapping - -You can map test result statuses to different values using the `statusMapping` configuration option. This is useful when you want to change how certain statuses are reported to Qase. - -Example: +Allows changing test result status before sending to Qase: ```json { @@ -353,16 +282,11 @@ Example: } ``` -This will map: - -* `invalid` status → `failed` in Qase -* `skipped` status → `passed` in Qase +**Available statuses:** `passed`, `failed`, `skipped`, `invalid` -## Status Filtering +### Status Filtering -You can filter out test results by status using the `testops.statusFilter` configuration option. Results with statuses in the filter list will not be sent to Qase. 
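
The two mechanisms described here, status mapping and status filtering, compose naturally: statuses are first remapped, then filtered. The following is a hypothetical sketch of that combination for illustration only, not the reporter's internal code (`apply_status_rules` and the result dicts are invented names):

```python
# Hypothetical sketch of how statusMapping and statusFilter could combine.
# Not the reporter's actual implementation.

def apply_status_rules(results, status_mapping, status_filter):
    """Remap each result's status, then drop results whose status is filtered."""
    remapped = [
        {**r, "status": status_mapping.get(r["status"], r["status"])}
        for r in results
    ]
    return [r for r in remapped if r["status"] not in status_filter]

results = [
    {"id": 1, "status": "invalid"},
    {"id": 2, "status": "passed"},
    {"id": 3, "status": "skipped"},
]

# Equivalent config: {"statusMapping": {"invalid": "failed"},
#                     "testops": {"statusFilter": ["passed"]}}
final = apply_status_rules(results, {"invalid": "failed"}, ["passed"])
# id=1 is remapped to "failed" and kept; id=2 is dropped; id=3 stays "skipped"
```

Note that filtering applies to the status as seen after mapping, so a result mapped from `invalid` to `failed` is only dropped if `failed` is in the filter list.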
- -Example: +Excludes results with specified statuses from being sent: ```json { @@ -372,13 +296,9 @@ Example: } ``` -This will exclude all `passed` and `skipped` results from being reported to Qase. - -## External Links +### External Links -You can associate external links (e.g., Jira tickets) with test runs using the `testops.run.externalLink` configuration. - -Example: +Associates the test run with external resources (e.g., Jira): ```json { @@ -393,30 +313,19 @@ Example: } ``` -Supported types: - -* `jiraCloud` - For Jira Cloud -* `jiraServer` - For Jira Server +**Types:** `jiraCloud`, `jiraServer` -## Configurations +### Test Run Configurations -You can specify test run configurations that will be created or found in Qase TestOps. - -Example: +Creates or finds configurations in Qase TestOps: ```json { "testops": { "configurations": { "values": [ - { - "name": "browser", - "value": "chrome" - }, - { - "name": "os", - "value": "linux" - } + { "name": "browser", "value": "chrome" }, + { "name": "os", "value": "linux" } ], "createIfNotExists": true } @@ -424,4 +333,13 @@ Example: } ``` -If `createIfNotExists` is `true`, configuration groups and values will be created automatically if they don't exist. 
+--- + +## Requirements + +- Python 3.9+ +- qase-api-client + +## License + +Apache 2.0 — see [LICENSE](../LICENSE) diff --git a/qase-robotframework/README.md b/qase-robotframework/README.md index 91f17ce6..02720212 100644 --- a/qase-robotframework/README.md +++ b/qase-robotframework/README.md @@ -1,6 +1,20 @@ # [Qase TestOps](https://qase.io) Robot Framework Reporter -[![License](https://lxgaming.github.io/badges/License-Apache%202.0-blue.svg)](https://www.apache.org/licenses/LICENSE-2.0) +[![PyPI version](https://img.shields.io/pypi/v/qase-robotframework?style=flat-square)](https://pypi.org/project/qase-robotframework/) +[![PyPI downloads](https://img.shields.io/pypi/dm/qase-robotframework?style=flat-square)](https://pypi.org/project/qase-robotframework/) +[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg?style=flat-square)](https://www.apache.org/licenses/LICENSE-2.0) + +Qase Robot Framework Reporter enables seamless integration between your Robot Framework tests and [Qase TestOps](https://qase.io), providing automatic test result reporting, test case management, and comprehensive test analytics. + +## Features + +- Link automated tests to Qase test cases by ID +- Auto-create test cases from your test suites +- Report test results with rich metadata (fields, parameters) +- Automatic step reporting from keywords +- Multi-project reporting support +- Support for parallel execution with pabot +- Flexible configuration (file, environment variables) ## Installation @@ -8,38 +22,71 @@ pip install qase-robotframework ``` -## Upgrade from 2.x to 3.x +## Quick Start -The new version 3.x of the Robot Framework reporter has breaking changes. -To migrate from versions 2.x, follow the [upgrade guide](docs/UPGRADE.md). +**1. Create `qase.config.json` in your project root:** -## Configuration +```json +{ + "mode": "testops", + "testops": { + "project": "YOUR_PROJECT_CODE", + "api": { + "token": "YOUR_API_TOKEN" + } + } +} +``` + +**2. 
Add Qase ID to your test:** + +```robotframework +*** Test Cases *** +User can log in with valid credentials + [Tags] Q-1 + Open Login Page + Enter Valid Credentials + Verify Dashboard Is Visible +``` + +**3. Run your tests:** -Qase Robot Framework Reporter is configured in multiple ways: +```sh +robot --listener qase.robotframework.Listener tests/ +``` -- using a config file `qase.config.json` -- using environment variables +## Upgrading -Environment variables override the values given in the config file. +For migration guides between major versions, see [Upgrade Guide](docs/UPGRADE.md). + +## Configuration -For complete configuration reference, see the [qase-python-commons README](../qase-python-commons/README.md) which contains all available configuration options. +The reporter is configured via (in order of priority): +1. **Environment variables** (`QASE_*`, highest priority) +2. **Config file** (`qase.config.json`) -### Example: qase.config.json +### Minimal Configuration + +| Option | Environment Variable | Description | +|--------|---------------------|-------------| +| `mode` | `QASE_MODE` | Set to `testops` to enable reporting | +| `testops.project` | `QASE_TESTOPS_PROJECT` | Your Qase project code | +| `testops.api.token` | `QASE_TESTOPS_API_TOKEN` | Your Qase API token | + +### Example `qase.config.json` ```json { "mode": "testops", "fallback": "report", - "debug": true, "testops": { "project": "YOUR_PROJECT_CODE", "api": { - "token": "YOUR_API_TOKEN", - "host": "qase.io" + "token": "YOUR_API_TOKEN" }, "run": { - "title": "Test run title" + "title": "Robot Framework Automated Run" }, "batch": { "size": 100 @@ -53,98 +100,135 @@ For complete configuration reference, see the [qase-python-commons README](../qa "format": "json" } } - }, - "logging": { - "console": true, - "file": false - }, - "environment": "local" + } } ``` -## Usage +> **Full configuration reference:** See [qase-python-commons](../qase-python-commons/README.md) for all available options 
including logging, status mapping, execution plans, and more. -For detailed instructions on using annotations and methods, refer to [Usage](docs/usage.md). +## Usage -### Multi-Project Support +### Link Tests with Test Cases -Qase Robot Framework Reporter supports sending test results to multiple Qase projects simultaneously. -You can specify different test case IDs for each project using the `Q-PROJECT.PROJECT_CODE-IDS` tag format. +Associate your tests with Qase test cases using the `Q-{ID}` tag: -For detailed information, configuration, and examples, see the [Multi-Project Support Guide](docs/MULTI_PROJECT.md). +```robotframework +*** Test Cases *** +User Registration + [Tags] Q-1 + Open Registration Page + Fill Registration Form + Submit Form + Verify Registration Success + +User Login + [Tags] Q-2 Q-3 + Open Login Page + Enter Credentials + Click Login Button +``` -### Link tests with test cases in Qase TestOps +### Add Metadata -To link the automated tests with the test cases in Qase TestOps, use the tags in form like -`Q-`. 
-Example: +Enhance your tests with fields using the `qase.fields` tag: ```robotframework *** Test Cases *** -Push button - [Tags] q-2 - Push button 1 - Result should be 1 - -Push multiple buttons - [Tags] Q-3 - Push button 1 - Push button 2 - Result should be 12 +Critical Purchase Flow + [Tags] Q-1 qase.fields:{"severity":"critical","priority":"high","layer":"e2e"} + [Documentation] Verify user can complete a purchase + Add Item To Cart + Proceed To Checkout + Complete Payment ``` +### Add Parameters + +Report specific variables as parameters using the `qase.params` tag: + ```robotframework -*** Test Cases *** Expression Expected -Addition 12 + 2 + 2 16 - 2 + -3 -1 - [Tags] Q-7 - -Subtraction 12 - 2 - 2 8 - 2 - -3 5 - [Tags] Q-8 -``` +*** Variables *** +${USERNAME} testuser +${PASSWORD} testpass -### Working with steps +*** Test Cases *** +Login Test + [Tags] Q-1 qase.params:[USERNAME, PASSWORD] + Login With Credentials ${USERNAME} ${PASSWORD} + Verify Login Success +``` -Listener supports reporting steps results: +### Ignore Tests -Example: +Exclude specific tests from Qase reporting: ```robotframework -Quick Get A JSON Body Test ## Test case: "Quick Get A JSON Body Test" - [Tags] Q-3 - ${response}= GET https://jsonplaceholder.typicode.com/posts/1 ## 1-st step - "GET" - Should Be Equal As Strings 1 ${response.json()}[id] ## 2-nd step - "Should Be Equal As Strings" - -Initializing the test case ## Test case: "Initializing the test case" - [Tags] q-4 - Set To Dictionary ${info} field1=A sample string ## 1-st step - "Set To Dictionary" +*** Test Cases *** +Work In Progress + [Tags] qase.ignore + Log This test is not reported to Qase ``` -### Working with parameters +### Test Result Statuses -Listener supports reporting parameters: +| Robot Framework Result | Qase Status | +|------------------------|-------------| +| PASS | `passed` | +| FAIL (AssertionError) | `failed` | +| FAIL (other exception) | `invalid` | +| SKIP | `skipped` | -Example: +> For detailed usage 
examples, see the [Usage Guide](docs/usage.md). -```robotframework -*** Variables *** -${var1} 1 -${var2} 1 -${var3} 2 +## Running Tests -*** Test Cases *** -Simple test - [Arguments] ${var1} ${var2} ${var3} - [Tags] qase.params:[var1, var2] - Should Be Equal As Numbers ${var1} ${var2} - Should Be Equal As Numbers ${var3} ${var3} +### Basic Execution + +```sh +robot --listener qase.robotframework.Listener tests/ ``` -Only `var1` and `var2` will be sent to Qase. +### With Environment Variables -### Execution: +```sh +export QASE_MODE=testops +export QASE_TESTOPS_PROJECT=PROJ +export QASE_TESTOPS_API_TOKEN=your_token +robot --listener qase.robotframework.Listener tests/ +``` + +### With Robot Variables +```sh +robot --listener qase.robotframework.Listener \ + --variable QASE_TESTOPS_PROJECT:PROJ \ + --variable QASE_TESTOPS_API_TOKEN:your_token \ + tests/ ``` -robot --listener qase.robotframework.Listener someTest.robot + +### Parallel Execution with Pabot + +```sh +pabot --listener qase.robotframework.Listener tests/ ``` + +## Requirements + +- Python >= 3.9 +- robotframework >= 5.0.0 + +## Documentation + +| Guide | Description | +|-------|-------------| +| [Usage Guide](docs/usage.md) | Complete usage reference with all tags and options | +| [Multi-Project Support](docs/MULTI_PROJECT.md) | Reporting to multiple Qase projects | +| [Upgrade Guide](docs/UPGRADE.md) | Migration guide for breaking changes | + +## Examples + +See the [examples directory](../examples/) for complete working examples. + +## License + +Apache License 2.0. See [LICENSE](../LICENSE) for details. 
diff --git a/qase-robotframework/docs/usage.md b/qase-robotframework/docs/usage.md index 9e04d4c5..bbc97cb9 100644 --- a/qase-robotframework/docs/usage.md +++ b/qase-robotframework/docs/usage.md @@ -1,259 +1,449 @@ # Qase Integration in Robot Framework -This guide demonstrates how to integrate Qase with Robot Framework, providing instructions on how to add Qase IDs, fields, and other metadata to your test cases. +This guide provides comprehensive instructions for integrating Qase with Robot Framework. > **Configuration:** For complete configuration reference including all available options, environment variables, and examples, see the [qase-python-commons README](../../qase-python-commons/README.md). -## Adding QaseID to a Test +--- + +## Table of Contents + +- [Adding QaseID](#adding-qaseid) +- [Adding Fields](#adding-fields) +- [Adding Parameters](#adding-parameters) +- [Ignoring Tests](#ignoring-tests) +- [Working with Steps](#working-with-steps) +- [Multi-Project Support](#multi-project-support) +- [Running Tests](#running-tests) +- [Complete Examples](#complete-examples) + +--- + +## Adding QaseID -To associate a QaseID with a test in Robot Framework, use the `Q-{ID}` tag format. +Link your tests to existing test cases in Qase using the `Q-{ID}` tag format. 
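
To make the `Q-{ID}` tag format concrete, here is a hypothetical sketch of how case IDs could be extracted from a test's tag list, matching the case-insensitive behavior documented below (this is an illustration, not the listener's actual code):

```python
import re

# Hypothetical sketch: extracting Qase case IDs from Robot Framework tags.
# Not the listener's actual implementation.
QASE_ID_TAG = re.compile(r"^q-(\d+)$", re.IGNORECASE)

def extract_case_ids(tags):
    """Return all Qase case IDs found in a test's tag list, in order."""
    ids = []
    for tag in tags:
        match = QASE_ID_TAG.match(tag)
        if match:
            ids.append(int(match.group(1)))
    return ids

print(extract_case_ids(["Q-1", "smoke", "q-12"]))  # [1, 12]
```

Non-matching tags such as `smoke` are simply ignored, which is why Qase tags can coexist with ordinary Robot Framework tags on the same test.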
+ +### Single ID ```robotframework *** Test Cases *** -Test with QaseID - [Tags] Q-10 - Step 01 - Step 02 - Passed step - -Test with multiple QaseIDs - [Tags] Q-11 Q-12 - Step 01 - Step 02 - Passed step +User Can Log In + [Tags] Q-1 + Open Login Page + Enter Valid Credentials + Verify Dashboard Is Visible ``` -### Multi-Project Support +### Multiple IDs + +Link one test to multiple test cases: + +```robotframework +*** Test Cases *** +Complete Purchase Flow + [Tags] Q-1 Q-2 Q-3 + Add Item To Cart + Proceed To Checkout + Complete Payment +``` + +### Case Insensitive + +Both `Q-1` and `q-1` are valid: + +```robotframework +*** Test Cases *** +Test With Lowercase Tag + [Tags] q-1 + Log This works too +``` -Qase Robot Framework Reporter supports sending test results to multiple Qase projects simultaneously with different test case IDs for each project. +### Multi-Project Support -For detailed information, configuration, examples, and troubleshooting, see the [Multi-Project Support Guide](MULTI_PROJECT.md). +To send test results to multiple Qase projects simultaneously, see the [Multi-Project Support Guide](MULTI_PROJECT.md). --- -## Adding Fields to a Test +## Adding Fields + +Add metadata to your tests using the `qase.fields` tag with JSON format. -The `qase.fields` tag allows you to add additional metadata to a test case using JSON format. 
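
Since the `qase.fields:` tag carries a JSON payload after its prefix, parsing it can be sketched as follows (a hypothetical illustration, not the listener's actual code; `parse_fields_tag` is an invented helper):

```python
import json

# Hypothetical sketch: reading the JSON payload of a qase.fields tag.
# Not the listener's actual implementation.
FIELDS_PREFIX = "qase.fields:"

def parse_fields_tag(tag):
    """Parse a qase.fields tag into a dict, or return None for other tags."""
    if not tag.startswith(FIELDS_PREFIX):
        return None
    return json.loads(tag[len(FIELDS_PREFIX):])

fields = parse_fields_tag('qase.fields:{"severity":"critical","priority":"high"}')
# {'severity': 'critical', 'priority': 'high'}
```

Because the payload is plain JSON, keys must be double-quoted; a tag like `qase.fields:{'severity': 'critical'}` would not parse.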
+### System Fields + +| Field | Description | Example Values | +|-------|-------------|----------------| +| `description` | Test case description | Any text | +| `preconditions` | Test preconditions | Any text | +| `postconditions` | Test postconditions | Any text | +| `severity` | Test severity | `blocker`, `critical`, `major`, `normal`, `minor`, `trivial` | +| `priority` | Test priority | `high`, `medium`, `low` | +| `layer` | Test layer | `e2e`, `api`, `unit` | + +### Example ```robotframework *** Test Cases *** -Test with fields - [Tags] qase.fields:{"priority": "high", "severity": "critical", "layer": "UI"} - Step 01 - Step 02 - Passed step - -Test with description and preconditions - [Tags] qase.fields:{"description": "User login test", "preconditions": "User is not logged in"} - Step 01 - Step 02 - Passed step +Critical Purchase Flow + [Tags] Q-1 qase.fields:{"severity":"critical","priority":"high","layer":"e2e"} + Add Item To Cart + Complete Checkout ``` -### Available Fields +### With Documentation -You can add any custom fields, but some common ones include: +Robot Framework's `[Documentation]` is automatically used as the test description: -- `description` - Description of the test case -- `preconditions` - Preconditions for the test case -- `postconditions` - Postconditions for the test case -- `severity` - Severity of the test case (e.g., `critical`, `major`) -- `priority` - Priority of the test case (e.g., `high`, `low`) -- `layer` - Test layer (e.g., `UI`, `API`) +```robotframework +*** Test Cases *** +User Registration + [Documentation] Verify new user can register with valid email + [Tags] Q-1 qase.fields:{"severity":"major","preconditions":"User not registered"} + Open Registration Page + Fill Registration Form + Submit Form + Verify Success Message +``` + +### Fields in Keywords + +Fields can also be added to user keywords: + +```robotframework +*** Keywords *** +Verify Critical Feature + [Tags] qase.fields:{"severity":"critical"} + Log Verifying 
critical feature +``` --- -## Adding Parameters to a Test +## Adding Parameters + +Report specific variables as test parameters using the `qase.params` tag. -The `qase.params` tag allows you to specify which Robot Framework variables should be reported as parameters. +### Basic Usage ```robotframework *** Variables *** -${username} testuser -${password} testpass -${browser} chrome +${USERNAME} testuser +${PASSWORD} testpass +${BROWSER} chrome *** Test Cases *** Login Test - [Tags] qase.params:[username, password] - Login with credentials ${username} ${password} - Verify login successful - -Browser Test - [Tags] qase.params:[browser] - Open browser ${browser} - Navigate to page - Close browser + [Tags] Q-1 qase.params:[USERNAME, PASSWORD] + Login With Credentials ${USERNAME} ${PASSWORD} ``` -### Adding Parameters and Fields to a User Keyword +Only `USERNAME` and `PASSWORD` will be reported to Qase. -The `qase.params` tag can also be used in user keywords to specify which Robot Framework variables should be reported as parameters. 
+### Multiple Parameters ```robotframework -*** Settings *** -Test Template Check Status +*** Variables *** +${ENV} staging +${BROWSER} chrome +${USER_TYPE} admin +*** Test Cases *** +Cross-Browser Test + [Tags] Q-1 qase.params:[ENV, USER_TYPE] + Set Environment ${ENV} + Login As ${USER_TYPE} + Run Test In Browser ${BROWSER} +``` + +### Parameters in Keywords + +```robotframework *** Keywords *** -Check Status - [Arguments] ${module} - [Tags] qase.params:[module] qase.fields:{ "severity": "critical" } +Check Module Status + [Arguments] ${module} + [Tags] qase.params:[module] qase.fields:{"severity":"critical"} Log Checking status of module: ${module} *** Test Cases *** -Check Status of BMS - [Tags] Q-20 qase.fields:{ "preconditions": "Module BMS is connected", "description": "Flash firmware to BMS module and check status" } - [Template] Check Status - BMS - +Check BMS Status + [Tags] Q-20 + Check Module Status BMS ``` --- -## Ignoring a Test in Qase +## Ignoring Tests -To exclude a test from being reported to Qase (while still executing the test), use the `qase.ignore` tag. +Exclude tests from Qase reporting while still executing them: ```robotframework *** Test Cases *** -Ignored test +Work In Progress [Tags] qase.ignore - Step 01 - Step 02 - Passed step + Log This test runs but is not reported ``` --- -## Combining Multiple Tags +## Working with Steps -You can combine multiple Qase tags in a single test: +Robot Framework keywords are automatically reported as test steps. 
+ +### Basic Keywords as Steps ```robotframework *** Test Cases *** -Complete test with all metadata - [Tags] Q-15 qase.fields:{"priority": "high", "severity": "critical"} qase.params:[username, password] - Step 01 - Step 02 - Passed step +Login Flow + [Tags] Q-1 + Open Login Page # Step 1 + Enter Username # Step 2 + Enter Password # Step 3 + Click Login Button # Step 4 + Verify Dashboard # Step 5 +``` + +### Nested Keywords + +```robotframework +*** Keywords *** +Complete Login + Open Login Page + Enter Credentials ${USERNAME} ${PASSWORD} + Click Login Button + +Enter Credentials + [Arguments] ${user} ${pass} + Input Text id=username ${user} + Input Text id=password ${pass} + +*** Test Cases *** +User Login + [Tags] Q-1 + Complete Login # Parent step + Verify Dashboard +``` + +### Custom Step Names + +Use meaningful keyword names for better step reporting: + +```robotframework +*** Keywords *** +User Navigates To Product Page + Go To ${BASE_URL}/products + +User Adds Item To Cart + Click Button id=add-to-cart + +User Proceeds To Checkout + Click Button id=checkout + +*** Test Cases *** +Purchase Flow + [Tags] Q-1 + User Navigates To Product Page + User Adds Item To Cart + User Proceeds To Checkout ``` --- -## Test Documentation +## Multi-Project Support -Robot Framework automatically uses the test documentation as the test description in Qase: +Send test results to multiple Qase projects using the `Q-PROJECT.CODE-IDS` tag format: ```robotframework *** Test Cases *** -User Login Test - [Documentation] Test user login functionality with valid credentials - [Tags] Q-20 - Step 01 - Step 02 - Passed step +Shared Test + [Tags] Q-PROJECT.PROJ1-1,2 Q-PROJECT.PROJ2-10 + Log This test is reported to both projects ``` +For detailed configuration, examples, and troubleshooting, see the [Multi-Project Support Guide](MULTI_PROJECT.md). 
+ --- -## Advanced Configuration +## Running Tests + +### Basic Execution + +```sh +robot --listener qase.robotframework.Listener tests/ +``` + +### With Environment Variables + +```sh +export QASE_MODE=testops +export QASE_TESTOPS_PROJECT=PROJ +export QASE_TESTOPS_API_TOKEN=your_token +robot --listener qase.robotframework.Listener tests/ +``` + +### With Robot Variables -For complete configuration options including parallel execution, environment variables, and all other settings, see the [qase-python-commons README](../../qase-python-commons/README.md) and [Robot Framework Configuration Reference](CONFIGURATION.md). +```sh +robot --listener qase.robotframework.Listener \ + --variable QASE_MODE:testops \ + --variable QASE_TESTOPS_PROJECT:PROJ \ + --variable QASE_TESTOPS_API_TOKEN:your_token \ + tests/ +``` + +### Run Specific Suite + +```sh +robot --listener qase.robotframework.Listener tests/login.robot +``` + +### Run With Tags + +```sh +robot --listener qase.robotframework.Listener --include smoke tests/ +``` + +### Parallel Execution with Pabot -### Parallel Execution +```sh +pabot --listener qase.robotframework.Listener tests/ +``` -For parallel execution with pabot, the plugin automatically handles worker coordination: +### With Existing Test Run -```bash -pabot --listener qase.robotframework.Listener --variable QASE_TESTOPS_PROJECT:PROJECT_CODE --variable QASE_TESTOPS_API_TOKEN:YOUR_TOKEN tests/ +```sh +robot --listener qase.robotframework.Listener \ + --variable QASE_TESTOPS_RUN_ID:123 \ + tests/ ``` --- -## Examples +## Complete Examples -### Complete Test Suite Example +### Full Test Suite ```robotframework *** Settings *** Library SeleniumLibrary -Library steps.py +Library Collections *** Variables *** -${base_url} https://example.com -${username} testuser -${password} testpass +${BASE_URL} https://example.com +${BROWSER} chrome +${USERNAME} testuser +${PASSWORD} testpass *** Test Cases *** -User Registration Test - [Documentation] Test user 
registration with valid data - [Tags] Q-1 qase.fields:{"priority": "high", "severity": "critical", "layer": "UI"} - [Setup] Open browser ${base_url} chrome - Navigate to registration page - Fill registration form ${username} ${password} - Submit registration form - Verify registration successful - [Teardown] Close browser - -User Login Test - [Documentation] Test user login with valid credentials - [Tags] Q-2 qase.params:[username, password] - [Setup] Open browser ${base_url} chrome - Navigate to login page - Fill login form ${username} ${password} - Submit login form - Verify login successful - [Teardown] Close browser +User Registration + [Documentation] Verify new user can register successfully + [Tags] Q-1 qase.fields:{"severity":"critical","priority":"high","layer":"e2e"} + [Setup] Open Browser ${BASE_URL} ${BROWSER} + Navigate To Registration Page + Fill Registration Form ${USERNAME} ${PASSWORD} + Submit Form + Verify Registration Success + [Teardown] Close Browser + +User Login With Valid Credentials + [Documentation] Verify registered user can login + [Tags] Q-2 qase.params:[USERNAME, PASSWORD] + [Setup] Open Browser ${BASE_URL} ${BROWSER} + Navigate To Login Page + Enter Credentials ${USERNAME} ${PASSWORD} + Click Login Button + Verify Dashboard Is Visible + [Teardown] Close Browser + +User Login With Invalid Password + [Documentation] Verify error message for invalid password + [Tags] Q-3 qase.fields:{"severity":"major"} + [Setup] Open Browser ${BASE_URL} ${BROWSER} + Navigate To Login Page + Enter Credentials ${USERNAME} wrongpassword + Click Login Button + Verify Error Message Is Shown + [Teardown] Close Browser Ignored Test - [Documentation] This test is ignored in Qase [Tags] qase.ignore - Step 01 - Step 02 - Passed step + Log This test is not reported to Qase *** Keywords *** -Navigate to registration page - Go to ${base_url}/register +Navigate To Registration Page + Go To ${BASE_URL}/register -Fill registration form - [Arguments] ${username} 
${password} - Input text id=username ${username} - Input text id=password ${password} +Navigate To Login Page + Go To ${BASE_URL}/login -Submit registration form - Click button id=submit +Fill Registration Form + [Arguments] ${user} ${pass} + Input Text id=username ${user} + Input Text id=email ${user}@example.com + Input Text id=password ${pass} + Input Text id=confirm ${pass} -Verify registration successful - Page should contain Registration successful +Enter Credentials + [Arguments] ${user} ${pass} + Input Text id=username ${user} + Input Text id=password ${pass} -Navigate to login page - Go to ${base_url}/login +Submit Form + Click Button id=submit -Fill login form - [Arguments] ${username} ${password} - Input text id=username ${username} - Input text id=password ${password} +Click Login Button + Click Button id=login -Submit login form - Click button id=submit +Verify Registration Success + Page Should Contain Registration successful -Verify login successful - Page should contain Welcome +Verify Dashboard Is Visible + Page Should Contain Element id=dashboard + +Verify Error Message Is Shown + Page Should Contain Invalid credentials ``` -### Running Tests +### Example Project Structure -```bash -# Basic run -robot --listener qase.robotframework.Listener --variable QASE_TESTOPS_PROJECT:MYPROJECT --variable QASE_TESTOPS_API_TOKEN:YOUR_TOKEN tests/ +``` +my-project/ +├── qase.config.json +├── tests/ +│ ├── login.robot +│ ├── checkout.robot +│ └── api/ +│ └── users.robot +├── resources/ +│ ├── keywords.robot +│ └── variables.robot +└── requirements.txt +``` -# With environment and plan -robot --listener qase.robotframework.Listener --variable QASE_TESTOPS_PROJECT:MYPROJECT --variable QASE_TESTOPS_API_TOKEN:YOUR_TOKEN --variable QASE_ENVIRONMENT:staging --variable QASE_TESTOPS_PLAN_ID:123 tests/ +--- -# With execution plan -robot --listener qase.robotframework.Listener --variable QASE_TESTOPS_PROJECT:MYPROJECT --variable QASE_TESTOPS_API_TOKEN:YOUR_TOKEN --variable 
QASE_EXECUTION_PLAN_PATH:plan.json tests/ +## Troubleshooting -# Parallel execution -pabot --listener qase.robotframework.Listener --variable QASE_TESTOPS_PROJECT:MYPROJECT --variable QASE_TESTOPS_API_TOKEN:YOUR_TOKEN tests/ -``` +### Tests Not Appearing in Qase + +1. Verify `mode` is set to `testops` +2. Check API token has write permissions +3. Verify project code is correct +4. Ensure listener is specified: `--listener qase.robotframework.Listener` + +### Parameters Not Reporting + +1. Verify variable names match exactly (case-sensitive) +2. Check `qase.params` syntax: `qase.params:[VAR1, VAR2]` +3. Ensure variables are defined + +### Parallel Execution Issues + +1. Use pabot instead of robot for parallel runs +2. Ensure each worker has access to configuration +3. Check for race conditions in shared resources + +--- + +## See Also + +- [Configuration Reference](../../qase-python-commons/README.md) +- [Multi-Project Support](MULTI_PROJECT.md) +- [Upgrade Guide](UPGRADE.md) diff --git a/qase-tavern/README.md b/qase-tavern/README.md index 6fd782be..7897f82d 100644 --- a/qase-tavern/README.md +++ b/qase-tavern/README.md @@ -1,74 +1,79 @@ # [Qase TestOps](https://qase.io) Tavern Reporter -[![License](https://lxgaming.github.io/badges/License-Apache%202.0-blue.svg)](https://www.apache.org/licenses/LICENSE-2.0) +[![PyPI version](https://img.shields.io/pypi/v/qase-tavern?style=flat-square)](https://pypi.org/project/qase-tavern/) +[![PyPI downloads](https://img.shields.io/pypi/dm/qase-tavern?style=flat-square)](https://pypi.org/project/qase-tavern/) +[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg?style=flat-square)](https://www.apache.org/licenses/LICENSE-2.0) -## Installation - -To install the latest version, run: - -```sh -pip install pre qase-tavern -``` +Qase Tavern Reporter enables seamless integration between your Tavern API tests and [Qase TestOps](https://qase.io), providing automatic test result reporting, test case management, and 
comprehensive test analytics. -## Getting started +## Features -The Tavern reporter can auto-generate test cases -and suites from your test data. -Test results of subsequent test runs will match the same test cases -as long as their names and file paths don't change. +- Link automated tests to Qase test cases by ID +- Auto-create test cases from your test files +- Report test results with test stages as steps +- Multi-project reporting support +- Flexible configuration (file, environment variables, CLI) -You can also annotate the tests with the IDs of existing test cases -from Qase.io before executing tests. It's a more reliable way to bind -autotests to test cases, that persists when you rename, move, or -parameterize your tests. +## Installation -For detailed instructions on using annotations and methods, refer to [Usage](docs/usage.md). +```sh +pip install qase-tavern +``` -### Multi-Project Support +## Quick Start -Qase Tavern Reporter supports sending test results to multiple Qase projects simultaneously. -You can specify different test case IDs for each project using the `QaseProjectID.PROJECT_CODE=IDS` format in test names. +**1. Create `qase.config.json` in your project root:** -For detailed information, configuration, and examples, see the [Multi-Project Support Guide](docs/MULTI_PROJECT.md). +```json +{ + "mode": "testops", + "testops": { + "project": "YOUR_PROJECT_CODE", + "api": { + "token": "YOUR_API_TOKEN" + } + } +} +``` -For example: +**2. Add Qase ID to your test:** ```yaml --- -test_name: QaseID=1 Test with QaseID success +test_name: QaseID=1 Get user by ID stages: - - name: Step 1 - request: - response: - - - name: Step 2 + - name: Get user request: + url: https://api.example.com/users/1 + method: GET response: + status_code: 200 ``` -To execute Tavern tests and report them to Qase.io, run the command: +**3. Run your tests:** -```bash +```sh pytest ``` -You can try it with the example project at [`examples/tavern`](../examples/tavern/). 
- ## Configuration -Qase Tavern Reporter is configured in multiple ways: +The reporter is configured via (in order of priority): -- using a config file `qase.config.json` -- using environment variables -- using command line options +1. **CLI options** (`--qase-*`, highest priority) +2. **Environment variables** (`QASE_*`) +3. **Config file** (`qase.config.json`) -Environment variables override the values given in the config file, -and command line options override both other values. +### Minimal Configuration -For complete configuration reference, see the [qase-python-commons README](../qase-python-commons/README.md) which contains all available configuration options. +| Option | Environment Variable | CLI Option | Description | +|--------|---------------------|------------|-------------| +| `mode` | `QASE_MODE` | `--qase-mode` | Set to `testops` to enable reporting | +| `testops.project` | `QASE_TESTOPS_PROJECT` | `--qase-testops-project` | Your Qase project code | +| `testops.api.token` | `QASE_TESTOPS_API_TOKEN` | `--qase-testops-api-token` | Your Qase API token | -### Example: qase.config.json +### Example `qase.config.json` ```json { @@ -77,11 +82,10 @@ For complete configuration reference, see the [qase-python-commons README](../qa "testops": { "project": "YOUR_PROJECT_CODE", "api": { - "token": "YOUR_API_TOKEN", - "host": "qase.io" + "token": "YOUR_API_TOKEN" }, "run": { - "title": "Test run title" + "title": "Tavern API Tests" }, "batch": { "size": 100 @@ -95,22 +99,144 @@ For complete configuration reference, see the [qase-python-commons README](../qa "format": "json" } } - }, - "logging": { - "console": true, - "file": false - }, - "environment": "local" + } } ``` +> **Full configuration reference:** See [qase-python-commons](../qase-python-commons/README.md) for all available options including logging, status mapping, execution plans, and more. 
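The priority order above (CLI options over environment variables over the config file) can be illustrated with a small resolver sketch. `resolve_option` is a hypothetical helper for illustration only, not part of the reporter:

```python
import os

def resolve_option(cli_value, env_name, file_config, file_key):
    """Return the effective value: CLI beats environment, environment beats file."""
    if cli_value is not None:
        return cli_value
    env_value = os.environ.get(env_name)
    if env_value is not None:
        return env_value
    return file_config.get(file_key)

# File config names one project, the environment overrides it, CLI wins over both.
file_config = {"project": "FILEPROJ"}
os.environ["QASE_TESTOPS_PROJECT"] = "ENVPROJ"

print(resolve_option(None, "QASE_TESTOPS_PROJECT", file_config, "project"))
print(resolve_option("CLIPROJ", "QASE_TESTOPS_PROJECT", file_config, "project"))
```

Running the sketch prints `ENVPROJ` and then `CLIPROJ`, matching the documented precedence.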
+ +## Usage + +### Link Tests with Test Cases + +Associate your tests with Qase test cases by adding `QaseID={ID}` to the test name: + +```yaml +--- +test_name: QaseID=1 Get user profile + +stages: + - name: Get user profile + request: + url: https://api.example.com/profile + method: GET + response: + status_code: 200 + json: + id: 1 + name: "John Doe" +``` + +### Multiple Qase IDs + +Link one test to multiple test cases: + +```yaml +--- +test_name: QaseID=1,2,3 User authentication flow + +stages: + - name: Login + request: + url: https://api.example.com/auth/login + method: POST + json: + username: testuser + password: testpass + response: + status_code: 200 +``` + +### Test Result Statuses + +| Tavern Result | Qase Status | +|---------------|-------------| +| Passed | `passed` | +| Failed (assertion) | `failed` | +| Failed (other) | `invalid` | +| Skipped | `skipped` | + +### Stages as Steps + +Each Tavern stage is automatically reported as a test step in Qase: + +```yaml +--- +test_name: QaseID=1 Complete order flow + +stages: + - name: Add item to cart # Step 1 + request: + url: https://api.example.com/cart + method: POST + response: + status_code: 201 + + - name: Proceed to checkout # Step 2 + request: + url: https://api.example.com/checkout + method: POST + response: + status_code: 200 + + - name: Complete payment # Step 3 + request: + url: https://api.example.com/payment + method: POST + response: + status_code: 200 +``` + +> For detailed usage examples, see the [Usage Guide](docs/usage.md). 
+ +## Running Tests + +### Basic Execution + +```sh +pytest +``` + +### With CLI Options + +```sh +pytest \ + --qase-mode=testops \ + --qase-testops-project=PROJ \ + --qase-testops-api-token=your_token +``` + +### With Environment Variables + +```sh +export QASE_MODE=testops +export QASE_TESTOPS_PROJECT=PROJ +export QASE_TESTOPS_API_TOKEN=your_token +pytest +``` + +### Run Specific Test File + +```sh +pytest test_api.tavern.yaml +``` + ## Requirements -We maintain the reporter on [LTS versions of Python](https://devguide.python.org/versions/). +- Python >= 3.9 +- tavern >= 2.11.0 + +## Documentation + +| Guide | Description | +|-------|-------------| +| [Usage Guide](docs/usage.md) | Complete usage reference with all options | +| [Multi-Project Support](docs/MULTI_PROJECT.md) | Reporting to multiple Qase projects | + +## Examples -`python >= 3.9` -`tavern >= 2.11.0` +See the [examples directory](../examples/) for complete working examples. - +## License -[auth]: https://developers.qase.io/#authentication +Apache License 2.0. See [LICENSE](../LICENSE) for details. diff --git a/qase-tavern/docs/usage.md b/qase-tavern/docs/usage.md index f7a2da7c..45e8eda7 100644 --- a/qase-tavern/docs/usage.md +++ b/qase-tavern/docs/usage.md @@ -1,18 +1,30 @@ # Qase Integration in Tavern -This guide demonstrates how to integrate Qase with Tavern, providing instructions on how to add Qase IDs and other metadata to your API test cases. +This guide provides comprehensive instructions for integrating Qase with Tavern API testing framework. > **Configuration:** For complete configuration reference including all available options, environment variables, and examples, see the [qase-python-commons README](../../qase-python-commons/README.md). --- -## Adding QaseID to a Test +## Table of Contents -To associate a QaseID with a test in Tavern, include `QaseID={ID}` in the test name. 
+- [Adding QaseID](#adding-qaseid) +- [Test Stages as Steps](#test-stages-as-steps) +- [Multi-Project Support](#multi-project-support) +- [Running Tests](#running-tests) +- [Complete Examples](#complete-examples) + +--- + +## Adding QaseID + +Link your Tavern tests to existing test cases in Qase by adding `QaseID={ID}` to the test name. + +### Single ID ```yaml --- -test_name: QaseID=1 Get user by ID +test_name: QaseID=1 Get user profile stages: - name: Get user @@ -24,133 +36,445 @@ stages: json: id: 1 name: "John Doe" - email: "john@example.com" ``` -### Multiple Qase IDs +### Multiple IDs -You can associate multiple Qase IDs with a single test by separating them with commas: +Link one test to multiple test cases: ```yaml --- -test_name: QaseID=1,2,3 Get user by ID +test_name: QaseID=1,2,3 User authentication flow stages: - - name: Get user + - name: Login request: - url: https://api.example.com/users/1 + url: https://api.example.com/auth/login + method: POST + json: + username: testuser + password: testpass + response: + status_code: 200 + save: + json: + token: token +``` + +### Test Without QaseID + +Tests without `QaseID` are still reported to Qase but create new test cases: + +```yaml +--- +test_name: Simple health check + +stages: + - name: Check health endpoint + request: + url: https://api.example.com/health method: GET response: status_code: 200 - json: - id: 1 - name: "John Doe" - email: "john@example.com" ``` ### Multi-Project Support -Qase Tavern Reporter supports sending test results to multiple Qase projects simultaneously with different test case IDs for each project. +To send test results to multiple Qase projects simultaneously, see the [Multi-Project Support Guide](MULTI_PROJECT.md). -For detailed information, configuration, examples, and troubleshooting, see the [Multi-Project Support Guide](MULTI_PROJECT.md). 
+--- +## Test Stages as Steps + +Each Tavern stage is automatically reported as a test step in Qase, providing detailed execution visibility. + +### Basic Stages + +```yaml --- +test_name: QaseID=1 Complete order flow + +stages: + - name: Add item to cart # Reported as Step 1 + request: + url: https://api.example.com/cart + method: POST + json: + product_id: 123 + quantity: 1 + response: + status_code: 201 + save: + json: + cart_id: id + + - name: Get cart contents # Reported as Step 2 + request: + url: https://api.example.com/cart/{cart_id} + method: GET + response: + status_code: 200 + json: + items: + - product_id: 123 + quantity: 1 + + - name: Checkout # Reported as Step 3 + request: + url: https://api.example.com/checkout + method: POST + json: + cart_id: "{cart_id}" + response: + status_code: 200 +``` + +### Step Status -## Examples +Each step's status is determined by its response validation: -### Complete Test File Example +| Stage Result | Step Status | +|--------------|-------------| +| All assertions pass | Passed | +| Assertion fails | Failed | +| Request error | Invalid | + +--- + +## Test Result Statuses + +| Tavern Result | Qase Status | +|---------------|-------------| +| All stages pass | `passed` | +| Stage assertion fails | `failed` | +| Stage request error | `invalid` | +| Test skipped | `skipped` | + +--- + +## Multi-Project Support + +Send test results to multiple Qase projects using `QaseProjectID.CODE=IDS` in the test name: ```yaml --- -test_name: QaseID=1 Get user by ID success +test_name: QaseProjectID.PROJ1=1,2 QaseProjectID.PROJ2=10 Shared API test stages: - - name: Get user + - name: Call shared endpoint request: - url: https://jsonplaceholder.typicode.com/posts/1 + url: https://api.example.com/shared method: GET response: status_code: 200 +``` + +For detailed configuration, examples, and troubleshooting, see the [Multi-Project Support Guide](MULTI_PROJECT.md). 
+ +--- + +## Running Tests + +### Basic Execution + +```sh +pytest +``` + +### With CLI Options + +```sh +pytest \ + --qase-mode=testops \ + --qase-testops-project=PROJ \ + --qase-testops-api-token=your_token +``` + +### With Environment Variables + +```sh +export QASE_MODE=testops +export QASE_TESTOPS_PROJECT=PROJ +export QASE_TESTOPS_API_TOKEN=your_token +pytest +``` + +### Run Specific Test File + +```sh +pytest test_users.tavern.yaml +``` + +### Run Specific Test + +```sh +pytest test_users.tavern.yaml::test_get_user +``` + +### With Verbose Output + +```sh +pytest -v --tb=short +``` + +--- + +## Complete Examples + +### User CRUD Operations + +```yaml +# test_users.tavern.yaml + +--- +test_name: QaseID=1 Create new user + +stages: + - name: Create user + request: + url: https://api.example.com/users + method: POST json: - id: 1 - userId: 1 - title: "sunt aut facere repellat provident occaecati excepturi optio reprehenderit" - body: "quia et suscipit\nsuscipit recusandae consequuntur expedita et cum\nreprehenderit molestiae ut ut quas totam\nnostrum rerum est autem sunt rem eveniet architecto" + name: "Test User" + email: "test@example.com" + role: "user" + response: + status_code: 201 + json: + id: !anyint + name: "Test User" + email: "test@example.com" + save: + json: + user_id: id --- -test_name: QaseID=2 Get user by ID failed +test_name: QaseID=2 Get user by ID stages: - name: Get user request: - url: https://jsonplaceholder.typicode.com/posts/1 + url: https://api.example.com/users/{user_id} method: GET response: - status_code: 300 # This will cause the test to fail + status_code: 200 json: - id: 1 - userId: 1 - title: "sunt aut facere repellat provident occaecati excepturi optio reprehenderit" - body: "quia et suscipit\nsuscipit recusandae consequuntur expedita et cum\nreprehenderit molestiae ut ut quas totam\nnostrum rerum est autem sunt rem eveniet architecto" + id: !int "{user_id}" + name: "Test User" + email: "test@example.com" + +--- +test_name: 
QaseID=3 Update user + +stages: + - name: Update user name + request: + url: https://api.example.com/users/{user_id} + method: PUT + json: + name: "Updated User" + response: + status_code: 200 + json: + id: !int "{user_id}" + name: "Updated User" + +--- +test_name: QaseID=4 Delete user + +stages: + - name: Delete user + request: + url: https://api.example.com/users/{user_id} + method: DELETE + response: + status_code: 204 +``` + +### Authentication Flow + +```yaml +# test_auth.tavern.yaml --- -test_name: QaseID=3,4 User authentication flow +test_name: QaseID=10,11 Complete authentication flow stages: - - name: Login user + - name: Register new user + request: + url: https://api.example.com/auth/register + method: POST + json: + email: "newuser@example.com" + password: "SecurePass123" + name: "New User" + response: + status_code: 201 + json: + message: "User registered successfully" + save: + json: + user_id: user_id + + - name: Login with credentials request: url: https://api.example.com/auth/login method: POST json: - username: "testuser" - password: "testpass" + email: "newuser@example.com" + password: "SecurePass123" response: status_code: 200 json: - token: "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9..." 
+ token: !anystr + user: + id: !int "{user_id}" + email: "newuser@example.com" + save: + json: + auth_token: token - - name: Get user profile + - name: Access protected resource request: - url: https://api.example.com/user/profile + url: https://api.example.com/profile method: GET headers: - Authorization: "Bearer {token}" + Authorization: "Bearer {auth_token}" response: status_code: 200 json: - id: 1 - name: "Test User" - email: "test@example.com" + id: !int "{user_id}" + name: "New User" - - name: Update user profile + - name: Logout request: - url: https://api.example.com/user/profile - method: PUT + url: https://api.example.com/auth/logout + method: POST headers: - Authorization: "Bearer {token}" - json: - name: "Updated User" - email: "updated@example.com" + Authorization: "Bearer {auth_token}" response: status_code: 200 + + - name: Verify token is invalid + request: + url: https://api.example.com/profile + method: GET + headers: + Authorization: "Bearer {auth_token}" + response: + status_code: 401 +``` + +### Error Handling Tests + +```yaml +# test_errors.tavern.yaml + +--- +test_name: QaseID=20 Invalid request returns 400 + +stages: + - name: Send invalid request + request: + url: https://api.example.com/users + method: POST json: - id: 1 - name: "Updated User" - email: "updated@example.com" + name: "" # Empty name should fail + response: + status_code: 400 + json: + error: "Validation failed" + details: + - field: "name" + message: "Name is required" + +--- +test_name: QaseID=21 Not found returns 404 + +stages: + - name: Request non-existent resource + request: + url: https://api.example.com/users/999999 + method: GET + response: + status_code: 404 + json: + error: "User not found" + +--- +test_name: QaseID=22 Unauthorized returns 401 + +stages: + - name: Access without token + request: + url: https://api.example.com/profile + method: GET + response: + status_code: 401 + json: + error: "Authentication required" ``` -### Running Tests +### Example Project 
Structure + +``` +my-project/ +├── qase.config.json +├── conftest.py +├── tests/ +│ ├── test_users.tavern.yaml +│ ├── test_auth.tavern.yaml +│ ├── test_products.tavern.yaml +│ └── common.yaml # Shared variables +└── requirements.txt +``` -```bash -# Basic run -pytest --qase-mode=testops --qase-testops-project=MYPROJECT --qase-testops-api-token=YOUR_TOKEN test_file.tavern.yaml +### Common Variables File -# With environment and plan -pytest --qase-mode=testops --qase-testops-project=MYPROJECT --qase-testops-api-token=YOUR_TOKEN --qase-environment=staging --qase-testops-plan-id=123 test_file.tavern.yaml +```yaml +# common.yaml -# With execution plan -pytest --qase-mode=testops --qase-testops-project=MYPROJECT --qase-testops-api-token=YOUR_TOKEN --qase-execution-plan-path=plan.json test_file.tavern.yaml +name: Common test variables -# With profilers and logs -pytest --qase-mode=testops --qase-testops-project=MYPROJECT --qase-testops-api-token=YOUR_TOKEN --qase-profilers=network,db --qase-pytest-capture-logs=true test_file.tavern.yaml +variables: + base_url: "https://api.example.com" + test_user: "testuser@example.com" + test_password: "TestPass123" ``` + +--- + +## Troubleshooting + +### Tests Not Appearing in Qase + +1. Verify `mode` is set to `testops` +2. Check API token has write permissions +3. Verify project code is correct +4. Enable debug logging: `"debug": true` + +### QaseID Not Recognized + +1. Ensure format is exactly `QaseID=123` (case-sensitive) +2. No spaces around `=` +3. Multiple IDs use commas without spaces: `QaseID=1,2,3` + +### Steps Not Showing + +1. Verify stages have `name` field +2. Check stage syntax is valid YAML +3. Enable verbose output to see stage execution + +### Response Validation Issues + +1. Use `!anystr` and `!anyint` for dynamic values +2. Save values with `save.json` for use in later stages +3. 
Check response structure matches expected format + +--- + +## See Also + +- [Configuration Reference](../../qase-python-commons/README.md) +- [Multi-Project Support](MULTI_PROJECT.md) +- [Tavern Documentation](https://tavern.readthedocs.io/)