3 changes: 2 additions & 1 deletion bin/codecept.js
@@ -95,7 +95,7 @@ program.command('run [test]')
.option('--features', 'run only *.feature files and skip tests')
.option('--tests', 'run only JS test files and skip features')
.option('-p, --plugins <k=v,k2=v2,...>', 'enable plugins, comma-separated')

.option('--failed', 'to run failed/custom Tests')
Contributor:
Please remove this option (as it affects core) and use an environment variable instead:

`RUN_FAILED=true npx codeceptjs run`
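
A minimal, self-contained sketch of that suggestion (the `RUN_FAILED` name comes from the comment above; the helper functions and file handling here are illustrative, not part of the codebase):

```js
// Hypothetical sketch: gate "re-run failed tests" on an environment variable
// instead of adding a --failed flag to the core CLI.
const fs = require('fs');

function shouldRunFailedOnly() {
  // Invoked as: RUN_FAILED=true npx codeceptjs run
  return process.env.RUN_FAILED === 'true';
}

function failedTestFiles(file = 'failedCases.json') {
  // failedCases.json is the file this PR writes failed test paths to.
  if (!fs.existsSync(file)) return [];
  return JSON.parse(fs.readFileSync(file, 'utf8'));
}

if (shouldRunFailedOnly()) {
  console.log('Re-running failed tests:', failedTestFiles());
}
```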

// mocha options
.option('--colors', 'force enabling of colors')
.option('--no-colors', 'force disabling of colors')
@@ -133,6 +133,7 @@ program.command('run-workers <workers>')
.option('-p, --plugins <k=v,k2=v2,...>', 'enable plugins, comma-separated')
.option('-O, --reporter-options <k=v,k2=v2,...>', 'reporter-specific options')
.option('-R, --reporter <name>', 'specify the reporter to use')
.option('--failed', 'to run failed/custom Tests')
.action(require('../lib/command/run-workers'));

program.command('run-multiple [suites...]')
45 changes: 45 additions & 0 deletions docs/plugins.md
@@ -918,6 +918,51 @@ In the same manner additional services from webdriverio can be installed, enable

- `config`

## reRunFailedTest

Stores the scripts that failed in the current execution in `failedCases.json`.

This plugin allows you to:

- run only the scripts that failed in the previous execution
- run any custom scripts provided by the user, without any pattern (they can be listed in `failedCases.json`)
- auto-retry failed scripts from the current execution to detect flakiness

```js
plugins: {
  reRunFailedTest: {
    enabled: true,
    autoRetry: true
  }
}
```

Run tests with the plugin enabled:
```js
npx codeceptjs run --plugins reRunFailedTest
```

### Configuration

- `autoRetry`: automatically retry the failed scripts from the current execution after all scripts have completed

### Options


| Param | Description |
| ---------------- | ------------------------------------------------------------------------------ |
| `--failed`       | Only executes the failed/custom (selective) scripts listed in `failedCases.json` |

```js
npx codeceptjs run --failed
```

Contributor:
You should be able to read the plugin values from the config `plugins` object. You might not need to provide a CLI option unless it's a core CodeceptJS feature.
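
A minimal sketch of that approach, assuming the usual CodeceptJS plugin shape where the plugin function receives its own section of the `plugins` config (the body below is illustrative, not the PR's implementation):

```js
// Hypothetical sketch: the plugin reads autoRetry from plugins.reRunFailedTest
// in the config instead of relying on a CLI flag.
const event = require('codeceptjs').event;

module.exports = function (config = {}) {
  const autoRetry = config.autoRetry === true;
  const failed = new Set();

  // Collect the files of failing tests as they are reported.
  event.dispatcher.on(event.test.failed, (test) => failed.add(test.file));

  // After the run, decide whether a retry pass is wanted.
  event.dispatcher.on(event.all.result, () => {
    if (autoRetry && failed.size > 0) {
      console.log('Tests to re-run:', [...failed]);
    }
  });
};
```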

Contributor:
@gkushang, this option is mainly to run the failed tests from the previous run, supporting both runner modes, sequential and workers. It could be a core feature, as many developers have been looking for something similar to the equivalent option in TestNG.

Contributor:
@senthillkumar The reason I asked is that I see you also have a plugin, reRunFailedTest.js, defined, so I was wondering about the use case of the plugin. If this becomes a core feature then you won't need a plugin, or vice versa (if the plugin is enabled then it would run failed tests automatically).

This is indeed a much-needed feature, though.

Contributor:
Would the user need to execute one more command to run failed tests, or will the failed tests run automatically if the plugin is enabled? e.g.

codeceptjs run >> run all tests and re-run failed tests automatically at the end of execution
OR
codeceptjs run && codeceptjs run -p <this-plugin> >> execute two commands?

There are two features in the re-run test plugin:

  1. Manually run the failed test cases using the run/run-workers command with the --failed option, as mentioned below:
     codeceptjs run --failed or codeceptjs run-workers --failed
  2. Enabling the reRunFailedTest plugin auto-retries the failed tests after the execution, without passing any argument:

reRunFailedTest: {
  enabled: true,
  autoRetry: true,
  require: './plugins/reRunFailedTest',
},

Advantages:
Say I have 50 test cases running in parallel and 10 of the 50 failed. I want to auto-retry or manually retry the failed test cases in the same or a different environment. Re-run functionality is available as a core feature in CodeceptJS, but it re-runs the scripts sequentially: if a case fails midway, it retries the failed one and then continues with the rest of the scripts, so rerun mode is more time consuming.

Let me know your thoughts.

senthillkumar (Contributor) commented on Apr 5, 2021:
@gkushang / @DavertMik, can you please review the changes?

or
```js
npx codeceptjs run-workers ${workerCount} --failed
```

### Note

The `restart` option must be set to `true` in order to use this plugin:
```js
restart: true
```
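
Putting these pieces together, a minimal config sketch with the plugin enabled (the helper block, URL, and `require` path are illustrative assumptions, not taken from this PR):

```js
// codecept.conf.js — illustrative sketch only
exports.config = {
  tests: './test/*.js',
  output: './output',
  helpers: {
    // restart: true as required by the note above (assumed here to be a helper option)
    WebDriver: { url: 'http://localhost', browser: 'chrome', restart: true },
  },
  plugins: {
    reRunFailedTest: {
      enabled: true,
      autoRetry: true,
      require: './plugins/reRunFailedTest', // path is an assumption
    },
  },
};
```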

[1]: https://user-images.githubusercontent.com/220264/45676511-8e052800-bb3a-11e8-8cbb-db5f73de2add.png

[2]: https://github.com/allure-framework/allure2/blob/master/plugins/screen-diff-plugin/README.md
1 change: 1 addition & 0 deletions failedCases.json
@@ -0,0 +1 @@
["test/login.js","test/logout.js"]
44 changes: 26 additions & 18 deletions lib/command/run-workers.js
@@ -1,9 +1,10 @@
// For Node version >=10.5.0, have to use experimental flag
const { satisfyNodeVersion } = require('./utils');
const { satisfyNodeVersion, getConfig } = require('./utils');
const { tryOrDefault } = require('../utils');
const output = require('../output');
const event = require('../event');
const Workers = require('../workers');
const reRunFailedTest = require('../plugin/reRunFailedTest');

module.exports = async function (workerCount, options) {
satisfyNodeVersion(
@@ -13,6 +14,9 @@ module.exports = async function (workerCount, options) {

process.env.profile = options.profile;

const configFile = options.config;
const Config = getConfig(configFile);

const { config: testConfig, override = '' } = options;
const overrideConfigs = tryOrDefault(() => JSON.parse(override), {});
const by = options.suites ? 'suite' : 'test';
@@ -31,22 +35,26 @@ module.exports = async function (workerCount, options) {

const workers = new Workers(numberOfWorkers, config);
workers.overrideConfig(overrideConfigs);
workers.on(event.test.failed, (failedTest) => {
output.test.failed(failedTest);
});

workers.on(event.test.passed, (successTest) => {
output.test.passed(successTest);
});

workers.on(event.all.result, () => {
workers.printResults();
});

try {
await workers.bootstrapAll();
await workers.run();
} finally {
await workers.teardownAll();
if (Config.plugins && Config.plugins.reRunFailedTest && Config.plugins.reRunFailedTest.enabled === true) {
Contributor:
Nah, this is unacceptable. It is a plugin, so run-workers should not be affected by it.

await reRunFailedTest(workers, { config: Config, options: config.options }, false);
} else {
workers.on(event.test.failed, (failedTest) => {
output.test.failed(failedTest);
});

workers.on(event.test.passed, (successTest) => {
output.test.passed(successTest);
});

workers.on(event.all.result, () => {
workers.printResults();
});

try {
await workers.bootstrapAll();
await workers.run();
} finally {
await workers.teardownAll();
}
}
};
11 changes: 8 additions & 3 deletions lib/command/run.js
@@ -3,6 +3,7 @@ const {
} = require('./utils');
const Config = require('../config');
const Codecept = require('../codecept');
const reRunFailedTest = require('../plugin/reRunFailedTest');
Contributor:
Same here. You can't import a plugin into the core.


module.exports = async function (test, options) {
// registering options globally to use in config
@@ -22,9 +23,13 @@ module.exports = async function (test, options) {

try {
codecept.init(testRoot);
await codecept.bootstrap();
codecept.loadTests();
await codecept.run(test);
if (config.plugins && config.plugins.reRunFailedTest && config.plugins.reRunFailedTest.enabled === true) {
await reRunFailedTest(codecept, { options, config, testRoot }, true);
} else {
await codecept.bootstrap();
codecept.loadTests();
await codecept.run(test);
}
} catch (err) {
printError(err);
process.exitCode = 1;
171 changes: 171 additions & 0 deletions lib/plugin/reRunFailedTest.js
@@ -0,0 +1,171 @@
const event = require('../event');
const { writeFailedTest, getFailedTest } = require('../reRunFailedTest');
const container = require('../container');
const output = require('../output');

const failedScripts = new Set();
const failedScriptsId = new Set();
const testScriptsName = new Set();
let mochaStatsBackup = {};

const sequentialRun = async (codecept, options) => {
codecept.loadTests();
if (options.options.failed) {
const testFiles = getFailedTest();
for (let i = 0; i < testFiles.length; i++) {
if (!codecept.testFiles.includes(testFiles[i])) {
output.print(`Invalid Script Path${testFiles[i]}`);
testFiles.splice(i, 1);
i--;
}
}
if (testFiles.length > 0) {
output.print(`Failed Scripts from previous execution are ${testFiles}`);
await run(testFiles, false);
} else {
output.print('No valid failed scripts from previous execution');
await writeFailedTest([]);
}
} else {
await run(codecept.testFiles, false);
}
if (options.config.plugins.reRunFailedTest.autoRetry === true) {
output.print('Auto Retrying Failed Scripts');
const testFiles = getFailedTest();
if (testFiles.length > 0) {
output.print('Failed Scripts from previous execution are ', testFiles);
await run(testFiles, true);
}
}
};

const run = (testFiles, retryFlag) => {
return new Promise((resolve, reject) => {
// @ts-ignore
container.createMocha();
const mocha = container.mocha();
testFiles.forEach((file) => {
delete require.cache[file];
});
mocha.files = testFiles;
const done = () => {
if (retryFlag === true) {
output.result(mocha._previousRunner.stats.passes, mocha._previousRunner.stats.failures, mocha._previousRunner.stats.pending, `${mocha._previousRunner.stats.duration || 0 / 1000}s`);
}
event.dispatcher.on(event.all.after, (test) => {
writeFailedTest(Array.from(failedScripts));
});
event.emit(event.all.result, this);
event.emit(event.all.after, this);
resolve();
};
try {
event.emit(event.all.before, this);
event.dispatcher.on(event.test.failed, (test) => {
failedScripts.add(test.file);
});
mocha.run(() => {
if (retryFlag === false) {
mochaStatsBackup = mocha._previousRunner.stats;
}
if (retryFlag === true) {
mocha._previousRunner.stats.passes += mochaStatsBackup.passes;
}
done();
});
} catch (e) {
output.error(e.stack);
reject(e);
}
});
};

const parallelRun = async (workers, options) => {
workers.on(event.test.failed, (failedTest) => {
const failTest = workers.testDetails.filter(t => t.id === failedTest.id);
failedScripts.add(failTest[0].file);
output.test.failed(failedTest);
});
workers.on(event.test.passed, (successTest) => {
output.test.passed(successTest);
});
workers.on(event.all.result, () => {
writeFailedTest(Array.from(failedScripts));
printResults(workers);
});
if (options.options.failed) {
workers = loadFailedScriptsForWorkers(workers);
}
try {
workers.numberOfWorkers = workers.workers.length;
await workers.bootstrapAll();
await workers.run();
if (options.config.plugins.reRunFailedTest.autoRetry === true) {
output.print('Auto Retrying Failed Scripts');
workers = await loadFailedScriptsForWorkers(workers);
if (workers.workers.length > 0) {
workers.numberOfWorkers += workers.workers.length;
workers.finishedTests = {};
workers.stats.failures = 0;
await workers.run();
}
}
} finally {
await workers.teardownAll();
}
};

const loadFailedScriptsForWorkers = (workers) => {
const testFiles = getFailedTest();
if (testFiles.length > 0) {
for (let i = 0; i < workers.testDetails.length; i++) {
if (testFiles.includes(workers.testDetails[i].file)) {
failedScriptsId.add(workers.testDetails[i].id);
}
testScriptsName.add(workers.testDetails[i].file);
}
for (let i = 0; i < testFiles.length; i++) {
if (!testScriptsName.has(testFiles[i])) {
output.print(`Invalid Script Path ${testFiles[i]}`);
testFiles.splice(i, 1);
i--;
}
}
}
if (testFiles.length > 0) {
output.print('Failed Scripts from previous execution are ', testFiles);
for (let i = 0; i < workers.workers.length; i++) {
for (let j = 0; j < workers.workers[i].tests.length; j++) {
if (!failedScriptsId.has(workers.workers[i].tests[j])) {
workers.workers[i].tests.splice(j, 1);
j--;
}
}
if (workers.workers[i].tests.length === 0) {
workers.workers.splice(i, 1);
i--;
}
}
} else {
writeFailedTest([]);
workers.workers = [];
}
return workers;
};

const reRunFailedTest = async (config, options, sequentialFlag) => {
if (sequentialFlag === true) {
await sequentialRun(config, options);
} else if (sequentialFlag === false) {
await parallelRun(config, options);
}
};

module.exports = reRunFailedTest;

const printResults = (workers) => {
workers.stats.end = new Date();
workers.stats.duration = workers.stats.end - workers.stats.start;
output.print();
output.result(workers.stats.passes, workers.stats.failures, workers.stats.pending, `${workers.stats.duration || 0 / 1000}s`);
};
29 changes: 29 additions & 0 deletions lib/reRunFailedTest.js
@@ -0,0 +1,29 @@
const fs = require('fs');
const { print } = require('./output');

const writeFailedTest = (failedTests) => {
if (failedTests.length !== 0) {
fs.writeFileSync('failedCases.json', JSON.stringify(failedTests));
} else if (fs.existsSync('failedCases.json')) {
fs.unlinkSync('failedCases.json');
}
};

exports.writeFailedTest = writeFailedTest;

const getFailedTest = () => {
if (!fs.existsSync('failedCases.json')) {
print('There Are No Failed/Custom Scripts From Previous Execution');
} else {
const failedTests = JSON.parse(fs.readFileSync('failedCases.json', 'utf8'));
if (failedTests.length === 0 || failedTests.toString() === '') {
print('There Are No Failed/Custom Scripts From Previous Execution');
writeFailedTest([]);
} else {
return failedTests;
}
}
return [];
};

exports.getFailedTest = getFailedTest;
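
For context, a hedged usage sketch of these helpers (the require path assumes the repository root; the file contents mirror the failedCases.json example added in this PR):

```js
const { writeFailedTest, getFailedTest } = require('./lib/reRunFailedTest');

// Record two failed test files, then read them back.
writeFailedTest(['test/login.js', 'test/logout.js']);
console.log(getFailedTest()); // -> ['test/login.js', 'test/logout.js']

// An empty array removes failedCases.json again (see writeFailedTest above).
writeFailedTest([]);
```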
3 changes: 3 additions & 0 deletions lib/workers.js
@@ -176,6 +176,8 @@ class Workers extends EventEmitter {
pending: 0,
};
this.testGroups = [];
// contains all the tests objects
this.testDetails = [];

createOutputDir(config.testConfig);
if (numberOfWorkers) this._initWorkers(numberOfWorkers, config);
@@ -241,6 +243,7 @@ class Workers extends EventEmitter {
mocha.suite.eachTest((test) => {
const i = groupCounter % groups.length;
if (test) {
this.testDetails.push(test);
const { id } = test;
groups[i].push(id);
groupCounter++;