A 'testFileMapping' to enforce and speedup mutation testing for test policies with fixed test file naming #4689
Comments
In case someone is interested in our current workaround using the programmatic API, here is a minimal setup of our script:

```js
import * as extStrykerMutatorCore from '@stryker-mutator/core';

// Collection of all files under mutation testing
const files = [/* ... */];

for (const file of files) {
  // Mapping the name of the file under mutation to its corresponding test files
  const testFiles = [/* ... */];
  const stryker = new extStrykerMutatorCore.Stryker({
    // Makes the sum of the reports of the individual runs look more like
    // a collective report of one big run
    clearTextReporter: {
      logTests: false,
      reportTests: false,
      reportScoreTable: false,
    },
    // The test runner we use
    tap: {
      nodeArgs: ['--test-reporter', 'tap'],
      testFiles,
    },
    testRunner: 'tap',
    // See above: removes info messages between the reports of the individual
    // runs to make it look more like one big run
    logLevel: 'warn',
    mutate: [file],
    reporters: ['clear-text'],
  });
  await stryker.runMutationTest();
}
```

[…] However, this may not be true for test runners that also consider the individual tests within the test file during the […]. To benefit from the incremental mode while creating a separate Stryker run for each file, it is necessary to use a separate incremental file per mutated file:

```js
incremental: true,
incrementalFile: `./stryker-incremental-${hash(file)}.json`,
```
We've continued to think about how one could integrate this feature more easily. Since the script mentioned above already largely implements this feature, one could consider keeping a major part of the functionality in userland instead of integrating it directly into this library.

For the reported coverage result, it would be useful if we could collect the results of each run and pass them collectively to the reporters. For this to work, the reporters would need to be officially/intentionally callable from userland – we have yet to check if this is already possible.

To address the requests from the other issues noted above, how about explaining in the documentation how one can implement this themselves? What do you think?
Is your feature request related to a problem? Please describe.
As outlined in the discussions at #3595 and #4142, we also organize our unit tests in a manner where each test file corresponds to a particular system under test. Likewise, there is a fixed naming scheme to keep everything well-organized for the entire engineering team. However, we do not use Jest, but a tap-based framework that offers no options like `--findRelatedTests` and `dependencyExtractor`.

For us, the desired coverage of a system under test should be derived exclusively from its dedicated test file, ensuring comprehensive testing of all aspects of the system under test. Coverage from other test files is considered an unintended side effect.
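To illustrate, such a fixed naming scheme can be expressed as a pure function from a source file to its dedicated test file. The concrete paths and suffix here are hypothetical, not our actual layout:

```js
// Hypothetical naming scheme: every system under test in src/ has exactly
// one dedicated test file in test/ with a .test.js suffix.
function testFileFor(sourceFile) {
  const match = /^src\/(.+)\.js$/.exec(sourceFile);
  if (match === null) {
    throw new Error(`not a system under test: ${sourceFile}`);
  }
  return `test/${match[1]}.test.js`;
}

console.log(testFileFor('src/user/account.js')); // → test/user/account.test.js
```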
Enabling the possibility to configure the intended mapping between test files and systems under test would assist us in achieving two key objectives: enforcing our test policy and speeding up mutation testing.
Describe the solution you'd like
The addition of a `testFileMapping` configuration option that associates mutated files with their corresponding test file(s). The existing mapping collected during the `dryRun` phase could serve as a fallback for files not explicitly listed in the `testFileMapping` option.

To accommodate various use cases, the file to be tested could be represented by a regular expression, especially supporting regex groups. The corresponding test file notation could utilize these groups, acting as a filter on the fallback mapping derived from the `dryRun` phase.

An implementation of this logic could resemble the following:
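A minimal sketch of the proposed resolution logic might look like this. The option shape, the `$1` group substitution, and the fallback behaviour are all assumptions for illustration, not an existing Stryker API; the issue above also suggests the resolved patterns could instead act as a filter on the dry-run mapping:

```js
// Hypothetical testFileMapping option: a regex (as a string) matching the
// mutated file, mapped to test-file patterns that may use captured groups.
const testFileMapping = {
  '^src/(.+)\\.js$': ['test/$1.test.js'],
};

// Resolve the test files for one mutated file. `dryRunMapping` stands in
// for the per-file coverage collected during the dry-run phase and acts as
// the fallback for files not matched by any entry.
function resolveTestFiles(mutatedFile, mapping, dryRunMapping) {
  for (const [pattern, testPatterns] of Object.entries(mapping)) {
    const match = new RegExp(pattern).exec(mutatedFile);
    if (match !== null) {
      // Substitute $1, $2, ... with the groups captured from the file name.
      return testPatterns.map((p) =>
        p.replace(/\$(\d+)/g, (_, i) => match[Number(i)])
      );
    }
  }
  return dryRunMapping[mutatedFile] ?? [];
}

console.log(resolveTestFiles('src/user.js', testFileMapping, {}));
// → ['test/user.test.js']
```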
We are open to contributing a merge request if this proposal aligns with your intentions 😃
Describe alternatives you've considered
We tried out the solution proposed in #3595 by using the programmatic API. In terms of functionality, it achieved our goal and also reduced the testing duration to a third. However, Stryker had to be started for each individual system under test, which not only creates quite some overhead, but also a large number of individual reports that would then have to be merged back into an overall report using additional (yet to be implemented) post-processing steps.
We have also looked at writing our own plugin, but it seems that this would require much more than just the filter, which has so far prevented us from doing so.