Hi Scott. This is definitely something that is on the TODO list, especially since we now also use TravisCI (not part of the master branch as of now, but there has been a separate branch for about two weeks).
Right now, pretty much all testing that we perform is done by fuzz testing. There is, for example, the script "slugsFuzzer.py", which tests the generated implementations against NuSMV. There was also once a script for comparing the realizability/unrealizability results against those of the safety synthesis tool "aisy". Together, this provides a little bit of automated testing, but the majority of plug-ins are left out. The scripts are also not yet engineered to be used by the end user (there is no documentation on where to put the NuSMV executable, etc.). So this is clearly a TODO.
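For anyone curious what such a differential check looks like in principle, here is a minimal sketch. This is not slugsFuzzer.py (which model-checks the implementations that slugs generates against NuSMV); the binary paths, the placeholder reference tool, and the output strings matched below are all assumptions that would need to be adapted.

```python
#!/usr/bin/env python3
# Hypothetical sketch of differential testing of realizability verdicts.
# NOT the real slugsFuzzer.py. SLUGS, REFERENCE, and the matched output
# strings are assumptions.
import glob
import random
import subprocess
import sys

SLUGS = "src/slugs"              # assumed path to the compiled slugs binary
REFERENCE = ["./referenceTool"]  # placeholder for a second solver, e.g. aisy

def verdict(command, spec_file):
    """Run a synthesis tool on a spec and map its output to True/False/None."""
    proc = subprocess.run(command + [spec_file],
                          capture_output=True, text=True)
    text = (proc.stdout + proc.stderr).lower()
    if "unrealizable" in text:   # check the longer keyword first
        return False
    if "realizable" in text:
        return True
    return None                  # crash or unparsable output: skip this spec

random.seed(0)  # deterministic sampling for reproducibility
corpus = glob.glob("examples/*.slugsin")
for spec in random.sample(corpus, min(20, len(corpus))):
    ours, theirs = verdict([SLUGS], spec), verdict(REFERENCE, spec)
    if None not in (ours, theirs) and ours != theirs:
        print("MISMATCH on %s: slugs=%s, reference=%s" % (spec, ours, theirs))
        sys.exit(1)
print("No verdict mismatches on the sampled specifications.")
```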
An automated test script that checks the realizability/unrealizability results of some examples has been added to the master branch of slugs. That's certainly not a thorough test, but it at least allows one to check the installation. Run "tools/testSomeExamples.py" from the slugs root directory to run the tests.
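A script of this kind is essentially an expected-results check. A minimal sketch of that idea, assuming a small table of examples with known verdicts and that slugs reports (un)realizability somewhere in its output, could look like the following; the real tools/testSomeExamples.py differs in detail.

```python
#!/usr/bin/env python3
# Hypothetical sketch of an expected-results check -- not the actual
# tools/testSomeExamples.py. The binary path, the example files, and the
# expected verdicts listed here are assumptions.
import subprocess
import sys

SLUGS = "src/slugs"  # assumed path to the compiled slugs binary

# Example specifications with their known verdicts (True = realizable).
EXPECTED = {
    "examples/someRealizableSpec.slugsin": True,
    "examples/someUnrealizableSpec.slugsin": False,
}

failures = 0
for spec, expected in EXPECTED.items():
    proc = subprocess.run([SLUGS, spec], capture_output=True, text=True)
    text = (proc.stdout + proc.stderr).lower()
    # "unrealizable" contains "realizable" as a substring, so test it first.
    got = "unrealizable" not in text and "realizable" in text
    print("%-45s expected=%s got=%s" % (spec, expected, got))
    failures += got != expected

sys.exit(1 if failures else 0)
```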
Are there unit tests or other kinds of tests that demonstrate correctness of slugs? E.g., this might be a collection of tests that are performed using something like "make test", which would be run after "cd src; make" as in the README.