In Test mode, RESTler will attempt to successfully execute all request types of the Swagger specification at least once in order to identify request types that are unreachable or unexercisable with the current test setup. In other words, the purpose of such a test, also called a smoke test, is to quickly (in minutes) debug the current test setup.
Inputs:

- a grammar.py file generated by the RESTler compiler
- a RESTler fuzzing dictionary in JSON format
- a configuration file and/or script that may be used to obtain a fresh authentication token, if required by your API. Configuring authentication is described in Authentication.
- a set of command-line options describing how to reach the service. See the example below, or run Restler.exe without arguments to get the list of supported options.
How to invoke RESTler in test mode:

```
C:\restler_bin\restler\Restler.exe test --grammar_file <RESTler grammar.py file> --dictionary_file <RESTler fuzzing-dictionary.json file> --token_refresh_interval <time in seconds> --token_refresh_command <command>
```
Outputs: see the sub-directory `Test`. RESTler will generate a sub-directory `Test\RestlerResults\experiment<GUID>\logs` including the following files:

- `speccov.json` contains the summary of coverage for all of the tested requests. This file is documented in more detail later on this page.
- `main.txt` is the main log documenting how the execution of each request is attempted; an INVALID status means that RESTler could not execute that request successfully.
- `request_rendering.txt` reports overall progress. Example: see the Tutorial. This file ends with `Rendered requests with "valid" status codes: 13 / 13`, which means that all 13 requests were VALID and thus executed successfully during the test. This is the best possible outcome since RESTler was able to achieve 13/13, that is, 100% Swagger specification coverage.
- `network.testing.<threadID>.txt` logs all HTTP(S) traffic generated by RESTler, including all REST API requests executed and their responses. This file is useful for detailed debugging. For instance, if some requests are never executed successfully by RESTler during the smoke test (INVALID status), the corresponding detailed requests generated by RESTler and their responses should be examined in order to troubleshoot and fix the issue, either by updating the Swagger spec (e.g., if it is incomplete), modifying one of the RESTler config files (e.g., dictionary, annotations, or examples), or manually editing the grammar.
- `network.gc.<threadID>.txt` and the corresponding `garbage_collector.gc.<threadID>.txt` are, respectively, the RESTler garbage-collector logs and the garbage-collector detailed traffic logs. These logs can be safely ignored except when troubleshooting the garbage collector, for instance in case of resource leaks.
RESTler will also generate a sub-directory `Test\ResponseBuckets` including the following files:

- `runSummary.json` is a report on all the HTTP response codes that were received
- `errorBuckets.json` includes a sample of up to 10 <request, response> pairs for each HTTP error code in the 4xx or 5xx ranges that was received
Warning: after running RESTler in test mode, you should monitor the service under test and delete any remaining resources created by RESTler. These left-over resources may be the result either of leaks (i.e., bugs found by RESTler) or of limitations in how RESTler is able to garbage-collect resources after testing (e.g., if resources are left after fuzzing in a state where they cannot be deleted).
By default, test mode will try to execute each request successfully once. This means that, if there are 5 possible values for a parameter, and the request is successfully executed when passing the first value, the remaining 4 will not be tested. In some cases, such as for differential regression testing, it is desirable to test all of the specified parameter values in Test mode. The command-line argument `test_all_combinations` may be specified in test mode in order to try all parameter values (up to `max_combinations`).
Results for all parameter combinations will be reported in the spec coverage file.
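The capping behavior described above can be sketched as follows. This is an illustrative helper, not RESTler code: `renderings_to_try` is a hypothetical name, and the only assumption taken from the text is that combinations are the cross-product of the candidate parameter values, truncated at `max_combinations`.

```python
from itertools import product

def renderings_to_try(param_values, max_combinations):
    """Enumerate parameter-value combinations, capped at max_combinations.

    param_values: dict mapping parameter name -> list of candidate values.
    Returns a list of dicts, one per combination that would be attempted.
    """
    combos = product(*param_values.values())
    capped = []
    for i, combo in enumerate(combos):
        if i >= max_combinations:
            break  # stop once the cap is reached
        capped.append(dict(zip(param_values.keys(), combo)))
    return capped

# Example: 5 values for "id" and 2 for "body" give 10 combinations,
# of which only the first 8 are attempted with max_combinations=8.
values = {"id": ["1", "2", "3", "4", "5"], "body": ["a", "b"]}
print(len(renderings_to_try(values, max_combinations=8)))  # 8
```

Without `test_all_combinations`, only the first successful combination is kept, which is why the remaining parameter values go untested by default.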
During each Test run, a `speccov.json` file will be created in the logs directory. This file contains test results for each request in the grammar. It is incrementally updated after each new request is covered. Each request is represented by a hash of its definition. For example:
```json
"5915766984a7c5deaaae43cae4cfb810c138d0f2_1__1": {
    "verb": "PUT",
    "endpoint": "/blog/posts/{postId}",
    "verb_endpoint": "PUT /blog/posts/{postId}",
    "valid": 0,
    "matching_prefix": [
        {
            "id": "1d7752f6d5ca3e03e423967a57335038a3d1bb70_1"
        }
    ],
    "invalid_due_to_sequence_failure": 0,
    "invalid_due_to_resource_failure": 0,
    "invalid_due_to_parser_failure": 0,
    "invalid_due_to_500": 0,
    "status_code": null,
    "status_text": null,
    "error_message": "{\n  \"errors\": {\n    \"id\": \"'5872' is not of type 'integer'\"\n  },\n  \"message\": \"Input payload validation failed\"\n}\n",
    "request_order": 4,
    "sample_request": {
        "request_sent_timestamp": null,
        "response_received_timestamp": "2021-07-02 05:10:12",
        "request_verb": "PUT",
        "request_uri": "/api/blog/posts/5872",
        "request_headers": [
            "Accept: application/json",
            "Host: localhost:8888",
            "Content-Type: application/json"
        ],
        "request_body": "{\n    \"id\":\"5872\",\n    \"checksum\":\"fuzzstring\",\n    \"body\":\"first blog\"}\r\n",
        "response_status_code": "400",
        "response_status_text": "BAD REQUEST",
        "response_headers": [
            "Content-Type: application/json",
            "Content-Length: 124",
            "Server: Werkzeug/0.16.0 Python/3.7.8",
            "Date: Fri, 02 Jul 2021 05:10:12 GMT"
        ],
        "response_body": "{\n  \"errors\": {\n    \"id\": \"'5872' is not of type 'integer'\"\n  },\n  \"message\": \"Input payload validation failed\"\n}\n"
    },
    "tracked_parameters": {
        "id": [
            "123"
        ],
        "body": [
            "\"first blog\""
        ]
    }
},
```
For any of the boolean values above, 0 represents False and 1 represents True.
- The "verb" and "endpoint" values are as you would expect from the request.
- The "valid" value specifies whether or not the request was considered valid by RESTler standards.
- For a request to be "valid" it must have received a 2xx response from the server.
- The "matching_prefix" dict contains the hash ID for the request that contained the matching prefix and whether or not that request was valid.
- If there was no matching prefix, it will say "None".
- If a request was invalid, the appropriate "invalid_due_to..." value will be set to 1.
- "sequence_failure" will be set if a failure occurs while rendering a previously valid prefix sequence.
- "resource_failure" will be set if the server responded with a 2xx, but the async resource creation polling indicated that there was a failure when creating the resource.
- "parser_failure" will be set if the server responded with a 2xx, but there was a failure while parsing the response data.
- "500" will be set if a 5xx bug was detected.
- The "status_code" and "status_text" values are the response values received from the server.
- The "sample_request" contains the concrete values of the sent request and received response for which the coverage data is being reported. This property is optional.
- The "sequence_failure_sample_request" contains the concrete values of the sent request that failed when a valid sequence was being re-rendered. This property is optional.
- The "error_message" value will be set to the response body if the request was not "valid".
- The "request_order" value is the 0-indexed order in which the request was sent.
- Requests sent during "preprocessing" or "postprocessing" will explicitly say so.
- The "tracked_parameters" property is optional and generated only when using Test mode with `test_all_combinations`. This property contains key-value pairs for all of the parameters for which more than one value is being tested. By default, enums and custom payloads are always tracked. In addition, when the specification was compiled with `TrackFuzzedParameterNames` set to `true`, all fuzzable parameters will be tracked.
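Since `speccov.json` maps request hashes to the entries documented above, a quick coverage summary can be computed with a short script. This is a sketch, not a RESTler utility; the field names (`valid`, `invalid_due_to_*`) come from the schema documented above, and `summarize_speccov` is a hypothetical helper name.

```python
import json

def summarize_speccov(path):
    """Count valid requests and tally failure reasons in a speccov.json file."""
    with open(path) as f:
        speccov = json.load(f)
    total = 0
    valid = 0
    failures = {}
    for entry in speccov.values():
        if not isinstance(entry, dict):
            continue  # skip anything that is not a request entry
        total += 1
        if entry.get("valid") == 1:
            valid += 1
        # Tally which "invalid_due_to_..." flags were set for this request.
        for key, flag in entry.items():
            if key.startswith("invalid_due_to_") and flag == 1:
                failures[key] = failures.get(key, 0) + 1
    return {"total": total, "valid": valid, "failures": failures}
```

For the 13/13 smoke-test example earlier on this page, such a summary would report `valid` equal to `total`, with an empty `failures` tally.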
The `utilities` directory contains a sub-directory called `speccovparsing` that contains scripts for postprocessing speccov files.

- `diff_speccov.py` can be run to diff speccov files and output the diff as a new JSON file. A "left" file is chosen as the baseline file, and a list of multiple "right" files can be specified to be compared to the left file.
- `sum_speccov.py` simply adds up the final coverage and failure types and creates a new JSON file with the output.
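To illustrate the kind of comparison `diff_speccov.py` performs, here is a minimal sketch that diffs only the "valid" field between a baseline ("left") file and one "right" file. It is not the actual script: `diff_valid` is a hypothetical name, and the real tool supports multiple right files and reports more fields.

```python
import json

def diff_valid(left_path, right_path):
    """Report requests whose 'valid' status differs between two speccov files."""
    with open(left_path) as f:
        left = json.load(f)
    with open(right_path) as f:
        right = json.load(f)
    diff = {}
    for req_hash, entry in left.items():
        other = right.get(req_hash)
        if other is None:
            # Request present in the baseline but missing on the right.
            diff[req_hash] = {"left": entry.get("valid"), "right": None}
        elif entry.get("valid") != other.get("valid"):
            diff[req_hash] = {"left": entry.get("valid"), "right": other.get("valid")}
    return diff
```

Because requests are keyed by a hash of their definition, entries can be matched across runs even when the request order differs, which is what makes this kind of regression diff meaningful.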