Running tests

This document goes over the basic steps to create and run tests for the components included in your extension.

Data Warehouse configuration

For running the test and capture scripts, you need to configure access to the data warehouse where your extension is supposed to run. To do so, rename the .env.template file in the root of the repository to .env and edit it with the appropriate values for each provider (BigQuery or Snowflake).
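For example, from the root of the repository:

$ mv .env.template .env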

For BigQuery, we only need to specify the project and dataset where the tests will run.

BQ_TEST_PROJECT=
BQ_TEST_DATASET=
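For example, with a hypothetical project and dataset (replace with your own values):

BQ_TEST_PROJECT=my-gcp-project
BQ_TEST_DATASET=carto_extension_tests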

Check this section to ensure you have authenticated correctly with BigQuery.

For Snowflake, we also need to set credentials to authenticate in the .env file.

SF_ACCOUNT=
SF_TEST_DATABASE=
SF_TEST_SCHEMA=
SF_USER=
SF_PASSWORD=
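For example, with hypothetical values (replace with your own, and avoid committing real credentials to the repository):

SF_ACCOUNT=myorg-myaccount
SF_TEST_DATABASE=CARTO_EXTENSION_TESTS
SF_TEST_SCHEMA=PUBLIC
SF_USER=test_user
SF_PASSWORD=<your-password>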

Files and folder structure

The content of the /components/<component_name>/test/ folder is as follows:

test/
    ├── test.json
    ├── table1.ndjson
    └── fixtures/
        ├── 1.json
        └── 2.json

test.json

Contains an array with the definition of each test, specifying its id and the values for each input:

[
    {
        "id": 1,
        "inputs": {
            "input_table": "table1",
            "value": "test"
        }
    },
    {
        "id": 2,
        "inputs": {
            "input_table": "table1",
            "value": "test2"
        }
    }
]

You can also add an `env_vars` property in case you need to pass test environment variables. This property is not mandatory; if missing, an empty dictionary will be passed.

[
    {
        "id": 1,
        "inputs": {
            "input_table": "table1",
            "value": "test"
        },
        "env_vars": {
          "analyticsToolboxDataset": "myproject.mydataset"
        }
    }
]

table1.ndjson

An NDJSON file that contains the data to be used in the test. It can have any name, but make sure it's correctly referenced in the input_table property of your test.json file. For example:

{"id":1,"name":"Alice"}
{"id":2,"name":"Bob"}
{"id":3,"name":"Carol"}

fixtures/<id>.json

The fixture files contain the expected result for each test defined in test.json. For example, for our test 1 we would have a 1.json file with this content:

{
    "output_table": [
        {
            "name": "Bob",
            "id": 2,
            "fixed_value_col": "test"
        },
        {
            "name": "Carol",
            "id": 3,
            "fixed_value_col": "test"
        },
        {
            "name": "Alice",
            "id": 1,
            "fixed_value_col": "test"
        }
    ]
}

When developing new components, the fixtures folder and its contents will be generated automatically by running the capture command:

$ python carto_extension.py capture

Setup

Set up the files in the test folder to define how the tests should be run to verify that the component works correctly. Check the Files and folder structure section above to understand which files are necessary to define the tests.

Run the capture script to create the test fixtures from the results of running your components in the corresponding data warehouse.

$ python carto_extension.py capture

This command will generate fixture files in the fixtures folder. Check the created files to ensure that the output is as expected.

From that point on, whenever you change the implementation of any of the components, you can run the test script to check that the results still match the captured outputs.

$ python carto_extension.py test

CI configuration

This template includes a GitHub workflow to run the extension test suite when new changes are pushed to the repository (provided that the capture script has been run and test fixtures have been captured).

GitHub secrets must be configured for the workflow to run correctly. Check the .github/workflows/CI_tests.yml file for more information.
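As an illustration only, a step in such a workflow typically maps repository secrets to the environment variables expected by the scripts. The secret names below are hypothetical and mirror the .env variables; the CI_tests.yml file in the template is the source of truth:

- name: Run extension tests
  env:
    BQ_TEST_PROJECT: ${{ secrets.BQ_TEST_PROJECT }}
    BQ_TEST_DATASET: ${{ secrets.BQ_TEST_DATASET }}
    SF_ACCOUNT: ${{ secrets.SF_ACCOUNT }}
    SF_TEST_DATABASE: ${{ secrets.SF_TEST_DATABASE }}
    SF_TEST_SCHEMA: ${{ secrets.SF_TEST_SCHEMA }}
    SF_USER: ${{ secrets.SF_USER }}
    SF_PASSWORD: ${{ secrets.SF_PASSWORD }}
  run: python carto_extension.py test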