07. Testing
To ensure the quality and correctness of the code, it's important to run tests regularly. When planning tests, it is best to follow a TDD (test-driven development) approach: we should be building our code to conform to the tests, not the other way around.
- Vitest - a Jest drop-in replacement, mainly used for unit and integration testing
- React Testing Library - lets us interact with the React DOM during testing
- Mock Service Worker (msw) - mocks network calls
To run the unit tests with the Vitest UI:
cd frontend
npm run test:ui
For our unit testing methodology, we will treat a unit as an isolated group of smaller items that interact with each other, such as a React component. We are testing for the expected output given the input we provide. To write tests efficiently, we should reference the UI designs and plan out all the data outputs and interactions. Let's be mindful of carefully crafting our tests to minimize useless tests; the sections below should help with that.
With the introduction of custom hooks, global state, context, etc., it has become harder to test unit groups in isolation. Previously we could pass props into components and simply mock those values; now we are moving our logic closer to, or into, the component that uses it via context, global state, hooks, etc. With this shift in how we write our components, our mocking methods have also changed.
We still want to keep a 'black box' testing approach to test components in isolation. We only care about the expected output for a given input, so we will be mocking the values that are fed into the component. This is done via the Vitest mocking utilities (vi.fn(), vi.spyOn(), vi.mock(), etc.) as well as by mocking network calls with msw.
With the above, we will be able to pass in mocked data returned from hooks and from the methods of third party modules. We will test that the relationship with these modules and their data is working, but we will not need to know what these modules do behind the scenes, effectively isolating our React component.
Vitest comes with a UI dashboard as well as code coverage reports. Our aim is 100% code coverage (statements, branches, functions, lines), which should be relatively achievable with the methodology we will be using.
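As a rough sketch of how such a coverage target can be enforced (this file is illustrative, not the project's actual config; recent Vitest versions nest the limits under coverage.thresholds):
// vitest.config.js - illustrative only
import { defineConfig } from 'vitest/config'

export default defineConfig({
  test: {
    coverage: {
      reporter: ['text', 'html'],
      // fail the run if coverage drops below 100%
      thresholds: {
        statements: 100,
        branches: 100,
        functions: 100,
        lines: 100
      }
    }
  }
})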
Things we will not be unit testing:
- Third party modules - we can assume these work as intended by their authors
- User flows - these should be covered by e2e tests (see the e2e section below)
- Interactions that reach outside the intended isolated unit (integration tests - coming soon(?))
- Static values (these won't change in any way anyways)
- API/network calls - all of our endpoints should be mocked
Things we will be unit testing:
- Utility functions
- Custom hooks
- React components
  - renders in the document
  - test all hooks used in the component
  - test all interactions/events
  - test all branches (conditionals/state changes)
Use MSW to mock all endpoints. Endpoints will need to be added to the MSW configuration as handlers so that it can intercept calls and return custom mocked values.
// /tests/utils/handlers.js
import { testServer } from '@/../testSetup'
import { apiRoutes } from '@/constants/routes'
import { http, HttpResponse } from 'msw'

const api = 'http://localhost:8000/api'

export const httpOverwrite = (endpoint, cb) => {
  return testServer.use(http.get(api + endpoint, cb))
}

export const handlers = [
  http.get(api + apiRoutes.currentUser, () =>
    HttpResponse.json({
      firstName: 'John',
      lastName: 'Doe'
    })
  ),
  // ... more handlers here
]
If we need to modify the return value during a test, we can use the httpOverwrite() helper function:
import { apiRoutes } from '@/constants/routes'
import { HttpResponse } from 'msw'
import { httpOverwrite } from '@/tests/utils/handlers'

describe('test', () => {
  it('should work', () => {
    httpOverwrite(apiRoutes.currentUser, () =>
      HttpResponse.json({
        firstName: 'Jane',
        lastName: 'Smith'
      })
    )
    // ... rest of test here
  })
})
Utility function tests should be relatively simple, as these are mostly pure functions.
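For example, a simple formatter could be tested like this (formatName and its import path are hypothetical, shown only to illustrate the pattern):
import { describe, it, expect } from 'vitest'
import { formatName } from '@/utils/formatters' // hypothetical utility under test

describe('formatName', () => {
  it('should join first and last name', () => {
    expect(formatName('John', 'Doe')).toBe('John Doe')
  })

  it('should handle a missing last name', () => {
    expect(formatName('John')).toBe('John')
  })
})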
Most of our custom hooks are react-query based hooks whose returned data is already mocked with msw, so testing them is a simple return-value check.
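A minimal sketch, assuming a useCurrentUser hook backed by the currentUser msw handler defined above (the hook's import path is hypothetical):
import { renderHook, waitFor } from '@testing-library/react'
import { wrapper } from '@/tests/utils/wrapper'
import { useCurrentUser } from '@/hooks/useCurrentUser' // hypothetical import path

describe('useCurrentUser', () => {
  it('should return the mocked current user', async () => {
    const { result } = renderHook(() => useCurrentUser(), { wrapper })

    // wait for the msw-mocked request to resolve
    await waitFor(() => expect(result.current.isSuccess).toBeTruthy())

    // the data comes straight from the msw handler
    expect(result.current.data.firstName).toBe('John')
  })
})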
For other custom hooks, or react-query hooks with extra methods, test that those methods return the expected values:
import { renderHook, act } from '@testing-library/react'
import { useCounter } from '@/hooks/useCounter' // example hook under test

describe('useCounter', () => {
  it('should increment the value', () => {
    const initialValue = 1;
    const { result } = renderHook(() => useCounter(initialValue));
    expect(result.current.value).toBe(initialValue);

    act(() => result.current.increment());
    expect(result.current.value).toEqual(2);
  });
});
As we are testing these components in isolation, the props and data from hooks/modules will be mocked.
import { render, screen } from '@testing-library/react'
import { wrapper } from '@/tests/utils/wrapper'
import { ReactComponent } from '@/components/ReactComponent' // component under test (illustrative import path)

describe('ReactComponent.jsx', () => {
  it('should render ReactComponent', async () => {
    render(<ReactComponent />, { wrapper })

    const component = await screen.findByTestId('component')
    expect(component).toBeInTheDocument()
  })
})
To test the hooks used in a component, wait for the hook's data to resolve before asserting on the rendered output:
import { render, screen, waitFor, renderHook } from '@testing-library/react'
import { wrapper } from '@/tests/utils/wrapper'
import { ReactComponent } from '@/components/ReactComponent' // component under test (illustrative import path)
import { useCustomReactQueryHook } from '@/hooks/useCustomReactQueryHook' // hook used by the component

describe('ReactComponent.jsx', () => {
  it('should render ReactComponent', async () => {
    const { result } = renderHook(() => useCustomReactQueryHook(), { wrapper })
    await waitFor(() => expect(result.current.isSuccess).toBeTruthy()) // specific to a custom react-query hook

    render(<ReactComponent />, { wrapper })
    const component = await screen.findByTestId('component')
    expect(component).toBeInTheDocument()
  })
})
For methods that come from within the component itself (for example an event handler received as a prop), pass in a mock function and assert that it was called:
import { render, screen, fireEvent } from '@testing-library/react'
import { wrapper } from '@/tests/utils/wrapper'
import { ReactComponent } from '@/components/ReactComponent' // component under test (illustrative import path)

describe('ReactComponent.jsx', () => {
  it('should call the handler when the button is clicked', async () => {
    const someFn = vi.fn()
    // onClick is a hypothetical prop on this placeholder component
    render(<ReactComponent onClick={someFn} />, { wrapper })

    const button = await screen.findByTestId('button')
    fireEvent.click(button)

    expect(someFn).toHaveBeenCalled()
  })
})
For methods that come from modules, spy on those methods:
import { render, screen, fireEvent } from '@testing-library/react'
import { wrapper } from '@/tests/utils/wrapper'
import * as exportedModule from 'module'

describe('ReactComponent.jsx', () => {
  it('should call the module method when the button is clicked', async () => {
    const someFn = vi
      .spyOn(exportedModule, 'methodName')
      .mockImplementation(() => {})

    render(<ReactComponent />, { wrapper })

    const button = await screen.findByTestId('button')
    fireEvent.click(button)

    expect(someFn).toHaveBeenCalled()
  })
})
Pass in the appropriate props that change the rendered output and test each case. Branches are easy to identify when you nest describe blocks:
describe('ReactComponent.jsx', () => {
  describe('is not authenticated', () => {
    it('should render null', () => {
      // ...
    })
  })

  describe('is authenticated', () => {
    describe('is loading', () => {
      it('should render the Loading component', () => {
        // ...
      })
    })

    describe('loaded', () => {
      it('should render the component', () => {
        // ...
      })
    })
  })
})
The vi.mock() function is hoisted to the top of the file. The vi.hoisted() function is also hoisted, in the order it appears: if vi.hoisted() is above vi.mock(), it will be hoisted to the top in that order. This is particularly useful when you need to change a mock's value per test.
// the hoisted value must be a mock function for mockReturnValue() to work
const apiMockReturnValue = vi.hoisted(() =>
  vi.fn(() => ({ someKey: 'someValue' }))
)

// the mocked module exposes the mock function (here as a hypothetical useApi export)
vi.mock('module', () => ({
  useApi: apiMockReturnValue
}))

describe('Component', () => {
  it('should render someValue', () => {
    // ... component calls useApi() from 'module'; someKey will be 'someValue'
  })

  it('should render differentValue', () => {
    apiMockReturnValue.mockReturnValue({
      someKey: 'differentValue'
    })
    // ... component calls useApi() from 'module'; someKey will be 'differentValue'
  })
})
- afterEach(), afterAll(), beforeEach(), beforeAll() - it's advised to clear mocks after each test to prevent leaky mocks:
afterEach(() => { vi.clearAllMocks() })
- beforeEach() example - testing the branch of an authenticated user:
describe('is authenticated', () => {
  beforeEach(async () => {
    keycloak.useKeycloak.mockReturnValue({
      keycloak: { authenticated: true }
    })

    const { result } = renderHook(() => useCurrentUser(), {
      wrapper
    })
    await waitFor(() => expect(result.current.isSuccess).toBeTruthy())

    render(<Logout />, { wrapper })
  })

  it('should do something', () => {
    // ...
  })

  it('should do another thing', () => {
    // ...
  })
})
e2e tests - Cypress with Cucumber
Cypress is used for end-to-end testing of the application. These tests simulate real user interactions and ensure the integrity of the user flows.
Cucumber complements the Cypress e2e testing by following a Behaviour-Driven Development (BDD) process. It reads executable specifications written in plain text and validates that the software does what those specifications say. For example:
Scenario: Breaker guesses a word
  Given the Maker has chosen a word
  When the Breaker makes a guess
  Then the Maker is asked to score
Each scenario is a list of steps for Cucumber to work through. It uses Gherkin language syntax.
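As a rough sketch of how a scenario's steps map to Cypress code (assuming the @badeball/cypress-cucumber-preprocessor package, which the .cypress-cucumber-preprocessorrc.json file suggests, and a purely hypothetical login scenario):
// cypress/e2e/Pages/step_definitions/login.test.js - hypothetical example
import { Given, When, Then } from '@badeball/cypress-cucumber-preprocessor'

Given('the user is on the login page', () => {
  cy.visit('/login')
})

When('the user logs in with valid credentials', () => {
  cy.get('[data-test="username"]').type(Cypress.env('IDIR_TEST_USER'))
  cy.get('[data-test="password"]').type(Cypress.env('IDIR_TEST_PASS'), { log: false })
  cy.get('[data-test="login-button"]').click()
})

Then('the dashboard is displayed', () => {
  cy.url().should('include', '/dashboard')
})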
To run Cypress tests, you need to set up environment variables which include sensitive information like test user credentials. Follow these steps to set up your environment:
- Copy the cypress.env.example.json file located in the frontend directory and rename the copy to cypress.env.json:
cd frontend
cp cypress.env.example.json cypress.env.json
- Edit the cypress.env.json file to include your specific idir and bceid test credentials. The file should look something like this:
{
  "IDIR_TEST_USER": "user",
  "IDIR_TEST_PASS": "password",
  "BCEID_TEST_USER": "lcfs1",
  "BCEID_TEST_PASS": "xxxxxxxxx",
  "admin_idir_username": "user",
  "admin_idir_password": "password",
  "org1_bceid_username": "lcfs1",
  "org1_bceid_password": "xxxxxxxxx",
  "org1_bceid_id": "1",
  "org1_bceid_userId": "7",
  "org2_bceid_username": "LCFS2",
  "org2_bceid_password": "xxxxxxxxx",
  "org2_bceid_id": "2",
  "org2_bceid_userId": "8"
}
- Do not commit cypress.env.json to version control. It has been added to .gitignore to prevent exposing sensitive information.
To run Cypress tests interactively:
cd frontend
npm run cypress:open
This opens the Cypress Test Runner, from which you can execute individual tests or the entire test suite.
For headless execution (useful for CI/CD pipelines):
cd frontend
npm run cypress:run
For executing feature-based test cases using tags, some optional commands are shown below:
npm run cypress:run --env tags="@transfer"                   # runs only test cases related to the transfer feature
npm run cypress:run --env tags="not @transfer"               # excludes transfer-related test cases
npm run cypress:run --env tags="@transfer or @organization"  # runs test cases tagged with either feature
npm run cypress:run --env tags="@transfer and @organization" # runs test cases tagged with both features
When contributing new tests:
/lcfs/frontend
├── cypress
│   ├── e2e
│   │   ├── Pages
│   │   │   ├── features              # Cucumber-based feature files (*.feature)
│   │   │   └── step_definitions      # Step definition files for features (*.test.js)
│   │   └── cypress tests             # Cypress test files (*.cy.js)
│   ├── support                       # Support files (e.g., commands.js, index.js)
│   ├── reports                       # Generated test execution reports
│   ├── fixtures                      # Fixtures for test case setup
│   └── screenshots                   # Generated test execution screenshots
├── cypress.config.js                 # Cypress configuration file
├── cypress.env.json                  # Cypress environment file containing all the secrets
├── .cypress-cucumber-preprocessorrc.json  # Cucumber configuration file
└── package.json                      # Project's package.json
- Add your test files under frontend/cypress/e2e.
- Use descriptive names for test files and test cases.
- Follow established patterns for structuring tests, such as using beforeEach and custom commands for routine tasks.
- Utilize data attributes like data-test and id for more stable element selection (see the sketch after this list).
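A small sketch of those last two points (the command name, route, and selectors are hypothetical):
// cypress/support/commands.js - hypothetical custom command
Cypress.Commands.add('login', (username, password) => {
  cy.visit('/login')
  cy.get('[data-test="username"]').type(username)
  cy.get('[data-test="password"]').type(password, { log: false })
  cy.get('[data-test="login-button"]').click()
})

// in a test file, reuse it as a routine setup step
beforeEach(() => {
  cy.login(Cypress.env('IDIR_TEST_USER'), Cypress.env('IDIR_TEST_PASS'))
})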
- Chrome Recorder - this repo provides tools to export Cypress tests from Google Chrome DevTools recordings programmatically.
- To update the configuration file for Cypress, go to frontend/cypress.config.js.
- For Cypress environment variables, refer to the file located at frontend/cypress.env.json.
- Refer to the Cypress Documentation for best practices and detailed guidance.
- Refer to the Cucumber-Cypress Quick Guide for guidance.
- Refer to the list of useful Cypress plugins.
Backend tests - Pytest
Before running the backend tests, ensure the following prerequisites are met:
- PostgreSQL Instance: A running instance of PostgreSQL is required. Ideally, use the provided docker-compose file to start a PostgreSQL container. This ensures consistency in the testing environment.
- Python Environment: Make sure your Python environment is set up with all necessary dependencies. This can be achieved using Poetry:
poetry install
The project's tests can be executed using the pytest command. Our testing framework is configured to handle the setup and teardown of the test environment automatically. Here's what happens when you run the tests:
- Test Database Setup: A test database is automatically created. This is separate from your development or production databases to avoid any unintended data modifications.
- Database Migrations: Alembic migrations are run against the test database to ensure it has the correct schema.
- Data Seeding: The test_seeder is used to populate the test database with necessary data for the tests.
- Test Execution: All test cases are run against the configured test database.
- Teardown: After the tests have completed, the test database is dropped to clean up the environment.
To run the tests, use the following command in your terminal:
poetry run pytest -s -v
Options:
- -s: Disables per-test capturing of stdout/stderr. This is useful for observing print statements and other console outputs in real time.
- -v: Verbose mode. Provides detailed information about each test being run.