Testing

In the backend platform we write and maintain both unit and integration tests. We use Vitest for all of them: a modern test framework whose API is almost identical to Jest's.

Running Tests

We've created a helper script for running tests which provides shortcuts for common command arguments. It is available via the pnpm run vi command.

The script takes five optional arguments. Each is passed as a dash followed by a letter, then a space, then the value, e.g. -d c.

dir (-d) // directory to search for tests (defaults to all folders)
file (-f) // pattern matching the test file name (no default)
name (-n) // pattern matching the test name (no default)
watch (-w) // start tests in watch mode (defaults to no)
coverage (-c) // generate test coverage (defaults to no)

The directory argument accepts the following shortcuts (defined in ./app/scripts/vitest.sh)

a ./
c ./components
i ./integrations
m ./modules
t ./tests
cu ./modules/customers
tr ./modules/trading
in ./modules/inventory
sy ./modules/system
lo ./modules/logistics
li ./modules/livestock

Examples

pnpm run vi # run all tests
pnpm run vi -d c # run all tests in the components directory
pnpm run vi -d c -w y # run all tests in the components directory in watch mode
pnpm run vi -d t -n "Close Listing Tests" # run tests in the tests folder whose name matches "Close Listing Tests"
pnpm run vi -d t -f "bids-" # run all tests in the tests folder whose file name matches "bids-"
pnpm run vi -d m -f "records-" # run all tests in the modules folder whose file name matches "records-"
pnpm run vi -d c -n "Get all roles" # run all tests in the components folder whose name contains "Get all roles"
pnpm run vi -d m -c y # run all tests under the modules folder and generate code coverage (output to the ./coverage folder)
pnpm run vi -d c -w y -c y # run all tests in the components directory in watch mode and generate coverage
Unit Tests

In general we write unit tests for the more complex functions and processes. Unit tests should live alongside the file containing the code under test, in a .spec.ts file.
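As a minimal sketch of the convention (applyCommission and price.ts are hypothetical names used purely for illustration), a test for code in price.ts would sit next to it in price.spec.ts:

// price.spec.ts — lives alongside price.ts (both names hypothetical)
import { describe, expect, test } from 'vitest'

// normally this would be imported from the file under test, e.g. './price.js'
const applyCommission = (price: number, rate: number): number => price * (1 + rate)

describe('applyCommission', () => {
  test('adds a 25% commission to the sale price', () => {
    expect(applyCommission(100, 0.25)).toBe(125)
  })

  test('leaves the price unchanged when the rate is zero', () => {
    expect(applyCommission(100, 0)).toBe(100)
  })
})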

Integration Tests

Integration tests live in the tests folder and are organised by module. Each developer and environment has a dedicated integration test database that can be re-seeded at any time and should be re-seeded before each test run.

To reset your test database simply run

pnpm run db:test

This creates a new database and seeds it with the test data. The database name will be integrationtests_[DeveloperName], where DeveloperName comes from your overrides.json file (a developer name of jane, for example, gives integrationtests_jane).

Seed data for each module should not be modified during a test, as doing so can create problems for other tests. In general, seed data should be reference / static data, such as asset categories, that is often needed but can safely be shared by all tests.

When the database is created, all indexes for each service under test are also created and validated.

Test Driver

The tests/utils folder contains resources to assist with testing. The main helper is the TestDriver class. There is a singleton instance of this which should be re-used across tests, as it is an expensive resource to create (it has to parse the OpenAPI documents for the API).

You can import the driver and use it as follows

import { EventBuilder, driver } from '../../utils/index.js'
import { expect, test } from 'vitest'

test('That we can retrieve a record', async () => {
  // build a GET event for the API path under test
  const event = EventBuilder.Get('/inventory/records/123-A').done()
  const ctx = await driver.executeApi(event)
  expect(ctx.event.response.statusCode).toBe(200)
})

In the API example above, you create your event payload and call the executeApi method on the driver. executeApi locates the appropriate handler based on the API path, validates the incoming payload, and executes the handler.

The EventBuilder is a simple utility that helps create these events.
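As an illustration only, building events might look something like the sketch below; the Post and withBody methods are assumptions introduced for this example, so check the EventBuilder source in tests/utils for the methods actually available.

import { EventBuilder } from '../../utils/index.js'

// a GET event, as used in the example above
const getEvent = EventBuilder.Get('/inventory/records/123-A').done()

// a POST event with a body — Post and withBody are hypothetical method
// names; see the EventBuilder definition for the real API
const postEvent = EventBuilder.Post('/inventory/records')
  .withBody({ name: 'Wheat', location: 'Storage' })
  .done()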

Registering & Testing Event Handlers

Integration tests use a mock event bus for publishing events. We can use this mock bus to register handlers, which lets us test that events were fired and validate their payloads. Note that we can't run expectations inside the handler itself: it executes outside the test context, so a failure there will not cause the test to fail. Instead, we capture the event payload in the registered handler and run our expectations after the event has fired, as in the example below.

// register an event handler so we can test that the RecordCreated event is fired
let eventPayload: RecordCreated | null = null
driver.registerEventHandler('RecordCreated', async (ctx): Promise<void> => {
  // capture the payload; expectations are run later, inside the test context
  eventPayload = ctx.event.payload as RecordCreated
})

// execute the API call which triggers the event
await driver.executeApi(event)

// validate the payload after the API call returns
expect(eventPayload?.name).toBe('Wheat')
expect(eventPayload?.location).toBe('Storage')

Debugging Tests

Because of the way Bit works, our code is loaded from the node_modules folder after transpiling, so we need source maps to debug our TS code. The simplest way to debug a test is to use the Debug Test File launch configuration in VS Code, which has settings that tell VS Code where to find the relevant source maps. Follow these instructions...

  1. Put a breakpoint in the code the test will hit at the point you want to debug
  2. Open the file with the test in it
  3. Switch to the debug tab in VS Code
  4. Choose the Debug Test File launch configuration and click the play button

Your breakpoints in both the test file and referenced code should get hit successfully.
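For reference, a Debug Test File entry in .vscode/launch.json might look something like the sketch below. This follows Vitest's standard VS Code debugging recipe rather than our exact configuration, so treat the values as assumptions; the resolveSourceMapLocations entry is the part that lets VS Code resolve source maps under node_modules.

{
  "type": "node",
  "request": "launch",
  "name": "Debug Test File",
  "autoAttachChildProcesses": true,
  "program": "${workspaceFolder}/node_modules/vitest/vitest.mjs",
  "args": ["run", "${relativeFile}"],
  "smartStep": true,
  "console": "integratedTerminal",
  "resolveSourceMapLocations": [
    "${workspaceFolder}/**",
    "${workspaceFolder}/node_modules/**"
  ]
}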