Automated integration tests

Here, you will find information on how tests in ska-sdp-integration are implemented, including what you need to know if you want to add your own tests.

We also describe how you can run the tests in different environments and give an overview of existing automated tests.

The SDP integration tests aim to test the interactions between various components in a real-life Kubernetes setup. SDP is deployed on its own with all of its components, which then interact with each other without any of them being mocked. The tests cover the controller and subarray devices, as well as various processing script and pipeline tests (e.g. vis-receive, pointing-offset).

The tests and all related files can be found in the tests directory. All of the file paths given below are relative to this directory.

context fixture

context is a pytest fixture defined in tests/conftest.py. It contains variables and definitions that are global to all tests and are used by most of them. This includes, but is not limited to:

  • setup for TangoGQL (e.g. ingress, host, port)
  • QA metrics and Kafka hosts
  • Tango device names
  • information loaded from environment variables

Use this fixture if you need to add new information similar to the items listed above.
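
As a rough illustration, a test accesses this information simply by declaring the fixture as an argument. The sketch below assumes the fixture behaves like a dictionary and uses hypothetical key names; check tests/conftest.py for the actual contents:

# Minimal sketch, assuming the context fixture behaves like a dictionary;
# the keys used here are hypothetical.
def test_devices_are_reachable(context):
    subarray_name = context["subarray_device"]   # e.g. "test-sdp/subarray/01"
    namespace = context["kube_namespace"]        # loaded from KUBE_NAMESPACE
    assert subarray_name and namespace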

Environment variables

The following variables are used in the tests. In the table below, the default is the value used by the tests if the environment variable is not set:

Name | Description | Default
KUBE_NAMESPACE | Namespace where the control system is running | None
KUBE_NAMESPACE_SDP | Namespace for processing | None
SUBARRAY_ID | Which Subarray Tango device to use for the test (e.g. test-sdp/subarray/01) | 01
TEST_INGRESS | Ingress to connect to the cluster; if not set, the test is assumed to be running inside the cluster | None
TEST_TANGO_CLIENT | Which Tango client to use: dp or gql | "dp"
DATA_PRODUCT_PVC_NAME | PersistentVolumeClaim where data products are saved and accessed by the tests | None
HELM_UNINSTALL_HAS_WAIT | Whether "helm uninstall" supports --wait; treated as supported only if set to "1" | None
TEST_MARKER | Only run BDD tests with this tag (or a combination of tags using "and" and "not") | None

In addition, various custom timeout variables are also used; see the "Custom timeouts" section below.
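
A minimal sketch of how such variables are typically read with their defaults (the actual handling in tests/conftest.py may differ in detail):

import os

# Illustrative only: fall back to the documented default when the
# environment variable is not set.
SUBARRAY_ID = os.environ.get("SUBARRAY_ID", "01")
TEST_TANGO_CLIENT = os.environ.get("TEST_TANGO_CLIENT", "dp")
TEST_INGRESS = os.environ.get("TEST_INGRESS")  # None => running inside the cluster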

Custom timeouts

Parts of command and test execution are controlled by timeouts, which ensure that a step waits a reasonable amount of time for a long-running command or test step to complete.

The most important timeouts can be controlled via environment variables, which are loaded into the context fixture. All of the values are given in seconds:

Name | Description | Default
TEST_TIMEOUT_ASSIGN_RES | AssignResources command timeout | 120
TEST_TIMEOUT_CONFIGURE | Configure command timeout | 300
TEST_TIMEOUT_WAIT_POD | Timeout for waiting for a pod to start (including downloading images) | 300
TEST_TIMEOUT_DEVICE_ON_OFF | Timeout for the QueueConnector and MockDish devices to change state to ON or OFF | 20
TEST_TIMEOUT_WAIT_TO_EXECUTE | Wait for various resources, image downloads, some processing; for QA tests | 300

When adding a new test, please use these timeouts or, if relevant, add a new one, instead of hard-coding values in your test.

Commands other than AssignResources and Configure are not long-running; the timeout for waiting for the state transition resulting from these commands therefore cannot be configured and defaults to 60 seconds.
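
As an illustration, a step waiting for a state transition would read the relevant timeout from the context rather than hard-coding it. The helper and the device method below are hypothetical:

import time

def wait_for_obs_state(device, expected, timeout):
    """Hypothetical sketch: poll the wrapped device client until obsState
    reaches the expected value, with the timeout taken from the context
    fixture (e.g. the value loaded from TEST_TIMEOUT_ASSIGN_RES)."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if device.get_obs_state() == expected:  # get_obs_state is illustrative
            return
        time.sleep(1)
    raise TimeoutError(f"obsState did not reach {expected} within {timeout}s")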

config_params fixture

config_params is another pytest fixture defined in tests/conftest.py. It returns a dictionary intended to hold information about which data, telescope metadata or other test configuration should be used.

The config_params dictionary is populated by the following Given step in tests/integration/conftest.py:

@given(
    parsers.parse("I select the dataset for the {telescope_name:S} telescope")
)
def set_config_params(config_params, telescope_name):
    """
    Set parameters to determine test configuration
    """
    config_params["mode"] = telescope_name

The config_params ‘mode’ is later used to select which data and telescope metadata a test should use. This means that any test can specify which data it wants by including the line "I select the dataset for the <telescope name> telescope" in its feature file. Use the telescope names ‘Mid’ and ‘Low’ to choose data for the respective telescopes.
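
As an illustration of how the ‘mode’ can then be used, a later step might map it to the dataset described in the next section. The mapping below is a sketch, not the actual lookup code:

# Illustrative mapping from config_params["mode"] to the test data
# described in the next section.
DATASETS = {
    "Low": "tests/resources/data/AA05LOW.ms",
    "Mid": "tests/resources/data/pointing-data/scan-1.ms",
}

def select_dataset(config_params):
    return DATASETS[config_params["mode"]]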

Data and telescope metadata

The data used in the integration tests is located at tests/resources/data under AA05LOW.ms for Low and pointing-data/scan-1.ms for Mid.

The configuration string for the AssignResources command is located at tests/resources/subarray-json and is named low.json and mid.json for the respective telescopes.

Shared BDD steps

Shared BDD steps are located in tests/integration/conftest.py, and common pytest fixtures are defined in tests/conftest.py. Shared steps can be referenced in a feature file without being imported into the corresponding test_ file, and shared fixtures can be used in test_ file functions by specifying the fixture name as an argument (see the sketch after the lists below).

tests/integration/conftest.py:

Given I connect to an SDP subarray
Given I connect to the SDP controller
Given the volumes are created and the CBF emulator input data is copied
Given obsState is {obs_state:S}
Given I deploy the visibility receive script
Given I select the dataset for the {telescope_name:S} telescope

Then the state is {state:S}
Then the obsState is {obs_state:S}

tests/conftest.py:

context
k8s_element_manager
dataproduct_directory
subarray_ready
vis_receive_script
config_params
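
A minimal sketch of how a test_ file can pick these up; the feature file name and the extra Then step here are purely illustrative:

from pytest_bdd import parsers, scenarios, then

# Shared Given/Then steps from tests/integration/conftest.py are matched in
# the feature file without any import here.
scenarios("my_feature.feature")  # illustrative feature file name

# Shared fixtures are injected simply by naming them as arguments.
@then(parsers.parse("the selected mode is {mode:S}"))
def check_mode(config_params, mode):
    assert config_params["mode"] == mode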

Using a Tango client

The tests connect to the Tango devices using one of two mechanisms: a Tango DeviceProxy or the TangoGQL web interface. Each has been wrapped with the same methods (code defined in tests/common/tango), and an environment variable controls which one is used in a given environment.

Set TEST_TANGO_CLIENT to dp to use DeviceProxy, or to gql to use TangoGQL. Tests running in the CI pipeline (inside the same cluster as the SDP deployment) use the DeviceProxy client. Manual tests running outside the cluster, against an SDP deployment in a local Minikube cluster or a remote cluster, need to use the TangoGQL client.
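
Conceptually, the selection works along the lines below; the class names are purely illustrative, and the real wrappers live in tests/common/tango:

import os

class DeviceProxyClient:      # illustrative: would wrap tango.DeviceProxy
    def __init__(self, device_name):
        self.device_name = device_name

class TangoGQLClient:         # illustrative: would talk to TangoGQL via the ingress
    def __init__(self, device_name, ingress):
        self.device_name = device_name
        self.ingress = ingress

def make_tango_client(device_name, ingress=None):
    client_type = os.environ.get("TEST_TANGO_CLIENT", "dp")
    if client_type == "dp":
        return DeviceProxyClient(device_name)
    if client_type == "gql":
        return TangoGQLClient(device_name, ingress)
    raise ValueError(f"Unknown TEST_TANGO_CLIENT: {client_type}")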

Values files for testing

Three values files are provided in resources/values to deploy the SDP for testing in different environments:

  • test-ci.yaml is used in the CI pipeline with the DeviceProxy client.
  • test-external.yaml is intended to be used with the TangoGQL client in remote clusters, such as the DP cluster, where the Taranta auth chart is centrally deployed.
  • test-external-auth.yaml is intended to be used with the TangoGQL client in Minikube, where the Taranta auth chart is not already deployed.

Running the tests

In the following, we assume that you have installed SDP into the environment and namespace of your choice, together with a PersistentVolumeClaim for both the control system and the processing namespaces (you can achieve this by using one of the values files described above as your custom values file when running the helm install command).

The tests are marked with pytest markers. The TEST_MARKER environment variable specifies which tests will run when using the make targets. For example, to run only the visibility receive test grouping, set TEST_MARKER="visibility_receive".

Export the relevant namespace environment variables (update accordingly):

export KUBE_NAMESPACE=<control-system-namespace>
export KUBE_NAMESPACE_SDP=<processing-namespace>

Some of the steps described below differ slightly depending on whether you are running SDP in Minikube or on the DP Platform. We will clearly mark the differences. If you want to use the DP Platform, note that you will need VPN access; please follow the instructions in Confluence (make sure you request access to the DP Platform, as the TechOps cluster uses a different VPN). You must be connected to the VPN when running the tests on the DP Platform; no VPN is needed for Minikube.

For the DP Platform, you also need KUBECONFIG access (if you installed SDP yourself, you will already have exported the file as needed):

export KUBECONFIG=<my-config-file-to-dp>

Do not set this variable for Minikube. If you have already exported a file, unset it:

unset KUBECONFIG

Set the ingress URL. For Minikube:

export TEST_INGRESS=http://$(minikube ip)

For the DP Platform:

export TEST_INGRESS=http://k8s.sdhp.skao

NOTE

If you are using Minikube with the Docker driver on macOS, you must enable tunnelling for the tests to work. This is done by running the following command in a separate terminal:

minikube tunnel

It may ask you for an administrator password to open privileged ports. The command must remain running for the tunnel to be active.


You can run the tests with:

make test

NOTE

The visibility receive test requires a couple of helper pods that connect to persistent volumes. The volumes contain MeasurementSet data (stored in the repository using Git LFS, see Troubleshooting), which is used for sending and validating the received data. The pods are created and removed automatically by the test.


Testing in CI/CD

Jobs on every commit

The SDP integration repository runs the tests automatically as part of its CI/CD pipeline in GitLab. See the pipelines page.

By default, on every commit, the test jobs are split into three parallel streams, running on three separate subarray devices (test-sdp/subarray/01, 02 and 03):

  1. Tests tagged in their BDD feature files with SKA_mid run in the k8s-test-mid job
  2. Tests tagged in their BDD feature files with SKA_low run in the k8s-test-low job
  3. All other tests run in a common job called k8s-test-common

Alternatively, all of the tests can be run on a single subarray by adding the TEST_IN_SERIAL variable to the CI pipeline (this can be done in GitLab or in code on a branch).

There is a k8s-test-setup CI job, which installs SDP in the KUBE_NAMESPACE and runs in the pre-test stage, and a corresponding k8s-test-cleanup job, which removes this deployment and runs in the post-test stage.

All of the tests, except the one marked as “alternating_scans”, run on every commit.

When designing new tests, it is important to remember that concurrent test instances may be running on different subarrays. Existing tests incorporate their SUBARRAY_ID into the names of any local resources used for testing.
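
For example, a test creating a local Kubernetes resource would typically derive its name from SUBARRAY_ID so that parallel jobs do not clash; the naming scheme below is illustrative only:

import os

def subarray_scoped_name(base_name):
    """Illustrative only: include the subarray ID in locally created resource
    names so that parallel CI jobs on different subarrays do not collide."""
    subarray_id = os.environ.get("SUBARRAY_ID", "01")
    return f"{base_name}-{subarray_id}"

# e.g. subarray_scoped_name("vis-receive-sender") -> "vis-receive-sender-01"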

Scheduled jobs

Currently, three scheduled jobs are set up to run tests; they can be found on the Pipeline Schedules page:

  1. Daily: Runs all of the tests (except “alternating_scan” ones) once a day, in the middle of the night (UK time).
  2. Persistent SDP (DP Platform): Runs once an hour between 8 am and 4 pm UTC. It only runs the visibility_receive test, in the dp-shared namespace on the DP Platform (see Section “Testing persistent deployments”).
  3. Alternate Scans (DP Platform): Runs an extended version (20 minutes long) of the visibility_receive test, on the DP Platform (dp-shared namespace), once a day during the night (UK time).

Testing persistent deployments

At the moment, there are three persistent deployments of SDP, each running in its own pair of namespaces on the Data Processing Platform.

dp-shared

The dp-shared (main) and dp-shared-p (for processing scripts) namespaces host the first persistent SDP deployment, which is used for manual testing, experimenting, and scheduled testing. It runs with two subarray devices: the first is used for scheduled tests, while the second can be used for manual runs.

The schedule runs once every hour on the master branch (see the Persistent SDP (DP Platform) schedule) and executes only the visibility receive test.

sdp-integration

The sdp-integration (main) and sdp-integration-p (for processing scripts) namespaces host the Integration deployment of SDP. This deployment is upgraded every time new code is merged to the master branch of the sdp-integration repository.

This allows for continuous integration of new code into a running system.

sdp-staging

The sdp-staging (main) and sdp-staging-p (for processing scripts) namespaces host the Staging deployment of SDP. This deployment is upgraded every time the SDP Helm chart is released with a new tag.

This allows for continuous deployment of new code into a running system.

Eventually, we may merge this deployment with the one in dp-shared and use that as our staging environment, accessible both to users and to the GitLab CI pipeline.