How to run an execution block

This documentation explains how to run an execution block to capture visibility data, and how to monitor the status of the data as they come in.

Command the SDP to run an execution block

  1. To get a stand-alone instance of the SDP running, please refer to Running the SDP stand-alone. Once you have a running instance, you can issue commands to the Tango device to run a scan. Remember that you need to create (or already have) two namespaces beforehand. In this documentation, <namespace> refers to the namespace where the controllers are deployed, and <namespace-proc> refers to the namespace where the processing scripts are deployed. If you use the SDP Integration makefile, the default namespaces are test and test-sdp.

    • If you are running the SDP in an SKA environment with access to the shared Data Product Dashboard, use the following command to deploy the Helm chart, replacing the namespaces with those you have chosen or been given. This command enables the use of the shared Data Product Dashboard.

      helm install test ska/ska-sdp --set ska-tango-base.itango.enabled=true -n <namespace> --set helmdeploy.namespace=<namespace-proc>,global.data-product-pvc-name=shared
      
    • If you are running the SDP locally, or in an environment without access to the shared Data Product Dashboard, use this command instead. It creates your own Data Product Dashboard.

      helm install test ska/ska-sdp --set ska-tango-base.itango.enabled=true -n <namespace> --set helmdeploy.namespace=<namespace-proc>,global.data-product-pvc-name=test-pvc,data-pvc.create.enabled=true
      
    • When working with the SDP, it is suggested that you have three terminals open: one for iTango, one for the first namespace, and one for the second namespace.

  2. Subarray schemas

    The sub-array schemas are detailed in this link.

  3. Visibility Receive Processing Script parameters

    Details of the Visibility Receive Script parameters can be found here. A more detailed description can be found in a GitLab repository README.

  4. Subarray Tango device commands

    To access the interactive Tango interface (iTango), run the following command. For more information see Accessing the Tango interface.

    kubectl exec -it ska-tango-base-itango-console -n <namespace> -- itango3
    

    Once you are connected and have started an iTango session, issue the following commands to run a scan.

    1. List the iTango devices

      lsdev
      
    2. Turning on subarray

      Pick a sub-array from the list, e.g. test-sdp/subarray/01.

      d = DeviceProxy('test-sdp/subarray/01')
      d.state()
      d.On()
      d.state()
      d.obsState
      
    1. Assign resources

      The state should now be ON, which means the device has transitioned to the operational state. Next, we need to start the execution block using the AssignResources command. It takes an argument containing configuration data in JSON format. The data are described by a schema, which is versioned to support evolution of the interfaces. The schema is specified in the argument with the interface keyword.

      The configuration string defines externally managed resources and an execution block (EB). The EB contains the information about the processing blocks (PBs) that is required for the SDP to receive visibility data from the correlator beam-former (CBF), provide calibration solutions, receive candidate and timing data from the pulsar search and timing subsystems, and define scan types.

      Below is a sample of the AssignResources configuration. Note that it is written as a Python dictionary, since it refers to the EXECUTION_BLOCK_ID, PROCESSING_BLOCK_ID_REALTIME and KAFKA_HOST variables defined further down:

      {
          "interface": "https://schema.skao.int/ska-sdp-assignres/0.4",
          "resources": {
              "csp_links": [1, 2, 3, 4],
              "receptors": [ "C10", "C136", "C1", "C217", "C13", "C42"],
              "receive_nodes": 1
          },
          "execution_block": {
              "eb_id": EXECUTION_BLOCK_ID,
              "context": {},
              "max_length": 21600.0,
              "channels": [
              {
                  "channels_id": "vis_channels",
                  "spectral_windows": [
                  {
                      "spectral_window_id": "fsp_1_channels",
                      "count": 13824,
                      "start": 0,
                      "stride": 2,
                      "freq_min": 350000000.0,
                      "freq_max": 368000000.0,
                      "link_map": [[0,0],[200,1],[744,2],[944,3]]
                  }
                  ]
              }
              ],
              "polarisations": [
              {
                  "polarisations_id": "all",
                  "corr_type": ["XX","XY","YY","YX"]
              }
              ],
              "fields": [
              {
                  "field_id": "field_a",
                  "phase_dir": {
                      "ra": [2.711325],
                      "dec": [-0.01328889],
                      "reference_time": "...",
                      "reference_frame": "ICRF3"
                  },
                  "pointing_fqdn": "low-tmc/telstate/0/pointing"
              },
              {
                  "field_id": "field_b",
                  "phase_dir": {
                      "ra": [12.48519],
                      "dec": [2.052388],
                      "reference_time": "...",
                      "reference_frame": "ICRF3"
                  },
                  "pointing_fqdn": "low-tmc/telstate/0/pointing"
              }
              ],
              "beams": [
              {
                  "beam_id": "vis0",
                  "function": "visibilities"
              }
              ],
              "scan_types": [
              {
                  "scan_type_id": ".default",
                  "beams": {
                  "vis0": {
                      "polarisations_id": "all",
                      "channels_id": "vis_channels"
                  }
                  }
              },
              {
                  "scan_type_id": "science",
                  "derive_from": ".default",
                  "beams": {
                      "vis0": {
                          "field_id": "field_a"
                      }
                  }
              },
              {
                  "scan_type_id": "calibration",
                  "derive_from": ".default",
                  "beams": {
                      "vis0": {
                          "field_id": "field_b"
                      }
                  }
              }
              ]
          },
          "processing_blocks": [
              {
                  "pb_id": PROCESSING_BLOCK_ID_REALTIME,
                  "script": {
                      "kind": "realtime",
                      "name": "vis-receive",
                      "version": "1.1.1"
                  },
                  "parameters": {
                      "channels_per_port": 6912,
                      "receiver": {
                          "options": {
                              "reception": {
                                  "transport_protocol": "tcp",
                                  "stats_receiver_kafka_config": f"{KAFKA_HOST}:json_workflow_state"
                              },
                              "telescope_model": {
                                  "telmodel_source_uris": "gitlab://gitlab.com/ska-telescope/sdp/ska-sdp-tmlite-repository?dfa075be#tmdata"
                              }
                          },
                          "pod_settings": [
                              {
                              "securityContext": {
                                  "runAsUser": 0,
                                  "fsGroup": 0
                                  }
                              }
                          ],
                      },
                      "processors": [ "mswriter",
                          {
                              "name": "qa-metrics-generator-plasma-receiver",
                              "image": "artefact.skao.int/ska-sdp-qa-metric-generator",
                              "version": "0.13.0",
                              "command": [
                                  "plasma-processor",
                                  "ska_sdp_qa_metric_generator.plasma_to_qa.SignalDisplay",
                                  "--plasma_socket",
                                  "/plasma/socket",
                                  "--readiness-file",
                                  "/tmp/processor_ready",
                                  "--use-sdp-metadata",
                                  "False",
                                  "--verbose"
                              ],
                              "readinessProbe": {
                                  "initialDelaySeconds": 5,
                                  "periodSeconds": 5,
                                  "exec": {
                                      "command": [
                                          "cat",
                                          "/tmp/processor_ready"
                                      ]
                                  }
                              },
                              "env": [
                                  {
                                      "name": "BROKER_INSTANCE",
                                      "value": KAFKA_HOST
                                  },
                                  {
                                      "name": "MESSAGE_TYPE",
                                      "value": "json"
                                  }
                              ]
                          }
                      ]
                  }
              }
          ]
      }
      

      To run the AssignResources command, run the following code. Copy the configuration from above into the line defining config. If you run more than one scan in a day, increment the last number of EXECUTION_BLOCK_ID and PROCESSING_BLOCK_ID_REALTIME each time you run a scan.

      import json
      from datetime import date
      namespace = "<Your Namespace>"
      today = date.today().strftime("%Y%m%d")
      EXECUTION_BLOCK_ID = f"eb-test-{today}-00001"
      PROCESSING_BLOCK_ID_REALTIME = f"pb-testrealtime-{today}-00001"
      KAFKA_HOST = f"ska-sdp-qa-kafka.{namespace}.svc:9092"
      
      config = json.dumps(<copied-json-string>)
      
      d.AssignResources(config)
      
      d.obsState
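If you run several scans in a day, a small helper can build the incremented IDs for you. This is an illustrative sketch (make_ids is not part of the SDP API); it only reproduces the ID pattern used above:

```python
import re
from datetime import date

def make_ids(run_number):
    """Build an execution-block and processing-block ID for today's date."""
    today = date.today().strftime("%Y%m%d")
    eb_id = f"eb-test-{today}-{run_number:05d}"
    pb_id = f"pb-testrealtime-{today}-{run_number:05d}"
    return eb_id, pb_id

# Second scan of the day: pass an incremented run number.
EXECUTION_BLOCK_ID, PROCESSING_BLOCK_ID_REALTIME = make_ids(2)
```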
      
    2. Visibility Receive parameters

      The visibility receive parameters are set in the configuration above under the parameters key.
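Because the configuration is a Python dictionary, the parameters can be inspected or overridden before serialisation. The trimmed dictionary below is illustrative, keeping only the keys relevant here:

```python
import json

# Trimmed copy of the AssignResources configuration (illustrative values).
config_dict = {
    "processing_blocks": [
        {
            "pb_id": "pb-testrealtime-20240101-00001",
            "script": {"kind": "realtime", "name": "vis-receive", "version": "1.1.1"},
            "parameters": {"channels_per_port": 6912},
        }
    ]
}

# Adjust a visibility receive parameter before serialising.
config_dict["processing_blocks"][0]["parameters"]["channels_per_port"] = 13824
config = json.dumps(config_dict)
```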

    3. Configure scan

      The obsState should now be RESOURCING. You can now run:

      d.Configure('{"interface": "https://schema.skao.int/ska-sdp-configure/0.4", "scan_type": "science"}')
      
      d.obsState
      
    4. Run the scan

      The obsState should now be READY. To run the scan, enter the following commands:

      d.Scan('{"interface": "https://schema.skao.int/ska-sdp-scan/0.4", "scan_id": 1}')
      
      d.obsState
      

      The obsState should be SCANNING.
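The Configure and Scan arguments are plain JSON strings. If you prefer, building them with json.dumps avoids quoting mistakes; this is a sketch of the same two calls as above:

```python
import json

# Build the argument strings for the Configure and Scan commands.
configure_arg = json.dumps({
    "interface": "https://schema.skao.int/ska-sdp-configure/0.4",
    "scan_type": "science",
})
scan_arg = json.dumps({
    "interface": "https://schema.skao.int/ska-sdp-scan/0.4",
    "scan_id": 1,
})

# Then, in the iTango session:
# d.Configure(configure_arg)
# d.Scan(scan_arg)
```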

    5. Retrieving QA Metrics

      This part is a bit tricky: there is currently no way of simulating QA metrics, so in order to view something in the Signal Display you need to use actual data from a Measurement Set. The code below can be copied as-is. The first step downloads a Measurement Set if none exists. The second step defines the method that will send out the QA metrics. Next, the host and the port are retrieved from the receive addresses. Finally, cbf_scan is called.

      !pip install ska-sdp-cbf-emulator
      
      import json
      import os
      import cbf_sdp.packetiser
      from realtime.receive.core.config import create_config_parser
      
      # Download Data
      MS_INPUT_NAME = "AA05LOW.ms"
      if not os.path.isdir(MS_INPUT_NAME):
          !curl https://gitlab.com/ska-telescope/sdp/ska-sdp-realtime-receive-core/-/raw/main/data/AA05LOW.ms.tar.gz --output AA05LOW.ms.tar.gz
          !tar -xzf AA05LOW.ms.tar.gz
      
      # Define CBF_scan Method
      async def cbf_scan(ms_path: str, target_host: str, target_port: str, scan_id: int):
          sender_args = create_config_parser()
          sender_args["reader"] = {"num_repeats": 1}
          sender_args["transmission"] = {
              "method": "spead2_transmitters",
              "num_streams": 6192,
              "rate": 2_822_400,
              "target_host": target_host,
              "target_port_start": target_port,
              "transport_protocol": "tcp",
              "scan_id": scan_id,
              "telescope": "low",
          }
          await cbf_sdp.packetiser.packetise(sender_args, ms_path)
      
      # Get the receive addresses
      receiveAddresses = json.loads(d.receiveAddresses)
      
      # Only use one scan_type_id
      scan_type_id = "science"
      
      # Only use one scan_id and the first beam_id
      scan_id = 1
      beam_id = list(receiveAddresses[scan_type_id].keys())[0]
      host = receiveAddresses[scan_type_id][beam_id]["host"][0][1]
      start_port = receiveAddresses[scan_type_id][beam_id]["port"][0][1]
      
      # Call the cbf_scan method defined earlier
      await cbf_scan(MS_INPUT_NAME, host, start_port, scan_id)
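The indexing into receiveAddresses above follows the shape of the receive addresses attribute. The sample payload below is illustrative (the values are made up) and shows why host[0][1] and port[0][1] select the address and starting port:

```python
# Illustrative receiveAddresses payload; the shape matches the attribute,
# the values are made up. Each "host"/"port" entry starts with a channel
# offset followed by the value (the extra elements are defined by the
# receive addresses schema).
receive_addresses = {
    "science": {
        "vis0": {
            "host": [[0, "192.0.2.10"]],
            "port": [[0, 21000, 1]],
        }
    }
}

scan_type_id = "science"
beam_id = list(receive_addresses[scan_type_id].keys())[0]
host = receive_addresses[scan_type_id][beam_id]["host"][0][1]
start_port = receive_addresses[scan_type_id][beam_id]["port"][0][1]
```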
      
    6. End scan

      You can now end the scan by running the EndScan command.

      d.EndScan()
      

      You should now be able to get the data product of your scan.

    7. Release resources and switch subarray off

    Run the following commands to release the sub-array resources and switch the subarray off. If you want the data products from the pods, retrieve them before running these commands.

      d.End()
      
      d.ReleaseAllResources()
      
      d.Off()
      
      d.state()
    

    The subarray should now be off.

How to get the minikube/cluster IP address and namespace

  • IP Address

    You will need the IP address to access the Taranta dashboard and the signal display. If you deployed your instance of the SDP on a cluster, your IP address should be that of the cluster, e.g. https://sdhp.stfc.skao.int/. Otherwise you will need the Minikube IP address, which you can get by running the following command: minikube ip.

  • Namespace

    On your SDP install you should have created a namespace or used the default one. If you used the default, your namespace should be test. If you created a namespace and have forgotten it, run the following command and a list of namespaces will appear: kubectl get namespaces

Monitor the status of the SDP - Taranta Dashboard

  1. File of example Taranta Dashboard

    Here is a link to the Taranta dashboard that can be uploaded and used to monitor the status of the SDP.

  2. How to upload the Taranta Dashboard

    Once you have the SDP instance running:

    • Step 1: Open an instance of the Taranta dashboard by going to this address: http://<cluster-address>/<namespace>/taranta/
    • Step 2: Log in to the Taranta instance. For information on usernames and passwords, refer to this page.
    • Step 3: Upload the file given above and the dashboard will load.
  3. How to use the dashboard

    You can use the dashboard by pressing the play button. This connects the dashboard to the relevant subarrays. Once connected, you will see the status of the subarrays. If you select a specific subarray from the drop-down list, its statistics will be shown.

Monitor the status of the visibility data as they come in - Signal Displays

  1. How to connect to the signal displays

    Once the SDP is running you can access the signal display by going to this address: http://<cluster-address>/<namespace>/signal/display/

  2. What is displayed in the Signal Display

    • Statistics about the data
    • RFI Graph
    • Spectrum Graphs
    • Phase vs Amplitude Graphs
    • Phase vs Frequencies Graphs
    • Spectrogram Waterfall Plots
  3. The legend

    For some of the graphs there is an interactive legend.

    • Upon initial receipt of the data, the data is parsed and a list is created. This list is displayed upon the screen as a list of buttons.
    • When the list of buttons is displayed, they are coloured to match the associated data in the charts. When a button is clicked, its colouring is suppressed and the associated data is hidden from all of the associated charts. Clicking the button again restores the colouring and the data once again passes to the charts.
    • Note that this legend affects the data before the charting is rendered, which allows for the charting library to be independent of this functionality.
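This filter-before-render design can be sketched as follows. The Python below is an illustrative model only (the actual Signal Display is not implemented this way): suppressed series are removed from the data before it reaches any chart, so the charting library needs no knowledge of the legend.

```python
# Sample series keyed by polarisation, matching the corr_type labels used
# in the configuration (values are made up).
series = {"XX": [1, 2], "XY": [3, 4], "YY": [5, 6]}
suppressed = set()

def click(name):
    # Toggle a legend button: suppress the series, or restore it if
    # already suppressed.
    suppressed.symmetric_difference_update({name})

def data_for_charts():
    # Filter the data before any chart renders it.
    return {k: v for k, v in series.items() if k not in suppressed}

click("XY")  # suppress the XY series
```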