Installing the SDP

In this section, we describe how you can install SDP as a stand-alone system.

Before running the SDP, your local development environment needs to be set up. This can be either a local Kubernetes instance running in Minikube or remote access to a Kubernetes cluster. Details can be found in the requirements section.

On this page, we describe how you can install and uninstall the SDP using helm and kubectl; these steps do not require a clone of the ska-sdp-integration repository.

The ska-sdp-integration repository, once cloned, provides a Makefile which simplifies some of these steps.
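If you do want to use those Makefile targets, you first need a local clone of the repository. A minimal sketch is shown below; the repository URL is an assumption based on the usual SKA GitLab layout, so check there for the canonical location:

$ git clone https://gitlab.com/ska-telescope/sdp/ska-sdp-integration.git
$ cd ska-sdp-integration

The Makefile in the repository root lists the available targets.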

The current page provides general instructions, most of which are cluster and environment independent. For environment-specific differences, see the relevant pages.

Creating namespaces

SDP requires two namespaces: one for its control system and another for deploying its processing scripts and their execution engines (the processing namespace).

If you are using a local environment, you may use the default namespace for the control system, but you will have to create a new namespace to run the processing scripts. In the commands below, we will refer to the control system namespace as <control-namespace>, and to the processing namespace as <processing-namespace>.

$ kubectl create namespace <processing-namespace>

For remote Kubernetes clusters, namespaces may be pre-assigned, so you won't need to create new ones. It is important that your control and processing namespaces are different!
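As a quick sanity check before deploying (a hedged example, using the placeholder names from above), you can confirm that both namespaces exist:

$ kubectl get namespace <control-namespace> <processing-namespace>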

Deploying the SDP

Adding the Helm repository

Releases of the SDP Helm chart are published in the SKA artefact repository. To install the released version, you need to add this chart repository to helm:

$ helm repo add ska https://artefact.skao.int/repository/helm-internal

If you have already added the repository, you can update it to gain access to the latest chart versions:

$ helm repo update
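To see which chart versions are available from the repository (useful later if you want to pin a version with --version), you can search it:

$ helm search repo ska/ska-sdp --versions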

Installing SDP with complementary interfaces

Various interfaces to the SDP are not deployed by default. To enable their deployment, append the following --set arguments to the helm upgrade command described in the following sections:

For ITango:

--set ska-tango-base.itango.enabled=true

For Taranta dashboards:

--set ska-tango-taranta.enabled=true --set ska-tango-tangogql.enabled=true
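As an illustration, combining the ITango flag with the install command described below would look like this (release name and namespaces are placeholders, as elsewhere on this page):

$ helm upgrade --install test ska/ska-sdp -n <control-namespace> \
    --set global.sdp.processingNamespace=<processing-namespace> \
    --set ska-tango-base.itango.enabled=true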

Enabling direct reception of external data

The SDP supports receiving data from the CBF hardware, or more generally from outside the Kubernetes network, in such a way that the pods receiving the data are given direct access to the workers’ Network Interface Controller (NIC), bypassing most of the Kubernetes networking stack. When this happens, the SDP also controls the IPs that are assigned to each pod by internally managing a super-network from which sub-networks and IPs are dynamically allocated.

To achieve this, the SDP watches and reads a series of network attachment definitions (Kubernetes network-attachment-definition Custom Objects), each describing the technology used to get hold of the worker’s NIC and the parameters used by the SDP for its internal super/sub-network management. Such network attachment definitions must be pre-defined in the underlying Kubernetes cluster and must meet the following requirements to be usable by the SDP (a sketch of a matching manifest is shown after this list):

  • They need to have the sdp.skao.int/available-for-allocation label set to true.

  • They need to have two annotations:

    • sdp.skao.int/ip-supernet defines the super-network that the SDP will allocate sub-networks and IPs from (e.g., 192.168.1.0/24).

    • sdp.skao.int/ip-subnet-cidr-bits defines the size of the individual sub-networks that are internally allocated by the SDP for specific receivers (e.g., 26).

  • The network attachment definition must allow the SDP to set the IP on each pod, rather than letting Kubernetes determine an IP.
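The following is a minimal sketch of a network attachment definition that satisfies these requirements. The name, the macvlan driver, the master interface and the static IPAM configuration are illustrative assumptions that depend on your cluster; only the label and the two annotations are what the SDP itself looks for:

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: sdp-data-net                              # hypothetical name
  labels:
    sdp.skao.int/available-for-allocation: "true"
  annotations:
    sdp.skao.int/ip-supernet: "192.168.1.0/24"    # super-network the SDP allocates from
    sdp.skao.int/ip-subnet-cidr-bits: "26"        # size of the sub-networks allocated by the SDP
spec:
  # static IPAM (an assumption) leaves IP assignment to the SDP rather than Kubernetes
  config: |
    {
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "ens2f0",
      "ipam": { "type": "static" }
    }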

By default, when the SDP is deployed, it does not add the credentials required to watch and read Kubernetes network attachment definitions. To enable this feature, set the following option:

--set helmdeploy.enableNADClusterRole=true

To list the candidate network attachment definitions usable by the SDP, issue the following command:

$ kubectl get network-attachment-definitions.k8s.cni.cncf.io -A --selector=sdp.skao.int/available-for-allocation=true

Using SDP on a non-default cluster setup

On a cluster that does not use the default domain of cluster.local, you will also need to set the cluster domain via the following options, all of which are required:

--set kafka.clusterDomain=<my-cluster-domain>
--set kafka.zookeeper.clusterDomain=<my-cluster-domain>
--set ska-sdp-qa.redis.clusterDomain=<my-cluster-domain>
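If you are not sure what your cluster domain is, one way to check it (assuming the cluster runs CoreDNS with the default configmap name) is to inspect the CoreDNS configuration:

$ kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep kubernetes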

Installing SDP with PVC

Some of the SDP processing scripts require access to a Persistent Volume Claim (PVC) in order to store and access data. The following command asks SDP to create a PVC for you. Note that when SDP is uninstalled, the PVC is also removed.

If you already created a PVC yourself, which you want to use (independent of SDP), skip to the Installing SDP without PVC section.

Install the SDP chart with the command (assuming the release name is test). Note that we use helm upgrade --install to allow for both install and upgrade events as needed:

$ helm upgrade --install test ska/ska-sdp -n <control-namespace> \
    --set global.sdp.processingNamespace=<processing-namespace> \
    --set data-pvc.create=true \
    --set data-pvc.storageClassName=nfss1

The above command will create a PVC with the default name of test-pvc. You can change the name by setting global.data-product-pvc-name. The data-pvc.storageClassName option is set to nfss1, which is the storage class used on standard SKAO-managed clusters.

By default the latest version of the helm chart is deployed. If you wish to use another version, add --version <version> to the end of the helm upgrade command.
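For example, pinning a version while creating the PVC would look like this (replace the version placeholder with an actual chart version):

$ helm upgrade --install test ska/ska-sdp -n <control-namespace> \
    --set global.sdp.processingNamespace=<processing-namespace> \
    --set data-pvc.create=true \
    --set data-pvc.storageClassName=nfss1 \
    --version <version>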

Installing SDP without PVC

If you already have a PVC you want to use, install SDP as follows. Note that we use helm upgrade --install to allow for both install and upgrade events as needed:

$ helm upgrade --install test ska/ska-sdp -n <control-namespace> \
    --set global.sdp.processingNamespace=<processing-namespace> \
    --set global.data-product-pvc-name=<my-pvc-name>

Replace <my-pvc-name> with the name of the PVC you want to use. SDP uses test-pvc by default.
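If you need to create such a PVC yourself, a minimal sketch of a manifest is shown below. The name, namespace, access mode, storage class and size are all assumptions that you should adapt to your cluster; the name is what you would then pass as global.data-product-pvc-name:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-sdp-data                  # hypothetical name
  namespace: <processing-namespace>  # assumed: the namespace where the processing pods run
spec:
  accessModes:
    - ReadWriteMany                  # assumed, so several receivers/scripts can share the volume
  storageClassName: nfss1            # assumed; use a storage class available on your cluster
  resources:
    requests:
      storage: 10Gi                  # illustrative size

Saved as, say, my-pvc.yaml, it can be created with:

$ kubectl apply -f my-pvc.yaml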

By default the latest version of the helm chart is deployed. If you wish to use another version, add --version <version> to the end of the helm upgrade command.

Monitoring the deployment

You can watch the deployment in progress using kubectl:

$ kubectl get pod -n <control-namespace> --watch

or the k9s terminal-based UI (recommended):

$ k9s -n <control-namespace>

Wait until all the pods are in the Running or Completed state:

default      databaseds-tango-base-test-0      ●  1/1          0 Running    172.17.0.12     m01   119s
default      ska-sdp-console-0                 ●  1/1          0 Running    172.17.0.15     m01   119s
default      ska-sdp-etcd-0                    ●  1/1          0 Running    172.17.0.6      m01   119s
default      ska-sdp-helmdeploy-0              ●  1/1          0 Running    172.17.0.14     m01   119s
default      ska-sdp-lmc-config-6vbtr          ●  0/1          0 Completed  172.17.0.11     m01   119s
default      ska-sdp-lmc-controller-0          ●  1/1          0 Running    172.17.0.9      m01   119s
default      ska-sdp-lmc-subarray-01-0         ●  1/1          0 Running    172.17.0.10     m01   119s
default      ska-sdp-proccontrol-0             ●  1/1          0 Running    172.17.0.4      m01   119s
default      ska-sdp-script-config-2hpdn       ●  0/1          0 Completed  172.17.0.5      m01   119s
default      ska-tango-base-tangodb-0          ●  1/1          0 Running    172.17.0.8      m01   119s

The two pods with config in their name will vanish about 30 seconds after they complete. The above list shows the minimal SDP deployment; depending on which parts of the sub-system are enabled, you may see more pods running.

You can check the logs of the pods to verify that they are running correctly:

$ kubectl logs <pod-name> -n <control-namespace>

For example (for a default namespace):

$ kubectl logs ska-sdp-lmc-subarray-01-0
...
1|2021-05-25T11:32:53.161Z|INFO|MainThread|init_device|subarray.py#92|tango-device:test-sdp/subarray/01|SDP Subarray initialising
...
1|2021-05-25T11:32:53.185Z|INFO|MainThread|init_device|subarray.py#127|tango-device:test-sdp/subarray/01|SDP Subarray initialised
...

$ kubectl logs ska-sdp-proccontrol-0
1|2021-05-25T11:32:32.423Z|INFO|MainThread|main_loop|processing_controller.py#180||Connecting to config DB
1|2021-05-25T11:32:32.455Z|INFO|MainThread|main_loop|processing_controller.py#183||Starting main loop
1|2021-05-25T11:32:32.566Z|INFO|MainThread|main_loop|processing_controller.py#190||processing block ids []
...

If the output looks like this (or similar), there is a good chance everything has been deployed correctly.

Removing the SDP

To remove the SDP deployment from the k8s cluster, run:

$ helm uninstall test -n <control-namespace>

Remember that if you asked SDP to create a PVC upon deployment, this command will also remove that PVC.
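To confirm that the release has been removed, you can list the remaining releases in the control namespace:

$ helm list -n <control-namespace>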

If you created the processing namespace, you can remove it with:

$ kubectl delete namespace <processing-namespace>