Connecting to the configuration database
The SDP can be controlled by interacting directly with its
Configuration Database (see Components).
The ska-sdp command-line utility was developed for this purpose,
and it can be accessed via the console pod.
Details of the available ska-sdp commands can be found in the
CLI to interact with SDP
section of the SDP Configuration Library documentation.
Warning
The following is not recommended for non-developers. Please follow the instructions in Accessing the Tango interface to work with SDP.
Start a shell in the console pod by running:
$ kubectl exec -it ska-sdp-console-0 -n <control-namespace> -- bash
This will allow you to use the ska-sdp CLI:
# ska-sdp list -a
Keys with prefix /:
/lmc/controller
/lmc/subarray/01
/script/batch:test-batch:0.3.0
...
This shows that the configuration DB contains the state of the Tango devices and the processing script definitions.
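The configuration database stores a flat key space and supports listing by prefix (it is backed by etcd, as the ska-sdp-etcd-client host later in this section suggests). Conceptually, a prefix query such as ska-sdp list behaves like this minimal plain-Python sketch; it is an illustration only, not the actual library code:

```python
# Illustrative model of prefix-based key listing in the configuration DB.
# The real store is etcd; values are JSON documents, elided here.
store = {
    "/lmc/controller": "{...}",
    "/lmc/subarray/01": "{...}",
    "/script/batch:test-batch:0.3.0": "{...}",
    "/pb/pb-sdpcli-20221011-00000": "{...}",
}

def list_keys(prefix):
    """Return all keys that start with the given prefix, sorted."""
    return sorted(k for k in store if k.startswith(prefix))

print(list_keys("/"))     # every key in the database
print(list_keys("/lmc"))  # only the Tango device state keys
```

Each component of the SDP reads and writes its own part of this key space, which is what makes the CLI useful for inspecting the system as a whole.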
Starting a processing script
Next, we can add a processing block to the configuration:
# ska-sdp create pb <script-kind>:<script-name>:<script-version>
For example:
# ska-sdp create pb batch:test-dask:0.3.0
Processing block created with pb_id: pb-sdpcli-20221011-00000
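The generated pb_id appears to follow a pattern of the form pb-&lt;client&gt;-&lt;date&gt;-&lt;number&gt;: sdpcli identifies the CLI as the creator, 20221011 is the creation date, and 00000 is a sequence number. A hypothetical sketch of such a generator, for illustration only (the actual scheme is internal to the ska-sdp CLI):

```python
from datetime import date

def make_pb_id(client: str, seq: int, today: date) -> str:
    """Build an ID of the form pb-<client>-<YYYYMMDD>-<NNNNN>.

    Hypothetical re-implementation for illustration; the real
    generator is internal to the ska-sdp CLI.
    """
    return f"pb-{client}-{today.strftime('%Y%m%d')}-{seq:05d}"

print(make_pb_id("sdpcli", 0, date(2022, 10, 11)))
# pb-sdpcli-20221011-00000
```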
The processing block is created with the /pb prefix in the configuration:
# ska-sdp list -v pb
Keys with prefix /pb:
/pb/pb-sdpcli-20221011-00000 = {
"dependencies": [],
"eb_id": null,
"parameters": {},
"pb_id": "pb-sdpcli-20221011-00000",
"script": {
"kind": "batch",
"name": "test-dask",
"version": "0.3.0"
}
}
/pb/pb-sdpcli-20221011-00000/owner = {
"command": [
"test_dask.py"
],
"hostname": "proc-pb-sdpcli-20221011-00000-script--1-qqvgw",
"pid": 1
}
/pb/pb-sdpcli-20221011-00000/state = {
"deployments": {
"proc-pb-sdpcli-20221011-00000-dask-1": "RUNNING",
"proc-pb-sdpcli-20221011-00000-dask-2": "RUNNING"
},
"last_updated": "2022-10-11 08:20:34",
"resources_available": true,
"status": "RUNNING"
}
The processing block is detected by the processing controller which deploys the script. The script in turn deploys the execution engines (in this case, Dask).
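This chain of reactions can be sketched as a simple reconcile loop over the key space: when a new /pb entry appears, a corresponding deployment request is written for the Helm deployer to act on. The following is a conceptual illustration using plain dictionaries, not the real controller code:

```python
# Conceptual sketch of the processing controller reacting to a new
# processing block (illustration only; the real controller watches etcd).
db = {
    "/pb/pb-sdpcli-20221011-00000": {
        "script": {"kind": "batch", "name": "test-dask", "version": "0.3.0"},
    },
}

def reconcile(db):
    """For every processing block without a script deployment, request
    one by writing a /deploy entry for the Helm deployer to pick up."""
    for key in list(db):
        # only top-level /pb/<pb_id> keys, not sub-keys such as .../state
        if not key.startswith("/pb/") or "/" in key[len("/pb/"):]:
            continue
        pb_id = key.split("/")[-1]
        deploy_key = f"/deploy/proc-{pb_id}-script"
        if deploy_key not in db:
            db[deploy_key] = {
                "kind": "helm",
                "args": {
                    "chart": "script",
                    "values": {"env": [{"name": "SDP_PB_ID", "value": pb_id}]},
                },
            }

reconcile(db)
print(sorted(db))
```

The same pattern repeats one level down: the script, once running, writes further /deploy entries for its execution engines.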
The deployments are requested by creating entries with the /deploy prefix
in the configuration database, where they are detected by the Helm deployer,
which actually makes the deployments:
# ska-sdp list -v deployment
Keys with prefix /deploy:
/deploy/proc-pb-sdpcli-20221011-00000-dask-1 = {
"args": {
"chart": "dask",
"values": {
"image": "artefact.skao.int/ska-sdp-script-test-dask:0.3.0",
"worker.replicas": 2
}
},
"dpl_id": "proc-pb-sdpcli-20221011-00000-dask-1",
"kind": "helm"
}
/deploy/proc-pb-sdpcli-20221011-00000-dask-1/state = {
"pods": {
"proc-pb-sdpcli-20221011-00000-dask-1-scheduler-7d6f5f9749-vr6dw": "Running",
"proc-pb-sdpcli-20221011-00000-dask-1-worker-5744899988-hmr5q": "Running",
"proc-pb-sdpcli-20221011-00000-dask-1-worker-5744899988-sqnf6": "Running"
}
}
/deploy/proc-pb-sdpcli-20221011-00000-dask-2 = {
"args": {
"chart": "dask",
"values": {
"image": "artefact.skao.int/ska-sdp-script-test-dask:0.3.0",
"worker.replicas": 2
}
},
"dpl_id": "proc-pb-sdpcli-20221011-00000-dask-2",
"kind": "helm"
}
/deploy/proc-pb-sdpcli-20221011-00000-dask-2/state = {
"pods": {
"proc-pb-sdpcli-20221011-00000-dask-2-scheduler-65cc58cf4f-8bm9r": "Running",
"proc-pb-sdpcli-20221011-00000-dask-2-worker-79694dbf85-j7nfb": "Running",
"proc-pb-sdpcli-20221011-00000-dask-2-worker-79694dbf85-njw6c": "Running"
}
}
/deploy/proc-pb-sdpcli-20221011-00000-script = {
"args": {
"chart": "script",
"values": {
"env": [
{
"name": "SDP_CONFIG_HOST",
"value": "ska-sdp-etcd-client.dp-orca"
},
{
"name": "SDP_HELM_NAMESPACE",
"value": "dp-orca-p"
},
{
"name": "SDP_PB_ID",
"value": "pb-sdpcli-20221011-00000"
}
],
"image": "artefact.skao.int/ska-sdp-script-test-dask:0.3.0"
}
},
"dpl_id": "proc-pb-sdpcli-20221011-00000-script",
"kind": "helm"
}
/deploy/proc-pb-sdpcli-20221011-00000-script/state = {
"pods": {
"proc-pb-sdpcli-20221011-00000-script--1-r4p9c": "Running"
}
}
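Each /deploy entry of kind helm carries everything needed to construct a Helm release: a chart name and a set of values. Conceptually, the translation into a helm install invocation looks like the sketch below; this is an assumption for illustration, as the real Helm deployer drives Helm programmatically and resolves the SDP charts from its own repository:

```python
def helm_command(dpl_id: str, args: dict, namespace: str) -> list:
    """Translate a /deploy entry's args into a helm install command line.

    Illustrative only: the actual Helm deployer does not shell out
    like this, and chart resolution is more involved.
    """
    cmd = ["helm", "install", dpl_id, args["chart"], "-n", namespace]
    for key, value in args.get("values", {}).items():
        cmd += ["--set", f"{key}={value}"]
    return cmd

cmd = helm_command(
    "proc-pb-sdpcli-20221011-00000-dask-1",
    {"chart": "dask",
     "values": {"image": "artefact.skao.int/ska-sdp-script-test-dask:0.3.0",
                "worker.replicas": 2}},
    "<processing-namespace>",
)
print(" ".join(cmd))
```

Note how worker.replicas: 2 in the entry above matches the two worker pods reported in each deployment's state.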
The deployments associated with the processing block have been created
in the <processing-namespace>. You can list the running pods using kubectl
on the host (exit the console pod first):
$ kubectl get pod -n <processing-namespace>
NAME READY STATUS RESTARTS AGE
proc-pb-sdpcli-20221011-00000-dask-1-scheduler-7d6f5f9749-vr6dw 1/1 Running 0 9s
proc-pb-sdpcli-20221011-00000-dask-1-worker-5744899988-hmr5q 1/1 Running 0 9s
proc-pb-sdpcli-20221011-00000-dask-1-worker-5744899988-sqnf6 1/1 Running 0 9s
proc-pb-sdpcli-20221011-00000-dask-2-scheduler-65cc58cf4f-8bm9r 1/1 Running 0 10s
proc-pb-sdpcli-20221011-00000-dask-2-worker-79694dbf85-j7nfb 1/1 Running 0 10s
proc-pb-sdpcli-20221011-00000-dask-2-worker-79694dbf85-njw6c 1/1 Running 0 10s
proc-pb-sdpcli-20221011-00000-script--1-r4p9c 1/1 Running 0 14s
Cleaning up
Finally, let us remove the processing block from the configuration DB:
# ska-sdp delete pb pb-sdpcli-20221011-00000
/pb/pb-sdpcli-20221011-00000
/pb/pb-sdpcli-20221011-00000/state
Deleted above keys with prefix /pb/pb-sdpcli-20221011-00000.
If you re-run the commands from the previous sections, you will see that deleting the processing block also undoes the corresponding changes to the cluster: the deployment entries are removed and the pods are terminated.
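The cleanup works through the same watch mechanism in reverse: once the /pb keys are gone, the processing controller removes the orphaned /deploy entries, and the Helm deployer uninstalls the corresponding releases. A conceptual sketch of the prefix delete and its cascade, again with plain dictionaries rather than the real code:

```python
# Illustrative model of cleanup after 'ska-sdp delete pb <pb_id>'.
db = {
    "/pb/pb-sdpcli-20221011-00000": {},
    "/pb/pb-sdpcli-20221011-00000/state": {},
    "/deploy/proc-pb-sdpcli-20221011-00000-script": {},
    "/deploy/proc-pb-sdpcli-20221011-00000-dask-1": {},
}

def delete_prefix(db, prefix):
    """Delete every key under a prefix, as the delete command does."""
    for key in [k for k in db if k.startswith(prefix)]:
        del db[key]

def reconcile_cleanup(db):
    """Remove /deploy entries whose processing block no longer exists;
    the Helm deployer would then uninstall the matching releases."""
    pb_ids = {k.split("/")[2] for k in db if k.startswith("/pb/")}
    for key in [k for k in db if k.startswith("/deploy/")]:
        # deployment IDs embed the pb_id: proc-<pb_id>-<suffix>
        owner = key.split("/")[2][len("proc-"):]
        if not any(owner.startswith(pb_id) for pb_id in pb_ids):
            del db[key]

delete_prefix(db, "/pb/pb-sdpcli-20221011-00000")
reconcile_cleanup(db)
print(db)  # {} -- everything has been cleaned up
```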