PSS Test Vector Generator

This document describes the process for generating test vectors (simulated pulsar search data products) used to test pulsar and fast-transient search algorithms.

The output test vectors are in Sigproc filterbank format and can be used as input to the PSS pipeline for testing. Each file consists of Gaussian noise superposed with a simulated pulsar signal whose spin parameters are user-defined. The default output files are based on a typical SKA pulsar search beam, with 4096 frequency channels, 64-µs time resolution, 600-s integration time, and 8-bit samples; this results in each test vector having a size of 36 GB. If needed, the file parameters can also be modified by the user.
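For reference, the default size follows directly from these parameters: (600 s) / (64 µs) ≈ 9.4 × 10^6 samples per channel, multiplied by 4096 channels at 1 byte per 8-bit sample, gives ≈ 38.4 × 10^9 bytes, or roughly 36 GiB.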

The PSS Test Vector Generator Docker image

All relevant dependencies and scripts needed to run the Test Vector Generator are included in a Docker image, set up through the Dockerfile. The Docker image is also available at:

nexus.engageska-portugal.pt/pss-test-vector-generator/test-vector-pipeline:0.1.0
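For example, the image can be pulled and started with standard Docker commands (the bind mount and interactive shell below are illustrative assumptions, not part of the published image configuration):

>> docker pull nexus.engageska-portugal.pt/pss-test-vector-generator/test-vector-pipeline:0.1.0
>> docker run -it --rm -v "$(pwd)/output:/output" nexus.engageska-portugal.pt/pss-test-vector-generator/test-vector-pipeline:0.1.0 /bin/bash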

Running the PSS Test Vector Generator pipeline

From inside the Docker container, run the Test Vector Generator pipeline by typing:

>> python Pipeline.py -m <mode> -d <path to yaml directory> -o <output directory>

where <mode> is one of the following:

  • DDTR
  • FDAS-ACC
  • FDAS-FOP
  • FDAS-PER
  • FDAS-SLOW
  • FLDO

Each of these modes is described by a .yaml file located in the examples directory. Alternatively, a custom mode can be used by creating a .yaml file that follows the syntax of e.g. test.yaml, which is also included in the examples directory. An example invocation is shown below.
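As an example, a DDTR run might look like the following (assuming the examples directory sits in the current working directory; the output path is illustrative and any writable directory will do):

>> python Pipeline.py -m DDTR -d examples -o ./output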

The pipeline will then create a filterbank file containing Gaussian noise (noise.fil) and as many pulse profile files as are specified in the .yaml file, and will use these to inject simulated pulsars with the specified pulsar parameters. All output files are written to the specified output directory, including the noise file (noise.fil) and the profile files (*.asc).
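To confirm that a run completed, the expected products can be listed in the output directory (a sketch; filenames other than noise.fil depend on the .yaml configuration):

>> ls -lh <output directory>/*.fil <output directory>/*.asc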

Note:

Some parts of the pipeline are optimised with OpenMP. Depending on your computer set-up, you may want to specify the number of CPU cores available to the pipeline before starting it. This can be done by running:

>> export OMP_NUM_THREADS=<xx>

where <xx> is the number of CPU cores.
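On Linux, the available core count can be queried with the standard nproc command, so the variable can also be set directly:

>> export OMP_NUM_THREADS=$(nproc)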