Mandatory checks#

This page collects all the information about what to check before and while you push your modifications.

The development of this software is monitored by a Continuous Integration (CI) pipeline that takes care of all checks (some using external services).

Documentation#

Please make sure you have installed protopipe with the required packages (using either the all or the docs keyword).

Each Pull Request (PR) has to come with its own documentation updates (if any), reflecting the changes to be merged into master.

The documentation is generated by Sphinx with reStructuredText syntax.

To build and check your documentation locally,

  • cd docs

  • for big changes (or just to be sure), rm api/* && make clean && make html

  • for small changes, make html

The built documentation is stored under docs/_build/html, from where you can open index.html with your favorite browser to check how it looks.

You will have to fix any warning that appears while building the documentation, because the build also runs on readthedocs with an option that treats warnings as errors.

Testing#

All testing code is run by issuing the pytest command (see the official webpage for details).

This command can be invoked from anywhere within the cloned repository and will always run from the root directory of the project.

For debugging purposes you can add the -s option, which shows the output of any print statement within the test module(s).
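
pytest is also exposed as a regular Python API, which can be handy while debugging. The sketch below is equivalent to running pytest -s from the shell; pytest.main is part of pytest's public interface.

    # Minimal sketch: run the test suite from Python with output
    # capturing disabled (same effect as "pytest -s" on the command line).
    import pytest

    exit_code = pytest.main(["-s"])  # returns the usual pytest exit code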

Testing is automatically triggered by the CI every time a pull request is pushed to the repository, and its successful execution is one of the mandatory conditions for merging.

Unit tests#

You can follow these guidelines to understand what a unit test is supposed to do (a minimal sketch is given after the note below).

Note

This is a maintenance activity which has been long overdue and we need manpower for it, so if you have experience with this or you want to contribute, please feel free to do so.

For more information on how to contribute to this effort, check this issue.
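
As a minimal illustration of the pattern, the sketch below is a self-contained pytest unit test. Note that clip_image is a hypothetical helper defined only for this example; it is not part of the protopipe API.

    # A minimal, self-contained pytest unit test. "clip_image" is a
    # hypothetical helper used only for illustration, NOT a protopipe function.
    import numpy as np
    import pytest


    def clip_image(image, threshold):
        """Hypothetical helper: zero out pixels below a charge threshold."""
        if threshold < 0:
            raise ValueError("threshold must be non-negative")
        image = np.asarray(image, dtype=float)
        return np.where(image >= threshold, image, 0.0)


    def test_clip_image_zeroes_faint_pixels():
        # The expected behaviour is checked against a small, hand-made input
        cleaned = clip_image([1.0, 5.0, 10.0], threshold=5.0)
        np.testing.assert_allclose(cleaned, [0.0, 5.0, 10.0])


    def test_clip_image_rejects_negative_threshold():
        # The error contract is also part of what a unit test should check
        with pytest.raises(ValueError):
            clip_image([1.0], threshold=-1.0)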

Since protopipe is based on ctapipe, all the tools imported from the latter have already been tested and approved (protopipe always uses one of the latest released versions of ctapipe). The same goes for pyirf.

Warning

This is not true for,

  • hard-coded parts that had to be modified in anticipation of code migration,

  • protopipe functions themselves (some of which will eventually migrate to ctapipe or get refactored)

Regarding the first point: given the version gap between the released ctapipe that protopipe imports and ctapipe’s development version, it sometimes happens that a new feature has to be pull-requested to ctapipe and, at the same time, hard-coded in protopipe until the next version of ctapipe is released.

Integration tests#

These are neither unit tests nor benchmarks, but rather functions that test entire functionalities instead of single API functions.

In the case of the pipeline, such functionalities are the scripts/tools that make up its workflow.

The integration tests are defined in pipeline/scripts/tests/test_pipeline.py, which essentially mimics an entire analysis, starting from test simtel files stored on a CC-IN2P3 data server (a sketch of the test pattern follows the list below).

The test data are diffuse data from the Prod3b baseline simulations of both CTAN and CTAS, produced with the following CORSIKA settings,

  • gammas, NSHOW=10 ESLOPE=-2.0 EMIN=10 EMAX=20 NSCAT=1 CSCAT=200 VIEWCONE=3

  • protons, NSHOW=10 ESLOPE=-2.0 EMIN=100 EMAX=200 NSCAT=1 CSCAT=200 VIEWCONE=3

  • electrons, NSHOW=10 ESLOPE=-2.0 EMIN=10 EMAX=20 NSCAT=1 CSCAT=200 VIEWCONE=3

and it is analysed using the default workflow (Pipeline).
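
For orientation, the pattern used by such tests is to launch a whole pipeline step and then check its outcome, along the lines of the sketch below. The script name, its options and the file names are illustrative assumptions, not the actual interface of test_pipeline.py.

    # Sketch of the integration-test pattern: run an entire pipeline step
    # as a subprocess and verify that it exits cleanly and writes its output.
    # Script name and options are hypothetical placeholders.
    import subprocess


    def test_training_step(tmp_path):
        output_file = tmp_path / "TRAINING_energy.h5"  # hypothetical name
        result = subprocess.run(
            [
                "python", "data_training.py",      # hypothetical script
                "--config_file", "analysis.yaml",  # hypothetical option
                "--outfile", str(output_file),
            ],
            capture_output=True,
            text=True,
        )
        assert result.returncode == 0, result.stderr
        assert output_file.exists()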

Benchmarks#

Benchmarks are a way to visualize, qualitatively and quantitatively, the performance of each data-level transformation and of each step of an analysis.

protopipe provides a set of benchmarking notebooks within the benchmarks module. Such notebooks make use of the API stored under the same module (see Benchmarks (protopipe.benchmarks)).

Notebooks should share the same basic template and behaviour, which can be found in protopipe/benchmarks/notebooks/notebook_template.ipynb. Use this template to create new notebooks and, when possible, make sure the old ones stay synchronized with it.

Their contents initially followed the development triggered by the comparison between protopipe and the historical pipelines CTA-MARS (see this issue and references therein for a summary) and EventDisplay. They have been continuously improved and are expected to evolve over time, especially with the ongoing refactoring based on ctapipe.

The purpose of these tools is to help users and developers check whether their changes improve or degrade previous performance, but also to produce analysis books for showcasing (see Benchmarking for details).

Any developer interested in contributing to benchmarking can do so from a development installation of protopipe. It is suggested to open the notebooks with jupyter lab from their location at protopipe/benchmarks/notebooks.
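
If you also want to run a notebook non-interactively, for example to verify that it still executes end-to-end, one option is the nbclient package, as sketched below. nbformat and nbclient belong to the Jupyter ecosystem and are assumptions here, not protopipe requirements.

    # Sketch: execute a benchmark notebook programmatically with
    # nbformat + nbclient (Jupyter ecosystem packages, not protopipe ones).
    import nbformat
    from nbclient import NotebookClient

    path = "protopipe/benchmarks/notebooks/notebook_template.ipynb"
    nb = nbformat.read(path, as_version=4)
    NotebookClient(nb, timeout=600).execute()  # raises if any cell fails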

Currently available benchmarks are organised as follows,

  • TRAINING

    • Calibration

    • Image cleaning

    • Image intensity resolution

    • Direction Look-Up Tables

    • Direction reconstruction

    • TRAINING data to energy estimation

    • Energy Look-Up Tables

    • TRAINING data to classification

In particular:

  • calibration requires ctapipe DL1a output (images without parameters),

  • all image cleaning and direction reconstruction benchmarks use protopipe TRAINING data without estimated energy,

  • all benchmarks for the energy estimator use protopipe TRAINING data without estimated energy,

  • benchmarks for the classifier use protopipe TRAINING data with energy as the only estimated DL2b parameter.

  • MODELS (these performances are obtained from a test portion of the TRAINING data)

    • Energy

    • Classification

    • Tuning

  • DL2

    • Particle classification

    • Direction reconstruction

  • DL3

    • Instrument Response Functions and sensitivity

    • Performance poster

The DL3 folder also contains the CTA requirements, while the ASWG performance data, being internal, is left to the user.

Note

Remember that in the framework of CTA software there are similar projects: plots could be migrated to, or synchronized with, ctaplot, and the same applies to the single pipeline steps with cta-benchmarks, once the pipeline has been refactored using ctapipe’s stage tools.