2020-11-08, 07:05–07:05, Posters
Imaging Atmospheric Cherenkov Telescopes (IACTs) detect the Cherenkov light induced by particle showers that cosmic rays and gamma rays generate upon entering the atmosphere. A complex data analysis is then required to reconstruct the direction, energy, and type of the incoming particle from the telescope images.
Since the 2012 ImageNet breakthrough, advances in deep learning have brought dramatic improvements in data analysis across a variety of fields. Convolutional neural networks are particularly well suited to the task of analysing IACT camera images for event reconstruction, as they can reconstruct the physical parameters of interest directly from calibrated images, skipping the pre-processing steps of standard methods such as image cleaning and image parametrisation. Moreover, although they demand substantial computing resources for training and optimisation, neural networks are very fast at inference time. This performance makes them promising candidates for online analysis tools for the Cherenkov Telescope Array (CTA), the next generation of IACTs, which will be one order of magnitude more sensitive than current experiments.
Here we present a complete reconstruction of IACT events using state-of-the-art deep learning techniques. The network is applied in the single-telescope context of LST-1, the first CTA telescope prototype, built at the La Palma site. We show that full event reconstruction is possible with a single multi-task network, which improves reconstruction performance by reducing the degeneracy inherent in monoscopic observations. In addition, using a single model reduces the computing needs. We present the reconstruction performance obtained and compare it to that of other reconstruction methods.
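The core of the multi-task approach described above is a shared feature extractor whose output branches into separate heads, one per physical quantity. The following is a minimal sketch of that idea; the layer sizes, the use of a single dense trunk instead of a convolutional backbone, and the 1855-pixel flattened LST camera input are illustrative assumptions, not the actual network used in this work.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def softmax(x):
    e = np.exp(x - x.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Hypothetical dimensions: a flattened, calibrated camera image feeds a
# shared trunk, whose features branch into three task-specific heads.
n_pixels, n_hidden = 1855, 64                 # LST camera has 1855 pixels
W_trunk = rng.normal(0, 0.01, (n_pixels, n_hidden))

# Task-specific heads sharing the trunk features (illustrative shapes)
W_energy = rng.normal(0, 0.01, (n_hidden, 1))  # energy regression
W_dir    = rng.normal(0, 0.01, (n_hidden, 2))  # arrival-direction regression
W_type   = rng.normal(0, 0.01, (n_hidden, 2))  # gamma vs. proton classification

def reconstruct(images):
    """One forward pass returning all three reconstructed quantities."""
    features  = relu(images @ W_trunk)         # shared representation
    energy    = features @ W_energy            # regression output
    direction = features @ W_dir               # regression output
    particle  = softmax(features @ W_type)     # class probabilities
    return energy, direction, particle

batch = rng.normal(size=(8, n_pixels))         # 8 dummy calibrated images
e, d, p = reconstruct(batch)
print(e.shape, d.shape, p.shape)               # (8, 1) (8, 2) (8, 2)
```

Because the three heads share one trunk, a single forward pass yields energy, direction, and particle type together, which is where the computing saving over three independent models comes from.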