
Transformer-based Neural Network for Transient Detection without Image Subtraction

  • Authors: A. Inada, M. Sako, T. Acero-Cuellar, F. Bianco

A. Inada et al. 2026, The Astronomical Journal, 171.

  • Provider: AAS Journals

Caption: Figure 2.

Network architecture of the real–bogus classifier used in this work. (Above) Input images are passed through six localized attention blocks with shared weights to extract salient features. Three distance-weighted localized attention steps are performed in each block. The features are forwarded to decoder blocks and the MLP layer for binary prediction. (Below) Inner working of localized attention modules. In our work, we use N = 3 blocks for each module.
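The caption describes each block as performing three distance-weighted localized attention steps over the input images. As a rough illustration only, the sketch below implements one plausible reading of such a step in NumPy: attention scores between a pixel and its window neighbours are scaled down by spatial distance before the softmax, and N = 3 steps are chained per block. The function names, the window size, and the 1/(1 + distance) weighting are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def distance_weighted_local_attention(x, window=3):
    """One localized attention step over a 2D feature map.

    Hypothetical sketch: scores between a pixel and its window
    neighbours are divided by (1 + spatial distance), softmaxed,
    and used to aggregate neighbour values.
    """
    h, w = x.shape
    r = window // 2
    out = np.zeros_like(x, dtype=float)
    for i in range(h):
        for j in range(w):
            scores, vals = [], []
            for di in range(-r, r + 1):
                for dj in range(-r, r + 1):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < h and 0 <= jj < w:
                        dist = np.hypot(di, dj)
                        # similarity score, down-weighted by distance
                        scores.append(x[i, j] * x[ii, jj] / (1.0 + dist))
                        vals.append(x[ii, jj])
            scores = np.array(scores)
            weights = np.exp(scores - scores.max())  # stable softmax
            weights /= weights.sum()
            out[i, j] = float(weights @ np.array(vals))
    return out

def attention_block(x, n_steps=3):
    """Chain N = 3 localized attention steps, as in each block of Figure 2."""
    for _ in range(n_steps):
        x = distance_weighted_local_attention(x)
    return x
```

In the figure, six such blocks with shared weights would be applied to the input cutouts before the decoder blocks and the final MLP produce the binary real–bogus prediction.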
