Inter-annotator consensus: optimizing machine learning for astrophysical feature segmentation and classification

10 Jul 2024, 11:45
5m
Conference Room

Speaker

Renaud Vancoellie

Description

With the arrival of Euclid/LSST and other large-scale surveys, we address the automatic detection and segmentation of galactic features in deep sky images. Training machine learning and deep learning systems requires manual annotations, which tend to exhibit high variability between annotators.
For complex astrophysical features such as low surface brightness collision debris, even expert annotators do not perfectly agree on the features’ exact shape and/or nature.
Rather than letting this ambiguity disturb the learning process, we propose to exploit the confidence information carried by the inter-annotator variability. We define a consensus associated with a new dedicated loss function, which modulates the learning with a per-pixel confidence measure. Annotators may be weighted within this consensus according to their expertise.
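A minimal sketch of what such a consensus-weighted loss could look like in PyTorch is given below. The consensus formula, the expertise weighting, and all function names are illustrative assumptions, not the exact formulation presented in the talk.

```python
# Sketch of a consensus map and a confidence-weighted segmentation loss.
# Assumes binary masks from several annotators; names and weighting are hypothetical.
import torch
import torch.nn.functional as F

def consensus_map(masks: torch.Tensor, annotator_weights: torch.Tensor) -> torch.Tensor:
    """Per-pixel consensus in [0, 1] from binary masks of shape (A, H, W),
    weighted by per-annotator expertise weights of shape (A,)."""
    w = annotator_weights / annotator_weights.sum()
    return torch.einsum("a,ahw->hw", w, masks.float())

def consensus_weighted_bce(logits: torch.Tensor, consensus: torch.Tensor) -> torch.Tensor:
    """Binary cross-entropy against the consensus target, down-weighting
    ambiguous pixels (consensus near 0.5) and emphasising unanimous ones."""
    confidence = 2.0 * torch.abs(consensus - 0.5)  # 0 where annotators split, 1 where unanimous
    per_pixel = F.binary_cross_entropy_with_logits(logits, consensus, reduction="none")
    return (confidence * per_pixel).mean()

# Example: three annotators, the third given twice the expertise weight.
masks = torch.randint(0, 2, (3, 64, 64))
weights = torch.tensor([1.0, 1.0, 2.0])
target = consensus_map(masks, weights)
logits = torch.randn(64, 64, requires_grad=True)
loss = consensus_weighted_bce(logits, target)
loss.backward()
```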
We experiment with various consensus formulas and galactic features to assess how effectively this strategy improves the learning of a deep neural network. We obtain improved convergence and accuracy for the segmentation of various structures, including low surface brightness galactic collisional debris.

Presentation materials

There are no materials yet.