Description
High-resolution cosmological simulations remain computationally expensive, limiting our ability to explore large volumes and complex physics such as massive neutrinos and dynamical dark energy. Recent developments in AI-assisted super-resolution, particularly the work of Li et al. (2021), Ni et al. (2022), and Zhang et al. (2023), have shown that generative models can reconstruct high-resolution structure from low-resolution inputs, offering a lower-cost alternative to running large simulations at full resolution.
After successful tests with a pretrained model (Ni et al. 2022) on 64^3-particle simulations, we are currently optimizing a super-resolution (SR) model based on StyleGAN2-inspired architectures (Karras et al. 2019) to enhance DEMNUni simulations (Carbone et al. 2016), training it on pairs of 512^3- and 1024^3-particle simulations. The network uses a U-Net-style encoder-decoder with progressive growing and skip connections to capture multi-scale features, with adversarial training introduced after initial convergence. Our training strategy combines adaptive loss weighting, learning-rate decay, and targeted data augmentation to maintain stability as we scale up to higher resolutions; the architecture and training strategy are sketched below.
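To illustrate the architecture, the following is a minimal PyTorch sketch of a U-Net-style 3D generator with skip connections and a final 2x upsampling stage, matching the 512^3-to-1024^3 per-dimension ratio of the training pairs. The names (SRGenerator, conv_block, base) are illustrative only; progressive growing and the style-based components of StyleGAN2 are omitted for brevity, and this is not the actual pipeline code.

import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3D convolutions with LeakyReLU: the basic encoder/decoder unit.
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.LeakyReLU(0.2, inplace=True),
        nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.LeakyReLU(0.2, inplace=True),
    )

class SRGenerator(nn.Module):
    # U-Net-style encoder-decoder for 2x super-resolution of a 3D field:
    # input (B, C, N, N, N) -> output (B, C, 2N, 2N, 2N).
    def __init__(self, channels=1, base=32):
        super().__init__()
        self.enc1 = conv_block(channels, base)
        self.down = nn.Conv3d(base, base * 2, kernel_size=2, stride=2)
        self.enc2 = conv_block(base * 2, base * 2)
        self.up1 = nn.ConvTranspose3d(base * 2, base, kernel_size=2, stride=2)
        self.dec1 = conv_block(base * 2, base)  # skip connection doubles the input channels
        self.up2 = nn.ConvTranspose3d(base, base, kernel_size=2, stride=2)  # 2x to the SR grid
        self.out = nn.Conv3d(base, channels, kernel_size=1)

    def forward(self, x):
        e1 = self.enc1(x)              # features at input resolution
        e2 = self.enc2(self.down(e1))  # features at half resolution
        d1 = self.dec1(torch.cat([self.up1(e2), e1], dim=1))  # skip connection
        return self.out(self.up2(d1))  # upsample beyond the input resolution

# Shape check: a 64^3 input is mapped to a 128^3 output.
lr = torch.randn(1, 1, 64, 64, 64)
print(SRGenerator()(lr).shape)  # torch.Size([1, 1, 128, 128, 128])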
Training is the most resource-intensive phase, dominated by 3D convolutions and large-volume data throughput. We are relying on the CINECA machines to accelerate this process. Once trained, the model will allow us to generate SR simulations across varied cosmologies, enabling improved modeling of nonlinear structure formation, covariance estimation, and inference pipelines for surveys like Euclid.
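As a sketch of the two-phase training strategy described above, the loop below runs a pure-reconstruction warm-up before blending in an adversarial term with an adaptively ramped weight, alongside learning-rate decay. G, D, and loader are placeholders, and the optimizer settings, schedules, and loss weights are illustrative assumptions, not the values used in our runs.

import torch
import torch.nn.functional as F

def train(G, D, loader, device, warmup_steps=10_000, total_steps=100_000):
    # Hypothetical two-phase schedule: L1 reconstruction only during warm-up,
    # then a non-saturating GAN loss ramped in gradually for stability.
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.0, 0.99))
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.0, 0.99))
    sched_g = torch.optim.lr_scheduler.CosineAnnealingLR(opt_g, T_max=total_steps)  # LR decay

    step = 0
    for lr_field, hr_field in loader:  # single pass shown; wrap in epochs as needed
        lr_field, hr_field = lr_field.to(device), hr_field.to(device)
        fake = G(lr_field)

        loss_g = F.l1_loss(fake, hr_field)  # reconstruction term, always on
        if step >= warmup_steps:
            # Ramp the adversarial weight from 0 to 0.05 over one warm-up span.
            adv_w = min(1.0, (step - warmup_steps) / warmup_steps) * 0.05
            loss_g = loss_g + adv_w * F.softplus(-D(fake)).mean()

        opt_g.zero_grad()
        loss_g.backward()
        opt_g.step()
        sched_g.step()

        if step >= warmup_steps:  # train the discriminator only in the adversarial phase
            loss_d = F.softplus(D(fake.detach())).mean() + F.softplus(-D(hr_field)).mean()
            opt_d.zero_grad()
            loss_d.backward()
            opt_d.step()

        step += 1
        if step >= total_steps:
            break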