Remote participation: https://inaf-it.zoom.us/j/87450213665?pwd=mrRnfaqHlSoxUITXunfotuYBQQRtRw.1
This is the first general meeting of the recently instituted INAF Central Scientific Unit VIII-Computing (USC VIII-Computing). The medium-to-long-term objective of this USC is to contribute to coordinating the work, competences and skills present in the INAF structures across the various ICT areas, from data through hardware, software and network architectures and infrastructures, in order to grow a computing ecosystem around research and development in astronomy and astrophysics. The overarching aim is to sustain, over the next few decades, the strong competitiveness of INAF members in the international arena.
The assembly will offer a great opportunity, after the pandemic, to meet all INAF people involved in the different themes connected to data and computing technologies, affecting the whole data life cycle.
This includes all INAF personnel working on digital technologies for research, development and innovation in astronomy and astrophysics: for example, data management and archives; computation across the computing continuum, with infrastructures, facilities, data centers and software tools; networks and hardware; but also key emerging areas of computing such as AI techniques and technologies and quantum computing, together with transversal fields (e.g., security, interoperability and standards, open source and open data in the context of open science practices).
Moreover, the meeting will also be an occasion to meet the people in the INAF structures involved in the design, deployment, management and support of the digital infrastructures, tools and services for the whole personnel.
Furthermore, the assembly will host a series of parallel sessions dedicated to administrative staff, focused on organizational topics, budgeting and, more generally, on the digitalisation of public administrations and on the challenges related to developing new management systems and using those already acquired.
The assembly will give opportunities to strengthen expertise and competences, also through dedicated training sessions, to disseminate experiences, activities and best practices around common needs and problems, and to promote discussion of the several themes among all the personnel involved in INAF. This could help establish or consolidate collaborations and facilitate information exchange.
Finally, we would like to preserve the spirit of the very successful series of ICT meetings, unfortunately interrupted by the Covid pandemic. In this respect, the particular location can promote "team-up" actions among all the people working or interested in specific fields, and also allows for professional development; in this regard, participation for the entire week is strongly encouraged.
We invite participants to submit abstracts on any of the themes. The revised deadline for abstract submission is 15 September 2024.
Thanks to the support of USC VIII, there is no registration fee for INAF personnel.
Obviously, participants will have to cover all other costs. In particular, please note that booking accommodation at the Assembly venue must be done individually (and independently of the registration) following the instructions under the "Logistic Information" menu on the left side of this page. The extended deadline for booking accommodation at the venue of the Assembly is 21 September 2024, whereas registration will remain open until 30 September 2024.
SOC: Serena Pastore (chair) on behalf of the USC VIII steering committee: Ugo Becciani, Gianfranco Brunetti, Andrea Bulgarelli, Deborah Busonero, Alessandro Costa, Anna Di Giorgio, Cristina Knapic, Marco Landoni, Andrea Possenti, Giuliano Taffoni
LOC: Serena Pastore, Cristina Knapic, Barbara Neri, Sabrina Carraro, Federica De Guio, Federico Di Giacomo, Fabrizio Lion, Laura Marongiu, Amedeo Petrella, Danilo Selvestrel, Cristiano Urban.
Parallel administrative session
Parallel administrative session
Parallel administrative session
Parallel administrative session
Introduction to the sessions
Quantum computing
I will summarise the INAF participation in Spoke 10 of the ICSC (National Research Centre for High Performance Computing, Big Data and Quantum Computing).
The intersection of Quantum Computing and Machine Learning, known as Quantum Machine Learning (QML), presents significant potential for advancing data-intensive fields like astrophysics. Astrophysics increasingly relies on Deep Learning (DL) for handling vast datasets generated by ground-based and satellite experiments, though the potential of Quantum DL remains underexplored. This study evaluates Quantum Convolutional Neural Networks (QCNNs) for detecting Transient Gamma-Ray Bursts (GRBs), using data from Cherenkov Telescope Array (CTA) simulations. GRBs, classified into short and long-duration types, are critical astrophysical events requiring prompt detection.
Building on previous work, we applied QCNNs to GRB data, comparing their performance with classical Convolutional Neural Networks (CNNs). Our findings indicate that QCNNs achieve comparable accuracy, often exceeding 90%, with fewer parameters than classical CNNs, suggesting potential efficiency gains. However, QCNNs require longer training times due to current limitations in quantum computing technology.
We conducted comprehensive benchmarks, examining the impact of hyperparameters such as the number of qubits and encoding methods, including Data Reuploading. While increasing qubits and using sophisticated encodings generally improved performance, it also increased complexity and training time. QCNNs demonstrated robust performance on both time series and sky map datasets, maintaining high accuracy with fewer parameters.
This study explores the application of QCNNs in astrophysics, highlighting their potential and limitations. While QCNNs show promise, especially in parameter efficiency, further optimization of quantum hardware and software is necessary for real-time applications. Our work lays the groundwork for future research, offering insights that could lead to significant advancements in the application of quantum neural networks in astrophysical data analysis.
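To make the quantum side of such models concrete, here is a minimal variational quantum classifier in PennyLane. This is an illustrative sketch, not the QCNN architecture benchmarked in this study: the qubit count, the angle embedding and the entangling ansatz are all assumptions chosen for brevity.

```python
import pennylane as qml
from pennylane import numpy as np

n_qubits = 4  # illustrative; the study benchmarks different qubit counts
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def circuit(inputs, weights):
    # encode classical features into qubit rotations (one simple encoding choice)
    qml.AngleEmbedding(inputs, wires=range(n_qubits))
    # trainable entangling layers playing the role of quantum "convolutions"
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    # a single expectation value read out as the classification score
    return qml.expval(qml.PauliZ(0))

shape = qml.StronglyEntanglingLayers.shape(n_layers=2, n_qubits=n_qubits)
weights = np.random.random(shape)
score = circuit(np.random.random(n_qubits), weights)  # sign -> binary label
```

In a training loop the weights would be optimized against a classical loss, exactly as for a classical CNN head.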
Quantum Machine Learning (QML) is an emerging field that integrates Machine Learning techniques with quantum computing to take advantage of the computational power of qubits. By leveraging quantum phenomena such as superposition and entanglement, QML has the potential to solve complex problems significantly faster than traditional methods, enhancing efficiency in data analysis, optimization and pattern recognition.
Astrophysics exploits ML to enhance the identification and analysis of cosmic phenomena, such as Gamma-Ray Bursts (GRBs). ML algorithms are employed to process vast amounts of data collected by space telescopes or satellites such as AGILE, enabling more accurate and rapid detection of signals associated with GRBs, while effectively distinguishing them from background noise.
The purpose of this research is to explore the use of QML for the detection of GRBs and their differentiation from background noise. This study builds upon a classical approach, where a ML model known as Convolutional Neural Network Autoencoder was employed to perform anomaly detection for the identification of GRBs in the ratemeters of the AGILE Anti-Coincidence System. The proposed QML framework aims to replicate or enhance detection accuracy and efficiency by leveraging quantum computational advantages to better distinguish between GRBs and background noise.
Therefore, we have implemented various types of hybrid quantum-classical convolutional autoencoders, which differ from their fully-classical counterparts by employing a quantum encoder that replaces classical convolutions with quantum ansatzes and quantum encoding. Different embedding techniques, such as amplitude embedding and data re-uploading, were explored using 8- and 16-qubit architectures. The models were trained in an unsupervised manner using a dataset of simulated univariate time series resembling background light curves. The reconstruction error of the autoencoder is used as an anomaly score to classify the time series.
The results obtained so far indicate that the quantum approach achieves fair results, partially succeeding in reconstructing the input curves. However, its performance has not yet reached the level of precision achieved by the classical approach, which shows excellent results in both curve reconstruction and anomaly detection. Future work may involve developing a fully-quantum autoencoder, in which the decoder is also composed of quantum circuits instead of transposed convolutions.
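The anomaly-detection step itself is framework-independent and can be sketched in a few lines; the `autoencoder` object and the threshold calibration below are hypothetical placeholders standing in for the trained hybrid model.

```python
import numpy as np

def anomaly_scores(autoencoder, light_curves):
    """Reconstruction-error anomaly score used to classify time series."""
    reconstructed = autoencoder.predict(light_curves)  # hypothetical model API
    return np.mean((light_curves - reconstructed) ** 2, axis=1)

def classify(scores, threshold):
    # a curve is flagged as a GRB candidate when its reconstruction error
    # exceeds a threshold calibrated on background-only light curves
    return scores > threshold
```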
A hybrid quantum genetic algorithm has been developed to minimize $\chi^2$ functions of different cosmological probes, in order to find the best-fit values of two cosmological parameters. The algorithm computes the merit function classically, and then uses a quantum circuit to entangle the population and perform crossover and mutation operations. The results are consistent with plots of the objective functions. We then tested the general behavior of our algorithm against its hyperparameters, and compared it with a second quantum genetic algorithm from the literature as well as with classical algorithms.
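The hybrid scheme can be sketched as follows, with the classical merit-function evaluation explicit and the quantum crossover/mutation step reduced to a classical stand-in. The $\chi^2$ function, population size and rates are illustrative assumptions, not the values used in the work.

```python
import numpy as np

def chi2(params):
    # placeholder merit function over two cosmological parameters
    return np.sum((params - np.array([0.3, 0.7])) ** 2)

rng = np.random.default_rng(1)
pop = rng.uniform(0.0, 1.0, size=(32, 2))  # population of candidate parameter pairs

for generation in range(100):
    fitness = np.array([chi2(ind) for ind in pop])  # classical evaluation
    parents = pop[np.argsort(fitness)[:16]]         # select the fittest half
    # in the hybrid algorithm, crossover and mutation are performed by a quantum
    # circuit acting on an entangled register; a classical stand-in keeps this
    # sketch self-contained
    children = 0.5 * (parents + parents[::-1]) + rng.normal(0, 0.02, parents.shape)
    pop = np.vstack([parents, children])

best = pop[np.argmin([chi2(ind) for ind in pop])]
print("best-fit parameters:", best)
```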
Round table and discussion on quantum computing in INAF
Artificial Intelligence: status in INAF
New advancements in radio data post-processing are underway within the SKA precursor community, aiming to facilitate the extraction of scientific results from survey images through a semi-automated approach. Several of these developments leverage deep learning (DL) methodologies for diverse tasks, including source detection, object or morphology classification, and anomaly detection. Despite substantial progress, the full potential of these methods often remains untapped due to challenges associated with training large supervised models, particularly in the presence of small and class-unbalanced labeled datasets.
Self-supervised learning has recently established itself as a powerful methodology to deal with some of the aforementioned challenges, by directly learning a lower-dimensional representation from large samples of unlabeled data. The resulting model and data representation can then be used for data inspection and various downstream tasks if a small subset of labeled data is available.
In this study, we explored contrastive learning methods to learn suitable radio data representation from unlabeled images taken from the ASKAP EMU and MeerKAT GPS surveys. We evaluated trained models and the obtained data representation over smaller labeled datasets, also taken from different radio surveys, in selected analysis tasks: source detection and classification, and search for objects with peculiar morphologies. An overview of the achieved results will be presented at the workshop, along with updates on ongoing activities focusing on pre-training dataset curation and exploitation of ViT and multi-modal ViT+LLM architectures.
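A common way to exploit such a representation downstream, once a small labeled subset exists, is "linear evaluation": training a small classifier on top of the frozen features. The sketch below uses random stand-in embeddings; in real use they would be the encoder's outputs for the labeled images.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# stand-in for Z = encoder(images) from a contrastively trained model
Z = rng.normal(size=(200, 128))   # 200 labeled sources, 128-dim representation
y = rng.integers(0, 2, size=200)  # e.g. compact vs. extended morphology

# the encoder stays frozen; only this lightweight classifier is trained
clf = LogisticRegression(max_iter=1000).fit(Z, y)
print("training accuracy:", clf.score(Z, y))
```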
The HaMMon project is the outcome of an industrial partnership that includes many Italian research institutions and private companies, led by UnipolSai and Leitha. The project is funded by the ICSC, the National Research Centre for High Performance Computing, Big Data and Quantum Computing.
The ambition of HaMMon is to build a flexible and expandable platform to map the hydrogeological and atmospheric balance of the Italian territory, extending current knowledge in hazard mapping, monitoring and forecasting from an industrial perspective, by means of innovative technologies and the interdisciplinary activities carried out by the Spokes of the ICSC National Centre [https://www.supercomputing-icsc.it/].
We will present the current activities and preliminary results related to the integration of Photogrammetry techniques, Data Visualization and Artificial Intelligence technologies, to map and assess the impact of extreme natural disasters, extracting meaningful information on risk-exposed assets.
Acknowledgement
The work is supported by Spokes 1, 2 and 3 of the ICSC – Centro Nazionale di Ricerca in High Performance Computing, Big Data and Quantum Computing – and its hosting entity, funded by the European Union – NextGenerationEU.
Radio astronomy is undergoing a data revolution, with advancements in instruments like LOFAR, MeerKAT, MWA, and ASKAP yielding unprecedented insights into the universe. The massive volume of data generated demands efficient processing and analysis, including tasks like source detection, segmentation, and classification. Artificial Intelligence (AI), particularly Machine Learning, offers a promising solution. In our prior studies, we applied AI to radio astronomy data, using Convolutional Neural Networks (CNNs) for automated source detection and U-Net architectures for segmentation of diffuse radio emission, with promising results.
Building on this, our research explores the use of Vision Transformers (ViTs), a cutting-edge development based on the Transformer architecture, originally pioneered for natural language processing. Unlike CNNs, which focus on local pixel patterns, ViTs treat images as patches and use self-attention mechanisms to understand the relationships between these patches. This approach is being exploited in the framework of the HaMMon Innovation Project, funded by the National Center for HPC, Big Data and Quantum Computing (ICSC), for the assessment of the impacts of extreme natural events on the Italian territory. The ViT is proving effective also for tasks like source detection and segmentation in radio astronomy. The ViT approach has been combined with the U-Net architecture in order to enhance the performance of the traditional method for semantic segmentation.
We first present the main results from applying the U-Net model to LOFAR Two Metre Sky Survey observations. Then, we highlight the improved network performance and the preliminary outcomes achieved with this new approach.
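To make the "images as patches plus self-attention" idea concrete, here is a minimal PyTorch sketch of a ViT-style patch embedding followed by one self-attention step. The patch size and dimensions are illustrative assumptions, not those of the networks above.

```python
import torch
import torch.nn as nn

class PatchEmbedding(nn.Module):
    """Split an image into non-overlapping patches and project each to a token."""
    def __init__(self, in_ch=1, patch=16, dim=256):
        super().__init__()
        # a strided convolution implements patching + linear projection in one op
        self.proj = nn.Conv2d(in_ch, dim, kernel_size=patch, stride=patch)

    def forward(self, x):                    # x: (B, C, H, W)
        t = self.proj(x)                     # (B, dim, H/patch, W/patch)
        return t.flatten(2).transpose(1, 2)  # (B, n_patches, dim) token sequence

tokens = PatchEmbedding()(torch.randn(1, 1, 128, 128))  # 64 tokens of dim 256
attn = nn.MultiheadAttention(embed_dim=256, num_heads=8, batch_first=True)
out, _ = attn(tokens, tokens, tokens)  # each patch attends to all the others
```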
Faithful reconstruction of the original spectral information is a long-standing yet unachieved goal in the science of spectroscopic data treatment, essential to several scientific cases in cosmology, fundamental physics, and stellar astrophysics. Data reduction in particular has still failed to implement on a large scale what is demonstrably the best approach to spectral extraction, namely the "spectro-perfectionist" (SP) method. I will talk about our project to remove this bottleneck by implementing the first full-SP tool for a super-stable spectrograph, HARPS. The project will take advantage of recent technological improvements (e.g. in GPUs) and novel solutions to model the instrumental PSF based on deep learning. The results will lead to similar tools for other state-of-the-art spectrographs, like ESPRESSO on the Very Large Telescope and the future ANDES spectrograph for the Extremely Large Telescope.
This project is an ICSC Innovation Grant. It aims to demonstrate the applicability of modern AI-based techniques in developing predictive maintenance systems and modeling interdependencies in complex industrial apparatuses. Methods currently utilized in the academic domain, such as anomaly detection in signal processing, log parsing from IT systems, and graph-based analysis, will be adapted and tested on real-world data from Eni's production sites. These data sets include sensor readings from industrial facilities, data from individual apparatuses, and complex relational graphs representing interactions between multiple systems. The goal is to assess the usability and effectiveness of these AI-driven approaches in enhancing operational efficiency and reliability in industrial environments.
The solver module of the Astrometric Verification Unit-Global Sphere Reconstruction (AVU-GSR) pipeline aims to find the astrometric parameters of ~10^8 stars in the Milky Way, the attitude and instrumental settings of the Gaia satellite, and the parametrized post-Newtonian parameter gamma, with a 10-100 micro-arcsecond resolution. To perform this task, the code solves a system of linear equations with the iterative LSQR algorithm, where the coefficient matrix is large (10-50 TB) and sparse, and the iterations stop when convergence is achieved in the least-squares sense. The two matrix-by-vector operations at each LSQR iteration were GPU-ported with CUDA, with a 14x speedup over an original code version entirely parallelized on the CPU with MPI + OpenMP. The CUDA code was further optimized, obtaining an additional 2x speedup. Special attention is given to a code section that computes the covariances, whose total number is Nunk x (Nunk - 1)/2 and which occupy ~1 EB, where Nunk ~ 5 x 10^8 is the total number of unknowns. This "Big Data" issue cannot be faced with standard methods: we defined an I/O-based pipeline made of two concurrently launched jobs, where one job, the LSQR, writes files and the second job reads them, iteratively computes the covariances, and deletes them to avoid storage issues. The covariance calculation does not substantially slow down the code up to ~8 x 10^6 covariance elements. The code currently runs in production on the Leonardo CINECA infrastructure.
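For orientation, the mathematical core of the solver corresponds to what SciPy exposes as sparse LSQR, shown here on a vastly smaller toy system; the random matrix below is a stand-in for the actual coefficient matrix, not Gaia data.

```python
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(0)
A = sparse_random(10_000, 500, density=1e-3, random_state=0)  # sparse design matrix
x_true = rng.normal(size=500)                                 # "astrometric" unknowns
b = A @ x_true + rng.normal(scale=1e-3, size=10_000)          # observations + noise

# iterative least-squares solve; iterations stop at convergence in the LSQR sense
x_hat, istop, itn = lsqr(A, b, atol=1e-8, btol=1e-8)[:3]
print(f"stopped with flag {istop} after {itn} iterations")
```

In the production code, the two matrix-by-vector products hidden inside each such iteration are the CUDA-ported hot spots.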
The James Webb Space Telescope is allowing the characterisation of planetary atmospheres with a level of detail that exceeds the resolution of current models. With the future launch of the ESA mission Ariel, interpreting the large anticipated volume of such detailed spectra will require a new generation of models and theoretical frameworks. To tackle this challenge in a timely manner, we developed and are continuously expanding the Arχes suite of simulation codes for astrochemistry and planet formation. In this talk we will present the PNRR Key Science Project OPAL, whose goals are twofold: (1) enhance the efficiency and HPC capabilities of the Arχes codes and simulation pipelines; (2) produce an unprecedented library of detailed and physically-justified atmospheric models of planets to support the preparation of the Ariel mission. We will also present the most recent addition to the Arχes suite, the planetary GROwth and MIgration Track population synthesis code (GroMiT), and its applications in support of the GAPS project at TNG and of JWST proposals.
The remnants of core-collapse supernovae (SNe) display complex morphologies and a highly non-uniform distribution of stellar debris. In young remnants (less than 5,000 years old), these features encode valuable information about the processes at work in the SN engine, including nucleosynthetic yields and large-scale asymmetries that arise during the early stages of the explosion. Moreover, other properties of the remnants reveal clues about the progenitor stars and their interactions with the circumstellar medium (CSM), which is shaped by the progenitor’s mass-loss history. Understanding the connection between young SN remnants, their parent supernovae, and the progenitor stars is crucial for uncovering the physics of SN engines and investigating the final evolutionary stages of massive stars, as well as the elusive mechanisms driving their mass loss. In this talk, I will present a comprehensive approach that leverages the enhanced efficiency of numerical codes and the availability of advanced HPC resources to model the entire evolution from progenitor stars to supernovae, and ultimately to fully developed supernova remnants. This approach has enabled us to link the observed physical and chemical properties of SN remnants to their progenitor stars and explosions, offering fresh insights into the life cycles and deaths of massive stars.
In this talk I will describe the "Dark Energy and Massive Neutrino Universe" (DEMNUni) simulation suite performed to study different probes of the large scale structure of the Universe in scenarios with massive neutrinos and dark energy equations of state. I will discuss the computational resources invested for the production and preservation of the DEMNUni dataset and present the main scientific results obtained so far. I will also sketch different ongoing projects exploiting the DEMNUni data.
The Square Kilometre Array is expected to generate hundreds of petabytes of data per year, two orders of magnitude more than current radio interferometers. Data processing at this scale requires advanced High Performance Computing (HPC) resources. However, modern HPC platforms consume up to tens of megawatts, and the energy-to-solution of algorithms will become of utmost importance in the near future. In this work we study the trade-off between energy-to-solution and time-to-solution of our RICK code (Radio Imaging Code Kernels), a novel implementation of the w-stacking algorithm designed to run on state-of-the-art HPC systems. The code can run on heterogeneous systems, exploiting the accelerators.
We performed both single-node and multi-node tests with both CPU and GPU solutions, in order to determine which one is the greenest and which one is the fastest. We then defined the green productivity, a quantity relating energy-to-solution and time-to-solution of different code configurations to those of a reference one: configurations with the highest green productivity are the most efficient. The tests were run on the Setonix machine at the Pawsey Supercomputing Research Centre (PSC) in Perth (WA), ranked 28th in the Top500 list (https://top500.org/lists/top500/list/2024/06/) as of June 2024.
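The abstract does not give the exact definition; one plausible formulation consistent with the description (an assumption, not the authors' formula) multiplies the energy and time gains over the reference configuration, so that values above 1 mark configurations that are both greener and faster:

```latex
\mathrm{GP} = \frac{E_{\mathrm{ref}}}{E} \times \frac{T_{\mathrm{ref}}}{T}
```

where E and T are the energy-to-solution and time-to-solution of a given configuration, and E_ref, T_ref those of the reference one.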
I will introduce the SPACE Centre of Excellence, in which INAF plays a major role. The aim of the 4-year project is to (1) advance the HPC capability of 6 European codes towards exascale, via the interaction of code owners and communities with HPC experts from national centres of competence; (2) implement new tools and techniques, in both visualization and computation, via advanced ML techniques, thanks to the interaction and collaboration with experts in the field.
The High-Performance Computing, Big Data and Quantum Computing Research Centre, created and managed by the ICSC, is one of the five National Centres established by the National Recovery and Resilience Plan (NRRP), covering designated strategic sectors for the development of the country. The Centre is organized in 11 Spokes: one dedicated to infrastructure, while the remaining 10 focus on distinct subject areas.
In the talk, we present Spoke 3: "Astrophysics and Cosmos Observations". Its main objectives are the exploitation of cutting-edge solutions in HPC and Big Data processing and analysis for problems of interest in the following research areas: Cosmology; Stars and Galaxies; Space Physics (Earth, Solar and Planetary); Radio Astronomy; Observational Astrophysics and Time Domain; High Energy Astrophysics; Cosmic Microwave Background; Large Scale Structure, Clusters and Galaxies; Multi-messenger Astrophysics; Numerical Simulations and Modeling.
All these domains require designing and implementing a full ecosystem capable of: (i) delivering complex simulations capable of high predictive accuracy to address the complexity of the Universe; (ii) exploiting and/or driving the evolution of current and future computing architectures and algorithms; (iii) exploiting the wealth of data produced by computations and observations; (iv) effectively engaging with the Astronomy, Astrophysics, and Astroparticle physics (AAA) community in the HPC environment of codes and resources; (v) adapting and implementing existing and new algorithms for the new challenging (exascale and post-exascale) HPC infrastructures; (vi) providing innovative data storage and archiving systems to face the big data challenges.
In addition, Spoke 3 undertakes a specific knowledge and technology transfer activity, coordinating a number of projects that involve industrial partners, in order to study and develop advanced AI- and visualization-based solutions capable of efficiently exploiting HPC, so as to boost the productivity and effectiveness of Italian enterprises.
Supernova remnants (SNRs) are expanding nebulae which in general show a rather complex morphology, reflecting the interaction of the supernova (SN) blast wave with the circumstellar/interstellar medium (CSM/ISM), the physical processes associated with the SN explosion, and the internal structure of the progenitor star. The CSM/ISM into which the SNR expands is likely to be quite anisotropic, as it has been strongly modified by the wind of the progenitor. In some cases, SNRs expand into a highly inhomogeneous environment, interacting with molecular and atomic clouds. This interaction leads to a significant slowdown of the forward shock as it collides with dense clouds and, consequently, a strengthening of the reverse shock traveling through the ejecta, resulting in highly asymmetric morphologies.
Combining observations with 3D hydrodynamical (HD) models offers new insights into the mechanisms which produce the diversity of observed SNRs. However, accurately simulating such complex interactions demands substantial computational resources. Here, we present some examples of 3D HD models that describe the evolution of a SNR and its interaction with the inhomogeneous CSM/ISM parametrized according to observations. We discuss the role of the inhomogeneous CSM/ISM in shaping the remnant morphology, and emphasize the necessity of using high-performance computing (HPC) to handle the complexity and scale of these simulations.
The Laboratory of Astroinformatics and Digital Planetology (LAPD) is a center of excellence in designing, developing, and optimizing advanced tools and methodologies to support space and planetary research. Its main activities are focused on several key areas:
Development and Optimization of Codes for Simulation and Data Analysis:
The laboratory creates and refines advanced software codes to simulate space phenomena and analyze complex data from space missions. These codes include algorithms for modeling physical processes, processing images and signals, and simulating future scenarios. The goal is to provide tools that allow researchers to test scientific hypotheses, predict behaviors, and analyze space observation results with high precision.
Hardware Optimization and HPC (High-Performance Computing):
The laboratory manages and optimizes hardware infrastructure dedicated to high-performance computing, essential for handling and analyzing vast amounts of data. This activity includes configuring and optimizing compute clusters, servers, and other computing resources. Optimization aims to improve system performance, reduce processing times, and ensure efficiency in managing computational workloads associated with complex simulations and large datasets.
Management of Archives and Telemetry Processing Pipelines:
Data archiving management is a critical function of the laboratory, which handles the collection, organization, and preservation of data collected from space missions. Additionally, the laboratory develops and maintains telemetry processing pipelines, which include automated processes for validating, correcting, and processing raw data. This ensures that data is readily available and ready for analysis, reducing the risk of errors and ensuring data integrity.
Data Dissemination:
The laboratory plays a crucial role in data dissemination. Through the development of platforms and access tools, the laboratory makes space mission data accessible to the scientific community and the public. Dissemination includes publishing datasets on dedicated repositories, creating web interfaces for data access, and producing documentation and guides to help users work with the data.
The Laboratory of Astroinformatics and Digital Planetology's expertise and resources are cross-cutting across various research lines, significantly contributing to numerous scientific projects. The laboratory supports a variety of space missions, providing its expertise according to specific project needs. Currently, the laboratory is involved in significant space missions such as:
BepiColombo: A joint ESA and JAXA mission to explore Mercury. The laboratory contributes to developing tools for simulating and analyzing data collected during the journey to and observation of Mercury.
Juice (JUpiter ICy moons Explorer): An ESA mission dedicated to exploring Jupiter and its icy moons. The laboratory supports the processing and analysis of scientific data related to Jupiter's atmosphere and its moons.
Ariel (Atmospheric Remote-Sensing Infrared Exoplanet Large-survey): An ESA mission to study the atmospheres of exoplanets. The laboratory is involved in developing tools for analyzing and simulating the spectroscopic data collected by the space observatory.
Philosophies and Open Science Practices
The laboratory adheres to the principles of Free and Open Source Software (FOSS), Findable, Accessible, Interoperable, and Reusable (FAIR), and Open Science. This commitment translates into making its codes and tools available on public platforms such as GitHub and PyPI.
FOSS: Promotes transparency and collaboration through the availability of open-source code. The tools developed are shared freely, allowing anyone to view, modify, and contribute to the code.
FAIR: Ensures data and codes are findable, accessible, interoperable, and reusable. The laboratory adopts practices that facilitate the integration of data and software with other tools and resources.
Open Science: Encourages the dissemination of scientific results and methods through public channels, promoting collaboration and innovation. Research results and data are available to the public to facilitate further studies and developments.
With this approach, the laboratory contributes to the advancement of space science and fosters a more inclusive and collaborative research environment, reflecting a commitment to continuous sharing and innovation.
We will present current technologies and their implementations, with a general introduction to the high-performance GARR wide-area networks, focusing on the technologies needed for their implementation and on the related use cases, either already implemented or under implementation.
Tutor: Francesco Bedosti
Singularity: we will look at which problems containerization can solve, and in which cases it can be convenient to use Singularity containers. We will walk step by step through installing a scientific application inside a container and running it on different operating systems.
Prerequisites: Singularity
INAF is a leader in the development of Instrument Control Software for telescope instruments at large, in particular ESO ones.
Within the framework of the TETIS (TEchnologies for Telescopes and Instrument control Software) Coordination Unit, we present the members of OAPD and OATS and the projects in which they are involved.
Then, we briefly describe the common tasks faced by the teams and the development process of a typical control software for an ESO instrument.
Finally, we give a short description of the main projects managed by the OAPD (MORFEO, MAVIS, SOXS) and OATS (ANDES) teams.
Adaptive Optics (AO) has become crucial in present-day astronomy to reach diffraction-limited imaging of astronomical targets on large ground-based telescopes. AO correction loops usually need to operate at very high frequencies, of the order of 1 kHz, corresponding to the typical coherence timescale of atmospheric turbulence. Hence, they require very fast real-time control (RTC) systems, with low latency and jitter.
At the Observatory of Padova, we have devised and are currently developing MATTO (Multi-conjugate Adaptive Techniques Test Optics), a laboratory test bench funded by INAF through the PNRR STILES project. MATTO is the first and only lab facility aimed at supporting the study and development of new Multi-conjugate Adaptive Optics (MCAO) techniques for the worldwide AO community.
DAO4MATTO is the RTC component of MATTO. In this contribution, I will present its hardware architecture, based on off-the-shelf components, and how it is physically interfaced to the large number of devices on the bench. On the software side, DAO4MATTO is based on the DAO RTC Toolkit, developed by the Durham Adaptive Optics group. I will discuss the advantages of this solution and show some results for a very preliminary DAO4MATTO prototype, obtained by testing DAO on a very simple AO lab system.
An overview is given of the contribution of INAF to the development of the SKAO Observatory Management and Control (OMC) software. Staff from three INAF structures (OAA, OAAb and OATs) are mainly involved in the development of the Local Monitoring and Control of the Central Signal Processor (CSP.LMC) and of the UI-generation platform called Taranta, which produces engineering interfaces for the control system. The SKAO-OMC software is written in Python and based on TANGO Controls, an open-source framework of which both SKAO and INAF are consortium members. SKAO software development involves researchers and technologists from more than 20 countries and follows an organizational and workflow framework called SAFe (Scaled Agile Framework). Important activities are also carried out at system level in supporting the deployment, integration and testing of the SKA software. In this framework, important tools are GitLab CI/CD and Kubernetes for container orchestration.
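For readers unfamiliar with TANGO, a device exposing a monitored quantity can be written in a few lines with PyTango's high-level API. The device class, attribute and command below are illustrative inventions, not part of CSP.LMC.

```python
from tango.server import Device, attribute, command

class SubarrayMonitor(Device):
    """Illustrative TANGO device exposing one monitored attribute."""

    _obs_state = "IDLE"

    @attribute(dtype=str)
    def obs_state(self):
        # monitoring points like this are what Taranta-generated UIs display
        return self._obs_state

    @command
    def start_scan(self):
        self._obs_state = "SCANNING"

if __name__ == "__main__":
    SubarrayMonitor.run_server()
```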
The ASTRI Mini-Array is an international collaboration led by INAF and devoted to imaging atmospheric Cherenkov light for very-high-energy γ-ray astronomy. The project is deploying an array of nine 4-m class Imaging Atmospheric Cherenkov Telescopes at the Teide Observatory on Tenerife, in the Canary Islands, most sensitive to γ-ray radiation above 1 TeV. The Supervisory Control and Data Acquisition (SCADA) system controls all the operations carried out on-site, from the execution of an observing plan to the acquisition of scientific data. SCADA provides monitoring and online observation-quality information to help assess data quality during the acquisition. Moreover, the system provides automated reactions to critical conditions. SCADA handles the automated data transfer to the Data Center at the Observatory in Rome through a high-speed networking connection, allowing us to operate the array remotely from different locations. In this contribution, we describe the software architecture of the SCADA system, the team organisation, the software engineering development approach and the software quality management.
Software Quality Assurance (SQA) is an increasingly important part of modern software development for astronomy. For space projects, ESA enforces, through the ECSS standards, strict SQA procedures for the software operating on its space missions; in the last few years, ground-based projects have also been pressed to deliver higher-quality software for their instruments, since stakeholders like ESO are increasingly aware of the hidden costs of software issues, both in terms of money and of lost observation time. Those hidden costs increase dramatically with the size and operating cost of the telescopes.
It is important that SQA be part of the software life cycle from the early stages, because an early installment of quality management is easier and more productive, while verification can be performed in time to avoid major or blocking issues, such as heavy rewriting of code.
The SQA management plan, which sets the road to achieving that goal, is a deliverable requested in the early stages of most astronomical projects (e.g. ESA, ESO). Being based essentially on a set of standards, it can be formalized, leveraging the commonalities among INAF projects, to deal with a generalized set of requirements in a framework where best practices, tools and procedures are not only outlined but actually implemented. This framework could then be adapted, through ad hoc tailoring to actual requirements, whenever a new INAF project starts.
Based on long experience in different projects (EUCLID/NISP, MORFEO, CUBES, MAVIS, ANDES), we aim to set up a common framework to carry out SQA activities for INAF projects. The framework is virtualized, i.e. a set of virtual machines, a natural tool for investigating different strategies, easily duplicated and adapted to different projects. We aim to take advantage of CI/CD techniques to build a versatile automated SQA pipeline that implements the project-specific quality requirements in a continuous manner, closely following the natural development of the software and avoiding delays in product improvement. Several tools are implemented to carry out the required SQA checks, both commercial (Polyspace, Understand, ...) and open source (e.g. cppcheck, prospector, ...). Style guides for programming languages (C/C++, Python) are provided, together with sets of rules based on consolidated standards (e.g. MISRA, HIS).
An overview of the contribution of INAF to the development of real-time control SW for space instrumentation, from the design phase up to the challenges of space SW validation activities. The experience acquired working with different processors for different missions (Euclid, PLATO, ARIEL, Athena) will be summarized, providing discussion points on similarities with control SW for ground-based instrumentation.
Tutors: Cristina Knapic, Francesca Martines, Andrea Bignamini
A general introduction to DMPs: what they are, what they are for, notes on their structure; data FAIRness: what it is, what it is for and how to achieve it; use of DMPs and practical examples; DMPs for the use of INAF resources; why model data and metadata, and the connection with the FAIR principles; notes on file formats and modelling techniques;
examples of data models in UML and examples of database and ORM implementation in Python.
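As a taste of the final topic, here is a minimal ORM example in Python with SQLAlchemy, mapping a hypothetical observation table; the class name and fields are invented for the exercise, not an INAF data model.

```python
from sqlalchemy import create_engine, String, Float
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column, Session

class Base(DeclarativeBase):
    pass

class Observation(Base):
    """Hypothetical data model: one row per observation."""
    __tablename__ = "observation"
    id: Mapped[int] = mapped_column(primary_key=True)
    target: Mapped[str] = mapped_column(String(64))
    ra_deg: Mapped[float] = mapped_column(Float)
    dec_deg: Mapped[float] = mapped_column(Float)

engine = create_engine("sqlite:///:memory:")  # throwaway in-memory database
Base.metadata.create_all(engine)              # create tables from the model
with Session(engine) as session:
    session.add(Observation(target="M31", ra_deg=10.684, dec_deg=41.269))
    session.commit()
```

The same class doubles as documentation of the data model, which is exactly the link to the FAIR principles discussed in the session.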
Data reduction software
Abstract
The experience accumulated by INAF in international space missions has demonstrated the significant impact and scientific value of imaging tools. These instruments have necessitated the development of software tools across all mission phases to simulate data acquisition, verify and optimize observation strategies, and define essential operational parameters. The Simulator for Operations of Imaging Missions (SOIM) will be a versatile software tool designed for the simulation of these payloads. It will serve as a cross-functional resource, supporting multiple research lines, optimizing performance, and enabling the validation of scientific requirements in collaboration with the scientific community. The contribution of the INAF team is crucial not only in preparing for the planning operations of global mapping and target acquisition for the BepiColombo mission, but also as a tool that can be extended to any imaging instrument for planetary surfaces. This effort involves the collaboration of INAF-OAPD, INAF-IAPS, and CNR-IFN and the LAPD (Laboratorio di Astroinformatica e Planetologia Digitale).
1. Objectives and results
Almost all imaging instruments in planetary science can be modelled as pinhole instruments (analogous to a pinhole camera). These instruments are crucial for planetary exploration, providing images that allow both morphological and radiometric analysis of solar system bodies. Our group's experience with planetary exploration missions has highlighted the lack of an operational simulation tool for such instruments within INAF, which is essential from the early design phases to verify and simulate the acquisition capabilities, impacting risk factors, timing, and precision in payload development.
To address this, the development of the SOIM (Simulator for Operations of Imaging Missions) software began; it is currently being used for the MCAM and SIMBIO-SYS instruments on the BepiColombo mission.
For the operations of a planetary mission, many necessary parameters are defined on the basis of the following inputs: the ephemerides, the instrument model, and the target model.
1.1 Models
For ongoing missions, ephemerides are directly provided by the relevant space agencies in the standard SPICE kernel (SK) format. For future missions, the development of SKs can be supplemented with the definition of potential new scenarios.
The geometric target model may be a DSK already present in the SK pool (e.g. Phobos) or not (e.g. local MOLA models of Mars).
Once the orbit is defined and a 3D model (even an approximate one) of the target body is integrated, the illumination angles for each region within the instrument’s field of view can be calculated. This, in turn, determines the incoming flux to the instrument, enabling the validation of an operational timeline.
The instrument model is formalized through two sub-models: the geometric model and the opto-electronic model. The geometric model defines the camera's projection system. The opto-electronic model describes the sensor's response to incoming flux in terms of gain (related to filters and aperture) and dark current (or an approximation verified during calibration). While the geometric model is defined during calibration, the opto-electronic model, which can be approximated with a few parameters, is deducible from the early phases of optical and sensor design.
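A minimal sketch of the pinhole geometric model mentioned above: a 3D point in the camera frame projects to pixel coordinates through the focal length and principal point. The parameter values are generic illustrations, not SIMBIO-SYS or MCAM calibration values.

```python
import numpy as np

def pinhole_project(point_cam, f_px, cx, cy):
    """Project a 3D point (camera frame, Z along the boresight) to pixels."""
    X, Y, Z = point_cam
    u = f_px * X / Z + cx  # perspective division plus principal-point offset
    v = f_px * Y / Z + cy
    return u, v

# illustrative values: 2000 px focal length, 1024x1024 detector
print(pinhole_project(np.array([0.01, -0.02, 5.0]), f_px=2000.0, cx=512.0, cy=512.0))
```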
The target model is essentially the radiometric model of the observed body. This can be defined generically for less well-known bodies or specifically if data from previous missions is available. For example, the BepiColombo mission will use knowledge of Mercury from earlier missions as an initial estimate for the radiometric models.
1.2 Overview of the Products
The primary outputs of the software are related to image quality and target visibility. Regarding image quality, the system allows the definition of integration times to avoid saturation or smearing. Concerning target visibility, it enables the estimation of repetition times needed to ensure a defined overlap of images and provides mapping of planetary acquisitions as well as a synthetic simulation of the expected data. Outputs related to this task include shapefiles of each acquisition.
With the current version of the simulator, it was possible, for example, to simulate images at the closest approach of the first 4 Mercury flybys by BepiColombo's MCAM cameras. This enabled the creation of acquisition timelines and the validation of geological features captured during these phases.
The same process was used to optimize the camera models by correcting the pointing of these cameras as predicted by the SKs, as detailed in INAF Technical Report 125.
2. Conclusions
Planetary research requires constant updates and experimentation with new technologies, demanding rapid problem-solving and necessitating consistent funding for training and access to basic research resources. The SOIM faces two main challenges: scalability and dissemination.
Studies of pre-supernova outbursts have received a lot of attention because of their high diagnostic power for the nature of the supernova progenitor and their implications for the structure of the circumstellar medium (CSM) in the immediate surroundings of the exploding SNe. To study the low-luminosity outbursts of RSGs, we assembled a literature-based catalog of a coeval and co-distant sample of galactic RSGs in the Scutum-Crux region, which allows us to change the observational paradigm of pre-SN outburst studies. We launched a short-cadence, long-term monitoring campaign aimed at characterizing the multi-color light curves of this large sample of RSGs, detecting late-stage outbursts, and comparing them with very recent models of the final phases of low-mass RSGs before the SN explosion, thus possibly predicting the core collapse of specific objects before it occurs. We present this ongoing project and some preliminary results.
Time series analysis is a powerful tool for extracting insights from data collected over time, and its applications span a wide range of fields. By analysing trends, patterns, and seasonal variations, time series models allow for accurate forecasting and anomaly detection. Beyond forecasting, classifying time series data allows us to group similar patterns and behaviors, adding even more depth to its utility. Moreover, the methodologies applied in one area can often be adapted to others, enabling knowledge sharing and innovation across domains. In this talk, we will explore how the same methodologies used to analyse temporal data can be applied in seemingly unrelated fields, highlighting how innovations in one area can provide valuable insights and solutions in others.
Data Analysis and Visualization
Scientific visualization is currently a very active and vital area of research. It involves the synthesis of large and complex amounts of data into images, videos or even virtual experiences allowing us to immerse ourselves directly into the data. The success of scientific visualization is due to the fact that the brain finds it easier to understand an image than words or numbers. In this way, scientific data visualization is extremely effective not only for communicating the results to different audiences, be they fellow scientists or the general public, but also for the scientific process itself by improving the scientific understanding of the data. The increasing accessibility and volume of data requires effective ways to analyze and communicate the information contained in data sets in simple and understandable formats. The use of different digital technologies in the scientific visualization of this data opens new horizons for understanding the Universe. Virtual reality, for example, offers an immersive environment in which scientists can explore the vast and complex structure of the Milky Way, allowing for a deeper perception of the relationships between stars and other galactic structures. This approach provides real-time information context, facilitating the understanding of correlations between various celestial elements. The ability to simultaneously view multiple data in intuitive ways allows for better identification of astronomical patterns and the formulation of more precise hypotheses.
This approach is widely recognized in the DPAC (Data Processing and Analysis Consortium) as a necessary effort to properly communicate the importance and impact of the extremely large and complex Gaia dataset. Within CU8, we have started to organize working groups to use emerging technologies to visualize some of the CU8-specific data products and results. One of the latest examples of this work is a video, produced for the FPR 2023, that translates the complex science behind the 235,000 DIB (Diffuse Interstellar Bands) measurements into a sequence of easily understandable steps. The language used in the video allows its tone to be adapted to different audiences. The DPAC recognizes the importance of promoting its data releases (DRs) to maintain community interest. Given the embargo on results by the European Space Agency (ESA) until the official release of data, the DRs become crucial to attract the attention of the scientific community and the public. In DR3, an even greater effort was required from each CU to produce material to adequately advertise the products that would be released. Our goal is to consolidate these efforts and continue the analysis of new scientific visualization tools as we prepare for DR4. This will serve to maintain the skills acquired and expand them further.
In this talk, we will explore the importance of scientific visualization in the field of astronomical research, focusing on the integration of videos, images, and immersive experiences for better visualization and understanding of scientific data produced by the Gaia mission. We will also examine the strategies implemented by CU8 to visualize and improve the accessibility of astronomical data, with an outlook towards DR4. Finally, the creation of virtual environments accessible to the public offers an unprecedented opportunity to engage students, and the general public, enabling them to explore space in ways previously unimaginable. This fosters the dissemination of scientific knowledge and interest in space missions such as Gaia.
Introduction
Planetary exploration studies can greatly benefit from integrating data acquired from diverse sources, each with unique spatial and spectral resolutions and often georeferencing inaccuracies.
The project PANCO (PANsharpening and COregistration) is an open-source software suite dedicated to facilitating the integration of images of planetary surfaces through co-registration procedures and resolution enhancement by means of pansharpening techniques. In particular, pansharpening denotes a family of data fusion and radiometric transformations that merge lower-resolution multispectral data (MS) with a high-resolution panchromatic base image (PAN). In this way, the resulting data are characterized by both high spatial and high spectral resolution, gaining the advantages and capabilities of both sources simultaneously [1].
The suite here presented will allow the use of different algorithms and will be accompanied by a series of studies currently in publication that focus on evaluating the methods' performance in real Mars and Mercury observation contexts.
Automatic alignment
An essential part of data integration is "co-registration," i.e., the alignment of data positioning in physical space to accurately overlay the various layers of information.
Despite being a common need for several applications, such as mosaicking, mapping, and photogrammetry, data co-registration remains a time-consuming process that commonly involves a manual search for homologous features between the images in a Geographic Information System (GIS). In addition, the accuracy of this process is greatly influenced by the user's skills and perseverance.
Our current research presents an advanced approach based on an automatic alignment methodology using the Scale-Invariant Feature Transform (SIFT) algorithm [2], a powerful tool inherited from the photogrammetry and computer vision fields. The matching points are used to estimate the homographic transformation between the images, deriving the geometry of the images to be aligned. Even allowing for further manual refinement, this procedure considerably reduces co-registration time and achieves up to pixel-level accuracy across the whole image (Fig. 1).
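The pipeline just described maps directly onto standard OpenCV calls; here is a compact sketch (the file names are hypothetical, and the ratio-test and RANSAC thresholds are common defaults rather than PANCO's settings):

```python
import cv2
import numpy as np

img_ref = cv2.imread("reference.tif", cv2.IMREAD_GRAYSCALE)  # hypothetical inputs
img_mov = cv2.imread("to_align.tif", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img_ref, None)
kp2, des2 = sift.detectAndCompute(img_mov, None)

# match descriptors and keep unambiguous matches (Lowe's ratio test)
matches = cv2.BFMatcher().knnMatch(des2, des1, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

src = np.float32([kp2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)  # robust to outlier matches
aligned = cv2.warpPerspective(img_mov, H, img_ref.shape[::-1])
```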
Pansharpening
Despite its appreciable advantages for morphological analysis [3, 4] and image simulation [5], and despite significant advances in the field [6, 7], the complex alignment and the lack of accessible tools have meant that pansharpening has rarely been used in planetary sciences outside the Earth Observation context.
To date, the suite allows the use of twelve different component substitution (CS) pansharpening techniques suitable for different fields of application and purposes.
The PANCO suite is designed to be flexible, capable of managing images from different sources and at any resolution range. The introduction of newer methods, such as the Gram-Schmidt Adaptive (GSA) and Band-Dependent Spatial Detail with Physical Constraints (BDSD-PC), further enhances its applicability, making it suitable even for the hyperspectral field.
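To illustrate the component-substitution idea with the simplest member of the family, here is the Brovey transform: a generic textbook example, not necessarily one of the twelve methods implemented in the suite.

```python
import numpy as np

def brovey(ms_up, pan, eps=1e-6):
    """Component-substitution pansharpening (Brovey transform).

    ms_up : multispectral cube upsampled to the PAN grid, shape (bands, H, W)
    pan   : panchromatic image, shape (H, W)
    """
    intensity = ms_up.mean(axis=0)  # synthetic low-resolution intensity
    gain = pan / (intensity + eps)  # per-pixel detail-injection ratio
    return ms_up * gain             # each band rescaled by the PAN detail

ms = np.random.rand(4, 256, 256)    # illustrative 4-band cube
pan = np.random.rand(256, 256)
sharp = brovey(ms, pan)             # same bands, PAN-level spatial detail
```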
Validation and case studies
Instrument validation studies have been applied to two datasets concerning the observation of the surface of Mars. The first includes four-band colour images from the ExoMars Trace Gas Orbiter (TGO) CaSSIS (Colour and Stereo Surface Imaging System) combined with panchromatic data from Mars Reconnaissance Orbiter (MRO) HiRISE (High Resolution Imaging Science Experiment) images [8, 9]. This dataset allowed us to validate a spatial-resolution increase of up to sixteen times the original, from the 4.5 m/px of CaSSIS to the 0.25 m/px of HiRISE, using the entire set of methods developed (Fig. 2).
The second phase of studies, currently underway, considers the MRO Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) hyperspectral cubes, using the weighted sum of CaSSIS multispectral observations as the reference panchromatic. The focus of this study is the ability of recent methods to handle high-spectral-resolution data, with 489 bands between 362 and 3920 nm [10]. In both tests, the chosen images were selected to span a range of lighting conditions, surface geological properties, and acquisition geometries, allowing the influence of these factors to be further analysed. The results were then examined using various numerical performance indicators and a visual survey. The evaluation indexes adopted include SAM, RMSE, ERGAS, UQI, PSNR, and SSIM, covering various aspects of the images' spectral and structural properties.
Acknowledgements
The study is part of the INAF MiniGrant "Combined implementation of CaSSIS and HiRISE data through pansharpening experiments" (CUP C93C23008430001), and it has also been supported by the Italian Space Agency (ASI-INAF agreement no. 525 2017-03-17).
References
[1] Vivone, G., et al. (2021). IEEE Geosci. Remote Sens. Mag., 9(1), 53–81.
[2] Lowe, D. G. (1999). IEEE Int. Conf. Comput. Vis., vol. 2, 1150–1157.
[3] Kwan, C., et al. (2017). IEEE Int. Geosci. Remote Sens. Symp. (IGARSS).
[4] Semenzato, A., et al. (2020). Remote Sens., 12(19), 3213.
[5] Tornabene, L. L., et al. (2017). Space Sci. Rev., 214(1).
[6] Aiazzi, B., et al. (2007). IEEE Trans. Geosci. Remote Sens., 45(10), 3230–3239.
[7] Palubinskas, G. (2013). J. Appl. Remote Sens., 7(1), 073526.
[8] Thomas, N., et al. (2017). Space Sci. Rev., 212(3–4), 1897–1944.
[9] McEwen, A. S., et al. (2023). Icarus, 115795.
[10] Murchie, S., et al. (2007). J. Geophys. Res. Planets, 112(E5).
Fig. 1: Example of a CRISM image (on the left) aligned to a reference HRSC image (on the right) through matching points between the images (linked in green) found by the SIFT operator.
Fig. 2: Detail of the maximum resolution increase obtained during testing, from 4.5 m/px ((a) CaSSIS mosaic MY34_004209_158_0 – RPB) to 0.25 m/px (b), obtained with HiRISE PSP_010394_2025 and the Gram-Schmidt pansharpening algorithm.
Modern Astronomy and Cosmology (A&C) generate petabyte-scale data volumes requiring the development of a new generation of software tools to access, store, and process them. The Visualization Interface for the Virtual Observatory (VisIVO) tool [https://visivo.readthedocs.io/en/latest/] performs multi-dimensional data analysis and knowledge discovery in multivariate astrophysical datasets. Thanks to containerization and virtualization technologies, VisIVO has already been exploited on top of several distributed computing infrastructures including the European Open Science Cloud (EOSC). Within the SPACE Center of Excellence [https://www.space-coe.eu/] we are adapting VisIVO solutions for high performance visualization of data generated on the (pre-)exascale systems by HPC applications in A&C, including GADGET (GAlaxies with Dark matter and Gas) and ChaNGa (Charm N-body GrAvity solver) simulations.
We outline the execution strategies designed to enhance the portability and reproducibility of the VisIVO modular applications for high-performance visualization of data generated on (pre-)exascale systems by HPC A&C simulations performed with the GADGET code. Additionally, we present a solution to run analysis and visualization concurrently (in situ) with the simulation, bypassing the storage of the full results by exploiting a framework offering a highly distributed database to stream A&C simulation data for on-line visualization, and we demonstrate it with the ChaNGa high-performance cosmological simulator.
Acknowledgement
The work has received funding from the European High Performance Computing Joint Undertaking (JU) and Belgium, Czech Republic, France, Germany, Greece, Italy, Norway, and Spain under grant agreement No 101093441 (SPACE CoE).
It is also supported by Spoke 1 "FutureHPC & BigData" and Spoke 3 "Astrophysics and Cosmos Observations" of the ICSC – Centro Nazionale di Ricerca in High Performance Computing, Big Data and Quantum Computing – and its hosting entity, funded by the European Union – NextGenerationEU.
The field of astrophysics is continuously advancing, with an ever-growing influx of data requiring robust and efficient analysis tools. As the Square Kilometre Array (SKA) radio telescopes become fully operational, we anticipate the generation of hundreds of petabytes of data annually, characterized by unprecedented resolution and detail. In this context, scientific visualization becomes a critical component, enabling researchers to interpret complex datasets and extract meaningful insights. The immense volume of data demands not only suitable tools but also substantial infrastructure and computational capacity to analyze it effectively. In this talk, we will discuss how we are addressing these challenges with the development of our interactive visualization tool, VisIVO Visual Analytics. The tool is transitioning from a local visualizer to a remote visualizer, using a client-server architecture. This evolution will allow the software to run parallel visualization pipelines on high-performance computing (HPC) clusters, thereby enhancing its capacity to handle extensive datasets efficiently.
GIT & GITLab:
Speakers: A. Bignamini, K. Munari
Introduction to Git: what it is, why use it (or why not);
installation and configuration of Git, and access to GitLab INAF;
overview of the basic commands and of branching; best practices and mistakes to avoid;
application of a typical Git workflow, the "feature branch workflow", with a hands-on part based on GitLab and the Git commands (a minimal sketch of this workflow follows below).
Prerequisite: an internet connection to access GitLab INAF, which hosts the documentation:
http://gitlab-school.pages.ict.inaf.it/howto-gitlab/
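For orientation, a minimal feature-branch round trip of the kind practised in the session might look as follows (host, group and branch names are placeholders, not the actual GitLab INAF paths):

  # Clone the project and create a feature branch (names are illustrative).
  git clone https://<gitlab-inaf-host>/<group>/<project>.git
  cd <project>
  git checkout -b feature/improve-docs

  # Work, then stage and commit the changes.
  git add README.md
  git commit -m "Improve the README quick-start section"

  # Publish the branch, then open a merge request in the GitLab web UI.
  git push -u origin feature/improve-docs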
GitLab & CI/CD
Speakers: D. Tavagnacco, C. Urban
General notions of CI/CD pipelines in GitLab;
definition and construction of a pipeline associated with a GitLab project, with examples of triggers;
installation and configuration of a runner service on your own laptop to execute a CI/CD pipeline (a minimal pipeline sketch follows below).
Prerequisites: Docker installed and working, an internet connection to access GitLab INAF and the training documentation, a text editor, and basic knowledge of Git, YAML and TOML. During the session we will use the shell to run basic commands (commit, edit, etc.) on a sample project written in Python, to which a CI/CD pipeline will be applied.
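As a taste of what will be built during the session, a minimal .gitlab-ci.yml for a Python project could look like the following (job name and image tag are illustrative, not taken from the training material):

  # .gitlab-ci.yml -- single-stage pipeline that runs the test suite.
  stages:
    - test

  run-tests:
    stage: test
    image: python:3.11        # any recent Python image works
    script:
      - pip install pytest
      - pytest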
Agile
Speakers: V. Alberti, M. Di Carlo
Introduction to the Agile method: principles and motivations;
common frameworks and implementation examples: Scrum, Kanban and SAFe.
Predictive maintenance in High-Performance Computing (HPC) systems is crucial for ensuring reliability, performance, and energy efficiency. By leveraging AI-driven models, we can detect anomalies and predict potential faults before they disrupt operations, minimizing downtime and repair costs. This talk will explore advanced anomaly and fault detection techniques in HPC environments, utilizing machine learning algorithms to monitor system behaviour and identify irregularities. Additionally, we will examine how AI can optimize energy usage, reducing the carbon footprint of these systems while maintaining high computational performance.
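As a flavour of the techniques involved, anomaly detection on node telemetry can be prototyped in a few lines with scikit-learn's IsolationForest; the data below are synthetic and the values illustrative, not taken from the talk:

  import numpy as np
  from sklearn.ensemble import IsolationForest

  # Synthetic telemetry: each row is a (cpu_temp_C, load_avg) sample.
  rng = np.random.default_rng(0)
  normal = rng.normal(loc=[65.0, 0.7], scale=[3.0, 0.1], size=(500, 2))
  faulty = rng.normal(loc=[90.0, 1.5], scale=[2.0, 0.1], size=(5, 2))
  samples = np.vstack([normal, faulty])

  # Fit on healthy data; predict() returns -1 for anomalous samples.
  model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
  labels = model.predict(samples)
  print(f"{(labels == -1).sum()} anomalous samples flagged")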
The CAESAR project (Comprehensive Space Weather Studies for the ASPIS Prototype Realisation) was funded by ASI (Italian Space Agency) and INAF (Italian National Institute for Astrophysics) to develop the prototype of ASPIS (ASI SPace weather InfraStructure) and conduct research for Space Weather Science.
This contribution will give an overview of the CAESAR project and of the resulting ASPIS prototype. The harvesting and preparation of the research products, the metadata management, the database contents, the web GUI, the dedicated APISpy package for programmatic access, and usage samples of the full system will be described.
INAF manages three single dish radio telescopes (Medicina, Noto and Sardinia Radio Telescope, SRT). The three dishes are also part of the European VLBI Network and the International VLBI Service for Geodesy & Astrometry. Also, SRT is involved in international collaborations dedicated to pulsar observation, namely the European Pulsar Timing Array and the Large European Array for Pulsars project.
The increasing importance of Science Archives and archive mining in defining the ultimate productivity of an observing facility motivated the Italian Centre for Astronomical Archives (IA2) service to develop and maintain the INAF Radio Data Archive. Such a geographically-distributed archival facility flexibly handles different data models and formats, also supporting data discovery/access through Virtual Observatory (VO).
In this contribution I will give an overview of the archival system, addressing challenges posed by increasing data rates/volumes produced by the state-of-the-art digital backends. I will also present the INAF effort in modeling such data to enable their discoverability, access and retrieval through VO tools and services.
TBD
Within the Osservatorio di Astrofisica e Scienza dello Spazio di Bologna (OAS), we have always needed powerful, shared computing resources on which scientists can process the scientific data of projects and missions.
For this reason we have built an adequate, continuously evolving Computing Centre (CdC) able to satisfy our research needs.
The OAS Computing Centre is small-to-medium sized: a room of about 45 square metres hosting 7 racks with about 80 servers, plus 1 rack for network equipment. Above all, it relies on a professional infrastructure: a redundant cooling system with free-cooling capability, UPS units and a generator, for a maximum power of about 50 kW.
The CdC hosts a 10 Gbit/s Ethernet network for server connections and 1-10 Gbit/s links to the offices, with a 10 Gbit/s Internet connection that we plan to upgrade to 100 Gbit/s in the coming months. It is therefore a small CdC, but with characteristics, and hence reliability, similar to those of larger ones.
The Data Centre hosts all the essential network services: DNS, LDAP, VPN, NAT, DHCP, RADIUS, etc. We implement them on a professional virtualization system based on ProxMox, which also supports other needs. But the most important element of the CdC is certainly the Computing Cluster, which lets scientists work interactively, launching their processing in real time, including graphically through virtual-console programs, or in the classic batch mode, which uses the Slurm queueing system to submit jobs in an organized way to the most powerful compute nodes (a minimal job script is sketched below).
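Purely as an illustration of the batch workflow, a minimal Slurm job script might look as follows (resources and executable name are invented, not the actual OAS configuration):

  #!/bin/bash
  # Minimal Slurm batch script (all values are illustrative).
  #SBATCH --job-name=demo
  #SBATCH --ntasks=4
  #SBATCH --time=00:10:00
  #SBATCH --output=demo_%j.log

  srun ./my_analysis        # hypothetical executable

  # Submit with: sbatch demo.sh ; inspect the queue with: squeue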
The OAS Cluster does not have the power of the large commercial and research clusters but, being tailored to the needs of OAS staff, it serves the institute well. Users find the programs and compilers they need already installed in the main versions, and have enough computing power and storage space to work far more comfortably than on their personal computers.
The Cluster can also serve as a training ground for later access to larger facilities, should that become necessary, since it is based on the same operating principles.
Another important function of the OAS CdC is hosting servers dedicated to missions and projects, offering a professional environment with broadband connectivity among them and to the Internet. At the moment we host servers and workstations of ASTRI-CTA, CTA-RTA, AGILE, PLANCK, REM, GAIA, GRAPPA, INTEGRAL, XMM, CHANDRA, EUCLID, etc. The OAS CdC also hosts the servers behind the Media INAF and EDU INAF services, and is ready to host the CTAO-PO Data Centre, which already has a dedicated rack inside the CdC.
In recent years the demand for VMs and containers has grown enormously, covering a whole series of IT services previously provided by physical machines. From simple web sites to parallel data-analysis systems, this way of working is by now widely used in every professional context.
To manage these requests in a coordinated and efficient way, numerous software products, the hypervisors, are available, each with its own characteristics.
At IAPS we carried out a comparative study to decide which of the available systems for creating and managing VMs was best suited to the institute's needs, and which family of servers was most appropriate for its implementation in terms of performance, price and scalability.
In the end the choice fell on ProxMox Virtual Environment, a free, Debian-based hypervisor that is reliable, powerful, feature-rich and highly scalable, able to adapt to the institute's needs in both the administrative/management and the computing domains. The presentation will briefly introduce the various software products available and then discuss in more detail ProxMox VE and ProxMox Backup Server, both installed and used in a coordinated way on the institute's servers to ensure maximum reliability.
The chosen hardware architectures and the Server+NAS system we built will be explained, and two instructive use cases illustrating the possibilities offered by the system will be presented (a small command-line sketch follows below).
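To give an idea of day-to-day VM management in ProxMox VE, a virtual machine can be created and started from the node shell with the qm CLI; in this sketch the VM id, sizes and storage/bridge names are illustrative:

  # Create a VM, attach a 32 GB disk on the local-lvm storage, start it.
  qm create 100 --name demo-vm --memory 4096 --cores 2 --net0 virtio,bridge=vmbr0
  qm set 100 --scsi0 local-lvm:32
  qm start 100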
MIRTA is a project aiming to support and improve the networking and sharing capabilities of the technology research community - and more - inside the Italian National Institute for Astrophysics (INAF), with possible interactions with, and extensions to, other institutions.
The project focuses on the design and production of a database, an interactive search engine and a community forum, in order to connect people and ease the sharing of tools, knowledge, instruments, expertise.
INAF is involved in a high number of scientific and technological research projects on an international scale. These activities, in order to be optimally managed, require an operational approach to sharing information and resources and facilitating contacts in the community.
MIRTA aims at building an online tool that makes it simple and effective to search within the community for the resources, skills and help one needs.
The project consists of the development of an interactive map (IM), easy to consult, which collects information on technological research in INAF: laboratories, research and technical personnel, skills, facilities, relationships with industry, areas of interest and application, collaborations, and anything else that can be shared within the community.
Behind the IM user interface there will be a database, fed and updated by the INAF community itself. The information it contains will be conveyed into structured metadata documents (describing individual skills, knowledge, tools, etc.), which will populate the (potentially relational) database that the IM queries to answer user searches (a toy record of this kind is sketched below).
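Purely for illustration, and not taken from the project documentation, one such metadata record could be expressed as follows (all field names are invented):

  # Hypothetical MIRTA-style metadata record (invented field names).
  record = {
      "type": "laboratory",
      "name": "Example Optics Lab",
      "structure": "INAF - Example Observatory",
      "skills": ["optical coating", "interferometry"],
      "contact": "name.surname@inaf.it",
  }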
The International Virtual Observatory Alliance (IVOA) is a community effort to build the standards that let astronomical datasets and other resources work as a seamless whole, i.e., interoperate at the technical and semantic level, enabling adherence to the FAIR principles. To make this vision possible, national projects contribute to the IVOA, and VObs.it is the Italian effort in this respect.
This contribution will provide a brief overview of the IVOA and VObs.it, and then report on the various links the Italian astrophysical VO community has started to lay down at the national level (with sibling communities) and at the European and EOSC levels.
Finally, an overview of the IVOA standards will also be provided.
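To give a concrete flavour of what these standards enable, the sketch below queries a TAP service with the pyvo Python package; the endpoint URL is a placeholder, since any IVOA-compliant TAP service exposing the standard ObsCore table would work:

  import pyvo

  # Connect to a TAP service (placeholder URL) and run an ADQL query
  # against the standard ObsCore table.
  tap = pyvo.dal.TAPService("https://example.org/tap")
  results = tap.search("SELECT TOP 5 obs_id, s_ra, s_dec FROM ivoa.obscore")
  print(results.to_table())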
In the context of INAF's participation as a member of the EOSC Association (EOSC-A), which implements the European Open Science Cloud (EOSC) vision, this contribution describes the activities, under the auspices of INAF USC VIII, aiming at the integration of astrophysics and astronomy into the EOSC federation as a node infrastructure for sharing and reusing all the digital objects, outputs and results of research and innovation projects.
The Gaia mission is nearing the end of its operational life: after 10.5 years of continuous science operations (more than twice the initial lifetime), in January 2025 data taken with the focal plane instruments will no longer be considered in the scientific data stream. However, in-orbit operations will continue for a few more months to test the end-of-life performance of the digital detectors and other satellite subsystems, in an effort to exploit Gaia's exceptional longevity despite its orbiting in the L2 environment.
We are thus entering the post-operations phase and Gaia's legacy era, i.e., the transition from data reduction and analysis to data management and exploitation. This presentation therefore addresses the work that Team INAF, with the support of Team ALTEC and of more recent initiatives funded with PNRR resources, is carrying out at the Data Processing facility (DPCT@ALTEC) to face the challenges of the mission legacy, as advocated and prototyped in the framework of The Living Sky Project (WP4 of the MITIC special projects funded by MUR, 2018-2022). The computational and infrastructural aspects implied by the deep exploitation of the first Big Data system of the faint astronomical sky built solely from space-borne data will be illustrated, along with the progress made so far.
Finally, we show how this unique, world-class HTC and HPC facility can deal with INAF's growing computational complexity and is ready to take on more space data (e.g., Euclid, LISA, ...) and exploit it on a variety of outstanding problems in observational physics and the physics of space instrumentation pursued by our scientific communities.
ASTRI Mini-Array On-Site Information and Communication Technology infrastructure
F. Gianotti(a), I. Abu(a), M. Lodi(f), A. Tacchini(a), G. Malaspina(c), M. De Benedetto(f), F. Fiordoliva(e), M. Costi(g), G. Mancini(g), D. Gregori(g), M. Pinetti(g), M. Sardo(g), D. Fuzzati(g), M. Rosi(g), P. Bruno(b), A. Bulgarelli(a), L. Castaldini(a), V. Conforti(a), A. Costa(b), V. Fioretti(a), S. Gallozzi(e), C. Grivel(f), F. Incardona(b), G. Leto(b), F. Lucarelli(e), K. Munari(b), N. Parmiggiani(a), V. Pastore(a), F. Russo(a), S. Scuderi(c), G. Tosti(d), M. Trifoglio(a), for the ASTRI Project(h)
(a) INAF, OAS-Bologna, Via Piero Gobetti 93/3, Bologna, Italy
(b) INAF - Osservatorio di Astrofisica di Catania, Via S. Sofia 78, Catania, Italy
(c) INAF - Osservatorio Astronomico di Brera, Via Bianchi 46, Merate, Italy
(d) UNIPG - Università di Perugia, Dip. di Fisica e Geologia, Via A. Pascoli, Perugia, Italy
(e) INAF - Osservatorio Astronomico di Roma, Via Frascati 33, M. Porzio Catone (RM), Italy
(f) Fundación Galileo Galilei – INAF
(g) E4 Company SPA, Via Martiri della Libertà 66, 42019 Scandiano (RE), Italy
(h) http://www.astri.inaf.it/en/about/persone/
fulvio.gianotti@inaf.it
ABSTRACT:
The ASTRI ("Astrofisica con Specchi a Tecnologia Replicante Italiana") is a collaborative international effort led by the Italian National Institute for Astrophysics (INAF) for developing an array of nine 4m-class dual-mirror imaging atmospheric Cherenkov telescopes (IACTs)sensitive to gamma-ray radiation at energies above 1 TeV. The array is placed at the Teide Observatory in Tenerife, in the Canary Islands. In order to support the development, installation, and operations of the ASTRI Mini-Array, an on-site Information and Communication Technology (ICT) Infrastructure has been designed.
This presentation describes the design of this ICT infrastructure, which includes various subsystems dedicated primarily to hosting the Supervisory Control And Data Acquisition (SCADA) software, whose aim is to control and monitor the array of telescopes and to perform data acquisition and data quality control.
For each subsystem, the most suitable technology was chosen. A dedicated virtualization system based on ProxMox was implemented for telescope control, to ensure easy control and management combined with high reliability and continuity of service. To guarantee a throughput of tens of MB/s, the data acquisition and dispatch operations run on bare metal on the camera and frontier servers, combined with a dedicated BeeGFS-based storage system that provides the necessary performance and a distributed, shared and concurrent filesystem.
The high performance of the online data quality control and of the Monitoring System is guaranteed by a Kubernetes-based approach, which also improves automation, scaling and deployment (a minimal manifest of the kind involved is sketched below).
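For illustration only, a stateless service of this kind could be deployed with a Kubernetes manifest like the following; the names and image are invented, not the actual ASTRI configuration:

  # Hypothetical Deployment for a stateless data-quality service.
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: dq-service
  spec:
    replicas: 3                  # redundant copies support the no-SPOF design
    selector:
      matchLabels:
        app: dq-service
    template:
      metadata:
        labels:
          app: dq-service
      spec:
        containers:
        - name: dq-service
          image: registry.example/dq-service:1.0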
These subsystems and the ASTRI telescopes are interconnected by a high-performance network, so special attention has been paid to the network topology to ensure both reliability and data-transfer throughput, both in the local network and in the transmission to the remote archive facility in Rome, where the data are transferred as soon as they are available.
The entire ICT infrastructure was engineered to have no Single Point Of Failure (SPOF) and to ensure high availability, because no one will be dedicated to its on-site maintenance at Teide, particularly during the night. All the most critical systems have therefore been designed in hot redundancy, that is, capable of withstanding a failure without service interruption.
Summary of Abstract
The ASTRI (Astrofisica con Specchi a Tecnologia Replicante Italiana) program, led by the Italian National Institute for Astrophysics (INAF), is developing nine 4-meter Imaging Atmospheric Cherenkov Telescopes (IACTs) to detect gamma-ray radiation above 1 TeV at the Teide Observatory in the Canary Islands. To support this project, an on-site Information and Communication Technology (ICT) Infrastructure has been designed.
This presentation describes the design of this ICT infrastructure, which includes various subsystems dedicated primarily to hosting the Supervisory Control And Data Acquisition (SCADA) software.
The ICT architecture was divided into subsystems dedicated to specific functions: telescope control; acquisition and storage; data quality control; fast transmission to data archiving; and monitoring of the entire Observatory.
All these ICT components are interconnected, so special attention was paid to the network topology to ensure the necessary throughput and reliability of the connections.
Final Remarks