AI and complex networks in multimodal X-ray astrophysics

Astrophysical observations are described by multimodal data such as light curves, spectra, and scientific texts. Extracting compact representations from these data helps identify meaningful patterns and interpret their physical origin. This talk focuses on three works that use deep learning and complex networks to achieve these goals. The first shows how autoencoders can compress X-ray spectra into a low-dimensional latent space while preserving physical information. The second shows that light curves can be aligned with their textual summaries through contrastive learning, yielding a compact representation shared by both modalities. Finally, the third shows that complex networks can capture patterns in multimodal data and reveal anomalies, such as rare astrophysical sources.
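To make the first approach concrete, the sketch below shows a toy autoencoder that compresses a binned X-ray spectrum into a low-dimensional latent vector and is trained to reconstruct it. It is only a minimal illustration of the technique, not the architecture of the original work: the bin count (1024), latent size (8), and layer widths are assumptions chosen for readability.

```python
import torch
import torch.nn as nn

class SpectrumAutoencoder(nn.Module):
    """Toy autoencoder: binned X-ray spectrum -> compact latent code -> reconstruction."""

    def __init__(self, n_bins: int = 1024, latent_dim: int = 8):
        super().__init__()
        # Encoder maps the spectrum to a low-dimensional latent vector.
        self.encoder = nn.Sequential(
            nn.Linear(n_bins, 256), nn.ReLU(),
            nn.Linear(256, 64), nn.ReLU(),
            nn.Linear(64, latent_dim),
        )
        # Decoder reconstructs the spectrum from the latent vector.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.ReLU(),
            nn.Linear(64, 256), nn.ReLU(),
            nn.Linear(256, n_bins),
        )

    def forward(self, x: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
        z = self.encoder(x)           # compact latent representation
        return self.decoder(z), z     # reconstruction and latent code

# One training step on a placeholder batch of normalised spectra.
model = SpectrumAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
spectra = torch.rand(32, 1024)        # stand-in data, not real observations
reconstruction, latent = model(spectra)
loss = nn.functional.mse_loss(reconstruction, spectra)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

Reconstruction error alone drives the compression; whether the resulting latent space preserves physical information can then be checked, for example by inspecting or clustering the latent vectors against known source properties.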