Preprint, 2024

OmniSat: Self-Supervised Modality Fusion for Earth Observation

Abstract

The field of Earth Observation (EO) offers a wealth of data from diverse sensors, presenting a great opportunity for advances in self-supervised multimodal learning. However, current multimodal EO datasets and models typically focus on a single data type, either mono-date images or time series, which limits their expressivity. We introduce OmniSat, a novel architecture that exploits the spatial alignment between multiple EO modalities to learn expressive multimodal representations without labels. To demonstrate the advantages of combining modalities of different natures, we augment two existing datasets with new modalities. As demonstrated on three downstream tasks (forestry, land cover classification, and crop mapping), OmniSat can learn rich representations in an unsupervised manner, leading to improved performance in the semi- and fully supervised settings, even when only one modality is available for inference. The code and dataset are available at github.com/gastruc/OmniSat.
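The abstract does not spell out the training objective, so the following is only an illustrative sketch of the spatial-alignment idea it mentions: two modality-specific encoders embed spatially aligned patches into a shared space, and an InfoNCE-style contrastive loss pulls embeddings of the same patch together across modalities. All module names, dimensions, and the choice of loss are assumptions for illustration, not the paper's actual implementation.

```python
# Minimal sketch (NOT the OmniSat implementation): cross-modal contrastive
# alignment between spatially aligned patch embeddings from two modalities.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PatchEncoder(nn.Module):
    """Hypothetical per-modality encoder mapping patch features
    into a shared, L2-normalized embedding space."""

    def __init__(self, in_dim: int, embed_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, embed_dim)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return F.normalize(self.net(x), dim=-1)


def cross_modal_contrastive_loss(z_a, z_b, temperature: float = 0.07):
    """InfoNCE-style loss: the two embeddings of the same spatial patch
    are positives (diagonal of the similarity matrix); every other patch
    in the batch acts as a negative."""
    logits = z_a @ z_b.t() / temperature      # (N, N) similarity matrix
    targets = torch.arange(z_a.size(0))       # matched pairs on the diagonal
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))


# Usage: two modalities observed over the same 32 patches, e.g. an aerial
# image descriptor and a pooled satellite time-series descriptor
# (feature dimensions below are made up for the example).
enc_aerial = PatchEncoder(in_dim=768)
enc_series = PatchEncoder(in_dim=256)
x_aerial = torch.randn(32, 768)
x_series = torch.randn(32, 256)
loss = cross_modal_contrastive_loss(enc_aerial(x_aerial), enc_series(x_series))
loss.backward()
```

Because the supervision signal comes only from the spatial co-registration of the modalities, a setup like this needs no labels, which is consistent with the unsupervised pretraining described in the abstract.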
Main file: 2404.08351.pdf (7.93 MB)
Origin: files produced by the author(s)

Dates and versions

hal-04556598, version 1 (23-04-2024)

Identifiers

HAL Id: hal-04556598
arXiv: 2404.08351

Cite

Guillaume Astruc, Nicolas Gonthier, Clément Mallet, Loïc Landrieu. OmniSat: Self-Supervised Modality Fusion for Earth Observation. 2024. ⟨hal-04556598⟩