Xiaoyan LU and Qihao WENG
[Paper]
Each dataset contains 17,676 training samples and 395 test samples, all at a spatial resolution of 10 m with an image size of 510 × 510 pixels. Since the Image Translation and Semantic Segmentation datasets cover the same geographic regions, we jointly refer to them as IT-SS-18K, where “18K” denotes the approximately 18,000 samples in the combined training and test sets.
The IT-SS-18K datasets are released on Quark Drive (access code: 1wTx).
- Train a SAR-to-optical translation model using the 17,676 paired stretched optical and SAR images.
- Apply the trained translation model to the SAR images from the training set (17,676) and the test set (395) to generate synthesized optical images.
- Train a degradable fusion model using the 17,676 synthesized optical images, the corresponding real SAR images, and their associated semantic labels.
- Feed the 395 synthesized optical images together with the corresponding real SAR images into the fusion model to obtain the final segmentation results.
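The four steps above can be sketched as the following toy pipeline. All function names and model internals are placeholders (simple NumPy stand-ins, not the paper's actual translation or fusion networks), and the tiny arrays stand in for the 510 × 510, 10 m resolution tiles:

```python
import numpy as np

# Hypothetical sketch of the four-stage workflow; every model here is a
# trivial placeholder, not the method's real networks.

def train_translation(sar_imgs, opt_imgs):
    # Step 1: fit a SAR-to-optical translation model on paired images.
    # Placeholder: a single global gain estimated from the pairs.
    gain = opt_imgs.mean() / max(sar_imgs.mean(), 1e-8)
    return lambda sar: gain * sar

def synthesize(translate, sar_imgs):
    # Step 2: apply the trained translation model to SAR images
    # (both the training and test splits) to get synthesized optical.
    return translate(sar_imgs)

def train_fusion(synth_opt, sar_imgs, labels):
    # Step 3: train a fusion model on synthesized optical images,
    # real SAR images, and semantic labels. Placeholder: a threshold
    # on the mean of the two modalities.
    thresh = np.mean((synth_opt + sar_imgs) / 2)
    return lambda o, s: (((o + s) / 2) > thresh).astype(np.uint8)

def segment(fuse, synth_opt, sar_imgs):
    # Step 4: fuse test-set synthesized optical + real SAR images
    # to produce the final segmentation map.
    return fuse(synth_opt, sar_imgs)

# Tiny fake dataset standing in for the paired tiles.
rng = np.random.default_rng(0)
sar_train = rng.random((4, 8, 8))
opt_train = 2.0 * sar_train                      # toy paired optical images
labels = (opt_train > 1.0).astype(np.uint8)      # toy semantic labels
sar_test = rng.random((2, 8, 8))

translate = train_translation(sar_train, opt_train)
synth_train = synthesize(translate, sar_train)
synth_test = synthesize(translate, sar_test)
fuse = train_fusion(synth_train, sar_train, labels)
pred = segment(fuse, synth_test, sar_test)
assert pred.shape == sar_test.shape
```

The segmentation output has the same spatial shape as the input SAR tiles, with one class index per pixel; in the real pipeline, the fusion model is trained on the 17,676-sample split and inference is run on the 395-sample split.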