
Pretrained weights - how to preprocess Sentinel-1 images #6

Open
Multihuntr opened this issue Feb 12, 2024 · 3 comments

@Multihuntr

Multihuntr commented Feb 12, 2024

To use the pretrained models, we need to preprocess Sentinel-1 images identically to how the KuroSiwo dataset was produced; otherwise we risk degraded performance from subtle shifts in the data distribution.

In the paper, Section 3 describes it as:

[Using the Sentinel Application Platform (SNAP) to apply] precise orbit application, removal of thermal and border noise, land and sea masking, calibration, speckle filtering, and terrain correction using an external digital elevation model.

Could you provide a script or configuration file that would allow others to exactly replicate this preprocessing on other Sentinel-1 data?

P.S. Sorry for raising so many issues. I'm just excited to use the model.

@Multihuntr
Author

I have figured out a preprocessing graph for SNAP that seems to work well. Visually, its output is mostly within resampling differences of the provided data; the only real difference is that the provided preprocessed images have more bright-spot artifacts than images produced with this graph.

```xml
<graph id="KuroSiwoPreprocessingGraph">
    <version>1.0</version>
    <node id="OrbitApplied">
        <operator>Apply-Orbit-File</operator>
        <sources>
            <sourceProduct>${source}</sourceProduct>
        </sources>
        <parameters>
            <polyDegree>2</polyDegree>
        </parameters>
    </node>
    <node id="Calibrated">
        <operator>Calibration</operator>
        <sources>
            <source>OrbitApplied</source>
        </sources>
        <parameters>
            <sourceBands>Intensity_VV,Intensity_VH</sourceBands>
            <selectedPolarisations>VV,VH</selectedPolarisations>
        </parameters>
    </node>
    <node id="Filtered">
        <operator>Speckle-Filter</operator>
        <sources>
            <source>Calibrated</source>
        </sources>
        <parameters>
            <filter>Lee Sigma</filter>
            <filterSizeX>5</filterSizeX>
            <filterSizeY>5</filterSizeY>
            <sigmaStr>0.9</sigmaStr>
        </parameters>
    </node>
    <node id="Terrain">
        <operator>Terrain-Correction</operator>
        <sources>
            <source>Filtered</source>
        </sources>
        <parameters>
            <demName>SRTM 1Sec HGT</demName>
            <pixelSpacingInMeter>10.0</pixelSpacingInMeter>
            <sourceBands>Sigma0_VV,Sigma0_VH</sourceBands>
            <mapProjection>EPSG:3857</mapProjection>
        </parameters>
    </node>
</graph>
```

You can run it with SNAP's `gpt`. I used `gpt graph.xml -c 12G -Ssource=/path/to/zip -t /path/to/output.dim`. I found that exporting to `.dim` was fast, but exporting to `.tif` was slow.
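For many scenes, the single-scene command above can be wrapped in a small driver loop. This is only a sketch: the `data/` and `out/` directories and the `graph.xml` filename are placeholders, not from the original comment, and the `echo` makes it a dry run you can inspect before executing.

```shell
#!/bin/sh
# Hypothetical batch driver for SNAP's gpt (dry run: echoes the commands).
# Delete the "echo" to actually run them; adjust paths to your layout.
GRAPH=graph.xml    # the preprocessing graph above, saved to disk
mkdir -p out
for zip in data/*.zip; do
  [ -e "$zip" ] || continue           # skip if the glob matched nothing
  name=$(basename "$zip" .zip)
  # -c 12G caps the cache; -Ssource binds the ${source} graph placeholder
  echo gpt "$GRAPH" -c 12G -Ssource="$zip" -t "out/$name.dim"
done
```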

I have also validated that S1 images created with this graph give extremely similar results from the pretrained models across one site, but the validation code is too involved to share here.
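A lightweight version of such a check could compare the two preprocessed rasters pixel-wise once both are loaded as arrays. The sketch below uses NumPy with synthetic data standing in for real tiles; the function name, the 1 dB tolerance, and the noise model are illustrative assumptions, not the author's actual code.

```python
import numpy as np

def compare_backscatter(a: np.ndarray, b: np.ndarray, tol_db: float = 1.0):
    """Compare two co-registered backscatter arrays (in dB).

    Returns the mean absolute difference and the fraction of pixels
    agreeing within `tol_db`. NaN pixels (e.g. nodata) are ignored.
    """
    diff = np.abs(a - b)
    valid = ~np.isnan(diff)
    mad = float(diff[valid].mean())
    agree = float((diff[valid] <= tol_db).mean())
    return mad, agree

# Illustrative use: synthetic tiles standing in for provided vs. reproduced data.
rng = np.random.default_rng(0)
ref = rng.normal(-15.0, 3.0, size=(256, 256))       # "provided" tile, dB
test = ref + rng.normal(0.0, 0.2, size=ref.shape)   # "reproduced" tile
mad, agree = compare_backscatter(ref, test)
print(f"mean |diff| = {mad:.3f} dB, within 1 dB: {agree:.1%}")
```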

@ngbountos
Member

Hi @Multihuntr,

Sorry for the late reply.

We have updated Kuro Siwo to include more events outside of Europe and the respective raw SLC products (with minimal preprocessing).

You can find the updated preprocessing scripts used to generate both the GRD and SLC products in `configs/grd_preprocessing.xml` and `configs/slc_preprocessing.xml`.

Feel free to check the updated paper for more information.

Nikos.

@Multihuntr
Author

Hi @ngbountos, thanks for the reply. I think those changes address this issue.

I also wanted to ask whether, and when, you plan to release the pretrained weights for the new models trained for the updated paper. And once they are released: which preprocessing was used on the S1 images to train them, `configs/grd_preprocessing.xml` or `configs/slc_preprocessing.xml`?
