"PanCollection" for Remote Sensing Pansharpening (now available on PyPI 🎉)
This repository is the official PyTorch implementation of “A Survey of Progress in CNN-Based Remote Sensing Image Pansharpening and the Release of Related Datasets” (paper, homepage).
- Release PanCollection, a collection of pansharpening training/test datasets for related satellites (WorldView-3, QuickBird, GaoFen-2, and WorldView-2);
- Release Python code built on a unified PyTorch framework, so that later researchers can reuse it easily;
- Release a unified pansharpening framework covering traditional and deep learning methods (including a MATLAB test software package, see link), so that later researchers can run fair comparisons;
- Make the package available on PyPI;
- Add a Colab demo.
See the repo for more detailed descriptions.
See the PanCollection Paper for early results.
Recommendations
We recommend using the code toolbox DLPan-Toolbox together with the PanCollection datasets for fair training and testing!
Datasets (Reduced- and Full-Resolution)

| Satellite | Max. pixel value | Usage |
|---|---|---|
| WorldView-3 | 2047 | Training; Testing; Generalization |
| QuickBird | 2047 | Training; Testing |
| GaoFen-2 | 1023 | Training; Testing |
| WorldView-2 | 2047 | Training; Testing; Generalization |
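These maximum values are commonly used to scale the raw imagery into [0, 1] before training. The sketch below is only an illustration of that convention; the `normalize` helper and the short satellite keys are not part of the PanCollection API, and the bundled loaders may already handle scaling internally.

```python
import numpy as np

# Illustrative only: map each satellite to the maximum pixel value from the table
# above and scale images into [0, 1]. Check the configs before normalizing twice.
MAX_VALUE = {"wv3": 2047.0, "qb": 2047.0, "gf2": 1023.0, "wv2": 2047.0}

def normalize(img: np.ndarray, satellite: str) -> np.ndarray:
    return img.astype(np.float32) / MAX_VALUE[satellite]
```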
Requirements
- Python 3.7+, PyTorch >= 1.9.0
- NVIDIA GPU + CUDA
- The project is based on UDL.
Quick Start
```python
import pancollection as pan

# build the config for the FusionNet architecture on the GaoFen-2 dataset
cfg = pan.TaskDispatcher.new(task='pansharpening', mode='entrypoint', arch='FusionNet',
                             dataset_name="gf2", use_resume=False,
                             dataset={'train': 'gf2', 'test': 'test_gf2_multiExm1.h5'})
print(pan.TaskDispatcher._task)
pan.trainer.main(cfg, pan.build_model, pan.getDataSession)
```
Quick Start for Developers
Step 0. Set up your Python environment, then install the package with either

```bash
python setup.py develop
```

or

```bash
pip install pancollection -i https://pypi.org/simple
```
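To confirm which copy of the package is picked up (your develop checkout versus a PyPI install), a quick sanity check that assumes nothing beyond the package importing cleanly:

```python
# prints the location of the installed pancollection package
import pancollection as pan
print(pan.__file__)
```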
Step 1.
- Download the datasets (WorldView-3, QuickBird, GaoFen-2, WorldView-2) from the homepage and arrange them in the layout below.
- Verify the dataset path in `PanCollection/UDL/Basis/option.py`, or print the output of `run_pansharpening.py`, then set `cfg.data_dir` to your dataset path.
```
|-$ROOT/Datasets
├── pansharpening
│   ├── training_data
│   │   ├── train_wv3.h5
│   │   ├── ...
│   ├── validation_data
│   │   ├── valid_wv3.h5
│   │   ├── ...
│   ├── test_data
│   │   ├── WV3
│   │   │   ├── test_wv3_multiExm.h5
│   │   │   ├── ...
```
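With the files in place, the data root from Step 1 can also be set directly on the config instead of editing `option.py`. This is a minimal sketch, assuming the entrypoint accepts the same keyword arguments as the Quick Start example and that `cfg.data_dir` expects the dataset root shown above:

```python
import pancollection as pan

cfg = pan.TaskDispatcher.new(task='pansharpening', mode='entrypoint', arch='FusionNet',
                             dataset_name="wv3", use_resume=False,
                             dataset={'train': 'wv3', 'test': 'test_wv3_multiExm.h5'})
# Point the config at your dataset root; whether this should be $ROOT/Datasets or
# $ROOT/Datasets/pansharpening depends on option.py, so check the paths printed by
# run_pansharpening.py if loading fails.
cfg.data_dir = "/path/to/Datasets"
pan.trainer.main(cfg, pan.build_model, pan.getDataSession)
```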
Step 2. Open `PanCollection/UDL/pansharpening` and run the following command:

```bash
python run_pansharpening.py
```
Step 3. How to train/validate the code.
- A training example: run `run_pansharpening.py` with `arch='BDPN'`, and in `configs/option_bdpn.py` set:
  `cfg.eval = False`, `cfg.workflow = [('train', 50), ('valid', 1)]`, `cfg.dataset = {'train': 'wv3', 'valid': 'wv3_multiExm.h5'}`
- A test example: run `run_test_pansharpening.py` with `cfg.eval = True` or `cfg.workflow = [('test', 1)]`.
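These switches can also be set programmatically rather than by editing the option file. The following is a minimal sketch, assuming the entrypoint accepts the same keyword arguments as the Quick Start example and that the workflow tuples follow the (phase, epochs) convention shown above:

```python
import pancollection as pan

cfg = pan.TaskDispatcher.new(task='pansharpening', mode='entrypoint', arch='BDPN',
                             dataset_name="wv3", use_resume=False,
                             dataset={'train': 'wv3', 'valid': 'wv3_multiExm.h5'})
cfg.eval = False                              # True (or a [('test', 1)] workflow) runs testing only
cfg.workflow = [('train', 50), ('valid', 1)]  # 50 training epochs, then one validation pass
pan.trainer.main(cfg, pan.build_model, pan.getDataSession)
```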
Step 4. How to customize your model.

```python
def run_demo():
    import os
    # pin the run to GPU 0
    os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
    os.environ["CUDA_VISIBLE_DEVICES"] = "0"

    from <your_model_file> import build
    from pancollection.configs.configs import TaskDispatcher
    from udl_vis.AutoDL.trainer import main
    from pancollection.common.builder import build_model, getDataSession
    import option_<your_model_file>  # please refer to https://github.com/XiaoXiao-Woo/PanCollection/blob/main/pancollection/configs/option_fusionnet.py

    cfg = TaskDispatcher.new(task='pansharpening', mode='entrypoint', arch='<your_model_name>',
                             data_dir='<your_data_path>',
                             workflow=[('train', 10), ('valid', 1), ('test', 1)],
                             resume_from=r"", experimental_desc="Test")
    print(TaskDispatcher._task.keys())
    main(cfg, build_model, getDataSession)

if __name__ == '__main__':
    run_demo()
```
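The `from <your_model_file> import build` line above assumes your model file exposes a `build` callable. The exact interface the builder expects is defined by the bundled model files (e.g., `pancollection/models/FusionNet`), so treat the following as a hypothetical illustration of the network portion only, with placeholder names:

```python
import torch
import torch.nn as nn

class MyPanNet(nn.Module):
    """Hypothetical pansharpening network: fuses the upsampled low-resolution
    multispectral image (lms) with the panchromatic image (pan)."""
    def __init__(self, ms_channels: int = 8, channels: int = 32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ms_channels + 1, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, ms_channels, 3, padding=1),
        )

    def forward(self, pan: torch.Tensor, lms: torch.Tensor) -> torch.Tensor:
        # predict a high-frequency residual and add it to the upsampled MS image
        return lms + self.body(torch.cat([pan, lms], dim=1))

def build(cfg=None):
    # placeholder builder; mirror the signature used by the bundled models
    # (e.g., model_fusionnet.py) when integrating with PanCollection's trainer.
    return MyPanNet()
```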
Step 5. How to customize the code.
A model is split into three parts:
- Record hyperparameter configurations in `pancollection/configs/option_<modelName>.py`. For example, you can load a pretrained model with `cfg.resume_from = "your_model_path"`.
- Set up the model, loss, optimizer, and scheduler in `pancollection/models/<modelName>_main.py`.
- Write the new network in `pancollection/models/<modelName>/model_<modelName>.py`.

Note that when you add a new model to PanCollection, you also need to update `pancollection/models/__init__.py` and add `option_<modelName>.py`.
Others
- If you want to add customized datasets, update `pancollection/common/psdata.py`.
- If you want to add customized tasks, you need to:
  1. Put `model_<newModelName>` and `<newModelName>_main` in `pancollection/models`.
  2. Add `option_<newModelName>` to the `pancollection/configs` folder.
  3. Add a class in `pancollection/models/base_model.py`, like this:
     `class PanSharpeningModel(ModelDispatcher, name='pansharpening'):`
- If you want to add customized training settings, such as saving models or recording logs, update `udl-vis/mmcv/mmcv/runner/hooks`. Note: don't put model-, dataset-, or task-related files into the AutoDL folder.
- If you want to know more about how the runner in `udl-vis/AutoDL/trainer.py` trains/tests, see `udl-vis/mmcv/runner/epoch_based_runner.py`.
Plans
- Integrated into Hugging Face Spaces 🤗 using Gradio. Try out the web demo: Hugging Face Spaces
- Support more models
- Make a leaderboard for model results.
Contribution
We appreciate all contributions to improving PanCollection and look forward to yours.
Citation
Please cite this project if you use datasets or the toolbox in your research.
```bibtex
@misc{PanCollection,
  author = {Xiao Wu and Liang-Jian Deng and Ran Ran},
  title = {"PanCollection" for Remote Sensing Pansharpening},
  url = {https://github.com/XiaoXiao-Woo/PanCollection/},
  year = {2022},
}

@article{dengjig2022,
  author = {Liang-Jian Deng and Ran Ran and Xiao Wu and Tian-Jing Zhang},
  journal = {Journal of Image and Graphics},
  title = {Research Progress of Convolutional Neural Network Methods for Pansharpening of Remote Sensing Images},
  year = {2022},
  volume = {},
  number = {9},
  pages = {},
  doi = {10.11834/jig.220540}
}

@article{deng2022grsm,
  author = {L.-J. Deng and G. Vivone and M. E. Paoletti and G. Scarpa and J. He and Y. Zhang and J. Chanussot and A. Plaza},
  journal = {IEEE Geoscience and Remote Sensing Magazine},
  title = {Machine Learning in Pansharpening: A Benchmark, from Shallow to Deep Networks},
  year = {2022},
  pages = {2-38},
  doi = {10.1109/MGRS.2020.3019315}
}
```
Acknowledgement
- MMCV: OpenMMLab foundational library for computer vision.
License & Copyright
This project is open-sourced under the GNU General Public License v3.0.