Merge pull request #7 from jangsoopark/v2.1.0
V2.1.0
jangsoopark authored Jul 30, 2021
2 parents 4d9416e + 4fdf008 commit f4d8d9f
Showing 15 changed files with 256 additions and 59 deletions.
39 changes: 25 additions & 14 deletions README.md
@@ -41,8 +41,7 @@ The proposed model only consists of **sparsely connected layers** without any fu
 ## Training
 For training, this implementation fixes the random seed to `12321` for `reproducibility`.
 
-The experimental conditions are same as in the paper, except for `data augmentation` and `learning rate`.
-The `learning rate` is initialized with `1e-3` and decreased by a factor of 0.1 **after 26 epochs**.
+The experimental conditions are the same as in the paper, except for `data augmentation`.
 You can see the details in `src/model/_base.py` and `experiments/config/AConvNet-SOC.json`

### Data Augmentation
@@ -52,10 +51,9 @@ You can see the details in `src/model/_base.py` and `experiments/config/AConvNet-SOC.json`

 - However, for SOC, this repository does not use random shifting due to accuracy issues.
 - You can see the details in `src/data/generate_dataset.py` and `src/data/mstar.py`
-- This implementation failed to achieve higher than 98% accuracy when using random sampling.
 - The implementation details for data augmentation are as follows (see the sketch below):
-  - Crop the center of 94 x 94 size image on 128 x 128 SAR image chip (49 patches per image chip).
-  - Extract 88 x 88 patches with stride 1 from 94 x 94 image.
+  - Crop the center 94 x 94 region from the 100 x 100 SAR image chip (49 patches per image chip).
+  - Extract 88 x 88 patches with stride 1 from the 94 x 94 image with random cropping.


## Experiments
@@ -141,14 +139,14 @@ MSTAR-PublicMixedTargets-CD1/MSTAR_PUBLIC_MIXED_TARGETS_CD1
 - Place the two directories (`train` and `test`) in `dataset/raw`.
 ```shell
 $ cd src/data
-$ python3 generate_dataset.py --is_train=True --use_phase=True --chip_size=94 --dataset=soc
-$ python3 generate_dataset.py --is_train=False --use_phase=True --dataset=soc
+$ python3 generate_dataset.py --is_train=True --use_phase=True --chip_size=100 --patch_size=94 --dataset=soc
+$ python3 generate_dataset.py --is_train=False --use_phase=True --chip_size=128 --patch_size=128 --dataset=soc
 $ cd ..
-$ python3 train.py
+$ python3 train.py --config_name=config/AConvNet-SOC.json
 ```
 
 #### Results of SOC
-- Final Accuracy is **99.18%** (The official accuracy is 99.13%)
+- Final accuracy is **99.13%** at epoch 26 (the official accuracy is 99.13%)
 - You can see the details in `notebook/experiments-SOC.ipynb`

- Visualization of training loss and test accuracy
@@ -165,10 +163,10 @@

 | Noise | 1% | 5% | 10% | 15% |
 | :---: | :---: | :---: | :---: | :---: |
-| AConvNet-PyTorch | 98.56 | 94.39 | 85.03 | 73.65 |
+| AConvNet-PyTorch | 98.60 | 95.18 | 85.36 | 73.24 |
 | AConvNet-Official | 91.76 | 88.52 | 75.84 | 54.68 |
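As a reference for the noise columns above: the paper's robustness test replaces a given percentage of pixels with random noise. A rough standalone sketch, assuming uniform noise over the image's intensity range (the exact protocol used here is in `notebook/experiments-SOC.ipynb`):

```python
import numpy as np


def corrupt(image, ratio):
    # Replace `ratio` of the pixels with uniform noise over the intensity range.
    noisy = image.copy()
    n = int(noisy.size * ratio)
    idx = np.random.choice(noisy.size, size=n, replace=False)
    noisy.flat[idx] = np.random.uniform(image.min(), image.max(), size=n)
    return noisy


image = np.random.rand(88, 88).astype(np.float32)
noisy_levels = {p: corrupt(image, p / 100) for p in (1, 5, 10, 15)}
```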


<!--
### Extended Operating Conditions (EOC)
#### EOC-1 (Large depression angle change)
@@ -216,15 +214,28 @@ MSTAR-PublicMixedTargets-CD2/MSTAR_PUBLIC_MIXED_TARGETS_CD2
└ ...
```
- Train Target: 2S1, BRDM2, T72, ZSU234 with depression angle 17$\degree$
- Test Target: 2S1, BRDM2, T72, ZSU234 with depression angle 30$\degree$
#### Quick Start Guide for Training
- Dataset Preparation
- Download the [soc-dataset.zip](https://github.com/jangsoopark/AConvNet-pytorch/releases/download/V2.0.0/soc-raw.zip)
- After extracting it, you can find the `train` and `test` directories inside the `raw` directory.
- Place the two directories (`train` and `test`) in `dataset/raw`.
```shell
$ cd src/data
$ python3 generate_dataset.py --is_train=True --use_phase=True --chip_size=96 --dataset=eoc-1
$ python3 generate_dataset.py --is_train=False --use_phase=True --dataset=soc
$ cd ..
$ python3 train.py --config_name=config/AConvNet-EOC-1.json
```
#### EOC-2 (Target configuration and version variants)
### Outlier Rejection
### End-to-End SAR-ATR Cases

-->
## Details about the specific environment of this repository

| | |
Binary file modified assets/figure/001.png
Binary file added assets/figure/2S1.png
Binary file modified assets/figure/soc-confusion-matrix.png
Binary file modified assets/figure/soc-training-plot.png
6 changes: 3 additions & 3 deletions experiments/config/AConvNet-EOC-1.json
@@ -4,10 +4,10 @@
"num_classes": 4,
"channels": 2,
"batch_size": 100,
"epochs": 50,
"epochs": 100,
"momentum": 0.9,
"lr": 1e-3,
"lr_step": [14],
"lr": 1e-4,
"lr_step": [50],
"lr_decay": 0.1,
"weight_decay": 4e-3,
"dropout_rate": 0.5
4 changes: 2 additions & 2 deletions experiments/config/AConvNet-SOC.json
@@ -4,10 +4,10 @@
"num_classes": 10,
"channels": 2,
"batch_size": 100,
"epochs": 50,
"epochs": 100,
"momentum": 0.9,
"lr": 1e-3,
"lr_step": [26],
"lr_step": [50],
"lr_decay": 0.1,
"weight_decay": 4e-3,
"dropout_rate": 0.5
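For context on these fields: `lr`, `lr_step`, and `lr_decay` map naturally onto a `torch.optim.lr_scheduler.MultiStepLR` schedule, so the SOC run now decays the learning rate by 0.1 at epoch 50 over 100 epochs. A hedged sketch of how such a config could be consumed, assuming an SGD-style optimizer (which the `momentum` field suggests); the real wiring is in `src/model/_base.py`, and the stand-in module below is not the repository's model:

```python
import json

import torch
import torch.nn as nn

with open('experiments/config/AConvNet-SOC.json') as f:
    config = json.load(f)

net = nn.Conv2d(config['channels'], config['num_classes'], kernel_size=3)  # stand-in module
optimizer = torch.optim.SGD(
    net.parameters(),
    lr=config['lr'],
    momentum=config['momentum'],
    weight_decay=config['weight_decay'],
)
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=config['lr_step'], gamma=config['lr_decay']
)

for epoch in range(config['epochs']):
    # ... one epoch of training ...
    scheduler.step()  # lr: 1e-3 -> 1e-4 after the epoch listed in lr_step
```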
51 changes: 27 additions & 24 deletions notebook/experiments-SOC.ipynb

Large diffs are not rendered by default.

156 changes: 156 additions & 0 deletions notebook/target-chip.ipynb

Large diffs are not rendered by default.

3 changes: 2 additions & 1 deletion requirements.txt
@@ -6,4 +6,5 @@ tqdm==4.61.2
 torchvision==0.10.0+cu111
 matplotlib
 scikit-learn
-seaborn
+seaborn
+Pillow
27 changes: 21 additions & 6 deletions src/data/generate_dataset.py
@@ -3,6 +3,7 @@
 from absl import app
 
 from multiprocessing import Pool
+from PIL import Image
 import numpy as np
 
 import json
@@ -13,23 +14,34 @@

 flags.DEFINE_string('image_root', default='dataset', help='')
 flags.DEFINE_string('dataset', default='soc', help='')
-flags.DEFINE_boolean('is_train', default=True, help='')
-flags.DEFINE_boolean('use_phase', default=False, help='')
-flags.DEFINE_integer('chip_size', default=94, help='')
+flags.DEFINE_boolean('is_train', default=False, help='')
+flags.DEFINE_integer('chip_size', default=100, help='')
+flags.DEFINE_integer('patch_size', default=94, help='')
+flags.DEFINE_boolean('use_phase', default=True, help='')
 
 FLAGS = flags.FLAGS
 
 project_root = os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
 
 
-def generate(src_path, dst_path, is_train, use_phase, chip_size, dataset):
+def data_scaling(chip):
+    r = chip.max() - chip.min()
+    return (chip - chip.min()) / r
+
+
+def log_scale(chip):
+    return np.log10(np.abs(chip) + 1)
+
+
+def generate(src_path, dst_path, is_train, chip_size, patch_size, use_phase, dataset):
     if not os.path.exists(src_path):
         return
     if not os.path.exists(dst_path):
         os.makedirs(dst_path, exist_ok=True)
     print(f'Target Name: {os.path.basename(dst_path)}')
 
     _mstar = mstar.MSTAR(
-        name=dataset, is_train=is_train, use_phase=use_phase, chip_size=chip_size, patch_size=88, stride=1
+        name=dataset, is_train=is_train, chip_size=chip_size, patch_size=patch_size, use_phase=use_phase, stride=1
     )
 
     image_list = glob.glob(os.path.join(src_path, '*'))
@@ -40,7 +52,10 @@ def generate(src_path, dst_path, is_train, use_phase, chip_size, dataset):
         name = os.path.splitext(os.path.basename(path))[0]
         with open(os.path.join(dst_path, f'{name}-{i}.json'), mode='w', encoding='utf-8') as f:
             json.dump(label, f, ensure_ascii=False, indent=2)
 
+        # _image = log_scale(_image)
         np.save(os.path.join(dst_path, f'{name}-{i}.npy'), _image)
+        # Image.fromarray(data_scaling(_image)).convert('L').save(os.path.join(dst_path, f'{name}-{i}.bmp'))


def main(_):
@@ -57,7 +72,7 @@ def main(_):
         (
             os.path.join(raw_root, mode, target),
             os.path.join(output_root, target),
-            FLAGS.is_train, FLAGS.use_phase, FLAGS.chip_size, FLAGS.dataset
+            FLAGS.is_train, FLAGS.chip_size, FLAGS.patch_size, FLAGS.use_phase, FLAGS.dataset
         ) for target in mstar.target_name_soc
     ]

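The new `data_scaling` and `log_scale` helpers back the commented-out quick-look export above. Enabling it could look roughly like this standalone sketch; note the explicit scaling to `uint8`, which `Image.fromarray` expects for an 8-bit grayscale image (the in-repo commented line relies on `.convert('L')` instead):

```python
import numpy as np
from PIL import Image


def data_scaling(chip):
    # Min-max normalize to [0, 1], as in generate_dataset.py above.
    r = chip.max() - chip.min()
    return (chip - chip.min()) / r


def log_scale(chip):
    # Log-compress the magnitude image for display.
    return np.log10(np.abs(chip) + 1)


chip = np.abs(np.random.randn(100, 100)).astype(np.float32)  # stand-in magnitude chip
quicklook = (data_scaling(log_scale(chip)) * 255).astype(np.uint8)
Image.fromarray(quicklook, mode='L').save('quicklook.bmp')
```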
2 changes: 1 addition & 1 deletion src/data/loader.py
@@ -1,6 +1,6 @@
 import numpy as np
 
-import torchvision
+from skimage import io
 import torch
 import tqdm

9 changes: 7 additions & 2 deletions src/data/preprocess.py
@@ -35,8 +35,13 @@ def __call__(self, sample):

         h, w, _ = _input.shape
         oh, ow = self.size
-        y = np.random.randint(0, h - oh)
-        x = np.random.randint(0, w - ow)
+
+        dh = h - oh
+        dw = w - ow
+        y = np.random.randint(0, dh) if dh > 0 else 0
+        x = np.random.randint(0, dw) if dw > 0 else 0
+        oh = oh if dh > 0 else h
+        ow = ow if dw > 0 else w
 
         return _input[y: y + oh, x: x + ow, :]

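The guard above exists because `np.random.randint(0, 0)` raises `ValueError`, which the old code hit whenever the requested crop size equaled the input size (as it now can for full-size test chips). A self-contained version of the fixed transform, for reference; the in-repo class may differ in details such as how `size` is stored:

```python
import numpy as np


class RandomCrop:
    def __init__(self, size):
        self.size = (size, size) if isinstance(size, int) else size

    def __call__(self, _input):
        h, w, _ = _input.shape
        oh, ow = self.size

        dh = h - oh
        dw = w - ow
        # np.random.randint(0, 0) raises, so fall back to a no-op crop
        y = np.random.randint(0, dh) if dh > 0 else 0
        x = np.random.randint(0, dw) if dw > 0 else 0
        oh = oh if dh > 0 else h
        ow = ow if dw > 0 else w

        return _input[y: y + oh, x: x + ow, :]


patch = RandomCrop(88)(np.zeros((94, 94, 2), dtype=np.float32))  # -> (88, 88, 2)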
4 changes: 2 additions & 2 deletions src/model/network.py
@@ -12,7 +12,7 @@ def __init__(self, **params):
         self.classes = params.get('classes', 10)
         self.channels = params.get('channels', 1)
 
-        _w_init = params.get('w_init', lambda x: nn.init.kaiming_uniform_(x, nonlinearity='relu'))
+        _w_init = params.get('w_init', lambda x: nn.init.kaiming_normal_(x, nonlinearity='relu'))
         _b_init = params.get('b_init', lambda x: nn.init.constant_(x, 0.1))
 
         self._layer = nn.Sequential(
@@ -34,7 +34,7 @@ def __init__(self, **params):
             ),
             nn.Dropout(p=self.dropout_rate),
             _blocks.Conv2DBlock(
-                shape=[3, 3, 128, self.classes], stride=3, padding='valid',
+                shape=[3, 3, 128, self.classes], stride=1, padding='valid',
                 w_init=_w_init, b_init=nn.init.zeros_
             ),
             nn.Flatten()
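With a 3 x 3 input feature map, `stride=3` and `stride=1` happen to give the same 1 x 1 output, so for 88 x 88 inputs this is a correctness cleanup rather than a behavior change. A quick shape check with plain `nn.Conv2d` layers standing in for `Conv2DBlock` (layer sizes follow the original AConvNet paper; this is not the repository's actual module):

```python
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Conv2d(2, 16, 5), nn.ReLU(), nn.MaxPool2d(2),   # 88 -> 84 -> 42
    nn.Conv2d(16, 32, 5), nn.ReLU(), nn.MaxPool2d(2),  # 42 -> 38 -> 19
    nn.Conv2d(32, 64, 6), nn.ReLU(), nn.MaxPool2d(2),  # 19 -> 14 -> 7
    nn.Conv2d(64, 128, 5), nn.ReLU(),                  # 7 -> 3
    nn.Dropout(p=0.5),
    nn.Conv2d(128, 10, 3, stride=1),                   # 3 -> 1
    nn.Flatten(),
)

print(net(torch.zeros(1, 2, 88, 88)).shape)  # torch.Size([1, 10])
```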
14 changes: 10 additions & 4 deletions src/train.py
@@ -14,6 +14,7 @@
 import json
 import os
 
+from data import preprocess
 from data import loader
 from utils import common
 import model
@@ -22,14 +23,17 @@
 flags.DEFINE_string('config_name', 'config/AConvNet-SOC.json', help='')
 FLAGS = flags.FLAGS
 
-#common.set_random_seed(12321)
+common.set_random_seed(12321)
 
 
 def load_dataset(path, is_train, name, batch_size):
+    transform = [preprocess.CenterCrop(88), torchvision.transforms.ToTensor()]
+    if is_train:
+        transform = [preprocess.RandomCrop(88), torchvision.transforms.ToTensor()]
     _dataset = loader.Dataset(
         path, name=name, is_train=is_train,
-        transform=torchvision.transforms.Compose([torchvision.transforms.ToTensor()])
+        transform=torchvision.transforms.Compose(transform)
     )
     data_loader = torch.utils.data.DataLoader(
         _dataset, batch_size=batch_size, shuffle=is_train, num_workers=1
@@ -99,7 +103,9 @@ def run(epochs, dataset, classes, channels, batch_size,

         accuracy = validation(m, valid_set)
 
-        logging.info(f'Epoch: {epoch + 1:03d}/{epochs:03d} | loss={np.mean(_loss):.4f} | lr={lr} | accuracy={accuracy}')
+        logging.info(
+            f'Epoch: {epoch + 1:03d}/{epochs:03d} | loss={np.mean(_loss):.4f} | lr={lr} | accuracy={accuracy:.2f}'
+        )
 
         history['loss'].append(np.mean(_loss))
         history['accuracy'].append(accuracy)
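Re-enabling `common.set_random_seed(12321)` restores the reproducibility promise from the README. The repository's helper lives in `src/utils/common.py`; a typical implementation of such a helper looks roughly like this (an assumption, not the repo's exact code):

```python
import random

import numpy as np
import torch


def set_random_seed(seed):
    # Seed every RNG the training loop touches.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)  # safe no-op on CPU-only machines


set_random_seed(12321)
```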
