Hyper-parameters for reproducing the results on ImageNet #36

Open
kumamonatseu opened this issue Nov 16, 2020 · 9 comments
@kumamonatseu

This is great work. However, when I try to reproduce the results on the ImageNet dataset, there is a ~1% accuracy gap between mine and the one reported in your paper.

Would you mind providing the hyper-parameters for training on ImageNet?

Here is mine:
-r 1.0
-a 0.0
-b 0.8
--trial 1
--weight_decay 0.0001
--learning_rate 0.1
--epochs 100
--lr_decay_epochs 30,60,90
--print_freq 500
--batch_size 256

Eight 1080Ti GPUs were used.
Thanks!
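For what it's worth, the step schedule implied by --learning_rate 0.1 and --lr_decay_epochs 30,60,90 can be sketched as below. The decay factor of 0.1 is an assumption here; the repo's default may differ.

```python
def lr_at_epoch(epoch, base_lr=0.1, decay_epochs=(30, 60, 90), decay_rate=0.1):
    """Step-decay schedule: multiply the LR by decay_rate at each decay epoch."""
    lr = base_lr
    for d in decay_epochs:
        if epoch >= d:
            lr *= decay_rate
    return lr

# LR stays at 0.1 until epoch 30, then steps down at 30, 60, and 90
for e in (0, 29, 30, 60, 90):
    print(e, lr_at_epoch(e))
```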

@kumamonatseu

[validation curve image attached]
The validation curve is attached; the best accuracy is 70.434%.
Hoping for your reply.

@shaoeric


I ran into the same problem today. Have you solved it? I think the dataset loading is wrong; I will try to rewrite that code.

@kumamonatseu


Not solved yet; I gave up.
Similar results were obtained in this repo:
https://github.com/yoshitomo-matsubara/torchdistill/tree/master/configs/official/ilsvrc2012/yoshitomo-matsubara/rrpr2020

@shaoeric


I found that test_set = datasets.ImageFolder(test_folder, transform=test_transform) is wrong: test_set gets incorrect labels. I guess the validation data was preprocessed somehow in the author's setup. So I am rewriting this part; it is not difficult, and I expect a normal result. ^v^
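A quick way to see the mislabeling (a sketch with a made-up tiny-imagenet-style layout; the file names are hypothetical): torchvision's ImageFolder derives class names from the immediate subfolders of the root, and tiny-imagenet's val/ folder keeps all images in a single images/ subfolder, so every validation image ends up with the same label.

```python
import os
import tempfile

# Hypothetical tiny-imagenet-style validation layout (flat, no per-class folders):
# val/
#   images/            <- every validation image lives here
#   val_annotations.txt
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "images"))
open(os.path.join(root, "images", "val_0.JPEG"), "w").close()
open(os.path.join(root, "val_annotations.txt"), "w").close()

# ImageFolder infers one class per immediate subfolder of root, so here it
# would see exactly one class, "images", and give every image that label.
classes = sorted(d.name for d in os.scandir(root) if d.is_dir())
print(classes)  # ['images']
```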

@shaoeric

import os
import torch
from PIL import Image
from torch.utils.data import Dataset
from torchvision import datasets

class TestImageDataset(Dataset):
    """Validation set for a layout where images live in a flat `images/`
    folder and labels come from `val_annotations.txt`."""
    def __init__(self, root, transform, classes2label=None):
        super(TestImageDataset, self).__init__()
        self.root = root
        self.transform = transform
        self.classes2label = classes2label  # class name -> train-set label index
        self.image_file_list, self.label_list = self.parse_txt()

    def __len__(self):
        return len(self.image_file_list)

    def __getitem__(self, idx):
        file = os.path.join(self.root, 'images', self.image_file_list[idx])
        img = Image.open(file).convert('RGB')
        label = torch.tensor(int(self.label_list[idx])).long()
        if self.transform is not None:
            img = self.transform(img)
        return img, label

    def parse_txt(self):
        # Each line of val_annotations.txt: <image file>\t<class name>\t<bbox fields...>
        annotation_path = os.path.join(self.root, 'val_annotations.txt')
        image_file_list = []
        label_list = []
        with open(annotation_path, 'r') as f:
            contents = f.readlines()
        for content in contents:
            image_file, classes_name = content.split('\t')[:2]
            image_file_list.append(image_file)
            label_list.append(self.classes2label[classes_name])
        return image_file_list, label_list


if is_instance:
    train_set = ImageFolderInstance(train_folder, transform=train_transform)
    n_data = len(train_set)
else:
    train_set = datasets.ImageFolder(train_folder, transform=train_transform)
# Label the validation set with the same class->index mapping as the train set
test_set = TestImageDataset(root=test_folder, transform=test_transform,
                            classes2label=train_set.class_to_idx)

@kumamonatseu This code works! ^v^
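For reference, the val_annotations.txt parsing above can be exercised standalone on a synthetic file (the file names and WordNet ids below are made up for illustration):

```python
import os
import tempfile

# Two made-up annotation lines in tiny-imagenet's format:
# <image file>\t<class name>\t<bounding box fields...>
lines = [
    "val_0.JPEG\tn01443537\t0\t32\t44\t62",
    "val_1.JPEG\tn01629819\t23\t4\t52\t33",
]
root = tempfile.mkdtemp()
with open(os.path.join(root, "val_annotations.txt"), "w") as f:
    f.write("\n".join(lines) + "\n")

# Same logic as TestImageDataset.parse_txt, with a stand-in class_to_idx map
classes2label = {"n01443537": 0, "n01629819": 1}
image_files, labels = [], []
with open(os.path.join(root, "val_annotations.txt")) as f:
    for content in f.readlines():
        image_file, classes_name = content.split("\t")[:2]
        image_files.append(image_file)
        labels.append(classes2label[classes_name])

print(image_files)  # ['val_0.JPEG', 'val_1.JPEG']
print(labels)       # [0, 1]
```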

@kumamonatseu

@shaoeric Thanks for your reply. It seems the dataset code has already been rewritten in another repo. Anyway, hoping for good news from you!

@liuhao-lh


May I ask what the 'val_annotations.txt' is? I can't find it in ilsvrc2012. Thanks!

shaoeric commented Mar 2, 2021


Very glad to help, but I only ran the code on tiny-imagenet. Sorry!

@liuhao-lh


Thanks for the reply.
