
How to use this project #16

Open
hahahaha2019310241 opened this issue Oct 9, 2024 · 3 comments

Comments

@hahahaha2019310241
Hello, I am very interested in your project, but I do not know how to run it on my own dataset. Could you explain how to use this project? Thank you!

@lmaxwell
Contributor

Could you please give me more details about your goals? Are you looking to pretrain a self-supervised model with your own data set, or are you interested in fine-tuning a pre-trained model on your data? I'd be happy to assist you in getting started with the project.

@hahahaha2019310241
Author

hahahaha2019310241 commented Nov 27, 2024

Thank you very much for your reply. I want to reproduce the results of the ATST model on the US8K dataset, but I am running into the following problem. How can I solve it? Thank you very much!
Take key teacher in provided checkpoint dict
Pretrained weights found at /home/lxz/PycharmProjects/lxz/project/audiossl-main/audiossl/methods/atst/base.ckpt and loaded with msg: _IncompatibleKeys(missing_keys=[], unexpected_keys=['head.0.weight', 'head.1.weight', 'head.1.bias', 'head.1.running_mean', 'head.1.running_var', 'head.1.num_batches_tracked', 'head.3.weight'])
Traceback (most recent call last):
  File "/home/lxz/PycharmProjects/lxz/project/audiossl-main/audiossl/methods/atst/shell/downtream/finetune/../../../downstream/train_finetune.py", line 201, in <module>
    main()
  File "/home/lxz/PycharmProjects/lxz/project/audiossl-main/audiossl/methods/atst/shell/downtream/finetune/../../../downstream/train_finetune.py", line 195, in main
    run_n_folds(args, pretrained_module, num_folds)
  File "/home/lxz/PycharmProjects/lxz/project/audiossl-main/audiossl/methods/atst/shell/downtream/finetune/../../../downstream/train_finetune.py", line 157, in run_n_folds
    test_metrics.append(run(args, deepcopy(pretrained_module), fold+1))
  File "/home/lxz/PycharmProjects/lxz/project/audiossl-main/audiossl/methods/atst/shell/downtream/finetune/../../../downstream/train_finetune.py", line 124, in run
    trainer: Trainer = Trainer(
  File "/home/lxz/anaconda3/envs/audiossl/lib/python3.10/site-packages/pytorch_lightning/utilities/argparse.py", line 70, in insert_env_defaults
    return fn(self, **kwargs)
  File "/home/lxz/anaconda3/envs/audiossl/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 401, in __init__
    self._accelerator_connector = _AcceleratorConnector(
  File "/home/lxz/anaconda3/envs/audiossl/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/accelerator_connector.py", line 147, in __init__
    self._accelerator_flag = self._choose_gpu_accelerator_backend()
  File "/home/lxz/anaconda3/envs/audiossl/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/accelerator_connector.py", line 367, in _choose_gpu_accelerator_backend
    raise MisconfigurationException("No supported gpu backend found!")
lightning_fabric.utilities.exceptions.MisconfigurationException: No supported gpu backend found!

@lmaxwell
Contributor

lmaxwell commented Dec 2, 2024

Do you have torch installed with CUDA support? Could you please check what the following code returns?

import torch
print(torch.cuda.is_available())
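
If that prints False, the installed torch build has no CUDA support, which matches Lightning's "No supported gpu backend found!" error. Below is a minimal sketch (not code from this repository, and not the exact Trainer configuration used in train_finetune.py) of how a standard PyTorch Lightning Trainer could fall back to CPU when no CUDA device is visible:

import torch
from pytorch_lightning import Trainer

# Minimal sketch, assuming a standard PyTorch Lightning setup:
# request the GPU backend only when a CUDA-capable device is actually available.
accelerator = "gpu" if torch.cuda.is_available() else "cpu"
trainer = Trainer(accelerator=accelerator, devices=1)

Alternatively, reinstalling torch with a CUDA-enabled build that matches your system's CUDA driver should make the original GPU path work.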
