This repository has been archived by the owner on Jan 13, 2022. It is now read-only.

Error with guppy using a new model trained with taiyaki #115

Open
jlaroche opened this issue Jun 2, 2021 · 2 comments

Comments


jlaroche commented Jun 2, 2021

I trained a new model with Taiyaki and exported it with dump_json.py.
I tried to use it with Guppy with this command:

```
guppy_basecaller -i subset_validation_single_fast5 -s subset_validation_basecall_new -c /prg/guppy/4.4.1/data/dna_r9.4.1_450bps_hac.cfg -m model_laval1.json --device cuda:0
```
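(For reference, the export step above was presumably along these lines. This is a sketch only: the checkpoint path is hypothetical, and the exact arguments should be checked against `dump_json.py --help` for your Taiyaki version.)

```
# Convert the final Taiyaki checkpoint to the JSON format Guppy's -m flag expects
# (hypothetical checkpoint name; dump_json.py ships in Taiyaki's bin/)
dump_json.py training/model_final.checkpoint > model_laval1.json
```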

I got this in the log file:

```
2021-06-02 16:17:05.873911 [guppy/message] ONT Guppy basecalling software version 4.4.1+1c81d62
config file: /prg/guppy/4.4.1/data/dna_r9.4.1_450bps_hac.cfg
model file: /home/jelar5/cam/samples/neg_ctrl/guppy_training/taiyaki_model/model_laval1.json
input path: /project/bioinfo/users/jelar5/consultations/camila_lima/samples/neg_ctrl/subset_validation_single_fast5
save path: /project/bioinfo/users/jelar5/consultations/camila_lima/samples/neg_ctrl/subset_validation_basecall_new
chunk size: 2000
chunks per runner: 512
records per file: 4000
num basecallers: 4
gpu device: cuda:0
kernel path:
runners per device: 4
2021-06-02 16:17:05.874334 [guppy/info] crashpad_handler not supported on this platform.
2021-06-02 16:17:12.314632 [guppy/info] CUDA device 0 (compute 6.0) initialised, memory limit 17071734784B (16804216832B free)
2021-06-02 16:17:12.339841 [guppy/error] Could not open gru_384_384_1_1_1_0.fatbin
2021-06-02 16:17:12.340052 [guppy/warning] Failed to load gru(gru_384_384_1_1_1_0) from fatbin
2021-06-02 16:17:12.342134 [guppy/error] Could not open CUDA kernel file: gru.cu
2021-06-02 16:17:12.342253 [guppy/warning] An error has occurred. Aborting.
```

Thanks!

Jerome


smaegol commented Jul 9, 2021

I have the same problem, with a model trained on RNA samples. Is there any solution? Basecalling on the CPU is time-consuming...

tmassingham-ont (Contributor) commented

Hello. I'm afraid Guppy only supports a limited set of layer types and sizes, and unfortunately a size-384 GRU layer is not one of them; that is what the missing gru_384_384 kernel in the log above is telling you. Size-256 GRU and size-384 LSTM layers are supported.
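(If retraining is an option, one way to end up with a Guppy-loadable model is to train with a hidden size Guppy supports. A minimal sketch, assuming Taiyaki's stock mGru_flipflop.py model definition and the --size option of train_flipflop.py; the paths and the mapped-reads file name are hypothetical, so check `train_flipflop.py --help` for the exact flags in your version:)

```
# Retrain with a 256-wide GRU so the exported JSON matches a kernel Guppy ships with
# (hypothetical paths; --size overrides the model's hidden layer size)
train_flipflop.py --device 0 --size 256 \
    taiyaki/models/mGru_flipflop.py mapped_reads.hdf5
```

A size-384 LSTM-based model definition (e.g. mLstm_flipflop.py, if present in your Taiyaki checkout) should likewise produce a model Guppy can load.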

Alternatively, you could try the basecall.py script from Taiyaki. It is generally much slower than Guppy but does support all layer types on the GPU. Depending on the GPU memory available, increasing the --jobs flag can considerably improve performance.
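(A hedged sketch of that route as well: the read directory and checkpoint names are hypothetical, and the argument order and flag names should be verified against `basecall.py --help`:)

```
# Basecall directly from the Taiyaki checkpoint on the GPU, writing FASTA to stdout
# (hypothetical paths; --jobs controls parallelism as noted above)
basecall.py --device 0 --jobs 4 \
    subset_validation_single_fast5 training/model_final.checkpoint > basecalls.fa
```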
