function 'feval' in model.lua gets bad argument #1 error #9
Comments
That's indeed a bug. Did you observe it after the first epoch finished?
I have the same problem. And yes, that happens after one training epoch, when validation starts. The same happens if you try to "translate" images only (without training) using your pretrained model.
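(For anyone reproducing the decode-only case: a rough sketch of a test-phase invocation, based on the training command quoted later in this thread. The -phase test and -load_model flags and the data/test.txt file are assumptions, not confirmed anywhere in this thread, so verify them against the options in src/train.lua.)
# Hypothetical decode-only run reusing a trained model from -model_dir.
# -phase test and -load_model are assumed flag names; data/test.txt is a placeholder.
th src/train.lua -phase test -load_model -gpu_id 1 -model_dir model -image_dir data/images -data_path data/test.txt -label_path data/labels.txt -vocab_file data/vocab.txt -batch_size 8 -beam_size 5 -max_num_tokens 150 -max_image_width 500 -max_image_height 160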
Oh thanks for reporting that! I'll look into it.
Sorry for the delay. I suspect that's due to OpenNMT updating their library. I just updated the repo to be compatible with the current OpenNMT library. Can you try reinstalling the OpenNMT library (for example, along the lines of the sketch below)?
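(For reference, a sketch of one common way to reinstall the OpenNMT Lua library as a luarocks rock; the rockspec filename below is an assumption and may differ across OpenNMT versions.)
# Build and install the onmt rock from a fresh OpenNMT checkout.
# The rockspec path is an assumption; check the rocks/ directory of the clone.
git clone https://github.com/OpenNMT/OpenNMT.git
cd OpenNMT
luarocks make rocks/opennmt-scm-1.rockspec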
Oh sorry, my bad, training is broken now although decoding works fine... Working to solve this.
Thanks! It works now.
Great!
Hello, I'm trying to get Im2Text to work alongside the latest OpenNMT and Torch installations. A lot of errors are encountered, evidently because both have been updated since the latest Im2Text commit. So I have rolled them back to versions matching the latest Im2Text commit date, and the following configuration seems to be working somehow:
However, I still get the following error for some test images:
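(On the rollback step mentioned in this comment: a sketch of checking out a repository as it was on a given date, assuming plain git; the date below is a placeholder, not the actual commit the author matched.)
# Check out the newest commit on master older than a chosen date.
# Replace the placeholder date with the latest Im2Text commit date being matched.
cd OpenNMT
git checkout $(git rev-list -n 1 --before="2017-07-01" master)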
Oh sure. I plan to include onmt directly in this project.
@acmx2 Updated. This is a working version that I am using myself. Since onmt is already included, make sure any currently installed onmt has been removed (see the sketch below). Note that due to the model changes, the pretrained model cannot work now. I'll train and upload a new model.
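(A sketch of removing a previously installed onmt rock so that the bundled copy is picked up; the rock name opennmt is an assumption, so check what luarocks list actually reports.)
# List installed rocks to find the onmt/OpenNMT entry, then remove it.
luarocks list | grep -i -E 'onmt|opennmt'
# Use whatever name the previous command reports; 'opennmt' is an assumption.
luarocks remove opennmt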
Thanks, it runs without crashes. However, having trained and tested it, I see the following results:
@acmx2 Thanks for reporting that! Can you test it on the training set as well?
Sure, I want to train on the 100k set, but I need to upgrade my video card for that, because it might take a couple of weeks to train on my current GTX 950M with 4 GB, although training does seem to run on it with a reduced batch size (20 -> 5), as sketched below.
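(For reference, a sketch of the only change involved here, assuming the training command quoted later in this thread: lowering -batch_size so the run fits in 4 GB of GPU memory.)
# Same training command as elsewhere in this thread, with -batch_size lowered to 5.
th src/train.lua -phase train -gpu_id 1 -input_feed -model_dir model -image_dir data/images -data_path data/train.txt -val_data_path data/validate.txt -label_path data/labels.txt -vocab_file data/vocab.txt -batch_size 5 -beam_size 1 -max_num_tokens 150 -max_image_width 500 -max_image_height 160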
@da03 Finally, I trained the latest Im2Text using the latest Torch on the 100k training set. The result is the same {2{2{2... output. It looks like something is broken in the latest commits. Details: my configuration is a GTX 1080 Ti with 11 GB of memory, and I use the command line provided on the main page with an additional option disabling the stress test on startup. After 70 hours of training the perplexity definitely doesn't go below 20. Nevertheless, the following configuration seems to be working:
I use the example command:
th src/train.lua -phase train -gpu_id 1 -input_feed -model_dir model -image_dir data/images -data_path data/train.txt -val_data_path data/validate.txt -label_path data/labels.txt -vocab_file data/vocab.txt -batch_size 8 -beam_size 1 -max_num_tokens 150 -max_image_width 500 -max_image_height 160
but receive this error:
/home/kxx/torch/install/bin/luajit: /home/kxx/.luarocks/share/lua/5.1/torch/Tensor.lua:462: bad argument #1 to 'set' (expecting number or Tensor or Storage)
stack traceback:
  [C]: in function 'set'
  /home/kxx/.luarocks/share/lua/5.1/torch/Tensor.lua:462: in function 'view'
  /home/kxx/.luarocks/share/lua/5.1/onmt/translate/Beam.lua:127: in function 'func'
  /home/kxx/.luarocks/share/lua/5.1/onmt/utils/Tensor.lua:12: in function 'recursiveApply'
  /home/kxx/.luarocks/share/lua/5.1/onmt/utils/Tensor.lua:7: in function 'selectBeam'
  /home/kxx/.luarocks/share/lua/5.1/onmt/translate/Beam.lua:350: in function '_nextState'
  /home/kxx/.luarocks/share/lua/5.1/onmt/translate/Beam.lua:339: in function '_nextBeam'
  .../.luarocks/share/lua/5.1/onmt/translate/BeamSearcher.lua:98: in function '_findKBest'
  .../.luarocks/share/lua/5.1/onmt/translate/BeamSearcher.lua:68: in function 'search'
  ./src/model.lua:246: in function 'feval'
  ./src/model.lua:313: in function 'step'
  src/train.lua:159: in function 'run'
  src/train.lua:253: in function 'main'
  src/train.lua:259: in main chunk
  [C]: in function 'dofile'
  .../kxx/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:150: in main chunk
  [C]: at 0x00405d50
Is this a bug?