Congratulations, the package is great and thanks for developing it.
I've tested it with some standard datasets and it works great (including the 50 MB cookbooks). However, when using it with a personal 1 MB dataset written in Brazilian Portuguese, R crashes every single time. I've already removed punctuation and excess whitespace, tried 1/2/4/8 threads, 100/200/500 vectors, and with/without removing stopwords, but got no better result. Do you have any idea what might be causing this crash?
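If it helps narrow this down, one thing worth ruling out first is a file-encoding mismatch, since accented Portuguese text makes a Latin-1/UTF-8 mix-up more likely. Below is a minimal base-R sketch, not a definitive fix; the path `corpus.txt` is a hypothetical placeholder, and the assumption that the file is Latin-1 when invalid bytes are found is just a guess to be verified.

```r
# Sketch: check the corpus for byte sequences that are not valid UTF-8
# before training. "corpus.txt" is a hypothetical path.
raw_lines <- readLines("corpus.txt", warn = FALSE)

# Flag lines whose bytes are not valid UTF-8 (common when a Latin-1 file
# is treated as UTF-8).
bad <- which(!validUTF8(raw_lines))
print(head(bad))

# If anything is flagged, re-encode the whole file to UTF-8 and train on
# that copy instead. The source encoding "latin1" is an assumption.
if (length(bad) > 0) {
  utf8_lines <- iconv(raw_lines, from = "latin1", to = "UTF-8")
  writeLines(utf8_lines, "corpus_utf8.txt", useBytes = TRUE)
}
```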
I tried with accents, like "é ó ú â ã", and it worked fine for small files (5 KB). I also tried small pieces of my personal dataset; samples from 1% to 6% of it worked. I also generated lorem ipsum files from 5 KB up to 700 KB, and those worked too. However, when I tried a file of around 1 MB, it crashed again.
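To extend that kind of bisection past 6%, here is a small base-R sketch that writes progressively larger prefixes of the corpus to separate files, so each one can be fed to the training call until the crash reproduces. The file names are hypothetical placeholders.

```r
# Sketch: write 10%, 20%, ..., 100% prefixes of the corpus to disk so each
# can be trained on separately. "corpus.txt" is a hypothetical path.
lines <- readLines("corpus.txt", warn = FALSE)
fractions <- seq(0.1, 1.0, by = 0.1)
for (f in fractions) {
  n <- ceiling(length(lines) * f)
  out <- sprintf("corpus_%03d_percent.txt", round(f * 100))
  writeLines(lines[seq_len(n)], out, useBytes = TRUE)
}
# Train on each file in increasing order; the first one that crashes
# brackets the problematic region between it and the previous prefix.
```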
I'm having the same issue -- RStudio aborts even when training on very reasonably sized files. I see the same problem with the rword2vec package as well. Path/file length is unlikely to be the cause (my paths are around 80-90 characters).