log file and supply path tracing for large scale data #474
-
I tried to implement a production planning task with frePPLe, installed in a Kubernetes and Docker environment. The data we use is about the following amount: The engine seems to be stuck in the planning stage for several hours, while splitting the sales orders into small batches takes less than one hour in total. We tried setting "loglevel()" to 2 to trace the issue, but the log file is removed automatically after the task finishes. Could I get some tips on preserving the log file, and a guide to understanding these logs?
Replies: 1 comment 1 reply
-
In the file /etc/frepple/djangosettings.py there is a setting, MAXTOTALLOGFILESIZE, that defines a limit on the total disk space we allow for log files. If this limit is reached, we automatically delete the oldest files. With this feature we avoid keeping too many and too old log files.
Increasing this parameter will allow you to preserve the log file.
That's too open a question to answer... There are just too many possible causes, each with different alternative resolutions.
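As a sketch, the setting mentioned above would sit in /etc/frepple/djangosettings.py roughly like this. The value shown here is only an illustrative assumption, not frePPLe's actual default, and the unit comment is an assumption as well; check your installed djangosettings.py for the real default and unit.

```python
# /etc/frepple/djangosettings.py (excerpt, illustrative)

# Disk space budget for all log files combined (value and unit assumed
# for illustration). When the combined size of the log files exceeds
# this limit, frePPLe deletes the oldest log files first. Raising this
# limit keeps plan log files around longer, so they survive long enough
# to be inspected after a slow planning run.
MAXTOTALLOGFILESIZE = 200
```

After editing the file, the web server / worker processes need to be restarted for the new value to take effect.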