[Bug] Constant crashes on Linux #830
This is your first time submitting an issue with UVtools 🥳 Please review your issue and ensure that the submit template was followed, the information is complete, and it is not related to any other open issue. It will be reviewed shortly. Debugging is very important and makes the program better. Thanks for contributing and making the software better! 🙌
I think the problem is a lack of RAM to process such a file when the system uses all cores by default. On my Linux VM it crashes while loading: RAM and swap hit 100%, and the system kills the app to save the OS. Decoding 12K images in parallel is a RAM hog; the solution is to decrease the core count per workload in the UVtools settings (see "max degree of parallelism" under the Tasks settings). The more cores you have, the more images it decodes/encodes in parallel, so when working with n × 12K bitmaps you either have the RAM or it will crash. Opens fine (smaller spike) vs. doesn't open (big spike): the file that opens fine consumes less RAM because it has fewer layers to process, so there is less memory pressure.
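As a rough back-of-the-envelope check (a hypothetical sketch, not UVtools code; the resolution and channel count below are illustrative assumptions, not values from the crashing file), one uncompressed 8-bit 12K bitmap is already tens of megabytes, so holding one per in-flight decode multiplies quickly:

```python
# Rough estimate of peak RAM for decoding 12K layer bitmaps in parallel.
# Resolution and bytes-per-pixel are assumptions for illustration only.
WIDTH, HEIGHT = 11520, 5120   # a common "12K" MSLA resolution
BYTES_PER_PIXEL = 1           # 8-bit grayscale layer mask

def peak_decode_ram_mib(parallel_workers: int) -> float:
    """One full uncompressed bitmap is held per in-flight decode."""
    per_image = WIDTH * HEIGHT * BYTES_PER_PIXEL
    return parallel_workers * per_image / 2**20

print(f"{peak_decode_ram_mib(1):.2f} MiB")   # one image: 56.25 MiB
print(f"{peak_decode_ram_mib(32):.0f} MiB")  # 32 workers: 1800 MiB
```

This only counts the raw decode buffers; any copies made during processing add to the total.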
However, the insane amount of RAM it shows using makes little sense, because with your core count the calculations aim for about 1 GB. For that reason I reviewed the code for Goo and found that I forgot to dispose the Mat after using it, so the object remained alive even when no longer required. That means it decodes and keeps all 12K bitmaps in RAM and only releases them automatically via the GC once everything is processed. I will fix this in the next patch. Only GOO is affected. Difference after the fix, on the larger file:
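The leak pattern described above can be sketched in Python rather than the project's C# (hypothetical class and function names; the point is that buffers freed only by the garbage collector pile up, while explicit disposal keeps the peak at one image):

```python
# Sketch of the leak: un-disposed buffers accumulate until the GC runs,
# while explicit disposal releases each one immediately after use.
class Mat:
    """Stand-in for an image buffer backed by native memory."""
    live = 0  # count of currently un-disposed buffers

    def __init__(self):
        Mat.live += 1

    def dispose(self):
        Mat.live -= 1

def decode_leaky(n: int) -> int:
    Mat.live = 0
    mats = [Mat() for _ in range(n)]   # never disposed: all n stay alive
    return Mat.live

def decode_fixed(n: int) -> int:
    Mat.live = 0
    peak = 0
    for _ in range(n):
        m = Mat()
        peak = max(peak, Mat.live)     # at most one buffer alive at a time
        m.dispose()                    # release right after use
    return peak
```

With 100 layers, the leaky version keeps 100 buffers alive at once; the fixed version never holds more than one.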
I tried decreasing parallelism to one, but it still consumed all of my RAM plus 7 GB of swap (which I did not expect to be used at all with this amount of RAM, but now it seems to be too small) and died.
Can you attach the CTB file?
OK, EncryptedCTB is also affected by the same problem. After that I reviewed all the other file formats and found that SVGX has the same problem too. It's strange that no one complained about CTB before, given the big resolutions and high layer counts...
- **Layers:**
  - (Add) Brotli compression codec with good performance and compression ratio (choose it if you have low available RAM)
  - (Improvement) Use `ResizableMemory` instead of `MemoryStream` for `GZip` and `Deflate` compressions, resulting in faster compression and less memory pressure
  - (Improvement) Changed how layers are cached in memory. Before, the whole image was compressed and saved. Now the bitmap is cropped to the bounding rectangle of the current layer, and only that portion with pixels is saved. For example, if the image is empty the cached size will be 0 bytes, and a 12K image with only a 100 × 100 usable area saves just that area instead of requiring a full 12K buffer. The buffer size is now dynamic and depends on the layer data, so this method can greatly reduce memory usage, especially for large images with a lot of empty space, and it also boosts overall performance by relieving allocations and the required memory footprint. Only a few special cases have drawbacks, and the performance impact in those cases is minimal. When decompressing, the full-resolution image is still created and the cached area is imported into the corresponding position, recomposing the original image. This is still faster than the old method because decompressing a larger buffer is more costly. In the end, both writes/compressions and reads/decompressions are now faster and use less memory. Note: when printing multiple objects, it is recommended to place them as close to each other as you can to take better advantage of this new method.
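The bounding-rectangle caching described in that changelog entry can be illustrated with a minimal pure-Python sketch (hypothetical helper names; the real implementation works on OpenCV Mats and adds compression on top):

```python
# Cache only the non-empty region of a layer, then recompose the full
# image on read. Images are plain lists of rows of 0/1 pixel values.
def bounding_rect(img):
    """Return (x, y, w, h) of the non-zero region, or None if empty."""
    rows = [y for y, row in enumerate(img) if any(row)]
    if not rows:
        return None
    cols = [x for x in range(len(img[0])) if any(row[x] for row in img)]
    x, y = min(cols), min(rows)
    return x, y, max(cols) - x + 1, max(rows) - y + 1

def compress_layer(img):
    """Store only the used area: (rect, cropped pixels)."""
    rect = bounding_rect(img)
    if rect is None:
        return None, []                           # empty layer -> 0 bytes
    x, y, w, h = rect
    return rect, [row[x:x + w] for row in img[y:y + h]]

def decompress_layer(rect, crop, width, height):
    """Recompose the full-resolution image from the cached crop."""
    img = [[0] * width for _ in range(height)]
    if rect is not None:
        x, y, w, h = rect
        for dy in range(h):
            img[y + dy][x:x + w] = crop[dy]
    return img
```

A layer whose pixels sit in a small corner of a huge canvas caches only that corner, and an empty layer caches nothing at all, which is why tightly grouped objects benefit most.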
- **Issues Detection:**
  - (Fix) When detecting islands but not overhangs, it throws an exception about an invalid ROI
  - (Fix) Huge memory leak when detecting resin traps (#830)
  - (Improvement) Core: Changed the way the "Roi" method returns and try to dispose all of its instances
  - (Fix) EncryptedCTB, GOO, SVGX: Huge memory leak when decoding files that caused the program to crash (#830)
Check the new version; it should be fixed, including the issue-detection leak.
4.2.0 works fine with previous files. |
That file is huge and uses almost all the space in 12K, so little to no optimization is possible in that case. The Auto (-1) parallelism is not handled by the program but by the .NET framework: ParallelOptions.MaxDegreeOfParallelism. Especially these lines:
So in that case the framework manages task creation based on what it thinks is best for performance. For example, if you pause a task, it will keep spawning tasks because there is an opportunity to process data (since others are paused). If it's very busy, it will wait for an opportunity to spawn a new task. So in the end it may spawn more tasks than your CPU core count; in my case I have 32 cores and it spawns 34 to 36 tasks. But if you choose a number, it will always stay within that limit. What I use myself and recommend is the "!" (Optimal) setting: it sets the limit to your core count minus a few, so some cores stay free to keep your system responsive and you can use your OS without much lag. Here are the results on my system: the most memory is taken by resin trap detection due to the MatCache requirement. If your models are solid, or you know for sure they are OK, please disable resin trap detection on such files. As an alternative, you can upgrade your RAM with another 32 GB; it would make a difference in these cases. Or, cheaper, increase your swap: if you put it on an SSD the performance is better. It is still not as fast as RAM, but it will save you in these situations.
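A Python analogue of capping the degree of parallelism (ThreadPoolExecutor's `max_workers` plays the role of .NET's `ParallelOptions.MaxDegreeOfParallelism`; the "optimal" heuristic shown is an assumption matching the description above, not UVtools' exact formula):

```python
import os
from concurrent.futures import ThreadPoolExecutor

def optimal_workers() -> int:
    """Leave a couple of cores free so the OS stays responsive."""
    return max(1, (os.cpu_count() or 1) - 2)

def decode_layers(layer_ids, workers=None):
    """Decode layers with a hard cap on concurrent (RAM-hungry) tasks."""
    workers = workers or optimal_workers()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # At most `workers` decodes (and thus bitmaps) are in flight at once,
        # unlike the Auto (-1) mode where the framework may oversubscribe.
        return list(pool.map(lambda i: f"layer-{i}", layer_ids))

print(decode_layers(range(3), workers=2))
```

The work-item itself is a placeholder string here; the point is that a fixed `max_workers` bounds peak memory, whereas the framework-managed default optimizes for throughput.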
So maybe having 128 cores is working against me then, given the n × bitmaps behavior. I did find this, so how the heck am I using up all my RAM?
You have a ton of cores; is 128 the total, or does it double with hyperthreading? My advice is to cut the workload in half in the UVtools settings: start with 1/2, monitor RAM, and tune from there.
System
Printer and Slicer
Description of the bug
The app crashes either while opening a file or when trying to open the "Issues" tab.
The whole system freezes while app is loading/calculating stuff.
There is no error log in "settings folder" and no output if I run it from the terminal.
I've had similar crashes before with some files. Last time the app was crashing constantly, I moved back from 4.0.6 to 4.0.5 and it kind of worked. Now I have reinstalled my whole system (for reasons) and even 4.0.5 does not work.
How to reproduce
Files
There are 2 files here.
The first one is a simple cube I created in Lychee; it opens fine.
The second file opened once but crashed after I tried to open the "Issues" tab; now it doesn't open at all, crashing on load.
https://drive.google.com/drive/folders/1klkspVe7JEjMR90WdmTua67er07E-XOX?usp=sharing