[Bug]: setTableEntry extremely slow #4769
Comments
I think I can see a couple of issues with that -- such as gaps in the table if one were to use setTableEntry() to add an entry past the current end of the table. For now, I'm going to focus on the performance issue while the above question is discussed.
I see a lot of things going against this. For one, we don't need it, since we can already check whether an entry exists and add it with addTableEntry() when it doesn't. Not hard to package that up as a UDF if doing this a lot. As a follow-up to that, the proposed change would be a breaking change to setTableEntry(). Finally, it would give the impression that tables are mappings from numbers to results, but that isn't true. Of course we can use tables that way, but it's not reality.
Gaps are already possible when building tables by other means, so that part isn't new. Small tangent there: I took a look at the existing lookup code while I was at it.
All good points; thanks. I had to review the wiki pages (I don't build tables very often) to learn about some of the details, like having gaps in the entries.
Thanks for opening the issue. I'm not sure how much of the current behaviour can fairly be called a "bug" unless it's doing something truly unintended, as opposed to just doing something poorly conceived to accommodate large tables... I just know that there is something really bad going on when addTableEntry and setTableEntry are done at a large scale (i.e. when hundreds or thousands of records are edited or added), and I suspect that the big-brain collective can improve the performance significantly. Of additional note, the massive memory usage when doing a lot of edits to a large table also seems to permanently lock up a large amount of memory. That may, in fact, be a real bug. I'm attaching a nearly 3200-record table of monsters based on Pathfinder 1e monsters for anyone who might need a large table to test with. Remove the .zip extension to use it in MT. Edit: I haven't finished indexing the table... The (0) record has all of the listings, (1) alpha, (2) type, and I think (3) subtype.
Some design ideas... I like the idea of a sorted ArrayList to hold table entries and a binary search to find a specific entry when given a roll number. That should perform well, and it doesn't change the structure stored in the campaign file, so there shouldn't be any work needed when reading older campaigns (potentially a sort?). Adding entries in bulk, as the OP was attempting, is a common case during development but rare during a game. If we add/set a single entry, it can be inserted where it belongs in the table and we avoid a sort, but each such insert is O(n), since all following entries must be "pushed down" by one, so doing it in a loop degrades to O(n²). Then there's the issue of every change being pushed out to clients if it does happen during a game. I see three options.
In regards to option 3, if the changes made during a macro were queued up and sent to clients as a single update at the end, it would cut the traffic considerably. Any other ideas?
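For concreteness, here is a minimal sketch of the sorted-list-plus-binary-search idea, assuming a simplified Entry record with an inclusive min-max roll range; this is illustrative only, not the real LookupTable.LookupEntry API:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

// Sketch only: Entry is a hypothetical stand-in for LookupTable.LookupEntry.
final class TableIndex {
  record Entry(int min, int max, String value) {}

  private static final Comparator<Entry> BY_MIN = Comparator.comparingInt(Entry::min);

  // Entries kept sorted by their minimum roll value.
  private final List<Entry> entries = new ArrayList<>();

  /** Binary-search for the entry covering a roll; returns null if the roll lands in a gap. */
  Entry lookup(int roll) {
    int i = Collections.binarySearch(entries, new Entry(roll, roll, null), BY_MIN);
    if (i < 0) {
      i = -i - 2; // insertion point minus one: the entry whose min is just below the roll
    }
    if (i < 0) {
      return null; // roll is below every entry's range
    }
    Entry e = entries.get(i);
    return roll <= e.max() ? e : null; // null when the roll falls into a gap between ranges
  }

  /** Insert a single entry at its sorted position: O(log n) search plus O(n) shift. */
  void insert(Entry e) {
    int i = Collections.binarySearch(entries, e, BY_MIN);
    entries.add(i < 0 ? -i - 1 : i, e);
  }
}
```

Note that lookup() handles gaps naturally: a roll that falls between two ranges finds nothing, which matches the gap behaviour discussed above.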
Technically true, but I don't think this will be noticeable. Even inserting into a large ArrayList is cheap in practice, since shifting the tail entries is a single contiguous array copy.
And right now this is even worse than it seems. For the rest: I like the consideration of cutting down on network traffic, and I'm definitely in favour of an approach that is not specific to tables. Similar to your idea (3), I've had in mind that it should be possible to treat all macros as running in exactly that sort of "transaction". No need for special syntax to opt into it; we can just apply the idea generally.
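As a rough illustration of what that kind of "transaction" could look like internally: queue server commands while the macro runs, keep only the latest update per table, and flush once at the end. The class and method names here are hypothetical, not MapTool's actual API:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch: server updates are deferred during macro execution
// and pushed to clients once, when the macro completes.
final class DeferredUpdates {
  // One pending update per table, so 3200 setTableEntry() calls against the
  // same table collapse into a single message carrying the final state.
  private final Map<String, Runnable> pending = new LinkedHashMap<>();

  void queueTableUpdate(String tableName, Runnable sendToClients) {
    pending.put(tableName, sendToClients); // later updates replace earlier ones
  }

  /** Called once when the macro finishes: push the coalesced state to clients. */
  void flush() {
    pending.values().forEach(Runnable::run);
    pending.clear();
  }
}
```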
I think the "transaction" idea might be taking this further than it needs to go.
True transactions would certainly be too much. But merely queueing up server commands during macro execution should not be an issue since there's no guarantee how quickly they can get actioned anyways. |
There are already issues with changes from multiple clients overwriting each other, even without extending that window to the full length of a long-running macro. Worse, macros that use input (including the input functionality built into the parser) or REST calls are also problematic, as they could run for a very long time. Without a specific use case that can't be solved with a bulk operation, I don't think it's worth it. Also, some commands rely on coming back from the server to the local client to behave properly, which could be a mess to untangle.
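In that spirit, the hypothetical TableIndex sketched above could grow a bulk operation that adds everything first and sorts once, turning n one-at-a-time inserts (O(n²) total shifting) into a single O(n log n) pass:

```java
// Bulk insert for the hypothetical TableIndex above: append all entries,
// then restore the sort order with one pass instead of shifting per insert.
void insertAll(List<Entry> newEntries) {
  entries.addAll(newEntries);
  entries.sort(BY_MIN);
}
```

A server command carrying the whole batch would also reach clients as one message instead of thousands.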
I'm fine with the bulk-operation approach. Perhaps the queueing could also be scoped to an explicit code block rather than applied to the whole macro. It might also allow for some built-in parallelism if the parser can determine that the contents of the code block wouldn't interfere with itself if run on separate threads.
Just an update for @FullBleed ... There were no existing tests for LookupTable or LookupTable.LookupEntry, and no LookupFunctionsTest, so I'm starting from scratch. I'm mostly done with the first, but few of the methods are documented, so I've been stuck learning first what they're for and then adding that documentation; it just takes time.
Another update... I'm done with the tests, and I have the reworked implementation passing them.
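For a sense of the kind of coverage involved, here is a sketch of such a test, written against the hypothetical TableIndex from earlier in the thread rather than the real LookupTable classes:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertNull;

import org.junit.jupiter.api.Test;

// Illustrative only: exercises the hypothetical TableIndex sketch, not MapTool's LookupTable.
class TableIndexTest {
  @Test
  void lookupFindsEntriesAndRespectsGaps() {
    TableIndex t = new TableIndex();
    t.insert(new TableIndex.Entry(1, 5, "goblin"));
    t.insert(new TableIndex.Entry(10, 12, "ogre")); // deliberately leaves a gap at 6-9
    assertEquals("goblin", t.lookup(3).value());
    assertNull(t.lookup(7)); // a roll in the gap finds nothing
    assertEquals("ogre", t.lookup(12).value());
  }
}
```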
Describe the Bug
When adding 3200 elements to a new table using setTableEntry(), MT can consume significant CPU and even more memory (upwards of 48GB!).

To Reproduce
Execute the following macro and note the execution time. Then comment out or delete the setTableEntry() call and run it again.

Expected Behaviour
The setTableEntry() function should overwrite an existing entry if it's there, or add a new one if it isn't. (The latter is new functionality that doesn't exist in the current version, so the wiki will need to be updated.) Unfortunately, I don't see a way to change the return code so that it can indicate whether an entry was added or replaced while still maintaining backward compatibility.
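To make the proposed semantics concrete, here is an upsert on the hypothetical TableIndex sketched earlier in the thread. Returning the previous entry (null when a new one was added) is one way a caller could tell which case occurred; whether anything like that can be squared with setTableEntry()'s existing return value is exactly the open question:

```java
/** Sketch: replace the entry with the same minimum roll if present, add it otherwise. */
Entry upsert(Entry e) {
  int i = Collections.binarySearch(entries, e, BY_MIN);
  if (i >= 0) {
    return entries.set(i, e); // replaced: List.set returns the previous element
  }
  entries.add(-i - 1, e); // added at its sorted position
  return null; // null signals that a new entry was added
}
```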
Screenshots
No response
MapTool Info
All
Desktop
Any
Additional Context
See this Discord discussion for the initial report from Full Bleed.