As noted during a user study, the extension has issues with files that contain a large number of non-select statements.
In many other tools the entire query is run as one batch, with the effect that you only get one result set out.
In this tools service, each statement is run separately instead. While this preserves all result sets, it means that for a 15,000-statement file (e.g. 15K inserts) you'll execute 15K serial queries. Against a remote DB like Azure (say with a 200 ms round-trip time) that's 15K × 0.2 s = 3,000 s = 50 minutes just in round trips.
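For illustration, a minimal sketch of the two behaviors using psycopg2 (the DSN and table are placeholders; this isn't the service's actual execution path):

```python
import psycopg2

# Placeholder DSN; any remote Postgres with noticeable latency shows the effect.
conn = psycopg2.connect("host=example.postgres.database.azure.com dbname=demo user=me")
cur = conn.cursor()

statements = [f"INSERT INTO t VALUES ({i});" for i in range(15_000)]

# Per-statement execution: one network round trip each.
# At ~200 ms per round trip that is 15,000 * 0.2 s = 3,000 s = 50 min.
for stmt in statements:
    cur.execute(stmt)

# One-batch execution: the whole string goes over the wire in a single call,
# but only the last statement's result set (if any) is retrievable afterwards.
cur.execute("".join(statements))
conn.commit()
```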
Proposed solution:
pgsqltoolsservice/query/query.py (line 66) splits the query into batches by statement, but later (pgsqltoolsservice/query/batch.py, create_batch, line 190) each statement is parsed and select statements are treated differently from non-select statements.
We could extend the query batching so that it splits on selects but otherwise runs all statements in a single batch. Hence inserts and other operations would run as one large batch.
If you have a file that does, for example, a Create Table, then 15K inserts, then 2 selects, you'd make 3 total network calls to the server (one for the create + 15K inserts, then one for each select).
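A rough sketch of that grouping, using sqlparse for statement parsing (smart_batches is a hypothetical name, not an existing function in the service):

```python
import sqlparse

def smart_batches(sql: str) -> list:
    """Group consecutive non-select statements into a single batch;
    give each SELECT its own batch so every result set is preserved."""
    batches = []
    pending = []
    for stmt in sqlparse.parse(sql):
        if stmt.get_type() == 'SELECT':
            if pending:                        # flush the accumulated non-select run
                batches.append(''.join(pending))
                pending = []
            batches.append(str(stmt))          # SELECT runs on its own
        else:
            pending.append(str(stmt))          # sqlparse keeps the trailing semicolon
    if pending:
        batches.append(''.join(pending))
    return batches
```

For the Create Table + 15K inserts + 2 selects example above, this yields exactly 3 batches.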
If you want to get fancier, you could define a user setting as follows (a dispatch sketch follows the list):
- (Default) Smart batch. Run all non-select statements in groups, and each select statement on its own.
- No batching. Run the entire query as one operation and get at most one result set back.
- Batch per statement. Run each statement in its own batch (the current behavior).
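If that setting were added, the dispatch might look roughly like this (the mode names and compute_batches are hypothetical; smart_batches is the sketch above):

```python
from enum import Enum

import sqlparse

class BatchMode(Enum):
    SMART = 'smartBatch'            # default: group non-selects, isolate selects
    NO_BATCHING = 'noBatching'      # whole query as one operation
    PER_STATEMENT = 'perStatement'  # current behavior

def compute_batches(sql: str, mode: BatchMode) -> list:
    if mode is BatchMode.NO_BATCHING:
        return [sql]                    # one round trip, at most one result set
    if mode is BatchMode.PER_STATEMENT:
        return sqlparse.split(sql)      # one batch per statement
    return smart_batches(sql)           # smart grouping, per the sketch above
```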