We have a test running at ~1,200 req/s with 1,000 VUs. It runs on a 16 GB, 8-core EC2 instance of some sort, on Ubuntu Linux.
The test itself is a rather simple two-request use case.
It ramps up fine to the max rate over 20 minutes, so a pretty slow ramp-up.
Once at the max rate it just gobbles up memory until it runs out.
Running the same test with no output other than the console summary consumes only about 7% of total memory.
Running the same test with K6_TIMESCALEDB_PUSH_INTERVAL set to 500ms or even shorter makes the test complete. It still uses a lot of memory, but not as much.
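For reference, this is roughly how we run it with the shorter push interval. The connection string and script name are placeholders, not our real setup:

```shell
# Push buffered metrics to TimescaleDB every 500 ms instead of the
# default 1 s, so each batch (and the memory it holds) stays smaller.
K6_TIMESCALEDB_PUSH_INTERVAL=500ms \
  k6 run --out timescaledb=postgresql://k6:secret@localhost/k6 script.js
```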
We would like to optimize memory usage so that the output does not consume such large amounts of memory.
A self-sizing reporting window could be one possibility.
With a well-scaled database, bringing back the connection pool and writing over parallel connections would also work very well. The database has very high parallel throughput, but shortening the push interval to something like 200 ms will max out the one core tied to the single connection.
This is of course only relevant for a monolith running the entire test. Scaling out across several smaller load generators will not hit this issue unless they are pushed to high request rates.