Rate limits event loop #467
base: master
Conversation
…second to keep up with the 'Less than 10 000 requests per 10 minutes' hard limit.
Following the AI bot's recommendation for avoiding division by zero. Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
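For context on the numbers used below: 10,000 requests per 10 minutes works out to roughly 16.6 requests per second, so the per-second caps of 12 to 15 requests used later in this PR stay comfortably under that hard limit.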
Walkthrough
The pull request introduces a comprehensive rate limiting mechanism for Discord API interactions across multiple files. The changes involve updating the Makefile to support C++20 compilation, modifying…
Changes
Sequence Diagram
sequenceDiagram
participant Client
participant LibDiscord
participant RateLimiter
participant DiscordAPI
Client->>LibDiscord: Make API Request
LibDiscord->>RateLimiter: Check Rate Limits
RateLimiter->>DiscordAPI: Send Request
DiscordAPI-->>RateLimiter: Response with Headers
RateLimiter->>RateLimiter: Update Rate Limit Tracking
RateLimiter->>LibDiscord: Allow/Delay Next Request
LibDiscord->>Client: API Response
Actionable comments posted: 7
🔭 Outside diff range comments (1)
Makefile (1)
Line range hint 123-124: Link against C++ standard library
The build commands are missing explicit linking against the C++ standard library.
- $(CC) -fPIC $(CFLAGS) $(CPPFLAGS) -shared -o $@ $^ $(LDFLAGS) `$(PKG_CONFIG) purple glib-2.0 json-glib-1.0 zlib --libs --cflags` $(INCLUDES) -Ipurple2compat -g -ggdb
+ $(CXX) -fPIC $(CFLAGS) $(CPPFLAGS) -shared -o $@ $^ $(LDFLAGS) `$(PKG_CONFIG) purple glib-2.0 json-glib-1.0 zlib --libs --cflags` $(INCLUDES) -Ipurple2compat -g -ggdb -lstdc++
- $(CC) -fPIC $(CFLAGS) $(CPPFLAGS) -shared -o $@ $^ $(LDFLAGS) `$(PKG_CONFIG) purple-3 glib-2.0 json-glib-1.0 zlib --libs --cflags` $(INCLUDES) -g -ggdb
+ $(CXX) -fPIC $(CFLAGS) $(CPPFLAGS) -shared -o $@ $^ $(LDFLAGS) `$(PKG_CONFIG) purple-3 glib-2.0 json-glib-1.0 zlib --libs --cflags` $(INCLUDES) -g -ggdb -lstdc++
Also applies to: 126-127
🧹 Nitpick comments (9)
ratelimiter.cpp (4)
64-70: Non-movable and non-copyable approach
The EventLoop class is declared as non-copyable and non-movable, but there's no strong reason to forbid moving in many concurrency scenarios. Consider allowing move semantics if that simplifies usage. For instance, you might move the thread or other resources. Confirm that concurrency correctness is preserved before enabling moves.
169-171: Thread safety with blocking operations
The loopFunction is designed to handle indefinite waits for commands, but if commands contain blocking operations (like sleeps), the entire event loop can be stalled. If blocking is expected, consider splitting the event loop or using a separate worker thread pool to avoid unpredictably suspending time-critical dispatch.
250-255: Using std::this_thread::sleep_for in the main event loop
This might unintentionally block your event thread. Additionally, if interval is large, the entire thread remains blocked, preventing the processing of other tasks. Consider scheduling tasks on a separate timer mechanism (e.g., a dedicated timed tasks queue) or using non-blocking wait strategies for better concurrency.
257-258: Placeholder return value
The function returns 1 unconditionally. If the caller needs a handle or an ID to manage the timeout, returning a meaningful identifier helps track or cancel the scheduled command later. Refactor the function to store and return an actual handle for the scheduled task. For example:
static std::atomic_uint next_id = 1U;
// ...
return next_id++;
libdiscord.c (3)
108-128: Rate limit constants and variables need documentation
The code introduces several rate limit related constants and variables but lacks documentation explaining their purpose and usage. Consider adding comments to explain:
- What each variable represents
- The units of measurement (seconds, milliseconds, etc.)
- Any constraints or valid ranges
- How they interact with each other
+/* Rate limit constants and state */
+
+/* Initial rate limit values */
 const int init_xrateLimit=40; /* Maximum number of requests allowed */
 const int init_xrateRemaining=10; /* Remaining requests in current window */
 const double init_xrateReset=55; /* Time until rate limit resets (seconds) */
 const double init_xrateResetAfter=1; /* Time after which rate limit resets (seconds) */
 const int init_xRateAllowedPerSecond=12; /* Calculated requests per second */
 const int init_xRateDelayPerRequest=(int)((1.0 / (double)init_xRateAllowedPerSecond) * 1000.0); /* Delay between requests (ms) */
 const int init_xRateAllowedRemaining=10; /* Remaining allowed requests */
+/* Current rate limit state */
 static int xrateLimit=init_xrateLimit;
 static int xrateRemaining=init_xrateRemaining;
 static double xrateReset=init_xrateReset;
 static double xrateResetAfter=init_xrateResetAfter;
 static int xRateAllowedPerSecond=init_xRateAllowedPerSecond;
 static int xRateDelayPerRequest=init_xRateDelayPerRequest;
 static int xRateAllowedRemaining=init_xRateAllowedRemaining;
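As a quick sanity check on these defaults: with init_xRateAllowedPerSecond at 12, init_xRateDelayPerRequest evaluates to (1.0 / 12) * 1000 ≈ 83 ms, which matches the 83 ms default later passed to initialize_rate_limiter().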
1695-1695: Rate limit delay calculation needs review
The delay calculation MAX(65,xRateDelayPerRequest) uses a magic number 65 without explanation. Consider documenting why this minimum delay is needed.
- rlimited_timeout_add(delay + MAX(65,xRateDelayPerRequest), discord_fetch_url_with_method_delay_cb, request);
+ /* Minimum delay of 65ms to prevent overwhelming the server */
+ const int MIN_REQUEST_DELAY_MS = 65;
+ rlimited_timeout_add(delay + MAX(MIN_REQUEST_DELAY_MS, xRateDelayPerRequest),
+                      discord_fetch_url_with_method_delay_cb, request);
10675-10675: Initial rate limiter initialization
The code initializes the rate limiter with a hardcoded value of 83ms. Consider making this configurable or documenting why this specific value was chosen.
+ /* Initialize rate limiter with default delay of 83ms (approx 12 requests/sec) */
+ const int DEFAULT_RATE_LIMIT_DELAY_MS = 83;
- initialize_rate_limiter(83);
+ initialize_rate_limiter(DEFAULT_RATE_LIMIT_DELAY_MS);
ratelimiter.h (1)
8-10: Add documentation for the public API functions
The header file introduces critical rate limiting functionality but lacks documentation. Please add:
- Function descriptions
- Parameter explanations
- Thread safety guarantees
- Resource ownership details
- Error handling behavior
Example format:
/**
 * initialize_rate_limiter:
 * @interval: The rate limiting interval in milliseconds
 *
 * Initializes the rate limiting system. Must be called before any
 * rlimited_timeout_add calls. Not thread-safe.
 */
Makefile (1)
113-113: Rename PURPLE_C_FILES to reflect mixed C/C++ content
The variable name suggests C files only but includes C++ files.
-PURPLE_C_FILES := ratelimiter.cpp libdiscord.c $(C_FILES)
+PURPLE_SOURCE_FILES := ratelimiter.cpp libdiscord.c $(C_FILES)
📜 Review details
📒 Files selected for processing (4)
Makefile (3 hunks)
libdiscord.c (8 hunks)
ratelimiter.cpp (1 hunks)
ratelimiter.h (1 hunks)
🔇 Additional comments (7)
ratelimiter.cpp (1)
94-99: Potential confusion around rvalue references vs. const references
The enqueue(Command&&) correctly applies std::move. Meanwhile, the other enqueue variant takes a const reference but still uses std::move.
Check if the code compiles as intended. Typically, applying std::move on a const reference doesn’t move anything. The actual effect is a copy. If this is intentional, it might be worth clarifying in comments. Otherwise, consider overloading for both lvalues and rvalues in a consistent manner.
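A minimal self-contained illustration of this point (hypothetical names, not code from the PR): std::move applied to a const reference yields a const rvalue, which still binds to the copy constructor, so nothing is actually moved; the usual fix is a matching pair of lvalue/rvalue overloads.

#include <functional>
#include <utility>
#include <vector>

using Task = std::function<void()>;

class TaskQueue {
    std::vector<Task> buffer;
public:
    // lvalue overload: the caller keeps its object, so we copy it.
    void enqueue(const Task& t) { buffer.push_back(t); }
    // rvalue overload: the caller hands over ownership, so we move it.
    // Note: std::move() on a const Task& would still invoke the copy constructor.
    void enqueue(Task&& t) { buffer.push_back(std::move(t)); }
};

int main() {
    TaskQueue q;
    Task hello = [] {};
    q.enqueue(hello);  // copies via the const& overload
    q.enqueue([] {});  // moves via the && overload
}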
libdiscord.c (5)
55-55: Include new rate limiter header
The code adds a new header, ratelimiter.h, which suggests implementation of rate limiting functionality.
1501-1507: Rate limit header parsing function looks good
The parse_rate_limit_headers function is well structured and handles header extraction cleanly.
1516-1516: Good placement of rate limit header parsing
The rate limit header parsing is appropriately placed in the response callback to process headers as soon as they are received.
6337-6337: Good use of rate limiter for socket writes
The code appropriately uses the rate limiter for socket writes to prevent overwhelming the server.
10689-10689: Clean rate limiter shutdown
The code properly stops the rate limiter during plugin unload.
Makefile (1)
32-32: Consider C++17 instead of C++20 for better compatibility
C++20 is relatively new and might not be widely supported. Consider using C++17 unless C++20 features are essential.
void initialize_rate_limiter(guint interval);
void stop_rate_limiter();
guint rlimited_timeout_add(guint interval, GSourceFunc function, gpointer data);
🛠️ Refactor suggestion
Consider adding error handling to rlimited_timeout_add
The function should indicate whether the timeout was successfully added.
-guint rlimited_timeout_add(guint interval, GSourceFunc function, gpointer data);
+gboolean rlimited_timeout_add(guint interval, GSourceFunc function, gpointer data, guint *timeout_id);
CPPFLAGS ?= -O2 -g -pipe -Wall
CFLAGS += -std=c99 -DDISCORD_PLUGIN_VERSION='"$(PLUGIN_VERSION)"' -DMARKDOWN_PIDGIN
CPPFLAGS+= -std=c++20
Add essential compiler flags to CPPFLAGS
The C++ flags are missing important options needed for shared library compilation:
-CPPFLAGS ?= -O2 -g -pipe -Wall
+CPPFLAGS ?= -O2 -g -pipe -Wall -Wextra -Werror -fPIC
Also consider adding:
-Wconversion for implicit conversion warnings
-Wshadow for variable shadowing warnings
-Wno-unused-parameter to match C flags
Actionable comments posted: 4
🧹 Nitpick comments (1)
ratelimiter.cpp (1)
9-13: Add documentation for CommandData struct
The struct lacks documentation explaining its purpose and member variables. Consider adding Doxygen-style documentation similar to other documented entities in the file.
+/**
+ * @struct CommandData
+ * @brief Holds information for a scheduled command execution
+ */
 struct CommandData {
+    /** @brief Interval in milliseconds before executing the function */
     guint interval;
+    /** @brief Function to be executed */
     GSourceFunc function;
+    /** @brief User data passed to the function */
     gpointer data;
 };
📜 Review details
📒 Files selected for processing (1)
ratelimiter.cpp (1 hunks)
🔇 Additional comments (3)
ratelimiter.cpp (3)
22-22: Rename Command typedef to avoid type conflict
The Command typedef conflicts with the CommandData struct. Consider renaming it to EventCommand or QueuedCommand for clarity.
199-206: Fix race condition in destructor
The current destructor implementation has a race condition. Consider using the suggested approach from the previous review to set isRunning under a lock.
231-232: Use atomic for INTERVAL and smart pointer for EVENT_LOOP
The global INTERVAL variable is accessed from multiple threads without synchronization, and EVENT_LOOP uses manual memory management.
-static EventLoop* EVENT_LOOP = nullptr;
-static guint INTERVAL = 0;
+static std::unique_ptr<EventLoop> EVENT_LOOP = nullptr;
+static std::atomic<guint> INTERVAL{0};
Likely invalid or redundant comment.
ratelimiter.cpp
Outdated
guint rlimited_timeout_add(guint interval, GSourceFunc function, gpointer data) {
    if (!EVENT_LOOP) {
        g_warning("Rate limiter not initialized.");
        return 0;
    }

    EVENT_LOOP->enqueue([interval, function, data, INTERVAL]() {
        std::this_thread::sleep_for(std::chrono::milliseconds(interval ? interval : INTERVAL));
        if (function) {
            function(data);
        }
    });

    return 1; // Return a placeholder ID (or a meaningful one if task tracking is added).
}
🛠️ Refactor suggestion
Improve task scheduling mechanism
The current implementation has several limitations:
- No way to cancel scheduled tasks
- Sleep blocks the event loop thread
- Hardcoded return value doesn't allow task identification
Consider implementing a proper task scheduling system:
+class TaskId {
+ static std::atomic<guint> nextId{1};
+ guint id;
+public:
+ TaskId() : id(nextId++) {}
+ operator guint() const { return id; }
+};
guint rlimited_timeout_add(guint interval, GSourceFunc function, gpointer data) {
if (!EVENT_LOOP) {
g_warning("Rate limiter not initialized.");
return 0;
}
+ auto taskId = TaskId();
+ auto future = EVENT_LOOP->enqueueAsync([=]() {
+ std::this_thread::sleep_for(
+ std::chrono::milliseconds(interval ? interval : INTERVAL));
+ return function ? function(data) : FALSE;
+ });
- return 1;
+ return taskId;
}
Committable suggestion skipped: line range outside the PR's diff.
ratelimiter.cpp
Outdated
template<typename Func, typename... Args> inline auto enqueueSync(Func&& callable, Args&&...args)
{
    if (std::this_thread::get_id() == loopThread.get_id())
    {
        return std::invoke(
            std::forward<Func>(callable),
            std::forward<Args>(args)...);
    }

    using return_type = std::invoke_result_t<Func, Args...>;
    using packaged_task_type =
        std::packaged_task<return_type(Args&&...)>;

    packaged_task_type task(std::forward<Func>(callable));

    enqueue([&]
    {
        task(std::forward<Args>(args)...);
    });

    return task.get_future().get();
}
Potential deadlock in enqueueSync
The current implementation of enqueueSync captures arguments by reference in the lambda ([&]). This could lead to a deadlock or undefined behavior if the arguments go out of scope before the task executes.
- enqueue([&]
+ enqueue([task = std::move(task),
+ ...args = std::forward<Args>(args)...]() mutable
{
- task(std::forward<Args>(args)...);
+ task(std::forward<decltype(args)>(args)...);
});
Committable suggestion skipped: line range outside the PR's diff.
ratelimiter.cpp
Outdated
EVENT_LOOP->enqueue([interval, function, data, INTERVAL]() {
    std::this_thread::sleep_for(std::chrono::milliseconds(interval ? interval : INTERVAL));
    if (function) {
        function(data);
    }
});
💡 Codebase verification
Blocking sleep in event loop can impact message processing performance
The blocking sleep_for in the event loop is problematic as it:
- Blocks the event loop thread while waiting, preventing processing of other events
- Can cause delays in message sending and heartbeat operations which use the same event loop
- May lead to message queuing and potential timeouts if multiple rate-limited operations are scheduled
Recommendations:
- Replace blocking sleep with non-blocking timer or delayed execution mechanism
- Consider using async scheduling or dedicated timer queue instead of direct thread sleep
🔗 Analysis chain
Verify impact on message sending functionality
Given the PR objective mentioning issues with sending messages, the blocking sleep in the event loop might be causing delays or preventing message processing.
The initial results show several timeout-based operations related to message sending, including heartbeat and socket writes. Let's examine their implementations and the event loop structure to better understand the potential impact of the blocking sleep.
The previous results show the event loop using blocking sleep, but we need to understand the message processing architecture to determine if this is causing the reported issues. Let's examine queue implementations and other potential blocking operations.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for message sending related functions that might be affected
rg -l "message.*send|send.*message" --type cpp --type c
# Look for other uses of timeout_add to understand the impact
rg "timeout_add" --type cpp --type c
Length of output: 1316
Script:
#!/bin/bash
# Let's examine the event loop implementation and message sending functions
ast-grep --pattern 'discord_send_heartbeat($$$)'
ast-grep --pattern 'discord_socket_write_data_delay_cb($$$)'
# Check the implementation of EVENT_LOOP
rg -A 5 "class.*EVENT_LOOP" --type cpp
rg "EVENT_LOOP.*enqueue" -A 3 --type cpp
Length of output: 474
Script:
#!/bin/bash
# Let's check for concurrent message processing or queue implementations
rg -A 5 "class.*Queue|struct.*Queue" --type cpp
rg "process.*message|handle.*message" -A 3 --type cpp
# Look for other sleep operations that might affect message processing
rg "sleep|delay" -A 2 --type cpp
Length of output: 336
Are you sure this is a wise choice? The whole point is to ensure that fewer than X requests are sent per second, hence the decision to sleep before execution. Correct me if I'm wrong, but a non-blocking timer would lead us back to the naive solution of merely delaying execution, which can still lead to surpassing the rate limits in case of a sudden burst of requests.
@Metalhead33, you raise a valid concern about preventing bursts beyond the allowed rate limit. However, a non-blocking approach does not necessarily have to be naïve. Instead of relying on a single blocking sleep, you can use a queuing mechanism or token bucket approach that schedules requests only when tokens (representing allowable requests) are available. This setup ensures you never exceed X requests per second while still allowing the event loop to handle other operations concurrently.
When you block the entire event loop thread with sleep, you essentially stall all other scheduled tasks, including heartbeat operations and incoming messages, which can cause delays or even failures, especially when multiple tasks accumulate. By contrast, if you use a token bucket or scheduled timer approach in the background, you avoid blocking the main loop. Each incoming request is queued until a token is released or the timer triggers the next call, thereby respecting the same rate limit without pausing everything else.
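To make that concrete, here is a rough sketch (illustrative only; NonBlockingLimiter and its members are made-up names, not the PR's code) of a token bucket that queues callbacks instead of sleeping on the event-loop thread. refill() would be driven by a once-per-second timer such as g_timeout_add.

#include <deque>
#include <functional>
#include <mutex>

class NonBlockingLimiter {
    std::mutex mtx;
    std::deque<std::function<void()>> pending; // callbacks waiting for a token
    unsigned tokens;
    const unsigned maxTokens;
public:
    explicit NonBlockingLimiter(unsigned perSecond)
        : tokens(perSecond), maxTokens(perSecond) {}

    // Called from the event loop: run immediately if a token is free,
    // otherwise queue the callback. The caller is never blocked.
    void submit(std::function<void()> fn) {
        bool runNow = false;
        {
            std::lock_guard<std::mutex> lock(mtx);
            if (tokens > 0) { --tokens; runNow = true; }
            else pending.push_back(std::move(fn));
        }
        if (runNow) fn();
    }

    // Called once per second from a timer: restore the budget and drain
    // as many queued callbacks as the new budget allows.
    void refill() {
        std::deque<std::function<void()>> ready;
        {
            std::lock_guard<std::mutex> lock(mtx);
            tokens = maxTokens;
            while (tokens > 0 && !pending.empty()) {
                ready.push_back(std::move(pending.front()));
                pending.pop_front();
                --tokens;
            }
        }
        for (auto& fn : ready) fn(); // run outside the lock
    }
};

A burst larger than the per-second budget simply sits in the queue until the next refill, so the cap still holds without stalling heartbeats or incoming messages.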
5313a67 to a15a685
Actionable comments posted: 1
🧹 Nitpick comments (4)
ratelimiter.cpp (1)
41-45: Add timeout to waitForToken to prevent indefinite blocking
The waitForToken method could block indefinitely if tokens are never refilled. Consider adding a timeout parameter and returning a boolean to indicate success.
- void waitForToken() {
+ bool waitForToken(std::chrono::milliseconds timeout = std::chrono::seconds(30)) {
      std::unique_lock<std::mutex> lock(bucketMutex);
-     condVar.wait(lock, [this] { return tokens > 0; });
+     if (!condVar.wait_for(lock, timeout, [this] { return tokens > 0; })) {
+         return false;
+     }
      --tokens;
+     return true;
  }
libdiscord.c (3)
108-112: Documentation for rate limit calculation formula
The comment block explains the rate limit calculation formula but could be more descriptive. Consider adding more context about why these calculations are needed and what each variable represents.
 /*
+ Rate limit calculation formula:
+ xRateAllowedPerSecond = remaining requests / reset time
+ xRateAllowedRemaining = xRateAllowedPerSecond (tracks current remaining allowed requests)
+ xRateDelayPerRequest = (1 / xRateAllowedPerSecond) * 1000 (converts to milliseconds)
 */
114-128: Initialize rate limit constants with meaningful values
The initial rate limit values appear to be somewhat arbitrary. Consider adding comments explaining why these specific values were chosen.
-const int init_xrateLimit=40;
-const int init_xrateRemaining=10;
-const double init_xrateReset=55;
-const double init_xrateResetAfter=1;
-const int init_xRateAllowedPerSecond=15;
+// Initial rate limit values based on Discord's default limits
+const int init_xrateLimit = 40;             // Maximum number of requests allowed
+const int init_xrateRemaining = 10;         // Initial remaining requests
+const double init_xrateReset = 55;          // Time until rate limit resets
+const double init_xrateResetAfter = 1;      // Time after which rate limit resets
+const int init_xRateAllowedPerSecond = 15;  // Initial rate of allowed requests
10675-10675: Rate limiter initialization and cleanup
The code properly initializes and cleans up the rate limiter, but the initial delay value of 83ms seems arbitrary.
- initialize_rate_limiter(83);
+ /* Initialize rate limiter with 83ms delay (approximately 12 requests per second) */
+ initialize_rate_limiter(83);
Also applies to: 10689-10689
📜 Review details
📒 Files selected for processing (2)
libdiscord.c (8 hunks)
ratelimiter.cpp (1 hunks)
🔇 Additional comments (8)
ratelimiter.cpp (5)
9-13: Rename Command struct to avoid naming conflict
The struct Command conflicts with the Command typedef in EventLoop. This can lead to ambiguity and confusion.
205-224: Add error handling to prevent event loop crashes
The command execution loop lacks error handling, which could cause the event loop to crash if a command throws an exception.
233-239: Fix potential race condition in destructor
The current implementation has a race condition during destruction. Setting isRunning through the event loop can lead to unexpected behavior.
293-307 (🛠️ Refactor suggestion): Improve return value handling in rlimited_timeout_add
The function always returns 1 (except on error), which doesn't provide meaningful task identification. This makes it impossible to cancel or track specific tasks.
Consider implementing a proper task ID system:
+static std::atomic<guint> nextTaskId{1};
+
 guint rlimited_timeout_add(guint interval, GSourceFunc function, gpointer data) {
     if (!EVENT_LOOP || !TOKEN_BUCKET) {
         g_warning("Rate limiter not initialized.");
         return 0;
     }
+    guint taskId = nextTaskId++;
     EVENT_LOOP->enqueue([interval, function, data]() {
         TOKEN_BUCKET->waitForToken(); // Wait until a token is available
         if (function) {
             function(data);
         }
     });
-    return 1; // Return a placeholder ID
+    return taskId;
 }
Likely invalid or redundant comment.
299-304: Blocking waitForToken may cause message sending issues
The current implementation blocks the event loop while waiting for a token. This could be contributing to the reported message sending issues by preventing other operations from being processed.
libdiscord.c (3)
1501-1507: LGTM: Rate limit header parsing function
The parse_rate_limit_headers function is well-structured and correctly extracts the rate limit headers from the response.
6337-6337: Use of rlimited_timeout_add for rate limiting
The code uses rlimited_timeout_add for rate limiting, but there's no error handling if the rate limiter fails. Consider adding error handling.
- rlimited_timeout_add(1000, discord_socket_write_data_delay_cb, info);
+ if (!rlimited_timeout_add(1000, discord_socket_write_data_delay_cb, info)) {
+     purple_debug_error("discord", "Failed to add rate limited timeout\n");
+     g_free(info);
+     return;
+ }
1475-1500 (🛠️ Refactor suggestion): Rate limit update function needs improvement
The UpdateRateLimits function has several issues that should be addressed:
- No bounds checking on numeric conversions
- Magic number 4.0 multiplier without explanation
- Debug messages could be more descriptive
 static void UpdateRateLimits(const gchar *xrateLimitS, const gchar *xrateRemainingS, const gchar *xRateReset, const gchar *xRateResetAfter) {
-    if(!xrateLimitS || !xrateRemainingS || !xRateReset || !xRateResetAfter) return;
+    if(!xrateLimitS || !xrateRemainingS || !xRateReset || !xRateResetAfter) {
+        purple_debug_warning("discord", "Missing required rate limit headers\n");
+        return;
+    }
+    /* Parse rate limit values with error checking */
+    gint64 limit = g_ascii_strtoll(xrateLimitS, NULL, 10);
+    if (limit <= 0 || limit > G_MAXINT) {
+        purple_debug_warning("discord", "Invalid rate limit value: %s\n", xrateLimitS);
+        return;
+    }
+    xrateLimit = (int)limit;
-    xrateLimit=atoi(xrateLimitS);
     purple_debug_info("discord", "X-RateLimit-Limit: %s\n", xrateLimitS);
     xrateRemaining=atoi(xrateRemainingS);
     purple_debug_info("discord", "X-RateLimit-Remaining: %s\n", xrateRemainingS);
     xrateReset=atof(xRateReset);
     purple_debug_info("discord", "X-RateLimit-Reset: %s\n", xRateReset);
-    xrateResetAfter=atof(xRateResetAfter) * 4.0;
+    /* Apply safety margin multiplier to reset time */
+    const double RESET_SAFETY_MARGIN = 4.0;
+    xrateResetAfter = atof(xRateResetAfter) * RESET_SAFETY_MARGIN;
     purple_debug_info("discord", "X-RateLimit-Reset-After: %s\n", xRateResetAfter);
Likely invalid or redundant comment.
ratelimiter.cpp
Outdated
std::thread([]() {
    while (EVENT_LOOP) {
        std::this_thread::sleep_for(std::chrono::seconds(1));
        if (TOKEN_BUCKET) {
            TOKEN_BUCKET->refill();
        }
    }
}).detach();
}
Fix thread lifecycle management
The detached thread continues running and accessing potentially deleted objects after stop_rate_limiter is called. This can lead to crashes.
- std::thread([]() {
- while (EVENT_LOOP) {
- std::this_thread::sleep_for(std::chrono::seconds(1));
- if (TOKEN_BUCKET) {
- TOKEN_BUCKET->refill();
- }
- }
- }).detach();
+ static std::atomic<bool> refillThreadRunning{false};
+ static std::thread refillThread;
+
+ if (refillThreadRunning.exchange(true)) {
+ return; // Thread already running
+ }
+
+ refillThread = std::thread([&refillThreadRunning]() {
+ while (refillThreadRunning) {
+ std::this_thread::sleep_for(std::chrono::seconds(1));
+ if (TOKEN_BUCKET) {
+ TOKEN_BUCKET->refill();
+ }
+ }
+ });
+ refillThread.detach();
Then update stop_rate_limiter:
void stop_rate_limiter() {
+ refillThreadRunning = false;
+ std::this_thread::sleep_for(std::chrono::seconds(1)); // Allow thread to complete
if (EVENT_LOOP) {
delete EVENT_LOOP;
EVENT_LOOP = nullptr;
}
if (TOKEN_BUCKET) {
delete TOKEN_BUCKET;
TOKEN_BUCKET = nullptr;
}
}
Committable suggestion skipped: line range outside the PR's diff.
Actionable comments posted: 1
♻️ Duplicate comments (5)
ratelimiter.cpp (5)
9-13 (⚠️ Potential issue): Rename Command struct to avoid naming conflict.
The struct name conflicts with the EventLoop::Command typedef defined later, which could lead to confusion and potential issues.
-struct Command {
+struct RateLimitCommand {
     guint interval;
     GSourceFunc function;
     gpointer data;
 };
171-174 (⚠️ Potential issue): Fix potential deadlock in enqueueSync.
The lambda captures arguments by reference ([&]), which could lead to a deadlock if the arguments go out of scope before task execution.
- enqueue([&]
+ enqueue([task = std::move(task)]
  {
-     task(std::forward<Args>(args)...);
+     task(std::forward<Args>(args)...);
  });
233-239 (⚠️ Potential issue): Fix race condition in destructor.
Setting isRunning = false through the event loop can lead to a race condition if new commands are enqueued between lines 234 and 238.
 EventLoop::~EventLoop() {
-    enqueue([this]
-    {
-        isRunning = false;
-    });
+    {
+        std::lock_guard<std::mutex> lockguard(commandsMutex);
+        isRunning = false;
+        writeBuffer.clear();
+    }
+    condVar.notify_all();
     loopThread.join();
 }
218-221 (⚠️ Potential issue): Add error handling in command execution loop.
Commands are executed without any error handling, which could cause the event loop to crash if a command throws an exception.
 for (Command& func : readBuffer) {
+    try {
         func();
+    } catch (const std::exception& e) {
+        g_warning("Command execution failed: %s", e.what());
+    } catch (...) {
+        g_warning("Command execution failed with unknown error");
+    }
 }
264-268 (🛠️ Refactor suggestion): Use smart pointers for global variables.
Manual memory management with raw pointers could lead to memory leaks. Consider using smart pointers for automatic cleanup.
-static EventLoop* EVENT_LOOP = nullptr;
-static TokenBucket* TOKEN_BUCKET = nullptr;
+static std::unique_ptr<EventLoop> EVENT_LOOP = nullptr;
+static std::unique_ptr<TokenBucket> TOKEN_BUCKET = nullptr;
🧹 Nitpick comments (1)
ratelimiter.cpp (1)
26-33: Improve const correctness in TokenBucket::acquire.
The method doesn't modify the object's logical state (only synchronization primitives), so it should be marked as const.
- bool acquire() {
+ bool acquire() const {
      std::unique_lock<std::mutex> lock(bucketMutex);
      if (tokens > 0) {
          --tokens;
          return true;
      }
      return false;
  }
📜 Review details
📒 Files selected for processing (1)
ratelimiter.cpp (1 hunks)
🔇 Additional comments (2)
ratelimiter.cpp (2)
277-286 (⚠️ Potential issue): Fix thread safety issues in refill thread.
The refill thread uses a dangling reference to refillThreadRunning and continues running after EVENT_LOOP is deleted.
- refillThread = std::thread([&refillThreadRunning]() {
+ refillThread = std::thread([refillThreadRunning = &refillThreadRunning]() {
-     while (EVENT_LOOP) {
+     while (*refillThreadRunning) {
          std::this_thread::sleep_for(std::chrono::seconds(1));
          if (TOKEN_BUCKET) {
              TOKEN_BUCKET->refill();
          }
      }
  });
- refillThread.detach();
+ refillThread.detach(); // Consider managing thread lifecycle with join instead
Likely invalid or redundant comment.
308-311: Review impact on message processing performance.
The blocking waitForToken() call can prevent processing of other events, potentially causing the message sending issues mentioned in the PR objectives. Consider implementing a non-blocking token acquisition mechanism or a separate queue for rate-limited operations to avoid blocking the main event loop.
ratelimiter.cpp
Outdated
guint rlimited_timeout_add(guint interval, GSourceFunc function, gpointer data) {
    if (!EVENT_LOOP || !TOKEN_BUCKET) {
        g_warning("Rate limiter not initialized.");
        return 0;
    }

    EVENT_LOOP->enqueue([interval, function, data]() {
        TOKEN_BUCKET->waitForToken(); // Wait until a token is available
        if (function) {
            function(data);
        }
    });

    return 1; // Return a placeholder ID (or a meaningful one if task tracking is added).
}
💡 Codebase verification
🛠️ Refactor suggestion
Integrate with GLib/Purple timeout system for consistency
The current implementation uses a custom event loop that doesn't integrate with the codebase's timeout management system (GLib/Purple). This creates inconsistencies:
- No support for task cancellation via purple_timeout_remove
- Incompatible with the source ID system used by GLib/Purple timeouts
- Custom event loop only used in this file
Consider refactoring to use the existing timeout system while maintaining rate-limiting functionality.
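As a rough illustration of that direction (a sketch under assumptions; rate_spaced_timeout_add and the two static variables are hypothetical names, not the plugin's code), requests could be spaced out with plain g_timeout_add so callers still receive a real GLib source id that g_source_remove() can cancel:

#include <glib.h>

static gint64 next_slot_us = 0;   /* earliest time the next request may fire (monotonic time in microseconds) */
static guint  spacing_ms   = 83;  /* minimum gap between requests */

/* Schedule function(data) no earlier than the next free slot and return the
 * GLib source id, so existing source-id-based cancellation keeps working. */
static guint rate_spaced_timeout_add(GSourceFunc function, gpointer data)
{
    gint64 now  = g_get_monotonic_time();  /* microseconds */
    gint64 slot = MAX(now, next_slot_us);
    next_slot_us = slot + (gint64)spacing_ms * 1000;

    guint delay_ms = (guint)((slot - now) / 1000);
    return g_timeout_add(delay_ms ? delay_ms : 1, function, data);
}

The scheduled callback is an ordinary GSourceFunc, so returning FALSE keeps it one-shot, just like the g_timeout_add calls the plugin already uses.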
🔗 Analysis chain
Improve task scheduling mechanism.
The current implementation has several limitations:
- No way to cancel scheduled tasks
- Hardcoded return value doesn't allow task identification
- No task completion tracking
Consider implementing a proper task scheduling system with unique task IDs and cancellation support.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Check for any existing task cancellation mechanisms
rg "cancel.*task|remove.*timeout" --type cpp
Length of output: 115
Script:
#!/bin/bash
# Check for timeout remove implementation and usage
rg "purple_timeout_remove|g_source_remove" -A 3 --type cpp
# Look for timeout add implementations to understand the pattern
ast-grep --pattern 'guint $FUNC($$$) {
$$$
}'
Length of output: 602
Script:
#!/bin/bash
# Check for GLib timeout usage patterns
rg "g_timeout_add|g_timeout_add_seconds" -A 2 --type cpp
# Look for EVENT_LOOP usage
rg "EVENT_LOOP" --type cpp
Length of output: 733
It still doesn't work correctly. Messages are not being sent AT ALL.
…zing up Pidgin. Weird.
Actionable comments posted: 4
🧹 Nitpick comments (1)
ratelimiter.cpp (1)
9-13: Remove unused struct RateLimitCommand.
The struct appears to be unused throughout the codebase. Consider removing it to avoid confusion, as the same parameters are directly used in the rlimited_timeout_add function.
📜 Review details
📒 Files selected for processing (1)
ratelimiter.cpp (1 hunks)
🔇 Additional comments (2)
ratelimiter.cpp (2)
173-176: Fix potential deadlock in enqueueSync lambda capture.
The lambda captures arguments by value, which is correct, but the implementation could be improved for better exception safety.
220-223: Add error handling in command execution loop.
Commands are executed without any error handling, which could cause the event loop to crash if a command throws an exception.
ratelimiter.cpp
Outdated
        }
    }
});
refillThread.detach();
Fix thread lifecycle management inconsistency.
The thread is detached on line 293 but there's an attempt to join it on line 300. This will cause undefined behavior as you cannot join a detached thread.
- refillThread.detach();
Also applies to: 300-300
g_warning("Rate limiter not initialized.");
initialize_rate_limiter(15);
}
Remove automatic initialization with hardcoded rate limit.
The function automatically initializes the rate limiter with a hardcoded value of 15 requests per second. This could be dangerous as different Discord API endpoints have different rate limits.
- initialize_rate_limiter(15);
+ return 0; // Return error when rate limiter is not initialized
This auto-initialization was added for test purposes, as I have discovered that the rate limiter was not being initialized at all (which was the source of the problem of no messages being sent), and I want to prioritize fixing other issues over finding out why it was not being initialized.
@Metalhead33 I see that the rate limiter is supposed to be initialized in libpurple3_plugin_load with a default value of 83 requests per second. The fact that you needed to add auto-initialization suggests that this initialization isn't happening. Would you like me to help investigate why the plugin's initialization function isn't being called? This might be the root cause of your messaging issues.
🔗 Analysis chain
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for potential initialization points in the codebase
rg -A 5 "initialize_rate_limiter" --type c --type cpp
# Look for Discord plugin initialization
ast-grep --pattern 'plugin_init($$$)'
# Look for Discord plugin load/unload functions
rg -A 5 "(plugin_load|plugin_unload)" --type c --type cpp
Length of output: 4219
✏️ Learnings added
Learnt from: Metalhead33
PR: EionRobb/purple-discord#467
File: ratelimiter.cpp:314-316
Timestamp: 2025-01-10T21:43:00.832Z
Learning: The Discord plugin's rate limiter is initialized during plugin load with a default value of 83 requests per second in `libpurple3_plugin_load`, and is properly cleaned up in `libpurple3_plugin_unload`. The rate limit values are later adjusted based on Discord's response headers through `UpdateRateLimits`.
Note: Learnings are effective only in the context of similar code segments. To apply general coding guidelines, please configure review instructions. You can manage existing learnings in the app.
Actionable comments posted: 2
📜 Review details
📒 Files selected for processing (1)
ratelimiter.cpp (1 hunks)
🧰 Additional context used
📓 Learnings (1)
ratelimiter.cpp (1)
Learnt from: Metalhead33
PR: EionRobb/purple-discord#467
File: ratelimiter.cpp:314-316
Timestamp: 2025-01-10T21:43:00.963Z
Learning: The Discord plugin's rate limiter is initialized during plugin load with a default value of 83 requests per second in `libpurple3_plugin_load`, and is properly cleaned up in `libpurple3_plugin_unload`. The rate limit values are later adjusted based on Discord's response headers through `UpdateRateLimits`.
🔇 Additional comments (5)
ratelimiter.cpp (5)
330-330 (⚠️ Potential issue): Implement proper task ID generation.
Returning a constant value of 1 makes it impossible to cancel specific tasks.
- return 1;
+ static std::atomic<guint> nextId{1};
+ return nextId++;
Likely invalid or redundant comment.
220-223 (⚠️ Potential issue): Add error handling in command execution.
Commands are executed without any error handling, which could crash the event loop.
 for (Command& func : readBuffer) {
+    try {
         func();
+    } catch (const std::exception& e) {
+        g_warning("Command execution failed: %s", e.what());
+    } catch (...) {
+        g_warning("Command execution failed with unknown error");
+    }
 }
Likely invalid or redundant comment.
57-57 (⚠️ Potential issue): Rename Command typedef to avoid name collision.
The Command typedef overshadows the RateLimitCommand struct.
- typedef std::function<void()> Command;
+ typedef std::function<void()> EventCommand;
Likely invalid or redundant comment.
43-47 (⚠️ Potential issue): Add timeout mechanism to waitForToken.
The current implementation can deadlock if tokens are never refilled.
- void waitForToken() {
+ bool waitForToken(std::chrono::milliseconds timeout = std::chrono::seconds(30)) {
      std::unique_lock<std::mutex> lock(bucketMutex);
-     condVar.wait(lock, [this] { return tokens > 0; });
+     if (!condVar.wait_for(lock, timeout, [this] { return tokens > 0; })) {
+         g_warning("Timeout waiting for rate limit token");
+         return false;
+     }
      --tokens;
+     return true;
  }
Likely invalid or redundant comment.
284-293 (⚠️ Potential issue): Fix thread lifecycle management.
The detached thread continues running and accessing potentially deleted objects after stop_rate_limiter is called.
- refillThread = std::thread([refillThreadRunning = &refillThreadRunning]() {
+ refillThread = std::thread([refillThreadRunning = &refillThreadRunning, bucket = TOKEN_BUCKET.get()]() {
      while (*refillThreadRunning) {
          std::this_thread::sleep_for(std::chrono::seconds(1));
-         if (TOKEN_BUCKET) {
+         if (bucket) {
              g_info("Refilling Discord rate-limiter token bucket.");
-             TOKEN_BUCKET->refill();
+             bucket->refill();
          }
      }
  });
- refillThread.detach();
Likely invalid or redundant comment.
explicit TokenBucket(guint rateLimitPerSecond)
    : tokens(rateLimitPerSecond), maxTokens(rateLimitPerSecond), interval(1000 / rateLimitPerSecond) {
    g_info("Initializing token bucket with an interval of %d tokens per second.", rateLimitPerSecond);
}
Add overflow check for interval calculation.
The interval calculation 1000 / rateLimitPerSecond could result in integer division issues or overflow for very small/large values.
explicit TokenBucket(guint rateLimitPerSecond)
- : tokens(rateLimitPerSecond), maxTokens(rateLimitPerSecond), interval(1000 / rateLimitPerSecond) {
+ : tokens(rateLimitPerSecond), maxTokens(rateLimitPerSecond),
+ interval(rateLimitPerSecond > 0 ? 1000 / rateLimitPerSecond : 1000) {
g_info("Initializing token bucket with an interval of %d tokens per second.", rateLimitPerSecond);
}
ratelimiter.cpp
Outdated
g_info("Enquing deferred function execution on the rate limiter!\nInterval: %d\nFunction pointer:%p\nUser data pointer:%p\n", interval, (void*)function, (void*)data); | ||
EVENT_LOOP->enqueue([interval, function, data]() { | ||
TOKEN_BUCKET->waitForToken(); // Wait until a token is available | ||
if (function) { | ||
function(data); | ||
} | ||
}); |
Check waitForToken result and handle errors.
The waitForToken result is not checked, which could lead to executing functions when rate limits are exceeded.
EVENT_LOOP->enqueue([interval, function, data]() {
- TOKEN_BUCKET->waitForToken();
+ if (!TOKEN_BUCKET->waitForToken()) {
+ g_warning("Failed to acquire rate limit token, skipping function execution");
+ return;
+ }
if (function) {
function(data);
}
});
Committable suggestion skipped: line range outside the PR's diff.
Proper ID generation. Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
This is kind of weird. It seems to work as expected when I open it in the command line, but when I open Pidgin from the GUI, messages are not being sent. Very unusual. It also crashes when I open "debug logs".
Nope, upon more recent testing, it appears to work. But it crashes randomly when receiving a message.
Actionable comments posted: 1
📜 Review details
📒 Files selected for processing (1)
ratelimiter.cpp (1 hunks)
🧰 Additional context used
📓 Learnings (1)
ratelimiter.cpp (1)
Learnt from: Metalhead33
PR: EionRobb/purple-discord#467
File: ratelimiter.cpp:314-316
Timestamp: 2025-01-10T21:43:00.963Z
Learning: The Discord plugin's rate limiter is initialized during plugin load with a default value of 83 requests per second in `libpurple3_plugin_load`, and is properly cleaned up in `libpurple3_plugin_unload`. The rate limit values are later adjusted based on Discord's response headers through `UpdateRateLimits`.
🔇 Additional comments (5)
ratelimiter.cpp (5)
272-277: LGTM! Good use of smart pointers and thread safety.
The global state management is well-implemented, with proper memory management using smart pointers and thread safety using a recursive mutex.
313-342: LGTM! Well-implemented rate-limited command execution.
The function has proper error handling, uses promises for async operation, and correctly integrates with GLib's timeout system.
223-226 (⚠️ Potential issue): Add error handling in command execution loop.
Commands are executed without any error handling, which could crash the event loop thread.
 for (Command& func : readBuffer) {
+    try {
         func();
+    } catch (const std::exception& e) {
+        g_warning("Command execution failed: %s", e.what());
+    } catch (...) {
+        g_warning("Command execution failed with unknown error");
+    }
 }
Likely invalid or redundant comment.
23-26 (⚠️ Potential issue): Add input validation and overflow protection in TokenBucket constructor.
The constructor needs validation for the input parameter and protection against division by zero or overflow.
 explicit TokenBucket(guint rateLimitPerSecond)
-    : tokens(rateLimitPerSecond), maxTokens(rateLimitPerSecond), interval(1000 / rateLimitPerSecond) {
+    : tokens(rateLimitPerSecond), maxTokens(rateLimitPerSecond),
+      interval(rateLimitPerSecond > 0 ? 1000 / rateLimitPerSecond : 1000) {
+    if (rateLimitPerSecond == 0) {
+        g_warning("Invalid rate limit of 0 requests per second, defaulting to 1");
+    }
     g_info("Initializing token bucket with an interval of %d tokens per second.", rateLimitPerSecond);
 }
Likely invalid or redundant comment.
316-318 (⚠️ Potential issue): Remove auto-initialization and investigate plugin initialization.
Based on the learnings, the rate limiter should be initialized during plugin load with a default value of 83 requests per second. Auto-initialization with a hardcoded value of 15 could cause API issues.
Let's investigate why the plugin's initialization isn't working:
✅ Verification successful
Remove redundant auto-initialization to prevent API issues
The rate limiter is already properly initialized:
- During plugin load with correct value (83)
- Dynamically adjusted based on server responses
The auto-initialization with value 15 should be removed as it:
- Uses incorrect rate limit value
- Could mask initialization failures that should be fixed
- Might interfere with dynamic rate limiting
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Check plugin initialization
rg -A 5 "initialize_rate_limiter" --type c
# Look for plugin load function
ast-grep --pattern 'libpurple3_plugin_load($$$)'
Length of output: 920
Script:
#!/bin/bash
# Find the plugin initialization function and its context
rg -B 5 -A 10 "initialize_rate_limiter\(83\)" --type c
# Check for rate limiter usage to understand the execution flow
rg "g_warning.*Rate limiter not initialized" --type c -B 5
Length of output: 637
Script:
#!/bin/bash
# Search for any rate limiter related warnings with broader pattern
rg -B 5 "warning.*[Rr]ate.*[Ll]imiter" --type c
# Also check for g_warning calls near rate limiter usage
rg -B 5 "g_warning.*[Rr]ate" --type c
Length of output: 501
 * @var writeBuffer
 * @brief Buffer to store commands waiting to be executed.
 */
std::vector<Command> writeBuffer;
🛠️ Refactor suggestion
Consider adding a maximum queue size limit.
The writeBuffer vector could grow unbounded, potentially causing memory issues under high load.
-std::vector<Command> writeBuffer;
+std::vector<Command> writeBuffer;
+static constexpr size_t MAX_QUEUE_SIZE = 1000;
Then in enqueue methods:
void enqueue(Command&& callable) {
std::lock_guard<std::mutex> lockguard(commandsMutex);
+ if (writeBuffer.size() >= MAX_QUEUE_SIZE) {
+ g_warning("Command queue full, dropping command");
+ return;
+ }
writeBuffer.emplace_back(std::move(callable));
}
Committable suggestion skipped: line range outside the PR's diff.
It appears relatively stable now that I have reintegrated the original g_timeout_add. Weird.
@EionRobb Can we have autobuilds made of this version too? Some people might be willing to try it out.
Experimental pull request. Not ready yet: current testing reveals a complete inability to send messages. Otherwise, it appears to function, because it loads buddies and servers.