Rate limits event loop #467

Open · wants to merge 21 commits into base: master
Conversation

@Metalhead33 Metalhead33 commented Dec 20, 2024

Experimental pull request. Not ready yet: current testing reveals a complete inability to send messages. Otherwise it appears to function: buddies and servers load correctly.

Summary by CodeRabbit

  • New Features

    • Introduced rate limiting functionality for Discord API requests.
    • Added support for C++20 in the build process.
    • Implemented an event-driven architecture for managing timed command execution.
  • Bug Fixes

    • Enhanced control flow and error handling related to API rate limits.
  • Documentation

    • New header file for rate limiting functionality added.
  • Chores

    • Updated build configuration to include C++ files and standards.

coderabbitai bot commented Dec 20, 2024

Walkthrough

The pull request introduces a comprehensive rate limiting mechanism for Discord API interactions across multiple files. The changes involve updating the Makefile to support C++20 compilation, modifying libdiscord.c to implement rate limit tracking, and adding two new files (ratelimiter.cpp and ratelimiter.h) to manage asynchronous command execution and rate limiting. The implementation focuses on handling Discord API request rate limits more efficiently by tracking and respecting API rate limit headers.

Changes

File Change Summary
Makefile - Added CPPFLAGS with -std=c++20 flag
- Updated CFLAGS with -std=c99 flag
- Defined CXX for Windows and non-Windows environments
- Added ratelimiter.cpp to PURPLE_C_FILES
libdiscord.c - Added rate limit header parsing functions
- Introduced rate limit tracking variables
- Updated discord_fetch_url_with_method_delay to use new rate limiting logic
- Modified discord_response_callback to process rate limit headers
ratelimiter.cpp - Implemented TokenBucket class for rate limiting
- Enhanced EventLoop class for async command execution
- Added global functions for rate limiter initialization and management
ratelimiter.h - Declared functions for rate limiter initialization
- Provided interface for stopping rate limiter
- Defined rlimited_timeout_add function

Sequence Diagram

sequenceDiagram
    participant Client
    participant LibDiscord
    participant RateLimiter
    participant DiscordAPI

    Client->>LibDiscord: Make API Request
    LibDiscord->>RateLimiter: Check Rate Limits
    RateLimiter->>DiscordAPI: Send Request
    DiscordAPI-->>RateLimiter: Response with Headers
    RateLimiter->>RateLimiter: Update Rate Limit Tracking
    RateLimiter->>LibDiscord: Allow/Delay Next Request
    LibDiscord->>Client: API Response

Poem

🐰 Rate limits dance, a digital ballet,
Requests hop and skip with measured sway,
C++ and C, in harmony they sing,
Discord's rhythm, a synchronized thing!
Code rabbits rejoice, efficiency's delight! 🚀


@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 7

🔭 Outside diff range comments (1)
Makefile (1)

Line range hint 123-124: Link against C++ standard library

The build commands are missing explicit linking against the C++ standard library.

-	$(CC) -fPIC $(CFLAGS) $(CPPFLAGS) -shared -o $@ $^ $(LDFLAGS) `$(PKG_CONFIG) purple glib-2.0 json-glib-1.0 zlib --libs --cflags`  $(INCLUDES) -Ipurple2compat -g -ggdb
+	$(CXX) -fPIC $(CFLAGS) $(CPPFLAGS) -shared -o $@ $^ $(LDFLAGS) `$(PKG_CONFIG) purple glib-2.0 json-glib-1.0 zlib --libs --cflags`  $(INCLUDES) -Ipurple2compat -g -ggdb -lstdc++

-	$(CC) -fPIC $(CFLAGS) $(CPPFLAGS) -shared -o $@ $^ $(LDFLAGS) `$(PKG_CONFIG) purple-3 glib-2.0 json-glib-1.0 zlib --libs --cflags` $(INCLUDES)  -g -ggdb
+	$(CXX) -fPIC $(CFLAGS) $(CPPFLAGS) -shared -o $@ $^ $(LDFLAGS) `$(PKG_CONFIG) purple-3 glib-2.0 json-glib-1.0 zlib --libs --cflags` $(INCLUDES)  -g -ggdb -lstdc++

Also applies to: 126-127

🧹 Nitpick comments (9)
ratelimiter.cpp (4)

64-70: Non-movable and non-copyable approach
The EventLoop class is declared as non-copyable and non-movable, but there’s no strong reason to forbid moving in many concurrency scenarios.

Consider allowing move semantics if that simplifies usage. For instance, you might move the thread or other resources. Confirm that concurrency correctness is preserved before enabling moves.


169-171: Thread safety with blocking operations
The loopFunction is designed to handle indefinite waits for commands, but if commands contain blocking operations (like sleeps), the entire event loop can be stalled.

If blocking is expected, consider splitting the event loop or using a separate worker thread pool to avoid unpredictably suspending time-critical dispatch.


250-255: Using std::this_thread::sleep_for in the main event loop
This might unintentionally block your event thread. Additionally, if interval is large, the entire thread remains blocked, preventing the processing of other tasks.

Consider scheduling tasks on a separate timer mechanism (e.g., a dedicated timed tasks queue) or using non-blocking wait strategies for better concurrency.
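The "dedicated timed tasks queue" suggested here is essentially a deadline-ordered min-heap that the event loop polls each iteration instead of sleeping. A minimal single-threaded sketch (hypothetical names, not the PR's code):

```cpp
#include <cassert>
#include <chrono>
#include <functional>
#include <queue>
#include <vector>

using Clock = std::chrono::steady_clock;

// A scheduled task: fires at `deadline`, runs `fn`.
struct TimedTask {
    Clock::time_point deadline;
    std::function<void()> fn;
    bool operator>(const TimedTask& o) const { return deadline > o.deadline; }
};

// Min-heap keyed on deadline: the earliest task is always on top.
class TimerQueue {
    std::priority_queue<TimedTask, std::vector<TimedTask>,
                        std::greater<TimedTask>> heap;
public:
    void schedule(Clock::duration delay, std::function<void()> fn) {
        heap.push({Clock::now() + delay, std::move(fn)});
    }
    // Run every task whose deadline has passed; returns how many ran.
    // An event loop would call this each iteration instead of sleeping,
    // staying free to dispatch other commands in between.
    int runDue(Clock::time_point now = Clock::now()) {
        int ran = 0;
        while (!heap.empty() && heap.top().deadline <= now) {
            auto fn = heap.top().fn;
            heap.pop();
            fn();
            ++ran;
        }
        return ran;
    }
};
```

The loop's condition-variable wait can then use `wait_until` with the earliest deadline, so no thread is ever parked in a plain sleep.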


257-258: Placeholder return value
The function returns 1 unconditionally. If the caller needs a handle or an ID to manage the timeout, returning a meaningful identifier helps track or cancel the scheduled command later.

Refactor the function to store and return an actual handle for the scheduled task. For example:

static std::atomic_uint next_id = 1U;
// ...
return next_id++;
libdiscord.c (3)

108-128: Rate limit constants and variables need documentation

The code introduces several rate limit related constants and variables but lacks documentation explaining their purpose and usage. Consider adding comments to explain:

  • What each variable represents
  • The units of measurement (seconds, milliseconds, etc.)
  • Any constraints or valid ranges
  • How they interact with each other
+/* Rate limit constants and state */
+
+/* Initial rate limit values */
 const int init_xrateLimit=40;        /* Maximum number of requests allowed */
 const int init_xrateRemaining=10;    /* Remaining requests in current window */
 const double init_xrateReset=55;     /* Time until rate limit resets (seconds) */
 const double init_xrateResetAfter=1; /* Time after which rate limit resets (seconds) */
 const int init_xRateAllowedPerSecond=12;  /* Calculated requests per second */
 const int init_xRateDelayPerRequest=(int)((1.0 / (double)init_xRateAllowedPerSecond) * 1000.0); /* Delay between requests (ms) */
 const int init_xRateAllowedRemaining=10;   /* Remaining allowed requests */

+/* Current rate limit state */
 static int xrateLimit=init_xrateLimit;
 static int xrateRemaining=init_xrateRemaining;
 static double xrateReset=init_xrateReset;
 static double xrateResetAfter=init_xrateResetAfter;
 static int xRateAllowedPerSecond=init_xRateAllowedPerSecond;
 static int xRateDelayPerRequest=init_xRateDelayPerRequest;
 static int xRateAllowedRemaining=init_xRateAllowedRemaining;
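The `init_xRateDelayPerRequest` expression above is just the inverse of the allowed rate converted to milliseconds. A quick sanity check of that arithmetic (a sketch restating the formula, not the PR's actual code):

```cpp
#include <cassert>

// Per-request delay in milliseconds derived from an allowed
// requests-per-second rate, mirroring the init_xRateDelayPerRequest
// expression: (1 / allowedPerSecond) * 1000, truncated to int.
constexpr int delay_per_request_ms(int allowed_per_second) {
    return static_cast<int>((1.0 / static_cast<double>(allowed_per_second)) * 1000.0);
}
```

At 12 requests per second this truncates to 83 ms, which matches the hardcoded `initialize_rate_limiter(83)` call discussed below.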

1695-1695: Rate limit delay calculation needs review

The delay calculation MAX(65,xRateDelayPerRequest) uses a magic number 65 without explanation. Consider documenting why this minimum delay is needed.

-        rlimited_timeout_add(delay + MAX(65,xRateDelayPerRequest), discord_fetch_url_with_method_delay_cb, request);
+        /* Minimum delay of 65ms to prevent overwhelming the server */
+        const int MIN_REQUEST_DELAY_MS = 65;
+        rlimited_timeout_add(delay + MAX(MIN_REQUEST_DELAY_MS, xRateDelayPerRequest), 
+                           discord_fetch_url_with_method_delay_cb, request);

10675-10675: Initial rate limiter initialization

The code initializes the rate limiter with a hardcoded value of 83ms. Consider making this configurable or documenting why this specific value was chosen.

+    /* Initialize rate limiter with default delay of 83ms (approx 12 requests/sec) */
+    const int DEFAULT_RATE_LIMIT_DELAY_MS = 83;
-    initialize_rate_limiter(83);
+    initialize_rate_limiter(DEFAULT_RATE_LIMIT_DELAY_MS);
ratelimiter.h (1)

8-10: Add documentation for the public API functions

The header file introduces critical rate limiting functionality but lacks documentation. Please add:

  • Function descriptions
  • Parameter explanations
  • Thread safety guarantees
  • Resource ownership details
  • Error handling behavior

Example format:

/**
 * initialize_rate_limiter:
 * @interval: The rate limiting interval in milliseconds
 *
 * Initializes the rate limiting system. Must be called before any
 * rlimited_timeout_add calls. Not thread-safe.
 */
Makefile (1)

113-113: Rename PURPLE_C_FILES to reflect mixed C/C++ content

The variable name suggests C files only but includes C++ files.

-PURPLE_C_FILES := ratelimiter.cpp libdiscord.c $(C_FILES)
+PURPLE_SOURCE_FILES := ratelimiter.cpp libdiscord.c $(C_FILES)
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between c5dfbe0 and d25bfd0.

📒 Files selected for processing (4)
  • Makefile (3 hunks)
  • libdiscord.c (8 hunks)
  • ratelimiter.cpp (1 hunks)
  • ratelimiter.h (1 hunks)
🔇 Additional comments (7)
ratelimiter.cpp (1)

94-99: Potential confusion around rvalue references vs. const references
The enqueue(Command&&) correctly applies std::move. Meanwhile, the other enqueue variant takes a const reference but still uses std::move.

Check if the code compiles as intended. Typically, applying std::move on a const reference doesn’t move anything. The actual effect is a copy. If this is intentional, it might be worth clarifying in comments. Otherwise, consider overloading for both lvalues and rvalues in a consistent manner.
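The copy-versus-move behavior described here is easy to demonstrate with a counting type (hypothetical, not the PR's `Command`): `std::move` on a `const` reference yields a `const` rvalue, which only the copy constructor can bind to.

```cpp
#include <cassert>
#include <utility>

// Counts how it was constructed, to reveal copy vs. move.
struct Probe {
    static inline int copies = 0;
    static inline int moves  = 0;
    Probe() = default;
    Probe(const Probe&) { ++copies; }
    Probe(Probe&&) noexcept { ++moves; }
};

// Mirrors the two enqueue overloads: std::move on a const reference
// produces `const Probe&&`, which cannot bind to Probe(Probe&&), so
// the copy constructor is selected silently.
Probe take_const_ref(const Probe& p) { return Probe(std::move(p)); }
Probe take_rvalue(Probe&& p)         { return Probe(std::move(p)); }
```

So the `const`-reference overload compiles, but the `std::move` in it is a no-op: callers still pay for a copy.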

libdiscord.c (5)

55-55: Include new rate limiter header

The code adds a new header ratelimiter.h which suggests implementation of rate limiting functionality.


1501-1507: Rate limit header parsing function looks good

The parse_rate_limit_headers function is well structured and handles header extraction cleanly.


1516-1516: Good placement of rate limit header parsing

The rate limit header parsing is appropriately placed in the response callback to process headers as soon as they are received.


6337-6337: Good use of rate limiter for socket writes

The code appropriately uses the rate limiter for socket writes to prevent overwhelming the server.


10689-10689: Clean rate limiter shutdown

The code properly stops the rate limiter during plugin unload.

Makefile (1)

32-32: Consider C++17 instead of C++20 for better compatibility

C++20 is relatively new and might not be widely supported. Consider using C++17 unless C++20 features are essential.


void initialize_rate_limiter(guint interval);
void stop_rate_limiter();
guint rlimited_timeout_add(guint interval, GSourceFunc function, gpointer data);
🛠️ Refactor suggestion

Consider adding error handling to rlimited_timeout_add

The function should indicate whether the timeout was successfully added.

-guint rlimited_timeout_add(guint interval, GSourceFunc function, gpointer data);
+gboolean rlimited_timeout_add(guint interval, GSourceFunc function, gpointer data, guint *timeout_id);

Comment on lines +30 to +32
CPPFLAGS ?= -O2 -g -pipe -Wall
CFLAGS += -std=c99 -DDISCORD_PLUGIN_VERSION='"$(PLUGIN_VERSION)"' -DMARKDOWN_PIDGIN
CPPFLAGS+= -std=c++20

⚠️ Potential issue

Add essential compiler flags to CPPFLAGS

The C++ flags are missing important options needed for shared library compilation:

-CPPFLAGS ?= -O2 -g -pipe -Wall
+CPPFLAGS ?= -O2 -g -pipe -Wall -Wextra -Werror -fPIC

Also consider adding:

  • -Wconversion for implicit conversion warnings
  • -Wshadow for variable shadowing warnings
  • -Wno-unused-parameter to match C flags

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 4

🧹 Nitpick comments (1)
ratelimiter.cpp (1)

9-13: Add documentation for CommandData struct

The struct lacks documentation explaining its purpose and member variables. Consider adding Doxygen-style documentation similar to other documented entities in the file.

+/**
+ * @struct CommandData
+ * @brief Holds information for a scheduled command execution
+ */
 struct CommandData {
+    /** @brief Interval in milliseconds before executing the function */
     guint interval;
+    /** @brief Function to be executed */
     GSourceFunc function;
+    /** @brief User data passed to the function */
     gpointer data;
 };
📜 Review details


📥 Commits

Reviewing files that changed from the base of the PR and between d25bfd0 and 2a8efc0.

📒 Files selected for processing (1)
  • ratelimiter.cpp (1 hunks)
🔇 Additional comments (3)
ratelimiter.cpp (3)

22-22: Rename Command typedef to avoid type conflict

The Command typedef conflicts with the CommandData struct. Consider renaming it to EventCommand or QueuedCommand for clarity.


199-206: Fix race condition in destructor

The current destructor implementation has a race condition. Consider using the suggested approach from the previous review to set isRunning under a lock.
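Setting `isRunning` under the same lock the consumer waits on looks roughly like this; a minimal sketch of the pattern (not the PR's `EventLoop`), where the destructor drains the queue before joining so no notification can be lost:

```cpp
#include <atomic>
#include <condition_variable>
#include <deque>
#include <functional>
#include <mutex>
#include <thread>

// Minimal event loop with a race-free shutdown: the running flag flips
// under the same mutex the consumer waits on, so the wake-up cannot be
// missed between the predicate check and the wait.
class EventLoop {
    std::mutex m;
    std::condition_variable cv;
    std::deque<std::function<void()>> queue;
    bool running = true;
    std::thread worker{[this] { run(); }};

    void run() {
        for (;;) {
            std::unique_lock<std::mutex> lock(m);
            cv.wait(lock, [this] { return !queue.empty() || !running; });
            if (queue.empty() && !running) return;  // drained, then exit
            auto task = std::move(queue.front());
            queue.pop_front();
            lock.unlock();  // run the task without holding the lock
            task();
        }
    }
public:
    void enqueue(std::function<void()> fn) {
        { std::lock_guard<std::mutex> lock(m); queue.push_back(std::move(fn)); }
        cv.notify_one();
    }
    ~EventLoop() {
        { std::lock_guard<std::mutex> lock(m); running = false; }
        cv.notify_one();
        worker.join();  // every queued task has run by the time this returns
    }
};
```

Because the worker only exits once the queue is empty, destruction is deterministic: all enqueued work completes before `join` returns.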


231-232: ⚠️ Potential issue

Use atomic for INTERVAL and smart pointer for EVENT_LOOP

The global INTERVAL variable is accessed from multiple threads without synchronization, and EVENT_LOOP uses manual memory management.

-static EventLoop* EVENT_LOOP = nullptr;
-static guint INTERVAL = 0;
+static std::unique_ptr<EventLoop> EVENT_LOOP = nullptr;
+static std::atomic<guint> INTERVAL{0};

Likely invalid or redundant comment.

ratelimiter.cpp Outdated
Comment on lines 244 to 307
guint rlimited_timeout_add(guint interval, GSourceFunc function, gpointer data) {
if (!EVENT_LOOP) {
g_warning("Rate limiter not initialized.");
return 0;
}

EVENT_LOOP->enqueue([interval, function, data, INTERVAL]() {
std::this_thread::sleep_for(std::chrono::milliseconds(interval ? interval : INTERVAL));
if (function) {
function(data);
}
});

return 1; // Return a placeholder ID (or a meaningful one if task tracking is added).
}

🛠️ Refactor suggestion

Improve task scheduling mechanism

The current implementation has several limitations:

  1. No way to cancel scheduled tasks
  2. Sleep blocks the event loop thread
  3. Hardcoded return value doesn't allow task identification

Consider implementing a proper task scheduling system:

+class TaskId {
+    static std::atomic<guint> nextId{1};
+    guint id;
+public:
+    TaskId() : id(nextId++) {}
+    operator guint() const { return id; }
+};

 guint rlimited_timeout_add(guint interval, GSourceFunc function, gpointer data) {
     if (!EVENT_LOOP) {
         g_warning("Rate limiter not initialized.");
         return 0;
     }

+    auto taskId = TaskId();
+    auto future = EVENT_LOOP->enqueueAsync([=]() {
+        std::this_thread::sleep_for(
+            std::chrono::milliseconds(interval ? interval : INTERVAL));
+        return function ? function(data) : FALSE;
+    });

-    return 1;
+    return taskId;
 }

Committable suggestion skipped: line range outside the PR's diff.

ratelimiter.cpp Outdated
Comment on lines 123 to 177
template<typename Func, typename... Args> inline auto enqueueSync(Func&& callable, Args&&...args)
{
if (std::this_thread::get_id() == loopThread.get_id())
{
return std::invoke(
std::forward<Func>(callable),
std::forward<Args>(args)...);
}

using return_type = std::invoke_result_t<Func, Args...>;
using packaged_task_type =
std::packaged_task<return_type(Args&&...)>;

packaged_task_type task(std::forward<Func>(callable));

enqueue([&]
{
task(std::forward<Args>(args)...);
});

return task.get_future().get();
}

⚠️ Potential issue

Potential deadlock in enqueueSync

The current implementation of enqueueSync captures arguments by reference in the lambda ([&]). This could lead to a deadlock or undefined behavior if the arguments go out of scope before the task executes.

-        enqueue([&]
+        enqueue([task = std::move(task), 
+                ...args = std::forward<Args>(args)...]() mutable
         {
-            task(std::forward<Args>(args)...);
+            task(std::forward<decltype(args)>(args)...);
         });

Committable suggestion skipped: line range outside the PR's diff.

ratelimiter.cpp Outdated
Comment on lines 250 to 304
EVENT_LOOP->enqueue([interval, function, data, INTERVAL]() {
std::this_thread::sleep_for(std::chrono::milliseconds(interval ? interval : INTERVAL));
if (function) {
function(data);
}
});
@coderabbitai coderabbitai bot Dec 20, 2024

💡 Codebase verification

Blocking sleep in event loop can impact message processing performance

The blocking sleep_for in the event loop is problematic as it:

  • Blocks the event loop thread while waiting, preventing processing of other events
  • Can cause delays in message sending and heartbeat operations which use the same event loop
  • May lead to message queuing and potential timeouts if multiple rate-limited operations are scheduled

Recommendations:

  • Replace blocking sleep with non-blocking timer or delayed execution mechanism
  • Consider using async scheduling or dedicated timer queue instead of direct thread sleep
🔗 Analysis chain

Verify impact on message sending functionality

Given the PR objective mentioning issues with sending messages, the blocking sleep in the event loop might be causing delays or preventing message processing.


The initial results show several timeout-based operations related to message sending, including heartbeat and socket writes. Let's examine their implementations and the event loop structure to better understand the potential impact of the blocking sleep.


The previous results show the event loop using blocking sleep, but we need to understand the message processing architecture to determine if this is causing the reported issues. Let's examine queue implementations and other potential blocking operations.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for message sending related functions that might be affected
rg -l "message.*send|send.*message" --type cpp --type c

# Look for other uses of timeout_add to understand the impact
rg "timeout_add" --type cpp --type c

Length of output: 1316


Script:

#!/bin/bash
# Let's examine the event loop implementation and message sending functions
ast-grep --pattern 'discord_send_heartbeat($$$)' 
ast-grep --pattern 'discord_socket_write_data_delay_cb($$$)'

# Check the implementation of EVENT_LOOP
rg -A 5 "class.*EVENT_LOOP" --type cpp
rg "EVENT_LOOP.*enqueue" -A 3 --type cpp

Length of output: 474


Script:

#!/bin/bash
# Let's check for concurrent message processing or queue implementations
rg -A 5 "class.*Queue|struct.*Queue" --type cpp
rg "process.*message|handle.*message" -A 3 --type cpp

# Look for other sleep operations that might affect message processing
rg "sleep|delay" -A 2 --type cpp

Length of output: 336

@Metalhead33 (Author)

Are you sure this is a wise choice? The whole point is to ensure that fewer than X requests are sent per second, hence the decision to sleep before execution. Correct me if I'm wrong, but a non-blocking timer would lead us back to the naive solution of merely delaying execution, which can still surpass the rate limits during a sudden burst of requests.


@Metalhead33, you raise a valid concern about preventing bursts beyond the allowed rate limit. However, a non-blocking approach does not necessarily have to be naïve. Instead of relying on a single blocking sleep, you can use a queuing mechanism or token bucket approach that schedules requests only when tokens (representing allowable requests) are available. This setup ensures you never exceed X requests per second while still allowing the event loop to handle other operations concurrently.

When you block the entire event loop thread with sleep, you essentially stall all other scheduled tasks, including heartbeat operations and incoming messages, which can cause delays or even failures, especially when multiple tasks accumulate. By contrast, if you use a token bucket or scheduled timer approach in the background, you avoid blocking the main loop. Each incoming request is queued until a token is released or the timer triggers the next call, thereby respecting the same rate limit without pausing everything else.
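The token-bucket idea described in this reply can be sketched without any blocking: `tryAcquire` fails fast so the caller can requeue the request, and a periodic refill restores tokens at the permitted rate. A hedged sketch of the concept (not the PR's `TokenBucket`, and without the locking a threaded version would need):

```cpp
#include <cassert>
#include <cstddef>

// Non-blocking token bucket: tryAcquire() fails instead of sleeping,
// so the event loop stays free to requeue the request and keep
// dispatching other tasks (heartbeats, incoming messages).
class TokenBucket {
    std::size_t capacity;
    std::size_t tokens;
public:
    explicit TokenBucket(std::size_t cap) : capacity(cap), tokens(cap) {}

    bool tryAcquire() {
        if (tokens == 0) return false;  // over the rate: caller requeues
        --tokens;
        return true;
    }

    // Called once per refill period (e.g. each second by a timer),
    // never exceeding the bucket's capacity.
    void refill(std::size_t n) {
        tokens = (tokens + n > capacity) ? capacity : tokens + n;
    }
};
```

A burst larger than the capacity drains the bucket and the excess requests wait in the queue for the next refill, so the cap is respected without stalling the loop thread.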

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (4)
ratelimiter.cpp (1)

41-45: Add timeout to waitForToken to prevent indefinite blocking

The waitForToken method could block indefinitely if tokens are never refilled. Consider adding a timeout parameter and returning a boolean to indicate success.

-    void waitForToken() {
+    bool waitForToken(std::chrono::milliseconds timeout = std::chrono::seconds(30)) {
         std::unique_lock<std::mutex> lock(bucketMutex);
-        condVar.wait(lock, [this] { return tokens > 0; });
+        if (!condVar.wait_for(lock, timeout, [this] { return tokens > 0; })) {
+            return false;
+        }
         --tokens;
+        return true;
     }
libdiscord.c (3)

108-112: Documentation for rate limit calculation formula

The comment block explains the rate limit calculation formula but could be more descriptive. Consider adding more context about why these calculations are needed and what each variable represents.

/*
+  Rate limit calculation formula:
+  xRateAllowedPerSecond = remaining requests / reset time
+  xRateAllowedRemaining = xRateAllowedPerSecond (tracks current remaining allowed requests)
+  xRateDelayPerRequest = (1 / xRateAllowedPerSecond) * 1000 (converts to milliseconds)
 */

114-128: Initialize rate limit constants with meaningful values

The initial rate limit values appear to be somewhat arbitrary. Consider adding comments explaining why these specific values were chosen.

-const int init_xrateLimit=40;
-const int init_xrateRemaining=10;
-const double init_xrateReset=55;
-const double init_xrateResetAfter=1;
-const int init_xRateAllowedPerSecond=15;
+// Initial rate limit values based on Discord's default limits
+const int init_xrateLimit = 40;        // Maximum number of requests allowed
+const int init_xrateRemaining = 10;    // Initial remaining requests
+const double init_xrateReset = 55;     // Time until rate limit resets
+const double init_xrateResetAfter = 1; // Time after which rate limit resets
+const int init_xRateAllowedPerSecond = 15; // Initial rate of allowed requests

10675-10675: Rate limiter initialization and cleanup

The code properly initializes and cleans up the rate limiter, but the initial delay value of 83ms seems arbitrary.

-    initialize_rate_limiter(83);
+    /* Initialize rate limiter with 83ms delay (approximately 12 requests per second) */
+    initialize_rate_limiter(83);

Also applies to: 10689-10689

📜 Review details


📥 Commits

Reviewing files that changed from the base of the PR and between 2a8efc0 and a15a685.

📒 Files selected for processing (2)
  • libdiscord.c (8 hunks)
  • ratelimiter.cpp (1 hunks)
🔇 Additional comments (8)
ratelimiter.cpp (5)

9-13: Rename Command struct to avoid naming conflict

The struct Command conflicts with the Command typedef in EventLoop. This can lead to ambiguity and confusion.


205-224: Add error handling to prevent event loop crashes

The command execution loop lacks error handling, which could cause the event loop to crash if a command throws an exception.


233-239: Fix potential race condition in destructor

The current implementation has a race condition during destruction. Setting isRunning through the event loop can lead to unexpected behavior.


293-307: 🛠️ Refactor suggestion

Improve return value handling in rlimited_timeout_add

The function always returns 1 (except on error), which doesn't provide meaningful task identification. This makes it impossible to cancel or track specific tasks.

Consider implementing a proper task ID system:

+static std::atomic<guint> nextTaskId{1};
+
 guint rlimited_timeout_add(guint interval, GSourceFunc function, gpointer data) {
     if (!EVENT_LOOP || !TOKEN_BUCKET) {
         g_warning("Rate limiter not initialized.");
         return 0;
     }

+    guint taskId = nextTaskId++;
     EVENT_LOOP->enqueue([interval, function, data]() {
         TOKEN_BUCKET->waitForToken(); // Wait until a token is available
         if (function) {
             function(data);
         }
     });

-    return 1; // Return a placeholder ID
+    return taskId;
 }

Likely invalid or redundant comment.


299-304: Blocking waitForToken may cause message sending issues

The current implementation blocks the event loop while waiting for a token. This could be contributing to the reported message sending issues by preventing other operations from being processed.

libdiscord.c (3)

1501-1507: LGTM: Rate limit header parsing function

The parse_rate_limit_headers function is well-structured and correctly extracts the rate limit headers from the response.


6337-6337: Use of rlimited_timeout_add for rate limiting

The code uses rlimited_timeout_add for rate limiting, but there's no error handling if the rate limiter fails. Consider adding error handling.

-    rlimited_timeout_add(1000, discord_socket_write_data_delay_cb, info);
+    if (!rlimited_timeout_add(1000, discord_socket_write_data_delay_cb, info)) {
+        purple_debug_error("discord", "Failed to add rate limited timeout\n");
+        g_free(info);
+        return;
+    }

1475-1500: 🛠️ Refactor suggestion

Rate limit update function needs improvement

The UpdateRateLimits function has several issues that should be addressed:

  1. No bounds checking on numeric conversions
  2. Magic number 4.0 multiplier without explanation
  3. Debug messages could be more descriptive

 static void UpdateRateLimits(const gchar *xrateLimitS, const gchar *xrateRemainingS, const gchar *xRateReset, const gchar *xRateResetAfter)
 {
-    if(!xrateLimitS || !xrateRemainingS || !xRateReset || !xRateResetAfter) return;
+    if(!xrateLimitS || !xrateRemainingS || !xRateReset || !xRateResetAfter) {
+        purple_debug_warning("discord", "Missing required rate limit headers\n");
+        return;
+    }

+    /* Parse rate limit values with error checking */
+    gint64 limit = g_ascii_strtoll(xrateLimitS, NULL, 10);
+    if (limit <= 0 || limit > G_MAXINT) {
+        purple_debug_warning("discord", "Invalid rate limit value: %s\n", xrateLimitS);
+        return;
+    }
+    xrateLimit = (int)limit;

-    xrateLimit=atoi(xrateLimitS);
     purple_debug_info("discord", "X-RateLimit-Limit: %s\n", xrateLimitS);
     xrateRemaining=atoi(xrateRemainingS);
     purple_debug_info("discord", "X-RateLimit-Remaining: %s\n", xrateRemainingS);
     xrateReset=atof(xRateReset);
     purple_debug_info("discord", "X-RateLimit-Reset: %s\n", xRateReset);
-    xrateResetAfter=atof(xRateResetAfter) * 4.0;
+    /* Apply safety margin multiplier to reset time */
+    const double RESET_SAFETY_MARGIN = 4.0;
+    xrateResetAfter = atof(xRateResetAfter) * RESET_SAFETY_MARGIN;
     purple_debug_info("discord", "X-RateLimit-Reset-After: %s\n", xRateResetAfter);

Likely invalid or redundant comment.

ratelimiter.cpp Outdated
Comment on lines 272 to 280
std::thread([]() {
while (EVENT_LOOP) {
std::this_thread::sleep_for(std::chrono::seconds(1));
if (TOKEN_BUCKET) {
TOKEN_BUCKET->refill();
}
}
}).detach();
}

⚠️ Potential issue

Fix thread lifecycle management

The detached thread continues running and accessing potentially deleted objects after stop_rate_limiter is called. This can lead to crashes.

-    std::thread([]() {
-        while (EVENT_LOOP) {
-            std::this_thread::sleep_for(std::chrono::seconds(1));
-            if (TOKEN_BUCKET) {
-                TOKEN_BUCKET->refill();
-            }
-        }
-    }).detach();
+    static std::atomic<bool> refillThreadRunning{false};
+    static std::thread refillThread;
+    
+    if (refillThreadRunning.exchange(true)) {
+        return;  // Thread already running
+    }
+    
+    refillThread = std::thread([&refillThreadRunning]() {
+        while (refillThreadRunning) {
+            std::this_thread::sleep_for(std::chrono::seconds(1));
+            if (TOKEN_BUCKET) {
+                TOKEN_BUCKET->refill();
+            }
+        }
+    });
+    refillThread.detach();

Then update stop_rate_limiter:

 void stop_rate_limiter() {
+    refillThreadRunning = false;
+    std::this_thread::sleep_for(std::chrono::seconds(1));  // Allow thread to complete
     if (EVENT_LOOP) {
         delete EVENT_LOOP;
         EVENT_LOOP = nullptr;
     }
     if (TOKEN_BUCKET) {
         delete TOKEN_BUCKET;
         TOKEN_BUCKET = nullptr;
     }
 }

Committable suggestion skipped: line range outside the PR's diff.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

♻️ Duplicate comments (5)
ratelimiter.cpp (5)

9-13: ⚠️ Potential issue

Rename Command struct to avoid naming conflict.

The struct name conflicts with EventLoop::Command typedef defined later, which could lead to confusion and potential issues.

-struct Command {
+struct RateLimitCommand {
   guint interval;
   GSourceFunc function;
   gpointer data;
};

171-174: ⚠️ Potential issue

Fix potential deadlock in enqueueSync.

The lambda captures arguments by reference ([&]), which could lead to a deadlock if the arguments go out of scope before task execution.

-        enqueue([&]
+        enqueue([task = std::move(task)]
         {
-            task(std::forward<Args>(args)...);
+            task(std::forward<Args>(args)...);
         });

233-239: ⚠️ Potential issue

Fix race condition in destructor.

Setting isRunning = false through the event loop can lead to a race condition if new commands are enqueued between lines 234 and 238.

 EventLoop::~EventLoop()
 {
-    enqueue([this]
-    {
-        isRunning = false;
-    });
+    {
+        std::lock_guard<std::mutex> lockguard(commandsMutex);
+        isRunning = false;
+        writeBuffer.clear();
+    }
+    condVar.notify_all();
     loopThread.join();
 }

218-221: ⚠️ Potential issue

Add error handling in command execution loop.

Commands are executed without any error handling, which could cause the event loop to crash if a command throws an exception.

         for (Command& func : readBuffer)
         {
+            try {
                 func();
+            } catch (const std::exception& e) {
+                g_warning("Command execution failed: %s", e.what());
+            } catch (...) {
+                g_warning("Command execution failed with unknown error");
+            }
         }

264-268: 🛠️ Refactor suggestion

Use smart pointers for global variables.

Manual memory management with raw pointers could lead to memory leaks. Consider using smart pointers for automatic cleanup.

-static EventLoop* EVENT_LOOP = nullptr;
-static TokenBucket* TOKEN_BUCKET = nullptr;
+static std::unique_ptr<EventLoop> EVENT_LOOP = nullptr;
+static std::unique_ptr<TokenBucket> TOKEN_BUCKET = nullptr;
🧹 Nitpick comments (1)
ratelimiter.cpp (1)

26-33: Improve const correctness in TokenBucket::acquire.

The method doesn't modify the object's logical state (only synchronization primitives), so it should be marked as const.

-    bool acquire() {
+    bool acquire() const {
         std::unique_lock<std::mutex> lock(bucketMutex);
         if (tokens > 0) {
             --tokens;
             return true;
         }
         return false;
     }
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a15a685 and fede650.

📒 Files selected for processing (1)
  • ratelimiter.cpp (1 hunks)
🔇 Additional comments (2)
ratelimiter.cpp (2)

277-286: ⚠️ Potential issue

Fix thread safety issues in refill thread.

The refill thread uses a dangling reference to refillThreadRunning and continues running after EVENT_LOOP is deleted.

-    refillThread = std::thread([&refillThreadRunning]() {
+    refillThread = std::thread([refillThreadRunning = &refillThreadRunning]() {
-        while (EVENT_LOOP) {
+        while (*refillThreadRunning) {
             std::this_thread::sleep_for(std::chrono::seconds(1));
             if (TOKEN_BUCKET) {
                 TOKEN_BUCKET->refill();
             }
         }
     });
-    refillThread.detach();
+    refillThread.detach();  // Consider managing thread lifecycle with join instead

Likely invalid or redundant comment.


308-311: Review impact on message processing performance.

The blocking waitForToken() call can prevent processing of other events, potentially causing message sending issues mentioned in the PR objectives.

Consider implementing a non-blocking token acquisition mechanism or a separate queue for rate-limited operations to avoid blocking the main event loop.
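The "separate queue" idea can be sketched as a pending queue that is drained only while tokens are available, so nothing ever blocks inside the event loop. This is an assumed design (names `PendingQueue`, `drain` are illustrative), not the PR's code:

```cpp
#include <cstddef>
#include <deque>
#include <functional>

// Sketch: rate-limited callbacks go into a pending queue; the periodic
// refill tick drains as many as the current token count allows, and the
// rest simply wait for the next tick instead of blocking the loop.
class PendingQueue {
public:
    void push(std::function<void()> task) {
        pending.push_back(std::move(task));
    }

    // Run at most `tokens` queued tasks; returns how many actually ran.
    std::size_t drain(unsigned tokens) {
        std::size_t ran = 0;
        while (tokens > 0 && !pending.empty()) {
            auto task = std::move(pending.front());
            pending.pop_front();
            task();
            --tokens;
            ++ran;
        }
        return ran;
    }

    std::size_t size() const { return pending.size(); }

private:
    std::deque<std::function<void()>> pending;
};
```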

ratelimiter.cpp Outdated
Comment on lines 301 to 315
guint rlimited_timeout_add(guint interval, GSourceFunc function, gpointer data) {
if (!EVENT_LOOP || !TOKEN_BUCKET) {
g_warning("Rate limiter not initialized.");
return 0;
}

EVENT_LOOP->enqueue([interval, function, data]() {
TOKEN_BUCKET->waitForToken(); // Wait until a token is available
if (function) {
function(data);
}
});

return 1; // Return a placeholder ID (or a meaningful one if task tracking is added).
}

💡 Codebase verification

🛠️ Refactor suggestion

Integrate with GLib/Purple timeout system for consistency

The current implementation uses a custom event loop that doesn't integrate with the codebase's timeout management system (GLib/Purple). This creates inconsistencies:

  • No support for task cancellation via purple_timeout_remove
  • Incompatible with the source ID system used by GLib/Purple timeouts
  • Custom event loop only used in this file

Consider refactoring to use the existing timeout system while maintaining rate-limiting functionality.
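One way to sketch that integration: let the rate limiter re-arm itself through the host timeout system rather than a private loop. Here the `scheduleMs` callable stands in for `g_timeout_add` (an assumption for testability), and the callback follows GLib's convention of returning true to stay armed and false to be removed:

```cpp
#include <functional>

using Scheduler = std::function<void(std::function<bool()>)>;

// Sketch: defer a task through the host's timeout mechanism. While no
// token is available the source stays alive (callback returns true, like
// G_SOURCE_CONTINUE); once the task runs it returns false (G_SOURCE_REMOVE).
void scheduleRateLimited(Scheduler scheduleMs,
                         std::function<bool()> hasToken,
                         std::function<void()> task) {
    scheduleMs([hasToken, task]() -> bool {
        if (!hasToken()) return true;   // no token yet: keep the source alive
        task();
        return false;                   // done: remove the source
    });
}
```

Because the task lives inside an ordinary timeout source, `g_source_remove` on its ID would cancel it the same way as any other Purple timeout.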

🔗 Analysis chain

Improve task scheduling mechanism.

The current implementation has several limitations:

  1. No way to cancel scheduled tasks
  2. Hardcoded return value doesn't allow task identification
  3. No task completion tracking

Consider implementing a proper task scheduling system with unique task IDs and cancellation support.
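A task-ID registry with cancellation, mirroring the `g_source_remove` contract, could be sketched like this (all names here are illustrative, not the PR's API):

```cpp
#include <functional>
#include <map>
#include <mutex>

// Sketch: track scheduled tasks by unique ID so callers can cancel them,
// the same contract g_source_remove() provides for GLib timeouts.
class TaskRegistry {
public:
    unsigned add(std::function<void()> task) {
        std::lock_guard<std::mutex> lock(m);
        unsigned id = nextId++;
        tasks[id] = std::move(task);
        return id;
    }

    // Returns true if the task existed and was cancelled.
    bool remove(unsigned id) {
        std::lock_guard<std::mutex> lock(m);
        return tasks.erase(id) == 1;
    }

    // Run and drop one task, unless it was cancelled in the meantime.
    bool runOne(unsigned id) {
        std::function<void()> task;
        {
            std::lock_guard<std::mutex> lock(m);
            auto it = tasks.find(id);
            if (it == tasks.end()) return false;
            task = std::move(it->second);
            tasks.erase(it);
        }
        task();  // run outside the lock so tasks can re-enter the registry
        return true;
    }

private:
    std::mutex m;
    unsigned nextId = 1;
    std::map<unsigned, std::function<void()>> tasks;
};
```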

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Check for any existing task cancellation mechanisms
rg "cancel.*task|remove.*timeout" --type cpp

Length of output: 115


Script:

#!/bin/bash
# Check for timeout remove implementation and usage
rg "purple_timeout_remove|g_source_remove" -A 3 --type cpp

# Look for timeout add implementations to understand the pattern
ast-grep --pattern 'guint $FUNC($$$) {
  $$$
}'

Length of output: 602


Script:

#!/bin/bash
# Check for GLib timeout usage patterns
rg "g_timeout_add|g_timeout_add_seconds" -A 2 --type cpp

# Look for EVENT_LOOP usage
rg "EVENT_LOOP" --type cpp

Length of output: 733

@Metalhead33
Author

It still doesn't work correctly. Messages are not being sent AT ALL.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 4

🧹 Nitpick comments (1)
ratelimiter.cpp (1)

9-13: Remove unused struct RateLimitCommand.

The struct appears to be unused throughout the codebase. Consider removing it to avoid confusion, as the same parameters are directly used in the rlimited_timeout_add function.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between fede650 and 2f249c4.

📒 Files selected for processing (1)
  • ratelimiter.cpp (1 hunks)
🔇 Additional comments (2)
ratelimiter.cpp (2)

173-176: Fix potential deadlock in enqueueSync lambda capture.

The lambda captures arguments by value, which is correct, but the implementation could be improved for better exception safety.
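For exception safety, one option is wrapping the task in a `std::packaged_task` so an exception thrown on the loop thread propagates back to the synchronous caller through the future. A minimal sketch, where `runOnLoop` stands in for the PR's `enqueue()` (an assumption here):

```cpp
#include <functional>
#include <future>
#include <stdexcept>
#include <utility>

// Sketch of an exception-safe synchronous enqueue: the packaged_task stores
// any exception the callable throws, and result.get() rethrows it in the
// caller instead of crashing the event-loop thread.
template <typename F>
auto enqueueSync(F&& f,
                 const std::function<void(std::function<void()>)>& runOnLoop)
    -> decltype(f()) {
    using R = decltype(f());
    std::packaged_task<R()> task(std::forward<F>(f));
    std::future<R> result = task.get_future();
    runOnLoop([&task] { task(); });  // in the real loop this runs on another thread
    return result.get();             // blocks; rethrows anything the task threw
}
```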


220-223: Add error handling in command execution loop.

Commands are executed without any error handling, which could cause the event loop to crash if a command throws an exception.

ratelimiter.cpp Outdated
}
}
});
refillThread.detach();

⚠️ Potential issue

Fix thread lifecycle management inconsistency.

The thread is detached on line 293, but there is an attempt to join it on line 300. A detached thread is no longer joinable, so calling join() on it will throw std::system_error.

-    refillThread.detach();

Also applies to: 300-300

Comment on lines +314 to +316
g_warning("Rate limiter not initialized.");
initialize_rate_limiter(15);
}
@coderabbitai coderabbitai bot Jan 10, 2025

⚠️ Potential issue

Remove automatic initialization with hardcoded rate limit.

The function automatically initializes the rate limiter with a hardcoded value of 15 requests per second. This could be dangerous as different Discord API endpoints have different rate limits.

-        initialize_rate_limiter(15);
+        return 0;  // Return error when rate limiter is not initialized
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
g_warning("Rate limiter not initialized.");
initialize_rate_limiter(15);
}
g_warning("Rate limiter not initialized.");
return 0; // Return error when rate limiter is not initialized
}

Author

This auto-initialization was added for test purposes: I discovered that the rate limiter was not being initialized at all (which was the source of the problem of no messages being sent), and I want to prioritize fixing other issues over finding out why it was not being initialized.


@Metalhead33 I see that the rate limiter is supposed to be initialized in libpurple3_plugin_load with a default value of 83 requests per second. The fact that you needed to add auto-initialization suggests that this initialization isn't happening. Would you like me to help investigate why the plugin's initialization function isn't being called? This might be the root cause of your messaging issues.

🔗 Analysis chain
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for potential initialization points in the codebase
rg -A 5 "initialize_rate_limiter" --type c --type cpp
# Look for Discord plugin initialization
ast-grep --pattern 'plugin_init($$$)' 
# Look for Discord plugin load/unload functions
rg -A 5 "(plugin_load|plugin_unload)" --type c --type cpp

Length of output: 4219


✏️ Learnings added
Learnt from: Metalhead33
PR: EionRobb/purple-discord#467
File: ratelimiter.cpp:314-316
Timestamp: 2025-01-10T21:43:00.832Z
Learning: The Discord plugin's rate limiter is initialized during plugin load with a default value of 83 requests per second in `libpurple3_plugin_load`, and is properly cleaned up in `libpurple3_plugin_unload`. The rate limit values are later adjusted based on Discord's response headers through `UpdateRateLimits`.

Note: Learnings are effective only in the context of similar code segments. To apply general coding guidelines, please configure review instructions. You can manage existing learnings in the app.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 2f249c4 and adf5521.

📒 Files selected for processing (1)
  • ratelimiter.cpp (1 hunks)
🧰 Additional context used
📓 Learnings (1)
ratelimiter.cpp (1)
Learnt from: Metalhead33
PR: EionRobb/purple-discord#467
File: ratelimiter.cpp:314-316
Timestamp: 2025-01-10T21:43:00.963Z
Learning: The Discord plugin's rate limiter is initialized during plugin load with a default value of 83 requests per second in `libpurple3_plugin_load`, and is properly cleaned up in `libpurple3_plugin_unload`. The rate limit values are later adjusted based on Discord's response headers through `UpdateRateLimits`.
🔇 Additional comments (5)
ratelimiter.cpp (5)

330-330: ⚠️ Potential issue

Implement proper task ID generation.

Returning a constant value of 1 makes it impossible to cancel specific tasks.

-    return 1;
+    static std::atomic<guint> nextId{1};
+    return nextId++;

Likely invalid or redundant comment.


220-223: ⚠️ Potential issue

Add error handling in command execution.

Commands are executed without any error handling, which could crash the event loop.

     for (Command& func : readBuffer)
     {
+        try {
             func();
+        } catch (const std::exception& e) {
+            g_warning("Command execution failed: %s", e.what());
+        } catch (...) {
+            g_warning("Command execution failed with unknown error");
+        }
     }

Likely invalid or redundant comment.


57-57: ⚠️ Potential issue

Rename Command typedef to avoid name collision.

The Command typedef overshadows the RateLimitCommand struct.

-    typedef std::function<void()> Command;
+    typedef std::function<void()> EventCommand;

Likely invalid or redundant comment.


43-47: ⚠️ Potential issue

Add timeout mechanism to waitForToken.

The current implementation can deadlock if tokens are never refilled.

-    void waitForToken() {
+    bool waitForToken(std::chrono::milliseconds timeout = std::chrono::seconds(30)) {
         std::unique_lock<std::mutex> lock(bucketMutex);
-        condVar.wait(lock, [this] { return tokens > 0; });
+        if (!condVar.wait_for(lock, timeout, [this] { return tokens > 0; })) {
+            g_warning("Timeout waiting for rate limit token");
+            return false;
+        }
         --tokens;
+        return true;
     }

Likely invalid or redundant comment.


284-293: ⚠️ Potential issue

Fix thread lifecycle management.

The detached thread continues running and accessing potentially deleted objects after stop_rate_limiter is called.

-    refillThread = std::thread([refillThreadRunning = &refillThreadRunning]() {
+    refillThread = std::thread([refillThreadRunning = &refillThreadRunning, bucket = TOKEN_BUCKET.get()]() {
         while (*refillThreadRunning) {
             std::this_thread::sleep_for(std::chrono::seconds(1));
-            if (TOKEN_BUCKET) {
+            if (bucket) {
                 g_info("Refilling Discord rate-limiter token bucket.");
-                TOKEN_BUCKET->refill();
+                bucket->refill();
             }
         }
     });
-    refillThread.detach();

Likely invalid or redundant comment.

Comment on lines +23 to +26
explicit TokenBucket(guint rateLimitPerSecond)
: tokens(rateLimitPerSecond), maxTokens(rateLimitPerSecond), interval(1000 / rateLimitPerSecond) {
g_info("Initializing token bucket with an interval of %d tokens per second.", rateLimitPerSecond);
}

⚠️ Potential issue

Add overflow check for interval calculation.

The interval calculation 1000 / rateLimitPerSecond could result in integer division issues or overflow for very small/large values.

 explicit TokenBucket(guint rateLimitPerSecond)
-    : tokens(rateLimitPerSecond), maxTokens(rateLimitPerSecond), interval(1000 / rateLimitPerSecond) {
+    : tokens(rateLimitPerSecond), maxTokens(rateLimitPerSecond), 
+      interval(rateLimitPerSecond > 0 ? 1000 / rateLimitPerSecond : 1000) {
         g_info("Initializing token bucket with an interval of %d tokens per second.", rateLimitPerSecond);
     }

ratelimiter.cpp Outdated
Comment on lines 322 to 328
g_info("Enquing deferred function execution on the rate limiter!\nInterval: %d\nFunction pointer:%p\nUser data pointer:%p\n", interval, (void*)function, (void*)data);
EVENT_LOOP->enqueue([interval, function, data]() {
TOKEN_BUCKET->waitForToken(); // Wait until a token is available
if (function) {
function(data);
}
});

⚠️ Potential issue

Check waitForToken result and handle errors.

The waitForToken result is not checked, which could lead to executing functions when rate limits are exceeded.

     EVENT_LOOP->enqueue([interval, function, data]() {
-        TOKEN_BUCKET->waitForToken();
+        if (!TOKEN_BUCKET->waitForToken()) {
+            g_warning("Failed to acquire rate limit token, skipping function execution");
+            return;
+        }
         if (function) {
             function(data);
         }
     });

Committable suggestion skipped: line range outside the PR's diff.

Metalhead33 and others added 3 commits January 10, 2025 22:51
Proper ID generation.

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
@Metalhead33
Author

This is kind of weird. It seems to work as expected when I open it in the command line, but when I open Pidgin from the GUI, messages are not being sent. Very unusual. It also crashes when I open "debug logs".

@Metalhead33
Author

Nope, upon more recent testing, it appears to work. But it crashes randomly when receiving a message.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between adf5521 and e9899c7.

📒 Files selected for processing (1)
  • ratelimiter.cpp (1 hunks)
🧰 Additional context used
📓 Learnings (1)
ratelimiter.cpp (1)
Learnt from: Metalhead33
PR: EionRobb/purple-discord#467
File: ratelimiter.cpp:314-316
Timestamp: 2025-01-10T21:43:00.963Z
Learning: The Discord plugin's rate limiter is initialized during plugin load with a default value of 83 requests per second in `libpurple3_plugin_load`, and is properly cleaned up in `libpurple3_plugin_unload`. The rate limit values are later adjusted based on Discord's response headers through `UpdateRateLimits`.
🔇 Additional comments (5)
ratelimiter.cpp (5)

272-277: LGTM! Good use of smart pointers and thread safety.

The global state management is well-implemented with proper memory management using smart pointers and thread safety using recursive mutex.


313-342: LGTM! Well-implemented rate-limited command execution.

The function has proper error handling, uses promises for async operation, and correctly integrates with GLib's timeout system.


223-226: ⚠️ Potential issue

Add error handling in command execution loop.

Commands are executed without any error handling, which could crash the event loop thread.

 for (Command& func : readBuffer)
 {
+    try {
         func();
+    } catch (const std::exception& e) {
+        g_warning("Command execution failed: %s", e.what());
+    } catch (...) {
+        g_warning("Command execution failed with unknown error");
+    }
 }

Likely invalid or redundant comment.


23-26: ⚠️ Potential issue

Add input validation and overflow protection in TokenBucket constructor.

The constructor needs validation for the input parameter and protection against division by zero or overflow.

 explicit TokenBucket(guint rateLimitPerSecond)
-    : tokens(rateLimitPerSecond), maxTokens(rateLimitPerSecond), interval(1000 / rateLimitPerSecond) {
+    : tokens(rateLimitPerSecond), maxTokens(rateLimitPerSecond),
+      interval(rateLimitPerSecond > 0 ? 1000 / rateLimitPerSecond : 1000) {
+    if (rateLimitPerSecond == 0) {
+        g_warning("Invalid rate limit of 0 requests per second, defaulting to 1");
+    }
     g_info("Initializing token bucket with an interval of %d tokens per second.", rateLimitPerSecond);
 }

Likely invalid or redundant comment.


316-318: ⚠️ Potential issue

Remove auto-initialization and investigate plugin initialization.

Based on the learnings, the rate limiter should be initialized during plugin load with a default value of 83 requests per second. Auto-initialization with a hardcoded value of 15 could cause API issues.

Let's investigate why the plugin's initialization isn't working:

✅ Verification successful

Remove redundant auto-initialization to prevent API issues

The rate limiter is already properly initialized:

  • During plugin load with correct value (83)
  • Dynamically adjusted based on server responses

The auto-initialization with value 15 should be removed as it:

  • Uses incorrect rate limit value
  • Could mask initialization failures that should be fixed
  • Might interfere with dynamic rate limiting
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Check plugin initialization
rg -A 5 "initialize_rate_limiter" --type c
# Look for plugin load function
ast-grep --pattern 'libpurple3_plugin_load($$$)'

Length of output: 920


Script:

#!/bin/bash
# Find the plugin initialization function and its context
rg -B 5 -A 10 "initialize_rate_limiter\(83\)" --type c

# Check for rate limiter usage to understand the execution flow
rg "g_warning.*Rate limiter not initialized" --type c -B 5

Length of output: 637


Script:

#!/bin/bash
# Search for any rate limiter related warnings with broader pattern
rg -B 5 "warning.*[Rr]ate.*[Ll]imiter" --type c

# Also check for g_warning calls near rate limiter usage
rg -B 5 "g_warning.*[Rr]ate" --type c

Length of output: 501

* @var writeBuffer
* @brief Buffer to store commands waiting to be executed.
*/
std::vector<Command> writeBuffer;

🛠️ Refactor suggestion

Consider adding a maximum queue size limit.

The writeBuffer vector could grow unbounded, potentially causing memory issues under high load.

-std::vector<Command> writeBuffer;
+std::vector<Command> writeBuffer;
+static constexpr size_t MAX_QUEUE_SIZE = 1000;

Then in enqueue methods:

 void enqueue(Command&& callable) {
     std::lock_guard<std::mutex> lockguard(commandsMutex);
+    if (writeBuffer.size() >= MAX_QUEUE_SIZE) {
+        g_warning("Command queue full, dropping command");
+        return;
+    }
     writeBuffer.emplace_back(std::move(callable));
 }

Committable suggestion skipped: line range outside the PR's diff.

@Metalhead33
Author

It appears relatively stable now that I have reintegrated the original g_timeout_add. Weird.

@Metalhead33
Author

@EionRobb Can we have autobuilds made of this version too? Some people might be willing to try it out.
