Implementing load balancing with nginx #199
A transparent TCP load balancer inserted between Diameter peers (this is what you're trying to achieve, right?) won't be Diameter-aware. So it sounds like it is working exactly as a plain TCP proxy would, just not as a Diameter-application-aware load balancer. I don't think you can achieve what you want unless nginx were Diameter-aware, which I doubt it is: it would need to embed most of a Diameter stack itself, answer CERs from clients, initiate its own CERs toward the downstream peers, and route individual Diameter messages. I don't know nginx internals or their state of the art, so it would be interesting if someone has a different view. My feeling is that you're using the wrong tool here, and you might want to look for a Diameter Routing Agent (DRA), or a Diameter-to/from-HTTP/2 inter-working function instead.
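To make "Diameter-aware" concrete: a balancer that routes per message (rather than per TCP connection) would at minimum have to parse each message's fixed 20-byte header before deciding where to send it. A rough sketch in Python (not nginx code; the field layout follows RFC 6733, and the sample bytes below are synthetic):

```python
# Diameter header layout per RFC 6733, section 3 (20 bytes):
#   byte 0       version (always 1)
#   bytes 1-3    message length (header + AVPs)
#   byte 4       command flags (0x80 = Request bit)
#   bytes 5-7    command code (257 = Capabilities-Exchange, i.e. CER/CEA)
#   bytes 8-11   application id
#   bytes 12-15  hop-by-hop id
#   bytes 16-19  end-to-end id
def parse_diameter_header(data: bytes) -> dict:
    if len(data) < 20:
        raise ValueError("need at least 20 bytes")
    return {
        "version": data[0],
        "length": int.from_bytes(data[1:4], "big"),
        "is_request": bool(data[4] & 0x80),
        "code": int.from_bytes(data[5:8], "big"),
        "app_id": int.from_bytes(data[8:12], "big"),
    }

# Example: a synthetic CER header (header only, no AVPs)
cer = (
    bytes([1])                  # version 1
    + (20).to_bytes(3, "big")   # message length
    + bytes([0x80])             # flags: Request
    + (257).to_bytes(3, "big")  # command code: CER
    + (0).to_bytes(4, "big")    # application id
    + (1).to_bytes(4, "big")    # hop-by-hop id
    + (1).to_bytes(4, "big")    # end-to-end id
)
hdr = parse_diameter_header(cer)
print(hdr["is_request"], hdr["code"])  # → True 257
```

And parsing headers is only the start: a real DRA also has to answer the CER itself, keep its own peer state toward each backend, and rewrite hop-by-hop identifiers so answers come back to the right client. That whole state machine is what nginx's stream module does not have.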
Yes, I am looking for a Diameter routing agent. Is there any proxy software or framework that you have used or are familiar with?
Hi Guys,
My service architecture is Keepalived + nginx + backend servers, with nginx configured in front of two Diameter service instances. How can I make the Diameter service support load balancing? What happens now is that after the client establishes a connection to nginx, the CER request is forwarded to the first Diameter node, which returns a successful CEA; subsequent request messages on that connection are then also forwarded by nginx to the first Diameter node. My expectation is that requests from the same connection would be forwarded to different Diameter nodes in round-robin fashion, achieving load balancing.
Another issue: nginx and the first Diameter service are deployed on the same host, while the second Diameter service is deployed on another host. When the client sends requests to nginx, no matter how many connections it establishes, subsequent requests are all forwarded to the first host. My expectation is that, when different connections send requests to nginx's IP and port, and the forwarding strategy is round-robin, nginx should distribute the messages round-robin across both Diameter nodes, even though one of them is deployed on the same host as nginx.
This is my nginx configuration:
stream {
    upstream backend_servers {
        server 192.107.21.1:3868;
        server 192.107.21.2:3868;
    }
}
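As posted, the stream block only defines an upstream; nothing listens on a port or proxies to it, so some part of the configuration appears to be missing from the paste. A minimal completion might look like the following (the front-end listen port 3868 is an assumption). Note that nginx's default stream balancing is round-robin, but the choice is made once per client TCP connection, not once per Diameter message, which matches the behavior described above:

```
stream {
    upstream backend_servers {
        # round-robin by default, selected once per TCP connection
        server 192.107.21.1:3868;
        server 192.107.21.2:3868;
    }

    server {
        listen 3868;                 # assumed front-end port
        proxy_pass backend_servers;
    }
}
```

Because Diameter peers hold a single long-lived connection, per-connection balancing means everything from one client lands on one backend; distribution across backends only happens across distinct client connections.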
Thank you, my dear bro