Dynamic Discovery
Dynamic Discovery (D2) allows clients to register to receive property updates in a generic manner. The client registers a key (string), and will receive property updates associated with that key. These property updates can come from a file, ZooKeeper, Glu, etc. Currently, D2 contains the following implementations of Dynamic Discovery:
- ZooKeeper
- File System
- In-Memory
- Toggling
Dynamic Discovery is fairly flexible in how it allows data to be discovered. Essentially, a user registers for a channel that it is interested in and receives updates for that channel in the form of a “property”. A “property” can be any class. For instance, in the Load Balancer State, we might be interested in the “WidgetService” service, so we would register with the service registry to listen for “WidgetService” property updates.
In D2, a store is a way to get/put/delete properties.
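For orientation, here is a minimal sketch of that idea. The interface name and method signatures are illustrative only and are not the actual com.linkedin.d2 API, which also deals with serialization and asynchronous operation.

```java
// Minimal sketch of a D2-style store: get/put/delete properties by string key.
// The name and signatures are illustrative, not the real com.linkedin.d2 API.
public interface SimplePropertyStore<T> {
  T get(String propertyName);
  void put(String propertyName, T propertyValue);
  void remove(String propertyName);
}
```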
D2 contains two ZooKeeper implementations of DynamicDiscovery. The first is the ZooKeeperPermanentStore. This store operates by attaching listeners to a file in ZooKeeper. Every time the file is updated, the listeners are notified of the property change. The second is the ZooKeeperEphemeralStore. This store operates by attaching listeners to a ZooKeeper directory and putting sequential ephemeral nodes inside of the directory. The ZooKeeperEphemeralStore is provided with a “merger” that merges all ephemeral nodes into a single property. Whenever a node is added to or removed from the directory, the ZooKeeperEphemeralStore re-merges all nodes in the directory and sends the merged result to all listeners.
In the software load balancing system, the permanent store is used for cluster and service properties, while the ephemeral store is used for URI properties.
The file system implementation simply uses a directory on the local filesystem to manage property updates. When a property is updated, a file named after the property’s key is written to disk. For instance, putting a property named “foo” would create /file/system/path/foo and store the serialized property data in it. The file system store will then alert all listeners of the update.
The in-memory implementation of Dynamic Discovery just uses a HashMap to store properties by key. Whenever a put/delete occurs, the HashMap is updated and the listeners are notified.
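A minimal sketch of that behavior, building on the hypothetical SimplePropertyStore interface above. The class and listener names are made up for illustration and the real in-memory store differs in detail; note that, like the real stores, this sketch is not thread safe.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative in-memory store: a HashMap keyed by property name plus a list
// of listeners notified on every put/remove. Names are hypothetical; the real
// d2 in-memory store differs in detail. Like the real stores, it is not thread safe.
public class InMemoryStoreSketch<T> implements SimplePropertyStore<T> {
  public interface Listener<T> {
    void onPut(String propertyName, T propertyValue);
    void onRemove(String propertyName);
  }

  private final Map<String, T> _properties = new HashMap<>();
  private final List<Listener<T>> _listeners = new ArrayList<>();

  public void register(Listener<T> listener) {
    _listeners.add(listener);
  }

  @Override
  public T get(String propertyName) {
    return _properties.get(propertyName);
  }

  @Override
  public void put(String propertyName, T propertyValue) {
    _properties.put(propertyName, propertyValue);
    for (Listener<T> listener : _listeners) {
      listener.onPut(propertyName, propertyValue);
    }
  }

  @Override
  public void remove(String propertyName) {
    _properties.remove(propertyName);
    for (Listener<T> listener : _listeners) {
      listener.onRemove(propertyName);
    }
  }
}
```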
The toggling store wraps another PropertyStore. The purpose of the toggling store is to allow a store to be “toggled off”. By toggling a store off, all future put/get/removes will be ignored. This class is useful because LinkedIn wants to toggle the ZooKeeper stores off if connectivity is lost with the ZooKeeper cluster (until a human being can verify the state of the cluster and re-enable connectivity).
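Reusing the hypothetical SimplePropertyStore interface from the earlier sketch, the toggling behavior can be summarized roughly as follows; this is illustrative and not the real d2 toggling store class.

```java
// Illustrative toggling wrapper: when disabled, puts/gets/removes are ignored.
// Names are hypothetical; the real d2 toggling store classes differ in detail.
public class TogglingStoreSketch<T> implements SimplePropertyStore<T> {
  private final SimplePropertyStore<T> _store;
  private volatile boolean _enabled = true;

  public TogglingStoreSketch(SimplePropertyStore<T> store) {
    _store = store;
  }

  // Exposed (e.g. via JMX) so an operator can toggle the store back on.
  public void setEnabled(boolean enabled) {
    _enabled = enabled;
  }

  @Override
  public T get(String propertyName) {
    return _enabled ? _store.get(propertyName) : null;
  }

  @Override
  public void put(String propertyName, T propertyValue) {
    if (_enabled) {
      _store.put(propertyName, propertyValue);
    }
  }

  @Override
  public void remove(String propertyName) {
    if (_enabled) {
      _store.remove(propertyName);
    }
  }
}
```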
In D2, a registry is a way to listen for properties. Registries allow you to register/unregister on a given channel. Most stores also implement the registry interface. Thus, if you’re interested in updates for a given channel, you would register with the store, and every time a put/delete is made, the store will update the listeners for that channel.
By default, none of the stores in Dynamic Discovery are thread safe. To make the stores thread safe, a PropertyStoreMessenger can be used. The messenger is basically a wrapper around a store that forces all writes to go through a single thread. Reads still happen synchronously.
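A rough sketch of the single-writer idea, again with hypothetical names rather than the real PropertyStoreMessenger API:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Illustrative single-writer wrapper: all writes are funneled through one
// thread while reads go straight to the underlying store, mirroring the
// behavior described above. Names are hypothetical.
public class StoreMessengerSketch<T> implements SimplePropertyStore<T> {
  private final SimplePropertyStore<T> _store;
  private final ExecutorService _writer = Executors.newSingleThreadExecutor();

  public StoreMessengerSketch(SimplePropertyStore<T> store) {
    _store = store;
  }

  @Override
  public void put(String propertyName, T propertyValue) {
    _writer.execute(() -> _store.put(propertyName, propertyValue)); // serialized write
  }

  @Override
  public void remove(String propertyName) {
    _writer.execute(() -> _store.remove(propertyName)); // serialized write
  }

  @Override
  public T get(String propertyName) {
    return _store.get(propertyName); // reads happen synchronously
  }
}
```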
Dynamic Discovery also contains a messenger chain implementation. The chain allows multiple stores to be connected to a single registry. Whenever the registry sends an update for a property, the stores will receive the update (onAdd, onRemove, etc) in the order that they were registered. This is useful for chaining a ZooKeeper registry to a File Store, and then chaining the File Store to the Load Balancer State (see below).
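Conceptually, the chain looks something like the following fan-out, where each subscriber sees updates in registration order. The interface and class names here are illustrative, not the real chain implementation.

```java
import java.util.ArrayList;
import java.util.List;

// Conceptual sketch of the messenger chain: one registry fans updates out to
// several subscribers in registration order (e.g. ZooKeeper -> file store ->
// load balancer state). Names are illustrative.
public class RegistryChainSketch<T> {
  public interface Subscriber<T> {
    void onAdd(String propertyName, T propertyValue);
    void onRemove(String propertyName);
  }

  private final List<Subscriber<T>> _subscribers = new ArrayList<>();

  public void register(Subscriber<T> subscriber) {
    _subscribers.add(subscriber);
  }

  // When the backing registry reports a change, each subscriber sees it in
  // the order it registered.
  public void publishAdd(String propertyName, T propertyValue) {
    for (Subscriber<T> subscriber : _subscribers) {
      subscriber.onAdd(propertyName, propertyValue);
    }
  }

  public void publishRemove(String propertyName) {
    for (Subscriber<T> subscriber : _subscribers) {
      subscriber.onRemove(propertyName);
    }
  }
}
```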
The software load balancer takes a cluster of machines for a specific service and figures out which machine to send a message to. This is useful for both RPC and HTTP calls. Broadly speaking, the D2 load balancer has a client side and a server side.
Cluster: a set of machines that can perform an identical task. Example: machines #1, #2, #3, and #4 all run the same service, so these machines form one cluster.
Service: the name for a particular resource. A cluster may expose multiple resources; each of these resources is called a “service”.
URI: the resource identifier. Example: http://hostname:port/bla/bla/bla/my-service/widget/1
There is currently one real implementation of a LoadBalancer in com.linkedin.d2.balancer. This implementation is called SimpleLoadBalancer. Other implementations of LoadBalancer, such as ZKFSLoadBalancer, wrap this SimpleLoadBalancer. In any case, the simple load balancer contains one important method: getClient. The getClient method is called with a URN such as “urn:MyService:/getWidget”. The responsibility of the load balancer is to return a client that can handle the request, if one is available, or to throw a ServiceUnavailableException if no client is available.
When getClient is called on the simple load balancer, it does the following (a condensed sketch follows this list):
- First tries to extract the service name from the URI that was provided.
- It then makes sure that it’s listening to that service in the LoadBalancerState.
- It then makes sure that it’s listening to the service’s cluster in the LoadBalancerState.
- If either the service or cluster is unknown, it will throw a ServiceUnavailableException.
- It will then iterate through the prioritized schemes (prpc, http, etc) for the cluster.
- For each scheme, it will get all URIs in the service’s cluster for that scheme, and ask the service’s load balancer strategy to load balance them.
- If the load balancer strategy returns a client, it will be returned, otherwise the next scheme will be tried.
- If all schemes are exhausted, and no client was found, a ServiceUnavailableException will be thrown.
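The flow above, condensed into code. Every nested type and helper here (StateView, ServiceInfo, ClusterInfo, Strategy, Client) is a hypothetical stand-in for the real SimpleLoadBalancer internals; only the overall control flow mirrors the steps listed above.

```java
import java.util.List;

// Condensed sketch of the getClient flow. All nested types are hypothetical
// stand-ins for SimpleLoadBalancer internals.
public class GetClientFlowSketch {
  public interface StateView {
    ServiceInfo getService(String serviceName); // null if unknown
    ClusterInfo getCluster(String clusterName); // null if unknown
  }

  public interface ServiceInfo {
    String clusterName();
    Strategy strategy();
  }

  public interface ClusterInfo {
    List<String> prioritizedSchemes();            // e.g. ["prpc", "http"]
    List<Client> clientsForScheme(String scheme);
  }

  public interface Strategy {
    Client choose(List<Client> candidates);       // may return null
  }

  public interface Client {}

  public static class ServiceUnavailableException extends Exception {
    public ServiceUnavailableException(String message) {
      super(message);
    }
  }

  private final StateView _state;

  public GetClientFlowSketch(StateView state) {
    _state = state;
  }

  public Client getClient(String serviceName) throws ServiceUnavailableException {
    ServiceInfo service = _state.getService(serviceName);
    if (service == null) {
      throw new ServiceUnavailableException("unknown service: " + serviceName);
    }
    ClusterInfo cluster = _state.getCluster(service.clusterName());
    if (cluster == null) {
      throw new ServiceUnavailableException("unknown cluster for: " + serviceName);
    }
    // Try each scheme in priority order; the first client the strategy returns wins.
    for (String scheme : cluster.prioritizedSchemes()) {
      Client client = service.strategy().choose(cluster.clientsForScheme(scheme));
      if (client != null) {
        return client;
      }
    }
    throw new ServiceUnavailableException("no client available for: " + serviceName);
  }
}
```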
Here is an example of the code flow when a request comes in. For the sake of this example, we’ll use a fictional widget service. Let’s also say that in order to get the data for a widget resource, we need to contact three different backend services: WidgetX, WidgetY, and WidgetZ.
On the server side:
- When a machine joins a cluster, say we add machine #24 to the widget server cluster, the discovery server code on machine #24 will “announce” to the D2 ZooKeeper ensemble that another machine has joined the widget server cluster.
- It will tell ZooKeeper the URI of machine #24.
- All the “listeners” for the widget server service will be notified (these are the clients, for example the widget front-end). Since the load balancing strategy lives on the client side, each client determines which machine gets the next request.
On the client side:
- A request comes to http://example.com/widget/1
- The HTTP load balancer knows that /widget/ is redirected to widget service (this is not the D2 load balancer)
- One of the machines in widget front-end gets the request and processes it.
- Since there is D2 client code in every WAR and that client code is connected to the D2 ZooKeeper ensemble, the client code knows how to load balance the request and choose a machine for each service needed to construct the response.
- In this case we assume the widget front-end needs a resource from the WidgetX, WidgetY, and WidgetZ backends, so the D2 client code in the widget front-end listens to these three services in ZooKeeper.
- In the example, the D2 client code in widget front-end chooses machine #14 for WidgetX backend, machine #5 for WidgetY and machine #33 for WidgetZ backend.
- The requests are then dispatched to the corresponding machines.
The load balancer knows about three properties:
- ServiceProperties
- ClusterProperties
- UriProperties
ServiceProperties defines a service’s name, cluster, path, and load balancer strategy. The load balancer uses the cluster name to find potential nodes for a service, then uses the load balancer strategy to get a client for one of the nodes in the service’s cluster. Finally, it appends the service’s path to the end of the chosen cluster node’s URI.
ClusterProperties defines a cluster’s name, schemes, banned nodes, and connection properties.
- The schemes are defined in their priority. That is, if a cluster supports both HTTP and Spring RPC, for instance, the order of the schemes defines in which order the load balancer will try to find a client.
- The banned nodes are a list of nodes (URIs) that belong to the cluster, but should not be called.
- The connection properties map is a map of string key/value pairs that will be passed to the connection factory when instantiating new connections to a given node in the cluster.
UriProperties defines a cluster name and a set of URIs associated with the cluster. Each URI is also given a weight, which is passed to the load balancer strategy.
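To see how the three property types fit together, here is a small illustration of the path-appending behavior described above; the host, port, and path values are made up for the example.

```java
import java.net.URI;

// Illustrative composition: a node URI from UriProperties plus the service
// path from ServiceProperties yields the endpoint the client will call.
// All values here are made up.
public class PropertyCompositionExample {
  public static void main(String[] args) {
    URI clusterNode = URI.create("http://hostname:1234"); // from UriProperties (with some weight)
    String servicePath = "/my-service";                   // from ServiceProperties
    URI endpoint = URI.create(clusterNode + servicePath);
    System.out.println(endpoint); // prints http://hostname:1234/my-service
  }
}
```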
The Load Balancer maintains its state through a LoadBalancerState class. Currently there is only one implementation of this interface: SimpleLoadBalancerState.
The SimpleLoadBalancerState listens to a registry for updates to ClusterProperties, UriProperties, and ServiceProperties. It maintains an in-memory map that associates each property with its cluster or service, and exposes get() methods so that the load balancer can retrieve the properties from the state.
The Load Balancer wraps r2 clients with three classes: TrackerClient, RewriteClient, and LazyClient. The underlying R2 clients are: HttpNettyClient, FilterChainClient and FactoryClient.
The TrackerClient attaches a CallTracker and Degrader to a URI. When a call is made to this client, it uses the CallTracker to track the call and then forwards it to the underlying R2 client.
The RewriteClient simply rewrites URIs from the URN style to a URL style. For example, it will rewrite “urn:MyService:/getWidget” to “http://hostname:port/my-service/widgets/getWidget”.
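A toy version of that rewrite, with a made-up helper and made-up host/path values; the real RewriteClient derives these from the discovered properties.

```java
// Illustrative rewrite in the spirit of RewriteClient: replace the URN prefix
// with a resolved host and service path. The helper and values are made up.
public class RewriteExample {
  static String rewrite(String urn, String hostAndPort, String servicePath) {
    // "urn:MyService:/getWidget" -> keep everything after the second ':'
    String resourcePath = urn.substring(urn.indexOf(':', "urn:".length()) + 1);
    return "http://" + hostAndPort + servicePath + resourcePath;
  }

  public static void main(String[] args) {
    System.out.println(rewrite("urn:MyService:/getWidget", "hostname:8080", "/my-service/widgets"));
    // prints http://hostname:8080/my-service/widgets/getWidget
  }
}
```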
The LazyClient is just a wrapper that does not actually create an r2 client until the first rest/rpc request is made.
Load balancer strategies have one responsibility: given a list of TrackerClients for a cluster, return one that can be used to make a service call. There are currently two implementations of load balancer strategies: random and degrader.
The random load balancer strategy simply chooses a random tracker client from the list that it is given; if the list is empty, it returns null. This is the default behavior for development environments, where one may wish to use the same machine for every service; with this strategy, the single “dev” tracker client is always returned to route the request (and prevent confusion).
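That behavior is simple enough to sketch directly; the type name here is illustrative, not the real strategy class.

```java
import java.util.List;
import java.util.Random;

// Sketch of the random strategy described above: pick any client from the
// list, or return null if the list is empty.
public class RandomStrategySketch<C> {
  private final Random _random = new Random();

  public C choose(List<C> trackerClients) {
    if (trackerClients == null || trackerClients.isEmpty()) {
      return null;
    }
    return trackerClients.get(_random.nextInt(trackerClients.size()));
  }
}
```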
The load balancer strategy that performs degradation is the DegraderLoadBalancerStrategy. Here are some facts about the degrader strategy (a weighted-selection sketch follows the list):
- Each node in a cluster (TrackerClient) has an associated CallTracker and Degrader.
- The CallTracker tracks things like latency, number of exceptions, number of calls, etc for a given URI endpoint in the cluster.
- The Degrader uses the CallTracker to try and figure out whether to drop traffic, how much traffic to drop, the health of the node, etc. This is boiled down to a “drop rate” score between 0 and 1.
- DegraderLoadBalancerStrategy takes a maximum cluster latency when it is constructed.
- If the cluster’s average latency per node is less than the max cluster latency, all calls will go through. The probability of selecting a node in the cluster will depend on its computed drop rate (nodes with lower drop rates will be weighted higher), but no messages will be dropped.
- If the cluster’s average latency per node is greater than the max cluster latency, the balancer will begin allowing the nodes to drop traffic (using the degrader’s checkDrop method).
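To make the “nodes with lower drop rates are weighted higher” point concrete, here is an illustrative weighted selection. It ignores the latency check and call dropping, and all names are made up; it is not the real DegraderLoadBalancerStrategy.

```java
import java.util.List;
import java.util.Random;

// Illustrative weighted selection: each node's weight is (1 - dropRate), so
// healthier nodes are chosen more often. Omits the max-latency check and
// checkDrop; names are hypothetical.
public class DropRateWeightedChooser {
  public interface Node {
    double dropRate(); // between 0 (healthy) and 1 (fully degraded)
  }

  private final Random _random = new Random();

  public Node choose(List<? extends Node> nodes) {
    double totalWeight = 0.0;
    for (Node node : nodes) {
      totalWeight += 1.0 - node.dropRate();
    }
    if (totalWeight <= 0.0) {
      return null; // no nodes, or every node is fully degraded
    }
    double target = _random.nextDouble() * totalWeight;
    for (Node node : nodes) {
      target -= 1.0 - node.dropRate();
      if (target <= 0.0) {
        return node;
      }
    }
    return nodes.get(nodes.size() - 1); // floating-point edge case
  }
}
```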
D2 currently supports range-based and hash-based partitioning.
TODO: add “Partitioning Support for Dynamic Discovery”
TODO: add download for CLI
To add a cluster to ZooKeeper, issue a command like:
./lb-tool.sh --put-cluster=cluster-1 --schemes=http --store=zk://localhost:2181/d2/clusters
To add a service to ZooKeeper, issue a command like:
./lb-tool.sh --put-service=service-1 --cluster=cluster-1 --path=/service-1 --balancer=degrader --store=zk://localhost:2181/d2/services
To delete a property from ZooKeeper, issue a command like:
./lb-tool.sh --delete=service-1 --store=zk://localhost:2181/d2/services
To get a property from ZooKeeper, issue a command like:
./lb-tool.sh --get=cluster-1 --serializer=com.linkedin.d2.balancer.properties.ClusterPropertiesJsonSerializer --store=zk://localhost:2181/d2/clusters
TODO
A disabled ZooKeeper client can be re-enabled using the setEnabled operation. These beans exist in the com.linkedin.d2 JMX namespace. Alternatively, lb-tool.sh provides a --toggle operation.
To re-enable a ZooKeeper toggling store using lb-tool.sh, issue a command like:
./lb-tool.sh --toggle
This command will scan localhost for all Java processes that are running a JMX server. For each Java process with a JMX server, it will connect and scan the com.linkedin.d2 namespace for *TogglingStore beans. The command will issue a setEnabled(true) call on any toggling store bean that it finds.
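For a single process with a known JMX port, the core of that operation looks roughly like the following. The JMX service URL, the port, and the exact bean name pattern are assumptions made for illustration; the real tool also handles discovering local JVMs.

```java
import java.util.Set;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

// Sketch of re-enabling toggling stores over JMX for one process. The port
// and the bean name pattern are assumptions; lb-tool.sh additionally scans
// localhost for JVMs rather than using a fixed URL.
public class ReenableTogglingStores {
  public static void main(String[] args) throws Exception {
    JMXServiceURL url =
        new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi");
    try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
      MBeanServerConnection connection = connector.getMBeanServerConnection();
      // Find beans in the com.linkedin.d2 domain whose name contains "TogglingStore"
      // (the "name" key is an assumption about how the beans are registered).
      Set<ObjectName> beans = connection.queryNames(
          new ObjectName("com.linkedin.d2:name=*TogglingStore*,*"), null);
      for (ObjectName bean : beans) {
        // Re-enable the store, as the --toggle operation does.
        connection.invoke(bean, "setEnabled",
            new Object[]{Boolean.TRUE}, new String[]{"boolean"});
      }
    }
  }
}
```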