The following warnings were logged during key generation and heartbeat signing on a machine with good performance. We need to investigate the cause and decide whether some parameters should be tweaked.
[email protected]/floodsub.go:95 dropping message to peer 16Uiu2HAkwbed4RCLysj3TRKeR8UabPThKqWWvd1sW2HtAWgm7nYx: queue full
[email protected]/pubsub.go:950 Can't deliver message to subscription for topic tbtc-04b0a483e97dfbb15e88ecbc2ef8f7e37776dd713eed54089f87704a4fbae0442aa3ff5a0ec7f4d52bf4680b5b9b70be9accb64970df11b96abb61d48be7d8db8b; subscriber too slow
Log provided by one of the beta stakers: 65a4eb.log
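For context, both warnings come from go-libp2p-pubsub's internal queues: the `queue full` drop in `floodsub.go` fires when a peer's outbound message queue overflows, and the `subscriber too slow` drop in `pubsub.go` fires when a subscription's receive buffer is full. Below is a minimal sketch of the subscriber side, assuming the floodsub router (as the log's file name suggests) and go-libp2p-pubsub's Subscribe-time `WithBufferSize` option; the topic name and the buffer value are illustrative, not keep-core's actual wiring:

```go
package main

import (
	"context"
	"log"

	libp2p "github.com/libp2p/go-libp2p"
	pubsub "github.com/libp2p/go-libp2p-pubsub"
)

func main() {
	ctx := context.Background()

	host, err := libp2p.New()
	if err != nil {
		log.Fatal(err)
	}

	ps, err := pubsub.NewFloodSub(ctx, host)
	if err != nil {
		log.Fatal(err)
	}

	topic, err := ps.Join("tbtc-example") // illustrative topic name
	if err != nil {
		log.Fatal(err)
	}

	// The subscription receive buffer defaults to 32 messages; once it
	// is full, pubsub logs "Can't deliver message to subscription for
	// topic ...; subscriber too slow" and drops the message. 128 is an
	// illustrative value, not a tuned recommendation.
	sub, err := topic.Subscribe(pubsub.WithBufferSize(128))
	if err != nil {
		log.Fatal(err)
	}

	// A larger buffer only buys time: the consumer must still drain
	// messages via Next faster than they arrive on average.
	for {
		msg, err := sub.Next(ctx)
		if err != nil {
			return
		}
		_ = msg // hand off to the protocol layer here
	}
}
```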
One of our nodes loses all of its peer connections three times a day (on every heartbeat sync). Analyzing graphs, I see it consumes much more memory and CPU during the sync process. Before the peers are disconnected I see a lot of queue full as well as Can't deliver message to subscription for topic messages, and then all the peers and bootstrap nodes are disconnected. Interestingly, only one of six nodes fails like this; all the rest work normally, and the same node fails every time. I tried rebuilding the OS and all the software from scratch and adding more memory and CPU, but all in vain. From my perspective, for some reason some nodes handle many more requests, which fills the queue rapidly. So we either need to find out why that happens or try to increase the queue length. Attaching logs of such an event.
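If increasing the queue length is the direction we take, the sender-side knob is go-libp2p-pubsub's per-peer outbound queue, whose overflow produces exactly the queue full drop seen above. A hedged sketch of passing it at construction time, together with the inbound validation queue; both appear to default to 32 in this version of the library, and 128 is an arbitrary example rather than a tested value:

```go
package main

import (
	"context"
	"log"

	libp2p "github.com/libp2p/go-libp2p"
	pubsub "github.com/libp2p/go-libp2p-pubsub"
)

// newPubSub sketches how the queue-size options would be passed; the
// function itself is hypothetical, not keep-core's actual setup code.
func newPubSub(ctx context.Context) (*pubsub.PubSub, error) {
	host, err := libp2p.New()
	if err != nil {
		return nil, err
	}
	return pubsub.NewFloodSub(ctx, host,
		// Per-peer outbound queue; when it overflows, floodsub.go logs
		// "dropping message to peer ...: queue full".
		pubsub.WithPeerOutboundQueueSize(128),
		// Queue of inbound messages awaiting validation before they
		// are delivered to subscribers.
		pubsub.WithValidateQueueSize(128),
	)
}

func main() {
	ps, err := newPubSub(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	_ = ps
}
```

Raising these sizes trades memory for headroom during keygen bursts; it won't help if one node is systematically receiving far more traffic than its peers, which is worth investigating first.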