
error: 'IncorrectClusterState() #81

Open · ThomasBlock opened this issue May 12, 2024 · 1 comment
Hi.
I ran a testnet liquidator without problems.
I have my mainnet liquidator running, but I am not sure whether it is actually operational: it has been running for two weeks without performing a single liquidation.

How can I fix IncorrectClusterState, or the criticalStatus{app="liquidation-cli"} 1 metric?

log:

Environment="LOG_LEVEL=info" 
cli.sh --ssv-sync-env=prod --ssv-sync=v4.mainnet --node-url=XX --private-key=YY --gas-price=high --hide-table=1 --max-visible-blocks=50000
Starting API
[Nest] 12159  - 05/12/2024, 3:11:35 PM     LOG [NestFactory] Starting Nest application...
[Nest] 12159  - 05/12/2024, 3:11:35 PM     LOG [InstanceLoader] TypeOrmModule dependencies initialized +116ms
[Nest] 12159  - 05/12/2024, 3:11:35 PM     LOG [InstanceLoader] HttpModule dependencies initialized +13ms
[Nest] 12159  - 05/12/2024, 3:11:35 PM     LOG [InstanceLoader] ConfigHostModule dependencies initialized +1ms
[Nest] 12159  - 05/12/2024, 3:11:35 PM     LOG [InstanceLoader] DiscoveryModule dependencies initialized +0ms
[Nest] 12159  - 05/12/2024, 3:11:35 PM     LOG [InstanceLoader] ConfigModule dependencies initialized +20ms
[Nest] 12159  - 05/12/2024, 3:11:35 PM     LOG [InstanceLoader] ScheduleModule dependencies initialized +1ms
[Nest] 12159  - 05/12/2024, 3:11:35 PM     LOG [InstanceLoader] SharedModule dependencies initialized +1ms
[Nest] 12159  - 05/12/2024, 3:11:35 PM     LOG [InstanceLoader] TypeOrmCoreModule dependencies initialized +79ms
[Nest] 12159  - 05/12/2024, 3:11:35 PM     LOG [InstanceLoader] TypeOrmModule dependencies initialized +0ms
[Nest] 12159  - 05/12/2024, 3:11:35 PM     LOG [InstanceLoader] TypeOrmModule dependencies initialized +1ms
[Nest] 12159  - 05/12/2024, 3:11:35 PM     LOG [InstanceLoader] TypeOrmModule dependencies initialized +0ms
[Nest] 12159  - 05/12/2024, 3:11:35 PM     LOG [InstanceLoader] SystemModule dependencies initialized +0ms
[Nest] 12159  - 05/12/2024, 3:11:35 PM     LOG [InstanceLoader] ClusterModule dependencies initialized +1ms
[Nest] 12159  - 05/12/2024, 3:11:35 PM     LOG [InstanceLoader] EarningModule dependencies initialized +0ms
[Nest] 12159  - 05/12/2024, 3:11:35 PM     LOG [InstanceLoader] WebappModule dependencies initialized +0ms
[Nest] 12159  - 05/12/2024, 3:11:35 PM     LOG [InstanceLoader] WorkerModule dependencies initialized +1ms
[Nest] 12159  - 05/12/2024, 3:11:35 PM     LOG [RoutesResolver] MetricsController {/metrics}: +161ms
[Nest] 12159  - 05/12/2024, 3:11:35 PM     LOG [RouterExplorer] Mapped {/metrics, GET} route +4ms
[Nest] 12159  - 05/12/2024, 3:11:35 PM     LOG [NestApplication] Nest application successfully started +55ms
WebApp is running on port: 3000
Starting Liquidator worker
[Nest] 12159  - 05/12/2024, 3:11:35 PM     LOG [NestFactory] Starting Nest application... +4ms
isLiquidated { error: 'IncorrectClusterState()', hash: '0x12e04c87' } {
  owner: '0x61AD82D4466437d4Cc250a0ceFfBCbD7e07b8f96',
  operatorIds: '126,127,128,129'
}
getBurnRate { error: 'IncorrectClusterState()', hash: '0x12e04c87' } {
  owner: '0x61AD82D4466437d4Cc250a0ceFfBCbD7e07b8f96',
  operatorIds: '126,127,128,129'
}
getBalance { error: 'IncorrectClusterState()', hash: '0x12e04c87' } {
  owner: '0x61AD82D4466437d4Cc250a0ceFfBCbD7e07b8f96',
  operatorIds: '126,127,128,129'
}

After that I see many more IncorrectClusterState errors in my logs.
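For reference, the hash field in those log lines is the 4-byte Solidity custom error selector: keccak256("IncorrectClusterState()") truncated to 4 bytes is exactly the 0x12e04c87 shown above. As far as I can tell, the SSV contracts revert with IncorrectClusterState() when the cluster struct passed into a call does not hash to the cluster state stored on-chain, i.e. the liquidator's local view of that cluster is stale rather than the contract being broken. A minimal sketch (assuming ethers v6) for checking a selector like this yourself:

// Sketch: map the 4-byte selector from the liquidator log back to the
// custom error name, and decode raw revert data (assumes ethers v6).
import { id, Interface } from "ethers";

// keccak256 of the error signature; the first 4 bytes are the selector.
const selector = id("IncorrectClusterState()").slice(0, 10);
console.log(selector); // expected: 0x12e04c87, matching the "hash" field in the log above

// With the error declared on an Interface, raw revert data can be decoded:
const iface = new Interface(["error IncorrectClusterState()"]);
const decoded = iface.parseError("0x12e04c87");
console.log(decoded?.name); // "IncorrectClusterState"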

metrics:

# HELP fetchStatus Status of fetch part of CLI
# TYPE fetchStatus gauge
fetchStatus{app="liquidation-cli"} 1

# HELP liquidationStatus The number of failed transactions for a time period
# TYPE liquidationStatus gauge
liquidationStatus{app="liquidation-cli"} 1

# HELP burnRatesStatus Status of burn rates part of CLI
# TYPE burnRatesStatus gauge
burnRatesStatus{app="liquidation-cli"} 1

# HELP totalActiveClusters Total active clusters count
# TYPE totalActiveClusters gauge
totalActiveClusters{app="liquidation-cli"} 456

# HELP totalLiquidatableClusters Total liquidatable clusters count
# TYPE totalLiquidatableClusters gauge
totalLiquidatableClusters{app="liquidation-cli"} 0

# HELP burnt10LiquidatableClusters Clusters gone through 10% of their liquidation collateral counter
# TYPE burnt10LiquidatableClusters gauge
burnt10LiquidatableClusters{app="liquidation-cli"} 0

# HELP burnt50LiquidatableClusters Clusters gone through 50% of their liquidation collateral counter
# TYPE burnt50LiquidatableClusters gauge
burnt50LiquidatableClusters{app="liquidation-cli"} 0

# HELP burnt90LiquidatableClusters Clusters gone through 90% of their liquidation collateral counter
# TYPE burnt90LiquidatableClusters gauge
burnt90LiquidatableClusters{app="liquidation-cli"} 0

# HELP burnt99LiquidatableClusters Clusters gone through 99% of their liquidation collateral counter
# TYPE burnt99LiquidatableClusters gauge
burnt99LiquidatableClusters{app="liquidation-cli"} 0

# HELP criticalStatus Status of any part of liquidator which requires immediate attention
# TYPE criticalStatus gauge
criticalStatus{app="liquidation-cli"} 1

# HELP lastBlockNumberMetric The last synced block number
# TYPE lastBlockNumberMetric gauge
lastBlockNumberMetric{app="liquidation-cli"} 19854197

# HELP liquidatorETHBalance Current balance of the wallet of the liquidator
# TYPE liquidatorETHBalance gauge
liquidatorETHBalance{app="liquidation-cli"} 0.05
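Since criticalStatus is the catch-all "needs immediate attention" gauge, one way to notice this sooner than two weeks in is to poll the /metrics endpoint (the web app listens on port 3000 per the startup log). A minimal sketch, assuming Node 18+ with the built-in fetch; the port and metric name come from the output above, everything else is illustrative:

// Sketch: poll the liquidator's Prometheus endpoint and warn when the
// criticalStatus gauge is non-zero (assumes Node 18+ global fetch).
const METRICS_URL = "http://localhost:3000/metrics"; // port taken from the startup log

async function checkCriticalStatus(): Promise<void> {
  const body = await (await fetch(METRICS_URL)).text();
  const match = body.match(/^criticalStatus\{[^}]*\}\s+(\d+)/m);
  const value = match ? Number(match[1]) : NaN;
  if (value !== 0) {
    console.error(`liquidation-cli criticalStatus=${value} -- needs attention`);
  } else {
    console.log("liquidation-cli looks healthy");
  }
}

checkCriticalStatus().catch(err => console.error("metrics scrape failed:", err));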
ThomasBlock (Author) commented:

Update: I changed the RPC endpoint from Besu to Geth.

At first it did not boot:

[Nest] 16614  - 05/17/2024, 8:30:23 AM     LOG [RouterExplorer] Mapped {/metrics, GET} route +2ms
getMinimumLiquidationCollateral null
[CRITICAL] unhandledRejection UpdateValuesMissingError: Cannot perform update query because update values are not defined. Call "qb.set(...)" method to specify updated values.
    at UpdateQueryBuilder.createUpdateExpression (/root/ssv-liquidator/src/query-builder/UpdateQueryBuilder.ts:681:19)
    at UpdateQueryBuilder.getQuery (/root/ssv-liquidator/src/query-builder/UpdateQueryBuilder.ts:53:21)
    at UpdateQueryBuilder.getQueryAndParameters (/root/ssv-liquidator/src/query-builder/QueryBuilder.ts:507:28)
    at UpdateQueryBuilder.execute (/root/ssv-liquidator/src/query-builder/UpdateQueryBuilder.ts:142:50)
    at SystemService.save (/root/ssv-liquidator/src/modules/system/system.service.ts:39:7)
    at WorkerService.onModuleInit (/root/ssv-liquidator/src/services/worker/worker.service.ts:20:5)
    at async Promise.all (index 0)
    at callModuleInitHook (/root/ssv-liquidator/node_modules/@nestjs/core/hooks/on-module-init.hook.js:43:5)
    at NestApplication.callInitHook (/root/ssv-liquidator/node_modules/@nestjs/core/nest-application-context.js:224:13)
    at NestApplication.init (/root/ssv-liquidator/node_modules/@nestjs/core/nest-application.js:98:9)
    at NestApplication.listen (/root/ssv-liquidator/node_modules/@nestjs/core/nest-application.js:168:33)
    at bootstrapApi (/root/ssv-liquidator/src/services/worker/worker.tsx:47:3)
    at bootstrap (/root/ssv-liquidator/src/services/worker/worker.tsx:89:3)

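The getMinimumLiquidationCollateral null line just before the crash suggests the view call against the new node returned nothing, after which the worker tried to persist an empty update (hence the UpdateValuesMissingError from the query builder). A quick way to check whether a given --node-url actually serves that SSV view call is to hit the contract directly; a minimal sketch, assuming ethers v6, with the SSVNetworkViews address left as a placeholder to be filled in from the SSV docs (not a value from this issue):

// Sketch: verify that an execution-layer RPC endpoint answers the SSV view call
// the liquidator needs on startup (assumes ethers v6).
import { Contract, JsonRpcProvider } from "ethers";

const NODE_URL = "http://localhost:8545";   // your --node-url
const SSV_NETWORK_VIEWS = "0x...";          // SSVNetworkViews address from the SSV docs (placeholder)
const ABI = ["function getMinimumLiquidationCollateral() view returns (uint256)"];

async function main(): Promise<void> {
  const provider = new JsonRpcProvider(NODE_URL);
  const views = new Contract(SSV_NETWORK_VIEWS, ABI, provider);
  const collateral = await views.getMinimumLiquidationCollateral();
  console.log("getMinimumLiquidationCollateral:", collateral.toString());
}

main().catch(err => console.error("view call failed:", err));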
I then tried deleting data/local.db. That helped, but now it crashes after about a minute:

[Nest] 18023  - 05/17/2024, 8:40:31 AM     LOG [NestApplication] Nest application successfully started +84ms
WebApp is running on port: 3000
Starting Liquidator worker
[Nest] 18023  - 05/17/2024, 8:40:31 AM     LOG [NestFactory] Starting Nest application... +1ms
[Nest] 18023  - 05/17/2024, 8:40:32 AM   ERROR [FetchTask] Sync updates error: QueryFailedError: SQLITE_CONSTRAINT: UNIQUE constraint failed: system.type
[Nest] 18023  - 05/17/2024, 8:40:43 AM   ERROR [FetchTask] Sync updates error: QueryFailedError: SQLITE_CONSTRAINT: UNIQUE constraint failed: cluster.owner, cluster.operatorIds
[Nest] 18023  - 05/17/2024, 8:41:35 AM   ERROR [FetchTask] Sync updates error: QueryFailedError: SQLITE_CONSTRAINT: UNIQUE constraint failed: cluster.owner, cluster.operatorIds
[Nest] 18023  - 05/17/2024, 8:41:46 AM   ERROR [FetchTask] Sync updates error: QueryFailedError: SQLITE_CONSTRAINT: UNIQUE constraint failed: cluster.owner, cluster.operatorIds
[Nest] 18023  - 05/17/2024, 8:42:24 AM   ERROR [FetchTask] Sync updates error: QueryFailedError: SQLITE_CONSTRAINT: UNIQUE constraint failed: cluster.owner, cluster.operatorIds
[Nest] 18023  - 05/17/2024, 8:42:29 AM   ERROR [FetchTask] Sync updates error: QueryFailedError: SQLITE_CONSTRAINT: UNIQUE constraint failed: cluster.owner, cluster.operatorIds
[Nest] 18023  - 05/17/2024, 8:42:35 AM   ERROR [FetchTask] Sync updates error: QueryFailedError: SQLITE_BUSY: database is locked
[Nest] 18023  - 05/17/2024, 8:42:36 AM   ERROR [FetchTask] Sync updates error: QueryFailedError: SQLITE_BUSY: database is locked
[Nest] 18023  - 05/17/2024, 8:42:37 AM   ERROR [FetchTask] Sync updates error: QueryFailedError: SQLITE_BUSY: database is locked
QueryFailedError: SQLITE_BUSY: database is locked
    at Statement.handler (/root/ssv-liquidator/src/driver/sqlite/SqliteQueryRunner.ts:113:26)
    at Statement.replacement (/root/ssv-liquidator/node_modules/sqlite3/lib/trace.js:25:27) {
  query: 'SELECT "System"."type" AS "System_type", "System"."payload" AS "System_payload" FROM "system" "System" WHERE ("System"."type" = ?) LIMIT 1',
  parameters: [ 'GENERAL_LAST_BLOCK_NUMBER' ],
  driverError: Error: SQLITE_BUSY: database is locked
  --> in Database#all('SELECT "System"."type" AS "System_type", "System"."payload" AS "System_payload" FROM "system" "System" WHERE ("System"."type" = ?) LIMIT 1', [ 'GENERAL_LAST_BLOCK_NUMBER' ], [Function: handler])
      at execute (/root/ssv-liquidator/src/driver/sqlite/SqliteQueryRunner.ts:77:46)
      at /root/ssv-liquidator/src/driver/sqlite/SqliteQueryRunner.ts:137:19
      at processTicksAndRejections (node:internal/process/task_queues:95:5) {
    errno: 5,
    code: 'SQLITE_BUSY',
    __augmented: true
  },
  errno: 5,
  code: 'SQLITE_BUSY',
  __augmented: true
}
^C
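SQLITE_BUSY means two connections to the same SQLite file (here apparently the API and worker parts of the CLI sharing data/local.db) are colliding on the write lock. If you run the CLI from source, one common mitigation is to open the database in WAL mode so readers no longer block writers; a minimal sketch of a TypeORM data-source config using enableWAL, which is a standard option of TypeORM's sqlite driver -- the entity list is a placeholder, not the project's actual config:

// Sketch: open the liquidator's SQLite database in WAL mode so concurrent
// readers/writers are less likely to hit SQLITE_BUSY (assumes TypeORM 0.3.x).
import { DataSource } from "typeorm";

export const dataSource = new DataSource({
  type: "sqlite",
  database: "data/local.db",   // path mentioned in this issue
  enableWAL: true,             // write-ahead logging: readers no longer block writers
  entities: [/* project entities go here -- placeholder */],
  synchronize: false,
});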

Let's try a Nethermind node. It still seems unhealthy, but at least IncorrectClusterState() is gone?

[Nest] 692  - 05/17/2024, 8:48:46 AM     LOG [RouterExplorer] Mapped {/metrics, GET} route +2ms
[Nest] 692  - 05/17/2024, 8:48:46 AM     LOG [NestApplication] Nest application successfully started +33ms
WebApp is running on port: 3000
Starting Liquidator worker
[Nest] 692  - 05/17/2024, 8:48:46 AM     LOG [NestFactory] Starting Nest application... +2ms
[Nest] 692  - 05/17/2024, 8:48:47 AM   ERROR [FetchTask] Sync updates error: QueryFailedError: SQLITE_CONSTRAINT: UNIQUE constraint failed: system.type
[Nest] 692  - 05/17/2024, 8:48:54 AM   ERROR [FetchTask] Sync updates error: QueryFailedError: SQLITE_CONSTRAINT: UNIQUE constraint failed: cluster.owner, cluster.operatorIds
[Nest] 692  - 05/17/2024, 8:48:55 AM   ERROR [FetchTask] Sync updates error: QueryFailedError: SQLITE_CONSTRAINT: UNIQUE constraint failed: cluster.owner, cluster.operatorIds
[Nest] 692  - 05/17/2024, 8:49:25 AM   ERROR [FetchTask] Sync updates error: QueryFailedError: SQLITE_CONSTRAINT: UNIQUE constraint failed: cluster.owner, cluster.operatorIds
[Nest] 692  - 05/17/2024, 8:49:31 AM   ERROR [FetchTask] Sync updates error: QueryFailedError: SQLITE_CONSTRAINT: UNIQUE constraint failed: cluster.owner, cluster.operatorIds
[Nest] 692  - 05/17/2024, 8:49:33 AM   ERROR [FetchTask] Sync updates error: QueryFailedError: SQLITE_CONSTRAINT: UNIQUE constraint failed: cluster.owner, cluster.operatorIds
[Nest] 692  - 05/17/2024, 8:50:05 AM   ERROR [FetchTask] Sync updates error: QueryFailedError: SQLITE_CONSTRAINT: UNIQUE constraint failed: cluster.owner, cluster.operatorIds
[Nest] 692  - 05/17/2024, 8:50:08 AM   ERROR [FetchTask] Sync updates error: QueryFailedError: SQLITE_CONSTRAINT: UNIQUE constraint failed: cluster.owner, cluster.operatorIds
[Nest] 692  - 05/17/2024, 8:50:18 AM   ERROR [FetchTask] Sync updates error: QueryFailedError: SQLITE_CONSTRAINT: UNIQUE constraint failed: cluster.owner, cluster.operatorIds
[Nest] 692  - 05/17/2024, 8:50:19 AM   ERROR [FetchTask] Sync updates error: QueryFailedError: SQLITE_CONSTRAINT: UNIQUE constraint failed: cluster.owner, cluster.operatorIds
[Nest] 692  - 05/17/2024, 8:50:43 AM   ERROR [FetchTask] Sync updates error: QueryFailedError: SQLITE_CONSTRAINT: UNIQUE constraint failed: cluster.owner, cluster.operatorIds
[Nest] 692  - 05/17/2024, 8:50:44 AM   ERROR [FetchTask] Sync updates error: QueryFailedError: SQLITE_CONSTRAINT: UNIQUE constraint failed: cluster.owner, cluster.operatorIds
[Nest] 692  - 05/17/2024, 8:50:48 AM   ERROR [FetchTask] Sync updates error: QueryFailedError: SQLITE_BUSY: database is locked
[Nest] 692  - 05/17/2024, 8:50:49 AM   ERROR [FetchTask] Sync updates error: QueryFailedError: SQLITE_BUSY: database is locked
[Nest] 692  - 05/17/2024, 8:50:56 AM   ERROR [FetchTask] Sync updates error: QueryFailedError: SQLITE_CONSTRAINT: UNIQUE constraint failed: cluster.owner, cluster.operatorIds
[Nest] 692  - 05/17/2024, 8:50:58 AM   ERROR [FetchTask] Sync updates error: QueryFailedError: SQLITE_CONSTRAINT: UNIQUE constraint failed: cluster.owner, cluster.operatorIds
[Nest] 692  - 05/17/2024, 8:51:12 AM   ERROR [FetchTask] Sync updates error: QueryFailedError: SQLITE_CONSTRAINT: UNIQUE constraint failed: cluster.owner, cluster.operatorIds

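The repeated "UNIQUE constraint failed: cluster.owner, cluster.operatorIds" errors mean the sync task is inserting a cluster row that already exists for that (owner, operatorIds) pair, so after wiping local.db the fetcher keeps tripping over its own duplicates. One way a sync loop like this avoids the problem is to upsert on the unique key instead of doing a plain insert; a minimal sketch with TypeORM's repository upsert, where Cluster is a hypothetical entity and only the column names are taken from the error message:

// Sketch: write cluster rows as an upsert keyed on the unique (owner, operatorIds)
// pair reported in the error, instead of a plain insert (assumes TypeORM 0.3.x
// and a hypothetical Cluster entity with these columns).
import { DataSource } from "typeorm";
import { Cluster } from "./cluster.entity"; // hypothetical entity matching the cluster table

async function saveCluster(dataSource: DataSource, row: Partial<Cluster>): Promise<void> {
  await dataSource.getRepository(Cluster).upsert(row, ["owner", "operatorIds"]);
}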
