PBM-1422 test selective restore with name remapping #182

GitHub Actions / JUnit Test Report failed Nov 13, 2024 in 0s

40 tests run, 24 passed, 8 skipped, 8 failed.

Annotations

Check failure on line 71 in pbm-functional/pytest/test_rename_replicaset.py


test_rename_replicaset.test_logical_pitr_crud_PBM_T270[inserts]

AssertionError: assert 'cloning namespace (--ns-to) is already in use' in 'Error: parse --ns-from and --ns-to options: cloning with restore to the point-in-time is not possible\n'
 +  where 'Error: parse --ns-from and --ns-to options: cloning with restore to the point-in-time is not possible\n' = CommandResult(backend=<testinfra.backend.docker.DockerBackend object at 0x7f3bf2117110>, exit_status=1, command=b'pbm ...'', _stderr=b'Error: parse --ns-from and --ns-to options: cloning with restore to the point-in-time is not possible\n').stderr
Raw output
start_cluster = True, cluster = <cluster.Cluster object at 0x7f3bed1d7790>
collection = 'inserts'

    @pytest.mark.timeout(300,func_only=True)
    @pytest.mark.parametrize('collection',['inserts','replaces','updates','deletes'])
    def test_logical_pitr_crud_PBM_T270(start_cluster,cluster,collection):
        cluster.check_pbm_status()
        cluster.make_backup("logical")
        cluster.enable_pitr(pitr_extra_args="--set pitr.oplogSpanMin=0.1")
        time.sleep(5)
        for i in range(10):
            client = pymongo.MongoClient(cluster.connection)
            client["test"]["inserts"].insert_one({"key": i+10, "data": i+10})
            client["test"]["replaces"].replace_one({"key": i}, {"key": i+10, "data": i+10})
            client["test"]["updates"].update_one({"key": i}, {"$inc": { "data": 10 }})
            client["test"]["deletes"].delete_one({"key": i})
        time.sleep(5)
        pitr = datetime.utcnow().strftime("%Y-%m-%dT%H:%M:%S")
        pitr=" --time=" + pitr
        Cluster.log("Time for PITR is: " + pitr)
        time.sleep(10)
        cluster.disable_pitr()
        time.sleep(5)
        backup_to_fail=pitr + " --ns-from=test." + collection + " --ns-to=test.test"
        result=cluster.exec_pbm_cli("restore" + backup_to_fail + " --wait")
        assert result.rc != 0
>       assert "cloning namespace (--ns-to) is already in use" in result.stderr
E       AssertionError: assert 'cloning namespace (--ns-to) is already in use' in 'Error: parse --ns-from and --ns-to options: cloning with restore to the point-in-time is not possible\n'
E        +  where 'Error: parse --ns-from and --ns-to options: cloning with restore to the point-in-time is not possible\n' = CommandResult(backend=<testinfra.backend.docker.DockerBackend object at 0x7f3bf2117110>, exit_status=1, command=b'pbm ...'', _stderr=b'Error: parse --ns-from and --ns-to options: cloning with restore to the point-in-time is not possible\n').stderr

test_rename_replicaset.py:71: AssertionError
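
The assertion fails because PBM 2.7.0 refuses the --ns-from/--ns-to pair combined with --time while still parsing options, before it ever checks whether the --ns-to namespace is in use; the same mismatch recurs in the PBM_T270[updates] and PBM_T271 failures below. A minimal sketch, assuming the parse-time rejection is intended behavior rather than a regression, of how the test's expectation could be loosened (both strings are copied from the captured stderr; this is not the repository's actual fix):

    # Sketch only: accept either rejection, since cloning combined with
    # point-in-time restore now fails at option parsing in PBM 2.7.0.
    result = cluster.exec_pbm_cli("restore" + backup_to_fail + " --wait")
    assert result.rc != 0
    assert ("cloning namespace (--ns-to) is already in use" in result.stderr
            or "cloning with restore to the point-in-time is not possible" in result.stderr)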

Check failure on line 52 in pbm-functional/pytest/test_rename_replicaset.py


test_rename_replicaset.test_logical_pitr_crud_PBM_T270[replaces]

AssertionError: Backup failed{"Error":"get backup metadata: get: context deadline exceeded"}

Raw output
start_cluster = True, cluster = <cluster.Cluster object at 0x7f3bed1d7790>
collection = 'replaces'

    @pytest.mark.timeout(300,func_only=True)
    @pytest.mark.parametrize('collection',['inserts','replaces','updates','deletes'])
    def test_logical_pitr_crud_PBM_T270(start_cluster,cluster,collection):
        cluster.check_pbm_status()
>       cluster.make_backup("logical")

test_rename_replicaset.py:52: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <cluster.Cluster object at 0x7f3bed1d7790>, type = 'logical'

    def make_backup(self, type):
        n = testinfra.get_host("docker://" + self.pbm_cli)
        timeout = time.time() + 120
        while True:
            running = self.get_status()['running']
            Cluster.log("Current operation: " + str(running))
            if not running:
                if type:
                    start = n.run(
                        'pbm backup --out=json --type=' + type)
                else:
                    start = n.run('pbm backup --out=json')
                if start.rc == 0:
                    name = json.loads(start.stdout)['name']
                    Cluster.log("Backup started")
                    break
                elif "resync" in start.stdout:
                    Cluster.log("Resync in progress, retrying: " + start.stdout)
                else:
                    logs = n.check_output("pbm logs -sD -t0")
>                   assert False, "Backup failed" + start.stdout + start.stderr + '\n' + logs
E                   AssertionError: Backup failed{"Error":"get backup metadata: get: context deadline exceeded"}
E                   
E                   2024-11-13T10:42:55Z I [rs1/rs103:27017] pbm-agent:
E                   Version:   2.7.0
E                   Platform:  linux/amd64
E                   GitCommit: 79478b12486e64faa442adaa26183969d9b9a16e
E                   GitBranch: dev
E                   BuildTime: 2024-11-13_09:38_UTC
E                   GoVersion: go1.23.3
E                   2024-11-13T10:42:55Z I [rs1/rs101:27017] pbm-agent:
E                   Version:   2.7.0
E                   Platform:  linux/amd64
E                   GitCommit: 79478b12486e64faa442adaa26183969d9b9a16e
E                   GitBranch: dev
E                   BuildTime: 2024-11-13_09:38_UTC
E                   GoVersion: go1.23.3
E                   2024-11-13T10:42:55Z I [rs1/rs101:27017] starting PITR routine
E                   2024-11-13T10:42:55Z I [rs1/rs103:27017] starting PITR routine
E                   2024-11-13T10:42:55Z I [rs1/rs102:27017] starting PITR routine
E                   2024-11-13T10:42:55Z I [rs1/rs102:27017] pbm-agent:
E                   Version:   2.7.0
E                   Platform:  linux/amd64
E                   GitCommit: 79478b12486e64faa442adaa26183969d9b9a16e
E                   GitBranch: dev
E                   BuildTime: 2024-11-13_09:38_UTC
E                   GoVersion: go1.23.3
E                   2024-11-13T10:42:55Z I [rs1/rs103:27017] node: rs1/rs103:27017
E                   2024-11-13T10:42:55Z I [rs1/rs101:27017] node: rs1/rs101:27017
E                   2024-11-13T10:42:55Z E [rs1/rs101:27017] [agentCheckup] check storage connection: unable to get storage: get config: get: mongo: no documents in result
E                   2024-11-13T10:42:55Z E [rs1/rs103:27017] [agentCheckup] check storage connection: unable to get storage: get config: get: mongo: no documents in result
E                   2024-11-13T10:42:55Z I [rs1/rs101:27017] conn level ReadConcern: majority; WriteConcern: majority
E                   2024-11-13T10:42:55Z E [rs1/rs102:27017] [agentCheckup] check storage connection: unable to get storage: get config: get: mongo: no documents in result
E                   2024-11-13T10:42:55Z I [rs1/rs103:27017] conn level ReadConcern: majority; WriteConcern: majority
E                   2024-11-13T10:42:55Z I [rs1/rs101:27017] listening for the commands
E                   2024-11-13T10:42:55Z I [rs1/rs102:27017] node: rs1/rs102:27017
E                   2024-11-13T10:42:55Z I [rs1/rs103:27017] listening for the commands
E                   2024-11-13T10:42:55Z I [rs1/rs102:27017] conn level ReadConcern: majority; WriteConcern: majority
E                   2024-11-13T10:42:55Z I [rs1/rs102:27017] listening for the commands
E                   2024-11-13T10:43:00Z E [rs1/rs102:27017] [agentCheckup] check storage connection: unable to get storage: get config: get: mongo: no documents in result
E                   2024-11-13T10:43:00Z E [rs1/rs101:27017] [agentCheckup] check storage connection: unable to get storage: get config: get: mongo: no documents in result
E                   2024-11-13T10:43:00Z E [rs1/rs103:27017] [agentCheckup] check storage connection: unable to get storage: get config: get: mongo: no documents in result
E                   2024-11-13T10:43:02Z I [rs1/rs101:27017] got command resync <ts: 1731494581>, opid: 673482b5c56f177a17552c2d
E                   2024-11-13T10:43:02Z I [rs1/rs103:27017] got command resync <ts: 1731494581>, opid: 673482b5c56f177a17552c2d
E                   2024-11-13T10:43:02Z I [rs1/rs101:27017] got epoch {1731494580 6}
E                   2024-11-13T10:43:02Z I [rs1/rs103:27017] got epoch {1731494580 6}
E                   2024-11-13T10:43:02Z I [rs1/rs102:27017] got command resync <ts: 1731494581>, opid: 673482b5c56f177a17552c2d
E                   2024-11-13T10:43:02Z I [rs1/rs102:27017] got epoch {1731494580 6}
E                   2024-11-13T10:43:02Z D [rs1/rs103:27017] [resync] lock not acquired
E                   2024-11-13T10:43:02Z I [rs1/rs101:27017] [resync] started
E                   2024-11-13T10:43:02Z D [rs1/rs102:27017] [resync] lock not acquired
E                   2024-11-13T10:43:02Z D [rs1/rs101:27017] [resync] uploading ".pbm.init" [size hint: 5 (5.00B); part size: 10485760 (10.00MB)]
E                   2024-11-13T10:43:02Z D [rs1/rs101:27017] [resync] got backups list: 0
E                   2024-11-13T10:43:02Z D [rs1/rs101:27017] [resync] got physical restores list: 0
E                   2024-11-13T10:43:02Z D [rs1/rs101:27017] [resync] epoch set to {1731494582 16}
E                   2024-11-13T10:43:02Z I [rs1/rs101:27017] [resync] succeed
E                   2024-11-13T10:43:07Z I [rs1/rs103:27017] got command resync <ts: 1731494586>, opid: 673482ba028d9a064894f960
E                   2024-11-13T10:43:07Z I [rs1/rs101:27017] got command resync <ts: 1731494586>, opid: 673482ba028d9a064894f960
E                   2024-11-13T10:43:07Z I [rs1/rs103:27017] got epoch {1731494582 16}
E                   2024-11-13T10:43:07Z I [rs1/rs101:27017] got epoch {1731494582 16}
E                   2024-11-13T10:43:07Z I [rs1/rs102:27017] got command resync <ts: 1731494586>, opid: 673482ba028d9a064894f960
E                   2024-11-13T10:43:07Z I [rs1/rs102:27017] got epoch {1731494582 16}
E                   2024-11-13T10:43:07Z I [rs1/rs103:27017] [resync] started
E                   2024-11-13T10:43:07Z D [rs1/rs101:27017] [resync] lock not acquired
E                   2024-11-13T10:43:07Z I [rs1/rs101:27017] got command backup [name: 2024-11-13T10:43:07Z, compression: none (level: default)] <ts: 1731494587>, opid: 673482bb39af1745310cb9e7
E                   2024-11-13T10:43:07Z D [rs1/rs102:27017] [resync] lock not acquired
E                   2024-11-13T10:43:07Z D [rs1/rs103:27017] [resync] got backups list: 0
E                   2024-11-13T10:43:07Z I [rs1/rs101:27017] got epoch {1731494582 16}
E                   2024-11-13T10:43:07Z I [rs1/rs102:27017] got command backup [name: 2024-11-13T10:43:07Z, compression: none (level: default)] <ts: 1731494587>, opid: 673482bb39af1745310cb9e7
E                   2024-11-13T10:43:07Z I [rs1/rs102:27017] got epoch {1731494582 16}
E                   2024-11-13T10:43:07Z E [rs1/rs101:27017] [backup/2024-11-13T10:43:07Z] unable to proceed with the backup, active lock is present
E                   2024-11-13T10:43:07Z D [rs1/rs103:27017] [resync] got physical restores list: 0
E                   2024-11-13T10:43:07Z D [rs1/rs103:27017] [resync] epoch set to {1731494587 21}
E                   2024-11-13T10:43:07Z I [rs1/rs103:27017] [resync] succeed
E                   2024-11-13T10:43:07Z I [rs1/rs103:27017] got command backup [name: 2024-11-13T10:43:07Z, compression: none (level: default)] <ts: 1731494587>, opid: 673482bb39af1745310cb9e7
E                   2024-11-13T10:43:07Z I [rs1/rs103:27017] got epoch {1731494587 21}
E                   2024-11-13T10:43:22Z D [rs1/rs102:27017] [backup/2024-11-13T10:43:07Z] nomination timeout
E                   2024-11-13T10:43:22Z D [rs1/rs102:27017] [backup/2024-11-13T10:43:07Z] skip after nomination, probably started by another node
E                   2024-11-13T10:43:22Z D [rs1/rs103:27017] [backup/2024-11-13T10:43:07Z] nomination timeout
E                   2024-11-13T10:43:22Z D [rs1/rs103:27017] [backup/2024-11-13T10:43:07Z] skip after nomination, probably started by another node

cluster.py:393: AssertionError
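
The agent logs point to a startup race rather than a broken backup: the backup command arrives while the resync lock is still held ("unable to proceed with the backup, active lock is present"), the other nodes skip after the nomination timeout, and the CLI then gives up with "get backup metadata: get: context deadline exceeded". Note that the quoted make_backup() computes a timeout deadline but never checks it, and only retries when the literal string "resync" appears in stdout, so this error aborts the run immediately; the PBM_T270[deletes] and PBM_T271[updates] failures below are the same race. A minimal sketch, assuming the intent is to retry transient startup errors until the 120-second deadline (the retry conditions here are illustrative, not the repository's actual fix):

    def make_backup(self, type):
        n = testinfra.get_host("docker://" + self.pbm_cli)
        deadline = time.time() + 120
        while time.time() < deadline:
            running = self.get_status()['running']
            if running:
                Cluster.log("Current operation: " + str(running))
                time.sleep(1)
                continue
            cmd = 'pbm backup --out=json' + (' --type=' + type if type else '')
            start = n.run(cmd)
            if start.rc == 0:
                Cluster.log("Backup started")
                return json.loads(start.stdout)['name']
            out = start.stdout + start.stderr
            # Treat resync and lock/deadline races as transient and retry.
            if "resync" in out or "lock" in out or "deadline exceeded" in out:
                Cluster.log("Transient error, retrying: " + out)
                time.sleep(5)
                continue
            logs = n.check_output("pbm logs -sD -t0")
            assert False, "Backup failed" + out + '\n' + logs
        assert False, "Backup did not start within 120 seconds"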

Check failure on line 71 in pbm-functional/pytest/test_rename_replicaset.py


test_rename_replicaset.test_logical_pitr_crud_PBM_T270[updates]

AssertionError: assert 'cloning namespace (--ns-to) is already in use' in 'Error: parse --ns-from and --ns-to options: cloning with restore to the point-in-time is not possible\n'
 +  where 'Error: parse --ns-from and --ns-to options: cloning with restore to the point-in-time is not possible\n' = CommandResult(backend=<testinfra.backend.docker.DockerBackend object at 0x7f3bf2117110>, exit_status=1, command=b'pbm ...'', _stderr=b'Error: parse --ns-from and --ns-to options: cloning with restore to the point-in-time is not possible\n').stderr
Raw output
start_cluster = True, cluster = <cluster.Cluster object at 0x7f3bed1d7790>
collection = 'updates'

    @pytest.mark.timeout(300,func_only=True)
    @pytest.mark.parametrize('collection',['inserts','replaces','updates','deletes'])
    def test_logical_pitr_crud_PBM_T270(start_cluster,cluster,collection):
        cluster.check_pbm_status()
        cluster.make_backup("logical")
        cluster.enable_pitr(pitr_extra_args="--set pitr.oplogSpanMin=0.1")
        time.sleep(5)
        for i in range(10):
            client = pymongo.MongoClient(cluster.connection)
            client["test"]["inserts"].insert_one({"key": i+10, "data": i+10})
            client["test"]["replaces"].replace_one({"key": i}, {"key": i+10, "data": i+10})
            client["test"]["updates"].update_one({"key": i}, {"$inc": { "data": 10 }})
            client["test"]["deletes"].delete_one({"key": i})
        time.sleep(5)
        pitr = datetime.utcnow().strftime("%Y-%m-%dT%H:%M:%S")
        pitr=" --time=" + pitr
        Cluster.log("Time for PITR is: " + pitr)
        time.sleep(10)
        cluster.disable_pitr()
        time.sleep(5)
        backup_to_fail=pitr + " --ns-from=test." + collection + " --ns-to=test.test"
        result=cluster.exec_pbm_cli("restore" + backup_to_fail + " --wait")
        assert result.rc != 0
>       assert "cloning namespace (--ns-to) is already in use" in result.stderr
E       AssertionError: assert 'cloning namespace (--ns-to) is already in use' in 'Error: parse --ns-from and --ns-to options: cloning with restore to the point-in-time is not possible\n'
E        +  where 'Error: parse --ns-from and --ns-to options: cloning with restore to the point-in-time is not possible\n' = CommandResult(backend=<testinfra.backend.docker.DockerBackend object at 0x7f3bf2117110>, exit_status=1, command=b'pbm ...'', _stderr=b'Error: parse --ns-from and --ns-to options: cloning with restore to the point-in-time is not possible\n').stderr

test_rename_replicaset.py:71: AssertionError

Check failure on line 52 in pbm-functional/pytest/test_rename_replicaset.py


test_rename_replicaset.test_logical_pitr_crud_PBM_T270[deletes]

AssertionError: Backup failed{"Error":"get backup metadata: get: context deadline exceeded"}

Raw output
start_cluster = True, cluster = <cluster.Cluster object at 0x7f3bed1d7790>
collection = 'deletes'

    @pytest.mark.timeout(300,func_only=True)
    @pytest.mark.parametrize('collection',['inserts','replaces','updates','deletes'])
    def test_logical_pitr_crud_PBM_T270(start_cluster,cluster,collection):
        cluster.check_pbm_status()
>       cluster.make_backup("logical")

test_rename_replicaset.py:52: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <cluster.Cluster object at 0x7f3bed1d7790>, type = 'logical'

    def make_backup(self, type):
        n = testinfra.get_host("docker://" + self.pbm_cli)
        timeout = time.time() + 120
        while True:
            running = self.get_status()['running']
            Cluster.log("Current operation: " + str(running))
            if not running:
                if type:
                    start = n.run(
                        'pbm backup --out=json --type=' + type)
                else:
                    start = n.run('pbm backup --out=json')
                if start.rc == 0:
                    name = json.loads(start.stdout)['name']
                    Cluster.log("Backup started")
                    break
                elif "resync" in start.stdout:
                    Cluster.log("Resync in progress, retrying: " + start.stdout)
                else:
                    logs = n.check_output("pbm logs -sD -t0")
>                   assert False, "Backup failed" + start.stdout + start.stderr + '\n' + logs
E                   AssertionError: Backup failed{"Error":"get backup metadata: get: context deadline exceeded"}
E                   
E                   2024-11-13T10:46:06Z I [rs1/rs103:27017] pbm-agent:
E                   Version:   2.7.0
E                   Platform:  linux/amd64
E                   GitCommit: 79478b12486e64faa442adaa26183969d9b9a16e
E                   GitBranch: dev
E                   BuildTime: 2024-11-13_09:38_UTC
E                   GoVersion: go1.23.3
E                   2024-11-13T10:46:06Z I [rs1/rs103:27017] starting PITR routine
E                   2024-11-13T10:46:06Z I [rs1/rs101:27017] pbm-agent:
E                   Version:   2.7.0
E                   Platform:  linux/amd64
E                   GitCommit: 79478b12486e64faa442adaa26183969d9b9a16e
E                   GitBranch: dev
E                   BuildTime: 2024-11-13_09:38_UTC
E                   GoVersion: go1.23.3
E                   2024-11-13T10:46:06Z I [rs1/rs101:27017] starting PITR routine
E                   2024-11-13T10:46:06Z I [rs1/rs102:27017] pbm-agent:
E                   Version:   2.7.0
E                   Platform:  linux/amd64
E                   GitCommit: 79478b12486e64faa442adaa26183969d9b9a16e
E                   GitBranch: dev
E                   BuildTime: 2024-11-13_09:38_UTC
E                   GoVersion: go1.23.3
E                   2024-11-13T10:46:06Z I [rs1/rs102:27017] starting PITR routine
E                   2024-11-13T10:46:06Z I [rs1/rs103:27017] node: rs1/rs103:27017
E                   2024-11-13T10:46:06Z I [rs1/rs101:27017] node: rs1/rs101:27017
E                   2024-11-13T10:46:06Z E [rs1/rs101:27017] [agentCheckup] check storage connection: unable to get storage: get config: get: mongo: no documents in result
E                   2024-11-13T10:46:06Z E [rs1/rs102:27017] [agentCheckup] check storage connection: unable to get storage: get config: get: mongo: no documents in result
E                   2024-11-13T10:46:06Z I [rs1/rs102:27017] node: rs1/rs102:27017
E                   2024-11-13T10:46:06Z I [rs1/rs103:27017] conn level ReadConcern: majority; WriteConcern: majority
E                   2024-11-13T10:46:06Z E [rs1/rs103:27017] [agentCheckup] check storage connection: unable to get storage: get config: get: mongo: no documents in result
E                   2024-11-13T10:46:06Z I [rs1/rs101:27017] conn level ReadConcern: majority; WriteConcern: majority
E                   2024-11-13T10:46:06Z I [rs1/rs103:27017] listening for the commands
E                   2024-11-13T10:46:06Z I [rs1/rs102:27017] conn level ReadConcern: majority; WriteConcern: majority
E                   2024-11-13T10:46:06Z I [rs1/rs101:27017] listening for the commands
E                   2024-11-13T10:46:06Z I [rs1/rs102:27017] listening for the commands
E                   2024-11-13T10:46:11Z E [rs1/rs101:27017] [agentCheckup] check storage connection: unable to get storage: get config: get: mongo: no documents in result
E                   2024-11-13T10:46:11Z E [rs1/rs102:27017] [agentCheckup] check storage connection: unable to get storage: get config: get: mongo: no documents in result
E                   2024-11-13T10:46:11Z E [rs1/rs103:27017] [agentCheckup] check storage connection: unable to get storage: get config: get: mongo: no documents in result
E                   2024-11-13T10:46:14Z I [rs1/rs103:27017] got command resync <ts: 1731494773>, opid: 673483758b98b52f5b0a9bd4
E                   2024-11-13T10:46:14Z I [rs1/rs103:27017] got epoch {1731494771 6}
E                   2024-11-13T10:46:14Z I [rs1/rs102:27017] got command resync <ts: 1731494773>, opid: 673483758b98b52f5b0a9bd4
E                   2024-11-13T10:46:14Z I [rs1/rs101:27017] got command resync <ts: 1731494773>, opid: 673483758b98b52f5b0a9bd4
E                   2024-11-13T10:46:14Z I [rs1/rs102:27017] got epoch {1731494771 6}
E                   2024-11-13T10:46:14Z I [rs1/rs101:27017] got epoch {1731494771 6}
E                   2024-11-13T10:46:14Z D [rs1/rs102:27017] [resync] lock not acquired
E                   2024-11-13T10:46:14Z I [rs1/rs103:27017] [resync] started
E                   2024-11-13T10:46:14Z D [rs1/rs101:27017] [resync] lock not acquired
E                   2024-11-13T10:46:14Z D [rs1/rs103:27017] [resync] uploading ".pbm.init" [size hint: 5 (5.00B); part size: 10485760 (10.00MB)]
E                   2024-11-13T10:46:14Z D [rs1/rs103:27017] [resync] got backups list: 0
E                   2024-11-13T10:46:14Z D [rs1/rs103:27017] [resync] got physical restores list: 0
E                   2024-11-13T10:46:14Z D [rs1/rs103:27017] [resync] epoch set to {1731494774 16}
E                   2024-11-13T10:46:14Z I [rs1/rs103:27017] [resync] succeed
E                   2024-11-13T10:46:19Z I [rs1/rs103:27017] got command resync <ts: 1731494778>, opid: 6734837ac846a123a0f58d94
E                   2024-11-13T10:46:19Z I [rs1/rs102:27017] got command resync <ts: 1731494778>, opid: 6734837ac846a123a0f58d94
E                   2024-11-13T10:46:19Z I [rs1/rs103:27017] got epoch {1731494774 16}
E                   2024-11-13T10:46:19Z I [rs1/rs101:27017] got command resync <ts: 1731494778>, opid: 6734837ac846a123a0f58d94
E                   2024-11-13T10:46:19Z I [rs1/rs102:27017] got epoch {1731494774 16}
E                   2024-11-13T10:46:19Z I [rs1/rs101:27017] got epoch {1731494774 16}
E                   2024-11-13T10:46:19Z D [rs1/rs102:27017] [resync] lock not acquired
E                   2024-11-13T10:46:19Z I [rs1/rs103:27017] [resync] started
E                   2024-11-13T10:46:19Z D [rs1/rs101:27017] [resync] lock not acquired
E                   2024-11-13T10:46:19Z I [rs1/rs102:27017] got command backup [name: 2024-11-13T10:46:19Z, compression: none (level: default)] <ts: 1731494779>, opid: 6734837be0169e51dbd11a75
E                   2024-11-13T10:46:19Z D [rs1/rs103:27017] [resync] got backups list: 0
E                   2024-11-13T10:46:19Z I [rs1/rs102:27017] got epoch {1731494774 16}
E                   2024-11-13T10:46:19Z I [rs1/rs101:27017] got command backup [name: 2024-11-13T10:46:19Z, compression: none (level: default)] <ts: 1731494779>, opid: 6734837be0169e51dbd11a75
E                   2024-11-13T10:46:19Z I [rs1/rs101:27017] got epoch {1731494774 16}
E                   2024-11-13T10:46:19Z D [rs1/rs103:27017] [resync] got physical restores list: 0
E                   2024-11-13T10:46:19Z E [rs1/rs101:27017] [backup/2024-11-13T10:46:19Z] unable to proceed with the backup, active lock is present
E                   2024-11-13T10:46:19Z D [rs1/rs103:27017] [resync] epoch set to {1731494779 21}
E                   2024-11-13T10:46:19Z I [rs1/rs103:27017] [resync] succeed
E                   2024-11-13T10:46:19Z I [rs1/rs103:27017] got command backup [name: 2024-11-13T10:46:19Z, compression: none (level: default)] <ts: 1731494779>, opid: 6734837be0169e51dbd11a75
E                   2024-11-13T10:46:19Z I [rs1/rs103:27017] got epoch {1731494779 21}
E                   2024-11-13T10:46:34Z D [rs1/rs102:27017] [backup/2024-11-13T10:46:19Z] nomination timeout
E                   2024-11-13T10:46:34Z D [rs1/rs102:27017] [backup/2024-11-13T10:46:19Z] skip after nomination, probably started by another node
E                   2024-11-13T10:46:34Z D [rs1/rs103:27017] [backup/2024-11-13T10:46:19Z] nomination timeout
E                   2024-11-13T10:46:34Z D [rs1/rs103:27017] [backup/2024-11-13T10:46:19Z] skip after nomination, probably started by another node

cluster.py:393: AssertionError

Check failure on line 121 in pbm-functional/pytest/test_rename_replicaset.py


test_rename_replicaset.test_logical_pitr_with_txn_PBM_T271[inserts]

AssertionError: assert 'cloning namespace (--ns-to) is already in use' in 'Error: parse --ns-from and --ns-to options: cloning with restore to the point-in-time is not possible\n'
 +  where 'Error: parse --ns-from and --ns-to options: cloning with restore to the point-in-time is not possible\n' = CommandResult(backend=<testinfra.backend.docker.DockerBackend object at 0x7f3bf2117110>, exit_status=1, command=b'pbm ...'', _stderr=b'Error: parse --ns-from and --ns-to options: cloning with restore to the point-in-time is not possible\n').stderr
Raw output
start_cluster = True, cluster = <cluster.Cluster object at 0x7f3bed1d7790>
collection = 'inserts'

    @pytest.mark.timeout(300,func_only=True)
    @pytest.mark.parametrize('collection',['inserts','replaces','updates','deletes'])
    def test_logical_pitr_with_txn_PBM_T271(start_cluster,cluster,collection):
        cluster.check_pbm_status()
        cluster.make_backup("logical")
        cluster.enable_pitr(pitr_extra_args="--set pitr.oplogSpanMin=0.1")
        time.sleep(5)
        for i in range(10):
            client = pymongo.MongoClient(cluster.connection)
            with client.start_session() as session:
                with session.start_transaction():
                    client["test"]["inserts"].insert_one({"key": i+10, "data": i+10}, session=session)
                    client["test"]["replaces"].replace_one({"key": i}, {"key": i+10, "data": i+10}, session=session)
                    client["test"]["updates"].update_one({"key": i}, {"$inc": { "data": 10 }}, session=session)
                    client["test"]["deletes"].delete_one({"key": i}, session=session)
                session.commit_transaction()
        time.sleep(5)
        pitr = datetime.utcnow().strftime("%Y-%m-%dT%H:%M:%S")
        pitr=" --time=" + pitr
        Cluster.log("Time for PITR is: " + pitr)
        time.sleep(10)
        cluster.disable_pitr()
        time.sleep(5)
        backup_to_fail=pitr + " --ns-from=test." + collection + " --ns-to=test.test"
        result=cluster.exec_pbm_cli("restore" + backup_to_fail + " --wait")
        assert result.rc != 0
>       assert "cloning namespace (--ns-to) is already in use" in result.stderr
E       AssertionError: assert 'cloning namespace (--ns-to) is already in use' in 'Error: parse --ns-from and --ns-to options: cloning with restore to the point-in-time is not possible\n'
E        +  where 'Error: parse --ns-from and --ns-to options: cloning with restore to the point-in-time is not possible\n' = CommandResult(backend=<testinfra.backend.docker.DockerBackend object at 0x7f3bf2117110>, exit_status=1, command=b'pbm ...'', _stderr=b'Error: parse --ns-from and --ns-to options: cloning with restore to the point-in-time is not possible\n').stderr

test_rename_replicaset.py:121: AssertionError

Check failure on line 121 in pbm-functional/pytest/test_rename_replicaset.py


test_rename_replicaset.test_logical_pitr_with_txn_PBM_T271[replaces]

AssertionError: assert 'cloning namespace (--ns-to) is already in use' in 'Error: parse --ns-from and --ns-to options: cloning with restore to the point-in-time is not possible\n'
 +  where 'Error: parse --ns-from and --ns-to options: cloning with restore to the point-in-time is not possible\n' = CommandResult(backend=<testinfra.backend.docker.DockerBackend object at 0x7f3bf2117110>, exit_status=1, command=b'pbm ...'', _stderr=b'Error: parse --ns-from and --ns-to options: cloning with restore to the point-in-time is not possible\n').stderr
Raw output
start_cluster = True, cluster = <cluster.Cluster object at 0x7f3bed1d7790>
collection = 'replaces'

    @pytest.mark.timeout(300,func_only=True)
    @pytest.mark.parametrize('collection',['inserts','replaces','updates','deletes'])
    def test_logical_pitr_with_txn_PBM_T271(start_cluster,cluster,collection):
        cluster.check_pbm_status()
        cluster.make_backup("logical")
        cluster.enable_pitr(pitr_extra_args="--set pitr.oplogSpanMin=0.1")
        time.sleep(5)
        for i in range(10):
            client = pymongo.MongoClient(cluster.connection)
            with client.start_session() as session:
                with session.start_transaction():
                    client["test"]["inserts"].insert_one({"key": i+10, "data": i+10}, session=session)
                    client["test"]["replaces"].replace_one({"key": i}, {"key": i+10, "data": i+10}, session=session)
                    client["test"]["updates"].update_one({"key": i}, {"$inc": { "data": 10 }}, session=session)
                    client["test"]["deletes"].delete_one({"key": i}, session=session)
                session.commit_transaction()
        time.sleep(5)
        pitr = datetime.utcnow().strftime("%Y-%m-%dT%H:%M:%S")
        pitr=" --time=" + pitr
        Cluster.log("Time for PITR is: " + pitr)
        time.sleep(10)
        cluster.disable_pitr()
        time.sleep(5)
        backup_to_fail=pitr + " --ns-from=test." + collection + " --ns-to=test.test"
        result=cluster.exec_pbm_cli("restore" + backup_to_fail + " --wait")
        assert result.rc != 0
>       assert "cloning namespace (--ns-to) is already in use" in result.stderr
E       AssertionError: assert 'cloning namespace (--ns-to) is already in use' in 'Error: parse --ns-from and --ns-to options: cloning with restore to the point-in-time is not possible\n'
E        +  where 'Error: parse --ns-from and --ns-to options: cloning with restore to the point-in-time is not possible\n' = CommandResult(backend=<testinfra.backend.docker.DockerBackend object at 0x7f3bf2117110>, exit_status=1, command=b'pbm ...'', _stderr=b'Error: parse --ns-from and --ns-to options: cloning with restore to the point-in-time is not possible\n').stderr

test_rename_replicaset.py:121: AssertionError

Check failure on line 99 in pbm-functional/pytest/test_rename_replicaset.py


test_rename_replicaset.test_logical_pitr_with_txn_PBM_T271[updates]

AssertionError: Backup failed{"Error":"get backup metadata: get: context deadline exceeded"}

Raw output
start_cluster = True, cluster = <cluster.Cluster object at 0x7f3bed1d7790>
collection = 'updates'

    @pytest.mark.timeout(300,func_only=True)
    @pytest.mark.parametrize('collection',['inserts','replaces','updates','deletes'])
    def test_logical_pitr_with_txn_PBM_T271(start_cluster,cluster,collection):
        cluster.check_pbm_status()
>       cluster.make_backup("logical")

test_rename_replicaset.py:99: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <cluster.Cluster object at 0x7f3bed1d7790>, type = 'logical'

    def make_backup(self, type):
        n = testinfra.get_host("docker://" + self.pbm_cli)
        timeout = time.time() + 120
        while True:
            running = self.get_status()['running']
            Cluster.log("Current operation: " + str(running))
            if not running:
                if type:
                    start = n.run(
                        'pbm backup --out=json --type=' + type)
                else:
                    start = n.run('pbm backup --out=json')
                if start.rc == 0:
                    name = json.loads(start.stdout)['name']
                    Cluster.log("Backup started")
                    break
                elif "resync" in start.stdout:
                    Cluster.log("Resync in progress, retrying: " + start.stdout)
                else:
                    logs = n.check_output("pbm logs -sD -t0")
>                   assert False, "Backup failed" + start.stdout + start.stderr + '\n' + logs
E                   AssertionError: Backup failed{"Error":"get backup metadata: get: context deadline exceeded"}
E                   
E                   2024-11-13T10:51:08Z I [rs1/rs103:27017] pbm-agent:
E                   Version:   2.7.0
E                   Platform:  linux/amd64
E                   GitCommit: 79478b12486e64faa442adaa26183969d9b9a16e
E                   GitBranch: dev
E                   BuildTime: 2024-11-13_09:38_UTC
E                   GoVersion: go1.23.3
E                   2024-11-13T10:51:08Z I [rs1/rs103:27017] starting PITR routine
E                   2024-11-13T10:51:08Z I [rs1/rs102:27017] pbm-agent:
E                   Version:   2.7.0
E                   Platform:  linux/amd64
E                   GitCommit: 79478b12486e64faa442adaa26183969d9b9a16e
E                   GitBranch: dev
E                   BuildTime: 2024-11-13_09:38_UTC
E                   GoVersion: go1.23.3
E                   2024-11-13T10:51:08Z I [rs1/rs101:27017] pbm-agent:
E                   Version:   2.7.0
E                   Platform:  linux/amd64
E                   GitCommit: 79478b12486e64faa442adaa26183969d9b9a16e
E                   GitBranch: dev
E                   BuildTime: 2024-11-13_09:38_UTC
E                   GoVersion: go1.23.3
E                   2024-11-13T10:51:08Z I [rs1/rs101:27017] starting PITR routine
E                   2024-11-13T10:51:08Z I [rs1/rs102:27017] starting PITR routine
E                   2024-11-13T10:51:08Z I [rs1/rs102:27017] node: rs1/rs102:27017
E                   2024-11-13T10:51:08Z E [rs1/rs101:27017] [agentCheckup] check storage connection: unable to get storage: get config: get: mongo: no documents in result
E                   2024-11-13T10:51:08Z I [rs1/rs103:27017] node: rs1/rs103:27017
E                   2024-11-13T10:51:08Z E [rs1/rs102:27017] [agentCheckup] check storage connection: unable to get storage: get config: get: mongo: no documents in result
E                   2024-11-13T10:51:08Z I [rs1/rs101:27017] node: rs1/rs101:27017
E                   2024-11-13T10:51:08Z E [rs1/rs103:27017] [agentCheckup] check storage connection: unable to get storage: get config: get: mongo: no documents in result
E                   2024-11-13T10:51:08Z I [rs1/rs102:27017] conn level ReadConcern: majority; WriteConcern: majority
E                   2024-11-13T10:51:08Z I [rs1/rs103:27017] conn level ReadConcern: majority; WriteConcern: majority
E                   2024-11-13T10:51:08Z I [rs1/rs101:27017] conn level ReadConcern: majority; WriteConcern: majority
E                   2024-11-13T10:51:08Z I [rs1/rs102:27017] listening for the commands
E                   2024-11-13T10:51:08Z I [rs1/rs103:27017] listening for the commands
E                   2024-11-13T10:51:08Z I [rs1/rs101:27017] listening for the commands
E                   2024-11-13T10:51:13Z E [rs1/rs101:27017] [agentCheckup] check storage connection: unable to get storage: get config: get: mongo: no documents in result
E                   2024-11-13T10:51:13Z E [rs1/rs102:27017] [agentCheckup] check storage connection: unable to get storage: get config: get: mongo: no documents in result
E                   2024-11-13T10:51:13Z E [rs1/rs103:27017] [agentCheckup] check storage connection: unable to get storage: get config: get: mongo: no documents in result
E                   2024-11-13T10:51:15Z I [rs1/rs102:27017] got command resync <ts: 1731495074>, opid: 673484a2a4dcee9bf86a9afe
E                   2024-11-13T10:51:15Z I [rs1/rs102:27017] got epoch {1731495073 6}
E                   2024-11-13T10:51:15Z I [rs1/rs101:27017] got command resync <ts: 1731495074>, opid: 673484a2a4dcee9bf86a9afe
E                   2024-11-13T10:51:15Z I [rs1/rs103:27017] got command resync <ts: 1731495074>, opid: 673484a2a4dcee9bf86a9afe
E                   2024-11-13T10:51:15Z I [rs1/rs101:27017] got epoch {1731495073 6}
E                   2024-11-13T10:51:15Z I [rs1/rs103:27017] got epoch {1731495073 6}
E                   2024-11-13T10:51:15Z I [rs1/rs102:27017] [resync] started
E                   2024-11-13T10:51:15Z D [rs1/rs101:27017] [resync] lock not acquired
E                   2024-11-13T10:51:15Z D [rs1/rs103:27017] [resync] lock not acquired
E                   2024-11-13T10:51:15Z D [rs1/rs102:27017] [resync] uploading ".pbm.init" [size hint: 5 (5.00B); part size: 10485760 (10.00MB)]
E                   2024-11-13T10:51:15Z D [rs1/rs102:27017] [resync] got backups list: 0
E                   2024-11-13T10:51:15Z D [rs1/rs102:27017] [resync] got physical restores list: 0
E                   2024-11-13T10:51:15Z D [rs1/rs102:27017] [resync] epoch set to {1731495075 16}
E                   2024-11-13T10:51:15Z I [rs1/rs102:27017] [resync] succeed
E                   2024-11-13T10:51:20Z I [rs1/rs102:27017] got command resync <ts: 1731495079>, opid: 673484a7767b42afebacdae6
E                   2024-11-13T10:51:20Z I [rs1/rs101:27017] got command resync <ts: 1731495079>, opid: 673484a7767b42afebacdae6
E                   2024-11-13T10:51:20Z I [rs1/rs103:27017] got command resync <ts: 1731495079>, opid: 673484a7767b42afebacdae6
E                   2024-11-13T10:51:20Z I [rs1/rs102:27017] got epoch {1731495075 16}
E                   2024-11-13T10:51:20Z I [rs1/rs101:27017] got epoch {1731495075 16}
E                   2024-11-13T10:51:20Z I [rs1/rs103:27017] got epoch {1731495075 16}
E                   2024-11-13T10:51:20Z D [rs1/rs103:27017] [resync] lock not acquired
E                   2024-11-13T10:51:20Z D [rs1/rs101:27017] [resync] lock not acquired
E                   2024-11-13T10:51:20Z I [rs1/rs102:27017] [resync] started
E                   2024-11-13T10:51:20Z I [rs1/rs103:27017] got command backup [name: 2024-11-13T10:51:20Z, compression: none (level: default)] <ts: 1731495080>, opid: 673484a8aee30a360299f6e8
E                   2024-11-13T10:51:20Z I [rs1/rs101:27017] got command backup [name: 2024-11-13T10:51:20Z, compression: none (level: default)] <ts: 1731495080>, opid: 673484a8aee30a360299f6e8
E                   2024-11-13T10:51:20Z I [rs1/rs103:27017] got epoch {1731495075 16}
E                   2024-11-13T10:51:20Z D [rs1/rs102:27017] [resync] got backups list: 0
E                   2024-11-13T10:51:20Z I [rs1/rs101:27017] got epoch {1731495075 16}
E                   2024-11-13T10:51:20Z D [rs1/rs102:27017] [resync] got physical restores list: 0
E                   2024-11-13T10:51:20Z E [rs1/rs101:27017] [backup/2024-11-13T10:51:20Z] unable to proceed with the backup, active lock is present
E                   2024-11-13T10:51:20Z D [rs1/rs102:27017] [resync] epoch set to {1731495080 77}
E                   2024-11-13T10:51:20Z I [rs1/rs102:27017] [resync] succeed
E                   2024-11-13T10:51:20Z I [rs1/rs102:27017] got command backup [name: 2024-11-13T10:51:20Z, compression: none (level: default)] <ts: 1731495080>, opid: 673484a8aee30a360299f6e8
E                   2024-11-13T10:51:20Z I [rs1/rs102:27017] got epoch {1731495080 77}
E                   2024-11-13T10:51:35Z D [rs1/rs103:27017] [backup/2024-11-13T10:51:20Z] nomination timeout
E                   2024-11-13T10:51:35Z D [rs1/rs103:27017] [backup/2024-11-13T10:51:20Z] skip after nomination, probably started by another node
E                   2024-11-13T10:51:35Z D [rs1/rs102:27017] [backup/2024-11-13T10:51:20Z] nomination timeout
E                   2024-11-13T10:51:35Z D [rs1/rs102:27017] [backup/2024-11-13T10:51:20Z] skip after nomination, probably started by another node

cluster.py:393: AssertionError

Check failure on line 121 in pbm-functional/pytest/test_rename_replicaset.py


test_rename_replicaset.test_logical_pitr_with_txn_PBM_T271[deletes]

AssertionError: assert 'cloning namespace (--ns-to) is already in use' in 'Error: parse --ns-from and --ns-to options: cloning with restore to the point-in-time is not possible\n'
 +  where 'Error: parse --ns-from and --ns-to options: cloning with restore to the point-in-time is not possible\n' = CommandResult(backend=<testinfra.backend.docker.DockerBackend object at 0x7f3bf2117110>, exit_status=1, command=b'pbm ...'', _stderr=b'Error: parse --ns-from and --ns-to options: cloning with restore to the point-in-time is not possible\n').stderr
Raw output
start_cluster = True, cluster = <cluster.Cluster object at 0x7f3bed1d7790>
collection = 'deletes'

    @pytest.mark.timeout(300,func_only=True)
    @pytest.mark.parametrize('collection',['inserts','replaces','updates','deletes'])
    def test_logical_pitr_with_txn_PBM_T271(start_cluster,cluster,collection):
        cluster.check_pbm_status()
        cluster.make_backup("logical")
        cluster.enable_pitr(pitr_extra_args="--set pitr.oplogSpanMin=0.1")
        time.sleep(5)
        for i in range(10):
            client = pymongo.MongoClient(cluster.connection)
            with client.start_session() as session:
                with session.start_transaction():
                    client["test"]["inserts"].insert_one({"key": i+10, "data": i+10}, session=session)
                    client["test"]["replaces"].replace_one({"key": i}, {"key": i+10, "data": i+10}, session=session)
                    client["test"]["updates"].update_one({"key": i}, {"$inc": { "data": 10 }}, session=session)
                    client["test"]["deletes"].delete_one({"key": i}, session=session)
                session.commit_transaction()
        time.sleep(5)
        pitr = datetime.utcnow().strftime("%Y-%m-%dT%H:%M:%S")
        pitr=" --time=" + pitr
        Cluster.log("Time for PITR is: " + pitr)
        time.sleep(10)
        cluster.disable_pitr()
        time.sleep(5)
        backup_to_fail=pitr + " --ns-from=test." + collection + " --ns-to=test.test"
        result=cluster.exec_pbm_cli("restore" + backup_to_fail + " --wait")
        assert result.rc != 0
>       assert "cloning namespace (--ns-to) is already in use" in result.stderr
E       AssertionError: assert 'cloning namespace (--ns-to) is already in use' in 'Error: parse --ns-from and --ns-to options: cloning with restore to the point-in-time is not possible\n'
E        +  where 'Error: parse --ns-from and --ns-to options: cloning with restore to the point-in-time is not possible\n' = CommandResult(backend=<testinfra.backend.docker.DockerBackend object at 0x7f3bf2117110>, exit_status=1, command=b'pbm ...'', _stderr=b'Error: parse --ns-from and --ns-to options: cloning with restore to the point-in-time is not possible\n').stderr

test_rename_replicaset.py:121: AssertionError
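
One hedged aside that applies to every test quoted above: the PITR timestamp is built with the naive datetime.utcnow(), which is deprecated as of Python 3.12. A timezone-aware equivalent that yields the same "--time=YYYY-MM-DDTHH:MM:SS" string (sketch only, not part of the failing code):

    from datetime import datetime, timezone

    # Same wall-clock value as datetime.utcnow(), but timezone-aware.
    pitr = " --time=" + datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S")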