@loicalbertin We found a simple fix in a4c (Alien4Cloud). In the administration section, update the deploymentID expression to include a timestamp. For example:
Old value: (application.id + '-' + environment.name).replaceAll('[^\w\-_]', '_')
New value: (application.id + '-' + environment.name + '-' + new java.text.SimpleDateFormat("yyyyddMMHHmmss").format(new java.util.Date())).replaceAll('[^\w\-_]', '_')
This makes the deploymentID of the same topology different each time it is deployed, so yorc loads a new file cache for the new deployment. I think we can close this issue, but the workaround may be worth documenting somewhere.
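For illustration only, here is a minimal Go sketch of what the new expression effectively produces (this is not a4c code; the function name and timestamp format are hypothetical): a deploymentID suffixed with a timestamp and sanitized in the same way as replaceAll('[^\w\-_]', '_').

```go
// Illustration only: a hypothetical Go equivalent of the a4c expression above.
// Appending a timestamp makes the deploymentID unique for every deployment,
// and the regexp mirrors the replaceAll('[^\w\-_]', '_') sanitization.
package main

import (
	"fmt"
	"regexp"
	"time"
)

var invalidChars = regexp.MustCompile(`[^\w\-_]`)

func deploymentID(applicationID, environmentName string) string {
	raw := applicationID + "-" + environmentName + "-" + time.Now().Format("20060102150405")
	return invalidChars.ReplaceAllString(raw, "_")
}

func main() {
	// Each deployment of the same application/environment gets a fresh ID,
	// so yorc builds a new file cache instead of reusing a stale one.
	fmt.Println(deploymentID("myApp", "Environment"))
}
```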
Bug Report
Description
When a deployment is deleted, yorc deletes its local fileCache:
https://github.com/ystia/yorc/blob/develop/deployments/deployments.go#L271
However, when there is more than one yorc server, only the server that receives the deployment deletion request deletes its local file cache. The other yorc servers may still hold their caches of the given deployment.
When the same application is deployed again or updated, one of the yorc servers may get the old topology from its local cache:
https://github.com/ystia/yorc/blob/develop/storage/internal/file/store.go#L230
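To make the failure mode concrete, here is a minimal, hypothetical Go sketch (the types, keys and functions below are invented for illustration and are not yorc's actual store code): each server keeps a per-process cache in front of shared storage, only the server that handles the deletion drops its own entries, and another server later serves the old topology for the reused deploymentID.

```go
// Hypothetical sketch of the failure mode, not the real yorc file store.
package main

import (
	"fmt"
	"strings"
)

type sharedStorage map[string]string // consistent storage visible to all servers

type server struct {
	cache  map[string]string // per-process file cache
	shared sharedStorage
}

func (s *server) get(key string) string {
	if v, ok := s.cache[key]; ok {
		return v // cache hit, even if the deployment was deleted on another server
	}
	v := s.shared[key]
	s.cache[key] = v
	return v
}

// deleteDeployment clears shared storage and this server's cache only.
func (s *server) deleteDeployment(deploymentID string) {
	prefix := deploymentID + "/"
	for k := range s.shared {
		if strings.HasPrefix(k, prefix) {
			delete(s.shared, k)
		}
	}
	for k := range s.cache {
		if strings.HasPrefix(k, prefix) {
			delete(s.cache, k)
		}
	}
}

func main() {
	shared := sharedStorage{"app-env/topology": "print: Hello"}
	a := &server{cache: map[string]string{}, shared: shared}
	b := &server{cache: map[string]string{}, shared: shared}

	// Both servers have read the first topology at some point.
	a.get("app-env/topology")
	b.get("app-env/topology")

	// Server a handles the undeploy, then the same deploymentID is redeployed
	// with an updated topology.
	a.deleteDeployment("app-env")
	shared["app-env/topology"] = "print: Goodbye"

	fmt.Println(a.get("app-env/topology")) // "print: Goodbye" (its cache was cleared)
	fmt.Println(b.get("app-env/topology")) // "print: Hello"   (stale local cache)
}
```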
Actual behavior
As a result, a yorc server may reuse outdated information from the old topology in its local file cache, and the deployment may occasionally be inconsistent.
Expected behavior
yorc should check whether it still has a local file cache for the given deployment and clear all of its keys before processing the request.
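A minimal sketch of that suggested behaviour, assuming a hypothetical cache API rather than yorc's actual code: before handling a request for a deploymentID, the server first drops any keys it has cached locally for that deployment, so they are re-read from shared storage.

```go
// Hypothetical sketch: invalidate local cache entries for a deployment
// before processing a request that concerns it.
package main

import (
	"fmt"
	"strings"
)

type localCache map[string]string

// clearDeployment removes every locally cached key belonging to the deployment.
func (c localCache) clearDeployment(deploymentID string) {
	prefix := deploymentID + "/"
	for k := range c {
		if strings.HasPrefix(k, prefix) {
			delete(c, k)
		}
	}
}

func handleDeploymentRequest(cache localCache, deploymentID string) {
	cache.clearDeployment(deploymentID) // invalidate before using the topology
	// ... proceed with the request; values are now re-fetched from shared storage ...
}

func main() {
	cache := localCache{"app-env/topology": "old topology"}
	handleDeploymentRequest(cache, "app-env")
	fmt.Println(len(cache)) // 0: stale entries are gone before processing
}
```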
Steps to reproduce the issue
Have 3 yorc servers.
Create an application with 1 compute and 1 software component (e.g., HelloWorld).
Deploy it.
Un-deploy it.
Change the HelloWorld property to print something else.
Deploy again.
Sometimes, the deployment log prints the old message from the old topology.
Additional information you deem important (e.g. issue happens only occasionally)
The issue happens only occasionally, when the yorc instance that deletes the deployment (i.e., has its cache cleared) and the one that performs the new deployment (i.e., still has the old cache) are not the same.
Additional environment details (infrastructure implied: AWS, OpenStack, etc.)
no
Output of yorc version
current develop
Yorc configuration file
Priority
This can be temporarily worked around by scaling the yorc instances down to one, so I set the priority to medium.