timestamp incorrect after sync by binlog #1187
Comments
Checked the error log and found that drainer's generated update values are correct; maybe TiDB has some bug that causes this problem. This problem can be reproduced only when drainer tried to batch DML execution, that is to say, when the log at markSuccess shows a length bigger than 1.
Rows 3 / 5 show a different time. From TiDB's general log, rows 3 / 5 are inserted by one connection, while rows 1 / 2 / 4 are inserted by another connection.
The above situation may happen when starting a new transaction.
It seems that it's a connection pool problem. The root cause is still the undefined behavior. I think we can close this issue now. Maybe what we can do later is to check the downstream database's global.time_zone at an interval, and when it differs we can report it in drainer's log. However, users may ignore the warning in the log, and the warning can't be printed if the time_zone changes twice within one interval. So maybe we should add a warning in the official documentation to warn our users.
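The interval check proposed above could look roughly like the sketch below. This is a hedged illustration, not drainer's actual API: `tzWatcher` and its `fetch` callback (a stand-in for running `SELECT @@global.time_zone` over a real downstream connection) are hypothetical names.

```go
package main

import "fmt"

// tzWatcher polls the downstream's global.time_zone and reports changes.
// fetch is a stand-in for `SELECT @@global.time_zone` over a real connection.
type tzWatcher struct {
	last  string
	fetch func() (string, error)
}

// check performs one poll and returns a non-empty warning when the value
// changed since the previous poll. Note the caveat from the discussion:
// two changes inside one interval that end on the original value are missed.
func (w *tzWatcher) check() (string, error) {
	cur, err := w.fetch()
	if err != nil {
		return "", err
	}
	warning := ""
	if w.last != "" && cur != w.last {
		warning = fmt.Sprintf("downstream global.time_zone changed: %q -> %q", w.last, cur)
	}
	w.last = cur
	return warning, nil
}

func main() {
	// Simulate the downstream switching from UTC to Asia/Shanghai between polls.
	values := []string{"UTC", "Asia/Shanghai"}
	i := 0
	w := &tzWatcher{fetch: func() (string, error) {
		v := values[i]
		i++
		return v, nil
	}}

	for range values { // a real watcher would sleep between polls
		if msg, err := w.check(); err == nil && msg != "" {
			fmt.Println("WARN:", msg)
		}
	}
}
```

As the comment notes, this polling design is best-effort: it detects a changed value at poll time, not the change event itself, which is why the documentation warning was suggested as the primary mitigation.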
Yes, for the solution:
Bug Report
Please answer these questions before submitting your issue. Thanks!
What did you do?
1. Deploy and start upstream v5.0.3, enable binlog.
2. Start pump v6.1.0 manually.
3. Deploy and start downstream v6.1.0, change the time zone by `set global time_zone = "UTC"`.
4. Start drainer v6.1.0.
5. Change the downstream time zone by `set global time_zone = "Asia/Shanghai"`.
6. Run the sql file.
What did you expect to see?
select * from test.t
should display the same data on upstream and downstream, but the results differ.
upstream v5.0.3:
downstream v6.1.0:
Upstream and downstream should display the same results.
(run `drainer -V` in terminal to get drainer's version)
v6.1.0