I'm not sure whether this could be done easily without being considered a risk to training quality, and it may not be worth the probably small payoff. Currently, with test/train separation, the test line gives us a good measure of the movement caused by a data change, but the train line doesn't, since it is just a trailing average.
This could be done by running the same logic that generates the test output, but feeding it the train side of the split data, then outputting the result labelled 'train' with a step count of 1. To be truly useful, the last 'train' value would probably need to be generated the same way; otherwise it is just a trailing average, and the difference between that value and the first value of the next run won't be a pure 'data shift only' measure. A sketch of the idea follows.
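A minimal sketch of the proposed bookkeeping, using hypothetical `evaluate`, `train_step`, and `log` stand-ins (none of these are the actual training code): before any gradient step of a new run, evaluate the freshly loaded net on the train side of the split with the same forward-only pass used for the test line, log it as 'train' at step 1, and repeat the same pure evaluation after the final step.

```python
import numpy as np

def evaluate(weights: np.ndarray, batch: np.ndarray) -> float:
    """Stand-in for the test-line evaluation pass: forward only,
    no gradient update, returns a loss value."""
    return float(np.mean((batch @ weights) ** 2))

def train_step(weights: np.ndarray, batch: np.ndarray, lr: float = 0.01) -> np.ndarray:
    """Stand-in for one SGD step on the train side of the split."""
    grad = 2 * batch.T @ (batch @ weights) / len(batch)
    return weights - lr * grad

def run(weights, train_batches, test_batches, log, total_steps):
    # Proposed: a pure-evaluation 'train' value at step 1, mirroring
    # what the 'test' line already provides, so the jump from the
    # previous run's last value is a data-shift-only measure.
    log("train", step=1, value=evaluate(weights, train_batches[0]))
    log("test", step=1, value=evaluate(weights, test_batches[0]))

    for step in range(1, total_steps + 1):
        weights = train_step(weights, train_batches[step % len(train_batches)])
        # ... existing trailing-average 'train' logging happens here ...

    # Proposed: re-evaluate at the end the same way, instead of
    # reporting only the trailing average, so the last 'train' value
    # of this run is comparable with the first of the next run.
    log("train", step=total_steps, value=evaluate(weights, train_batches[0]))
    return weights

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(size=4)
    train = [rng.normal(size=(32, 4)) for _ in range(4)]
    test = [rng.normal(size=(32, 4))]
    run(w, train, test, lambda tag, step, value: print(tag, step, value), total_steps=8)
```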
The potential value of having the train values also show the data shift is that a difference in the size of the shift between the test and train lines could indicate how much overfitting is leaving the window without being resolved first.
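To illustrate the diagnostic (the numbers below are made up): the train line is deflated by memorization of the current window, so when the window shifts, it should jump by more than the genuine data shift seen on the test line, and the excess is a rough measure of unresolved overfit.

```python
def data_shift_delta(last_value_prev_run: float, first_value_next_run: float) -> float:
    """Jump across a run boundary; with the change proposed above, both
    endpoints are pure evaluations, so the jump reflects data change only."""
    return first_value_next_run - last_value_prev_run

test_shift = data_shift_delta(last_value_prev_run=1.80, first_value_next_run=1.92)
train_shift = data_shift_delta(last_value_prev_run=1.60, first_value_next_run=1.95)
overfit_signal = train_shift - test_shift  # larger gap -> more overfit left the window
print(f"test shift={test_shift:.2f}, train shift={train_shift:.2f}, gap={overfit_signal:.2f}")
```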
Tilps changed the title from "A report a 'train' number before step 1 like for 'test'." to "Report a 'train' number before step 1 like for 'test'." on Jul 4, 2018.