Currently the Motion Path [stable] graph looks like this:

[screenshot of the Motion Path (stable) Interop graph]
At the beginning of the year, the Interop score was higher than the Firefox or Chrome score, which seems impossible—54.7% of tests cannot pass in all three browsers if one browser only passes 44.3%.
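For reference, if every score for a given day were computed over the same set of $N$ tests, the pass-in-all-browsers score could never exceed any single browser's score:

$$
\text{Interop} \;=\; \frac{\bigl|\{\,t : t \text{ passes in every browser}\,\}\bigr|}{N}
\;\le\;
\min_{b}\; \frac{\bigl|\{\,t : t \text{ passes in browser } b\,\}\bigr|}{N}
$$

So a 54.7% Interop figure sitting above a 44.3% browser figure suggests the scores were not all computed over the same set of tests.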
I just did some investigating on this. It is caused by the number of tests in the focus area fluctuating over time, presumably because new tests were added later in the year. The experimental chart shows the same strange Interop score, which corrects itself sometime in May. Before then, the focus area was scored on 73 individual tests; after May it moved to 93 tests, which is close to the number used today.
The likely way to fix this is to calculate the Interop scores at the end of the full yearly calculation, dividing by the number of tests present in the final runs rather than by the number of tests present in each day's runs.
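A rough sketch of what that could look like, with hypothetical types and function names rather than the real results-analysis code, assuming each day's results have already been reduced to a pass-in-all-browsers flag per test:

```ts
// Hypothetical sketch of the proposed fix: score every day against the test
// count of the *final* runs, so early runs that only contain a subset of the
// focus area's tests cannot produce an inflated Interop score.

interface DailyRun {
  date: string;
  // testName -> true when the test passes in all three tracked browsers.
  passesInAllBrowsers: Map<string, boolean>;
}

function interopScoresByFinalTestCount(runs: DailyRun[]): Map<string, number> {
  const scores = new Map<string, number>();
  if (runs.length === 0) return scores;

  // Denominator taken from the last (most complete) runs of the year.
  const finalTestCount = runs[runs.length - 1].passesInAllBrowsers.size;

  for (const run of runs) {
    let passing = 0;
    for (const passed of run.passesInAllBrowsers.values()) {
      if (passed) passing++;
    }
    // Current behaviour would instead divide by run.passesInAllBrowsers.size,
    // i.e. the number of tests present in that day's runs.
    scores.set(run.date, passing / finalTestCount);
  }
  return scores;
}
```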