The only reason that we use megabytes in the first place is for backwards-compatibility with pre-4.10 versions of base that use this convention (see the old GCStats data type). But rather than add a separate field for peak allocated bytes, I wonder if we should instead migrate away from criterion's GCStatistics data type (which attempts to offer backwards compatibility) in favor of base's RTSStats data type. RTSStats is the original source of truth for measurements in the first place, and its max_mem_in_use_bytes field is exactly what we need.
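As a sketch of what reading RTSStats directly could look like (the getRTSStats, getRTSStatsEnabled, and max_mem_in_use_bytes names are real GHC.Stats exports in base >= 4.10; reportPeakBytes is just an illustrative wrapper, not a criterion API, and the program must be run with +RTS -T for stats to be available):

```haskell
-- Sketch: read the peak memory figure straight from base's RTSStats,
-- rather than a megabyte value derived from it.
-- Requires base >= 4.10 and running the program with +RTS -T.
import GHC.Stats (getRTSStats, getRTSStatsEnabled, max_mem_in_use_bytes)

reportPeakBytes :: IO ()
reportPeakBytes = do
  enabled <- getRTSStatsEnabled
  if enabled
    then do
      stats <- getRTSStats
      putStrLn ("max_mem_in_use_bytes: " ++ show (max_mem_in_use_bytes stats))
    else putStrLn "RTS stats not enabled; rerun with +RTS -T"

main :: IO ()
main = reportPeakBytes
```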
Of course, we would need to figure out an appropriate migration strategy. To introduce the use of RTSStats, we can follow a similar strategy as in #149, which migrated from GCStats to GCStatistics. We would also need to figure out what to do with the peakMbAllocated accessor in criterion. Should we remove it outright in favor of a different accessor that returns bytes instead of megabytes? Should we deprecate it by somehow emitting a warning whenever someone uses peakMbAllocated?
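For the "emit a warning" option, GHC's DEPRECATED pragma already gives a compile-time warning at every use site. A minimal sketch of how the two accessors could coexist (GCStatistics here is a placeholder record, and peakAllocated is an assumed name for the byte-valued replacement, not criterion's actual API):

```haskell
import Data.Int (Int64)

-- Placeholder standing in for criterion's GCStatistics record.
data GCStatistics = GCStatistics { gcPeakBytes :: Int64 }

-- | New accessor: peak memory in use, in bytes.
peakAllocated :: GCStatistics -> Int64
peakAllocated = gcPeakBytes

-- | Old accessor, kept for compatibility but flagged at compile time.
peakMbAllocated :: GCStatistics -> Double
peakMbAllocated s = fromIntegral (peakAllocated s) / (1024 * 1024)
{-# DEPRECATED peakMbAllocated "Use peakAllocated, which returns bytes" #-}
```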
We'd also want to consider doing this for other fields of RTSStats that are more precise than their GCStats counterparts. For example, mutatorCpuSeconds currently uses seconds (another historical limitation imposed by GCStats), but RTSStats now measures the same information in nanoseconds.
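Deriving the legacy seconds value from the precise nanosecond count loses nothing, so the coarser accessor could simply be computed from the finer one. A trivial sketch of that conversion (RTSStats really does report mutator CPU time as Int64 nanoseconds in its mutator_cpu_ns field; nsToSeconds is an illustrative helper):

```haskell
import Data.Int (Int64)

-- Convert an RTSStats-style nanosecond count (e.g. mutator_cpu_ns)
-- to the Double seconds that the old GCStats interface exposed.
nsToSeconds :: Int64 -> Double
nsToSeconds ns = fromIntegral ns / 1e9
```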
It feels like it would be useful to have a more granular (or absolute) measurement of "Peak Allocated".
It seems like we can either change peakMbAllocated to peakAllocated, or add a peakAllocated alongside peakMbAllocated. For pre-4.10.0 we fill in the imprecision with zeros, and from 4.10.0 on we use the precise bytes.
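The version split above could be handled with a CPP conditional on the base version, roughly like this sketch (peakAllocatedBytes is an assumed name; on older bases the precise figure simply doesn't exist, so it is filled with zero as suggested):

```haskell
{-# LANGUAGE CPP #-}
-- Sketch of the proposed backfill: on base >= 4.10 take the precise
-- byte count from RTSStats; on older bases report 0, since only the
-- coarse megabyte figure exists there.
import Data.Int (Int64)

#if MIN_VERSION_base(4,10,0)
import GHC.Stats (getRTSStats, max_mem_in_use_bytes)

peakAllocatedBytes :: IO Int64
peakAllocatedBytes = fromIntegral . max_mem_in_use_bytes <$> getRTSStats
#else
peakAllocatedBytes :: IO Int64
peakAllocatedBytes = pure 0  -- pre-4.10: precise bytes unavailable
#endif
```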