Add runs summary as Markdown table #126
Conversation
Very nice Leo, this would be very useful. Do you think we can have the same table in a .xlsx format? Having the detailed results in a more structured format with built-in filters would be much more useful and would allow us to easily generate graphs, as we wish, for any set of the results across any set of the conditions, which is very useful as a reporting feature. Otherwise, we would have to configure the .yaml to generate plots across conditions, which can be cumbersome and less versatile compared to the .xlsx. @RTimothyEdwards, what do you think? I feel like this would render CACE a design-phase debugging tool as well.
Thanks Ahmed. Interesting idea, but I'm not a fan of the .xlsx format. One problem is representing the table as a whole, since it has a three-dimensional structure: the results can be a list of values, e.g. for a transient or MC sim. The summary only shows the first three elements and then an ellipsis, as otherwise the table would become too large. This is sufficient to get a quick overview but does not bloat the table.

MC sim: *(example output omitted)*

Transient sim: *(example output omitted)*
I agree, we should use the .ods format. I don't think we need to report transient data here; it's meaningless to report the time reference and instantaneous values in this sheet. Only single-valued measures should go into this table. We usually extract two things from a transient sim:
For MC iterations, we're usually interested in the statistical measures (mean, standard deviation, ...). The exact result of each iteration is useful for identifying outliers. A better way to report results across MC runs would be Q-Q plots, which we can discuss in our next meeting, and if we want to report the numbers, we could generate a separate sheet for each condition that lists the results for every iteration. Since we usually run MC for a single corner, typically the worst, the only variable condition would be the iterations themselves.
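A minimal sketch of the Q-Q plot check suggested here, using SciPy and Matplotlib; the `mc_results` array is made-up placeholder data, not actual CACE output:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

# Placeholder per-iteration Monte Carlo measurements (hypothetical data).
rng = np.random.default_rng(0)
mc_results = rng.normal(loc=1.25, scale=0.03, size=200)

# Q-Q plot against a normal distribution: points hugging the reference
# line indicate the results are approximately normally distributed.
stats.probplot(mc_results, dist="norm", plot=plt)
plt.title("Q-Q plot of Monte Carlo results")
plt.savefig("mc_qq_plot.png")
```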
A Q-Q plot will tell you whether the result is a normal distribution or skewed in some way, but it doesn't tell you a whole lot about the data. I agree, though, that transient and Monte Carlo data have to be processed in some way; tabular output is very large and tells you nothing meaningful. Given enough data points, the equivalent list for a Monte Carlo result would be a listing of mean and standard deviation over sub-groups of the data vs. condition. That would show the numerical results of the distribution of INL vs. input value for the DAC example (for which INL increases when the DAC output gets close to the power rail). Transient data should, in principle, be easier, because there is still a single value (such as frequency, pulse width, or rise time) that is being measured over many sets of conditions and can be tabulated. Only Monte Carlo runs are more difficult, since the characterized values are not being checked for minimum/typical/maximum but are being measured as the mean/standard deviation of a distribution of values.
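A rough sketch of the per-condition aggregation described above, assuming a pandas DataFrame with hypothetical `condition` and `value` columns rather than CACE's actual data model:

```python
import pandas as pd

# Hypothetical Monte Carlo results: one row per iteration, tagged with the
# swept condition (e.g. DAC input code) it was measured at.
df = pd.DataFrame(
    {
        "condition": [0, 0, 0, 128, 128, 128, 255, 255, 255],
        "value": [0.10, 0.12, 0.11, 0.20, 0.22, 0.19, 0.45, 0.48, 0.50],
    }
)

# Collapse the raw iterations into mean/std per condition sub-group,
# the compact listing suggested for Monte Carlo results.
summary = df.groupby("condition")["value"].agg(["mean", "std"])
print(summary)
```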
@areda0 CACE now creates a CSV file alongside the Markdown summary. Let me know if this works for you as expected.
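As a hedged illustration of how such a CSV could feed the filtering and plotting workflow discussed above; the filename `simulation_summary.csv` and the column names are assumptions for this sketch, not confirmed by the PR:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Assumed filename/location; adjust to whatever CACE actually writes.
df = pd.read_csv("simulation_summary.csv")

# Example: keep the runs at one condition value and plot a result column
# against another condition. Column names here are hypothetical.
subset = df[df["temperature"] == 27]
subset.plot(x="vdd", y="gain", marker="o")
plt.savefig("gain_vs_vdd.png")
```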
@mole99 Thanks for the updates. Both the .md and .csv files are working as expected, except that they only list the varying conditions, not the default conditions. While this is okay, listing all conditions would provide more insight into the data. Additionally, for reporting the pass/fail status of each value, it would be preferable to use a color code (e.g., red for failed) instead of adding a separate column for each result measure. @RTimothyEdwards Besides the statistical measures of MC results, Q-Q plots are a visually nice way to check how close the distribution is to a normal one.
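One possible way to get that kind of color coding in a spreadsheet export, sketched with the pandas Styler; the columns and the spec check are hypothetical, and this is not what CACE currently does:

```python
import pandas as pd

# Hypothetical summary rows with a measured result and its spec limit.
df = pd.DataFrame(
    {"run": [1, 2, 3], "gain": [10.2, 9.1, 10.8], "spec_min": [10.0, 10.0, 10.0]}
)

def highlight_fail(row):
    # Paint the whole row red if the measured value misses its spec.
    color = "background-color: #f8d7da" if row["gain"] < row["spec_min"] else ""
    return [color] * len(row)

# The styling survives the export to .xlsx (requires openpyxl and jinja2).
df.style.apply(highlight_fail, axis=1).to_excel("summary_colored.xlsx", index=False)
```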
Merging after discussion.
This PR creates a summary for the simulation runs of a parameter using the `ngspice` tool. CACE saves the summary as `simulation_summary.md` in the parameter directory and prints a rendered version. The first column contains the run number, the following columns contain the changing conditions, and the last columns contain the results, which may look as follows:
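A hypothetical illustration of the layout (the condition and result columns below are made up for this sketch, not actual output):

| Run | temperature | vdd | gain |
|-----|-------------|-----|------|
| 1   | -40         | 1.7 | 10.2 |
| 2   | 27          | 1.8 | 10.5 |
| 3   | 125         | 1.9 | 10.1 |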
Implements part of #122.
Future work: add a column indicating whether a single value passed or failed the spec (✅/❌). This requires CACE to check each result individually instead of forming a max/min over the range of values.