Teacher Tool: Only Auto-Run Modified Criteria #9958
Conversation
…ks/copilot_criteria
…t sure if we really want to expose this to anyone with a MakeCode iframe yet. I've tried to preserve the structure somewhat so it's easy to move back if desired.
…ks/copilot_criteria_before_autorun_changes
…ks/auto_run_adjustments
```typescript
const existingOutcome = teacherTool.evalResults[criteriaInstance.instanceId]?.result;
if (
    !fromUserInteraction &&
    existingOutcome !== undefined &&
```
I'm fine with this explicit check, so this is a nit. Just wondering if there was a reason why this check is explicitly for `undefined` rather than just a nullish check, i.e. just having `&& existingOutcome && ...` instead.
I did that at first, but one of the enum values (`EvaluationStatus.Pass`) is equal to 0, so it was tripping up the truthiness check.
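For illustration, here's a minimal TypeScript sketch of the trap (the enum shape is assumed; only `EvaluationStatus.Pass` being 0 comes from the comment above):

```typescript
// Assumed enum shape for illustration; only Pass = 0 is confirmed above.
enum EvaluationStatus {
    Pass = 0, // falsy: a plain truthiness check treats Pass like "no result"
    Fail = 1,
    NotStarted = 2,
    InProgress = 3,
}

function hasExistingOutcome(existingOutcome: EvaluationStatus | undefined): boolean {
    // Buggy: `!!existingOutcome` returns false for Pass (0), so passing
    // criteria would look like they had never been evaluated.
    // return !!existingOutcome;

    // Explicit check: distinguishes "no result yet" from a Pass result.
    return existingOutcome !== undefined;
}

console.log(hasExistingOutcome(EvaluationStatus.Pass)); // true
console.log(!!EvaluationStatus.Pass);                   // false (the trap)
```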
What
With this change, auto-run will only evaluate criteria that have been modified. It will not re-evaluate everything unless the share link changes. If the user presses the run button directly, everything will re-run.
I also changed `Pending` to `NotStarted` because I kept confusing `Pending` with `InProgress`.

Why
We want to avoid lots of redundant calls for criteria that may be expensive to run (like AI checks).
How
I use the evaluation result as a kind of dirty flag. When a criterion is modified (or the share link changes), we reset the outcome to "Not Evaluated", then auto-run only kicks off evaluation for the criteria that have a "Not Evaluated" state.
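To make that concrete, here's a rough sketch of the flow under assumed names (`evalResults`, `markCriterionDirty`, `evaluateCriterionAsync`, and the extra `NotEvaluated` state are illustrative, not the actual pxt API):

```typescript
// Hypothetical sketch of the dirty-flag approach; identifiers are illustrative.
enum EvaluationStatus {
    Pass = 0,
    Fail,
    NotStarted,   // renamed from Pending to avoid confusion with InProgress
    InProgress,
    NotEvaluated, // the "dirty" state: result was invalidated
}

interface EvalResult {
    result: EvaluationStatus;
}

const evalResults: Record<string, EvalResult> = {};

// When a criterion is modified (or the share link changes), reset its outcome
// so the next auto-run pass picks it up.
function markCriterionDirty(instanceId: string) {
    evalResults[instanceId] = { result: EvaluationStatus.NotEvaluated };
}

// Auto-run skips anything that already has a result; a direct press of the
// run button (fromUserInteraction) re-runs everything.
async function runEvaluations(instanceIds: string[], fromUserInteraction: boolean) {
    for (const id of instanceIds) {
        const existingOutcome = evalResults[id]?.result;
        if (
            !fromUserInteraction &&
            existingOutcome !== undefined &&
            existingOutcome !== EvaluationStatus.NotEvaluated
        ) {
            continue; // unchanged criterion: keep the cached result
        }
        await evaluateCriterionAsync(id);
    }
}

async function evaluateCriterionAsync(id: string) {
    evalResults[id] = { result: EvaluationStatus.InProgress };
    // ...actual evaluation (including expensive AI checks) would go here...
}
```

This keeps the invalidation logic in one place: anything that should trigger re-evaluation only needs to reset the stored outcome.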
Also...
I also considered adding criteria-level auto-run configuration, which would let us disable auto-run entirely for specific criteria, or only auto-run certain criteria when the results tab is active (visible). I held off, though, since we may combine the rubric and results tabs, at which point I think that level of config would become much less valuable.