
Teacher Tool: Only Auto-Run Modified Criteria #9958

Merged 42 commits into master on Apr 11, 2024

Conversation

thsparks
Contributor

What

With this change, auto-run will only evaluate criteria that have been modified. It will not re-evaluate everything unless the share link changes. If the user presses the run button directly, everything will re-run.

I also changed Pending to NotStarted because I kept confusing Pending with InProgress.

Why

We want to avoid lots of redundant calls for criteria that are expensive to run (like AI checks).

How

I use the evaluation result as a dirty flag of sorts: when a criterion is modified (or the share link changes), we reset its outcome to "Not Evaluated", and auto-run then only kicks off evaluation for criteria in the "Not Evaluated" state.
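A minimal sketch of that flow, assuming `NotStarted` serves as the "Not Evaluated" reset state. `evalResults`, `instanceId`, `fromUserInteraction`, and `EvaluationStatus` mirror names visible in the diff below; `markCriteriaDirty`, `runEvaluationsAsync`, and `evaluateCriteriaAsync` are hypothetical stand-ins, not the actual implementation:

```typescript
// Illustrative sketch only, not the real Teacher Tool code.
enum EvaluationStatus {
    Pass = 0, // note: 0 is falsy, which matters for the review thread below
    Fail,
    NotStarted,
    InProgress,
}

const evalResults: { [instanceId: string]: { result: EvaluationStatus } } = {};

// On criterion edit (or share link change): reset the outcome,
// i.e. set the dirty flag.
function markCriteriaDirty(instanceId: string) {
    evalResults[instanceId] = { result: EvaluationStatus.NotStarted };
}

// Auto-run skips criteria that already have a settled outcome; a direct
// press of the run button (fromUserInteraction = true) re-runs everything.
async function runEvaluationsAsync(instanceIds: string[], fromUserInteraction: boolean) {
    for (const instanceId of instanceIds) {
        const existingOutcome = evalResults[instanceId]?.result;
        if (
            !fromUserInteraction &&
            existingOutcome !== undefined &&
            existingOutcome !== EvaluationStatus.NotStarted
        ) {
            continue; // unmodified criterion: keep the cached result
        }
        evalResults[instanceId] = { result: EvaluationStatus.InProgress };
        evalResults[instanceId] = { result: await evaluateCriteriaAsync(instanceId) };
    }
}

// Placeholder for the real per-criterion check (e.g. an AI call).
async function evaluateCriteriaAsync(instanceId: string): Promise<EvaluationStatus> {
    return EvaluationStatus.Pass;
}
```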

Also...

I also considered adding criteria-level auto-run configuration, which would let us disable auto-run entirely for specific criteria, or only auto-run certain criteria while the results tab is active (visible). I held off because we may combine the rubric and results tabs, at which point that level of configuration would become much less valuable.

thsparks added 30 commits March 27, 2024 18:46
…t sure if we really want to expose this to anyone with a MakeCode iframe yet. I've tried to preserve the structure somewhat so it's easy to move back if desired.
… to anyone with a MakeCode iframe yet. I've tried to preserve the structure somewhat so it's easy to move back if desired.
…ks/copilot_criteria_before_autorun_changes
…ks/copilot_criteria_before_autorun_changes
@thsparks thsparks requested a review from a team April 10, 2024 22:55
const existingOutcome = teacherTool.evalResults[criteriaInstance.instanceId]?.result;
if (
    !fromUserInteraction &&
    existingOutcome !== undefined &&
Contributor

I'm fine with this explicit check, so this is a nit. Just wondering if there was a reason why this check is explicitly for undefined rather than just a nullish check, i.e. just having `&& existingOutcome && ...` instead?

Contributor Author

I did that at first, but one of the enum values (EvaluationStatus.Pass) is equal to 0, which is falsy, so it was messing with that check.
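For illustration, a small example of why the shorthand trips on `Pass = 0` (the enum members other than `Pass` are assumptions):

```typescript
// Illustrative: a numeric enum whose first member is 0.
enum EvaluationStatus {
    Pass = 0, // falsy!
    Fail = 1,
}

function describe(existingOutcome: EvaluationStatus | undefined): string {
    // Truthy shortcut: Pass (0) is falsy, so it looks like "no result yet".
    const viaTruthy = existingOutcome ? "has result" : "no result";
    // Explicit check: only undefined means "no result yet".
    const viaExplicit = existingOutcome !== undefined ? "has result" : "no result";
    return `${viaTruthy} / ${viaExplicit}`;
}

console.log(describe(EvaluationStatus.Pass)); // "no result / has result" — the bug
console.log(describe(undefined));             // "no result / no result"
```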

@thsparks thsparks merged commit f9e3edc into master Apr 11, 2024
6 checks passed
@thsparks thsparks deleted the thsparks/auto_run_adjustments branch April 11, 2024 22:19