
Commit

mitch abstract
dicook committed Sep 18, 2024
1 parent 979bd91 commit 2335aa2
Showing 1 changed file with 4 additions and 3 deletions.
7 changes: 4 additions & 3 deletions education.html
@@ -31,12 +31,13 @@ <h2 id="project_tagline" > <a href="https://numbats.github.io/WOMBAT2024/" > Wor

<h2> Education </h2>

This session runs from 1:00-2:30 and focuses on tools for education. Speakers are:
This session runs from 1:00-2:30 and focuses on tools for education.
<br><br>

<ul>
<li> <strong>TBA (LearnR) </strong> <br> <em> Mitch O'Hara-Wild and Cynthia Huang, Monash University </em>
<p> TBA </li>
<li> <strong>Scalable self-paced e-learning of statistical programming with fine-grained feedback and assessment </strong> <br> <em> Mitch O'Hara-Wild and Cynthia Huang, Monash University </em>
<p> Assessing statistical programming skills consistently and at scale is challenging. Much like writing style is assessed in essay tasks, discriminating code quality and style from code function or output is becoming increasingly important as students adopt code-generating tools such as LLMs. In many cases, checking code output alone is insufficient to assess students’ understanding and ability to write statistical code. Instead, instructors often need to check the code itself for evidence of computational thinking, such as the use of appropriate functions, data structures, and comments. Unfortunately, manual review of code is time-consuming and subjective, and the skills needed to automate this process are complex to learn and use. In this talk, we introduce a new approach to authoring self-paced interactive modules for learning statistics with R. It is built using Quarto and WebR, leveraging literate programming to quickly create exercises and automate assessments. We discuss how this format can be used to write assessments with automated checking of multiple-choice quizzes, code input and outputs, and the advantages of in-browser execution via WebR compared to existing server-based solutions. </p>
</li>
<li> <strong> Developing tools for “real time” formative assessment of writing within large introductory statistics and data science courses </strong> <br><em> Anna Fergusson, University of Auckland </em>
<p> Various data technologies and automated approaches can assist with teaching and assessment, but care is needed to develop tools and practices that value and support the human learning experience, at the same time as optimising for efficiency and accuracy. For instance, introductory-level statistics and data science students need to learn how to identify and produce short written communications (including code) that are statistically and computationally sound. However, there are challenges to designing and implementing effective formative assessment of student writing and coding when courses involve hundreds or thousands of students, and scalable methods of support are needed. This talk will present pedagogical and technological explorations for developing tools that support “real time” and large-scale formative assessment of writing (including code), as well as plans for further research involving the integration of statistical pairwise-comparison ranking models and NLP algorithms.
</p>
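For context, the O'Hara-Wild and Huang abstract describes authoring self-paced exercises with Quarto and WebR. A minimal sketch of what such an exercise source file could look like, assuming the community quarto-webr Quarto extension for in-browser R execution; the grading chunk and its check_solution() helper are hypothetical illustrations, not the authors' actual tooling:

---
title: "Exercise: summarise penguin body mass"
engine: knitr
filters:
  - webr
---

Fill in the pipeline so it reports the mean body mass for each species.

```{webr-r}
# Runs entirely in the browser via WebR; students can edit and re-run freely.
library(palmerpenguins)
library(dplyr)

penguins |>
  group_by(species) |>
  summarise(mean_mass = mean(body_mass_g, na.rm = TRUE))
```

```{r}
#| echo: false
# Hypothetical automated check: a helper could compare the submitted code and
# its output against a reference solution, e.g. requiring group_by()/summarise()
# and matching the expected summary table.
# check_solution(required_fns = c("group_by", "summarise"))
```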
