📑 Emily's paper discussing automated feedback for HRI tasks was accepted at the Human-LLM Interaction workshop at HRI 2024!

In this work, we discuss how LLMs can be used to automatically create formative feedback to help people learn how to interact with robots.


Large Language Models Enable Automated Formative Feedback in Human-Robot Interaction Tasks

Emily Jensen, Sriram Sankaranarayanan, Bradley Hayes

Position Statement

Representing knowledge and assessing someone's ability in an HRI task is difficult, due to complex objectives and high variability in human performance. In previous work, we began to address this problem by breaking HRI tasks down into objective primitives that can be combined sequentially and concurrently (e.g., maintain slow speed and reach waypoints). We then showed that signal temporal logic (STL) specifications, paired with a robustness metric, are a useful tool for assessing performance along each primitive. These formal methods allow designers to precisely represent ideal trajectories. The formulation also supports explainability, as one can identify and elaborate on the specific objectives a learner did not accomplish.
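As a rough illustration of the idea (a minimal sketch, not code from the paper), the robustness of an STL "always" primitive like "maintain slow speed" can be computed as the worst-case margin over a trace, and an "eventually" primitive like "reach a waypoint" as the best-case margin. The function names, thresholds, and trace values below are all hypothetical:

```python
import numpy as np

def robustness_always_below(signal: np.ndarray, threshold: float) -> float:
    """Robustness of G(signal < threshold): worst-case margin over the trace.

    Positive means the primitive held at every step; negative quantifies
    how badly it was violated.
    """
    return float(np.min(threshold - signal))

def robustness_eventually_within(dist: np.ndarray, eps: float) -> float:
    """Robustness of F(dist < eps): best-case margin over the trace."""
    return float(np.max(eps - dist))

# Example: a teleoperation trace sampled at fixed intervals (made-up numbers).
speeds = np.array([0.2, 0.4, 0.35, 0.5])            # m/s
dist_to_waypoint = np.array([2.0, 1.2, 0.4, 0.05])  # m

slow_speed = robustness_always_below(speeds, threshold=0.45)          # -0.05: too fast at one step
reached = robustness_eventually_within(dist_to_waypoint, eps=0.1)     # +0.05: waypoint reached
```

The sign of each score says whether the primitive was satisfied, and its magnitude says by how much, which is exactly the kind of per-objective outcome that can be explained to a learner.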

We claim that LLMs can be paired with formal analysis methods to provide accessible, relevant feedback for HRI tasks. While logic specifications are useful for defining and assessing a task, these representations are not easily interpreted by non-experts. Luckily, LLMs are adept at generating easy-to-understand text that explains difficult concepts. By integrating task assessment outcomes and other contextual information into an LLM prompt, we can synthesize concrete recommendations that help the learner improve their performance.
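To make the integration step concrete, here is a hedged sketch of how per-primitive robustness scores might be folded into a feedback prompt. The prompt wording, function name, and task description are illustrative assumptions, not the paper's actual pipeline:

```python
def build_feedback_prompt(task_description: str, primitive_scores: dict[str, float]) -> str:
    """Turn per-primitive robustness scores into a prompt asking an LLM
    for formative feedback. Wording is illustrative only."""
    lines = [
        f"- {name}: {'satisfied' if score >= 0 else 'violated'} (margin {score:+.2f})"
        for name, score in primitive_scores.items()
    ]
    return (
        "You are a coach helping a novice learn a human-robot interaction task.\n"
        f"Task: {task_description}\n"
        "Formal assessment of the learner's latest attempt:\n"
        + "\n".join(lines) + "\n"
        "Explain in plain language which objectives were missed and suggest "
        "one concrete way to improve on the next attempt."
    )

prompt = build_feedback_prompt(
    "Teleoperate the robot arm through a series of waypoints",
    {"maintain slow speed": -0.05, "reach waypoint 1": 0.05},
)
# `prompt` can then be sent to any chat-completion API.
```

Because the formal assessment supplies which objectives failed and by what margin, the LLM only has to translate those outcomes into accessible coaching language rather than judge the performance itself.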

The full paper can be accessed here and from our Publications tab.