The term Machine Teaching (MT) was originally introduced as a theoretical problem in machine learning, focusing on identifying the minimal set of examples required for an algorithm to reach a predefined target state. Later, Simard and colleagues at Microsoft Research proposed a complementary perspective within the field of Human–Computer Interaction (HCI). In this context, MT is framed as a way to improve the “teacher” in building machine learning models, rather than the classical ML focus on improving the “learner” by refining algorithms.
In this vein, Ramos and colleagues introduced Interactive Machine Teaching (IMT), formalizing the concept and emphasizing the idea of leveraging humans’ natural teaching abilities. For example, just as people teach concepts to others by starting with simple examples and gradually increasing complexity, IMT explores how humans can guide machine learning systems in a similarly structured way. This theoretical framework has been a major inspiration in my research, and I have since been studying this concept from multiple perspectives to better understand its applications and implications.
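To make the classical machine-teaching formulation above concrete, here is a toy sketch of my own (not drawn from any of the cited papers): for a one-dimensional threshold classifier, a teacher who knows the target can identify it with just two examples that straddle the threshold, far fewer than a learner sampling examples on its own would typically need.

```python
# Toy illustration (assumed example, not from the cited work): teaching a
# 1D threshold classifier over the integers 0..N. The label of x is 1 iff
# x >= t for some unknown threshold t.

def consistent_thresholds(examples, n):
    """Return every threshold t in 0..n consistent with labeled examples (x, y)."""
    return [t for t in range(n + 1)
            if all((x >= t) == y for x, y in examples)]

N, target_t = 100, 42

# A teacher who knows target_t needs only the two examples that straddle it.
teaching_set = [(target_t - 1, False), (target_t, True)]

print(consistent_thresholds(teaching_set, N))  # [42] -> the target is pinned down exactly
```

Two well-chosen examples collapse the hypothesis space to a single model, which is the kind of teacher-side efficiency the classical MT literature formalizes.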
We explored Interactive Machine Teaching in the following articles:
(Sanchez et al., 2021): Studying strategies adopted by non-expert users to teach an image recognition system to recognize hand-drawn sketches.
(Sanchez et al., 2022): Studying the role of deep learning uncertainty estimation in teaching strategies for image recognition.
(Sungeelee et al., 2024): Studying interactive machine teaching in the specific context of arm prosthesis calibration and control, involving gesture recognition.
Pattern-recognition-based arm prostheses rely on recognizing muscle activation to trigger movements. The effectiveness of this approach depends not only on the performance of the machine learner but also on the user’s understanding of its recognition capabilities, allowing them to adapt and work around recognition failures. We investigate how different training strategies for selecting gesture classes and recording the corresponding muscle contractions impact model accuracy and user comprehension. We report on a lab experiment where participants performed hand gestures to train a classifier under three conditions: (1) the system cues gesture classes randomly (control), (2) the user selects gesture classes (teacher-led), (3) the system queries gesture classes based on their separability (learner-led). After training, we compare the models’ accuracy and test participants’ predictive understanding of the prosthesis’ behavior. We found that teacher-led and learner-led strategies yield faster and greater performance increases, respectively. Combining two evaluation methods, we found that participants developed a more accurate mental model when the system queried the least separable gesture class (learner-led). We conclude that, in the context of machine learning-based myoelectric prosthesis control, guiding the user to focus on class separability during training can improve recognition performance and support users’ mental models of the system’s behavior. We discuss our results in light of several research fields: myoelectric prosthesis control, motor learning, human-robot interaction, and interactive machine teaching.
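As a rough sketch of what a learner-led query based on class separability could look like (an illustration under my own assumptions; the study's actual separability measure and pipeline are not reproduced here), one could score each gesture class by how well its EMG feature vectors separate from the others and cue the user to record the weakest class next:

```python
# Hypothetical learner-led query strategy: score each gesture class by how
# well its recorded EMG feature vectors separate from the other classes
# (here with a per-class mean silhouette score, an illustrative choice),
# then cue the user to record more examples of the least separable class.
import numpy as np
from sklearn.metrics import silhouette_samples

def least_separable_class(features: np.ndarray, labels: np.ndarray):
    """features: (n_samples, n_features) EMG feature vectors,
    labels:   (n_samples,) gesture class ids."""
    scores = silhouette_samples(features, labels)
    per_class = {c: scores[labels == c].mean() for c in np.unique(labels)}
    return min(per_class, key=per_class.get)  # next class to query

# e.g. next_gesture = least_separable_class(X_recorded, y_recorded)
```

The trainer would then prompt the user to perform the returned gesture class in the next recording round, concentrating teaching effort where the classifier is weakest.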
Studying Collaborative Interactive Machine Teaching in Image Classification
Behnoosh Mohammadzadeh, Jules Françoise, Michèle Gouiffès, and 1 more author
In Proceedings of the 29th International Conference on Intelligent User Interfaces, Apr 2024
While human-centered approaches to machine learning explore various human roles within the interaction loop, the notion of Interactive Machine Teaching (IMT) emerged with a focus on leveraging humans’ teaching skills to build machine learning systems. However, most systems and studies are devoted to single users. In this article, we study collaborative interactive machine teaching in the context of image classification to analyze how people can structure the teaching process collectively and to understand their experience. Our contributions are threefold. First, we developed a web application called TeachTOK that enables groups of users to curate data and train a model together incrementally. Second, we conducted a study in which ten participants were divided into three teams that competed to build an image classifier in nine days. Qualitative results of participants’ discussions in focus groups reveal the emergence of collaboration patterns in the machine teaching task, how collaboration helps revise teaching strategies, and participants’ reflections on their interaction with the TeachTOK application. From these findings, we provide implications for the design of more interactive, collaborative, and participatory machine learning-based systems.
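The incremental curate-then-train loop that a tool like TeachTOK supports can be pictured roughly as follows (a simplified sketch under my own assumptions; it does not show the actual application, its model, or its API):

```python
# Simplified sketch (assumed structure, not TeachTOK's actual code) of a
# shared, incremental teaching loop: team members contribute labeled images
# to a common dataset, and the classifier is retrained after each round.
from dataclasses import dataclass, field

@dataclass
class SharedTeachingSession:
    images: list = field(default_factory=list)   # curated examples
    labels: list = field(default_factory=list)

    def contribute(self, member: str, image, label: str):
        """A team member adds a curated, labeled example to the shared set."""
        self.images.append(image)
        self.labels.append(label)
        print(f"{member} added an example of '{label}' "
              f"({len(self.images)} examples total)")

    def retrain(self, train_fn):
        """Retrain the shared model on everything taught so far."""
        return train_fn(self.images, self.labels)
```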
2022
Deep Learning Uncertainty in Machine Teaching
Téo Sanchez, Baptiste Caramiaux, Pierre Thiel, and 1 more author
In Proceedings of the 27th International Conference on Intelligent User Interfaces, Mar 2022
Machine Learning models can output confident but incorrect predictions. To address this problem, ML researchers use various techniques to reliably estimate ML uncertainty, usually performed on controlled benchmarks once the model has been trained. We explore how the two types of uncertainty, aleatoric and epistemic, can help non-expert users understand the strengths and weaknesses of a classifier in an interactive setting. We are interested in users’ perception of the difference between aleatoric and epistemic uncertainty and their use to teach and understand the classifier. We conducted an experiment where non-experts train a classifier to recognize card images, and are tested on their ability to predict classifier outcomes. Participants who used either larger or more varied training sets significantly improved their understanding of uncertainty, both epistemic and aleatoric. However, participants who relied on the uncertainty measure to guide their choice of training data did not significantly improve classifier training, nor were they better able to guess the classifier outcome. We identified three specific situations where participants successfully identified the difference between aleatoric and epistemic uncertainty: placing a card in the exact same position as a training card; placing different cards next to each other; and placing a non-card, such as their hand, next to or on top of a card. We discuss our methodology for estimating uncertainty for Interactive Machine Learning systems and question the need for two-level uncertainty in Machine Teaching.
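One common way to obtain the two uncertainty estimates interactively is Monte Carlo dropout (whether the study used exactly this estimator is an assumption on my part): keep dropout active at prediction time, run several stochastic forward passes, and split the predictive entropy into an aleatoric part (the expected entropy of individual passes) and an epistemic part (the remaining mutual information).

```python
# Hedged sketch: aleatoric/epistemic decomposition with Monte Carlo dropout
# (an illustrative estimator, not necessarily the one used in the paper).
import torch

def mc_dropout_uncertainty(model: torch.nn.Module, x: torch.Tensor, T: int = 30):
    model.train()  # keep dropout layers stochastic at inference time
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=-1) for _ in range(T)])
    mean_p = probs.mean(dim=0)  # (batch, n_classes) averaged prediction
    eps = 1e-12
    total = -(mean_p * (mean_p + eps).log()).sum(-1)            # predictive entropy
    aleatoric = -(probs * (probs + eps).log()).sum(-1).mean(0)  # expected entropy
    epistemic = total - aleatoric                               # mutual information
    return aleatoric, epistemic
```

Under this decomposition, high epistemic uncertainty flags inputs unlike anything taught so far (more teaching examples could help), while high aleatoric uncertainty flags inputs that are inherently ambiguous.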
2021
How do people train a machine? Strategies and (Mis)Understandings
Téo Sanchez, Baptiste Caramiaux, Jules Françoise, and 2 more authors
Proceedings of the ACM on Human-Computer Interaction, Mar 2021