Interactive Machine Learning Heuristics

Faculty: 
Christopher Le Dantec
Students: 
Eric Corbett

 

Machine learning has enabled many new forms of user experience and interaction across the landscape of human-computer interaction, such as self-driving cars, voice assistants, and personalized recommendation systems. These advances push machine learning from being an underlying technical infrastructure unaddressed by HCI research to the forefront of user experience and interface design. As such, machine learning is now a new frontier for human-computer interaction: a source of innovation for user experience and design that, in turn, requires new design methods and research practices. Much of the research into this space falls under the purview of interactive machine learning (IML).

IML is a new paradigm that seeks to "enable everyday users to interactively explore the model space through trial and error and drive the system towards an intended behavior, reducing the need for supervision by practitioners." A key element of IML is the design of interfaces that enable and support co-adaptivity, such that the end user's interactions and the target model can directly influence each other's behavior. These interactions between system and user are emergent properties of the interface, which raises interface design and usability as key challenges for IML. Acknowledging the challenge of designing these interfaces, noted machine learning scholar Amershi poses the question: "How we should design future human interaction with interactive machine learning, much like we have for designing traditional interfaces"? An important step towards answering this challenge will be bridging the gulf between traditional interface evaluation methods and evaluation methods suitable for IML.
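To make the trial-and-error loop described above concrete, the following is a minimal sketch, not drawn from the original text, of one interactive steering cycle. It assumes scikit-learn's LogisticRegression as a stand-in learner, and get_user_correction is a hypothetical callback standing in for the interface: the user inspects predictions and confidences, corrects a label, and the model is retrained.

import numpy as np
from sklearn.linear_model import LogisticRegression

def interactive_session(X_labeled, y_labeled, X_pool, get_user_correction):
    """Predict, show confidences, accept a correction, retrain; repeat.

    get_user_correction is a hypothetical stand-in for the interface: it
    receives the pool items and current predicted probabilities and returns
    (index, corrected_label), or None when the user is satisfied.
    """
    model = LogisticRegression()
    while True:
        model.fit(X_labeled, y_labeled)
        probs = model.predict_proba(X_pool)      # confidence surfaced to the user
        correction = get_user_correction(X_pool, probs)
        if correction is None:                   # user judges the concept learned
            return model
        idx, label = correction
        # Fold the corrected instance into the training set and loop again.
        X_labeled = np.vstack([X_labeled, X_pool[idx:idx + 1]])
        y_labeled = np.append(y_labeled, label)

Each pass through this loop is one turn of the co-adaptive cycle: the user's feedback changes the model, and the updated predictions in turn shape what the user corrects next.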

To address this challenge, we have developed ten heuristics specific to IML systems by distilling design principles from the interactive machine learning literature and from our collective experience designing and evaluating IML systems. The specific interpretation and relative importance of these heuristics are likely to vary depending on the level of involvement expected of the user and on the complexity of the required functionality. Compare, for example, a user instructing a content suggestion service to obtain better recommendations versus a user training a robot to perform some function in response to a given input. This variability across applications frustrates efforts to obtain concise heuristics generalizable to all IML applications. With that in mind, these heuristics attempt to cover a wide range of IML functionality.

Interactive machine learning heuristics

  1. Enable the User to Steer the Model: The interface should enable the user to iteratively steer the model towards a desired concept through the interaction techniques available and the visual feedback presented.
  2. Enable the User to Provide Feedback that Improves Concept Quality: Allow the user to provide feedback on specific instances by (re)assigning labels, selecting or re-weighting features, generating new samples, or adjusting cost matrices.
  3. Capture Intent Rather than Input: What the user does is not always the same as what the user intends. Therefore, the interface should help to extract user intent from potentially noisy input actions where possible (and appropriate).
  4. Support User Assessment of Model Quality: Users need to be able to assess the quality of the current state of the model. Quality can be in terms of coverage, prediction accuracy, or confidence.
  5. Provide Instance-Based Explanations to the User: Provide human-readable illustrations of the learned concept. This can allow users to understand the model predictions in a specific instance of model failure or success.
  6. Support Rich, Natural Feedback: People want to provide feedback naturally, rather than be forced to interact in limited, system-centric ways. Support rich, user-centric feedback.
  7. Make Interactions and Constraints Explicit: Any user interaction that influences model behavior (and/or the constraints in doing so) should be made explicit.
  8. Promote Trial and Error-based Model Exploration: In constructing and refining a model, users require the ability to retrace steps in the event that recent actions have resulted in an undesired outcome. Providing revision mechanisms and history information encourages users to actively explore the model space (a simple sketch of such a mechanism follows this list).
  9. Error Prevention: Users are often imprecise and inconsistent. They may not stick to a concept, or they may introduce errors and bias, all of which will negatively affect trained model quality. Careful design of the interface, both in terms of the information presented and the guidance provided, can help to prevent these user errors.
  10. Help and Documentation: Sometimes users will want to learn how to provide nuanced feedback to steer the system. Therefore, tutorials should describe how various controls and actions will impact the learner.
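As a concrete illustration of heuristics 2, 4, and 8, the following is a minimal sketch, an assumption-laden illustration rather than the authors' implementation, of a back-end session object that records label feedback, lets the user retrace recent actions, and surfaces a simple quality signal. The quality method assumes a scikit-learn-style estimator with a score method.

from dataclasses import dataclass, field

@dataclass
class FeedbackSession:
    labels: dict = field(default_factory=dict)   # instance id -> user-assigned label
    history: list = field(default_factory=list)  # stack of (instance id, previous label)

    def relabel(self, instance_id, new_label):
        """Record user feedback on one instance (heuristic 2), remembering the prior state."""
        self.history.append((instance_id, self.labels.get(instance_id)))
        self.labels[instance_id] = new_label

    def undo(self):
        """Retrace the most recent labeling action (heuristic 8)."""
        if not self.history:
            return
        instance_id, previous = self.history.pop()
        if previous is None:
            self.labels.pop(instance_id, None)
        else:
            self.labels[instance_id] = previous

    def quality(self, model, X_holdout, y_holdout):
        """Surface a simple model-quality signal for the user (heuristic 4)."""
        return model.score(X_holdout, y_holdout)  # assumes a scikit-learn-style estimator

An interface built on a session like this can make every label change reversible, which lowers the cost of the trial-and-error exploration the heuristics call for.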

Taken together, these ten heuristics cover the essential elements of IML user interface design. There are many ways to use them. To start, the heuristics can guide early system development choices such as the selection of a machine learning algorithm, an explanatory visualization technique, or an interface design paradigm. Using these heuristics in an evaluation of an existing system should allow designers to achieve baseline usability for user experience with IML functionality. This is the primary function of heuristic evaluation: to identify and address usability challenges prior to user interaction. Doing so allows later evaluations with users to focus on more complex issues of user experience, such as trust, decision-making, and overall satisfaction. These more complex aspects of user experience can be difficult to assess if a myriad of usability issues exist.

As Dudley and Kristensson noted: "Machine learning techniques are slowly creeping into the lives of non-expert users. Enabling users to efficiently interact with such algorithms is likely to be a key design challenge in the coming decade." A key step towards addressing this challenge will require researchers across the visualization, machine learning, and HCI communities to develop evaluation practices suited to the unique usability issues presented by IML. As an approach, IML is vital given the growing ubiquity of systems built on machine learning techniques that people use without understanding them, often leading to adverse results. Using these heuristics will ideally increase user understanding of and engagement with the model, hopefully reducing bias and increasing trust.

Lab: 
Participatory Publics Lab
Faculty: 
Christopher Le Dantec

The Participatory Publics Lab is a group of researchers concerned with community engagement and design. We are part of the Digital Media program in the School of Literature, Media, and Communication at Georgia Tech.

We explore the design of mobile and social media in the context of community development and activism. We do this through different modes of participation: in the design of these technologies; in the development of discourses about these technologies; in the use, adoption, and appropriation of these technologies.

We investigate forms of civic and community engagement through participatory design, design research, ethnographic research, and critical scholarship. Our research is supported by the National Science Foundation (NSF) and conducted as part of the Intel Science and Technology Center in Social Computing (ISTC-Social).