Towards Practices for Human-Centered Machine Learning

Stevie Chancellor
3 min read · Dec 28, 2023


An illustration of a circuit board brain. Photo by Steve Johnson on Unsplash

(This blog post was initially posted on the GroupLens blog several months ago, but I like to keep everything here as well!)

Video of me discussing the paper!

People are excited about human-centered AI and machine learning as ways to make AI more ethical and socially appropriate. AI has captured the popular zeitgeist with promises of generalized artificial intelligence that can solve many complex human problems. These promises of ML, however, have had negative consequences, with failures both ridiculous and catastrophic. They rack up so fast that colleagues are keeping AI Incident databases, reports of AI ethics failures, and more.

How will ML researchers and engineers avoid these problems and move towards more compassionate and responsible ML? There aren't many concrete guidelines on what it looks like to do human-centered machine learning in practice. And while there are some pragmatic guides, they often lack the connection between technical work and social, cultural, and ethical concerns.

In my recently published CACM article, I argue that there is a gap in building human-centered systems: the gap between the values we hold but lack actionable methods for, and the technical methods we have that don't align with our values. The paper argues for practices that bridge these ever-significant values and ever-practical methods.

This paper synthesizes my CS and Critical Media Studies background in thinking about how we should DO HCML. It also builds on my decade of research experience in a challenging human-centered area: predicting and acting on dangerous mental health behaviors discussed on social media. The paper grounds HCML in classical definitions of human-centeredness and lays out five practices for researchers and practitioners. These practices ask us to treat technical advancements as EQUAL TO our commitments to social realities. In doing this, we can make genuinely impactful technical systems that meet people and communities where they're at.

Here are the five big takeaways from the paper and the practices you can implement immediately.

  1. Ask if machine learning is the appropriate approach to take
  2. Acknowledge that ML decisions are “political”
  3. Consider more than just a single “user” of an ML system
  4. Recognize other fields’ contributions to HCML
  5. Think about ML failures as a point of interest, not something to be afraid of

Let’s dig into one of these: considering more than just a single “user” of an ML system. When we think about who “uses” a system, we often only consider the person commissioning or building it. Even in HCI, we talk about “users” of systems and (if we’re lucky) the people whose data goes into the model. However, many systems have much larger constellations of people involved in the ML model.

For example, in facial recognition technology, the “user” may be a government or business. But the people whose faces are in that system are also “users” of the technology. Likewise, if that facial recognition system is deployed in an airport to screen passengers for flight identification, everyone who walks by ambiently interacts with it. The system even meaningfully impacts a person who chooses NOT to interact with it, if opting out means spending more time in airport security or having their identity scrutinized more closely. Both examples make it clear that many stakeholders are involved in an ML model, and we should consider all of them, including everyone whose data goes into creating it.

I aim for these principles to inspire action: to encourage deeper research, empirical evaluations, and new ML methods. I also hope the practices make human-centered activities more tractable for researchers AND practitioners. Finally, I hope this inspires you and your colleagues to ask hard questions, which may mean making bold decisions, taking action, and balancing competing priorities in our work.

You can read more about this paper in the recently published Featured Article in the Communications of the ACM here.

