AI & MACHINE LEARNING

When Human-in-the-loop Isn’t Human-centered.

For obvious reasons, “human-in-the-loop” framing is everywhere these days. Sometimes it’s lip service or buzzword bingo, but more worryingly, a pattern has emerged where it’s used as a foil to shift work away from human-centered solutions and critical thinking.

Products and services are no longer being built FOR PEOPLE to use; instead, “loops” are being built and foisted upon people, who are left to figure out why they’re useful. The loops are architected to satisfy requirements based on what the technology can do, not what a human is trying to achieve. You’ll hear things like “This is designed to keep humans in control at every step” or “The user will shape the outcomes” when in actuality the system is designed to produce outputs that someone is supposed to do something with. The goal becomes building a system first, essentially designing for AI, not for people.

Human-centered methodologies are not going away.

I recently read an article that captures this shift and illustrates why I find it problematic. Two people (a UX architect and a researcher) on Microsoft’s Business & Industry Copilot team wrote about designing loops, not paths. The underlying ideas they write about are interesting, but the framing was a problem for me.

The primary message, as stated in the introductory section, is that workflow-based methodologies are outdated. Things like breaking objectives down into jobs-to-be-done or tasks needed to accomplish goals are giving way to innovative solutions. Glossing over what exactly is meant by “innovative solutions,” they go on to say “In our new era of adaptive, intelligent agents, (human-centered) workflows are a cage. As design tools, they are rigid, overcomplicated, and limiting.”

This is baffling, especially coming from two people who work in “User Experience,” and it begins to show how over-indexing on “intelligent agents” provides an excuse to ignore the people the system is supposed to help. These approaches, especially JTBD, are technology agnostic, and that’s why they work. You’d be hard-pressed to come up with a methodology less complicated than “what task is the user trying to perform,” and shedding that approach because the solution to supporting that task might NOT be an automated loop of agent tasks is, frankly, laughable.

No, first, identify actual human problems.

It’s hard to believe these words were written down, reviewed, and deemed ready for publication. However, there they were, front and center, literally instructing people that the first step of this new innovative process was to think about what the system is trying to achieve.

“First, a goal is defined—what ultimately is the system trying to achieve? This desired outcome is the foundation on which the system will ground all its actions and adjustments.”

Again, the underlying ideas have merit, but framing them this way, without acknowledging that all of it should support an actual person, isn’t the way to go.

I don’t have a big reveal or super smart conclusion.

I only hope to help you identify what’s happening if/when you encounter the human-in-the-loop excuse. All I can say is that you need human-centered methodologies if you want to create valuable products, so keep doing what you do: talk to humans, document those jobs, and create those workflows.

Because, unfortunately for Microsoft and the other “AI” hype men, it’s clear the approach of building AI loops into everything and then forcing people to use them isn’t viable. I mean, Microsoft literally announced recently that the (forced) inclusion of Copilot in their OS might cause you to download malware, and that’s your problem, so good luck. I paraphrase, but that’s the message they communicated in more business-y words.