University researchers plan to create Alexa-like device to identify implicit bias

‘Privacy, surveillance, effects on trust are central aspects of the research’

Imagine a fly on the wall that attentively listens to every last word that is spoken in the conference room. It will keep track of every workplace interaction over time, constantly collecting data.

How would you change your behavior – knowingly and subconsciously?

Despite questions about the reliability of the “implicit bias” test and the training based on its assumptions, a team of researchers wants to see whether artificial intelligence can catch – and potentially curtail – implicit biases in the workplace.

Northeastern University’s Brooke Welles and Christoph Riedl, in collaboration with other academics, are studying whether an Alexa-like device could be built to improve and promote diversity and inclusion in the workplace.

Their goal is to unearth hidden bias in everyday office interactions so that decision-makers can address it and remove such impediments to productivity.

Welles’ specialty is communication studies, and she is interested in online communication networks and the computer science behind them. Riedl’s teaching focuses on supply chain and information management systems, and he also engages with online communication networks.

The concept is laid out in a Northeastern press release: “what if a smart device, similar to the Amazon Alexa, could tell when your boss inadvertently left a female colleague out of an important decision, or made her feel that her perspective wasn’t valued?”

The College Fix worked with Welles and Riedl throughout February to set up phone and in-person interviews, but neither researcher was able to schedule a real-time interview.

Both ended up answering questions over email. The Fix asked how the results of their research could prove any more effective than implicit bias training, whether such theoretical, ever-present technology could erode trust within a team, and what legal issues might result from the presence of such a device in a professional setting.

These are precisely the “research questions that [we] plan to investigate – we don’t have all the answers yet,” Riedl responded. “Privacy, surveillance, effects on trust, and other concerns are central aspects of the research.”

The team is not “yet able to answer” such questions, Welles said. “[I]ndeed, these are some of the key research questions for our project… We expect more specific results in a year or so.”

Want to show evidence of gender bias ‘so that we can feel validated’

The project received a $1.5 million grant from the U.S. Army Research Laboratory. The professors envision a three-year scope for the research, drawing on social science theories, machine learning, and audio-visual and physiological sensors.

They will seek to understand patterns in how co-workers communicate, studying in particular whether AI technology can effectively promote group productivity and human engagement by ensuring inclusivity.

“We plan to study our research questions using novel sensors and analysis techniques that allow us to study groups,” Riedl wrote in an email. He shared three papers the duo has co-written on “the sensors and the machine learning we use to analyze the sensor data” and “new data modeling approaches which enable us to study emergent team process.”

While the researchers don’t specify which kinds of bias they hope to flag, Welles speaks of her personal experience with gender bias in the Northeastern press release.

“When you’re having this experience, it’s really hard as the woman in the room to intervene and be like, ‘you’re not listening to me,’ or ‘I said that and he repeated it and now suddenly we believe it,’” she said.

“I really love the idea of building a system that both empowers women with evidence that this is happening so that we can feel validated and also helps us point out opportunities for intervention,” she said.

Riedl said in the release that a gadget with an active ear for bias has applications in professional environments of all kinds, including companies, nonprofits and academia.

The researchers will program an Alexa-like device that is “sensor-equipped” to pick up on verbal and nonverbal cues – and “eventually” physiological signals – unique to the specific team working in the device’s presence.

The team is using measures known as “non-verbal behavioral metrics,” according to a paper they co-wrote for last year’s Association for Computing Machinery International Conference on Multimodal Interaction. (The paper does not mention “bias,” instead using the terminology of “emergent group leaders” and “dominant contributors.”)
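
The paper does not publish code, but the kind of metric it describes is simple to illustrate. Below is a minimal sketch in Python, assuming a diarization step has already labeled who spoke when; the segment data, the 50 percent threshold, and the flagging rule are invented for illustration and are not the researchers’ actual method.

```python
from collections import Counter

# Hypothetical diarized meeting audio: (speaker, start_sec, end_sec) tuples.
# A real system would derive these from microphone arrays and a diarization model.
segments = [
    ("A", 0.0, 42.5), ("B", 42.5, 50.0), ("A", 50.0, 95.0),
    ("C", 95.0, 101.0), ("A", 101.0, 160.0), ("B", 160.0, 171.5),
]

def talk_time_shares(segments):
    """Return each participant's fraction of total speaking time."""
    durations = Counter()
    for speaker, start, end in segments:
        durations[speaker] += end - start
    total = sum(durations.values())
    return {speaker: d / total for speaker, d in durations.items()}

shares = talk_time_shares(segments)
# Flag anyone who held the floor more than half the time as a "dominant contributor."
dominant = [s for s, share in shares.items() if share > 0.5]

print(shares)    # roughly {'A': 0.85, 'B': 0.11, 'C': 0.03}
print(dominant)  # ['A']
```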

Once the device has stored an abundance of information on how the team interacts, it will make recommendations on how the team can improve by pointing out its social shortcomings.

In the Northeastern press release, Welles points to one example of addressing perceived bias: identifying individuals who dominate meetings.

“You could imagine [a scenario] where maybe a manager at the end of a group deliberation gets a report that says person A was really dominating the conversation,” she said.

The device would then inform upper management that some employees may have been left out of a discussion.

From there it would remind higher-ups to follow up with the person who ruled the room, prompting conversations about the dynamics of the previous meeting.
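
Continuing the toy metrics above, a post-meeting report of the kind Welles describes might be drafted like this; the thresholds and wording are illustrative assumptions, not anything the researchers have published.

```python
def meeting_report(shares, dominance_threshold=0.5, quiet_threshold=0.1):
    """Draft a manager-facing note flagging dominant and under-heard participants.

    Thresholds are illustrative; a real system would need to tune them per team.
    """
    lines = []
    for speaker, share in sorted(shares.items(), key=lambda kv: -kv[1]):
        lines.append(f"Person {speaker}: {share:.0%} of speaking time")
        if share > dominance_threshold:
            lines.append(f"  -> Person {speaker} dominated the conversation; consider following up.")
        elif share < quiet_threshold:
            lines.append(f"  -> Person {speaker} was barely heard; check whether they were left out.")
    return "\n".join(lines)

print(meeting_report({"A": 0.85, "B": 0.11, "C": 0.03}))
```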

There are potential obstacles to their research, though. It may have the same problem as implicit bias training: human cooperation.

As the press release paraphrases Welles, “it’s anyone’s guess how knowledge of the presence of such a device in a room will affect how its occupants interact with one another.”

Riedl also acknowledged that, given the current state of knowledge about team dynamics, a smart device could inhibit the cohesion of a workplace team rather than help it work better together.

This is where AI comes in, and where Riedl hopes it can fill some serious gaps. “What I find super interesting about this project is that it is both using machine learning to study teams and then at the same time using AI to intervene on the team and make the team better.”

About the Author
Alexander Pease -- University of Massachusetts, Boston