
Google removes gendered pronouns from Gmail's Smart Compose feature


Gmail’s Smart Compose is one of Google’s most interesting AI features this year, predicting what users are writing in an email and offering to finish their sentences for them. But like many AI products, it’s only as smart as the data it’s trained on, and it is prone to making mistakes. That’s why Google has blocked Smart Compose from suggesting gender-based pronouns like “him” and “her” in emails – Google is worried it will guess the wrong gender.

Reuters reports that this restriction was introduced after a researcher at the company discovered the problem in January this year. The researcher typed “I am meeting an investor next week” in a message, and Gmail suggested the follow-up question “Do you want to meet him?”, misgendering the investor.

Gmail Product Manager Paul Lambert told Reuters that his team tried to solve the problem in a number of ways, but none was reliable enough. In the end, Lambert says, the simplest solution was to remove these types of suggestions altogether, a change Google says affects less than one percent of Smart Compose predictions. Lambert told Reuters that it pays to be careful in cases like these because gender is a “big, big deal” to get wrong.

This small error is a good example of how software built on machine learning can mirror and reinforce societal biases. Like many AI systems, Smart Compose learns by studying past data, combing through old emails to work out which words and phrases it should suggest. (Its sister feature, Smart Reply, does the same to suggest bite-size answers to emails.)

In Lambert’s example, it appears that Smart Compose had learned from past data that investors were more likely to be men than women, and so it wrongly predicted that this one was too.
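To make the mechanics concrete, here is a minimal, hypothetical sketch of how a frequency-based completion model can pick up that kind of skew from its training text. It is a toy bigram counter written in Python for illustration only, not Google’s actual system; the corpus and function names are invented.

```python
from collections import Counter

# Toy "training data": past sentences a completion model might learn from.
# The skew toward "him" after "meet" stands in for real-world bias in email corpora.
corpus = [
    "i am meeting an investor next week do you want to meet him",
    "the investor said he would call back so i emailed him",
    "our investor asked if we could meet him on friday",
    "the investor is visiting so please greet her at reception",
]

# Count which word most often follows each word (a simple bigram model).
follow_counts = {}
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        follow_counts.setdefault(prev, Counter())[nxt] += 1

def suggest_next(word):
    """Suggest the most frequent follower of `word` in the training data."""
    counts = follow_counts.get(word)
    return counts.most_common(1)[0][0] if counts else None

# In this toy corpus "meet" is followed by "him" twice and by "her" never,
# so the model confidently suggests the masculine pronoun.
print(suggest_next("meet"))  # -> "him"
```

The point of the sketch is that nothing in the code is explicitly about gender: the skew comes entirely from the frequencies in the data it was given, which is exactly why such systems reproduce the patterns of the emails they were trained on.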

It’s a relatively small slip, but it points to a much bigger problem. If we trust predictions made by algorithms trained on past data, we are likely to repeat the mistakes of the past. Guessing the wrong gender in an email doesn’t have major consequences, but what about AI systems that make decisions in domains such as healthcare, employment, and the courts? Last month, Amazon reportedly scrapped an internal recruitment tool built with machine learning because it was biased against female candidates. AI bias can cost you your job, or worse.

For Google, the stakes are high. The company integrates algorithmic judgments into several of its products and sells machine learning tools around the world. If one of its most visible AI features is making such trivial mistakes, why should consumers trust the company’s other services?

The company has evidently seen these issues coming. On a Smart Compose help page, Google warns users that the AI models it uses “can also reflect human cognitive biases. Being aware of this is a good start, and the conversation around how to handle it is ongoing.” In this case, though, the company hasn’t fixed much – it has simply removed the opportunity for the system to make a mistake.

Published by
Faela