File photo – Sophia, a robot integrating the latest technologies and artificial intelligence developed by Hanson Robotics, is pictured during a presentation at the "AI for Good" Global Summit at the International Telecommunication Union (ITU) in Geneva, Switzerland June 7, 2017. (REUTERS/Denis Balibouse)
“Your email was blocked, we’ve contacted an HR representative.”
This message could go a long way toward weeding out some of the sexually explicit messaging in the workplace, most recently highlighted by a New York Times report.
Although it would by no means block all suggestive comments that occur in the workplace, there is a way for artificial intelligence (AI) to become more aware of what is happening in the digital realm. The opportunity grows as employees increasingly use workplace tools like Slack and Microsoft Teams, send emails through a corporate server or text using company-managed apps.
“AI services in the workplace already can analyze workers’ e-mails to determine if they feel unhappy about their job,” says Michelle Lee Flores, a labor and employment attorney. “In the same way, AI can use the data-analysis technology (such as data monitoring) to determine if sexually suggestive communications are being sent.”
Of course, there are privacy implications. Slack, for one, is an official communication channel sanctioned and managed by the company in question; the intent is to discuss projects related to the firm, not to ask people out on a date. Flores says AI could serve as a reporting tool, scanning messages to determine whether an innocuous comment could be misinterpreted.
“If the computer and handheld devices are company issued, employees should have no expectation of privacy as to anything in the emails or texts,” she says.
When someone sends a sexually explicit image over email or one employee starts hounding another, an AI can be ever watchful, reducing how often suggestive comments and photos are distributed. There's also the deterrent of being reported: an AI can be a powerful leveraging tool, one that knows exactly what to look for at all times.
More than anything, AI could curb the tide. A bot installed on Slack or on a corporate email server could at least look for obvious harassment issues and flag them.
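The kind of flagging bot described above can be sketched in a few lines. This is a minimal, hypothetical illustration; the keyword patterns, the `flag_message` function, and the "notify HR" action are all assumptions standing in for the trained classifiers and vendor APIs a real Slack or email-server integration would use.

```python
import re

# Crude, hand-written patterns an HR-facing bot might start with.
# A production system would use a trained text classifier, not a list
# like this; these patterns are purely illustrative.
EXPLICIT_PATTERNS = [
    r"\bsexy\b",
    r"\bnude\b",
    r"\bhot\b.{0,20}\bbody\b",
]

def flag_message(text: str) -> dict:
    """Scan one message and report whether it should be flagged."""
    hits = [p for p in EXPLICIT_PATTERNS
            if re.search(p, text, re.IGNORECASE)]
    return {
        "flagged": bool(hits),
        "matched_patterns": hits,
        # As the article notes, a flag would more likely route a report
        # to HR than block the message outright.
        "action": "notify_hr" if hits else "allow",
    }

print(flag_message("Great work on the quarterly report!"))
print(flag_message("You looked sexy in that meeting"))
```

Even this toy version shows the trade-off the experts describe: pattern matching catches only the obvious cases, and a message *about* harassment would trip the same patterns as harassment itself.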
Dr. Jim Gunderson, an AI expert, says he could see some value in using artificial intelligence as a reporting tool, one that could augment some HR functions. However, he notes that even humans sometimes have a hard time determining whether an off-hand comment was suggestive or merely a joke. He says sexual harassment is usually subtle — a word or a gesture.
“If we had the AI super-nanny that could monitor speech and gesture, action and emails in the workplace, scanning tirelessly for infractions and harassment, it would inevitably exchange a sexual-harassment-free workplace for an oppressive work environment,” he adds.
Part of the issue is that an AI can make mistakes. When Microsoft released a Twitter bot called Tay into the wild last year, users trained it to use hate speech.
Though artificial intelligence has become more prevalent in recent years, the technology is far from perfect. An AI could wrongly flag a message that discusses the problem of sexual abuse, or read malice into a comment meant as a harmless joke, unnecessarily putting an employee under the microscope.
But still, there is hope. Experts say an AI that watches our conversations is impartial: it can flag and block content in a way that is unobtrusive and helpful, not as a corporate overlord watching everything we say.