November 24, 2024

AI is coming for our anger

I’m a human being God damn it! My life has value! . . . I’m as mad as hell, and I’m not going to take this any more!

Howard Beale, the prophetically fuming anti-hero from the 1976 film Network, was certainly very angry. Increasingly, according to successive Gallup surveys of the world’s emotional state, we all are.

But possibly not for much longer if artificial intelligence has any say in it. AI was already coming for our jobs; now it is coming for our fury. The question is whether anything has a right to take that fury without permission, and whether anyone is ready to fight for our right to rage.

This month, the separately listed mobile arm of Masayoshi Son’s SoftBank technology empire revealed that it was developing an AI-powered system to protect browbeaten workers in call centres from down-the-line diatribes and the broad palette of verbal abuse that falls under the definition of customer harassment.

It is unclear if SoftBank was deliberately seeking to evoke dystopia when it named this project, but “EmotionCancelling Voice Conversion Engine” has a bleakness that would turn George Orwell green.

The technology, developed at an AI research institute established by SoftBank and the University of Tokyo, is still in its R&D phase, and the early demo version suggests there is plenty more work ahead. But the principle is already sort of working, and it is as weird as you might expect.

In theory, the voice-altering AI changes the rant of an angry human caller in real time so the person at the other end hears only a softened, innocuous version. The caller’s original vocabulary remains intact (for now; give dystopia time to solve that one). But, tonally, the rage is expunged. Commercialisation and installation in call centres, reckons SoftBank, can be expected sometime before March 2026.
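SoftBank has not published how the engine works under the hood, but the behaviour described above, keep the words and strip the tone, can be sketched in broad strokes: flatten the pitch swings and cap the loudness spikes that carry the anger, then resynthesise the voice. The Python fragment below is a minimal illustration under those assumptions; the function, parameters and thresholds are invented, not SoftBank's design.

```python
# Hypothetical sketch only: compress angry prosody while leaving the words intact.
# The contours, parameter names and thresholds are assumptions, not SoftBank's.
import numpy as np

def soften_prosody(pitch_hz: np.ndarray, energy: np.ndarray,
                   pull_to_mean: float = 0.7,
                   energy_ceiling: float = 1.5) -> tuple[np.ndarray, np.ndarray]:
    """Flatten pitch swings toward the speaker's average and cap loudness spikes.

    pitch_hz and energy are per-frame contours from any pitch tracker; frames with
    pitch 0 are unvoiced. A synthesiser would rebuild audio from the softened contours.
    """
    voiced = pitch_hz > 0
    out_pitch = pitch_hz.copy()
    if voiced.any():
        mean_pitch = pitch_hz[voiced].mean()
        # Pull each voiced frame's pitch most of the way back to the speaker's mean.
        out_pitch[voiced] = mean_pitch + (1.0 - pull_to_mean) * (pitch_hz[voiced] - mean_pitch)

    # Clip shouting: no frame may exceed a multiple of the median loudness.
    out_energy = np.minimum(energy, energy_ceiling * np.median(energy))
    return out_pitch, out_energy
```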

As with so many of these projects, humans have collaborated for cash with their future AI overlords. The EmotionCancelling engine was trained using actors who performed a large range of angry phrases and a gamut of ways of giving outlet to ire, such as shouting and shrieking. These provide the AI with the pitches and inflections to detect and replace.
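How those acted recordings become something a machine can act on is not spelled out, but one plausible reading is a simple supervised loop: reduce each clip to a handful of prosodic features and learn which combinations signal anger. The sketch below is a guess at that shape, using synthetic stand-in data rather than SoftBank's corpus.

```python
# Illustrative only: a toy anger detector trained on prosodic summaries of
# "acted" clips. The features, data and model are stand-ins, not SoftBank's.
import numpy as np
from sklearn.linear_model import LogisticRegression

def prosody_features(pitch_hz: np.ndarray, energy: np.ndarray) -> np.ndarray:
    """Summarise one utterance as [mean pitch, pitch range, mean loudness, peak loudness]."""
    voiced = pitch_hz[pitch_hz > 0]
    return np.array([voiced.mean(), voiced.max() - voiced.min(), energy.mean(), energy.max()])

rng = np.random.default_rng(0)
# Synthetic stand-ins for actors' clips: angry speech pitched higher, wider and louder.
calm = [(rng.normal(120, 10, 200), np.abs(rng.normal(0.3, 0.05, 200))) for _ in range(50)]
angry = [(rng.normal(220, 40, 200), np.abs(rng.normal(0.8, 0.2, 200))) for _ in range(50)]

X = np.array([prosody_features(p, e) for p, e in calm + angry])
y = np.array([0] * 50 + [1] * 50)  # 0 = calm, 1 = angry
anger_detector = LogisticRegression(max_iter=1000).fit(X, y)
```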

Set aside the various hellscapes this technology conjures up. The least imaginative among us can see ways in which real-time voice alteration could open a lot of perilous paths. The issue, for now, is ownership: the lightning evolution of AI is already severely testing questions of voice ownership by celebrities and others; SoftBank’s experiment is testing the ownership of emotion.

SoftBank’s project was clearly well intentioned. The idea apparently came to one of the company’s AI engineers who watched a film about rising abusiveness among Japanese customers towards service-sector workers — a phenomenon some ascribe to the crankiness of an ageing population and the erosion of service standards by acute labour shortages.

The EmotionCancelling engine is presented as a solution to the intolerable psychological burden placed on call centre operators, and the stress of being shouted at. As well as stripping rants of their frightening tone, the AI will step in to terminate conversations it deems to have run too long or turned too vile.
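The article does not say how the engine decides a call has crossed that line; whatever the real policy, its shape is presumably a threshold rule along these lines, with the numbers and the abuse score entirely made up for illustration.

```python
# Hypothetical cut-off rule: the thresholds and the abuse score are invented
# for illustration, not SoftBank's actual policy.
def should_terminate(call_seconds: float, abuse_score: float,
                     max_seconds: float = 1800.0, abuse_limit: float = 0.9) -> bool:
    """End the call once it has run too long or the detected abuse level is too high."""
    return call_seconds > max_seconds or abuse_score > abuse_limit
```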

But protection of the workers should not be the only consideration here. Anger may be a very unpleasant and scary thing to receive, but it can be legitimate and there must be caution in artificially writing it out of the customer relations script — particularly if it only increases when the customer realises their expressed rage is being suppressed by a machine.

Businesses everywhere can — and do — warn customers against abusing staff. But removing anger from someone’s voice without their permission (or by burying that permission in fine print) steps over an important line, especially when AI is put in charge of the removal.

The line crossed is where a person’s emotion, or a certain tone of voice, is commoditised for treatment and neutralisation. Anger is an easy target for excision, but why not get AI to protect call centre operators from disappointment, sadness, urgency, despair or even gratitude? What if it were decided that some regional accents were more threatening than others, and they were sandpapered away by algorithm without their owners knowing?

In an extensive series of essays published last week, Leopold Aschenbrenner, a former researcher at OpenAI who worked on protecting society from the technology, warned that while everyone was talking about AI, “few have the faintest glimmer of what is about to hit them”.

Our best strategy, in the face of all this, may be to remain as mad as hell.

leo.lewis@ft.com
