How AI Helps Recruiters Track Jobseekers’ Emotions

in #ai • 6 years ago

Developers claim technology can overcome bias, but questions remain over data privacy
By Patricia Nilsson

Facial recognition technology allows us to pay for lunch, unlock a phone — it can even get us arrested. Now, that technology is moving on: algorithms are not only learning to recognise who we are, but also what we feel.

So-called emotion recognition technology is in its infancy. But artificial intelligence companies claim it has the power to transform recruitment.

Their algorithms, they say, can decipher how enthusiastic, bored or honest a job applicant may be — and help employers weed out candidates with undesirable characteristics. Employers, including Unilever, are already beginning to use the technology.

London-based Human, founded in 2016, is a start-up that analyses video-based job applications. The company claims it can spot the emotional expressions of prospective candidates and match them with personality traits — information its algorithms collect by deciphering subliminal facial expressions when the applicant answers questions.

“Emotion recognition technology helps employers … shortlist people they may not have considered” — Yi Xu, chief executive, Human

Human sends a report to the recruiter detailing candidates’ emotional reactions to each interview question, with scores against characteristics that specify how “honest” or “passionate” an applicant is.

“If [the recruiter] says, ‘We are looking for the most curious candidate,’ they can find that person by comparing the candidates’ scores,” says Yi Xu, Human’s founder and chief executive.
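As a rough illustration of what that comparison amounts to in practice, the sketch below ranks candidates by a single trait score. The report structure, field names and numbers are invented for this example; they are not Human’s actual output.

```python
# Hypothetical per-candidate reports with trait scores between 0 and 1.
# Field names and values are invented purely for illustration.
reports = [
    {"candidate": "A", "curiosity": 0.72, "honesty": 0.81, "passion": 0.64},
    {"candidate": "B", "curiosity": 0.88, "honesty": 0.69, "passion": 0.77},
    {"candidate": "C", "curiosity": 0.55, "honesty": 0.90, "passion": 0.58},
]

# "We are looking for the most curious candidate": sort on that trait
# and shortlist the top scorers.
shortlist = sorted(reports, key=lambda r: r["curiosity"], reverse=True)[:2]
for r in shortlist:
    print(r["candidate"], r["curiosity"])
```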

Recruiters can still assess candidates at interview in the conventional way, but there is a limit to how many people they can meet or how many video applications they can watch. Ms Xu says her company’s emotion recognition technology helps employers screen a larger pool of candidates and shortlist people they might not otherwise have considered.

“An interviewer will have bias, but [with technology] they don’t judge the face but the personality of the applicant,” she says. One aim, she claims, is to overcome ethnic and gender discrimination in recruitment.

Facial recognition technology: how does it work?

In the 1970s, US psychologists Paul Ekman and Wallace V Friesen developed a taxonomy of human emotion called the Facial Action Coding System. Using the system, a Facs specialist can detect whether a smile is sincere or not simply by analysing a photograph. Artificial emotional intelligence is taught to read facial expressions in a similar way.
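To make the idea concrete, here is a loose sketch of FACS-style coding. It relies only on the widely cited rule that a sincere (Duchenne) smile combines the cheek raiser (AU6) with the lip-corner puller (AU12), while a posed smile often shows AU12 alone; real coders work from far more detail than this.

```python
# Simplified sketch: an expression is a set of numbered FACS action units
# (AUs) observed in a face image.
ACTION_UNITS = {
    6: "cheek raiser (orbicularis oculi)",
    12: "lip corner puller (zygomatic major)",
}

def smile_type(active_aus: set) -> str:
    """Classify a smile from its action units (heavily simplified)."""
    if {6, 12} <= active_aus:
        return "Duchenne (likely sincere) smile"
    if 12 in active_aus:
        return "non-Duchenne (possibly posed) smile"
    return "no smile detected"

print(smile_type({6, 12}))  # Duchenne (likely sincere) smile
print(smile_type({12}))     # non-Duchenne (possibly posed) smile
```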

The algorithms of Affectiva and Human are based at least partially on Facs. A specialist first labels the emotions of hundreds or thousands of images (videos are analysed frame by frame), before letting an algorithm process them — the training phase.
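In code, that labelled training set might look something like the sketch below, with each video frame paired with the emotion a specialist assigned to it. The file names and labels are made up for illustration.

```python
# Illustrative training data: each frame of an interview video is paired
# with the emotion label a FACS specialist assigned to it.
labelled_frames = [
    ("applicant_01_frame_0001.png", "neutral"),
    ("applicant_01_frame_0002.png", "happiness"),
    ("applicant_01_frame_0003.png", "surprise"),
    # ... hundreds or thousands more labelled frames
]

# Text labels are usually mapped to integer class indices before training.
classes = sorted({label for _, label in labelled_frames})
class_index = {label: i for i, label in enumerate(classes)}
y = [class_index[label] for _, label in labelled_frames]
print(class_index, y)
```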

During training, the algorithm is watched to see how closely it predicts emotions compared with the manual labelling done by the Facs specialist. Errors are taken into account and the model adjusts itself. The process is repeated with other labelled images until the error is minimised.
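The loop itself can be sketched with a toy model: the features and labels below are invented stand-ins for real image data, and the model is a bare-bones logistic regression rather than the deep networks these companies actually use. The structure is the same, though — predict, compare with the specialist’s label, adjust, repeat — and the final two lines show the trained model applied to inputs it has never seen, which is the prediction step described next.

```python
import math

# Invented stand-ins for image features (e.g. mouth-corner lift, eye
# narrowing) and specialist labels (1 = "happiness", 0 = "neutral").
data = [((0.9, 0.8), 1), ((0.7, 0.9), 1), ((0.1, 0.2), 0), ((0.2, 0.1), 0)]

w = [0.0, 0.0]   # model weights, adjusted during training
b = 0.0
lr = 0.5         # learning rate: how strongly each error nudges the model

def predict(x):
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1 / (1 + math.exp(-z))        # probability of "happiness"

# Repeat: predict, compare with the specialist's label, adjust the model.
for epoch in range(200):
    for x, label in data:
        error = predict(x) - label       # how far off the prediction was
        w[0] -= lr * error * x[0]
        w[1] -= lr * error * x[1]
        b -= lr * error

print(round(predict((0.85, 0.90)), 2))   # close to 1: predicted "happiness"
print(round(predict((0.15, 0.10)), 2))   # close to 0: predicted "neutral"
```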

Once the training is done, the algorithm can be introduced to images it has never seen and it makes predictions based on its training.

Frederike Kaltheuner, policy adviser on data innovation at Privacy International, a global campaigning organisation, agrees that human interviewers can be biased. But she says: “New systems bring new problems.”

The biggest problem is privacy, and what happens to the data after it is analysed. Ailidh Callander, a legal officer at Privacy International, says it is unclear whether data used to train emotion recognition algorithms — such as that collected during video-based job interviews — count as “personal”, and whether data privacy legislation applies.

If such data does not count as “personal”, processing by AI companies in Europe may not be covered by GDPR — the EU-wide legislation to protect data privacy that comes into force in May 2018.

Paul Ekman, who developed Facs (see box above) and now runs the Paul Ekman Group, which trains emotion recognition specialists, says reliable artificial emotional intelligence based on his methods is possible.

But he adds: “No one has ever published research that shows automated systems are accurate.”

Mr Ekman says even if artificial emotional intelligence were possible, the people interpreting the data — in this case employers — should also be trained to properly decipher the results. “Facs can tell you someone is pressing their lips together, but this can mean different things depending on culture or context,” he says.

People also differ in their ability to manipulate their emotions to trick the system, and Mr Ekman says: “If people know they are being observed they change their behaviour.” Candidates who are told their emotions will be analysed become self-conscious.

Unilever saved 50,000 hours of work over 18 months after it moved much of its recruitment process online and began using a video-based interviewing platform, according to HireVue, which provides the platform. HireVue integrated emotion recognition technology into the service about two years ago and sells it to Unilever. Unilever did not respond to questions.

“[Recruiters] get to spend time with the best people instead of those who got through the resume screening,” says Loren Larsen, chief technology officer of HireVue, based in Salt Lake City. “You don’t knock out the right person because they went to the wrong school.”

Human first trained its algorithms on publicly available images and videos before turning to what Ms Xu calls its “proprietary data” — the videos its clients send. HireVue has not developed its own emotion recognition algorithms but taps into the emotion database of Affectiva, a leading emotion recognition company that also works in market research and advertising.
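One common way to implement that two-stage approach — and only an assumption about how Human might do it — is to train a model on public data first and then continue training on client data. The sketch below uses scikit-learn’s incremental SGDClassifier on random placeholder features to show the shape of the process, not any real pipeline.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

# Placeholder feature vectors standing in for faces; random numbers are
# used purely to make the two-stage structure concrete.
X_public, y_public = rng.normal(size=(500, 16)), rng.integers(0, 3, 500)
X_client, y_client = rng.normal(size=(100, 16)), rng.integers(0, 3, 100)

model = SGDClassifier(random_state=0)

# Stage 1: initial training on publicly available, labelled images.
model.partial_fit(X_public, y_public, classes=np.array([0, 1, 2]))

# Stage 2: continue training on the videos clients send in.
model.partial_fit(X_client, y_client)

print(model.predict(X_client[:3]))
```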

Gabi Zijderveld, chief marketing officer at Affectiva, says employers could use artificial emotional intelligence in several ways. One example is coaching for employees hoping to deliver better presentations or sales pitches. Another could be tracking the wellbeing of workers to detect burnout or depression, something Ms Zijderveld considers a valid use case for the company’s technology. But “mood tracking is scary”, she adds.

These AI companies all say it is up to employers to be transparent about how they use the technology and the data it gathers. But none of them checks whether they are. “That decision is not up to us,” Ms Zijderveld says. “We create the technology.”

“You don’t think, ‘How are people going to misuse my product?’,” Ms Xu says, “but we are not naive enough to think [this technology] can only do good.”

Copyright The Financial Times Limited 2018

