AI interviewers are too easy to deceive! Just add a bookshelf background

 


Source | Quantum Bit (ID: QbitAI) Author | Xiao Xiaodang

Sitting in front of you is an AI interviewer. How can you raise its opinion of you? Just add a (virtual) bookshelf background behind yourself. Like this:


That's right, no other change is needed. Simply by swapping the background, the AI interviewer's favorability toward you jumps by 15% in one go! What is going on? What exactly do AI interviewers like? This AI interviewer comes from a startup in Munich, Germany. According to the developer, it can quickly produce a "Big Five" personality assessment by analyzing a candidate's voice, language, gestures, and facial expressions.


The Big Five dimensions, known by the acronym OCEAN, are: Openness to experience (O), Conscientiousness (C), Extraversion (E), Agreeableness (A), and Neuroticism (N). Higher scores on the first four are considered better; a lower score on the fifth is better (less easily emotional). The investigators first recruited a woman and used her self-introduction video to test the AI interviewer.
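As a rough illustration of the "higher is better, except Neuroticism" convention described above, here is a toy favorability score. Everything in this sketch is an assumption: the function, weights, and numbers are invented for illustration and are not the startup's actual (non-public) scoring model.

```python
# Toy Big Five (OCEAN) favorability score. Purely illustrative:
# the real AI interviewer's scoring model is not public.

def favorability(scores):
    """Combine OCEAN scores (0-100 each): higher O/C/E/A is better,
    lower N (Neuroticism) is better, so N is counted inverted."""
    return (scores["O"] + scores["C"] + scores["E"] + scores["A"]
            + (100 - scores["N"]))

# Hypothetical numbers echoing the article's hat experiment:
baseline = {"O": 60, "C": 70, "E": 55, "A": 65, "N": 40}
with_hat = {"O": 68, "C": 74, "E": 55, "A": 65, "N": 30}  # more open, more conscientious, less emotional

print(favorability(baseline))  # 310
print(favorability(with_hat))  # 332
```

Under this toy convention, any change that raises O/C/E/A or lowers N raises the overall score, which is why "less easily emotional" counts as an improvement.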


The following are the scores from her unmodified interview video:


So, what if she wears glasses?


Something strange happened: the AI interviewer decided that once she put on glasses, her Conscientiousness score dropped by 10 points!


So, try putting on a hat?


AI really does seem to like her in a hat. In the AI interviewer's eyes, the woman now appears more open, more conscientious, and less likely to get emotional.


Interesting. Next, the investigators had another "candidate" test the AI interviewer. First, they added a bookshelf:

The AI interviewer immediately approved, giving a score almost 15% higher.

The tester then hung a picture on the wall behind them.

The AI interviewer apparently likes backgrounds with pictures too, and again awarded a higher score.

Moreover, even the brightness of the lighting can sway the AI interviewer's mood.

If the video is darker, the AI interviewer gives a lower score.
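How could brightness alone change a score? One plausible mechanism, purely an assumption since the startup's feature pipeline is not public, is that low-level image statistics such as mean pixel brightness leak into the model's inputs, so dimming the video shifts the prediction even though the candidate has not changed. A minimal sketch:

```python
# Sketch of brightness acting as a confounding feature. Illustrative only;
# the real model's features and weights are unknown.

def mean_brightness(frame):
    """Average pixel value of a grayscale frame (list of rows, values 0-255)."""
    pixels = [p for row in frame for p in row]
    return sum(pixels) / len(pixels)

def toy_score(frame, base=50.0, weight=0.1):
    # Hypothetical: score drifts with brightness, a confound rather than a signal.
    return base + weight * mean_brightness(frame)

bright = [[200, 210], [190, 200]]
dim = [[p // 2 for p in row] for row in bright]  # same scene, half the light

print(toy_score(bright) > toy_score(dim))  # True: darker video, lower score
```

The point of the sketch: nothing about the "candidate" changed between the two frames, yet the score moved, which matches the journalists' observation.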

Without changing facial expressions, voice, language, or even gestures, merely changing clothing, lighting, or the background is enough to alter the AI interviewer's judgment. This is the finding of an investigation into the startup by journalists at Bavarian Broadcasting (Bayerischer Rundfunk) in Germany.

The startup's developers say the AI model was trained on videos from more than 12,000 people of different ages, genders, and ethnic backgrounds. In addition, 2,500 people rated these videos on the OCEAN dimensions based on their own impressions of the personalities shown.
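A training setup like the one described, videos paired with ratings from thousands of human evaluators, is commonly turned into regression targets by averaging the raters' scores per video. The sketch below assumes that common approach; the startup's actual procedure is not public, and all names and numbers here are invented.

```python
from statistics import mean

# Hypothetical rater data: each video receives several human OCEAN ratings,
# and the per-dimension mean becomes the training target for that video.
ratings = {
    "video_001": [
        {"O": 70, "C": 60, "E": 50, "A": 65, "N": 35},
        {"O": 74, "C": 58, "E": 54, "A": 63, "N": 33},
        {"O": 72, "C": 62, "E": 52, "A": 67, "N": 37},
    ],
}

def target_label(raters):
    """Average each OCEAN dimension across raters to form one label."""
    dims = raters[0].keys()
    return {d: mean(r[d] for r in raters) for d in dims}

print(target_label(ratings["video_001"]))
```

A model trained on such averaged labels inherits whatever systematic preferences the human raters had, which is relevant to the bias question raised later in the article.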

According to Retorio, this AI model reaches about 90% of the evaluation accuracy of human observers.

However, the survey results clearly do not square with the "fair and just" interviews this startup advertises. The startup responds that candidates can record their interview video many times and decide which version to submit, that video quality is taken into account, and that candidates can complete the whole process without pressure.

In fact, similar AI products already in use in the United States have drawn calls for corresponding oversight. For example, HireVue, a Utah-based company specializing in video-interview software, claims 700 companies as customers.

But its products have been questioned by AI experts, who argue that HireVue's software, algorithms, and interview results are not transparent. Julia Angwin, head of the American news outlet The Markup, for example, believes the claim that "facial features can affect work performance" has no scientific basis.

Recently, HireVue did indeed remove the "facial expression analysis" feature from its products, after an internal evaluation found no real relationship between "visual analysis results" and "work performance".

The company now plans to put more energy into its "voice analysis" feature. Meanwhile, "AI interviewers" face not only technical doubts: HR managers worry that using AI interviewers could reduce the number of applicants (nobody wants to apply to one) and raise concerns about company data protection.

Katharina Zweig, a computer science professor at the University of Tübingen, believes that AI is on the whole a great tool worth applying, but that using AI to evaluate human behavior is bound to run into difficulties.

If demand for AI interviewing really is high, then at the very least constraints should be placed on clothing, environment, and the many other irrelevant conditions, to keep these factors from misleading the AI's judgment. Some netizens found this AI's evaluations ridiculous. One covered his background, and even his face, with books and joked, "Okay, now I've been selected!"

But other netizens noted that in the real world, a bookshelf really is an impression bonus.

There is even a job for this: curating bookshelves for wealthy people who want to pass as "intellectuals", and filling them with books.

In addition, some netizens argued that for AI models, biased data and inadequate algorithms lead to unreliable results; until "true AI" arrives, "AI" is just a marketing term.

However, Retorio's AI model is in fact trained on the video-interview assessments of human interviewers, so where does the trained AI interviewer's "bias" come from? It's really hard to say. Reference link: https://web.br.de/interaktiv/ki-bewerbung/en/

https://twitter.com/hatr/status/1361756449802768387

