Can machines understand how people are?

A preliminary answer is yes. The issue is controversial: amid the current enthusiasm for artificial intelligence, much of the public is immersed in a sea of technological, moral, and psychological doubts that are hard to weigh when it comes to behavior and personality. Technology and scientific research can yield developments that change things, make them easier, or reveal knowledge that was previously out of reach; used properly, they can give us valuable information and solutions (we call these “applications for good”). What if we could be genuinely more neutral in personnel selection? Could situational behavior be predicted, for example in the case of violent individuals? What if we gained a better perception of our own personality? Artificial intelligence (AI) can carry large biases, but it can also provide a great deal of information if used correctly.

Image from Pixabay: will AI blend with human personality?

Finally, a world free of human influence? One where no one assigns unfair labels? Answering these questions properly could take days, and it calls for both a mathematical and a psychological explanation. The use of statistics in psychology is common; mathematical modeling is not, and the combination of the two remains little explored.

That is why two research groups have joined forces: the groups of Drs. Sergio Escalera and David Gallardo-Pujol are working as a team to find answers to these questions, which remain unknown to many.

Dr. Sergio Escalera is a Full Professor at the Department of Mathematics and Computer Science of the University of Barcelona, head of Informatics, and leads the Human Pose Recovery and Behavior Analysis group. He is an ICREA Academia awardee, a Fellow of the European Laboratory for Learning and Intelligent Systems (ELLIS), and an adjunct professor at the Universitat Oberta de Catalunya, Dalhousie University, and Aalborg University. Dr. David Gallardo-Pujol is an Associate Professor of Differential Psychology at the University of Barcelona, Research Dean at the UB Faculty of Psychology, and Secretary-General of the European Association for Psychological Assessment. He specializes in individual differences and in research on predicting specific behaviors in specific situations from biological, psychological, and social data on human behavior; he also works on personality assessment, quantitative methods, and behavioral genetics, and he won the 2020 Antoni Caparrós Prize of the Social Council of the UB and the Bosch i Gimpera-UB Foundation. On the psychological side, Georgina Guilera and David Leiva also played a role.

Both groups, with the invaluable help of Cristina Palmero (PhD student in human behavior understanding), without whom this project would not have been possible, have spent many months researching the visual analysis of human behavior in dyadic and small-group interactions, using a large-scale, multimodal, and multiview dataset (UDIVA). Their goal is the automatic recognition of the personality of an individual (i.e., a “target” person) during dyadic interaction. They plan to exploit contextual information (e.g., information about the person they are interacting with, their relationship, the difficulty of the activity, etc.) to solve the problem and obtain automatic personality scores. The first technical publications have already appeared, and they have launched an international competition, with their data made available, to help solve these problems.

Proposed model to infer personality from multimodal data. From Palmero et al. (2020)

The first track addresses personality recognition. Audiovisual data for this track, together with self-reported Big-Five personality labels (and metadata such as gender, type of interaction when known, etc.), are now available and ready to use. Statement-level transcripts will also be provided so that verbal communication can be exploited. The aim is to obtain highly accurate scores for the participants' personality.
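To make the first track concrete, below is a minimal sketch of how predicted Big-Five scores might be compared against the self-reported labels, assuming a per-trait mean squared error as the accuracy measure; the official challenge metric, score ranges, and all numbers shown here are illustrative assumptions, not the organizers' specification.

```python
import numpy as np

# Big-Five (OCEAN) traits used for the self-reported labels.
TRAITS = ["openness", "conscientiousness", "extraversion",
          "agreeableness", "neuroticism"]

def per_trait_mse(predicted: np.ndarray, ground_truth: np.ndarray) -> dict:
    """Mean squared error per trait over a set of target participants.

    predicted, ground_truth: arrays of shape (num_participants, 5),
    one column per OCEAN trait.
    """
    errors = (predicted - ground_truth) ** 2
    return {trait: float(errors[:, i].mean()) for i, trait in enumerate(TRAITS)}

# Hypothetical example: self-reported vs. predicted scores for three participants.
y_true = np.array([[3.2, 4.1, 2.8, 3.9, 2.5],
                   [2.7, 3.3, 4.0, 3.1, 3.6],
                   [4.4, 2.9, 3.5, 4.2, 2.2]])
y_pred = np.array([[3.0, 4.0, 3.1, 3.7, 2.9],
                   [2.9, 3.5, 3.8, 3.3, 3.4],
                   [4.1, 3.2, 3.6, 4.0, 2.6]])

print(per_trait_mse(y_pred, y_true))
```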

The second track focuses on behavior forecasting: the goal is to estimate the future (e.g., up to N frames ahead) 2D facial landmarks, hand, and upper-body pose of a target individual in a dyadic interaction, given an observed time window of both interlocutors from their two individual views. Participants are expected to exploit contextual information that may affect the individuals' behavior. The labels for this track are generated automatically, i.e., they are treated as soft labels, obtained with state-of-the-art methods for estimating face, hand, and upper-body pose in 3D. The training data may therefore contain noisy labels due to small errors in these estimation methods; however, the automatically obtained annotations in the validation and test sets will be manually cleaned and corrected in order to provide a fair assessment.
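As a purely illustrative sketch of the forecasting setup (not the challenge's official baseline or the authors' method), one could extrapolate the observed 2D landmarks of the target person with a naive constant-velocity model; the array shapes and helper name below are assumptions made for the example.

```python
import numpy as np

def constant_velocity_forecast(observed: np.ndarray, n_future: int) -> np.ndarray:
    """Naive baseline: extrapolate each 2D landmark linearly into the future.

    observed: array of shape (T_obs, num_landmarks, 2) with the observed
              landmark trajectories of the target person.
    n_future: number of future frames to predict.
    Returns an array of shape (n_future, num_landmarks, 2).
    """
    if observed.shape[0] < 2:
        # With a single observed frame, fall back to repeating it.
        velocity = np.zeros_like(observed[-1])
    else:
        # Average per-frame displacement over the observed window.
        velocity = (observed[-1] - observed[0]) / (observed.shape[0] - 1)
    steps = np.arange(1, n_future + 1).reshape(-1, 1, 1)
    return observed[-1] + steps * velocity

# Hypothetical example: 10 observed frames of 68 facial landmarks.
rng = np.random.default_rng(0)
observed = rng.random((10, 68, 2))
future = constant_velocity_forecast(observed, n_future=5)
print(future.shape)  # (5, 68, 2)
```

A learned model exploiting both interlocutors and contextual metadata would be expected to beat such a baseline; the sketch only fixes the input/output shapes of the task.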

Examples of the tasks included in the UDIVA dataset.

We take this opportunity to share what Dr. Sergio Escalera has already announced: the ChaLearn challenges at #ICCV2021 on personality recognition and behavior forecasting are now published on #CodaLab. Join us to push the boundaries of the automatic understanding of social interactions.

The associated workshop **DYAD @ ICCV2021: Understanding Social Behavior in Dyadic and Small Group Interactions** is now accepting papers:

Proceedings in #PMLR: deadline November 15th.

Challenge website: https://bit.ly/3v5vKqP

Workshop website: https://bit.ly/3eX4Bkm

#CV #ML #computervision #machinelearning #deeplearning #affectivecomputing #socialsignalprocessing #CVPR2021 #CallForPapers #socialcomputing #behavioralscience #HuPBA

Computer Vision Center University of Barcelona CHALEARN

IDLab – Individual differences