The ethical issues and limitations of AI technologies to detect emotional states – Dr. Sergio Escalera at 324.cat
The CVC researcher, Sergio Escalera, has been interviewed by Xavier Duran for 324.cat. In the article, Sergio explained the limitations and the ethical issues associated with the use of artificial intelligence to detect emotional states.
The use of artificial intelligence to recognize emotional patterns is still an emerging field. This technology, applied in areas such as personnel selection and security, needs to be evaluated both for its reliability and for the ethics of its application. As Dr. Escalera explained to 324.cat: "Artificial intelligence techniques have come a long way in recent years. However, they still have limitations, for example in facial recognition."
Determining a person's emotional state is difficult: a number of psychological variables must be examined. The technology must therefore be developed with the expertise of other professionals, such as psychologists and neurologists.
This technology should not be used for decisions that affect people's lives, equal opportunities or, ultimately, human rights, because these systems are not sufficiently accurate and may lead to biased decisions. Algorithms do not think or feel, but the people who build them have their own biases and, consciously or unconsciously, can transmit them to the mathematical formulas. Therefore, in Sergio's view, AI systems can serve as a support, but they should not be the ones making the decision.
In spite of these concerns, such technologies can be useful in certain cases. At CVC we have interdisciplinary projects that apply them, in joint work with various professionals, to neurorehabilitation, rehabilitation and sports performance, prevention and automatic detection of risks (such as falls), diagnostic support for mental illnesses, and active aging, among others.
Nevertheless, Sergio considers that we must carefully study how these systems are developed and how they are applied: "We are a long way from being able to analyze complex mental states. We must study the context well and ask ourselves questions: whether the potential benefit outweighs the risk, whether it really means an improvement, whether it helps professionals and to what extent, whether it discriminates, and we must always use it with knowledge and prudence."
Full article: Tecnologia per detectar emocions, entre els límits tècnics i els dubtes ètics (ccma.cat) (in Catalan)