
In the media


The ethical issues and limitations of AI technologies to detect emotional states – Dr. Sergio Escalera at 324.cat


CVC researcher Dr. Sergio Escalera has been interviewed by Xavier Duran for 324.cat. In the article, he explained the limitations and ethical issues associated with using artificial intelligence to detect emotional states.

The use of artificial intelligence to recognise emotional patterns is a technology that is still exploring its possibilities. Applied in fields such as personnel selection and security, it needs to be evaluated both in terms of whether it is sufficiently reliable and in terms of its ethical application. As Dr. Escalera explained to 324.cat: “Artificial intelligence techniques have come a long way in recent years. However, they still have limitations, for example in facial recognition.”

Determining a person's emotional state is difficult: it requires examining a number of psychological variables. The technology must therefore be developed with the expertise of other professionals, such as psychologists and neurologists.

These technologies should not be used for decisions that affect people's lives, equal opportunities or, ultimately, human rights, because they are not sufficiently accurate and may lead to biased decisions. Algorithms do not think or feel, but the people who build them have their own biases and can, consciously or unconsciously, transmit them to the mathematical formulas. In Dr. Escalera's view, AI systems can therefore serve as a support, but should not be the ones making the decisions.

In spite of these concerns, the technologies can be useful in certain cases. At CVC we have interdisciplinary projects that apply them to neurorehabilitation, rehabilitation and sports performance, prevention and automatic detection of risks (such as falls), diagnostic support for mental illnesses and active ageing, among others, in joint work with professionals from various fields.

Nevertheless, Dr. Escalera considers that we must study carefully how these systems are developed and how they are applied: “We are a long way from being able to analyse complex mental states. We must study the context well and ask ourselves questions: whether the risk or the potential benefit is greater, whether it really means an improvement, whether it helps professionals and to what extent, and whether it discriminates; and we must always use it with knowledge and prudence”.

Full article: Tecnologia per detectar emocions, entre els límits tècnics i els dubtes ètics (ccma.cat) (in Catalan)


Journey through the history of TV3 thanks to Computer Vision


The ViVIM (Computer Vision for Multi-Platform Immersive Video) project, with the participation of the Computer Vision Center (CVC), has developed an immersive tool that offers a unique experience: travelling in a virtual elevator that transports users to the most iconic moments in the history of TV3.

The elevator travels from 1983 to 2020, with each floor representing a year. Users simply choose a year and the elevator takes them there. Once the destination is reached, they can choose among different screens showing the most emblematic shows and broadcasts of TV3 in that particular year.

The experience covers the history of TV3 from its launch in 1983 up to the present day.

The ViVIM project is a Catalan initiative, funded by ACCIÓ, within the action plan of the Ris3CAT Media Community promoted by the Generalitat de Catalunya, and its consortium is formed by I2CAT, which coordinates the project, the CVC, the Catalan Audiovisual Media Corporation (CCMA), Vysion and Eurecat.

You can learn more about ViVIM here.



How are deepfakes created? Why can they be dangerous? – Dr. Dimosthenis Karatzas at “el Periodico” and “Maldita Twitchería”


Following on from the debate “Fake news and Deepfakes: surviving an invented reality”, organized by the CVC and Fundación ”la Caixa”, Dr. Dimosthenis Karatzas, deputy director of the Computer Vision Center (CVC), has explained to various media outlets the problems associated with deepfakes.

Specifically, he was interviewed by Michele Catanzaro for “el Periodico”, where he explained what a deepfake is and how to combat them using Artificial Intelligence, and by Naiara Bellio for “Maldita Twitchería”, where he expanded on how different types of deepfakes are created.

You can read the article in “el Periodico” here (in Spanish) and watch the full “Maldita Twitchería” program here.


New results from the STOP project (Suicide Prevention in Social Platforms)


Suicide is a public health problem of the first order in Spain, where, according to data from the National Statistics Institute (INE), more than 3,000 people die by suicide each year.

It is therefore fundamental to find new ways to prevent it, and Artificial Intelligence could help. With this objective, researchers from the Computer Vision Center (CVC), Pompeu Fabra University (UPF), the Autonomous University of Barcelona (UAB) and Parc Taulí Hospital are working on the STOP project (Suicide Prevention in Social Platforms), which aims to detect patterns of suicidal behaviour on social media.

As a consequence of this research, they recently published a study in the Journal of Medical Internet Research in which they analysed texts, images and activity on Twitter with Artificial Intelligence. The results were very promising: they were able to detect patterns of suicidal behaviour with an accuracy of 85%.

“Analysing publications on Twitter in a completely anonymous way, we found that there could be a correlation between the content of the images shared on this social network and the mental health of the users who published them”, explained Dr. Jordi Gonzàlez, CVC researcher.
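
As a rough illustration of the kind of multimodal analysis described above, the sketch below fuses text, image and activity features into a single classifier. It is a minimal toy example on synthetic data: the feature names, their dimensions and the logistic-regression model are assumptions made for illustration, not the pipeline published by the STOP team.

```python
# Toy sketch of early multimodal fusion: concatenate per-user text, image and
# activity features and train a single classifier. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_users = 200

text_features = rng.normal(size=(n_users, 50))      # e.g. embeddings of a user's tweets (assumed)
image_features = rng.normal(size=(n_users, 32))     # e.g. visual features of shared images (assumed)
activity_features = rng.normal(size=(n_users, 8))   # e.g. posting frequency, time of day (assumed)
labels = rng.integers(0, 2, size=n_users)           # synthetic labels, NOT real clinical data

# Early fusion: one concatenated feature vector per user.
X = np.concatenate([text_features, image_features, activity_features], axis=1)
X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.25, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("accuracy on synthetic data:", accuracy_score(y_test, clf.predict(X_test)))
```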

The STOP project is coordinated by Dr. Ana Freire, a UPF researcher.


STOP in the media:

Un algoritme per prevenir suïcidis a les xarxes socials – 324.cat

Según la OMS, cada 40 segundos hay un suicidio en el mundo, en su mayoría por personas entre 15 y 29 años – Caretas

Aplican la Inteligencia Artificial para detectar conductas suicidas en la red – La Vanguardia

Aplican la Inteligencia Artificial para detectar conductas suicidas en la red – El Diario

Ayuda de la Inteligencia Artificial para detectar conductas suicidas en la red – El Correo Gallego

Esta inteligencia artificial ayudaría a detectar con un 85% de precisión los comportamientos suicidas en Twitter – 20 Minutos

La inteligencia artificial busca patrones de conducta suicida en Twitter – El Comercio

IA podría identificar comportamiento suicida – Parada Visual

La inteligencia artificial para detectar patrones suicidas en redes sociales – Entorno Inteligente


Reference: Ramírez-Cifuentes D, Freire A, Baeza-Yates R, Puntí J, Medina-Bravo P, Velazquez DA, Gonfaus JM, Gonzàlez J. Detection of Suicidal Ideation on Social Media: Multimodal, Relational, and Behavioral Analysis. J Med Internet Res 2020;22(7):e17758. doi: 10.2196/17758. PMID: 32673256.


“The automobile of the future” – Dr. Antonio López and Rubén Prados at Vallès Visió


Dr. Antonio López, CVC researcher, and Rubén Prados, project manager of the CVC project “Public Transport with Autonomous Vehicles in a Rural Environment”, were interviewed by Alba Castilla on the program Visions (Vallès Visió) to talk about the vehicles and mobility of the future.

The automotive industry is still exploring the possibilities of autonomous driving, and there is still a long way to go before it becomes part of our day-to-day lives. So far, only limited semi-autonomous aids, such as speed or safety-distance controls, are commercially available. A fully autonomous car poses much more complex challenges.

“We are testing autonomous vehicles, but they always follow the same route, without incidents, obstacles or people along the way. We do not yet know how they will behave on public roads”, explained Rubén Prados.

To gain this knowledge, they are working in two directions. On the one hand, there is the automated system that takes into account the different variables involved in driving a vehicle autonomously. On the other hand, the system is fed with tests that people carry out in a simulator, similar to a video game, so that it also learns about human behaviour. “You can develop an Artificial Intelligence that forces the cases of most interest, trying to make the autonomous driving system fail. So you have a kind of game between a system that tries to work well and another that tries to make it fail”, explains Dr. Antonio López.
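
As a rough illustration of the “game” Dr. López describes, the toy sketch below lets an adversary propose driving scenarios while a highly simplified braking model tries to handle them, keeping the scenarios where the model fails. The scenario parameters and the braking model are illustrative assumptions, not the CVC simulator or its actual search strategy.

```python
# Toy sketch of adversarial scenario search: one component proposes scenarios,
# the other (a simplified braking model) tries to cope; failures are collected.
import random

def driving_policy_stops_in_time(speed_kmh, obstacle_distance_m, friction):
    # Simplified physics: 0.5 s reaction time, then constant deceleration.
    speed_ms = speed_kmh / 3.6
    reaction_distance = speed_ms * 0.5
    braking_distance = speed_ms ** 2 / (2 * 9.81 * friction)
    return reaction_distance + braking_distance <= obstacle_distance_m

def adversary_sample_scenario():
    # The "adversary" here just samples scenarios at random; a stronger one
    # would steer the search towards previously found failures.
    return {
        "speed_kmh": random.uniform(30, 120),
        "obstacle_distance_m": random.uniform(5, 80),
        "friction": random.uniform(0.2, 0.9),   # wet vs. dry road (assumed range)
    }

random.seed(0)
failures = [s for s in (adversary_sample_scenario() for _ in range(10_000))
            if not driving_policy_stops_in_time(**s)]
print(f"found {len(failures)} failure scenarios out of 10000")
if failures:
    print("example failure:", failures[0])
```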

To make this technology applicable on a day-to-day basis, public roads would also need to be adapted. In this sense, Smart Cities will play a key role.

You can watch the interviews here (in Catalan)

You can watch the full program here (in Catalan)
