MWC 2025 Pedagogic Demonstrator: Facial Expressions and Generative AI

Context

This interactive AI demonstration is a prototype designed for educational and demonstrative purposes. It illustrates how artificial intelligence can recognize human expressions and translate them into artistic representations.

The ethical and legal assessment of this demonstrator has been a collaborative effort between the Computer Vision Center (CVC) and the Observatory for Ethics in Artificial Intelligence of Catalonia (OEIAC).

Regulatory Framework

The demonstration is not commercial and is presented at MWC 2025 solely for informational purposes. However, given the nature of the system, it may become subject to the obligations the European AI Act sets out for high-risk and limited-risk systems once those provisions come into effect. This does not mean the prototype is otherwise exempt from regulatory obligations: it remains subject to ALL other active regulations, such as the General Data Protection Regulation.

Data Privacy & Security
Ethical Considerations

Risk Assessment | PIO Model

The CVC team has assessed the risks of this demonstration using the PIO Model. The PIO Model is an assessment tool on ethical and legal uses of AI, developed by the Observatory for Ethics in Artificial Intelligence of Catalonia (OEIAC), that is harmonised with legislative requirements and current ethical standards and recommendations. It consists of a checklist on ethical uses, structured around seven principles and designed for the development, deployment and evaluation of AI data and systems.

The PIO Model allows for a pre-assessment according to the risk categories defined in the EU Artificial Intelligence Act (AI Act). As a first step, AI systems are classified according to whether they pose an unacceptable risk under the AI Act. Unacceptable risk refers to a very limited set of uses of AI that are particularly harmful and contravene EU values: they violate fundamental rights and are therefore prohibited.

Once this distinction on unacceptable risk has been made, the same pre-assessment classifies the remaining AI systems according to the AI Act as follows:

  • High Risk: This refers to a limited number of AI systems that may have an adverse impact on people's safety or on fundamental rights protected by the EU Charter of Fundamental Rights and are therefore considered high risk. 
  • Limited Risk: This refers to the risks associated with a lack of transparency in the use of AI and, consequently, the need for specific transparency obligations to ensure that humans are informed where necessary, thereby building trust. 
  • Minimal Risk: This refers to all other AI systems that can be developed and used following existing legislation without additional legal obligations. On a voluntary basis, providers of such systems may choose to apply ethical principles such as those of the PIO Model. 
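The tiered classification above can be sketched as a simple decision procedure. This is an illustrative simplification, not the actual PIO Model questionnaire: the inputs (`prohibited_use`, `affects_fundamental_rights`, `lacks_transparency`) are assumed placeholders for the checklist's outcomes.

```python
from enum import Enum

class RiskCategory(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited uses contravening EU values
    HIGH = "high"                  # adverse impact on safety or fundamental rights
    LIMITED = "limited"            # transparency obligations apply
    MINIMAL = "minimal"            # no additional legal obligations

def pre_assess(prohibited_use: bool,
               affects_fundamental_rights: bool,
               lacks_transparency: bool) -> RiskCategory:
    # Order matters: unacceptable risk is ruled out first,
    # mirroring the two-step pre-assessment described above.
    if prohibited_use:
        return RiskCategory.UNACCEPTABLE
    if affects_fundamental_rights:
        return RiskCategory.HIGH
    if lacks_transparency:
        return RiskCategory.LIMITED
    return RiskCategory.MINIMAL
```

For example, a system that is not prohibited and does not touch fundamental rights but whose AI-driven nature is not evident to users would land in the Limited Risk tier.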

The outcome of this pre-assessment, focused on the AI Act risk categories, gives users of the PIO Model a starting point for assessing some of the main legal requirements of the core EU AI regulation.

Results of our demonstration:

The following risk matrix is a tool to get a snapshot of the level of risk by severity (regulatory or AI Act) and the likelihood of it materialising.
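A severity-by-likelihood risk matrix of this kind can be sketched as a small lookup table. The 3x3 grid and its labels below are illustrative assumptions, not the matrix actually used in the assessment:

```python
# Rows: likelihood (0 = low .. 2 = high); columns: severity (0 = low .. 2 = high).
RISK_MATRIX = [
    ["low",    "low",    "medium"],
    ["low",    "medium", "high"],
    ["medium", "high",   "high"],
]

def risk_level(likelihood: int, severity: int) -> str:
    """Return the combined risk level for a likelihood/severity pair."""
    return RISK_MATRIX[likelihood][severity]
```

For instance, a high-severity risk that is unlikely to materialise maps to "medium", while one that is both severe and likely maps to "high".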

Parts of the demo & algorithms

The demonstration consists of four key stages: 

Data

Understanding the data sources and limitations of AI models is crucial for transparency. Below, we outline the data used in each component of this demonstration:

Authorship

Who is the Author of the Image?

Under current regulation, only a natural or legal person can be the author. Since the demonstration was developed by a CVC team, the results are CVC's responsibility.

Who Created the Image? 

The AI-generated image is the result of a collaborative process: 

  • The user provides the initial input (their facial expression). 
  • The AI model translates this input into a visual representation. 
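The two-stage process above can be sketched in code. The function names, expression labels, and prompt wording below are hypothetical placeholders, not the actual CVC implementation:

```python
# Hypothetical stand-ins for the demo's two stages.
EXPRESSIONS = ["neutral", "happy", "sad", "surprised", "angry"]

def recognize_expression(frame) -> str:
    """Stage 1: classify the facial expression in a camera frame.
    A real system would run a trained vision model here; this stub
    returns a fixed label so the sketch stays self-contained."""
    return "happy"

def generate_artwork(expression: str) -> str:
    """Stage 2: turn the recognized expression into a prompt for a
    generative image model. The real system would feed this prompt to
    a text-to-image model; here we only build the prompt."""
    return f"an artistic portrait conveying a {expression} expression"

# End-to-end: user input (frame) -> expression label -> art prompt.
prompt = generate_artwork(recognize_expression(frame=None))
```

The split into recognition and generation reflects the collaborative nature of the output: the user supplies the expressive input, and the model determines how it is rendered.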

AI-Generated Content & Copyright 

The legal status of AI-generated images is still evolving. However, there is directly applicable regulation in Directive (EU) 2019/790 on Copyright, Article 3. 

AI Act-related considerations: 

  • AI Act, Recital 105: refers to the Directive and underlines the need for authorisation from rights holders, unless one of the exceptions of Article 3 applies. This matters where the data used for training is protected by authorship rights. 
  • AI Act, Recital 107: recommends transparency to safeguard the guarantees of authors. Official forms have been announced by the European Commission but were not yet available at the time of writing. 

Other ethical considerations: 

  • The AI does not claim creativity; it simply recombines existing patterns. 
  • AI-generated images should be viewed as interpretations rather than factual representations of facial expressions.  

Final Thought: 

This demonstration encourages reflection on how AI interprets human expressions and how society defines creativity and authorship in the age of artificial intelligence. 

