A lab at the library: triggering innovation in bottom-up processes

In a clear example of RRI, the Library Living Lab gives citizens the opportunity to experiment and interact with technology that is normally inaccessible to the general public. It follows the quadruple helix model, integrating business, public administration, university and neighbourhood residents in a pioneering living-lab experience.

The Library Living Lab is located at the Miquel Batllori Library, in the Volpelleres neighbourhood of Sant Cugat del Vallès (Barcelona). It is part of ENoLL (the European Network of Living Labs) and has been open to the public since its inauguration in October 2015. In the three years since, it has organised regular activities that seek to innovate in the way we access culture, technology and science as a whole, while giving citizens the lead in doing so.

Quoting a report from The World Bank in association with ENoLL: “Living Labs are user-driven innovation environments where users and producers co-create innovation in a trusted, open ecosystem that enables business and societal innovation. In essence, a Living Lab takes research and development out of the laboratory and into the real world, engaging stakeholders, citizens, and end-users in the collaborative design of new services”.

“The Library Living Lab is many things”, states Dr. Dimosthenis Karatzas, associate director of the Computer Vision Center and responsible for the project. Not only is it a bottom-up initiative, but a place “in which there is a real technology transfer to society, a tangible implementation of the quadruple helix model (https://ec.europa.eu/digital-single-market/en/open-innovation-20), a public space and an experimental prototype for the regional library network”.

Working in the spirit of Horizon 2020’s Science with and for Society (SwafS) programme, the Library Living Lab can be described as a place where challenges are identified and activities are then proposed to overcome them. “The Library Living Lab works in a very specific way”, explains Dr. Fernando Vilariño, CVC associate director and, together with Dr. Karatzas, responsible for the project. “We identify challenges and propose an activity involving all the actors working on that same issue. We then carry out the proposed activity and finally compile its output in order to analyse whether the proposal effectively solved the challenge initially identified”.

The library space has actively involved local residents. A clear example is the 3D group: using the 3D printer, neighbours began applying robotics to their creations, including a Formula 1 car, a drone and a recently finished catamaran (all at a reduced, 3D-printable scale). Activities at the library often revolve around culture, adding value to digitised collections such as Europeana. “We brought the library to the museum with a beautiful project called ‘The Library visits the Museum’. With it, we experimented with different ways of exploring galleries in other parts of the world, such as the Rijksmuseum in the Netherlands or the National Gallery in London. In fact, this activity takes place every other week, with great participation from the people who visit the library”, states Dr. Vilariño.

“The library is open to anyone: citizens or businesses, research groups or public administrations. We want to involve all societal actors and encourage activities and processes that work towards real-life challenges for all of us”, adds Dr. Karatzas.

Find more information on the library’s projects, activities and aims at their website: http://librarylivinglab.cvc.uab.cat/

Image credit: Photography Wallpaper created by Creativeart – Freepik.com

In the media:

Facial expression classification for deep pain analysis

Pain is a subjective emotion. It is widely understood that pain is not experienced or expressed in the same way by people from different backgrounds, professions or nationalities. How to measure pain remains an open question among doctors and healthcare professionals.

Methods can be invasive, such as brain imaging, or non-invasive: asking patients how much pain they feel (via questionnaire) or, the approach concerning our field, using computer vision to analyse facial expressions and thus infer pain. Researchers from CVC at the Universitat Autònoma de Barcelona and Aalborg University in Denmark have focused on the latter. Working together for more than a year, they have obtained remarkable accuracy, as reported in their joint paper ‘Deep Pain: Exploiting Long Short-Term Memory Networks for facial expression classification’, published in the IEEE Transactions on Cybernetics journal.

“We propose an automatic model to detect pain from facial expressions”, states Pau Rodríguez, first author of the paper and CVC PhD student at the Image Sequence Evaluation (ISE) Lab. “We can therefore predict pain in real time. We’ve used two deep learning models for this: the first one extracts facial features, and the second one, a recurrent deep learning model, learns the evolution in time of the frame-wise characteristics in order to predict that person’s expressions and thus be much more precise at the facial recognition of pain”.
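The second stage of this two-model idea can be sketched in a few lines. The toy LSTM cell below consumes a sequence of per-frame feature vectors (random stand-ins for the CNN features of the first stage) and keeps a running memory of the whole sequence; all sizes, names and weights are illustrative assumptions, not taken from the paper.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class TinyLSTM:
    """Minimal LSTM cell: input, forget, output gates plus candidate memory."""

    def __init__(self, n_in, n_hidden, seed=0):
        rng = random.Random(seed)
        # One small random weight matrix per gate (i, f, o) and candidate (c),
        # each acting on the concatenated [input, previous hidden state].
        self.W = {g: [[rng.uniform(-0.1, 0.1) for _ in range(n_in + n_hidden)]
                      for _ in range(n_hidden)]
                  for g in "ifoc"}
        self.n_hidden = n_hidden

    def step(self, x, h, c):
        z = x + h  # concatenate current frame features with previous state
        pre = {g: [sum(w * v for w, v in zip(row, z)) for row in self.W[g]]
               for g in "ifoc"}
        i = [sigmoid(v) for v in pre["i"]]
        f = [sigmoid(v) for v in pre["f"]]
        o = [sigmoid(v) for v in pre["o"]]
        cand = [math.tanh(v) for v in pre["c"]]
        c_new = [fj * cj + ij * gj for fj, cj, ij, gj in zip(f, c, i, cand)]
        h_new = [oj * math.tanh(cj) for oj, cj in zip(o, c_new)]
        return h_new, c_new

    def run(self, frames):
        h = [0.0] * self.n_hidden
        c = [0.0] * self.n_hidden
        for x in frames:          # one step per video frame
            h, c = self.step(x, h, c)
        return h                  # summary of the whole expression sequence

rng = random.Random(1)
lstm = TinyLSTM(n_in=4, n_hidden=3)
video = [[rng.uniform(-1, 1) for _ in range(4)] for _ in range(10)]
summary = lstm.run(video)
```

A linear readout on `summary` would then produce the pain score for the sequence; in the real system the per-frame vectors come from a trained CNN rather than a random generator.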

Pain measurement in context

The measurement of pain has been an unresolved issue among doctors for years. Journalist John Walsh, in an excellent article published in The Independent, offers a thorough analysis of how pain is measured. According to Walsh, one of the most widely used techniques is the McGill Pain Questionnaire, developed by Dr. Ronald Melzack and Dr. Warren Torgerson (McGill University, Montreal) in the 1970s. The questionnaire compiled the words patients had used to describe pain, classifying them into three categories: sensory (which included heat, pressure, “throbbing” or “pounding” sensations), affective (relating to emotional effects, such as “tiring”, “sickening”, “gruelling” or “frightful”) and evaluative (evocative of an experience, from “annoying” and “troublesome” to “horrible”, “unbearable” and “excruciating”).

Although the classification makes sense, as Walsh correctly points out, the words overlap and are easily interchangeable. It is undoubtedly a good attempt at escaping subjectivity, but one that does not actually achieve it: the intensity of each word still varies from person to person.

Along the same lines, the USA’s national initiative on pain control created another questionnaire, the Pain Quality Assessment Scale (PQAS). Here, patients “were asked to indicate, on a scale of 1 to 10, how “intense” – or “sharp”, “hot”, “dull”, “cold”, “sensitive”, “tender”, “itchy”, etc – their pain has been over the past week”. But, of course, the scale is only based on the experience of each individual. Again, the experience of a person who has endured great pain (a soldier, for example) is not the same as that of someone who has never been seriously injured.

In the quest for an objective method of measuring pain, neuroscience has also had its say. Professor Irene Tracey, head of the University of Oxford’s Nuffield Department of Clinical Neurosciences, has extensively studied the brain’s response to pain. Prof. Tracey uses brain imaging and knowledge of the brain’s interconnections to identify pain within the cortex. “A most objective method indeed, but highly invasive”, notes Guillem Cucurull, PhD student at the ISE Lab of the Computer Vision Center. What his group is investigating opens a window of opportunity for monitoring patients in intensive care units: cheaper and far less invasive. All you need is a camera.

Pain recognition by facial analysis

The results achieved by Pau Rodríguez and Guillem Cucurull’s team are undoubtedly good, with striking accuracy on a standard dataset commonly used by computer vision scientists. Nevertheless, Pau Rodríguez is cautious: “they are good under controlled circumstances and with this particular dataset. We don’t know how it would work with children, or with people with dementia”. The network was trained on one dataset and then tested on one it had not seen before, obtaining the 97.2% accuracy reported in their paper.

The automatic model they propose predicts pain in real time and can keep learning along the way, increasing reliability and accuracy. They have also found that, at least within this research, CNNs (convolutional neural networks) perform better with less processed images. The authors avoided facial action units (groups of muscles), which have typically been used to encode facial motion, leaving the neural network free to infer the level of pain through its own learning.

“We measure pain on a scale from 0 to 15, where any number above 0 is interesting for us, as it is already pain.” The dataset images had previously been annotated by experienced annotators, specialists who can tell people in pain apart from people without pain, giving each picture a number between 0 and 10. “This is how our model learns”, explains Pau Rodríguez. “We show the neural network a picture with the number associated to it. The neural network then infers the features that correspond to that level of pain, recognising the common features and patterns and becoming a true expert in pain detection”.
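The supervised scheme described above can be illustrated with a toy example: annotated pairs of (features, pain label) train a model to map faces to pain levels. Everything here is a hypothetical stand-in (synthetic "features" correlated with a synthetic label, and a single linear unit instead of a deep network); it only shows the learning-from-annotations loop, not the authors' actual model.

```python
import random

rng = random.Random(42)

def make_example():
    """One annotated sample: a pain label in [0, 10] plus noisy features."""
    pain = rng.uniform(0, 10)              # the annotator's label
    feats = [pain * w + rng.gauss(0, 0.3)  # features correlated with pain
             for w in (0.5, -0.2, 0.8)]
    return feats, pain

data = [make_example() for _ in range(200)]

# One linear unit trained with stochastic gradient descent on squared error:
# show an example, compare the prediction with the annotated label, adjust.
weights, bias, lr = [0.0, 0.0, 0.0], 0.0, 0.01
for _ in range(300):
    for feats, label in data:
        pred = sum(w * f for w, f in zip(weights, feats)) + bias
        err = pred - label
        weights = [w - lr * err * f for w, f in zip(weights, feats)]
        bias -= lr * err

# Evaluate on a fresh annotated example the model has never seen.
feats, label = make_example()
pred = sum(w * f for w, f in zip(weights, feats)) + bias
print(f"label={label:.1f}  prediction={pred:.1f}")
```

The same show-label-adjust cycle, scaled up to a deep network and real annotated video frames, is what lets the model pick out the facial patterns shared by images with the same pain level.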

The article “Deep Pain: Exploiting Long Short-Term Memory Networks for facial expression classification” can be accessed here.

Image Credit: Business photography created by Jcomp – Freepik.com

This work has been possible thanks to the support of the Spanish project TIN2015-65464-R (MINECO/FEDER), the 2016FI_B 01163 grant by the CERCA Programme/Generalitat de Catalunya, and the COST Actions IC1106 (Integrating Biometrics and Forensics for the Digital Age) and IC1307 iV&L Net (European Network on Integrating Vision and Language), both supported by COST (European Cooperation in Science and Technology).

A project will validate learning through commercial video games

The xBadges project in La Vanguardia (22/03/2017):

Researchers at the Universidad de Barcelona (UB) are taking part in the xBadges project, which uses an innovative system to validate and certify the competences acquired through commercial video games.

According to a statement issued by the university, the programme is based on the concept of “reverse gamification”, which makes it possible to analyse learning through telemetry elements embedded in off-the-shelf video games. Read the article

CVC in the news as part of Barça’s sports innovation strategy #ComputerVision #Sports

Barça puts sports innovation at the centre of its strategy

Like most big clubs, FC Barcelona works on improving its footballers’ performance, reducing their injuries and shortening recovery times, and it aims to be a pioneer in sports marketing and in the application of technology to sport. These tasks, until now managed with little coordination, will be integrated into the Barça Innovation Hub, formerly known as Barça Universitas, which aims to become a world-class centre of knowledge and sports innovation. Go to the article