
Press Release


New system powered by deep learning makes it possible to detect Covid-19 lesions by analysing CT chest scans


Researchers from the Eurecat technology centre, the CVC and the University of Barcelona have developed an automated system that taps into Deep Learning technology to detect lesions caused by Covid-19 by reading computed tomography (CT) chest images.

The study, conducted by researchers Giuseppe Pezzano, Vicent Ribas, Petia Radeva and Oliver Díaz, was recently published in the journal ‘Computers in Biology and Medicine’.

The research “has enabled us to confirm the system’s efficiency as a decision support tool for healthcare professionals in screening for Covid-19 and to measure the severity, extent and evolution of SARS-CoV-2 pneumonia including over the medium and long term,” says principal investigator Giuseppe Pezzano, a researcher at Eurecat’s Digital Health Unit and the UB.

Specifically, the system works by “first segmenting the lungs from the CT image to narrow down the search area and then using the algorithm to analyse the lung area and detect the presence of Covid-19,” adds Pezzano. “If there is a positive finding, the image is processed to identify the areas affected by the disease.”
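The three stages Pezzano describes can be sketched as a minimal pipeline. This is an illustration only: the thresholds and placeholder functions below are hypothetical stand-ins for the trained CNNs the published system actually uses.

```python
import numpy as np

def segment_lungs(ct_slice):
    # Stage 1 placeholder: isolate the lung area to narrow the search.
    # A simple threshold on normalised intensities stands in for the CNN.
    return ct_slice > 0.2

def detect_covid(lung_pixels):
    # Stage 2 placeholder: decide whether the lung area shows Covid-19.
    return lung_pixels.mean() > 0.5

def segment_lesions(ct_slice, lung_mask):
    # Stage 3 placeholder: mark affected areas inside the lungs only.
    return (ct_slice > 0.6) & lung_mask

def analyse_slice(ct_slice):
    lung_mask = segment_lungs(ct_slice)
    if not lung_mask.any() or not detect_covid(ct_slice[lung_mask]):
        return None  # negative finding: no lesion map is produced
    return segment_lesions(ct_slice, lung_mask)
```

The key design point is that lesion segmentation only ever runs inside the lung mask, which both narrows the search area and suppresses spurious detections outside the lungs.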

The algorithm was tested on 79 volumes and 110 slices of CT scans, obtained from three open-access image repositories, in which Covid-19 infection had been detected. Average accuracy for SARS-CoV-2 lesion segmentation was about 99 percent, with no false positives observed during identification.

“The accuracy of the tool developed as shown by the results of the study opens up a wide range of other applications in healthcare, a field in which Artificial Intelligence is proving to be increasingly helpful,” points out Vicent Ribas, one of the study researchers and head of the Data Analytics in Medicine research strand at Eurecat’s Digital Health Unit.

The method uses an innovative way of calculating the segmentation mask of medical images, which has also produced outstanding results in segmenting nodules in CT scans.

Recently “papers have been published showing that Deep Learning algorithms and Computer Vision have achieved greater accuracy than medical experts in detecting cancer in mammograms and predicting strokes and heart attacks,” comments Petia Radeva, CVC researcher and Professor and head of the consolidated Computer Vision and Machine Learning Research Group at the University of Barcelona. “We wanted to be there on the frontline and so we’ve developed this technology to help doctors fight Covid-19 by providing them with high-precision algorithms to analyse medical images objectively, transparently and robustly.”

“This type of automated system is an extremely significant tool for health professionals for more robust and accurate diagnoses,” says UB Assistant Professor Oliver Díaz. “That’s because it can provide information which cannot be measured by a human being.”


Reference: CoLe-CNN+: Context learning – Convolutional neural network for COVID-19 Ground-Glass-Opacities detection and segmentation, Computers in Biology and Medicine.


Computer Vision meets archaeology to detect nearly 10,000 archaeological tumuli in Galicia


Researchers from the Computer Vision Center and the Landscape Archaeology Research Group (GIAP) of the Catalan Institute of Classical Archaeology (ICAC) have developed a hybrid algorithm that combines Deep Learning and Machine Learning to improve the automatic detection of archaeological tumuli while excluding most false positives.

Archaeological tumuli are one of the most common types of archaeological sites and can be found across the globe. This is perhaps why many studies have attempted to develop methods for their automated detection. Their characteristic tumular shape has been the primary feature for their identification in the field and in LiDAR-based topographic data, which usually takes the form of Digital Terrain Models (DTMs).

The simple shape of mounds or tumuli is ideal for detection using deep learning approaches. Deep learning detectors usually require large quantities of training data (in the order of thousands of examples) to produce significant results. However, the homogeneously semi-hemispherical shape of tumuli allows usable detectors to be trained with far less training data, considerably reducing both the effort required to gather it and the computational resources needed to train a convolutional neural network (CNN) detector.

This type of feature, however, presents an important drawback. Their common, simple and regular shape is similar to that of many non-archaeological features, and therefore studies implementing methods for mound detection in LiDAR-derived DTMs and other high-resolution datasets are characterised by a very large number of false positives (objects incorrectly identified as mounds).

During the initial research, Iban Berganzo and Hèctor A. Orengo, researchers from the GIAP group of the Catalan Institute of Classical Archaeology, located almost 9,000 tumuli in Galicia. However, not all of these were actual tumuli, as the automated detection results also included false positives. After initial data validation performed in collaboration with Dr. Miguel Carrero (University College London & University of Santiago de Compostela, GEPN-AAT), Dr. João Fonte (University of Exeter) and Dr. Benito Vilas (University of Vigo), they realised that of the ca. 9,000 detected objects only ca. 7,600 corresponded to real archaeological mounds. Although this was an excellent result, well below the percentage of false positives reported by similar studies, they thought they could improve the detection rate while decreasing the number of false positives.

For this reason, during the summer, GIAP researchers, in collaboration with CVC researcher Dr. Felipe Lumbreras, developed a new approach to reduce the number of false positives while increasing the detection rate. After analysing the nature of the detected false positives, they developed a hybrid approach that mixes classical machine learning and deep learning. The objective was to obtain a more precise definition of archaeological tumuli in which not just the shape but also the multispectral characteristics of the objects are considered when looking for tumuli.
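The hybrid idea can be illustrated with a minimal sketch. All feature names and thresholds below are hypothetical; the actual study combines a CNN shape detector over DTMs with machine learning over multispectral imagery.

```python
# Hypothetical two-stage filter: a CNN proposes candidates by shape,
# then a spectral rule rejects look-alikes (e.g. bare-rock outcrops or
# roundabouts) whose multispectral signature differs from that of a
# vegetated earthen mound.
def is_tumulus(candidate):
    if candidate["shape_score"] < 0.8:        # stage 1: CNN shape confidence
        return False
    return 0.2 <= candidate["ndvi"] <= 0.9    # stage 2: spectral plausibility

candidates = [
    {"shape_score": 0.95, "ndvi": 0.55},  # vegetated mound -> kept
    {"shape_score": 0.91, "ndvi": 0.05},  # bare rock / tarmac -> rejected
    {"shape_score": 0.40, "ndvi": 0.60},  # weak shape match -> rejected
]
detected = [c for c in candidates if is_tumulus(c)]
```

The point of the second stage is precisely the one the researchers describe: shape alone admits many false positives, so a spectral criterion is needed to discard objects that merely look like mounds.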

The first results have been published in the journal Remote Sensing. In this article, the researchers give more information on the data analysed and the performance of this innovative computer-based automatic detection initiative.

Results:

The results that this new approach has produced are nothing less than spectacular:

  • The area covered is, to the best of our knowledge, the largest in which archaeological DL approaches have ever been applied, covering almost 30,000 km2.
  • 10,527 objects have been detected, of which approximately 9,422 correspond to archaeological tumuli (after careful visual validation with high-resolution imagery and pending ground validation). That is, 89.5% of the detected objects correspond to true positives.
  • Only open-source data have been employed in this research. However, the use of higher-resolution data, in particular higher-resolution satellite imagery instead of the Sentinel-2 (10 m/px) images employed, would radically decrease the number of false positives, reaching a success rate above 97%.
  • Code, sources and results (including validation) are freely available, and the code is designed to be used on freely accessible cloud computing platforms (Google Colaboratory and Earth Engine), so a lack of computational resources will not pose a problem for its application to other study areas (even very large ones).

This approach provides a way forward for the detection of tumuli while excluding most false positives. The algorithm can be applied in areas of the world where topographic data of sufficient resolution are available. Provided with specific training data, this hybrid approach can also be used to detect other types of features where large numbers of false positives are an issue.

Link to the paper: https://www.mdpi.com/2072-4292/13/20/4181


Funding

This research has received funding from multiple sources, which we would like to acknowledge here: Iban Berganzo's PhD is funded by an Ayuda a Equipos de Investigación Científica of the Fundación BBVA for the project DIASur. Hèctor A. Orengo is a Ramón y Cajal Fellow (RYC-2016-19637) of the Spanish Ministry of Science, Innovation and Universities. Felipe Lumbreras' work is supported in part by the Spanish Ministry of Science and Innovation project BOSSS TIN2017-89723-P. Miguel Carrero and João Fonte are Marie Skłodowska-Curie Fellows (Grant Agreements 886793 and 794048, respectively). Some of the GPUs used in these experiments were a donation from the Nvidia Hardware Grant Programme.


Radiolung, a joint project of the Computer Vision Center and the Germans Trias Hospital, wins the Innovation Award from the Lung Ambition Alliance for lung cancer early detection

  • Radiolung aims to develop an artificial intelligence system based on radiomic information to improve the detection of malignancy in lung nodules.
  • The research team is composed of members of the CVC research group Interactive Augmented Modeling for Biomedicine, led by Dr. Dèbora Gil, and various departments of the Hospital and Research Institute Germans Trias i Pujol, led by Dr. Antoni Rosell.

The Lung Ambition Alliance (LAA), a consortium recently created under the auspices of the International Association for the Study of Lung Cancer (IASLC), has recognized the CVC and Germans Trias project “Radiomics and Radiogenomics in lung cancer screening – Radiolung” with the Innovation Award. The LAA Scientific Advisory Committee has distinguished the project, led by Antoni Rosell, clinician at Germans Trias, for its scientific quality, novelty, potential application and contribution to the main goal of the alliance: doubling lung cancer survival between 2020 and 2025.

Radiolung aims to develop a predictive model, based on artificial intelligence and radiomics, to improve the detection of malignancy in lung nodules and thus create software with the potential to be used for clinical purposes and to improve the current model of lung cancer screening.

Radiomics for a more accurate diagnosis

Currently, lung cancer screening is performed by means of low-radiation computed tomography (chest CT), which detects lung nodules but does not allow clinicians to assess, with enough accuracy, whether they are benign or malignant. Therefore, most patients have to undergo radiological follow-up, consisting of complementary radiological examinations and, sometimes, the taking of biopsy samples to refine the diagnosis.

In order to improve the diagnostic capacity of CT scans, researchers from the CVC and Germans Trias are working on an artificial intelligence algorithm that is able to analyse the mathematical characteristics of the images, a technique known as radiomics.

“Radiomics can extract a large number of 3D measurements from CT scans, far beyond the visual capacity of the human eye, and combine them with histological and molecular characteristics of lung tumours. In this way, you can systematically find, with a single CT image, the ranges that correlate most with the malignancy and/or severity of the nodule,” explains Dr. Dèbora Gil, principal investigator of the CVC research group Interactive Augmented Modeling for Biomedicine (IAM4B). “This analysis of multiple data points is impossible for a radiologist to perform visually, so radiomics can be a very useful tool to help specialists make more accurate diagnoses,” Dr. Gil continues.
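As a purely illustrative sketch, a few first-order radiomic-style features can be computed from a segmented nodule with NumPy. Real radiomics pipelines extract hundreds of shape and texture descriptors; the tiny feature set here is a hypothetical minimum, not the Radiolung method.

```python
import numpy as np

def first_order_features(volume, nodule_mask):
    # Extract simple intensity statistics from the voxels inside the
    # segmented nodule: the kind of quantitative measurements radiomics
    # aggregates far beyond what the eye can assess from a CT image.
    voxels = volume[nodule_mask].astype(float)
    hist, _ = np.histogram(voxels, bins=16)
    p = hist[hist > 0] / hist.sum()
    return {
        "volume_voxels": int(nodule_mask.sum()),
        "mean_intensity": float(voxels.mean()),
        "std_intensity": float(voxels.std()),
        "entropy": float(-(p * np.log2(p)).sum()),
    }
```

A predictive model like the one described would then be trained on such feature vectors, together with histological and molecular labels, to score nodule malignancy.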

For his part, Antoni Rosell, clinical director of the Thorax Area at Germans Trias and principal investigator of Radiolung, points out that “being able to determine the degree of aggressiveness of the lung nodule and its mutational profile in its early stages by radiomics will make it possible to predict the long-term behaviour of the tumour, both in terms of evolution and the risk of recurrence. In this way, we will be able to provide targeted, specific and personalized treatments to patients, even in the initial stage of the disease.”

With the Innovation Award, Radiolung has obtained €30,000 for the development of this technology. “Thanks to this award, we will be able to transfer this innovation to clinical practice,” concludes Rosell. The project has also been recognized with a Premi Talents grant, an initiative promoted by the Fundació Catalunya-La Pedrera and the Hospital Germans Trias with the aim of funding research projects by health professionals who have finished their residency.

In the media

Premian un proyecto de Can Ruti para la detección precoz del cáncer de pulmón – La Vanguardia (in Spanish)

El projecte Radiolung de Germans Trias i del Centre de Visió per Computador guanya el Premi a la Innovació per a la detecció precoç de càncer – TOT Badalona (in Catalan)


Artificial intelligence tool developed to monitor the destruction of buildings in wars via satellite

  • Researchers from Barcelona and California, led by the Institute of Economic Analysis (IAE-CSIC) and the UAB, and with the participation of the Computer Vision Center (CVC), have applied machine learning with neural networks to detect the destruction of buildings by artillery.
  • This automated method would make it possible to monitor the destruction caused by an armed conflict in near real-time, with the aim of improving humanitarian response.

This method, developed in a project co-led by the IAE-CSIC and the UAB with the participation of CVC researcher Dr. Joan Serrat, is based on neural networks trained to detect, in satellite images, the characteristic traces of destructive heavy-weapons attacks (artillery and bombing), such as the debris of collapsed buildings or the presence of bomb craters.

In the study, whose results are published in the journal Proceedings of the National Academy of Sciences (PNAS), the scientists applied this method to monitor the destruction of six of Syria’s main cities (Aleppo, Daraa, Deir-Ez-Zor, Hama, Homs and Raqqa), plagued by armed conflict for more than ten years. The results show that the method is highly effective for monitoring. “Our approach can be applied to any populated area as long as repeated high-resolution satellite imagery is available,” explained the authors.

Including the time factor

“An essential element of the development is that the neural network superimposes and compares successive images of the same place, contrasting them on a timeline that always includes a first image taken before the war. Another novelty is the incorporation of spatial and temporal information, in other words, information that gives context to the observation of destruction. In addition, the tool incorporates a novel method of image labeling: the system can make reasonable assumptions using the contextual information and train the algorithm with the destruction information around a building,” said IAE-CSIC researcher Dr. Hannes Mueller, lead author of the article.
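The comparison against a pre-war baseline can be caricatured as a pixel-level change score. This is a deliberately naive stand-in: the actual system feeds stacks of co-registered satellite images, with their spatial and temporal context, to a convolutional network rather than differencing pixels.

```python
import numpy as np

def change_fraction(pre_war, current, threshold=0.3):
    # Fraction of pixels whose intensity changed by more than `threshold`
    # with respect to the pre-war baseline image of the same place.
    # Assumes both images are co-registered and intensity-normalised.
    diff = np.abs(current.astype(float) - pre_war.astype(float))
    return float((diff > threshold).mean())
```

Even this toy version captures the core design choice quoted above: destruction is defined relative to a baseline image that predates the war, not from a single snapshot.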

Automated methods must be able to detect destruction in a context where the vast majority of images show no destruction at all. However, such methods often classify intact buildings as demolished, resulting in a high false-positive rate (FPR).

Even in Aleppo, a heavily war-torn city, only 2.8% of all images of populated areas contain a building confirmed as destroyed by the United Nations Operational Satellite Applications Programme (UNOSAT), which performed a manual classification in September 2016.

Low accuracy is “a very serious problem. Even in cities heavily hit by conflict, only 1% of buildings are destroyed. Hence, their detection is like looking for a needle in a haystack. If we have false positives in the images, the margin of error shoots up quickly. In this case, 20% accuracy, for example, means that if an algorithm says something is destroyed, only 20% of what it says is actually destroyed,” continued the IAE-CSIC scientist.
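The base-rate problem behind that quote can be made concrete with hypothetical numbers (none of these figures come from the study itself):

```python
# Hypothetical numbers illustrating the base-rate problem: with only 1%
# of buildings destroyed, even a small false-positive rate dominates.
buildings = 100_000
destroyed = 1_000                      # 1% base rate
recall = 0.90                          # detector finds 90% of destroyed buildings
false_positive_rate = 0.036            # 3.6% of intact buildings wrongly flagged

true_positives = destroyed * recall                              # 900
false_positives = (buildings - destroyed) * false_positive_rate  # ~3,564
precision = true_positives / (true_positives + false_positives)
# precision comes out at roughly 0.20: of everything the detector flags
# as destroyed, only about one in five actually is -- the "20%" of the quote.
```

This is why the authors emphasise keeping the false-positive rate low: with a 1% base rate, false positives from the 99% of intact buildings swamp the true detections.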

The study demonstrates that the trained algorithm is able to identify damage in areas of Aleppo city that are not part of the UNOSAT analysis. It also provides evidence that this method can identify shelling in all six cities.

The results of this work are promising. They enable applications for the detection and even near real-time monitoring of destruction by war conflicts.

“Our method is particularly well suited to take advantage of the increasing availability of high-resolution imagery. We have estimated that human manual labeling of our entire dataset would cost approximately $200,000, and additional image repetitions would increase this cost almost proportionally. With an automated method such as ours, the benefits are numerous. More frequent imaging helps improve accuracy and the additional cost is small,” concluded Dr. Joan Serrat, CVC and UAB researcher.

In the media

Un algoritmo permite monitorizar la destrucción que causan las guerras – Agencia EFE

Un algoritmo permite monitorizar la destrucción que causan las guerras – El Diario


Reference: Monitoring war destruction from space using machine learning. Hannes Mueller, Andre Groeger, Jonathan Hersh, Andrea Matranga, Joan Serrat. PNAS, June 8, 2021, 118 (23) e2025400118; https://doi.org/10.1073/pnas.2025400118


Journey through the history of TV3 thanks to Computer Vision


The ViVIM (Computer Vision for Multi-Platform Immersive Video) project, with the participation of the Computer Vision Center (CVC), has developed an immersive tool that offers a unique experience: travelling in a virtual elevator that transports users to the most iconic moments in the history of TV3.

The elevator travels from 1983 to 2020, with each floor representing a year. The user only has to choose a year and the elevator will transport them there. Once the destination is reached, the user can choose among different screens showing the most emblematic shows and broadcasts of TV3 from that particular year.

This experience presents the history of TV3, from its launch in 1983 to the present day.

The ViVIM project is a Catalan initiative, funded by ACCIÓ, within the action plan of the Ris3CAT Media Community promoted by the Generalitat de Catalunya, and its consortium is formed by I2CAT, which coordinates the project, the CVC, the Catalan Audiovisual Media Corporation (CCMA), Vysion and Eurecat.

You can learn more about ViVIM here

