
News

In the media | News | Press Release

Radiolung, a joint project of the Computer Vision Center and the Germans Trias Hospital, wins the Lung Ambition Alliance Innovation Award for early detection of lung cancer

  • Radiolung aims to develop an artificial intelligence system based on radiomic information to improve the detection of malignancy in lung nodules.
  • The research team comprises members of the CVC research group Interactive Augmented Modeling for Biomedicine, led by Dr. Dèbora Gil, and several departments of the Germans Trias i Pujol Hospital and Research Institute, led by Dr. Antoni Rosell.

The Lung Ambition Alliance (LAA), a consortium recently created under the auspices of the International Association for the Study of Lung Cancer (IASLC), has awarded its Innovation Award to the CVC and Germans Trias project “Radiomics and Radiogenomics in lung cancer screening – Radiolung”. The LAA Scientific Advisory Committee distinguished the project, led by Antoni Rosell, clinician at Germans Trias, for its scientific quality, novelty, potential application and contribution to the alliance’s main goal: doubling lung cancer survival between 2020 and 2025.

Radiolung aims to develop a predictive model, based on artificial intelligence and radiomics, to improve the detection of malignancy in lung nodules, and thus to create software with the potential to be used for clinical purposes and to improve the current model of lung cancer screening.

Radiomics for a more accurate diagnosis

Currently, lung cancer screening is performed by means of low-radiation computed tomography (chest CT), which detects lung nodules but does not allow clinicians to assess with enough accuracy whether they are benign or malignant. Therefore, most patients have to undergo radiological follow-up, consisting of complementary radiological examinations and, sometimes, the taking of biopsy samples to refine the diagnosis.

In order to improve the diagnostic capacity of CT scans, researchers from the CVC and Germans Trias are working on an artificial intelligence algorithm able to analyse the mathematical characteristics of the images, a technique known as radiomics.

“Radiomics can extract a large number of 3D measurements from CT scans, far beyond the visual capacity of the human eye, and combine them with histological and molecular characteristics of lung tumours. In this way, you can systematically find, from a single CT image, the ranges that correlate most with the malignancy and/or severity of the nodule”, explains the principal investigator of the CVC research group Interactive Augmented Modeling for Biomedicine (IAM4B), Dr. Dèbora Gil. “This analysis of multiple data is impossible for a radiologist to perform visually, so radiomics can be a very useful tool to help specialists make more accurate diagnoses”, Dr. Gil continues.
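To give a flavour of what this looks like in practice, the snippet below extracts a standard set of 3D radiomic features from a CT volume using the open-source pyradiomics library. This is a generic sketch, not the Radiolung pipeline itself, and the file names are hypothetical placeholders.

    # Generic radiomic feature extraction with pyradiomics (not Radiolung's own code).
    # The CT volume and the binary mask delineating the nodule are placeholder files.
    from radiomics import featureextractor

    extractor = featureextractor.RadiomicsFeatureExtractor()
    features = extractor.execute("ct_scan.nrrd", "nodule_mask.nrrd")

    # Hundreds of shape, intensity and texture measurements of the nodule
    for name, value in features.items():
        if not name.startswith("diagnostics"):
            print(name, value)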

For his part, Antoni Rosell, clinical director of the Thorax Area at Germans Trias and principal investigator of Radiolung, points out: “Being able to determine the degree of aggressiveness of the lung nodule and its mutational profile in its early stages through radiomics will allow us to predict the long-term behaviour of the tumour, both in terms of evolution and the risk of recurrence. In this way, we will be able to provide targeted, specific and personalized treatments to patients, even in the initial stage of the disease”.

With the Innovation Award, Radiolung has obtained €30,000 for the development of this technology. “Thanks to this award, we will be able to transfer this innovation to clinical practice,” concludes Rosell. The project has also been recognized with a Premi Talents grant, an initiative promoted by the Fundació Catalunya-La Pedrera and the Hospital Germans Trias to fund research projects by health professionals who have finished their residency.

In the media

Premian un proyecto de Can Ruti para la detección precoz del cáncer de pulmón [“A Can Ruti project for early detection of lung cancer receives an award”] – La Vanguardia (in Spanish)

El projecte Radiolung de Germans Trias i del Centre de Visió per Computador guanya el Premi a la Innovació per a la detecció precoç de càncer [“The Radiolung project of Germans Trias and the Computer Vision Center wins the Innovation Award for early cancer detection”] – TOT Badalona (in Catalan)

News

New agreement between CVC and AmicsUAB


On September 13th, CVC and *AmicsUAB signed an agreement strengthening the collaboration channels between both entities. Josep Lladós, CVC director, and Francesc Cayuela, AmicsUAB President, met and formalized the agreement with their signatures. CVC and AmicsUAB are committed to reinforcing communication and promoting activities by both parties.

This underlines once again the CVC’s dedication to building bridges with other organizations and boosting synergies.

We are delighted to be part of this great UAB family.

*AmicsUAB is an organization that welcomes anyone who wants to be a friend of the UAB, whether they have a link with the University (former students, professors, researchers) and want to deepen it, or are external people simply interested in being part of the UAB family. Its main objective is to promote an active and open relationship between the University and society, providing a point of connection between the two. The organization fosters the transfer of knowledge while offering services from the University and other entities to its members and companies.

In the media | News

Myths and legends about videogames – Dr. Dèbora Gil at “Maldita Twitchería”


The Computer Vision Center (CVC) continues its collaboration with Maldita Tecnología to put into context several concepts associated with Artificial Intelligence. On this occasion, it was the turn of Dr. Dèbora Gil, who explained on Maldita Twitchería how videogames can help in medical and computer vision research.

First, Dr. Gil explained to Naiara Bellio and Joselu Zafra that the technology behind the development of increasingly realistic videogames is the same one used by her research group to create the CVC lung GPS.

She also talked about “serious games”: videogames with tremendous potential in fields such as neurocognitive rehabilitation and the aeronautic industry, as her team is showing in the European-funded project E-PILOTS.

Last but not least, Dèbora pointed out how computer vision research is taking advantage of the enormous advances the videogame industry is making in GPU technology: specialized electronic circuits designed to rapidly manipulate and alter memory in order to accelerate the creation of high-quality images.

You can watch Dèbora’s appearance on “Maldita Twitchería” here (in Spanish, from 43′ 15″).

News

New version of CARLA released


After more than eight months of waiting, the 0.9.12 release of CARLA is finally available, as usual, as open-source code and protocols.

In this new version of the open-source simulator for autonomous driving research, in whose development the CVC is involved through the Advanced Driver Assistance Systems group led by ICREA researcher Antonio López, there are important novelties compared with the previous version, CARLA 0.9.11, such as:

  • Large Maps. Users can now run simulations on vast maps that can span upwards of 10,000 km². With the updated Traffic Manager, CARLA users can also populate the map with hundreds of cars that drive at high speeds.
  • Physics improvements, including full physics determinism in synchronous mode, improved wheel physics, custom vehicle wheel settings, and vehicle control telemetry.
  • A new default town, called Town 10 HD.
  • A new generation of pedestrians and vehicles with improved meshes and details. There is a lot of variety in the new pedestrians, who have a range of clothing, body shapes and skin colors.
  • Addition of the Optical Flow Camera, which captures the motion of pixels between frames (see the sketch after this list).
  • Extended OpenStreetMap functionality, allowing users to automatically generate traffic lights at junctions and control how and where those traffic lights are generated.
  • Ray/RLlib integration, enabling the use of CARLA as an environment in Ray for reinforcement learning experiments. This environment can be used for both training and inference.
  • Beta integration with Chrono, a highly realistic multi-physics simulation engine. This integration allows users to delegate the simulation of advanced physics to Chrono, providing a more accurate simulation experience.
  • The CarSim integration is no longer in beta and is now fully supported, after some fixes and updates to the functionality.
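Below is a minimal sketch of how the new Optical Flow Camera can be attached to a vehicle through CARLA’s Python API. It assumes a CARLA 0.9.12 server listening on localhost:2000; the choice of vehicle, spawn point and sensor placement are illustrative, not prescribed by the release.

    # Minimal sketch: attaching the new Optical Flow Camera (CARLA 0.9.12).
    # Assumes a CARLA server is already running on localhost:2000.
    import carla

    client = carla.Client("localhost", 2000)
    client.set_timeout(10.0)
    world = client.get_world()
    blueprints = world.get_blueprint_library()

    # Spawn any vehicle at the first available spawn point (illustrative choice)
    vehicle_bp = blueprints.filter("vehicle.*")[0]
    vehicle = world.spawn_actor(vehicle_bp, world.get_map().get_spawn_points()[0])

    # Attach the optical flow sensor roughly at windshield height
    flow_bp = blueprints.find("sensor.camera.optical_flow")
    flow_cam = world.spawn_actor(
        flow_bp,
        carla.Transform(carla.Location(x=1.5, z=2.4)),
        attach_to=vehicle,
    )

    # Each measurement encodes the per-pixel motion between consecutive frames
    flow_cam.listen(lambda data: print(data.frame, data.width, data.height))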


You can find more detailed information about CARLA 0.9.12 in this link.

News

Document Visual Question Answering: more than understanding a document


CVC researchers, in collaboration with the Centre for Visual Information Technology (CVIT), IIITH, and with support from an Amazon AWS Machine Learning Research Award, have been organising a series of projects and challenges to take document understanding further. The long-term aim is to push for a change of paradigm in the document analysis and recognition community and to come up with new ways of doing things.

Written information is all around us, and performing most everyday tasks requires us to read and understand this kind of information in the world. But although computers have proven to be good with text, they still face limitations when it comes to reading information from an image, such as a scanned document or a picture of a sign in the street. In these cases, computers cannot readily access the information. That’s why, for instance, spammers use images with spam text embedded in order to circumvent spam filters.

Focusing on the case of documents, there has been a strong research effort to improve machine understanding of these rich sources of information. Document image analysis is one of the oldest application fields of artificial intelligence, and it combines two important cognitive aspects: vision and language understanding. However, until now, all efforts have focused on creating models that extract information from images in a ‘bottom-up’ approach. This is the case of the well-known Optical Character Recognition (OCR) engines, which are really useful for recognising text in any font (typed, printed or handwritten), detecting tables and diagrams, or extracting handwritten fields from pre-defined forms, but which are disconnected from the final use of this information.

Written communication requires much more than extracting and interpreting the textual content. Documents usually include all kinds of visual elements, such as symbols, marks, separators, diagrams, drawn connections between items, page structure, forms, the different colors and fonts used, highlighting, etc. This sort of non-textual communication can provide the information necessary to understand the document in its global context.

In conclusion, these kinds of models focus only on the ‘conversion’ of documents into digital formats rather than on really ‘understanding’ the message contained in the documents. In addition, they are designed to work offline, as no interaction with humans is required.

Document Visual Question Answering: a higher understanding beyond recognition

With support from an Amazon AWS Machine Learning Research Award, researchers from the Computer Vision Center (CVC) and the Centre for Visual Information Technology (CVIT), IIITH, started a collaborative research project to go further in the field of document understanding.

Known as Document Visual Question Answering (DocVQA), the research focuses on initiating a dialogue with different forms of written text, such as that in a document, a book, an annual report or a comic strip, and on guiding machines to understand human requests so as to respond appropriately to them, eventually in real time.

“More than a set of challenges and datasets, the long-term aim is to push for a change of paradigm in the document analysis and recognition community and, hopefully, to come up with new ways of doing things: methods that condition the information extraction on the high-level task defined by the user in the form of a natural language question, while maintaining a human-friendly interface”, explains Dr. Dimosthenis Karatzas, CVC Associate Director and principal investigator of this project.
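As a rough illustration of the DocVQA setting, the snippet below asks a natural language question over a document image using a publicly available Hugging Face pipeline. This is a third-party baseline used only as an example, not the CVC/CVIT models; the image path and question are hypothetical placeholders, and the pipeline additionally requires an OCR backend such as pytesseract to be installed.

    # Illustrative DocVQA-style query with a public Hugging Face model
    # (not the CVC/CVIT method); requires pytesseract for the OCR step.
    from transformers import pipeline

    docqa = pipeline("document-question-answering",
                     model="impira/layoutlm-document-qa")

    answers = docqa(image="annual_report_page.png",
                    question="What was the total revenue in 2020?")
    print(answers[0]["answer"], answers[0]["score"])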

DocVQA Challenge series

The DocVQA Challenge series was born as a result of the first year of work on this research. To date, they have set up three challenges, looking at increasingly difficult facets of the problem. “We started by defining a large-scale, broad dataset of 12,000 documents along with 50,000 question-and-answer pairs. Then we moved to asking questions over a set of documents – a whole collection. Finally, we are currently working on a very challenging case: infographics, where textual information is intrinsically linked with graphical elements to create complex layouts that tell a story based on data”, states Dr. Karatzas.

Furthermore, the DocVQA web portal is quickly becoming the de facto benchmark for this task, and researchers use it daily to evaluate new ideas, models and methods: “To date, we have evaluated around 1,300 submissions to the first two challenges, of which more than 60 have been made public by their authors and feature in the ranking tables”, points out Dr. Dimosthenis Karatzas.

First Workshop on DocVQA at ICDAR 2021

In the context of the 16th International Conference on Document Analysis and Recognition (ICDAR 2021), the researchers involved in this project will organize the first workshop on DocVQA. The workshop aims to create a space to discuss the DocVQA paradigm and the results of the ICDAR 2021 long-term challenge on DocVQA. DocVQA 2021 comes after the successful organization of the Document Visual Question Answering (DocVQA) challenge as part of the “Text and Documents in the Deep Learning Era” workshop at CVPR 2020. The workshop will be held on September 6th and will feature top speakers such as Amanpreet Singh (Facebook), Dr. Yijuan Lu (Microsoft), and Dr. Brian Price (Adobe).
