Information versus knowledge: when an image is worth more than a thousand words

Post originally published at the Recercaixa Blog.

Josep Lladòs, Computer Vision Center Director

Last year, at a debate organised by La Caixa, we talked about the challenges of the digital era: knowledge versus information. We discussed the demands and opportunities, as well as the risks, that the information society revolution presents. The internet is changing information consumption habits and enabling new paradigms in which citizens can access data on the web in a ubiquitous, universal and immediate manner. At the same time, the sheer volume of information generated, the difficulty of processing it, and the question of each source's reputation make the step from information to knowledge a genuine challenge and need. The figures speak for themselves: every minute, 278,000 tweets are generated, 3,600 pictures are posted on Instagram, 72 hours of video are uploaded to YouTube, 204 million emails are sent, and 2 million Google searches are performed. If all this information is to be of any use, we need to turn it into knowledge; in other words, recipients have to receive it processed and correlated. Artificial Intelligence techniques able to extract the semantics of information are increasingly necessary. We are immersed in an era of big data, where analytical processes are crucial to transform information into knowledge.

The interpretation of the information contained in images is a telling example. Computer vision can be defined as the discipline that develops computer programs which give machines the ability to see. To see is to interpret visual information: to turn pixels, the elemental unit of visual information, into knowledge. In recent years, computer vision has become an emergent and ubiquitous technology.
On a daily basis we use devices with built-in cameras running vision software: night cameras that watch over newborns, cameras in videogame consoles that detect our gestures and move the game avatars, cameras that read licence plates at the entrance of car parks, cameras that detect whether a ball is in or out at a tennis match or whether a football player is offside, and so on. Vision is a technology increasingly in demand in sectors such as transport, health and security; the vision market has been estimated to grow by 40% annually until 2020.

When the digital images are scanned or photographed documents, we speak of the subarea of document image analysis and recognition, which focuses on the automatic recognition of a document's content (printed, typewritten or handwritten text, as well as graphical elements). Historical archives and libraries hold millions of documents, many of them manuscripts, which contain the historical memory of our societies. These documents have long been inaccessible to the general public. For some years now there have been large digitisation campaigns which, at the very least, make it possible to publish those documents online. Nevertheless, placing the images in the public domain without any structure or index is highly inefficient. The documents must be transcribed, and thereby structured; only then can the interested public consume the knowledge they store.

In the EINES project, financed by Recercaixa, we focus on documents that contain demographic information, particularly population censuses. The project has brought together a multidisciplinary research team from the Computer Vision Center (computer engineers) and the Demographics Center (social scientists), both at the Autonomous University of Barcelona (UAB).
The aim of the project is to extract information from digitised images of historical censuses (more than a hundred years old) and to analyse it afterwards. Extraction has to go beyond literal transcription, identifying nominal entities (names, places, dates, professions, etc.). The resulting dataset, properly structured and indexed, is our door to the knowledge of the past. With this information, openly shared, professionals and citizens alike can trace the evolution of communities, genealogies, individual life trajectories, and more. At this point we can state that one image contains more than a thousand words: interpreting the image, in this context, means interpreting the past.

In the EINES project, the extraction of the images' content proceeds along two paths. First, computer vision technologies allow the documents to be 'read' automatically. It must be said that the technology is not yet mature enough to guarantee a fully automatic transcription; plenty of research remains to be done. This is where the intervention of citizens becomes valuable. Digital networks offer the possibility of 'democratising' the generation of knowledge through crowdsourcing platforms, and in this project several volunteer citizens are taking part in the extraction process. This should not be seen as altruistic labour, but as a trigger of social innovation: in the new innovation models, challenges are tackled within ecosystems that actively include citizens. When it comes to recovering memory from historical documentary sources, citizens, as living archives themselves, contribute complementary knowledge of great value.

In conclusion, new technologies are instruments that address the challenges generated by the exponential growth of information on the network and its transformation into knowledge.
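To make the idea of "going beyond literal transcription" concrete, here is a minimal sketch of how a transcribed census line might be turned into a structured, labelled record. The field layout and the helper `parse_census_line` are hypothetical illustrations, not the actual EINES schema or pipeline; real census records would need a far richer model (and, as the post notes, the transcription itself may come from vision software or from volunteers).

```python
# Hypothetical sketch: turning a literal transcription of one census row
# into a structured record with labelled nominal entities.
# The semicolon-separated format and the field names are assumptions
# for illustration only, not the real EINES data model.

def parse_census_line(line):
    """Split a transcribed census row into labelled fields."""
    fields = ["name", "birthplace", "profession", "age"]
    values = [v.strip() for v in line.split(";")]
    return dict(zip(fields, values))

record = parse_census_line("Maria Puig; Sant Feliu de Llobregat; weaver; 34")
print(record["profession"])  # weaver
```

Once records are structured like this, they can be indexed and cross-linked, which is what makes queries such as "all weavers recorded in Sant Feliu de Llobregat" or the reconstruction of family trajectories possible.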
In the world of images, the interpretation of content is fundamental, and computer vision emerges as an enabling technology. Furthermore, we must not overlook, but rather encourage, the involvement of users in this process. The new innovation models around so-called citizen science empower citizens and make them agents of knowledge generation. Digital humanities, and the interpretation of large volumes of archived document images through technology-assisted transcription, are a great example of this.

EINES Project in the media:

https://www.youtube.com/watch?v=cO1LVLiixRY

La Vanguardia, 25/05/2016, 'Sant Feliu de Llobregat recupera el legado de sus antepasados'
El Mundo, 26/05/2016, 'Sant Feliu desentraña su primera 'red social' que se remonta a 1828'
Cadena Ser, 04/06/2016. Listen to the radio clip here.

Related articles: Xarxes: Connecting the lives of our ancestors