It’s all in the contrast: colour constancy for computers
In their latest paper in IEEE Transactions on Pattern Analysis and Machine Intelligence, Dr. Arash Akbarinia and Dr. Alejandro Párraga present their ASM (Adaptive Surround Modulation) model, a biologically inspired approach to colour constancy and its application in computers.

Colour is essential to our perception of the world, as it allows us to segment and differentiate objects from each other and from the background. Although illumination changes dramatically during the day, from reddish (sunrise and sunset) to bluish (midday), we perceive objects as having the same colours. This property of our visual system is called colour constancy, and it is artificially replicated in digital cameras (“white balance”).

A branch of computer vision at CVC tries to understand the human visual cortex and use this knowledge to improve the performance of artificial brains, in our case, artificial vision. Why is this important? Under controlled circumstances, like the shop floor of an industrial building where light conditions don’t change, computer vision software with fixed parameters can work optimally. But what happens when we go outside? Take autonomous cars as an example: they should work in natural daylight, under nocturnal artificial illumination, inside tunnels, in cities, in open landscapes, and on cloudy, foggy or rainy days. With fixed parameters, the car’s visual system is likely to go nuts.

Using ideas inspired by the cortical physiology of primates, the authors have found a robust solution that removes illumination variations from images. Moreover, the solution (named ASM) is computationally inexpensive, allowing its implementation in small devices such as webcams or mobile phones. ASM is automatic and parameter-free, and since it is inspired by biological findings it provides insight into the functional properties of the brain that is likely to interest the neurophysiology and visual perception communities.
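To see what “removing the illuminant” means in its simplest form, here is a minimal sketch of the classic grey-world white balance found in digital cameras. It is not the ASM model, just the textbook baseline the article alludes to: assume the average colour of the scene should be grey, estimate the illuminant from the per-channel mean, and divide it out.

```python
import numpy as np

def grey_world_white_balance(image):
    """Grey-world white balance: estimate the illuminant as the mean
    colour of the scene and divide it out, so a neutral grey surface
    stays grey under any light.

    image: float array of shape (H, W, 3), values in [0, 1].
    """
    # The per-channel mean approximates the colour of the illuminant.
    illuminant = image.reshape(-1, 3).mean(axis=0)
    # Rescale each channel so the corrected image has a neutral mean.
    corrected = image * (illuminant.mean() / illuminant)
    return np.clip(corrected, 0.0, 1.0)

# A grey card photographed under reddish light: after correction,
# all three channels are equal again.
reddish_grey = np.clip(np.full((4, 4, 3), 0.5) * np.array([1.2, 1.0, 0.8]), 0, 1)
balanced = grey_world_white_balance(reddish_grey)
```

The grey-world assumption fails exactly where fixed parameters fail in general, e.g. a scene dominated by one colour, which is why adaptive, scene-aware models such as ASM are interesting.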
How the model works

“It’s all in the contrast”, explains Dr. Alejandro Párraga, second author of the paper. “Contrast greatly influences the appearance of colours in a scene.” “If you have black and white next to each other, you have very high contrast, whereas if you have two whites next to each other, you have less contrast and thus need less adaptation”, Dr. Akbarinia continues. “Neurons know that, so we modelled colour constancy based on physiological findings, overlapping two asymmetric Gaussian functions whose kernels and weights adapt according to centre-surround contrasts, mimicking the way primate brains actually work.”

In visual receptive fields, we can identify three areas: the centre, the dual-role area, and the surround; these are the regions over which a neuron integrates information from the scene. If a scene’s contrast is low, the visual cortex effectively sees two areas: the centre and a large surround (the surround having taken over the dual-role area in between); if the scene’s contrast is high, the brain again sees two areas, but this time a large centre (taking over the dual-role area) and a much smaller surround. “This is what we are introducing in image processing”, explains Dr. Akbarinia. “We are replicating brain mechanisms in order to improve a computer’s response to changes in illumination across different scenes, without manually tuning parameters for each dataset.”

Even the best computational models, convolutional neural networks (CNNs), don’t adapt their parameters to the scene. We know that one reason our visual system is so robust is this ability to adapt its receptive fields (its parameters) to the content of the scene. Therefore, in the future, Dr. Akbarinia and Dr. Párraga are interested in seeing whether approaches similar to ASM can improve CNN results. In short, the model compensates for contrast in each situation in order to maintain colour constancy.
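The centre-surround idea above can be sketched in a few lines. This is an illustrative toy only, not the published ASM model (which uses asymmetric Gaussians and a more elaborate contrast-dependent adaptation of kernels and weights): a centre Gaussian and a surround Gaussian are combined, with the surround’s weight shrinking where local contrast is high and growing where it is low, echoing the adaptation described in the article.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def adaptive_surround_response(channel, sigma_centre=1.0, sigma_surround=5.0):
    """Toy centre-surround operator whose surround influence adapts to
    local contrast (illustrative only; the real ASM model differs).

    channel: 2-D float array (one colour channel), values in [0, 1].
    """
    centre = gaussian_filter(channel, sigma_centre)
    surround = gaussian_filter(channel, sigma_surround)
    # Local contrast: standard deviation within the surround neighbourhood.
    local_sq = gaussian_filter(channel ** 2, sigma_surround)
    contrast = np.sqrt(np.clip(local_sq - surround ** 2, 0.0, None))
    # High contrast -> the centre dominates (small surround weight);
    # low contrast -> a large, strong surround, as described in the text.
    surround_weight = 1.0 / (1.0 + 10.0 * contrast)
    return centre - surround_weight * surround
```

On a perfectly uniform patch (zero contrast) the surround fully cancels the centre, so the response is flat; near an edge the surround is down-weighted and the centre response survives, which is the adaptive behaviour the quotes describe.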
Of course, being automatic, the model isn’t perfect, but it is highly competitive with the state of the art, and it can be an efficient solution for small camera devices in outdoor locations, with multiple uses in industry, research, transport, monitoring and leisure.

The great dress debate: blue and black or white and gold?

This topic brings us back to the dress picture that went wildly viral in 2015. Was it blue and black or white and gold? As we now know, it was blue and black, but many viewers saw a white and gold dress. It all comes down to the viewer’s visual perception, which interprets the illuminant in a particular way. “In this picture, we have no reference points for where the light is coming from, leading to ambiguity and giving our brain the chance to fill the gap”, states Dr. Alejandro Párraga, “and each brain will do so differently.” As Sussex colour group PhD student Maria Rogers put it for The Guardian: “The Dress is a brilliant example of how breaking the perceptual system helps us to learn more about how our brains work.”

At CVC, understanding the visual cortex is the main objective of the Neurobit team, of which Dr. Párraga and Dr. Akbarinia are part, with the aim of incorporating this knowledge into artificial visual systems. The ASM model is a step closer to actually mimicking human vision, and thus to making computer vision more dynamic and, most importantly, robust.