Affiliation: School of Media, Design and Technology, University of Bradford, U.K.
With the recent growth in the number and complexity of imaging devices comes a vast amount of image data, particularly from data-intensive devices such as hyperspectral cameras and MRI scanners. Finding effective ways to analyse and visualise this data is a key component of the ‘big-data’ challenge.
In this talk I will discuss the specific problem of multi-channel image fusion: how to fuse images with multiple channels of data into a single output image for display. I will describe the problem formulation and existing state of the art, and will focus on a novel fusion algorithm, developed at the University of East Anglia, called “Spectral Edge”.
The Spectral Edge method is grounded in the observation that human perception of images is driven by information at edges, and that the essential content of an image is therefore carried by its gradient, or local contrast. The goal of the method is then to generate an output image whose local contrast matches that of the multi-channel input as closely as possible. Specifically, the approach uses a constrained contrast-mapping paradigm, whereby the contrast (structure tensor) of a multi-channel image is mapped exactly to a 3-channel gradient field, with constraints on the output colour provided by an initial RGB rendering. Our formulation results in a closed-form solution, which leads to a fast and efficient algorithm. Finally, I will discuss different approaches for reintegrating the resultant gradient field to generate output images.
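To make the structure-tensor idea concrete, here is a minimal sketch in Python/NumPy of the per-pixel structure tensor (in the sense of Di Zenzo) for an N-channel image, which is the contrast quantity the method matches. The function name and layout are my own illustration, not code from the Spectral Edge implementation; the exact closed-form tensor mapping of the talk is not reproduced here.

```python
import numpy as np

def structure_tensor(img):
    """Per-pixel 2x2 structure tensor of an H x W x N image.

    Returns (Jxx, Jxy, Jyy), each H x W, where the tensor
    J = sum_i grad(c_i) grad(c_i)^T is summed over the N channels.
    Its trace and eigenvalues summarise local multi-channel contrast.
    """
    # Channel-wise spatial gradients (central differences).
    gy, gx = np.gradient(img, axis=(0, 1))
    Jxx = (gx * gx).sum(axis=2)
    Jxy = (gx * gy).sum(axis=2)
    Jyy = (gy * gy).sum(axis=2)
    return Jxx, Jxy, Jyy
```

Because J is a sum of outer products it is positive semi-definite at every pixel, so its trace gives a natural scalar measure of local contrast that an output RGB rendering can be made to match.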
The approach is generic in that it can map any N-D image data to any M-D output, and can be used in a variety of applications with the same basic algorithm. In this talk I will focus on the problem of mapping N-D inputs to 3-D (RGB) outputs. I will present results and comparisons with competing methods in several applications, including hyperspectral remote sensing, fusion of colour and near-infrared images, and colour visualisation of MRI diffusion-tensor images.
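The final step mentioned above, reintegrating a gradient field into a displayable image, is itself a well-studied problem. One common approach is a least-squares (Poisson) reconstruction; the sketch below solves it with an FFT under a periodic-boundary assumption. This is a generic illustration of one reintegration strategy, not the specific reintegration schemes discussed in the talk.

```python
import numpy as np

def poisson_reintegrate(gx, gy):
    """Recover an image whose gradients best match (gx, gy) in the
    least-squares sense, via an FFT-based Poisson solve.
    Assumes periodic boundaries; the mean level is set to zero.
    """
    H, W = gx.shape
    # Divergence of the target field (backward differences),
    # which equals the discrete Laplacian of the sought image.
    div = (gx - np.roll(gx, 1, axis=1)) + (gy - np.roll(gy, 1, axis=0))
    # Eigenvalues of the periodic discrete Laplacian in Fourier space.
    fy = np.fft.fftfreq(H)[:, None]
    fx = np.fft.fftfreq(W)[None, :]
    denom = (2 * np.cos(2 * np.pi * fx) - 2) + (2 * np.cos(2 * np.pi * fy) - 2)
    denom[0, 0] = 1.0          # avoid division by zero at the DC term
    U = np.fft.fft2(div) / denom
    U[0, 0] = 0.0              # the mean is unconstrained; fix it to zero
    return np.real(np.fft.ifft2(U))
```

For a 3-channel target gradient field, this solve is simply applied per channel to produce the output RGB image.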