Virtual Scenarios for Pedestrian Detection

Can a pedestrian appearance model learnt in virtual scenarios work successfully for pedestrian detection in real images?

Detecting pedestrians in images is a key functionality for avoiding vehicle-to-pedestrian collisions. The most promising detectors rely on appearance-based pedestrian classifiers trained with labelled samples. This paper addresses the following question: can a pedestrian appearance model learnt in virtual scenarios work successfully for pedestrian detection in real images? (Fig. 1). Our experiments suggest a positive answer, which is a new and relevant conclusion for research in pedestrian detection. More specifically, we record training sequences in virtual scenarios and then learn appearance-based pedestrian classifiers from them using HOG features and a linear SVM. We test these classifiers on a publicly available pedestrian detection benchmarking dataset provided by Daimler AG, which contains real-world images acquired from a moving car. The obtained result is compared with that of a classifier learnt from samples taken from real images. The comparison reveals that, although the virtual samples were not specially selected, both virtual- and real-world training give rise to classifiers of similar performance.
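The training pipeline described above (HOG descriptors fed to a linear SVM) can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: it assumes the canonical 64x128-pixel HOG detection window of Dalal and Triggs, and it substitutes random noise for the virtual pedestrian crops and background negatives, so it only shows the shape of the pipeline.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

def make_windows(n):
    # Stand-in for cropped training windows (128 rows x 64 cols);
    # in the paper these would be pedestrian / background crops
    # taken from the recorded virtual sequences.
    return rng.random((n, 128, 64))

pos, neg = make_windows(20), make_windows(20)

def descriptors(windows):
    # One HOG descriptor per window: 9 orientation bins, 8x8-pixel
    # cells, 2x2-cell blocks -> 3780 features for a 128x64 window.
    return np.array([
        hog(w, orientations=9, pixels_per_cell=(8, 8),
            cells_per_block=(2, 2), block_norm="L2-Hys")
        for w in windows
    ])

X = np.vstack([descriptors(pos), descriptors(neg)])
y = np.array([1] * len(pos) + [0] * len(neg))  # 1 = pedestrian

# Linear SVM classifier over the HOG features; C is a hypothetical
# choice, not a value reported in the paper.
clf = LinearSVC(C=0.01).fit(X, y)
```

A trained classifier of this form would then be applied in a sliding-window fashion over full frames at test time, scoring each window with `clf.decision_function`.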

Some examples from the virtual and Daimler datasets:

A virtual frame with its automatically generated groundtruth:

Some results of the Daimler (top) and virtual (bottom) classifiers:

Links

Related Publications