Road traffic crashes have become a major cause of death and injury throughout the world. Hence, in order to improve road safety, the automotive industry is moving towards the development of vehicles with autonomous functionalities such as lane keeping, maintaining a safe distance between vehicles, or regulating speed according to traffic conditions. A key component of these systems is vision-based road detection, which aims to detect the free road surface ahead of the moving vehicle. Detecting the road with a monocular vision system is very challenging, since the road is an outdoor scene imaged from a mobile platform. The detection algorithm must therefore cope with continuously changing imaging conditions: the presence of different objects (vehicles, pedestrians), different environments (urban, highway, off-road), different road types (shape, color), and varying imaging conditions (changing illumination, different viewpoints, and changing weather).

Therefore, in this thesis we focus on vision-based road detection using a single color camera. More precisely, we first analyze and group pixels according to their low-level properties, presenting two different approaches that exploit color and photometric invariance. We then focus the research on exploiting context information. This information provides relevant knowledge about the road, derived not from pixel features of road regions but from a semantic analysis of the scene. Along this line, we present two different approaches to infer the geometry of the road ahead of the moving vehicle. Finally, we combine these context-based and appearance-based (color) approaches to improve the overall performance of road detection algorithms.
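To make the photometric-invariance idea concrete, the following is a minimal sketch, not the method developed in the thesis: pixels are mapped to normalized rg chromaticity, which is invariant to illumination intensity (e.g. shadows on the road), and then grouped by similarity to a seed region assumed to be road (typically the area just in front of the vehicle). The function names, the seed-region convention, and the threshold `tau` are illustrative assumptions.

```python
import numpy as np

def rg_chromaticity(image):
    """Map an RGB image (H, W, 3) to normalized rg chromaticity.

    Chromaticity r = R/(R+G+B), g = G/(R+G+B) is invariant to
    illumination intensity, so shaded and sunlit road pixels map
    to nearly the same value (a simple photometric invariant).
    """
    image = image.astype(np.float64)
    s = image.sum(axis=2, keepdims=True)
    s[s == 0] = 1.0  # avoid division by zero on black pixels
    chrom = image / s
    return chrom[..., :2]  # keep r and g; b = 1 - r - g is redundant

def group_road_pixels(image, seed_region, tau=0.05):
    """Label pixels whose chromaticity lies within distance `tau`
    of the mean chromaticity of a seed region (r0, r1, c0, c1),
    e.g. the patch just ahead of the vehicle, assumed to be road.
    Returns a boolean road mask of shape (H, W).
    """
    chrom = rg_chromaticity(image)
    r0, r1, c0, c1 = seed_region
    seed_mean = chrom[r0:r1, c0:c1].reshape(-1, 2).mean(axis=0)
    dist = np.linalg.norm(chrom - seed_mean, axis=2)
    return dist < tau
```

Because chromaticity discards intensity, a shadowed stretch of asphalt is grouped with the sunlit road, while a differently colored obstacle is rejected even at similar brightness.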
The qualitative and quantitative results presented in this thesis on real-world driving sequences show that the proposed methods are robust to varying imaging conditions, road types, and scenarios, going beyond the state of the art.