Year by year, vehicles are equipped with increasingly smart driver assistance systems to improve driving safety. One direction of safety improvement is the detection and recognition of the driving environment, such as lane detection, traffic sign recognition, and pedestrian and vehicle detection. On-board pedestrian detection is a very challenging task because the driving environment is highly dynamic: humans appear in a wide variety of clothing, illumination, size, speed, and distance from the vehicle. Many pedestrian detection methods have been proposed whose core element is classification. Most of these methods rely on a sliding-window search to localize humans in an image, so it is important to select appropriate image regions where a human might appear. The easiest and most popular approach is to scan the whole image at all possible scales, but such methods usually produce a large number of false positives and waste computing resources because many inappropriate regions are checked. In this thesis we develop a method that reduces the search space in pedestrian detection by using simple properties of projective geometry, both when camera parameters are available and when they are not. We demonstrate the efficiency of our method on a public dataset with known camera parameters and on a self-captured dataset without registered camera parameters. The self-captured dataset consists of four on-board driving video sequences acquired during day and night, with labeled pedestrian locations as ground truth, and it will be made available online.
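To illustrate the kind of projective-geometry constraint involved (a hedged sketch, not the thesis' exact method): under a simple pinhole camera model with a known camera height and horizon row, the ground plane fixes how tall a pedestrian should appear at each image row, so the sliding-window search can be limited to one scale per row instead of all scales everywhere. All parameter values below (focal length, camera height, horizon row, person height) are illustrative assumptions.

```python
def expected_pedestrian_height_px(v_foot, f=800.0, cam_h=1.5,
                                  horizon_v=240.0, person_h=1.7):
    """Expected pixel height of a person whose feet are at image row v_foot.

    Similar triangles on the ground plane give the distance
    Z = f * cam_h / (v_foot - horizon_v), so the projected height is
    f * person_h / Z = person_h * (v_foot - horizon_v) / cam_h.
    All parameters are hypothetical example values.
    """
    if v_foot <= horizon_v:
        # Feet at or above the horizon: inconsistent with a person
        # standing on the ground plane, so no window is generated.
        return None
    return person_h * (v_foot - horizon_v) / cam_h

# Instead of checking every scale at every position, scan only windows
# whose height matches the row of their bottom edge.
for v in (300, 400, 480):
    print(round(expected_pedestrian_height_px(v), 1))
```

A window whose size disagrees with this row-to-scale relation can be discarded without running the classifier, which is one way such geometric reasoning cuts both false positives and computation.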