Traffic light detection by fusing object detection and map info

dc.contributor.advisor: Matiisen, Tambet, supervisor
dc.contributor.advisor: Kull, Meelis, supervisor
dc.contributor.author: Shtym, Tetiana
dc.contributor.other: Tartu Ülikool. Loodus- ja täppisteaduste valdkond
dc.contributor.other: Tartu Ülikool. Arvutiteaduse instituut
dc.date.accessioned: 2023-09-21T13:21:57Z
dc.date.available: 2023-09-21T13:21:57Z
dc.date.issued: 2021
dc.description.abstract: To share streets with human drivers, self-driving cars must locate traffic lights and recognize their states. While recognizing the relevant traffic light takes little effort for a human driver, it is a challenging task for a self-driving car. Although detecting traffic lights is simple for state-of-the-art object detection methods, identifying which lane they apply to is non-trivial. The most common approach relies on the precise locations of traffic lights in a high-definition (HD) map, localization of the car, and the camera position with respect to the car. When the vehicle approaches a traffic light, the traffic lights from the HD map are projected onto the camera image. Regions that include traffic lights, or regions of interest (ROIs), are then extracted and fed to a classifier. To mitigate localization errors, the ROIs need to be enlarged; however, this can lead to imprecise classification because the bounding box might not capture the traffic light adequately. This thesis addresses the problem by introducing traffic light recognition that fuses object detection and HD-map information. The process is divided into three phases: obtain 2D traffic light ROIs by projecting 3D bounding boxes from the map onto the camera image; perform traffic light detection on the image to get 2D bounding boxes and traffic light states; and associate traffic lights with lanes by matching detected bounding boxes with the ROIs using the Intersection-over-Union metric (see the sketch after this record). The proposed method was integrated into Autoware.AI and tested on pre-recorded routes in Tallinn and Tartu. It achieved an accuracy of 93% and outperformed the approach currently used by Autoware.AI.
dc.identifier.uri: https://hdl.handle.net/10062/92344
dc.language.iso: eng
dc.publisher: Tartu Ülikool
dc.rights: openAccess
dc.rights: Attribution-NonCommercial-NoDerivatives 4.0 International
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/4.0/
dc.subject: datasets
dc.subject: neural networks
dc.subject: object detection
dc.subject: traffic light recognition
dc.subject: autonomous driving
dc.subject: HD-map
dc.subject: YOLOv3
dc.subject.other: magistritööd (Master's theses)
dc.subject.other: informaatika (informatics)
dc.subject.other: infotehnoloogia (information technology)
dc.subject.other: informatics
dc.subject.other: information technology
dc.title: Traffic light detection by fusing object detection and map info
dc.type: Thesis
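
The association phase described in the abstract can be illustrated with a short sketch. The code below is a minimal, hypothetical Python example, assuming projected map ROIs that carry a lane ID and detector outputs that carry a 2D box and a state; the data structures, function names, and the IoU threshold are illustrative assumptions, not the thesis's actual Autoware.AI implementation.

# Minimal sketch of the ROI/detection association step from the abstract.
# All names, data structures, and the IoU threshold are illustrative
# assumptions, not the thesis's actual Autoware.AI code.

from dataclasses import dataclass
from typing import Dict, List, Optional, Tuple

Box = Tuple[float, float, float, float]  # (x_min, y_min, x_max, y_max) in pixels


@dataclass
class MapROI:
    lane_id: int   # lane the traffic light applies to (from the HD map)
    box: Box       # 3D map bounding box projected into the camera image


@dataclass
class Detection:
    box: Box       # 2D box from the object detector (e.g. a YOLOv3-style model)
    state: str     # classified state, e.g. "RED", "YELLOW", "GREEN"
    score: float   # detector confidence


def iou(a: Box, b: Box) -> float:
    """Intersection-over-Union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0


def associate(rois: List[MapROI],
              detections: List[Detection],
              iou_threshold: float = 0.3) -> Dict[int, Optional[str]]:
    """Match each projected map ROI to the best-overlapping detection.

    Returns a mapping lane_id -> traffic light state, or None when no
    detection overlaps the ROI strongly enough (e.g. due to localization
    error or a missed detection).
    """
    states: Dict[int, Optional[str]] = {}
    for roi in rois:
        best: Optional[Detection] = None
        best_iou = iou_threshold
        for det in detections:
            overlap = iou(roi.box, det.box)
            if overlap > best_iou:
                best, best_iou = det, overlap
        states[roi.lane_id] = best.state if best else None
    return states

In practice, the ROIs would come from projecting the HD map's 3D traffic light boxes through the calibrated camera model, and the detections from the image-based detector; both inputs are taken as given here.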

Files

Original bundle
Name: Shtym_ComputerScience_2021.pdf
Size: 5.38 MB
Format: Adobe Portable Document Format
License bundle
Name: license.txt
Size: 1.71 KB
Format: Item-specific license agreed to upon submission