
Stopline and Lane Detection

Concept

For Year 2 of the Autodrive Challenge II (2022-2023), I will be serving as the Lane Detection lead at aUToronto. The team I lead is responsible for developing computer vision and deep learning approaches to stopline detection, lane detection, and depth estimation. I was invited to take on this role after working on a computer vision approach to stopline and lane detection in Year 4 of the Autodrive Challenge I (2021-2022) alongside Brian Lam and Amy Xin. Here are some of the projects I have done on this part of the self-driving perception stack.

Stopline Detection

Stoplines are found at stop signs and traffic intersections. They are essential to detect because they demarcate the legal boundary behind which the vehicle must come to a stop; failing to do so puts both the vehicle and pedestrians crossing at intersections at risk. The pipeline uses a series of computer vision techniques: a bird's-eye-view transformation using a calibrated homography matrix, histogram equalization, pixel and colour segmentation, Hough lines, trinarization, and logic to distinguish stoplines from other lines in the image.
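To make the geometry of the bird's-eye-view step concrete, here is a minimal NumPy sketch of how pixel coordinates are projected through a calibrated 3x3 homography. The function name and the example matrix are illustrative assumptions on my part, not the team's actual code or calibration.

```python
import numpy as np

def warp_points(H, pts):
    """Project Nx2 pixel coordinates through a 3x3 homography H.

    pts holds (u, v) image coordinates; the result holds the
    corresponding (x, y) coordinates in the bird's-eye-view plane.
    """
    pts = np.asarray(pts, dtype=float)
    ones = np.ones((pts.shape[0], 1))
    homog = np.hstack([pts, ones])        # lift to homogeneous (u, v, 1)
    mapped = homog @ H.T                  # apply the homography
    return mapped[:, :2] / mapped[:, 2:3] # divide out the scale factor w

# Hypothetical "calibrated" homography: a pure translation, for illustration.
H = np.array([[1.0, 0.0,  5.0],
              [0.0, 1.0, -3.0],
              [0.0, 0.0,  1.0]])
bev = warp_points(H, [[10.0, 20.0]])  # -> [[15.0, 17.0]]
```

In the real pipeline the homography comes from calibration against known ground-plane points, which is why small calibration errors distort the whole BEV image.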

Stopline detection demonstration at U of T Aerospace Institute (UTIAS) from Year 1 of the Autodrive Challenge II.

Attribution: aUToronto

During the competition, we are required to identify stoplines up to 10 meters away. I stress-tested the system by recreating this scenario and checking whether detections held up when the stopline was occluded (by me walking across it). Notice how sensitive the classical computer vision approach is to the quality of the bird's-eye-view (BEV) transformation: small distortions in the BEV image, paired with the occlusions, cause the detections to flicker for a moment.

On the left is the stopline detection system being tested under occlusion by a pedestrian, and on the right is the bird's-eye view of the scene generated using a homography matrix.

Attribution: aUToronto

The stopline detection output shown below is from Year 1 of the Autodrive Challenge II (2022). The stopline detector was tested at the competition near its maximum distance and worked well, although there is still some flickering. This year the competition introduced a new scoring method that used the CAN communication protocol, and many of the systems we built had to be ported over to or made compatible with this new scoring interface. Unfortunately, due to some last-minute complications, neither the stopline detector nor the lane detector was integrated with it, so while both were operational we did not receive scores for the lane line and stopline components of the challenges. Nevertheless, we still placed 1st in every category of the competition except two sub-challenges and were the overall winners in 2022. For the coming year, I'll be taking ownership of the lane and stopline detection systems and will ensure they are integrated into the new autonomous vehicle system being built, so that they can be evaluated and so that we can localize the self-driving car in dynamic environments.

The stopline detection system at Y1 of the Autodrive Challenge II in MCity, Michigan in 2022.

Attribution: aUToronto

Lane Detection

The approach developed for lane detection was a classical computer vision pipeline consisting of a perspective transform, edge pair detection, and line marking candidate extraction.


The computer vision-based lane detection pipeline.

Attribution: aUToronto
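The edge-pair idea behind the pipeline can be sketched in a few lines: a painted lane marking shows up in an image row as a rising intensity edge followed, within a plausible paint width, by a falling edge. The following NumPy sketch is illustrative only; the function name, thresholds, and widths are my assumptions, not the team's implementation.

```python
import numpy as np

def lane_marking_candidates(row, grad_thresh=50, min_w=3, max_w=12):
    """Find bright-stripe candidates (dark -> bright -> dark) in one image row.

    Returns (start, end) column-index pairs where a rising edge is followed
    by a falling edge within [min_w, max_w] pixels, i.e. a plausible lane
    marking width in the bird's-eye view.
    """
    row = np.asarray(row, dtype=float)
    grad = np.diff(row)                              # horizontal gradient
    rising = np.where(grad > grad_thresh)[0]         # dark -> bright edges
    falling = np.where(grad < -grad_thresh)[0]       # bright -> dark edges
    pairs = []
    for r in rising:
        later = falling[falling > r]                 # nearest falling edge after r
        if later.size and min_w <= later[0] - r <= max_w:
            pairs.append((int(r), int(later[0])))
    return pairs

# Synthetic row: dark road (20) with one bright 6-pixel marking at columns 10-15.
row = np.full(30, 20.0)
row[10:16] = 220.0
candidates = lane_marking_candidates(row)  # -> [(9, 15)]
```

Scanning many rows of the perspective-transformed image and grouping the surviving candidates is what yields the line marking candidates the diagram above refers to.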

Lane detection demonstration at UTIAS from Year 1 of the Autodrive Challenge II.

Attribution: aUToronto

The following are test scenarios from MCity, Michigan, the competition site for Year 1 of the Autodrive Challenge II, recorded during my visit with the team in June 2022. The lane detector uses a classical approach to detect lane lines and as such is highly susceptible to varying lighting conditions and the quality of the paint on the road. While the perception system was static in Year 1 of the competition, in future years the vehicle will be dynamic, so a more robust method for detecting lane lines is necessary. This motivates the development of deep learning solutions to the lane detection problem.

The lane detection system at Y1 of the Autodrive Challenge II in MCity, Michigan in 2022.

Attribution: aUToronto

Another lane detection output at the 2022 competition site.

Attribution: aUToronto

Lane detection during the Dynamic Obstacles Challenge when the deer mannequin passes by.

Attribution: aUToronto

In Year 1 of the Autodrive Challenge II, the perception system we were testing was static. In future years, however, it will be mounted on an electric vehicle and will be dynamic. The following is a demonstration of the lane detection system while moving. While it is able to identify lane lines in the ego-lane of the sensors, it cannot robustly identify lane lines elsewhere and is also limited to areas of the road with well-painted lane lines. These shortcomings will need to be addressed in the coming year to build a more robust lane detection system.

Lane detection when the perception system moves irregularly (or makes a U-turn).

Attribution: aUToronto

After the Y1 competition, during the summer before I was appointed lane detection lead at aUToronto, I took APS360: Artificial Intelligence Fundamentals, in which we spent the summer developing an AI system to solve a problem of our choosing. I focused on lane detection alongside my teammates Phil Cuvin and Kelvin Cui. The video and report below detail the classical and deep learning approaches we explored.

A video explanation of the APS360 lane detection project I did with Phil and Kelvin.

The next chapter of the lane detection and stopline detection story will be written during Year 2 of the Autodrive Challenge II (during 2022-2023). Stay tuned for what's next!
