
Lane detection based on dual attention mechanism

REN Feng-lei, ZHOU Hai-bo, YANG Lu, HE Xin

Citation: REN Feng-lei, ZHOU Hai-bo, YANG Lu, HE Xin. Lane detection based on dual attention mechanism[J]. Chinese Optics, 2023, 16(3): 645-653. doi: 10.37188/CO.2022-0033

Author biographies:

    REN Feng-lei (1991—), male, born in Cangzhou, Hebei Province, Ph.D., lecturer. He received his B.S. degree from Jilin University in 2015 and his Ph.D. degree from the Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences in 2020. His research interests include digital image processing, autonomous driving and visual environment perception. E-mail: renfenglei15@mails.ucas.edu.cn

    ZHOU Hai-bo (1973—), male, born in Zhaodong, Heilongjiang Province, Ph.D., professor and doctoral supervisor. He received his B.S. and M.S. degrees from Jiamusi University in 1998 and 2005, respectively, and his Ph.D. degree from Jilin University in 2009. His research interests include computer vision, artificial intelligence and intelligent robotics. E-mail: haibo_zhou@163.com

  • CLC number: TP394.1

Lane detection based on dual attention mechanism

Funds: Supported by the Key Project of the Tianjin Natural Science Foundation (No. 17JCZDJC30400) and the Special Project for Research and Development in Key Areas of Guangdong Province (No. 2019B090922002)
  • Abstract:

    To improve the performance of lane detection in complex scenarios such as occlusion by obstacles, this paper proposes a multi-lane detection algorithm based on a dual attention mechanism. First, a lane semantic segmentation network built on spatial (position) and channel attention modules produces a binary segmentation separating lane pixels from the background. Then, an HNet network is introduced; the perspective transformation matrix it outputs warps the segmentation map into a bird's-eye view, where curves are fitted and then inverse-transformed back to the original image space, yielding multi-lane detection. Finally, the region enclosed by the lane lines on either side of the image centerline is defined as the current driving lane. The proposed algorithm achieves 96.63% accuracy on the Tusimple dataset at a real-time speed of 134 frame/s, and a precision of 77.32% on the CULane dataset. Experimental results show that the algorithm detects multiple lane lines and the driving lane in real time across scenarios including obstacle occlusion, and its performance is significantly improved over existing algorithms.
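The post-processing pipeline described above (binary segmentation, perspective warp to a bird's-eye view, curve fitting, inverse warp) can be sketched as follows. This is an illustrative reconstruction, not the paper's code: the homography `H` is assumed to be given (in the paper it is predicted by HNet), and the polynomial order and function names are free choices of this sketch.

```python
import numpy as np

def fit_lane_in_birdseye(lane_pixels, H, order=2, samples=50):
    """Fit one lane in the bird's-eye view and map the curve back.

    lane_pixels: (N, 2) array of (x, y) coordinates of one lane's pixels
    taken from the binary segmentation mask.
    H: 3x3 perspective transformation matrix (assumed given here).
    """
    # Project the segmentation pixels into the bird's-eye view.
    pts = np.hstack([lane_pixels, np.ones((len(lane_pixels), 1))])
    warped = (H @ pts.T).T
    warped = warped[:, :2] / warped[:, 2:3]

    # In the top-down view lanes are near-vertical, so fit x = f(y);
    # fitting in this view is better conditioned than in the raw image.
    coeffs = np.polyfit(warped[:, 1], warped[:, 0], order)

    # Sample the fitted curve and warp it back to the image plane.
    ys = np.linspace(warped[:, 1].min(), warped[:, 1].max(), samples)
    xs = np.polyval(coeffs, ys)
    curve = np.stack([xs, ys, np.ones_like(ys)], axis=1)
    back = (np.linalg.inv(H) @ curve.T).T
    return back[:, :2] / back[:, 2:3]
```

The current driving lane can then be taken as the region between the two fitted curves closest to the image centerline, as described in the abstract.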

     

  • Figure 1.  Schematic diagram of lane detection

    Figure 2.  Schematic diagram of image semantic segmentation

    Figure 3.  Schematic diagram of the proposed lane detection algorithm

    Figure 4.  Diagram of atrous convolution (r = 1, 2 and 4 from left to right)
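For reference, the effective receptive field of the atrous convolution in Fig. 4 grows with the dilation rate r without adding weights; a minimal sketch of the span formula:

```python
def dilated_kernel_span(kernel_size=3, rate=1):
    """A dilated (atrous) convolution inserts rate-1 gaps between kernel
    taps, so a k x k kernel covers rate*(k-1)+1 pixels per axis while
    keeping only k*k weights."""
    return rate * (kernel_size - 1) + 1

# For the rates shown in Fig. 4, a 3x3 kernel spans 3, 5 and 9 pixels per axis:
spans = [dilated_kernel_span(3, r) for r in (1, 2, 4)]
```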

    Figure 5.  Schematic diagram of the position attention module

    Figure 6.  Schematic diagram of the channel attention module
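As a rough functional summary of the two modules in Figs. 5 and 6, the NumPy sketch below computes pixel-wise and channel-wise affinity re-weighting. It is a simplified assumption-laden illustration: the published dual attention network [16] also uses learned 1×1 convolutions to form query/key/value features and a learned residual scale, both omitted here.

```python
import numpy as np

def _softmax_rows(energy):
    # Numerically stable row-wise softmax.
    e = np.exp(energy - energy.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def position_attention(feat):
    """Position (spatial) attention: each pixel aggregates the features
    of all other pixels, weighted by pairwise feature similarity.
    feat: (C, H, W) feature map."""
    C, H, W = feat.shape
    x = feat.reshape(C, H * W)              # (C, N) with N = H*W
    attn = _softmax_rows(x.T @ x)           # (N, N) pixel affinities
    out = x @ attn.T                        # re-weighted features
    return out.reshape(C, H, W) + feat      # residual connection

def channel_attention(feat):
    """Channel attention: each channel map is re-weighted by its
    similarity to every other channel map."""
    C, H, W = feat.shape
    x = feat.reshape(C, H * W)
    attn = _softmax_rows(x @ x.T)           # (C, C) channel affinities
    out = attn @ x
    return out.reshape(C, H, W) + feat
```

In the segmentation network the two outputs are fused (e.g. summed) so that long-range spatial context and inter-channel dependencies are both captured.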

    Figure 7.  Lane detection results of the proposed algorithm on Tusimple

    Figure 8.  Lane detection results of the proposed algorithm on CULane

    Table 1.  Quantitative experimental results of the proposed algorithm on Tusimple

    Method            acc(%)   FP(%)   FN(%)   FPS
    SCNN[18]          96.53    6.17    1.80    7.5
    LaneNet[13]       96.38    7.80    2.44    52.6
    PointLaneNet[19]  93.36    9.42    9.33    115
    FastDraw[20]      95.20    7.60    4.50    90.3
    R-50-E2E[21]      96.04    3.11    4.09    -
    Ours              96.63    6.02    2.03    134

    Table 2.  Quantitative experimental results of the proposed algorithm on CULane

    Method        Normal   Crowd   Dazzle   Shadow   Noline
    SCNN[18]      90.60    69.70   58.50    66.90    43.40
    FastDraw[20]  85.90    63.60   57.00    69.90    40.60
    UFSD-18[1]    87.70    66.00   58.40    62.80    40.20
    UFSD-34[1]    90.70    70.20   59.50    69.30    44.40
    LaneATT[22]   91.17    72.71   65.82    68.03    49.13
    Ours          91.21    76.33   69.51    73.25    50.16

    Method        Arrow    Curve   Cross    Night    Total
    SCNN[18]      84.10    64.40   1990     66.10    71.60
    FastDraw[20]  79.40    65.20   7013     57.80    -
    UFSD-18[1]    81.00    57.90   1743     62.10    68.40
    UFSD-34[1]    85.70    69.50   2037     66.70    72.30
    LaneATT[22]   87.82    63.75   1020     68.58    75.13
    Ours          88.72    71.25   1265     70.73    77.32
  • [1] QIN Z Q, WANG H Y, LI X. Ultra fast structure-aware deep lane detection[C]. Proceedings of the 16th European Conference on Computer Vision, Springer, 2020: 276-291.
    [2] CHEN X D, AI D H, ZHANG J CH, et al. Gabor filter fusion network for pavement crack detection[J]. Chinese Optics, 2020, 13(6): 1293-1301. (in Chinese) doi: 10.37188/CO.2020-0041
    [3] REN F L, HE X, WEI ZH H, et al. Semantic segmentation based on DeepLabV3+ and superpixel optimization[J]. Optics and Precision Engineering, 2019, 27(12): 2722-2729. (in Chinese) doi: 10.3788/OPE.20192712.2722
    [4] YU ZH P, REN X ZH, HUANG Y Y, et al. Detecting lane and road markings at a distance with perspective transformer layers[C]. Proceedings of the 23rd International Conference on Intelligent Transportation Systems, IEEE, 2020: 1-6.
    [5] CHIU K Y, LIN S F. Lane detection using color-based segmentation[C]. Proceedings of the IEEE Intelligent Vehicles Symposium, IEEE, 2005: 706-711.
    [6] HUR J, KANG S N, SEO S W. Multi-lane detection in urban driving environments using conditional random fields[C]. Proceedings of 2013 IEEE Intelligent Vehicles Symposium (IV), IEEE, 2013: 1297-1302.
    [7] JUNG H, MIN J, KIM J. An efficient lane detection algorithm for lane departure detection[C]. Proceedings of 2013 IEEE Intelligent Vehicles Symposium (IV), IEEE, 2013: 976-981.
    [8] BORKAR A, HAYES M, SMITH M T. A novel lane detection system with efficient ground truth generation[J]. IEEE Transactions on Intelligent Transportation Systems, 2012, 13(1): 365-374. doi: 10.1109/TITS.2011.2173196
    [9] VAN GANSBEKE W, DE BRABANDERE B, NEVEN D, et al. End-to-end lane detection through differentiable least-squares fitting[C]. Proceedings of 2019 IEEE/CVF International Conference on Computer Vision Workshop, IEEE, 2019: 905-913.
    [10] LIU T, CHEN ZH W, YANG Y, et al. Lane detection in low-light conditions using an efficient data enhancement: light conditions style transfer[C]. Proceedings of 2020 IEEE Intelligent Vehicles Symposium, IEEE, 2020: 1394-1399.
    [11] CHANG D, CHIRAKKAL V, GOSWAMI S, et al. Multi-lane detection using instance segmentation and attentive voting[C]. Proceedings of the 19th International Conference on Control, Automation and Systems, IEEE, 2020: 1538-1542.
    [12] KIM J, LEE M. Robust lane detection based on convolutional neural network and random sample consensus[C]. Proceedings of the 21st International Conference on Neural Information Processing, Springer, 2014: 454-461.
    [13] NEVEN D, DE BRABANDERE B, GEORGOULIS S, et al. Towards end-to-end lane detection: an instance segmentation approach[C]. Proceedings of 2018 IEEE Intelligent Vehicles Symposium (IV), IEEE, 2018: 286-291.
    [14] LEE H, SOHN K, MIN D. Unsupervised low-light image enhancement using bright channel prior[J]. IEEE Signal Processing Letters, 2020, 27: 251-255. doi: 10.1109/LSP.2020.2965824
    [15] YOO S, LEE H S, MYEONG H, et al. End-to-end lane marker detection via row-wise classification[C]. Proceedings of 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, IEEE, 2020: 4335-4343.
    [16] FU J, LIU J, TIAN H J, et al. Dual attention network for scene segmentation[C]. Proceedings of 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, 2019: 3141-3149.
    [17] HE K M, ZHANG X Y, REN SH Q, et al. Deep residual learning for image recognition[C]. Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition, IEEE, 2016: 770-778.
    [18] PAN X G, SHI J P, LUO P, et al. Spatial as deep: spatial CNN for traffic scene understanding[C]. Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, AAAI Press, 2018: 7276-7283.
    [19] CHEN ZH P, LIU Q F, LIAN CH F. PointLaneNet: efficient end-to-end CNNs for accurate real-time lane detection[C]. Proceedings of 2019 IEEE Intelligent Vehicles Symposium (IV), IEEE, 2019: 2563-2568.
    [20] PHILION J. FastDraw: addressing the long tail of lane detection by adapting a sequential prediction network[C]. Proceedings of 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition, IEEE, 2019: 11574-11583.
    [21] YOO S, LEE H S, MYEONG H, et al. End-to-end lane marker detection via row-wise classification[C]. Proceedings of 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, IEEE, 2020: 4335-4343.
    [22] TABELINI L, BERRIEL R, PAIXÃO T M, et al. Keep your eyes on the lane: real-time attention-guided lane detection[C]. Proceedings of 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition, IEEE, 2021: 294-302.
    [23] CHEN X D, SHENG J, YANG J, et al. Ultrasound image segmentation based on a multi-parameter Gabor filter and multiscale local level set method[J]. Chinese Optics, 2020, 13(5): 1075-1084. (in Chinese) doi: 10.37188/CO.2020-0025
    [24] ZHOU W ZH, FAN CH, HU X P, et al. Multi-scale singular value decomposition polarization image fusion defogging algorithm and experiment[J]. Chinese Optics, 2021, 14(2): 298-306. (in Chinese) doi: 10.37188/CO.2020-0099
Publication history
  • Received: 4 March 2022
  • Revised: 6 April 2022
  • Published online: 16 June 2022
