
X-ray security inspection images classification combined octave convolution and bidirectional GRU

WU Hai-bin, WEI Xi-ying, WANG Ai-li, Yuji Iwahori

Citation: WU Hai-bin, WEI Xi-ying, WANG Ai-li, YUJI Iwahori. X-ray security inspection images classification combined octave convolution and bidirectional GRU[J]. Chinese Optics, 2020, 13(5): 1138-1146. doi: 10.37188/CO.2020-0073


doi: 10.37188/CO.2020-0073
Funds: Supported by the National Natural Science Foundation of China (No. 61671190)
Detailed information
    Author biographies:

    WU Hai-bin (1977—), male, born in Shanghai, Ph.D., professor. He received his M.S. degree from Harbin Institute of Technology in 2002 and his Ph.D. degree from Harbin University of Science and Technology in 2008. He is currently a professor at the School of Measurement-Control Technology and Communication Engineering, Harbin University of Science and Technology. His research interests include machine vision, medical virtual reality, and deep-learning-based image classification. E-mail: woo@hrbust.edu.cn

    WANG Ai-li (1979—), female, born in Tianjin, Ph.D., associate professor. She received her Ph.D. degree from Harbin Institute of Technology in 2008. She is currently an associate professor at the School of Measurement-Control Technology and Communication Engineering, Harbin University of Science and Technology. Her research interests include machine vision and deep-learning-based image classification. E-mail: aili925@hrbust.edu.cn

  • CLC number: TP391.4

  • Abstract: To address the low accuracy and slow speed of active-vision security inspection methods, which make them unsuitable for real-time transportation security screening, an X-ray security inspection image classification method is proposed that combines octave convolution (OctConv) with an attention-based bidirectional gated recurrent unit (GRU) neural network. First, octave convolution replaces conventional convolution to split the input feature maps into high- and low-frequency components and to reduce the resolution of the low-frequency features, which extracts features from X-ray security images effectively while reducing spatial redundancy. Second, an attention-based bidirectional GRU dynamically learns and adjusts feature weights to improve the classification accuracy for prohibited items. Finally, experiments on the public SIXray dataset show that, on 8000 test samples, the overall classification accuracy (ACC), area under the ROC curve (AUC), and positive-class precision (PRE) reach 98.73%, 91.39%, and 85.44%, respectively, with a detection time of 36.80 s. Compared with current mainstream models, the proposed method effectively improves both the accuracy and the speed of prohibited-item classification in X-ray security images.
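The abstract chains a convolutional feature extractor with a recurrent (BiGRU) classifier, but does not spell out how the 2-D feature maps become a sequence. One common convention is to treat each row of the final feature map as a time step. A minimal sketch of that glue, assuming PyTorch; the tensor shapes are illustrative assumptions, not the authors' configuration.

```python
import torch

# Assume the OctConv backbone produced a feature map of shape (batch, channels, H, W).
feats = torch.randn(8, 256, 12, 12)  # illustrative shapes only

# One possible convention: each of the H rows becomes a time step of C*W features.
seq = feats.permute(0, 2, 1, 3).reshape(8, 12, 256 * 12)  # (batch, steps, feat_dim)
print(seq.shape)  # torch.Size([8, 12, 3072])
```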

     

  • Figure 1. Block diagram of X-ray security image classification algorithm

    Figure 2. The structure of octave convolution
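The abstract describes octave convolution as splitting feature maps into a high-frequency branch at full resolution and a low-frequency branch at reduced resolution. A minimal sketch of such a layer, assuming PyTorch; the split ratio `alpha` and all channel sizes are illustrative assumptions, and the first/last-layer variants of OctConv are omitted, so this is not the authors' exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class OctConv2d(nn.Module):
    """Simplified octave convolution: the feature map is carried as two tensors,
    a high-frequency part at full resolution and a low-frequency part at half
    resolution; the four cross paths are summed."""
    def __init__(self, in_ch, out_ch, kernel_size=3, alpha=0.5):
        super().__init__()
        self.lo_in, self.lo_out = int(alpha * in_ch), int(alpha * out_ch)
        self.hi_in, self.hi_out = in_ch - self.lo_in, out_ch - self.lo_out
        pad = kernel_size // 2
        self.hh = nn.Conv2d(self.hi_in, self.hi_out, kernel_size, padding=pad)  # high -> high
        self.hl = nn.Conv2d(self.hi_in, self.lo_out, kernel_size, padding=pad)  # high -> low
        self.lh = nn.Conv2d(self.lo_in, self.hi_out, kernel_size, padding=pad)  # low  -> high
        self.ll = nn.Conv2d(self.lo_in, self.lo_out, kernel_size, padding=pad)  # low  -> low

    def forward(self, x_hi, x_lo):
        # x_hi: (B, hi_in, H, W); x_lo: (B, lo_in, H/2, W/2)
        out_hi = self.hh(x_hi) + F.interpolate(self.lh(x_lo), scale_factor=2, mode="nearest")
        out_lo = self.ll(x_lo) + self.hl(F.avg_pool2d(x_hi, 2))
        return out_hi, out_lo

# Example: out_hi, out_lo = OctConv2d(256, 256)(torch.randn(1, 128, 64, 64),
#                                               torch.randn(1, 128, 32, 32))
```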

    Figure 3. The structure of double-layer BiGRU
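Figure 3 refers to a two-layer bidirectional GRU; combined with the attention mechanism mentioned in the abstract, a classification head of this kind can be sketched as below. This assumes PyTorch; the feature dimension, hidden size, class count, and the simple per-step attention pooling are illustrative assumptions rather than the published architecture.

```python
import torch
import torch.nn as nn

class AttentionBiGRU(nn.Module):
    """Two-layer bidirectional GRU whose step outputs are pooled by learned
    attention weights before classification."""
    def __init__(self, feat_dim=256, hidden=128, num_classes=5):
        super().__init__()
        self.bigru = nn.GRU(feat_dim, hidden, num_layers=2,
                            batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)        # one score per time step
        self.fc = nn.Linear(2 * hidden, num_classes)

    def forward(self, x):                           # x: (batch, steps, feat_dim)
        h, _ = self.bigru(x)                        # h: (batch, steps, 2*hidden)
        w = torch.softmax(self.attn(h), dim=1)      # attention weights over steps
        ctx = (w * h).sum(dim=1)                    # weighted sum -> (batch, 2*hidden)
        return self.fc(ctx)                         # class logits

# Example: logits = AttentionBiGRU()(torch.randn(8, 12, 256))
```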

    Figure 4. The SIXray dataset

    Table 1. Sample distribution in SIXray dataset

              Positive samples (8929)                        Negative samples
    Gun       Knife     Wrench    Pliers    Scissors
    3131      1943      2199      3961      983              1050302

    Table 2. Comparison of each class before and after data augmentation

    Class      Stage         Negative samples   Positive samples   Imbalance ratio
    Gun        Before aug.   72255              2705               26.27
    Gun        After aug.    89672              12659              7.08
    Knife      Before aug.   73212              1748               41.88
    Knife      After aug.    93723              8608               10.89
    Wrench     Before aug.   72948              2012               36.26
    Wrench     After aug.    92380              9951               9.28
    Pliers     Before aug.   71524              3436               20.82
    Pliers     After aug.    85574              16757              5.10
    Scissors   Before aug.   74153              807                91.89
    Scissors   After aug.    99760              2571               38.80
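The imbalance ratio in Table 2 is the number of negative samples divided by the number of positive samples; a one-line check against the knife row before augmentation (73212 / 1748 ≈ 41.88), in Python, purely as an illustration.

```python
negatives, positives = 73212, 1748                         # Table 2, knife class, before augmentation
print(f"Imbalance ratio: {negatives / positives:.2f}")     # 41.88
```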

    Table 3. Comparison of ACC (%) for different network models

    Method            Gun     Knife   Wrench   Pliers   Scissors   Average
    InceptionV3       94.63   87.52   88.97    80.50    96.95      89.71
    VGG19             97.88   98.36   97.48    96.03    97.33      97.42
    ResNet            98.36   99.20   98.16    96.10    97.80      97.92
    DenseNet          98.69   99.25   98.18    96.16    97.65      97.99
    STN-DenseNet      99.15   98.73   97.52    96.32    98.46      98.03
    OnlyBiGRU         98.77   99.40   97.73    94.37    99.14      97.88
    CNN-ABiGRU        98.89   99.42   98.89    97.07    98.96      98.65
    OctConv-ABiGRU    98.60   99.25   99.10    97.50    99.20      98.73

    Table 4. Comparison of AUC (%) for different network models

    Method            Gun     Knife   Wrench   Pliers   Scissors   Average
    InceptionV3       63.34   54.57   51.33    52.92    50.74      54.57
    VGG19             93.34   89.03   77.49    76.57    71.08      81.50
    ResNet            94.06   88.68   76.00    73.92    60.45      78.64
    DenseNet          93.91   90.37   72.59    74.65    61.08      78.52
    STN-DenseNet      95.69   93.58   75.60    76.98    65.09      81.39
    OnlyBiGRU         92.73   93.90   68.03    73.33    89.42      83.48
    CNN-ABiGRU        93.96   93.94   82.22    80.09    87.99      87.65
    OctConv-ABiGRU    91.53   94.59   87.84    86.15    96.70      91.39

    Table 5. Comparison of detection time for different network models

    Method            Parameters (million)   Model size (MB)   Detection time (s)
    VGG19             45.12                  344               41.56
    DenseNet          57.22                  437               24.91
    CNN-ABiGRU        14.42                  108               75.14
    OctConv-ABiGRU    121.47                 1382              36.80
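Parameter counts such as those in Table 5 are typically obtained by summing the element counts of a model's trainable tensors; a generic helper, assuming PyTorch, shown only as an illustration of how such numbers are produced (not the authors' tooling).

```python
import torch.nn as nn

def count_parameters_million(model: nn.Module) -> float:
    """Number of trainable parameters, in millions (as reported in Table 5)."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad) / 1e6
```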

    Table 6. Comparison of PRE (%) for different network models

    Method            Gun     Knife   Wrench   Pliers   Scissors   Average
    VGG19             87.20   86.40   56.60    55.20    46.20      66.32
    DenseNet          88.20   82.18   51.25    54.50    38.50      62.93
    CNN-ABiGRU        88.50   87.20   63.00    61.20    76.40      75.26
    OctConv-ABiGRU    86.78   92.22   77.44    76.22    94.56      85.44
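ACC, AUC, and PRE in Tables 3, 4, and 6 correspond to standard binary-classification metrics evaluated per prohibited-item class. A sketch of how they could be computed with scikit-learn, using made-up labels and scores purely for illustration (not the paper's data or evaluation code).

```python
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score, precision_score

# Illustrative ground truth and predicted scores for one class (e.g. "gun" vs. not).
y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1])
y_score = np.array([0.1, 0.4, 0.8, 0.3, 0.2, 0.9, 0.6, 0.7])
y_pred = (y_score >= 0.5).astype(int)

print("ACC:", accuracy_score(y_true, y_pred))    # overall classification accuracy
print("AUC:", roc_auc_score(y_true, y_score))    # area under the ROC curve
print("PRE:", precision_score(y_true, y_pred))   # positive-class precision
```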
Figures (4) / Tables (6)
Metrics
  • Article views: 3045
  • Full-text HTML views: 688
  • PDF downloads: 203
  • Times cited: 0
Publication history
  • Received: 2020-04-23
  • Revised: 2020-06-15
  • Published online: 2020-09-16
  • Published in issue: 2020-10-01
