
Fusion of infrared and visible light images based on visual saliency weighting and maximum gradient singular value

CHENG Bo-yang, LI Ting, WANG Yu-lin

Citation: CHENG Bo-yang, LI Ting, WANG Yu-lin. Fusion of infrared and visible light images based on visual saliency weighting and maximum gradient singular value[J]. Chinese Optics, 2022, 15(4): 675-688. doi: 10.37188/CO.2022-0124

doi: 10.37188/CO.2022-0124
Funds: Supported by the National Major Aerospace Project

    About the author:

    CHENG Bo-yang (1992—), male, born in Beijing; Ph.D., engineer at the Institute of Remote Sensing Satellite, China Academy of Space Technology. He received his B.S. degree from Jilin University in 2015 and his Ph.D. in engineering from the University of Chinese Academy of Sciences in 2020. His research interests include the overall design of space remote sensing cameras and image fusion. E-mail: boyangwudi@163.com

  • CLC number: TP394.1; TH691.9

  • Abstract:

    To make comprehensive use of the spectral saliency information of infrared and visible light images while improving the visual contrast of the fused image, this paper proposes an infrared and visible light image fusion method based on visual saliency weighting and maximum gradient singular value. First, the algorithm uses the rolling guidance shearlet transform as a multi-scale analysis tool to obtain the approximation-layer component and the multi-directional detail-layer components of an image. Second, for the approximation layer, which carries the main energy of the image, visual saliency weighting is adopted as the fusion rule: a saliency weighting coefficient matrix guides the effective fusion of the spectral saliency information within the images, improving the visual observability of the fused image. In addition, a maximum gradient singular value rule guides the fusion of the detail-layer components; it restores, to the greatest possible extent, the gradient features hidden in the two source images to the fused image, giving the fused image clearer edge details. To verify the effectiveness of the proposed algorithm, five independent fusion experiments were carried out. The results show that the images fused by the proposed algorithm have higher contrast and richer edge details; compared with other typical existing methods, the objective indicators AVG, IE, QE, SF, SD and SCD are improved by 16.4%, 3.9%, 11.8%, 17.1%, 21.4% and 10.1% respectively, so the algorithm achieves a superior visual effect.
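As a rough illustration of the pipeline described in the abstract, the sketch below fuses two registered grey-scale images with a simplified two-scale decomposition: a Gaussian base layer stands in for the rolling guidance shearlet transform, frequency-tuned saliency weights the approximation layers, and a local gradient-energy maximum rule stands in for the gradient singular value criterion. All function names and parameters here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def ft_saliency(img):
    """Frequency-tuned saliency: distance between the image mean
    and a lightly blurred copy of the image."""
    return np.abs(gaussian_filter(img, 1.0) - img.mean())

def grad_energy(img, sigma=2.0):
    """Local gradient energy, a simple stand-in for the gradient
    singular-value measure used in the paper."""
    gy, gx = np.gradient(img)
    return gaussian_filter(gx**2 + gy**2, sigma)

def fuse(ir, vis, sigma_base=5.0):
    ir, vis = ir.astype(float), vis.astype(float)
    # 1) Two-scale decomposition: Gaussian base + detail residual
    #    (a crude substitute for the multi-scale RGST decomposition).
    base_ir = gaussian_filter(ir, sigma_base)
    base_vis = gaussian_filter(vis, sigma_base)
    det_ir, det_vis = ir - base_ir, vis - base_vis
    # 2) Saliency-weighted fusion of the approximation layers.
    s_ir, s_vis = ft_saliency(ir), ft_saliency(vis)
    w = s_ir / (s_ir + s_vis + 1e-12)
    base_f = w * base_ir + (1 - w) * base_vis
    # 3) Detail layers: keep the coefficient with the larger local
    #    gradient energy ("maximum gradient" selection).
    mask = grad_energy(det_ir) >= grad_energy(det_vis)
    det_f = np.where(mask, det_ir, det_vis)
    return base_f + det_f
```

This keeps only the structure of the method (decompose, weight the base, select details, reconstruct); the paper's shearlet direction decomposition and SVD-based gradient measure are replaced by simpler stand-ins.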

     

  • Figure 1. Multi-scale images decomposed based on MS-RGF

    Figure 2. Pseudo-polar coordinate grid with L = 8

    Figure 3. The filter bank of the shearlet transform in the frequency domain

    Figure 4. Effect diagrams of the multi-directional shearlet transform

    Figure 5. Schematic diagram of the decomposition and reconstruction of RGST

    Figure 6. Schematic diagram of the proposed fusion algorithm

    Figure 7. Infrared and visible light images used in the fusion experiments

    Figure 8. Comparison of AVG values under different decomposition levels

    Figure 9. Comparison of IE values under different decomposition levels

    Figure 10. Results of the first group of image fusion experiments

    Figure 11. Results of the second group of image fusion experiments

    Figure 12. Results of the third group of image fusion experiments

    Figure 13. Results of the fourth group of image fusion experiments

    Figure 14. Results of the fifth group of image fusion experiments

    Table 1. Objective evaluation indicators for the first group of image fusion experiments

    Method     AVG     IE      QE      SF      SD      SCD     t
    CVT        10.59   7.10    0.58    18.88   35.67   1.54    3.93
    NSCT       6.42    7.51    0.45    11.41   47.22   1.59    109.8
    ADF        10.22   6.91    0.53    17.64   30.76   1.51    2.07
    WLS        11.14   7.14    0.398   20.38   41.19   1.74    4.18
    MSVD       9.36    6.84    0.37    16.63   29.26   1.52    0.76
    TSF        9.61    7.27    0.56    17.76   40.58   1.68    0.13
    Proposed   11.44   7.42    0.62    20.65   47.66   1.78    8.82

    Table 2. Objective evaluation indicators for the second group of image fusion experiments

    Method     AVG     IE      QE      SF      SD      SCD     t
    CVT        8.76    7.05    0.58    21.67   33.65   1.51    1.81
    NSCT       5.73    7.17    0.42    12.15   37.88   1.20    65.1
    ADF        7.55    6.83    0.50    17.18   28.28   1.50    1.25
    WLS        8.88    7.06    0.46    20.88   33.54   1.65    2.36
    MSVD       7.84    6.83    0.46    19.75   28.42   1.54    0.35
    TSF        7.68    7.11    0.55    19.40   35.16   1.58    0.13
    Proposed   9.44    7.26    0.62    22.95   40.11   1.65    4.64

    Table 3. Objective evaluation indicators for the third group of image fusion experiments

    Method     AVG     IE      QE      SF      SD      SCD     t
    CVT        4.98    6.91    0.59    14.80   34.32   1.60    2.22
    NSCT       4.13    7.37    0.54    9.78    50.49   1.62    91.1
    ADF        3.03    6.62    0.41    8.88    28.99   1.52    1.42
    WLS        5.11    7.10    0.55    15.47   47.80   1.81    3.16
    MSVD       3.95    6.65    0.46    11.99   29.52   1.53    0.45
    TSF        4.918   7.08    0.63    14.93   39.02   1.70    0.14
    Proposed   5.75    7.15    0.65    16.39   48.65   1.82    7.51

    Table 4. Objective evaluation indicators for the fourth group of image fusion experiments

    Method     AVG     IE      QE      SF      SD      SCD     t
    CVT        9.18    6.91    0.39    17.27   33.98   1.48    1.34
    NSCT       6.06    7.18    0.31    11.15   38.07   1.21    29.46
    ADF        5.37    6.62    0.34    10.10   27.90   1.46    0.90
    WLS        9.82    6.96    0.39    17.99   34.19   1.58    1.29
    MSVD       7.94    6.66    0.32    14.57   28.34   1.45    0.18
    TSF        8.13    7.04    0.43    16.82   37.05   1.63    0.11
    Proposed   9.84    7.15    0.43    18.42   39.04   1.68    2.44

    Table 5. Objective evaluation indicators for the fifth group of image fusion experiments

    Method     AVG     IE      QE      SF      SD      SCD     t
    CVT        12.25   7.54    0.50    24.83   46.91   1.75    2.25
    NSCT       9.75    7.81    0.43    18.79   55.87   1.64    53.80
    ADF        9.19    6.97    0.42    17.96   32.86   1.74    1.33
    WLS        12.53   7.35    0.38    24.62   43.74   1.87    2.64
    MSVD       10.66   6.99    0.43    22.59   33.35   1.78    0.32
    TSF        12.00   7.68    0.53    25.74   52.17   1.84    0.15
    Proposed   14.31   7.76    0.57    28.76   57.92   1.89    3.42
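The tables above report standard objective fusion metrics. The sketch below computes the four reference-free indicators (AVG, IE, SF, SD) using common textbook definitions, which may differ in minor details from the exact formulas used in the paper; QE and SCD compare the fused image against the source images and are omitted here.

```python
import numpy as np

def avg_gradient(img):
    """AVG: mean magnitude of the local intensity gradient."""
    gy, gx = np.gradient(img.astype(float))
    return np.mean(np.sqrt((gx**2 + gy**2) / 2.0))

def entropy(img):
    """IE: Shannon entropy of the 8-bit grey-level histogram."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def spatial_frequency(img):
    """SF: root of the mean squared row and column differences."""
    img = img.astype(float)
    rf = np.mean(np.diff(img, axis=1) ** 2)  # row frequency
    cf = np.mean(np.diff(img, axis=0) ** 2)  # column frequency
    return np.sqrt(rf + cf)

def std_dev(img):
    """SD: grey-level standard deviation (contrast)."""
    return float(np.std(img.astype(float)))
```

For all four indicators, a larger value corresponds to the "better" direction reported in the tables (more edge detail, more information, higher contrast).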
Publication history
  • Received: 2022-06-13
  • Revised: 2022-06-29
  • Published online: 2022-06-29
