
Fuzzy logic and adaptive strategy for infrared and visible light image fusion

YANG Yong, LIU Jiaxiang, HUANG Shuying, WANG Xiaozheng, XIA Yukun

Citation: YANG Y, LIU J X, HUANG S Y, et al. Fuzzy logic and adaptive strategy for infrared and visible light image fusion[J]. Journal of Beijing University of Aeronautics and Astronautics, 2025, 51(7): 2196-2208 (in Chinese). doi: 10.13700/j.bh.1001-5965.2023.0383

doi: 10.13700/j.bh.1001-5965.2023.0383

Funds: National Natural Science Foundation of China (62072218, 61862030); Natural Science Foundation of Tianjin (24JCZDJC00130); Project of Cangzhou Institute of Tiangong University (TGCYY-Z-0303)

Corresponding author: E-mail: shuyinghuang2010@126.com

  • CLC number: TP391.41

  • Abstract:

    Owing to their different imaging mechanisms, infrared images capture target information while visible light images provide texture details, so the two must be fused to improve both visual perception and machine recognition. Based on fuzzy logic theory, an infrared and visible image fusion method using multistage fuzzy logic discrimination and an adaptive parameter fusion strategy (MFD-APFS) is proposed. First, the infrared and visible images are each decomposed into structural patches, yielding a contrast-detail image pair reconstructed from the signal strength components. Next, the source image pair and the contrast-detail image pair are fed into the designed fuzzy logic discrimination system, which produces a saliency image for each pair; a second round of fuzzy logic discrimination on the resulting saliency images then yields a joint saliency image. Finally, guided filtering is applied with the saliency images guiding the source images to obtain multiple decision maps, and the adaptive parameter fusion strategy produces the final fused image. The MFD-APFS method was tested on public infrared and visible image datasets. Compared with seven mainstream fusion methods, the objective metrics SSIM-F and QAB/F improved by 0.169 and 0.1403 on the TNO dataset, and by 0.1753 and 0.0537 on the RoadScenes dataset. Subjective visual results show that the proposed method generates fused images with clear targets and rich details, well preserving the target information of the infrared images and the texture information of the visible images.
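The first step of the pipeline, structural patch decomposition (reference [22]), splits each image patch into mean-intensity, signal-strength, and signal-structure components. A minimal sketch, assuming the standard formulation in which a patch x is recovered as x = c·s + l; the paper's exact variant may differ:

```python
import numpy as np

def structural_patch_decomposition(patch):
    """Decompose a patch into mean intensity l, signal strength c,
    and unit signal structure s, so that patch = c * s + l."""
    x = patch.astype(np.float64).ravel()
    mean_intensity = x.mean()                      # l: mean intensity
    residual = x - mean_intensity
    signal_strength = np.linalg.norm(residual)     # c: signal strength
    if signal_strength > 0:
        signal_structure = residual / signal_strength  # s: unit vector
    else:
        signal_structure = np.zeros_like(residual)
    return mean_intensity, signal_strength, signal_structure

# Reconstruction check on a toy 2x2 patch
patch = np.array([[10, 20], [30, 40]], dtype=np.float64)
l, c, s = structural_patch_decomposition(patch)
reconstructed = (c * s + l).reshape(patch.shape)
```

The contrast-detail images mentioned in the abstract would then be rebuilt from the signal-strength components of all patches.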

     

  • Figure 1. Structural patch decomposition

    Figure 2. Framework based on multistage fuzzy logic discrimination and adaptive parameter fusion strategy

    Figure 3. Fuzzy logic discrimination structure

    Figure 4. Membership functions of fuzzy logic discriminator 1

    Figure 5. Membership functions of fuzzy logic discriminator 2

    Figure 6. Comparison of fusion results between APFS and fixed parameters

    Figure 7. Fusion results of “nato_camp” image set

    Figure 8. Fusion results of “Sandpath” image set

    Figure 9. Fusion results of “Kaptein_1654” image set

    Table 1. MF-ss fuzzy rules for fuzzy logic discriminator 1

    $\text{MF-sf}$ \ $\text{MF-ss}$  $\mu_{\text{in\_1}}^{\text{L}}$  $\mu_{\text{in\_1}}^{\text{M}}$  $\mu_{\text{in\_1}}^{\text{H}}$
    $\mu_{\text{in\_1}}^{\text{L}}$  $\mu_{\mathrm{w}1}$  $\mu_{\mathrm{w}1}$  $\mu_{\mathrm{w}1}$
    $\mu_{\text{in\_1}}^{\text{M}}$  $\mu_{\mathrm{w}2}$  $\mu_{\mathrm{w}2}$  $\mu_{\mathrm{w}1}$
    $\mu_{\text{in\_1}}^{\text{H}}$  $\mu_{\mathrm{w}3}$  $\mu_{\mathrm{w}3}$  $\mu_{\mathrm{w}3}$

    Table 2. MF-so fuzzy rules for fuzzy logic discriminator 1

    $\text{MF-sf}$ \ $\text{MF-so}$  $\mu_{\text{in\_1}}^{\text{L}}$  $\mu_{\text{in\_1}}^{\text{M}}$  $\mu_{\text{in\_1}}^{\text{H}}$
    $\mu_{\text{in\_1}}^{\text{L}}$  $\mu_{\mathrm{w}1}$  $\mu_{\mathrm{w}1}$  $\mu_{\mathrm{w}1}$
    $\mu_{\text{in\_1}}^{\text{M}}$  $\mu_{\mathrm{w}2}$  $\mu_{\mathrm{w}2}$  $\mu_{\mathrm{w}1}$
    $\mu_{\text{in\_1}}^{\text{H}}$  $\mu_{\mathrm{w}3}$  $\mu_{\mathrm{w}3}$  $\mu_{\mathrm{w}3}$

    Table 3. Fuzzy rules for fuzzy logic discriminator 2

    $\text{MF-sf}$ \ $\text{MF-sp}$  $\mu_{\text{in\_2}}^{\text{L}}$  $\mu_{\text{in\_2}}^{\text{M}}$  $\mu_{\text{in\_2}}^{\text{H}}$
    $\mu_{\text{in\_2}}^{\text{L}}$  $\mu_{\mathrm{w}1}$  $\mu_{\mathrm{w}2}$  $\mu_{\mathrm{w}3}$
    $\mu_{\text{in\_2}}^{\text{M}}$  $\mu_{\mathrm{w}1}$  $\mu_{\mathrm{w}2}$  $\mu_{\mathrm{w}3}$
    $\mu_{\text{in\_2}}^{\text{H}}$  $\mu_{\mathrm{w}3}$  $\mu_{\mathrm{w}3}$  $\mu_{\mathrm{w}3}$
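A hedged sketch of how a rule table like Table 3 could be evaluated in a Mamdani-style discriminator: each cell maps a pair of input fuzzy labels to an output label, rule strength is taken as the min of the two input memberships, and strengths firing the same output label are combined with max. The function name `fire_rules` and the exact inference operators are assumptions for illustration, not taken from the paper:

```python
# Table 3 as a lookup: (MF-sf label, MF-sp label) -> output label.
RULES = {
    ('L', 'L'): 'w1', ('L', 'M'): 'w2', ('L', 'H'): 'w3',
    ('M', 'L'): 'w1', ('M', 'M'): 'w2', ('M', 'H'): 'w3',
    ('H', 'L'): 'w3', ('H', 'M'): 'w3', ('H', 'H'): 'w3',
}

def fire_rules(mu_sf, mu_sp):
    """mu_sf, mu_sp: dicts {'L': deg, 'M': deg, 'H': deg} of input
    membership degrees. Returns the aggregated strength of each
    output label (min for AND, max for aggregation)."""
    strengths = {'w1': 0.0, 'w2': 0.0, 'w3': 0.0}
    for (a, b), out in RULES.items():
        strengths[out] = max(strengths[out], min(mu_sf[a], mu_sp[b]))
    return strengths
```

For example, inputs with memberships {'L': 0.2, 'M': 0.8, 'H': 0.0} and {'L': 0.0, 'M': 0.6, 'H': 0.4} fire w2 with strength 0.6 and w3 with strength 0.4; a defuzzification step over the output membership functions would then yield the crisp weight.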

    Table 4. Parameters of the input membership functions of fuzzy logic discriminator 1

    Membership function  l  m  u
    $\mu_{\text{in\_1}}^{\text{L}}$  −102  76.5  153
    $\mu_{\text{in\_1}}^{\text{M}}$  0  127.5  255
    $\mu_{\text{in\_1}}^{\text{H}}$  102  178.5  357
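The (l, m, u) triples in Table 4 are consistent with triangular membership functions with feet at l and u and peak at m. A minimal sketch, assuming that standard parameterization:

```python
def triangular_mf(x, l, m, u):
    """Triangular membership function: 0 outside (l, u),
    rising linearly to 1 at the peak m, then falling to 0 at u."""
    if x <= l or x >= u:
        return 0.0
    if x <= m:
        return (x - l) / (m - l)
    return (u - x) / (u - m)
```

With the middle row of Table 4 (l = 0, m = 127.5, u = 255), a mid-gray pixel value of 127.5 has membership 1.0 in the M set and a value of 63.75 has membership 0.5.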

    Table 5. Parameters of the output membership functions of fuzzy logic discriminator 1

    Membership function  a  b  c  d
    $\mu_{\text{out\_1}}^{\text{L}}$  −0.36  −0.04  0.04  0.06
    $\mu_{\text{out\_1}}^{\text{M}}$
    $\mu_{\text{out\_1}}^{\text{H}}$  0.94  0.96  1.04  1.36

    Table 6. Parameters of the input membership functions of fuzzy logic discriminator 2

    Membership function  a  b  c  d
    $\mu_{\text{in\_2}}^{\text{L}}$  −0.2711  −0.1289  0.05  0.5
    $\mu_{\text{in\_2}}^{\text{M}}$
    $\mu_{\text{in\_2}}^{\text{H}}$  0.5  0.95  1.129  1.271

    Table 7. Parameters of the output membership functions of fuzzy logic discriminator 2

    Membership function  a  b  c  d
    $\mu_{\text{out\_2}}^{\text{L}}$  −0.36  −0.04  0.025  0.05
    $\mu_{\text{out\_2}}^{\text{M}}$
    $\mu_{\text{out\_2}}^{\text{H}}$  0.95  0.975  1.04  1.36
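The (a, b, c, d) rows in Tables 5-7 suggest trapezoidal membership functions with support [a, d] and plateau [b, c]. A sketch under that assumption:

```python
def trapezoidal_mf(x, a, b, c, d):
    """Trapezoidal membership function: 0 outside (a, d), rising
    linearly on [a, b], flat at 1 on [b, c], falling on [c, d]."""
    if x <= a or x >= d:
        return 0.0
    if x < b:
        return (x - a) / (b - a)
    if x <= c:
        return 1.0
    return (d - x) / (d - c)
```

Using the H row of Table 7 (0.95, 0.975, 1.04, 1.36), an output value of 1.0 lies on the plateau and has membership 1.0 in the H set.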

    Table 8. Comparison of objective indicators of different fusion methods on TNO dataset

    Method  E  FMIw  FMIdct  SSIM-F  QAB/F  SD
    LatLRR[39]  6.5075  0.3795  0.3264  0.7419  0.4102  28.0106
    GTF[37]  6.5799  0.4393  0.4280  0.7586  0.4445  28.2344
    DeepFuse[38]  6.9073  0.4216  0.4118  0.7532  0.4590  36.8281
    MDLatLRR[40]  6.8711  0.4459  0.4001  0.7979  0.5004  34.4431
    FusionGAN[12]  6.3708  0.3681  0.3573  0.5223  0.2155  25.0899
    DenseFuse[9]  6.8381  0.4272  0.4166  0.7550  0.4610  36.4431
    U2Fusion[15]  6.8546  0.3624  0.3400  0.7522  0.4451  32.8386
    MFD-APFS  7.0301  0.5242  0.4280  0.9669  0.6407  48.7431
    Note: Bold values indicate the best results.

    Table 9. Comparison of objective indicators of different fusion methods on RoadScenes dataset

    Method  E  FMIw  FMIdct  SSIM-F  QAB/F  SD
    LatLRR[39]  6.8162  0.3408  0.2880  0.6855  0.4200  36.7768
    GTF[37]  7.7020  0.3811  0.4184  0.7032  0.4454  62.4172
    DeepFuse[38]  7.3193  0.4047  0.3646  0.7273  0.4771  50.7207
    MDLatLRR[40]  6.5869  0.3900  0.3457  0.6898  0.4826  41.3768
    FusionGAN[12]  7.3122  0.3691  0.3717  0.5695  0.3296  48.1683
    DenseFuse[9]  7.3185  0.4074  0.3657  0.7293  0.4786  50.6692
    U2Fusion[15]  7.2966  0.4034  0.3524  0.7432  0.4770  47.1512
    MFD-APFS  7.0024  0.4990  0.3167  0.9185  0.5323  47.4502
    Note: Bold values indicate the best results.
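Two of the objective metrics tabulated above, entropy E and standard deviation SD, are straightforward to compute. A sketch, assuming E is the Shannon entropy (in bits) of the 8-bit intensity histogram and SD is the plain standard deviation of pixel values; the reference implementations cited in [41] and [44] may differ in detail:

```python
import numpy as np

def entropy(img):
    """Shannon entropy E of an 8-bit image's intensity histogram (bits)."""
    hist = np.bincount(img.ravel().astype(np.uint8), minlength=256)
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins: 0*log(0) := 0
    return float(-(p * np.log2(p)).sum())

def standard_deviation(img):
    """Standard deviation SD of pixel intensities."""
    return float(np.std(img.astype(np.float64)))
```

A flat image has E = 0, while an image split evenly between two gray levels has E = 1 bit; higher E and SD indicate richer information and contrast in the fused result.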
  • [1] MA W H, WANG K, LI J W, et al. Infrared and visible image fusion technology and application: a review[J]. Sensors, 2023, 23(2): 599. doi: 10.3390/s23020599
    [2] LI H, MANJUNATH B S, MITRA S K. Multisensor image fusion using the wavelet transform[J]. Graphical Models and Image Processing, 1995, 57(3): 235-245. doi: 10.1006/gmip.1995.1022
    [3] SINGH S, SINGH H, GEHLOT A, et al. IR and visible image fusion using DWT and bilateral filter[J]. Microsystem Technologies, 2023, 29(4): 457-467. doi: 10.1007/s00542-022-05315-7
    [4] REN Z G, REN G Q, WU D H. Fusion of infrared and visible images based on discrete cosine wavelet transform and high pass filter[J]. Soft Computing, 2023, 27(18): 13583-13594. doi: 10.1007/s00500-022-07175-9
    [5] AGHAMALEKI J A, GHORBANI A. Image fusion using dual tree discrete wavelet transform and weights optimization[J]. The Visual Computer, 2023, 39(3): 1181-1191. doi: 10.1007/s00371-021-02396-9
    [6] HUANG Z H, LI X, WANG L, et al. Spatially adaptive multi-scale image enhancement based on nonsubsampled contourlet transform[J]. Infrared Physics & Technology, 2022, 121: 104014.
    [7] YANG C X, HE Y N, SUN C, et al. Infrared and visible image fusion based on QNSCT and guided filter[J]. Optik, 2022, 253: 168592. doi: 10.1016/j.ijleo.2022.168592
    [8] XU M L, TANG L F, ZHANG H, et al. Infrared and visible image fusion via parallel scene and texture learning[J]. Pattern Recognition, 2022, 132: 108929. doi: 10.1016/j.patcog.2022.108929
    [9] LI H, WU X J. DenseFuse: a fusion approach to infrared and visible images[J]. IEEE Transactions on Image Processing, 2019, 28(5): 2614-2623. doi: 10.1109/TIP.2018.2887342
    [10] TANG L F, YUAN J T, ZHANG H, et al. PIAFusion: a progressive infrared and visible image fusion network based on illumination aware[J]. Information Fusion, 2022, 83: 79-92.
    [11] TANG L F, XIANG X Y, ZHANG H, et al. DIVFusion: darkness-free infrared and visible image fusion[J]. Information Fusion, 2023, 91: 477-493. doi: 10.1016/j.inffus.2022.10.034
    [12] MA J Y, YU W, LIANG P W, et al. FusionGAN: a generative adversarial network for infrared and visible image fusion[J]. Information Fusion, 2019, 48: 11-26. doi: 10.1016/j.inffus.2018.09.004
    [13] YANG Y, LIU J X, HUANG S Y, et al. Infrared and visible image fusion via texture conditional generative adversarial network[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2021, 31(12): 4771-4783. doi: 10.1109/TCSVT.2021.3054584
    [14] YANG X, HUO H T, LI J, et al. DSG-Fusion: infrared and visible image fusion via generative adversarial networks and guided filter[J]. Expert Systems with Applications, 2022, 200: 116905. doi: 10.1016/j.eswa.2022.116905
    [15] XU H, MA J Y, JIANG J J, et al. U2Fusion: a unified unsupervised image fusion network[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022, 44(1): 502-518. doi: 10.1109/TPAMI.2020.3012548
    [16] YANG Y, LIU J X, HUANG S Y, et al. VMDM-fusion: a saliency feature representation method for infrared and visible image fusion[J]. Signal, Image and Video Processing, 2021, 15(6): 1221-1229. doi: 10.1007/s11760-021-01852-2
    [17] ZHANG C F, ZHANG Z Y, FENG Z L. Image fusion using online convolutional sparse coding[J]. Journal of Ambient Intelligence and Humanized Computing, 2023, 14(10): 13559-13570. doi: 10.1007/s12652-022-03822-z
    [18] LI H Y, ZHANG C F, HE S D, et al. A novel fusion method based on online convolutional sparse coding with sample-dependent dictionary for visible-infrared images[J]. Arabian Journal for Science and Engineering, 2023, 48(8): 10605-10615. doi: 10.1007/s13369-023-07716-w
    [19] TAO T W, LIU M X, HOU Y K, et al. Latent low-rank representation with sparse consistency constraint for infrared and visible image fusion[J]. Optik, 2022, 261: 169102. doi: 10.1016/j.ijleo.2022.169102
    [20] GUO Z Y, YU X T, DU Q L. Infrared and visible image fusion based on saliency and fast guided filtering[J]. Infrared Physics & Technology, 2022, 123: 104178.
    [21] YANG Y, LIU J X, HUANG S Y, et al. Infrared and visible image fusion based on modal feature fusion network and dual visual decision[C]//Proceedings of the IEEE International Conference on Multimedia and Expo. Piscataway: IEEE Press, 2021: 1-6.
    [22] MA K D, LI H, YONG H W, et al. Robust multi-exposure image fusion: a structural patch decomposition approach[J]. IEEE Transactions on Image Processing, 2017, 26(5): 2519-2532. doi: 10.1109/TIP.2017.2671921
    [23] WANG Z, BOVIK A C, SHEIKH H R, et al. Image quality assessment: from error visibility to structural similarity[J]. IEEE Transactions on Image Processing, 2004, 13(4): 600-612. doi: 10.1109/TIP.2003.819861
    [24] YANG Y, QUE Y, HUANG S Y, et al. Multimodal sensor medical image fusion based on type-2 fuzzy logic in NSCT domain[J]. IEEE Sensors Journal, 2016, 16(10): 3735-3745. doi: 10.1109/JSEN.2016.2533864
    [25] ZAND M D, ANSARI A H, LUCAS C, et al. Risk assessment of coronary arteries heart disease based on neuro-fuzzy classifiers[C]//Proceedings of the 17th Iranian Conference of Biomedical Engineering. Piscataway: IEEE Press, 2010: 1-4.
    [26] RAHMAN M A, LIU S, WONG C Y, et al. Multi-focal image fusion using degree of focus and fuzzy logic[J]. Digital Signal Processing, 2017, 60: 1-19. doi: 10.1016/j.dsp.2016.08.004
    [27] BALASUBRAMANIAM P, ANANTHI V P. Image fusion using intuitionistic fuzzy sets[J]. Information Fusion, 2014, 20: 21-30. doi: 10.1016/j.inffus.2013.10.011
    [28] MANCHANDA M, SHARMA R. A novel method of multimodal medical image fusion using fuzzy transform[J]. Journal of Visual Communication and Image Representation, 2016, 40: 197-217. doi: 10.1016/j.jvcir.2016.06.021
    [29] RIZZI A, GATTA C, PIACENTINI B, et al. Human-visual-system-inspired tone mapping algorithm for HDR images[C]//Proceedings of the Human Vision and Electronic Imaging IX. Bellingham: SPIE, 2004.
    [30] WANG Z J, TONG X Y. Consistency analysis and group decision making based on triangular fuzzy additive reciprocal preference relations[J]. Information Sciences, 2016, 361: 29-47.
    [31] CHAIRA T. A rank ordered filter for medical image edge enhancement and detection using intuitionistic fuzzy set[J]. Applied Soft Computing, 2012, 12(4): 1259-1266. doi: 10.1016/j.asoc.2011.12.011
    [32] YANG Y, WU J H, HUANG S Y, et al. Multimodal medical image fusion based on fuzzy discrimination with structural patch decomposition[J]. IEEE Journal of Biomedical and Health Informatics, 2019, 23(4): 1647-1660. doi: 10.1109/JBHI.2018.2869096
    [33] CUI G M, FENG H J, XU Z H, et al. Detail preserved fusion of visible and infrared images using regional saliency extraction and multi-scale image decomposition[J]. Optics Communications, 2015, 341: 199-209. doi: 10.1016/j.optcom.2014.12.032
    [34] WANG Z, SIMONCELLI E P, BOVIK A C. Multiscale structural similarity for image quality assessment[C]//Proceedings of the Asilomar Conference on Signals, Systems & Computers. Piscataway: IEEE Press, 2003: 1398-1402.
    [35] TOET A. TNO image fusion dataset[EB/OL]. (2022-10-15)[2023-06-01]. http://figshare.com/articles/TN_Image_Fusion_Dataset/1008029.
    [36] XU H, MA J Y, LE Z L, et al. FusionDN: a unified densely connected network for image fusion[J]. Proceedings of the AAAI Conference on Artificial Intelligence, 2020, 34(7): 12484-12491. doi: 10.1609/aaai.v34i07.6936
    [37] MA J Y, CHEN C, LI C, et al. Infrared and visible image fusion via gradient transfer and total variation minimization[J]. Information Fusion, 2016, 31: 100-109. doi: 10.1016/j.inffus.2016.02.001
    [38] PRABHAKAR K R, SRIKAR V S, BABU R V. DeepFuse: a deep unsupervised approach for exposure fusion with extreme exposure image pairs[C]//Proceedings of the IEEE International Conference on Computer Vision. Piscataway: IEEE Press, 2017: 4724-4732.
    [39] LI H, WU X J. Multi-focus image fusion using dictionary learning and low-rank representation[C]//Proceedings of the International Conference on Image and Graphics. Berlin: Springer, 2017: 675-686.
    [40] LI H, WU X J, KITTLER J. MDLatLRR: a novel decomposition method for infrared and visible image fusion[J]. IEEE Transactions on Image Processing, 2020, 29: 4733-4746.
    [41] VAN AARDT J. Assessment of image fusion procedures using entropy, image quality, and multispectral classification[J]. Journal of Applied Remote Sensing, 2008, 2(1): 023522. doi: 10.1117/1.2945910
    [42] HAGHIGHAT M, RAZIAN M A. Fast-FMI: non-reference image fusion metric[C]//Proceedings of the IEEE 8th International Conference on Application of Information and Communication Technologies. Piscataway: IEEE Press, 2014: 1-3.
    [43] YANG C, ZHANG J Q, WANG X R, et al. A novel similarity based quality metric for image fusion[J]. Information Fusion, 2008, 9(2): 156-160. doi: 10.1016/j.inffus.2006.09.001
    [44] QU G H, ZHANG D L, YAN P F. Information measure for performance of image fusion[J]. Electronics Letters, 2002, 38(7): 313. doi: 10.1049/el:20020212
    [45] MA J Y, MA Y, LI C. Infrared and visible image fusion methods and applications: a survey[J]. Information Fusion, 2019, 45: 153-178. doi: 10.1016/j.inffus.2018.02.004
Publication history
  • Received: 2023-06-16
  • Accepted: 2023-11-24
  • Available online: 2025-04-14
  • Issue publication date: 2025-07-14
