Volume 51 Issue 7
Jul. 2025
Citation: YANG Y, LIU J X, HUANG S Y, et al. Fuzzy logic and adaptive strategy for infrared and visible light image fusion[J]. Journal of Beijing University of Aeronautics and Astronautics, 2025, 51(7): 2196-2208 (in Chinese). doi: 10.13700/j.bh.1001-5965.2023.0383

Fuzzy logic and adaptive strategy for infrared and visible light image fusion

doi: 10.13700/j.bh.1001-5965.2023.0383
Funds:

National Natural Science Foundation of China (62072218, 61862030); Natural Science Foundation of Tianjin (24JCZDJC00130); Project of Cangzhou Institute of Tiangong University (TGCYY-Z-0303)

More Information
  • Corresponding author: E-mail: shuyinghuang2010@126.com
  • Received Date: 16 Jun 2023
  • Accepted Date: 24 Nov 2023
  • Available Online: 17 Apr 2025
  • Publish Date: 14 Apr 2025
  • Abstract: Due to their different imaging mechanisms, infrared imaging can capture target information even when the target is obscured, while visible light imaging can capture the texture details of the scene. Therefore, to obtain a fused image containing both target information and texture details, infrared and visible light imaging are generally combined to facilitate visual perception and machine recognition. Based on fuzzy logic theory, an infrared and visible light image fusion method combining multistage fuzzy discrimination with an adaptive parameter fusion strategy (MFD-APFS) was proposed. First, the infrared and visible light images were decomposed into structural patches, and a contrast-detail image set was reconstructed from the signal intensity component. Second, the source images and the contrast-detail image set were processed by a designed fuzzy discrimination system, generating a saliency map for each set; a second-stage fuzzy discrimination was then applied to produce a unified saliency map. Finally, guided filtering was applied, with the saliency map guiding the source images to obtain multiple decision maps, and the final fused image was obtained through the adaptive parameter fusion strategy. The proposed MFD-APFS method was experimentally evaluated on publicly available infrared and visible light datasets. Compared with seven mainstream fusion methods, it shows improvements in objective metrics: on the TNO dataset, SSIM-F and QAB/F improved by 0.169 and 0.1403, respectively, and on the RoadScenes dataset, by 0.1753 and 0.0537. Furthermore, subjective visual analysis indicates that the proposed method generates fused images with clear targets and enriched details while retaining infrared target information and visible light texture details.
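The abstract only outlines the pipeline. As a rough reading aid, below is a minimal NumPy/SciPy sketch of the general idea (local saliency, then fuzzy membership, then a guided-filter-refined decision map, then weighted blending). It is a sketch under stated assumptions, not the authors' MFD-APFS implementation: the structural patch decomposition, the two-stage fuzzy discrimination system, and the adaptive parameter strategy are each reduced to simple stand-ins, and all function names, parameter values, and membership choices here are assumptions.

```python
# Illustrative sketch of a fuzzy-saliency + guided-filter fusion pipeline.
# NOT the paper's MFD-APFS method; all names and constants are assumptions.
import numpy as np
from scipy.ndimage import uniform_filter


def local_saliency(img, win=7):
    """Crude saliency proxy: absolute deviation from the local mean."""
    return np.abs(img - uniform_filter(img, size=win))


def fuzzy_membership(s, steepness=10.0):
    """Map saliency to a [0, 1] 'salient' membership with a sigmoid
    centred at the mean saliency (a stand-in for real fuzzy rules)."""
    return 1.0 / (1.0 + np.exp(-steepness * (s - s.mean())))


def guided_filter(guide, src, radius=8, eps=1e-3):
    """Classic guided filter (He et al.): smooths `src` while following
    the edges of `guide`; here it refines a saliency-based decision map."""
    win = 2 * radius + 1
    mean_g = uniform_filter(guide, size=win)
    mean_s = uniform_filter(src, size=win)
    var_g = uniform_filter(guide * guide, size=win) - mean_g ** 2
    cov_gs = uniform_filter(guide * src, size=win) - mean_g * mean_s
    a = cov_gs / (var_g + eps)
    b = mean_s - a * mean_g
    return uniform_filter(a, size=win) * guide + uniform_filter(b, size=win)


def fuse(ir, vis):
    """Fuse one registered IR/visible pair (float arrays in [0, 1]):
    fuzzy saliency -> decision map -> guided-filter refinement -> blend."""
    w_ir = fuzzy_membership(local_saliency(ir))
    w_vis = fuzzy_membership(local_saliency(vis))
    # Normalise the two memberships into a single decision map for IR.
    d = w_ir / (w_ir + w_vis + 1e-12)
    # Refine the map with the IR image as guide so the fusion weights
    # follow real edges rather than blocky saliency artifacts.
    d = np.clip(guided_filter(ir, d), 0.0, 1.0)
    return d * ir + (1.0 - d) * vis
```

In the paper, a second fuzzy-discrimination stage merges the per-set saliency maps and the blending parameters are set adaptively per decision map; in this sketch a single normalised membership plays both roles.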

